././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735100.0699942 cypari2-2.1.2/0000755000175000017500000000000000000000000014224 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1484310731.0 cypari2-2.1.2/LICENSE0000644000175000017500000004325400000000000015241 0ustar00vincentvincent00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. 
The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. 
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. 
You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. 
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General Public
License instead of this License.

cypari2-2.1.2/MANIFEST.in

include LICENSE README.rst VERSION
global-include *.py *.pyx *.pxd *.pxi
recursive-exclude cypari2 auto_*
global-exclude *.c .gitignore .travis* readthedocs*
prune dist

cypari2-2.1.2/Makefile

# Optional Makefile for easier development
VERSION = $(shell cat VERSION)
PYTHON = python
PIP = $(PYTHON) -m pip -v

build:
	$(PYTHON) setup.py build

install:
	$(PIP) install --no-index --upgrade .

check:
	ulimit -s 8192; $(PYTHON) -u tests/rundoctest.py
	ulimit -s 8192; $(PYTHON) tests/test_integers.py
	ulimit -s 8192; $(PYTHON) tests/test_backward.py

dist:
	chmod go+rX-w -R .
	umask 0022 && $(PYTHON) setup.py sdist --formats=gztar

.PHONY: build install check dist

cypari2-2.1.2/PKG-INFO

Metadata-Version: 1.0
Name: cypari2
Version: 2.1.2
Summary: A Python interface to the number theory library PARI/GP
Home-page: https://github.com/sagemath/cypari2
Author: Luca De Feo, Vincent Delecroix, Jeroen Demeyer, Vincent Klein
Author-email: sage-devel@googlegroups.com
License: GNU General Public License, version 2 or later
Description:

CyPari 2
========

.. image:: https://travis-ci.org/sagemath/cypari2.svg?branch=master
    :target: https://travis-ci.org/sagemath/cypari2

.. image:: https://readthedocs.org/projects/cypari2/badge/?version=latest
    :target: https://cypari2.readthedocs.io/en/latest/?badge=latest
    :alt: Documentation Status

A Python interface to the number theory library
`PARI/GP <http://pari.math.u-bordeaux.fr/>`_.

This library supports both Python 2 and Python 3.

Installation
------------

GNU/Linux
^^^^^^^^^

A package `python-cypari2` or `python2-cypari2` or `python3-cypari2`
might be available in your package manager.

Using pip
^^^^^^^^^

Requirements:

- PARI/GP >= 2.9.4 (header files and library)
- Python 2.7 or Python >= 3.4
- pip
- `cysignals <https://github.com/sagemath/cysignals>`_ >= 1.7
- Cython >= 0.28

Install cypari2 from the Python Package Index (PyPI) with ::

    $ pip install cypari2 [--user]

(the optional flag *--user* allows you to install cypari2 for a single
user and avoids running pip with administrator rights). Depending on your
operating system, the pip command might also be called pip2 or pip3.

If you want to try the development version, use ::

    $ pip install git+https://github.com/sagemath/cypari2.git [--user]

If you get an error saying that libpari-gmp*.so* is missing and have all
requirements already installed, try reinstalling cysignals and cypari2 ::

    $ pip install cysignals --upgrade [--user]
    $ pip install cypari2 --upgrade [--user]

Other
^^^^^

Any other way to install cypari2 is not supported. In particular,
``python setup.py install`` will produce an error.
Usage
-----

The interface has been kept as close as possible to PARI/GP. The
following computation in GP ::

    ? zeta(2)
    %1 = 1.6449340668482264364724151666460251892

    ? p = x^3 + x^2 + x - 1;
    ? modulus = t^3 + t^2 + t - 1;
    ? fq = factorff(p, 3, modulus);
    ? centerlift(lift(fq))
    %5 =
    [            x - t 1]

    [x + (t^2 + t - 1) 1]

    [   x + (-t^2 - 1) 1]

translates into ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari(2).zeta()
    1.64493406684823
    >>> p = pari("x^3 + x^2 + x - 1")
    >>> modulus = pari("t^3 + t^2 + t - 1")
    >>> fq = p.factorff(3, modulus)
    >>> fq.lift().centerlift()
    [x - t, 1; x + (t^2 + t - 1), 1; x + (-t^2 - 1), 1]

The object **pari** above is the interface object and acts as a
constructor. It can be called with basic Python objects such as integers
or floating-point numbers. When called with a string, as in the last
example, the string is interpreted as if it were executed in a GP shell.

Beyond the interface object **pari** of type **Pari**, any object you get
a handle on is of type **Gen** (a wrapper around the **GEN** type from
libpari). All PARI/GP functions are then available under their original
names as *methods* like **zeta**, **factorff**, **lift** or
**centerlift** above.

Alternatively, the PARI functions are accessible as methods of **pari**.
The same computations can be done via ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari.zeta(2)
    1.64493406684823
    >>> p = pari("x^3 + x^2 + x - 1")
    >>> modulus = pari("t^3 + t^2 + t - 1")
    >>> fq = pari.factorff(p, 3, modulus)
    >>> pari.centerlift(pari.lift(fq))
    [x - t, 1; x + (t^2 + t - 1), 1; x + (-t^2 - 1), 1]

The complete documentation of cypari2 is available at
http://cypari2.readthedocs.io and the PARI/GP documentation at
http://pari.math.u-bordeaux.fr/doc.html

Contributing
------------

Submit a pull request or get in touch with the SageMath developers.

Keywords: PARI/GP number theory
Platform: UNKNOWN

cypari2-2.1.2/README.rst

CyPari 2
========

.. image:: https://travis-ci.org/sagemath/cypari2.svg?branch=master
    :target: https://travis-ci.org/sagemath/cypari2

.. image:: https://readthedocs.org/projects/cypari2/badge/?version=latest
    :target: https://cypari2.readthedocs.io/en/latest/?badge=latest
    :alt: Documentation Status

A Python interface to the number theory library
`PARI/GP <http://pari.math.u-bordeaux.fr/>`_.

This library supports both Python 2 and Python 3.

Installation
------------

GNU/Linux
^^^^^^^^^

A package `python-cypari2` or `python2-cypari2` or `python3-cypari2`
might be available in your package manager.

Using pip
^^^^^^^^^

Requirements:

- PARI/GP >= 2.9.4 (header files and library)
- Python 2.7 or Python >= 3.4
- pip
- `cysignals <https://github.com/sagemath/cysignals>`_ >= 1.7
- Cython >= 0.28

Install cypari2 from the Python Package Index (PyPI) with ::

    $ pip install cypari2 [--user]

(the optional flag *--user* allows you to install cypari2 for a single
user and avoids running pip with administrator rights). Depending on your
operating system, the pip command might also be called pip2 or pip3.

If you want to try the development version, use ::

    $ pip install git+https://github.com/sagemath/cypari2.git [--user]

If you get an error saying that libpari-gmp*.so* is missing and have all
requirements already installed, try reinstalling cysignals and cypari2 ::

    $ pip install cysignals --upgrade [--user]
    $ pip install cypari2 --upgrade [--user]

Other
^^^^^

Any other way to install cypari2 is not supported.
In particular, ``python setup.py install`` will produce an error.

Usage
-----

The interface has been kept as close as possible to PARI/GP. The
following computation in GP ::

    ? zeta(2)
    %1 = 1.6449340668482264364724151666460251892

    ? p = x^3 + x^2 + x - 1;
    ? modulus = t^3 + t^2 + t - 1;
    ? fq = factorff(p, 3, modulus);
    ? centerlift(lift(fq))
    %5 =
    [            x - t 1]

    [x + (t^2 + t - 1) 1]

    [   x + (-t^2 - 1) 1]

translates into ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari(2).zeta()
    1.64493406684823
    >>> p = pari("x^3 + x^2 + x - 1")
    >>> modulus = pari("t^3 + t^2 + t - 1")
    >>> fq = p.factorff(3, modulus)
    >>> fq.lift().centerlift()
    [x - t, 1; x + (t^2 + t - 1), 1; x + (-t^2 - 1), 1]

The object **pari** above is the interface object and acts as a
constructor. It can be called with basic Python objects such as integers
or floating-point numbers. When called with a string, as in the last
example, the string is interpreted as if it were executed in a GP shell.

Beyond the interface object **pari** of type **Pari**, any object you get
a handle on is of type **Gen** (a wrapper around the **GEN** type from
libpari). All PARI/GP functions are then available under their original
names as *methods* like **zeta**, **factorff**, **lift** or
**centerlift** above.

Alternatively, the PARI functions are accessible as methods of **pari**.
The same computations can be done via ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari.zeta(2)
    1.64493406684823
    >>> p = pari("x^3 + x^2 + x - 1")
    >>> modulus = pari("t^3 + t^2 + t - 1")
    >>> fq = pari.factorff(p, 3, modulus)
    >>> pari.centerlift(pari.lift(fq))
    [x - t, 1; x + (t^2 + t - 1), 1; x + (-t^2 - 1), 1]

The complete documentation of cypari2 is available at
http://cypari2.readthedocs.io and the PARI/GP documentation at
http://pari.math.u-bordeaux.fr/doc.html

Contributing
------------

Submit a pull request or get in touch with the SageMath developers.
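
Further examples
----------------

As a further illustration of the interface described above, here is a
minimal sketch (assuming a working installation; ``factor`` and
``nextprime`` are ordinary GP functions, exposed through the same
auto-generated bindings as ``zeta`` or ``factorff`` above, and the exact
output formatting may vary slightly across PARI versions) ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> n = pari(2**64 + 1)   # basic Python objects are converted on the fly
    >>> n.factor()            # GP's factor(), called as a Gen method
    [274177, 1; 67280421310721, 1]
    >>> pari.nextprime(n)     # the same binding, called on the interface object
    18446744073709551629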
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735013.0 cypari2-2.1.2/VERSION0000644000175000017500000000000600000000000015270 0ustar00vincentvincent000000000000002.1.2 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.786661 cypari2-2.1.2/autogen/0000755000175000017500000000000000000000000015666 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/autogen/__init__.py0000644000175000017500000000133300000000000017777 0ustar00vincentvincent00000000000000from __future__ import absolute_import import glob import os from os.path import join, getmtime, exists from .generator import PariFunctionGenerator from .paths import pari_share def rebuild(force=False): pari_module_path = 'cypari2' src_files = [join(pari_share(), 'pari.desc')] + \ glob.glob(join('autogen', '*.py')) gen_files = [join(pari_module_path, 'auto_paridecl.pxd'), join(pari_module_path, 'auto_gen.pxi')] if not force and all(exists(f) for f in gen_files): src_mtime = max(getmtime(f) for f in src_files) gen_mtime = min(getmtime(f) for f in gen_files) if gen_mtime > src_mtime: return G = PariFunctionGenerator() G() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604256609.0 cypari2-2.1.2/autogen/args.py0000644000175000017500000003045700000000000017205 0ustar00vincentvincent00000000000000""" Arguments for PARI calls """ #***************************************************************************** # Copyright (C) 2015 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import unicode_literals # Some replacements for reserved words replacements = {'char': 'character', 'return': 'return_value'} class PariArgument(object): """ This class represents one argument in a PARI call. """ def __init__(self, namesiter, default, index): """ Create a new argument for a PARI call. INPUT: - ``namesiter`` -- iterator over all names of the arguments. Usually, the next name from this iterator is used as argument name. - ``default`` -- default value for this argument (``None`` means that the argument is not optional). - ``index`` -- (integer >= 0). Index of this argument in the list of arguments. Index 0 means a ``"self"`` argument which is treated specially. For a function which is not a method, start counting at 1. """ self.index = index try: self.name = self.get_argument_name(namesiter) except StopIteration: # No more names available, use something default. # This is used in listcreate() and polsturm() for example # which have deprecated arguments which are not listed in # the help. self.name = "_arg%s" % index self.undocumented = True else: self.undocumented = False if self.index == 0: # "self" argument can never have a default self.default = None elif default is None: self.default = self.always_default() elif default == "": self.default = self.default_default() else: self.default = default # Name for a temporary variable. Only a few classes actually use this. 
self.tmpname = "_" + self.name def __repr__(self): s = self._typerepr() + " " + self.name if self.default is not None: s += "=" + self.default return s def _typerepr(self): """ Return a string representing the type of this argument. """ return "(generic)" def ctype(self): """ The corresponding C type. This is used for auto-generating the declarations of the C function. In some cases, this is also used for passing the argument from Python to Cython. """ raise NotImplementedError def always_default(self): """ If this returns not ``None``, it is a value which is always the default for this argument, which is then automatically optional. """ return None def default_default(self): """ The default value for an optional argument if no other default was specified in the prototype. """ return "NULL" def get_argument_name(self, namesiter): """ Return the name for this argument, given ``namesiter`` which is an iterator over the argument names given by the help string. """ n = next(namesiter) try: return replacements[n] except KeyError: return n def prototype_code(self): """ Return code to appear in the prototype of the Cython wrapper. """ raise NotImplementedError def deprecation_warning_code(self, function): """ Return code to appear in the function body to give a deprecation warning for this argument, if applicable. ``function`` is the function name to appear in the message. """ if not self.undocumented: return "" s = " if {name} is not None:\n" s += " from warnings import warn\n" s += " warn('argument {index} of the PARI/GP function {function} is undocumented and deprecated', DeprecationWarning)\n" return s.format(name=self.name, index=self.index, function=function) def convert_code(self): """ Return code to appear in the function body to convert this argument to something that PARI understand. This code can also contain extra checks. It will run outside of ``sig_on()``. """ return "" def c_convert_code(self): """ Return additional conversion code which will be run after ``convert_code`` and inside the ``sig_on()`` block. This must not involve any Python code (in particular, it should not raise exceptions). """ return "" def call_code(self): """ Return code to put this argument in a PARI function call. """ return self.name class PariArgumentObject(PariArgument): """ Class for arguments which are passed as generic Python ``object``. """ def prototype_code(self): """ Return code to appear in the prototype of the Cython wrapper. """ s = self.name if self.default is not None: # Default corresponds to None, actual default value should # be handled in convert_code() s += "=None" return s class PariArgumentClass(PariArgument): """ Class for arguments which are passed as a specific C/Cython class. The C/Cython type is given by ``self.ctype()``. """ def prototype_code(self): """ Return code to appear in the prototype of the Cython wrapper. """ s = self.ctype() + " " + self.name if self.default is not None: s += "=" + self.default return s class PariInstanceArgument(PariArgumentObject): """ ``self`` argument for ``Pari`` object. This argument is never actually used. 
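    (It merely fills the implicit ``self`` slot when prototypes are parsed
    for methods of :class:`Pari`, so the generated wrappers can skip it;
    see ``handle_pari_function`` in generator.py.)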
""" def __init__(self): PariArgument.__init__(self, iter(["self"]), None, 0) def _typerepr(self): return "Pari" def ctype(self): return "GEN" class PariArgumentGEN(PariArgumentObject): def _typerepr(self): return "GEN" def ctype(self): return "GEN" def convert_code(self): """ Conversion to Gen """ if self.index == 0: # self argument s = "" elif self.default is None: s = " {name} = objtogen({name})\n" elif self.default is False: # This is actually a required argument # See parse_prototype() in parser.py why we need this s = " if {name} is None:\n" s += " raise TypeError(\"missing required argument: '{name}'\")\n" s += " {name} = objtogen({name})\n" else: s = " cdef bint _have_{name} = ({name} is not None)\n" s += " if _have_{name}:\n" s += " {name} = objtogen({name})\n" return s.format(name=self.name) def c_convert_code(self): """ Conversion Gen -> GEN """ if not self.default: # required argument s = " cdef GEN {tmp} = ({name}).g\n" elif self.default == "NULL": s = " cdef GEN {tmp} = NULL\n" s += " if _have_{name}:\n" s += " {tmp} = ({name}).g\n" elif self.default == "0": s = " cdef GEN {tmp} = gen_0\n" s += " if _have_{name}:\n" s += " {tmp} = ({name}).g\n" else: raise ValueError("default value %r for GEN argument %r is not supported" % (self.default, self.name)) return s.format(name=self.name, tmp=self.tmpname) def call_code(self): return self.tmpname class PariArgumentString(PariArgumentObject): def _typerepr(self): return "str" def ctype(self): return "char *" def convert_code(self): if self.default is None: s = " {name} = to_bytes({name})\n" s += " cdef char* {tmp} = {name}\n" else: s = " cdef char* {tmp}\n" s += " if {name} is None:\n" s += " {tmp} = {default}\n" s += " else:\n" s += " {name} = to_bytes({name})\n" s += " {tmp} = {name}\n" return s.format(name=self.name, tmp=self.tmpname, default=self.default) def call_code(self): return self.tmpname class PariArgumentVariable(PariArgumentObject): def _typerepr(self): return "var" def ctype(self): return "long" def default_default(self): return "-1" def convert_code(self): if self.default is None: s = " cdef long {tmp} = get_var({name})\n" else: s = " cdef long {tmp} = {default}\n" s += " if {name} is not None:\n" s += " {tmp} = get_var({name})\n" return s.format(name=self.name, tmp=self.tmpname, default=self.default) def call_code(self): return self.tmpname class PariArgumentLong(PariArgumentClass): def _typerepr(self): return "long" def ctype(self): return "long" def default_default(self): return "0" class PariArgumentULong(PariArgumentClass): def _typerepr(self): return "unsigned long" def ctype(self): return "unsigned long" def default_default(self): return "0" class PariArgumentPrec(PariArgumentClass): def _typerepr(self): return "prec" def ctype(self): return "long" def always_default(self): return "0" def get_argument_name(self, namesiter): return "precision" def c_convert_code(self): s = " {name} = prec_bits_to_words({name})\n" return s.format(name=self.name) class PariArgumentBitprec(PariArgumentClass): def _typerepr(self): return "bitprec" def ctype(self): return "long" def always_default(self): return "0" def get_argument_name(self, namesiter): return "precision" def c_convert_code(self): s = " if not {name}:\n" s += " {name} = default_bitprec()\n" return s.format(name=self.name) class PariArgumentSeriesPrec(PariArgumentClass): def _typerepr(self): return "serprec" def ctype(self): return "long" def default_default(self): return "-1" def get_argument_name(self, namesiter): return "serprec" def c_convert_code(self): s = " if 
{name} < 0:\n" s += " {name} = precdl # Global PARI series precision\n" return s.format(name=self.name) class PariArgumentGENPointer(PariArgumentObject): default = "NULL" def _typerepr(self): return "GEN*" def ctype(self): return "GEN*" def convert_code(self): """ Conversion to NULL or Gen """ s = " cdef bint _have_{name} = ({name} is not None)\n" s += " if _have_{name}:\n" s += " raise NotImplementedError(\"optional argument {name} not available\")\n" return s.format(name=self.name) def c_convert_code(self): """ Conversion Gen -> GEN """ s = " cdef GEN * {tmp} = NULL\n" return s.format(name=self.name, tmp=self.tmpname) def call_code(self): return self.tmpname pari_arg_types = { 'G': PariArgumentGEN, 'W': PariArgumentGEN, 'r': PariArgumentString, 's': PariArgumentString, 'L': PariArgumentLong, 'U': PariArgumentULong, 'n': PariArgumentVariable, 'p': PariArgumentPrec, 'b': PariArgumentBitprec, 'P': PariArgumentSeriesPrec, '&': PariArgumentGENPointer, # Codes which are known but not actually supported yet 'V': None, 'I': None, 'E': None, 'J': None, 'C': None, '*': None, '=': None} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604256609.0 cypari2-2.1.2/autogen/doc.py0000644000175000017500000002462200000000000017013 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- """ Handle PARI documentation """ from __future__ import unicode_literals import re import subprocess leading_ws = re.compile("^( +)", re.MULTILINE) trailing_ws = re.compile("( +)$", re.MULTILINE) double_space = re.compile(" +") end_space = re.compile(r"(@\[end[a-z]*\])([A-Za-z])") end_paren = re.compile(r"(@\[end[a-z]*\])([(])") begin_verb = re.compile(r"@1") end_verb = re.compile(r"@[23] *@\[endcode\]") verb_loop = re.compile("^( .*)@\[[a-z]*\]", re.MULTILINE) dollars = re.compile(r"@\[dollar\]\s*(.*?)\s*@\[dollar\]", re.DOTALL) doubledollars = re.compile(r"@\[doubledollar\]\s*(.*?)\s*@\[doubledollar\] *", re.DOTALL) math_loop = re.compile(r"(@\[start[A-Z]*MATH\][^@]*)@\[[a-z]*\]") math_backslash = re.compile(r"(@\[start[A-Z]*MATH\][^@]*)=BACKSLASH=") prototype = re.compile("^[^\n]*\n\n") library_syntax = re.compile("The library syntax is.*", re.DOTALL) newlines = re.compile("\n\n\n\n*") bullet_loop = re.compile("(@BULLET( [^\n]*\n)*)([^ \n])") indent_math = re.compile("(@\\[startDISPLAYMATH\\].*\n(.+\n)*)(\\S)") escape_backslash = re.compile(r"^(\S.*)[\\]", re.MULTILINE) escape_mid = re.compile(r"^(\S.*)[|]", re.MULTILINE) escape_percent = re.compile(r"^(\S.*)[%]", re.MULTILINE) escape_hash = re.compile(r"^(\S.*)[#]", re.MULTILINE) label_define = re.compile(r"@\[label [a-zA-Z0-9:]*\]") label_ref = re.compile(r"(Section *)?@\[startref\](se:)?([^@]*)@\[endref\]") def sub_loop(regex, repl, text): """ In ``text``, substitute ``regex`` by ``repl`` recursively. As long as substitution is possible, ``regex`` is substituted. INPUT: - ``regex`` -- a compiled regular expression - ``repl`` -- replacement text - ``text`` -- input text OUTPUT: substituted text EXAMPLES: Ensure there a space between any 2 letters ``x``:: >>> from autogen.doc import sub_loop >>> import re >>> print(sub_loop(re.compile("xx"), "x x", "xxx_xx")) x x x_x x """ while True: text, n = regex.subn(repl, text) if not n: return text def raw_to_rest(doc): r""" Convert raw PARI documentation (with ``@``-codes) to reST syntax. 
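
    (These ``@``-codes are the markup emitted by ``gphelp -raw``; see
    :func:`get_raw_doc` below. The resulting reST ends up in the
    docstrings of the auto-generated methods.)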
INPUT: - ``doc`` -- the raw PARI documentation OUTPUT: a unicode string EXAMPLES:: >>> from autogen.doc import raw_to_rest >>> print(raw_to_rest(b"@[startbold]hello world@[endbold]")) :strong:`hello world` TESTS:: >>> raw_to_rest(b"@[invalid]") Traceback (most recent call last): ... SyntaxError: @ found: @[invalid] >>> s = b'@3@[startbold]*@[endbold] snip @[dollar]0@[dollar]\ndividing @[dollar]#E@[dollar].' >>> print(raw_to_rest(s)) - snip :math:`0` dividing :math:`\#E`. """ doc = doc.decode("utf-8") # Work around a specific problem with doc of "component" doc = doc.replace("[@[dollar]@[dollar]]", "[]") # Work around a specific problem with doc of "algdivl" doc = doc.replace(r"\y@", r"\backslash y@") # Special characters doc = doc.replace("@[lt]", "<") doc = doc.replace("@[gt]", ">") doc = doc.replace("@[pm]", "±") doc = doc.replace("@[nbrk]", "\xa0") doc = doc.replace("@[agrave]", "à") doc = doc.replace("@[aacute]", "á") doc = doc.replace("@[eacute]", "é") doc = doc.replace("@[ouml]", "ö") doc = doc.replace("@[uuml]", "ü") doc = doc.replace("\\'{a}", "á") # Remove leading and trailing whitespace from every line doc = leading_ws.sub("", doc) doc = trailing_ws.sub("", doc) # Remove multiple spaces doc = double_space.sub(" ", doc) # Sphinx dislikes inline markup immediately followed by a letter: # insert a non-breaking space doc = end_space.sub("\\1\xa0\\2", doc) # Similarly, for inline markup immediately followed by an open # parenthesis, insert a space doc = end_paren.sub("\\1 \\2", doc) # Fix labels and references doc = label_define.sub("", doc) doc = label_ref.sub("``\\3`` (in the PARI manual)", doc) # Bullet items doc = doc.replace("@3@[startbold]*@[endbold] ", "@BULLET ") doc = sub_loop(bullet_loop, "\\1 \\3", doc) doc = doc.replace("@BULLET ", "- ") # Add =VOID= in front of all leading whitespace (which was # intentionally added) to avoid confusion with verbatim blocks. 
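    # (The =VOID= markers are removed again further down, once verbatim
    # blocks and the escape sequences have been handled, so this
    # intentional indentation survives in the final output.)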
doc = leading_ws.sub(r"=VOID=\1", doc) # Verbatim blocks doc = begin_verb.sub("::\n\n@0", doc) doc = end_verb.sub("", doc) doc = doc.replace("@0", " ") doc = doc.replace("@3", "") # Remove all further markup from within verbatim blocks doc = sub_loop(verb_loop, "\\1", doc) # Pair dollars -> beginmath/endmath doc = doc.replace("@[dollar]@[dollar]", "@[doubledollar]") doc = dollars.sub(r"@[startMATH]\1@[endMATH]", doc) doc = doubledollars.sub(r"@[startDISPLAYMATH]\1@[endDISPLAYMATH]", doc) # Replace special characters (except in verbatim blocks) # \ -> =BACKSLASH= # | -> =MID= # % -> =PERCENT= # # -> =HASH= doc = sub_loop(escape_backslash, "\\1=BACKSLASH=", doc) doc = sub_loop(escape_mid, "\\1=MID=", doc) doc = sub_loop(escape_percent, "\\1=PERCENT=", doc) doc = sub_loop(escape_hash, "\\1=HASH=", doc) # Math markup doc = doc.replace("@[obr]", "{") doc = doc.replace("@[cbr]", "}") doc = doc.replace("@[startword]", "\\") doc = doc.replace("@[endword]", "") # (special rules for Hom and Frob, see trac ticket 21005) doc = doc.replace("@[startlword]Hom@[endlword]", "\\text{Hom}") doc = doc.replace("@[startlword]Frob@[endlword]", "\\text{Frob}") doc = doc.replace("@[startlword]", "\\") doc = doc.replace("@[endlword]", "") doc = doc.replace("@[startbi]", "\\mathbb{") doc = doc.replace("@[endbi]", "}") # PARI TeX macros doc = doc.replace(r"\Cl", r"\mathrm{Cl}") doc = doc.replace(r"\Id", r"\mathrm{Id}") doc = doc.replace(r"\Norm", r"\mathrm{Norm}") doc = doc.replace(r"\disc", r"\mathrm{disc}") doc = doc.replace(r"\gcd", r"\mathrm{gcd}") doc = doc.replace(r"\lcm", r"\mathrm{lcm}") # Remove extra markup inside math blocks doc = sub_loop(math_loop, "\\1", doc) # Replace special characters by escape sequences # Note that =BACKSLASH= becomes an unescaped backslash in math mode # but an escaped backslash otherwise. doc = sub_loop(math_backslash, r"\1\\", doc) doc = doc.replace("=BACKSLASH=", r"\\") doc = doc.replace("=MID=", r"\|") doc = doc.replace("=PERCENT=", r"\%") doc = doc.replace("=HASH=", r"\#") doc = doc.replace("=VOID=", "") # Handle DISPLAYMATH doc = doc.replace("@[endDISPLAYMATH]", "\n\n") doc = sub_loop(indent_math, "\\1 \\3", doc) doc = doc.replace("@[startDISPLAYMATH]", "\n\n.. MATH::\n\n ") # Inline markup. We do use the more verbose :foo:`text` style since # those nest more easily. doc = doc.replace("@[startMATH]", ":math:`") doc = doc.replace("@[endMATH]", "`") doc = doc.replace("@[startpodcode]", "``") doc = doc.replace("@[endpodcode]", "``") doc = doc.replace("@[startcode]", ":literal:`") doc = doc.replace("@[endcode]", "`") doc = doc.replace("@[startit]", ":emphasis:`") doc = doc.replace("@[endit]", "`") doc = doc.replace("@[startbold]", ":strong:`") doc = doc.replace("@[endbold]", "`") # Remove prototype doc = prototype.sub("", doc) # Remove everything starting with "The library syntax is" # (this is not relevant for Python) doc = library_syntax.sub("", doc) # Allow at most 2 consecutive newlines doc = newlines.sub("\n\n", doc) # Strip result doc = doc.strip() # Ensure no more @ remains try: i = doc.index("@") except ValueError: return doc ilow = max(0, i-30) ihigh = min(len(doc), i+30) raise SyntaxError("@ found: " + doc[ilow:ihigh]) def get_raw_doc(function): r""" Get the raw documentation of PARI function ``function``. INPUT: - ``function`` -- name of a PARI function EXAMPLES:: >>> from autogen.doc import get_raw_doc >>> print(get_raw_doc("cos").decode()) @[startbold]cos@[dollar](x)@[dollar]:@[endbold] @[label se:cos] Cosine of @[dollar]x@[dollar]. ... 
>>> get_raw_doc("abcde") Traceback (most recent call last): ... RuntimeError: no help found for 'abcde' """ doc = subprocess.check_output(["gphelp", "-raw", function]) if doc.endswith(b"""' not found !\n"""): raise RuntimeError("no help found for '{}'".format(function)) return doc def get_rest_doc(function): r""" Get the documentation of the PARI function ``function`` in reST syntax. INPUT: - ``function`` -- name of a PARI function EXAMPLES:: >>> from autogen.doc import get_rest_doc >>> print(get_rest_doc("teichmuller")) Teichmüller character of the :math:`p`-adic number :math:`x`, i.e. the unique :math:`(p-1)`-th root of unity congruent to :math:`x / p^{v_p(x)}` modulo :math:`p`... :: >>> print(get_rest_doc("weber")) One of Weber's three :math:`f` functions. If :math:`flag = 0`, returns .. MATH:: f(x) = \exp (-i\pi/24).\eta ((x+1)/2)/\eta (x) {such that} j = (f^{24}-16)^3/f^{24}, where :math:`j` is the elliptic :math:`j`-invariant (see the function :literal:`ellj`). If :math:`flag = 1`, returns .. MATH:: f_1(x) = \eta (x/2)/\eta (x) {such that} j = (f_1^{24}+16)^3/f_1^{24}. Finally, if :math:`flag = 2`, returns .. MATH:: f_2(x) = \sqrt{2}\eta (2x)/\eta (x) {such that} j = (f_2^{24}+16)^3/f_2^{24}. Note the identities :math:`f^8 = f_1^8+f_2^8` and :math:`ff_1f_2 = \sqrt2`. :: >>> doc = get_rest_doc("ellap") # doc depends on PARI version :: >>> print(get_rest_doc("bitor")) bitwise (inclusive) :literal:`or` of two integers :math:`x` and :math:`y`, that is the integer .. MATH:: \sum (x_i or y_i) 2^i See ``bitand`` (in the PARI manual) for the behavior for negative arguments. """ raw = get_raw_doc(function) return raw_to_rest(raw) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604133828.0 cypari2-2.1.2/autogen/generator.py0000644000175000017500000003326000000000000020232 0ustar00vincentvincent00000000000000""" Auto-generate methods for PARI functions. """ #***************************************************************************** # Copyright (C) 2015 Jeroen Demeyer # 2017 Vincent Delecroix # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, print_function, unicode_literals import os, re, sys, io from .args import PariArgumentGEN, PariInstanceArgument from .parser import read_pari_desc, parse_prototype from .doc import get_rest_doc autogen_top = "# This file is auto-generated by {}\n".format( os.path.relpath(__file__)) gen_banner = autogen_top + ''' cdef class Gen_base: """ Wrapper for a PARI ``GEN`` containing auto-generated methods. This class does not manage the ``GEN`` inside in any way. It is just a dumb wrapper. In particular, it might be invalid if the ``GEN`` is on the PARI stack and the PARI stack has been garbage collected. You almost certainly want to use one of the derived class :class:`Gen` instead. That being said, :class:`Gen_base` can be used by itself to pass around a temporary ``GEN`` within Python where we cannot use C calls. """ ''' instance_banner = autogen_top + ''' cdef class Pari_auto: """ Part of the :class:`Pari` class containing auto-generated functions. You must never use this class directly (in fact, Python may crash if you do), use the derived class :class:`Pari` instead. 
""" ''' decl_banner = autogen_top + ''' from .types cimport * cdef extern from *: ''' function_re = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$") function_blacklist = {"O", # O(p^e) needs special parser support "alias", # Not needed and difficult documentation "listcreate", # "redundant and obsolete" according to PARI "print", # Conflicts with Python builtin "allocatemem", # Dangerous; overridden in PariInstance "global", # Invalid in Python (and obsolete) "inline", # Total confusion "uninline", # idem "local", # idem "my", # idem } class PariFunctionGenerator(object): """ Class to auto-generate ``auto_gen.pxi`` and ``auto_instance.pxi``. The PARI file ``pari.desc`` is read and all suitable PARI functions are written as methods of either :class:`Gen` or :class:`Pari`. """ def __init__(self): self.gen_filename = os.path.join('cypari2', 'auto_gen.pxi') self.instance_filename = os.path.join('cypari2', 'auto_instance.pxi') self.decl_filename = os.path.join('cypari2', 'auto_paridecl.pxd') def can_handle_function(self, function, cname="", **kwds): """ Can we actually handle this function? EXAMPLES:: >>> from autogen.generator import PariFunctionGenerator >>> G = PariFunctionGenerator() >>> G.can_handle_function("bnfinit", "bnfinit0", **{"class":"basic"}) True >>> G.can_handle_function("_bnfinit", "bnfinit0", **{"class":"basic"}) False >>> G.can_handle_function("bnfinit", "bnfinit0", **{"class":"hard"}) False """ if not cname: # No corresponding C function => must be specific to GP or GP2C return False if function in function_blacklist: # Blacklist specific troublesome functions return False if not function_re.match(function): # Not a legal function name, like "!_" return False cls = kwds.get("class", "unknown") sec = kwds.get("section", "unknown") if cls != "basic": # Different class: probably something technical or # specific to gp or gp2c return False if sec == "programming/control": # Skip if, return, break, ... return False return True def handle_pari_function(self, function, cname, prototype="", help="", obsolete=None, **kwds): r""" Handle one PARI function: decide whether or not to add the function, in which file (as method of :class:`Gen` or of :class:`Pari`?) and call :meth:`write_method` to actually write the code. EXAMPLES:: >>> from autogen.parser import read_pari_desc >>> from autogen.generator import PariFunctionGenerator >>> G = PariFunctionGenerator() >>> G.gen_file = sys.stdout >>> G.instance_file = sys.stdout >>> G.decl_file = sys.stdout >>> G.handle_pari_function("bnfinit", ... cname="bnfinit0", prototype="GD0,L,DGp", ... help=r"bnfinit(P,{flag=0},{tech=[]}): compute...", ... **{"class":"basic", "section":"number_fields"}) GEN bnfinit0(GEN, long, GEN, long) def bnfinit(P, long flag=0, tech=None, long precision=0): ... cdef bint _have_tech = (tech is not None) if _have_tech: tech = objtogen(tech) sig_on() cdef GEN _P = (P).g cdef GEN _tech = NULL if _have_tech: _tech = (tech).g precision = prec_bits_to_words(precision) cdef GEN _ret = bnfinit0(_P, flag, _tech, precision) return new_gen(_ret) ... >>> G.handle_pari_function("ellmodulareqn", ... cname="ellmodulareqn", prototype="LDnDn", ... help=r"ellmodulareqn(N,{x},{y}): return...", ... **{"class":"basic", "section":"elliptic_curves"}) GEN ellmodulareqn(long, long, long) def ellmodulareqn(self, long N, x=None, y=None): ... cdef long _x = -1 if x is not None: _x = get_var(x) cdef long _y = -1 if y is not None: _y = get_var(y) sig_on() cdef GEN _ret = ellmodulareqn(N, _x, _y) return new_gen(_ret) >>> G.handle_pari_function("setrand", ... 
cname="setrand", prototype="vG", ... help=r"setrand(n): reset the seed...", ... doc=r"reseeds the random number generator...", ... **{"class":"basic", "section":"programming/specific"}) void setrand(GEN) def setrand(n): r''' Reseeds the random number generator... ''' sig_on() cdef GEN _n = (n).g setrand(_n) clear_stack() def setrand(self, n): r''' Reseeds the random number generator... ''' n = objtogen(n) sig_on() cdef GEN _n = (n).g setrand(_n) clear_stack() >>> G.handle_pari_function("polredord", ... cname="polredord", prototype="G", ... help="polredord(x): this function is obsolete, use polredbest.", ... obsolete="2008-07-20", ... **{"class":"basic", "section":"number_fields"}) GEN polredord(GEN) def polredord(x): r''' This function is obsolete, use polredbest. ''' from warnings import warn warn('the PARI/GP function polredord is obsolete (2008-07-20)', DeprecationWarning) sig_on() cdef GEN _x = (x).g cdef GEN _ret = polredord(_x) return new_gen(_ret) def polredord(self, x): r''' This function is obsolete, use polredbest. ''' from warnings import warn warn('the PARI/GP function polredord is obsolete (2008-07-20)', DeprecationWarning) x = objtogen(x) sig_on() cdef GEN _x = (x).g cdef GEN _ret = polredord(_x) return new_gen(_ret) """ try: args, ret = parse_prototype(prototype, help) except NotImplementedError: return # Skip unsupported prototype codes doc = get_rest_doc(function) self.write_declaration(cname, args, ret, self.decl_file) if len(args) > 0 and isinstance(args[0], PariArgumentGEN): # If the first argument is a GEN, write a method of the # Gen class. self.write_method(function, cname, args, ret, args, self.gen_file, doc, obsolete) # In any case, write a method of the Pari class. # Parse again with an extra "self" argument. args, ret = parse_prototype(prototype, help, [PariInstanceArgument()]) self.write_method(function, cname, args, ret, args[1:], self.instance_file, doc, obsolete) def write_declaration(self, cname, args, ret, file): """ Write a .pxd declaration of a PARI library function. INPUT: - ``cname`` -- name of the PARI C library call - ``args``, ``ret`` -- output from ``parse_prototype`` - ``file`` -- a file object where the declaration should be written to """ args = ", ".join(a.ctype() for a in args) s = ' {ret} {function}({args})'.format(ret=ret.ctype(), function=cname, args=args) print(s, file=file) def write_method(self, function, cname, args, ret, cargs, file, doc, obsolete): """ Write Cython code with a method to call one PARI function. INPUT: - ``function`` -- name for the method - ``cname`` -- name of the PARI C library call - ``args``, ``ret`` -- output from ``parse_prototype``, including the initial args like ``self`` - ``cargs`` -- like ``args`` but excluding the initial args - ``file`` -- a file object where the code should be written to - ``doc`` -- the docstring for the method - ``obsolete`` -- if ``True``, a deprecation warning will be given whenever this method is called """ doc = doc.replace("\n", "\n ") # Indent doc protoargs = ", ".join(a.prototype_code() for a in args) callargs = ", ".join(a.call_code() for a in cargs) s = " def {function}({protoargs}):\n" if doc: # Use triple single quotes to make it easier to doctest # this within triply double quoted docstrings. 
s += " r'''\n {doc}\n '''\n" # Warning for obsolete functions if obsolete: s += " from warnings import warn\n" s += " warn('the PARI/GP function {function} is obsolete ({obsolete})', DeprecationWarning)\n" # Warning for undocumented arguments for a in args: s += a.deprecation_warning_code(function) for a in args: s += a.convert_code() s += " sig_on()\n" for a in args: s += a.c_convert_code() s += ret.assign_code("{cname}({callargs})") s += ret.return_code() s = s.format(function=function, protoargs=protoargs, cname=cname, callargs=callargs, doc=doc, obsolete=obsolete) print(s, file=file) def __call__(self): """ Top-level function to generate the auto-generated files. """ D = read_pari_desc() D = sorted(D.values(), key=lambda d: d['function']) sys.stdout.write("Generating PARI functions:") self.gen_file = io.open(self.gen_filename + '.tmp', 'w', encoding='utf-8') self.gen_file.write(gen_banner) self.instance_file = io.open(self.instance_filename + '.tmp', 'w', encoding='utf-8') self.instance_file.write(instance_banner) self.decl_file = io.open(self.decl_filename + '.tmp', 'w', encoding='utf-8') self.decl_file.write(decl_banner) # Check for availability of hi-res SVG plotting. This requires # PARI-2.10 or later. have_plot_svg = False for v in D: func = v["function"] if self.can_handle_function(**v): sys.stdout.write(" %s" % func) sys.stdout.flush() self.handle_pari_function(**v) if func == "plothraw": have_plot_svg = True else: sys.stdout.write(" (%s)" % func) sys.stdout.write("\n") self.instance_file.write("DEF HAVE_PLOT_SVG = {}".format(have_plot_svg)) self.gen_file.close() self.instance_file.close() self.decl_file.close() # All done? Let's commit. os.rename(self.gen_filename + '.tmp', self.gen_filename) os.rename(self.instance_filename + '.tmp', self.instance_filename) os.rename(self.decl_filename + '.tmp', self.decl_filename) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604256609.0 cypari2-2.1.2/autogen/parser.py0000644000175000017500000001405100000000000017535 0ustar00vincentvincent00000000000000""" Read and parse the file pari.desc """ #***************************************************************************** # Copyright (C) 2015 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, unicode_literals import os, re, io from .args import pari_arg_types from .ret import pari_ret_types from .paths import pari_share paren_re = re.compile(r"[(](.*)[)]") argname_re = re.compile(r"[ {]*&?([A-Za-z_][A-Za-z0-9_]*)") def read_pari_desc(): """ Read and parse the file ``pari.desc``. The output is a dictionary where the keys are GP function names and the corresponding values are dictionaries containing the ``(key, value)`` pairs from ``pari.desc``. EXAMPLES:: >>> from autogen.parser import read_pari_desc >>> D = read_pari_desc() >>> Dcos = D["cos"] >>> if "description" in Dcos: _ = Dcos.pop("description") >>> Dcos.pop("doc").startswith('cosine of $x$.') True >>> Dcos == { 'class': 'basic', ... 'cname': 'gcos', ... 'function': 'cos', ... 'help': 'cos(x): cosine of x.', ... 'prototype': 'Gp', ... 
'section': 'transcendental'} True """ pari_desc = os.path.join(pari_share(), 'pari.desc') with io.open(pari_desc, encoding="utf-8") as f: lines = f.readlines() n = 0 N = len(lines) functions = {} while n < N: fun = {} while True: L = lines[n]; n += 1 if L == "\n": break # As long as the next lines start with a space, append them while lines[n].startswith(" "): L += (lines[n])[1:]; n += 1 key, value = L.split(":", 1) # Change key to an allowed identifier name key = key.lower().replace("-", "") fun[key] = value.strip() name = fun["function"] functions[name] = fun return functions def parse_prototype(proto, help, initial_args=[]): """ Parse arguments and return type of a PARI function. INPUT: - ``proto`` -- a PARI prototype like ``"GD0,L,DGDGDG"`` - ``help`` -- a PARI help string like ``"qfbred(x,{flag=0},{d},{isd},{sd})"`` - ``initial_args`` -- other arguments to this function which come before the PARI arguments, for example a ``self`` argument. OUTPUT: a tuple ``(args, ret)`` where - ``args`` is a list consisting of ``initial_args`` followed by :class:`PariArgument` instances with all arguments of this function. - ``ret`` is a :class:`PariReturn` instance with the return type of this function. EXAMPLES:: >>> from autogen.parser import parse_prototype >>> proto = 'GD0,L,DGDGDG' >>> help = 'qfbred(x,{flag=0},{d},{isd},{sd})' >>> parse_prototype(proto, help) ([GEN x, long flag=0, GEN d=NULL, GEN isd=NULL, GEN sd=NULL], GEN) >>> proto = "GD&" >>> help = "sqrtint(x,{&r})" >>> parse_prototype(proto, help) ([GEN x, GEN* r=NULL], GEN) >>> parse_prototype("lp", "foo()", [str("TEST")]) (['TEST', prec precision=0], long) """ # Use the help string just for the argument names. # "names" should be an iterator over the argument names. m = paren_re.search(help) if m is None: names = iter([]) else: s = m.groups()[0] matches = [argname_re.match(x) for x in s.split(",")] names = (m.groups()[0] for m in matches if m is not None) # First, handle the return type try: c = proto[0] t = pari_ret_types[c] n = 1 # index in proto except (IndexError, KeyError): t = pari_ret_types[""] n = 0 # index in proto ret = t() # Go over the prototype characters and build up the arguments args = list(initial_args) have_default = False # Have we seen any default argument? while n < len(proto): c = proto[n]; n += 1 # Parse default value if c == "D": default = "" if proto[n] not in pari_arg_types: while True: c = proto[n]; n += 1 if c == ",": break default += c c = proto[n]; n += 1 else: default = None try: t = pari_arg_types[c] if t is None: raise NotImplementedError('unsupported prototype character %r' % c) except KeyError: if c == ",": continue # Just skip additional commas else: raise ValueError('unknown prototype character %r' % c) arg = t(names, default, index=len(args)) if arg.default is not None: have_default = True elif have_default: # We have a non-default argument following a default # argument, which means trouble... # # A syntactical wart of Python is that it does not allow # that: something like def foo(x=None, y) is a SyntaxError # (at least with Python-2.7.13, Python-3.6.1 and Cython-0.25.2) # # A small number of GP functions (nfroots() for example) # wants to do this anyway. Luckily, this seems to occur only # for arguments of type GEN (prototype code "G") # # To work around this, we add a "fake" default value and # then raise an error if it was not given... 
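# (Concrete instance of this wart, assuming the pari.desc entry still reads this way: nfroots has help "nfroots({nf},x)", so the optional nf precedes the mandatory x; x then receives the fake default set below, and the generated wrapper raises at call time if no value was actually passed for it.)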
if c != "G": raise NotImplementedError("non-default argument after default argument is only implemented for GEN arguments") arg.default = False args.append(arg) return (args, ret) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1556307469.0 cypari2-2.1.2/autogen/paths.py0000644000175000017500000000514000000000000017357 0ustar00vincentvincent00000000000000""" Find out installation paths of PARI/GP """ #***************************************************************************** # Copyright (C) 2017 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, unicode_literals import os from glob import glob from distutils.spawn import find_executable # find_executable() returns None if nothing was found gppath = find_executable("gp") if gppath is None: # This almost certainly won't work, but we need to put something here prefix = "." else: # Assume gppath is ${prefix}/bin/gp prefix = os.path.dirname(os.path.dirname(gppath)) def pari_share(): r""" Return the directory where the PARI data files are stored. EXAMPLES:: >>> import os >>> from autogen.parser import pari_share >>> os.path.isfile(os.path.join(pari_share(), "pari.desc")) True """ from subprocess import Popen, PIPE if not gppath: raise EnvironmentError("cannot find an installation of PARI/GP: make sure that the 'gp' program is in your $PATH") # Ignore GP_DATA_DIR environment variable env = dict(os.environ) env.pop("GP_DATA_DIR", None) gp = Popen([gppath, "-f", "-q"], stdin=PIPE, stdout=PIPE, env=env) out = gp.communicate(b"print(default(datadir))")[0] # Convert out to str if needed if not isinstance(out, str): from sys import getfilesystemencoding out = out.decode(getfilesystemencoding(), "surrogateescape") datadir = out.strip() if not os.path.isdir(datadir): # As a fallback, try a path relative to the prefix datadir = os.path.join(prefix, "share", "pari") if not os.path.isdir(datadir): raise EnvironmentError("PARI data directory {!r} does not exist".format(datadir)) return datadir def include_dirs(): """ Return a list of directories containing PARI include files. """ dirs = [os.path.join(prefix, "include")] return [d for d in dirs if os.path.isdir(os.path.join(d, "pari"))] def library_dirs(): """ Return a list of directories containing PARI library files. """ dirs = [os.path.join(prefix, s) for s in ("lib", "lib32", "lib64")] return [d for d in dirs if glob(os.path.join(d, "libpari*"))] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/autogen/ret.py0000644000175000017500000000446600000000000017044 0ustar00vincentvincent00000000000000""" Return types for PARI calls """ #***************************************************************************** # Copyright (C) 2015 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. 
# http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import unicode_literals class PariReturn(object): """ This class represents the return value of a PARI call. """ def __init__(self): self.name = "_ret" def __repr__(self): return self.ctype() def ctype(self): """ Return the C type of the result of the PARI call. """ raise NotImplementedError def assign_code(self, value): """ Return code to assign the result of the PARI call in ``value`` to the variable named ``self.name``. """ s = " cdef {ctype} {name} = {value}\n" return s.format(ctype=self.ctype(), name=self.name, value=value) def return_code(self): """ Return code to return from the Cython wrapper. """ s = " clear_stack()\n" s += " return {name}\n" return s.format(name=self.name) class PariReturnGEN(PariReturn): def ctype(self): return "GEN" def return_code(self): s = " return new_gen({name})\n" return s.format(name=self.name) class PariReturnInt(PariReturn): def ctype(self): return "int" class PariReturnLong(PariReturn): def ctype(self): return "long" class PariReturnULong(PariReturn): def ctype(self): return "unsigned long" class PariReturnVoid(PariReturn): def ctype(self): return "void" def assign_code(self, value): return " {value}\n".format(value=value) def return_code(self): s = " clear_stack()\n" return s pari_ret_types = { '': PariReturnGEN, 'm': PariReturnGEN, 'i': PariReturnInt, 'l': PariReturnLong, 'u': PariReturnULong, 'v': PariReturnVoid, } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.796661 cypari2-2.1.2/cypari2/0000755000175000017500000000000000000000000015575 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1500458520.0 cypari2-2.1.2/cypari2/__init__.py0000644000175000017500000000013100000000000017701 0ustar00vincentvincent00000000000000from .pari_instance import Pari from .handle_error import PariError from .gen import Gen ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/closure.pxd0000644000175000017500000000013000000000000017760 0ustar00vincentvincent00000000000000from .gen cimport Gen cpdef Gen objtoclosure(f) cdef int _pari_init_closure() except -1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/closure.pyx0000644000175000017500000001566700000000000020032 0ustar00vincentvincent00000000000000""" Convert Python functions to PARI closures ***************************************** AUTHORS: - Jeroen Demeyer (2015-04-10): initial version, :trac:`18052`. Examples: >>> def the_answer(): ... return 42 >>> import cypari2 >>> pari = cypari2.Pari() >>> f = pari(the_answer) >>> f() 42 >>> cube = pari(lambda i: i**3) >>> cube.apply(range(10)) [0, 1, 8, 27, 64, 125, 216, 343, 512, 729] """ #***************************************************************************** # Copyright (C) 2015 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. 
# http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, division, print_function from cysignals.signals cimport sig_on, sig_off, sig_block, sig_unblock, sig_error from cpython.tuple cimport * from cpython.object cimport PyObject_Call from cpython.ref cimport Py_INCREF from .paridecl cimport * from .stack cimport new_gen, new_gen_noclear, clone_gen_noclear, DetachGen from .gen cimport objtogen try: from inspect import getfullargspec as getargspec except ImportError: from inspect import getargspec cdef inline GEN call_python_func_impl "call_python_func"(GEN* args, object py_func) except NULL: """ Call ``py_func(*args)`` where ``py_func`` is a Python function and ``args`` is an array of ``GEN``s terminated by ``NULL``. The arguments are converted from ``GEN`` to a cypari ``gen`` before calling ``py_func``. The result is converted back to a PARI ``GEN``. """ # We need to ensure that nothing above avma is touched avmaguard = new_gen_noclear(avma) # How many arguments are there? cdef Py_ssize_t n = 0 while args[n] is not NULL: n += 1 # Construct a Python tuple for args cdef tuple t = PyTuple_New(n) cdef Py_ssize_t i for i in range(n): a = clone_gen_noclear(args[i]) Py_INCREF(a) # Need to increase refcount because the tuple steals it PyTuple_SET_ITEM(t, i, a) # Call the Python function r = PyObject_Call(py_func, t, NULL) # Convert the result to a GEN and copy it to the PARI stack # (with a special case for None) if r is None: return gnil # Safely delete r and avmaguard d = DetachGen(objtogen(r)) del r res = d.detach() d = DetachGen(avmaguard) del avmaguard d.detach() return res # We rename this function to be able to call it with a different # signature. In particular, we want manual exception handling and we # implicitly convert py_func from a PyObject* to an object. cdef extern from *: GEN call_python_func(GEN* args, PyObject* py_func) cdef GEN call_python(GEN arg1, GEN arg2, GEN arg3, GEN arg4, GEN arg5, ulong nargs, ulong py_func): """ This function, which will be installed in PARI, is a front-end for ``call_python_func_impl``. It has 5 optional ``GEN``s as argument, a ``nargs`` argument specifying how many arguments are valid and one ``ulong``, which is actually a Python callable object cast to ``ulong``. """ if nargs > 5: sig_error() # Convert arguments to a NULL-terminated array. cdef GEN args[6] args[0] = arg1 args[1] = arg2 args[2] = arg3 args[3] = arg4 args[4] = arg5 args[nargs] = NULL sig_block() # Disallow interrupts during the Python code inside # call_python_func_impl(). We need to do this because this function # is very likely called within sig_on() and interrupting arbitrary # Python code is bad. cdef GEN r = call_python_func(args, py_func) sig_unblock() if not r: # An exception was raised sig_error() return r # Install the function "call_python" for use in the PARI library. cdef entree* ep_call_python cdef int _pari_init_closure() except -1: sig_on() global ep_call_python ep_call_python = install(call_python, "call_python", 'DGDGDGDGDGD5,U,U') sig_off() cpdef Gen objtoclosure(f): """ Convert a Python function (more generally, any callable) to a PARI ``t_CLOSURE``. .. NOTE:: With the current implementation, the function can be called with at most 5 arguments. .. WARNING:: The function ``f`` which is called through the closure cannot be interrupted. Therefore, it is advised to use this only for simple functions which do not take a long time. 
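Internally, the unused argument slots are padded with ``gnil``, and the closure stores ``nargs`` together with the address of ``f`` encoded as a ``t_INT``; the installed ``call_python`` front-end above unpacks both again when the closure is evaluated.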
Examples: >>> from cypari2.closure import objtoclosure >>> def pymul(i,j): return i*j >>> mul = objtoclosure(pymul) >>> mul (v1,v2)->call_python(v1,v2,0,0,0,2,...) >>> mul(6,9) 54 >>> mul.type() 't_CLOSURE' >>> mul.arity() 2 >>> def printme(x): ... print(x) >>> objtoclosure(printme)('matid(2)') [1, 0; 0, 1] Construct the Riemann zeta function using a closure: >>> from cypari2 import Pari; pari = Pari() >>> def coeffs(n): ... return [1 for i in range(n)] >>> Z = pari.lfuncreate([coeffs, 0, [0], 1, 1, 1, 1]) >>> Z.lfun(2) 1.64493406684823 A trivial closure: >>> f = pari(lambda x: x) >>> f(10) 10 Test various kinds of errors: >>> mul(4) Traceback (most recent call last): ... TypeError: pymul() ... >>> mul(None, None) Traceback (most recent call last): ... ValueError: Cannot convert None to pari >>> mul(*range(100)) Traceback (most recent call last): ... PariError: call_python: too many parameters in user-defined function call >>> mul([1], [2]) Traceback (most recent call last): ... PariError: call_python: forbidden multiplication t_VEC (1 elts) * t_VEC (1 elts) """ if not callable(f): raise TypeError("argument to objtoclosure() must be callable") # Determine number of arguments of f cdef Py_ssize_t i, nargs try: argspec = getargspec(f) except Exception: nargs = 5 else: nargs = len(argspec.args) # Only 5 arguments are supported for now... if nargs > 5: nargs = 5 # Fill in default arguments of PARI function sig_on() cdef GEN args = cgetg((5 - nargs) + 2 + 1, t_VEC) for i in range(5 - nargs): set_gel(args, i + 1, gnil) set_gel(args, (5 - nargs) + 1, stoi(nargs)) # Convert f to a t_INT containing the address of f set_gel(args, (5 - nargs) + 1 + 1, utoi(f)) # Create a t_CLOSURE which calls call_python() with py_func equal to f cdef Gen res = new_gen(snm_closure(ep_call_python, args)) # We need to keep a reference to f somewhere and there is no way to # have PARI handle this reference for us. So the only way out is to # force f to be never deallocated Py_INCREF(f) return res ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/convert.pxd0000644000175000017500000000403400000000000017773 0ustar00vincentvincent00000000000000from .paridecl cimport (GEN, t_COMPLEX, dbltor, real_0_bit, stoi, cgetg, set_gel, gen_0) from .gen cimport Gen from cpython.int cimport PyInt_AS_LONG from cpython.float cimport PyFloat_AS_DOUBLE from cpython.complex cimport PyComplex_RealAsDouble, PyComplex_ImagAsDouble # Conversion PARI -> Python cdef GEN gtoi(GEN g0) except NULL cdef PyObject_FromGEN(GEN g) cdef PyInt_FromGEN(GEN g) cpdef gen_to_python(Gen z) cpdef gen_to_integer(Gen x) # Conversion C -> PARI cdef inline GEN double_to_REAL(double x): # PARI has an odd concept where it attempts to track the accuracy # of floating-point 0; a floating-point zero might be 0.0e-20 # (meaning roughly that it might represent any number in the # range -1e-20 <= x <= 1e20). # PARI's dbltor converts a floating-point 0 into the PARI real # 0.0e-307; PARI treats this as an extremely precise 0. This # can cause problems; for instance, the PARI incgam() function can # be very slow if the first argument is very precise. # So we translate 0 into a floating-point 0 with 53 bits # of precision (that's the number of mantissa bits in an IEEE # double). 
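    # (Sketch of the effect: double_to_REAL(0.0) yields a PARI zero of 53-bit accuracy, matching the IEEE double mantissa, while any nonzero double is passed through dbltor unchanged.)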
if x == 0: return real_0_bit(-53) else: return dbltor(x) cdef inline GEN doubles_to_COMPLEX(double re, double im): cdef GEN g = cgetg(3, t_COMPLEX) if re == 0: set_gel(g, 1, gen_0) else: set_gel(g, 1, dbltor(re)) if im == 0: set_gel(g, 2, gen_0) else: set_gel(g, 2, dbltor(im)) return g # Conversion Python -> PARI cdef inline GEN PyInt_AS_GEN(x): return stoi(PyInt_AS_LONG(x)) cdef GEN PyLong_AS_GEN(x) cdef inline GEN PyFloat_AS_GEN(x): return double_to_REAL(PyFloat_AS_DOUBLE(x)) cdef inline GEN PyComplex_AS_GEN(x): return doubles_to_COMPLEX( PyComplex_RealAsDouble(x), PyComplex_ImagAsDouble(x)) cdef GEN PyObject_AsGEN(x) except? NULL # Deprecated functions still used by SageMath cdef Gen new_gen_from_double(double) cdef Gen new_t_COMPLEX_from_double(double re, double im) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/convert.pyx0000644000175000017500000004424700000000000020032 0ustar00vincentvincent00000000000000# cython: cdivision = True """ Convert PARI objects to/from Python/C native types ************************************************** This modules contains the following conversion routines: - integers, long integers <-> PARI integers - list of integers -> PARI polynomials - doubles -> PARI reals - pairs of doubles -> PARI complex numbers PARI integers are stored as an array of limbs of type ``pari_ulong`` (which are 32-bit or 64-bit integers). Depending on the kernel (GMP or native), this array is stored little-endian or big-endian. This is encapsulated in macros like ``int_W()``: see section 4.5.1 of the `PARI library manual `_. Python integers of type ``int`` are just C longs. Python integers of type ``long`` are stored as a little-endian array of type ``digit`` with 15 or 30 bits used per digit. The internal format of a ``long`` is not documented, but there is some information in `longintrepr.h `_. Because of this difference in bit lengths, converting integers involves some bit shuffling. """ #***************************************************************************** # Copyright (C) 2016 Jeroen Demeyer # Copyright (C) 2016 Luca De Feo # Copyright (C) 2016 Vincent Delecroix # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, division, print_function from cysignals.signals cimport sig_on, sig_off, sig_error from cpython.version cimport PY_MAJOR_VERSION from cpython.object cimport Py_SIZE from cpython.int cimport PyInt_AS_LONG, PyInt_FromLong from cpython.longintrepr cimport (_PyLong_New, digit, PyLong_SHIFT, PyLong_MASK) from libc.limits cimport LONG_MIN, LONG_MAX from libc.math cimport INFINITY from .paridecl cimport * from .stack cimport new_gen, reset_avma from .string_utils cimport to_string, to_bytes cdef extern from *: ctypedef struct PyLongObject: digit* ob_digit Py_ssize_t* Py_SIZE_PTR "&Py_SIZE"(object) ######################################################################## # Conversion PARI -> Python ######################################################################## cpdef gen_to_python(Gen z): r""" Convert the PARI element ``z`` to a Python object. 
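The conversion is recursive, and the result consists purely of native Python objects, keeping no reference to the PARI stack or heap.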
OUTPUT: - a Python integer for integers (type ``t_INT``) - a ``Fraction`` (``fractions`` module) for rationals (type ``t_FRAC``) - a ``float`` for real numbers (type ``t_REAL``) - a ``complex`` for complex numbers (type ``t_COMPLEX``) - a ``list`` for vectors (type ``t_VEC`` or ``t_COL``). The function ``gen_to_python`` is then recursively applied on the entries. - a ``list`` of Python integers for small vectors (type ``t_VECSMALL``) - a ``list`` of ``list``s for matrices (type ``t_MAT``). The function ``gen_to_python`` is then recursively applied on the entries. - the floating point ``inf`` or ``-inf`` for infinities (type ``t_INFINITY``) - a string for strings (type ``t_STR``) - other PARI types are not supported and the function will raise a ``NotImplementedError`` Examples: >>> from cypari2 import Pari >>> from cypari2.convert import gen_to_python >>> pari = Pari() Converting integers: >>> z = pari('42'); z 42 >>> a = gen_to_python(z); a 42 >>> type(a) <... 'int'> >>> gen_to_python(pari('3^50')) == 3**50 True >>> type(gen_to_python(pari('3^50'))) == type(3**50) True Converting rational numbers: >>> z = pari('2/3'); z 2/3 >>> a = gen_to_python(z); a Fraction(2, 3) >>> type(a) Converting real numbers (and infinities): >>> z = pari('1.2'); z 1.20000000000000 >>> a = gen_to_python(z); a 1.2 >>> type(a) <... 'float'> >>> z = pari('oo'); z +oo >>> a = gen_to_python(z); a inf >>> type(a) <... 'float'> >>> z = pari('-oo'); z -oo >>> a = gen_to_python(z); a -inf >>> type(a) <... 'float'> Converting complex numbers: >>> z = pari('1 + I'); z 1 + I >>> a = gen_to_python(z); a (1+1j) >>> type(a) <... 'complex'> >>> z = pari('2.1 + 3.03*I'); z 2.10000000000000 + 3.03000000000000*I >>> a = gen_to_python(z); a (2.1+3.03j) Converting vectors: >>> z1 = pari('Vecsmall([1,2,3])'); z1 Vecsmall([1, 2, 3]) >>> z2 = pari('[1, 3.4, [-5, 2], oo]'); z2 [1, 3.40000000000000, [-5, 2], +oo] >>> z3 = pari('[1, 5.2]~'); z3 [1, 5.20000000000000]~ >>> z1.type(), z2.type(), z3.type() ('t_VECSMALL', 't_VEC', 't_COL') >>> a1 = gen_to_python(z1); a1 [1, 2, 3] >>> type(a1) <... 'list'> >>> [type(x) for x in a1] [<... 'int'>, <... 'int'>, <... 'int'>] >>> a2 = gen_to_python(z2); a2 [1, 3.4, [-5, 2], inf] >>> type(a2) <... 'list'> >>> [type(x) for x in a2] [<... 'int'>, <... 'float'>, <... 'list'>, <... 'float'>] >>> a3 = gen_to_python(z3); a3 [1, 5.2] >>> type(a3) <... 'list'> >>> [type(x) for x in a3] [<... 'int'>, <... 'float'>] Converting matrices: >>> z = pari('[1,2;3,4]') >>> gen_to_python(z) [[1, 2], [3, 4]] >>> z = pari('[[1, 3], [[2]]; 3, [4, [5, 6]]]') >>> gen_to_python(z) [[[1, 3], [[2]]], [3, [4, [5, 6]]]] Converting strings: >>> z = pari('"Hello"') >>> a = gen_to_python(z); a 'Hello' >>> type(a) <... 'str'> Some currently unsupported types: >>> z = pari('x') >>> z.type() 't_POL' >>> gen_to_python(z) Traceback (most recent call last): ... NotImplementedError: conversion not implemented for t_POL >>> z = pari('12 + O(2^13)') >>> z.type() 't_PADIC' >>> gen_to_python(z) Traceback (most recent call last): ... NotImplementedError: conversion not implemented for t_PADIC """ return PyObject_FromGEN(z.g) cpdef gen_to_integer(Gen x): """ Convert a PARI ``gen`` to a Python ``int`` or ``long``. INPUT: - ``x`` -- a PARI ``t_INT``, ``t_FRAC``, ``t_REAL``, a purely real ``t_COMPLEX``, a ``t_INTMOD`` or ``t_PADIC`` (which are lifted). Examples: >>> from cypari2.convert import gen_to_integer >>> from cypari2 import Pari >>> pari = Pari() >>> a = gen_to_integer(pari("12345")); a; type(a) 12345 <... 
'int'> >>> gen_to_integer(pari("10^30")) == 10**30 True >>> gen_to_integer(pari("19/5")) 3 >>> gen_to_integer(pari("1 + 0.0*I")) 1 >>> gen_to_integer(pari("3/2 + 0.0*I")) 1 >>> gen_to_integer(pari("Mod(-1, 11)")) 10 >>> gen_to_integer(pari("5 + O(5^10)")) 5 >>> gen_to_integer(pari("Pol(42)")) 42 >>> gen_to_integer(pari("u")) Traceback (most recent call last): ... TypeError: unable to convert PARI object u of type t_POL to an integer >>> s = pari("x + O(x^2)") >>> s x + O(x^2) >>> gen_to_integer(s) Traceback (most recent call last): ... TypeError: unable to convert PARI object x + O(x^2) of type t_SER to an integer >>> gen_to_integer(pari("1 + I")) Traceback (most recent call last): ... TypeError: unable to convert PARI object 1 + I of type t_COMPLEX to an integer Tests: >>> gen_to_integer(pari("1.0 - 2^64")) == -18446744073709551615 True >>> gen_to_integer(pari("1 - 2^64")) == -18446744073709551615 True >>> import sys >>> if sys.version_info.major == 3: ... long = int >>> for i in range(10000): ... x = 3**i ... if long(pari(x)) != long(x) or int(pari(x)) != x: ... print(x) Check some corner cases: >>> for s in [1, -1]: ... for a in [1, 2**31, 2**32, 2**63, 2**64]: ... for b in [-1, 0, 1]: ... Nstr = str(s * (a + b)) ... N1 = gen_to_integer(pari(Nstr)) # Convert via PARI ... N2 = int(Nstr) # Convert via Python ... if N1 != N2: ... print(Nstr, N1, N2) ... if type(N1) is not type(N2): ... print(N1, type(N1), N2, type(N2)) """ return PyInt_FromGEN(x.g) cdef PyObject_FromGEN(GEN g): cdef long t = typ(g) cdef Py_ssize_t i, j cdef Py_ssize_t lr, lc if t == t_INT: return PyInt_FromGEN(g) elif t == t_FRAC: from fractions import Fraction num = PyInt_FromGEN(gel(g, 1)) den = PyInt_FromGEN(gel(g, 2)) return Fraction(num, den) elif t == t_REAL: return rtodbl(g) elif t == t_COMPLEX: re = PyObject_FromGEN(gel(g, 1)) im = PyObject_FromGEN(gel(g, 2)) return complex(re, im) elif t == t_VEC or t == t_COL: return [PyObject_FromGEN(gel(g, i)) for i in range(1, lg(g))] elif t == t_VECSMALL: return [g[i] for i in range(1, lg(g))] elif t == t_MAT: lc = lg(g) if lc <= 1: return [[]] lr = lg(gel(g,1)) return [[PyObject_FromGEN(gcoeff(g, i, j)) for j in range(1, lc)] for i in range(1, lr)] elif t == t_INFINITY: if inf_get_sign(g) >= 0: return INFINITY else: return -INFINITY elif t == t_STR: return to_string(GSTR(g)) else: tname = to_string(type_name(t)) raise NotImplementedError(f"conversion not implemented for {tname}") cdef PyInt_FromGEN(GEN g): # First convert the input to a t_INT try: g = gtoi(g) finally: # Reset avma now. This is OK as long as we are not calling further # PARI functions before this function returns. reset_avma() if not signe(g): return PyInt_FromLong(0) cdef ulong u if PY_MAJOR_VERSION == 2: # Try converting to a Python 2 "int" first. Note that we cannot # use itos() from PARI since that does not deal with LONG_MIN # correctly. if lgefint(g) == 3: # abs(x) fits in a ulong u = g[2] # u = abs(x) # Check that (u) or (-u) does not overflow if signe(g) >= 0: if u <= LONG_MAX: return PyInt_FromLong(u) else: if u <= -LONG_MIN: return PyInt_FromLong(-u) # Result does not fit in a C long res = PyLong_FromINT(g) return res cdef GEN gtoi(GEN g0) except NULL: """ Convert a PARI object to a PARI integer. This function is shallow and not stack-clean. 
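Here "shallow" means that the result may share components with the input, and "not stack-clean" means that garbage may be left on the PARI stack, so the caller must reset avma afterwards (as ``PyInt_FromGEN`` above does in its ``finally`` block).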
""" if typ(g0) == t_INT: return g0 cdef GEN g try: sig_on() g = simplify_shallow(g0) if typ(g) == t_COMPLEX: if gequal0(gel(g,2)): g = gel(g,1) if typ(g) == t_INTMOD: g = gel(g,2) g = trunc_safe(g) if typ(g) != t_INT: sig_error() sig_off() except RuntimeError: s = to_string(stack_sprintf( "unable to convert PARI object %Ps of type %s to an integer", g0, type_name(typ(g0)))) raise TypeError(s) return g cdef PyLong_FromINT(GEN g): # Size of input in words, bits and Python digits. The size in # digits might be a small over-estimation, but that is not a # problem. cdef size_t sizewords = (lgefint(g) - 2) cdef size_t sizebits = sizewords * BITS_IN_LONG cdef size_t sizedigits = (sizebits + PyLong_SHIFT - 1) // PyLong_SHIFT # Actual correct computed size cdef Py_ssize_t sizedigits_final = 0 x = _PyLong_New(sizedigits) cdef digit* D = (x).ob_digit cdef digit d cdef ulong w cdef size_t i, j, bit for i in range(sizedigits): # The least significant bit of digit number i of the output # integer is bit number "bit" of word "j". bit = i * PyLong_SHIFT j = bit // BITS_IN_LONG bit = bit % BITS_IN_LONG w = int_W(g, j)[0] d = w >> bit # Do we need bits from the next word too? if BITS_IN_LONG - bit < PyLong_SHIFT and j+1 < sizewords: w = int_W(g, j+1)[0] d += w << (BITS_IN_LONG - bit) d = d & PyLong_MASK D[i] = d # Keep track of last non-zero digit if d: sizedigits_final = i+1 # Set correct size (use a pointer to hack around Cython's # non-support for lvalues). cdef Py_ssize_t* sizeptr = Py_SIZE_PTR(x) if signe(g) > 0: sizeptr[0] = sizedigits_final else: sizeptr[0] = -sizedigits_final return x ######################################################################## # Conversion Python -> PARI ######################################################################## cdef GEN PyLong_AS_GEN(x): cdef const digit* D = (x).ob_digit # Size of the input cdef size_t sizedigits cdef long sgn if Py_SIZE(x) == 0: return gen_0 elif Py_SIZE(x) > 0: sizedigits = Py_SIZE(x) sgn = evalsigne(1) else: sizedigits = -Py_SIZE(x) sgn = evalsigne(-1) # Size of the output, in bits and in words cdef size_t sizebits = sizedigits * PyLong_SHIFT cdef size_t sizewords = (sizebits + BITS_IN_LONG - 1) // BITS_IN_LONG # Compute the most significant word of the output. # This is a special case because we need to be careful not to # overflow the ob_digit array. We also need to check for zero, # in which case we need to decrease sizewords. # See the loop below for an explanation of this code. cdef size_t bit = (sizewords - 1) * BITS_IN_LONG cdef size_t dgt = bit // PyLong_SHIFT bit = bit % PyLong_SHIFT cdef ulong w = (D[dgt]) >> bit if 1*PyLong_SHIFT - bit < BITS_IN_LONG and dgt+1 < sizedigits: w += (D[dgt+1]) << (1*PyLong_SHIFT - bit) if 2*PyLong_SHIFT - bit < BITS_IN_LONG and dgt+2 < sizedigits: w += (D[dgt+2]) << (2*PyLong_SHIFT - bit) if 3*PyLong_SHIFT - bit < BITS_IN_LONG and dgt+3 < sizedigits: w += (D[dgt+3]) << (3*PyLong_SHIFT - bit) if 4*PyLong_SHIFT - bit < BITS_IN_LONG and dgt+4 < sizedigits: w += (D[dgt+4]) << (4*PyLong_SHIFT - bit) if 5*PyLong_SHIFT - bit < BITS_IN_LONG and dgt+5 < sizedigits: w += (D[dgt+5]) << (5*PyLong_SHIFT - bit) # Effective size in words plus 2 special codewords cdef pariwords = sizewords+2 if w else sizewords+1 cdef GEN g = cgeti(pariwords) g[1] = sgn + evallgefint(pariwords) if w: int_MSW(g)[0] = w # Fill all words cdef GEN ptr = int_LSW(g) cdef size_t i for i in range(sizewords - 1): # The least significant bit of word number i of the output # integer is bit number "bit" of Python digit "dgt". 
bit = i * BITS_IN_LONG dgt = bit // PyLong_SHIFT bit = bit % PyLong_SHIFT # Now construct the output word from the Python digits: we need # to check that we shift less than the number of bits in the # type. 6 digits are enough assuming that PyLong_SHIFT >= 15 and # BITS_IN_LONG <= 76. A clever compiler should optimize away all # but one of the "if" statements below. w = (D[dgt]) >> bit if 1*PyLong_SHIFT - bit < BITS_IN_LONG: w += (D[dgt+1]) << (1*PyLong_SHIFT - bit) if 2*PyLong_SHIFT - bit < BITS_IN_LONG: w += (D[dgt+2]) << (2*PyLong_SHIFT - bit) if 3*PyLong_SHIFT - bit < BITS_IN_LONG: w += (D[dgt+3]) << (3*PyLong_SHIFT - bit) if 4*PyLong_SHIFT - bit < BITS_IN_LONG: w += (D[dgt+4]) << (4*PyLong_SHIFT - bit) if 5*PyLong_SHIFT - bit < BITS_IN_LONG: w += (D[dgt+5]) << (5*PyLong_SHIFT - bit) ptr[0] = w ptr = int_nextW(ptr) return g cdef GEN PyObject_AsGEN(x) except? NULL: """ Convert basic Python types to a PARI GEN. """ cdef GEN g = NULL if isinstance(x, unicode): x = to_bytes(x) if isinstance(x, bytes): sig_on() g = gp_read_str(x) sig_off() elif isinstance(x, long): sig_on() g = PyLong_AS_GEN(x) sig_off() elif isinstance(x, int): sig_on() g = PyInt_AS_GEN(x) sig_off() elif isinstance(x, float): sig_on() g = PyFloat_AS_GEN(x) sig_off() elif isinstance(x, complex): sig_on() g = PyComplex_AS_GEN(x) sig_off() return g #################################### # Deprecated functions #################################### cdef Gen new_gen_from_double(double x): sig_on() return new_gen(double_to_REAL(x)) cdef Gen new_t_COMPLEX_from_double(double re, double im): sig_on() return new_gen(doubles_to_COMPLEX(re, im)) def integer_to_gen(x): """ Convert a Python ``int`` or ``long`` to a PARI ``gen`` of type ``t_INT``. Examples: >>> from cypari2.convert import integer_to_gen >>> from cypari2 import Pari >>> pari = Pari() >>> a = integer_to_gen(int(12345)); a; type(a) 12345 <... 'cypari2.gen.Gen'> >>> integer_to_gen(float(12345)) Traceback (most recent call last): ... TypeError: integer_to_gen() needs an int or long argument, not float >>> integer_to_gen(2**100) 1267650600228229401496703205376 Tests: >>> import sys >>> if sys.version_info.major == 3: ... long = int >>> assert integer_to_gen(long(12345)) == 12345 >>> for i in range(10000): ... x = 3**i ... if pari(long(x)) != pari(x) or pari(int(x)) != pari(x): ... print(x) """ if isinstance(x, long): sig_on() return new_gen(PyLong_AS_GEN(x)) elif isinstance(x, int): sig_on() return new_gen(stoi(PyInt_AS_LONG(x))) else: raise TypeError("integer_to_gen() needs an int or long argument, not {}".format(type(x).__name__)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1556307469.0 cypari2-2.1.2/cypari2/cypari.h0000644000175000017500000000071200000000000017235 0ustar00vincentvincent00000000000000/* * Additional macros and fixes for the PARI headers. 
This is meant to * be included after including <pari/pari.h> */ #undef coeff /* Conflicts with NTL (which is used by SageMath) */ /* Array element assignment */ #define set_gel(x, n, z) (gel((x), (n)) = (z)) #define set_gmael(x, i, j, z) (gmael((x), (i), (j)) = (z)) #define set_gcoeff(x, i, j, z) (gcoeff((x), (i), (j)) = (z)) #define set_uel(x, n, z) (uel((x), (n)) = (z)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/gen.pxd0000644000175000017500000000375700000000000017073 0ustar00vincentvincent00000000000000from cpython.object cimport PyObject from .types cimport GEN, pari_sp cdef class Gen_base: # The actual PARI GEN cdef GEN g cdef class Gen(Gen_base): # There are 3 kinds of memory management for a GEN: # * stack: GEN on the PARI stack # * clone: refcounted clone on the PARI heap # * constant: universal constant such as gen_0 # # A priori, it makes sense to have separate classes for these cases. # However, a GEN may be moved from the stack to the heap. This is # easier to support when there is just one class. Second, the # differences between the cases are really implementation details # which should not affect the user. # Base address of the GEN that we wrap. On the stack, this is the # value of avma when new_gen() was called. For clones, this is the # memory allocated by gclone(). For constants, this is NULL. cdef GEN address cdef inline pari_sp sp(self): return <pari_sp>self.address # The Gen objects on the PARI stack form a linked list, from the # bottom to the top of the stack. This makes sense since we can only # deallocate a Gen which is on the bottom of the PARI stack. If this # is the last object on the stack, then next = top_of_stack # (a singleton object). # # In the clone and constant cases, this is None. cdef Gen next # A cache for __getitem__. Initially, this is None but it will be # turned into a dict when needed. cdef dict itemcache cdef inline int cache(self, key, value) except -1: """ Add ``(key, value)`` to ``self.itemcache``. """ if self.itemcache is None: self.itemcache = {} self.itemcache[key] = value cdef Gen new_ref(self, GEN g) cdef GEN fixGEN(self) except NULL cdef GEN ref_target(self) except NULL cdef inline Gen Gen_new(GEN g, GEN addr): z = <Gen>Gen.__new__(Gen) z.g = g z.address = addr return z cdef Gen list_of_Gens_to_Gen(list s) cpdef Gen objtogen(s) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735005.0 cypari2-2.1.2/cypari2/gen.pyx0000644000175000017500000040746400000000000017117 0ustar00vincentvincent00000000000000""" The Gen class wrapping PARI's GEN type ************************************** AUTHORS: - William Stein (2006-03-01): updated to work with PARI 2.2.12-beta - William Stein (2006-03-06): added newtonpoly - Justin Walker: contributed some of the function definitions - Gonzalo Tornaria: improvements to conversions; much better error handling.
- Robert Bradshaw, Jeroen Demeyer, William Stein (2010-08-15): Upgrade to PARI 2.4.3 (:trac:`9343`) - Jeroen Demeyer (2011-11-12): rewrite various conversion routines (:trac:`11611`, :trac:`11854`, :trac:`11952`) - Peter Bruin (2013-11-17): move Pari to a separate file (:trac:`15185`) - Jeroen Demeyer (2014-02-09): upgrade to PARI 2.7 (:trac:`15767`) - Martin von Gagern (2014-12-17): Added some Galois functions (:trac:`17519`) - Jeroen Demeyer (2015-01-12): upgrade to PARI 2.8 (:trac:`16997`) - Jeroen Demeyer (2015-03-17): automatically generate methods from ``pari.desc`` (:trac:`17631` and :trac:`17860`) - Kiran Kedlaya (2016-03-23): implement infinity type - Luca De Feo (2016-09-06): Separate Sage-specific components from generic C-interface in ``Pari`` (:trac:`20241`) - Vincent Delecroix (2017-04-29): Python 3 support and doctest conversion """ #***************************************************************************** # Copyright (C) 2006,2010 William Stein # Copyright (C) ???? Justin Walker # Copyright (C) ???? Gonzalo Tornaria # Copyright (C) 2010 Robert Bradshaw # Copyright (C) 2010-2018 Jeroen Demeyer # Copyright (C) 2016 Luca De Feo # Copyright (C) 2017 Vincent Delecroix # # Distributed under the terms of the GNU General Public License (GPL) # as published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, division, print_function cimport cython from cpython.object cimport (Py_EQ, Py_NE, Py_LE, Py_GE, Py_LT, Py_GT, PyTypeObject) from cysignals.memory cimport sig_free, check_malloc from cysignals.signals cimport sig_check, sig_on, sig_off, sig_block, sig_unblock from .types cimport * from .string_utils cimport to_string, to_bytes from .paripriv cimport * from .convert cimport PyObject_AsGEN, gen_to_integer from .pari_instance cimport (prec_bits_to_words, prec_words_to_bits, default_bitprec, get_var) from .stack cimport (new_gen, new_gens2, new_gen_noclear, clone_gen, clear_stack, reset_avma, remove_from_pari_stack, move_gens_to_heap) from .closure cimport objtoclosure from .paridecl cimport * include 'auto_gen.pxi' cdef bint ellwp_flag1_bug = -1 cdef inline bint have_ellwp_flag1_bug(): """ The PARI function ``ellwp(..., flag=1)`` has a bug in PARI versions 2.9.x where the derivative is a factor 2 too small. This function does a cached check for this bug, returning 1 if the bug is there and 0 if not. """ global ellwp_flag1_bug if ellwp_flag1_bug >= 0: return ellwp_flag1_bug # Check whether our PARI/GP version is buggy or not. This # computation should return 1.0, but in older PARI versions it # returns 0.5. 
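    # (Why 1.0 is the correct value: the point [0, 1/2] lies on E: y^2 = x^3 + 1/4, and at z = ellpointtoz(E, [0,1/2]) the derivative is wp'(z) = 2*y + a1*x + a3 = 1, so a result of 0.5 is exactly the "factor 2 too small" bug.)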
sig_on() cdef GEN res = gp_read_str(b"localbitprec(128); my(E=ellinit([0,1/4])); ellwp(E,ellpointtoz(E,[0,1/2]),1)[2]") cdef double d = gtodouble(res) sig_off() if d == 1.0: ellwp_flag1_bug = 0 elif d == 0.5: ellwp_flag1_bug = 1 else: raise AssertionError(f"unexpected result from ellwp() test: {d}") return ellwp_flag1_bug # Compatibility wrappers cdef extern from *: """ #if PARI_VERSION_CODE >= PARI_VERSION(2, 10, 1) #define new_nf_nfzk nf_nfzk #define new_nfeltup nfeltup #else static GEN new_nf_nfzk(GEN nf, GEN rnfeq) { GEN zknf, czknf; nf_nfzk(nf, rnfeq, &zknf, &czknf); return mkvec2(zknf, czknf); } static GEN new_nfeltup(GEN nf, GEN x, GEN arg) { GEN zknf = gel(arg, 1); GEN czknf = gel(arg, 2); return nfeltup(nf, x, zknf, czknf); } #endif #if PARI_VERSION_CODE >= PARI_VERSION(2, 12, 0) #define old_nfbasis(x, yptr, p) nfbasis(mkvec2(x, p), yptr) #else #define old_nfbasis nfbasis #endif """ GEN new_nf_nfzk(GEN nf, GEN rnfeq) GEN new_nfeltup(GEN nf, GEN x, GEN zknf) GEN old_nfbasis(GEN x, GEN * y, GEN p) cdef class Gen(Gen_base): """ Wrapper for a PARI ``GEN`` with memory management. This wraps PARI objects which live either on the PARI stack or on the PARI heap. Results from PARI computations appear on the PARI stack and we try to keep them there. However, when the stack fills up, we copy ("clone" in PARI speak) all live objects from the stack to the heap. This happens transparently for the user. """ def __init__(self): raise RuntimeError("PARI objects cannot be instantiated directly; use pari(x) to convert x to PARI") def __dealloc__(self): if self.next is not None: # stack remove_from_pari_stack(self) elif self.address is not NULL: # clone gunclone_deep(self.address) cdef Gen new_ref(self, GEN g): """ Create a new ``Gen`` pointing to ``g``, which is a component of ``self.g``. In this case, ``g`` should point to some location in the memory allocated by ``self``. This will not allocate any new memory: the newly returned ``Gen`` will point to the memory allocated for ``self``. .. NOTE:: Usually, there is only one ``Gen`` pointing to a given PARI ``GEN``. This function can be used when a complicated ``GEN`` is allocated with a single ``Gen`` pointing to it, and one needs a ``Gen`` pointing to one of its components. For example, doing ``x = pari("[1, 2]")`` allocates a ``Gen`` pointing to the list ``[1, 2]``. To create a ``Gen`` pointing to the first element, one can do ``x.new_ref(gel(x.fixGEN(), 1))``. See :meth:`Gen.__getitem__` for an example of usage. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari("[[1, 2], 3]")[0][1] # indirect doctest 2 """ if self.next is not None: raise TypeError("cannot create reference to PARI stack (call fixGEN() first)") if is_on_stack(g): raise ValueError("new_ref() called with GEN which does not belong to parent") if self.address is not NULL: gclone_refc(self.address) return Gen_new(g, self.address) cdef GEN fixGEN(self) except NULL: """ Return the PARI ``GEN`` corresponding to ``self`` which is guaranteed not to change. """ if self.next is not None: move_gens_to_heap(self.sp()) return self.g cdef GEN ref_target(self) except NULL: """ Return a PARI ``GEN`` corresponding to ``self`` which is usable as target for a reference in another ``GEN``. This increases the PARI refcount of ``self``. """ if is_universal_constant(self.g): return self.g return gcloneref(self.fixGEN()) def __repr__(self): """ Display representation of a gen. 
OUTPUT: a Python string Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('vector(5,i,i)') [1, 2, 3, 4, 5] >>> pari('[1,2;3,4]') [1, 2; 3, 4] >>> pari('Str(hello)') "hello" """ cdef char *c sig_on() # Use sig_block(), which is needed because GENtostr() uses # malloc(), which is dangerous inside sig_on() sig_block() c = GENtostr(self.g) sig_unblock() sig_off() s = bytes(c) pari_free(c) return to_string(s) def __str__(self): """ Convert this Gen to a string. Except for PARI strings, we have ``str(x) == repr(x)``. For strings (type ``t_STR``), the returned string is not quoted. OUTPUT: a Python string Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> str(pari('vector(5,i,i)')) '[1, 2, 3, 4, 5]' >>> str(pari('[1,2;3,4]')) '[1, 2; 3, 4]' >>> str(pari('Str(hello)')) 'hello' """ # Use __repr__ except for strings if typ(self.g) == t_STR: return to_string(GSTR(self.g)) return repr(self) def __hash__(self): """ Return the hash of self, computed using PARI's hash_GEN(). Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> type(pari('1 + 2.0*I').__hash__()) <... 'int'> >>> L = pari("[42, 2/3, 3.14]") >>> hash(L) == hash(L.__copy__()) True >>> hash(pari.isprime(4)) == hash(pari(0)) True """ # There is a bug in PARI/GP where the hash value depends on the # CLONE bit. So we remove that bit before hashing. See # https://pari.math.u-bordeaux.fr/cgi-bin/bugreport.cgi?bug=2091 cdef ulong* G = self.g cdef ulong G0 = G[0] cdef ulong G0clean = G0 & ~CLONEBIT if G0 != G0clean: # Only write if we actually need to change something, as # G may point to read-only memory G[0] = G0clean h = hash_GEN(self.g) if G0 != G0clean: # Restore CLONE bit G[0] = G0 return h def __iter__(self): """ Iterate over the components of ``self``. The items in the iteration are of type :class:`Gen` with the following exceptions: - items of a ``t_VECSMALL`` are of type ``int`` - items of a ``t_STR`` are of type ``str`` Examples: We can iterate over PARI vectors or columns: >>> from cypari2 import Pari >>> pari = Pari() >>> L = pari("vector(10,i,i^2)") >>> L.__iter__() >>> [x for x in L] [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] >>> list(L) [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] >>> list(pari("vector(10,i,i^2)~")) [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] For polynomials, we iterate over the list of coefficients: >>> pol = pari("x^3 + 5/3*x"); list(pol) [0, 5/3, 0, 1] For power series or Laurent series, we get all coefficients starting from the lowest degree term. This includes trailing zeros: >>> list(pari('x^2 + O(x^8)')) [1, 0, 0, 0, 0, 0] >>> list(pari('x^-2 + O(x^0)')) [1, 0] For matrices, we iterate over the columns: >>> M = pari.matrix(3,2,[1,4,2,5,3,6]); M [1, 4; 2, 5; 3, 6] >>> list(M) [[1, 2, 3]~, [4, 5, 6]~] Other types are first converted to a vector using :meth:`Vec`: >>> Q = pari('Qfb(1, 2, 3)') >>> tuple(Q) (1, 2, 3) >>> Q.Vec() [1, 2, 3] We get an error for "scalar" types or for types which cannot be converted to a PARI vector: >>> iter(pari(42)) Traceback (most recent call last): ... TypeError: PARI object of type t_INT is not iterable >>> iter(pari("x->x")) Traceback (most recent call last): ... 
PariError: incorrect type in gtovec (t_CLOSURE) For ``t_VECSMALL``, the items are Python integers: >>> v = pari("Vecsmall([1,2,3,4,5,6])") >>> list(v) [1, 2, 3, 4, 5, 6] >>> type(list(v)[0]).__name__ 'int' For ``t_STR``, the items are Python strings: >>> v = pari('"hello"') >>> list(v) ['h', 'e', 'l', 'l', 'o'] """ # We return a generator expression instead of using "yield" # because we want to raise an exception for non-iterable # objects immediately when calling __iter__() and not while # iterating. cdef long i cdef long t = typ(self.g) # First convert self to a vector type cdef Gen v if t == t_VEC or t == t_COL or t == t_MAT: # These are vector-like and can be iterated over directly v = self elif t == t_POL: v = self.Vecrev() elif is_scalar_t(t): raise TypeError(f"PARI object of type {self.type()} is not iterable") elif t == t_VECSMALL: # Special case: items of type int return (self.g[i] for i in range(1, lg(self.g))) elif t == t_STR: # Special case: convert to str return iter(to_string(GSTR(self.g))) else: v = self.Vec() # Now iterate over the vector v x = v.fixGEN() return (v.new_ref(gel(x, i)) for i in range(1, lg(x))) def list(self): """ Convert ``self`` to a Python list with :class:`Gen` components. Examples: A PARI vector becomes a Python list: >>> from cypari2 import Pari >>> pari = Pari() >>> L = pari("vector(10,i,i^2)").list() >>> L [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] >>> type(L) <... 'list'> >>> type(L[0]) <... 'cypari2.gen.Gen'> For polynomials, list() returns the list of coefficients: >>> pol = pari("x^3 + 5/3*x"); pol.list() [0, 5/3, 0, 1] For power series or Laurent series, we get all coefficients starting from the lowest degree term. This includes trailing zeros: >>> pari('x^2 + O(x^8)').list() [1, 0, 0, 0, 0, 0] >>> pari('x^-2 + O(x^0)').list() [1, 0] For matrices, we get a list of columns: >>> M = pari.matrix(3,2,[1,4,2,5,3,6]); M [1, 4; 2, 5; 3, 6] >>> M.list() [[1, 2, 3]~, [4, 5, 6]~] """ return [x for x in self] def __reduce__(self): """ Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> from pickle import loads, dumps >>> f = pari('x^3 - 3') >>> loads(dumps(f)) == f True >>> f = pari('"hello world"') >>> loads(dumps(f)) == f True """ s = repr(self) return (objtogen, (s,)) def __add__(left, right): """ Return ``left`` plus ``right``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(15) + pari(6) 21 >>> pari("x^3+x^2+x+1") + pari("x^2") x^3 + 2*x^2 + x + 1 >>> 2e20 + pari("1e20") 3.00000000000000 E20 >>> -2 + pari(3) 1 """ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gadd(t0.g, t1.g)) def __sub__(left, right): """ Return ``left`` minus ``right``. 
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(15) - pari(6) 9 >>> pari("x^3+x^2+x+1") - pari("x^2") x^3 + x + 1 >>> 2e20 - pari("1e20") 1.00000000000000 E20 >>> -2 - pari(3) -5 """ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gsub(t0.g, t1.g)) def __mul__(left, right): cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gmul(t0.g, t1.g)) def __div__(left, right): # Python 2 old-style division: same implementation as __truediv__ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gdiv(t0.g, t1.g)) def __truediv__(left, right): """ Examples: >>> from cypari2 import Pari; pari = Pari() >>> pari(11) / pari(4) 11/4 >>> pari("x^2 + 2*x + 3") / pari("x") (x^2 + 2*x + 3)/x """ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gdiv(t0.g, t1.g)) def __floordiv__(left, right): """ Examples: >>> from cypari2 import Pari; pari = Pari() >>> pari(11) // pari(4) 2 >>> pari("x^2 + 2*x + 3") // pari("x") x + 2 """ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gdivent(t0.g, t1.g)) def __mod__(left, right): """ Return ``left`` modulo ``right``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(15) % pari(6) 3 >>> pari("x^3+x^2+x+1") % pari("x^2") x + 1 >>> pari(-2) % 3 1 >>> -2 % pari(3) 1 """ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented sig_on() return new_gen(gmod(t0.g, t1.g)) def __pow__(left, right, m): """ Return ``left`` to the power ``right`` (if ``m`` is ``None``) or ``Mod(left, m)^right`` if ``m`` is not ``None``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(5) ** pari(3) 125 >>> pari("x-1") ** 3 x^3 - 3*x^2 + 3*x - 1 >>> pow(pari(5), pari(28), int(29)) Mod(1, 29) >>> 2 ** pari(-5) 1/32 >>> pari(2) ** -5 1/32 """ cdef Gen t0, t1 try: t0 = objtogen(left) t1 = objtogen(right) except Exception: return NotImplemented if m is not None: t0 = t0.Mod(m) sig_on() return new_gen(gpow(t0.g, t1.g, prec_bits_to_words(0))) def __neg__(self): sig_on() return new_gen(gneg(self.g)) def __rshift__(self, long n): """ Divide ``self`` by `2^n` (truncating or not, depending on the input type). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(25) >> 3 3 >>> pari('25/2') >> 2 25/8 >>> pari("x") >> 3 1/8*x >>> pari(1.0) >> 100 7.88860905221012 E-31 >>> 33 >> pari(2) 8 """ cdef Gen t0 = objtogen(self) sig_on() return new_gen(gshift(t0.g, -n)) def __lshift__(self, long n): """ Multiply ``self`` by `2^n`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(25) << 3 200 >>> pari("25/32") << 2 25/8 >>> pari("x") << 3 8*x >>> pari(1.0) << 100 1.26765060022823 E30 >>> 33 << pari(2) 132 """ cdef Gen t0 = objtogen(self) sig_on() return new_gen(gshift(t0.g, n)) def __invert__(self): sig_on() return new_gen(ginv(self.g)) def getattr(self, attr): """ Return the PARI attribute with the given name. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari("nfinit(x^2 - x - 1)") >>> K.getattr("pol") x^2 - x - 1 >>> K.getattr("disc") 5 >>> K.getattr("reg") Traceback (most recent call last): ... PariError: _.reg: incorrect type in reg (t_VEC) >>> K.getattr("zzz") Traceback (most recent call last): ... 
PariError: not a function in function call """ attr = to_bytes(attr) t = b"_." + attr sig_on() return new_gen(closure_callgen1(strtofunction(t), self.g)) def mod(self): """ Given an INTMOD or POLMOD ``Mod(a,m)``, return the modulus `m`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(4).Mod(5).mod() 5 >>> pari("Mod(x, x*y)").mod() y*x >>> pari("[Mod(4,5)]").mod() Traceback (most recent call last): ... TypeError: Not an INTMOD or POLMOD in mod() """ if typ(self.g) != t_INTMOD and typ(self.g) != t_POLMOD: raise TypeError("Not an INTMOD or POLMOD in mod()") # The hardcoded 1 below refers to the position in the internal # representation of a INTMOD or POLDMOD where the modulus is # stored. return self.new_ref(gel(self.fixGEN(), 1)) # Special case: SageMath uses polred(), so mark it as not # obsolete: https://trac.sagemath.org/ticket/22165 def polred(self, *args, **kwds): r''' This function is :emphasis:`deprecated`, use :meth:`~cypari2.gen.Gen_base.polredbest` instead. ''' import warnings with warnings.catch_warnings(): warnings.simplefilter("ignore") return super(Gen, self).polred(*args, **kwds) def nf_get_pol(self): """ Returns the defining polynomial of this number field. INPUT: - ``self`` -- A PARI number field being the output of ``nfinit()``, ``bnfinit()`` or ``bnrinit()``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**4 - 4*x**2 + 1).bnfinit() >>> bnr = K.bnrinit(2*x) >>> bnr.nf_get_pol() x^4 - 4*x^2 + 1 For relative number fields, this returns the relative polynomial: >>> y = pari.varhigher('y') >>> L = K.rnfinit(y**2 - 5) >>> L.nf_get_pol() y^2 - 5 An error is raised for invalid input: >>> pari("[0]").nf_get_pol() Traceback (most recent call last): ... PariError: incorrect type in pol (t_VEC) """ sig_on() return clone_gen(member_pol(self.g)) def nf_get_diff(self): """ Returns the different of this number field as a PARI ideal. INPUT: - ``self`` -- A PARI number field being the output of ``nfinit()``, ``bnfinit()`` or ``bnrinit()``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**4 - 4*x**2 + 1).nfinit() >>> K.nf_get_diff() [12, 0, 0, 0; 0, 12, 8, 0; 0, 0, 4, 0; 0, 0, 0, 4] """ sig_on() return clone_gen(member_diff(self.g)) def nf_get_sign(self): """ Returns a Python list ``[r1, r2]``, where ``r1`` and ``r2`` are Python ints representing the number of real embeddings and pairs of complex embeddings of this number field, respectively. INPUT: - ``self`` -- A PARI number field being the output of ``nfinit()``, ``bnfinit()`` or ``bnrinit()``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**4 - 4*x**2 + 1).nfinit() >>> s = K.nf_get_sign(); s [4, 0] >>> type(s); type(s[0]) <... 'list'> <... 'int'> >>> pari.polcyclo(15).nfinit().nf_get_sign() [0, 4] """ cdef long r1 cdef long r2 cdef GEN sign sig_on() sign = member_sign(self.g) r1 = itos(gel(sign, 1)) r2 = itos(gel(sign, 2)) sig_off() return [r1, r2] def nf_get_zk(self): """ Returns a vector with a `\ZZ`-basis for the ring of integers of this number field. The first element is always `1`. INPUT: - ``self`` -- A PARI number field being the output of ``nfinit()``, ``bnfinit()`` or ``bnrinit()``. 
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**4 - 4*x**2 + 1).nfinit() >>> K.nf_get_zk() [1, x, x^3 - 4*x, x^2 - 2] """ sig_on() return clone_gen(member_zk(self.g)) def bnf_get_fu(self): """ Return the fundamental units Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> (x**2 - 65).bnfinit().bnf_get_fu() [Mod(x - 8, x^2 - 65)] >>> (x**4 - x**2 + 1).bnfinit().bnf_get_fu() [Mod(x - 1, x^4 - x^2 + 1)] >>> p = x**8 - 40*x**6 + 352*x**4 - 960*x**2 + 576 >>> len(p.bnfinit().bnf_get_fu()) 7 """ sig_on() return clone_gen(member_fu(self.g)) def bnf_get_tu(self): r""" Return the torsion unit Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> for p in [x**2 - 65, x**4 - x**2 + 1, x**8 - 40*x**6 + 352*x**4 - 960*x**2 + 576]: ... bnf = p.bnfinit() ... n, z = bnf.bnf_get_tu() ... if pari.version() < (2,11,0) and z.lift().poldegree() == 0: z = z.lift() ... print([p, n, z]) [x^2 - 65, 2, -1] [x^4 - x^2 + 1, 12, Mod(x, x^4 - x^2 + 1)] [x^8 - 40*x^6 + 352*x^4 - 960*x^2 + 576, 2, -1] """ sig_on() return clone_gen(member_tu(self.g)) def bnfunit(self): r""" Deprecated in cypari 2.1.2 Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> import warnings >>> with warnings.catch_warnings(record=True) as w: ... warnings.simplefilter('always') ... funits = (x**2 - 65).bnfinit().bnfunit() ... assert len(w) == 1 ... assert issubclass(w[0].category, DeprecationWarning) >>> funits [Mod(x - 8, x^2 - 65)] """ from warnings import warn warn("'bnfunit' in cypari2 is deprecated, use 'bnf_get_fu'", DeprecationWarning) return self.bnf_get_fu() def bnf_get_no(self): """ Returns the class number of ``self``, a "big number field" (``bnf``). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**2 + 65).bnfinit() >>> K.bnf_get_no() 8 """ sig_on() return clone_gen(bnf_get_no(self.g)) def bnf_get_cyc(self): """ Returns the structure of the class group of this number field as a vector of SNF invariants. NOTE: ``self`` must be a "big number field" (``bnf``). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**2 + 65).bnfinit() >>> K.bnf_get_cyc() [4, 2] """ sig_on() return clone_gen(bnf_get_cyc(self.g)) def bnf_get_gen(self): """ Returns a vector of generators of the class group of this number field. NOTE: ``self`` must be a "big number field" (``bnf``). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**2 + 65).bnfinit() >>> G = K.bnf_get_gen(); G [[3, 2; 0, 1], [2, 1; 0, 1]] """ sig_on() return clone_gen(bnf_get_gen(self.g)) def bnf_get_reg(self): """ Returns the regulator of this number field. NOTE: ``self`` must be a "big number field" (``bnf``). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari('x') >>> K = (x**4 - 4*x**2 + 1).bnfinit() >>> K.bnf_get_reg() 2.66089858019037... """ sig_on() return clone_gen(bnf_get_reg(self.g)) def idealmoddivisor(self, Gen ideal): """ Return a 'small' ideal equivalent to ``ideal`` in the ray class group that the bnr structure ``self`` encodes. INPUT: - ``self`` -- a bnr structure as outputted from bnrinit. - ``ideal`` -- an ideal in the underlying number field of the bnr structure. OUTPUT: An ideal representing the same ray class as ``ideal`` but with 'small' generators. If ``ideal`` is not coprime to the modulus of the bnr, this results in an error. 
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**4 - 2).bnfinit() >>> R = K.bnrinit(5,1) >>> R.idealmoddivisor(K[6][6][1]) [2, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1] >>> R.idealmoddivisor(K.idealhnf(5)) Traceback (most recent call last): ... PariError: elements not coprime in idealaddtoone: [5, 0, 0, 0; 0, 5, 0, 0; 0, 0, 5, 0; 0, 0, 0, 5] [5, 0, 0, 0; 0, 5, 0, 0; 0, 0, 5, 0; 0, 0, 0, 5] """ sig_on() return new_gen(idealmoddivisor(self.g, ideal.g)) def pr_get_p(self): """ Returns the prime of `\ZZ` lying below this prime ideal. NOTE: ``self`` must be a PARI prime ideal (as returned by ``idealprimedec`` for example). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**2 + 1).nfinit() >>> F = K.idealprimedec(5); F [[5, [-2, 1]~, 1, 1, [2, -1; 1, 2]], [5, [2, 1]~, 1, 1, [-2, -1; 1, -2]]] >>> F[0].pr_get_p() 5 """ sig_on() return clone_gen(pr_get_p(self.g)) def pr_get_e(self): """ Returns the ramification index (over `\QQ`) of this prime ideal. NOTE: ``self`` must be a PARI prime ideal (as returned by ``idealprimedec`` for example). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**2 + 1).nfinit() >>> K.idealprimedec(2)[0].pr_get_e() 2 >>> K.idealprimedec(3)[0].pr_get_e() 1 >>> K.idealprimedec(5)[0].pr_get_e() 1 """ cdef long e sig_on() e = pr_get_e(self.g) sig_off() return e def pr_get_f(self): """ Returns the residue class degree (over `\QQ`) of this prime ideal. NOTE: ``self`` must be a PARI prime ideal (as returned by ``idealprimedec`` for example). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**2 + 1).nfinit() >>> K.idealprimedec(2)[0].pr_get_f() 1 >>> K.idealprimedec(3)[0].pr_get_f() 2 >>> K.idealprimedec(5)[0].pr_get_f() 1 """ cdef long f sig_on() f = pr_get_f(self.g) sig_off() return f def pr_get_gen(self): """ Returns the second generator of this PARI prime ideal, where the first generator is ``self.pr_get_p()``. NOTE: ``self`` must be a PARI prime ideal (as returned by ``idealprimedec`` for example). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**2 + 1).nfinit() >>> g = K.idealprimedec(2)[0].pr_get_gen(); g [1, 1]~ >>> g = K.idealprimedec(3)[0].pr_get_gen(); g [3, 0]~ >>> g = K.idealprimedec(5)[0].pr_get_gen(); g [-2, 1]~ """ sig_on() return clone_gen(pr_get_gen(self.g)) def bid_get_cyc(self): """ Returns the structure of the group `(O_K/I)^*`, where `I` is the ideal represented by ``self``. NOTE: ``self`` must be a "big ideal" (``bid``) as returned by ``idealstar`` for example. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**2 + 1).bnfinit() >>> J = K.idealstar(4*i + 2) >>> J.bid_get_cyc() [4, 2] """ sig_on() return clone_gen(bid_get_cyc(self.g)) def bid_get_gen(self): """ Returns a vector of generators of the group `(O_K/I)^*`, where `I` is the ideal represented by ``self``. NOTE: ``self`` must be a "big ideal" (``bid``) with generators, as returned by ``idealstar`` with ``flag`` = 2. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> i = pari('i') >>> K = (i**2 + 1).bnfinit() >>> J = K.idealstar(4*i + 2, 2) >>> J.bid_get_gen() [7, [-2, -1]~] We get an exception if we do not supply ``flag = 2`` to ``idealstar``: >>> J = K.idealstar(3) >>> J.bid_get_gen() Traceback (most recent call last): ... PariError: missing bid generators. 
Use idealstar(,,2) """ sig_on() return clone_gen(bid_get_gen(self.g)) def __getitem__(self, n): """ Return the n-th entry of self. .. NOTE:: The indexing is 0-based, like everywhere else in Python, but *unlike* in PARI/GP. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> p = pari('1 + 2*x + 3*x^2') >>> p[0] 1 >>> p[2] 3 >>> p[100] 0 >>> p[-1] 0 >>> q = pari('x^2 + 3*x^3 + O(x^6)') >>> q[3] 3 >>> q[5] 0 >>> q[6] Traceback (most recent call last): ... IndexError: index out of range >>> m = pari('[1,2;3,4]') >>> m[0] [1, 3]~ >>> m[1,0] 3 >>> l = pari('List([1,2,3])') >>> l[1] 2 >>> s = pari('"hello, world!"') >>> s[0] 'h' >>> s[4] 'o' >>> s[12] '!' >>> s[13] Traceback (most recent call last): ... IndexError: index out of range >>> v = pari('[1,2,3]') >>> v[0] 1 >>> c = pari('Col([1,2,3])') >>> c[1] 2 >>> sv = pari('Vecsmall([1,2,3])') >>> sv[2] 3 >>> type(sv[2]) <... 'int'> >>> [pari('1 + 5*I')[i] for i in range(2)] [1, 5] >>> [pari('Qfb(1, 2, 3)')[i] for i in range(3)] [1, 2, 3] >>> pari(57)[0] Traceback (most recent call last): ... TypeError: PARI object of type t_INT cannot be indexed >>> m = pari("[[1,2;3,4],5]") ; m[0][1,0] 3 >>> v = pari(range(20)) >>> v[2:5] [2, 3, 4] >>> v[:] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] >>> v[10:] [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] >>> v[:5] [0, 1, 2, 3, 4] >>> v[5:5] [] >>> v [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] >>> v[-1] Traceback (most recent call last): ... IndexError: index out of range >>> v[:-3] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] >>> v[5:] [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] >>> pari([])[::] [] """ cdef Py_ssize_t i, j, k cdef object ind cdef int pari_type pari_type = typ(self.g) if isinstance(n, tuple): if pari_type != t_MAT: raise TypeError("tuple indices are only defined for matrices") i, j = n if i < 0 or i >= glength(gel(self.g, 1)): raise IndexError("row index out of range") if j < 0 or j >= glength(self.g): raise IndexError("column index out of range") ind = (i, j) if self.itemcache is not None and ind in self.itemcache: return self.itemcache[ind] else: # Create a new Gen as child of self # and store it in itemcache val = self.new_ref(gmael(self.fixGEN(), j+1, i+1)) self.cache(ind, val) return val elif isinstance(n, slice): l = glength(self.g) start, stop, step = n.indices(l) inds = xrange(start, stop, step) k = len(inds) # fast exit for empty vector if k == 0: sig_on() return new_gen(zerovec(0)) # fast call, beware pari is one based if pari_type == t_VEC: if step==1: return self.vecextract('"'+str(start+1)+".."+str(stop)+'"') if step==-1: return self.vecextract('"'+str(start+1)+".."+str(stop+2)+'"') # slow call return objtogen(self[i] for i in inds) # Index is not a tuple or slice, convert to integer i = n ## there are no "out of bounds" problems ## for a polynomial or power series, so these go before ## bounds testing if pari_type == t_POL: sig_on() return new_gen(polcoeff0(self.g, i, -1)) elif pari_type == t_SER: bound = valp(self.g) + lg(self.g) - 2 if i >= bound: raise IndexError("index out of range") sig_on() return new_gen(polcoeff0(self.g, i, -1)) elif pari_type in (t_INT, t_REAL, t_FRAC, t_RFRAC, t_PADIC, t_QUAD, t_FFELT, t_INTMOD, t_POLMOD): # these are definitely scalar! raise TypeError(f"PARI object of type {self.type()} cannot be indexed") elif i < 0 or i >= glength(self.g): raise IndexError("index out of range") elif pari_type == t_VEC or pari_type == t_MAT: #t_VEC : row vector [ code ] [ x_1 ] ... 
[ x_k ] #t_MAT : matrix [ code ] [ col_1 ] ... [ col_k ] ind = i if self.itemcache is not None and ind in self.itemcache: return self.itemcache[ind] else: # Create a new Gen as child of self # and store it in itemcache val = self.new_ref(gel(self.fixGEN(), i+1)) self.cache(ind, val) return val elif pari_type == t_VECSMALL: #t_VECSMALL: vec. small ints [ code ] [ x_1 ] ... [ x_k ] return self.g[i+1] elif pari_type == t_STR: #t_STR : string [ code ] [ man_1 ] ... [ man_k ] return chr(GSTR(self.g)[i]) elif pari_type == t_LIST: return self.component(i+1) else: # generic code for other types return self.new_ref(gel(self.fixGEN(), i+1)) def __setitem__(self, n, y): r""" Set the n-th entry to a reference to y. .. NOTE:: - The indexing is 0-based, like everywhere else in Python, but *unlike* in PARI/GP. - Assignment sets the nth entry to a reference to y. This is the same as in Python, but *different* than what happens in the GP interpreter, where assignment makes a copy of y. - Because setting creates references it is *possible* to make circular references, unlike in GP. Do *not* do this (see the example below). If you need circular references, work at the Python level (where they work well), not the PARI object level. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> l = pari.List([1,2,3]) >>> l[0] = 3 >>> l List([3, 2, 3]) >>> v = pari(range(10)) >>> v [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> v[0] = 10 >>> w = pari([5,8,-20]) >>> v [10, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> v[1] = w >>> v [10, [5, 8, -20], 2, 3, 4, 5, 6, 7, 8, 9] >>> w[0] = -30 >>> v [10, [-30, 8, -20], 2, 3, 4, 5, 6, 7, 8, 9] >>> t = v[1]; t[1] = 10 ; v [10, [-30, 10, -20], 2, 3, 4, 5, 6, 7, 8, 9] >>> v[1][0] = 54321 ; v [10, [54321, 10, -20], 2, 3, 4, 5, 6, 7, 8, 9] >>> w [54321, 10, -20] >>> v = pari([[[[0,1],2],3],4]) >>> v[0][0][0][1] = 12 >>> v [[[[0, 12], 2], 3], 4] >>> m = pari.matrix(2,2,range(4)) ; l = pari([5,6]) ; n = pari.matrix(2,2,[7,8,9,0]) ; m[1,0] = l ; l[1] = n ; m[1,0][1][1,1] = 1111 ; m [0, 1; [5, [7, 8; 9, 1111]], 3] >>> m = pari("[[1,2;3,4],5,6]") ; m[0][1,1] = 11 ; m [[1, 2; 3, 11], 5, 6] Finally, we create a circular reference: >>> v = pari([0]) >>> w = pari([v]) >>> v [0] >>> w [[0]] >>> v[0] = w Now there is a circular reference. Accessing v[0] will crash Python. >>> s = pari.vector(2,[0,0]) >>> s[:1] [0] >>> s[:1]=[1] >>> s [1, 0] >>> type(s[0]) <... 'cypari2.gen.Gen'> >>> s = pari(range(20)) ; s [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] >>> s[0:10:2] = range(50,55) ; s [50, 1, 51, 3, 52, 5, 53, 7, 54, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] >>> s[10:20:3] = range(100,150) ; s [50, 1, 51, 3, 52, 5, 53, 7, 54, 9, 100, 11, 12, 101, 14, 15, 102, 17, 18, 103] Tests: >>> v = pari(range(10)) ; v [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> v[:] = range(20, 30) >>> v [20, 21, 22, 23, 24, 25, 26, 27, 28, 29] >>> type(v[0]) <... 
'cypari2.gen.Gen'> """ cdef Py_ssize_t i, j, step cdef Gen x = objtogen(y) if isinstance(n, tuple): if typ(self.g) != t_MAT: raise TypeError("cannot index PARI type %s by tuple" % typ(self.g)) i, j = n if i < 0 or i >= glength(gel(self.g, 1)): raise IndexError("row i(=%s) must be between 0 and %s" % (i, self.nrows()-1)) if j < 0 or j >= glength(self.g): raise IndexError("column j(=%s) must be between 0 and %s" % (j, self.ncols()-1)) self.cache((i,j), x) xt = x.ref_target() set_gcoeff(self.g, i+1, j+1, xt) return elif isinstance(n, slice): l = glength(self.g) inds = xrange(*n.indices(l)) k = len(inds) if k > len(y): raise ValueError("attempt to assign sequence of size %s to slice of size %s" % (len(y), k)) # actually set the values for a, b in enumerate(inds): sig_check() self[b] = y[a] return # Index is not a tuple or slice, convert to integer i = n if i < 0 or i >= glength(self.g): raise IndexError("index (%s) must be between 0 and %s" % (i, glength(self.g)-1)) self.cache(i, x) xt = x.ref_target() if typ(self.g) == t_LIST: listput(self.g, xt, i+1) else: # Correct indexing for t_POLs if typ(self.g) == t_POL: i += 1 # Actually set the value set_gel(self.g, i+1, xt) def __len__(self): return glength(self.g) def __richcmp__(self, right, int op): """ Compare ``self`` and ``right`` using ``op``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> a = pari(5) >>> b = 10 >>> a < b True >>> a <= b True >>> a <= 5 True >>> a > b False >>> a >= b False >>> a >= pari(10) False >>> a == 5 True >>> a is 5 False >>> pari(2.5) > None True >>> pari(3) == pari(3) True >>> pari('x^2 + 1') == pari('I-1') False >>> pari(I) == pari(I) True This does not define a total order. An error is raised when applying inequality operators to non-ordered types: >>> pari("Mod(1,3)") <= pari("Mod(2,3)") Traceback (most recent call last): ... PariError: forbidden comparison t_INTMOD , t_INTMOD >>> pari("[0]") <= pari("0") Traceback (most recent call last): ... PariError: forbidden comparison t_VEC (1 elts) , t_INT Tests: Check that :trac:`16127` has been fixed: >>> pari('1/2') < pari('1/3') False >>> pari(1) < pari('1/2') False >>> pari('O(x)') == 0 True >>> pari('O(2)') == 0 True """ cdef Gen t1 try: t1 = objtogen(right) except Exception: return NotImplemented cdef bint r cdef GEN x = self.g cdef GEN y = t1.g sig_on() if op == Py_EQ: r = (gequal(x, y) != 0) elif op == Py_NE: r = (gequal(x, y) == 0) elif op == Py_LE: r = (gcmp(x, y) <= 0) elif op == Py_GE: r = (gcmp(x, y) >= 0) elif op == Py_LT: r = (gcmp(x, y) < 0) else: # Py_GT r = (gcmp(x, y) > 0) sig_off() return r def cmp(self, right): """ Compare ``self`` and ``right``. This uses PARI's ``cmp_universal()`` routine, which defines a total ordering on the set of all PARI objects (up to the indistinguishability relation given by ``gidentical()``). .. WARNING:: This comparison is only mathematically meaningful when comparing 2 integers. In particular, when comparing rationals or reals, this does not correspond to the natural ordering. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(5).cmp(pari(5)) 0 >>> pari('x^2 + 1').cmp(pari('I-1')) 1 >>> I = pari('I') >>> I.cmp(I) 0 >>> pari('2/3').cmp(pari('2/5')) -1 >>> two = pari('2.000000000000000000000000') >>> two.cmp(pari(1.0)) 1 >>> two.cmp(pari(2.0)) 1 >>> two.cmp(pari(3.0)) 1 >>> f = pari("0*ffgen(ffinit(29, 10))") >>> pari(0).cmp(f) -1 >>> pari("'x").cmp(f) 1 >>> pari("'x").cmp(0) Traceback (most recent call last): ... 
TypeError: Cannot convert int to cypari2.gen.Gen_base """ other = right sig_on() r = cmp_universal(self.g, other.g) sig_off() return r def __copy__(self): sig_on() return clone_gen(self.g) def __oct__(self): """ Return the octal digits of self in lower case. """ cdef GEN x cdef long lx cdef long *xp cdef long w cdef char *s cdef char *sp cdef char *octdigits = "01234567" cdef int i, j cdef int size x = self.g if typ(x) != t_INT: raise TypeError("gen must be of PARI type t_INT") if not signe(x): return "0" lx = lgefint(x) - 2 # number of words size = lx * 4 * sizeof(long) s = check_malloc(size+3) # 1 char for sign, 1 char for 0, 1 char for '\0' sp = s + size + 3 - 1 # last character sp[0] = 0 xp = int_LSW(x) for i from 0 <= i < lx: w = xp[0] for j in range(4*sizeof(long)): sp -= 1 sp[0] = octdigits[w & 7] w >>= 3 xp = int_nextW(xp) # remove leading zeros! while sp[0] == c'0': sp += 1 sp -= 1 sp[0] = c'0' if signe(x) < 0: sp -= 1 sp[0] = c'-' k = sp sig_free(s) return k def __hex__(self): """ Return the hexadecimal digits of self in lower case. """ cdef GEN x cdef long lx cdef long *xp cdef long w cdef char *s cdef char *sp cdef char *hexdigits = "0123456789abcdef" cdef int i, j cdef int size x = self.g if typ(x) != t_INT: raise TypeError("gen must be of PARI type t_INT") if not signe(x): return "0x0" lx = lgefint(x) - 2 # number of words size = lx*2*sizeof(long) s = check_malloc(size+4) # 1 char for sign, 2 chars for 0x, 1 char for '\0' sp = s + size + 4 - 1 # last character sp[0] = 0 xp = int_LSW(x) for i from 0 <= i < lx: w = xp[0] for j in range(2*sizeof(long)): sp -= 1 sp[0] = hexdigits[w & 15] w >>= 4 xp = int_nextW(xp) # remove leading zeros! while sp[0] == c'0': sp = sp + 1 sp -= 1 sp[0] = 'x' sp -= 1 sp[0] = '0' if signe(x) < 0: sp -= 1 sp[0] = c'-' k = sp sig_free(s) return k def __int__(self): """ Convert ``self`` to a Python integer. If the number is too large to fit into a Python ``int``, a Python ``long`` is returned instead. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> int(pari(0)) 0 >>> int(pari(10)) 10 >>> int(pari(-10)) -10 >>> int(pari(123456789012345678901234567890)) == 123456789012345678901234567890 True >>> int(pari(-123456789012345678901234567890)) == -123456789012345678901234567890 True >>> int(pari(2**31-1)) 2147483647 >>> int(pari(-2**31)) -2147483648 >>> int(pari("Pol(10)")) 10 >>> int(pari("Mod(2, 7)")) 2 >>> int(pari(2**63-1)) == 9223372036854775807 True >>> int(pari(2**63+2)) == 9223372036854775810 True """ return gen_to_integer(self) def __index__(self): """ Coerce ``self`` (which must be a :class:`Gen` of type ``t_INT``) to a Python integer. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> from operator import index >>> i = pari(2) >>> index(i) 2 >>> L = [0, 1, 2, 3, 4] >>> L[i] 2 >>> print(index(pari("2^100"))) 1267650600228229401496703205376 >>> index(pari("2.5")) Traceback (most recent call last): ... TypeError: cannot coerce 2.50000000000000 (type t_REAL) to integer >>> for i in [0,1,2,15,16,17,1213051238]: ... assert bin(pari(i)) == bin(i) ... assert bin(pari(-i)) == bin(-i) ... assert oct(pari(i)) == oct(i) ... assert oct(pari(-i)) == oct(-i) ... assert hex(pari(i)) == hex(i) ... assert hex(pari(-i)) == hex(-i) """ if typ(self.g) != t_INT: raise TypeError(f"cannot coerce {self!r} (type {self.type()}) to integer") return gen_to_integer(self) def python_list_small(self): """ Return a Python list of the PARI gens. This object must be of type t_VECSMALL, and the resulting list contains python 'int's. 
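Anything other than a ``t_VECSMALL`` is rejected; the error message below mirrors the explicit type check in the implementation:

>>> from cypari2 import Pari
>>> pari = Pari()
>>> pari([1, 2, 3]).python_list_small()
Traceback (most recent call last):
...
TypeError: Object (=[1, 2, 3]) must be of type t_VECSMALL.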
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> v = pari([1,2,3,10,102,10]).Vecsmall() >>> w = v.python_list_small() >>> w [1, 2, 3, 10, 102, 10] >>> type(w[0]) <... 'int'> """ cdef long n if typ(self.g) != t_VECSMALL: raise TypeError("Object (=%s) must be of type t_VECSMALL." % self) return [self.g[n+1] for n in range(glength(self.g))] def python_list(self): """ Return a Python list of the PARI gens. This object must be of type t_VEC or t_COL. INPUT: None OUTPUT: - ``list`` - Python list whose elements are the elements of the input gen. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> v = pari([1,2,3,10,102,10]) >>> w = v.python_list() >>> w [1, 2, 3, 10, 102, 10] >>> type(w[0]) <... 'cypari2.gen.Gen'> >>> pari("[1,2,3]").python_list() [1, 2, 3] >>> pari("[1,2,3]~").python_list() [1, 2, 3] """ # TODO: deprecate cdef long n cdef Gen t if typ(self.g) != t_VEC and typ(self.g) != t_COL: raise TypeError("Object (=%s) must be of type t_VEC or t_COL." % self) return [self[n] for n in range(glength(self.g))] def python(self): """ Return the closest Python equivalent of the given PARI object. See :func:`~cypari2.convert.gen_to_python` for more information. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('1.2').python() 1.2 >>> pari('389/17').python() Fraction(389, 17) """ from .convert import gen_to_python return gen_to_python(self) def sage(self, locals=None): r""" Return the closest SageMath equivalent of the given PARI object. INPUT: - ``locals`` -- optional dictionary used in fallback cases that involve ``sage_eval`` See :func:`~sage.libs.pari.convert_sage.gen_to_sage` for more information. """ from sage.libs.pari.convert_sage import gen_to_sage return gen_to_sage(self, locals) def __long__(self): """ Convert ``self`` to a Python ``long``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> import sys >>> if sys.version_info.major == 3: ... long = int >>> assert isinstance(long(pari(0)), long) >>> assert long(pari(0)) == 0 >>> assert long(pari(10)) == 10 >>> assert long(pari(-10)) == -10 >>> assert long(pari(123456789012345678901234567890)) == 123456789012345678901234567890 >>> assert long(pari(-123456789012345678901234567890)) == -123456789012345678901234567890 >>> assert long(pari(2**31-1)) == 2147483647 >>> assert long(pari(-2**31)) == -2147483648 >>> assert long(pari("Pol(10)")) == 10 >>> assert long(pari("Mod(2, 7)")) == 2 """ x = gen_to_integer(self) if isinstance(x, long): return x else: return long(x) def __float__(self): """ Return Python float. """ cdef double d sig_on() d = gtodouble(self.g) sig_off() return d def __complex__(self): r""" Return ``self`` as a Python ``complex`` value. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> g = pari(-1.0)**(0.2); g 0.809016994374947 + 0.587785252292473*I >>> g.__complex__() (0.8090169943749475+0.5877852522924731j) >>> complex(g) (0.8090169943749475+0.5877852522924731j) >>> g = pari('2/3') >>> complex(g) (0.6666666666666666+0j) >>> g = pari.quadgen(-23) >>> complex(g) (0.5+2.3979157616563596j) >>> g = pari.quadgen(5) + pari('2/3') >>> complex(g) (2.2847006554165614+0j) >>> g = pari('Mod(3,5)'); g Mod(3, 5) >>> complex(g) Traceback (most recent call last): ...
PariError: incorrect type in gtofp (t_INTMOD) """ cdef double re, im sig_on() # First convert to floating point (t_REAL or t_COMPLEX) # Note: DEFAULTPREC means 64 bits of precision fp = gtofp(self.g, DEFAULTPREC) if typ(fp) == t_REAL: re = rtodbl(fp) im = 0 elif typ(fp) == t_COMPLEX: re = gtodouble(gel(fp, 1)) im = gtodouble(gel(fp, 2)) else: sig_off() raise AssertionError("unrecognized output from gtofp()") clear_stack() return complex(re, im) def __nonzero__(self): """ Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('1').__nonzero__() True >>> pari('x').__nonzero__() True >>> bool(pari(0)) False >>> a = pari('Mod(0,3)') >>> a.__nonzero__() False """ return not gequal0(self.g) def gequal(a, b): r""" Check whether `a` and `b` are equal using PARI's ``gequal``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> a = pari(1); b = pari(1.0); c = pari('"some_string"') >>> a.gequal(a) True >>> b.gequal(b) True >>> c.gequal(c) True >>> a.gequal(b) True >>> a.gequal(c) False WARNING: this relation is not transitive: >>> a = pari('[0]'); b = pari(0); c = pari('[0,0]') >>> a.gequal(b) True >>> b.gequal(c) True >>> a.gequal(c) False """ cdef Gen t0 = objtogen(b) sig_on() cdef int ret = gequal(a.g, t0.g) sig_off() return ret != 0 def gequal0(a): r""" Check whether `a` is equal to zero. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(0).gequal0() True >>> pari(1).gequal0() False >>> pari(1e-100).gequal0() False >>> pari("0.0 + 0.0*I").gequal0() True >>> (pari('ffgen(3^20)')*0).gequal0() True """ sig_on() cdef int ret = gequal0(a.g) sig_off() return ret != 0 def gequal_long(a, long b): r""" Check whether `a` is equal to the ``long int`` `b` using PARI's ``gequalsg``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> a = pari(1); b = pari(2.0); c = pari('3*matid(3)') >>> a.gequal_long(1) True >>> a.gequal_long(-1) False >>> a.gequal_long(0) False >>> b.gequal_long(2) True >>> b.gequal_long(-2) False >>> c.gequal_long(3) True >>> c.gequal_long(-3) False """ sig_on() cdef int ret = gequalsg(b, a.g) sig_off() return ret != 0 def isprime(self, long flag=0): """ Return True if x is a PROVEN prime number, and False otherwise. INPUT: - ``flag`` -- If flag is 0 or omitted, use a combination of algorithms. If flag is 1, the primality is certified by the Pocklington-Lehmer Test. If flag is 2, the primality is certified using the APRCL test. If flag is 3, use ECPP. OUTPUT: bool Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(9).isprime() False >>> pari(17).isprime() True >>> n = pari(561) # smallest Carmichael number >>> n.isprime() # not just a pseudo-primality test! False >>> n.isprime(1) False >>> n.isprime(2) False >>> n = pari(2**31-1) >>> n.isprime(1) True """ sig_on() x = gisprime(self.g, flag) # PARI-2.9 may return a primality certificate if flag==1. # So a non-INT is interpreted as True cdef bint ret = (typ(x) != t_INT) or (signe(x) != 0) clear_stack() return ret def ispseudoprime(self, long flag=0): """ ispseudoprime(x, flag=0): Returns True if x is a pseudo-prime number, and False otherwise. INPUT: - ``flag`` - int 0 (default): checks whether x is a Baillie-Pomerance-Selfridge-Wagstaff pseudo prime (strong Rabin-Miller pseudo prime for base 2, followed by strong Lucas test for the sequence (P,-1), P smallest positive integer such that `P^2 - 4` is not a square mod x). > 0: checks whether x is a strong Miller-Rabin pseudo prime for flag randomly chosen bases (with end-matching to catch square roots of -1).
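For example, running 10 strong Miller-Rabin rounds on a Mersenne prime (a small sketch; `2^61 - 1` is prime, so every round reports success):

>>> from cypari2 import Pari
>>> pari = Pari()
>>> pari(2**61 - 1).ispseudoprime(10)
True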
OUTPUT: - ``bool`` - True or False. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(9).ispseudoprime() False >>> pari(17).ispseudoprime() True >>> n = pari(561) # smallest Carmichael number >>> n.ispseudoprime(2) False """ sig_on() cdef long t = ispseudoprime(self.g, flag) sig_off() return t != 0 def ispower(self, k=None): r""" Determine whether or not self is a perfect k-th power. If k is not specified, find the largest k so that self is a k-th power. INPUT: - ``k`` - int (optional) OUTPUT: - ``power`` - int, what power it is - ``g`` - what it is a power of Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(9).ispower() (2, 3) >>> pari(17).ispower() (1, 17) >>> pari(17).ispower(2) (False, None) >>> pari(17).ispower(1) (1, 17) >>> pari(2).ispower() (1, 2) """ cdef int n cdef GEN x cdef Gen t0 if k is None: sig_on() n = gisanypower(self.g, &x) if n == 0: sig_off() return 1, self else: return n, new_gen(x) else: t0 = objtogen(k) sig_on() n = ispower(self.g, t0.g, &x) if n == 0: sig_off() return False, None else: return k, new_gen(x) def isprimepower(self): r""" Check whether ``self`` is a prime power (with an exponent >= 1). INPUT: - ``self`` - A PARI integer OUTPUT: A tuple ``(k, p)`` where `k` is a Python integer and `p` a PARI integer. - If the input was a prime power, `p` is the prime and `k` the power. - Otherwise, `k = 0` and `p` is ``self``. .. SEEALSO:: If you don't need a proof that `p` is prime, you can use :meth:`ispseudoprimepower` instead. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(9).isprimepower() (2, 3) >>> pari(17).isprimepower() (1, 17) >>> pari(18).isprimepower() (0, 18) >>> pari(3**12345).isprimepower() (12345, 3) """ cdef GEN x cdef long n sig_on() n = isprimepower(self.g, &x) if n == 0: sig_off() return 0, self else: return n, new_gen(x) def ispseudoprimepower(self): r""" Check whether ``self`` is the power (with an exponent >= 1) of a pseudo-prime. INPUT: - ``self`` - A PARI integer OUTPUT: A tuple ``(k, p)`` where `k` is a Python integer and `p` a PARI integer. - If the input was a pseudoprime power, `p` is the pseudoprime and `k` the power. - Otherwise, `k = 0` and `p` is ``self``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(3**12345).ispseudoprimepower() (12345, 3) >>> p = pari(2**1500 + 1465) # nextprime(2^1500) >>> (p**11).ispseudoprimepower()[0] # very fast 11 """ cdef GEN x cdef long n sig_on() n = ispseudoprimepower(self.g, &x) if n == 0: sig_off() return 0, self else: return n, new_gen(x) def vecmax(x): """ Return the maximum of the elements of the vector/matrix `x`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari([1, '-5/3', 8.0]).vecmax() 8.00000000000000 """ sig_on() return new_gen(vecmax(x.g)) def vecmin(x): """ Return the minimum of the elements of the vector/matrix `x`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari([1, '-5/3', 8.0]).vecmin() -5/3 """ sig_on() return new_gen(vecmin(x.g)) def Ser(f, v=None, long precision=-1): """ Return a power series or Laurent series in the variable `v` constructed from the object `f`. INPUT: - ``f`` -- PARI gen - ``v`` -- PARI variable (default: `x`) - ``precision`` -- the desired relative precision (default: the value returned by ``pari.get_series_precision()``). This is the absolute precision minus the `v`-adic valuation.
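The default series precision can be read off with ``pari.get_series_precision()``; in a fresh session it is 16, matching the ``O(x^16)`` tails in the examples below (a small illustration, assuming the default has not been changed via ``set_series_precision``):

>>> from cypari2 import Pari
>>> pari = Pari()
>>> pari.get_series_precision()
16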
OUTPUT: - PARI object of type ``t_SER`` The series is constructed from `f` in the following way: - If `f` is a scalar, a constant power series is returned. - If `f` is a polynomial, it is converted into a power series in the obvious way. - If `f` is a rational function, it will be expanded in a Laurent series around `v = 0`. - If `f` is a vector, its coefficients become the coefficients of the power series, starting from the constant term. This is the convention used by the function ``Polrev()``, and the reverse of that used by ``Pol()``. .. warning:: This function will not transform objects containing variables of higher priority than `v`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(2).Ser() 2 + O(x^16) >>> pari('Mod(0, 7)').Ser() Mod(0, 7)*x^15 + O(x^16) >>> x = pari([1, 2, 3, 4, 5]) >>> x.Ser() 1 + 2*x + 3*x^2 + 4*x^3 + 5*x^4 + O(x^16) >>> f = x.Ser('v'); print(f) 1 + 2*v + 3*v^2 + 4*v^3 + 5*v^4 + O(v^16) >>> pari(1)/f 1 - 2*v + v^2 + 6*v^5 - 17*v^6 + 16*v^7 - 5*v^8 + 36*v^10 - 132*v^11 + 181*v^12 - 110*v^13 + 25*v^14 + 216*v^15 + O(v^16) >>> pari('x^5').Ser(precision=20) x^5 + O(x^25) >>> pari('1/x').Ser(precision=1) x^-1 + O(x^0) """ if precision < 0: precision = precdl # Global PARI series precision sig_on() cdef long vn = get_var(v) if typ(f.g) == t_VEC: # The precision flag is ignored for vectors, so we first # convert the vector to a polynomial. return new_gen(gtoser(gtopolyrev(f.g, vn), vn, precision)) else: return new_gen(gtoser(f.g, vn, precision)) def Str(self): """ Str(self): Return the print representation of self as a PARI object. INPUT: - ``self`` - gen OUTPUT: - ``gen`` - a PARI Gen of type t_STR, i.e., a PARI string Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari([1,2,['abc',1]]).Str() "[1, 2, [abc, 1]]" >>> pari([1,1, 1.54]).Str() "[1, 1, 1.54000000000000]" >>> pari(1).Str() # 1 is automatically converted to string rep "1" >>> x = pari('x') # PARI variable "x" >>> x.Str() # is converted to string rep. "x" >>> x.Str().type() 't_STR' """ cdef char* c sig_on() # Use sig_block(), which is needed because GENtostr() uses # malloc(), which is dangerous inside sig_on() sig_block() c = GENtostr(self.g) sig_unblock() v = new_gen(strtoGENstr(c)) pari_free(c) return v def Strexpand(x): """ Concatenate the entries of the vector `x` into a single string, then perform tilde expansion and environment variable expansion similar to shells. INPUT: - ``x`` -- PARI gen. Either a vector or an element which is then treated like `[x]`. OUTPUT: - PARI string (type ``t_STR``) Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('"~/subdir"').Strexpand() "..." >>> pari('"$SHELL"').Strexpand() "..." Tests: >>> a = pari('"$HOME"') >>> a.Strexpand() != a True """ if typ(x.g) != t_VEC: x = list_of_Gens_to_Gen([x]) sig_on() return new_gen(Strexpand(x.g)) def Strtex(x): r""" Strtex(x): Translates the vector x of PARI gens to TeX format and returns the resulting concatenated strings as a PARI t_STR. INPUT: - ``x`` -- PARI gen. Either a vector or an element which is then treated like `[x]`. 
OUTPUT: - PARI string (type ``t_STR``) Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> v = pari('x^2') >>> v.Strtex() "x^2" >>> v = pari(['1/x^2','x']) >>> v.Strtex() "\\frac{1}{x^2}x" >>> v = pari(['1 + 1/x + 1/(y+1)','x-1']) >>> v.Strtex() "\\frac{ \\left(y\n + 2\\right) \\*x\n + \\left(y\n + 1\\right) }{ \\left(y\n + 1\\right) \\*x}x\n - 1" """ if typ(x.g) != t_VEC: x = list_of_Gens_to_Gen([x]) sig_on() return new_gen(Strtex(x.g)) def bittest(x, long n): """ bittest(x, long n): Returns bit number n (coefficient of `2^n` in binary) of the integer x. Negative numbers behave as if modulo a big power of 2. INPUT: - ``x`` - Gen (pari integer) OUTPUT: - ``bool`` - a Python bool Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> x = pari(6) >>> x.bittest(0) False >>> x.bittest(1) True >>> x.bittest(2) True >>> x.bittest(3) False >>> pari(-3).bittest(0) True >>> pari(-3).bittest(1) False >>> [pari(-3).bittest(n) for n in range(10)] [True, False, True, True, True, True, True, True, True, True] """ sig_on() cdef long b = bittest(x.g, n) sig_off() return b != 0 lift_centered = Gen_base.centerlift def padicprime(self): """ The uniformizer of the p-adic ring this element lies in, as a t_INT. INPUT: - ``x`` - gen, of type t_PADIC OUTPUT: - ``p`` - gen, of type t_INT Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> y = pari('11^-10 + 5*11^-7 + 11^-6 + O(11)') >>> y.padicprime() 11 >>> y.padicprime().type() 't_INT' """ return self.new_ref(gel(self.fixGEN(), 2)) def round(x, bint estimate=False): """ round(x,estimate=False): If x is a real number, returns x rounded to the nearest integer (rounding up). If the optional argument estimate is True, also returns the binary exponent e of the difference between the original and the rounded value (the "fractional part") (this is the integer ceiling of log_2(error)). When x is a general PARI object, this function returns the result of rounding every coefficient at every level of PARI object. Note that this is different than what the truncate function does (see the example below). One use of round is to get exact results after a long approximate computation, when theory tells you that the coefficients must be integers. INPUT: - ``x`` - gen - ``estimate`` - (optional) bool, False by default OUTPUT: - if estimate is False, return a single gen. - if estimate is True, return rounded version of x and error estimate in bits, both as gens. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('1.5').round() 2 >>> pari('1.5').round(True) (2, -1) >>> pari('1.5 + 2.1*I').round() 2 + 2*I >>> pari('1.0001').round(True) (1, -14) >>> pari('(2.4*x^2 - 1.7)/x').round() (2*x^2 - 2)/x >>> pari('(2.4*x^2 - 1.7)/x').truncate() 2.40000000000000*x """ cdef int n cdef long e cdef Gen y sig_on() if not estimate: return new_gen(ground(x.g)) y = new_gen(grndtoi(x.g, &e)) return y, e def sizeword(x): """ Return the total number of machine words occupied by the complete tree of the object x. A machine word is 32 or 64 bits, depending on the computer. 
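Consequently, the byte count reported by :meth:`sizebyte` is this word count multiplied by the word size; a consistency sketch (reusing the bitness test from the doctests below):

>>> from cypari2 import Pari
>>> pari = Pari()
>>> import sys
>>> wordsize = 8 if sys.maxsize > (1 << 32) else 4
>>> pari('1').sizebyte() == pari('1').sizeword() * wordsize
True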
INPUT: - ``x`` - gen OUTPUT: int (a Python int) Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('0').sizeword() 2 >>> pari('1').sizeword() 3 >>> pari('1000000').sizeword() 3 >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> pari('10^100').sizeword() == (13 if bitness == '32' else 8) True >>> pari(1.0).sizeword() == (4 if bitness == '32' else 3) True >>> pari('x + 1').sizeword() 10 >>> pari('[x + 1, 1]').sizeword() 16 """ return gsizeword(x.g) def sizebyte(x): """ Return the total number of bytes occupied by the complete tree of the object x. Note that this number depends on whether the computer is 32-bit or 64-bit. INPUT: - ``x`` - gen OUTPUT: int (a Python int) Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> pari('1').sizebyte() == (12 if bitness == '32' else 24) True """ return gsizebyte(x.g) def truncate(x, bint estimate=False): """ truncate(x,estimate=False): Return the truncation of x. If estimate is True, also return the number of error bits. When x is in the real numbers, this means that the part after the decimal point is chopped away, e is the binary exponent of the difference between the original and truncated value (the "fractional part"). If x is a rational function, the result is the integer part (Euclidean quotient of numerator by denominator) and if requested the error estimate is 0. When truncate is applied to a power series (in X), it transforms it into a polynomial or a rational function with denominator a power of X, by chopping away the `O(X^k)`. Similarly, when applied to a p-adic number, it transforms it into an integer or a rational number by chopping away the `O(p^k)`. INPUT: - ``x`` - gen - ``estimate`` - (optional) bool, which is False by default OUTPUT: - if estimate is False, return a single gen. - if estimate is True, return rounded version of x and error estimate in bits, both as gens. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('(x^2+1)/x').round() (x^2 + 1)/x >>> pari('(x^2+1)/x').truncate() x >>> pari('1.043').truncate() 1 >>> pari('1.043').truncate(True) (1, -5) >>> pari('1.6').truncate() 1 >>> pari('1.6').round() 2 >>> pari('1/3 + 2 + 3^2 + O(3^3)').truncate() 34/3 >>> pari('sin(x+O(x^10))').truncate() 1/362880*x^9 - 1/5040*x^7 + 1/120*x^5 - 1/6*x^3 + x >>> pari('sin(x+O(x^10))').round() # each coefficient has abs < 1 x + O(x^10) """ cdef long e cdef Gen y sig_on() if not estimate: return new_gen(gtrunc(x.g)) y = new_gen(gcvtoi(x.g, &e)) return y, e def _valp(x): """ Return the valuation of x where x is a p-adic number (t_PADIC) or a Laurent series (t_SER). If x is a different type, this will give a bogus number. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('1/x^2 + O(x^10)')._valp() -2 >>> pari('O(x^10)')._valp() 10 >>> pari('(1145234796 + O(3^10))/771966234')._valp() -2 >>> pari('O(2^10)')._valp() 10 """ # This is a simple macro, so we don't need sig_on() return valp(x.g) def bernfrac(self): r""" The Bernoulli number `B_x`, where `B_0 = 1`, `B_1 = -1/2`, `B_2 = 1/6,\ldots,` expressed as a rational number. The argument `x` should be of type integer. 
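For odd `x > 1` the Bernoulli number vanishes; a small check, consistent with the list in the examples below:

>>> from cypari2 import Pari
>>> pari = Pari()
>>> pari(7).bernfrac()
0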
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(18).bernfrac() 43867/798 >>> [pari(n).bernfrac() for n in range(10)] [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30, 0] """ sig_on() return new_gen(bernfrac(self)) def bernreal(self, unsigned long precision=0): r""" The Bernoulli number `B_x`, as for the function bernfrac, but `B_x` is returned as a real number (with the current precision). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(18).bernreal() 54.9711779448622 """ sig_on() return new_gen(bernreal(self, prec_bits_to_words(precision))) def besselk(nu, x, unsigned long precision=0): """ nu.besselk(x): K-Bessel function (modified Bessel function of the second kind) of index nu, which can be complex, and argument x. If `nu` or `x` is an exact argument, it is first converted to a real or complex number using the optional parameter precision (in bits). If the arguments are inexact (e.g. real), the smallest of their precisions is used in the computation, and the parameter precision is ignored. INPUT: - ``nu`` - a complex number - ``x`` - real number (positive or negative) Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(complex(2, 1)).besselk(3) 0.0455907718407551 + 0.0289192946582081*I >>> pari(complex(2, 1)).besselk(-3) -4.34870874986752 - 5.38744882697109*I >>> pari(complex(2, 1)).besselk(300) 3.74224603319728 E-132 + 2.49071062641525 E-134*I """ cdef Gen t0 = objtogen(x) sig_on() return new_gen(kbessel(nu.g, t0.g, prec_bits_to_words(precision))) def eint1(x, long n=0, unsigned long precision=0): r""" x.eint1(n): exponential integral E1(x): .. MATH:: \int_{x}^{\infty} \frac{e^{-t}}{t} dt If n is present, output the vector [eint1(x), eint1(2\*x), ..., eint1(n\*x)]. This is faster than repeatedly calling eint1(i\*x). If `x` is an exact argument, it is first converted to a real or complex number using the optional parameter precision (in bits). If `x` is inexact (e.g. real), its own precision is used in the computation, and the parameter precision is ignored. REFERENCE: - See page 262, Prop 5.6.12, of Cohen's book "A Course in Computational Algebraic Number Theory". Examples: """ sig_on() if n <= 0: return new_gen(eint1(x.g, prec_bits_to_words(precision))) else: return new_gen(veceint1(x.g, stoi(n), prec_bits_to_words(precision))) log_gamma = Gen_base.lngamma def polylog(x, long m, long flag=0, unsigned long precision=0): """ x.polylog(m,flag=0): m-th polylogarithm of x. flag is optional, and can be 0: default, 1: modified m-th polylog `\tilde{D}_m` of x, 2: modified m-th polylog `D_m` of x, 3: modified m-th polylog `P_m` of x. If `x` is an exact argument, it is first converted to a real or complex number using the optional parameter precision (in bits). If `x` is inexact (e.g. real), its own precision is used in the computation, and the parameter precision is ignored. TODO: Add more explanation, copied from the PARI manual. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(10).polylog(3) 5.64181141475134 - 8.32820207698027*I >>> pari(10).polylog(3,0) 5.64181141475134 - 8.32820207698027*I >>> pari(10).polylog(3,1) 0.523778453502411 >>> pari(10).polylog(3,2) -0.400459056163451 """ sig_on() return new_gen(polylog0(m, x.g, flag, prec_bits_to_words(precision))) def sqrtn(x, n, unsigned long precision=0): r""" x.sqrtn(n): return the principal branch of the n-th root of x, i.e., the one such that `\arg(\sqrt(x)) \in ]-\pi/n, \pi/n]`. Also returns a second argument which is a suitable root of unity allowing one to recover all the other roots.
If it was not possible to find such a number, then this second return value is 0. If the argument is present and no square root exists, return 0 instead of raising an error. If `x` is an exact argument, it is first converted to a real or complex number using the optional parameter precision (in bits). If `x` is inexact (e.g. real), its own precision is used in the computation, and the parameter precision is ignored. .. NOTE:: intmods (modulo a prime) and `p`-adic numbers are allowed as arguments. INPUT: - ``x`` - gen - ``n`` - integer OUTPUT: - ``gen`` - principal n-th root of x - ``gen`` - root of unity z that gives the other roots Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> s, z = pari(2).sqrtn(5) >>> z 0.309016994374947 + 0.951056516295154*I >>> s 1.14869835499704 >>> s**5 2.00000000000000 >>> (s*z)**5 2.00000000000000 + 0.E-19*I >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> s = str(z**5) >>> s == ('1.00000000000000 - 2.710505431 E-20*I' if bitness == '32' else '1.00000000000000 - 2.71050543121376 E-20*I') True """ cdef GEN ans, zetan cdef Gen t0 = objtogen(n) sig_on() ans = gsqrtn(x.g, t0.g, &zetan, prec_bits_to_words(precision)) return new_gens2(ans, zetan) def ffprimroot(self): r""" Return a primitive root of the multiplicative group of the definition field of the given finite field element. INPUT: - ``self`` -- a PARI finite field element (``FFELT``) OUTPUT: - A generator of the multiplicative group of the finite field generated by ``self``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> b = pari(9).ffgen().ffprimroot() >>> b.fforder() 8 """ sig_on() return new_gen(ffprimroot(self.g, NULL)) def fibonacci(self): r""" Return the Fibonacci number of index x. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(18).fibonacci() 2584 >>> [pari(n).fibonacci() for n in range(10)] [0, 1, 1, 2, 3, 5, 8, 13, 21, 34] """ sig_on() return new_gen(fibo(self)) def issquare(x, find_root=False): """ issquare(x, find_root=False): ``True`` if x is a square, ``False`` if not. If ``find_root`` is True, also returns the exact square root. """ cdef GEN G cdef long t cdef Gen g sig_on() if find_root: t = itos(gissquareall(x.g, &G)) if t: return True, new_gen(G) else: clear_stack() return False, None else: t = itos(gissquare(x.g)) clear_stack() return t != 0 def issquarefree(self): """ Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(10).issquarefree() True >>> pari(20).issquarefree() False """ sig_on() cdef long t = issquarefree(self.g) sig_off() return t != 0 def sumdiv(n): """ Return the sum of the divisors of `n`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(10).sumdiv() 18 """ sig_on() return new_gen(sumdiv(n.g)) def sumdivk(n, long k): """ Return the sum of the k-th powers of the divisors of n. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(10).sumdivk(2) 130 """ sig_on() return new_gen(sumdivk(n.g, k)) def Zn_issquare(self, n): """ Return ``True`` if ``self`` is a square modulo `n`, ``False`` if not. INPUT: - ``self`` -- integer - ``n`` -- integer or factorisation matrix Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(3).Zn_issquare(4) False >>> pari(4).Zn_issquare(pari(30).factor()) True """ cdef Gen t0 = objtogen(n) sig_on() cdef long t = Zn_issquare(self.g, t0.g) clear_stack() return t != 0 def Zn_sqrt(self, n): """ Return a square root of ``self`` modulo `n`, if such a square root exists; otherwise, raise a ``ValueError``.
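When a root exists, the result can be verified directly (a small consistency check, reusing the factored modulus from the examples below):

>>> from cypari2 import Pari
>>> pari = Pari()
>>> s = pari(4).Zn_sqrt(pari(30).factor())
>>> s**2 % 30
4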
INPUT: - ``self`` -- integer - ``n`` -- integer or factorisation matrix Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(3).Zn_sqrt(4) Traceback (most recent call last): ... ValueError: 3 is not a square modulo 4 >>> pari(4).Zn_sqrt(pari(30).factor()) 22 """ cdef Gen t0 = objtogen(n) cdef GEN s sig_on() s = Zn_sqrt(self.g, t0.g) if s == NULL: clear_stack() raise ValueError("%s is not a square modulo %s" % (self, n)) return new_gen(s) def ellan(self, long n, python_ints=False): """ Return the first `n` Fourier coefficients of the modular form attached to this elliptic curve. See ellak for more details. INPUT: - ``n`` - a long integer - ``python_ints`` - bool (default is False); if True, return a list of Python ints instead of a PARI Gen wrapper. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([0, -1, 1, -10, -20]).ellinit() >>> e.ellan(3) [1, -2, -1] >>> e.ellan(20) [1, -2, -1, 2, 1, 2, -2, 0, -2, -2, 1, -2, 4, 4, -1, -4, -2, 4, 0, 2] >>> e.ellan(-1) [] >>> v = e.ellan(10, python_ints=True); v [1, -2, -1, 2, 1, 2, -2, 0, -2, -2] >>> type(v) <... 'list'> >>> type(v[0]) <... 'int'> """ sig_on() cdef Gen g = new_gen(ellan(self.g, n)) if not python_ints: return g return [gtolong(gel(g.g, i+1)) for i in range(glength(g.g))] def ellaplist(self, long n, python_ints=False): r""" e.ellaplist(n): Returns a PARI list of all the prime-indexed coefficients `a_p` (up to n) of the `L`-function of the elliptic curve `e`, i.e. the Fourier coefficients of the newform attached to `e`. INPUT: - ``self`` -- an elliptic curve - ``n`` -- a long integer - ``python_ints`` -- bool (default is False); if True, return a list of Python ints instead of a PARI Gen wrapper. .. WARNING:: The curve e must be a medium or long vector of the type given by ellinit. For this function to work for every n and not just those prime to the conductor, e must be a minimal Weierstrass equation. If this is not the case, use the function ellminimalmodel first before using ellaplist (or you will get INCORRECT RESULTS!) Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([0, -1, 1, -10, -20]).ellinit() >>> v = e.ellaplist(10); v [-2, -1, 1, -2] >>> type(v) <... 'cypari2.gen.Gen'> >>> v.type() 't_VEC' >>> e.ellan(10) [1, -2, -1, 2, 1, 2, -2, 0, -2, -2] >>> v = e.ellaplist(10, python_ints=True); v [-2, -1, 1, -2] >>> type(v) <... 'list'> >>> type(v[0]) <... 'int'> Tests: >>> v = e.ellaplist(1) >>> v, type(v) ([], <... 'cypari2.gen.Gen'>) >>> v = e.ellaplist(1, python_ints=True) >>> v, type(v) ([], <... 'list'>) """ if python_ints: return [int(x) for x in self.ellaplist(n)] sig_on() if n < 2: return new_gen(zerovec(0)) # Make a table of primes up to n: this returns a t_VECSMALL # that we artificially change to a t_VEC cdef GEN g = primes_upto_zv(n) settyp(g, t_VEC) # Replace each prime in the table by ellap of it cdef long i cdef GEN curve = self.g for i in range(1, lg(g)): set_gel(g, i, ellap(curve, utoi(g[i]))) return new_gen(g) def ellisoncurve(self, x): """ e.ellisoncurve(x): return True if the point x is on the elliptic curve e, False otherwise. If the point or the curve have inexact coefficients, an attempt is made to take this into account. 
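A quick sanity check on the curve `y^2 = x^3 + 1` (a minimal sketch; the point `(2, 3)` satisfies `3^2 = 2^3 + 1`):

>>> from cypari2 import Pari
>>> pari = Pari()
>>> e = pari([0, 0, 0, 0, 1]).ellinit()
>>> e.ellisoncurve([2, 3])
True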
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([0,1,1,-2,0]).ellinit() >>> e.ellisoncurve([1,0]) True >>> e.ellisoncurve([1,1]) False >>> e.ellisoncurve([1,0.00000000000000001]) False >>> e.ellisoncurve([1,0.000000000000000001]) True >>> e.ellisoncurve([0]) True """ cdef Gen t0 = objtogen(x) sig_on() cdef int t = oncurve(self.g, t0.g) sig_off() return t != 0 def ellminimalmodel(self): """ ellminimalmodel(e): return the standard minimal integral model of the rational elliptic curve e and the corresponding change of variables. INPUT: - ``e`` - Gen (that defines an elliptic curve) OUTPUT: - ``gen`` - minimal model - ``gen`` - change of coordinates Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([1,2,3,4,5]).ellinit() >>> F, ch = e.ellminimalmodel() >>> F[:5] [1, -1, 0, 4, 3] >>> ch [1, -1, 0, -1] >>> e.ellchangecurve(ch)[:5] [1, -1, 0, 4, 3] """ cdef GEN x, y sig_on() x = ellminimalmodel(self.g, &y) return new_gens2(x, y) def elltors(self): """ Return information about the torsion subgroup of the given elliptic curve. INPUT: - ``e`` - elliptic curve over `\QQ` OUTPUT: - ``gen`` - the order of the torsion subgroup, a.k.a. the number of points of finite order - ``gen`` - vector giving the structure of the torsion subgroup as a product of cyclic groups, sorted in non-increasing order - ``gen`` - vector giving points on e generating these cyclic groups Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([1,0,1,-19,26]).ellinit() >>> e.elltors() [12, [6, 2], [[1, 2], [3, -2]]] """ sig_on() return new_gen(elltors(self.g)) def omega(self): """ Return the basis for the period lattice of this elliptic curve. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([0, -1, 1, -10, -20]).ellinit() >>> e.omega() [1.26920930427955, 0.634604652139777 - 1.45881661693850*I] The precision is determined by the ``ellinit`` call: >>> e = pari([0, -1, 1, -10, -20]).ellinit(precision=256) >>> e.omega().bitprecision() 256 This also works over quadratic imaginary number fields: >>> e = pari.ellinit([0, -1, 1, -10, -20], "nfinit(y^2 - 2)") >>> if pari.version() >= (2, 10, 1): ... w = e.omega() """ sig_on() return new_gen(member_omega(self.g)) def disc(self): """ Return the discriminant of this object. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([0, -1, 1, -10, -20]).ellinit() >>> e.disc() -161051 >>> _.factor() [-1, 1; 11, 5] """ sig_on() return clone_gen(member_disc(self.g)) def j(self): """ Return the j-invariant of this object. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> e = pari([0, -1, 1, -10, -20]).ellinit() >>> e.j() -122023936/161051 >>> _.factor() [-1, 1; 2, 12; 11, -5; 31, 3] """ sig_on() return clone_gen(member_j(self.g)) def _eltabstorel(self, x): """ Return the relative number field element corresponding to `x`. The result is a ``t_POLMOD`` with ``t_POLMOD`` coefficients. .. WARNING:: This is a low-level version of :meth:`rnfeltabstorel` that only needs the output of :meth:`_nf_rnfeq`, not a full PARI ``rnf`` structure. This method may raise errors or return undefined results if called with invalid arguments. 
Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari('y^2 + 1').nfinit() >>> rnfeq = K._nf_rnfeq('x^2 + 2') >>> f_abs = rnfeq[0]; f_abs x^4 + 6*x^2 + 1 >>> x_rel = rnfeq._eltabstorel('x'); x_rel Mod(x + Mod(-y, y^2 + 1), x^2 + 2) >>> f_abs(x_rel) Mod(0, x^2 + 2) """ cdef Gen t0 = objtogen(x) sig_on() return new_gen(eltabstorel(self.g, t0.g)) def _eltabstorel_lift(self, x): """ Return the relative number field element corresponding to `x`. The result is a ``t_POL`` with ``t_POLMOD`` coefficients. .. WARNING:: This is a low-level version of :meth:`rnfeltabstorel` that only needs the output of :meth:`_nf_rnfeq`, not a full PARI ``rnf`` structure. This method may raise errors or return undefined results if called with invalid arguments. Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari('y^2 + 1').nfinit() >>> rnfeq = K._nf_rnfeq('x^2 + 2') >>> rnfeq._eltabstorel_lift('x') x + Mod(-y, y^2 + 1) """ cdef Gen t0 = objtogen(x) sig_on() return new_gen(eltabstorel_lift(self.g, t0.g)) def _eltreltoabs(self, x): """ Return the absolute number field element corresponding to `x`. The result is a ``t_POL``. .. WARNING:: This is a low-level version of :meth:`rnfeltreltoabs` that only needs the output of :meth:`_nf_rnfeq`, not a full PARI ``rnf`` structure. This method may raise errors or return undefined results if called with invalid arguments. Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari('y^2 + 1').nfinit() >>> rnfeq = K._nf_rnfeq('x^2 + 2') >>> rnfeq._eltreltoabs('x') 1/2*x^3 + 7/2*x >>> rnfeq._eltreltoabs('y') 1/2*x^3 + 5/2*x """ cdef Gen t0 = objtogen(x) sig_on() return new_gen(eltreltoabs(self.g, t0.g)) def galoissubfields(self, long flag=0, v=None): """ List all subfields of the Galois group ``self``. This wraps the `galoissubfields`_ function from PARI. This method is essentially the same as applying :meth:`galoisfixedfield` to each group returned by :meth:`galoissubgroups`. INPUT: - ``self`` -- A Galois group as generated by :meth:`galoisinit`. - ``flag`` -- Has the same meaning as in :meth:`galoisfixedfield`. - ``v`` -- Has the same meaning as in :meth:`galoisfixedfield`. OUTPUT: A vector of all subfields of this group. Each entry is as described in the :meth:`galoisfixedfield` method. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> G = pari('x^6 + 108').galoisinit() >>> G.galoissubfields(flag=1) [x, x^2 + 972, x^3 + 54, x^3 + 864, x^3 - 54, x^6 + 108] >>> G = pari('x^4 + 1').galoisinit() >>> G.galoissubfields(flag=2, v='z')[3] [...^2 + 2, Mod(x^3 + x, x^4 + 1), [x^2 - z*x - 1, x^2 + z*x - 1]] .. _galoissubfields: http://pari.math.u-bordeaux.fr/dochtml/html.stable/Functions_related_to_general_number_fields.html#galoissubfields """ sig_on() return new_gen(galoissubfields(self.g, flag, get_var(v))) def nfeltval(self, x, p): """ Return the valuation of the number field element `x` at the prime `p`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> nf = pari('x^2 + 1').nfinit() >>> p = nf.idealprimedec(5)[0] >>> nf.nfeltval('50 - 25*x', p) 3 """ cdef Gen t0 = objtogen(x) cdef Gen t1 = objtogen(p) sig_on() v = nfval(self.g, t0.g, t1.g) sig_off() return v def nfbasis(self, long flag=0, fa=None): """ Integral basis of the field `\QQ[a]`, where ``a`` is a root of the polynomial x. INPUT: - ``flag``: if set to 1 and ``fa`` is not given: assume that no square of a prime > 500000 divides the discriminant of ``x``. - ``fa``: If present, encodes a subset of primes at which to check for maximality. 
This must be one of the three following things: - an integer: check all primes up to ``fa`` using trial division. - a vector: a list of primes to check. - a matrix: a partial factorization of the discriminant of ``x``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('x^3 - 17').nfbasis() [1, x, 1/3*x^2 - 1/3*x + 1/3] We test ``flag`` = 1, noting it gives a wrong result when the discriminant (-4 * `p`^2 * `q` in the example below) has a big square factor: >>> p = pari(10**10).nextprime(); q = (p+1).nextprime() >>> x = pari('x'); f = x**2 + p**2*q >>> pari(f).nfbasis(1) # Wrong result [1, x] >>> pari(f).nfbasis() # Correct result [1, 1/10000000019*x] >>> pari(f).nfbasis(fa=10**6) # Check primes up to 10^6: wrong result [1, x] >>> pari(f).nfbasis(fa="[2,2; %s,2]"%p) # Correct result and faster [1, 1/10000000019*x] >>> pari(f).nfbasis(fa=[2,p]) # Equivalent with the above [1, 1/10000000019*x] The following alternative syntax closer to PARI/GP can be used: >>> pari.nfbasis([f, 1]) [1, x] >>> pari.nfbasis(f) [1, 1/10000000019*x] >>> pari.nfbasis([f, 10**6]) [1, x] >>> pari.nfbasis([f, "[2,2; %s,2]"%p]) [1, 1/10000000019*x] >>> pari.nfbasis([f, [2,p]]) [1, 1/10000000019*x] """ cdef Gen t0 cdef GEN g0 if fa is not None: t0 = objtogen(fa) g0 = t0.g elif flag: g0 = utoi(500000) else: g0 = NULL sig_on() return new_gen(old_nfbasis(self.g, NULL, g0)) def nfbasis_d(self, long flag=0, fa=None): """ Like :meth:`nfbasis`, but return a tuple ``(B, D)`` where `B` is the integral basis and `D` the discriminant. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> F = pari('x^3 - 2').nfinit() >>> F[0].nfbasis_d() ([1, x, x^2], -108) >>> G = pari('x^5 - 11').nfinit() >>> G[0].nfbasis_d() ([1, x, x^2, x^3, x^4], 45753125) >>> pari([-2,0,0,1]).Polrev().nfbasis_d() ([1, x, x^2], -108) """ cdef Gen t0 cdef GEN g0 cdef GEN ans, disc if fa is not None: t0 = objtogen(fa) g0 = t0.g elif flag & 1: g0 = utoi(500000) else: g0 = NULL sig_on() ans = old_nfbasis(self.g, &disc, g0) return new_gens2(ans, disc) def nfbasistoalg_lift(nf, x): r""" Transforms the column vector ``x`` on the integral basis into a polynomial representing the algebraic number. INPUT: - ``nf`` -- a number field - ``x`` -- a column of rational numbers of length equal to the degree of ``nf`` or a single rational number OUTPUT: - ``nf.nfbasistoalg(x).lift()`` Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari('x^3 - 17').nfinit() >>> K.nf_get_zk() [1, 1/3*x^2 - 1/3*x + 1/3, x] >>> K.nfbasistoalg_lift(42) 42 >>> K.nfbasistoalg_lift("[3/2, -5, 0]~") -5/3*x^2 + 5/3*x - 1/6 >>> K.nf_get_zk() * pari("[3/2, -5, 0]~") -5/3*x^2 + 5/3*x - 1/6 """ cdef Gen t0 = objtogen(x) sig_on() return new_gen(gel(basistoalg(nf.g, t0.g), 2)) def nfgenerator(self): """ Return the generator ``Mod(x, f)`` of this number field, where `f` is the defining polynomial and `x` its variable. """ f = self[0] x = f.variable() return x.Mod(f) def _nf_rnfeq(self, relpol): """ Return data for converting number field elements between absolute and relative representation. .. NOTE:: The output of this method is suitable for the methods :meth:`_eltabstorel`, :meth:`_eltabstorel_lift` and :meth:`_eltreltoabs`. Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari('y^2 + 1').nfinit() >>> K._nf_rnfeq('x^2 + 2') [x^4 + 6*x^2 + 1, 1/2*x^3 + 5/2*x, -1, y^2 + 1, x^2 + 2] """ cdef Gen t0 = objtogen(relpol) sig_on() return new_gen(nf_rnfeq(self.g, t0.g)) def _nf_nfzk(self, rnfeq): """ Return data for constructing relative number field elements from elements of the base field. INPUT: - ``rnfeq`` -- relative number field data as returned by :meth:`_nf_rnfeq` ..
NOTE:: The output of this method is suitable for the method :meth:`_nfeltup`. """ cdef Gen t0 = objtogen(rnfeq) sig_on() return new_gen(new_nf_nfzk(self.g, t0.g)) def _nfeltup(self, x, nfzk): """ Construct a relative number field element from an element of the base field. INPUT: - ``x`` -- element of the base field - ``nfzk`` -- relative number field data as returned by :meth:`_nf_nfzk` .. WARNING:: This is a low-level version of :meth:`rnfeltup` that only needs the output of :meth:`_nf_nfzk`, not a full PARI ``rnf`` structure. This method may raise errors or return undefined results if called with invalid arguments. Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> nf = pari('nfinit(y^2 - 2)') >>> nfzk = nf._nf_nfzk(nf._nf_rnfeq('x^2 - 3')) >>> nf._nfeltup('y', nfzk) -1/2*x^3 + 9/2*x """ cdef Gen t0 = objtogen(x) cdef Gen t1 = objtogen(nfzk) sig_on() return clone_gen(new_nfeltup(self.g, t0.g, t1.g)) def eval(self, *args, **kwds): """ Evaluate ``self`` with the given arguments. This is currently implemented in 3 cases: - univariate polynomials, rational functions, power series and Laurent series (using a single unnamed argument or keyword arguments), - any PARI object supporting the PARI function :pari:`substvec` (in particular, multivariate polynomials) using keyword arguments, - objects of type ``t_CLOSURE`` (functions in GP bytecode form) using unnamed arguments. In no case is mixing unnamed and keyword arguments allowed. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> f = pari('x^2 + 1') >>> f.type() 't_POL' >>> f.eval(pari('I')) 0 >>> f.eval(x=2) 5 >>> (1/f).eval(x=1) 1/2 The notation ``f(x)`` is an alternative for ``f.eval(x)``: >>> f(3) == f.eval(3) True >>> f = pari('Mod(x^2 + x + 1, 3)') >>> f(2) Mod(1, 3) Evaluating a power series: >>> f = pari('1 + x + x^3 + O(x^7)') >>> f(2*pari('y')**2) 1 + 2*y^2 + 8*y^6 + O(y^14) Substituting zero is sometimes possible, and trying to do so in illegal cases can raise various errors: >>> pari('1 + O(x^3)').eval(0) 1 >>> pari('1/x').eval(0) Traceback (most recent call last): ... PariError: impossible inverse in gdiv: 0 >>> pari('1/x + O(x^2)').eval(0) Traceback (most recent call last): ... PariError: impossible inverse in gsubst: 0 >>> pari('1/x + O(x^2)').eval(pari('O(x^3)')) Traceback (most recent call last): ... PariError: impossible inverse in ... >>> pari('O(x^0)').eval(0) Traceback (most recent call last): ... PariError: forbidden substitution t_SER , t_INT Evaluating multivariate polynomials: >>> f = pari('y^2 + x^3') >>> f(1) # Dangerous, depends on PARI variable ordering y^2 + 1 >>> f(x=1) # Safe y^2 + 1 >>> f(y=1) x^3 + 1 >>> f(1, 2) Traceback (most recent call last): ... 
TypeError: evaluating PARI t_POL takes exactly 1 argument (2 given) >>> f(y='x', x='2*y') x^2 + 8*y^3 >>> f() x^3 + y^2 It's not an error to substitute variables which do not appear: >>> f.eval(z=37) x^3 + y^2 >>> pari(42).eval(t=0) 42 We can define and evaluate closures as follows: >>> T = pari('n -> n + 2') >>> T.type() 't_CLOSURE' >>> T.eval(3) 5 >>> T = pari('() -> 42') >>> T() 42 >>> pr = pari('s -> print(s)') >>> pr.eval('"hello world"') hello world >>> f = pari('myfunc(x,y) = x*y') >>> f.eval(5, 6) 30 Default arguments work, missing arguments are treated as zero (like in GP): >>> f = pari("(x, y, z=1.0) -> [x, y, z]") >>> f(1, 2, 3) [1, 2, 3] >>> f(1, 2) [1, 2, 1.00000000000000] >>> f(1) [1, 0, 1.00000000000000] >>> f() [0, 0, 1.00000000000000] Variadic closures are supported as well (:trac:`18623`): >>> f = pari("(v[..])->length(v)") >>> f('a', f) 2 >>> g = pari("(x,y,z[..])->[x,y,z]") >>> g(), g(1), g(1,2), g(1,2,3), g(1,2,3,4) ([0, 0, []], [1, 0, []], [1, 2, []], [1, 2, [3]], [1, 2, [3, 4]]) Using keyword arguments, we can substitute in more complicated objects, for example a number field: >>> nf = pari('x^2 + 1').nfinit() >>> nf [x^2 + 1, [0, 1], -4, 1, [Mat([1, 0.E-38 + 1.00000000000000*I]), [1, 1.00000000000000; 1, -1.00000000000000], ..., [2, 0; 0, -2], [2, 0; 0, 2], [1, 0; 0, -1], [1, [0, -1; 1, 0]], [2]], [0.E-38 + 1.00000000000000*I], [1, x], [1, 0; 0, 1], [1, 0, 0, -1; 0, 1, 1, 0]] >>> nf(x='y') [y^2 + 1, [0, 1], -4, 1, [Mat([1, 0.E-38 + 1.00000000000000*I]), [1, 1.00000000000000; 1, -1.00000000000000], ..., [2, 0; 0, -2], [2, 0; 0, 2], [1, 0; 0, -1], [1, [0, -1; 1, 0]], [2]], [0.E-38 + 1.00000000000000*I], [1, y], [1, 0; 0, 1], [1, 0, 0, -1; 0, 1, 1, 0]] Tests: >>> T = pari('n -> 1/n') >>> T.type() 't_CLOSURE' >>> T(0) Traceback (most recent call last): ... PariError: _/_: impossible inverse in gdiv: 0 >>> pari('() -> 42')(1,2,3) Traceback (most recent call last): ... PariError: too many parameters in user-defined function call >>> pari('n -> n')(n=2) Traceback (most recent call last): ... TypeError: cannot evaluate a PARI closure using keyword arguments >>> pari('x + y')(4, y=1) Traceback (most recent call last): ... TypeError: mixing unnamed and keyword arguments not allowed when evaluating a PARI object >>> pari("12345")(4) Traceback (most recent call last): ... TypeError: cannot evaluate PARI t_INT using unnamed arguments """ return self(*args, **kwds) def __call__(self, *args, **kwds): """ Evaluate ``self`` with the given arguments. See ``eval``. 
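Tests: A quick check (added for illustration) that calling a ``Gen`` simply defers to :meth:`eval`: >>> from cypari2 import Pari >>> pari = Pari() >>> f = pari('x^3 - x') >>> f(2) == f.eval(2) True >>> f(2) 6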
""" cdef long t = typ(self.g) cdef Gen t0 cdef GEN result cdef long arity cdef long nargs = len(args) cdef long nkwds = len(kwds) # Closure must be evaluated using *args if t == t_CLOSURE: if nkwds: raise TypeError("cannot evaluate a PARI closure using keyword arguments") if closure_is_variadic(self.g): arity = closure_arity(self.g) - 1 args = list(args[:arity]) + [0]*(arity-nargs) + [args[arity:]] t0 = objtogen(args) sig_on() result = closure_callgenvec(self.g, t0.g) if result is gnil: clear_stack() return None return new_gen(result) # Evaluate univariate polynomials, rational functions and # series using *args if nargs: if nkwds: raise TypeError("mixing unnamed and keyword arguments not allowed when evaluating a PARI object") if not (t == t_POL or t == t_RFRAC or t == t_SER): raise TypeError("cannot evaluate PARI %s using unnamed arguments" % self.type()) if nargs != 1: raise TypeError("evaluating PARI %s takes exactly 1 argument (%d given)" % (self.type(), nargs)) t0 = objtogen(args[0]) sig_on() if t == t_POL or t == t_RFRAC: return new_gen(poleval(self.g, t0.g)) else: # t == t_SER return new_gen(gsubst(self.g, varn(self.g), t0.g)) # Call substvec() using **kwds cdef list V = [to_bytes(k) for k in kwds] # Variables as Python byte strings t0 = objtogen(kwds.values()) # Replacements sig_on() cdef GEN v = cgetg(nkwds+1, t_VEC) # Variables as PARI polynomials cdef long i for i in range(nkwds): varname = V[i] set_gel(v, i+1, pol_x(fetch_user_var(varname))) return new_gen(gsubstvec(self.g, v, t0.g)) def arity(self): """ Return the number of arguments of this ``t_CLOSURE``. >>> from cypari2 import Pari >>> pari = Pari() >>> pari("() -> 42").arity() 0 >>> pari("(x) -> x").arity() 1 >>> pari("(x,y,z) -> x+y+z").arity() 3 """ if typ(self.g) != t_CLOSURE: raise TypeError(f"arity() requires a t_CLOSURE") return closure_arity(self.g) def factorpadic(self, p, long r=20): """ p-adic factorization of the polynomial ``pol`` to precision ``r``. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pol = pari('x^2 - 1')**2 >>> pari(pol).factorpadic(5) [(1 + O(5^20))*x + (1 + O(5^20)), 2; (1 + O(5^20))*x + (4 + 4*5 + 4*5^2 + 4*5^3 + 4*5^4 + 4*5^5 + 4*5^6 + 4*5^7 + 4*5^8 + 4*5^9 + 4*5^10 + 4*5^11 + 4*5^12 + 4*5^13 + 4*5^14 + 4*5^15 + 4*5^16 + 4*5^17 + 4*5^18 + 4*5^19 + O(5^20)), 2] >>> pari(pol).factorpadic(5,3) [(1 + O(5^3))*x + (1 + O(5^3)), 2; (1 + O(5^3))*x + (4 + 4*5 + 4*5^2 + O(5^3)), 2] """ cdef Gen t0 = objtogen(p) sig_on() return new_gen(factorpadic(self.g, t0.g, r)) def ncols(self): """ Return the number of columns of self. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('matrix(19,8)').ncols() 8 """ cdef long n sig_on() n = glength(self.g) sig_off() return n def nrows(self): """ Return the number of rows of self. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('matrix(19,8)').nrows() 19 """ cdef long n sig_on() # if this matrix has no columns # then it has no rows. if self.ncols() == 0: sig_off() return 0 n = glength(gel(self.g, 1)) sig_off() return n def mattranspose(self): """ Transpose of the matrix self. 
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('[1,2,3; 4,5,6; 7,8,9]').mattranspose() [1, 4, 7; 2, 5, 8; 3, 6, 9] Unlike PARI, this always returns a matrix: >>> pari('[1,2,3]').mattranspose() [1; 2; 3] >>> pari('[1,2,3]~').mattranspose() Mat([1, 2, 3]) """ sig_on() return new_gen(gtrans(self.g)).Mat() def lllgram(self): return self.qflllgram(0) def lllgramint(self): return self.qflllgram(1) def qfrep(self, B, long flag=0): """ Vector of (half) the number of vectors of norms from 1 to `B` for the integral and definite quadratic form ``self``. Binary digits of flag mean 1: count vectors of even norm from 1 to `2B`, 2: return a ``t_VECSMALL`` instead of a ``t_VEC`` (which is faster). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> M = pari("[5,1,1;1,3,1;1,1,1]") >>> M.qfrep(20) [1, 1, 2, 2, 2, 4, 4, 3, 3, 4, 2, 4, 6, 0, 4, 6, 4, 5, 6, 4] >>> M.qfrep(20, flag=1) [1, 2, 4, 3, 4, 4, 0, 6, 5, 4, 12, 4, 4, 8, 0, 3, 8, 6, 12, 12] >>> M.qfrep(20, flag=2) Vecsmall([1, 1, 2, 2, 2, 4, 4, 3, 3, 4, 2, 4, 6, 0, 4, 6, 4, 5, 6, 4]) """ # PARI 2.7 always returns a t_VECSMALL, but for backwards # compatibility, we keep returning a t_VEC (unless flag & 2) cdef Gen t0 = objtogen(B) cdef GEN r sig_on() r = qfrep0(self.g, t0.g, flag & 1) if (flag & 2) == 0: r = vecsmall_to_vec(r) return new_gen(r) def matkerint(self, long flag=0): """ Return the integer kernel of a matrix. This is the LLL-reduced Z-basis of the kernel of the matrix x with integral entries. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('[2,1;2,1]').matkerint() [1; -2] >>> import warnings >>> with warnings.catch_warnings(record=True) as w: ... warnings.simplefilter('always') ... pari('[2,1;2,1]').matkerint(1) ... assert len(w) == 1 ... assert issubclass(w[0].category, DeprecationWarning) [1; -2] """ if flag: # Keep this deprecation warning as long as PARI supports # this deprecated flag from warnings import warn warn("the 'flag' argument of the PARI/GP function matkerint is obsolete", DeprecationWarning) sig_on() return new_gen(matkerint0(self.g, flag)) def factor(self, long limit=-1, proof=None): """ Return the factorization of x. INPUT: - ``limit`` -- (default: -1) is optional and can be set whenever x is of (possibly recursive) rational type. If limit is set, return partial factorization, using primes up to limit. - ``proof`` -- optional flag. If ``False`` (not the default), returned factors larger than `2^{64}` may only be pseudoprimes. If ``True``, always check primality. If not given, use the global PARI default ``factor_proven`` which is ``True`` by default in cypari. 
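Internally, the global PARI flag ``factor_proven`` is set for the duration of the call and restored afterwards (see the ``try``/``finally`` below), so passing ``proof`` never affects other computations.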
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('x^10-1').factor() [x - 1, 1; x + 1, 1; x^4 - x^3 + x^2 - x + 1, 1; x^4 + x^3 + x^2 + x + 1, 1] >>> pari(2**100-1).factor() [3, 1; 5, 3; 11, 1; 31, 1; 41, 1; 101, 1; 251, 1; 601, 1; 1801, 1; 4051, 1; 8101, 1; 268501, 1] >>> pari(2**100-1).factor(proof=True) [3, 1; 5, 3; 11, 1; 31, 1; 41, 1; 101, 1; 251, 1; 601, 1; 1801, 1; 4051, 1; 8101, 1; 268501, 1] >>> pari(2**100-1).factor(proof=False) [3, 1; 5, 3; 11, 1; 31, 1; 41, 1; 101, 1; 251, 1; 601, 1; 1801, 1; 4051, 1; 8101, 1; 268501, 1] We illustrate setting a limit: >>> pari(pari(10**50).nextprime()*pari(10**60).nextprime()*pari(10**4).nextprime()).factor(10**5) [10007, 1; 100000000000000000000000000000000000000000000000151000000000700000000000000000000000000000000000000000000001057, 1] Setting a limit is invalid when factoring polynomials: >>> pari('x^11 + 1').factor(limit=17) Traceback (most recent call last): ... PariError: incorrect type in boundfact (t_POL) """ cdef GEN g global factor_proven cdef int saved_factor_proven = factor_proven try: if proof is not None: factor_proven = 1 if proof else 0 sig_on() if limit >= 0: g = boundfact(self.g, limit) else: g = factor(self.g) return new_gen(g) finally: factor_proven = saved_factor_proven # Standard name for SageMath multiplicative_order = Gen_base.znorder def __abs__(self): return self.abs() def nextprime(self, bint add_one=False): """ nextprime(x): smallest pseudoprime greater than or equal to `x`. If ``add_one`` is non-zero, return the smallest pseudoprime strictly greater than `x`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(1).nextprime() 2 >>> pari(2).nextprime() 2 >>> pari(2).nextprime(add_one = 1) 3 >>> pari(2**100).nextprime() 1267650600228229401496703205653 """ sig_on() if add_one: return new_gen(nextprime(gaddsg(1, self.g))) return new_gen(nextprime(self.g)) def change_variable_name(self, var): """ In ``self``, which must be a ``t_POL`` or ``t_SER``, set the variable to ``var``. If the variable of ``self`` is already ``var``, then return ``self``. .. WARNING:: You should be careful with variable priorities when applying this on a polynomial or series of which the coefficients have polynomial components. To be safe, only use this function on polynomials with integer or rational coefficients. For a safer alternative, use :meth:`subst`. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> f = pari('x^3 + 17*x + 3') >>> f.change_variable_name("y") y^3 + 17*y + 3 >>> f = pari('1 + 2*y + O(y^10)') >>> f.change_variable_name("q") 1 + 2*q + O(q^10) >>> f.change_variable_name("y") is f True In PARI, ``I`` refers to the square root of -1, so it cannot be used as variable name. Note the difference with :meth:`subst`: >>> f = pari('x^2 + 1') >>> f.change_variable_name("I") Traceback (most recent call last): ... PariError: I already exists with incompatible valence >>> f.subst("x", "I") 0 """ cdef long n = get_var(var) if varn(self.g) == n: return self if typ(self.g) != t_POL and typ(self.g) != t_SER: raise TypeError("set_variable() only works for polynomials or power series") # Copy self and then change the variable in place sig_on() newg = clone_gen(self.g) setvarn(newg.g, n) return newg def nf_subst(self, z): """ Given a PARI number field ``self``, return the same PARI number field but in the variable ``z``. INPUT: - ``self`` -- A PARI number field being the output of ``nfinit()``, ``bnfinit()`` or ``bnrinit()``. 
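- ``z`` -- the variable (or expression) to substitute for the variable of ``self``.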
Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> K = pari('y^2 + 5').nfinit() We can substitute in a PARI ``nf`` structure: >>> K.nf_get_pol() y^2 + 5 >>> L = K.nf_subst('a') >>> L.nf_get_pol() a^2 + 5 We can also substitute in a PARI ``bnf`` structure: >>> K = pari('y^2 + 5').bnfinit() >>> K.nf_get_pol() y^2 + 5 >>> K.bnf_get_cyc() # Structure of class group [2] >>> L = K.nf_subst('a') >>> L.nf_get_pol() a^2 + 5 >>> L.bnf_get_cyc() # We still have a bnf after substituting [2] """ cdef Gen t0 = objtogen(z) sig_on() return new_gen(gsubst(self.g, gvar(self.g), t0.g)) def type(self): """ Return the PARI type of self as a string. .. NOTE:: In Cython, it is much faster to simply use typ(self.g) for checking PARI types. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(7).type() 't_INT' >>> pari('x').type() 't_POL' >>> pari('oo').type() 't_INFINITY' """ sig_on() s = type_name(typ(self.g)) sig_off() return to_string(s) def polinterpolate(self, ya, x): """ self.polinterpolate(ya,x,e): polynomial interpolation at x according to data vectors self, ya (i.e. return P such that P(self[i]) = ya[i] for all i). Also return an error estimate on the returned value. """ cdef Gen t0 = objtogen(ya) cdef Gen t1 = objtogen(x) cdef GEN dy, g sig_on() g = polint(self.g, t0.g, t1.g, &dy) return new_gens2(g, dy) def ellwp(self, z='z', long n=20, long flag=0, unsigned long precision=0): """ Return the value or the series expansion of the Weierstrass `P`-function at `z` on the lattice `self` (or the lattice defined by the elliptic curve `self`). INPUT: - ``self`` -- an elliptic curve created using ``ellinit`` or a list ``[om1, om2]`` representing generators for a lattice. - ``z`` -- (default: 'z') a complex number or a variable name (as string or PARI variable). - ``n`` -- (default: 20) if 'z' is a variable, compute the series expansion up to at least `O(z^n)`. - ``flag`` -- (default = 0): If ``flag`` is 0, compute only `P(z)`. If ``flag`` is 1, compute `[P(z), P'(z)]`. OUTPUT: - `P(z)` (if ``flag`` is 0) or `[P(z), P'(z)]` (if ``flag`` is 1). 
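(The optional keyword ``precision`` gives, in bits, the floating-point precision to use when the result is inexact; the default ``0`` means the current default precision.)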
Examples: We first define the elliptic curve X_0(11): >>> from cypari2 import Pari >>> pari = Pari() >>> E = pari([0,-1,1,-10,-20]).ellinit() Compute P(1): >>> E.ellwp(1) 13.9658695257485 Compute P(1+i), where i = sqrt(-1): >>> E.ellwp(pari(complex(1, 1))) -1.11510682565555 + 2.33419052307470*I >>> E.ellwp(complex(1, 1)) -1.11510682565555 + 2.33419052307470*I The series expansion, to the default `O(z^20)` precision: >>> E.ellwp() z^-2 + 31/15*z^2 + 2501/756*z^4 + 961/675*z^6 + 77531/41580*z^8 + 1202285717/928746000*z^10 + 2403461/2806650*z^12 + 30211462703/43418875500*z^14 + 3539374016033/7723451736000*z^16 + 413306031683977/1289540602350000*z^18 + O(z^20) Compute the series for wp to lower precision: >>> E.ellwp(n=4) z^-2 + 31/15*z^2 + O(z^4) Next we use the version where the input is generators for a lattice: >>> pari([1.2692, complex(0.63, 1.45)]).ellwp(1) 13.9656146936689 + 0.000644829272810...*I With flag=1, compute the pair P(z) and P'(z): >>> E.ellwp(1, flag=1) [13.9658695257485, 101.123860176015] """ cdef Gen t0 = objtogen(z) cdef GEN g0 = t0.g sig_on() # Polynomial or rational function as input: # emulate toser_i() but with given precision if typ(g0) == t_POL: g0 = RgX_to_ser(g0, n+4) elif typ(g0) == t_RFRAC: g0 = rfrac_to_ser(g0, n+4) cdef GEN r = ellwp0(self.g, g0, flag, prec_bits_to_words(precision)) if flag == 1 and have_ellwp_flag1_bug(): # Work around ellwp() bug: double the second element set_gel(r, 2, gmulgs(gel(r, 2), 2)) return new_gen(r) def debug(self, long depth=-1): r""" Show the internal structure of self (like the ``\x`` command in gp). Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari('[1/2, 1 + 1.0*I]').debug() [&=...] VEC(lg=3):... 1st component = [&=...] FRAC(lg=3):... num = [&=...] INT(lg=3):... (+,lgefint=3):... den = [&=...] INT(lg=3):... (+,lgefint=3):... 2nd component = [&=...] COMPLEX(lg=3):... real = [&=...] INT(lg=3):... (+,lgefint=3):... imag = [&=...] REAL(lg=...):... (+,expo=0):... """ sig_on() dbgGEN(self.g, depth) clear_stack() return def allocatemem(self, *args): """ Do not use this. Use ``pari.allocatemem()`` instead. Tests: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(2**10).allocatemem(2**20) Traceback (most recent call last): ... NotImplementedError: the method allocatemem() should not be used; use pari.allocatemem() instead """ raise NotImplementedError("the method allocatemem() should not be used; use pari.allocatemem() instead") cdef int Gen_clear(self) except -1: """ Implementation of tp_clear() for Gen. We need to override Cython's default since we do not want self.next to be cleared: it is crucial that the next Gen stays alive until remove_from_pari_stack(self) is called by __dealloc__. """ # Only itemcache needs to be cleared (<Gen>self).itemcache = None (<PyTypeObject*>Gen).tp_clear = Gen_clear @cython.boundscheck(False) @cython.wraparound(False) cdef Gen list_of_Gens_to_Gen(list s): """ Convert a Python list whose elements are all :class:`Gen` objects (this is not checked!) to a single PARI :class:`Gen` of type ``t_VEC``. This is called from :func:`objtogen` to convert iterables to PARI.
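The vector is assembled on the PARI stack and cloned off it (via ``clone_gen``) before being returned.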
Tests: >>> from cypari2.gen import objtogen >>> from cypari2 import Pari >>> pari = Pari() >>> objtogen(range(10)) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> objtogen(i**2 for i in range(5)) [0, 1, 4, 9, 16] >>> objtogen([pari("Mod(x, x^2+1)")]) [Mod(x, x^2 + 1)] >>> objtogen([]) [] """ cdef Py_ssize_t length = len(s) sig_on() cdef GEN g = cgetg(length+1, t_VEC) cdef Py_ssize_t i for i in range(length): set_gel(g, i+1, (<Gen>s[i]).g) return clone_gen(g) cpdef Gen objtogen(s): """ Convert any SageMath/Python object to a PARI :class:`Gen`. For SageMath types, this uses the ``__pari__()`` method on the object. Basic Python types like ``int`` are converted directly. For other types, the string representation is used. Examples: >>> from cypari2 import Pari >>> pari = Pari() >>> pari(0) 0 >>> pari([2,3,5]) [2, 3, 5] >>> a = pari(1) >>> a, a.type() (1, 't_INT') >>> from fractions import Fraction >>> a = pari(Fraction('1/2')) >>> a, a.type() (1/2, 't_FRAC') Conversion from reals uses the real's own precision: >>> a = pari(1.2); a, a.type(), a.bitprecision() (1.20000000000000, 't_REAL', 64) Conversion from strings uses the current PARI real precision. By default, this is 64 bits: >>> a = pari('1.2'); a, a.type(), a.bitprecision() (1.20000000000000, 't_REAL', 64) Unicode and bytes work fine: >>> pari(b"zeta(3)") 1.20205690315959 >>> pari(u"zeta(3)") 1.20205690315959 But we can change this precision: >>> pari.set_real_precision(35) # precision in decimal digits 15 >>> a = pari('Pi'); a, a.type(), a.bitprecision() (3.1415926535897932384626433832795029, 't_REAL', 128) >>> a = pari('1.2'); a, a.type(), a.bitprecision() (1.2000000000000000000000000000000000, 't_REAL', 128) Set the precision to 15 digits for the remaining tests: >>> pari.set_real_precision(15) 35 Conversion from basic Python types: >>> pari(int(-5)) -5 >>> pari(2**150) 1427247692705959881058285969449495136382746624 >>> import math >>> pari(math.pi) 3.14159265358979 >>> one = pari(complex(1,0)); one, one.type() (1.00000000000000, 't_COMPLEX') >>> pari(complex(0, 1)) 1.00000000000000*I >>> pari(complex(0.3, 1.7)) 0.300000000000000 + 1.70000000000000*I >>> pari(False) 0 >>> pari(True) 1 The following looks strange, but it is what PARI does: >>> pari(["print(x)"]) x [0] >>> pari("[print(x)]") x [0] Tests: >>> pari(None) Traceback (most recent call last): ... ValueError: Cannot convert None to pari """ if isinstance(s, Gen): return s try: m = s.__pari__ except AttributeError: pass else: return m() cdef GEN g = PyObject_AsGEN(s) if g is not NULL: res = new_gen_noclear(g) reset_avma() return res # Check for iterables.
Handle the common cases of lists and tuples # separately as an optimization cdef list L if isinstance(s, list): L = [objtogen(x) for x in s] return list_of_Gens_to_Gen(L) if isinstance(s, tuple): L = [objtogen(x) for x in s] return list_of_Gens_to_Gen(L) # Check for iterable object s try: L = [objtogen(x) for x in s] except TypeError: pass else: return list_of_Gens_to_Gen(L) if callable(s): return objtoclosure(s) if s is None: raise ValueError("Cannot convert None to pari") # Simply use the string representation return objtogen(str(s))
cypari2-2.1.2/cypari2/handle_error.pxd:
from .types cimport GEN cdef void _pari_init_error_handling() cdef int _pari_err_handle(GEN E) except 0 cdef void _pari_err_recover(long errnum)
cypari2-2.1.2/cypari2/handle_error.pyx:
""" Handling PARI errors ******************** AUTHORS: - Peter Bruin (September 2013): initial version (:trac:`9640`) - Jeroen Demeyer (January 2015): use ``cb_pari_err_handle`` (:trac:`14894`) """ #***************************************************************************** # Copyright (C) 2013 Peter Bruin # Copyright (C) 2015 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, division, print_function from cpython cimport PyErr_Occurred from cysignals.signals cimport sig_block, sig_unblock, sig_error from .paridecl cimport * from .paripriv cimport * from .stack cimport clone_gen_noclear, reset_avma, after_resize # We derive PariError from RuntimeError, for backward compatibility with # code that catches the latter. class PariError(RuntimeError): """ Error raised by PARI """ def errnum(self): r""" Return the PARI error number corresponding to this exception. EXAMPLES: >>> import cypari2 >>> pari = cypari2.Pari() >>> try: ... pari('1/0') ... except PariError as err: ... print(err.errnum()) 31 """ return self.args[0] def errtext(self): """ Return the message output by PARI when this error occurred. EXAMPLES: >>> import cypari2 >>> pari = cypari2.Pari() >>> try: ... pari('pi()') ... except PariError as e: ... print(e.errtext()) not a function in function call """ return self.args[1] def errdata(self): """ Return the error data (a ``t_ERROR`` gen) corresponding to this error. EXAMPLES: >>> import cypari2 >>> pari = cypari2.Pari() >>> try: ... pari('Mod(2,6)')**-1 ... except PariError as e: ... E = e.errdata() >>> E error("impossible inverse in Fp_inv: Mod(2, 6).") >>> E.component(2) Mod(2, 6) """ return self.args[2] def __repr__(self): r""" TESTS: >>> import cypari2 >>> pari = cypari2.Pari() >>> PariError(11) PariError(11) """ return "PariError(%d)"%self.errnum() def __str__(self): r""" Return a suitable message for displaying this exception. This is simply the error text with certain trailing characters stripped. EXAMPLES: >>> import cypari2 >>> pari = cypari2.Pari() >>> try: ... pari('1/0') ...
except PariError as err: ... print(err) _/_: impossible inverse in gdiv: 0 A syntax error: >>> pari('!@#$%^&*()') Traceback (most recent call last): ... PariError: syntax error, unexpected $undefined """ return self.errtext().rstrip(" .:") cdef void _pari_init_error_handling(): """ Set up our code for handling PARI errors. TESTS: >>> import cypari2 >>> pari = cypari2.Pari() >>> try: ... p = pari.polcyclo(-1) ... except PariError as e: ... print(e.errtext()) domain error in polcyclo: index <= 0 Warnings still work just like in GP:: >>> pari('warning("test")') """ global cb_pari_err_handle global cb_pari_err_recover cb_pari_err_handle = _pari_err_handle cb_pari_err_recover = _pari_err_recover cdef int _pari_err_handle(GEN E) except 0: """ Convert a PARI error into a Python exception. This function is a callback from the PARI error handler. EXAMPLES: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari('error("test")') Traceback (most recent call last): ... PariError: error: user error: test >>> pari(1)/pari(0) Traceback (most recent call last): ... PariError: impossible inverse in gdiv: 0 Test exceptions with a pointer to a PARI object: >>> from cypari2 import Pari >>> def exc(): ... K = Pari().nfinit("x^2 + 1") ... I = K.idealhnf(2) ... I[0] ... try: ... K.idealaddtoone(I, I) ... except RuntimeError as e: ... return e >>> L = [exc(), exc()] >>> print(L[0]) elements not coprime in idealaddtoone: [2, 0; 0, 2] [2, 0; 0, 2] """ cdef long errnum = E[1] cdef char* errstr cdef const char* s if errnum == e_STACK: # Custom error message for PARI stack overflow pari_error_string = "the PARI stack overflows (current size: {}; maximum size: {})\n" pari_error_string += "You can use pari.allocatemem() to change the stack size and try again" pari_error_string = pari_error_string.format(pari_mainstack.size, pari_mainstack.vsize) else: sig_block() try: errstr = pari_err2str(E) pari_error_string = errstr.decode('ascii') pari_free(errstr) finally: sig_unblock() s = closure_func_err() if s is not NULL: pari_error_string = s.decode('ascii') + ": " + pari_error_string raise PariError(errnum, pari_error_string, clone_gen_noclear(E)) cdef void _pari_err_recover(long errnum): """ Reset the error string and jump back to ``sig_on()``, either to retry the code (in case of no error) or to make the already-raised exception known to Python. """ reset_avma() # Special case errnum == -1 corresponds to a reallocation of the # PARI stack. This is not an error, so call after_resize() and # proceed as if nothing happened. if errnum < 0: after_resize() return # An exception was raised. Jump to the signal-handling code # which will cause sig_on() to see the exception. 
sig_error()
cypari2-2.1.2/cypari2/pari_instance.pxd:
from .types cimport * cimport cython from .gen cimport Gen cpdef long prec_bits_to_words(unsigned long prec_in_bits) cpdef long prec_words_to_bits(long prec_in_words) cpdef long default_bitprec() cdef class Pari_auto: pass cdef class Pari(Pari_auto): cdef readonly Gen PARI_ZERO, PARI_ONE, PARI_TWO cpdef Gen zero(self) cpdef Gen one(self) cdef Gen _empty_vector(self, long n) cdef long get_var(v) except -2
cypari2-2.1.2/cypari2/pari_instance.pyx:
# -*- coding: utf-8 -*- r""" Interface to the PARI library ***************************** AUTHORS: - William Stein (2006-03-01): updated to work with PARI 2.2.12-beta - William Stein (2006-03-06): added newtonpoly - Justin Walker: contributed some of the function definitions - Gonzalo Tornaria: improvements to conversions; much better error handling. - Robert Bradshaw, Jeroen Demeyer, William Stein (2010-08-15): Upgrade to PARI 2.4.3 (:trac:`9343`) - Jeroen Demeyer (2011-11-12): rewrite various conversion routines (:trac:`11611`, :trac:`11854`, :trac:`11952`) - Peter Bruin (2013-11-17): split off this file from gen.pyx (:trac:`15185`) - Jeroen Demeyer (2014-02-09): upgrade to PARI 2.7 (:trac:`15767`) - Jeroen Demeyer (2014-09-19): upgrade to PARI 2.8 (:trac:`16997`) - Jeroen Demeyer (2015-03-17): automatically generate methods from ``pari.desc`` (:trac:`17631` and :trac:`17860`) - Luca De Feo (2016-09-06): Separate Sage-specific components from generic C-interface in ``Pari`` (:trac:`20241`) Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari('5! + 10/x') (120*x + 10)/x >>> pari('intnum(x=0,13,sin(x)+sin(x^2) + x)') 85.6215190762676 >>> f = pari('x^3 - 1') >>> v = f.factor(); v [x - 1, 1; x^2 + x + 1, 1] >>> v[0] # indexing is 0-based unlike in GP. [x - 1, x^2 + x + 1]~ >>> v[1] [1, 1]~ For most functions, you can call the function as a method of ``pari`` or you can first create a :class:`Gen` object and then call the function as a method of that. In other words, the following two commands do the same: >>> pari('x^3 - 1').factor() [x - 1, 1; x^2 + x + 1, 1] >>> pari.factor('x^3 - 1') [x - 1, 1; x^2 + x + 1, 1] Arithmetic operations cause all arguments to be converted to PARI: >>> type(pari(1) + 1) <... 'cypari2.gen.Gen'> >>> type(1 + pari(1)) <... 'cypari2.gen.Gen'> Guide to real precision in the PARI interface ============================================= In the PARI interface, "real precision" refers to the precision of real numbers, so it is the floating-point precision. This is a non-trivial issue, since there are various interfaces for different things. Internal representation of floating-point numbers in PARI --------------------------------------------------------- Real numbers in PARI have a precision associated to them, which is always a multiple of the CPU wordsize. So, it is a multiple of 32 or 64 bits.
When converting a ``float`` from Python to PARI, the ``float`` has 53 bits of precision which is rounded up to 64 bits in PARI: >>> x = 1.0 >>> pari(x) 1.00000000000000 >>> pari(x).bitprecision() 64 It is possible to change the precision of a PARI object with the :meth:`Gen.bitprecision` method: >>> p = pari(1.0) >>> p.bitprecision() 64 >>> p = p.bitprecision(100) >>> p.bitprecision() # Rounded up to a multiple of the wordsize 128 Beware that these extra bits are just bogus. For example, this will not magically give a more precise approximation of ``math.pi``: >>> import math >>> p = pari(math.pi) >>> pari("Pi") - p 1.225148... E-16 >>> p = p.bitprecision(1000) >>> pari("Pi") - p 1.225148... E-16 Another way to create numbers with many bits is to use a string with many digits: >>> p = pari("3.1415926535897932384626433832795028842") >>> p.bitprecision() 128 .. _pari_output_precision: Output precision for printing ----------------------------- Even though PARI reals have a precision, not all significant bits are printed by default. The maximum number of digits when printing a PARI real can be set using the methods :meth:`Pari.set_real_precision_bits` or :meth:`Pari.set_real_precision`. Note that this will also change the input precision for strings, see :ref:`pari_input_precision`. We create a very precise approximation of pi and see how it is printed in PARI: >>> pi = pari.pi(precision=1024) The default precision is 15 digits: >>> pi 3.14159265358979 With a different precision, we see more digits. Note that this does not affect the object ``pi`` at all, it only affects how it is printed: >>> _ = pari.set_real_precision(50) >>> pi 3.1415926535897932384626433832795028841971693993751 Back to the default: >>> _ = pari.set_real_precision(15) >>> pi 3.14159265358979 .. _pari_input_precision: Input precision for function calls ---------------------------------- When we talk about precision for PARI functions, we need to distinguish three kinds of calls: 1. Using the string interface, for example ``pari("sin(1)")``. 2. Using the library interface with *exact* inputs, for example ``pari.sin(1)``. 3. Using the library interface with *inexact* inputs, for example ``pari.sin(1.0)``. In the first case, the relevant precision is the one set by the methods :meth:`Pari.set_real_precision_bits` or :meth:`Pari.set_real_precision`: >>> pari.set_real_precision_bits(150) >>> pari("sin(1)") 0.841470984807896506652502321630298999622563061 >>> pari.set_real_precision_bits(53) >>> pari("sin(1)") 0.841470984807897 In the second case, the precision can be given as the argument ``precision`` in the function call, with a default of 53 bits. The real precision set by :meth:`Pari.set_real_precision_bits` or :meth:`Pari.set_real_precision` does not affect the call (but it still affects printing). 
As explained before, the precision increases to a multiple of the wordsize (and you should not assume that the extra bits are meaningful): >>> a = pari.sin(1, precision=180); a 0.841470984807897 >>> a.bitprecision() 192 >>> b = pari.sin(1, precision=40); b 0.841470984807897 >>> b.bitprecision() 64 >>> c = pari.sin(1); c 0.841470984807897 >>> c.bitprecision() 64 >>> pari.set_real_precision_bits(90) >>> print(a); print(b); print(c) 0.841470984807896506652502322 0.8414709848078965067 0.8414709848078965067 In the third case, the precision is determined only by the inexact inputs and the ``precision`` argument is ignored: >>> pari.sin(1.0, precision=180).bitprecision() 64 >>> pari.sin(1.0, precision=40).bitprecision() 64 >>> pari.sin("1.0000000000000000000000000000000000000").bitprecision() 128 Tests: Check that the documentation is generated correctly: >>> from inspect import getdoc >>> getdoc(pari.Pi) 'The constant :math:`\\pi` ...' Check that output from PARI's print command is actually seen by Python (:trac:`9636`): >>> pari('print("test")') test Verify that ``nfroots()`` (which has an unusual signature with a non-default argument following a default argument) works: >>> pari.nfroots(x='x^4 - 1') [-1, 1] >>> pari.nfroots(pari.nfinit('t^2 + 1'), "x^4 - 1") [-1, 1, Mod(-t, t^2 + 1), Mod(t, t^2 + 1)] Reset default precision for the following tests: >>> pari.set_real_precision_bits(53) Test that interrupts work properly: >>> pari.allocatemem(8000000, 2**29) PARI stack size set to 8000000 bytes, maximum size set to ... >>> from cysignals.alarm import alarm, AlarmInterrupt >>> for i in range(1, 11): ... try: ... alarm(i/11.0) ... pari.binomial(2**100, 2**22) ... except AlarmInterrupt: ... pass Test that changing the stack size using ``default`` works properly: >>> pari.default("parisizemax", 2**23) >>> pari = cypari2.Pari() # clear stack >>> a = pari(1) >>> pari.default("parisizemax", 2**29) >>> a + a 2 >>> pari.default("parisizemax") 536870912 """ #***************************************************************************** # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, division import sys from libc.stdio cimport * cimport cython from cysignals.signals cimport sig_check, sig_on, sig_off, sig_error from .string_utils cimport to_string, to_bytes from .paridecl cimport * from .paripriv cimport * from .gen cimport Gen, objtogen from .stack cimport (new_gen, new_gen_noclear, clear_stack, set_pari_stack_size, before_resize, after_resize) from .handle_error cimport _pari_init_error_handling from .closure cimport _pari_init_closure # Default precision (in PARI words) for the PARI library interface, # when no explicit precision is given and the inputs are exact. cdef long prec = prec_bits_to_words(53) ################################################################# # conversions between various real precision models ################################################################# def prec_bits_to_dec(long prec_in_bits): r""" Convert from precision expressed in bits to precision expressed in decimal. 
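This is essentially floor(b * log10(2)) for b bits (the value returned by PARI's ``nbits2ndec``); for instance, 53 bits correspond to 15 decimal digits.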
Examples: >>> from cypari2.pari_instance import prec_bits_to_dec >>> prec_bits_to_dec(53) 15 >>> [(32*n, prec_bits_to_dec(32*n)) for n in range(1, 9)] [(32, 9), (64, 19), (96, 28), (128, 38), (160, 48), (192, 57), (224, 67), (256, 77)] """ return nbits2ndec(prec_in_bits) def prec_dec_to_bits(long prec_in_dec): r""" Convert from precision expressed in decimal to precision expressed in bits. Examples: >>> from cypari2.pari_instance import prec_dec_to_bits >>> prec_dec_to_bits(15) 50 >>> [(n, prec_dec_to_bits(n)) for n in range(10, 100, 10)] [(10, 34), (20, 67), (30, 100), (40, 133), (50, 167), (60, 200), (70, 233), (80, 266), (90, 299)] """ cdef double log_10 = 3.32192809488736 return int(prec_in_dec*log_10 + 1.0) # Add one to round up cpdef long prec_bits_to_words(unsigned long prec_in_bits): r""" Convert from precision expressed in bits to pari real precision expressed in words. Note: this rounds up to the nearest word, adjusts for the two codewords of a pari real, and is architecture-dependent. Examples: >>> from cypari2.pari_instance import prec_bits_to_words >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> prec_bits_to_words(70) == (5 if bitness == '32' else 4) True >>> ans32 = [(32, 3), (64, 4), (96, 5), (128, 6), (160, 7), (192, 8), (224, 9), (256, 10)] >>> ans64 = [(32, 3), (64, 3), (96, 4), (128, 4), (160, 5), (192, 5), (224, 6), (256, 6)] >>> [(32*n, prec_bits_to_words(32*n)) for n in range(1, 9)] == (ans32 if bitness == '32' else ans64) True """ if not prec_in_bits: return prec cdef unsigned long wordsize = BITS_IN_LONG # This equals ceil(prec_in_bits/wordsize) + 2 return (prec_in_bits - 1)//wordsize + 3 cpdef long prec_words_to_bits(long prec_in_words): r""" Convert from pari real precision expressed in words to precision expressed in bits. Note: this adjusts for the two codewords of a pari real, and is architecture-dependent. Examples: >>> from cypari2.pari_instance import prec_words_to_bits >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> prec_words_to_bits(10) == (256 if bitness == '32' else 512) True >>> ans32 = [(3, 32), (4, 64), (5, 96), (6, 128), (7, 160), (8, 192), (9, 224)] >>> ans64 = [(3, 64), (4, 128), (5, 192), (6, 256), (7, 320), (8, 384), (9, 448)] # 64-bit >>> [(n, prec_words_to_bits(n)) for n in range(3, 10)] == (ans32 if bitness == '32' else ans64) True """ # see user's guide to the pari library, page 10 return (prec_in_words - 2) * BITS_IN_LONG cpdef long default_bitprec(): r""" Return the default precision in bits. Examples: >>> from cypari2.pari_instance import default_bitprec >>> default_bitprec() 64 """ return (prec - 2) * BITS_IN_LONG def prec_dec_to_words(long prec_in_dec): r""" Convert from precision expressed in decimal to precision expressed in words. Note: this rounds up to the nearest word, adjusts for the two codewords of a pari real, and is architecture-dependent. 
Examples: >>> from cypari2.pari_instance import prec_dec_to_words >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> prec_dec_to_words(38) == (6 if bitness == '32' else 4) True >>> ans32 = [(10, 4), (20, 5), (30, 6), (40, 7), (50, 8), (60, 9), (70, 10), (80, 11)] >>> ans64 = [(10, 3), (20, 4), (30, 4), (40, 5), (50, 5), (60, 6), (70, 6), (80, 7)] # 64-bit >>> [(n, prec_dec_to_words(n)) for n in range(10, 90, 10)] == (ans32 if bitness == '32' else ans64) True """ return prec_bits_to_words(prec_dec_to_bits(prec_in_dec)) def prec_words_to_dec(long prec_in_words): r""" Convert from precision expressed in words to precision expressed in decimal. Note: this adjusts for the two codewords of a pari real, and is architecture-dependent. Examples: >>> from cypari2.pari_instance import prec_words_to_dec >>> import sys >>> bitness = '64' if sys.maxsize > (1 << 32) else '32' >>> prec_words_to_dec(5) == (28 if bitness == '32' else 57) True >>> ans32 = [(3, 9), (4, 19), (5, 28), (6, 38), (7, 48), (8, 57), (9, 67)] >>> ans64 = [(3, 19), (4, 38), (5, 57), (6, 77), (7, 96), (8, 115), (9, 134)] >>> [(n, prec_words_to_dec(n)) for n in range(3, 10)] == (ans32 if bitness == '32' else ans64) True """ return prec_bits_to_dec(prec_words_to_bits(prec_in_words)) # Callbacks from PARI to print stuff using sys.stdout.write() instead # of C library functions like puts(). cdef PariOUT python_pariOut cdef void python_putchar(char c): cdef char s[2] s[0] = c s[1] = 0 try: # avoid string conversion if possible sys.stdout.buffer.write(s) except AttributeError: sys.stdout.write(to_string(s)) # Let PARI think the last character was a newline, # so it doesn't print one when an error occurs. pari_set_last_newline(1) cdef void python_puts(const char* s): try: # avoid string conversion if possible sys.stdout.buffer.write(s) except AttributeError: sys.stdout.write(to_string(s)) pari_set_last_newline(1) cdef void python_flush(): sys.stdout.flush() include 'auto_instance.pxi' cdef class Pari(Pari_auto): def __cinit__(self): r""" (Re)-initialize the PARI library. Tests: >>> from cypari2.pari_instance import Pari >>> Pari.__new__(Pari) Interface to the PARI C library >>> pari = Pari() >>> pari("print('hello')") """ # PARI is already initialized, nothing to do... if avma: return # Take 1MB as minimal stack. Use maxprime=0, which PARI will # internally increase to some small value like 65537. # (see function initprimes in src/language/forprime.c) pari_init_opts(1000000 * sizeof(long), 0, INIT_DFTm) after_resize() # Disable PARI's stack overflow checking which is incompatible # with multi-threading. pari_stackcheck_init(NULL) _pari_init_error_handling() _pari_init_closure() # Set printing functions global pariOut, pariErr pariOut = &python_pariOut pariOut.putch = python_putchar pariOut.puts = python_puts pariOut.flush = python_flush # Use 53 bits as default precision self.set_real_precision_bits(53) # Disable pretty-printing GP_DATA.fmt.prettyp = 0 # This causes PARI/GP to use output independent of the terminal # (which is what we want for the PARI library interface). GP_DATA.flags = gpd_TEST # Ensure that Galois groups are represented in a sane way, # see the polgalois section of the PARI users manual. global new_galois_format new_galois_format = 1 # By default, factor() should prove primality of returned # factors. This not only influences the factor() function, but # also many functions indirectly using factoring. 
global factor_proven factor_proven = 1 # Monkey-patch default(parisize) and default(parisizemax) ep = pari_is_default("parisize") if ep: ep.value = patched_parisize ep = pari_is_default("parisizemax") if ep: ep.value = patched_parisizemax def __init__(self, size_t size=8000000, size_t sizemax=0, unsigned long maxprime=500000): """ (Re)-Initialize the PARI system. INPUT: - ``size`` -- (default: 8000000) the number of bytes for the initial PARI stack (see notes below) - ``sizemax`` -- the maximal number of bytes for the dynamically increasing PARI stack. The default ``0`` means to use the same value as ``size`` (see notes below) - ``maxprime`` -- (default: 500000) limit on the primes in the precomputed prime number table which is used for sieving algorithms When the PARI system is already initialized, the PARI stack is only grown if ``size`` is greater than the current stack, and the table of primes is only computed if ``maxprime`` is larger than the current bound. Examples: >>> from cypari2.pari_instance import Pari >>> pari = Pari() >>> pari2 = Pari(10**7) >>> pari2 Interface to the PARI C library >>> pari2 is pari False >>> pari2.PARI_ZERO == pari.PARI_ZERO True >>> pari2 = Pari(10**6) >>> pari.stacksize(), pari2.stacksize() (10000000, 10000000) >>> Pari().default("primelimit") 500000 >>> Pari(maxprime=20000).default("primelimit") 20000 For more information about how precision works in the PARI interface, see :mod:`cypari2.pari_instance`. .. NOTE:: PARI has a "real" stack size (``size``) and a "virtual" stack size (``sizemax``). The idea is that the real stack will be used if possible, but that the stack might be increased up to ``sizemax`` bytes. Therefore, it is not a problem to set ``sizemax`` to a large value. On the other hand, it also makes no sense to set this to a value larger than what your system can handle. .. NOTE:: Normally, all results from PARI computations end up on the PARI stack. CyPari2 tries to keep everything on the PARI stack. However, if over half of the PARI stack space is used, all live objects on the PARI stack are copied to the PARI heap (they become so-called clones). """ # Increase (but don't decrease) size and sizemax to the # requested value size = max(size, pari_mainstack.rsize) sizemax = max(max(size, pari_mainstack.vsize), sizemax) set_pari_stack_size(size, sizemax) # Increase the table of primes if needed GP_DATA.primelimit = maxprime self.init_primes(maxprime) # Initialize some constants self.PARI_ZERO = new_gen_noclear(gen_0) self.PARI_ONE = new_gen_noclear(gen_1) self.PARI_TWO = new_gen_noclear(gen_2) IF HAVE_PLOT_SVG: # If we are running under IPython, setup for displaying SVG plots. if "IPython" in sys.modules: pari_set_plot_engine(get_plot_ipython) def debugstack(self): r""" Print the internal PARI variables ``top`` (top of stack), ``avma`` (available memory address, think of this as the stack pointer), ``bot`` (bottom of stack). """ # We deliberately use low-level functions to minimize the # chances that something goes wrong here (for example, if we # are out of memory). printf("top = %p\navma = %p\nbot = %p\nsize = %lu\n", pari_mainstack.top, avma, pari_mainstack.bot, pari_mainstack.rsize) fflush(stdout) def __repr__(self): return "Interface to the PARI C library" def __hash__(self): return 907629390 def set_debug_level(self, level): """ Set the debug PARI C library variable. """ self.default('debug', int(level)) def get_debug_level(self): """ Set the debug PARI C library variable. 
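More precisely, return the current value of the ``debug`` default (an illustrative check; the default value is 0): >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.get_debug_level() 0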
""" return int(self.default('debug')) def set_real_precision_bits(self, n): """ Sets the PARI default real precision in bits. This is used both for creation of new objects from strings and for printing. It determines the number of digits in which real numbers numbers are printed. It also determines the precision of objects created by parsing strings (e.g. pari('1.2')), which is *not* the normal way of creating new PARI objects using cypari. It has *no* effect on the precision of computations within the PARI library. .. seealso:: :meth:`set_real_precision` to set the precision in decimal digits. Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.set_real_precision_bits(200) >>> pari('1.2') 1.20000000000000000000000000000000000000000000000000000000000 >>> pari.set_real_precision_bits(53) """ cdef bytes strn = to_bytes(n) sig_on() sd_realbitprecision(strn, d_SILENT) clear_stack() def get_real_precision_bits(self): """ Return the current PARI default real precision in bits. This is used both for creation of new objects from strings and for printing. It determines the number of digits in which real numbers numbers are printed. It also determines the precision of objects created by parsing strings (e.g. pari('1.2')), which is *not* the normal way of creating new PARI objects using cypari. It has *no* effect on the precision of computations within the PARI library. .. seealso:: :meth:`get_real_precision` to get the precision in decimal digits. Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.get_real_precision_bits() 53 """ sig_on() r = itos(sd_realbitprecision(NULL, d_RETURN)) clear_stack() return r def set_real_precision(self, long n): """ Sets the PARI default real precision in decimal digits. This is used both for creation of new objects from strings and for printing. It is the number of digits *IN DECIMAL* in which real numbers are printed. It also determines the precision of objects created by parsing strings (e.g. pari('1.2')), which is *not* the normal way of creating new PARI objects in CyPari2. It has *no* effect on the precision of computations within the pari library. Returns the previous PARI real precision. .. seealso:: :meth:`set_real_precision_bits` to set the precision in bits. Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.set_real_precision(60) 15 >>> pari('1.2') 1.20000000000000000000000000000000000000000000000000000000000 >>> pari.set_real_precision(15) 60 """ old = self.get_real_precision() self.set_real_precision_bits(prec_dec_to_bits(n)) return old def get_real_precision(self): """ Returns the current PARI default real precision. This is used both for creation of new objects from strings and for printing. It is the number of digits *IN DECIMAL* in which real numbers are printed. It also determines the precision of objects created by parsing strings (e.g. pari('1.2')), which is *not* the normal way of creating new PARI objects in CyPari2. It has *no* effect on the precision of computations within the pari library. .. seealso:: :meth:`get_real_precision_bits` to get the precision in bits. Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.get_real_precision() 15 """ cdef long r sig_on() r = itos(sd_realprecision(NULL, d_RETURN)) sig_off() return r def set_series_precision(self, long n): global precdl precdl = n def get_series_precision(self): return precdl def version(self): """ Return the PARI version as tuple with 3 or 4 components: (major, minor, patch) or (major, minor, patch, VCSversion). 
Examples: >>> from cypari2 import Pari >>> Pari().version() >= (2, 9, 0) True """ return tuple(Pari_auto.version(self)) def complex(self, re, im): """ Create a new complex number, initialized from re and im. """ cdef Gen t0 = objtogen(re) cdef Gen t1 = objtogen(im) sig_on() return new_gen(mkcomplex(t0.g, t1.g)) def __call__(self, s): """ Create the PARI object obtained by evaluating s using PARI. Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari(0) 0 >>> pari([2,3,5]) [2, 3, 5] >>> a = pari(1); a, a.type() (1, 't_INT') >>> a = pari('1/2'); a, a.type() (1/2, 't_FRAC') >>> s = pari(u'"éàèç"') >>> s.type() 't_STR' Some commands are just executed without returning a value: >>> pari("dummy = 0; kill(dummy)") >>> print(pari("dummy = 0; kill(dummy)")) None See :func:`objtogen` for more examples. """ cdef Gen g = objtogen(s) if g.g is gnil: return None return g cpdef Gen zero(self): """ Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.zero() 0 """ return self.PARI_ZERO cpdef Gen one(self): """ Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.one() 1 """ return self.PARI_ONE def new_with_bits_prec(self, s, long precision): r""" pari.new_with_bits_prec(self, s, precision) creates s as a PARI Gen with (at most) precision *bits* of precision. """ # TODO: deprecate cdef unsigned long old_prec old_prec = GP_DATA.fmt.sigd precision = prec_bits_to_dec(precision) if not precision: precision = old_prec self.set_real_precision(precision) x = objtogen(s) self.set_real_precision(old_prec) return x ############################################################ # Initialization ############################################################ def stacksize(self): r""" Return the current size of the PARI stack, which is `10^6` by default. However, the stack size is automatically increased when needed up to the given maximum stack size. .. SEEALSO:: - :meth:`stacksizemax` to get the maximum stack size - :meth:`allocatemem` to change the current or maximum stack size Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.stacksize() 8000000 >>> pari.allocatemem(2**18, silent=True) >>> pari.stacksize() 262144 """ return pari_mainstack.size def stacksizemax(self): r""" Return the maximum size of the PARI stack, which is determined at startup in terms of available memory. Usually, the PARI stack size is (much) smaller than this maximum but the stack will be increased up to this maximum if needed. .. SEEALSO:: - :meth:`stacksize` to get the current stack size - :meth:`allocatemem` to change the current or maximum stack size Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.allocatemem(2**18, 2**26, silent=True) >>> pari.stacksizemax() 67108864 """ return pari_mainstack.vsize def allocatemem(self, size_t s=0, size_t sizemax=0, *, silent=False): r""" Change the PARI stack space to the given size ``s`` (or double the current size if ``s`` is `0`) and change the maximum stack size to ``sizemax``. PARI tries to use only its current stack (the size which is set by ``s``), but it will increase its stack if needed up to the maximum size which is set by ``sizemax``. The PARI stack is never automatically shrunk. You can use the command ``pari.allocatemem(10^6)`` to reset the size to `10^6`, which is the default size at startup. Note that the results of computations using cypari are copied to the Python heap, so they take up no space in the PARI stack. The PARI stack is cleared after every computation. 
It does no real harm to set this to a small value as the PARI stack will be automatically enlarged when we run out of memory. INPUT: - ``s`` - an integer (default: 0). A non-zero argument is the size in bytes of the new PARI stack. If `s` is zero, double the current stack size. - ``sizemax`` - an integer (default: 0). A non-zero argument is the maximum size in bytes of the PARI stack. If ``sizemax`` is 0, the maximum of the current maximum and ``s`` is taken. Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.allocatemem(10**7, 10**7) PARI stack size set to 10000000 bytes, maximum size set to 100... >>> pari.allocatemem() # Double the current size PARI stack size set to 20000000 bytes, maximum size set to 200... >>> pari.stacksize() 20000000 >>> pari.allocatemem(10**6) PARI stack size set to 1000000 bytes, maximum size set to 200... The following computation will automatically increase the PARI stack size: >>> a = pari('2^100000000') ``a`` is now a Python variable on the Python heap and does not take up any space on the PARI stack. The PARI stack is still large because of the computation of ``a``: >>> pari.stacksize() > 10**6 True Setting a small maximum size makes this fail: >>> pari.allocatemem(10**6, 2**22) PARI stack size set to 1000000 bytes, maximum size set to 4194304 >>> a = pari('2^100000000') Traceback (most recent call last): ... PariError: _^s: the PARI stack overflows (current size: 1000000; maximum size: 4194304) You can use pari.allocatemem() to change the stack size and try again Tests: Do the same without using the string interface and starting from a very small stack size: >>> pari.allocatemem(1, 2**26) PARI stack size set to 1024 bytes, maximum size set to 67108864 >>> a = pari(2)**100000000 >>> pari.stacksize() > 10**6 True We do not allow ``sizemax`` less than ``s``: >>> pari.allocatemem(10**7, 10**6) Traceback (most recent call last): ... ValueError: the maximum size (1000000) should be at least the stack size (10000000) """ if s == 0: s = pari_mainstack.size * 2 if s < pari_mainstack.size: raise OverflowError("cannot double stack size") elif s < 1024: s = 1024 # arbitrary minimum size if sizemax == 0: # For the default sizemax, use the maximum of current # sizemax and the given size s. if pari_mainstack.vsize > s: sizemax = pari_mainstack.vsize else: sizemax = s elif sizemax < s: raise ValueError("the maximum size ({}) should be at least the stack size ({})".format(sizemax, s)) set_pari_stack_size(s, sizemax) if not silent: print("PARI stack size set to {} bytes, maximum size set to {}". format(self.stacksize(), self.stacksizemax())) @staticmethod def pari_version(): """ Return a string describing the version of PARI/GP. >>> from cypari2 import Pari >>> Pari.pari_version() 'GP/PARI CALCULATOR Version ...' """ return to_string(PARIVERSION) def init_primes(self, unsigned long M): """ Recompute the primes table including at least all primes up to M (but possibly more). Examples: >>> import cypari2 >>> pari = cypari2.Pari() >>> pari.init_primes(200000) We make sure that ticket :trac:`11741` has been fixed: >>> pari.init_primes(2**30) Traceback (most recent call last): ...
        ValueError: Cannot compute primes beyond 436273290
        """
        # Hardcoded bound in PARI sources (language/forprime.c)
        if M > 436273289:
            raise ValueError("Cannot compute primes beyond 436273290")
        if M <= maxprime():
            return
        sig_on()
        initprimetable(M)
        sig_off()

    def primes(self, n=None, end=None):
        """
        Return a pari vector containing the first `n` primes, the
        primes in the interval `[n, end]`, or the primes up to `end`.

        INPUT:

        Either

        - ``n`` -- integer

        or

        - ``n`` -- list or tuple `[a, b]` defining an interval of
          primes

        or

        - ``n, end`` -- start and end point of an interval of primes

        or

        - ``end`` -- end point for the list of primes

        OUTPUT: a PARI list of prime numbers

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.primes(3)
        [2, 3, 5]
        >>> pari.primes(10)
        [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
        >>> pari.primes(20)
        [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]
        >>> len(pari.primes(1000))
        1000
        >>> pari.primes(11,29)
        [11, 13, 17, 19, 23, 29]
        >>> pari.primes((11,29))
        [11, 13, 17, 19, 23, 29]
        >>> pari.primes(end=29)
        [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
        >>> pari.primes(10**30, 10**30 + 100)
        [1000000000000000000000000000057, 1000000000000000000000000000099]

        Tests:

        >>> pari.primes(0)
        []
        >>> pari.primes(-1)
        []
        >>> pari.primes(end=1)
        []
        >>> pari.primes(end=-1)
        []
        >>> pari.primes(3,2)
        []
        """
        cdef Gen t0, t1
        if end is None:
            t0 = objtogen(n)
            sig_on()
            return new_gen(primes0(t0.g))
        elif n is None:
            t0 = self.PARI_TWO  # First prime
        else:
            t0 = objtogen(n)
        t1 = objtogen(end)
        sig_on()
        return new_gen(primes_interval(t0.g, t1.g))

    euler = Pari_auto.Euler
    pi = Pari_auto.Pi

    def polchebyshev(self, long n, v=None):
        """
        Chebyshev polynomial of the first kind of degree `n`, in the
        variable `v`.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.polchebyshev(7)
        64*x^7 - 112*x^5 + 56*x^3 - 7*x
        >>> pari.polchebyshev(7, 'z')
        64*z^7 - 112*z^5 + 56*z^3 - 7*z
        >>> pari.polchebyshev(0)
        1
        """
        sig_on()
        return new_gen(polchebyshev1(n, get_var(v)))

    def factorial_int(self, long n):
        """
        Return the factorial of the integer `n` as a PARI gen. The
        result is an exact integer.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.factorial_int(0)
        1
        >>> pari.factorial_int(1)
        1
        >>> pari.factorial_int(5)
        120
        >>> pari.factorial_int(25)
        15511210043330985984000000
        """
        sig_on()
        return new_gen(mpfact(n))

    def polsubcyclo(self, long n, long d, v=None):
        r"""
        polsubcyclo(n, d, v=x): return the pari list of polynomial(s)
        defining the sub-abelian extensions of degree `d` of the
        cyclotomic field `\QQ(\zeta_n)`, where `d` divides `\phi(n)`.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.polsubcyclo(8, 4)
        [x^4 + 1]
        >>> pari.polsubcyclo(8, 2, 'z')
        [z^2 + 2, z^2 - 2, z^2 + 1]
        >>> pari.polsubcyclo(8, 1)
        [x - 1]
        >>> pari.polsubcyclo(8, 3)
        []
        """
        cdef Gen plist
        sig_on()
        plist = new_gen(polsubcyclo(n, d, get_var(v)))
        if typ(plist.g) != t_VEC:
            return self.vector(1, [plist])
        else:
            return plist

    def setrand(self, seed):
        """
        Set PARI's current random number seed.

        INPUT:

        - ``seed`` -- either a strictly positive integer or a GEN of
          type ``t_VECSMALL`` as output by ``getrand()``

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.setrand(50)
        >>> a = pari.getrand()
        >>> pari.setrand(a)
        >>> a == pari.getrand()
        True

        Tests:

        Check that invalid inputs are handled properly:

        >>> pari.setrand("foobar")
        Traceback (most recent call last):
        ...
        PariError: incorrect type in setrand (t_POL)
        """
        cdef Gen t0 = objtogen(seed)
        sig_on()
        setrand(t0.g)
        sig_off()

    def vector(self, long n, entries=None):
        """
        vector(long n, entries=None):
        Create and return the length `n` PARI vector with the given
        list of entries.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.vector(5, [1, 2, 5, 4, 3])
        [1, 2, 5, 4, 3]
        >>> pari.vector(2, ['x', 1])
        [x, 1]
        >>> pari.vector(2, ['x', 1, 5])
        Traceback (most recent call last):
        ...
        IndexError: length of entries (=3) must equal n (=2)
        """
        # TODO: deprecate
        v = self._empty_vector(n)
        if entries is not None:
            if len(entries) != n:
                raise IndexError("length of entries (=%s) must equal n (=%s)" %
                                 (len(entries), n))
            for i, x in enumerate(entries):
                v[i] = x
        return v

    cdef Gen _empty_vector(self, long n):
        cdef Gen v
        sig_on()
        v = new_gen(zerovec(n))
        return v

    def matrix(self, long m, long n, entries=None):
        """
        matrix(long m, long n, entries=None):
        Create and return the `m` x `n` PARI matrix with the given
        list of entries.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.matrix(3, 3, range(9))
        [0, 1, 2; 3, 4, 5; 6, 7, 8]
        """
        cdef long i, j, k
        cdef Gen x

        sig_on()
        A = new_gen(zeromatcopy(m, n))
        if entries is not None:
            if len(entries) != m * n:
                raise IndexError("len of entries (=%s) must be %s*%s=%s" %
                                 (len(entries), m, n, m*n))
            k = 0
            for i in range(m):
                for j in range(n):
                    sig_check()
                    x = objtogen(entries[k])
                    set_gcoeff(A.g, i+1, j+1, x.ref_target())
                    A.cache((i, j), x)
                    k += 1
        return A

    def genus2red(self, P, p=None):
        r"""
        Let `P` be a polynomial with integer coefficients. Determines
        the reduction of the (proper, smooth) genus 2 curve `C/\QQ`,
        defined by the hyperelliptic equation `y^2 = P`. The special
        syntax ``genus2red([P,Q])`` is also allowed, where the
        polynomials `P` and `Q` have integer coefficients, to
        represent the model `y^2 + Q(x)y = P(x)`.

        If the second argument `p` is specified, it must be a prime.
        Then only the local information at `p` is computed and
        returned.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> x = pari('x')
        >>> pari.genus2red([-5*x**5, x**3 - 2*x**2 - 2*x + 1])
        [1416875, [2, -1; 5, 4; 2267, 1], x^6 - 240*x^4 - 2550*x^3 - 11400*x^2 - 24100*x - 19855, [[2, [2, [Mod(1, 2)]], []], [5, [1, []], ["[V] page 156", [3]]], [2267, [2, [Mod(432, 2267)]], ["[I{1-0-0}] page 170", []]]]]
        >>> pari.genus2red([-5*x**5, x**3 - 2*x**2 - 2*x + 1],2267)
        [2267, Mat([2267, 1]), x^6 - 24*x^5 + 10*x^3 - 4*x + 1, [2267, [2, [Mod(432, 2267)]], ["[I{1-0-0}] page 170", []]]]
        """
        cdef Gen t0 = objtogen(P)
        if p is None:
            sig_on()
            return new_gen(genus2red(t0.g, NULL))
        cdef Gen t1 = objtogen(p)
        sig_on()
        return new_gen(genus2red(t0.g, t1.g))

    def List(self, x=None):
        """
        Create an empty list or convert `x` to a list.

        Examples:

        >>> import cypari2
        >>> pari = cypari2.Pari()
        >>> pari.List(range(5))
        List([0, 1, 2, 3, 4])
        >>> L = pari.List()
        >>> L
        List([])
        >>> L.listput(42, 1)
        42
        >>> L
        List([42])
        >>> L.listinsert(24, 1)
        24
        >>> L
        List([24, 42])
        """
        if x is None:
            sig_on()
            return new_gen(mklist())
        cdef Gen t0 = objtogen(x)
        sig_on()
        return new_gen(gtolist(t0.g))


cdef long get_var(v) except -2:
    """
    Convert ``v`` into a PARI variable number.

    If ``v`` is a PARI object, return the variable number of
    ``variable(v)``. If ``v`` is ``None``, return -1.
    Otherwise, treat ``v`` as a string and return the number of
    the variable named ``v``.

    OUTPUT: a PARI variable number (varn) or -1 if there is no
    variable number.

    .. WARNING::

        You can easily create variables with garbage names using
        this function.
        This can actually be used as a feature, if you want variable
        names which cannot be confused with ordinary user variables.

    Examples:

    We test this function using ``Pol()`` which calls this function:

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari("[1,0]").Pol()
    x
    >>> pari("[2,0]").Pol('x')
    2*x
    >>> pari("[Pi,0]").Pol('!@#$%^&')
    3.14159265358979*!@#$%^&

    We can use ``varhigher()`` and ``varlower()`` to create temporary
    variables without a name. The ``"xx"`` below is just a string to
    display the variable, it doesn't create a variable ``"xx"``:

    >>> xx = pari.varhigher("xx")
    >>> pari("[x,0]").Pol(xx)
    x*xx

    Indeed, this is not the same as:

    >>> pari("[x,0]").Pol("xx")
    Traceback (most recent call last):
    ...
    PariError: incorrect priority in gtopoly: variable x <= xx
    """
    if v is None:
        return -1
    cdef long varno
    if isinstance(v, Gen):
        sig_on()
        varno = gvar((<Gen>v).g)
        sig_off()
        if varno < 0:
            return -1
        else:
            return varno
    cdef bytes s = to_bytes(v)
    sig_on()
    varno = fetch_user_var(s)
    sig_off()
    return varno


# Monkey-patched versions of default(parisize) and default(parisizemax)
# which call before_resize() and after_resize().
# The monkey-patching is set up in PariInstance.__cinit__
cdef GEN patched_parisize(const char* v, long flag):
    # Cast to int(*)() to avoid exception handling
    if (<int(*)()>before_resize)():
        sig_error()
    return sd_parisize(v, flag)

cdef GEN patched_parisizemax(const char* v, long flag):
    # Cast to int(*)() to avoid exception handling
    if (<int(*)()>before_resize)():
        sig_error()
    return sd_parisizemax(v, flag)

IF HAVE_PLOT_SVG:
    cdef void get_plot_ipython(PARI_plot* T):
        # Values copied from src/graph/plotsvg.c in PARI sources
        T.width = 480
        T.height = 320
        T.hunit = 3
        T.vunit = 3
        T.fwidth = 9
        T.fheight = 12
        T.draw = draw_ipython

    cdef void draw_ipython(PARI_plot *T, GEN w, GEN x, GEN y):
        global avma
        cdef pari_sp av = avma
        cdef char* svg = rect2svg(w, x, y, T)
        from IPython.core.display import SVG, display
        display(SVG(svg))
        avma = av

cypari2-2.1.2/cypari2/paridecl.pxd

# distutils: libraries = gmp pari
"""
Declarations for non-inline functions from PARI.

This file contains all declarations from headers/paridecl.h from
the PARI distribution, except the inline functions which are in
declinl.pxi (that file is automatically included by this file).

AUTHORS:

- (unknown authors before 2010)

- Robert Bradshaw, Jeroen Demeyer, William Stein (2010-08-15):
  Upgrade to PARI 2.4.3 (:trac:`9343`)

- Jeroen Demeyer (2010-08-15): big clean up (:trac:`9898`)

- Jeroen Demeyer (2014-02-09): upgrade to PARI 2.7 (:trac:`15767`)

- Jeroen Demeyer (2014-09-19): upgrade to PARI 2.8 (:trac:`16997`)
"""

#*****************************************************************************
# Distributed under the terms of the GNU General Public License (GPL)
# as published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
# http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import print_function from libc.stdio cimport FILE from cpython.getargs cimport va_list from .types cimport * cdef extern from *: # PARI headers already included by types.pxd char* PARIVERSION # parierr.h enum err_list: e_SYNTAX, e_BUG e_ALARM, e_FILE e_MISC, e_FLAG, e_IMPL, e_ARCH, e_PACKAGE, e_NOTFUNC e_PREC, e_TYPE, e_DIM, e_VAR, e_PRIORITY, e_USER e_STACK, e_OVERFLOW, e_DOMAIN, e_COMPONENT e_MAXPRIME e_CONSTPOL, e_IRREDPOL, e_COPRIME, e_PRIME, e_MODULUS, e_ROOTS0 e_OP, e_TYPE2, e_INV e_MEM e_SQRTN e_NONE int warner, warnprec, warnfile, warnmem, warnuser # paricast.h long mael2(GEN, long, long) long mael3(GEN, long, long, long) long mael4(GEN, long, long, long, long) long mael5(GEN, long, long, long, long, long) long mael(GEN, long, long) GEN gmael1(GEN, long) GEN gmael2(GEN, long, long) GEN gmael3(GEN, long, long, long) GEN gmael4(GEN, long, long, long, long) GEN gmael5(GEN, long, long, long, long, long) GEN gmael(GEN, long, long) GEN gel(GEN, long) GEN gcoeff(GEN, long, long) long coeff(GEN, long, long) char* GSTR(GEN) # paricom.h GEN gen_1 GEN gen_m1 GEN gen_2 GEN gen_m2 GEN ghalf GEN gen_0 GEN gnil GEN err_e_STACK int PARI_SIGINT_block, PARI_SIGINT_pending void NEXT_PRIME_VIADIFF(long, byteptr) void PREC_PRIME_VIADIFF(long, byteptr) int INIT_JMPm, INIT_SIGm, INIT_DFTm, INIT_noPRIMEm, INIT_noIMTm int new_galois_format, factor_add_primes, factor_proven int precdl # The "except 0" here is to ensure compatibility with # _pari_err_handle() in handle_error.pyx int (*cb_pari_err_handle)(GEN) except 0 int (*cb_pari_handle_exception)(long) except 0 void (*cb_pari_err_recover)(long) # kernel/gmp/int.h GEN int_MSW(GEN z) GEN int_LSW(GEN z) GEN int_W(GEN z, long i) GEN int_W_lg(GEN z, long i, long lz) GEN int_precW(GEN z) GEN int_nextW(GEN z) # paristio.h extern pari_sp avma struct _pari_mainstack "pari_mainstack": pari_sp top, bot, vbot size_t size, rsize, vsize, memused extern _pari_mainstack* pari_mainstack struct PariOUT: void (*putch)(char) void (*puts)(char*) void (*flush)() extern PariOUT* pariOut extern PariOUT* pariErr extern byteptr diffptr ############################################### # # # All declarations below come from paridecl.h # # # ############################################### # OBSOLETE GEN buchimag(GEN D, GEN c1, GEN c2, GEN gCO) GEN buchreal(GEN D, GEN gsens, GEN c1, GEN c2, GEN gRELSUP, long prec) GEN zidealstar(GEN nf, GEN x) GEN zidealstarinit(GEN nf, GEN x) GEN zidealstarinitgen(GEN nf, GEN x) GEN factmod(GEN f, GEN p) void mpbern(long n, long prec) GEN simplefactmod(GEN f, GEN p) void listkill(GEN l) GEN isprincipalforce(GEN bnf,GEN x) GEN isprincipalgen(GEN bnf, GEN x) GEN isprincipalgenforce(GEN bnf,GEN x) # F2v.c GEN F2Ms_ker(GEN M, long nrows) GEN F2Ms_to_F2m(GEN M, long nrows) GEN F2c_to_ZC(GEN x) GEN F2c_to_mod(GEN x) GEN F2m_F2c_gauss(GEN a, GEN b) GEN F2m_F2c_invimage(GEN A, GEN y) GEN F2m_F2c_mul(GEN x, GEN y) GEN F2m_deplin(GEN x) ulong F2m_det(GEN x) ulong F2m_det_sp(GEN x) GEN F2m_gauss(GEN a, GEN b) GEN F2m_inv(GEN x) GEN F2m_invimage(GEN A, GEN B) GEN F2m_ker(GEN x) GEN F2m_ker_sp(GEN x, long deplin) GEN F2m_mul(GEN x, GEN y) GEN F2m_powu(GEN x, ulong n) long F2m_rank(GEN x) GEN F2m_row(GEN x, long j) GEN F2m_rowslice(GEN x, long a, long b) GEN F2m_to_F2Ms(GEN M) GEN F2m_to_Flm(GEN z) GEN F2m_to_ZM(GEN z) GEN F2m_to_mod(GEN z) GEN F2m_transpose(GEN x) void F2v_add_inplace(GEN x, GEN y) void F2v_and_inplace(GEN x, GEN y) ulong 
F2v_dotproduct(GEN x, GEN y) int F2v_equal0(GEN a) ulong F2v_hamming(GEN H) void F2v_negimply_inplace(GEN x, GEN y) void F2v_or_inplace(GEN x, GEN y) GEN F2v_slice(GEN x, long a, long b) GEN F2v_to_Flv(GEN x) GEN matid_F2m(long n) # F2x.c GEN F2x_F2xq_eval(GEN Q, GEN x, GEN T) GEN F2x_F2xqV_eval(GEN P, GEN V, GEN T) GEN F2x_Frobenius(GEN T) GEN F2x_1_add(GEN y) GEN F2x_add(GEN x, GEN y) GEN F2x_deflate(GEN x0, long d) GEN F2x_degfact(GEN f) long F2x_degree(GEN x) GEN F2x_deriv(GEN x) GEN F2x_divrem(GEN x, GEN y, GEN *pr) ulong F2x_eval(GEN P, ulong x) void F2x_even_odd(GEN p, GEN *pe, GEN *po) GEN F2x_extgcd(GEN a, GEN b, GEN *ptu, GEN *ptv) GEN F2x_gcd(GEN a, GEN b) GEN F2x_get_red(GEN T) GEN F2x_halfgcd(GEN a, GEN b) int F2x_issquare(GEN a) GEN F2x_matFrobenius(GEN T) GEN F2x_mul(GEN x, GEN y) GEN F2x_recip(GEN T) GEN F2x_rem(GEN x, GEN y) GEN F2x_shift(GEN y, long d) GEN F2x_sqr(GEN x) GEN F2x_sqrt(GEN x) GEN F2x_to_F2v(GEN x, long n) GEN F2x_to_F2xX(GEN z, long sv) GEN F2x_to_Flx(GEN x) GEN F2x_to_ZX(GEN x) long F2x_valrem(GEN x, GEN *Z) GEN F2xC_to_FlxC(GEN v) GEN F2xC_to_ZXC(GEN x) GEN F2xV_to_F2m(GEN v, long n) void F2xV_to_FlxV_inplace(GEN v) void F2xV_to_ZXV_inplace(GEN v) GEN F2xX_F2x_add(GEN x, GEN y) GEN F2xX_F2x_mul(GEN P, GEN U) GEN F2xX_add(GEN x, GEN y) GEN F2xX_deriv(GEN z) GEN F2xX_renormalize(GEN x, long lx) GEN F2xX_to_Kronecker(GEN P, long d) GEN F2xX_to_FlxX(GEN B) GEN F2xX_to_ZXX(GEN B) GEN F2xX_to_F2xC(GEN x, long N, long sv) GEN F2xXV_to_F2xM(GEN v, long n, long sv) GEN F2xXC_to_ZXXC(GEN B) GEN F2xY_F2xq_evalx(GEN P, GEN x, GEN T) GEN F2xY_F2xqV_evalx(GEN P, GEN x, GEN T) long F2xY_degreex(GEN b) GEN F2xn_inv(GEN f, long e) GEN F2xn_red(GEN a, long n) GEN F2xq_Artin_Schreier(GEN a, GEN T) GEN F2xq_autpow(GEN x, long n, GEN T) GEN F2xq_conjvec(GEN x, GEN T) GEN F2xq_div(GEN x,GEN y,GEN T) GEN F2xq_inv(GEN x, GEN T) GEN F2xq_invsafe(GEN x, GEN T) GEN F2xq_log(GEN a, GEN g, GEN ord, GEN T) GEN F2xq_matrix_pow(GEN y, long n, long m, GEN P) GEN F2xq_mul(GEN x, GEN y, GEN pol) GEN F2xq_order(GEN a, GEN ord, GEN T) GEN F2xq_pow(GEN x, GEN n, GEN pol) GEN F2xq_pow_init(GEN x, GEN n, long k, GEN T) GEN F2xq_pow_table(GEN R, GEN n, GEN T) GEN F2xq_powu(GEN x, ulong n, GEN pol) GEN F2xq_powers(GEN x, long l, GEN T) GEN F2xq_sqr(GEN x, GEN pol) GEN F2xq_sqrt(GEN a, GEN T) GEN F2xq_sqrt_fast(GEN c, GEN sqx, GEN T) GEN F2xq_sqrtn(GEN a, GEN n, GEN T, GEN *zeta) ulong F2xq_trace(GEN x, GEN T) GEN F2xqX_F2xq_mul(GEN P, GEN U, GEN T) GEN F2xqX_F2xq_mul_to_monic(GEN P, GEN U, GEN T) GEN F2xqX_F2xqXQ_eval(GEN Q, GEN x, GEN S, GEN T) GEN F2xqX_F2xqXQV_eval(GEN P, GEN V, GEN S, GEN T) GEN F2xqX_disc(GEN x, GEN T) GEN F2xqX_divrem(GEN x, GEN y, GEN T, GEN *pr) GEN F2xqX_extgcd(GEN x, GEN y, GEN T, GEN *ptu, GEN *ptv) GEN F2xqX_gcd(GEN a, GEN b, GEN T) GEN F2xqX_get_red(GEN S, GEN T) GEN F2xqX_halfgcd(GEN x, GEN y, GEN T) GEN F2xqX_invBarrett(GEN T, GEN Q) long F2xqX_ispower(GEN f, long k, GEN T, GEN *pt_r) GEN F2xqX_mul(GEN x, GEN y, GEN T) GEN F2xqX_normalize(GEN z, GEN T) GEN F2xqX_powu(GEN x, ulong n, GEN T) GEN F2xqX_red(GEN z, GEN T) GEN F2xqX_rem(GEN x, GEN S, GEN T) GEN F2xqX_resultant(GEN x, GEN y, GEN T) GEN F2xqX_sqr(GEN x, GEN T) GEN F2xqXQ_inv(GEN x, GEN S, GEN T) GEN F2xqXQ_invsafe(GEN x, GEN S, GEN T) GEN F2xqXQ_mul(GEN x, GEN y, GEN S, GEN T) GEN F2xqXQ_sqr(GEN x, GEN S, GEN T) GEN F2xqXQ_pow(GEN x, GEN n, GEN S, GEN T) GEN F2xqXQ_powers(GEN x, long l, GEN S, GEN T) GEN F2xqXQ_autpow(GEN aut, long n, GEN S, GEN T) GEN F2xqXQ_auttrace(GEN aut, long n, GEN S, GEN T) GEN 
F2xqXQV_red(GEN z, GEN S, GEN T) GEN Flm_to_F2m(GEN x) GEN Flv_to_F2v(GEN x) GEN Flx_to_F2x(GEN x) GEN FlxC_to_F2xC(GEN x) GEN FlxX_to_F2xX(GEN B) GEN FlxXC_to_F2xXC(GEN B) GEN Kronecker_to_F2xqX(GEN z, GEN T) GEN Rg_to_F2xq(GEN x, GEN T) GEN RgM_to_F2m(GEN x) GEN RgV_to_F2v(GEN x) GEN RgX_to_F2x(GEN x) GEN Z_to_F2x(GEN x, long v) GEN ZM_to_F2m(GEN x) GEN ZV_to_F2v(GEN x) GEN ZX_to_F2x(GEN x) GEN ZXX_to_F2xX(GEN B, long v) GEN const_F2v(long m) GEN gener_F2xq(GEN T, GEN *po) GEN monomial_F2x(long d, long vs) GEN pol1_F2xX(long v, long sv) GEN polx_F2xX(long v, long sv) GEN random_F2xqX(long d1, long v, GEN T) # F2xqE.c GEN F2xq_ellcard(GEN a2, GEN a6, GEN T) GEN F2xq_ellgens(GEN a2, GEN a6, GEN ch, GEN D, GEN m, GEN T) GEN F2xq_ellgroup(GEN a2, GEN a6, GEN N, GEN T, GEN *pt_m) void F2xq_elltwist(GEN a, GEN a6, GEN T, GEN *pt_a, GEN *pt_a6) GEN F2xqE_add(GEN P, GEN Q, GEN a2, GEN T) GEN F2xqE_changepoint(GEN x, GEN ch, GEN T) GEN F2xqE_changepointinv(GEN x, GEN ch, GEN T) GEN F2xqE_dbl(GEN P, GEN a2, GEN T) GEN F2xqE_log(GEN a, GEN b, GEN o, GEN a2, GEN T) GEN F2xqE_mul(GEN P, GEN n, GEN a2, GEN T) GEN F2xqE_neg(GEN P, GEN a2, GEN T) GEN F2xqE_order(GEN z, GEN o, GEN a2, GEN T) GEN F2xqE_sub(GEN P, GEN Q, GEN a2, GEN T) GEN F2xqE_tatepairing(GEN t, GEN s, GEN m, GEN a2, GEN T) GEN F2xqE_weilpairing(GEN t, GEN s, GEN m, GEN a2, GEN T) const bb_group * get_F2xqE_group(void **E, GEN a2, GEN a6, GEN T) GEN RgE_to_F2xqE(GEN x, GEN T) GEN random_F2xqE(GEN a2, GEN a6, GEN T) # Fle.c ulong Fl_elldisc(ulong a4, ulong a6, ulong p) ulong Fl_elldisc_pre(ulong a4, ulong a6, ulong p, ulong pi) ulong Fl_ellj(ulong a4, ulong a6, ulong p) void Fl_ellj_to_a4a6(ulong j, ulong p, ulong *pt_a4, ulong *pt_a6) void Fl_elltwist(ulong a4, ulong a6, ulong p, ulong *pt_a4, ulong *pt_a6) void Fl_elltwist_disc(ulong a4, ulong a6, ulong D, ulong p, ulong *pt_a4, ulong *pt_a6) GEN Fle_add(GEN P, GEN Q, ulong a4, ulong p) GEN Fle_dbl(GEN P, ulong a4, ulong p) GEN Fle_changepoint(GEN x, GEN ch, ulong p) GEN Fle_changepointinv(GEN x, GEN ch, ulong p) GEN Fle_log(GEN a, GEN b, GEN o, ulong a4, ulong p) GEN Fle_mul(GEN P, GEN n, ulong a4, ulong p) GEN Fle_mulu(GEN P, ulong n, ulong a4, ulong p) GEN Fle_order(GEN z, GEN o, ulong a4, ulong p) GEN Fle_sub(GEN P, GEN Q, ulong a4, ulong p) GEN Fle_to_Flj(GEN P) GEN Flj_add_pre(GEN P, GEN Q, ulong a4, ulong p, ulong pi) GEN Flj_dbl_pre(GEN P, ulong a4, ulong p, ulong pi) GEN Flj_mulu_pre(GEN P, ulong n, ulong a4, ulong p, ulong pi) GEN Flj_neg(GEN Q, ulong p) GEN Flj_to_Fle_pre(GEN P, ulong p, ulong pi) GEN random_Fle(ulong a4, ulong a6, ulong p) GEN random_Fle_pre(ulong a4, ulong a6, ulong p, ulong pi) GEN random_Flj_pre(ulong a4, ulong a6, ulong p, ulong pi) # Flv.c GEN Flc_to_ZC(GEN z) GEN Flc_to_ZC_inplace(GEN z) GEN Flm_Flc_gauss(GEN a, GEN b, ulong p) GEN Flm_Flc_invimage(GEN mat, GEN y, ulong p) GEN Flm_adjoint(GEN A, ulong p) GEN Flm_deplin(GEN x, ulong p) ulong Flm_det(GEN x, ulong p) ulong Flm_det_sp(GEN x, ulong p) GEN Flm_gauss(GEN a, GEN b, ulong p) GEN Flm_intersect(GEN x, GEN y, ulong p) GEN Flm_inv(GEN x, ulong p) GEN Flm_invimage(GEN m, GEN v, ulong p) GEN Flm_ker(GEN x, ulong p) GEN Flm_ker_sp(GEN x, ulong p, long deplin) long Flm_rank(GEN x, ulong p) GEN Flm_to_ZM(GEN z) GEN Flm_to_ZM_inplace(GEN z) GEN Flv_to_ZV(GEN z) # Flx.c GEN Fl_to_Flx(ulong x, long sv) int Fl2_equal1(GEN x) GEN Fl2_inv_pre(GEN x, ulong D, ulong p, ulong pi) GEN Fl2_mul_pre(GEN x, GEN y, ulong D, ulong p, ulong pi) ulong Fl2_norm_pre(GEN x, ulong D, ulong p, ulong pi) GEN Fl2_pow_pre(GEN x, 
GEN n, ulong D, ulong p, ulong pi) GEN Fl2_sqr_pre(GEN x, ulong D, ulong p, ulong pi) GEN Fl2_sqrtn_pre(GEN a, GEN n, ulong D, ulong p, ulong pi, GEN *zeta) GEN Flm_to_FlxV(GEN x, long sv) GEN Flm_to_FlxX(GEN x, long v,long w) GEN Flv_Flm_polint(GEN xa, GEN ya, ulong p, long vs) GEN Flv_inv(GEN x, ulong p) void Flv_inv_inplace(GEN x, ulong p) void Flv_inv_pre_inplace(GEN x, ulong p, ulong pi) GEN Flv_inv_pre(GEN x, ulong p, ulong pi) GEN Flv_invVandermonde(GEN L, ulong den, ulong p) GEN Flv_polint(GEN xa, GEN ya, ulong p, long vs) ulong Flv_prod(GEN v, ulong p) ulong Flv_prod_pre(GEN x, ulong p, ulong pi) GEN Flv_roots_to_pol(GEN a, ulong p, long vs) GEN Flv_to_Flx(GEN x, long vs) GEN Flx_Fl_add(GEN y, ulong x, ulong p) GEN Flx_Fl_mul(GEN y, ulong x, ulong p) GEN Flx_Fl_mul_to_monic(GEN y, ulong x, ulong p) GEN Flx_Fl_sub(GEN y, ulong x, ulong p) GEN Flx_Fl2_eval_pre(GEN x, GEN y, ulong D, ulong p, ulong pi) GEN Flx_Flv_multieval(GEN P, GEN v, ulong p) GEN Flx_Flxq_eval(GEN f, GEN x, GEN T, ulong p) GEN Flx_FlxqV_eval(GEN f, GEN x, GEN T, ulong p) GEN Flx_Frobenius(GEN T, ulong p) GEN Flx_Laplace(GEN x, ulong p) GEN Flx_Newton(GEN P, long n, ulong p) GEN Flx_add(GEN x, GEN y, ulong p) GEN Flx_blocks(GEN P, long n, long m) GEN Flx_deflate(GEN x0, long d) GEN Flx_deriv(GEN z, ulong p) GEN Flx_diff1(GEN P, ulong p) GEN Flx_digits(GEN x, GEN T, ulong p) GEN Flx_div_by_X_x(GEN a, ulong x, ulong p, ulong *rem) GEN Flx_divrem(GEN x, GEN y, ulong p, GEN *pr) GEN Flx_double(GEN y, ulong p) int Flx_equal(GEN V, GEN W) ulong Flx_eval(GEN x, ulong y, ulong p) ulong Flx_eval_powers_pre(GEN x, GEN y, ulong p, ulong pi) ulong Flx_eval_pre(GEN x, ulong y, ulong p, ulong pi) GEN Flx_extgcd(GEN a, GEN b, ulong p, GEN *ptu, GEN *ptv) ulong Flx_extresultant(GEN a, GEN b, ulong p, GEN *ptU, GEN *ptV) GEN Flx_fromNewton(GEN P, ulong p) GEN Flx_gcd(GEN a, GEN b, ulong p) GEN Flx_get_red(GEN T, ulong p) GEN Flx_halfgcd(GEN a, GEN b, ulong p) GEN Flx_halve(GEN y, ulong p) GEN Flx_inflate(GEN x0, long d) GEN Flx_integ(GEN z, ulong p) GEN Flx_invBarrett(GEN T, ulong p) GEN Flx_invLaplace(GEN x, ulong p) int Flx_is_squarefree(GEN z, ulong p) int Flx_is_smooth(GEN g, long r, ulong p) GEN Flx_matFrobenius(GEN T, ulong p) GEN Flx_mod_Xn1(GEN T, ulong n, ulong p) GEN Flx_mod_Xnm1(GEN T, ulong n, ulong p) GEN Flx_mul(GEN x, GEN y, ulong p) GEN Flx_neg(GEN x, ulong p) GEN Flx_neg_inplace(GEN x, ulong p) GEN Flx_normalize(GEN z, ulong p) GEN Flx_powu(GEN x, ulong n, ulong p) GEN Flx_recip(GEN x) GEN Flx_red(GEN z, ulong p) GEN Flx_rem(GEN x, GEN y, ulong p) GEN Flx_renormalize(GEN x, long l) GEN Flx_rescale(GEN P, ulong h, ulong p) ulong Flx_resultant(GEN a, GEN b, ulong p) GEN Flx_shift(GEN a, long n) GEN Flx_splitting(GEN p, long k) GEN Flx_sqr(GEN x, ulong p) GEN Flx_sub(GEN x, GEN y, ulong p) GEN Flx_translate1(GEN P, ulong p) GEN Flx_translate1_basecase(GEN P, ulong p) GEN Flx_to_Flv(GEN x, long N) GEN Flx_to_FlxX(GEN z, long v) GEN Flx_to_ZX(GEN z) GEN Flx_to_ZX_inplace(GEN z) GEN Flx_triple(GEN y, ulong p) long Flx_val(GEN x) long Flx_valrem(GEN x, GEN *Z) GEN FlxC_FlxqV_eval(GEN x, GEN v, GEN T, ulong p) GEN FlxC_Flxq_eval(GEN x, GEN F, GEN T, ulong p) GEN FlxC_eval_powers_pre(GEN z, GEN x, ulong p, ulong pi) GEN FlxC_neg(GEN x, ulong p) GEN FlxC_sub(GEN x, GEN y, ulong p) GEN FlxC_to_ZXC(GEN x) GEN FlxM_Flx_add_shallow(GEN x, GEN y, ulong p) GEN FlxM_eval_powers_pre(GEN z, GEN x, ulong p, ulong pi) GEN FlxM_neg(GEN x, ulong p) GEN FlxM_sub(GEN x, GEN y, ulong p) GEN FlxM_to_ZXM(GEN z) GEN FlxT_red(GEN z, ulong p) 
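# Note on naming conventions assumed in the declarations above and below
# (summarized from the libpari developer documentation as a reader aid;
# this comment is not part of PARI's headers): "Fl" denotes an integer
# modulo a small prime l that fits in an ulong, "Flx" a polynomial over
# F_l with ulong coefficients, "Flv"/"Flm" a vector/matrix of such,
# while "Fp"/"FpX" take arbitrary-precision t_INT moduli. A "_pre"
# suffix marks a variant taking a precomputed pseudo-inverse "pi" of p
# used to speed up modular reduction.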
GEN FlxV_Flc_mul(GEN V, GEN W, ulong p) GEN FlxV_Flv_multieval(GEN P, GEN v, ulong p) GEN FlxV_Flx_fromdigits(GEN x, GEN T, ulong p) GEN FlxV_prod(GEN V, ulong p) GEN FlxV_red(GEN z, ulong p) GEN FlxV_to_Flm(GEN v, long n) GEN FlxV_to_FlxX(GEN x, long v) GEN FlxV_to_ZXV(GEN x) void FlxV_to_ZXV_inplace(GEN v) GEN Flxn_exp(GEN h, long e, ulong p) GEN Flxn_expint(GEN h, long e, ulong p) GEN Flxn_inv(GEN f, long e, ulong p) GEN Flxn_mul(GEN a, GEN b, long n, ulong p) GEN Flxn_sqr(GEN a, long n, ulong p) GEN Flxn_red(GEN a, long n) GEN Flxq_autpow(GEN x, ulong n, GEN T, ulong p) GEN Flxq_autpowers(GEN x, ulong n, GEN T, ulong p) GEN Flxq_autsum(GEN x, ulong n, GEN T, ulong p) GEN Flxq_auttrace(GEN x, ulong n, GEN T, ulong p) GEN Flxq_charpoly(GEN x, GEN T, ulong p) GEN Flxq_conjvec(GEN x, GEN T, ulong p) GEN Flxq_div(GEN x, GEN y, GEN T, ulong p) GEN Flxq_inv(GEN x,GEN T,ulong p) GEN Flxq_invsafe(GEN x, GEN T, ulong p) int Flxq_issquare(GEN x, GEN T, ulong p) int Flxq_is2npower(GEN x, long n, GEN T, ulong p) GEN Flxq_log(GEN a, GEN g, GEN ord, GEN T, ulong p) GEN Flxq_lroot(GEN a, GEN T, long p) GEN Flxq_lroot_fast(GEN a, GEN sqx, GEN T, long p) GEN Flxq_matrix_pow(GEN y, long n, long m, GEN P, ulong l) GEN Flxq_minpoly(GEN x, GEN T, ulong p) GEN Flxq_mul(GEN x, GEN y, GEN T, ulong p) ulong Flxq_norm(GEN x, GEN T, ulong p) GEN Flxq_order(GEN a, GEN ord, GEN T, ulong p) GEN Flxq_pow(GEN x, GEN n, GEN T, ulong p) GEN Flxq_pow_init(GEN x, GEN n, long k, GEN T, ulong p) GEN Flxq_pow_table(GEN R, GEN n, GEN T, ulong p) GEN Flxq_powu(GEN x, ulong n, GEN T, ulong p) GEN Flxq_powers(GEN x, long l, GEN T, ulong p) GEN Flxq_sqr(GEN y,GEN T,ulong p) GEN Flxq_sqrt(GEN a, GEN T, ulong p) GEN Flxq_sqrtn(GEN a, GEN n, GEN T, ulong p, GEN *zetan) ulong Flxq_trace(GEN x, GEN T, ulong p) GEN FlxqC_Flxq_mul(GEN x, GEN y, GEN T, ulong p) GEN FlxqM_Flxq_mul(GEN x, GEN y, GEN T, ulong p) GEN FlxqV_dotproduct(GEN x, GEN y, GEN T, ulong p) ulong Rg_to_F2(GEN x) ulong Rg_to_Fl(GEN x, ulong p) GEN Rg_to_Flxq(GEN x, GEN T, ulong p) GEN RgX_to_Flx(GEN x, ulong p) GEN RgXV_to_FlxV(GEN x, ulong p) GEN Z_to_Flx(GEN x, ulong p, long sv) GEN ZX_to_Flx(GEN x, ulong p) GEN ZXV_to_FlxV(GEN v, ulong p) GEN ZXT_to_FlxT(GEN z, ulong p) GEN gener_Flxq(GEN T, ulong p, GEN *o) GEN monomial_Flx(ulong a, long d, long vs) GEN random_Flx(long d1, long v, ulong p) GEN zero_FlxC(long n, long sv) GEN zero_FlxM(long r, long c, long sv) GEN zlx_translate1(GEN P, ulong p, long e) GEN zx_to_Flx(GEN x, ulong p) # FlxX.c GEN FlxX_Fl_mul(GEN x, ulong y, ulong p) GEN FlxX_Flx_add(GEN y, GEN x, ulong p) GEN FlxX_Flx_mul(GEN x, GEN y, ulong p) GEN FlxX_Flx_sub(GEN y, GEN x, ulong p) GEN FlxX_Laplace(GEN x, ulong p) GEN FlxX_add(GEN P, GEN Q, ulong p) GEN FlxX_blocks(GEN P, long n, long m, long vs) GEN FlxX_deriv(GEN z, ulong p) GEN FlxX_double(GEN x, ulong p) GEN FlxX_invLaplace(GEN x, ulong p) GEN FlxX_neg(GEN x, ulong p) GEN FlxX_renormalize(GEN x, long lx) GEN FlxX_shift(GEN a, long n, long vs) GEN FlxX_sub(GEN P, GEN Q, ulong p) GEN FlxX_swap(GEN x, long n, long ws) GEN FlxX_to_Flm(GEN v, long n) GEN FlxX_to_Flx(GEN f) GEN FlxX_to_FlxC(GEN x, long N, long sv) GEN FlxX_to_ZXX(GEN B) GEN FlxX_translate1(GEN P, long p, long n) GEN FlxX_triple(GEN x, ulong p) GEN FlxXC_to_ZXXC(GEN B) GEN FlxXM_to_ZXXM(GEN B) GEN FlxXV_to_FlxM(GEN v, long n, long sv) GEN FlxXn_red(GEN a, long n) GEN FlxY_Flx_div(GEN x, GEN y, ulong p) GEN FlxY_Flx_translate(GEN P, GEN c, ulong p) GEN FlxY_Flxq_evalx(GEN P, GEN x, GEN T, ulong p) GEN FlxY_FlxqV_evalx(GEN P, GEN x, GEN T, 
ulong p) long FlxY_degreex(GEN b) ulong FlxY_eval_powers_pre(GEN pol, GEN ypowers, GEN xpowers, ulong p, ulong pi) GEN FlxY_evalx(GEN Q, ulong x, ulong p) GEN FlxY_evalx_powers_pre(GEN pol, GEN ypowers, ulong p, ulong pi) GEN FlxYqq_pow(GEN x, GEN n, GEN S, GEN T, ulong p) GEN FlxqV_roots_to_pol(GEN V, GEN T, ulong p, long v) GEN FlxqX_FlxqXQ_eval(GEN Q, GEN x, GEN S, GEN T, ulong p) GEN FlxqX_FlxqXQV_eval(GEN P, GEN V, GEN S, GEN T, ulong p) GEN FlxqX_Flxq_mul(GEN P, GEN U, GEN T, ulong p) GEN FlxqX_Flxq_mul_to_monic(GEN P, GEN U, GEN T, ulong p) GEN FlxqX_Newton(GEN P, long n, GEN T, ulong p) GEN FlxqX_divrem(GEN x, GEN y, GEN T, ulong p, GEN *pr) GEN FlxqX_disc(GEN x, GEN T, ulong p) GEN FlxqX_dotproduct(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_extgcd(GEN a, GEN b, GEN T, ulong p, GEN *ptu, GEN *ptv) GEN FlxqX_fromNewton(GEN P, GEN T, ulong p) GEN FlxqX_gcd(GEN P, GEN Q, GEN T, ulong p) GEN FlxqX_get_red(GEN S, GEN T, ulong p) GEN FlxqX_halfgcd(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_invBarrett(GEN T, GEN Q, ulong p) GEN FlxqX_mul(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_normalize(GEN z, GEN T, ulong p) GEN FlxqX_powu(GEN V, ulong n, GEN T, ulong p) GEN FlxqX_red(GEN z, GEN T, ulong p) GEN FlxqX_rem(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_resultant(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_safegcd(GEN P, GEN Q, GEN T, ulong p) GEN FlxqX_saferesultant(GEN a, GEN b, GEN T, ulong p) GEN FlxqX_sqr(GEN x, GEN T, ulong p) GEN FlxqXQ_div(GEN x, GEN y, GEN S, GEN T, ulong p) GEN FlxqXQ_inv(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQ_invsafe(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQ_matrix_pow(GEN x, long n, long m, GEN S, GEN T, ulong p) GEN FlxqXQ_minpoly(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQ_mul(GEN x, GEN y, GEN S, GEN T, ulong p) GEN FlxqXQ_pow(GEN x, GEN n, GEN S, GEN T, ulong p) GEN FlxqXQ_powu(GEN x, ulong n, GEN S, GEN T, ulong p) GEN FlxqXQ_powers(GEN x, long n, GEN S, GEN T, ulong p) GEN FlxqXQ_sqr(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQ_autpow(GEN x, long n, GEN S, GEN T, ulong p) GEN FlxqXQ_autsum(GEN aut, long n, GEN S, GEN T, ulong p) GEN FlxqXQ_auttrace(GEN x, ulong n, GEN S, GEN T, ulong p) GEN FlxqXV_prod(GEN V, GEN T, ulong p) GEN FlxqXn_expint(GEN h, long e, GEN T, ulong p) GEN FlxqXn_inv(GEN f, long e, GEN T, ulong p) GEN FlxqXn_mul(GEN a, GEN b, long n, GEN T, ulong p) GEN FlxqXn_sqr(GEN a, long n, GEN T, ulong p) GEN Fly_to_FlxY(GEN B, long v) GEN Kronecker_to_FlxqX(GEN z, GEN T, ulong p) GEN RgX_to_FlxqX(GEN x, GEN T, ulong p) GEN pol1_FlxX(long v, long sv) GEN polx_FlxX(long v, long sv) GEN random_FlxqX(long d1, long v, GEN T, ulong p) GEN zlxX_translate1(GEN P, long p, long e, long n) GEN zxX_to_FlxX(GEN B, ulong p) GEN zxX_to_Kronecker(GEN P, GEN Q) # FlxqE.c GEN Flxq_ellcard(GEN a4, GEN a6, GEN T, ulong p) GEN Flxq_ellgens(GEN a4, GEN a6, GEN ch, GEN D, GEN m, GEN T, ulong p) GEN Flxq_ellgroup(GEN a4, GEN a6, GEN N, GEN T, ulong p, GEN *pt_m) void Flxq_elltwist(GEN a, GEN a6, GEN T, ulong p, GEN *pt_a, GEN *pt_a6) GEN Flxq_ellj(GEN a4, GEN a6, GEN T, ulong p) void Flxq_ellj_to_a4a6(GEN j, GEN T, ulong p, GEN *pt_a4, GEN *pt_a6) GEN FlxqE_add(GEN P, GEN Q, GEN a4, GEN T, ulong p) GEN FlxqE_changepoint(GEN x, GEN ch, GEN T, ulong p) GEN FlxqE_changepointinv(GEN x, GEN ch, GEN T, ulong p) GEN FlxqE_dbl(GEN P, GEN a4, GEN T, ulong p) GEN FlxqE_log(GEN a, GEN b, GEN o, GEN a4, GEN T, ulong p) GEN FlxqE_mul(GEN P, GEN n, GEN a4, GEN T, ulong p) GEN FlxqE_neg(GEN P, GEN T, ulong p) GEN FlxqE_order(GEN z, GEN o, GEN a4, GEN T, ulong p) GEN FlxqE_sub(GEN P, GEN Q, GEN a4, GEN T, 
ulong p) GEN FlxqE_tatepairing(GEN t, GEN s, GEN m, GEN a4, GEN T, ulong p) GEN FlxqE_weilpairing(GEN t, GEN s, GEN m, GEN a4, GEN T, ulong p) const bb_group * get_FlxqE_group(void **E, GEN a4, GEN a6, GEN T, ulong p) GEN RgE_to_FlxqE(GEN x, GEN T, ulong p) GEN random_FlxqE(GEN a4, GEN a6, GEN T, ulong p) # FpE.c long Fl_elltrace(ulong a4, ulong a6, ulong p) long Fl_elltrace_CM(long CM, ulong a4, ulong a6, ulong p) GEN Fp_ellcard(GEN a4, GEN a6, GEN p) GEN Fp_elldivpol(GEN a4, GEN a6, long n, GEN p) GEN Fp_ellgens(GEN a4, GEN a6, GEN ch, GEN D, GEN m, GEN p) GEN Fp_ellgroup(GEN a4, GEN a6, GEN N, GEN p, GEN *pt_m) GEN Fp_ellj(GEN a4, GEN a6, GEN p) int Fp_elljissupersingular(GEN j, GEN p) void Fp_elltwist(GEN a4, GEN a6, GEN p, GEN *pt_a4, GEN *pt_a6) GEN Fp_ffellcard(GEN a4, GEN a6, GEN q, long n, GEN p) GEN FpE_add(GEN P, GEN Q, GEN a4, GEN p) GEN FpE_changepoint(GEN x, GEN ch, GEN p) GEN FpE_changepointinv(GEN x, GEN ch, GEN p) GEN FpE_dbl(GEN P, GEN a4, GEN p) GEN FpE_log(GEN a, GEN b, GEN o, GEN a4, GEN p) GEN FpE_mul(GEN P, GEN n, GEN a4, GEN p) GEN FpE_neg(GEN P, GEN p) GEN FpE_order(GEN z, GEN o, GEN a4, GEN p) GEN FpE_sub(GEN P, GEN Q, GEN a4, GEN p) GEN FpE_to_mod(GEN P, GEN p) GEN FpE_tatepairing(GEN t, GEN s, GEN m, GEN a4, GEN p) GEN FpE_weilpairing(GEN t, GEN s, GEN m, GEN a4, GEN p) GEN FpXQ_ellcard(GEN a4, GEN a6, GEN T, GEN p) GEN FpXQ_elldivpol(GEN a4, GEN a6, long n, GEN T, GEN p) GEN FpXQ_ellgens(GEN a4, GEN a6, GEN ch, GEN D, GEN m, GEN T, GEN p) GEN FpXQ_ellgroup(GEN a4, GEN a6, GEN N, GEN T, GEN p, GEN *pt_m) GEN FpXQ_ellj(GEN a4, GEN a6, GEN T, GEN p) int FpXQ_elljissupersingular(GEN j, GEN T, GEN p) void FpXQ_elltwist(GEN a4, GEN a6, GEN T, GEN p, GEN *pt_a4, GEN *pt_a6) GEN FpXQE_add(GEN P, GEN Q, GEN a4, GEN T, GEN p) GEN FpXQE_changepoint(GEN x, GEN ch, GEN T, GEN p) GEN FpXQE_changepointinv(GEN x, GEN ch, GEN T, GEN p) GEN FpXQE_dbl(GEN P, GEN a4, GEN T, GEN p) GEN FpXQE_log(GEN a, GEN b, GEN o, GEN a4, GEN T, GEN p) GEN FpXQE_mul(GEN P, GEN n, GEN a4, GEN T, GEN p) GEN FpXQE_neg(GEN P, GEN T, GEN p) GEN FpXQE_order(GEN z, GEN o, GEN a4, GEN T, GEN p) GEN FpXQE_sub(GEN P, GEN Q, GEN a4, GEN T, GEN p) GEN FpXQE_tatepairing(GEN t, GEN s, GEN m, GEN a4, GEN T, GEN p) GEN FpXQE_weilpairing(GEN t, GEN s, GEN m, GEN a4, GEN T, GEN p) GEN Fq_elldivpolmod(GEN a4, GEN a6, long n, GEN h, GEN T, GEN p) GEN RgE_to_FpE(GEN x, GEN p) GEN RgE_to_FpXQE(GEN x, GEN T, GEN p) const bb_group * get_FpE_group(void **E, GEN a4, GEN a6, GEN p) const bb_group * get_FpXQE_group(void **E, GEN a4, GEN a6, GEN T, GEN p) GEN elltrace_extension(GEN t, long n, GEN p) GEN random_FpE(GEN a4, GEN a6, GEN p) GEN random_FpXQE(GEN a4, GEN a6, GEN T, GEN p) # FpX.c int Fp_issquare(GEN x, GEN p) GEN Fp_FpX_sub(GEN x, GEN y, GEN p) GEN Fp_FpXQ_log(GEN a, GEN g, GEN ord, GEN T, GEN p) GEN FpV_FpM_polint(GEN xa, GEN ya, GEN p, long vs) GEN FpV_inv(GEN x, GEN p) GEN FpV_invVandermonde(GEN L, GEN den, GEN p) GEN FpV_polint(GEN xa, GEN ya, GEN p, long v) GEN FpV_roots_to_pol(GEN V, GEN p, long v) GEN FpX_Fp_add(GEN x, GEN y, GEN p) GEN FpX_Fp_add_shallow(GEN y, GEN x, GEN p) GEN FpX_Fp_mul(GEN x, GEN y, GEN p) GEN FpX_Fp_mul_to_monic(GEN y, GEN x, GEN p) GEN FpX_Fp_mulspec(GEN y, GEN x, GEN p, long ly) GEN FpX_Fp_sub(GEN x, GEN y, GEN p) GEN FpX_Fp_sub_shallow(GEN y, GEN x, GEN p) GEN FpX_FpV_multieval(GEN P, GEN xa, GEN p) GEN FpX_FpXQ_eval(GEN f, GEN x, GEN T, GEN p) GEN FpX_FpXQV_eval(GEN f, GEN x, GEN T, GEN p) GEN FpX_Frobenius(GEN T, GEN p) GEN FpX_add(GEN x, GEN y, GEN p) GEN FpX_center(GEN x, GEN 
p, GEN pov2) GEN FpX_chinese_coprime(GEN x, GEN y, GEN Tx, GEN Ty, GEN Tz, GEN p) GEN FpX_deriv(GEN x, GEN p) GEN FpX_digits(GEN x, GEN y, GEN p) GEN FpX_disc(GEN x, GEN p) GEN FpX_div_by_X_x(GEN a, GEN x, GEN p, GEN *r) GEN FpX_divrem(GEN x, GEN y, GEN p, GEN *pr) GEN FpX_dotproduct(GEN x, GEN y, GEN p) GEN FpX_eval(GEN x, GEN y, GEN p) GEN FpX_extgcd(GEN x, GEN y, GEN p, GEN *ptu, GEN *ptv) GEN FpX_fromdigits(GEN x, GEN T, GEN p) GEN FpX_gcd(GEN x, GEN y, GEN p) GEN FpX_get_red(GEN T, GEN p) GEN FpX_halve(GEN y, GEN p) GEN FpX_halfgcd(GEN x, GEN y, GEN p) GEN FpX_invBarrett(GEN T, GEN p) int FpX_is_squarefree(GEN f, GEN p) GEN FpX_matFrobenius(GEN T, GEN p) GEN FpX_mul(GEN x, GEN y, GEN p) GEN FpX_mulspec(GEN a, GEN b, GEN p, long na, long nb) GEN FpX_mulu(GEN x, ulong y, GEN p) GEN FpX_neg(GEN x, GEN p) GEN FpX_normalize(GEN z, GEN p) GEN FpX_powu(GEN x, ulong n, GEN p) GEN FpX_red(GEN z, GEN p) GEN FpX_rem(GEN x, GEN y, GEN p) GEN FpX_rescale(GEN P, GEN h, GEN p) GEN FpX_resultant(GEN a, GEN b, GEN p) GEN FpX_sqr(GEN x, GEN p) GEN FpX_sub(GEN x, GEN y, GEN p) long FpX_valrem(GEN x0, GEN t, GEN p, GEN *py) GEN FpXC_FpXQV_eval(GEN Q, GEN x, GEN T, GEN p) GEN FpXM_FpXQV_eval(GEN Q, GEN x, GEN T, GEN p) GEN FpXQ_autpow(GEN x, ulong n, GEN T, GEN p) GEN FpXQ_autpowers(GEN aut, long f, GEN T, GEN p) GEN FpXQ_autsum(GEN x, ulong n, GEN T, GEN p) GEN FpXQ_auttrace(GEN x, ulong n, GEN T, GEN p) GEN FpXQ_charpoly(GEN x, GEN T, GEN p) GEN FpXQ_conjvec(GEN x, GEN T, GEN p) GEN FpXQ_div(GEN x, GEN y, GEN T, GEN p) GEN FpXQ_inv(GEN x, GEN T, GEN p) GEN FpXQ_invsafe(GEN x, GEN T, GEN p) int FpXQ_issquare(GEN x, GEN T, GEN p) GEN FpXQ_log(GEN a, GEN g, GEN ord, GEN T, GEN p) GEN FpXQ_matrix_pow(GEN y, long n, long m, GEN P, GEN l) GEN FpXQ_minpoly(GEN x, GEN T, GEN p) GEN FpXQ_mul(GEN y, GEN x, GEN T, GEN p) GEN FpXQ_norm(GEN x, GEN T, GEN p) GEN FpXQ_order(GEN a, GEN ord, GEN T, GEN p) GEN FpXQ_pow(GEN x, GEN n, GEN T, GEN p) GEN FpXQ_powu(GEN x, ulong n, GEN T, GEN p) GEN FpXQ_powers(GEN x, long l, GEN T, GEN p) GEN FpXQ_red(GEN x, GEN T, GEN p) GEN FpXQ_sqr(GEN y, GEN T, GEN p) GEN FpXQ_sqrt(GEN a, GEN T, GEN p) GEN FpXQ_sqrtn(GEN a, GEN n, GEN T, GEN p, GEN *zetan) GEN FpXQ_trace(GEN x, GEN T, GEN p) GEN FpXQC_to_mod(GEN z, GEN T, GEN p) GEN FpXQM_autsum(GEN x, ulong n, GEN T, GEN p) GEN FpXT_red(GEN z, GEN p) GEN FpXV_prod(GEN V, GEN p) GEN FpXV_red(GEN z, GEN p) int Fq_issquare(GEN x, GEN T, GEN p) long Fq_ispower(GEN x, GEN K, GEN T, GEN p) GEN Fq_log(GEN a, GEN g, GEN ord, GEN T, GEN p) GEN Fq_sqrtn(GEN a, GEN n, GEN T, GEN p, GEN *z) GEN Fq_sqrt(GEN a, GEN T, GEN p) long FqX_ispower(GEN f, ulong k, GEN T, GEN p, GEN *pt) GEN FqV_inv(GEN x, GEN T, GEN p) GEN Z_to_FpX(GEN a, GEN p, long v) GEN gener_FpXQ(GEN T, GEN p, GEN *o) GEN gener_FpXQ_local(GEN T, GEN p, GEN L) long get_FpX_degree(GEN T) GEN get_FpX_mod(GEN T) long get_FpX_var(GEN T) const bb_group *get_FpXQ_star(void **E, GEN T, GEN p) GEN random_FpX(long d, long v, GEN p) # FpX_factor.c GEN F2x_factor(GEN f) int F2x_is_irred(GEN f) void F2xV_to_FlxV_inplace(GEN v) void F2xV_to_ZXV_inplace(GEN v) int Flx_is_irred(GEN f, ulong p) GEN Flx_degfact(GEN f, ulong p) GEN Flx_factor(GEN f, ulong p) long Flx_nbfact(GEN z, ulong p) long Flx_nbfact_Frobenius(GEN T, GEN XP, ulong p) GEN Flx_nbfact_by_degree(GEN z, long *nb, ulong p) long Flx_nbroots(GEN f, ulong p) ulong Flx_oneroot(GEN f, ulong p) ulong Flx_oneroot_split(GEN f, ulong p) GEN Flx_roots(GEN f, ulong p) GEN Flx_rootsff(GEN P, GEN T, ulong p) void FlxV_to_ZXV_inplace(GEN v) GEN 
FpX_degfact(GEN f, GEN p) int FpX_is_irred(GEN f, GEN p) int FpX_is_totally_split(GEN f, GEN p) GEN FpX_factor(GEN f, GEN p) GEN FpX_factorff(GEN P, GEN T, GEN p) long FpX_nbfact(GEN f, GEN p) long FpX_nbfact_Frobenius(GEN T, GEN XP, GEN p) long FpX_nbroots(GEN f, GEN p) GEN FpX_oneroot(GEN f, GEN p) GEN FpX_roots(GEN f, GEN p) GEN FpX_rootsff(GEN P, GEN T, GEN p) GEN FpX_split_part(GEN f, GEN p) GEN factcantor(GEN x, GEN p) GEN factormod0(GEN f, GEN p, long flag) GEN rootmod0(GEN f, GEN p, long flag) # FpXQX_factor.c GEN F2xqX_roots(GEN x, GEN T) GEN FlxqX_Frobenius(GEN S, GEN T, ulong p) GEN FlxqXQ_halfFrobenius(GEN a, GEN S, GEN T, ulong p) GEN FlxqX_roots(GEN S, GEN T, ulong p) long FlxqX_nbroots(GEN f, GEN T, ulong p) GEN FpXQX_Frobenius(GEN S, GEN T, GEN p) GEN FpXQX_factor(GEN x, GEN T, GEN p) long FpXQX_nbfact(GEN u, GEN T, GEN p) long FpXQX_nbroots(GEN f, GEN T, GEN p) GEN FpXQX_roots(GEN f, GEN T, GEN p) GEN FpXQXQ_halfFrobenius(GEN a, GEN S, GEN T, GEN p) long FqX_is_squarefree(GEN P, GEN T, GEN p) long FqX_nbfact(GEN u, GEN T, GEN p) long FqX_nbroots(GEN f, GEN T, GEN p) GEN factorff(GEN f, GEN p, GEN a) GEN polrootsff(GEN f, GEN p, GEN T) # FpXX.c GEN FpXQX_FpXQ_mul(GEN P, GEN U, GEN T, GEN p) GEN FpXQX_FpXQXQV_eval(GEN P, GEN V, GEN S, GEN T, GEN p) GEN FpXQX_FpXQXQ_eval(GEN P, GEN x, GEN S, GEN T, GEN p) GEN FpXQX_div_by_X_x(GEN a, GEN x, GEN T, GEN p, GEN *pr) GEN FpXQX_divrem(GEN x, GEN y, GEN T, GEN p, GEN *pr) GEN FpXQX_digits(GEN x, GEN B, GEN T, GEN p) GEN FpXQX_extgcd(GEN x, GEN y, GEN T, GEN p, GEN *ptu, GEN *ptv) GEN FpXQX_fromdigits(GEN x, GEN B, GEN T, GEN p) GEN FpXQX_gcd(GEN P, GEN Q, GEN T, GEN p) GEN FpXQX_get_red(GEN S, GEN T, GEN p) GEN FpXQX_halfgcd(GEN x, GEN y, GEN T, GEN p) GEN FpXQX_invBarrett(GEN S, GEN T, GEN p) GEN FpXQX_mul(GEN x, GEN y, GEN T, GEN p) GEN FpXQX_powu(GEN x, ulong n, GEN T, GEN p) GEN FpXQX_red(GEN z, GEN T, GEN p) GEN FpXQX_rem(GEN x, GEN S, GEN T, GEN p) GEN FpXQX_sqr(GEN x, GEN T, GEN p) GEN FpXQXQ_div(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FpXQXQ_inv(GEN x, GEN S, GEN T, GEN p) GEN FpXQXQ_invsafe(GEN x, GEN S, GEN T, GEN p) GEN FpXQXQ_matrix_pow(GEN y, long n, long m, GEN S, GEN T, GEN p) GEN FpXQXQ_mul(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FpXQXQ_pow(GEN x, GEN n, GEN S, GEN T, GEN p) GEN FpXQXQ_powers(GEN x, long n, GEN S, GEN T, GEN p) GEN FpXQXQ_sqr(GEN x, GEN S, GEN T, GEN p) GEN FpXQXQV_autpow(GEN aut, long n, GEN S, GEN T, GEN p) GEN FpXQXQV_autsum(GEN aut, long n, GEN S, GEN T, GEN p) GEN FpXQXQV_auttrace(GEN aut, long n, GEN S, GEN T, GEN p) GEN FpXQXV_prod(GEN V, GEN Tp, GEN p) GEN FpXX_Fp_mul(GEN x, GEN y, GEN p) GEN FpXX_FpX_mul(GEN x, GEN y, GEN p) GEN FpXX_add(GEN x, GEN y, GEN p) GEN FpXX_deriv(GEN P, GEN p) GEN FpXX_mulu(GEN P, ulong u, GEN p) GEN FpXX_neg(GEN x, GEN p) GEN FpXX_red(GEN z, GEN p) GEN FpXX_sub(GEN x, GEN y, GEN p) GEN FpXY_FpXQ_evalx(GEN P, GEN x, GEN T, GEN p) GEN FpXY_FpXQV_evalx(GEN P, GEN x, GEN T, GEN p) GEN FpXY_eval(GEN Q, GEN y, GEN x, GEN p) GEN FpXY_evalx(GEN Q, GEN x, GEN p) GEN FpXY_evaly(GEN Q, GEN y, GEN p, long vy) GEN FpXYQQ_pow(GEN x, GEN n, GEN S, GEN T, GEN p) GEN Kronecker_to_FpXQX(GEN z, GEN pol, GEN p) GEN Kronecker_to_ZXX(GEN z, long N, long v) GEN ZXX_mul_Kronecker(GEN x, GEN y, long n) GEN ZXX_sqr_Kronecker(GEN x, long n) long get_FpXQX_degree(GEN T) GEN get_FpXQX_mod(GEN T) long get_FpXQX_var(GEN T) GEN random_FpXQX(long d1, long v, GEN T, GEN p) # FpV.c GEN F2m_F2c_mul(GEN x, GEN y) GEN F2m_mul(GEN x, GEN y) GEN F2m_powu(GEN x, ulong n) GEN Flc_to_mod(GEN z, ulong pp) GEN 
Flm_Fl_add(GEN x, ulong y, ulong p) GEN Flm_Fl_mul(GEN y, ulong x, ulong p) void Flm_Fl_mul_inplace(GEN y, ulong x, ulong p) GEN Flm_Flc_mul(GEN x, GEN y, ulong p) GEN Flm_Flc_mul_pre(GEN x, GEN y, ulong p, ulong pi) GEN Flm_add(GEN x, GEN y, ulong p) GEN Flm_center(GEN z, ulong p, ulong ps2) GEN Flm_mul(GEN x, GEN y, ulong p) GEN Flm_neg(GEN y, ulong p) GEN Flm_powu(GEN x, ulong n, ulong p) GEN Flm_sub(GEN x, GEN y, ulong p) GEN Flm_to_mod(GEN z, ulong pp) GEN Flm_transpose(GEN x) GEN Flv_Fl_div(GEN x, ulong y, ulong p) void Flv_Fl_div_inplace(GEN x, ulong y, ulong p) GEN Flv_Fl_mul(GEN x, ulong y, ulong p) void Flv_Fl_mul_inplace(GEN x, ulong y, ulong p) void Flv_Fl_mul_part_inplace(GEN x, ulong y, ulong p, long l) GEN Flv_add(GEN x, GEN y, ulong p) void Flv_add_inplace(GEN x, GEN y, ulong p) GEN Flv_center(GEN z, ulong p, ulong ps2) ulong Flv_dotproduct(GEN x, GEN y, ulong p) ulong Flv_dotproduct_pre(GEN x, GEN y, ulong p, ulong pi) GEN Flv_neg(GEN v, ulong p) void Flv_neg_inplace(GEN v, ulong p) GEN Flv_sub(GEN x, GEN y, ulong p) void Flv_sub_inplace(GEN x, GEN y, ulong p) ulong Flv_sum(GEN x, ulong p) ulong Flx_dotproduct(GEN x, GEN y, ulong p) GEN Fp_to_mod(GEN z, GEN p) GEN FpC_FpV_mul(GEN x, GEN y, GEN p) GEN FpC_Fp_mul(GEN x, GEN y, GEN p) GEN FpC_center(GEN z, GEN p, GEN pov2) void FpC_center_inplace(GEN z, GEN p, GEN pov2) GEN FpC_red(GEN z, GEN p) GEN FpC_to_mod(GEN z, GEN p) GEN FpM_add(GEN x, GEN y, GEN p) GEN FpM_Fp_mul(GEN X, GEN c, GEN p) GEN FpM_FpC_mul(GEN x, GEN y, GEN p) GEN FpM_FpC_mul_FpX(GEN x, GEN y, GEN p, long v) GEN FpM_center(GEN z, GEN p, GEN pov2) void FpM_center_inplace(GEN z, GEN p, GEN pov2) GEN FpM_mul(GEN x, GEN y, GEN p) GEN FpM_powu(GEN x, ulong n, GEN p) GEN FpM_red(GEN z, GEN p) GEN FpM_sub(GEN x, GEN y, GEN p) GEN FpM_to_mod(GEN z, GEN p) GEN FpMs_FpC_mul(GEN M, GEN B, GEN p) GEN FpMs_FpCs_solve(GEN M, GEN B, long nbrow, GEN p) GEN FpMs_FpCs_solve_safe(GEN M, GEN A, long nbrow, GEN p) GEN FpMs_leftkernel_elt(GEN M, long nbrow, GEN p) GEN FpC_add(GEN x, GEN y, GEN p) GEN FpC_sub(GEN x, GEN y, GEN p) GEN FpV_FpMs_mul(GEN B, GEN M, GEN p) GEN FpV_add(GEN x, GEN y, GEN p) GEN FpV_sub(GEN x, GEN y, GEN p) GEN FpV_dotproduct(GEN x, GEN y, GEN p) GEN FpV_dotsquare(GEN x, GEN p) GEN FpV_red(GEN z, GEN p) GEN FpV_to_mod(GEN z, GEN p) GEN FpVV_to_mod(GEN z, GEN p) GEN FpX_to_mod(GEN z, GEN p) GEN ZV_zMs_mul(GEN B, GEN M) GEN ZpMs_ZpCs_solve(GEN M, GEN B, long nbrow, GEN p, long e) GEN gen_FpM_Wiedemann(void *E, GEN (*f)(void*, GEN), GEN B, GEN p) GEN gen_ZpM_Dixon(void *E, GEN (*f)(void*, GEN), GEN B, GEN p, long e) GEN gen_matid(long n, void *E, const bb_field *S) GEN matid_F2m(long n) GEN matid_Flm(long n) GEN matid_F2xqM(long n, GEN T) GEN matid_FlxqM(long n, GEN T, ulong p) GEN scalar_Flm(long s, long n) GEN zCs_to_ZC(GEN C, long nbrow) GEN zMs_to_ZM(GEN M, long nbrow) GEN zMs_ZC_mul(GEN M, GEN B) GEN ZMV_to_FlmV(GEN z, ulong m) # Hensel.c GEN Z2_sqrt(GEN x, long e) GEN Zp_sqrt(GEN x, GEN p, long e) GEN Zp_sqrtlift(GEN b, GEN a, GEN p, long e) GEN Zp_sqrtnlift(GEN b, GEN n, GEN a, GEN p, long e) GEN ZpX_Frobenius(GEN T, GEN p, long e) GEN ZpX_ZpXQ_liftroot(GEN P, GEN S, GEN T, GEN p, long e) GEN ZpX_ZpXQ_liftroot_ea(GEN P, GEN S, GEN T, GEN p, long n, void *E, int early(void *E, GEN x, GEN q)) GEN ZpX_liftfact(GEN pol, GEN Q, GEN T, GEN p, long e, GEN pe) GEN ZpX_liftroot(GEN f, GEN a, GEN p, long e) GEN ZpX_liftroots(GEN f, GEN S, GEN p, long e) GEN ZpX_roots(GEN f, GEN p, long e) GEN ZpXQ_inv(GEN a, GEN T, GEN p, long e) GEN ZpXQ_invlift(GEN b, GEN a, 
GEN T, GEN p, long e) GEN ZpXQ_log(GEN a, GEN T, GEN p, long N) GEN ZpXQ_sqrt(GEN a, GEN T, GEN p, long e) GEN ZpXQ_sqrtnlift(GEN b, GEN n, GEN a, GEN T, GEN p, long e) GEN ZpXQM_prodFrobenius(GEN M, GEN T, GEN p, long e) GEN ZpXQX_liftroot(GEN f, GEN a, GEN T, GEN p, long e) GEN ZpXQX_liftroot_vald(GEN f, GEN a, long v, GEN T, GEN p, long e) GEN gen_ZpX_Dixon(GEN F, GEN V, GEN q, GEN p, long N, void *E, GEN lin(void *E, GEN F, GEN d, GEN q), GEN invl(void *E, GEN d)) GEN gen_ZpX_Newton(GEN x, GEN p, long n, void *E, GEN eval(void *E, GEN f, GEN q), GEN invd(void *E, GEN V, GEN v, GEN q, long M)) GEN polhensellift(GEN pol, GEN fct, GEN p, long exp) ulong quadratic_prec_mask(long n) # QX_factor.c GEN QX_factor(GEN x) GEN ZX_factor(GEN x) long ZX_is_irred(GEN x) GEN ZX_squff(GEN f, GEN *ex) GEN polcyclofactors(GEN f) long poliscyclo(GEN f) long poliscycloprod(GEN f) # RgV.c GEN Rg_RgC_sub(GEN a, GEN x) GEN RgC_Rg_add(GEN x, GEN y) GEN RgC_Rg_div(GEN x, GEN y) GEN RgC_Rg_mul(GEN x, GEN y) GEN RgC_Rg_sub(GEN x, GEN y) GEN RgC_RgM_mul(GEN x, GEN y) GEN RgC_RgV_mul(GEN x, GEN y) GEN RgC_add(GEN x, GEN y) long RgC_is_ei(GEN x) GEN RgC_neg(GEN x) GEN RgC_sub(GEN x, GEN y) GEN RgM_Rg_add(GEN x, GEN y) GEN RgM_Rg_add_shallow(GEN x, GEN y) GEN RgM_Rg_div(GEN x, GEN y) GEN RgM_Rg_mul(GEN x, GEN y) GEN RgM_Rg_sub(GEN x, GEN y) GEN RgM_Rg_sub_shallow(GEN x, GEN y) GEN RgM_RgC_mul(GEN x, GEN y) GEN RgM_RgV_mul(GEN x, GEN y) GEN RgM_ZM_mul(GEN x, GEN y) GEN RgM_add(GEN x, GEN y) GEN RgM_det_triangular(GEN x) int RgM_is_QM(GEN x) int RgM_is_ZM(GEN x) int RgM_isdiagonal(GEN x) int RgM_isidentity(GEN x) int RgM_isscalar(GEN x, GEN s) GEN RgM_mul(GEN x, GEN y) GEN RgM_multosym(GEN x, GEN y) GEN RgM_neg(GEN x) GEN RgM_powers(GEN x, long l) GEN RgM_sqr(GEN x) GEN RgM_sub(GEN x, GEN y) GEN RgM_sumcol(GEN A) GEN RgM_transmul(GEN x, GEN y) GEN RgM_transmultosym(GEN x, GEN y) GEN RgMrow_zc_mul(GEN x, GEN y, long i) GEN RgM_zc_mul(GEN x, GEN y) GEN RgM_zm_mul(GEN x, GEN y) GEN RgMrow_RgC_mul(GEN x, GEN y, long i) GEN RgV_RgM_mul(GEN x, GEN y) GEN RgV_RgC_mul(GEN x, GEN y) GEN RgV_Rg_mul(GEN x, GEN y) GEN RgV_add(GEN x, GEN y) GEN RgV_dotproduct(GEN x, GEN y) GEN RgV_dotsquare(GEN x) int RgV_is_ZMV(GEN V) GEN RgV_kill0(GEN v) GEN RgV_neg(GEN x) GEN RgV_prod(GEN v) GEN RgV_sub(GEN x, GEN y) GEN RgV_sum(GEN v) GEN RgV_sumpart(GEN v, long n) GEN RgV_sumpart2(GEN v, long m, long n) GEN RgV_zc_mul(GEN x, GEN y) GEN RgV_zm_mul(GEN x, GEN y) GEN RgX_RgM_eval(GEN x, GEN y) GEN RgX_RgMV_eval(GEN x, GEN y) int isdiagonal(GEN x) GEN matid(long n) GEN scalarcol(GEN x, long n) GEN scalarcol_shallow(GEN x, long n) GEN scalarmat(GEN x, long n) GEN scalarmat_shallow(GEN x, long n) GEN scalarmat_s(long x, long n) # RgX.c GEN Kronecker_to_mod(GEN z, GEN pol) GEN QX_ZXQV_eval(GEN P, GEN V, GEN dV) GEN QXQ_charpoly(GEN A, GEN T, long v) GEN QXQ_powers(GEN a, long n, GEN T) GEN QXQX_to_mod_shallow(GEN z, GEN T) GEN QXQV_to_mod(GEN V, GEN T) GEN QXQXV_to_mod(GEN V, GEN T) GEN QXV_QXQ_eval(GEN v, GEN a, GEN T) GEN QXX_QXQ_eval(GEN v, GEN a, GEN T) GEN Rg_RgX_sub(GEN x, GEN y) GEN Rg_to_RgC(GEN x, long N) GEN RgM_to_RgXV(GEN x, long v) GEN RgM_to_RgXX(GEN x, long v, long w) GEN RgV_to_RgX(GEN x, long v) GEN RgV_to_RgM(GEN v, long n) GEN RgV_to_RgX_reverse(GEN x, long v) GEN RgX_RgXQ_eval(GEN f, GEN x, GEN T) GEN RgX_RgXQV_eval(GEN P, GEN V, GEN T) GEN RgX_RgXn_eval(GEN Q, GEN x, long n) GEN RgX_RgXnV_eval(GEN Q, GEN x, long n) GEN RgX_Rg_add(GEN y, GEN x) GEN RgX_Rg_add_shallow(GEN y, GEN x) GEN RgX_Rg_div(GEN y, GEN x) GEN 
RgX_Rg_divexact(GEN x, GEN y) GEN RgX_Rg_eval_bk(GEN Q, GEN x) GEN RgX_Rg_mul(GEN y, GEN x) GEN RgX_Rg_sub(GEN y, GEN x) GEN RgX_RgV_eval(GEN Q, GEN x) GEN RgX_add(GEN x, GEN y) GEN RgX_blocks(GEN P, long n, long m) GEN RgX_deflate(GEN x0, long d) GEN RgX_deriv(GEN x) GEN RgX_div_by_X_x(GEN a, GEN x, GEN *r) GEN RgX_divrem(GEN x, GEN y, GEN *r) GEN RgX_divs(GEN y, long x) long RgX_equal(GEN x, GEN y) void RgX_even_odd(GEN p, GEN *pe, GEN *po) GEN RgX_get_0(GEN x) GEN RgX_get_1(GEN x) GEN RgX_inflate(GEN x0, long d) GEN RgX_mul(GEN x, GEN y) GEN RgX_mul_normalized(GEN A, long a, GEN B, long b) GEN RgX_mulXn(GEN x, long d) GEN RgX_muls(GEN y, long x) GEN RgX_mulspec(GEN a, GEN b, long na, long nb) GEN RgX_neg(GEN x) GEN RgX_normalize(GEN x) GEN RgX_pseudodivrem(GEN x, GEN y, GEN *ptr) GEN RgX_pseudorem(GEN x, GEN y) GEN RgX_recip(GEN x) GEN RgX_recip_shallow(GEN x) GEN RgX_renormalize_lg(GEN x, long lx) GEN RgX_rescale(GEN P, GEN h) GEN RgX_rotate_shallow(GEN P, long k, long p) GEN RgX_shift(GEN a, long n) GEN RgX_shift_shallow(GEN x, long n) GEN RgX_splitting(GEN p, long k) GEN RgX_sqr(GEN x) GEN RgX_sqrspec(GEN a, long na) GEN RgX_sub(GEN x, GEN y) GEN RgX_to_RgC(GEN x, long N) GEN RgX_translate(GEN P, GEN c) GEN RgX_unscale(GEN P, GEN h) GEN RgXQ_matrix_pow(GEN y, long n, long m, GEN P) GEN RgXQ_norm(GEN x, GEN T) GEN RgXQ_pow(GEN x, GEN n, GEN T) GEN RgXQ_powers(GEN x, long l, GEN T) GEN RgXQ_powu(GEN x, ulong n, GEN T) GEN RgXQC_red(GEN P, GEN T) GEN RgXQV_RgXQ_mul(GEN v, GEN x, GEN T) GEN RgXQV_red(GEN P, GEN T) GEN RgXQX_RgXQ_mul(GEN x, GEN y, GEN T) GEN RgXQX_divrem(GEN x, GEN y, GEN T, GEN *r) GEN RgXQX_mul(GEN x, GEN y, GEN T) GEN RgXQX_pseudodivrem(GEN x, GEN y, GEN T, GEN *ptr) GEN RgXQX_pseudorem(GEN x, GEN y, GEN T) GEN RgXQX_red(GEN P, GEN T) GEN RgXQX_sqr(GEN x, GEN T) GEN RgXQX_translate(GEN P, GEN c, GEN T) GEN RgXV_RgV_eval(GEN Q, GEN x) GEN RgXV_to_RgM(GEN v, long n) GEN RgXV_unscale(GEN v, GEN h) GEN RgXX_to_RgM(GEN v, long n) long RgXY_degreex(GEN bpol) GEN RgXY_swap(GEN x, long n, long w) GEN RgXY_swapspec(GEN x, long n, long w, long nx) GEN RgXn_eval(GEN Q, GEN x, long n) GEN RgXn_exp(GEN f, long e) GEN RgXn_inv(GEN f, long e) GEN RgXn_mul(GEN f, GEN g, long n) GEN RgXn_powers(GEN f, long m, long n) GEN RgXn_red_shallow(GEN a, long n) GEN RgXn_reverse(GEN f, long e) GEN RgXn_sqr(GEN f, long n) GEN RgXnV_red_shallow(GEN P, long n) GEN RgXn_powu(GEN x, ulong m, long n) GEN RgXn_powu_i(GEN x, ulong m, long n) GEN ZX_translate(GEN P, GEN c) GEN ZX_unscale2n(GEN P, long n) GEN ZX_unscale(GEN P, GEN h) GEN ZX_unscale_div(GEN P, GEN h) int ZXQX_dvd(GEN x, GEN y, GEN T) long brent_kung_optpow(long d, long n, long m) GEN gen_bkeval(GEN Q, long d, GEN x, int use_sqr, void *E, const bb_algebra *ff, GEN cmul(void *E, GEN P, long a, GEN x)) GEN gen_bkeval_powers(GEN P, long d, GEN V, void *E, const bb_algebra *ff, GEN cmul(void *E, GEN P, long a, GEN x)) const bb_algebra * get_Rg_algebra() # ZG.c void ZGC_G_mul_inplace(GEN v, GEN x) void ZGC_add_inplace(GEN x, GEN y) GEN ZGC_add_sparse(GEN x, GEN y) void ZGM_add_inplace(GEN x, GEN y) GEN ZGM_add_sparse(GEN x, GEN y) GEN G_ZGC_mul(GEN x, GEN v) GEN G_ZG_mul(GEN x, GEN y) GEN ZGC_G_mul(GEN v, GEN x) GEN ZGC_Z_mul(GEN v, GEN x) GEN ZG_G_mul(GEN x, GEN y) GEN ZG_Z_mul(GEN x, GEN c) GEN ZG_add(GEN x, GEN y) GEN ZG_mul(GEN x, GEN y) GEN ZG_neg(GEN x) GEN ZG_normalize(GEN x) GEN ZG_sub(GEN x, GEN y) # ZV.c void Flc_lincomb1_inplace(GEN X, GEN Y, ulong v, ulong q) void RgM_check_ZM(GEN A, const char *s) void RgV_check_ZV(GEN A, const char 
*s) GEN ZV_zc_mul(GEN x, GEN y) GEN ZC_ZV_mul(GEN x, GEN y) GEN ZC_Z_add(GEN x, GEN y) GEN ZC_Z_divexact(GEN X, GEN c) GEN ZC_Z_mul(GEN X, GEN c) GEN ZC_Z_sub(GEN x, GEN y) GEN ZC_add(GEN x, GEN y) GEN ZC_copy(GEN x) GEN ZC_hnfremdiv(GEN x, GEN y, GEN *Q) long ZC_is_ei(GEN x) GEN ZC_lincomb(GEN u, GEN v, GEN X, GEN Y) void ZC_lincomb1_inplace(GEN X, GEN Y, GEN v) GEN ZC_neg(GEN M) GEN ZC_reducemodlll(GEN x, GEN y) GEN ZC_reducemodmatrix(GEN v, GEN y) GEN ZC_sub(GEN x, GEN y) GEN ZC_z_mul(GEN X, long c) GEN ZM_ZC_mul(GEN x, GEN y) GEN ZM_Z_div(GEN X, GEN c) GEN ZM_Z_divexact(GEN X, GEN c) GEN ZM_Z_mul(GEN X, GEN c) GEN ZM_add(GEN x, GEN y) GEN ZM_copy(GEN x) GEN ZM_det_triangular(GEN mat) GEN ZM_diag_mul(GEN m, GEN d) int ZM_equal(GEN A, GEN B) GEN ZM_hnfdivrem(GEN x, GEN y, GEN *Q) int ZM_ishnf(GEN x) int ZM_isidentity(GEN x) int ZM_isscalar(GEN x, GEN s) long ZM_max_lg(GEN x) GEN ZM_mul(GEN x, GEN y) GEN ZM_mul_diag(GEN m, GEN d) GEN ZM_multosym(GEN x, GEN y) GEN ZM_neg(GEN x) GEN ZM_nm_mul(GEN x, GEN y) GEN ZM_pow(GEN x, GEN n) GEN ZM_powu(GEN x, ulong n) GEN ZM_reducemodlll(GEN x, GEN y) GEN ZM_reducemodmatrix(GEN v, GEN y) GEN ZM_sqr(GEN x) GEN ZM_sub(GEN x, GEN y) GEN ZM_supnorm(GEN x) GEN ZM_to_Flm(GEN x, ulong p) GEN ZM_to_zm(GEN z) GEN ZM_transmul(GEN x, GEN y) GEN ZM_transmultosym(GEN x, GEN y) GEN ZMV_to_zmV(GEN z) void ZM_togglesign(GEN M) GEN ZM_zc_mul(GEN x, GEN y) GEN ZM_zm_mul(GEN x, GEN y) GEN ZMrow_ZC_mul(GEN x, GEN y, long i) GEN ZV_ZM_mul(GEN x, GEN y) int ZV_abscmp(GEN x, GEN y) int ZV_cmp(GEN x, GEN y) GEN ZV_content(GEN x) GEN ZV_dotproduct(GEN x, GEN y) GEN ZV_dotsquare(GEN x) int ZV_equal(GEN V, GEN W) int ZV_equal0(GEN V) long ZV_max_lg(GEN x) void ZV_neg_inplace(GEN M) GEN ZV_prod(GEN v) GEN ZV_sum(GEN v) GEN ZV_to_Flv(GEN x, ulong p) GEN ZV_to_nv(GEN z) void ZV_togglesign(GEN M) GEN gram_matrix(GEN M) GEN nm_Z_mul(GEN X, GEN c) GEN zm_mul(GEN x, GEN y) GEN zm_to_Flm(GEN z, ulong p) GEN zm_to_ZM(GEN z) GEN zm_zc_mul(GEN x, GEN y) GEN zmV_to_ZMV(GEN z) long zv_content(GEN x) long zv_dotproduct(GEN x, GEN y) int zv_equal(GEN V, GEN W) int zv_equal0(GEN V) GEN zv_neg(GEN x) GEN zv_neg_inplace(GEN M) long zv_prod(GEN v) GEN zv_prod_Z(GEN v) long zv_sum(GEN v) GEN zv_to_Flv(GEN z, ulong p) GEN zv_z_mul(GEN v, long n) GEN zv_ZM_mul(GEN x, GEN y) int zvV_equal(GEN V, GEN W) # ZX.c void RgX_check_QX(GEN x, const char *s) void RgX_check_ZX(GEN x, const char *s) void RgX_check_ZXX(GEN x, const char *s) GEN Z_ZX_sub(GEN x, GEN y) GEN ZX_Z_add(GEN y, GEN x) GEN ZX_Z_add_shallow(GEN y, GEN x) GEN ZX_Z_divexact(GEN y, GEN x) GEN ZX_Z_mul(GEN y, GEN x) GEN ZX_Z_sub(GEN y, GEN x) GEN ZX_add(GEN x, GEN y) GEN ZX_copy(GEN x) GEN ZX_deriv(GEN x) GEN ZX_div_by_X_1(GEN a, GEN *r) int ZX_equal(GEN V, GEN W) GEN ZX_eval1(GEN x) long ZX_max_lg(GEN x) GEN ZX_mod_Xnm1(GEN T, ulong n) GEN ZX_mul(GEN x, GEN y) GEN ZX_mulspec(GEN a, GEN b, long na, long nb) GEN ZX_mulu(GEN y, ulong x) GEN ZX_neg(GEN x) GEN ZX_rem(GEN x, GEN y) GEN ZX_remi2n(GEN y, long n) GEN ZX_rescale2n(GEN P, long n) GEN ZX_rescale(GEN P, GEN h) GEN ZX_rescale_lt(GEN P) GEN ZX_shifti(GEN x, long n) GEN ZX_sqr(GEN x) GEN ZX_sqrspec(GEN a, long na) GEN ZX_sub(GEN x, GEN y) long ZX_val(GEN x) long ZX_valrem(GEN x, GEN *Z) GEN ZXT_remi2n(GEN z, long n) GEN ZXV_Z_mul(GEN y, GEN x) GEN ZXV_dotproduct(GEN V, GEN W) int ZXV_equal(GEN V, GEN W) GEN ZXV_remi2n(GEN x, long n) GEN ZXX_Z_divexact(GEN y, GEN x) GEN ZXX_Z_mul(GEN y, GEN x) GEN ZXX_Z_add_shallow(GEN x, GEN y) long ZXX_max_lg(GEN x) GEN ZXX_renormalize(GEN x, long lx) GEN 
ZXX_to_Kronecker(GEN P, long n) GEN ZXX_to_Kronecker_spec(GEN P, long lP, long n) GEN scalar_ZX(GEN x, long v) GEN scalar_ZX_shallow(GEN x, long v) GEN zx_to_ZX(GEN z) # algebras.c GEN alg_centralproj(GEN al, GEN z, int maps) GEN alg_change_overorder_shallow(GEN al, GEN ord) GEN alg_complete(GEN rnf, GEN aut, GEN hi, GEN hf, long maxord) GEN alg_csa_table(GEN nf, GEN mt, long v, long maxord) GEN alg_cyclic(GEN rnf, GEN aut, GEN b, long maxord) GEN alg_decomposition(GEN al) long alg_get_absdim(GEN al) long algabsdim(GEN al) GEN alg_get_abssplitting(GEN al) GEN alg_get_aut(GEN al) GEN algaut(GEN al) GEN alg_get_auts(GEN al) GEN alg_get_b(GEN al) GEN algb(GEN al) GEN algcenter(GEN al) GEN alg_get_center(GEN al) GEN alg_get_char(GEN al) GEN algchar(GEN al) long alg_get_degree(GEN al) long algdegree(GEN al) long alg_get_dim(GEN al) long algdim(GEN al) GEN alg_get_hasse_f(GEN al) GEN alghassef(GEN al) GEN alg_get_hasse_i(GEN al) GEN alghassei(GEN al) GEN alg_get_invbasis(GEN al) GEN alginvbasis(GEN al) GEN alg_get_multable(GEN al) GEN alg_get_basis(GEN al) GEN algbasis(GEN al) GEN alg_get_relmultable(GEN al) GEN algrelmultable(GEN al) GEN alg_get_splitpol(GEN al) GEN alg_get_splittingfield(GEN al) GEN algsplittingfield(GEN al) GEN alg_get_splittingbasis(GEN al) GEN alg_get_splittingbasisinv(GEN al) GEN alg_get_splittingdata(GEN al) GEN algsplittingdata(GEN al) GEN alg_get_tracebasis(GEN al) GEN alg_hasse(GEN nf, long n, GEN hi, GEN hf, long var, long flag) GEN alg_hilbert(GEN nf, GEN a, GEN b, long v, long flag) GEN alg_matrix(GEN nf, long n, long v, GEN L, long flag) long alg_model(GEN al, GEN x) GEN alg_ordermodp(GEN al, GEN p) GEN alg_quotient(GEN al, GEN I, int maps) GEN algradical(GEN al) GEN algsimpledec(GEN al, int maps) GEN algsubalg(GEN al, GEN basis) long alg_type(GEN al) GEN algadd(GEN al, GEN x, GEN y) GEN algalgtobasis(GEN al, GEN x) GEN algbasischarpoly(GEN al, GEN x, long v) GEN algbasismul(GEN al, GEN x, GEN y) GEN algbasismultable(GEN al, GEN x) GEN algbasismultable_Flm(GEN mt, GEN x, ulong m) GEN algbasistoalg(GEN al, GEN x) GEN algcharpoly(GEN al, GEN x, long v) GEN algdisc(GEN al) GEN algdivl(GEN al, GEN x, GEN y) GEN algdivr(GEN al, GEN x, GEN y) GEN alggroup(GEN gal, GEN p) GEN alghasse(GEN al, GEN pl) GEN alginit(GEN A, GEN B, long v, long flag) long algindex(GEN al, GEN pl) GEN alginv(GEN al, GEN x) int algisassociative(GEN mt0, GEN p) int algiscommutative(GEN al) int algisdivision(GEN al, GEN pl) int algisramified(GEN al, GEN pl) int algissemisimple(GEN al) int algissimple(GEN al, long ss) int algissplit(GEN al, GEN pl) int algisdivl(GEN al, GEN x, GEN y, GEN* ptz) int algisinv(GEN al, GEN x, GEN* ptix) GEN algleftordermodp(GEN al, GEN Ip, GEN p) GEN algmul(GEN al, GEN x, GEN y) GEN algmultable(GEN al) GEN alglathnf(GEN al, GEN m) GEN algleftmultable(GEN al, GEN x) GEN algneg(GEN al, GEN x) GEN algnorm(GEN al, GEN x) GEN algpoleval(GEN al, GEN pol, GEN x) GEN algpow(GEN al, GEN x, GEN n) GEN algprimesubalg(GEN al) GEN algramifiedplaces(GEN al) GEN algrandom(GEN al, GEN b) GEN algsplittingmatrix(GEN al, GEN x) GEN algsqr(GEN al, GEN x) GEN algsub(GEN al, GEN x, GEN y) GEN algtableinit(GEN mt, GEN p) GEN algtensor(GEN al1, GEN al2, long maxord) GEN algtrace(GEN al, GEN x) long algtype(GEN al) GEN bnfgwgeneric(GEN bnf, GEN Lpr, GEN Ld, GEN pl, long var) GEN bnrgwsearch(GEN bnr, GEN Lpr, GEN Ld, GEN pl) void checkalg(GEN x) void checkhasse(GEN nf, GEN hi, GEN hf, long n) long cyclicrelfrob(GEN rnf, GEN auts, GEN pr) GEN hassecoprime(GEN hi, GEN hf, long n) GEN hassedown(GEN 
nf, long n, GEN hi, GEN hf) GEN hassewedderburn(GEN hi, GEN hf, long n) long localhasse(GEN rnf, GEN cnd, GEN pl, GEN auts, GEN b, long k) GEN nfgrunwaldwang(GEN nf0, GEN Lpr, GEN Ld, GEN pl, long var) GEN nfgwkummer(GEN nf, GEN Lpr, GEN Ld, GEN pl, long var) # alglin1.c GEN F2m_F2c_gauss(GEN a, GEN b) GEN F2m_F2c_invimage(GEN A, GEN y) GEN F2m_deplin(GEN x) ulong F2m_det(GEN x) ulong F2m_det_sp(GEN x) GEN F2m_gauss(GEN a, GEN b) GEN F2m_image(GEN x) GEN F2m_indexrank(GEN x) GEN F2m_inv(GEN x) GEN F2m_invimage(GEN A, GEN B) GEN F2m_ker(GEN x) GEN F2m_ker_sp(GEN x, long deplin) long F2m_rank(GEN x) GEN F2m_suppl(GEN x) GEN F2xqM_F2xqC_mul(GEN a, GEN b, GEN T) GEN F2xqM_det(GEN a, GEN T) GEN F2xqM_ker(GEN x, GEN T) GEN F2xqM_image(GEN x, GEN T) GEN F2xqM_inv(GEN a, GEN T) GEN F2xqM_mul(GEN a, GEN b, GEN T) long F2xqM_rank(GEN x, GEN T) GEN Flm_Flc_gauss(GEN a, GEN b, ulong p) GEN Flm_Flc_invimage(GEN mat, GEN y, ulong p) GEN Flm_deplin(GEN x, ulong p) ulong Flm_det(GEN x, ulong p) ulong Flm_det_sp(GEN x, ulong p) GEN Flm_gauss(GEN a, GEN b, ulong p) GEN Flm_image(GEN x, ulong p) GEN Flm_invimage(GEN m, GEN v, ulong p) GEN Flm_indexrank(GEN x, ulong p) GEN Flm_intersect(GEN x, GEN y, ulong p) GEN Flm_inv(GEN x, ulong p) GEN Flm_ker(GEN x, ulong p) GEN Flm_ker_sp(GEN x, ulong p, long deplin) long Flm_rank(GEN x, ulong p) GEN Flm_suppl(GEN x, ulong p) GEN FlxqM_FlxqC_gauss(GEN a, GEN b, GEN T, ulong p) GEN FlxqM_FlxqC_mul(GEN a, GEN b, GEN T, ulong p) GEN FlxqM_det(GEN a, GEN T, ulong p) GEN FlxqM_gauss(GEN a, GEN b, GEN T, ulong p) GEN FlxqM_ker(GEN x, GEN T, ulong p) GEN FlxqM_image(GEN x, GEN T, ulong p) GEN FlxqM_inv(GEN x, GEN T, ulong p) GEN FlxqM_mul(GEN a, GEN b, GEN T, ulong p) long FlxqM_rank(GEN x, GEN T, ulong p) GEN FpM_FpC_gauss(GEN a, GEN b, GEN p) GEN FpM_FpC_invimage(GEN m, GEN v, GEN p) GEN FpM_deplin(GEN x, GEN p) GEN FpM_det(GEN x, GEN p) GEN FpM_gauss(GEN a, GEN b, GEN p) GEN FpM_image(GEN x, GEN p) GEN FpM_indexrank(GEN x, GEN p) GEN FpM_intersect(GEN x, GEN y, GEN p) GEN FpM_inv(GEN x, GEN p) GEN FpM_invimage(GEN m, GEN v, GEN p) GEN FpM_ker(GEN x, GEN p) long FpM_rank(GEN x, GEN p) GEN FpM_suppl(GEN x, GEN p) GEN FqM_FqC_gauss(GEN a, GEN b, GEN T, GEN p) GEN FqM_FqC_mul(GEN a, GEN b, GEN T, GEN p) GEN FqM_deplin(GEN x, GEN T, GEN p) GEN FqM_det(GEN x, GEN T, GEN p) GEN FqM_gauss(GEN a, GEN b, GEN T, GEN p) GEN FqM_ker(GEN x, GEN T, GEN p) GEN FqM_image(GEN x, GEN T, GEN p) GEN FqM_inv(GEN x, GEN T, GEN p) GEN FqM_mul(GEN a, GEN b, GEN T, GEN p) long FqM_rank(GEN a, GEN T, GEN p) GEN FqM_suppl(GEN x, GEN T, GEN p) GEN QM_inv(GEN M, GEN dM) GEN RgM_Fp_init(GEN a, GEN p, ulong *pp) GEN RgM_RgC_invimage(GEN A, GEN B) GEN RgM_diagonal(GEN m) GEN RgM_diagonal_shallow(GEN m) GEN RgM_Hadamard(GEN a) GEN RgM_inv_upper(GEN a) GEN RgM_invimage(GEN A, GEN B) GEN RgM_solve(GEN a, GEN b) GEN RgM_solve_realimag(GEN x, GEN y) void RgMs_structelim(GEN M, long nbrow, GEN A, GEN *p_col, GEN *p_lin) GEN ZM_det(GEN a) GEN ZM_detmult(GEN A) GEN ZM_gauss(GEN a, GEN b) GEN ZM_imagecompl(GEN x) GEN ZM_indeximage(GEN x) GEN ZM_indexrank(GEN x) GEN ZM_inv(GEN M, GEN dM) GEN ZM_inv_ratlift(GEN M, GEN *pden) long ZM_rank(GEN x) GEN ZlM_gauss(GEN a, GEN b, ulong p, long e, GEN C) GEN closemodinvertible(GEN x, GEN y) GEN deplin(GEN x) GEN det(GEN a) GEN det0(GEN a, long flag) GEN det2(GEN a) GEN detint(GEN x) GEN eigen(GEN x, long prec) GEN gauss(GEN a, GEN b) GEN gaussmodulo(GEN M, GEN D, GEN Y) GEN gaussmodulo2(GEN M, GEN D, GEN Y) GEN gen_Gauss(GEN a, GEN b, void *E, const bb_field *ff) GEN 
gen_Gauss_pivot(GEN x, long *rr, void *E, const bb_field *ff) GEN gen_det(GEN a, void *E, const bb_field *ff) GEN gen_ker(GEN x, long deplin, void *E, const bb_field *ff) GEN gen_matcolmul(GEN a, GEN b, void *E, const bb_field *ff) GEN gen_matmul(GEN a, GEN b, void *E, const bb_field *ff) GEN image(GEN x) GEN image2(GEN x) GEN imagecompl(GEN x) GEN indexrank(GEN x) GEN inverseimage(GEN mat, GEN y) GEN ker(GEN x) GEN keri(GEN x) GEN mateigen(GEN x, long flag, long prec) GEN matimage0(GEN x, long flag) GEN matker0(GEN x, long flag) GEN matsolvemod0(GEN M, GEN D, GEN Y, long flag) long rank(GEN x) GEN reducemodinvertible(GEN x, GEN y) GEN reducemodlll(GEN x, GEN y) GEN split_realimag(GEN x, long r1, long r2) GEN suppl(GEN x) # alglin2.c GEN Flm_charpoly(GEN x, ulong p) GEN Flm_hess(GEN x, ulong p) GEN FpM_charpoly(GEN x, GEN p) GEN FpM_hess(GEN x, GEN p) GEN Frobeniusform(GEN V, long n) GEN QM_minors_coprime(GEN x, GEN pp) GEN QM_ImZ_hnf(GEN x) GEN QM_ImQ_hnf(GEN x) GEN QM_charpoly_ZX(GEN M) GEN QM_charpoly_ZX_bound(GEN M, long bit) GEN QM_charpoly_ZX2_bound(GEN M, GEN M2, long bit) GEN ZM_charpoly(GEN x) GEN adj(GEN x) GEN adjsafe(GEN x) GEN caract(GEN x, long v) GEN caradj(GEN x, long v, GEN *py) GEN carberkowitz(GEN x, long v) GEN carhess(GEN x, long v) GEN charpoly(GEN x, long v) GEN charpoly0(GEN x, long v, long flag) GEN gnorm(GEN x) GEN gnorml1(GEN x, long prec) GEN gnorml1_fake(GEN x) GEN gnormlp(GEN x, GEN p, long prec) GEN gnorml2(GEN x) GEN gsupnorm(GEN x, long prec) void gsupnorm_aux(GEN x, GEN *m, GEN *msq, long prec) GEN gtrace(GEN x) GEN hess(GEN x) GEN intersect(GEN x, GEN y) GEN jacobi(GEN a, long prec) GEN matadjoint0(GEN x, long flag) GEN matcompanion(GEN x) GEN matrixqz0(GEN x, GEN pp) GEN minpoly(GEN x, long v) GEN qfgaussred(GEN a) GEN qfgaussred_positive(GEN a) GEN qfsign(GEN a) # alglin3.c GEN apply0(GEN f, GEN A) GEN diagonal(GEN x) GEN diagonal_shallow(GEN x) GEN extract0(GEN x, GEN l1, GEN l2) GEN fold0(GEN f, GEN A) GEN genapply(void *E, GEN (*f)(void *E, GEN x), GEN A) GEN genfold(void *E, GEN (*f)(void *E, GEN x, GEN y), GEN A) GEN genindexselect(void *E, long (*f)(void *E, GEN x), GEN A) GEN genselect(void *E, long (*f)(void *E, GEN x), GEN A) GEN gtomat(GEN x) GEN gtrans(GEN x) GEN matmuldiagonal(GEN x, GEN d) GEN matmultodiagonal(GEN x, GEN y) GEN matslice0(GEN A, long x1, long x2, long y1, long y2) GEN parapply(GEN V, GEN C) void parfor(GEN a, GEN b, GEN code, void *E, long call(void*, GEN, GEN)) void parforprime(GEN a, GEN b, GEN code, void *E, long call(void*, GEN, GEN)) void parforvec(GEN x, GEN code, long flag, void *E, long call(void*, GEN, GEN)) GEN parselect(GEN C, GEN D, long flag) GEN select0(GEN A, GEN f, long flag) GEN shallowextract(GEN x, GEN L) GEN shallowtrans(GEN x) GEN vecapply(void *E, GEN (*f)(void* E, GEN x), GEN x) GEN veccatapply(void *E, GEN (*f)(void* E, GEN x), GEN x) GEN veccatselapply(void *Epred, long (*pred)(void* E, GEN x), void *Efun, GEN (*fun)(void* E, GEN x), GEN A) GEN vecrange(GEN a, GEN b) GEN vecrangess(long a, long b) GEN vecselapply(void *Epred, long (*pred)(void* E, GEN x), void *Efun, GEN (*fun)(void* E, GEN x), GEN A) GEN vecselect(void *E, long (*f)(void* E, GEN x), GEN A) GEN vecslice0(GEN A, long y1, long y2) GEN vecsum(GEN v) # anal.c void addhelp(const char *e, char *s) void alias0(const char *s, const char *old) GEN compile_str(const char *s) GEN chartoGENstr(char c) long delete_var() long fetch_user_var(const char *s) long fetch_var() long fetch_var_higher() GEN fetch_var_value(long vx, GEN t) char * 
gp_embedded(const char *s) void gp_embedded_init(long rsize, long vsize) GEN gp_read_str(const char *t) entree* install(void *f, const char *name, const char *code) entree* is_entry(const char *s) void kill0(const char *e) void pari_var_close() void pari_var_init() long pari_var_next() long pari_var_next_temp() long pari_var_create(entree *ep) void name_var(long n, const char *s) GEN readseq(char *t) GEN* safegel(GEN x, long l) long* safeel(GEN x, long l) GEN* safelistel(GEN x, long l) GEN* safegcoeff(GEN x, long a, long b) GEN strntoGENstr(const char *s, long n0) GEN strtoGENstr(const char *s) GEN strtoi(const char *s) GEN strtor(const char *s, long prec) GEN type0(GEN x) GEN varhigher(const char *s, long v) GEN varlower(const char *s, long v) # aprcl.c long isprimeAPRCL(GEN N) # Qfb.c GEN Qfb0(GEN x, GEN y, GEN z, GEN d, long prec) void check_quaddisc(GEN x, long *s, long *r, const char *f) void check_quaddisc_imag(GEN x, long *r, const char *f) void check_quaddisc_real(GEN x, long *r, const char *f) long cornacchia(GEN d, GEN p, GEN *px, GEN *py) long cornacchia2(GEN d, GEN p, GEN *px, GEN *py) GEN nucomp(GEN x, GEN y, GEN L) GEN nudupl(GEN x, GEN L) GEN nupow(GEN x, GEN n, GEN L) GEN primeform(GEN x, GEN p, long prec) GEN primeform_u(GEN x, ulong p) int qfb_equal1(GEN f) GEN qfbcompraw(GEN x, GEN y) GEN qfbpowraw(GEN x, long n) GEN qfbred0(GEN x, long flag, GEN D, GEN isqrtD, GEN sqrtD) GEN qfbredsl2(GEN q, GEN S) GEN qfbsolve(GEN Q, GEN n) GEN qfi(GEN x, GEN y, GEN z) GEN qfi_1(GEN x) GEN qfi_Shanks(GEN a, GEN g, long n) GEN qfi_log(GEN a, GEN g, GEN o) GEN qfi_order(GEN q, GEN o) GEN qficomp(GEN x, GEN y) GEN qficompraw(GEN x, GEN y) GEN qfipowraw(GEN x, long n) GEN qfisolvep(GEN Q, GEN p) GEN qfisqr(GEN x) GEN qfisqrraw(GEN x) GEN qfr(GEN x, GEN y, GEN z, GEN d) GEN qfr3_comp(GEN x, GEN y, qfr_data *S) GEN qfr3_pow(GEN x, GEN n, qfr_data *S) GEN qfr3_red(GEN x, qfr_data *S) GEN qfr3_rho(GEN x, qfr_data *S) GEN qfr3_to_qfr(GEN x, GEN z) GEN qfr5_comp(GEN x, GEN y, qfr_data *S) GEN qfr5_dist(GEN e, GEN d, long prec) GEN qfr5_pow(GEN x, GEN n, qfr_data *S) GEN qfr5_red(GEN x, qfr_data *S) GEN qfr5_rho(GEN x, qfr_data *S) GEN qfr5_to_qfr(GEN x, GEN d0) GEN qfr_1(GEN x) void qfr_data_init(GEN D, long prec, qfr_data *S) GEN qfr_to_qfr5(GEN x, long prec) GEN qfrcomp(GEN x, GEN y) GEN qfrcompraw(GEN x, GEN y) GEN qfrpow(GEN x, GEN n) GEN qfrpowraw(GEN x, long n) GEN qfrsolvep(GEN Q, GEN p) GEN qfrsqr(GEN x) GEN qfrsqrraw(GEN x) GEN quadgen(GEN x) GEN quadpoly(GEN x) GEN quadpoly0(GEN x, long v) GEN redimag(GEN x) GEN redreal(GEN x) GEN redrealnod(GEN x, GEN isqrtD) GEN rhoreal(GEN x) GEN rhorealnod(GEN x, GEN isqrtD) # arith1.c ulong Fl_order(ulong a, ulong o, ulong p) GEN Fl_powers(ulong x, long n, ulong p) GEN Fl_powers_pre(ulong x, long n, ulong p, ulong pi) ulong Fl_powu(ulong x, ulong n, ulong p) ulong Fl_powu_pre(ulong x, ulong n, ulong p, ulong pi) ulong Fl_sqrt(ulong a, ulong p) ulong Fl_sqrt_pre(ulong a, ulong p, ulong pi) ulong Fl_sqrtl(ulong a, ulong l, ulong p) ulong Fl_sqrtl_pre(ulong a, ulong l, ulong p, ulong pi) GEN Fp_factored_order(GEN a, GEN o, GEN p) int Fp_ispower(GEN x, GEN K, GEN p) GEN Fp_log(GEN a, GEN g, GEN ord, GEN p) GEN Fp_order(GEN a, GEN o, GEN p) GEN Fp_pow(GEN a, GEN n, GEN m) GEN Fp_powers(GEN x, long n, GEN p) GEN Fp_pows(GEN A, long k, GEN N) GEN Fp_powu(GEN x, ulong k, GEN p) GEN Fp_sqrt(GEN a, GEN p) GEN Fp_sqrtn(GEN a, GEN n, GEN p, GEN *zetan) GEN Z_ZV_mod(GEN P, GEN xa) GEN Z_chinese(GEN a, GEN b, GEN A, GEN B) GEN Z_chinese_all(GEN a, GEN b, GEN A, 
GEN B, GEN *pC) GEN Z_chinese_coprime(GEN a, GEN b, GEN A, GEN B, GEN C) GEN Z_chinese_post(GEN a, GEN b, GEN C, GEN U, GEN d) void Z_chinese_pre(GEN A, GEN B, GEN *pC, GEN *pU, GEN *pd) GEN Z_factor_listP(GEN N, GEN L) long Z_isanypower(GEN x, GEN *y) long Z_isfundamental(GEN x) long Z_ispow2(GEN x) long Z_ispowerall(GEN x, ulong k, GEN *pt) long Z_issquareall(GEN x, GEN *pt) GEN Z_nv_mod(GEN P, GEN xa) GEN ZV_allpnqn(GEN x) GEN ZV_chinese(GEN A, GEN P, GEN *pt_mod) GEN ZV_chinese_tree(GEN A, GEN P, GEN tree, GEN *pt_mod) GEN ZV_producttree(GEN xa) GEN ZX_nv_mod_tree(GEN P, GEN xa, GEN T) GEN Zideallog(GEN bid, GEN x) long Zp_issquare(GEN a, GEN p) GEN bestappr(GEN x, GEN k) GEN bestapprPade(GEN x, long B) long cgcd(long a, long b) GEN chinese(GEN x, GEN y) GEN chinese1(GEN x) GEN chinese1_coprime_Z(GEN x) GEN classno(GEN x) GEN classno2(GEN x) long clcm(long a, long b) GEN contfrac0(GEN x, GEN b, long flag) GEN contfracpnqn(GEN x, long n) GEN fibo(long n) GEN gboundcf(GEN x, long k) GEN gcf(GEN x) GEN gcf2(GEN b, GEN x) const bb_field *get_Fp_field(void **E, GEN p) ulong pgener_Fl(ulong p) ulong pgener_Fl_local(ulong p, GEN L) GEN pgener_Fp(GEN p) GEN pgener_Fp_local(GEN p, GEN L) ulong pgener_Zl(ulong p) GEN pgener_Zp(GEN p) long gisanypower(GEN x, GEN *pty) GEN gissquare(GEN x) GEN gissquareall(GEN x, GEN *pt) GEN hclassno(GEN x) long hilbert(GEN x, GEN y, GEN p) long hilbertii(GEN x, GEN y, GEN p) long isfundamental(GEN x) long ispolygonal(GEN x, GEN S, GEN *N) long ispower(GEN x, GEN k, GEN *pty) long isprimepower(GEN x, GEN *pty) long ispseudoprimepower(GEN n, GEN *pt) long issquare(GEN x) long issquareall(GEN x, GEN *pt) long krois(GEN x, long y) long kroiu(GEN x, ulong y) long kronecker(GEN x, GEN y) long krosi(long s, GEN x) long kross(long x, long y) long kroui(ulong x, GEN y) long krouu(ulong x, ulong y) GEN lcmii(GEN a, GEN b) long logint0(GEN B, GEN y, GEN *ptq) GEN mpfact(long n) GEN muls_interval(long a, long b) GEN mulu_interval(ulong a, ulong b) GEN ncV_chinese_center(GEN A, GEN P, GEN *pt_mod) GEN nmV_chinese_center(GEN A, GEN P, GEN *pt_mod) GEN odd_prime_divisors(GEN q) GEN order(GEN x) GEN pnqn(GEN x) GEN qfbclassno0(GEN x, long flag) GEN quaddisc(GEN x) GEN quadregulator(GEN x, long prec) GEN quadunit(GEN x) ulong rootsof1_Fl(ulong n, ulong p) GEN rootsof1_Fp(GEN n, GEN p) GEN rootsof1u_Fp(ulong n, GEN p) long sisfundamental(long x) GEN sqrtint(GEN a) GEN ramanujantau(GEN n) ulong ugcd(ulong a, ulong b) long uisprimepower(ulong n, ulong *p) long uissquare(ulong A) long uissquareall(ulong A, ulong *sqrtA) long unegisfundamental(ulong x) long uposisfundamental(ulong x) GEN znlog(GEN x, GEN g, GEN o) GEN znorder(GEN x, GEN o) GEN znprimroot(GEN m) GEN znstar(GEN x) GEN znstar0(GEN N, long flag) # arith2.c GEN Z_smoothen(GEN N, GEN L, GEN *pP, GEN *pe) GEN boundfact(GEN n, ulong lim) GEN check_arith_pos(GEN n, const char *f) GEN check_arith_non0(GEN n, const char *f) GEN check_arith_all(GEN n, const char *f) GEN clean_Z_factor(GEN f) GEN corepartial(GEN n, long l) GEN core0(GEN n, long flag) GEN core2(GEN n) GEN core2partial(GEN n, long l) GEN coredisc(GEN n) GEN coredisc0(GEN n, long flag) GEN coredisc2(GEN n) long corediscs(long D, ulong *f) GEN digits(GEN N, GEN B) GEN divisors(GEN n) GEN divisorsu(ulong n) GEN divisorsu_fact(GEN P, GEN e) GEN factor_pn_1(GEN p, ulong n) GEN factor_pn_1_limit(GEN p, long n, ulong lim) GEN factoru_pow(ulong n) GEN fromdigits(GEN x, GEN B) GEN fuse_Z_factor(GEN f, GEN B) GEN gen_digits(GEN x, GEN B, long n, void *E, bb_ring *r, GEN 
(*div)(void *E, GEN x, GEN y, GEN *r)) GEN gen_fromdigits(GEN x, GEN B, void *E, bb_ring *r) byteptr initprimes(ulong maxnum, long *lenp, ulong *lastp) void initprimetable(ulong maxnum) ulong init_primepointer_geq(ulong a, byteptr *pd) ulong init_primepointer_gt(ulong a, byteptr *pd) ulong init_primepointer_leq(ulong a, byteptr *pd) ulong init_primepointer_lt(ulong a, byteptr *pd) int is_Z_factor(GEN f) int is_Z_factornon0(GEN f) int is_Z_factorpos(GEN f) ulong maxprime() void maxprime_check(ulong c) GEN sumdigits(GEN n) GEN sumdigits0(GEN n, GEN B) ulong sumdigitsu(ulong n) GEN usumdiv_fact(GEN f) GEN usumdivk_fact(GEN f, ulong k) # base1.c GEN FpX_FpC_nfpoleval(GEN nf, GEN pol, GEN a, GEN p) GEN embed_T2(GEN x, long r1) GEN embednorm_T2(GEN x, long r1) GEN embed_norm(GEN x, long r1) void check_ZKmodule(GEN x, const char *s) void checkbid(GEN bid) GEN checkbnf(GEN bnf) void checkbnr(GEN bnr) void checkbnrgen(GEN bnr) void checkabgrp(GEN v) void checksqmat(GEN x, long N) GEN checknf(GEN nf) GEN checknfelt_mod(GEN nf, GEN x, const char *s) void checkprid(GEN bid) void checkrnf(GEN rnf) GEN factoredpolred(GEN x, GEN fa) GEN factoredpolred2(GEN x, GEN fa) GEN galoisapply(GEN nf, GEN aut, GEN x) GEN get_bnf(GEN x, long *t) GEN get_bnfpol(GEN x, GEN *bnf, GEN *nf) GEN get_nf(GEN x, long *t) GEN get_nfpol(GEN x, GEN *nf) GEN get_prid(GEN x) GEN idealfrobenius(GEN nf, GEN gal, GEN pr) GEN idealfrobenius_aut(GEN nf, GEN gal, GEN pr, GEN aut) GEN idealramfrobenius(GEN nf, GEN gal, GEN pr, GEN ram) GEN idealramgroups(GEN nf, GEN gal, GEN pr) GEN nf_get_allroots(GEN nf) long nf_get_prec(GEN x) GEN nfcertify(GEN x) GEN nfgaloismatrix(GEN nf, GEN s) GEN nfgaloispermtobasis(GEN nf, GEN gal) void nfinit_step1(nfbasic_t *T, GEN x, long flag) GEN nfinit_step2(nfbasic_t *T, long flag, long prec) GEN nfinit(GEN x, long prec) GEN nfinit0(GEN x, long flag, long prec) GEN nfinitall(GEN x, long flag, long prec) GEN nfinitred(GEN x, long prec) GEN nfinitred2(GEN x, long prec) GEN nfisincl(GEN a, GEN b) GEN nfisisom(GEN a, GEN b) GEN nfnewprec(GEN nf, long prec) GEN nfnewprec_shallow(GEN nf, long prec) GEN nfpoleval(GEN nf, GEN pol, GEN a) long nftyp(GEN x) GEN polredord(GEN x) GEN polgalois(GEN x, long prec) GEN polred(GEN x) GEN polred0(GEN x, long flag, GEN fa) GEN polred2(GEN x) GEN polredabs(GEN x) GEN polredabs0(GEN x, long flag) GEN polredabs2(GEN x) GEN polredabsall(GEN x, long flun) GEN polredbest(GEN x, long flag) GEN rnfpolredabs(GEN nf, GEN pol, long flag) GEN rnfpolredbest(GEN nf, GEN relpol, long flag) GEN smallpolred(GEN x) GEN smallpolred2(GEN x) GEN tschirnhaus(GEN x) GEN ZX_Q_normalize(GEN pol, GEN *ptlc) GEN ZX_Z_normalize(GEN pol, GEN *ptk) GEN ZX_to_monic(GEN pol, GEN *lead) GEN ZX_primitive_to_monic(GEN pol, GEN *lead) # base2.c GEN Fq_to_nf(GEN x, GEN modpr) GEN FqM_to_nfM(GEN z, GEN modpr) GEN FqV_to_nfV(GEN z, GEN modpr) GEN FqX_to_nfX(GEN x, GEN modpr) GEN Rg_nffix(const char *f, GEN T, GEN c, int lift) GEN RgV_nffix(const char *f, GEN T, GEN P, int lift) GEN RgX_nffix(const char *s, GEN nf, GEN x, int lift) long ZpX_disc_val(GEN f, GEN p) GEN ZpX_gcd(GEN f1, GEN f2, GEN p, GEN pm) GEN ZpX_reduced_resultant(GEN x, GEN y, GEN p, GEN pm) GEN ZpX_reduced_resultant_fast(GEN f, GEN g, GEN p, long M) long ZpX_resultant_val(GEN f, GEN g, GEN p, long M) void checkmodpr(GEN modpr) GEN ZX_compositum_disjoint(GEN A, GEN B) GEN compositum(GEN P, GEN Q) GEN compositum2(GEN P, GEN Q) GEN nfdisc(GEN x) GEN get_modpr(GEN x) GEN indexpartial(GEN P, GEN DP) GEN modpr_genFq(GEN modpr) GEN nf_to_Fq_init(GEN 
nf, GEN *pr, GEN *T, GEN *p) GEN nf_to_Fq(GEN nf, GEN x, GEN modpr) GEN nfM_to_FqM(GEN z, GEN nf, GEN modpr) GEN nfV_to_FqV(GEN z, GEN nf, GEN modpr) GEN nfX_to_FqX(GEN x, GEN nf, GEN modpr) GEN nfbasis(GEN x, GEN *y, GEN p) # PARI <= (2,12,0) GEN nfcompositum(GEN nf, GEN A, GEN B, long flag) void nfmaxord(nfmaxord_t *S, GEN T, long flag) GEN nfmodprinit(GEN nf, GEN pr) GEN nfreducemodpr(GEN nf, GEN x, GEN modpr) GEN nfsplitting(GEN T, GEN D) GEN polcompositum0(GEN P, GEN Q, long flag) GEN idealprimedec(GEN nf, GEN p) GEN idealprimedec_limit_f(GEN nf, GEN p, long f) GEN idealprimedec_limit_norm(GEN nf, GEN p, GEN B) GEN rnfbasis(GEN bnf, GEN order) GEN rnfdedekind(GEN nf, GEN T, GEN pr, long flag) GEN rnfdet(GEN nf, GEN order) GEN rnfdiscf(GEN nf, GEN pol) GEN rnfequation(GEN nf, GEN pol) GEN rnfequation0(GEN nf, GEN pol, long flall) GEN rnfequation2(GEN nf, GEN pol) GEN nf_rnfeq(GEN nf, GEN relpol) GEN nf_rnfeqsimple(GEN nf, GEN relpol) GEN rnfequationall(GEN A, GEN B, long *pk, GEN *pLPRS) GEN rnfhnfbasis(GEN bnf, GEN order) long rnfisfree(GEN bnf, GEN order) GEN rnflllgram(GEN nf, GEN pol, GEN order, long prec) GEN rnfpolred(GEN nf, GEN pol, long prec) GEN rnfpseudobasis(GEN nf, GEN pol) GEN rnfsimplifybasis(GEN bnf, GEN order) GEN rnfsteinitz(GEN nf, GEN order) long factorial_lval(ulong n, ulong p) GEN zk_to_Fq_init(GEN nf, GEN *pr, GEN *T, GEN *p) GEN zk_to_Fq(GEN x, GEN modpr) GEN zkmodprinit(GEN nf, GEN pr) # base3.c GEN Idealstar(GEN nf, GEN x, long flun) GEN RgC_to_nfC(GEN nf, GEN x) GEN RgM_to_nfM(GEN nf, GEN x) GEN RgX_to_nfX(GEN nf, GEN pol) GEN algtobasis(GEN nf, GEN x) GEN basistoalg(GEN nf, GEN x) GEN gpnfvalrem(GEN nf, GEN x, GEN pr, GEN *py) GEN ideallist(GEN nf, long bound) GEN ideallist0(GEN nf, long bound, long flag) GEN ideallistarch(GEN nf, GEN list, GEN arch) GEN idealprincipalunits(GEN nf, GEN pr, long e) GEN idealstar0(GEN nf, GEN x, long flag) GEN indices_to_vec01(GEN archp, long r) GEN matalgtobasis(GEN nf, GEN x) GEN matbasistoalg(GEN nf, GEN x) GEN nf_to_scalar_or_alg(GEN nf, GEN x) GEN nf_to_scalar_or_basis(GEN nf, GEN x) GEN nfadd(GEN nf, GEN x, GEN y) GEN nfarchstar(GEN nf, GEN x, GEN arch) GEN nfdiv(GEN nf, GEN x, GEN y) GEN nfdiveuc(GEN nf, GEN a, GEN b) GEN nfdivrem(GEN nf, GEN a, GEN b) GEN nfembed(GEN nf, GEN x, long k) GEN nfinv(GEN nf, GEN x) GEN nfinvmodideal(GEN nf, GEN x, GEN ideal) int nfchecksigns(GEN nf, GEN x, GEN pl) GEN nfmod(GEN nf, GEN a, GEN b) GEN nfmul(GEN nf, GEN x, GEN y) GEN nfmuli(GEN nf, GEN x, GEN y) GEN nfnorm(GEN nf, GEN x) GEN nfpow(GEN nf, GEN x, GEN k) GEN nfpow_u(GEN nf, GEN z, ulong n) GEN nfpowmodideal(GEN nf, GEN x, GEN k, GEN ideal) GEN nfsign(GEN nf, GEN alpha) GEN nfsign_arch(GEN nf, GEN alpha, GEN arch) GEN nfsign_from_logarch(GEN Larch, GEN invpi, GEN archp) GEN nfsqr(GEN nf, GEN x) GEN nfsqri(GEN nf, GEN x) GEN nftrace(GEN nf, GEN x) long nfval(GEN nf, GEN x, GEN vp) long nfvalrem(GEN nf, GEN x, GEN pr, GEN *py) GEN polmod_nffix(const char *f, GEN rnf, GEN x, int lift) GEN polmod_nffix2(const char *f, GEN T, GEN relpol, GEN x, int lift) int pr_equal(GEN nf, GEN P, GEN Q) GEN rnfalgtobasis(GEN rnf, GEN x) GEN rnfbasistoalg(GEN rnf, GEN x) GEN rnfeltnorm(GEN rnf, GEN x) GEN rnfelttrace(GEN rnf, GEN x) GEN set_sign_mod_divisor(GEN nf, GEN x, GEN y, GEN idele, GEN sarch) GEN vec01_to_indices(GEN arch) GEN vecsmall01_to_indices(GEN v) GEN vecmodii(GEN a, GEN b) GEN ideallog(GEN nf, GEN x, GEN bigideal) GEN multable(GEN nf, GEN x) GEN tablemul(GEN TAB, GEN x, GEN y) GEN tablemul_ei(GEN M, GEN x, long i) GEN 
tablemul_ei_ej(GEN M, long i, long j) GEN tablemulvec(GEN M, GEN x, GEN v) GEN tablesqr(GEN tab, GEN x) GEN ei_multable(GEN nf, long i) long ZC_nfval(GEN nf, GEN x, GEN P) long ZC_nfvalrem(GEN nf, GEN x, GEN P, GEN *t) GEN zk_multable(GEN nf, GEN x) GEN zk_scalar_or_multable(GEN, GEN x) int ZC_prdvd(GEN nf, GEN x, GEN P) # base4.c GEN RM_round_maxrank(GEN G) GEN ZM_famat_limit(GEN fa, GEN limit) GEN famat_Z_gcd(GEN M, GEN n) GEN famat_inv(GEN f) GEN famat_inv_shallow(GEN f) GEN famat_makecoprime(GEN nf, GEN g, GEN e, GEN pr, GEN prk, GEN EX) GEN famat_mul(GEN f, GEN g) GEN famat_pow(GEN f, GEN n) GEN famat_sqr(GEN f) GEN famat_reduce(GEN fa) GEN famat_to_nf(GEN nf, GEN f) GEN famat_to_nf_modideal_coprime(GEN nf, GEN g, GEN e, GEN id, GEN EX) GEN famat_to_nf_moddivisor(GEN nf, GEN g, GEN e, GEN bid) GEN famatsmall_reduce(GEN fa) GEN gpidealval(GEN nf, GEN ix, GEN P) GEN idealtwoelt(GEN nf, GEN ix) GEN idealtwoelt0(GEN nf, GEN ix, GEN a) GEN idealtwoelt2(GEN nf, GEN x, GEN a) GEN idealadd(GEN nf, GEN x, GEN y) GEN idealaddmultoone(GEN nf, GEN list) GEN idealaddtoone(GEN nf, GEN x, GEN y) GEN idealaddtoone_i(GEN nf, GEN x, GEN y) GEN idealaddtoone0(GEN nf, GEN x, GEN y) GEN idealappr(GEN nf, GEN x) GEN idealappr0(GEN nf, GEN x, long fl) GEN idealapprfact(GEN nf, GEN x) GEN idealchinese(GEN nf, GEN x, GEN y) GEN idealcoprime(GEN nf, GEN x, GEN y) GEN idealcoprimefact(GEN nf, GEN x, GEN fy) GEN idealdiv(GEN nf, GEN x, GEN y) GEN idealdiv0(GEN nf, GEN x, GEN y, long flag) GEN idealdivexact(GEN nf, GEN x, GEN y) GEN idealdivpowprime(GEN nf, GEN x, GEN vp, GEN n) GEN idealmulpowprime(GEN nf, GEN x, GEN vp, GEN n) GEN idealfactor(GEN nf, GEN x) GEN idealhnf(GEN nf, GEN x) GEN idealhnf_principal(GEN nf, GEN x) GEN idealhnf_shallow(GEN nf, GEN x) GEN idealhnf_two(GEN nf, GEN vp) GEN idealhnf0(GEN nf, GEN a, GEN b) GEN idealintersect(GEN nf, GEN x, GEN y) GEN idealinv(GEN nf, GEN ix) GEN idealinv_HNF(GEN nf, GEN I) GEN idealred0(GEN nf, GEN I, GEN vdir) GEN idealmul(GEN nf, GEN ix, GEN iy) GEN idealmul0(GEN nf, GEN ix, GEN iy, long flag) GEN idealmul_HNF(GEN nf, GEN ix, GEN iy) GEN idealmulred(GEN nf, GEN ix, GEN iy) GEN idealnorm(GEN nf, GEN x) GEN idealnumden(GEN nf, GEN x) GEN idealpow(GEN nf, GEN ix, GEN n) GEN idealpow0(GEN nf, GEN ix, GEN n, long flag) GEN idealpowred(GEN nf, GEN ix, GEN n) GEN idealpows(GEN nf, GEN ideal, long iexp) GEN idealprodprime(GEN nf, GEN L) GEN idealsqr(GEN nf, GEN x) long idealtyp(GEN *ideal, GEN *arch) long idealval(GEN nf, GEN ix, GEN vp) long isideal(GEN nf, GEN x) GEN idealmin(GEN nf, GEN ix, GEN vdir) GEN nf_get_Gtwist(GEN nf, GEN vdir) GEN nf_get_Gtwist1(GEN nf, long i) GEN nfC_nf_mul(GEN nf, GEN v, GEN x) GEN nfdetint(GEN nf, GEN pseudo) GEN nfdivmodpr(GEN nf, GEN x, GEN y, GEN modpr) GEN nfhnf(GEN nf, GEN x) GEN nfhnf0(GEN nf, GEN x, long flag) GEN nfhnfmod(GEN nf, GEN x, GEN d) GEN nfkermodpr(GEN nf, GEN x, GEN modpr) GEN nfmulmodpr(GEN nf, GEN x, GEN y, GEN modpr) GEN nfpowmodpr(GEN nf, GEN x, GEN k, GEN modpr) GEN nfreduce(GEN nf, GEN x, GEN ideal) GEN nfsnf(GEN nf, GEN x) GEN nfsnf0(GEN nf, GEN x, long flag) GEN nfsolvemodpr(GEN nf, GEN a, GEN b, GEN modpr) GEN to_famat(GEN x, GEN y) GEN to_famat_shallow(GEN x, GEN y) GEN vecdiv(GEN x, GEN y) GEN vecinv(GEN x) GEN vecmul(GEN x, GEN y) GEN vecpow(GEN x, GEN n) # base5.c GEN eltreltoabs(GEN rnfeq, GEN x) GEN eltabstorel(GEN eq, GEN P) GEN eltabstorel_lift(GEN rnfeq, GEN P) GEN rnfeltabstorel(GEN rnf, GEN x) GEN rnfeltdown(GEN rnf, GEN x) GEN rnfeltdown0(GEN rnf, GEN x, long flag) GEN rnfeltreltoabs(GEN rnf, 
GEN x) GEN rnfeltup(GEN rnf, GEN x) GEN rnfeltup0(GEN rnf, GEN x, long flag) GEN rnfidealabstorel(GEN rnf, GEN x) GEN rnfidealdown(GEN rnf, GEN x) GEN rnfidealhnf(GEN rnf, GEN x) GEN rnfidealmul(GEN rnf, GEN x, GEN y) GEN rnfidealnormabs(GEN rnf, GEN x) GEN rnfidealnormrel(GEN rnf, GEN x) GEN rnfidealprimedec(GEN rnf, GEN pr) GEN rnfidealreltoabs(GEN rnf, GEN x) GEN rnfidealreltoabs0(GEN rnf, GEN x, long flag) GEN rnfidealtwoelement(GEN rnf, GEN x) GEN rnfidealup(GEN rnf, GEN x) GEN rnfidealup0(GEN rnf, GEN x, long flag) GEN rnfinit(GEN nf, GEN pol) GEN rnfinit0(GEN nf, GEN pol, long flag) # bb_group.c GEN dlog_get_ordfa(GEN o) GEN dlog_get_ord(GEN o) GEN gen_PH_log(GEN a, GEN g, GEN ord, void *E, const bb_group *grp) GEN gen_Shanks_init(GEN g, long n, void *E, const bb_group *grp) GEN gen_Shanks(GEN T, GEN x, ulong N, void *E, const bb_group *grp) GEN gen_Shanks_sqrtn(GEN a, GEN n, GEN q, GEN *zetan, void *E, const bb_group *grp) GEN gen_gener(GEN o, void *E, const bb_group *grp) GEN gen_ellgens(GEN d1, GEN d2, GEN m, void *E, const bb_group *grp, GEN pairorder(void *E, GEN P, GEN Q, GEN m, GEN F)) GEN gen_ellgroup(GEN N, GEN F, GEN *pt_m, void *E, const bb_group *grp, GEN pairorder(void *E, GEN P, GEN Q, GEN m, GEN F)) GEN gen_factored_order(GEN a, GEN o, void *E, const bb_group *grp) GEN gen_order(GEN x, GEN o, void *E, const bb_group *grp) GEN gen_select_order(GEN o, void *E, const bb_group *grp) GEN gen_plog(GEN x, GEN g0, GEN q, void *E, const bb_group *grp) GEN gen_pow(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_pow_i(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_pow_fold(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_pow_fold_i(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_powers(GEN x, long l, int use_sqr, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN), GEN (*one)(void*)) GEN gen_powu(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_powu_i(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_powu_fold(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_powu_fold_i(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_product(GEN x, void *data, GEN (*mul)(void*, GEN, GEN)) # bb_hnf.c GEN matdetmod(GEN A, GEN d) GEN matimagemod(GEN A, GEN d, GEN* U) GEN matinvmod(GEN A, GEN d) GEN matkermod(GEN A, GEN d, GEN* im) GEN matsolvemod(GEN M, GEN D, GEN Y, long flag) # bern.c GEN bernfrac(long n) GEN bernpol(long k, long v) GEN bernreal(long n, long prec) GEN bernvec(long nomb) void constbern(long n) GEN eulerfrac(long k) GEN eulerpol(long k, long v) GEN eulerreal(long n, long prec) GEN eulervec(long n) # bibli1.c int QR_init(GEN x, GEN *pB, GEN *pQ, GEN *pL, long prec) GEN R_from_QR(GEN x, long prec) GEN RgM_Babai(GEN B, GEN t) int RgM_QR_init(GEN x, GEN *pB, GEN *pQ, GEN *pL, long prec) GEN RgM_gram_schmidt(GEN e, GEN *ptB) GEN Xadic_lindep(GEN x) GEN algdep(GEN x, long n) GEN algdep0(GEN x, long n, long bit) void forqfvec(void *E, long (*fun)(void *, GEN, GEN, double), GEN a, GEN BORNE) void forqfvec0(GEN a, GEN BORNE, GEN code) GEN gaussred_from_QR(GEN x, long prec) GEN lindep0(GEN x, long flag) GEN lindep(GEN x) GEN lindep2(GEN x, long bit) GEN mathouseholder(GEN Q, GEN v) GEN matqr(GEN x, long flag, long prec) GEN minim(GEN a, GEN borne, GEN stockmax) GEN minim_raw(GEN a, GEN borne, GEN stockmax) GEN 
minim2(GEN a, GEN borne, GEN stockmax) GEN padic_lindep(GEN x) GEN perf(GEN a) GEN qfrep0(GEN a, GEN borne, long flag) GEN qfminim0(GEN a, GEN borne, GEN stockmax, long flag, long prec) GEN seralgdep(GEN s, long p, long r) GEN zncoppersmith(GEN P0, GEN N, GEN X, GEN B) # bibli2.c GEN QXQ_reverse(GEN a, GEN T) GEN RgV_polint(GEN X, GEN Y, long v) GEN RgXQ_reverse(GEN a, GEN T) GEN ZV_indexsort(GEN L) long ZV_search(GEN x, GEN y) GEN ZV_sort(GEN L) GEN ZV_sort_uniq(GEN L) GEN ZV_union_shallow(GEN x, GEN y) GEN binomial(GEN x, long k) GEN binomialuu(ulong n, ulong k) int cmp_Flx(GEN x, GEN y) int cmp_RgX(GEN x, GEN y) int cmp_nodata(void *data, GEN x, GEN y) int cmp_prime_ideal(GEN x, GEN y) int cmp_prime_over_p(GEN x, GEN y) int cmp_universal(GEN x, GEN y) GEN convol(GEN x, GEN y) int gen_cmp_RgX(void *data, GEN x, GEN y) GEN polcyclo(long n, long v) GEN polcyclo_eval(long n, GEN x) GEN dirdiv(GEN x, GEN y) GEN dirmul(GEN x, GEN y) GEN gen_indexsort(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) GEN gen_indexsort_uniq(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) long gen_search(GEN x, GEN y, long flag, void *data, int (*cmp)(void*, GEN, GEN)) GEN gen_setminus(GEN set1, GEN set2, int (*cmp)(GEN, GEN)) GEN gen_sort(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) void gen_sort_inplace(GEN x, void *E, int (*cmp)(void*, GEN, GEN), GEN *perm) GEN gen_sort_uniq(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) long getstack() long gettime() long getabstime() GEN getwalltime() GEN gprec(GEN x, long l) GEN gprec_wtrunc(GEN x, long pr) GEN gprec_w(GEN x, long pr) GEN gtoset(GEN x) GEN indexlexsort(GEN x) GEN indexsort(GEN x) GEN indexvecsort(GEN x, GEN k) GEN laplace(GEN x) GEN lexsort(GEN x) GEN mathilbert(long n) GEN matqpascal(long n, GEN q) GEN merge_factor(GEN fx, GEN fy, void *data, int (*cmp)(void *, GEN, GEN)) GEN merge_sort_uniq(GEN x, GEN y, void *data, int (*cmp)(void *, GEN, GEN)) GEN modreverse(GEN x) GEN numtoperm(long n, GEN x) GEN permtonum(GEN x) GEN polhermite(long n, long v) GEN polhermite_eval(long n, GEN x) GEN pollegendre(long n, long v) GEN pollegendre_eval(long n, GEN x) GEN polint(GEN xa, GEN ya, GEN x, GEN *dy) GEN polchebyshev(long n, long kind, long v) GEN polchebyshev_eval(long n, long kind, GEN x) GEN polchebyshev1(long n, long v) GEN polchebyshev2(long n, long v) GEN polrecip(GEN x) GEN setbinop(GEN f, GEN x, GEN y) GEN setintersect(GEN x, GEN y) long setisset(GEN x) GEN setminus(GEN x, GEN y) long setsearch(GEN x, GEN y, long flag) GEN setunion(GEN x, GEN y) GEN sort(GEN x) GEN sort_factor(GEN y, void *data, int (*cmp)(void*, GEN, GEN)) GEN stirling(long n, long m, long flag) GEN stirling1(ulong n, ulong m) GEN stirling2(ulong n, ulong m) long tablesearch(GEN T, GEN x, int (*cmp)(GEN, GEN)) GEN vecbinome(long n) long vecsearch(GEN v, GEN x, GEN k) GEN vecsort(GEN x, GEN k) GEN vecsort0(GEN x, GEN k, long flag) long zv_search(GEN x, long y) # bit.c GEN bits_to_int(GEN x, long l) ulong bits_to_u(GEN v, long l) GEN binaire(GEN x) GEN binary_2k(GEN x, long k) GEN binary_2k_nv(GEN x, long k) GEN binary_zv(GEN x) long bittest(GEN x, long n) GEN fromdigits_2k(GEN x, long k) GEN gbitand(GEN x, GEN y) GEN gbitneg(GEN x, long n) GEN gbitnegimply(GEN x, GEN y) GEN gbitor(GEN x, GEN y) GEN gbittest(GEN x, long n) GEN gbitxor(GEN x, GEN y) long hammingweight(GEN n) GEN ibitand(GEN x, GEN y) GEN ibitnegimply(GEN x, GEN y) GEN ibitor(GEN x, GEN y) GEN ibitxor(GEN x, GEN y) GEN nv_fromdigits_2k(GEN x, long k) # buch1.c GEN Buchquad(GEN D, double c1, double c2, long prec) GEN quadclassunit0(GEN 
x, long flag, GEN data, long prec) GEN quadhilbert(GEN D, long prec) GEN quadray(GEN bnf, GEN f, long prec) # buch2.c GEN Buchall(GEN P, long flag, long prec) GEN Buchall_param(GEN P, double bach, double bach2, long nbrelpid, long flun, long prec) GEN bnfcompress(GEN bnf) GEN bnfinit0(GEN P, long flag, GEN data, long prec) GEN bnfisprincipal0(GEN bnf, GEN x, long flall) GEN bnfisunit(GEN bignf, GEN x) GEN bnfnewprec(GEN nf, long prec) GEN bnfnewprec_shallow(GEN nf, long prec) GEN bnrnewprec(GEN bnr, long prec) GEN bnrnewprec_shallow(GEN bnr, long prec) GEN isprincipalfact(GEN bnf, GEN C, GEN L, GEN f, long flag) GEN isprincipalfact_or_fail(GEN bnf, GEN C, GEN P, GEN e) GEN isprincipal(GEN bnf, GEN x) GEN nfcyclotomicunits(GEN nf, GEN zu) GEN nfsign_units(GEN bnf, GEN archp, int add_zu) GEN signunits(GEN bignf) # buch3.c GEN ABC_to_bnr(GEN A, GEN B, GEN C, GEN *H, int gen) GEN Buchray(GEN bnf, GEN module, long flag) GEN bnrautmatrix(GEN bnr, GEN aut) GEN bnrchar(GEN bnr, GEN g, GEN v) GEN bnrchar_primitive(GEN bnr, GEN chi, GEN bnrc) GEN bnrclassno(GEN bignf, GEN ideal) GEN bnrclassno0(GEN A, GEN B, GEN C) GEN bnrclassnolist(GEN bnf, GEN listes) GEN bnrconductor0(GEN A, GEN B, GEN C, long flag) GEN bnrconductor(GEN bnr, GEN H0, long flag) GEN bnrconductor_i(GEN bnr, GEN H0, long flag) GEN bnrconductorofchar(GEN bnr, GEN chi) GEN bnrdisc0(GEN A, GEN B, GEN C, long flag) GEN bnrdisc(GEN bnr, GEN H, long flag) GEN bnrdisclist0(GEN bnf, GEN borne, GEN arch) GEN bnrgaloismatrix(GEN bnr, GEN aut) GEN bnrgaloisapply(GEN bnr, GEN mat, GEN x) GEN bnrinit0(GEN bignf, GEN ideal, long flag) long bnrisconductor0(GEN A, GEN B, GEN C) long bnrisconductor(GEN bnr, GEN H) long bnrisgalois(GEN bnr, GEN M, GEN H) GEN bnrisprincipal(GEN bnf, GEN x, long flag) GEN bnrsurjection(GEN bnr1, GEN bnr2) GEN buchnarrow(GEN bignf) long bnfcertify(GEN bnf) long bnfcertify0(GEN bnf, long flag) GEN decodemodule(GEN nf, GEN fa) GEN discrayabslist(GEN bnf, GEN listes) GEN discrayabslistarch(GEN bnf, GEN arch, ulong bound) GEN discrayabslistlong(GEN bnf, ulong bound) GEN idealmoddivisor(GEN bnr, GEN x) GEN isprincipalray(GEN bnf, GEN x) GEN isprincipalraygen(GEN bnf, GEN x) GEN rnfconductor(GEN bnf, GEN polrel) long rnfisabelian(GEN nf, GEN pol) GEN rnfnormgroup(GEN bnr, GEN polrel) GEN subgrouplist0(GEN bnr, GEN indexbound, long all) # buch4.c GEN bnfisnorm(GEN bnf, GEN x, long flag) GEN rnfisnorm(GEN S, GEN x, long flag) GEN rnfisnorminit(GEN bnf, GEN relpol, int galois) GEN bnfissunit(GEN bnf, GEN suni, GEN x) GEN bnfsunit(GEN bnf, GEN s, long PREC) long nfhilbert(GEN bnf, GEN a, GEN b) long nfhilbert0(GEN bnf, GEN a, GEN b, GEN p) long hyperell_locally_soluble(GEN pol, GEN p) long nf_hyperell_locally_soluble(GEN nf, GEN pol, GEN p) # char.c int char_check(GEN cyc, GEN chi) GEN charconj(GEN cyc, GEN chi) GEN charconj0(GEN cyc, GEN chi) GEN chardiv(GEN x, GEN a, GEN b) GEN chardiv0(GEN x, GEN a, GEN b) GEN chareval(GEN G, GEN chi, GEN n, GEN z) GEN charker(GEN cyc, GEN chi) GEN charker0(GEN cyc, GEN chi) GEN charmul(GEN x, GEN a, GEN b) GEN charmul0(GEN x, GEN a, GEN b) GEN charorder(GEN cyc, GEN x) GEN charorder0(GEN x, GEN chi) GEN char_denormalize(GEN cyc, GEN D, GEN chic) GEN char_normalize(GEN chi, GEN ncyc) GEN char_rootof1(GEN d, long prec) GEN char_rootof1_u(ulong d, long prec) GEN char_simplify(GEN D, GEN C) GEN cyc_normalize(GEN c) int zncharcheck(GEN G, GEN chi) GEN zncharconj(GEN G, GEN chi) GEN znchardiv(GEN G, GEN a, GEN b) GEN zncharker(GEN G, GEN chi) GEN znchareval(GEN G, GEN chi, GEN n, GEN z) GEN 
zncharinduce(GEN G, GEN chi, GEN N) long zncharisodd(GEN G, GEN chi) GEN zncharmul(GEN G, GEN a, GEN b) GEN zncharorder(GEN G, GEN chi) int znconrey_check(GEN cyc, GEN chi) GEN znconrey_normalized(GEN G, GEN chi) GEN znconreychar(GEN bid, GEN m) GEN znconreyfromchar_normalized(GEN bid, GEN chi) GEN znconreyconductor(GEN bid, GEN co, GEN *pm) GEN znconreyexp(GEN bid, GEN x) GEN znconreyfromchar(GEN bid, GEN chi) GEN znconreylog(GEN bid, GEN x) GEN znconreylog_normalize(GEN G, GEN m) # compile.c GEN closure_deriv(GEN G) long localvars_find(GEN pack, entree *ep) GEN localvars_read_str(const char *str, GEN pack) GEN snm_closure(entree *ep, GEN data) GEN strtoclosure(const char *s, long n, ...) GEN strtofunction(const char *s) # concat.c GEN gconcat(GEN x, GEN y) GEN gconcat1(GEN x) GEN matconcat(GEN v) GEN shallowconcat(GEN x, GEN y) GEN shallowconcat1(GEN x) GEN shallowmatconcat(GEN v) GEN vconcat(GEN A, GEN B) # default.c extern int d_SILENT, d_ACKNOWLEDGE, d_INITRC, d_RETURN GEN default0(const char *a, const char *b) long getrealprecision() entree *pari_is_default(const char *s) GEN sd_TeXstyle(const char *v, long flag) GEN sd_colors(const char *v, long flag) GEN sd_compatible(const char *v, long flag) GEN sd_datadir(const char *v, long flag) GEN sd_debug(const char *v, long flag) GEN sd_debugfiles(const char *v, long flag) GEN sd_debugmem(const char *v, long flag) GEN sd_factor_add_primes(const char *v, long flag) GEN sd_factor_proven(const char *v, long flag) GEN sd_format(const char *v, long flag) GEN sd_histsize(const char *v, long flag) GEN sd_log(const char *v, long flag) GEN sd_logfile(const char *v, long flag) GEN sd_nbthreads(const char *v, long flag) GEN sd_new_galois_format(const char *v, long flag) GEN sd_output(const char *v, long flag) GEN sd_parisize(const char *v, long flag) GEN sd_parisizemax(const char *v, long flag) GEN sd_path(const char *v, long flag) GEN sd_prettyprinter(const char *v, long flag) GEN sd_primelimit(const char *v, long flag) GEN sd_realbitprecision(const char *v, long flag) GEN sd_realprecision(const char *v, long flag) GEN sd_secure(const char *v, long flag) GEN sd_seriesprecision(const char *v, long flag) GEN sd_simplify(const char *v, long flag) GEN sd_sopath(char *v, int flag) GEN sd_strictargs(const char *v, long flag) GEN sd_strictmatch(const char *v, long flag) GEN sd_string(const char *v, long flag, const char *s, char **f) GEN sd_threadsize(const char *v, long flag) GEN sd_threadsizemax(const char *v, long flag) GEN sd_toggle(const char *v, long flag, const char *s, int *ptn) GEN sd_ulong(const char *v, long flag, const char *s, ulong *ptn, ulong Min, ulong Max, const char **msg) GEN setdefault(const char *s, const char *v, long flag) long setrealprecision(long n, long *prec) # gplib.c GEN sd_breakloop(const char *v, long flag) GEN sd_echo(const char *v, long flag) GEN sd_graphcolormap(const char *v, long flag) GEN sd_graphcolors(const char *v, long flag) GEN sd_help(const char *v, long flag) GEN sd_histfile(const char *v, long flag) GEN sd_lines(const char *v, long flag) GEN sd_linewrap(const char *v, long flag) GEN sd_prompt(const char *v, long flag) GEN sd_prompt_cont(const char *v, long flag) GEN sd_psfile(const char *v, long flag) GEN sd_readline(const char *v, long flag) GEN sd_recover(const char *v, long flag) GEN sd_timer(const char *v, long flag) void pari_hit_return() void gp_load_gprc() int gp_meta(const char *buf, int ismain) const char **gphelp_keyword_list() void pari_center(const char *s) void pari_print_version() const char 
*gp_format_time(long delay) const char *gp_format_prompt(const char *p) void pari_alarm(long s) GEN gp_alarm(long s, GEN code) GEN gp_input() void gp_allocatemem(GEN z) int gp_handle_exception(long numerr) void gp_alarm_handler(int sig) void gp_sigint_fun() extern int h_REGULAR, h_LONG, h_APROPOS, h_RL void gp_help(const char *s, long flag) void gp_echo_and_log(const char *prompt, const char *s) void print_fun_list(char **list, long nbli) # dirichlet.c GEN direxpand(GEN a, long L) GEN direuler(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, GEN c) # ellanal.c GEN ellanalyticrank(GEN e, GEN eps, long prec) GEN ellanalyticrank_bitprec(GEN e, GEN eps, long bitprec) GEN ellanal_globalred_all(GEN e, GEN *N, GEN *cb, GEN *tam) GEN ellheegner(GEN e) GEN ellL1(GEN E, long r, long prec) GEN ellL1_bitprec(GEN E, long r, long bitprec) # elldata.c GEN ellconvertname(GEN s) GEN elldatagenerators(GEN E) GEN ellidentify(GEN E) GEN ellsearch(GEN A) GEN ellsearchcurve(GEN name) void forell(void *E, long call(void*, GEN), long a, long b, long flag) # ellfromeqn.c GEN ellfromeqn(GEN s) # elliptic.c extern int t_ELL_Rg, t_ELL_Q, t_ELL_Qp, t_ELL_Fp, t_ELL_Fq, t_ELL_NF long ellQ_get_CM(GEN e) int ell_is_integral(GEN E) GEN ellbasechar(GEN E) GEN akell(GEN e, GEN n) GEN anell(GEN e, long n) GEN anellsmall(GEN e, long n) GEN bilhell(GEN e, GEN z1, GEN z2, long prec) void checkell(GEN e) void checkell_Fq(GEN e) void checkell_Q(GEN e) void checkell_Qp(GEN e) void checkellisog(GEN v) void checkellpt(GEN z) void checkell5(GEN e) GEN ec_bmodel(GEN e) GEN ec_f_evalx(GEN E, GEN x) GEN ec_h_evalx(GEN e, GEN x) GEN ec_dFdx_evalQ(GEN E, GEN Q) GEN ec_dFdy_evalQ(GEN E, GEN Q) GEN ec_dmFdy_evalQ(GEN e, GEN Q) GEN ec_2divpol_evalx(GEN E, GEN x) GEN ec_half_deriv_2divpol_evalx(GEN E, GEN x) GEN ellanal_globalred(GEN e, GEN *gr) GEN ellQ_get_N(GEN e) void ellQ_get_Nfa(GEN e, GEN *N, GEN *faN) GEN ellQp_Tate_uniformization(GEN E, long prec) GEN ellQp_u(GEN E, long prec) GEN ellQp_u2(GEN E, long prec) GEN ellQp_q(GEN E, long prec) GEN ellQp_ab(GEN E, long prec) GEN ellQp_root(GEN E, long prec) GEN ellR_ab(GEN E, long prec) GEN ellR_eta(GEN E, long prec) GEN ellR_omega(GEN x, long prec) GEN ellR_roots(GEN E, long prec) GEN elladd(GEN e, GEN z1, GEN z2) GEN ellap(GEN e, GEN p) long ellap_CM_fast(GEN E, ulong p, long CM) GEN ellcard(GEN E, GEN p) GEN ellchangecurve(GEN e, GEN ch) GEN ellchangeinvert(GEN w) GEN ellchangepoint(GEN x, GEN ch) GEN ellchangepointinv(GEN x, GEN ch) GEN elldivpol(GEN e, long n, long v) GEN elleisnum(GEN om, long k, long flag, long prec) GEN elleta(GEN om, long prec) GEN ellff_get_card(GEN E) GEN ellff_get_gens(GEN E) GEN ellff_get_group(GEN E) GEN ellff_get_o(GEN x) GEN ellff_get_p(GEN E) GEN ellfromj(GEN j) GEN ellformaldifferential(GEN e, long n, long v) GEN ellformalexp(GEN e, long n, long v) GEN ellformallog(GEN e, long n, long v) GEN ellformalpoint(GEN e, long n, long v) GEN ellformalw(GEN e, long n, long v) GEN ellgenerators(GEN E) GEN ellglobalred(GEN e1) GEN ellgroup(GEN E, GEN p) GEN ellgroup0(GEN E, GEN p, long flag) GEN ellheight0(GEN e, GEN a, GEN b, long prec) GEN ellheight(GEN e, GEN a, long prec) GEN ellheightmatrix(GEN E, GEN x, long n) GEN ellheightoo(GEN e, GEN z, long prec) GEN ellinit(GEN x, GEN p, long prec) GEN ellintegralmodel(GEN e, GEN *pv) GEN ellisoncurve(GEN e, GEN z) int ellissupersingular(GEN x, GEN p) int elljissupersingular(GEN x) GEN elllseries(GEN e, GEN s, GEN A, long prec) GEN elllocalred(GEN e, GEN p1) GEN elllog(GEN e, GEN a, GEN g, GEN o) GEN ellminimalmodel(GEN E, 
GEN *ptv) GEN ellminimaltwist(GEN e) GEN ellminimaltwist0(GEN e, long fl) GEN ellminimaltwistcond(GEN e) GEN ellmul(GEN e, GEN z, GEN n) GEN ellnonsingularmultiple(GEN e, GEN P) GEN ellneg(GEN e, GEN z) GEN ellorder(GEN e, GEN p, GEN o) long ellorder_Q(GEN E, GEN P) GEN ellordinate(GEN e, GEN x, long prec) GEN ellpadicfrobenius(GEN E, ulong p, long n) GEN ellpadicheight(GEN e, GEN p, long n, GEN P) GEN ellpadicheight0(GEN e, GEN p, long n, GEN P, GEN Q) GEN ellpadicheightmatrix(GEN e, GEN p, long n, GEN P) GEN ellpadiclog(GEN E, GEN p, long n, GEN P) GEN ellpadics2(GEN E, GEN p, long n) GEN ellperiods(GEN w, long flag, long prec) GEN elltwist(GEN E, GEN D) GEN ellrandom(GEN e) long ellrootno(GEN e, GEN p) long ellrootno_global(GEN e) GEN ellsea(GEN E, ulong smallfact) GEN ellsigma(GEN om, GEN z, long flag, long prec) GEN ellsub(GEN e, GEN z1, GEN z2) GEN elltaniyama(GEN e, long prec) GEN elltatepairing(GEN E, GEN t, GEN s, GEN m) GEN elltors(GEN e) GEN elltors0(GEN e, long flag) GEN ellweilpairing(GEN E, GEN t, GEN s, GEN m) GEN ellwp(GEN w, GEN z, long prec) GEN ellwp0(GEN w, GEN z, long flag, long prec) GEN ellwpseries(GEN e, long v, long PRECDL) GEN ellxn(GEN e, long n, long v) GEN ellzeta(GEN om, GEN z, long prec) GEN expIxy(GEN x, GEN y, long prec) int oncurve(GEN e, GEN z) GEN orderell(GEN e, GEN p) GEN pointell(GEN e, GEN z, long prec) GEN point_to_a4a6(GEN E, GEN P, GEN p, GEN *pa4) GEN point_to_a4a6_Fl(GEN E, GEN P, ulong p, ulong *pa4) GEN zell(GEN e, GEN z, long prec) # elltors.c long ellisdivisible(GEN E, GEN P, GEN n, GEN *Q) # ellisogeny.c GEN ellisogenyapply(GEN f, GEN P) GEN ellisogeny(GEN e, GEN G, long only_image, long vx, long vy) GEN ellisomat(GEN E, long flag) # ellsea.c GEN Fp_ellcard_SEA(GEN a4, GEN a6, GEN p, long early_abort) GEN Fq_ellcard_SEA(GEN a4, GEN a6, GEN q, GEN T, GEN p, long early_abort) GEN ellmodulareqn(long l, long vx, long vy) # es.c GEN externstr(const char *cmd) char *gp_filter(const char *s) GEN gpextern(const char *cmd) void gpsystem(const char *s) GEN readstr(const char *s) GEN GENtoGENstr_nospace(GEN x) GEN GENtoGENstr(GEN x) char* GENtoTeXstr(GEN x) char* GENtostr(GEN x) char* GENtostr_raw(GEN x) char* GENtostr_unquoted(GEN x) GEN Str(GEN g) GEN Strchr(GEN g) GEN Strexpand(GEN g) GEN Strtex(GEN g) void brute(GEN g, char format, long dec) void dbgGEN(GEN x, long nb) void error0(GEN g) void dbg_pari_heap() int file_is_binary(FILE *f) void err_flush() void err_printf(const char* pat, ...) GEN gp_getenv(const char *s) GEN gp_read_file(const char *s) GEN gp_read_str_multiline(const char *s, char *last) GEN gp_read_stream(FILE *f) GEN gp_readvec_file(char *s) GEN gp_readvec_stream(FILE *f) void gpinstall(const char *s, const char *code, const char *gpname, const char *lib) GEN gsprintf(const char *fmt, ...) GEN gvsprintf(const char *fmt, va_list ap) char* itostr(GEN x) void matbrute(GEN g, char format, long dec) char* os_getenv(const char *s) void (*os_signal(int sig, void (*f)(int)))(int) void outmat(GEN x) void output(GEN x) char* RgV_to_str(GEN g, long flag) void pari_add_hist(GEN z, long t) void pari_ask_confirm(const char *s) void pari_fclose(pariFILE *f) void pari_flush() pariFILE* pari_fopen(const char *s, const char *mode) pariFILE* pari_fopen_or_fail(const char *s, const char *mode) pariFILE* pari_fopengz(const char *s) void pari_fprintf(FILE *file, const char *fmt, ...) 
void pari_fread_chars(void *b, size_t n, FILE *f) GEN pari_get_hist(long p) long pari_get_histtime(long p) char* pari_get_homedir(const char *user) int pari_is_dir(const char *name) int pari_is_file(const char *name) int pari_last_was_newline() void pari_set_last_newline(int last) ulong pari_nb_hist() void pari_printf(const char *fmt, ...) void pari_putc(char c) void pari_puts(const char *s) pariFILE* pari_safefopen(const char *s, const char *mode) char* pari_sprintf(const char *fmt, ...) int pari_stdin_isatty() char* pari_strdup(const char *s) char* pari_strndup(const char *s, long n) char* pari_unique_dir(const char *s) char* pari_unique_filename(const char *s) void pari_unlink(const char *s) void pari_vfprintf(FILE *file, const char *fmt, va_list ap) void pari_vprintf(const char *fmt, va_list ap) char* pari_vsprintf(const char *fmt, va_list ap) char* path_expand(const char *s) void out_print0(PariOUT *out, const char *sep, GEN g, long flag) void out_printf(PariOUT *out, const char *fmt, ...) void out_putc(PariOUT *out, char c) void out_puts(PariOUT *out, const char *s) void out_term_color(PariOUT *out, long c) void out_vprintf(PariOUT *out, const char *fmt, va_list ap) char* pari_sprint0(const char *msg, GEN g, long flag) extern int f_RAW, f_PRETTYMAT, f_PRETTY, f_TEX void print0(GEN g, long flag) void print1(GEN g) void printf0(const char *fmt, GEN args) void printsep(const char *s, GEN g) void printsep1(const char *s, GEN g) void printtex(GEN g) char* stack_sprintf(const char *fmt, ...) char* stack_strcat(const char *s, const char *t) char* stack_strdup(const char *s) void strftime_expand(const char *s, char *buf, long max) GEN Strprintf(const char *fmt, GEN args) FILE* switchin(const char *name) void switchout(const char *name) void term_color(long c) char* term_get_color(char *s, long c) void texe(GEN g, char format, long dec) const char* type_name(long t) void warning0(GEN g) void write0(const char *s, GEN g) void write1(const char *s, GEN g) void writebin(const char *name, GEN x) void writetex(const char *s, GEN g) # eval.c extern int br_NONE, br_BREAK, br_NEXT, br_MULTINEXT, br_RETURN void bincopy_relink(GEN C, GEN vi) GEN break0(long n) GEN call0(GEN fun, GEN args) GEN closure_callgen1(GEN C, GEN x) GEN closure_callgen1prec(GEN C, GEN x, long prec) GEN closure_callgen2(GEN C, GEN x, GEN y) GEN closure_callgenall(GEN C, long n, ...) 
GEN closure_callgenvec(GEN C, GEN args) GEN closure_callgenvecprec(GEN C, GEN args, long prec) void closure_callvoid1(GEN C, GEN x) long closure_context(long start, long level) void closure_disassemble(GEN n) void closure_err(long level) GEN closure_evalbrk(GEN C, long *status) GEN closure_evalgen(GEN C) GEN closure_evalnobrk(GEN C) GEN closure_evalres(GEN C) void closure_evalvoid(GEN C) GEN closure_trapgen(GEN C, long numerr) GEN copybin_unlink(GEN C) GEN get_lex(long vn) long get_localprec() long get_localbitprec() GEN gp_call(void *E, GEN x) GEN gp_callprec(void *E, GEN x, long prec) GEN gp_call2(void *E, GEN x, GEN y) long gp_callbool(void *E, GEN x) long gp_callvoid(void *E, GEN x) GEN gp_eval(void *E, GEN x) long gp_evalbool(void *E, GEN x) GEN gp_evalprec(void *E, GEN x, long prec) GEN gp_evalupto(void *E, GEN x) long gp_evalvoid(void *E, GEN x) void localprec(long p) void localbitprec(long p) long loop_break() GEN next0(long n) GEN pareval(GEN C) GEN pari_self() GEN parsum(GEN a, GEN b, GEN code, GEN x) GEN parvector(long n, GEN code) void pop_lex(long n) void pop_localprec() void push_lex(GEN a, GEN C) void push_localbitprec(long p) void push_localprec(long p) GEN return0(GEN x) void set_lex(long vn, GEN x) # FF.c GEN FF_1(GEN a) GEN FF_Z_Z_muldiv(GEN x, GEN y, GEN z) GEN FF_Q_add(GEN x, GEN y) GEN FF_Z_add(GEN a, GEN b) GEN FF_Z_mul(GEN a, GEN b) GEN FF_add(GEN a, GEN b) GEN FF_charpoly(GEN x) GEN FF_conjvec(GEN x) GEN FF_div(GEN a, GEN b) GEN FF_ellcard(GEN E) GEN FF_ellcard_SEA(GEN E, long smallfact) GEN FF_ellgens(GEN E) GEN FF_ellgroup(GEN E) GEN FF_elllog(GEN E, GEN P, GEN Q, GEN o) GEN FF_ellmul(GEN E, GEN P, GEN n) GEN FF_ellorder(GEN E, GEN P, GEN o) GEN FF_elltwist(GEN E) GEN FF_ellrandom(GEN E) GEN FF_elltatepairing(GEN E, GEN P, GEN Q, GEN m) GEN FF_ellweilpairing(GEN E, GEN P, GEN Q, GEN m) int FF_equal(GEN a, GEN b) int FF_equal0(GEN x) int FF_equal1(GEN x) int FF_equalm1(GEN x) long FF_f(GEN x) GEN FF_inv(GEN a) long FF_issquare(GEN x) long FF_issquareall(GEN x, GEN *pt) long FF_ispower(GEN x, GEN K, GEN *pt) GEN FF_log(GEN a, GEN b, GEN o) GEN FF_minpoly(GEN x) GEN FF_mod(GEN x) GEN FF_mul(GEN a, GEN b) GEN FF_mul2n(GEN a, long n) GEN FF_neg(GEN a) GEN FF_neg_i(GEN a) GEN FF_norm(GEN x) GEN FF_order(GEN x, GEN o) GEN FF_p(GEN x) GEN FF_p_i(GEN x) GEN FF_pow(GEN x, GEN n) GEN FF_primroot(GEN x, GEN *o) GEN FF_q(GEN x) int FF_samefield(GEN x, GEN y) GEN FF_sqr(GEN a) GEN FF_sqrt(GEN a) GEN FF_sqrtn(GEN x, GEN n, GEN *zetan) GEN FF_sub(GEN x, GEN y) GEN FF_to_F2xq(GEN x) GEN FF_to_F2xq_i(GEN x) GEN FF_to_Flxq(GEN x) GEN FF_to_Flxq_i(GEN x) GEN FF_to_FpXQ(GEN x) GEN FF_to_FpXQ_i(GEN x) GEN FF_trace(GEN x) GEN FF_zero(GEN a) GEN FFM_FFC_mul(GEN M, GEN C, GEN ff) GEN FFM_det(GEN M, GEN ff) GEN FFM_image(GEN M, GEN ff) GEN FFM_inv(GEN M, GEN ff) GEN FFM_ker(GEN M, GEN ff) GEN FFM_mul(GEN M, GEN N, GEN ff) long FFM_rank(GEN M, GEN ff) GEN FFX_factor(GEN f, GEN x) GEN FFX_roots(GEN f, GEN x) GEN FqX_to_FFX(GEN x, GEN ff) GEN Fq_to_FF(GEN x, GEN ff) GEN Z_FF_div(GEN a, GEN b) GEN ffgen(GEN T, long v) GEN fflog(GEN x, GEN g, GEN o) GEN fforder(GEN x, GEN o) GEN ffprimroot(GEN x, GEN *o) GEN ffrandom(GEN ff) int Rg_is_FF(GEN c, GEN *ff) int RgC_is_FFC(GEN x, GEN *ff) int RgM_is_FFM(GEN x, GEN *ff) GEN p_to_FF(GEN p, long v) GEN Tp_to_FF(GEN T, GEN p) # galconj.c GEN checkgal(GEN gal) GEN checkgroup(GEN g, GEN *S) GEN embed_disc(GEN r, long r1, long prec) GEN embed_roots(GEN r, long r1) GEN galois_group(GEN gal) GEN galoisconj(GEN nf, GEN d) GEN galoisconj0(GEN nf, long flag, 
GEN d, long prec) GEN galoisexport(GEN gal, long format) GEN galoisfixedfield(GEN gal, GEN v, long flag, long y) GEN galoisidentify(GEN gal) GEN galoisinit(GEN nf, GEN den) GEN galoisisabelian(GEN gal, long flag) long galoisisnormal(GEN gal, GEN sub) GEN galoispermtopol(GEN gal, GEN perm) GEN galoissubgroups(GEN G) GEN galoissubfields(GEN G, long flag, long v) long numberofconjugates(GEN T, long pdepart) GEN vandermondeinverse(GEN L, GEN T, GEN den, GEN prep) # galpol.c GEN galoisnbpol(long a) GEN galoisgetpol(long a, long b, long s) # gen1.c GEN conjvec(GEN x, long prec) GEN gadd(GEN x, GEN y) GEN gaddsg(long x, GEN y) GEN gconj(GEN x) GEN gdiv(GEN x, GEN y) GEN gdivgs(GEN x, long s) GEN ginv(GEN x) GEN gmul(GEN x, GEN y) GEN gmul2n(GEN x, long n) GEN gmulsg(long s, GEN y) GEN gsqr(GEN x) GEN gsub(GEN x, GEN y) GEN gsubsg(long x, GEN y) GEN inv_ser(GEN b) GEN mulcxI(GEN x) GEN mulcxmI(GEN x) GEN ser_normalize(GEN x) # gen2.c GEN gassoc_proto(GEN f(GEN, GEN), GEN, GEN) GEN map_proto_G(GEN f(GEN), GEN x) GEN map_proto_lG(long f(GEN), GEN x) GEN map_proto_lGL(long f(GEN, long), GEN x, long y) long Q_pval(GEN x, GEN p) long Q_pvalrem(GEN x, GEN p, GEN *y) long RgX_val(GEN x) long RgX_valrem(GEN x, GEN *z) long RgX_valrem_inexact(GEN x, GEN *Z) int ZV_Z_dvd(GEN v, GEN p) long ZV_pval(GEN x, GEN p) long ZV_pvalrem(GEN x, GEN p, GEN *px) long ZV_lval(GEN x, ulong p) long ZV_lvalrem(GEN x, ulong p, GEN *px) long ZX_lvalrem(GEN x, ulong p, GEN *px) long ZX_lval(GEN x, ulong p) long ZX_pval(GEN x, GEN p) long ZX_pvalrem(GEN x, GEN p, GEN *px) long Z_lval(GEN n, ulong p) long Z_lvalrem(GEN n, ulong p, GEN *py) long Z_lvalrem_stop(GEN *n, ulong p, int *stop) long Z_pval(GEN n, GEN p) long Z_pvalrem(GEN x, GEN p, GEN *py) GEN cgetp(GEN x) GEN cvstop2(long s, GEN y) GEN cvtop(GEN x, GEN p, long l) GEN cvtop2(GEN x, GEN y) GEN gabs(GEN x, long prec) void gaffect(GEN x, GEN y) void gaffsg(long s, GEN x) int gcmp(GEN x, GEN y) int gequal0(GEN x) int gequal1(GEN x) int gequalX(GEN x) int gequalm1(GEN x) int gcmpsg(long x, GEN y) GEN gcvtop(GEN x, GEN p, long r) int gequal(GEN x, GEN y) int gequalsg(long s, GEN x) long gexpo(GEN x) GEN gpvaluation(GEN x, GEN p) long gvaluation(GEN x, GEN p) int gidentical(GEN x, GEN y) long glength(GEN x) GEN gmax(GEN x, GEN y) GEN gmaxgs(GEN x, long y) GEN gmin(GEN x, GEN y) GEN gmings(GEN x, long y) GEN gneg(GEN x) GEN gneg_i(GEN x) GEN RgX_to_ser(GEN x, long l) GEN RgX_to_ser_inexact(GEN x, long l) int gsigne(GEN x) GEN gtolist(GEN x) long gtolong(GEN x) int lexcmp(GEN x, GEN y) GEN listcreate_typ(long t) GEN listcreate() GEN listinsert(GEN list, GEN object, long index) void listpop(GEN L, long index) void listpop0(GEN L, long index) GEN listput(GEN list, GEN object, long index) GEN listput0(GEN list, GEN object, long index) void listsort(GEN list, long flag) GEN matsize(GEN x) GEN mklist() GEN mklistcopy(GEN x) GEN normalize(GEN x) GEN normalizepol(GEN x) GEN normalizepol_approx(GEN x, long lx) GEN normalizepol_lg(GEN x, long lx) ulong padic_to_Fl(GEN x, ulong p) GEN padic_to_Fp(GEN x, GEN Y) GEN quadtofp(GEN x, long l) GEN rfrac_to_ser(GEN x, long l) long sizedigit(GEN x) long u_lval(ulong x, ulong p) long u_lvalrem(ulong x, ulong p, ulong *py) long u_lvalrem_stop(ulong *n, ulong p, int *stop) long u_pval(ulong x, GEN p) long u_pvalrem(ulong x, GEN p, ulong *py) long vecindexmax(GEN x) long vecindexmin(GEN x) GEN vecmax0(GEN x, GEN *pv) GEN vecmax(GEN x) GEN vecmin0(GEN x, GEN *pv) GEN vecmin(GEN x) long z_lval(long s, ulong p) long z_lvalrem(long s, ulong p, long *py) 
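# --- Illustrative usage (added sketch): the valuation helpers above
# (Z_pval, ZX_pval, gvaluation, ...) all surface through GP's valuation();
# assuming the high-level cypari2 interface:
#   >>> import cypari2
#   >>> pari = cypari2.Pari()
#   >>> pari.valuation(48, 2)   # 48 = 2^4 * 3
#   4
#   >>> pari.valuation('x^3 + x^2', 'x')
#   2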
long z_pval(long n, GEN p) long z_pvalrem(long n, GEN p, long *py) # gen3.c GEN padic_to_Q(GEN x) GEN padic_to_Q_shallow(GEN x) GEN QpV_to_QV(GEN v) GEN RgM_mulreal(GEN x, GEN y) GEN RgX_cxeval(GEN T, GEN u, GEN ui) GEN RgX_deflate_max(GEN x0, long *m) long RgX_deflate_order(GEN x) long ZX_deflate_order(GEN x) GEN ZX_deflate_max(GEN x, long *m) long RgX_degree(GEN x, long v) GEN RgX_integ(GEN x) GEN bitprecision0(GEN x, long n) GEN bitprecision00(GEN x, GEN n) GEN ceil_safe(GEN x) GEN ceilr(GEN x) GEN centerlift(GEN x) GEN centerlift0(GEN x, long v) GEN compo(GEN x, long n) GEN deg1pol(GEN x1, GEN x0, long v) GEN deg1pol_shallow(GEN x1, GEN x0, long v) long degree(GEN x) GEN denom(GEN x) GEN deriv(GEN x, long v) GEN derivser(GEN x) GEN diffop(GEN x, GEN v, GEN dv) GEN diffop0(GEN x, GEN v, GEN dv, long n) GEN diviiround(GEN x, GEN y) GEN divrem(GEN x, GEN y, long v) GEN floor_safe(GEN x) GEN gceil(GEN x) GEN gcvtoi(GEN x, long *e) GEN gdeflate(GEN x, long v, long d) GEN gdivent(GEN x, GEN y) GEN gdiventgs(GEN x, long y) GEN gdiventsg(long x, GEN y) GEN gdiventres(GEN x, GEN y) GEN gdivmod(GEN x, GEN y, GEN *pr) GEN gdivround(GEN x, GEN y) int gdvd(GEN x, GEN y) GEN geq(GEN x, GEN y) GEN geval(GEN x) GEN gfloor(GEN x) GEN gtrunc2n(GEN x, long s) GEN gfrac(GEN x) GEN gge(GEN x, GEN y) GEN ggrando(GEN x, long n) GEN ggt(GEN x, GEN y) GEN gimag(GEN x) GEN gisexactzero(GEN g) GEN gle(GEN x, GEN y) GEN glt(GEN x, GEN y) GEN gmod(GEN x, GEN y) GEN gmodgs(GEN x, long y) GEN gmodsg(long x, GEN y) GEN gmodulo(GEN x, GEN y) GEN gmodulsg(long x, GEN y) GEN gmodulss(long x, long y) GEN gne(GEN x, GEN y) GEN gnot(GEN x) GEN gpolvar(GEN y) GEN gppadicprec(GEN x, GEN p) GEN gppoldegree(GEN x, long v) long gprecision(GEN x) GEN gpserprec(GEN x, long v) GEN greal(GEN x) GEN grndtoi(GEN x, long *e) GEN ground(GEN x) GEN gshift(GEN x, long n) GEN gsubst(GEN x, long v, GEN y) GEN gsubstpol(GEN x, GEN v, GEN y) GEN gsubstvec(GEN x, GEN v, GEN y) GEN gtocol(GEN x) GEN gtocol0(GEN x, long n) GEN gtocolrev(GEN x) GEN gtocolrev0(GEN x, long n) GEN gtopoly(GEN x, long v) GEN gtopolyrev(GEN x, long v) GEN gtoser(GEN x, long v, long precdl) GEN gtovec(GEN x) GEN gtovec0(GEN x, long n) GEN gtovecrev(GEN x) GEN gtovecrev0(GEN x, long n) GEN gtovecsmall(GEN x) GEN gtovecsmall0(GEN x, long n) GEN gtrunc(GEN x) long gvar(GEN x) long gvar2(GEN x) GEN hqfeval(GEN q, GEN x) GEN imag_i(GEN x) GEN integ(GEN x, long v) GEN integser(GEN x) GEN inv_ser(GEN b) int iscomplex(GEN x) int isexactzero(GEN g) int isrationalzeroscalar(GEN g) int isinexact(GEN x) int isinexactreal(GEN x) int isint(GEN n, GEN *ptk) int isrationalzero(GEN g) int issmall(GEN n, long *ptk) GEN lift(GEN x) GEN lift0(GEN x, long v) GEN liftall(GEN x) GEN liftall_shallow(GEN x) GEN liftint(GEN x) GEN liftint_shallow(GEN x) GEN liftpol(GEN x) GEN liftpol_shallow(GEN x) GEN mkcoln(long n, ...) GEN mkintn(long n, ...) GEN mkpoln(long n, ...) GEN mkvecn(long n, ...) GEN mkvecsmalln(long n, ...) 
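# --- Illustrative usage (added sketch): the conversion/lifting functions
# above (lift, centerlift, gsubst, ...) have direct GP counterparts exposed
# by cypari2:
#   >>> import cypari2
#   >>> pari = cypari2.Pari()
#   >>> pari.lift(pari.Mod(3, 7))        # drop the modulus
#   3
#   >>> pari.centerlift(pari.Mod(5, 7))  # representative in (-7/2, 7/2]
#   -2
#   >>> pari.subst('x^2 + 1', 'x', 2)
#   5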
GEN mulreal(GEN x, GEN y) GEN numer(GEN x) long padicprec(GEN x, GEN p) long padicprec_relative(GEN x) GEN polcoeff0(GEN x, long n, long v) GEN polcoeff_i(GEN x, long n, long v) long poldegree(GEN x, long v) GEN poleval(GEN x, GEN y) GEN pollead(GEN x, long v) long precision(GEN x) GEN precision0(GEN x, long n) GEN precision00(GEN x, GEN n) GEN qf_apply_RgM(GEN q, GEN M) GEN qf_apply_ZM(GEN q, GEN M) GEN qfbil(GEN x, GEN y, GEN q) GEN qfeval(GEN q, GEN x) GEN qfevalb(GEN q, GEN x, GEN y) GEN qfnorm(GEN x, GEN q) GEN real_i(GEN x) GEN round0(GEN x, GEN *pte) GEN roundr(GEN x) GEN roundr_safe(GEN x) GEN scalarpol(GEN x, long v) GEN scalarpol_shallow(GEN x, long v) GEN scalarser(GEN x, long v, long prec) GEN ser_unscale(GEN P, GEN h) long serprec(GEN x, long v) GEN serreverse(GEN x) GEN sertoser(GEN x, long prec) GEN simplify(GEN x) GEN simplify_shallow(GEN x) GEN tayl(GEN x, long v, long precdl) GEN toser_i(GEN x) GEN trunc0(GEN x, GEN *pte) GEN uu32toi(ulong a, ulong b) GEN vars_sort_inplace(GEN z) GEN vars_to_RgXV(GEN h) GEN variables_vecsmall(GEN x) GEN variables_vec(GEN x) # genus2red.c GEN genus2red(GEN PQ, GEN p) # groupid.c long group_ident(GEN G, GEN S) long group_ident_trans(GEN G, GEN S) # hash.c hashtable *hash_create_ulong(ulong s, long stack) hashtable *hash_create_str(ulong s, long stack) hashtable *hash_create(ulong minsize, ulong (*hash)(void*), int (*eq)(void*, void*), int use_stack) void hash_insert(hashtable *h, void *k, void *v) void hash_insert2(hashtable *h, void *k, void *v, ulong hash) GEN hash_keys(hashtable *h) GEN hash_values(hashtable *h) hashentry *hash_search(hashtable *h, void *k) hashentry *hash_search2(hashtable *h, void *k, ulong hash) hashentry *hash_select(hashtable *h, void *k, void *E, int(*select)(void *, hashentry *)) hashentry *hash_remove(hashtable *h, void *k) hashentry *hash_remove_select(hashtable *h, void *k, void *E, int (*select)(void*, hashentry*)) void hash_destroy(hashtable *h) ulong hash_str(const char *str) ulong hash_str2(const char *s) ulong hash_GEN(GEN x) # hyperell.c GEN hyperellpadicfrobenius(GEN x, ulong p, long e) GEN hyperellcharpoly(GEN x) GEN nfhyperellpadicfrobenius(GEN H, GEN T, ulong p, long n) # hnf_snf.c GEN RgM_hnfall(GEN A, GEN *pB, long remove) GEN ZM_hnf(GEN x) GEN ZM_hnfall(GEN A, GEN *ptB, long remove) GEN ZM_hnfcenter(GEN M) GEN ZM_hnflll(GEN A, GEN *ptB, int remove) GEN ZV_extgcd(GEN A) GEN ZV_snfall(GEN D, GEN *pU, GEN *pV) GEN ZV_snf_group(GEN d, GEN *newU, GEN *newUi) GEN ZM_hnfmod(GEN x, GEN d) GEN ZM_hnfmodall(GEN x, GEN dm, long flag) GEN ZM_hnfmodid(GEN x, GEN d) GEN ZM_hnfperm(GEN A, GEN *ptU, GEN *ptperm) void ZM_snfclean(GEN d, GEN u, GEN v) GEN ZM_snf(GEN x) GEN ZM_snf_group(GEN H, GEN *newU, GEN *newUi) GEN ZM_snfall(GEN x, GEN *ptU, GEN *ptV) GEN ZM_snfall_i(GEN x, GEN *ptU, GEN *ptV, int return_vec) GEN zlm_echelon(GEN x, long early_abort, ulong p, ulong pm) GEN ZpM_echelon(GEN x, long early_abort, GEN p, GEN pm) GEN gsmith(GEN x) GEN gsmithall(GEN x) GEN hnf(GEN x) GEN hnf_divscale(GEN A, GEN B, GEN t) GEN hnf_solve(GEN A, GEN B) GEN hnf_invimage(GEN A, GEN b) GEN hnfall(GEN x) int hnfdivide(GEN A, GEN B) GEN hnflll(GEN x) GEN hnfmerge_get_1(GEN A, GEN B) GEN hnfmod(GEN x, GEN d) GEN hnfmodid(GEN x, GEN p) GEN hnfperm(GEN x) GEN matfrobenius(GEN M, long flag, long v) GEN mathnf0(GEN x, long flag) GEN matsnf0(GEN x, long flag) GEN smith(GEN x) GEN smithall(GEN x) GEN smithclean(GEN z) # ifactor1.c GEN Z_factor(GEN n) GEN Z_factor_limit(GEN n, ulong all) GEN Z_factor_until(GEN n, GEN limit) long 
Z_issmooth(GEN m, ulong lim) GEN Z_issmooth_fact(GEN m, ulong lim) long Z_issquarefree(GEN x) GEN absi_factor(GEN n) GEN absi_factor_limit(GEN n, ulong all) long bigomega(GEN n) long bigomegau(ulong n) GEN core(GEN n) ulong coreu(ulong n) GEN eulerphi(GEN n) ulong eulerphiu(ulong n) ulong eulerphiu_fact(GEN f) GEN factorint(GEN n, long flag) GEN factoru(ulong n) int ifac_isprime(GEN x) int ifac_next(GEN *part, GEN *p, long *e) int ifac_read(GEN part, GEN *p, long *e) void ifac_skip(GEN part) GEN ifac_start(GEN n, int moebius) int is_357_power(GEN x, GEN *pt, ulong *mask) int is_pth_power(GEN x, GEN *pt, forprime_t *T, ulong cutoffbits) long ispowerful(GEN n) long issquarefree(GEN x) long istotient(GEN n, GEN *px) long moebius(GEN n) long moebiusu(ulong n) GEN nextprime(GEN n) GEN numdiv(GEN n) long omega(GEN n) long omegau(ulong n) GEN precprime(GEN n) GEN sumdiv(GEN n) GEN sumdivk(GEN n, long k) ulong tridiv_bound(GEN n) int uis_357_power(ulong x, ulong *pt, ulong *mask) int uis_357_powermod(ulong x, ulong *mask) long uissquarefree(ulong n) long uissquarefree_fact(GEN f) ulong unextprime(ulong n) ulong uprecprime(ulong n) GEN usumdivkvec(ulong n, GEN K) # init.c long timer_delay(pari_timer *T) long timer_get(pari_timer *T) void timer_start(pari_timer *T) int chk_gerepileupto(GEN x) GENbin* copy_bin(GEN x) GENbin* copy_bin_canon(GEN x) void dbg_gerepile(pari_sp av) void dbg_gerepileupto(GEN q) GEN errname(GEN err) GEN gclone(GEN x) GEN gcloneref(GEN x) void gclone_refc(GEN x) GEN gcopy(GEN x) GEN gcopy_avma(GEN x, pari_sp *AVMA) GEN gcopy_lg(GEN x, long lx) GEN gerepile(pari_sp ltop, pari_sp lbot, GEN q) void gerepileallsp(pari_sp av, pari_sp tetpil, int n, ...) void gerepilecoeffssp(pari_sp av, pari_sp tetpil, long *g, int n) void gerepilemanysp(pari_sp av, pari_sp tetpil, GEN* g[], int n) GEN getheap() void gp_context_save(gp_context* rec) void gp_context_restore(gp_context* rec) long gsizeword(GEN x) long gsizebyte(GEN x) void gunclone(GEN x) void gunclone_deep(GEN x) GEN listcopy(GEN x) void timer_printf(pari_timer *T, const char *format, ...) void msgtimer(const char *format, ...) long name_numerr(const char *s) void new_chunk_resize(size_t x) GEN newblock(size_t n) const char * numerr_name(long errnum) GEN obj_check(GEN S, long K) GEN obj_checkbuild(GEN S, long tag, GEN (*build)(GEN)) GEN obj_checkbuild_padicprec(GEN S, long tag, GEN (*build)(GEN, long), long prec) GEN obj_checkbuild_realprec(GEN S, long tag, GEN (*build)(GEN, long), long prec) GEN obj_checkbuild_prec(GEN S, long tag, GEN (*build)(GEN, long), long (*pr)(GEN), long prec) void obj_free(GEN S) GEN obj_init(long d, long n) GEN obj_insert(GEN S, long K, GEN O) GEN obj_insert_shallow(GEN S, long K, GEN O) void pari_add_function(entree *ep) void pari_add_module(entree *ep) void pari_add_defaults_module(entree *ep) void pari_close() void pari_close_opts(ulong init_opts) GEN pari_compile_str(const char *lex) int pari_daemon() void pari_err(int numerr, ...) 
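# ---------------------------------------------------------------------------
# Illustrative note (not part of the original header dump): the init.c
# functions above implement PARI's stack discipline. The canonical usage
# pattern -- sketched here, not taken from this file -- is to record `avma`
# before a computation and collect garbage with gerepileupto() afterwards,
# keeping only the final result:
#
#     cdef pari_sp av = avma
#     cdef GEN x = gadd(stoi(1), stoi(2))   # intermediate garbage piles up
#     x = gerepileupto(av, x)               # x survives; garbage is freed
# ---------------------------------------------------------------------------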
GEN pari_err_last() char * pari_err2str(GEN err) void pari_init_opts(size_t parisize, ulong maxprime, ulong init_opts) void pari_init(size_t parisize, ulong maxprime) void pari_stackcheck_init(void *pari_stack_base) void pari_sighandler(int sig) void pari_sig_init(void (*f)(int)) void pari_thread_alloc(pari_thread *t, size_t s, GEN arg) void pari_thread_close() void pari_thread_free(pari_thread *t) void pari_thread_init() GEN pari_thread_start(pari_thread *t) void pari_thread_valloc(pari_thread *t, size_t s, size_t v, GEN arg) GEN pari_version() void pari_warn(int numerr, ...) void paristack_newrsize(ulong newsize) void paristack_resize(ulong newsize) void paristack_setsize(size_t rsize, size_t vsize) void parivstack_resize(ulong newsize) void parivstack_reset() GEN trap0(const char *e, GEN f, GEN r) void shiftaddress(GEN x, long dec) void shiftaddress_canon(GEN x, long dec) long timer() long timer2() void traverseheap( void(*f)(GEN, void *), void *data ) # intnum.c GEN contfraceval(GEN CF, GEN t, long nlim) GEN contfracinit(GEN M, long lim) GEN intcirc(void *E, GEN (*eval) (void *, GEN), GEN a, GEN R, GEN tab, long prec) GEN intfuncinit(void *E, GEN (*eval) (void *, GEN), GEN a, GEN b, long m, long prec) GEN intnum(void *E, GEN (*eval) (void *, GEN), GEN a, GEN b, GEN tab, long prec) GEN intnumgauss(void *E, GEN (*eval)(void*, GEN), GEN a, GEN b, GEN tab, long prec) GEN intnumgaussinit(long n, long prec) GEN intnuminit(GEN a, GEN b, long m, long prec) GEN intnumromb(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, long flag, long prec) GEN intnumromb_bitprec(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, long flag, long bit) GEN sumnum(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec) GEN sumnuminit(GEN fast, long prec) GEN sumnummonien(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec) GEN sumnummonieninit(GEN asymp, GEN w, GEN n0, long prec) # krasner.c GEN padicfields0(GEN p, GEN n, long flag) GEN padicfields(GEN p, long m, long d, long flag) # kummer.c GEN rnfkummer(GEN bnr, GEN subgroup, long all, long prec) # lfun.c long is_linit(GEN data) GEN ldata_get_an(GEN ldata) GEN ldata_get_dual(GEN ldata) GEN ldata_get_gammavec(GEN ldata) long ldata_get_degree(GEN ldata) long ldata_get_k(GEN ldata) GEN ldata_get_conductor(GEN ldata) GEN ldata_get_rootno(GEN ldata) GEN ldata_get_residue(GEN ldata) long ldata_get_type(GEN ldata) long ldata_isreal(GEN ldata) GEN ldata_vecan(GEN ldata, long L, long prec) long linit_get_type(GEN linit) GEN linit_get_ldata(GEN linit) GEN linit_get_tech(GEN linit) GEN lfun_get_domain(GEN tech) GEN lfun_get_dom(GEN tech) long lfun_get_bitprec(GEN tech) GEN lfun_get_factgammavec(GEN tech) GEN lfun_get_step(GEN tech) GEN lfun_get_pol(GEN tech) GEN lfun_get_Residue(GEN tech) GEN lfun_get_k2(GEN tech) GEN lfun_get_w2(GEN tech) GEN lfun_get_expot(GEN tech) long lfun_get_der(GEN tech) long lfun_get_bitprec(GEN tech) GEN lfun(GEN ldata, GEN s, long bitprec) GEN lfun0(GEN ldata, GEN s, long der, long bitprec) long lfuncheckfeq(GEN data, GEN t0, long bitprec) GEN lfunconductor(GEN data, GEN maxcond, long flag, long bitprec) GEN lfuncost(GEN lmisc, GEN dom, long der, long bitprec) GEN lfuncost0(GEN L, GEN dom, long der, long bitprec) GEN lfuncreate(GEN obj) GEN lfunan(GEN ldata, long L, long prec) GEN lfunhardy(GEN ldata, GEN t, long bitprec) GEN lfuninit(GEN ldata, GEN dom, long der, long bitprec) GEN lfuninit0(GEN ldata, GEN dom, long der, long bitprec) GEN lfuninit_make(long t, GEN ldata, GEN molin, GEN domain) long lfunisvgaell(GEN Vga, long flag) 
GEN lfunlambda(GEN ldata, GEN s, long bitprec) GEN lfunlambda0(GEN ldata, GEN s, long der, long bitprec) GEN lfunmisc_to_ldata(GEN ldata) GEN lfunmisc_to_ldata_shallow(GEN ldata) long lfunorderzero(GEN ldata, long m, long bitprec) GEN lfunprod_get_fact(GEN tech) GEN lfunrootno(GEN data, long bitprec) GEN lfunrootres(GEN data, long bitprec) GEN lfunrtopoles(GEN r) GEN lfuntheta(GEN data, GEN t, long m, long bitprec) long lfunthetacost0(GEN L, GEN tdom, long m, long bitprec) long lfunthetacost(GEN ldata, GEN tdom, long m, long bitprec) GEN lfunthetainit(GEN ldata, GEN tdom, long m, long bitprec) GEN lfunthetacheckinit(GEN data, GEN tinf, long m, long *ptbitprec, long fl) GEN lfunzeros(GEN ldata, GEN lim, long divz, long bitprec) int sdomain_isincl(long k, GEN dom, GEN dom0) GEN theta_get_an(GEN tdata) GEN theta_get_K(GEN tdata) GEN theta_get_R(GEN tdata) long theta_get_bitprec(GEN tdata) long theta_get_m(GEN tdata) GEN theta_get_tdom(GEN tdata) GEN theta_get_sqrtN(GEN tdata) # lfunutils.c GEN dirzetak(GEN nf, GEN b) GEN ellmoddegree(GEN e, long bitprec) GEN lfunabelianrelinit(GEN bnfabs, GEN bnf, GEN polrel, GEN dom, long der, long bitprec) GEN lfunartin(GEN N, GEN G, GEN M, long o) GEN lfundiv(GEN ldata1, GEN ldata2, long bitprec) GEN lfunellmfpeters(GEN E, long bitprec) GEN lfunetaquo(GEN eta) GEN lfungenus2(GEN PS) GEN lfunmfspec(GEN lmisc, long bitprec) GEN lfunmfpeters(GEN ldata, long bitprec) GEN lfunmul(GEN ldata1, GEN ldata2, long bitprec) GEN lfunqf(GEN ldata, long prec) GEN lfunsymsq(GEN ldata, GEN known, long prec) GEN lfunsymsqspec(GEN lmisc, long bitprec) GEN lfunzetakinit(GEN pol, GEN dom, long der, long flag, long bitprec) # lll.c GEN ZM_lll_norms(GEN x, double D, long flag, GEN *B) GEN kerint(GEN x) GEN lll(GEN x) GEN lllfp(GEN x, double D, long flag) GEN lllgen(GEN x) GEN lllgram(GEN x) GEN lllgramgen(GEN x) GEN lllgramint(GEN x) GEN lllgramkerim(GEN x) GEN lllgramkerimgen(GEN x) GEN lllint(GEN x) GEN lllintpartial(GEN mat) GEN lllintpartial_inplace(GEN mat) GEN lllkerim(GEN x) GEN lllkerimgen(GEN x) GEN matkerint0(GEN x, long flag) GEN qflll0(GEN x, long flag) GEN qflllgram0(GEN x, long flag) # map.c GEN gtomap(GEN M) void mapdelete(GEN T, GEN a) GEN mapdomain(GEN T) GEN mapdomain_shallow(GEN T) GEN mapget(GEN T, GEN a) int mapisdefined(GEN T, GEN a, GEN *pt_z) void mapput(GEN T, GEN a, GEN b) GEN maptomat(GEN T) GEN maptomat_shallow(GEN T) # mellininv.c double dbllambertW0(double a) double dbllambertW_1(double a) double dbllemma526(double a, double b, double c, double B) double dblcoro526(double a, double c, double B) GEN gammamellininv(GEN Vga, GEN s, long m, long bitprec) GEN gammamellininvasymp(GEN Vga, long nlimmax, long m) GEN gammamellininvinit(GEN Vga, long m, long bitprec) GEN gammamellininvrt(GEN K, GEN s, long bitprec) # members.c GEN member_a1(GEN x) GEN member_a2(GEN x) GEN member_a3(GEN x) GEN member_a4(GEN x) GEN member_a6(GEN x) GEN member_area(GEN x) GEN member_b2(GEN x) GEN member_b4(GEN x) GEN member_b6(GEN x) GEN member_b8(GEN x) GEN member_bid(GEN x) GEN member_bnf(GEN x) GEN member_c4(GEN x) GEN member_c6(GEN x) GEN member_clgp(GEN x) GEN member_codiff(GEN x) GEN member_cyc(GEN clg) GEN member_diff(GEN x) GEN member_disc(GEN x) GEN member_e(GEN x) GEN member_eta(GEN x) GEN member_f(GEN x) GEN member_fu(GEN x) GEN member_futu(GEN x) GEN member_gen(GEN x) GEN member_group(GEN x) GEN member_index(GEN x) GEN member_j(GEN x) GEN member_mod(GEN x) GEN member_nf(GEN x) GEN member_no(GEN clg) GEN member_omega(GEN x) GEN member_orders(GEN x) GEN member_p(GEN x) 
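# ---------------------------------------------------------------------------
# Illustrative note (not part of the original header dump): the member_*
# accessors running through this point back GP's member syntax, e.g. `E.j`
# on an elliptic curve structure is evaluated by member_j(E). A minimal
# sketch, assuming E is an already-initialized ell structure:
#
#     cdef GEN j = member_j(E)   # same value as E.j in GP
# ---------------------------------------------------------------------------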
GEN member_pol(GEN x) GEN member_polabs(GEN x) GEN member_reg(GEN x) GEN member_r1(GEN x) GEN member_r2(GEN x) GEN member_roots(GEN x) GEN member_sign(GEN x) GEN member_t2(GEN x) GEN member_tate(GEN x) GEN member_tufu(GEN x) GEN member_tu(GEN x) GEN member_zk(GEN x) GEN member_zkst(GEN bid) # mp.c GEN addmulii(GEN x, GEN y, GEN z) GEN addmulii_inplace(GEN x, GEN y, GEN z) ulong Fl_inv(ulong x, ulong p) ulong Fl_invgen(ulong x, ulong p, ulong *pg) ulong Fl_invsafe(ulong x, ulong p) int Fp_ratlift(GEN x, GEN m, GEN amax, GEN bmax, GEN *a, GEN *b) int absi_cmp(GEN x, GEN y) int absi_equal(GEN x, GEN y) int absr_cmp(GEN x, GEN y) GEN addii_sign(GEN x, long sx, GEN y, long sy) GEN addir_sign(GEN x, long sx, GEN y, long sy) GEN addrr_sign(GEN x, long sx, GEN y, long sy) GEN addsi_sign(long x, GEN y, long sy) GEN addui_sign(ulong x, GEN y, long sy) GEN addsr(long x, GEN y) GEN addumului(ulong a, ulong b, GEN Y) void affir(GEN x, GEN y) void affrr(GEN x, GEN y) GEN bezout(GEN a, GEN b, GEN *u, GEN *v) long cbezout(long a, long b, long *uu, long *vv) GEN cbrtr_abs(GEN x) int cmpii(GEN x, GEN y) int cmprr(GEN x, GEN y) long dblexpo(double x) ulong dblmantissa(double x) GEN dbltor(double x) GEN diviiexact(GEN x, GEN y) GEN divir(GEN x, GEN y) GEN divis(GEN y, long x) GEN divis_rem(GEN x, long y, long *rem) GEN diviu_rem(GEN y, ulong x, ulong *rem) GEN diviuuexact(GEN x, ulong y, ulong z) GEN diviuexact(GEN x, ulong y) GEN divri(GEN x, GEN y) GEN divrr(GEN x, GEN y) GEN divrs(GEN x, long y) GEN divru(GEN x, ulong y) GEN divsi(long x, GEN y) GEN divsr(long x, GEN y) GEN divur(ulong x, GEN y) GEN dvmdii(GEN x, GEN y, GEN *z) int equalii(GEN x, GEN y) int equalrr(GEN x, GEN y) GEN floorr(GEN x) GEN gcdii(GEN x, GEN y) GEN int2n(long n) GEN int2u(ulong n) GEN int_normalize(GEN x, long known_zero_words) int invmod(GEN a, GEN b, GEN *res) ulong invmod2BIL(ulong b) GEN invr(GEN b) GEN mantissa_real(GEN x, long *e) GEN modii(GEN x, GEN y) void modiiz(GEN x, GEN y, GEN z) GEN mulii(GEN x, GEN y) GEN mulir(GEN x, GEN y) GEN mulrr(GEN x, GEN y) GEN mulsi(long x, GEN y) GEN mulsr(long x, GEN y) GEN mulss(long x, long y) GEN mului(ulong x, GEN y) GEN mulur(ulong x, GEN y) GEN muluu(ulong x, ulong y) GEN muluui(ulong x, ulong y, GEN z) GEN remi2n(GEN x, long n) double rtodbl(GEN x) GEN shifti(GEN x, long n) GEN sqri(GEN x) GEN sqrr(GEN x) GEN sqrs(long x) GEN sqrtr_abs(GEN x) GEN sqrtremi(GEN S, GEN *R) GEN sqru(ulong x) GEN subsr(long x, GEN y) GEN truedvmdii(GEN x, GEN y, GEN *z) GEN truedvmdis(GEN x, long y, GEN *z) GEN truedvmdsi(long x, GEN y, GEN *z) GEN trunc2nr(GEN x, long n) GEN mantissa2nr(GEN x, long n) GEN truncr(GEN x) ulong umodiu(GEN y, ulong x) long vals(ulong x) # nffactor.c GEN FpC_ratlift(GEN P, GEN mod, GEN amax, GEN bmax, GEN denom) GEN FpM_ratlift(GEN M, GEN mod, GEN amax, GEN bmax, GEN denom) GEN FpX_ratlift(GEN P, GEN mod, GEN amax, GEN bmax, GEN denom) GEN nffactor(GEN nf, GEN x) GEN nffactormod(GEN nf, GEN pol, GEN pr) GEN nfgcd(GEN P, GEN Q, GEN nf, GEN den) GEN nfgcd_all(GEN P, GEN Q, GEN T, GEN den, GEN *Pnew) int nfissquarefree(GEN nf, GEN x) GEN nfroots(GEN nf, GEN pol) GEN polfnf(GEN a, GEN t) GEN rootsof1(GEN x) GEN rootsof1_kannan(GEN nf) # paricfg.c extern const char *paricfg_datadir extern const char *paricfg_version extern const char *paricfg_buildinfo extern const long paricfg_version_code extern const char *paricfg_vcsversion extern const char *paricfg_compiledate extern const char *paricfg_mt_engine # part.c void forpart(void *E, long call(void*, GEN), long k, GEN nbound, GEN 
abound) void forpart_init(forpart_t *T, long k, GEN abound, GEN nbound) GEN forpart_next(forpart_t *T) GEN forpart_prev(forpart_t *T) GEN numbpart(GEN x) GEN partitions(long k, GEN nbound, GEN abound) # perm.c GEN abelian_group(GEN G) GEN cyclicgroup(GEN g, long s) GEN cyc_pow(GEN cyc, long exp) GEN cyc_pow_perm(GEN cyc, long exp) GEN dicyclicgroup(GEN g1, GEN g2, long s1, long s2) GEN group_abelianHNF(GEN G, GEN L) GEN group_abelianSNF(GEN G, GEN L) long group_domain(GEN G) GEN group_elts(GEN G, long n) GEN group_export(GEN G, long format) long group_isA4S4(GEN G) long group_isabelian(GEN G) GEN group_leftcoset(GEN G, GEN g) long group_order(GEN G) long group_perm_normalize(GEN N, GEN g) GEN group_quotient(GEN G, GEN H) GEN group_rightcoset(GEN G, GEN g) GEN group_set(GEN G, long n) long group_subgroup_isnormal(GEN G, GEN H) GEN group_subgroups(GEN G) GEN groupelts_abelian_group(GEN S) GEN groupelts_center(GEN S) GEN groupelts_set(GEN G, long n) int perm_commute(GEN p, GEN q) GEN perm_cycles(GEN v) long perm_order(GEN perm) GEN perm_pow(GEN perm, long exp) GEN quotient_group(GEN C, GEN G) GEN quotient_perm(GEN C, GEN p) GEN quotient_subgroup_lift(GEN C, GEN H, GEN S) GEN subgroups_tableset(GEN S, long n) long tableset_find_index(GEN tbl, GEN set) GEN trivialgroup() GEN vecperm_orbits(GEN v, long n) GEN vec_insert(GEN v, long n, GEN x) int vec_is1to1(GEN v) int vec_isconst(GEN v) long vecsmall_duplicate(GEN x) long vecsmall_duplicate_sorted(GEN x) GEN vecsmall_indexsort(GEN V) void vecsmall_sort(GEN V) GEN vecsmall_uniq(GEN V) GEN vecsmall_uniq_sorted(GEN V) GEN vecvecsmall_indexsort(GEN x) long vecvecsmall_search(GEN x, GEN y, long flag) GEN vecvecsmall_sort(GEN x) GEN vecvecsmall_sort_uniq(GEN x) # mt.c void mt_broadcast(GEN code) void mt_sigint_block() void mt_sigint_unblock() void mt_queue_end(pari_mt *pt) GEN mt_queue_get(pari_mt *pt, long *jobid, long *pending) void mt_queue_start(pari_mt *pt, GEN worker) void mt_queue_submit(pari_mt *pt, long jobid, GEN work) void pari_mt_init() void pari_mt_close() # plotport.c void color_to_rgb(GEN c, int *r, int *g, int *b) void colorname_to_rgb(const char *s, int *r, int *g, int *b) void pari_plot_by_file(const char *env, const char *suf, const char *img) void pari_set_plot_engine(void (*plot)(PARI_plot *)) void pari_kill_plot_engine() void plotbox(long ne, GEN gx2, GEN gy2) void plotclip(long rect) void plotcolor(long ne, long color) void plotcopy(long source, long dest, GEN xoff, GEN yoff, long flag) GEN plotcursor(long ne) void plotdraw(GEN list, long flag) GEN ploth(void *E, GEN(*f)(void*, GEN), GEN a, GEN b, long flags, long n, long prec) GEN plothraw(GEN listx, GEN listy, long flag) GEN plothsizes(long flag) void plotinit(long ne, GEN x, GEN y, long flag) void plotkill(long ne) void plotline(long ne, GEN gx2, GEN gy2) void plotlines(long ne, GEN listx, GEN listy, long flag) void plotlinetype(long ne, long t) void plotmove(long ne, GEN x, GEN y) void plotpoints(long ne, GEN listx, GEN listy) void plotpointsize(long ne, GEN size) void plotpointtype(long ne, long t) void plotrbox(long ne, GEN x2, GEN y2) GEN plotrecth(void *E, GEN(*f)(void*, GEN), long ne, GEN a, GEN b, ulong flags, long n, long prec) GEN plotrecthraw(long ne, GEN data, long flags) void plotrline(long ne, GEN x2, GEN y2) void plotrmove(long ne, GEN x, GEN y) void plotrpoint(long ne, GEN x, GEN y) void plotscale(long ne, GEN x1, GEN x2, GEN y1, GEN y2) void plotstring(long ne, char *x, long dir) void psdraw(GEN list, long flag) GEN psploth(void *E, GEN(*f)(void*, GEN), GEN a, 
GEN b, long flags, long n, long prec) GEN psplothraw(GEN listx, GEN listy, long flag) char* rect2ps(GEN w, GEN x, GEN y, PARI_plot *T) char* rect2ps_i(GEN w, GEN x, GEN y, PARI_plot *T, int plotps) char* rect2svg(GEN w, GEN x, GEN y, PARI_plot *T) # polarit1.c GEN ZX_Zp_root(GEN f, GEN a, GEN p, long prec) GEN Zp_appr(GEN f, GEN a) GEN factorpadic(GEN x, GEN p, long r) GEN gdeuc(GEN x, GEN y) GEN grem(GEN x, GEN y) GEN padicappr(GEN f, GEN a) GEN poldivrem(GEN x, GEN y, GEN *pr) GEN rootpadic(GEN f, GEN p, long r) # polarit2.c GEN FpV_factorback(GEN L, GEN e, GEN p) GEN Q_content(GEN x) GEN Q_denom(GEN x) GEN Q_div_to_int(GEN x, GEN c) GEN Q_gcd(GEN x, GEN y) GEN Q_mul_to_int(GEN x, GEN c) GEN Q_muli_to_int(GEN x, GEN d) GEN Q_primitive_part(GEN x, GEN *ptc) GEN Q_primpart(GEN x) GEN Q_remove_denom(GEN x, GEN *ptd) GEN RgXQ_charpoly(GEN x, GEN T, long v) GEN RgXQ_inv(GEN x, GEN y) GEN RgX_disc(GEN x) GEN RgX_extgcd(GEN x, GEN y, GEN *U, GEN *V) GEN RgX_extgcd_simple(GEN a, GEN b, GEN *pu, GEN *pv) GEN RgX_gcd(GEN x, GEN y) GEN RgX_gcd_simple(GEN x, GEN y) int RgXQ_ratlift(GEN y, GEN x, long amax, long bmax, GEN *P, GEN *Q) GEN RgX_resultant_all(GEN P, GEN Q, GEN *sol) long RgX_sturmpart(GEN x, GEN ab) long RgX_type(GEN x, GEN *ptp, GEN *ptpol, long *ptpa) void RgX_type_decode(long x, long *t1, long *t2) int RgX_type_is_composite(long t) GEN ZX_content(GEN x) GEN centermod(GEN x, GEN p) GEN centermod_i(GEN x, GEN p, GEN ps2) GEN centermodii(GEN x, GEN p, GEN po2) GEN content(GEN x) GEN deg1_from_roots(GEN L, long v) GEN factor(GEN x) GEN factor0(GEN x, long flag) GEN factorback(GEN fa) GEN factorback2(GEN fa, GEN e) GEN famat_mul_shallow(GEN f, GEN g) GEN gbezout(GEN x, GEN y, GEN *u, GEN *v) GEN gdivexact(GEN x, GEN y) GEN gen_factorback(GEN L, GEN e, GEN (*_mul)(void*, GEN, GEN), GEN (*_pow)(void*, GEN, GEN), void *data) GEN ggcd(GEN x, GEN y) GEN ggcd0(GEN x, GEN y) GEN ginvmod(GEN x, GEN y) GEN glcm(GEN x, GEN y) GEN glcm0(GEN x, GEN y) GEN gp_factor0(GEN x, GEN flag) GEN idealfactorback(GEN nf, GEN L, GEN e, int red) long polisirreducible(GEN x) GEN newtonpoly(GEN x, GEN p) GEN nffactorback(GEN nf, GEN L, GEN e) GEN nfrootsQ(GEN x) GEN poldisc0(GEN x, long v) GEN polresultant0(GEN x, GEN y, long v, long flag) GEN polsym(GEN x, long n) GEN primitive_part(GEN x, GEN *c) GEN primpart(GEN x) GEN reduceddiscsmith(GEN pol) GEN resultant2(GEN x, GEN y) GEN resultant_all(GEN u, GEN v, GEN *sol) GEN rnfcharpoly(GEN nf, GEN T, GEN alpha, long v) GEN roots_from_deg1(GEN x) GEN roots_to_pol(GEN a, long v) GEN roots_to_pol_r1(GEN a, long v, long r1) long sturmpart(GEN x, GEN a, GEN b) GEN subresext(GEN x, GEN y, GEN *U, GEN *V) GEN sylvestermatrix(GEN x, GEN y) GEN trivial_fact() GEN gcdext0(GEN x, GEN y) GEN polresultantext0(GEN x, GEN y, long v) GEN polresultantext(GEN x, GEN y) GEN prime_fact(GEN x) # polarit3.c GEN Flx_FlxY_resultant(GEN a, GEN b, ulong pp) GEN Flx_factorff_irred(GEN P, GEN Q, ulong p) void Flx_ffintersect(GEN P, GEN Q, long n, ulong l, GEN *SP, GEN *SQ, GEN MA, GEN MB) GEN Flx_ffisom(GEN P, GEN Q, ulong l) GEN Flx_roots_naive(GEN f, ulong p) GEN FlxX_resultant(GEN u, GEN v, ulong p, long sx) GEN Flxq_ffisom_inv(GEN S, GEN Tp, ulong p) GEN FpX_FpXY_resultant(GEN a, GEN b0, GEN p) GEN FpX_factorff_irred(GEN P, GEN Q, GEN p) void FpX_ffintersect(GEN P, GEN Q, long n, GEN l, GEN *SP, GEN *SQ, GEN MA, GEN MB) GEN FpX_ffisom(GEN P, GEN Q, GEN l) GEN FpX_translate(GEN P, GEN c, GEN p) GEN FpXQ_ffisom_inv(GEN S, GEN Tp, GEN p) GEN FpXQX_normalize(GEN z, GEN T, GEN p) GEN 
FpXV_FpC_mul(GEN V, GEN W, GEN p) GEN FpXY_Fq_evaly(GEN Q, GEN y, GEN T, GEN p, long vx) GEN Fq_Fp_mul(GEN x, GEN y, GEN T, GEN p) GEN Fq_add(GEN x, GEN y, GEN T, GEN p) GEN Fq_div(GEN x, GEN y, GEN T, GEN p) GEN Fq_halve(GEN x, GEN T, GEN p) GEN Fq_inv(GEN x, GEN T, GEN p) GEN Fq_invsafe(GEN x, GEN T, GEN p) GEN Fq_mul(GEN x, GEN y, GEN T, GEN p) GEN Fq_mulu(GEN x, ulong y, GEN T, GEN p) GEN Fq_neg(GEN x, GEN T, GEN p) GEN Fq_neg_inv(GEN x, GEN T, GEN p) GEN Fq_pow(GEN x, GEN n, GEN T, GEN p) GEN Fq_powu(GEN x, ulong n, GEN pol, GEN p) GEN Fq_sub(GEN x, GEN y, GEN T, GEN p) GEN Fq_sqr(GEN x, GEN T, GEN p) GEN Fq_sqrt(GEN x, GEN T, GEN p) GEN Fq_sqrtn(GEN x, GEN n, GEN T, GEN p, GEN *zeta) GEN FqC_add(GEN x, GEN y, GEN T, GEN p) GEN FqC_sub(GEN x, GEN y, GEN T, GEN p) GEN FqC_Fq_mul(GEN x, GEN y, GEN T, GEN p) GEN FqC_to_FlxC(GEN v, GEN T, GEN pp) GEN FqM_to_FlxM(GEN x, GEN T, GEN pp) GEN FqV_roots_to_pol(GEN V, GEN T, GEN p, long v) GEN FqV_red(GEN z, GEN T, GEN p) GEN FqV_to_FlxV(GEN v, GEN T, GEN pp) GEN FqX_Fq_add(GEN y, GEN x, GEN T, GEN p) GEN FqX_Fq_mul_to_monic(GEN P, GEN U, GEN T, GEN p) GEN FqX_eval(GEN x, GEN y, GEN T, GEN p) GEN FqX_translate(GEN P, GEN c, GEN T, GEN p) GEN FqXQ_powers(GEN x, long l, GEN S, GEN T, GEN p) GEN FqXQ_matrix_pow(GEN y, long n, long m, GEN S, GEN T, GEN p) GEN FqXY_eval(GEN Q, GEN y, GEN x, GEN T, GEN p) GEN FqXY_evalx(GEN Q, GEN x, GEN T, GEN p) GEN QX_disc(GEN x) GEN QX_gcd(GEN a, GEN b) GEN QX_resultant(GEN A, GEN B) GEN QXQ_intnorm(GEN A, GEN B) GEN QXQ_inv(GEN A, GEN B) GEN QXQ_norm(GEN A, GEN B) int Rg_is_Fp(GEN x, GEN *p) int Rg_is_FpXQ(GEN x, GEN *pT, GEN *pp) GEN Rg_to_Fp(GEN x, GEN p) GEN Rg_to_FpXQ(GEN x, GEN T, GEN p) GEN RgC_to_FpC(GEN x, GEN p) int RgM_is_FpM(GEN x, GEN *p) GEN RgM_to_Flm(GEN x, ulong p) GEN RgM_to_FpM(GEN x, GEN p) int RgV_is_FpV(GEN x, GEN *p) GEN RgV_to_Flv(GEN x, ulong p) GEN RgV_to_FpV(GEN x, GEN p) int RgX_is_FpX(GEN x, GEN *p) GEN RgX_to_FpX(GEN x, GEN p) int RgX_is_FpXQX(GEN x, GEN *pT, GEN *pp) GEN RgX_to_FpXQX(GEN x, GEN T, GEN p) GEN RgX_to_FqX(GEN x, GEN T, GEN p) GEN ZX_ZXY_rnfequation(GEN A, GEN B, long *Lambda) GEN ZXQ_charpoly(GEN A, GEN T, long v) GEN ZX_disc(GEN x) int ZX_is_squarefree(GEN x) GEN ZX_gcd(GEN A, GEN B) GEN ZX_gcd_all(GEN A, GEN B, GEN *Anew) GEN ZX_resultant(GEN A, GEN B) int Z_incremental_CRT(GEN *H, ulong Hp, GEN *q, ulong p) GEN Z_init_CRT(ulong Hp, ulong p) int ZM_incremental_CRT(GEN *H, GEN Hp, GEN *q, ulong p) GEN ZM_init_CRT(GEN Hp, ulong p) int ZX_incremental_CRT(GEN *ptH, GEN Hp, GEN *q, ulong p) GEN ZX_init_CRT(GEN Hp, ulong p, long v) GEN characteristic(GEN x) GEN ffinit(GEN p, long n, long v) GEN ffnbirred(GEN p, long n) GEN ffnbirred0(GEN p, long n, long flag) GEN ffsumnbirred(GEN p, long n) const bb_field *get_Fq_field(void **E, GEN T, GEN p) GEN init_Fq(GEN p, long n, long v) GEN pol_x_powers(long N, long v) GEN residual_characteristic(GEN x) # polclass.c GEN polclass(GEN D, long inv, long xvar) # polmodular.c GEN Flm_Fl_polmodular_evalx(GEN phi, long L, ulong j, ulong p, ulong pi) GEN Fp_polmodular_evalx(long L, long inv, GEN J, GEN P, long v, int compute_derivs) GEN polmodular(long L, long inv, GEN x, long yvar, long compute_derivs) GEN polmodular_ZM(long L, long inv) GEN polmodular_ZXX(long L, long inv, long xvar, long yvar) # prime.c long BPSW_isprime(GEN x) long BPSW_psp(GEN N) GEN addprimes(GEN primes) GEN gisprime(GEN x, long flag) GEN gispseudoprime(GEN x, long flag) GEN gprimepi_upper_bound(GEN x) GEN gprimepi_lower_bound(GEN x) long isprime(GEN x) long 
ispseudoprime(GEN x, long flag) long millerrabin(GEN n, long k) GEN prime(long n) GEN primepi(GEN x) double primepi_upper_bound(double x) double primepi_lower_bound(double x) GEN primes(long n) GEN primes_interval(GEN a, GEN b) GEN primes_interval_zv(ulong a, ulong b) GEN primes_upto_zv(ulong b) GEN primes0(GEN n) GEN primes_zv(long m) GEN randomprime(GEN N) GEN removeprimes(GEN primes) int uislucaspsp(ulong n) int uisprime(ulong n) ulong uprime(long n) ulong uprimepi(ulong n) # qfisom.c GEN qfauto(GEN g, GEN flags) GEN qfauto0(GEN g, GEN flags) GEN qfautoexport(GEN g, long flag) GEN qfisom(GEN g, GEN h, GEN flags) GEN qfisom0(GEN g, GEN h, GEN flags) GEN qfisominit(GEN g, GEN flags, GEN minvec) GEN qfisominit0(GEN g, GEN flags, GEN minvec) GEN qforbits(GEN G, GEN V) # qfparam.c GEN qfsolve(GEN G) GEN qfparam(GEN G, GEN sol, long fl) # random.c GEN genrand(GEN N) GEN getrand() ulong pari_rand() GEN randomi(GEN x) GEN randomr(long prec) ulong random_Fl(ulong n) long random_bits(long k) void setrand(GEN seed) # rootpol.c GEN QX_complex_roots(GEN p, long l) GEN ZX_graeffe(GEN p) GEN cleanroots(GEN x, long l) double fujiwara_bound(GEN p) double fujiwara_bound_real(GEN p, long sign) int isrealappr(GEN x, long l) GEN polgraeffe(GEN p) GEN polmod_to_embed(GEN x, long prec) GEN roots(GEN x, long l) GEN realroots(GEN P, GEN ab, long prec) long ZX_sturm(GEN P) long ZX_sturmpart(GEN P, GEN ab) GEN ZX_Uspensky(GEN P, GEN ab, long flag, long prec) # subcyclo.c GEN factor_Aurifeuille(GEN p, long n) GEN factor_Aurifeuille_prime(GEN p, long n) GEN galoissubcyclo(GEN N, GEN sg, long flag, long v) GEN polsubcyclo(long n, long d, long v) # subfield.c GEN nfsubfields(GEN nf, long d) # subgroup.c GEN subgrouplist(GEN cyc, GEN bound) void forsubgroup(void *E, long fun(void*, GEN), GEN cyc, GEN B) # stark.c GEN bnrL1(GEN bnr, GEN sbgrp, long flag, long prec) GEN bnrrootnumber(GEN bnr, GEN chi, long flag, long prec) GEN bnrstark(GEN bnr, GEN subgroup, long prec) GEN cyc2elts(GEN cyc) # sumiter.c GEN asympnum(void *E, GEN (*f)(void *, GEN, long), long muli, GEN alpha, long prec) GEN derivnum(void *E, GEN (*eval)(void *, GEN, long prec), GEN x, long prec) GEN derivfun(void *E, GEN (*eval)(void *, GEN, long prec), GEN x, long prec) int forcomposite_init(forcomposite_t *C, GEN a, GEN b) GEN forcomposite_next(forcomposite_t *C) GEN forprime_next(forprime_t *T) int forprime_init(forprime_t *T, GEN a, GEN b) int forvec_init(forvec_t *T, GEN x, long flag) GEN forvec_next(forvec_t *T) GEN limitnum(void *E, GEN (*f)(void *, GEN, long), long muli, GEN alpha, long prec) GEN polzag(long n, long m) GEN prodeuler(void *E, GEN (*eval)(void *, GEN), GEN ga, GEN gb, long prec) GEN prodinf(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN prodinf1(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumalt(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumalt2(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumpos(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumpos2(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN suminf(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) ulong u_forprime_next(forprime_t *T) int u_forprime_init(forprime_t *T, ulong a, ulong b) void u_forprime_restrict(forprime_t *T, ulong c) int u_forprime_arith_init(forprime_t *T, ulong a, ulong b, ulong c, ulong q) GEN zbrent(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, long prec) GEN solvestep(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, GEN step, long flag, long prec) # thue.c GEN bnfisintnorm(GEN x, 
GEN y) GEN bnfisintnormabs(GEN bnf, GEN a) GEN thue(GEN thueres, GEN rhs, GEN ne) GEN thueinit(GEN pol, long flag, long prec) # trans1.c GEN Pi2n(long n, long prec) GEN PiI2(long prec) GEN PiI2n(long n, long prec) GEN Qp_exp(GEN x) GEN Qp_log(GEN x) GEN Qp_sqrt(GEN x) GEN Qp_sqrtn(GEN x, GEN n, GEN *zetan) long Zn_ispower(GEN a, GEN q, GEN K, GEN *pt) long Zn_issquare(GEN x, GEN n) GEN Zn_sqrt(GEN x, GEN n) GEN Zp_teichmuller(GEN x, GEN p, long n, GEN q) GEN agm(GEN x, GEN y, long prec) GEN constcatalan(long prec) GEN consteuler(long prec) GEN constlog2(long prec) GEN constpi(long prec) GEN cxexpm1(GEN z, long prec) GEN expIr(GEN x) GEN exp1r_abs(GEN x) GEN gcos(GEN x, long prec) GEN gcotan(GEN x, long prec) GEN gcotanh(GEN x, long prec) GEN gexp(GEN x, long prec) GEN gexpm1(GEN x, long prec) GEN glog(GEN x, long prec) GEN gpow(GEN x, GEN n, long prec) GEN gpowers(GEN x, long n) GEN gpowers0(GEN x, long n, GEN x0) GEN gpowgs(GEN x, long n) GEN grootsof1(long N, long prec) GEN gsin(GEN x, long prec) GEN gsinc(GEN x, long prec) void gsincos(GEN x, GEN *s, GEN *c, long prec) GEN gsqrpowers(GEN q, long n) GEN gsqrt(GEN x, long prec) GEN gsqrtn(GEN x, GEN n, GEN *zetan, long prec) GEN gtan(GEN x, long prec) GEN logr_abs(GEN x) GEN mpcos(GEN x) GEN mpeuler(long prec) GEN mpcatalan(long prec) void mpsincosm1(GEN x, GEN *s, GEN *c) GEN mpexp(GEN x) GEN mpexpm1(GEN x) GEN mplog(GEN x) GEN mplog2(long prec) GEN mppi(long prec) GEN mpsin(GEN x) void mpsincos(GEN x, GEN *s, GEN *c) GEN powersr(GEN a, long n) GEN powis(GEN x, long n) GEN powiu(GEN p, ulong k) GEN powrfrac(GEN x, long n, long d) GEN powrs(GEN x, long n) GEN powrshalf(GEN x, long s) GEN powru(GEN x, ulong n) GEN powruhalf(GEN x, ulong s) GEN powuu(ulong p, ulong k) GEN powgi(GEN x, GEN n) GEN serchop0(GEN s) GEN sqrtnint(GEN a, long n) GEN teich(GEN x) GEN teichmullerinit(long p, long n) GEN teichmuller(GEN x, GEN tab) GEN trans_eval(const char *fun, GEN (*f) (GEN, long), GEN x, long prec) ulong upowuu(ulong p, ulong k) ulong usqrtn(ulong a, ulong n) ulong usqrt(ulong a) # trans2.c GEN Qp_gamma(GEN x) GEN bernfrac(long n) GEN bernpol(long k, long v) GEN bernreal(long n, long prec) GEN gacosh(GEN x, long prec) GEN gacos(GEN x, long prec) GEN garg(GEN x, long prec) GEN gasinh(GEN x, long prec) GEN gasin(GEN x, long prec) GEN gatan(GEN x, long prec) GEN gatanh(GEN x, long prec) GEN gcosh(GEN x, long prec) GEN ggammah(GEN x, long prec) GEN ggamma(GEN x, long prec) GEN ggamma1m1(GEN x, long prec) GEN glngamma(GEN x, long prec) GEN gpsi(GEN x, long prec) GEN gsinh(GEN x, long prec) GEN gtanh(GEN x, long prec) GEN mpfactr(long n, long prec) GEN sumformal(GEN T, long v) # trans3.c double dblmodulus(GEN x) GEN dilog(GEN x, long prec) GEN eint1(GEN x, long prec) GEN eta(GEN x, long prec) GEN eta0(GEN x, long flag, long prec) GEN gerfc(GEN x, long prec) GEN glambertW(GEN y, long prec) GEN gpolylog(long m, GEN x, long prec) GEN gzeta(GEN x, long prec) GEN hbessel1(GEN n, GEN z, long prec) GEN hbessel2(GEN n, GEN z, long prec) GEN hyperu(GEN a, GEN b, GEN gx, long prec) GEN ibessel(GEN n, GEN z, long prec) GEN incgam(GEN a, GEN x, long prec) GEN incgam0(GEN a, GEN x, GEN z, long prec) GEN incgamc(GEN a, GEN x, long prec) GEN jbessel(GEN n, GEN z, long prec) GEN jbesselh(GEN n, GEN z, long prec) GEN jell(GEN x, long prec) GEN kbessel(GEN nu, GEN gx, long prec) GEN mpeint1(GEN x, GEN expx) GEN mplambertW(GEN y) GEN mpveceint1(GEN C, GEN eC, long n) GEN nbessel(GEN n, GEN z, long prec) GEN polylog0(long m, GEN x, long flag, long prec) GEN 
sumdedekind(GEN h, GEN k) GEN sumdedekind_coprime(GEN h, GEN k) GEN szeta(long x, long prec) GEN theta(GEN q, GEN z, long prec) GEN thetanullk(GEN q, long k, long prec) GEN trueeta(GEN x, long prec) GEN u_sumdedekind_coprime(long h, long k) GEN veceint1(GEN nmax, GEN C, long prec) GEN vecthetanullk(GEN q, long k, long prec) GEN vecthetanullk_tau(GEN tau, long k, long prec) GEN veczeta(GEN a, GEN b, long N, long prec) GEN weber0(GEN x, long flag, long prec) GEN weberf(GEN x, long prec) GEN weberf1(GEN x, long prec) GEN weberf2(GEN x, long prec) # modsym.c GEN Qevproj_apply(GEN T, GEN pro) GEN Qevproj_apply_vecei(GEN T, GEN pro, long k) GEN Qevproj_init(GEN M) GEN RgX_act_Gl2Q(GEN g, long k) GEN RgX_act_ZGl2Q(GEN z, long k) void checkms(GEN W) void checkmspadic(GEN W) GEN ellpadicL(GEN E, GEN p, long n, GEN s, long r, GEN D) GEN msfromcusp(GEN W, GEN c) GEN msfromell(GEN E, long signe) GEN msfromhecke(GEN W, GEN v, GEN H) long msgetlevel(GEN W) long msgetsign(GEN W) long msgetweight(GEN W) GEN msatkinlehner(GEN W, long Q, GEN) GEN mscuspidal(GEN W, long flag) GEN mseisenstein(GEN W) GEN mseval(GEN W, GEN s, GEN p) GEN mshecke(GEN W, long p, GEN H) GEN msinit(GEN N, GEN k, long sign) long msissymbol(GEN W, GEN s) GEN msomseval(GEN W, GEN phi, GEN path) GEN mspadicinit(GEN W, long p, long n, long flag) GEN mspadicL(GEN oms, GEN s, long r) GEN mspadicmoments(GEN W, GEN phi, long D) GEN mspadicseries(GEN M, long teichi) GEN mspathgens(GEN W) GEN mspathlog(GEN W, GEN path) GEN msnew(GEN W) GEN msstar(GEN W, GEN) GEN msqexpansion(GEN W, GEN proV, ulong B) GEN mssplit(GEN W, GEN H, long deglim) GEN mstooms(GEN W, GEN phi) # zetamult.c GEN zetamult(GEN avec, long prec) # level1.h ulong Fl_add(ulong a, ulong b, ulong p) ulong Fl_addmul_pre(ulong x0, ulong x1, ulong y0, ulong p, ulong pi) ulong Fl_addmulmul_pre(ulong x0, ulong y0, ulong x1, ulong y1, ulong p, ulong pi) long Fl_center(ulong u, ulong p, ulong ps2) ulong Fl_div(ulong a, ulong b, ulong p) ulong Fl_double(ulong a, ulong p) ulong Fl_ellj_pre(ulong a4, ulong a6, ulong p, ulong pi) ulong Fl_halve(ulong y, ulong p) ulong Fl_mul(ulong a, ulong b, ulong p) ulong Fl_mul_pre(ulong a, ulong b, ulong p, ulong pi) ulong Fl_neg(ulong x, ulong p) ulong Fl_sqr(ulong a, ulong p) ulong Fl_sqr_pre(ulong a, ulong p, ulong pi) ulong Fl_sub(ulong a, ulong b, ulong p) ulong Fl_triple(ulong a, ulong p) ulong Mod2(GEN x) ulong Mod4(GEN x) ulong Mod8(GEN x) ulong Mod16(GEN x) ulong Mod32(GEN x) ulong Mod64(GEN x) int abscmpiu(GEN x, ulong y) int abscmpui(ulong x, GEN y) int absequaliu(GEN x, ulong y) GEN absi(GEN x) GEN absi_shallow(GEN x) GEN absr(GEN x) int absrnz_equal1(GEN x) int absrnz_equal2n(GEN x) GEN addii(GEN x, GEN y) void addiiz(GEN x, GEN y, GEN z) GEN addir(GEN x, GEN y) void addirz(GEN x, GEN y, GEN z) GEN addis(GEN x, long s) GEN addri(GEN x, GEN y) void addriz(GEN x, GEN y, GEN z) GEN addrr(GEN x, GEN y) void addrrz(GEN x, GEN y, GEN z) GEN addrs(GEN x, long s) GEN addsi(long x, GEN y) void addsiz(long s, GEN y, GEN z) void addsrz(long s, GEN y, GEN z) GEN addss(long x, long y) void addssz(long s, long y, GEN z) GEN adduu(ulong x, ulong y) void affgr(GEN x, GEN y) void affii(GEN x, GEN y) void affiz(GEN x, GEN y) void affrr_fixlg(GEN y, GEN z) void affsi(long s, GEN x) void affsr(long s, GEN x) void affsz(long x, GEN y) void affui(ulong s, GEN x) void affur(ulong s, GEN x) GEN cgetg(long x, long y) GEN cgetg_block(long x, long y) GEN cgetg_copy(GEN x, long *plx) GEN cgeti(long x) GEN cgetineg(long x) GEN cgetipos(long x) GEN cgetr(long x) GEN 
cgetr_block(long prec) int cmpir(GEN x, GEN y) int cmpis(GEN x, long y) int cmpri(GEN x, GEN y) int cmprs(GEN x, long y) int cmpsi(long x, GEN y) int cmpsr(long x, GEN y) GEN cxtofp(GEN x, long prec) GEN divii(GEN a, GEN b) void diviiz(GEN x, GEN y, GEN z) void divirz(GEN x, GEN y, GEN z) void divisz(GEN x, long s, GEN z) void divriz(GEN x, GEN y, GEN z) void divrrz(GEN x, GEN y, GEN z) void divrsz(GEN y, long s, GEN z) GEN divsi_rem(long x, GEN y, long *rem) void divsiz(long x, GEN y, GEN z) void divsrz(long s, GEN y, GEN z) GEN divss(long x, long y) GEN divss_rem(long x, long y, long *rem) void divssz(long x, long y, GEN z) int dvdii(GEN x, GEN y) int dvdiiz(GEN x, GEN y, GEN z) int dvdis(GEN x, long y) int dvdisz(GEN x, long y, GEN z) int dvdiu(GEN x, ulong y) int dvdiuz(GEN x, ulong y, GEN z) int dvdsi(long x, GEN y) int dvdui(ulong x, GEN y) void dvmdiiz(GEN x, GEN y, GEN z, GEN t) GEN dvmdis(GEN x, long y, GEN *z) void dvmdisz(GEN x, long y, GEN z, GEN t) long dvmdsBIL(long n, long *r) GEN dvmdsi(long x, GEN y, GEN *z) void dvmdsiz(long x, GEN y, GEN z, GEN t) GEN dvmdss(long x, long y, GEN *z) void dvmdssz(long x, long y, GEN z, GEN t) ulong dvmduBIL(ulong n, ulong *r) int equalis(GEN x, long y) int equalsi(long x, GEN y) int absequalui(ulong x, GEN y) ulong ceildivuu(ulong a, ulong b) long evalexpo(long x) long evallg(long x) long evalprecp(long x) long evalvalp(long x) long expi(GEN x) long expu(ulong x) void fixlg(GEN z, long ly) GEN fractor(GEN x, long prec) GEN icopy(GEN x) GEN icopyspec(GEN x, long nx) GEN icopy_avma(GEN x, pari_sp av) ulong int_bit(GEN x, long n) GEN itor(GEN x, long prec) long itos(GEN x) long itos_or_0(GEN x) ulong itou(GEN x) ulong itou_or_0(GEN x) GEN leafcopy(GEN x) GEN leafcopy_avma(GEN x, pari_sp av) double maxdd(double x, double y) long maxss(long x, long y) long maxuu(ulong x, ulong y) double mindd(double x, double y) long minss(long x, long y) long minuu(ulong x, ulong y) long mod16(GEN x) long mod2(GEN x) ulong mod2BIL(GEN x) long mod32(GEN x) long mod4(GEN x) long mod64(GEN x) long mod8(GEN x) GEN modis(GEN x, long y) void modisz(GEN y, long s, GEN z) GEN modsi(long x, GEN y) void modsiz(long s, GEN y, GEN z) GEN modss(long x, long y) void modssz(long s, long y, GEN z) GEN mpabs(GEN x) GEN mpabs_shallow(GEN x) GEN mpadd(GEN x, GEN y) void mpaddz(GEN x, GEN y, GEN z) void mpaff(GEN x, GEN y) GEN mpceil(GEN x) int mpcmp(GEN x, GEN y) GEN mpcopy(GEN x) GEN mpdiv(GEN x, GEN y) long mpexpo(GEN x) GEN mpfloor(GEN x) GEN mpmul(GEN x, GEN y) void mpmulz(GEN x, GEN y, GEN z) GEN mpneg(GEN x) int mpodd(GEN x) GEN mpround(GEN x) GEN mpshift(GEN x, long s) GEN mpsqr(GEN x) GEN mpsub(GEN x, GEN y) void mpsubz(GEN x, GEN y, GEN z) GEN mptrunc(GEN x) void muliiz(GEN x, GEN y, GEN z) void mulirz(GEN x, GEN y, GEN z) GEN mulis(GEN x, long s) GEN muliu(GEN x, ulong s) GEN mulri(GEN x, GEN s) void mulriz(GEN x, GEN y, GEN z) void mulrrz(GEN x, GEN y, GEN z) GEN mulrs(GEN x, long s) GEN mulru(GEN x, ulong s) void mulsiz(long s, GEN y, GEN z) void mulsrz(long s, GEN y, GEN z) void mulssz(long s, long y, GEN z) GEN negi(GEN x) GEN negr(GEN x) GEN new_chunk(size_t x) GEN rcopy(GEN x) GEN rdivii(GEN x, GEN y, long prec) void rdiviiz(GEN x, GEN y, GEN z) GEN rdivis(GEN x, long y, long prec) GEN rdivsi(long x, GEN y, long prec) GEN rdivss(long x, long y, long prec) GEN real2n(long n, long prec) GEN real_m2n(long n, long prec) GEN real_0(long prec) GEN real_0_bit(long bitprec) GEN real_1(long prec) GEN real_1_bit(long bit) GEN real_m1(long prec) GEN remii(GEN a, GEN b) void 
remiiz(GEN x, GEN y, GEN z) GEN remis(GEN x, long y) void remisz(GEN y, long s, GEN z) ulong remlll_pre(ulong u2, ulong u1, ulong u0, ulong p, ulong pi) GEN remsi(long x, GEN y) void remsiz(long s, GEN y, GEN z) GEN remss(long x, long y) void remssz(long s, long y, GEN z) GEN rtor(GEN x, long prec) long sdivsi(long x, GEN y) long sdivsi_rem(long x, GEN y, long *rem) long sdivss_rem(long x, long y, long *rem) ulong udiviu_rem(GEN n, ulong d, ulong *r) ulong udivuu_rem(ulong x, ulong y, ulong *r) ulong umodi2n(GEN x, long n) void setabssign(GEN x) void shift_left(GEN z2, GEN z1, long min, long M, ulong f, ulong sh) void shift_right(GEN z2, GEN z1, long min, long M, ulong f, ulong sh) ulong shiftl(ulong x, ulong y) ulong shiftlr(ulong x, ulong y) GEN shiftr(GEN x, long n) void shiftr_inplace(GEN z, long d) long smodis(GEN x, long y) long smodss(long x, long y) void stackdummy(pari_sp av, pari_sp ltop) char *stack_malloc(size_t N) char *stack_malloc_align(size_t N, long k) char *stack_calloc(size_t N) GEN stoi(long x) GEN stor(long x, long prec) GEN subii(GEN x, GEN y) void subiiz(GEN x, GEN y, GEN z) GEN subir(GEN x, GEN y) void subirz(GEN x, GEN y, GEN z) GEN subis(GEN x, long y) void subisz(GEN y, long s, GEN z) GEN subri(GEN x, GEN y) void subriz(GEN x, GEN y, GEN z) GEN subrr(GEN x, GEN y) void subrrz(GEN x, GEN y, GEN z) GEN subrs(GEN x, long y) void subrsz(GEN y, long s, GEN z) GEN subsi(long x, GEN y) void subsiz(long s, GEN y, GEN z) void subsrz(long s, GEN y, GEN z) GEN subss(long x, long y) void subssz(long x, long y, GEN z) GEN subuu(ulong x, ulong y) void togglesign(GEN x) void togglesign_safe(GEN *px) void affectsign(GEN x, GEN y) void affectsign_safe(GEN x, GEN *py) GEN truedivii(GEN a, GEN b) GEN truedivis(GEN a, long b) GEN truedivsi(long a, GEN b) ulong udivui_rem(ulong x, GEN y, ulong *rem) ulong umodsu(long x, ulong y) ulong umodui(ulong x, GEN y) ulong umuluu_or_0(ulong x, ulong y) GEN utoi(ulong x) GEN utoineg(ulong x) GEN utoipos(ulong x) GEN utor(ulong s, long prec) GEN uutoi(ulong x, ulong y) GEN uutoineg(ulong x, ulong y) long vali(GEN x) int varncmp(long x, long y) long varnmax(long x, long y) long varnmin(long x, long y) # pariinl.h GEN abgrp_get_cyc(GEN x) GEN abgrp_get_gen(GEN x) GEN abgrp_get_no(GEN x) GEN bid_get_arch(GEN bid) GEN bid_get_archp(GEN bid) GEN bid_get_cyc(GEN bid) GEN bid_get_fact(GEN bid) GEN bid_get_gen(GEN bid) GEN bid_get_gen_nocheck(GEN bid) GEN bid_get_grp(GEN bid) GEN bid_get_ideal(GEN bid) GEN bid_get_mod(GEN bid) GEN bid_get_no(GEN bid) GEN bid_get_sarch(GEN bid) GEN bid_get_sprk(GEN bid) GEN bid_get_U(GEN bid) GEN bnf_get_clgp(GEN bnf) GEN bnf_get_cyc(GEN bnf) GEN bnf_get_fu(GEN bnf) GEN bnf_get_fu_nocheck(GEN bnf) GEN bnf_get_gen(GEN bnf) GEN bnf_get_logfu(GEN bnf) GEN bnf_get_nf(GEN bnf) GEN bnf_get_no(GEN bnf) GEN bnf_get_reg(GEN bnf) GEN bnf_get_tuU(GEN bnf) long bnf_get_tuN(GEN bnf) GEN bnr_get_bnf(GEN bnr) GEN bnr_get_clgp(GEN bnr) GEN bnr_get_cyc(GEN bnr) GEN bnr_get_gen(GEN bnr) GEN bnr_get_gen_nocheck(GEN bnr) GEN bnr_get_no(GEN bnr) GEN bnr_get_bid(GEN bnr) GEN bnr_get_mod(GEN bnr) GEN bnr_get_nf(GEN bnr) GEN ell_get_a1(GEN e) GEN ell_get_a2(GEN e) GEN ell_get_a3(GEN e) GEN ell_get_a4(GEN e) GEN ell_get_a6(GEN e) GEN ell_get_b2(GEN e) GEN ell_get_b4(GEN e) GEN ell_get_b6(GEN e) GEN ell_get_b8(GEN e) GEN ell_get_c4(GEN e) GEN ell_get_c6(GEN e) GEN ell_get_disc(GEN e) GEN ell_get_j(GEN e) long ell_get_type(GEN e) int ell_is_inf(GEN z) GEN ellinf() GEN ellff_get_field(GEN x) GEN ellff_get_a4a6(GEN x) GEN ellnf_get_nf(GEN x) GEN 
ellQp_get_p(GEN E) long ellQp_get_prec(GEN E) GEN ellQp_get_zero(GEN x) long ellR_get_prec(GEN x) long ellR_get_sign(GEN x) GEN gal_get_pol(GEN gal) GEN gal_get_p(GEN gal) GEN gal_get_e(GEN gal) GEN gal_get_mod(GEN gal) GEN gal_get_roots(GEN gal) GEN gal_get_invvdm(GEN gal) GEN gal_get_den(GEN gal) GEN gal_get_group(GEN gal) GEN gal_get_gen(GEN gal) GEN gal_get_orders(GEN gal) GEN idealchineseinit(GEN nf, GEN x) GEN idealpseudomin(GEN I, GEN G) GEN idealpseudomin_nonscalar(GEN I, GEN G) GEN idealpseudored(GEN I, GEN G) GEN idealred_elt(GEN nf, GEN I) GEN idealred(GEN nf, GEN I) long logint(GEN B, GEN y) ulong ulogint(ulong B, ulong y) GEN modpr_get_pr(GEN x) GEN modpr_get_p(GEN x) GEN modpr_get_T(GEN x) GEN nf_get_M(GEN nf) GEN nf_get_G(GEN nf) GEN nf_get_Tr(GEN nf) GEN nf_get_diff(GEN nf) long nf_get_degree(GEN nf) GEN nf_get_disc(GEN nf) GEN nf_get_index(GEN nf) GEN nf_get_invzk(GEN nf) GEN nf_get_pol(GEN nf) long nf_get_r1(GEN nf) long nf_get_r2(GEN nf) GEN nf_get_ramified_primes(GEN nf) GEN nf_get_roots(GEN nf) GEN nf_get_roundG(GEN nf) void nf_get_sign(GEN nf, long *r1, long *r2) long nf_get_varn(GEN nf) GEN nf_get_zk(GEN nf) GEN nf_get_zkden(GEN nf) GEN nf_get_zkprimpart(GEN nf) long pr_get_e(GEN pr) long pr_get_f(GEN pr) GEN pr_get_gen(GEN pr) GEN pr_get_p(GEN pr) GEN pr_get_tau(GEN pr) int pr_is_inert(GEN P) GEN pr_norm(GEN pr) GEN rnf_get_alpha(GEN rnf) long rnf_get_absdegree(GEN rnf) long rnf_get_degree(GEN rnf) GEN rnf_get_idealdisc(GEN rnf) GEN rnf_get_invzk(GEN rnf) GEN rnf_get_k(GEN rnf) GEN rnf_get_map(GEN rnf) GEN rnf_get_nf(GEN rnf) long rnf_get_nfdegree(GEN rnf) GEN rnf_get_nfpol(GEN rnf) long rnf_get_nfvarn(GEN rnf) GEN rnf_get_pol(GEN rnf) GEN rnf_get_polabs(GEN rnf) GEN rnf_get_zk(GEN nf) void rnf_get_nfzk(GEN rnf, GEN *b, GEN *cb) long rnf_get_varn(GEN rnf) ulong upr_norm(GEN pr) GEN znstar_get_N(GEN G) GEN znstar_get_conreycyc(GEN G) GEN znstar_get_conreygen(GEN G) GEN znstar_get_faN(GEN G) GEN znstar_get_no(GEN G) GEN znstar_get_pe(GEN G) GEN znstar_get_Ui(GEN G) long closure_arity(GEN C) const char * closure_codestr(GEN C) GEN closure_get_code(GEN C) GEN closure_get_oper(GEN C) GEN closure_get_data(GEN C) GEN closure_get_dbg(GEN C) GEN closure_get_text(GEN C) GEN closure_get_frame(GEN C) long closure_is_variadic(GEN C) GEN addmuliu(GEN x, GEN y, ulong u) GEN addmuliu_inplace(GEN x, GEN y, ulong u) GEN lincombii(GEN u, GEN v, GEN x, GEN y) GEN mulsubii(GEN y, GEN z, GEN x) GEN submulii(GEN x, GEN y, GEN z) GEN submuliu(GEN x, GEN y, ulong u) GEN submuliu_inplace(GEN x, GEN y, ulong u) GEN FpXQ_add(GEN x, GEN y, GEN T, GEN p) GEN FpXQ_sub(GEN x, GEN y, GEN T, GEN p) GEN Flxq_add(GEN x, GEN y, GEN T, ulong p) GEN Flxq_sub(GEN x, GEN y, GEN T, ulong p) GEN FpXQX_div(GEN x, GEN y, GEN T, GEN p) GEN FlxqX_div(GEN x, GEN y, GEN T, ulong p) GEN F2xqX_div(GEN x, GEN y, GEN T) GEN Fq_red(GEN x, GEN T, GEN p) GEN Fq_to_FpXQ(GEN x, GEN T, GEN p) GEN gener_Fq_local(GEN T, GEN p, GEN L) GEN FqX_Fp_mul(GEN P, GEN U, GEN T, GEN p) GEN FqX_Fq_mul(GEN P, GEN U, GEN T, GEN p) GEN FqX_add(GEN x, GEN y, GEN T, GEN p) GEN FqX_deriv(GEN f, GEN T, GEN p) GEN FqX_div(GEN x, GEN y, GEN T, GEN p) GEN FqX_div_by_X_x(GEN x, GEN y, GEN T, GEN p, GEN *z) GEN FqX_divrem(GEN x, GEN y, GEN T, GEN p, GEN *z) GEN FqX_extgcd(GEN P, GEN Q, GEN T, GEN p, GEN *U, GEN *V) GEN FqX_factor(GEN f, GEN T, GEN p) GEN FqX_gcd(GEN P, GEN Q, GEN T, GEN p) GEN FqX_get_red(GEN S, GEN T, GEN p) GEN FqX_halfgcd(GEN P, GEN Q, GEN T, GEN p) GEN FqX_mul(GEN x, GEN y, GEN T, GEN p) GEN FqX_mulu(GEN x, ulong y, GEN T, GEN 
p) GEN FqX_neg(GEN x, GEN T, GEN p) GEN FqX_normalize(GEN z, GEN T, GEN p) GEN FqX_powu(GEN x, ulong n, GEN T, GEN p) GEN FqX_red(GEN z, GEN T, GEN p) GEN FqX_rem(GEN x, GEN y, GEN T, GEN p) GEN FqX_roots(GEN f, GEN T, GEN p) GEN FqX_sqr(GEN x, GEN T, GEN p) GEN FqX_sub(GEN x, GEN y, GEN T, GEN p) GEN FqXQ_add(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FqXQ_div(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FqXQ_inv(GEN x, GEN S, GEN T, GEN p) GEN FqXQ_invsafe(GEN x, GEN S, GEN T, GEN p) GEN FqXQ_mul(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FqXQ_pow(GEN x, GEN n, GEN S, GEN T, GEN p) GEN FqXQ_sqr(GEN x, GEN S, GEN T, GEN p) GEN FqXQ_sub(GEN x, GEN y, GEN S, GEN T, GEN p) long get_F2x_degree(GEN T) GEN get_F2x_mod(GEN T) long get_F2x_var(GEN T) long get_F2xqX_degree(GEN T) GEN get_F2xqX_mod(GEN T) long get_F2xqX_var(GEN T) long get_Flx_degree(GEN T) GEN get_Flx_mod(GEN T) long get_Flx_var(GEN T) long get_FlxqX_degree(GEN T) GEN get_FlxqX_mod(GEN T) long get_FlxqX_var(GEN T) long get_FpX_degree(GEN T) GEN get_FpX_mod(GEN T) long get_FpX_var(GEN T) long get_FpXQX_degree(GEN T) GEN get_FpXQX_mod(GEN T) long get_FpXQX_var(GEN T) ulong F2m_coeff(GEN x, long a, long b) void F2m_clear(GEN x, long a, long b) void F2m_flip(GEN x, long a, long b) void F2m_set(GEN x, long a, long b) void F2v_clear(GEN x, long v) ulong F2v_coeff(GEN x, long v) void F2v_flip(GEN x, long v) GEN F2v_to_F2x(GEN x, long sv) void F2v_set(GEN x, long v) void F2x_clear(GEN x, long v) ulong F2x_coeff(GEN x, long v) void F2x_flip(GEN x, long v) void F2x_set(GEN x, long v) int F2x_equal1(GEN x) int F2x_equal(GEN V, GEN W) GEN F2x_div(GEN x, GEN y) GEN F2x_renormalize(GEN x, long lx) GEN F2m_copy(GEN x) GEN F2v_copy(GEN x) GEN F2x_copy(GEN x) GEN F2v_ei(long n, long i) GEN Flm_copy(GEN x) GEN Flv_copy(GEN x) int Flx_equal1(GEN x) GEN Flx_copy(GEN x) GEN Flx_div(GEN x, GEN y, ulong p) ulong Flx_lead(GEN x) GEN Flx_mulu(GEN x, ulong a, ulong p) GEN FpV_FpC_mul(GEN x, GEN y, GEN p) GEN FpXQX_renormalize(GEN x, long lx) GEN FpXX_renormalize(GEN x, long lx) GEN FpX_div(GEN x, GEN y, GEN p) GEN FpX_renormalize(GEN x, long lx) GEN Fp_add(GEN a, GEN b, GEN m) GEN Fp_addmul(GEN x, GEN y, GEN z, GEN p) GEN Fp_center(GEN u, GEN p, GEN ps2) GEN Fp_div(GEN a, GEN b, GEN m) GEN Fp_halve(GEN y, GEN p) GEN Fp_inv(GEN a, GEN m) GEN Fp_invsafe(GEN a, GEN m) GEN Fp_mul(GEN a, GEN b, GEN m) GEN Fp_muls(GEN a, long b, GEN m) GEN Fp_mulu(GEN a, ulong b, GEN m) GEN Fp_neg(GEN b, GEN m) GEN Fp_red(GEN x, GEN p) GEN Fp_sqr(GEN a, GEN m) GEN Fp_sub(GEN a, GEN b, GEN m) GEN GENbinbase(GENbin *p) GEN Q_abs(GEN x) GEN Q_abs_shallow(GEN x) int QV_isscalar(GEN x) void Qtoss(GEN q, long *n, long *d) GEN R_abs(GEN x) GEN R_abs_shallow(GEN x) GEN RgC_fpnorml2(GEN x, long prec) GEN RgC_gtofp(GEN x, long prec) GEN RgC_gtomp(GEN x, long prec) void RgM_dimensions(GEN x, long *m, long *n) GEN RgM_fpnorml2(GEN x, long prec) GEN RgM_gtofp(GEN x, long prec) GEN RgM_gtomp(GEN x, long prec) GEN RgM_inv(GEN a) GEN RgM_minor(GEN a, long i, long j) GEN RgM_shallowcopy(GEN x) GEN RgV_gtofp(GEN x, long prec) int RgV_isscalar(GEN x) int RgV_is_ZV(GEN x) int RgV_is_QV(GEN x) long RgX_equal_var(GEN x, GEN y) int RgX_is_monomial(GEN x) int RgX_is_rational(GEN x) int RgX_is_QX(GEN x) int RgX_is_ZX(GEN x) int RgX_isscalar(GEN x) GEN RgX_shift_inplace(GEN x, long v) void RgX_shift_inplace_init(long v) GEN RgXQ_mul(GEN x, GEN y, GEN T) GEN RgXQ_sqr(GEN x, GEN T) GEN RgXQX_div(GEN x, GEN y, GEN T) GEN RgXQX_rem(GEN x, GEN y, GEN T) GEN RgX_coeff(GEN x, long n) GEN RgX_copy(GEN x) GEN RgX_div(GEN x, 
GEN y) GEN RgX_fpnorml2(GEN x, long prec) GEN RgX_gtofp(GEN x, long prec) GEN RgX_rem(GEN x, GEN y) GEN RgX_renormalize(GEN x) GEN Rg_col_ei(GEN x, long n, long i) GEN ZC_hnfrem(GEN x, GEN y) GEN ZM_hnfrem(GEN x, GEN y) GEN ZM_lll(GEN x, double D, long f) int ZV_dvd(GEN x, GEN y) int ZV_isscalar(GEN x) GEN ZV_to_zv(GEN x) int ZX_equal1(GEN x) GEN ZX_renormalize(GEN x, long lx) GEN ZXQ_mul(GEN x, GEN y, GEN T) GEN ZXQ_sqr(GEN x, GEN T) long Z_ispower(GEN x, ulong k) long Z_issquare(GEN x) GEN absfrac(GEN x) GEN absfrac_shallow(GEN x) GEN affc_fixlg(GEN x, GEN res) GEN bin_copy(GENbin *p) long bit_accuracy(long x) double bit_accuracy_mul(long x, double y) long bit_prec(GEN x) int both_odd(long x, long y) GEN cbrtr(GEN x) GEN cbrtr_abs(GEN x) GEN cgetc(long x) GEN cgetalloc(long t, size_t l) GEN cxcompotor(GEN z, long prec) void cgiv(GEN x) GEN col_ei(long n, long i) GEN const_col(long n, GEN x) GEN const_vec(long n, GEN x) GEN const_vecsmall(long n, long c) GEN constant_coeff(GEN x) GEN cxnorm(GEN x) GEN cyclic_perm(long l, long d) double dbllog2r(GEN x) long degpol(GEN x) long divsBIL(long n) void gabsz(GEN x, long prec, GEN z) GEN gaddgs(GEN y, long s) void gaddz(GEN x, GEN y, GEN z) int gcmpgs(GEN y, long s) void gdiventz(GEN x, GEN y, GEN z) GEN gdivsg(long s, GEN y) void gdivz(GEN x, GEN y, GEN z) GEN gen_I() void gerepileall(pari_sp av, int n, ...) void gerepilecoeffs(pari_sp av, GEN x, int n) GEN gerepilecopy(pari_sp av, GEN x) void gerepilemany(pari_sp av, GEN* g[], int n) int gequalgs(GEN y, long s) GEN gerepileupto(pari_sp av, GEN q) GEN gerepileuptoint(pari_sp av, GEN q) GEN gerepileuptoleaf(pari_sp av, GEN q) GEN gmaxsg(long s, GEN y) GEN gminsg(long s, GEN y) void gmodz(GEN x, GEN y, GEN z) void gmul2nz(GEN x, long s, GEN z) GEN gmulgs(GEN y, long s) void gmulz(GEN x, GEN y, GEN z) void gnegz(GEN x, GEN z) void gshiftz(GEN x, long s, GEN z) GEN gsubgs(GEN y, long s) void gsubz(GEN x, GEN y, GEN z) double gtodouble(GEN x) GEN gtofp(GEN z, long prec) GEN gtomp(GEN z, long prec) long gtos(GEN x) long gval(GEN x, long v) GEN identity_perm(long l) int equali1(GEN n) int equalim1(GEN n) long inf_get_sign(GEN x) int is_bigint(GEN n) int is_const_t(long t) int is_extscalar_t(long t) int is_intreal_t(long t) int is_matvec_t(long t) int is_noncalc_t(long tx) int is_pm1(GEN n) int is_rational_t(long t) int is_real_t(long t) int is_recursive_t(long t) int is_scalar_t(long t) int _is_universal_constant "is_universal_constant"(GEN x) int is_vec_t(long t) int isint1(GEN x) int isintm1(GEN x) int isintzero(GEN x) int ismpzero(GEN x) int isonstack(GEN x) void killblock(GEN x) GEN leading_coeff(GEN x) void lg_increase(GEN x) long lgcols(GEN x) long lgpol(GEN x) GEN matpascal(long n) GEN matslice(GEN A, long x1, long x2, long y1, long y2) GEN mkcol(GEN x) GEN mkcol2(GEN x, GEN y) GEN mkcol2s(long x, long y) GEN mkcol3(GEN x, GEN y, GEN z) GEN mkcol3s(long x, long y, long z) GEN mkcol4(GEN x, GEN y, GEN z, GEN t) GEN mkcol4s(long x, long y, long z, long t) GEN mkcol5(GEN x, GEN y, GEN z, GEN t, GEN u) GEN mkcol6(GEN x, GEN y, GEN z, GEN t, GEN u, GEN v) GEN mkcolcopy(GEN x) GEN mkcols(long x) GEN mkcomplex(GEN x, GEN y) GEN mkerr(long n) GEN mkmoo() GEN mkoo() GEN mkfrac(GEN x, GEN y) GEN mkfracss(long x, long y) GEN mkfraccopy(GEN x, GEN y) GEN mkintmod(GEN x, GEN y) GEN mkintmodu(ulong x, ulong y) GEN mkmat(GEN x) GEN mkmat2(GEN x, GEN y) GEN mkmat3(GEN x, GEN y, GEN z) GEN mkmat4(GEN x, GEN y, GEN z, GEN t) GEN mkmat5(GEN x, GEN y, GEN z, GEN t, GEN u) GEN mkmatcopy(GEN x) GEN mkpolmod(GEN x, 
GEN y) GEN mkqfi(GEN x, GEN y, GEN z) GEN mkquad(GEN n, GEN x, GEN y) GEN mkrfrac(GEN x, GEN y) GEN mkrfraccopy(GEN x, GEN y) GEN mkvec(GEN x) GEN mkvec2(GEN x, GEN y) GEN mkvec2copy(GEN x, GEN y) GEN mkvec2s(long x, long y) GEN mkvec3(GEN x, GEN y, GEN z) GEN mkvec3s(long x, long y, long z) GEN mkvec4(GEN x, GEN y, GEN z, GEN t) GEN mkvec4s(long x, long y, long z, long t) GEN mkvec5(GEN x, GEN y, GEN z, GEN t, GEN u) GEN mkveccopy(GEN x) GEN mkvecs(long x) GEN mkvecsmall(long x) GEN mkvecsmall2(long x, long y) GEN mkvecsmall3(long x, long y, long z) GEN mkvecsmall4(long x, long y, long z, long t) void mpcosz(GEN x, GEN z) void mpexpz(GEN x, GEN z) void mplogz(GEN x, GEN z) void mpsinz(GEN x, GEN z) GEN mul_content(GEN cx, GEN cy) GEN mul_denom(GEN cx, GEN cy) long nbits2nlong(long x) long nbits2extraprec(long x) long nbits2ndec(long x) long nbits2prec(long x) long nbits2lg(long x) long nbrows(GEN x) long nchar2nlong(long x) long ndec2nbits(long x) long ndec2nlong(long x) long ndec2prec(long x) void normalize_frac(GEN z) int odd(long x) void pari_free(void *pointer) void* pari_calloc(size_t size) void* pari_malloc(size_t bytes) void* pari_realloc(void *pointer, size_t size) GEN perm_conj(GEN s, GEN t) GEN perm_inv(GEN x) GEN perm_mul(GEN s, GEN t) GEN pol_0(long v) GEN pol_1(long v) GEN pol_x(long v) GEN pol_xn(long n, long v) GEN pol_xnall(long n, long v) GEN pol0_F2x(long sv) GEN pol1_F2x(long sv) GEN polx_F2x(long sv) GEN pol0_Flx(long sv) GEN pol1_Flx(long sv) GEN polx_Flx(long sv) GEN polx_zx(long sv) GEN powii(GEN x, GEN n) GEN powIs(long n) long prec2nbits(long x) double prec2nbits_mul(long x, double y) long prec2ndec(long x) long precdbl(long x) GEN quad_disc(GEN x) GEN qfb_disc(GEN x) GEN qfb_disc3(GEN x, GEN y, GEN z) GEN quadnorm(GEN q) long remsBIL(long n) GEN row(GEN A, long x1) GEN Flm_row(GEN A, long x0) GEN row_i(GEN A, long x0, long x1, long x2) GEN zm_row(GEN x, long i) GEN rowcopy(GEN A, long x0) GEN rowpermute(GEN A, GEN p) GEN rowslice(GEN A, long x1, long x2) GEN rowslicepermute(GEN A, GEN p, long x1, long x2) GEN rowsplice(GEN a, long j) int ser_isexactzero(GEN x) GEN shallowcopy(GEN x) GEN sqrfrac(GEN x) GEN sqrti(GEN x) GEN sqrtnr(GEN x, long n) GEN sqrtr(GEN x) GEN sstoQ(long n, long d) void pari_stack_alloc(pari_stack *s, long nb) void** pari_stack_base(pari_stack *s) void pari_stack_delete(pari_stack *s) void pari_stack_init(pari_stack *s, size_t size, void **data) long pari_stack_new(pari_stack *s) void pari_stack_pushp(pari_stack *s, void *u) long sturm(GEN x) GEN truecoeff(GEN x, long n) GEN trunc_safe(GEN x) GEN vec_ei(long n, long i) GEN vec_append(GEN v, GEN s) GEN vec_lengthen(GEN v, long n) GEN vec_setconst(GEN v, GEN x) GEN vec_shorten(GEN v, long n) GEN vec_to_vecsmall(GEN z) GEN vecpermute(GEN A, GEN p) GEN vecreverse(GEN A) void vecreverse_inplace(GEN y) GEN vecsmallpermute(GEN A, GEN p) GEN vecslice(GEN A, long y1, long y2) GEN vecslicepermute(GEN A, GEN p, long y1, long y2) GEN vecsplice(GEN a, long j) GEN vecsmall_append(GEN V, long s) long vecsmall_coincidence(GEN u, GEN v) GEN vecsmall_concat(GEN u, GEN v) GEN vecsmall_copy(GEN x) GEN vecsmall_ei(long n, long i) long vecsmall_indexmax(GEN x) long vecsmall_indexmin(GEN x) long vecsmall_isin(GEN v, long x) GEN vecsmall_lengthen(GEN v, long n) int vecsmall_lexcmp(GEN x, GEN y) long vecsmall_max(GEN v) long vecsmall_min(GEN v) long vecsmall_pack(GEN V, long base, long mod) int vecsmall_prefixcmp(GEN x, GEN y) GEN vecsmall_prepend(GEN V, long s) GEN vecsmall_reverse(GEN A) GEN vecsmall_shorten(GEN v, 
long n) GEN vecsmall_to_col(GEN z) GEN vecsmall_to_vec(GEN z) GEN vecsmall_to_vec_inplace(GEN z) void vecsmalltrunc_append(GEN x, long t) GEN vecsmalltrunc_init(long l) void vectrunc_append(GEN x, GEN t) void vectrunc_append_batch(GEN x, GEN y) GEN vectrunc_init(long l) GEN zc_to_ZC(GEN x) GEN zero_F2m(long n, long m) GEN zero_F2m_copy(long n, long m) GEN zero_F2v(long m) GEN zero_F2x(long sv) GEN zero_Flm(long m, long n) GEN zero_Flm_copy(long m, long n) GEN zero_Flv(long n) GEN zero_Flx(long sv) GEN zero_zm(long x, long y) GEN zero_zv(long x) GEN zero_zx(long sv) GEN zerocol(long n) GEN zeromat(long m, long n) GEN zeromatcopy(long m, long n) GEN zeropadic(GEN p, long e) GEN zeropadic_shallow(GEN p, long e) GEN zeropol(long v) GEN zeroser(long v, long e) GEN zerovec(long n) GEN zerovec_block(long len) GEN zm_copy(GEN x) GEN zm_to_zxV(GEN x, long sv) GEN zm_transpose(GEN x) GEN zv_copy(GEN x) GEN zv_to_ZV(GEN x) GEN zv_to_zx(GEN x, long sv) GEN zx_renormalize(GEN x, long l) GEN zx_shift(GEN x, long n) GEN zx_to_zv(GEN x, long N) GEN err_get_compo(GEN e, long i) long err_get_num(GEN e) void pari_err_BUG(const char *f) void pari_err_COMPONENT(const char *f, const char *op, GEN l, GEN x) void pari_err_CONSTPOL(const char *f) void pari_err_COPRIME(const char *f, GEN x, GEN y) void pari_err_DIM(const char *f) void pari_err_DOMAIN(const char *f, const char *v, const char *op, GEN l, GEN x) void pari_err_FILE(const char *f, const char *g) void pari_err_FLAG(const char *f) void pari_err_IMPL(const char *f) void pari_err_INV(const char *f, GEN x) void pari_err_IRREDPOL(const char *f, GEN x) void pari_err_MAXPRIME(ulong c) void pari_err_MODULUS(const char *f, GEN x, GEN y) void pari_err_OP(const char *f, GEN x, GEN y) void pari_err_OVERFLOW(const char *f) void pari_err_PACKAGE(const char *f) void pari_err_PREC(const char *f) void pari_err_PRIME(const char *f, GEN x) void pari_err_PRIORITY(const char *f, GEN x, const char *op, long v) void pari_err_SQRTN(const char *f, GEN x) void pari_err_TYPE(const char *f, GEN x) void pari_err_TYPE2(const char *f, GEN x, GEN y) void pari_err_VAR(const char *f, GEN x, GEN y) void pari_err_ROOTS0(const char *f)

# Work around a bug in PARI's is_universal_constant()
cdef inline int is_universal_constant(GEN x):
    return _is_universal_constant(x) or (x is err_e_STACK)

# Auto-generated declarations. These are taken from the PARI version
# on the system, so they are more up-to-date than the above. In case of
# conflicting declarations, auto_paridecl should have priority.
from .auto_paridecl cimport *

cdef inline int is_on_stack(GEN x) except -1:
    """
    Is the GEN ``x`` stored on the PARI stack?
    """
    # The PARI stack grows downwards: live objects occupy the address
    # range [avma, pari_mainstack.top); addresses in [vbot, avma) are
    # allocated but currently unused.
    s = <pari_sp>x
    if avma <= s:
        return s < pari_mainstack.top
    if pari_mainstack.vbot <= s:
        raise SystemError("PARI object in unused part of PARI stack")
    return 0
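# A minimal usage sketch of the two helpers above, with a hypothetical
# local variable ``some_gen``; the three branches mirror the ones taken
# in new_gen_noclear() in stack.pyx:
#
#     if is_on_stack(some_gen):
#         pass   # live object between avma and pari_mainstack.top
#     elif is_universal_constant(some_gen):
#         pass   # static constant (gen_0, gnil, ...): never copied
#     else:
#         pass   # must be a clone on the PARI heap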
""" from .types cimport * cdef extern from "pari/paripriv.h": int t_FF_FpXQ, t_FF_Flxq, t_FF_F2xq int gpd_QUIET, gpd_TEST, gpd_EMACS, gpd_TEXMACS struct pariout_t: char format # e,f,g long fieldw # 0 (ignored) or field width long sigd # -1 (all) or number of significant digits printed int sp # 0 = suppress whitespace from output int prettyp # output style: raw, prettyprint, etc int TeXstyle struct gp_data: pariout_t *fmt unsigned long flags ulong primelimit # deprecated extern gp_data* GP_DATA # In older versions of PARI, this is declared in the private # non-installed PARI header file "anal.h". More recently, this is # declared in "paripriv.h". Since a double declaration does not hurt, # we declare it here regardless. cdef extern const char* closure_func_err() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/stack.pxd0000644000175000017500000000106200000000000017416 0ustar00vincentvincent00000000000000from .types cimport GEN, pari_sp from .gen cimport Gen_base, Gen cdef Gen new_gen(GEN x) cdef new_gens2(GEN x, GEN y) cdef Gen new_gen_noclear(GEN x) cdef Gen clone_gen(GEN x) cdef Gen clone_gen_noclear(GEN x) cdef void clear_stack() cdef void reset_avma() cdef void remove_from_pari_stack(Gen self) cdef int move_gens_to_heap(pari_sp lim) except -1 cdef int before_resize() except -1 cdef int set_pari_stack_size(size_t size, size_t sizemax) except -1 cdef void after_resize() cdef class DetachGen: cdef source cdef GEN detach(self) except NULL ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/stack.pyx0000644000175000017500000002025500000000000017450 0ustar00vincentvincent00000000000000""" Memory management for Gens on the PARI stack or the heap ******************************************************** """ #***************************************************************************** # Copyright (C) 2016 Luca De Feo # Copyright (C) 2018 Jeroen Demeyer # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import absolute_import, division, print_function cimport cython from cpython.ref cimport PyObject, Py_XINCREF, Py_XDECREF from cpython.exc cimport PyErr_SetString from cysignals.signals cimport (sig_on, sig_off, sig_block, sig_unblock, sig_error) from cysignals.memory cimport check_malloc, sig_free from .gen cimport Gen, Gen_new from .paridecl cimport (avma, pari_mainstack, gnil, gcopy, is_universal_constant, is_on_stack, isclone, gclone, gclone_refc, paristack_setsize) from warnings import warn cdef extern from *: int sig_on_count "cysigs.sig_on_count" int block_sigint "cysigs.block_sigint" # Singleton object to denote the top of the PARI stack cdef Gen top_of_stack = Gen_new(gnil, NULL) # Pointer to the Gen on the bottom of the PARI stack. This is the first # element of the Gen linked list. If the linked list is empty, this # equals top_of_stack. This pointer is *not* refcounted, so it does not # prevent the stackbottom object from being deallocated. 
cdef void remove_from_pari_stack(Gen self):
    global avma, stackbottom
    if self is not <object>stackbottom:
        print("ERROR: removing wrong instance of Gen")
        print(f"Expected: {<object>stackbottom}")
        print(f"Actual: {self}")
    if sig_on_count and not block_sigint:
        PyErr_SetString(SystemError,
                        "calling remove_from_pari_stack() inside sig_on()")
        sig_error()
    if self.sp() != avma:
        if avma > self.sp():
            print("ERROR: inconsistent avma when removing Gen from PARI stack")
            print(f"Expected: 0x{self.sp():x}")
            print(f"Actual: 0x{avma:x}")
        else:
            warn(f"cypari2 leaked {self.sp() - avma} bytes on the PARI stack",
                 RuntimeWarning, stacklevel=2)
    n = self.next
    stackbottom = <PyObject*>n
    self.next = None
    reset_avma()

cdef inline Gen Gen_stack_new(GEN x):
    """
    Allocate and initialize a new instance of ``Gen`` wrapping
    a GEN on the PARI stack.
    """
    global stackbottom
    # n = stackbottom must be done BEFORE calling Gen_new()
    # since Gen_new may invoke gc.collect() which would mess up
    # the PARI stack.
    n = <Gen>stackbottom
    z = Gen_new(x, <GEN>avma)
    z.next = n
    stackbottom = <PyObject*>z
    sz = z.sp()
    sn = n.sp()
    if sz > sn:
        raise SystemError(f"objects on PARI stack in invalid order (first: 0x{sz:x}; next: 0x{sn:x})")
    return z

cdef void reset_avma():
    """
    Reset PARI stack pointer to remove unused stuff from the PARI stack.

    Note that the actual data remains on the stack. Therefore, it is
    safe to use as long as no further PARI functions are called.
    """
    # NOTE: this can be called with an exception set (the error handler
    # does that)!
    global avma
    avma = (<Gen>stackbottom).sp()

cdef void clear_stack():
    """
    Call ``sig_off()`` and clean the PARI stack.
    """
    sig_off()
    reset_avma()

cdef int move_gens_to_heap(pari_sp lim) except -1:
    """
    Move some/all Gens from the PARI stack to the heap.

    If lim == -1, move everything. Otherwise, keep moving as long as
    avma <= lim.
    """
    while avma <= lim and <object>stackbottom is not top_of_stack:
        current = <Gen>stackbottom
        sig_on()
        current.g = gclone(current.g)
        sig_block()
        remove_from_pari_stack(current)
        sig_unblock()
        sig_off()
        # The .address attribute can only be updated now because it is
        # needed in remove_from_pari_stack(). This means that the object
        # is temporarily in an inconsistent state but this does not
        # matter since .address is normally not used.
        #
        # The more important .g attribute is updated correctly before
        # remove_from_pari_stack(). Therefore, the object can be used
        # normally regardless of what happens to the PARI stack.
        current.address = current.g

cdef int before_resize() except -1:
    """
    Prepare for resizing the PARI stack.

    This must be called before reallocating the PARI stack.
    """
    move_gens_to_heap(-1)
    if top_of_stack.sp() != pari_mainstack.top:
        raise RuntimeError("cannot resize PARI stack here")

cdef int set_pari_stack_size(size_t size, size_t sizemax) except -1:
    """
    Safely set the PARI stack size.
    """
    before_resize()
    sig_on()
    paristack_setsize(size, sizemax)
    sig_off()
    after_resize()

cdef void after_resize():
    """
    This must be called after reallocating the PARI stack.
    """
    top_of_stack.address = <GEN>pari_mainstack.top

cdef Gen new_gen(GEN x):
    """
    Create a new ``Gen`` from a ``GEN``. If `x` is ``gnil``, return
    ``None`` instead.

    Also call ``sig_off()`` and clear the PARI stack.
    """
    sig_off()
    if x is gnil:
        reset_avma()
        return None
    return new_gen_noclear(x)

cdef new_gens2(GEN x, GEN y):
    """
    Create a 2-tuple of new ``Gen``s from 2 ``GEN``s.

    Also call ``sig_off()`` and clear the PARI stack.
    """
    sig_off()
    global avma
    av = avma
    g1 = new_gen_noclear(x)
    # Restore avma in case remove_from_pari_stack() was called
    avma = av
    g2 = new_gen_noclear(y)
    return (g1, g2)
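# A minimal sketch of the calling convention built on new_gen(): PARI
# library calls are wrapped in sig_on() ... new_gen(), where new_gen()
# takes care of the matching sig_off(). Assuming ``gadd`` is among the
# declarations cimported from .paridecl, a typical wrapper would look
# like
#
#     cdef Gen _add(Gen x, Gen y):
#         sig_on()
#         return new_gen(gadd(x.g, y.g))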
""" sig_off() global avma av = avma g1 = new_gen_noclear(x) # Restore avma in case that remove_from_pari_stack() was called avma = av g2 = new_gen_noclear(y) return (g1, g2) cdef Gen new_gen_noclear(GEN x): """ Create a new ``Gen`` from a ``GEN``. """ if not is_on_stack(x): reset_avma() if is_universal_constant(x): return Gen_new(x, NULL) elif isclone(x): gclone_refc(x) return Gen_new(x, x) raise SystemError("new_gen() argument not on PARI stack, not on PARI heap and not a universal constant") z = Gen_stack_new(x) # If we used over half of the PARI stack, move all Gens to the heap if (pari_mainstack.top - avma) >= pari_mainstack.size // 2: if sig_on_count == 0: try: move_gens_to_heap(-1) except MemoryError: pass return z cdef Gen clone_gen(GEN x): x = gclone(x) clear_stack() return Gen_new(x, x) cdef Gen clone_gen_noclear(GEN x): x = gclone(x) return Gen_new(x, x) @cython.no_gc cdef class DetachGen: """ Destroy a :class:`Gen` but keep the ``GEN`` which is inside it. The typical usage is as follows: 1. Creates the ``DetachGen`` object from a :class`Gen`. 2. Removes all other references to that :class:`Gen`. 3. Call the ``detach`` method to retrieve the ``GEN`` (or a copy of it if the original was not on the stack). """ def __init__(self, s): self.source = s cdef GEN detach(self) except NULL: src = self.source # Whatever happens, delete self.source self.source = None # Delete src safely, keeping it available as GEN cdef GEN res = src.g if is_on_stack(res): # Verify that we hold the only reference to src if (src).ob_refcnt != 1: raise SystemError("cannot detach a Gen which is still referenced") elif is_universal_constant(res): pass else: # Make a copy to the PARI stack res = gcopy(res) # delete src but do not change avma global avma cdef pari_sp av = avma avma = src.sp() # Avoid a warning when deallocating del src avma = av return res ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/string_utils.pxd0000644000175000017500000000114600000000000021042 0ustar00vincentvincent00000000000000cdef extern from *: int PY_MAJOR_VERSION cpdef bytes to_bytes(s) cpdef unicode to_unicode(s) cpdef inline to_string(s): r""" Converts a bytes and unicode ``s`` to a string. String means bytes in Python2 and unicode in Python3 Examples: >>> from cypari2.string_utils import to_string >>> s1 = to_string(b'hello') >>> s2 = to_string('hello') >>> s3 = to_string(u'hello') >>> type(s1) == type(s2) == type(s3) == str True >>> s1 == s2 == s3 == 'hello' True """ if PY_MAJOR_VERSION <= 2: return to_bytes(s) else: return to_unicode(s) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779110.0 cypari2-2.1.2/cypari2/string_utils.pyx0000644000175000017500000000271100000000000021066 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- r""" Conversion functions for bytes/unicode """ import sys encoding = sys.getfilesystemencoding() cpdef bytes to_bytes(s): """ Converts bytes and unicode ``s`` to bytes. 
cypari2-2.1.2/cypari2/string_utils.pxd

cdef extern from *:
    int PY_MAJOR_VERSION

cpdef bytes to_bytes(s)
cpdef unicode to_unicode(s)

cpdef inline to_string(s):
    r"""
    Convert bytes or unicode ``s`` to a string. String means ``bytes``
    in Python 2 and ``unicode`` in Python 3.

    Examples:

    >>> from cypari2.string_utils import to_string
    >>> s1 = to_string(b'hello')
    >>> s2 = to_string('hello')
    >>> s3 = to_string(u'hello')
    >>> type(s1) == type(s2) == type(s3) == str
    True
    >>> s1 == s2 == s3 == 'hello'
    True
    """
    if PY_MAJOR_VERSION <= 2:
        return to_bytes(s)
    else:
        return to_unicode(s)

cypari2-2.1.2/cypari2/string_utils.pyx

# -*- coding: utf-8 -*-
r"""
Conversion functions for bytes/unicode
"""

import sys
encoding = sys.getfilesystemencoding()

cpdef bytes to_bytes(s):
    """
    Convert bytes or unicode ``s`` to bytes.

    Examples:

    >>> from cypari2.string_utils import to_bytes
    >>> s1 = to_bytes(b'hello')
    >>> s2 = to_bytes('hello')
    >>> s3 = to_bytes(u'hello')
    >>> type(s1) is type(s2) is type(s3) is bytes
    True
    >>> s1 == s2 == s3 == b'hello'
    True
    >>> type(to_bytes(1234)) is bytes
    True
    >>> int(to_bytes(1234))
    1234
    """
    cdef int convert
    for convert in range(2):
        if convert:
            s = str(s)
        if isinstance(s, bytes):
            return s
        elif isinstance(s, unicode):
            return (<unicode>s).encode(encoding)
    raise AssertionError(f"str() returned {type(s)}")

cpdef unicode to_unicode(s):
    r"""
    Convert bytes or unicode ``s`` to unicode.

    Examples:

    >>> from cypari2.string_utils import to_unicode
    >>> s1 = to_unicode(b'hello')
    >>> s2 = to_unicode('hello')
    >>> s3 = to_unicode(u'hello')
    >>> type(s1) is type(s2) is type(s3) is type(u"")
    True
    >>> s1 == s2 == s3 == u'hello'
    True
    >>> print(to_unicode(1234))
    1234
    >>> type(to_unicode(1234)) is type(u"")
    True
    """
    if isinstance(s, bytes):
        return (<bytes>s).decode(encoding)
    elif isinstance(s, unicode):
        return s
    return unicode(s)

cypari2-2.1.2/cypari2/types.pxd

"""
Declarations for types used by PARI

This includes both the C types as well as the PARI types (and a few
macros for dealing with those). It is important that the functionality
in this file does not call any PARI library functions. The reason is
that we want to allow just using these types (for example, to define a
Cython extension type) without linking to PARI. This file should
consist only of typedefs and macros from PARI's include files.
"""
#*****************************************************************************
# Distributed under the terms of the GNU General Public License (GPL)
# as published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
# http://www.gnu.org/licenses/ #***************************************************************************** cdef extern from "pari/pari.h": ctypedef unsigned long ulong "pari_ulong" ctypedef long* GEN ctypedef char* byteptr ctypedef unsigned long pari_sp # PARI types enum: t_INT t_REAL t_INTMOD t_FRAC t_FFELT t_COMPLEX t_PADIC t_QUAD t_POLMOD t_POL t_SER t_RFRAC t_QFR t_QFI t_VEC t_COL t_MAT t_LIST t_STR t_VECSMALL t_CLOSURE t_ERROR t_INFINITY int BITS_IN_LONG long DEFAULTPREC # 64 bits precision long MEDDEFAULTPREC # 128 bits precision long BIGDEFAULTPREC # 192 bits precision ulong CLONEBIT long typ(GEN x) long settyp(GEN x, long s) long isclone(GEN x) long setisclone(GEN x) long unsetisclone(GEN x) long lg(GEN x) long setlg(GEN x, long s) long signe(GEN x) long setsigne(GEN x, long s) long lgefint(GEN x) long setlgefint(GEN x, long s) long expo(GEN x) long setexpo(GEN x, long s) long valp(GEN x) long setvalp(GEN x, long s) long precp(GEN x) long setprecp(GEN x, long s) long varn(GEN x) long setvarn(GEN x, long s) long evaltyp(long x) long evallg(long x) long evalvarn(long x) long evalsigne(long x) long evalprecp(long x) long evalvalp(long x) long evalexpo(long x) long evallgefint(long x) ctypedef struct PARI_plot: long width long height long hunit long vunit long fwidth long fheight void (*draw)(PARI_plot *T, GEN w, GEN x, GEN y) ctypedef struct entree: const char *name ulong valence void *value long menu const char *code const char *help void *pvalue long arity ulong hash entree *next # Various structures that we don't interface but which need to be # declared, such that Cython understands the declarations of # functions using these types. struct bb_group struct bb_field struct bb_ring struct bb_algebra struct qfr_data struct nfmaxord_t struct forcomposite_t struct forpart_t struct forprime_t struct forvec_t struct gp_context struct nfbasic_t struct pariFILE struct pari_mt struct pari_stack struct pari_thread struct pari_timer struct GENbin struct hashentry struct hashtable # These are actually defined in cypari.h but we put them here to # prevent Cython from reordering the includes. GEN set_gel(GEN x, long n, GEN z) # gel(x, n) = z GEN set_gmael(GEN x, long i, long j, GEN z) # gmael(x, i, j) = z GEN set_gcoeff(GEN x, long i, long j, GEN z) # gcoeff(x, i, j) = z GEN set_uel(GEN x, long n, ulong z) # uel(x, n) = z # It is important that this gets included *after* all PARI includes cdef extern from "cypari.h": pass ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.796661 cypari2-2.1.2/cypari2.egg-info/0000755000175000017500000000000000000000000017267 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735099.0 cypari2-2.1.2/cypari2.egg-info/PKG-INFO0000644000175000017500000001173700000000000020375 0ustar00vincentvincent00000000000000Metadata-Version: 1.0 Name: cypari2 Version: 2.1.2 Summary: A Python interface to the number theory library PARI/GP Home-page: https://github.com/sagemath/cypari2 Author: Luca De Feo, Vincent Delecroix, Jeroen Demeyer, Vincent Klein Author-email: sage-devel@googlegroups.com License: GNU General Public License, version 2 or later Description: CyPari 2 ======== .. image:: https://travis-ci.org/sagemath/cypari2.svg?branch=master :target: https://travis-ci.org/sagemath/cypari2 .. 
image:: https://readthedocs.org/projects/cypari2/badge/?version=latest
   :target: https://cypari2.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

A Python interface to the number theory library `PARI/GP <http://pari.math.u-bordeaux.fr/>`_.

This library supports both Python 2 and Python 3.

Installation
------------

GNU/Linux
^^^^^^^^^

A package `python-cypari2` or `python2-cypari2` or `python3-cypari2` might be
available in your package manager.

Using pip
^^^^^^^^^

Requirements:

- PARI/GP >= 2.9.4 (header files and library)
- Python 2.7 or Python >= 3.4
- pip
- `cysignals <https://github.com/sagemath/cysignals>`_ >= 1.7
- Cython >= 0.28

Install cypari2 from the Python Package Index (PyPI) via ::

    $ pip install cypari2 [--user]

(the optional *--user* flag installs cypari2 for a single user and avoids
running pip with administrator rights). Depending on your operating system,
the pip command might also be called pip2 or pip3.

If you want to try the development version, use ::

    $ pip install git+https://github.com/sagemath/cypari2.git [--user]

If you get an error saying that libpari-gmp*.so* is missing although all
requirements are already installed, try to reinstall cysignals and cypari2 ::

    $ pip install cysignals --upgrade [--user]
    $ pip install cypari2 --upgrade [--user]

Other
^^^^^

Any other way of installing cypari2 is not supported. In particular,
``python setup.py install`` will produce an error.

Usage
-----

The interface has been kept as close as possible to PARI/GP. The following
computation in GP ::

    ? zeta(2)
    %1 = 1.6449340668482264364724151666460251892

    ? p = x^3 + x^2 + x - 1;
    ? modulus = t^3 + t^2 + t - 1;
    ? fq = factorff(p, 3, modulus);
    ? centerlift(lift(fq))
    %5 =
    [            x - t 1]
    [x + (t^2 + t - 1) 1]
    [   x + (-t^2 - 1) 1]

translates into ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari(2).zeta()
    1.64493406684823
    >>> p = pari("x^3 + x^2 + x - 1")
    >>> modulus = pari("t^3 + t^2 + t - 1")
    >>> fq = p.factorff(3, modulus)
    >>> fq.lift().centerlift()
    [x - t, 1; x + (t^2 + t - 1), 1; x + (-t^2 - 1), 1]

The object **pari** above is the interface object and acts as a constructor.
It can be called with basic Python objects like integers or floating-point
numbers. When it is called with a string, as in the last example, the string
is interpreted as if it were executed in a GP shell.

Beyond the interface object **pari** of type **Pari**, any object you get a
handle on is of type **Gen** (a wrapper around the **GEN** type from
libpari). All PARI/GP functions are then available under their original
names as *methods*, like **zeta**, **factorff**, **lift** or **centerlift**
above.

Alternatively, the PARI functions are accessible as methods of **pari**.
The same computations can be done via ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari.zeta(2)
    1.64493406684823
    >>> p = pari("x^3 + x^2 + x - 1")
    >>> modulus = pari("t^3 + t^2 + t - 1")
    >>> fq = pari.factorff(p, 3, modulus)
    >>> pari.centerlift(pari.lift(fq))
    [x - t, 1; x + (t^2 + t - 1), 1; x + (-t^2 - 1), 1]

The complete documentation of cypari2 is available at
http://cypari2.readthedocs.io and the PARI/GP documentation at
http://pari.math.u-bordeaux.fr/doc.html

Contributing
------------

Submit a pull request or get in touch with the SageMath developers.
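As a quick check that an installation (or a development checkout) works, a
short session like the following (a minimal sketch; any small input will do)
should run without errors ::

    >>> import cypari2
    >>> pari = cypari2.Pari()
    >>> pari(10).factor()
    [2, 1; 5, 1]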
Keywords: PARI/GP number theory Platform: UNKNOWN ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735099.0 cypari2-2.1.2/cypari2.egg-info/SOURCES.txt0000644000175000017500000013516400000000000021165 0ustar00vincentvincent00000000000000LICENSE MANIFEST.in Makefile README.rst VERSION gen_for_sage.py look_for_dup.py setup.py autogen/__init__.py autogen/args.py autogen/doc.py autogen/generator.py autogen/parser.py autogen/paths.py autogen/ret.py cypari2/__init__.py cypari2/closure.pxd cypari2/closure.pyx cypari2/convert.pxd cypari2/convert.pyx cypari2/cypari.h cypari2/gen.pxd cypari2/gen.pyx cypari2/handle_error.pxd cypari2/handle_error.pyx cypari2/pari_instance.pxd cypari2/pari_instance.pyx cypari2/paridecl.pxd cypari2/paripriv.pxd cypari2/stack.pxd cypari2/stack.pyx cypari2/string_utils.pxd cypari2/string_utils.pyx cypari2/types.pxd cypari2.egg-info/PKG-INFO cypari2.egg-info/SOURCES.txt cypari2.egg-info/dependency_links.txt cypari2.egg-info/requires.txt cypari2.egg-info/top_level.txt docs/Makefile docs/source/closure.rst docs/source/conf.py docs/source/convert.rst docs/source/gen.rst docs/source/handle_error.rst docs/source/index.rst docs/source/pari_instance.rst docs/source/stack.rst tests/rundoctest.py tests/test_backward.py tests/test_integers.py venv/bin/activate_this.py venv/lib/python3.7/site.py venv/lib/python3.7/distutils/__init__.py venv/lib/python3.7/site-packages/cython.py venv/lib/python3.7/site-packages/easy_install.py venv/lib/python3.7/site-packages/Cython/CodeWriter.py venv/lib/python3.7/site-packages/Cython/Coverage.py venv/lib/python3.7/site-packages/Cython/Debugging.py venv/lib/python3.7/site-packages/Cython/Shadow.py venv/lib/python3.7/site-packages/Cython/StringIOTree.py venv/lib/python3.7/site-packages/Cython/TestUtils.py venv/lib/python3.7/site-packages/Cython/Utils.py venv/lib/python3.7/site-packages/Cython/__init__.py venv/lib/python3.7/site-packages/Cython/Build/BuildExecutable.py venv/lib/python3.7/site-packages/Cython/Build/Cythonize.py venv/lib/python3.7/site-packages/Cython/Build/Dependencies.py venv/lib/python3.7/site-packages/Cython/Build/Distutils.py venv/lib/python3.7/site-packages/Cython/Build/Inline.py venv/lib/python3.7/site-packages/Cython/Build/IpythonMagic.py venv/lib/python3.7/site-packages/Cython/Build/__init__.py venv/lib/python3.7/site-packages/Cython/Build/Tests/TestCyCache.py venv/lib/python3.7/site-packages/Cython/Build/Tests/TestInline.py venv/lib/python3.7/site-packages/Cython/Build/Tests/TestIpythonMagic.py venv/lib/python3.7/site-packages/Cython/Build/Tests/TestStripLiterals.py venv/lib/python3.7/site-packages/Cython/Build/Tests/__init__.py venv/lib/python3.7/site-packages/Cython/Compiler/AnalysedTreeTransforms.py venv/lib/python3.7/site-packages/Cython/Compiler/Annotate.py venv/lib/python3.7/site-packages/Cython/Compiler/AutoDocTransforms.py venv/lib/python3.7/site-packages/Cython/Compiler/Buffer.py venv/lib/python3.7/site-packages/Cython/Compiler/Builtin.py venv/lib/python3.7/site-packages/Cython/Compiler/CmdLine.py venv/lib/python3.7/site-packages/Cython/Compiler/Code.pxd venv/lib/python3.7/site-packages/Cython/Compiler/Code.py venv/lib/python3.7/site-packages/Cython/Compiler/CodeGeneration.py venv/lib/python3.7/site-packages/Cython/Compiler/CythonScope.py venv/lib/python3.7/site-packages/Cython/Compiler/DebugFlags.py venv/lib/python3.7/site-packages/Cython/Compiler/Errors.py venv/lib/python3.7/site-packages/Cython/Compiler/ExprNodes.py 
venv/lib/python3.7/site-packages/Cython/Compiler/FlowControl.pxd venv/lib/python3.7/site-packages/Cython/Compiler/FlowControl.py venv/lib/python3.7/site-packages/Cython/Compiler/FusedNode.py venv/lib/python3.7/site-packages/Cython/Compiler/Future.py venv/lib/python3.7/site-packages/Cython/Compiler/Interpreter.py venv/lib/python3.7/site-packages/Cython/Compiler/Lexicon.py venv/lib/python3.7/site-packages/Cython/Compiler/Main.py venv/lib/python3.7/site-packages/Cython/Compiler/MemoryView.py venv/lib/python3.7/site-packages/Cython/Compiler/ModuleNode.py venv/lib/python3.7/site-packages/Cython/Compiler/Naming.py venv/lib/python3.7/site-packages/Cython/Compiler/Nodes.py venv/lib/python3.7/site-packages/Cython/Compiler/Optimize.py venv/lib/python3.7/site-packages/Cython/Compiler/Options.py venv/lib/python3.7/site-packages/Cython/Compiler/ParseTreeTransforms.pxd venv/lib/python3.7/site-packages/Cython/Compiler/ParseTreeTransforms.py venv/lib/python3.7/site-packages/Cython/Compiler/Parsing.pxd venv/lib/python3.7/site-packages/Cython/Compiler/Parsing.py venv/lib/python3.7/site-packages/Cython/Compiler/Pipeline.py venv/lib/python3.7/site-packages/Cython/Compiler/PyrexTypes.py venv/lib/python3.7/site-packages/Cython/Compiler/Pythran.py venv/lib/python3.7/site-packages/Cython/Compiler/Scanning.pxd venv/lib/python3.7/site-packages/Cython/Compiler/Scanning.py venv/lib/python3.7/site-packages/Cython/Compiler/StringEncoding.py venv/lib/python3.7/site-packages/Cython/Compiler/Symtab.py venv/lib/python3.7/site-packages/Cython/Compiler/TreeFragment.py venv/lib/python3.7/site-packages/Cython/Compiler/TreePath.py venv/lib/python3.7/site-packages/Cython/Compiler/TypeInference.py venv/lib/python3.7/site-packages/Cython/Compiler/TypeSlots.py venv/lib/python3.7/site-packages/Cython/Compiler/UtilNodes.py venv/lib/python3.7/site-packages/Cython/Compiler/UtilityCode.py venv/lib/python3.7/site-packages/Cython/Compiler/Version.py venv/lib/python3.7/site-packages/Cython/Compiler/Visitor.pxd venv/lib/python3.7/site-packages/Cython/Compiler/Visitor.py venv/lib/python3.7/site-packages/Cython/Compiler/__init__.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestBuffer.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestCmdLine.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestFlowControl.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestGrammar.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestMemView.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestParseTreeTransforms.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestSignatureMatching.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestTreeFragment.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestTreePath.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestTypes.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestUtilityLoad.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/TestVisitor.py venv/lib/python3.7/site-packages/Cython/Compiler/Tests/__init__.py venv/lib/python3.7/site-packages/Cython/Debugger/Cygdb.py venv/lib/python3.7/site-packages/Cython/Debugger/DebugWriter.py venv/lib/python3.7/site-packages/Cython/Debugger/__init__.py venv/lib/python3.7/site-packages/Cython/Debugger/libcython.py venv/lib/python3.7/site-packages/Cython/Debugger/libpython.py venv/lib/python3.7/site-packages/Cython/Debugger/Tests/TestLibCython.py venv/lib/python3.7/site-packages/Cython/Debugger/Tests/__init__.py 
venv/lib/python3.7/site-packages/Cython/Debugger/Tests/test_libcython_in_gdb.py venv/lib/python3.7/site-packages/Cython/Debugger/Tests/test_libpython_in_gdb.py venv/lib/python3.7/site-packages/Cython/Distutils/__init__.py venv/lib/python3.7/site-packages/Cython/Distutils/build_ext.py venv/lib/python3.7/site-packages/Cython/Distutils/extension.py venv/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py venv/lib/python3.7/site-packages/Cython/Includes/openmp.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_bool.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_buffer.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_bytes.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_cobject.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_complex.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_dict.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_exc.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_float.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_function.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_getargs.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_instance.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_int.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_iterator.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_list.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_long.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_mapping.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_mem.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_method.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_module.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_number.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_object.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_oldbuffer.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_pycapsule.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_ref.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_sequence.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_set.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_string.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_tuple.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_type.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_unicode.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_version.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/python_weakref.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/stdio.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/stdlib.pxd venv/lib/python3.7/site-packages/Cython/Includes/Deprecated/stl.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/__init__.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/array.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/bool.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/buffer.pxd 
venv/lib/python3.7/site-packages/Cython/Includes/cpython/bytearray.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/bytes.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/ceval.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/cobject.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/complex.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/datetime.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/dict.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/exc.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/float.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/function.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/getargs.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/instance.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/int.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/iterator.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/list.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/long.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/longintrepr.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/mapping.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/mem.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/method.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/module.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/number.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/object.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/oldbuffer.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/pycapsule.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/pylifecycle.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/pystate.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/pythread.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/ref.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/sequence.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/set.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/slice.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/string.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/tuple.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/type.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/unicode.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/version.pxd venv/lib/python3.7/site-packages/Cython/Includes/cpython/weakref.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/__init__.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/errno.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/float.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/limits.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/locale.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/math.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/setjmp.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/signal.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/stddef.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/stdint.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/stdio.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/stdlib.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/string.pxd venv/lib/python3.7/site-packages/Cython/Includes/libc/time.pxd 
venv/lib/python3.7/site-packages/Cython/Includes/libcpp/__init__.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/algorithm.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/cast.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/complex.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/deque.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/forward_list.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/functional.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/iterator.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/limits.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/list.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/map.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/memory.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/pair.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/queue.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/set.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/stack.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/string.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/typeindex.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/typeinfo.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/unordered_map.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/unordered_set.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/utility.pxd venv/lib/python3.7/site-packages/Cython/Includes/libcpp/vector.pxd venv/lib/python3.7/site-packages/Cython/Includes/numpy/__init__.pxd venv/lib/python3.7/site-packages/Cython/Includes/numpy/math.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/__init__.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/dlfcn.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/fcntl.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/ioctl.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/mman.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/resource.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/select.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/signal.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/stat.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/stdio.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/stdlib.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/strings.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/time.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/types.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/unistd.pxd venv/lib/python3.7/site-packages/Cython/Includes/posix/wait.pxd venv/lib/python3.7/site-packages/Cython/Plex/Actions.pxd venv/lib/python3.7/site-packages/Cython/Plex/Actions.py venv/lib/python3.7/site-packages/Cython/Plex/DFA.py venv/lib/python3.7/site-packages/Cython/Plex/Errors.py venv/lib/python3.7/site-packages/Cython/Plex/Lexicons.py venv/lib/python3.7/site-packages/Cython/Plex/Machines.py venv/lib/python3.7/site-packages/Cython/Plex/Regexps.py venv/lib/python3.7/site-packages/Cython/Plex/Scanners.pxd venv/lib/python3.7/site-packages/Cython/Plex/Scanners.py venv/lib/python3.7/site-packages/Cython/Plex/Timing.py venv/lib/python3.7/site-packages/Cython/Plex/Traditional.py venv/lib/python3.7/site-packages/Cython/Plex/Transitions.py venv/lib/python3.7/site-packages/Cython/Plex/__init__.py venv/lib/python3.7/site-packages/Cython/Runtime/__init__.py 
venv/lib/python3.7/site-packages/Cython/Runtime/refnanny.pyx venv/lib/python3.7/site-packages/Cython/Tempita/__init__.py venv/lib/python3.7/site-packages/Cython/Tempita/_looper.py venv/lib/python3.7/site-packages/Cython/Tempita/_tempita.py venv/lib/python3.7/site-packages/Cython/Tempita/compat3.py venv/lib/python3.7/site-packages/Cython/Tests/TestCodeWriter.py venv/lib/python3.7/site-packages/Cython/Tests/TestCythonUtils.py venv/lib/python3.7/site-packages/Cython/Tests/TestJediTyper.py venv/lib/python3.7/site-packages/Cython/Tests/TestStringIOTree.py venv/lib/python3.7/site-packages/Cython/Tests/__init__.py venv/lib/python3.7/site-packages/Cython/Tests/xmlrunner.py venv/lib/python3.7/site-packages/Cython/Utility/CConvert.pyx venv/lib/python3.7/site-packages/Cython/Utility/CpdefEnums.pyx venv/lib/python3.7/site-packages/Cython/Utility/CppConvert.pyx venv/lib/python3.7/site-packages/Cython/Utility/MemoryView.pyx venv/lib/python3.7/site-packages/Cython/Utility/TestCyUtilityLoader.pyx venv/lib/python3.7/site-packages/Cython/Utility/TestCythonScope.pyx venv/lib/python3.7/site-packages/Cython/Utility/__init__.py venv/lib/python3.7/site-packages/cypari2/__init__.py venv/lib/python3.7/site-packages/cypari2/auto_paridecl.pxd venv/lib/python3.7/site-packages/cypari2/closure.pxd venv/lib/python3.7/site-packages/cypari2/convert.pxd venv/lib/python3.7/site-packages/cypari2/gen.pxd venv/lib/python3.7/site-packages/cypari2/handle_error.pxd venv/lib/python3.7/site-packages/cypari2/pari_instance.pxd venv/lib/python3.7/site-packages/cypari2/paridecl.pxd venv/lib/python3.7/site-packages/cypari2/paripriv.pxd venv/lib/python3.7/site-packages/cypari2/stack.pxd venv/lib/python3.7/site-packages/cypari2/string_utils.pxd venv/lib/python3.7/site-packages/cypari2/types.pxd venv/lib/python3.7/site-packages/cysignals/__init__.py venv/lib/python3.7/site-packages/cysignals/memory.pxd venv/lib/python3.7/site-packages/cysignals/memory.pxi venv/lib/python3.7/site-packages/cysignals/pysignals.pxd venv/lib/python3.7/site-packages/cysignals/signals.pxd venv/lib/python3.7/site-packages/cysignals/signals.pxi venv/lib/python3.7/site-packages/pip/__init__.py venv/lib/python3.7/site-packages/pip/__main__.py venv/lib/python3.7/site-packages/pip/_internal/__init__.py venv/lib/python3.7/site-packages/pip/_internal/build_env.py venv/lib/python3.7/site-packages/pip/_internal/cache.py venv/lib/python3.7/site-packages/pip/_internal/configuration.py venv/lib/python3.7/site-packages/pip/_internal/download.py venv/lib/python3.7/site-packages/pip/_internal/exceptions.py venv/lib/python3.7/site-packages/pip/_internal/index.py venv/lib/python3.7/site-packages/pip/_internal/legacy_resolve.py venv/lib/python3.7/site-packages/pip/_internal/locations.py venv/lib/python3.7/site-packages/pip/_internal/pep425tags.py venv/lib/python3.7/site-packages/pip/_internal/pyproject.py venv/lib/python3.7/site-packages/pip/_internal/wheel.py venv/lib/python3.7/site-packages/pip/_internal/cli/__init__.py venv/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py venv/lib/python3.7/site-packages/pip/_internal/cli/base_command.py venv/lib/python3.7/site-packages/pip/_internal/cli/cmdoptions.py venv/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py venv/lib/python3.7/site-packages/pip/_internal/cli/parser.py venv/lib/python3.7/site-packages/pip/_internal/cli/status_codes.py venv/lib/python3.7/site-packages/pip/_internal/commands/__init__.py venv/lib/python3.7/site-packages/pip/_internal/commands/check.py 
venv/lib/python3.7/site-packages/pip/_internal/commands/completion.py venv/lib/python3.7/site-packages/pip/_internal/commands/configuration.py venv/lib/python3.7/site-packages/pip/_internal/commands/debug.py venv/lib/python3.7/site-packages/pip/_internal/commands/download.py venv/lib/python3.7/site-packages/pip/_internal/commands/freeze.py venv/lib/python3.7/site-packages/pip/_internal/commands/hash.py venv/lib/python3.7/site-packages/pip/_internal/commands/help.py venv/lib/python3.7/site-packages/pip/_internal/commands/install.py venv/lib/python3.7/site-packages/pip/_internal/commands/list.py venv/lib/python3.7/site-packages/pip/_internal/commands/search.py venv/lib/python3.7/site-packages/pip/_internal/commands/show.py venv/lib/python3.7/site-packages/pip/_internal/commands/uninstall.py venv/lib/python3.7/site-packages/pip/_internal/commands/wheel.py venv/lib/python3.7/site-packages/pip/_internal/distributions/__init__.py venv/lib/python3.7/site-packages/pip/_internal/distributions/base.py venv/lib/python3.7/site-packages/pip/_internal/distributions/installed.py venv/lib/python3.7/site-packages/pip/_internal/distributions/source.py venv/lib/python3.7/site-packages/pip/_internal/distributions/wheel.py venv/lib/python3.7/site-packages/pip/_internal/models/__init__.py venv/lib/python3.7/site-packages/pip/_internal/models/candidate.py venv/lib/python3.7/site-packages/pip/_internal/models/format_control.py venv/lib/python3.7/site-packages/pip/_internal/models/index.py venv/lib/python3.7/site-packages/pip/_internal/models/link.py venv/lib/python3.7/site-packages/pip/_internal/models/search_scope.py venv/lib/python3.7/site-packages/pip/_internal/models/selection_prefs.py venv/lib/python3.7/site-packages/pip/_internal/models/target_python.py venv/lib/python3.7/site-packages/pip/_internal/operations/__init__.py venv/lib/python3.7/site-packages/pip/_internal/operations/check.py venv/lib/python3.7/site-packages/pip/_internal/operations/freeze.py venv/lib/python3.7/site-packages/pip/_internal/operations/prepare.py venv/lib/python3.7/site-packages/pip/_internal/req/__init__.py venv/lib/python3.7/site-packages/pip/_internal/req/constructors.py venv/lib/python3.7/site-packages/pip/_internal/req/req_file.py venv/lib/python3.7/site-packages/pip/_internal/req/req_install.py venv/lib/python3.7/site-packages/pip/_internal/req/req_set.py venv/lib/python3.7/site-packages/pip/_internal/req/req_tracker.py venv/lib/python3.7/site-packages/pip/_internal/req/req_uninstall.py venv/lib/python3.7/site-packages/pip/_internal/utils/__init__.py venv/lib/python3.7/site-packages/pip/_internal/utils/appdirs.py venv/lib/python3.7/site-packages/pip/_internal/utils/compat.py venv/lib/python3.7/site-packages/pip/_internal/utils/deprecation.py venv/lib/python3.7/site-packages/pip/_internal/utils/encoding.py venv/lib/python3.7/site-packages/pip/_internal/utils/filesystem.py venv/lib/python3.7/site-packages/pip/_internal/utils/glibc.py venv/lib/python3.7/site-packages/pip/_internal/utils/hashes.py venv/lib/python3.7/site-packages/pip/_internal/utils/logging.py venv/lib/python3.7/site-packages/pip/_internal/utils/marker_files.py venv/lib/python3.7/site-packages/pip/_internal/utils/misc.py venv/lib/python3.7/site-packages/pip/_internal/utils/models.py venv/lib/python3.7/site-packages/pip/_internal/utils/outdated.py venv/lib/python3.7/site-packages/pip/_internal/utils/packaging.py venv/lib/python3.7/site-packages/pip/_internal/utils/setuptools_build.py venv/lib/python3.7/site-packages/pip/_internal/utils/temp_dir.py 
venv/lib/python3.7/site-packages/pip/_internal/utils/typing.py venv/lib/python3.7/site-packages/pip/_internal/utils/ui.py venv/lib/python3.7/site-packages/pip/_internal/utils/virtualenv.py venv/lib/python3.7/site-packages/pip/_internal/vcs/__init__.py venv/lib/python3.7/site-packages/pip/_internal/vcs/bazaar.py venv/lib/python3.7/site-packages/pip/_internal/vcs/git.py venv/lib/python3.7/site-packages/pip/_internal/vcs/mercurial.py venv/lib/python3.7/site-packages/pip/_internal/vcs/subversion.py venv/lib/python3.7/site-packages/pip/_internal/vcs/versioncontrol.py venv/lib/python3.7/site-packages/pip/_vendor/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/appdirs.py venv/lib/python3.7/site-packages/pip/_vendor/distro.py venv/lib/python3.7/site-packages/pip/_vendor/ipaddress.py venv/lib/python3.7/site-packages/pip/_vendor/pyparsing.py venv/lib/python3.7/site-packages/pip/_vendor/retrying.py venv/lib/python3.7/site-packages/pip/_vendor/six.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/_cmd.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/adapter.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/cache.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/compat.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/controller.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/filewrapper.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/heuristics.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/serialize.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/wrapper.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/caches/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py venv/lib/python3.7/site-packages/pip/_vendor/certifi/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/certifi/__main__.py venv/lib/python3.7/site-packages/pip/_vendor/certifi/core.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/big5freq.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/big5prober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/chardistribution.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/charsetgroupprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/charsetprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/codingstatemachine.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/compat.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/cp949prober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/enums.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/escprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/escsm.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/eucjpprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/euckrfreq.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/euckrprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/euctwfreq.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/euctwprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/gb2312freq.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/gb2312prober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/hebrewprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/jisfreq.py 
venv/lib/python3.7/site-packages/pip/_vendor/chardet/jpcntx.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langbulgarianmodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langcyrillicmodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langgreekmodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langhebrewmodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langhungarianmodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langthaimodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/langturkishmodel.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/latin1prober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/mbcharsetprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/mbcsgroupprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/mbcssm.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/sbcharsetprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/sbcsgroupprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/sjisprober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/universaldetector.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/utf8prober.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/version.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/cli/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/chardet/cli/chardetect.py venv/lib/python3.7/site-packages/pip/_vendor/colorama/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/colorama/ansi.py venv/lib/python3.7/site-packages/pip/_vendor/colorama/ansitowin32.py venv/lib/python3.7/site-packages/pip/_vendor/colorama/initialise.py venv/lib/python3.7/site-packages/pip/_vendor/colorama/win32.py venv/lib/python3.7/site-packages/pip/_vendor/colorama/winterm.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/compat.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/database.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/index.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/locators.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/manifest.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/markers.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/metadata.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/resources.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/scripts.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/util.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/version.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/wheel.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/misc.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/shutil.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/sysconfig.py venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_ihatexml.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_inputstream.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_tokenizer.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_utils.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/constants.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/html5parser.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/serializer.py 
venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/_base.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/datrie.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/py.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/alphabeticalattributes.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/base.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/lint.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/optionaltags.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/sanitizer.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/whitespace.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treeadapters/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treeadapters/genshi.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treeadapters/sax.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/base.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/dom.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/etree.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/etree_lxml.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/base.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/dom.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/etree.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/etree_lxml.py venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/genshi.py venv/lib/python3.7/site-packages/pip/_vendor/idna/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/idna/codec.py venv/lib/python3.7/site-packages/pip/_vendor/idna/compat.py venv/lib/python3.7/site-packages/pip/_vendor/idna/core.py venv/lib/python3.7/site-packages/pip/_vendor/idna/idnadata.py venv/lib/python3.7/site-packages/pip/_vendor/idna/intranges.py venv/lib/python3.7/site-packages/pip/_vendor/idna/package_data.py venv/lib/python3.7/site-packages/pip/_vendor/idna/uts46data.py venv/lib/python3.7/site-packages/pip/_vendor/lockfile/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/lockfile/linklockfile.py venv/lib/python3.7/site-packages/pip/_vendor/lockfile/mkdirlockfile.py venv/lib/python3.7/site-packages/pip/_vendor/lockfile/pidlockfile.py venv/lib/python3.7/site-packages/pip/_vendor/lockfile/sqlitelockfile.py venv/lib/python3.7/site-packages/pip/_vendor/lockfile/symlinklockfile.py venv/lib/python3.7/site-packages/pip/_vendor/msgpack/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/msgpack/_version.py venv/lib/python3.7/site-packages/pip/_vendor/msgpack/exceptions.py venv/lib/python3.7/site-packages/pip/_vendor/msgpack/fallback.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/__about__.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/_compat.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/_structures.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/markers.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/requirements.py 
venv/lib/python3.7/site-packages/pip/_vendor/packaging/specifiers.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/utils.py venv/lib/python3.7/site-packages/pip/_vendor/packaging/version.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/build.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/check.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/colorlog.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/compat.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/envbuild.py venv/lib/python3.7/site-packages/pip/_vendor/pep517/wrappers.py venv/lib/python3.7/site-packages/pip/_vendor/pkg_resources/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/pkg_resources/py31compat.py venv/lib/python3.7/site-packages/pip/_vendor/progress/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/progress/bar.py venv/lib/python3.7/site-packages/pip/_vendor/progress/counter.py venv/lib/python3.7/site-packages/pip/_vendor/progress/spinner.py venv/lib/python3.7/site-packages/pip/_vendor/pytoml/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/pytoml/core.py venv/lib/python3.7/site-packages/pip/_vendor/pytoml/parser.py venv/lib/python3.7/site-packages/pip/_vendor/pytoml/test.py venv/lib/python3.7/site-packages/pip/_vendor/pytoml/utils.py venv/lib/python3.7/site-packages/pip/_vendor/pytoml/writer.py venv/lib/python3.7/site-packages/pip/_vendor/requests/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/requests/__version__.py venv/lib/python3.7/site-packages/pip/_vendor/requests/_internal_utils.py venv/lib/python3.7/site-packages/pip/_vendor/requests/adapters.py venv/lib/python3.7/site-packages/pip/_vendor/requests/api.py venv/lib/python3.7/site-packages/pip/_vendor/requests/auth.py venv/lib/python3.7/site-packages/pip/_vendor/requests/certs.py venv/lib/python3.7/site-packages/pip/_vendor/requests/compat.py venv/lib/python3.7/site-packages/pip/_vendor/requests/cookies.py venv/lib/python3.7/site-packages/pip/_vendor/requests/exceptions.py venv/lib/python3.7/site-packages/pip/_vendor/requests/help.py venv/lib/python3.7/site-packages/pip/_vendor/requests/hooks.py venv/lib/python3.7/site-packages/pip/_vendor/requests/models.py venv/lib/python3.7/site-packages/pip/_vendor/requests/packages.py venv/lib/python3.7/site-packages/pip/_vendor/requests/sessions.py venv/lib/python3.7/site-packages/pip/_vendor/requests/status_codes.py venv/lib/python3.7/site-packages/pip/_vendor/requests/structures.py venv/lib/python3.7/site-packages/pip/_vendor/requests/utils.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/_collections.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/connection.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/exceptions.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/fields.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/filepost.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/poolmanager.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/request.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/appengine.py 
venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/ntlmpool.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/securetransport.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/socks.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_securetransport/bindings.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/six.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/backports/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/_mixin.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/abnf_regexp.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/api.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/builder.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/compat.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/exceptions.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/iri.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/misc.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/normalizers.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/parseresult.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/uri.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/validators.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/ssl_match_hostname/_implementation.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/connection.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/queue.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/request.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/response.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/retry.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/ssl_.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/timeout.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/url.py venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/wait.py venv/lib/python3.7/site-packages/pip/_vendor/webencodings/__init__.py venv/lib/python3.7/site-packages/pip/_vendor/webencodings/labels.py venv/lib/python3.7/site-packages/pip/_vendor/webencodings/mklabels.py venv/lib/python3.7/site-packages/pip/_vendor/webencodings/tests.py venv/lib/python3.7/site-packages/pip/_vendor/webencodings/x_user_defined.py venv/lib/python3.7/site-packages/pkg_resources/__init__.py venv/lib/python3.7/site-packages/pkg_resources/py31compat.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/__init__.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/appdirs.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/six.py 
venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/__about__.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/__init__.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/_compat.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/_structures.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/markers.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/requirements.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/specifiers.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/utils.py venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py venv/lib/python3.7/site-packages/pkg_resources/extern/__init__.py venv/lib/python3.7/site-packages/pyximport/__init__.py venv/lib/python3.7/site-packages/pyximport/pyxbuild.py venv/lib/python3.7/site-packages/pyximport/pyximport.py venv/lib/python3.7/site-packages/setuptools/__init__.py venv/lib/python3.7/site-packages/setuptools/_deprecation_warning.py venv/lib/python3.7/site-packages/setuptools/archive_util.py venv/lib/python3.7/site-packages/setuptools/build_meta.py venv/lib/python3.7/site-packages/setuptools/config.py venv/lib/python3.7/site-packages/setuptools/dep_util.py venv/lib/python3.7/site-packages/setuptools/depends.py venv/lib/python3.7/site-packages/setuptools/dist.py venv/lib/python3.7/site-packages/setuptools/extension.py venv/lib/python3.7/site-packages/setuptools/glibc.py venv/lib/python3.7/site-packages/setuptools/glob.py venv/lib/python3.7/site-packages/setuptools/launch.py venv/lib/python3.7/site-packages/setuptools/lib2to3_ex.py venv/lib/python3.7/site-packages/setuptools/monkey.py venv/lib/python3.7/site-packages/setuptools/msvc.py venv/lib/python3.7/site-packages/setuptools/namespaces.py venv/lib/python3.7/site-packages/setuptools/package_index.py venv/lib/python3.7/site-packages/setuptools/pep425tags.py venv/lib/python3.7/site-packages/setuptools/py27compat.py venv/lib/python3.7/site-packages/setuptools/py31compat.py venv/lib/python3.7/site-packages/setuptools/py33compat.py venv/lib/python3.7/site-packages/setuptools/sandbox.py venv/lib/python3.7/site-packages/setuptools/site-patch.py venv/lib/python3.7/site-packages/setuptools/ssl_support.py venv/lib/python3.7/site-packages/setuptools/unicode_utils.py venv/lib/python3.7/site-packages/setuptools/version.py venv/lib/python3.7/site-packages/setuptools/wheel.py venv/lib/python3.7/site-packages/setuptools/windows_support.py venv/lib/python3.7/site-packages/setuptools/_vendor/__init__.py venv/lib/python3.7/site-packages/setuptools/_vendor/pyparsing.py venv/lib/python3.7/site-packages/setuptools/_vendor/six.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/__about__.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/__init__.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/_compat.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/_structures.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/markers.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/requirements.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/specifiers.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/utils.py venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/version.py venv/lib/python3.7/site-packages/setuptools/command/__init__.py venv/lib/python3.7/site-packages/setuptools/command/alias.py 
venv/lib/python3.7/site-packages/setuptools/command/bdist_egg.py venv/lib/python3.7/site-packages/setuptools/command/bdist_rpm.py venv/lib/python3.7/site-packages/setuptools/command/bdist_wininst.py venv/lib/python3.7/site-packages/setuptools/command/build_clib.py venv/lib/python3.7/site-packages/setuptools/command/build_ext.py venv/lib/python3.7/site-packages/setuptools/command/build_py.py venv/lib/python3.7/site-packages/setuptools/command/develop.py venv/lib/python3.7/site-packages/setuptools/command/dist_info.py venv/lib/python3.7/site-packages/setuptools/command/easy_install.py venv/lib/python3.7/site-packages/setuptools/command/egg_info.py venv/lib/python3.7/site-packages/setuptools/command/install.py venv/lib/python3.7/site-packages/setuptools/command/install_egg_info.py venv/lib/python3.7/site-packages/setuptools/command/install_lib.py venv/lib/python3.7/site-packages/setuptools/command/install_scripts.py venv/lib/python3.7/site-packages/setuptools/command/py36compat.py venv/lib/python3.7/site-packages/setuptools/command/register.py venv/lib/python3.7/site-packages/setuptools/command/rotate.py venv/lib/python3.7/site-packages/setuptools/command/saveopts.py venv/lib/python3.7/site-packages/setuptools/command/sdist.py venv/lib/python3.7/site-packages/setuptools/command/setopt.py venv/lib/python3.7/site-packages/setuptools/command/test.py venv/lib/python3.7/site-packages/setuptools/command/upload.py venv/lib/python3.7/site-packages/setuptools/command/upload_docs.py venv/lib/python3.7/site-packages/setuptools/extern/__init__.py venv/lib/python3.7/site-packages/wheel/__init__.py venv/lib/python3.7/site-packages/wheel/__main__.py venv/lib/python3.7/site-packages/wheel/bdist_wheel.py venv/lib/python3.7/site-packages/wheel/metadata.py venv/lib/python3.7/site-packages/wheel/pep425tags.py venv/lib/python3.7/site-packages/wheel/pkginfo.py venv/lib/python3.7/site-packages/wheel/util.py venv/lib/python3.7/site-packages/wheel/wheelfile.py venv/lib/python3.7/site-packages/wheel/cli/__init__.py venv/lib/python3.7/site-packages/wheel/cli/convert.py venv/lib/python3.7/site-packages/wheel/cli/pack.py venv/lib/python3.7/site-packages/wheel/cli/unpack.py venv/share/cysignals/cysignals-CSI-helper.py././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735099.0 cypari2-2.1.2/cypari2.egg-info/dependency_links.txt0000644000175000017500000000000100000000000023335 0ustar00vincentvincent00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735099.0 cypari2-2.1.2/cypari2.egg-info/requires.txt0000644000175000017500000000001700000000000021665 0ustar00vincentvincent00000000000000cysignals>=1.7 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604735099.0 cypari2-2.1.2/cypari2.egg-info/top_level.txt0000644000175000017500000000001200000000000022012 0ustar00vincentvincent00000000000000* cypari2 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.796661 cypari2-2.1.2/docs/0000755000175000017500000000000000000000000015154 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/Makefile0000644000175000017500000000114300000000000016613 0ustar00vincentvincent00000000000000# Minimal makefile for Sphinx documentation # # You can set these variables from the command line. 
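# For example (an illustrative invocation, not spelled out in this file): running "make html" falls through to the catch-all rule below and is equivalent to "python -msphinx -M html source build", given the variable defaults that follow.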
SPHINXOPTS = SPHINXBUILD = python -msphinx SPHINXPROJ = CyPari2 SOURCEDIR = source BUILDDIR = build # Put it first so that "make" without argument is like "make help". help: @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) .PHONY: help Makefile # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). %: Makefile @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.796661 cypari2-2.1.2/docs/source/0000755000175000017500000000000000000000000016454 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/source/closure.rst0000644000175000017500000000005600000000000020663 0ustar00vincentvincent00000000000000.. automodule:: cypari2.closure :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/source/conf.py0000644000175000017500000001250500000000000017756 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- # # CyPari2 documentation build configuration file, created by # sphinx-quickstart on Thu May 18 16:07:32 2017. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, as shown here. # # import os # import sys # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.6' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.mathjax', 'sphinx.ext.extlinks', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix(es) of source filenames. # You can specify multiple suffixes as a list of strings: # # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'CyPari2' copyright = u'2017, Many people' author = u'Many people' # The version info for the project you're documenting, acts as a replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = open("../../VERSION").read().strip() # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to the documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = None # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path exclude_patterns = [] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # # html_theme_options = {} # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # -- Options for HTMLHelp output ------------------------------------------ # Output file base name for HTML help builder. htmlhelp_basename = 'CyPari2doc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # # 'preamble': '', # LaTeX figure (float) alignment # # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'CyPari2.tex', u'CyPari2 Documentation', u'Many People', 'manual'), ] # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'cypari2', u'CyPari2 Documentation', [author], 1) ] # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'CyPari2', u'CyPari2 Documentation', author, 'CyPari2', 'One line description of project.', 'Miscellaneous'), ] # Links to external resources (copied from Sage) extlinks = { 'trac': ('https://trac.sagemath.org/%s', 'Sage ticket #'), 'wikipedia': ('https://en.wikipedia.org/wiki/%s', 'Wikipedia article '), 'arxiv': ('http://arxiv.org/abs/%s', 'Arxiv '), 'oeis': ('https://oeis.org/%s', 'OEIS sequence '), 'doi': ('https://dx.doi.org/%s', 'doi:'), 'pari': ('http://pari.math.u-bordeaux.fr/dochtml/help/%s', 'pari:'), 'mathscinet': ('http://www.ams.org/mathscinet-getitem?mr=%s', 'MathSciNet ') } # Monkey-patch inspect with Cython support def isfunction(obj): return hasattr(type(obj), "__code__") import inspect inspect.isfunction = isfunction ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/source/convert.rst0000644000175000017500000000005600000000000020667 0ustar00vincentvincent00000000000000.. automodule:: cypari2.convert :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/source/gen.rst0000644000175000017500000000005200000000000017754 0ustar00vincentvincent00000000000000..
automodule:: cypari2.gen :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/source/handle_error.rst0000644000175000017500000000006300000000000021651 0ustar00vincentvincent00000000000000.. automodule:: cypari2.handle_error :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1547134701.0 cypari2-2.1.2/docs/source/index.rst0000644000175000017500000000077400000000000020325 0ustar00vincentvincent00000000000000.. CyPari2 documentation master file, created by sphinx-quickstart on Thu May 18 16:07:32 2017. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to CyPari2's documentation! =================================== .. toctree:: :maxdepth: 2 :caption: Contents: pari_instance gen stack closure handle_error convert Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1516200100.0 cypari2-2.1.2/docs/source/pari_instance.rst0000644000175000017500000000006400000000000022025 0ustar00vincentvincent00000000000000.. automodule:: cypari2.pari_instance :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1556307469.0 cypari2-2.1.2/docs/source/stack.rst0000644000175000017500000000005400000000000020312 0ustar00vincentvincent00000000000000.. automodule:: cypari2.stack :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1484850667.0 cypari2-2.1.2/gen_for_sage.py0000644000175000017500000000360000000000000017213 0ustar00vincentvincent00000000000000""" TESTS: From ``gen.__repr__``:: sage: pari('vector(5,i,i)') [1, 2, 3, 4, 5] From ``gen.__str__``:: sage: print(pari('vector(5,i,i)')) [1, 2, 3, 4, 5] From ``gen.__list__``. For polynomials, list() behaves as for ordinary Sage polynomials:: sage: pol = pari("x^3 + 5/3*x"); pol.list() [0, 5/3, 0, 1] From ``gen.__add__`` and ``gen.__sub__``:: sage: RR("2e20") + pari("1e20") 3.00000000000000 E20 sage: RR("2e20") - pari("1e20") 1.00000000000000 E20 From ``gen._nf_get_pol``:: sage: K.<a> = NumberField(x^4 - 4*x^2 + 1) sage: pari(K).nf_get_pol() y^4 - 4*y^2 + 1 For relative number fields, this returns the relative polynomial. However, beware that ``pari(L)`` returns an absolute number field:: sage: L.<b> = K.extension(x^2 - 5) sage: pari(L).nf_get_pol() # Absolute y^8 - 28*y^6 + 208*y^4 - 408*y^2 + 36 sage: L.pari_rnf().nf_get_pol() # Relative x^2 - 5 TESTS:: sage: x = polygen(QQ) sage: K.<a> = NumberField(x^4 - 4*x^2 + 1) sage: K.pari_nf().nf_get_pol() y^4 - 4*y^2 + 1 sage: K.pari_bnf().nf_get_pol() y^4 - 4*y^2 + 1 sage: K.<a> = NumberField(x^4 - 4*x^2 + 1) sage: pari(K).nf_get_diff() [12, 0, 0, 0; 0, 12, 8, 0; 0, 0, 4, 0; 0, 0, 0, 4] sage: K.<a> = NumberField(x^4 - 4*x^2 + 1) sage: s = K.pari_nf().nf_get_sign(); s [4, 0] sage: type(s); type(s[0]) <... 'list'> <... 'int'> sage: CyclotomicField(15).pari_nf().nf_get_sign() [0, 4] sage: K.<a> = NumberField(x^4 - 4*x^2 + 1) sage: pari(K).nf_get_zk() [1, y, y^3 - 4*y, y^2 - 2] sage: K.<a> = QuadraticField(-65) sage: K.pari_bnf().bnf_get_no() 8
sage: K.<a> = QuadraticField(-65) sage: G = K.pari_bnf().bnf_get_gen(); G [[3, 2; 0, 1], [2, 1; 0, 1]] sage: [K.ideal(J) for J in G] [Fractional ideal (3, a + 2), Fractional ideal (2, a + 1)] """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1502699712.0 cypari2-2.1.2/look_for_dup.py0000644000175000017500000000142400000000000017261 0ustar00vincentvincent00000000000000from __future__ import absolute_import, print_function import re from autogen import rebuild rebuild() with open('cypari2/auto_gen.pxi') as f: auto_gen = f.read() with open('cypari2/gen.pyx') as f: gen = f.read() with open('cypari2/auto_instance.pxi') as f: auto_instance = f.read() with open('cypari2/pari_instance.pyx') as f: instance = f.read() re_func = re.compile('^ def (?P<name>[^\s()]*)\(', re.MULTILINE) gen_func = set(re_func.findall(gen)) auto_gen_func = set(re_func.findall(auto_gen)) common_func = gen_func.intersection(auto_gen_func) print('{} functions in gen'.format(len(gen_func))) print('{} functions in auto_gen'.format(len(auto_gen_func))) print('{} functions in common:\n - {}'.format(len(common_func), '\n - '.join(sorted(common_func)))) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735100.0699942 cypari2-2.1.2/setup.cfg0000644000175000017500000000004600000000000016045 0ustar00vincentvincent00000000000000[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1583432207.0 cypari2-2.1.2/setup.py0000755000175000017500000000520600000000000015744 0ustar00vincentvincent00000000000000#!/usr/bin/env python import os from setuptools import setup from distutils.command.build_ext import build_ext as _build_ext from setuptools.command.bdist_egg import bdist_egg as _bdist_egg from setuptools.extension import Extension from autogen import rebuild from autogen.paths import include_dirs, library_dirs ext_kwds = dict(include_dirs=include_dirs(), library_dirs=library_dirs()) if "READTHEDOCS" in os.environ: # When building with readthedocs, disable optimizations to decrease # resource usage during build ext_kwds["extra_compile_args"] = ["-O0"] # Print PARI/GP defaults and environment variables for debugging from subprocess import Popen, PIPE Popen(["gp", "-f", "-q"], stdin=PIPE).communicate(b"default()") for item in os.environ.items(): print("%s=%r" % item) # Adapted from Cython's new_build_ext class build_ext(_build_ext): def finalize_options(self): # Generate auto-generated sources from pari.desc rebuild() self.directives = { "autotestdict.cdef": True, "binding": True, "cdivision": True, "language_level": 2, } _build_ext.finalize_options(self) def run(self): # Run Cython from Cython.Build.Dependencies import cythonize self.distribution.ext_modules[:] = cythonize( self.distribution.ext_modules, compiler_directives=self.directives) _build_ext.run(self) class no_egg(_bdist_egg): def run(self): from distutils.errors import DistutilsOptionError raise DistutilsOptionError("The package cypari2 will not function correctly when built as egg. Therefore, it cannot be installed using 'python setup.py install' or 'easy_install'.
Instead, use 'pip install' to install cypari2.") with open('README.rst') as f: README = f.read() with open('VERSION') as f: VERSION = f.read().strip() setup( name='cypari2', version=VERSION, setup_requires=['Cython>=0.28'], install_requires=['cysignals>=1.7'], description="A Python interface to the number theory library PARI/GP", long_description=README, url="https://github.com/sagemath/cypari2", author="Luca De Feo, Vincent Delecroix, Jeroen Demeyer, Vincent Klein", author_email="sage-devel@googlegroups.com", license='GNU General Public License, version 2 or later', ext_modules=[Extension("*", ["cypari2/*.pyx"], **ext_kwds)], keywords='PARI/GP number theory', packages=['cypari2'], package_dir={'cypari2': 'cypari2'}, package_data={'cypari2': ['declinl.pxi', '*.pxd', '*.h']}, cmdclass=dict(build_ext=build_ext, bdist_egg=no_egg) ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.7999942 cypari2-2.1.2/tests/0000755000175000017500000000000000000000000015366 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1547134701.0 cypari2-2.1.2/tests/rundoctest.py0000755000175000017500000000253200000000000020137 0ustar00vincentvincent00000000000000#!/usr/bin/env python import os import sys # Autogen tests must be run in the root dir, and with the proper module path path = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) os.chdir(path) sys.path.append(path) import autogen import cypari2 import doctest # The doctests assume utf-8 encoding cypari2.string_utils.encoding = "utf-8" # For doctests, we want exceptions to look the same, # regardless of the Python version. Python 3 will put the # module name in the traceback, which we avoid by faking # the module to be __main__. cypari2.handle_error.PariError.__module__ = "__main__" # Disable stack size warning messages pari = cypari2.Pari() pari.default("debugmem", 0) failed = 0 attempted = 0 for mod in [cypari2.closure, cypari2.convert, cypari2.gen, cypari2.handle_error, cypari2.pari_instance, cypari2.stack, cypari2.string_utils, autogen.doc, autogen.generator, autogen.parser, autogen.paths]: print("="*80) print("Testing {}".format(mod.__name__)) test = doctest.testmod(mod, optionflags=doctest.ELLIPSIS|doctest.REPORT_NDIFF, verbose=False) failed += test.failed attempted += test.attempted print("="*80) print("Summary result for cypari2:") print(" attempted = {}".format(attempted)) print(" failed = {}".format(failed)) sys.exit(1 if failed else 0) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604256609.0 cypari2-2.1.2/tests/test_backward.py0000644000175000017500000000213700000000000020560 0ustar00vincentvincent00000000000000#!/usr/bin/env python #***************************************************************************** # Copyright (C) 2020 Vincent Delecroix # # Distributed under the terms of the GNU General Public License (GPL) # as published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. 
# http://www.gnu.org/licenses/ #***************************************************************************** import cypari2 import unittest class TestBackward(unittest.TestCase): def test_polisirreducible(self): pari = cypari2.Pari() p = pari('x^2 + 1') self.assertTrue(p.polisirreducible()) def test_sqrtint(self): pari = cypari2.Pari() self.assertEqual(pari(10).sqrtint(), 3) def test_poldegree(self): pari = cypari2.Pari() self.assertEqual(pari('x + 1').poldegree(), 1) self.assertEqual(pari('x*y^2 + 1').poldegree(pari('x')), 1) self.assertEqual(pari('x*y^2 + 1').poldegree(pari('y')), 2) if __name__ == '__main__': unittest.main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1604256609.0 cypari2-2.1.2/tests/test_integers.py0000644000175000017500000000373100000000000020623 0ustar00vincentvincent00000000000000#!/usr/bin/env python #***************************************************************************** # Copyright (C) 2020 Vincent Delecroix # # Distributed under the terms of the GNU General Public License (GPL) # as published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** import random import cypari2 import unittest class TestPariInteger(unittest.TestCase): def randint(self): p = random.random() if p < 0.05: return random.randint(-2, 2) elif p < 0.5: return random.randint(-2**30, 2**30) else: return random.randint(-2**100, 2**100) def cmp(self, a, b): pari = cypari2.Pari() pa = pari(a) pb = pari(b) self.assertTrue(pa == pa and a == pa and pa == a) self.assertEqual(a == b, pa == pb) self.assertEqual(a != b, pa != pb) self.assertEqual(a < b, pa < pb) self.assertEqual(a <= b, pa <= pb) self.assertEqual(a > b, pa > pb) self.assertEqual(a >= b, pa >= pb) def test_cmp(self): for _ in range(100): a = self.randint() b = self.randint() self.cmp(a, a) self.cmp(a, b) def test_binop(self): pari = cypari2.Pari() for _ in range(100): a = self.randint() b = self.randint() self.assertEqual(a + b, pari(a) + pari(b)) self.assertEqual(a - b, pari(a) - pari(b)) self.assertEqual(a * b, pari(a) * pari(b)) if b > 0: self.assertEqual(a % b, pari(a) % pari(b)) def test_zero_division(self): pari = cypari2.Pari() with self.assertRaises(cypari2.PariError): pari(2) / pari(0) if __name__ == '__main__': unittest.main() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.7799942 cypari2-2.1.2/venv/0000755000175000017500000000000000000000000015202 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.7999942 cypari2-2.1.2/venv/bin/0000755000175000017500000000000000000000000015752 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430192.0 cypari2-2.1.2/venv/bin/activate_this.py0000644000175000017500000000216700000000000021161 0ustar00vincentvincent00000000000000"""By using execfile(this_file, dict(__file__=this_file)) you will activate this virtualenv environment. 
This can be used when you must use an existing Python interpreter, not the virtualenv bin/python """ try: __file__ except NameError: raise AssertionError( "You must run this like execfile('path/to/activate_this.py', dict(__file__='path/to/activate_this.py'))" ) import os import site import sys old_os_path = os.environ.get("PATH", "") os.environ["PATH"] = os.path.dirname(os.path.abspath(__file__)) + os.pathsep + old_os_path base = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) if sys.platform == "win32": site_packages = os.path.join(base, "Lib", "site-packages") else: site_packages = os.path.join(base, "lib", "python%s" % sys.version[:3], "site-packages") prev_sys_path = list(sys.path) site.addsitedir(site_packages) sys.real_prefix = sys.prefix sys.prefix = base # Move the added items to the front of the path: new_sys_path = [] for item in list(sys.path): if item not in prev_sys_path: new_sys_path.append(item) sys.path.remove(item) sys.path[:0] = new_sys_path ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.776661 cypari2-2.1.2/venv/lib/0000755000175000017500000000000000000000000015750 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.7999942 cypari2-2.1.2/venv/lib/python3.7/0000755000175000017500000000000000000000000017521 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.7999942 cypari2-2.1.2/venv/lib/python3.7/distutils/0000755000175000017500000000000000000000000021545 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430186.0 cypari2-2.1.2/venv/lib/python3.7/distutils/__init__.py0000644000175000017500000000726200000000000023665 0ustar00vincentvincent00000000000000import imp import os import sys import warnings # opcode is not a virtualenv module, so we can use it to find the stdlib # Important! 
To work on pypy, this must be a module that resides in the # lib-python/modified-x.y.z directory import opcode dirname = os.path.dirname distutils_path = os.path.join(os.path.dirname(opcode.__file__), "distutils") if os.path.normpath(distutils_path) == os.path.dirname(os.path.normpath(__file__)): warnings.warn("The virtualenv distutils package at %s appears to be in the same location as the system distutils?" % distutils_path) else: __path__.insert(0, distutils_path) # noqa: F821 real_distutils = imp.load_module("_virtualenv_distutils", None, distutils_path, ("", "", imp.PKG_DIRECTORY)) # Copy the relevant attributes try: __revision__ = real_distutils.__revision__ except AttributeError: pass __version__ = real_distutils.__version__ from distutils import dist, sysconfig # isort:skip try: basestring except NameError: basestring = str # patch build_ext (distutils doesn't know how to get the libs directory # path on windows - it hardcodes the paths around the patched sys.prefix) if sys.platform == "win32": from distutils.command.build_ext import build_ext as old_build_ext class build_ext(old_build_ext): def finalize_options(self): if self.library_dirs is None: self.library_dirs = [] elif isinstance(self.library_dirs, basestring): self.library_dirs = self.library_dirs.split(os.pathsep) self.library_dirs.insert(0, os.path.join(sys.real_prefix, "Libs")) old_build_ext.finalize_options(self) from distutils.command import build_ext as build_ext_module build_ext_module.build_ext = build_ext # distutils.dist patches: old_find_config_files = dist.Distribution.find_config_files def find_config_files(self): found = old_find_config_files(self) if os.name == "posix": user_filename = ".pydistutils.cfg" else: user_filename = "pydistutils.cfg" user_filename = os.path.join(sys.prefix, user_filename) if os.path.isfile(user_filename): for item in list(found): if item.endswith("pydistutils.cfg"): found.remove(item) found.append(user_filename) return found dist.Distribution.find_config_files = find_config_files # distutils.sysconfig patches: old_get_python_inc = sysconfig.get_python_inc def sysconfig_get_python_inc(plat_specific=0, prefix=None): if prefix is None: prefix = sys.real_prefix return old_get_python_inc(plat_specific, prefix) sysconfig_get_python_inc.__doc__ = old_get_python_inc.__doc__ sysconfig.get_python_inc = sysconfig_get_python_inc old_get_python_lib = sysconfig.get_python_lib def sysconfig_get_python_lib(plat_specific=0, standard_lib=0, prefix=None): if standard_lib and prefix is None: prefix = sys.real_prefix return old_get_python_lib(plat_specific, standard_lib, prefix) sysconfig_get_python_lib.__doc__ = old_get_python_lib.__doc__ sysconfig.get_python_lib = sysconfig_get_python_lib old_get_config_vars = sysconfig.get_config_vars def sysconfig_get_config_vars(*args): real_vars = old_get_config_vars(*args) if sys.platform == "win32": lib_dir = os.path.join(sys.real_prefix, "libs") if isinstance(real_vars, dict) and "LIBDIR" not in real_vars: real_vars["LIBDIR"] = lib_dir # asked for all elif isinstance(real_vars, list) and "LIBDIR" in args: real_vars = real_vars + [lib_dir] # asked for list return real_vars sysconfig_get_config_vars.__doc__ = old_get_config_vars.__doc__ sysconfig.get_config_vars = sysconfig_get_config_vars ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.7999942 cypari2-2.1.2/venv/lib/python3.7/site-packages/0000755000175000017500000000000000000000000022241
5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.8033276 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/0000755000175000017500000000000000000000000023505 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.806661 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/0000755000175000017500000000000000000000000024544 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/BuildExecutable.py0000644000175000017500000001033600000000000030162 0ustar00vincentvincent00000000000000""" Compile a Python script into an executable that embeds CPython and run it. Requires CPython to be built as a shared library ('libpythonX.Y'). Basic usage: python cythonrun somefile.py [ARGS] """ from __future__ import absolute_import DEBUG = True import sys import os from distutils import sysconfig def get_config_var(name, default=''): return sysconfig.get_config_var(name) or default INCDIR = sysconfig.get_python_inc() LIBDIR1 = get_config_var('LIBDIR') LIBDIR2 = get_config_var('LIBPL') PYLIB = get_config_var('LIBRARY') PYLIB_DYN = get_config_var('LDLIBRARY') if PYLIB_DYN == PYLIB: # no shared library PYLIB_DYN = '' else: PYLIB_DYN = os.path.splitext(PYLIB_DYN[3:])[0] # 'lib(XYZ).so' -> XYZ CC = get_config_var('CC', os.environ.get('CC', '')) CFLAGS = get_config_var('CFLAGS') + ' ' + os.environ.get('CFLAGS', '') LINKCC = get_config_var('LINKCC', os.environ.get('LINKCC', CC)) LINKFORSHARED = get_config_var('LINKFORSHARED') LIBS = get_config_var('LIBS') SYSLIBS = get_config_var('SYSLIBS') EXE_EXT = sysconfig.get_config_var('EXE') def _debug(msg, *args): if DEBUG: if args: msg = msg % args sys.stderr.write(msg + '\n') def dump_config(): _debug('INCDIR: %s', INCDIR) _debug('LIBDIR1: %s', LIBDIR1) _debug('LIBDIR2: %s', LIBDIR2) _debug('PYLIB: %s', PYLIB) _debug('PYLIB_DYN: %s', PYLIB_DYN) _debug('CC: %s', CC) _debug('CFLAGS: %s', CFLAGS) _debug('LINKCC: %s', LINKCC) _debug('LINKFORSHARED: %s', LINKFORSHARED) _debug('LIBS: %s', LIBS) _debug('SYSLIBS: %s', SYSLIBS) _debug('EXE_EXT: %s', EXE_EXT) def runcmd(cmd, shell=True): if shell: cmd = ' '.join(cmd) _debug(cmd) else: _debug(' '.join(cmd)) try: import subprocess except ImportError: # Python 2.3 ... returncode = os.system(cmd) else: returncode = subprocess.call(cmd, shell=shell) if returncode: sys.exit(returncode) def clink(basename): runcmd([LINKCC, '-o', basename + EXE_EXT, basename+'.o', '-L'+LIBDIR1, '-L'+LIBDIR2] + [PYLIB_DYN and ('-l'+PYLIB_DYN) or os.path.join(LIBDIR1, PYLIB)] + LIBS.split() + SYSLIBS.split() + LINKFORSHARED.split()) def ccompile(basename): runcmd([CC, '-c', '-o', basename+'.o', basename+'.c', '-I' + INCDIR] + CFLAGS.split()) def cycompile(input_file, options=()): from ..Compiler import Version, CmdLine, Main options, sources = CmdLine.parse_command_line(list(options or ()) + ['--embed', input_file]) _debug('Using Cython %s to compile %s', Version.version, input_file) result = Main.compile(sources, options) if result.num_errors > 0: sys.exit(1) def exec_file(program_name, args=()): runcmd([os.path.abspath(program_name)] + list(args), shell=False) def build(input_file, compiler_args=(), force=False): """ Build an executable program from a Cython module. Returns the name of the executable file. 
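For example (an illustrative run, assuming EXE_EXT is empty, as on typical Unix builds): build("hello.py") cythonizes hello.py with --embed, compiles hello.c to hello.o, and links the result into an executable named "hello".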
""" basename = os.path.splitext(input_file)[0] exe_file = basename + EXE_EXT if not force and os.path.abspath(exe_file) == os.path.abspath(input_file): raise ValueError("Input and output file names are the same, refusing to overwrite") if (not force and os.path.exists(exe_file) and os.path.exists(input_file) and os.path.getmtime(input_file) <= os.path.getmtime(exe_file)): _debug("File is up to date, not regenerating %s", exe_file) return exe_file cycompile(input_file, compiler_args) ccompile(basename) clink(basename) return exe_file def build_and_run(args): """ Build an executable program from a Cython module and runs it. Arguments after the module name will be passed verbatimely to the program. """ cy_args = [] last_arg = None for i, arg in enumerate(args): if arg.startswith('-'): cy_args.append(arg) elif last_arg in ('-X', '--directive'): cy_args.append(arg) else: input_file = arg args = args[i+1:] break last_arg = arg else: raise ValueError('no input file provided') program_name = build(input_file, cy_args) exec_file(program_name, args) if __name__ == '__main__': build_and_run(sys.argv[1:]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Cythonize.py0000644000175000017500000002002700000000000027073 0ustar00vincentvincent00000000000000#!/usr/bin/env python from __future__ import absolute_import import os import shutil import tempfile from distutils.core import setup from .Dependencies import cythonize, extended_iglob from ..Utils import is_package_dir from ..Compiler import Options try: import multiprocessing parallel_compiles = int(multiprocessing.cpu_count() * 1.5) except ImportError: multiprocessing = None parallel_compiles = 0 class _FakePool(object): def map_async(self, func, args): try: from itertools import imap except ImportError: imap=map for _ in imap(func, args): pass def close(self): pass def terminate(self): pass def join(self): pass def parse_directives(option, name, value, parser): dest = option.dest old_directives = dict(getattr(parser.values, dest, Options.get_directive_defaults())) directives = Options.parse_directive_list( value, relaxed_bool=True, current_settings=old_directives) setattr(parser.values, dest, directives) def parse_options(option, name, value, parser): dest = option.dest options = dict(getattr(parser.values, dest, {})) for opt in value.split(','): if '=' in opt: n, v = opt.split('=', 1) v = v.lower() not in ('false', 'f', '0', 'no') else: n, v = opt, True options[n] = v setattr(parser.values, dest, options) def parse_compile_time_env(option, name, value, parser): dest = option.dest old_env = dict(getattr(parser.values, dest, {})) new_env = Options.parse_compile_time_env(value, current_settings=old_env) setattr(parser.values, dest, new_env) def find_package_base(path): base_dir, package_path = os.path.split(path) while os.path.isfile(os.path.join(base_dir, '__init__.py')): base_dir, parent = os.path.split(base_dir) package_path = '%s/%s' % (parent, package_path) return base_dir, package_path def cython_compile(path_pattern, options): pool = None all_paths = map(os.path.abspath, extended_iglob(path_pattern)) try: for path in all_paths: if options.build_inplace: base_dir = path while not os.path.isdir(base_dir) or is_package_dir(base_dir): base_dir = os.path.dirname(base_dir) else: base_dir = None if os.path.isdir(path): # recursively compiling a package paths = [os.path.join(path, '**', '*.{py,pyx}')] else: # assume it's a file(-like 
thing) paths = [path] ext_modules = cythonize( paths, nthreads=options.parallel, exclude_failures=options.keep_going, exclude=options.excludes, compiler_directives=options.directives, compile_time_env=options.compile_time_env, force=options.force, quiet=options.quiet, **options.options) if ext_modules and options.build: if len(ext_modules) > 1 and options.parallel > 1: if pool is None: try: pool = multiprocessing.Pool(options.parallel) except OSError: pool = _FakePool() pool.map_async(run_distutils, [ (base_dir, [ext]) for ext in ext_modules]) else: run_distutils((base_dir, ext_modules)) except: if pool is not None: pool.terminate() raise else: if pool is not None: pool.close() pool.join() def run_distutils(args): base_dir, ext_modules = args script_args = ['build_ext', '-i'] cwd = os.getcwd() temp_dir = None try: if base_dir: os.chdir(base_dir) temp_dir = tempfile.mkdtemp(dir=base_dir) script_args.extend(['--build-temp', temp_dir]) setup( script_name='setup.py', script_args=script_args, ext_modules=ext_modules, ) finally: if base_dir: os.chdir(cwd) if temp_dir and os.path.isdir(temp_dir): shutil.rmtree(temp_dir) def parse_args(args): from optparse import OptionParser parser = OptionParser(usage='%prog [options] [sources and packages]+') parser.add_option('-X', '--directive', metavar='NAME=VALUE,...', dest='directives', default={}, type="str", action='callback', callback=parse_directives, help='set a compiler directive') parser.add_option('-E', '--compile-time-env', metavar='NAME=VALUE,...', dest='compile_time_env', default={}, type="str", action='callback', callback=parse_compile_time_env, help='set a compile time environment variable') parser.add_option('-s', '--option', metavar='NAME=VALUE', dest='options', default={}, type="str", action='callback', callback=parse_options, help='set a cythonize option') parser.add_option('-2', dest='language_level', action='store_const', const=2, default=None, help='use Python 2 syntax mode by default') parser.add_option('-3', dest='language_level', action='store_const', const=3, help='use Python 3 syntax mode by default') parser.add_option('--3str', dest='language_level', action='store_const', const='3str', help='use Python 3 syntax mode by default') parser.add_option('-a', '--annotate', dest='annotate', action='store_true', help='generate annotated HTML page for source files') parser.add_option('-x', '--exclude', metavar='PATTERN', dest='excludes', action='append', default=[], help='exclude certain file patterns from the compilation') parser.add_option('-b', '--build', dest='build', action='store_true', help='build extension modules using distutils') parser.add_option('-i', '--inplace', dest='build_inplace', action='store_true', help='build extension modules in place using distutils (implies -b)') parser.add_option('-j', '--parallel', dest='parallel', metavar='N', type=int, default=parallel_compiles, help=('run builds in N parallel jobs (default: %d)' % (parallel_compiles or 1))) parser.add_option('-f', '--force', dest='force', action='store_true', help='force recompilation') parser.add_option('-q', '--quiet', dest='quiet', action='store_true', help='be less verbose during compilation') parser.add_option('--lenient', dest='lenient', action='store_true', help='increase Python compatibility by ignoring some compile time errors') parser.add_option('-k', '--keep-going', dest='keep_going', action='store_true', help='compile as much as possible, ignore compilation failures') options, args = parser.parse_args(args) if not args: parser.error("no source files 
provided") if options.build_inplace: options.build = True if multiprocessing is None: options.parallel = 0 if options.language_level: assert options.language_level in (2, 3, '3str') options.options['language_level'] = options.language_level return options, args def main(args=None): options, paths = parse_args(args) if options.lenient: # increase Python compatibility by ignoring compile time errors Options.error_on_unknown_names = False Options.error_on_uninitialized = False if options.annotate: Options.annotate = True for path in paths: cython_compile(path, options) if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Dependencies.py0000644000175000017500000013777400000000000027527 0ustar00vincentvincent00000000000000from __future__ import absolute_import, print_function import cython from .. import __version__ import collections import contextlib import hashlib import os import shutil import subprocess import re, sys, time import warnings from glob import iglob from io import open as io_open from os.path import relpath as _relpath from distutils.extension import Extension from distutils.util import strtobool import zipfile try: import gzip gzip_open = gzip.open gzip_ext = '.gz' except ImportError: gzip_open = open gzip_ext = '' try: import zlib zipfile_compression_mode = zipfile.ZIP_DEFLATED except ImportError: zipfile_compression_mode = zipfile.ZIP_STORED try: import pythran except: pythran = None from .. import Utils from ..Utils import (cached_function, cached_method, path_exists, safe_makedirs, copy_file_to_dir_if_newer, is_package_dir, replace_suffix) from ..Compiler.Main import Context, CompilationOptions, default_options join_path = cached_function(os.path.join) copy_once_if_newer = cached_function(copy_file_to_dir_if_newer) safe_makedirs_once = cached_function(safe_makedirs) if sys.version_info[0] < 3: # stupid Py2 distutils enforces str type in list of sources _fs_encoding = sys.getfilesystemencoding() if _fs_encoding is None: _fs_encoding = sys.getdefaultencoding() def encode_filename_in_py2(filename): if not isinstance(filename, bytes): return filename.encode(_fs_encoding) return filename else: def encode_filename_in_py2(filename): return filename basestring = str def _make_relative(file_paths, base=None): if not base: base = os.getcwd() if base[-1] != os.path.sep: base += os.path.sep return [_relpath(path, base) if path.startswith(base) else path for path in file_paths] def extended_iglob(pattern): if '{' in pattern: m = re.match('(.*){([^}]+)}(.*)', pattern) if m: before, switch, after = m.groups() for case in switch.split(','): for path in extended_iglob(before + case + after): yield path return if '**/' in pattern: seen = set() first, rest = pattern.split('**/', 1) if first: first = iglob(first+'/') else: first = [''] for root in first: for path in extended_iglob(join_path(root, rest)): if path not in seen: seen.add(path) yield path for path in extended_iglob(join_path(root, '*', '**/' + rest)): if path not in seen: seen.add(path) yield path else: for path in iglob(pattern): yield path def nonempty(it, error_msg="expected non-empty iterator"): empty = True for value in it: empty = False yield value if empty: raise ValueError(error_msg) @cached_function def file_hash(filename): path = os.path.normpath(filename) prefix = ('%d:%s' % (len(path), path)).encode("UTF-8") m = hashlib.md5(prefix) with open(path, 'rb') as f: data = 
f.read(65000) while data: m.update(data) data = f.read(65000) return m.hexdigest() def update_pythran_extension(ext): if pythran is None: raise RuntimeError("You first need to install Pythran to use the np_pythran directive.") try: pythran_ext = pythran.config.make_extension(python=True) except TypeError: # older pythran version only pythran_ext = pythran.config.make_extension() ext.include_dirs.extend(pythran_ext['include_dirs']) ext.extra_compile_args.extend(pythran_ext['extra_compile_args']) ext.extra_link_args.extend(pythran_ext['extra_link_args']) ext.define_macros.extend(pythran_ext['define_macros']) ext.undef_macros.extend(pythran_ext['undef_macros']) ext.library_dirs.extend(pythran_ext['library_dirs']) ext.libraries.extend(pythran_ext['libraries']) ext.language = 'c++' # These options are not compatible with the way normal Cython extensions work for bad_option in ["-fwhole-program", "-fvisibility=hidden"]: try: ext.extra_compile_args.remove(bad_option) except ValueError: pass def parse_list(s): """ >>> parse_list("") [] >>> parse_list("a") ['a'] >>> parse_list("a b c") ['a', 'b', 'c'] >>> parse_list("[a, b, c]") ['a', 'b', 'c'] >>> parse_list('a " " b') ['a', ' ', 'b'] >>> parse_list('[a, ",a", "a,", ",", ]') ['a', ',a', 'a,', ','] """ if len(s) >= 2 and s[0] == '[' and s[-1] == ']': s = s[1:-1] delimiter = ',' else: delimiter = ' ' s, literals = strip_string_literals(s) def unquote(literal): literal = literal.strip() if literal[0] in "'\"": return literals[literal[1:-1]] else: return literal return [unquote(item) for item in s.split(delimiter) if item.strip()] transitive_str = object() transitive_list = object() bool_or = object() distutils_settings = { 'name': str, 'sources': list, 'define_macros': list, 'undef_macros': list, 'libraries': transitive_list, 'library_dirs': transitive_list, 'runtime_library_dirs': transitive_list, 'include_dirs': transitive_list, 'extra_objects': list, 'extra_compile_args': transitive_list, 'extra_link_args': transitive_list, 'export_symbols': list, 'depends': transitive_list, 'language': transitive_str, 'np_pythran': bool_or } @cython.locals(start=cython.Py_ssize_t, end=cython.Py_ssize_t) def line_iter(source): if isinstance(source, basestring): start = 0 while True: end = source.find('\n', start) if end == -1: yield source[start:] return yield source[start:end] start = end+1 else: for line in source: yield line class DistutilsInfo(object): def __init__(self, source=None, exn=None): self.values = {} if source is not None: for line in line_iter(source): line = line.lstrip() if not line: continue if line[0] != '#': break line = line[1:].lstrip() kind = next((k for k in ("distutils:","cython:") if line.startswith(k)), None) if kind is not None: key, _, value = [s.strip() for s in line[len(kind):].partition('=')] type = distutils_settings.get(key, None) if line.startswith("cython:") and type is None: continue if type in (list, transitive_list): value = parse_list(value) if key == 'define_macros': value = [tuple(macro.split('=', 1)) if '=' in macro else (macro, None) for macro in value] if type is bool_or: value = strtobool(value) self.values[key] = value elif exn is not None: for key in distutils_settings: if key in ('name', 'sources','np_pythran'): continue value = getattr(exn, key, None) if value: self.values[key] = value def merge(self, other): if other is None: return self for key, value in other.values.items(): type = distutils_settings[key] if type is transitive_str and key not in self.values: self.values[key] = value elif type is 
transitive_list: if key in self.values: # Change a *copy* of the list (Trac #845) all = self.values[key][:] for v in value: if v not in all: all.append(v) value = all self.values[key] = value elif type is bool_or: self.values[key] = self.values.get(key, False) | value return self def subs(self, aliases): if aliases is None: return self resolved = DistutilsInfo() for key, value in self.values.items(): type = distutils_settings[key] if type in [list, transitive_list]: new_value_list = [] for v in value: if v in aliases: v = aliases[v] if isinstance(v, list): new_value_list += v else: new_value_list.append(v) value = new_value_list else: if value in aliases: value = aliases[value] resolved.values[key] = value return resolved def apply(self, extension): for key, value in self.values.items(): type = distutils_settings[key] if type in [list, transitive_list]: value = getattr(extension, key) + list(value) setattr(extension, key, value) @cython.locals(start=cython.Py_ssize_t, q=cython.Py_ssize_t, single_q=cython.Py_ssize_t, double_q=cython.Py_ssize_t, hash_mark=cython.Py_ssize_t, end=cython.Py_ssize_t, k=cython.Py_ssize_t, counter=cython.Py_ssize_t, quote_len=cython.Py_ssize_t) def strip_string_literals(code, prefix='__Pyx_L'): """ Normalizes every string literal to be of the form '__Pyx_Lxxx', returning the normalized code and a mapping of labels to string literals. """ new_code = [] literals = {} counter = 0 start = q = 0 in_quote = False hash_mark = single_q = double_q = -1 code_len = len(code) quote_type = quote_len = None while True: if hash_mark < q: hash_mark = code.find('#', q) if single_q < q: single_q = code.find("'", q) if double_q < q: double_q = code.find('"', q) q = min(single_q, double_q) if q == -1: q = max(single_q, double_q) # We're done. if q == -1 and hash_mark == -1: new_code.append(code[start:]) break # Try to close the quote. elif in_quote: if code[q-1] == u'\\': k = 2 while q >= k and code[q-k] == u'\\': k += 1 if k % 2 == 0: q += 1 continue if code[q] == quote_type and ( quote_len == 1 or (code_len > q + 2 and quote_type == code[q+1] == code[q+2])): counter += 1 label = "%s%s_" % (prefix, counter) literals[label] = code[start+quote_len:q] full_quote = code[q:q+quote_len] new_code.append(full_quote) new_code.append(label) new_code.append(full_quote) q += quote_len in_quote = False start = q else: q += 1 # Process comment. elif -1 != hash_mark and (hash_mark < q or q == -1): new_code.append(code[start:hash_mark+1]) end = code.find('\n', hash_mark) counter += 1 label = "%s%s_" % (prefix, counter) if end == -1: end_or_none = None else: end_or_none = end literals[label] = code[hash_mark+1:end_or_none] new_code.append(label) if end == -1: break start = q = end # Open the quote. else: if code_len >= q+3 and (code[q] == code[q+1] == code[q+2]): quote_len = 3 else: quote_len = 1 in_quote = True quote_type = code[q] new_code.append(code[start:q]) start = q q += quote_len return "".join(new_code), literals # We need to allow spaces to allow for conditional compilation like # IF ...: # cimport ... 
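# ---------------------------------------------------------------------
# A minimal, self-contained sketch of how strip_string_literals()
# (defined above) is used: string and comment bodies are swapped for
# "__Pyx_L<N>_" labels so the dependency regexes below never match
# inside literals, and the returned dict maps labels back to the
# original text.  Label numbering is an implementation detail, so the
# expected outputs in the comments are approximate.
from Cython.Build.Dependencies import strip_string_literals

_code, _literals = strip_string_literals('name = "libfoo"  # some comment\n')
print(_code)      # roughly: name = "__Pyx_L1_"  #__Pyx_L2_
print(_literals)  # roughly: {'__Pyx_L1_': 'libfoo', '__Pyx_L2_': ' some comment'}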
dependency_regex = re.compile(r"(?:^\s*from +([0-9a-zA-Z_.]+) +cimport)|" r"(?:^\s*cimport +([0-9a-zA-Z_.]+(?: *, *[0-9a-zA-Z_.]+)*))|" r"(?:^\s*cdef +extern +from +['\"]([^'\"]+)['\"])|" r"(?:^\s*include +['\"]([^'\"]+)['\"])", re.M) dependency_after_from_regex = re.compile( r"(?:^\s+\(([0-9a-zA-Z_., ]*)\)[#\n])|" r"(?:^\s+([0-9a-zA-Z_., ]*)[#\n])", re.M) def normalize_existing(base_path, rel_paths): return normalize_existing0(os.path.dirname(base_path), tuple(set(rel_paths))) @cached_function def normalize_existing0(base_dir, rel_paths): """ Given some base directory ``base_dir`` and a list of path names ``rel_paths``, normalize each relative path name ``rel`` by replacing it by ``os.path.join(base, rel)`` if that file exists. Return a couple ``(normalized, needed_base)`` where ``normalized`` if the list of normalized file names and ``needed_base`` is ``base_dir`` if we actually needed ``base_dir``. If no paths were changed (for example, if all paths were already absolute), then ``needed_base`` is ``None``. """ normalized = [] needed_base = None for rel in rel_paths: if os.path.isabs(rel): normalized.append(rel) continue path = join_path(base_dir, rel) if path_exists(path): normalized.append(os.path.normpath(path)) needed_base = base_dir else: normalized.append(rel) return (normalized, needed_base) def resolve_depends(depends, include_dirs): include_dirs = tuple(include_dirs) resolved = [] for depend in depends: path = resolve_depend(depend, include_dirs) if path is not None: resolved.append(path) return resolved @cached_function def resolve_depend(depend, include_dirs): if depend[0] == '<' and depend[-1] == '>': return None for dir in include_dirs: path = join_path(dir, depend) if path_exists(path): return os.path.normpath(path) return None @cached_function def package(filename): dir = os.path.dirname(os.path.abspath(str(filename))) if dir != filename and is_package_dir(dir): return package(dir) + (os.path.basename(dir),) else: return () @cached_function def fully_qualified_name(filename): module = os.path.splitext(os.path.basename(filename))[0] return '.'.join(package(filename) + (module,)) @cached_function def parse_dependencies(source_filename): # Actual parsing is way too slow, so we use regular expressions. # The only catch is that we must strip comments and string # literals ahead of time. 
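# ---------------------------------------------------------------------
# A small sketch of what dependency_regex (just above) captures.  Each
# alternative fills exactly one group: cimport-from module, plain
# cimport list, extern header, or include file.  The pattern is copied
# verbatim so the snippet runs on its own; the sample source is made up.
import re as _re_demo

_dep_re = _re_demo.compile(
    r"(?:^\s*from +([0-9a-zA-Z_.]+) +cimport)|"
    r"(?:^\s*cimport +([0-9a-zA-Z_.]+(?: *, *[0-9a-zA-Z_.]+)*))|"
    r"(?:^\s*cdef +extern +from +['\"]([^'\"]+)['\"])|"
    r"(?:^\s*include +['\"]([^'\"]+)['\"])", _re_demo.M)

_sample = (
    "from libc.math cimport sqrt\n"
    "cimport numpy, cython\n"
    'cdef extern from "mylib.h":\n'
    '    include "helpers.pxi"\n'
)
for _m in _dep_re.finditer(_sample):
    print(_m.groups())
# -> ('libc.math', None, None, None)
#    (None, 'numpy, cython', None, None)
#    (None, None, 'mylib.h', None)
#    (None, None, None, 'helpers.pxi')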
with Utils.open_source_file(source_filename, error_handling='ignore') as fh: source = fh.read() distutils_info = DistutilsInfo(source) source, literals = strip_string_literals(source) source = source.replace('\\\n', ' ').replace('\t', ' ') # TODO: pure mode cimports = [] includes = [] externs = [] for m in dependency_regex.finditer(source): cimport_from, cimport_list, extern, include = m.groups() if cimport_from: cimports.append(cimport_from) m_after_from = dependency_after_from_regex.search(source, pos=m.end()) if m_after_from: multiline, one_line = m_after_from.groups() subimports = multiline or one_line cimports.extend("{0}.{1}".format(cimport_from, s.strip()) for s in subimports.split(',')) elif cimport_list: cimports.extend(x.strip() for x in cimport_list.split(",")) elif extern: externs.append(literals[extern]) else: includes.append(literals[include]) return cimports, includes, externs, distutils_info class DependencyTree(object): def __init__(self, context, quiet=False): self.context = context self.quiet = quiet self._transitive_cache = {} def parse_dependencies(self, source_filename): if path_exists(source_filename): source_filename = os.path.normpath(source_filename) return parse_dependencies(source_filename) @cached_method def included_files(self, filename): # This is messy because included files are textually included, resolving # cimports (but not includes) relative to the including file. all = set() for include in self.parse_dependencies(filename)[1]: include_path = join_path(os.path.dirname(filename), include) if not path_exists(include_path): include_path = self.context.find_include_file(include, None) if include_path: if '.' + os.path.sep in include_path: include_path = os.path.normpath(include_path) all.add(include_path) all.update(self.included_files(include_path)) elif not self.quiet: print("Unable to locate '%s' referenced from '%s'" % (filename, include)) return all @cached_method def cimports_externs_incdirs(self, filename): # This is really ugly. Nested cimports are resolved with respect to the # includer, but includes are resolved with respect to the includee. cimports, includes, externs = self.parse_dependencies(filename)[:3] cimports = set(cimports) externs = set(externs) incdirs = set() for include in self.included_files(filename): included_cimports, included_externs, included_incdirs = self.cimports_externs_incdirs(include) cimports.update(included_cimports) externs.update(included_externs) incdirs.update(included_incdirs) externs, incdir = normalize_existing(filename, externs) if incdir: incdirs.add(incdir) return tuple(cimports), externs, incdirs def cimports(self, filename): return self.cimports_externs_incdirs(filename)[0] def package(self, filename): return package(filename) def fully_qualified_name(self, filename): return fully_qualified_name(filename) @cached_method def find_pxd(self, module, filename=None): is_relative = module[0] == '.' if is_relative and not filename: raise NotImplementedError("New relative imports.") if filename is not None: module_path = module.split('.') if is_relative: module_path.pop(0) # just explicitly relative package_path = list(self.package(filename)) while module_path and not module_path[0]: try: package_path.pop() except IndexError: return None # FIXME: error? module_path.pop(0) relative = '.'.join(package_path + module_path) pxd = self.context.find_pxd_file(relative, None) if pxd: return pxd if is_relative: return None # FIXME: error? 
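# ---------------------------------------------------------------------
# An end-to-end sketch of parse_dependencies() (defined above) on a
# scratch .pyx file.  The file name and directory are made up; the
# expected outputs follow the regex and DistutilsInfo logic shown here.
import os as _os, tempfile as _tempfile
from Cython.Build.Dependencies import parse_dependencies

_pyx = _os.path.join(_tempfile.mkdtemp(), "demo.pyx")
with open(_pyx, "w") as _f:
    _f.write("# distutils: libraries = m\n"
             "from libc.math cimport sqrt\n"
             'cdef extern from "mylib.h":\n'
             "    double f(double)\n")
_cimports, _includes, _externs, _info = parse_dependencies(_pyx)
print(_cimports)     # -> ['libc.math']
print(_externs)      # -> ['mylib.h']
print(_info.values)  # -> {'libraries': ['m']}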
return self.context.find_pxd_file(module, None) @cached_method def cimported_files(self, filename): if filename[-4:] == '.pyx' and path_exists(filename[:-4] + '.pxd'): pxd_list = [filename[:-4] + '.pxd'] else: pxd_list = [] # Cimports generates all possible combinations package.module # when imported as from package cimport module. for module in self.cimports(filename): if module[:7] == 'cython.' or module == 'cython': continue pxd_file = self.find_pxd(module, filename) if pxd_file is not None: pxd_list.append(pxd_file) return tuple(pxd_list) @cached_method def immediate_dependencies(self, filename): all = set([filename]) all.update(self.cimported_files(filename)) all.update(self.included_files(filename)) return all def all_dependencies(self, filename): return self.transitive_merge(filename, self.immediate_dependencies, set.union) @cached_method def timestamp(self, filename): return os.path.getmtime(filename) def extract_timestamp(self, filename): return self.timestamp(filename), filename def newest_dependency(self, filename): return max([self.extract_timestamp(f) for f in self.all_dependencies(filename)]) def transitive_fingerprint(self, filename, module, compilation_options): r""" Return a fingerprint of a cython file that is about to be cythonized. Fingerprints are looked up in future compilations. If the fingerprint is found, the cythonization can be skipped. The fingerprint must incorporate everything that has an influence on the generated code. """ try: m = hashlib.md5(__version__.encode('UTF-8')) m.update(file_hash(filename).encode('UTF-8')) for x in sorted(self.all_dependencies(filename)): if os.path.splitext(x)[1] not in ('.c', '.cpp', '.h'): m.update(file_hash(x).encode('UTF-8')) # Include the module attributes that change the compilation result # in the fingerprint. We do not iterate over module.__dict__ and # include almost everything here as users might extend Extension # with arbitrary (random) attributes that would lead to cache # misses. m.update(str(( module.language, getattr(module, 'py_limited_api', False), getattr(module, 'np_pythran', False) )).encode('UTF-8')) m.update(compilation_options.get_fingerprint().encode('UTF-8')) return m.hexdigest() except IOError: return None def distutils_info0(self, filename): info = self.parse_dependencies(filename)[3] kwds = info.values cimports, externs, incdirs = self.cimports_externs_incdirs(filename) basedir = os.getcwd() # Add dependencies on "cdef extern from ..." files if externs: externs = _make_relative(externs, basedir) if 'depends' in kwds: kwds['depends'] = list(set(kwds['depends']).union(externs)) else: kwds['depends'] = list(externs) # Add include_dirs to ensure that the C compiler will find the # "cdef extern from ..." 
files if incdirs: include_dirs = list(kwds.get('include_dirs', [])) for inc in _make_relative(incdirs, basedir): if inc not in include_dirs: include_dirs.append(inc) kwds['include_dirs'] = include_dirs return info def distutils_info(self, filename, aliases=None, base=None): return (self.transitive_merge(filename, self.distutils_info0, DistutilsInfo.merge) .subs(aliases) .merge(base)) def transitive_merge(self, node, extract, merge): try: seen = self._transitive_cache[extract, merge] except KeyError: seen = self._transitive_cache[extract, merge] = {} return self.transitive_merge_helper( node, extract, merge, seen, {}, self.cimported_files)[0] def transitive_merge_helper(self, node, extract, merge, seen, stack, outgoing): if node in seen: return seen[node], None deps = extract(node) if node in stack: return deps, node try: stack[node] = len(stack) loop = None for next in outgoing(node): sub_deps, sub_loop = self.transitive_merge_helper(next, extract, merge, seen, stack, outgoing) if sub_loop is not None: if loop is not None and stack[loop] < stack[sub_loop]: pass else: loop = sub_loop deps = merge(deps, sub_deps) if loop == node: loop = None if loop is None: seen[node] = deps return deps, loop finally: del stack[node] _dep_tree = None def create_dependency_tree(ctx=None, quiet=False): global _dep_tree if _dep_tree is None: if ctx is None: ctx = Context(["."], CompilationOptions(default_options)) _dep_tree = DependencyTree(ctx, quiet=quiet) return _dep_tree # If this changes, change also docs/src/reference/compilation.rst # which mentions this function def default_create_extension(template, kwds): if 'depends' in kwds: include_dirs = kwds.get('include_dirs', []) + ["."] depends = resolve_depends(kwds['depends'], include_dirs) kwds['depends'] = sorted(set(depends + template.depends)) t = template.__class__ ext = t(**kwds) metadata = dict(distutils=kwds, module_name=kwds['name']) return (ext, metadata) # This may be useful for advanced users? def create_extension_list(patterns, exclude=None, ctx=None, aliases=None, quiet=False, language=None, exclude_failures=False): if language is not None: print('Warning: passing language={0!r} to cythonize() is deprecated. ' 'Instead, put "# distutils: language={0}" in your .pyx or .pxd file(s)'.format(language)) if exclude is None: exclude = [] if patterns is None: return [], {} elif isinstance(patterns, basestring) or not isinstance(patterns, collections.Iterable): patterns = [patterns] explicit_modules = set([m.name for m in patterns if isinstance(m, Extension)]) seen = set() deps = create_dependency_tree(ctx, quiet=quiet) to_exclude = set() if not isinstance(exclude, list): exclude = [exclude] for pattern in exclude: to_exclude.update(map(os.path.abspath, extended_iglob(pattern))) module_list = [] module_metadata = {} # workaround for setuptools if 'setuptools' in sys.modules: Extension_distutils = sys.modules['setuptools.extension']._Extension Extension_setuptools = sys.modules['setuptools'].Extension else: # dummy class, in case we do not have setuptools Extension_distutils = Extension class Extension_setuptools(Extension): pass # if no create_extension() function is defined, use a simple # default function. 
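# ---------------------------------------------------------------------
# A stripped-down sketch of the idea behind transitive_merge() /
# transitive_merge_helper() above: merge per-node data along outgoing
# edges without recursing forever on cycles (b.pxd <-> c.pxd here).
# The real code additionally caches results and tracks where a loop
# closes so completed nodes can be memoized safely.
def _transitive_union(node, edges, _active=None):
    if _active is None:
        _active = set()
    if node in _active:        # back-edge: cut the cycle
        return set()
    _active.add(node)
    result = {node}
    for _next in edges.get(node, ()):
        result |= _transitive_union(_next, edges, _active)
    _active.discard(node)
    return result

_edges = {"a.pyx": ["b.pxd"], "b.pxd": ["c.pxd"], "c.pxd": ["b.pxd"]}
print(sorted(_transitive_union("a.pyx", _edges)))
# -> ['a.pyx', 'b.pxd', 'c.pxd']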
create_extension = ctx.options.create_extension or default_create_extension for pattern in patterns: if isinstance(pattern, str): filepattern = pattern template = Extension(pattern, []) # Fake Extension without sources name = '*' base = None ext_language = language elif isinstance(pattern, (Extension_distutils, Extension_setuptools)): cython_sources = [s for s in pattern.sources if os.path.splitext(s)[1] in ('.py', '.pyx')] if cython_sources: filepattern = cython_sources[0] if len(cython_sources) > 1: print("Warning: Multiple cython sources found for extension '%s': %s\n" "See http://cython.readthedocs.io/en/latest/src/userguide/sharing_declarations.html " "for sharing declarations among Cython files." % (pattern.name, cython_sources)) else: # ignore non-cython modules module_list.append(pattern) continue template = pattern name = template.name base = DistutilsInfo(exn=template) ext_language = None # do not override whatever the Extension says else: msg = str("pattern is not of type str nor subclass of Extension (%s)" " but of type %s and class %s" % (repr(Extension), type(pattern), pattern.__class__)) raise TypeError(msg) for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern): if os.path.abspath(file) in to_exclude: continue module_name = deps.fully_qualified_name(file) if '*' in name: if module_name in explicit_modules: continue elif name: module_name = name Utils.raise_error_if_module_name_forbidden(module_name) if module_name not in seen: try: kwds = deps.distutils_info(file, aliases, base).values except Exception: if exclude_failures: continue raise if base is not None: for key, value in base.values.items(): if key not in kwds: kwds[key] = value kwds['name'] = module_name sources = [file] + [m for m in template.sources if m != filepattern] if 'sources' in kwds: # allow users to add .c files etc. for source in kwds['sources']: source = encode_filename_in_py2(source) if source not in sources: sources.append(source) kwds['sources'] = sources if ext_language and 'language' not in kwds: kwds['language'] = ext_language np_pythran = kwds.pop('np_pythran', False) # Create the new extension m, metadata = create_extension(template, kwds) m.np_pythran = np_pythran or getattr(m, 'np_pythran', False) if m.np_pythran: update_pythran_extension(m) module_list.append(m) # Store metadata (this will be written as JSON in the # generated C file but otherwise has no purpose) module_metadata[module_name] = metadata if file not in m.sources: # Old setuptools unconditionally replaces .pyx with .c/.cpp target_file = os.path.splitext(file)[0] + ('.cpp' if m.language == 'c++' else '.c') try: m.sources.remove(target_file) except ValueError: # never seen this in the wild, but probably better to warn about this unexpected case print("Warning: Cython source file not found in sources list, adding %s" % file) m.sources.insert(0, file) seen.add(name) return module_list, module_metadata # This is the user-exposed entry point. def cythonize(module_list, exclude=None, nthreads=0, aliases=None, quiet=False, force=False, language=None, exclude_failures=False, **options): """ Compile a set of source modules into C/C++ files and return a list of distutils Extension objects for them. :param module_list: As module list, pass either a glob pattern, a list of glob patterns or a list of Extension objects. The latter allows you to configure the extensions separately through the normal distutils options. You can also pass Extension objects that have glob patterns as their sources. 
Then, cythonize will resolve the pattern and create a copy of the Extension for every matching file. :param exclude: When passing glob patterns as ``module_list``, you can exclude certain module names explicitly by passing them into the ``exclude`` option. :param nthreads: The number of concurrent builds for parallel compilation (requires the ``multiprocessing`` module). :param aliases: If you want to use compiler directives like ``# distutils: ...`` but can only know at compile time (when running the ``setup.py``) which values to use, you can use aliases and pass a dictionary mapping those aliases to Python strings when calling :func:`cythonize`. As an example, say you want to use the compiler directive ``# distutils: include_dirs = ../static_libs/include/`` but this path isn't always fixed and you want to find it when running the ``setup.py``. You can then do ``# distutils: include_dirs = MY_HEADERS``, find the value of ``MY_HEADERS`` in the ``setup.py``, put it in a python variable called ``foo`` as a string, and then call ``cythonize(..., aliases={'MY_HEADERS': foo})``. :param quiet: If True, Cython won't print error and warning messages during the compilation. :param force: Forces the recompilation of the Cython modules, even if the timestamps don't indicate that a recompilation is necessary. :param language: To globally enable C++ mode, you can pass ``language='c++'``. Otherwise, this will be determined at a per-file level based on compiler directives. This affects only modules found based on file names. Extension instances passed into :func:`cythonize` will not be changed. It is recommended to rather use the compiler directive ``# distutils: language = c++`` than this option. :param exclude_failures: For a broad 'try to compile' mode that ignores compilation failures and simply excludes the failed extensions, pass ``exclude_failures=True``. Note that this only really makes sense for compiling ``.py`` files which can also be used without compilation. :param annotate: If ``True``, will produce an HTML file for each of the ``.pyx`` or ``.py`` files compiled. The HTML file gives an indication of how much Python interaction there is in each of the source code lines, compared to plain C code. It also allows you to see the C/C++ code generated for each line of Cython code. This report is invaluable when optimizing a function for speed, and for determining when to :ref:`release the GIL <nogil>`: in general, a ``nogil`` block may contain only "white" code. See examples in :ref:`determining_where_to_add_types` or :ref:`primes`. :param compiler_directives: Allows you to set compiler directives in the ``setup.py`` like this: ``compiler_directives={'embedsignature': True}``. See :ref:`compiler-directives`.
""" if exclude is None: exclude = [] if 'include_path' not in options: options['include_path'] = ['.'] if 'common_utility_include_dir' in options: safe_makedirs(options['common_utility_include_dir']) if pythran is None: pythran_options = None else: pythran_options = CompilationOptions(**options) pythran_options.cplus = True pythran_options.np_pythran = True c_options = CompilationOptions(**options) cpp_options = CompilationOptions(**options); cpp_options.cplus = True ctx = c_options.create_context() options = c_options module_list, module_metadata = create_extension_list( module_list, exclude=exclude, ctx=ctx, quiet=quiet, exclude_failures=exclude_failures, language=language, aliases=aliases) deps = create_dependency_tree(ctx, quiet=quiet) build_dir = getattr(options, 'build_dir', None) def copy_to_build_dir(filepath, root=os.getcwd()): filepath_abs = os.path.abspath(filepath) if os.path.isabs(filepath): filepath = filepath_abs if filepath_abs.startswith(root): # distutil extension depends are relative to cwd mod_dir = join_path(build_dir, os.path.dirname(_relpath(filepath, root))) copy_once_if_newer(filepath_abs, mod_dir) modules_by_cfile = collections.defaultdict(list) to_compile = [] for m in module_list: if build_dir: for dep in m.depends: copy_to_build_dir(dep) cy_sources = [ source for source in m.sources if os.path.splitext(source)[1] in ('.pyx', '.py')] if len(cy_sources) == 1: # normal "special" case: believe the Extension module name to allow user overrides full_module_name = m.name else: # infer FQMN from source files full_module_name = None new_sources = [] for source in m.sources: base, ext = os.path.splitext(source) if ext in ('.pyx', '.py'): if m.np_pythran: c_file = base + '.cpp' options = pythran_options elif m.language == 'c++': c_file = base + '.cpp' options = cpp_options else: c_file = base + '.c' options = c_options # setup for out of place build directory if enabled if build_dir: if os.path.isabs(c_file): warnings.warn("build_dir has no effect for absolute source paths") c_file = os.path.join(build_dir, c_file) dir = os.path.dirname(c_file) safe_makedirs_once(dir) if os.path.exists(c_file): c_timestamp = os.path.getmtime(c_file) else: c_timestamp = -1 # Priority goes first to modified files, second to direct # dependents, and finally to indirect dependents. if c_timestamp < deps.timestamp(source): dep_timestamp, dep = deps.timestamp(source), source priority = 0 else: dep_timestamp, dep = deps.newest_dependency(source) priority = 2 - (dep in deps.immediate_dependencies(source)) if force or c_timestamp < dep_timestamp: if not quiet and not force: if source == dep: print("Compiling %s because it changed." % source) else: print("Compiling %s because it depends on %s." % (source, dep)) if not force and options.cache: fingerprint = deps.transitive_fingerprint(source, m, options) else: fingerprint = None to_compile.append(( priority, source, c_file, fingerprint, quiet, options, not exclude_failures, module_metadata.get(m.name), full_module_name)) new_sources.append(c_file) modules_by_cfile[c_file].append(m) else: new_sources.append(source) if build_dir: copy_to_build_dir(source) m.sources = new_sources if options.cache: if not os.path.exists(options.cache): os.makedirs(options.cache) to_compile.sort() # Drop "priority" component of "to_compile" entries and add a # simple progress indicator. 
N = len(to_compile) progress_fmt = "[{0:%d}/{1}] " % len(str(N)) for i in range(N): progress = progress_fmt.format(i+1, N) to_compile[i] = to_compile[i][1:] + (progress,) if N <= 1: nthreads = 0 if nthreads: # Requires multiprocessing (or Python >= 2.6) try: import multiprocessing pool = multiprocessing.Pool( nthreads, initializer=_init_multiprocessing_helper) except (ImportError, OSError): print("multiprocessing required for parallel cythonization") nthreads = 0 else: # This is a bit more involved than it should be, because KeyboardInterrupts # break the multiprocessing workers when using a normal pool.map(). # See, for example: # http://noswap.com/blog/python-multiprocessing-keyboardinterrupt try: result = pool.map_async(cythonize_one_helper, to_compile, chunksize=1) pool.close() while not result.ready(): try: result.get(99999) # seconds except multiprocessing.TimeoutError: pass except KeyboardInterrupt: pool.terminate() raise pool.join() if not nthreads: for args in to_compile: cythonize_one(*args) if exclude_failures: failed_modules = set() for c_file, modules in modules_by_cfile.items(): if not os.path.exists(c_file): failed_modules.update(modules) elif os.path.getsize(c_file) < 200: f = io_open(c_file, 'r', encoding='iso8859-1') try: if f.read(len('#error ')) == '#error ': # dead compilation result failed_modules.update(modules) finally: f.close() if failed_modules: for module in failed_modules: module_list.remove(module) print("Failed compilations: %s" % ', '.join(sorted([ module.name for module in failed_modules]))) if options.cache: cleanup_cache(options.cache, getattr(options, 'cache_size', 1024 * 1024 * 100)) # cythonize() is often followed by the (non-Python-buffered) # compiler output, flush now to avoid interleaving output. sys.stdout.flush() return module_list if os.environ.get('XML_RESULTS'): compile_result_dir = os.environ['XML_RESULTS'] def record_results(func): def with_record(*args): t = time.time() success = True try: try: func(*args) except: success = False finally: t = time.time() - t module = fully_qualified_name(args[0]) name = "cythonize." + module failures = 1 - success if success: failure_item = "" else: failure_item = "failure" output = open(os.path.join(compile_result_dir, name + ".xml"), "w") output.write(""" %(failure_item)s """.strip() % locals()) output.close() return with_record else: def record_results(func): return func # TODO: Share context? Issue: pyx processing leaks into pxd module @record_results def cythonize_one(pyx_file, c_file, fingerprint, quiet, options=None, raise_on_failure=True, embedded_metadata=None, full_module_name=None, progress=""): from ..Compiler.Main import compile_single, default_options from ..Compiler.Errors import CompileError, PyrexError if fingerprint: if not os.path.exists(options.cache): safe_makedirs(options.cache) # Cython-generated c files are highly compressible. # (E.g. a compression ratio of about 10 for Sage). 
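# ---------------------------------------------------------------------
# A standalone sketch of the KeyboardInterrupt-safe multiprocessing
# pattern used by cythonize() above: workers ignore SIGINT (mirroring
# _init_multiprocessing_helper below) and the parent polls map_async
# with a finite timeout so Ctrl-C reaches the main thread.  The worker
# function here is a placeholder.
import multiprocessing as _mp

def _ignore_sigint():
    import signal
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def _square(n):
    return n * n

if __name__ == "__main__":
    _pool = _mp.Pool(2, initializer=_ignore_sigint)
    try:
        _result = _pool.map_async(_square, range(8), chunksize=1)
        _pool.close()
        while not _result.ready():
            try:
                _result.get(99999)       # finite timeout keeps us interruptible
            except _mp.TimeoutError:
                pass
    except KeyboardInterrupt:
        _pool.terminate()
        raise
    _pool.join()
    print(_result.get())                 # [0, 1, 4, 9, 16, 25, 36, 49]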
fingerprint_file_base = join_path( options.cache, "%s-%s" % (os.path.basename(c_file), fingerprint)) gz_fingerprint_file = fingerprint_file_base + gzip_ext zip_fingerprint_file = fingerprint_file_base + '.zip' if os.path.exists(gz_fingerprint_file) or os.path.exists(zip_fingerprint_file): if not quiet: print("%sFound compiled %s in cache" % (progress, pyx_file)) if os.path.exists(gz_fingerprint_file): os.utime(gz_fingerprint_file, None) with contextlib.closing(gzip_open(gz_fingerprint_file, 'rb')) as g: with contextlib.closing(open(c_file, 'wb')) as f: shutil.copyfileobj(g, f) else: os.utime(zip_fingerprint_file, None) dirname = os.path.dirname(c_file) with contextlib.closing(zipfile.ZipFile(zip_fingerprint_file)) as z: for artifact in z.namelist(): z.extract(artifact, os.path.join(dirname, artifact)) return if not quiet: print("%sCythonizing %s" % (progress, pyx_file)) if options is None: options = CompilationOptions(default_options) options.output_file = c_file options.embedded_metadata = embedded_metadata any_failures = 0 try: result = compile_single(pyx_file, options, full_module_name=full_module_name) if result.num_errors > 0: any_failures = 1 except (EnvironmentError, PyrexError) as e: sys.stderr.write('%s\n' % e) any_failures = 1 # XXX import traceback traceback.print_exc() except Exception: if raise_on_failure: raise import traceback traceback.print_exc() any_failures = 1 if any_failures: if raise_on_failure: raise CompileError(None, pyx_file) elif os.path.exists(c_file): os.remove(c_file) elif fingerprint: artifacts = list(filter(None, [ getattr(result, attr, None) for attr in ('c_file', 'h_file', 'api_file', 'i_file')])) if len(artifacts) == 1: fingerprint_file = gz_fingerprint_file with contextlib.closing(open(c_file, 'rb')) as f: with contextlib.closing(gzip_open(fingerprint_file + '.tmp', 'wb')) as g: shutil.copyfileobj(f, g) else: fingerprint_file = zip_fingerprint_file with contextlib.closing(zipfile.ZipFile( fingerprint_file + '.tmp', 'w', zipfile_compression_mode)) as zip: for artifact in artifacts: zip.write(artifact, os.path.basename(artifact)) os.rename(fingerprint_file + '.tmp', fingerprint_file) def cythonize_one_helper(m): import traceback try: return cythonize_one(*m) except Exception: traceback.print_exc() raise def _init_multiprocessing_helper(): # KeyboardInterrupt kills workers, so don't let them get it import signal signal.signal(signal.SIGINT, signal.SIG_IGN) def cleanup_cache(cache, target_size, ratio=.85): try: p = subprocess.Popen(['du', '-s', '-k', os.path.abspath(cache)], stdout=subprocess.PIPE) res = p.wait() if res == 0: total_size = 1024 * int(p.stdout.read().strip().split()[0]) if total_size < target_size: return except (OSError, ValueError): pass total_size = 0 all = [] for file in os.listdir(cache): path = join_path(cache, file) s = os.stat(path) total_size += s.st_size all.append((s.st_atime, s.st_size, path)) if total_size > target_size: for time, size, file in reversed(sorted(all)): os.unlink(file) total_size -= size if total_size < target_size * ratio: break ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Distutils.py0000644000175000017500000000006100000000000027077 0ustar00vincentvincent00000000000000from Cython.Distutils.build_ext import build_ext ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 
cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Inline.py0000644000175000017500000003066000000000000026341 0ustar00vincentvincent00000000000000from __future__ import absolute_import import sys, os, re, inspect import imp try: import hashlib except ImportError: import md5 as hashlib from distutils.core import Distribution, Extension from distutils.command.build_ext import build_ext import Cython from ..Compiler.Main import Context, CompilationOptions, default_options from ..Compiler.ParseTreeTransforms import (CythonTransform, SkipDeclarations, AnalyseDeclarationsTransform, EnvTransform) from ..Compiler.TreeFragment import parse_from_strings from ..Compiler.StringEncoding import _unicode from .Dependencies import strip_string_literals, cythonize, cached_function from ..Compiler import Pipeline, Nodes from ..Utils import get_cython_cache_dir import cython as cython_module IS_PY3 = sys.version_info >= (3, 0) # A utility function to convert user-supplied ASCII strings to unicode. if sys.version_info[0] < 3: def to_unicode(s): if isinstance(s, bytes): return s.decode('ascii') else: return s else: to_unicode = lambda x: x class UnboundSymbols(EnvTransform, SkipDeclarations): def __init__(self): CythonTransform.__init__(self, None) self.unbound = set() def visit_NameNode(self, node): if not self.current_env().lookup(node.name): self.unbound.add(node.name) return node def __call__(self, node): super(UnboundSymbols, self).__call__(node) return self.unbound @cached_function def unbound_symbols(code, context=None): code = to_unicode(code) if context is None: context = Context([], default_options) from ..Compiler.ParseTreeTransforms import AnalyseDeclarationsTransform tree = parse_from_strings('(tree fragment)', code) for phase in Pipeline.create_pipeline(context, 'pyx'): if phase is None: continue tree = phase(tree) if isinstance(phase, AnalyseDeclarationsTransform): break try: import builtins except ImportError: import __builtin__ as builtins return tuple(UnboundSymbols()(tree) - set(dir(builtins))) def unsafe_type(arg, context=None): py_type = type(arg) if py_type is int: return 'long' else: return safe_type(arg, context) def safe_type(arg, context=None): py_type = type(arg) if py_type in (list, tuple, dict, str): return py_type.__name__ elif py_type is complex: return 'double complex' elif py_type is float: return 'double' elif py_type is bool: return 'bint' elif 'numpy' in sys.modules and isinstance(arg, sys.modules['numpy'].ndarray): return 'numpy.ndarray[numpy.%s_t, ndim=%s]' % (arg.dtype.name, arg.ndim) else: for base_type in py_type.__mro__: if base_type.__module__ in ('__builtin__', 'builtins'): return 'object' module = context.find_module(base_type.__module__, need_pxd=False) if module: entry = module.lookup(base_type.__name__) if entry.is_type: return '%s.%s' % (base_type.__module__, base_type.__name__) return 'object' def _get_build_extension(): dist = Distribution() # Ensure the build respects distutils configuration by parsing # the configuration files config_files = dist.find_config_files() dist.parse_config_files(config_files) build_extension = build_ext(dist) build_extension.finalize_options() return build_extension @cached_function def _create_context(cython_include_dirs): return Context(list(cython_include_dirs), default_options) _cython_inline_cache = {} _cython_inline_default_context = _create_context(('.',)) def _populate_unbound(kwds, unbound_symbols, locals=None, globals=None): for symbol in unbound_symbols: if symbol not in kwds: if locals is None or globals is 
None: calling_frame = inspect.currentframe().f_back.f_back.f_back if locals is None: locals = calling_frame.f_locals if globals is None: globals = calling_frame.f_globals if symbol in locals: kwds[symbol] = locals[symbol] elif symbol in globals: kwds[symbol] = globals[symbol] else: print("Couldn't find %r" % symbol) def cython_inline(code, get_type=unsafe_type, lib_dir=os.path.join(get_cython_cache_dir(), 'inline'), cython_include_dirs=None, cython_compiler_directives=None, force=False, quiet=False, locals=None, globals=None, language_level=None, **kwds): if get_type is None: get_type = lambda x: 'object' ctx = _create_context(tuple(cython_include_dirs)) if cython_include_dirs else _cython_inline_default_context # Fast path if this has been called in this session. _unbound_symbols = _cython_inline_cache.get(code) if _unbound_symbols is not None: _populate_unbound(kwds, _unbound_symbols, locals, globals) args = sorted(kwds.items()) arg_sigs = tuple([(get_type(value, ctx), arg) for arg, value in args]) invoke = _cython_inline_cache.get((code, arg_sigs)) if invoke is not None: arg_list = [arg[1] for arg in args] return invoke(*arg_list) orig_code = code code = to_unicode(code) code, literals = strip_string_literals(code) code = strip_common_indent(code) if locals is None: locals = inspect.currentframe().f_back.f_back.f_locals if globals is None: globals = inspect.currentframe().f_back.f_back.f_globals try: _cython_inline_cache[orig_code] = _unbound_symbols = unbound_symbols(code) _populate_unbound(kwds, _unbound_symbols, locals, globals) except AssertionError: if not quiet: # Parsing from strings not fully supported (e.g. cimports). print("Could not parse code as a string (to extract unbound symbols).") cython_compiler_directives = dict(cython_compiler_directives or {}) if language_level is not None: cython_compiler_directives['language_level'] = language_level cimports = [] for name, arg in list(kwds.items()): if arg is cython_module: cimports.append('\ncimport cython as %s' % name) del kwds[name] arg_names = sorted(kwds) arg_sigs = tuple([(get_type(kwds[arg], ctx), arg) for arg in arg_names]) key = orig_code, arg_sigs, sys.version_info, sys.executable, language_level, Cython.__version__ module_name = "_cython_inline_" + hashlib.md5(_unicode(key).encode('utf-8')).hexdigest() if module_name in sys.modules: module = sys.modules[module_name] else: build_extension = None if cython_inline.so_ext is None: # Figure out and cache current extension suffix build_extension = _get_build_extension() cython_inline.so_ext = build_extension.get_ext_filename('') module_path = os.path.join(lib_dir, module_name + cython_inline.so_ext) if not os.path.exists(lib_dir): os.makedirs(lib_dir) if force or not os.path.isfile(module_path): cflags = [] c_include_dirs = [] qualified = re.compile(r'([.\w]+)[.]') for type, _ in arg_sigs: m = qualified.match(type) if m: cimports.append('\ncimport %s' % m.groups()[0]) # one special case if m.groups()[0] == 'numpy': import numpy c_include_dirs.append(numpy.get_include()) # cflags.append('-Wno-unused') module_body, func_body = extract_func_code(code) params = ', '.join(['%s %s' % a for a in arg_sigs]) module_code = """ %(module_body)s %(cimports)s def __invoke(%(params)s): %(func_body)s return locals() """ % {'cimports': '\n'.join(cimports), 'module_body': module_body, 'params': params, 'func_body': func_body } for key, value in literals.items(): module_code = module_code.replace(key, value) pyx_file = os.path.join(lib_dir, module_name + '.pyx') fh = open(pyx_file, 'w') try: 
fh.write(module_code) finally: fh.close() extension = Extension( name = module_name, sources = [pyx_file], include_dirs = c_include_dirs, extra_compile_args = cflags) if build_extension is None: build_extension = _get_build_extension() build_extension.extensions = cythonize( [extension], include_path=cython_include_dirs or ['.'], compiler_directives=cython_compiler_directives, quiet=quiet) build_extension.build_temp = os.path.dirname(pyx_file) build_extension.build_lib = lib_dir build_extension.run() module = imp.load_dynamic(module_name, module_path) _cython_inline_cache[orig_code, arg_sigs] = module.__invoke arg_list = [kwds[arg] for arg in arg_names] return module.__invoke(*arg_list) # Cached suffix used by cython_inline above. None should get # overridden with actual value upon the first cython_inline invocation cython_inline.so_ext = None _find_non_space = re.compile('[^ ]').search def strip_common_indent(code): min_indent = None lines = code.splitlines() for line in lines: match = _find_non_space(line) if not match: continue # blank indent = match.start() if line[indent] == '#': continue # comment if min_indent is None or min_indent > indent: min_indent = indent for ix, line in enumerate(lines): match = _find_non_space(line) if not match or not line or line[indent:indent+1] == '#': continue lines[ix] = line[min_indent:] return '\n'.join(lines) module_statement = re.compile(r'^((cdef +(extern|class))|cimport|(from .+ cimport)|(from .+ import +[*]))') def extract_func_code(code): module = [] function = [] current = function code = code.replace('\t', ' ') lines = code.split('\n') for line in lines: if not line.startswith(' '): if module_statement.match(line): current = module else: current = function current.append(line) return '\n'.join(module), ' ' + '\n '.join(function) try: from inspect import getcallargs except ImportError: def getcallargs(func, *arg_values, **kwd_values): all = {} args, varargs, kwds, defaults = inspect.getargspec(func) if varargs is not None: all[varargs] = arg_values[len(args):] for name, value in zip(args, arg_values): all[name] = value for name, value in list(kwd_values.items()): if name in args: if name in all: raise TypeError("Duplicate argument %s" % name) all[name] = kwd_values.pop(name) if kwds is not None: all[kwds] = kwd_values elif kwd_values: raise TypeError("Unexpected keyword arguments: %s" % list(kwd_values)) if defaults is None: defaults = () first_default = len(args) - len(defaults) for ix, name in enumerate(args): if name not in all: if ix >= first_default: all[name] = defaults[ix - first_default] else: raise TypeError("Missing argument: %s" % name) return all def get_body(source): ix = source.index(':') if source[:5] == 'lambda': return "return %s" % source[ix+1:] else: return source[ix+1:] # Lots to be done here... It would be especially cool if compiled functions # could invoke each other quickly. 
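# ---------------------------------------------------------------------
# A sketch of the inline compiler above called through its public
# entry point (requires Cython plus a working C compiler at runtime).
# Types are inferred from the passed values via unsafe_type()/
# safe_type(), and unbound names are pulled from the caller's scope;
# expected results follow the test suite further down.
from Cython.Shadow import inline

a, b = 1, 2
print(inline("return a + b"))                                     # -> 3
print(inline("return x * y", x=3, y=4.0))                         # -> 12.0
print(inline("cimport cython\nreturn cython.typeof(v)", v=1.0))   # -> 'double'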
class RuntimeCompiledFunction(object): def __init__(self, f): self._f = f self._body = get_body(inspect.getsource(f)) def __call__(self, *args, **kwds): all = getcallargs(self._f, *args, **kwds) if IS_PY3: return cython_inline(self._body, locals=self._f.__globals__, globals=self._f.__globals__, **all) else: return cython_inline(self._body, locals=self._f.func_globals, globals=self._f.func_globals, **all) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/IpythonMagic.py0000644000175000017500000005051700000000000027521 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- """ ===================== Cython related magics ===================== Magic command interface for interactive work with Cython .. note:: The ``Cython`` package needs to be installed separately. It can be obtained using ``easy_install`` or ``pip``. Usage ===== To enable the magics below, execute ``%load_ext cython``. ``%%cython`` {CYTHON_DOC} ``%%cython_inline`` {CYTHON_INLINE_DOC} ``%%cython_pyximport`` {CYTHON_PYXIMPORT_DOC} Author: * Brian Granger Code moved from IPython and adapted by: * Martín Gaitán Parts of this code were taken from Cython.inline. """ #----------------------------------------------------------------------------- # Copyright (C) 2010-2011, IPython Development Team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file ipython-COPYING.rst, distributed with this software. #----------------------------------------------------------------------------- from __future__ import absolute_import, print_function import imp import io import os import re import sys import time import copy import distutils.log import textwrap try: reload except NameError: # Python 3 from imp import reload try: import hashlib except ImportError: import md5 as hashlib from distutils.core import Distribution, Extension from distutils.command.build_ext import build_ext from IPython.core import display from IPython.core import magic_arguments from IPython.core.magic import Magics, magics_class, cell_magic from IPython.utils import py3compat try: from IPython.paths import get_ipython_cache_dir except ImportError: # older IPython version from IPython.utils.path import get_ipython_cache_dir from IPython.utils.text import dedent from ..Shadow import __version__ as cython_version from ..Compiler.Errors import CompileError from .Inline import cython_inline from .Dependencies import cythonize PGO_CONFIG = { 'gcc': { 'gen': ['-fprofile-generate', '-fprofile-dir={TEMPDIR}'], 'use': ['-fprofile-use', '-fprofile-correction', '-fprofile-dir={TEMPDIR}'], }, # blind copy from 'configure' script in CPython 3.7 'icc': { 'gen': ['-prof-gen'], 'use': ['-prof-use'], } } PGO_CONFIG['mingw32'] = PGO_CONFIG['gcc'] @magics_class class CythonMagics(Magics): def __init__(self, shell): super(CythonMagics, self).__init__(shell) self._reloads = {} self._code_cache = {} self._pyximport_installed = False def _import_all(self, module): mdict = module.__dict__ if '__all__' in mdict: keys = mdict['__all__'] else: keys = [k for k in mdict if not k.startswith('_')] for k in keys: try: self.shell.push({k: mdict[k]}) except KeyError: msg = "'module' object has no attribute '%s'" % k raise AttributeError(msg) @cell_magic def cython_inline(self, line, cell): """Compile and run a Cython code cell using Cython.inline. This magic simply passes the body of the cell to Cython.inline and returns the result. 
If the variables `a` and `b` are defined in the user's namespace, here is a simple example that returns their sum:: %%cython_inline return a+b For most purposes, we recommend the usage of the `%%cython` magic. """ locs = self.shell.user_global_ns globs = self.shell.user_ns return cython_inline(cell, locals=locs, globals=globs) @cell_magic def cython_pyximport(self, line, cell): """Compile and import a Cython code cell using pyximport. The contents of the cell are written to a `.pyx` file in the current working directory, which is then imported using `pyximport`. This magic requires a module name to be passed:: %%cython_pyximport modulename def f(x): return 2.0*x The compiled module is then imported and all of its symbols are injected into the user's namespace. For most purposes, we recommend the usage of the `%%cython` magic. """ module_name = line.strip() if not module_name: raise ValueError('module name must be given') fname = module_name + '.pyx' with io.open(fname, 'w', encoding='utf-8') as f: f.write(cell) if 'pyximport' not in sys.modules or not self._pyximport_installed: import pyximport pyximport.install() self._pyximport_installed = True if module_name in self._reloads: module = self._reloads[module_name] # Note: reloading extension modules is not actually supported # (requires PEP-489 reinitialisation support). # Don't know why this should ever have worked as it reads here. # All we really need to do is to update the globals below. #reload(module) else: __import__(module_name) module = sys.modules[module_name] self._reloads[module_name] = module self._import_all(module) @magic_arguments.magic_arguments() @magic_arguments.argument( '-a', '--annotate', action='store_true', default=False, help="Produce a colorized HTML version of the source." ) @magic_arguments.argument( '-+', '--cplus', action='store_true', default=False, help="Output a C++ rather than C file." ) @magic_arguments.argument( '-3', dest='language_level', action='store_const', const=3, default=None, help="Select Python 3 syntax." ) @magic_arguments.argument( '-2', dest='language_level', action='store_const', const=2, default=None, help="Select Python 2 syntax." ) @magic_arguments.argument( '-f', '--force', action='store_true', default=False, help="Force the compilation of a new module, even if the source has been " "previously compiled." ) @magic_arguments.argument( '-c', '--compile-args', action='append', default=[], help="Extra flags to pass to compiler via the `extra_compile_args` " "Extension flag (can be specified multiple times)." ) @magic_arguments.argument( '--link-args', action='append', default=[], help="Extra flags to pass to linker via the `extra_link_args` " "Extension flag (can be specified multiple times)." ) @magic_arguments.argument( '-l', '--lib', action='append', default=[], help="Add a library to link the extension against (can be specified " "multiple times)." ) @magic_arguments.argument( '-n', '--name', help="Specify a name for the Cython module." ) @magic_arguments.argument( '-L', dest='library_dirs', metavar='dir', action='append', default=[], help="Add a path to the list of library directories (can be specified " "multiple times)." ) @magic_arguments.argument( '-I', '--include', action='append', default=[], help="Add a path to the list of include directories (can be specified " "multiple times)." ) @magic_arguments.argument( '-S', '--src', action='append', default=[], help="Add a path to the list of src files (can be specified " "multiple times)." 
) @magic_arguments.argument( '--pgo', dest='pgo', action='store_true', default=False, help=("Enable profile guided optimisation in the C compiler. " "Compiles the cell twice and executes it in between to generate a runtime profile.") ) @magic_arguments.argument( '--verbose', dest='quiet', action='store_false', default=True, help=("Print debug information like generated .c/.cpp file location " "and exact gcc/g++ command invoked.") ) @cell_magic def cython(self, line, cell): """Compile and import everything from a Cython code cell. The contents of the cell are written to a `.pyx` file in the directory `IPYTHONDIR/cython` using a filename with the hash of the code. This file is then cythonized and compiled. The resulting module is imported and all of its symbols are injected into the user's namespace. The usage is similar to that of `%%cython_pyximport` but you don't have to pass a module name:: %%cython def f(x): return 2.0*x To compile OpenMP codes, pass the required `--compile-args` and `--link-args`. For example with gcc:: %%cython --compile-args=-fopenmp --link-args=-fopenmp ... To enable profile guided optimisation, pass the ``--pgo`` option. Note that the cell itself needs to take care of establishing a suitable profile when executed. This can be done by implementing the functions to optimise, and then calling them directly in the same cell on some realistic training data like this:: %%cython --pgo def critical_function(data): for item in data: ... # execute function several times to build profile from somewhere import some_typical_data for _ in range(100): critical_function(some_typical_data) In Python 3.5 and later, you can distinguish between the profile and non-profile runs as follows:: if "_pgo_" in __name__: ... # execute critical code here """ args = magic_arguments.parse_argstring(self.cython, line) code = cell if cell.endswith('\n') else cell + '\n' lib_dir = os.path.join(get_ipython_cache_dir(), 'cython') key = (code, line, sys.version_info, sys.executable, cython_version) if not os.path.exists(lib_dir): os.makedirs(lib_dir) if args.pgo: key += ('pgo',) if args.force: # Force a new module name by adding the current time to the # key which is hashed to determine the module name. key += (time.time(),) if args.name: module_name = py3compat.unicode_to_str(args.name) else: module_name = "_cython_magic_" + hashlib.md5(str(key).encode('utf-8')).hexdigest() html_file = os.path.join(lib_dir, module_name + '.html') module_path = os.path.join(lib_dir, module_name + self.so_ext) have_module = os.path.isfile(module_path) need_cythonize = args.pgo or not have_module if args.annotate: if not os.path.isfile(html_file): need_cythonize = True extension = None if need_cythonize: extensions = self._cythonize(module_name, code, lib_dir, args, quiet=args.quiet) assert len(extensions) == 1 extension = extensions[0] self._code_cache[key] = module_name if args.pgo: self._profile_pgo_wrapper(extension, lib_dir) self._build_extension(extension, lib_dir, pgo_step_name='use' if args.pgo else None, quiet=args.quiet) module = imp.load_dynamic(module_name, module_path) self._import_all(module) if args.annotate: try: with io.open(html_file, encoding='utf-8') as f: annotated_html = f.read() except IOError as e: # File could not be opened. Most likely the user has a version # of Cython before 0.15.1 (when `cythonize` learned the # `force` keyword argument) and has already compiled this # exact source without annotation. 
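# ---------------------------------------------------------------------
# A sketch of exercising the %%cython cell magic defined above
# programmatically from a live IPython session (requires IPython and a
# C compiler; a no-op outside IPython).  Equivalent to running a
# "%%cython -3" cell; compiled symbols are pushed into the user
# namespace via _import_all().
from IPython import get_ipython

_ip = get_ipython()                      # None outside IPython
if _ip is not None:
    _ip.run_line_magic("load_ext", "cython")
    _ip.run_cell_magic("cython", "-3",
                       "def f(double x):\n    return 2.0 * x\n")
    print(_ip.user_ns["f"](3.0))         # -> 6.0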
print('Cython completed successfully but the annotated ' 'source could not be read.', file=sys.stderr) print(e, file=sys.stderr) else: return display.HTML(self.clean_annotated_html(annotated_html)) def _profile_pgo_wrapper(self, extension, lib_dir): """ Generate a .c file for a separate extension module that calls the module init function of the original module. This makes sure that the PGO profiler sees the correct .o file of the final module, but it still allows us to import the module under a different name for profiling, before recompiling it into the PGO optimised module. Overwriting and reimporting the same shared library is not portable. """ extension = copy.copy(extension) # shallow copy, do not modify sources in place! module_name = extension.name pgo_module_name = '_pgo_' + module_name pgo_wrapper_c_file = os.path.join(lib_dir, pgo_module_name + '.c') with io.open(pgo_wrapper_c_file, 'w', encoding='utf-8') as f: f.write(textwrap.dedent(u""" #include "Python.h" #if PY_MAJOR_VERSION < 3 extern PyMODINIT_FUNC init%(module_name)s(void); PyMODINIT_FUNC init%(pgo_module_name)s(void); /*proto*/ PyMODINIT_FUNC init%(pgo_module_name)s(void) { PyObject *sys_modules; init%(module_name)s(); if (PyErr_Occurred()) return; sys_modules = PyImport_GetModuleDict(); /* borrowed, no exception, "never" fails */ if (sys_modules) { PyObject *module = PyDict_GetItemString(sys_modules, "%(module_name)s"); if (!module) return; PyDict_SetItemString(sys_modules, "%(pgo_module_name)s", module); Py_DECREF(module); } } #else extern PyMODINIT_FUNC PyInit_%(module_name)s(void); PyMODINIT_FUNC PyInit_%(pgo_module_name)s(void); /*proto*/ PyMODINIT_FUNC PyInit_%(pgo_module_name)s(void) { return PyInit_%(module_name)s(); } #endif """ % {'module_name': module_name, 'pgo_module_name': pgo_module_name})) extension.sources = extension.sources + [pgo_wrapper_c_file] # do not modify in place! 
extension.name = pgo_module_name self._build_extension(extension, lib_dir, pgo_step_name='gen') # import and execute module code to generate profile so_module_path = os.path.join(lib_dir, pgo_module_name + self.so_ext) imp.load_dynamic(pgo_module_name, so_module_path) def _cythonize(self, module_name, code, lib_dir, args, quiet=True): pyx_file = os.path.join(lib_dir, module_name + '.pyx') pyx_file = py3compat.cast_bytes_py2(pyx_file, encoding=sys.getfilesystemencoding()) c_include_dirs = args.include c_src_files = list(map(str, args.src)) if 'numpy' in code: import numpy c_include_dirs.append(numpy.get_include()) with io.open(pyx_file, 'w', encoding='utf-8') as f: f.write(code) extension = Extension( name=module_name, sources=[pyx_file] + c_src_files, include_dirs=c_include_dirs, library_dirs=args.library_dirs, extra_compile_args=args.compile_args, extra_link_args=args.link_args, libraries=args.lib, language='c++' if args.cplus else 'c', ) try: opts = dict( quiet=quiet, annotate=args.annotate, force=True, ) if args.language_level is not None: assert args.language_level in (2, 3) opts['language_level'] = args.language_level elif sys.version_info[0] >= 3: opts['language_level'] = 3 return cythonize([extension], **opts) except CompileError: return None def _build_extension(self, extension, lib_dir, temp_dir=None, pgo_step_name=None, quiet=True): build_extension = self._get_build_extension( extension, lib_dir=lib_dir, temp_dir=temp_dir, pgo_step_name=pgo_step_name) old_threshold = None try: if not quiet: old_threshold = distutils.log.set_threshold(distutils.log.DEBUG) build_extension.run() finally: if not quiet and old_threshold is not None: distutils.log.set_threshold(old_threshold) def _add_pgo_flags(self, build_extension, step_name, temp_dir): compiler_type = build_extension.compiler.compiler_type if compiler_type == 'unix': compiler_cmd = build_extension.compiler.compiler_so # TODO: we could try to call "[cmd] --version" for better insights if not compiler_cmd: pass elif 'clang' in compiler_cmd or 'clang' in compiler_cmd[0]: compiler_type = 'clang' elif 'icc' in compiler_cmd or 'icc' in compiler_cmd[0]: compiler_type = 'icc' elif 'gcc' in compiler_cmd or 'gcc' in compiler_cmd[0]: compiler_type = 'gcc' elif 'g++' in compiler_cmd or 'g++' in compiler_cmd[0]: compiler_type = 'gcc' config = PGO_CONFIG.get(compiler_type) orig_flags = [] if config and step_name in config: flags = [f.format(TEMPDIR=temp_dir) for f in config[step_name]] for extension in build_extension.extensions: orig_flags.append((extension.extra_compile_args, extension.extra_link_args)) extension.extra_compile_args = extension.extra_compile_args + flags extension.extra_link_args = extension.extra_link_args + flags else: print("No PGO %s configuration known for C compiler type '%s'" % (step_name, compiler_type), file=sys.stderr) return orig_flags @property def so_ext(self): """The extension suffix for compiled modules.""" try: return self._so_ext except AttributeError: self._so_ext = self._get_build_extension().get_ext_filename('') return self._so_ext def _clear_distutils_mkpath_cache(self): """clear distutils mkpath cache prevents distutils from skipping re-creation of dirs that have been removed """ try: from distutils.dir_util import _path_created except ImportError: pass else: _path_created.clear() def _get_build_extension(self, extension=None, lib_dir=None, temp_dir=None, pgo_step_name=None, _build_ext=build_ext): self._clear_distutils_mkpath_cache() dist = Distribution() config_files = dist.find_config_files() try: 
config_files.remove('setup.cfg') except ValueError: pass dist.parse_config_files(config_files) if not temp_dir: temp_dir = lib_dir add_pgo_flags = self._add_pgo_flags if pgo_step_name: base_build_ext = _build_ext class _build_ext(_build_ext): def build_extensions(self): add_pgo_flags(self, pgo_step_name, temp_dir) base_build_ext.build_extensions(self) build_extension = _build_ext(dist) build_extension.finalize_options() if temp_dir: temp_dir = py3compat.cast_bytes_py2(temp_dir, encoding=sys.getfilesystemencoding()) build_extension.build_temp = temp_dir if lib_dir: lib_dir = py3compat.cast_bytes_py2(lib_dir, encoding=sys.getfilesystemencoding()) build_extension.build_lib = lib_dir if extension is not None: build_extension.extensions = [extension] return build_extension @staticmethod def clean_annotated_html(html): """Clean up the annotated HTML source. Strips the link to the generated C or C++ file, which we do not present to the user. """ r = re.compile('<p>Raw output: <a href="(.*)">(.*)</a>') html = '\n'.join(l for l in html.splitlines() if not r.match(l)) return html __doc__ = __doc__.format( # rST doesn't see the -+ flag as part of an option list, so we # hide it from the module-level docstring. CYTHON_DOC=dedent(CythonMagics.cython.__doc__\ .replace('-+, --cplus', '--cplus ')), CYTHON_INLINE_DOC=dedent(CythonMagics.cython_inline.__doc__), CYTHON_PYXIMPORT_DOC=dedent(CythonMagics.cython_pyximport.__doc__), ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.8099942 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Tests/0000755000175000017500000000000000000000000025646 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0
cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Tests/TestCyCache.py0000644000175000017500000001006700000000000030363 0ustar00vincentvincent00000000000000import difflib import glob import gzip import os import tempfile import Cython.Build.Dependencies import Cython.Utils from Cython.TestUtils import CythonTest class TestCyCache(CythonTest): def setUp(self): CythonTest.setUp(self) self.temp_dir = tempfile.mkdtemp( prefix='cycache-test', dir='TEST_TMP' if os.path.isdir('TEST_TMP') else None) self.src_dir = tempfile.mkdtemp(prefix='src', dir=self.temp_dir) self.cache_dir = tempfile.mkdtemp(prefix='cache', dir=self.temp_dir) def cache_files(self, file_glob): return glob.glob(os.path.join(self.cache_dir, file_glob)) def fresh_cythonize(self, *args, **kwargs): Cython.Utils.clear_function_caches() Cython.Build.Dependencies._dep_tree = None # discard method caches Cython.Build.Dependencies.cythonize(*args, **kwargs) def test_cycache_switch(self): content1 = 'value = 1\n' content2 = 'value = 2\n' a_pyx = os.path.join(self.src_dir, 'a.pyx') a_c = a_pyx[:-4] + '.c' open(a_pyx, 'w').write(content1) self.fresh_cythonize(a_pyx, cache=self.cache_dir) self.fresh_cythonize(a_pyx, cache=self.cache_dir) self.assertEqual(1, len(self.cache_files('a.c*'))) a_contents1 = open(a_c).read() os.unlink(a_c) open(a_pyx, 'w').write(content2) self.fresh_cythonize(a_pyx, cache=self.cache_dir) a_contents2 = open(a_c).read() os.unlink(a_c) self.assertNotEqual(a_contents1, a_contents2, 'C file not changed!') self.assertEqual(2, len(self.cache_files('a.c*'))) open(a_pyx, 'w').write(content1) self.fresh_cythonize(a_pyx, cache=self.cache_dir) self.assertEqual(2, len(self.cache_files('a.c*'))) a_contents = open(a_c).read() self.assertEqual( a_contents, a_contents1, msg='\n'.join(list(difflib.unified_diff( a_contents.split('\n'), a_contents1.split('\n')))[:10])) def test_cycache_uses_cache(self): a_pyx = os.path.join(self.src_dir, 'a.pyx') a_c = a_pyx[:-4] + '.c' open(a_pyx, 'w').write('pass') self.fresh_cythonize(a_pyx, cache=self.cache_dir) a_cache = os.path.join(self.cache_dir, os.listdir(self.cache_dir)[0]) gzip.GzipFile(a_cache, 'wb').write('fake stuff'.encode('ascii')) os.unlink(a_c) self.fresh_cythonize(a_pyx, cache=self.cache_dir) a_contents = open(a_c).read() self.assertEqual(a_contents, 'fake stuff', 'Unexpected contents: %s...'
% a_contents[:100]) def test_multi_file_output(self): a_pyx = os.path.join(self.src_dir, 'a.pyx') a_c = a_pyx[:-4] + '.c' a_h = a_pyx[:-4] + '.h' a_api_h = a_pyx[:-4] + '_api.h' open(a_pyx, 'w').write('cdef public api int foo(int x): return x\n') self.fresh_cythonize(a_pyx, cache=self.cache_dir) expected = [a_c, a_h, a_api_h] for output in expected: self.assertTrue(os.path.exists(output), output) os.unlink(output) self.fresh_cythonize(a_pyx, cache=self.cache_dir) for output in expected: self.assertTrue(os.path.exists(output), output) def test_options_invalidation(self): hash_pyx = os.path.join(self.src_dir, 'options.pyx') hash_c = hash_pyx[:-len('.pyx')] + '.c' open(hash_pyx, 'w').write('pass') self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=False) self.assertEqual(1, len(self.cache_files('options.c*'))) os.unlink(hash_c) self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=True) self.assertEqual(2, len(self.cache_files('options.c*'))) os.unlink(hash_c) self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=False, show_version=False) self.assertEqual(2, len(self.cache_files('options.c*'))) os.unlink(hash_c) self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=False, show_version=True) self.assertEqual(2, len(self.cache_files('options.c*'))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Tests/TestInline.py0000644000175000017500000000462600000000000030306 0ustar00vincentvincent00000000000000import os, tempfile from Cython.Shadow import inline from Cython.Build.Inline import safe_type from Cython.TestUtils import CythonTest try: import numpy has_numpy = True except: has_numpy = False test_kwds = dict(force=True, quiet=True) global_value = 100 class TestInline(CythonTest): def setUp(self): CythonTest.setUp(self) self.test_kwds = dict(test_kwds) if os.path.isdir('TEST_TMP'): lib_dir = os.path.join('TEST_TMP','inline') else: lib_dir = tempfile.mkdtemp(prefix='cython_inline_') self.test_kwds['lib_dir'] = lib_dir def test_simple(self): self.assertEquals(inline("return 1+2", **self.test_kwds), 3) def test_types(self): self.assertEquals(inline(""" cimport cython return cython.typeof(a), cython.typeof(b) """, a=1.0, b=[], **self.test_kwds), ('double', 'list object')) def test_locals(self): a = 1 b = 2 self.assertEquals(inline("return a+b", **self.test_kwds), 3) def test_globals(self): self.assertEquals(inline("return global_value + 1", **self.test_kwds), global_value + 1) def test_no_return(self): self.assertEquals(inline(""" a = 1 cdef double b = 2 cdef c = [] """, **self.test_kwds), dict(a=1, b=2.0, c=[])) def test_def_node(self): foo = inline("def foo(x): return x * x", **self.test_kwds)['foo'] self.assertEquals(foo(7), 49) def test_class_ref(self): class Type(object): pass tp = inline("Type")['Type'] self.assertEqual(tp, Type) def test_pure(self): import cython as cy b = inline(""" b = cy.declare(float, a) c = cy.declare(cy.pointer(cy.float), &b) return b """, a=3, **self.test_kwds) self.assertEquals(type(b), float) def test_compiler_directives(self): self.assertEqual( inline('return sum(x)', x=[1, 2, 3], cython_compiler_directives={'boundscheck': False}), 6 ) if has_numpy: def test_numpy(self): import numpy a = numpy.ndarray((10, 20)) a[0,0] = 10 self.assertEquals(safe_type(a), 'numpy.ndarray[numpy.float64_t, ndim=2]') self.assertEquals(inline("return a[0,0]", a=a, **self.test_kwds), 10.0) 
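For reference alongside the tests above: a minimal usage sketch of Cython.Build.Inline.inline, the API these tests exercise. This is a sketch under the same assumptions as the test class (a working C compiler on the path); values passed as keywords become typed arguments of the generated Cython function, and force/quiet mirror test_kwds.

    # Sketch: compile a Cython snippet on the fly and evaluate it.
    from Cython.Build.Inline import inline

    # 'a' and 'b' are passed as keywords; their Cython types are
    # inferred from the Python values supplied here.
    result = inline("return a + b", a=1, b=2, force=True, quiet=True)
    assert result == 3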
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Tests/TestIpythonMagic.py0000644000175000017500000001452600000000000031463 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- # tag: ipython """Tests for the Cython magics extension.""" from __future__ import absolute_import import os import sys from contextlib import contextmanager from Cython.Build import IpythonMagic from Cython.TestUtils import CythonTest try: import IPython.testing.globalipapp from IPython.utils import py3compat except ImportError: # Disable tests and fake helpers for initialisation below. class _py3compat(object): def str_to_unicode(self, s): return s py3compat = _py3compat() def skip_if_not_installed(_): return None else: def skip_if_not_installed(c): return c try: # disable IPython history thread before it gets started to avoid having to clean it up from IPython.core.history import HistoryManager HistoryManager.enabled = False except ImportError: pass code = py3compat.str_to_unicode("""\ def f(x): return 2*x """) cython3_code = py3compat.str_to_unicode("""\ def f(int x): return 2 / x def call(x): return f(*(x,)) """) pgo_cython3_code = cython3_code + py3compat.str_to_unicode("""\ def main(): for _ in range(100): call(5) main() """) if sys.platform == 'win32': # not using IPython's decorators here because they depend on "nose" try: from unittest import skip as skip_win32 except ImportError: # poor dev's silent @unittest.skip() def skip_win32(dummy): def _skip_win32(func): return None return _skip_win32 else: def skip_win32(dummy): def _skip_win32(func): def wrapper(*args, **kwargs): func(*args, **kwargs) return wrapper return _skip_win32 @skip_if_not_installed class TestIPythonMagic(CythonTest): @classmethod def setUpClass(cls): CythonTest.setUpClass() cls._ip = IPython.testing.globalipapp.get_ipython() def setUp(self): CythonTest.setUp(self) self._ip.extension_manager.load_extension('cython') def test_cython_inline(self): ip = self._ip ip.ex('a=10; b=20') result = ip.run_cell_magic('cython_inline', '', 'return a+b') self.assertEqual(result, 30) @skip_win32('Skip on Windows') def test_cython_pyximport(self): ip = self._ip module_name = '_test_cython_pyximport' ip.run_cell_magic('cython_pyximport', module_name, code) ip.ex('g = f(10)') self.assertEqual(ip.user_ns['g'], 20.0) ip.run_cell_magic('cython_pyximport', module_name, code) ip.ex('h = f(-10)') self.assertEqual(ip.user_ns['h'], -20.0) try: os.remove(module_name + '.pyx') except OSError: pass def test_cython(self): ip = self._ip ip.run_cell_magic('cython', '', code) ip.ex('g = f(10)') self.assertEqual(ip.user_ns['g'], 20.0) def test_cython_name(self): # The Cython module named 'mymodule' defines the function f. ip = self._ip ip.run_cell_magic('cython', '--name=mymodule', code) # This module can now be imported in the interactive namespace. ip.ex('import mymodule; g = mymodule.f(10)') self.assertEqual(ip.user_ns['g'], 20.0) def test_cython_language_level(self): # The Cython cell defines the functions f() and call(). ip = self._ip ip.run_cell_magic('cython', '', cython3_code) ip.ex('g = f(10); h = call(10)') if sys.version_info[0] < 3: self.assertEqual(ip.user_ns['g'], 2 // 10) self.assertEqual(ip.user_ns['h'], 2 // 10) else: self.assertEqual(ip.user_ns['g'], 2.0 / 10.0) self.assertEqual(ip.user_ns['h'], 2.0 / 10.0) def test_cython3(self): # The Cython cell defines the functions f() and call(). 
ip = self._ip ip.run_cell_magic('cython', '-3', cython3_code) ip.ex('g = f(10); h = call(10)') self.assertEqual(ip.user_ns['g'], 2.0 / 10.0) self.assertEqual(ip.user_ns['h'], 2.0 / 10.0) def test_cython2(self): # The Cython cell defines the functions f() and call(). ip = self._ip ip.run_cell_magic('cython', '-2', cython3_code) ip.ex('g = f(10); h = call(10)') self.assertEqual(ip.user_ns['g'], 2 // 10) self.assertEqual(ip.user_ns['h'], 2 // 10) @skip_win32('Skip on Windows') def test_cython3_pgo(self): # The Cython cell defines the functions f() and call(). ip = self._ip ip.run_cell_magic('cython', '-3 --pgo', pgo_cython3_code) ip.ex('g = f(10); h = call(10); main()') self.assertEqual(ip.user_ns['g'], 2.0 / 10.0) self.assertEqual(ip.user_ns['h'], 2.0 / 10.0) @skip_win32('Skip on Windows') def test_extlibs(self): ip = self._ip code = py3compat.str_to_unicode(""" from libc.math cimport sin x = sin(0.0) """) ip.user_ns['x'] = 1 ip.run_cell_magic('cython', '-l m', code) self.assertEqual(ip.user_ns['x'], 0) def test_cython_verbose(self): ip = self._ip ip.run_cell_magic('cython', '--verbose', code) ip.ex('g = f(10)') self.assertEqual(ip.user_ns['g'], 20.0) def test_cython_verbose_thresholds(self): @contextmanager def mock_distutils(): class MockLog: DEBUG = 1 INFO = 2 thresholds = [INFO] def set_threshold(self, val): self.thresholds.append(val) return self.thresholds[-2] new_log = MockLog() old_log = IpythonMagic.distutils.log try: IpythonMagic.distutils.log = new_log yield new_log finally: IpythonMagic.distutils.log = old_log ip = self._ip with mock_distutils() as verbose_log: ip.run_cell_magic('cython', '--verbose', code) ip.ex('g = f(10)') self.assertEqual(ip.user_ns['g'], 20.0) self.assertEquals([verbose_log.INFO, verbose_log.DEBUG, verbose_log.INFO], verbose_log.thresholds) with mock_distutils() as normal_log: ip.run_cell_magic('cython', '', code) ip.ex('g = f(10)') self.assertEqual(ip.user_ns['g'], 20.0) self.assertEquals([normal_log.INFO], normal_log.thresholds) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Tests/TestStripLiterals.py0000644000175000017500000000301600000000000031661 0ustar00vincentvincent00000000000000from Cython.Build.Dependencies import strip_string_literals from Cython.TestUtils import CythonTest class TestStripLiterals(CythonTest): def t(self, before, expected): actual, literals = strip_string_literals(before, prefix="_L") self.assertEqual(expected, actual) for key, value in literals.items(): actual = actual.replace(key, value) self.assertEqual(before, actual) def test_empty(self): self.t("", "") def test_single_quote(self): self.t("'x'", "'_L1_'") def test_double_quote(self): self.t('"x"', '"_L1_"') def test_nested_quotes(self): self.t(""" '"' "'" """, """ '_L1_' "_L2_" """) def test_triple_quote(self): self.t(" '''a\n''' ", " '''_L1_''' ") def test_backslash(self): self.t(r"'a\'b'", "'_L1_'") self.t(r"'a\\'", "'_L1_'") self.t(r"'a\\\'b'", "'_L1_'") def test_unicode(self): self.t("u'abc'", "u'_L1_'") def test_raw(self): self.t(r"r'abc\\'", "r'_L1_'") def test_raw_unicode(self): self.t(r"ru'abc\\'", "ru'_L1_'") def test_comment(self): self.t("abc # foo", "abc #_L1_") def test_comment_and_quote(self): self.t("abc # 'x'", "abc #_L1_") self.t("'abc#'", "'_L1_'") def test_include(self): self.t("include 'a.pxi' # something here", "include '_L1_' #_L2_") def test_extern(self): self.t("cdef extern from 'a.h': # comment", "cdef extern from '_L1_': #_L2_") 
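The t() helper above encodes the round-trip contract of strip_string_literals: each literal (and comment body) is swapped for a generated label, and substituting the labels back must reproduce the input byte-for-byte. A standalone sketch of that contract (the sample line mirrors test_include; exact label numbering depends on scan order):

    from Cython.Build.Dependencies import strip_string_literals

    code = "include 'a.pxi' # something here"
    stripped, literals = strip_string_literals(code, prefix="_L")
    # stripped is now roughly: "include '_L1_' #_L2_"
    for label, original in literals.items():
        stripped = stripped.replace(label, original)
    assert stripped == code  # the mapping restores the original source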
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/Tests/__init__.py0000644000175000017500000000001500000000000027753 0ustar00vincentvincent00000000000000# empty file ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Build/__init__.py0000644000175000017500000000010500000000000026651 0ustar00vincentvincent00000000000000from .Dependencies import cythonize from .Distutils import build_ext ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/CodeWriter.py0000644000175000017500000005653500000000000026144 0ustar00vincentvincent00000000000000""" Serializes a Cython code tree to Cython code. This is primarily useful for debugging and testing purposes. The output is in a strict format, no whitespace or comments from the input is preserved (and it could not be as it is not present in the code tree). """ from __future__ import absolute_import, print_function from .Compiler.Visitor import TreeVisitor from .Compiler.ExprNodes import * class LinesResult(object): def __init__(self): self.lines = [] self.s = u"" def put(self, s): self.s += s def newline(self): self.lines.append(self.s) self.s = u"" def putline(self, s): self.put(s) self.newline() class DeclarationWriter(TreeVisitor): indent_string = u" " def __init__(self, result=None): super(DeclarationWriter, self).__init__() if result is None: result = LinesResult() self.result = result self.numindents = 0 self.tempnames = {} self.tempblockindex = 0 def write(self, tree): self.visit(tree) return self.result def indent(self): self.numindents += 1 def dedent(self): self.numindents -= 1 def startline(self, s=u""): self.result.put(self.indent_string * self.numindents + s) def put(self, s): self.result.put(s) def putline(self, s): self.result.putline(self.indent_string * self.numindents + s) def endline(self, s=u""): self.result.putline(s) def line(self, s): self.startline(s) self.endline() def comma_separated_list(self, items, output_rhs=False): if len(items) > 0: for item in items[:-1]: self.visit(item) if output_rhs and item.default is not None: self.put(u" = ") self.visit(item.default) self.put(u", ") self.visit(items[-1]) def visit_Node(self, node): raise AssertionError("Node not handled by serializer: %r" % node) def visit_ModuleNode(self, node): self.visitchildren(node) def visit_StatListNode(self, node): self.visitchildren(node) def visit_CDefExternNode(self, node): if node.include_file is None: file = u'*' else: file = u'"%s"' % node.include_file self.putline(u"cdef extern from %s:" % file) self.indent() self.visit(node.body) self.dedent() def visit_CPtrDeclaratorNode(self, node): self.put('*') self.visit(node.base) def visit_CReferenceDeclaratorNode(self, node): self.put('&') self.visit(node.base) def visit_CArrayDeclaratorNode(self, node): self.visit(node.base) self.put(u'[') if node.dimension is not None: self.visit(node.dimension) self.put(u']') def visit_CArrayDeclaratorNode(self, node): self.visit(node.base) self.put(u'[') if node.dimension is not None: self.visit(node.dimension) self.put(u']') def visit_CFuncDeclaratorNode(self, node): # TODO: except, gil, etc. 
self.visit(node.base) self.put(u'(') self.comma_separated_list(node.args) self.endline(u')') def visit_CNameDeclaratorNode(self, node): self.put(node.name) def visit_CSimpleBaseTypeNode(self, node): # See Parsing.p_sign_and_longness if node.is_basic_c_type: self.put(("unsigned ", "", "signed ")[node.signed]) if node.longness < 0: self.put("short " * -node.longness) elif node.longness > 0: self.put("long " * node.longness) self.put(node.name) def visit_CComplexBaseTypeNode(self, node): self.put(u'(') self.visit(node.base_type) self.visit(node.declarator) self.put(u')') def visit_CNestedBaseTypeNode(self, node): self.visit(node.base_type) self.put(u'.') self.put(node.name) def visit_TemplatedTypeNode(self, node): self.visit(node.base_type_node) self.put(u'[') self.comma_separated_list(node.positional_args + node.keyword_args.key_value_pairs) self.put(u']') def visit_CVarDefNode(self, node): self.startline(u"cdef ") self.visit(node.base_type) self.put(u" ") self.comma_separated_list(node.declarators, output_rhs=True) self.endline() def visit_container_node(self, node, decl, extras, attributes): # TODO: visibility self.startline(decl) if node.name: self.put(u' ') self.put(node.name) if node.cname is not None: self.put(u' "%s"' % node.cname) if extras: self.put(extras) self.endline(':') self.indent() if not attributes: self.putline('pass') else: for attribute in attributes: self.visit(attribute) self.dedent() def visit_CStructOrUnionDefNode(self, node): if node.typedef_flag: decl = u'ctypedef ' else: decl = u'cdef ' if node.visibility == 'public': decl += u'public ' if node.packed: decl += u'packed ' decl += node.kind self.visit_container_node(node, decl, None, node.attributes) def visit_CppClassNode(self, node): extras = "" if node.templates: extras = u"[%s]" % ", ".join(node.templates) if node.base_classes: extras += "(%s)" % ", ".join(node.base_classes) self.visit_container_node(node, u"cdef cppclass", extras, node.attributes) def visit_CEnumDefNode(self, node): self.visit_container_node(node, u"cdef enum", None, node.items) def visit_CEnumDefItemNode(self, node): self.startline(node.name) if node.cname: self.put(u' "%s"' % node.cname) if node.value: self.put(u" = ") self.visit(node.value) self.endline() def visit_CClassDefNode(self, node): assert not node.module_name if node.decorators: for decorator in node.decorators: self.visit(decorator) self.startline(u"cdef class ") self.put(node.class_name) if node.base_class_name: self.put(u"(") if node.base_class_module: self.put(node.base_class_module) self.put(u".") self.put(node.base_class_name) self.put(u")") self.endline(u":") self.indent() self.visit(node.body) self.dedent() def visit_CTypeDefNode(self, node): self.startline(u"ctypedef ") self.visit(node.base_type) self.put(u" ") self.visit(node.declarator) self.endline() def visit_FuncDefNode(self, node): self.startline(u"def %s(" % node.name) self.comma_separated_list(node.args) self.endline(u"):") self.indent() self.visit(node.body) self.dedent() def visit_CArgDeclNode(self, node): if node.base_type.name is not None: self.visit(node.base_type) self.put(u" ") self.visit(node.declarator) if node.default is not None: self.put(u" = ") self.visit(node.default) def visit_CImportStatNode(self, node): self.startline(u"cimport ") self.put(node.module_name) if node.as_name: self.put(u" as ") self.put(node.as_name) self.endline() def visit_FromCImportStatNode(self, node): self.startline(u"from ") self.put(node.module_name) self.put(u" cimport ") first = True for pos, name, as_name, kind in 
node.imported_names: assert kind is None if first: first = False else: self.put(u", ") self.put(name) if as_name: self.put(u" as ") self.put(as_name) self.endline() def visit_NameNode(self, node): self.put(node.name) def visit_IntNode(self, node): self.put(node.value) def visit_NoneNode(self, node): self.put(u"None") def visit_NotNode(self, node): self.put(u"(not ") self.visit(node.operand) self.put(u")") def visit_DecoratorNode(self, node): self.startline("@") self.visit(node.decorator) self.endline() def visit_BinopNode(self, node): self.visit(node.operand1) self.put(u" %s " % node.operator) self.visit(node.operand2) def visit_AttributeNode(self, node): self.visit(node.obj) self.put(u".%s" % node.attribute) def visit_BoolNode(self, node): self.put(str(node.value)) # FIXME: represent string nodes correctly def visit_StringNode(self, node): value = node.value if value.encoding is not None: value = value.encode(value.encoding) self.put(repr(value)) def visit_PassStatNode(self, node): self.startline(u"pass") self.endline() class CodeWriter(DeclarationWriter): def visit_SingleAssignmentNode(self, node): self.startline() self.visit(node.lhs) self.put(u" = ") self.visit(node.rhs) self.endline() def visit_CascadedAssignmentNode(self, node): self.startline() for lhs in node.lhs_list: self.visit(lhs) self.put(u" = ") self.visit(node.rhs) self.endline() def visit_PrintStatNode(self, node): self.startline(u"print ") self.comma_separated_list(node.arg_tuple.args) if not node.append_newline: self.put(u",") self.endline() def visit_ForInStatNode(self, node): self.startline(u"for ") self.visit(node.target) self.put(u" in ") self.visit(node.iterator.sequence) self.endline(u":") self.indent() self.visit(node.body) self.dedent() if node.else_clause is not None: self.line(u"else:") self.indent() self.visit(node.else_clause) self.dedent() def visit_IfStatNode(self, node): # The IfClauseNode is handled directly without a separate match # for clariy. self.startline(u"if ") self.visit(node.if_clauses[0].condition) self.endline(":") self.indent() self.visit(node.if_clauses[0].body) self.dedent() for clause in node.if_clauses[1:]: self.startline("elif ") self.visit(clause.condition) self.endline(":") self.indent() self.visit(clause.body) self.dedent() if node.else_clause is not None: self.line("else:") self.indent() self.visit(node.else_clause) self.dedent() def visit_SequenceNode(self, node): self.comma_separated_list(node.args) # Might need to discover whether we need () around tuples...hmm... 
def visit_SimpleCallNode(self, node): self.visit(node.function) self.put(u"(") self.comma_separated_list(node.args) self.put(")") def visit_GeneralCallNode(self, node): self.visit(node.function) self.put(u"(") posarg = node.positional_args if isinstance(posarg, AsTupleNode): self.visit(posarg.arg) else: self.comma_separated_list(posarg.args) # TupleNode.args if node.keyword_args: if isinstance(node.keyword_args, DictNode): for i, (name, value) in enumerate(node.keyword_args.key_value_pairs): if i > 0: self.put(', ') self.visit(name) self.put('=') self.visit(value) else: raise Exception("Not implemented yet") self.put(u")") def visit_ExprStatNode(self, node): self.startline() self.visit(node.expr) self.endline() def visit_InPlaceAssignmentNode(self, node): self.startline() self.visit(node.lhs) self.put(u" %s= " % node.operator) self.visit(node.rhs) self.endline() def visit_WithStatNode(self, node): self.startline() self.put(u"with ") self.visit(node.manager) if node.target is not None: self.put(u" as ") self.visit(node.target) self.endline(u":") self.indent() self.visit(node.body) self.dedent() def visit_TryFinallyStatNode(self, node): self.line(u"try:") self.indent() self.visit(node.body) self.dedent() self.line(u"finally:") self.indent() self.visit(node.finally_clause) self.dedent() def visit_TryExceptStatNode(self, node): self.line(u"try:") self.indent() self.visit(node.body) self.dedent() for x in node.except_clauses: self.visit(x) if node.else_clause is not None: self.visit(node.else_clause) def visit_ExceptClauseNode(self, node): self.startline(u"except") if node.pattern is not None: self.put(u" ") self.visit(node.pattern) if node.target is not None: self.put(u", ") self.visit(node.target) self.endline(":") self.indent() self.visit(node.body) self.dedent() def visit_ReturnStatNode(self, node): self.startline("return ") self.visit(node.value) self.endline() def visit_ReraiseStatNode(self, node): self.line("raise") def visit_ImportNode(self, node): self.put(u"(import %s)" % node.module_name.value) def visit_TempsBlockNode(self, node): """ Temporaries are output like $1_1', where the first number is an index of the TempsBlockNode and the second number is an index of the temporary which that block allocates. 
""" idx = 0 for handle in node.temps: self.tempnames[handle] = "$%d_%d" % (self.tempblockindex, idx) idx += 1 self.tempblockindex += 1 self.visit(node.body) def visit_TempRefNode(self, node): self.put(self.tempnames[node.handle]) class PxdWriter(DeclarationWriter): def __call__(self, node): print(u'\n'.join(self.write(node).lines)) return node def visit_CFuncDefNode(self, node): if 'inline' in node.modifiers: return if node.overridable: self.startline(u'cpdef ') else: self.startline(u'cdef ') if node.visibility != 'private': self.put(node.visibility) self.put(u' ') if node.api: self.put(u'api ') self.visit(node.declarator) def visit_StatNode(self, node): pass class ExpressionWriter(TreeVisitor): def __init__(self, result=None): super(ExpressionWriter, self).__init__() if result is None: result = u"" self.result = result self.precedence = [0] def write(self, tree): self.visit(tree) return self.result def put(self, s): self.result += s def remove(self, s): if self.result.endswith(s): self.result = self.result[:-len(s)] def comma_separated_list(self, items): if len(items) > 0: for item in items[:-1]: self.visit(item) self.put(u", ") self.visit(items[-1]) def visit_Node(self, node): raise AssertionError("Node not handled by serializer: %r" % node) def visit_NameNode(self, node): self.put(node.name) def visit_NoneNode(self, node): self.put(u"None") def visit_EllipsisNode(self, node): self.put(u"...") def visit_BoolNode(self, node): self.put(str(node.value)) def visit_ConstNode(self, node): self.put(str(node.value)) def visit_ImagNode(self, node): self.put(node.value) self.put(u"j") def emit_string(self, node, prefix=u""): repr_val = repr(node.value) if repr_val[0] in 'ub': repr_val = repr_val[1:] self.put(u"%s%s" % (prefix, repr_val)) def visit_BytesNode(self, node): self.emit_string(node, u"b") def visit_StringNode(self, node): self.emit_string(node) def visit_UnicodeNode(self, node): self.emit_string(node, u"u") def emit_sequence(self, node, parens=(u"", u"")): open_paren, close_paren = parens items = node.subexpr_nodes() self.put(open_paren) self.comma_separated_list(items) self.put(close_paren) def visit_ListNode(self, node): self.emit_sequence(node, u"[]") def visit_TupleNode(self, node): self.emit_sequence(node, u"()") def visit_SetNode(self, node): if len(node.subexpr_nodes()) > 0: self.emit_sequence(node, u"{}") else: self.put(u"set()") def visit_DictNode(self, node): self.emit_sequence(node, u"{}") def visit_DictItemNode(self, node): self.visit(node.key) self.put(u": ") self.visit(node.value) unop_precedence = { 'not': 3, '!': 3, '+': 11, '-': 11, '~': 11, } binop_precedence = { 'or': 1, 'and': 2, # unary: 'not': 3, '!': 3, 'in': 4, 'not_in': 4, 'is': 4, 'is_not': 4, '<': 4, '<=': 4, '>': 4, '>=': 4, '!=': 4, '==': 4, '|': 5, '^': 6, '&': 7, '<<': 8, '>>': 8, '+': 9, '-': 9, '*': 10, '@': 10, '/': 10, '//': 10, '%': 10, # unary: '+': 11, '-': 11, '~': 11 '**': 12, } def operator_enter(self, new_prec): old_prec = self.precedence[-1] if old_prec > new_prec: self.put(u"(") self.precedence.append(new_prec) def operator_exit(self): old_prec, new_prec = self.precedence[-2:] if old_prec > new_prec: self.put(u")") self.precedence.pop() def visit_NotNode(self, node): op = 'not' prec = self.unop_precedence[op] self.operator_enter(prec) self.put(u"not ") self.visit(node.operand) self.operator_exit() def visit_UnopNode(self, node): op = node.operator prec = self.unop_precedence[op] self.operator_enter(prec) self.put(u"%s" % node.operator) self.visit(node.operand) self.operator_exit() def 
visit_BinopNode(self, node): op = node.operator prec = self.binop_precedence.get(op, 0) self.operator_enter(prec) self.visit(node.operand1) self.put(u" %s " % op.replace('_', ' ')) self.visit(node.operand2) self.operator_exit() def visit_BoolBinopNode(self, node): self.visit_BinopNode(node) def visit_PrimaryCmpNode(self, node): self.visit_BinopNode(node) def visit_IndexNode(self, node): self.visit(node.base) self.put(u"[") if isinstance(node.index, TupleNode): self.emit_sequence(node.index) else: self.visit(node.index) self.put(u"]") def visit_SliceIndexNode(self, node): self.visit(node.base) self.put(u"[") if node.start: self.visit(node.start) self.put(u":") if node.stop: self.visit(node.stop) if node.slice: self.put(u":") self.visit(node.slice) self.put(u"]") def visit_SliceNode(self, node): if not node.start.is_none: self.visit(node.start) self.put(u":") if not node.stop.is_none: self.visit(node.stop) if not node.step.is_none: self.put(u":") self.visit(node.step) def visit_CondExprNode(self, node): self.visit(node.true_val) self.put(u" if ") self.visit(node.test) self.put(u" else ") self.visit(node.false_val) def visit_AttributeNode(self, node): self.visit(node.obj) self.put(u".%s" % node.attribute) def visit_SimpleCallNode(self, node): self.visit(node.function) self.put(u"(") self.comma_separated_list(node.args) self.put(")") def emit_pos_args(self, node): if node is None: return if isinstance(node, AddNode): self.emit_pos_args(node.operand1) self.emit_pos_args(node.operand2) elif isinstance(node, TupleNode): for expr in node.subexpr_nodes(): self.visit(expr) self.put(u", ") elif isinstance(node, AsTupleNode): self.put("*") self.visit(node.arg) self.put(u", ") else: self.visit(node) self.put(u", ") def emit_kwd_args(self, node): if node is None: return if isinstance(node, MergedDictNode): for expr in node.subexpr_nodes(): self.emit_kwd_args(expr) elif isinstance(node, DictNode): for expr in node.subexpr_nodes(): self.put(u"%s=" % expr.key.value) self.visit(expr.value) self.put(u", ") else: self.put(u"**") self.visit(node) self.put(u", ") def visit_GeneralCallNode(self, node): self.visit(node.function) self.put(u"(") self.emit_pos_args(node.positional_args) self.emit_kwd_args(node.keyword_args) self.remove(u", ") self.put(")") def emit_comprehension(self, body, target, sequence, condition, parens=(u"", u"")): open_paren, close_paren = parens self.put(open_paren) self.visit(body) self.put(u" for ") self.visit(target) self.put(u" in ") self.visit(sequence) if condition: self.put(u" if ") self.visit(condition) self.put(close_paren) def visit_ComprehensionAppendNode(self, node): self.visit(node.expr) def visit_DictComprehensionAppendNode(self, node): self.visit(node.key_expr) self.put(u": ") self.visit(node.value_expr) def visit_ComprehensionNode(self, node): tpmap = {'list': u"[]", 'dict': u"{}", 'set': u"{}"} parens = tpmap[node.type.py_type_name()] body = node.loop.body target = node.loop.target sequence = node.loop.iterator.sequence condition = None if hasattr(body, 'if_clauses'): # type(body) is Nodes.IfStatNode condition = body.if_clauses[0].condition body = body.if_clauses[0].body self.emit_comprehension(body, target, sequence, condition, parens) def visit_GeneratorExpressionNode(self, node): body = node.loop.body target = node.loop.target sequence = node.loop.iterator.sequence condition = None if hasattr(body, 'if_clauses'): # type(body) is Nodes.IfStatNode condition = body.if_clauses[0].condition body = body.if_clauses[0].body.expr.arg elif hasattr(body, 'expr'): # type(body) is 
Nodes.ExprStatNode body = body.expr.arg self.emit_comprehension(body, target, sequence, condition, u"()") ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.8366609 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/0000755000175000017500000000000000000000000025257 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/AnalysedTreeTransforms.py0000644000175000017500000000736200000000000032300 0ustar00vincentvincent00000000000000from __future__ import absolute_import from .Visitor import ScopeTrackingTransform from .Nodes import StatListNode, SingleAssignmentNode, CFuncDefNode, DefNode from .ExprNodes import DictNode, DictItemNode, NameNode, UnicodeNode from .PyrexTypes import py_object_type from .StringEncoding import EncodedString from . import Symtab class AutoTestDictTransform(ScopeTrackingTransform): # Handles autotestdict directive blacklist = ['__cinit__', '__dealloc__', '__richcmp__', '__nonzero__', '__bool__', '__len__', '__contains__'] def visit_ModuleNode(self, node): if node.is_pxd: return node self.scope_type = 'module' self.scope_node = node if not self.current_directives['autotestdict']: return node self.all_docstrings = self.current_directives['autotestdict.all'] self.cdef_docstrings = self.all_docstrings or self.current_directives['autotestdict.cdef'] assert isinstance(node.body, StatListNode) # First see if __test__ is already created if u'__test__' in node.scope.entries: # Do nothing return node pos = node.pos self.tests = [] self.testspos = node.pos test_dict_entry = node.scope.declare_var(EncodedString(u'__test__'), py_object_type, pos, visibility='public') create_test_dict_assignment = SingleAssignmentNode(pos, lhs=NameNode(pos, name=EncodedString(u'__test__'), entry=test_dict_entry), rhs=DictNode(pos, key_value_pairs=self.tests)) self.visitchildren(node) node.body.stats.append(create_test_dict_assignment) return node def add_test(self, testpos, path, doctest): pos = self.testspos keystr = u'%s (line %d)' % (path, testpos[1]) key = UnicodeNode(pos, value=EncodedString(keystr)) value = UnicodeNode(pos, value=doctest) self.tests.append(DictItemNode(pos, key=key, value=value)) def visit_ExprNode(self, node): # expressions cannot contain functions and lambda expressions # do not have a docstring return node def visit_FuncDefNode(self, node): if not node.doc or (isinstance(node, DefNode) and node.fused_py_func): return node if not self.cdef_docstrings: if isinstance(node, CFuncDefNode) and not node.py_func: return node if not self.all_docstrings and '>>>' not in node.doc: return node pos = self.testspos if self.scope_type == 'module': path = node.entry.name elif self.scope_type in ('pyclass', 'cclass'): if isinstance(node, CFuncDefNode): if node.py_func is not None: name = node.py_func.name else: name = node.entry.name else: name = node.name if self.scope_type == 'cclass' and name in self.blacklist: return node if self.scope_type == 'pyclass': class_name = self.scope_node.name else: class_name = self.scope_node.class_name if isinstance(node.entry.scope, Symtab.PropertyScope): property_method_name = node.entry.scope.name path = "%s.%s.%s" % (class_name, node.entry.scope.name, node.entry.name) else: path = "%s.%s" % (class_name, node.entry.name) else: assert False self.add_test(node.pos, path, node.doc) return node 
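To make the transform's effect concrete: for functions whose docstrings contain '>>>' examples, AutoTestDictTransform appends an assignment that populates __test__, the dict that doctest consults on a module. A hypothetical sketch (module, function, and line number are illustrative only):

    # hypothetical m.pyx
    def double(x):
        """
        >>> double(2)
        4
        """
        return 2 * x

    # After compilation the module behaves as if it ended with an assignment
    # built by add_test() above, keyed '%s (line %d)':
    #
    #   __test__ = {'double (line 2)': double.__doc__}
    #
    # so doctest.testmod() on the built module discovers and runs the example.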
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Annotate.py0000644000175000017500000003122600000000000027406 0ustar00vincentvincent00000000000000# Note: Work in progress from __future__ import absolute_import import os import os.path import re import codecs import textwrap from datetime import datetime from functools import partial from collections import defaultdict from xml.sax.saxutils import escape as html_escape try: from StringIO import StringIO except ImportError: from io import StringIO # does not support writing 'str' in Py2 from . import Version from .Code import CCodeWriter from .. import Utils class AnnotationCCodeWriter(CCodeWriter): def __init__(self, create_from=None, buffer=None, copy_formatting=True): CCodeWriter.__init__(self, create_from, buffer, copy_formatting=copy_formatting) if create_from is None: self.annotation_buffer = StringIO() self.last_annotated_pos = None # annotations[filename][line] -> [(column, AnnotationItem)*] self.annotations = defaultdict(partial(defaultdict, list)) # code[filename][line] -> str self.code = defaultdict(partial(defaultdict, str)) # scopes[filename][line] -> set(scopes) self.scopes = defaultdict(partial(defaultdict, set)) else: # When creating an insertion point, keep references to the same database self.annotation_buffer = create_from.annotation_buffer self.annotations = create_from.annotations self.code = create_from.code self.scopes = create_from.scopes self.last_annotated_pos = create_from.last_annotated_pos def create_new(self, create_from, buffer, copy_formatting): return AnnotationCCodeWriter(create_from, buffer, copy_formatting) def write(self, s): CCodeWriter.write(self, s) self.annotation_buffer.write(s) def mark_pos(self, pos, trace=True): if pos is not None: CCodeWriter.mark_pos(self, pos, trace) if self.funcstate and self.funcstate.scope: # lambdas and genexprs can result in multiple scopes per line => keep them in a set self.scopes[pos[0].filename][pos[1]].add(self.funcstate.scope) if self.last_annotated_pos: source_desc, line, _ = self.last_annotated_pos pos_code = self.code[source_desc.filename] pos_code[line] += self.annotation_buffer.getvalue() self.annotation_buffer = StringIO() self.last_annotated_pos = pos def annotate(self, pos, item): self.annotations[pos[0].filename][pos[1]].append((pos[2], item)) def _css(self): """css template will later allow to choose a colormap""" css = [self._css_template] for i in range(255): color = u"FFFF%02x" % int(255/(1+i/10.0)) css.append('.cython.score-%d {background-color: #%s;}' % (i, color)) try: from pygments.formatters import HtmlFormatter except ImportError: pass else: css.append(HtmlFormatter().get_style_defs('.cython')) return '\n'.join(css) _css_template = textwrap.dedent(""" body.cython { font-family: courier; font-size: 12; } .cython.tag { } .cython.line { margin: 0em } .cython.code { font-size: 9; color: #444444; display: none; margin: 0px 0px 0px 8px; border-left: 8px none; } .cython.line .run { background-color: #B0FFB0; } .cython.line .mis { background-color: #FFB0B0; } .cython.code.run { border-left: 8px solid #B0FFB0; } .cython.code.mis { border-left: 8px solid #FFB0B0; } .cython.code .py_c_api { color: red; } .cython.code .py_macro_api { color: #FF7000; } .cython.code .pyx_c_api { color: #FF3000; } .cython.code .pyx_macro_api { color: #FF7000; } .cython.code .refnanny { color: #FFA000; } .cython.code .trace { color: #FFA000; } .cython.code 
            .error_goto  { color: #FFA000; }

            .cython.code .coerce  { color: #008000; border: 1px dotted #008000 }
            .cython.code .py_attr { color: #FF0000; font-weight: bold; }
            .cython.code .c_attr  { color: #0000FF; }
            .cython.code .py_call { color: #FF0000; font-weight: bold; }
            .cython.code .c_call  { color: #0000FF; }
        """)

    # on-click toggle function to show/hide C source code
    _onclick_attr = ' onclick="{0}"'.format((
        "(function(s){"
        "    s.display = s.display === 'block' ? 'none' : 'block'"
        "})(this.nextElementSibling.style)"
    ).replace(' ', '')  # poor dev's JS minification
    )

    def save_annotation(self, source_filename, target_filename, coverage_xml=None):
        with Utils.open_source_file(source_filename) as f:
            code = f.read()
        generated_code = self.code.get(source_filename, {})
        c_file = Utils.decode_filename(os.path.basename(target_filename))
        html_filename = os.path.splitext(target_filename)[0] + ".html"

        with codecs.open(html_filename, "w", encoding="UTF-8") as out_buffer:
            out_buffer.write(self._save_annotation(code, generated_code, c_file, source_filename, coverage_xml))

    def _save_annotation_header(self, c_file, source_filename, coverage_timestamp=None):
        coverage_info = ''
        if coverage_timestamp:
            coverage_info = u' with coverage data from {timestamp}'.format(
                timestamp=datetime.fromtimestamp(int(coverage_timestamp) // 1000))

        outlist = [
            textwrap.dedent(u'''\
            <!DOCTYPE html>
            <!-- Generated by Cython {watermark} -->
            <html>
            <head>
                <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
                <title>Cython: {filename}</title>
                <style type="text/css">
                {css}
                </style>
            </head>
            <body class="cython">
            <p><span style="border-bottom: solid 1px grey;">Generated by Cython {watermark}</span>{more_info}</p>
            <p>
                <span style="background-color: #FFFF00">Yellow lines</span> hint at Python interaction.<br />
                Click on a line that starts with a "<code>+</code>" to see the C code that Cython generated for it.
            </p>
            ''').format(css=self._css(), watermark=Version.watermark,
                        filename=os.path.basename(source_filename) if source_filename else '',
                        more_info=coverage_info)
        ]
        if c_file:
            outlist.append(u'<p>Raw output: <a href="%s">%s</a></p>\n' % (c_file, c_file))
        return outlist

    def _save_annotation_footer(self):
        return (u'</body></html>\n',)

    def _save_annotation(self, code, generated_code, c_file=None, source_filename=None, coverage_xml=None):
        """
        lines : original cython source code split by lines
        generated_code : generated c code keyed by line number in original file
        target filename : name of the file in which to store the generated html
        c_file : filename in which the c_code has been written
        """
        if coverage_xml is not None and source_filename:
            coverage_timestamp = coverage_xml.get('timestamp', '').strip()
            covered_lines = self._get_line_coverage(coverage_xml, source_filename)
        else:
            coverage_timestamp = covered_lines = None
        annotation_items = dict(self.annotations[source_filename])
        scopes = dict(self.scopes[source_filename])

        outlist = []
        outlist.extend(self._save_annotation_header(c_file, source_filename, coverage_timestamp))
        outlist.extend(self._save_annotation_body(code, generated_code, annotation_items, scopes, covered_lines))
        outlist.extend(self._save_annotation_footer())
        return ''.join(outlist)

    def _get_line_coverage(self, coverage_xml, source_filename):
        coverage_data = None
        for entry in coverage_xml.iterfind('.//class'):
            if not entry.get('filename'):
                continue
            if (entry.get('filename') == source_filename or
                    os.path.abspath(entry.get('filename')) == source_filename):
                coverage_data = entry
                break
            elif source_filename.endswith(entry.get('filename')):
                coverage_data = entry  # but we might still find a better match...
        if coverage_data is None:
            return None
        return dict(
            (int(line.get('number')), int(line.get('hits')))
            for line in coverage_data.iterfind('lines/line')
        )

    def _htmlify_code(self, code):
        try:
            from pygments import highlight
            from pygments.lexers import CythonLexer
            from pygments.formatters import HtmlFormatter
        except ImportError:
            # no Pygments, just escape the code
            return html_escape(code)

        html_code = highlight(
            code, CythonLexer(stripnl=False, stripall=False),
            HtmlFormatter(nowrap=True))
        return html_code

    def _save_annotation_body(self, cython_code, generated_code, annotation_items, scopes, covered_lines=None):
        outlist = [u'<div class="cython">']
        pos_comment_marker = u'/* \N{HORIZONTAL ELLIPSIS} */\n'
        new_calls_map = dict(
            (name, 0) for name in
            'refnanny trace py_macro_api py_c_api pyx_macro_api pyx_c_api error_goto'.split()
        ).copy
        self.mark_pos(None)

        def annotate(match):
            group_name = match.lastgroup
            calls[group_name] += 1
            return u"<span class='%s'>%s</span>" % (
                group_name, match.group(group_name))

        lines = self._htmlify_code(cython_code).splitlines()
        lineno_width = len(str(len(lines)))
        if not covered_lines:
            covered_lines = None

        for k, line in enumerate(lines, 1):
            try:
                c_code = generated_code[k]
            except KeyError:
                c_code = ''
            else:
                c_code = _replace_pos_comment(pos_comment_marker, c_code)
                if c_code.startswith(pos_comment_marker):
                    c_code = c_code[len(pos_comment_marker):]
                c_code = html_escape(c_code)

            calls = new_calls_map()
            c_code = _parse_code(annotate, c_code)
            score = (5 * calls['py_c_api'] + 2 * calls['pyx_c_api'] +
                     calls['py_macro_api'] + calls['pyx_macro_api'])

            if c_code:
                onclick = self._onclick_attr
                expandsymbol = '+'
            else:
                onclick = ''
                expandsymbol = '&#xA0;'

            covered = ''
            if covered_lines is not None and k in covered_lines:
                hits = covered_lines[k]
                if hits is not None:
                    covered = 'run' if hits else 'mis'

            outlist.append(
                u'<pre class="cython line score-{score}"{onclick}>'
                # generate line number with expand symbol in front,
                # and the right  number of digit
                u'{expandsymbol}<span class="{covered}">{line:0{lineno_width}d}</span>: {code}</pre>\n'.format(
                    score=score,
                    expandsymbol=expandsymbol,
                    covered=covered,
                    lineno_width=lineno_width,
                    line=k,
                    code=line.rstrip(),
                    onclick=onclick,
                ))
            if c_code:
                outlist.append(u"<pre class='cython code score-{score} {covered}'>{code}</pre>".format(
                    score=score, covered=covered, code=c_code))
        outlist.append(u"</div>")
        return outlist


_parse_code = re.compile((
    br'(?P<refnanny>__Pyx_X?(?:GOT|GIVE)REF|__Pyx_RefNanny[A-Za-z]+)|'
    br'(?P<trace>__Pyx_Trace[A-Za-z]+)|'
    br'(?:'
    br'(?P<pyx_macro_api>__Pyx_[A-Z][A-Z_]+)|'
    br'(?P<pyx_c_api>(?:__Pyx_[A-Z][a-z_][A-Za-z_]*)|__pyx_convert_[A-Za-z_]*)|'
    br'(?P<py_macro_api>Py[A-Z][a-z]+_[A-Z][A-Z_]+)|'
    br'(?P<py_c_api>Py[A-Z][a-z]+_[A-Z][a-z][A-Za-z_]*)'
    br')(?=\()|'  # look-ahead to exclude subsequent '(' from replacement
    br'(?P<error_goto>(?:(?<=;) *if [^;]* +)?__PYX_ERR\([^)]+\))'
).decode('ascii')).sub

_replace_pos_comment = re.compile(
    # this matches what Cython generates as code line marker comment
    br'^\s*/\*(?:(?:[^*]|\*[^/])*\n)+\s*\*/\s*\n'.decode('ascii'),
    re.M
).sub


class AnnotationItem(object):

    def __init__(self, style, text, tag="", size=0):
        self.style = style
        self.text = text
        self.tag = tag
        self.size = size

    def start(self):
        return u"<span class='tag %s' title='%s'>%s" % (self.style, self.text, self.tag)

    def end(self):
        return self.size, u"</span>"
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0
cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/AutoDocTransforms.py0000644000175000017500000001653500000000000031250 0ustar00vincentvincent00000000000000
from __future__ import absolute_import, print_function

from .Visitor import CythonTransform
from .StringEncoding import EncodedString
from . import Options
from . import PyrexTypes, ExprNodes
from ..CodeWriter import ExpressionWriter


class AnnotationWriter(ExpressionWriter):

    def visit_Node(self, node):
        self.put(u"<???>")

    def visit_LambdaNode(self, node):
        # XXX Should we do better?
        self.put("<lambda>")


class EmbedSignature(CythonTransform):

    def __init__(self, context):
        super(EmbedSignature, self).__init__(context)
        self.class_name = None
        self.class_node = None

    def _fmt_expr(self, node):
        writer = AnnotationWriter()
        result = writer.write(node)
        # print(type(node).__name__, '-->', result)
        return result

    def _fmt_arg(self, arg):
        if arg.type is PyrexTypes.py_object_type or arg.is_self_arg:
            doc = arg.name
        else:
            doc = arg.type.declaration_code(arg.name, for_display=1)

        if arg.annotation:
            annotation = self._fmt_expr(arg.annotation)
            doc = doc + (': %s' % annotation)
            if arg.default:
                default = self._fmt_expr(arg.default)
                doc = doc + (' = %s' % default)
        elif arg.default:
            default = self._fmt_expr(arg.default)
            doc = doc + ('=%s' % default)
        return doc

    def _fmt_star_arg(self, arg):
        arg_doc = arg.name
        if arg.annotation:
            annotation = self._fmt_expr(arg.annotation)
            arg_doc = arg_doc + (': %s' % annotation)
        return arg_doc

    def _fmt_arglist(self, args,
                     npargs=0, pargs=None,
                     nkargs=0, kargs=None,
                     hide_self=False):
        arglist = []
        for arg in args:
            if not hide_self or not arg.entry.is_self_arg:
                arg_doc = self._fmt_arg(arg)
                arglist.append(arg_doc)
        if pargs:
            arg_doc = self._fmt_star_arg(pargs)
            arglist.insert(npargs, '*%s' % arg_doc)
        elif nkargs:
            arglist.insert(npargs, '*')
        if kargs:
            arg_doc = self._fmt_star_arg(kargs)
            arglist.append('**%s' % arg_doc)
        return arglist

    def _fmt_ret_type(self, ret):
        if ret is PyrexTypes.py_object_type:
            return None
        else:
            return ret.declaration_code("", for_display=1)

    def _fmt_signature(self, cls_name, func_name, args,
                       npargs=0, pargs=None,
                       nkargs=0, kargs=None,
                       return_expr=None,
                       return_type=None, hide_self=False):
        arglist = self._fmt_arglist(args,
                                    npargs, pargs,
                                    nkargs, kargs,
                                    hide_self=hide_self)
        arglist_doc = ', '.join(arglist)
        func_doc = '%s(%s)' % (func_name, arglist_doc)
        if cls_name:
            func_doc = '%s.%s' % (cls_name, func_doc)
        ret_doc = None
        if return_expr:
            ret_doc = self._fmt_expr(return_expr)
        elif return_type:
            ret_doc = self._fmt_ret_type(return_type)
        if ret_doc:
func_doc = '%s -> %s' % (func_doc, ret_doc) return func_doc def _embed_signature(self, signature, node_doc): if node_doc: return "%s\n%s" % (signature, node_doc) else: return signature def __call__(self, node): if not Options.docstrings: return node else: return super(EmbedSignature, self).__call__(node) def visit_ClassDefNode(self, node): oldname = self.class_name oldclass = self.class_node self.class_node = node try: # PyClassDefNode self.class_name = node.name except AttributeError: # CClassDefNode self.class_name = node.class_name self.visitchildren(node) self.class_name = oldname self.class_node = oldclass return node def visit_LambdaNode(self, node): # lambda expressions so not have signature or inner functions return node def visit_DefNode(self, node): if not self.current_directives['embedsignature']: return node is_constructor = False hide_self = False if node.entry.is_special: is_constructor = self.class_node and node.name == '__init__' if not is_constructor: return node class_name, func_name = None, self.class_name hide_self = True else: class_name, func_name = self.class_name, node.name nkargs = getattr(node, 'num_kwonly_args', 0) npargs = len(node.args) - nkargs signature = self._fmt_signature( class_name, func_name, node.args, npargs, node.star_arg, nkargs, node.starstar_arg, return_expr=node.return_type_annotation, return_type=None, hide_self=hide_self) if signature: if is_constructor: doc_holder = self.class_node.entry.type.scope else: doc_holder = node.entry if doc_holder.doc is not None: old_doc = doc_holder.doc elif not is_constructor and getattr(node, 'py_func', None) is not None: old_doc = node.py_func.entry.doc else: old_doc = None new_doc = self._embed_signature(signature, old_doc) doc_holder.doc = EncodedString(new_doc) if not is_constructor and getattr(node, 'py_func', None) is not None: node.py_func.entry.doc = EncodedString(new_doc) return node def visit_CFuncDefNode(self, node): if not self.current_directives['embedsignature']: return node if not node.overridable: # not cpdef FOO(...): return node signature = self._fmt_signature( self.class_name, node.declarator.base.name, node.declarator.args, return_type=node.return_type) if signature: if node.entry.doc is not None: old_doc = node.entry.doc elif getattr(node, 'py_func', None) is not None: old_doc = node.py_func.entry.doc else: old_doc = None new_doc = self._embed_signature(signature, old_doc) node.entry.doc = EncodedString(new_doc) if hasattr(node, 'py_func') and node.py_func is not None: node.py_func.entry.doc = EncodedString(new_doc) return node def visit_PropertyNode(self, node): if not self.current_directives['embedsignature']: return node entry = node.entry if entry.visibility == 'public': # property synthesised from a cdef public attribute type_name = entry.type.declaration_code("", for_display=1) if not entry.type.is_pyobject: type_name = "'%s'" % type_name elif entry.type.is_extension_type: type_name = entry.type.module_name + '.' 
+ type_name signature = '%s: %s' % (entry.name, type_name) new_doc = self._embed_signature(signature, entry.doc) entry.doc = EncodedString(new_doc) return node ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Buffer.py0000644000175000017500000007035100000000000027050 0ustar00vincentvincent00000000000000from __future__ import absolute_import from .Visitor import CythonTransform from .ModuleNode import ModuleNode from .Errors import CompileError from .UtilityCode import CythonUtilityCode from .Code import UtilityCode, TempitaUtilityCode from . import Options from . import Interpreter from . import PyrexTypes from . import Naming from . import Symtab def dedent(text, reindent=0): from textwrap import dedent text = dedent(text) if reindent > 0: indent = " " * reindent text = '\n'.join([indent + x for x in text.split('\n')]) return text class IntroduceBufferAuxiliaryVars(CythonTransform): # # Entry point # buffers_exists = False using_memoryview = False def __call__(self, node): assert isinstance(node, ModuleNode) self.max_ndim = 0 result = super(IntroduceBufferAuxiliaryVars, self).__call__(node) if self.buffers_exists: use_bufstruct_declare_code(node.scope) use_py2_buffer_functions(node.scope) return result # # Basic operations for transforms # def handle_scope(self, node, scope): # For all buffers, insert extra variables in the scope. # The variables are also accessible from the buffer_info # on the buffer entry scope_items = scope.entries.items() bufvars = [entry for name, entry in scope_items if entry.type.is_buffer] if len(bufvars) > 0: bufvars.sort(key=lambda entry: entry.name) self.buffers_exists = True memviewslicevars = [entry for name, entry in scope_items if entry.type.is_memoryviewslice] if len(memviewslicevars) > 0: self.buffers_exists = True for (name, entry) in scope_items: if name == 'memoryview' and isinstance(entry.utility_code_definition, CythonUtilityCode): self.using_memoryview = True break del scope_items if isinstance(node, ModuleNode) and len(bufvars) > 0: # for now...note that pos is wrong raise CompileError(node.pos, "Buffer vars not allowed in module scope") for entry in bufvars: if entry.type.dtype.is_ptr: raise CompileError(node.pos, "Buffers with pointer types not yet supported.") name = entry.name buftype = entry.type if buftype.ndim > Options.buffer_max_dims: raise CompileError(node.pos, "Buffer ndims exceeds Options.buffer_max_dims = %d" % Options.buffer_max_dims) if buftype.ndim > self.max_ndim: self.max_ndim = buftype.ndim # Declare auxiliary vars def decvar(type, prefix): cname = scope.mangle(prefix, name) aux_var = scope.declare_var(name=None, cname=cname, type=type, pos=node.pos) if entry.is_arg: aux_var.used = True # otherwise, NameNode will mark whether it is used return aux_var auxvars = ((PyrexTypes.c_pyx_buffer_nd_type, Naming.pybuffernd_prefix), (PyrexTypes.c_pyx_buffer_type, Naming.pybufferstruct_prefix)) pybuffernd, rcbuffer = [decvar(type, prefix) for (type, prefix) in auxvars] entry.buffer_aux = Symtab.BufferAux(pybuffernd, rcbuffer) scope.buffer_entries = bufvars self.scope = scope def visit_ModuleNode(self, node): self.handle_scope(node, node.scope) self.visitchildren(node) return node def visit_FuncDefNode(self, node): self.handle_scope(node, node.local_scope) self.visitchildren(node) return node # # Analysis # buffer_options = ("dtype", "ndim", "mode", "negative_indices", "cast") # ordered! 
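For orientation (see also buffer_defaults just below): these options are exactly what a user may write in a buffer type declaration, with only dtype allowed positionally (buffer_positional_options_count == 1) and the rest keyword-only. An illustrative declaration in Cython syntax, not part of this module:

    # illustrative .pyx usage of the buffer options parsed here
    cimport numpy as np

    def total(np.ndarray[np.float64_t, ndim=2, mode='c', negative_indices=False] m):
        cdef double s = 0
        cdef Py_ssize_t i, j
        for i in range(m.shape[0]):
            for j in range(m.shape[1]):
                s += m[i, j]
        return s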
buffer_defaults = {"ndim": 1, "mode": "full", "negative_indices": True, "cast": False} buffer_positional_options_count = 1 # anything beyond this needs keyword argument ERR_BUF_OPTION_UNKNOWN = '"%s" is not a buffer option' ERR_BUF_TOO_MANY = 'Too many buffer options' ERR_BUF_DUP = '"%s" buffer option already supplied' ERR_BUF_MISSING = '"%s" missing' ERR_BUF_MODE = 'Only allowed buffer modes are: "c", "fortran", "full", "strided" (as a compile-time string)' ERR_BUF_NDIM = 'ndim must be a non-negative integer' ERR_BUF_DTYPE = 'dtype must be "object", numeric type or a struct' ERR_BUF_BOOL = '"%s" must be a boolean' def analyse_buffer_options(globalpos, env, posargs, dictargs, defaults=None, need_complete=True): """ Must be called during type analysis, as analyse is called on the dtype argument. posargs and dictargs should consist of a list and a dict of tuples (value, pos). Defaults should be a dict of values. Returns a dict containing all the options a buffer can have and its value (with the positions stripped). """ if defaults is None: defaults = buffer_defaults posargs, dictargs = Interpreter.interpret_compiletime_options( posargs, dictargs, type_env=env, type_args=(0, 'dtype')) if len(posargs) > buffer_positional_options_count: raise CompileError(posargs[-1][1], ERR_BUF_TOO_MANY) options = {} for name, (value, pos) in dictargs.items(): if not name in buffer_options: raise CompileError(pos, ERR_BUF_OPTION_UNKNOWN % name) options[name] = value for name, (value, pos) in zip(buffer_options, posargs): if not name in buffer_options: raise CompileError(pos, ERR_BUF_OPTION_UNKNOWN % name) if name in options: raise CompileError(pos, ERR_BUF_DUP % name) options[name] = value # Check that they are all there and copy defaults for name in buffer_options: if not name in options: try: options[name] = defaults[name] except KeyError: if need_complete: raise CompileError(globalpos, ERR_BUF_MISSING % name) dtype = options.get("dtype") if dtype and dtype.is_extension_type: raise CompileError(globalpos, ERR_BUF_DTYPE) ndim = options.get("ndim") if ndim and (not isinstance(ndim, int) or ndim < 0): raise CompileError(globalpos, ERR_BUF_NDIM) mode = options.get("mode") if mode and not (mode in ('full', 'strided', 'c', 'fortran')): raise CompileError(globalpos, ERR_BUF_MODE) def assert_bool(name): x = options.get(name) if not isinstance(x, bool): raise CompileError(globalpos, ERR_BUF_BOOL % name) assert_bool('negative_indices') assert_bool('cast') return options # # Code generation # class BufferEntry(object): def __init__(self, entry): self.entry = entry self.type = entry.type self.cname = entry.buffer_aux.buflocal_nd_var.cname self.buf_ptr = "%s.rcbuffer->pybuffer.buf" % self.cname self.buf_ptr_type = entry.type.buffer_ptr_type self.init_attributes() def init_attributes(self): self.shape = self.get_buf_shapevars() self.strides = self.get_buf_stridevars() self.suboffsets = self.get_buf_suboffsetvars() def get_buf_suboffsetvars(self): return self._for_all_ndim("%s.diminfo[%d].suboffsets") def get_buf_stridevars(self): return self._for_all_ndim("%s.diminfo[%d].strides") def get_buf_shapevars(self): return self._for_all_ndim("%s.diminfo[%d].shape") def _for_all_ndim(self, s): return [s % (self.cname, i) for i in range(self.type.ndim)] def generate_buffer_lookup_code(self, code, index_cnames): # Create buffer lookup and return it # This is done via utility macros/inline functions, which vary # according to the access mode used. 
params = [] nd = self.type.ndim mode = self.type.mode if mode == 'full': for i, s, o in zip(index_cnames, self.get_buf_stridevars(), self.get_buf_suboffsetvars()): params.append(i) params.append(s) params.append(o) funcname = "__Pyx_BufPtrFull%dd" % nd funcgen = buf_lookup_full_code else: if mode == 'strided': funcname = "__Pyx_BufPtrStrided%dd" % nd funcgen = buf_lookup_strided_code elif mode == 'c': funcname = "__Pyx_BufPtrCContig%dd" % nd funcgen = buf_lookup_c_code elif mode == 'fortran': funcname = "__Pyx_BufPtrFortranContig%dd" % nd funcgen = buf_lookup_fortran_code else: assert False for i, s in zip(index_cnames, self.get_buf_stridevars()): params.append(i) params.append(s) # Make sure the utility code is available if funcname not in code.globalstate.utility_codes: code.globalstate.utility_codes.add(funcname) protocode = code.globalstate['utility_code_proto'] defcode = code.globalstate['utility_code_def'] funcgen(protocode, defcode, name=funcname, nd=nd) buf_ptr_type_code = self.buf_ptr_type.empty_declaration_code() ptrcode = "%s(%s, %s, %s)" % (funcname, buf_ptr_type_code, self.buf_ptr, ", ".join(params)) return ptrcode def get_flags(buffer_aux, buffer_type): flags = 'PyBUF_FORMAT' mode = buffer_type.mode if mode == 'full': flags += '| PyBUF_INDIRECT' elif mode == 'strided': flags += '| PyBUF_STRIDES' elif mode == 'c': flags += '| PyBUF_C_CONTIGUOUS' elif mode == 'fortran': flags += '| PyBUF_F_CONTIGUOUS' else: assert False if buffer_aux.writable_needed: flags += "| PyBUF_WRITABLE" return flags def used_buffer_aux_vars(entry): buffer_aux = entry.buffer_aux buffer_aux.buflocal_nd_var.used = True buffer_aux.rcbuf_var.used = True def put_unpack_buffer_aux_into_scope(buf_entry, code): # Generate code to copy the needed struct info into local # variables. buffer_aux, mode = buf_entry.buffer_aux, buf_entry.type.mode pybuffernd_struct = buffer_aux.buflocal_nd_var.cname fldnames = ['strides', 'shape'] if mode == 'full': fldnames.append('suboffsets') ln = [] for i in range(buf_entry.type.ndim): for fldname in fldnames: ln.append("%s.diminfo[%d].%s = %s.rcbuffer->pybuffer.%s[%d];" % \ (pybuffernd_struct, i, fldname, pybuffernd_struct, fldname, i)) code.putln(' '.join(ln)) def put_init_vars(entry, code): bufaux = entry.buffer_aux pybuffernd_struct = bufaux.buflocal_nd_var.cname pybuffer_struct = bufaux.rcbuf_var.cname # init pybuffer_struct code.putln("%s.pybuffer.buf = NULL;" % pybuffer_struct) code.putln("%s.refcount = 0;" % pybuffer_struct) # init the buffer object # code.put_init_var_to_py_none(entry) # init the pybuffernd_struct code.putln("%s.data = NULL;" % pybuffernd_struct) code.putln("%s.rcbuffer = &%s;" % (pybuffernd_struct, pybuffer_struct)) def put_acquire_arg_buffer(entry, code, pos): buffer_aux = entry.buffer_aux getbuffer = get_getbuffer_call(code, entry.cname, buffer_aux, entry.type) # Acquire any new buffer code.putln("{") code.putln("__Pyx_BufFmt_StackElem __pyx_stack[%d];" % entry.type.dtype.struct_nesting_depth()) code.putln(code.error_goto_if("%s == -1" % getbuffer, pos)) code.putln("}") # An exception raised in arg parsing cannot be caught, so no # need to care about the buffer then. 
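    # Editor's note (illustrative sketch, assuming a dtype of nesting
    # depth 1): the braces emitted above enclose roughly
    #     { __Pyx_BufFmt_StackElem __pyx_stack[1];
    #       if (__Pyx_GetBufferAndValidate(...) == -1) goto <error label>; }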
    put_unpack_buffer_aux_into_scope(entry, code)


def put_release_buffer_code(code, entry):
    code.globalstate.use_utility_code(acquire_utility_code)
    code.putln("__Pyx_SafeReleaseBuffer(&%s.rcbuffer->pybuffer);" % entry.buffer_aux.buflocal_nd_var.cname)


def get_getbuffer_call(code, obj_cname, buffer_aux, buffer_type):
    ndim = buffer_type.ndim
    cast = int(buffer_type.cast)
    flags = get_flags(buffer_aux, buffer_type)
    pybuffernd_struct = buffer_aux.buflocal_nd_var.cname

    dtype_typeinfo = get_type_information_cname(code, buffer_type.dtype)

    code.globalstate.use_utility_code(acquire_utility_code)
    return ("__Pyx_GetBufferAndValidate(&%(pybuffernd_struct)s.rcbuffer->pybuffer, "
            "(PyObject*)%(obj_cname)s, &%(dtype_typeinfo)s, %(flags)s, %(ndim)d, "
            "%(cast)d, __pyx_stack)" % locals())


def put_assign_to_buffer(lhs_cname, rhs_cname, buf_entry,
                         is_initialized, pos, code):
    """
    Generate code for reassigning a buffer variable. This only deals with
    getting the buffer auxiliary structure and variables set up correctly;
    the assignment itself and refcounting are the responsibility of the
    caller.

    However, the assignment operation may throw an exception so that the
    reassignment never happens.

    Depending on the circumstances there are two possible outcomes:
    - Old buffer released, new acquired, rhs assigned to lhs
    - Old buffer released, new acquired which fails, reacquire
      old lhs buffer (which may or may not succeed).
    """

    buffer_aux, buffer_type = buf_entry.buffer_aux, buf_entry.type
    pybuffernd_struct = buffer_aux.buflocal_nd_var.cname
    flags = get_flags(buffer_aux, buffer_type)

    code.putln("{")  # Set up necessary stack for getbuffer
    code.putln("__Pyx_BufFmt_StackElem __pyx_stack[%d];" % buffer_type.dtype.struct_nesting_depth())

    getbuffer = get_getbuffer_call(code, "%s", buffer_aux, buffer_type)  # fill in object below

    if is_initialized:
        # Release any existing buffer
        code.putln('__Pyx_SafeReleaseBuffer(&%s.rcbuffer->pybuffer);' % pybuffernd_struct)
        # Acquire
        retcode_cname = code.funcstate.allocate_temp(PyrexTypes.c_int_type, manage_ref=False)
        code.putln("%s = %s;" % (retcode_cname, getbuffer % rhs_cname))
        code.putln('if (%s) {' % (code.unlikely("%s < 0" % retcode_cname)))
        # If acquisition failed, attempt to reacquire the old buffer
        # before raising the exception. A failure of reacquisition
        # will cause the reacquisition exception to be reported, one
        # can consider working around this later.
        exc_temps = tuple(code.funcstate.allocate_temp(PyrexTypes.py_object_type, manage_ref=False)
                          for _ in range(3))
        code.putln('PyErr_Fetch(&%s, &%s, &%s);' % exc_temps)
        code.putln('if (%s) {' % code.unlikely("%s == -1" % (getbuffer % lhs_cname)))
        code.putln('Py_XDECREF(%s); Py_XDECREF(%s); Py_XDECREF(%s);' % exc_temps)  # Do not refnanny these!
        code.globalstate.use_utility_code(raise_buffer_fallback_code)
        code.putln('__Pyx_RaiseBufferFallbackError();')
        code.putln('} else {')
        code.putln('PyErr_Restore(%s, %s, %s);' % exc_temps)
        code.putln('}')
        code.putln('%s = %s = %s = 0;' % exc_temps)
        for t in exc_temps:
            code.funcstate.release_temp(t)
        code.putln('}')
        # Unpack indices
        put_unpack_buffer_aux_into_scope(buf_entry, code)
        code.putln(code.error_goto_if_neg(retcode_cname, pos))
        code.funcstate.release_temp(retcode_cname)
    else:
        # Our entry had no previous value, so set to None when acquisition fails.
        # In this case, auxiliary vars should be set up right in initialization to a zero-buffer,
        # so it suffices to set the buf field to NULL.
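        # Editor's note (illustrative sketch of the C emitted below): on
        # acquisition failure the lhs is reset to Py_None and the buf field
        # is NULLed before jumping to the error label; on success the
        # shape/strides/suboffsets are unpacked into the diminfo array.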
        code.putln('if (%s) {' % code.unlikely("%s == -1" % (getbuffer % rhs_cname)))
        code.putln('%s = %s; __Pyx_INCREF(Py_None); %s.rcbuffer->pybuffer.buf = NULL;' % (
            lhs_cname,
            PyrexTypes.typecast(buffer_type, PyrexTypes.py_object_type, "Py_None"),
            pybuffernd_struct))
        code.putln(code.error_goto(pos))
        code.put('} else {')
        # Unpack indices
        put_unpack_buffer_aux_into_scope(buf_entry, code)
        code.putln('}')

    code.putln("}")  # Release stack


def put_buffer_lookup_code(entry, index_signeds, index_cnames, directives,
                           pos, code, negative_indices, in_nogil_context):
    """
    Generates code to process indices and calculate an offset into
    a buffer. Returns a C string which gives a pointer which can be
    read from or written to at will (it is an expression so caller should
    store it in a temporary if it is used more than once).

    As the bounds checking can have any number of combinations of unsigned
    arguments, smart optimizations etc. we insert it directly in the function
    body. The lookup however is delegated to an inline function that is
    instantiated once per ndim (lookup with suboffsets tends to get quite
    complicated).

    entry is a BufferEntry
    """
    negative_indices = directives['wraparound'] and negative_indices

    if directives['boundscheck']:
        # Check bounds and fix negative indices.
        # We allocate a temporary which is initialized to -1, meaning OK (!).
        # If an error occurs, the temp is set to the index dimension the
        # error is occurring at.
        failed_dim_temp = code.funcstate.allocate_temp(PyrexTypes.c_int_type, manage_ref=False)
        code.putln("%s = -1;" % failed_dim_temp)
        for dim, (signed, cname, shape) in enumerate(
                zip(index_signeds, index_cnames, entry.get_buf_shapevars())):
            if signed != 0:
                # not unsigned, deal with negative index
                code.putln("if (%s < 0) {" % cname)
                if negative_indices:
                    code.putln("%s += %s;" % (cname, shape))
                    code.putln("if (%s) %s = %d;" % (
                        code.unlikely("%s < 0" % cname),
                        failed_dim_temp, dim))
                else:
                    code.putln("%s = %d;" % (failed_dim_temp, dim))
                code.put("} else ")
            # check bounds in positive direction
            if signed != 0:
                cast = ""
            else:
                cast = "(size_t)"
            code.putln("if (%s) %s = %d;" % (
                code.unlikely("%s >= %s%s" % (cname, cast, shape)),
                failed_dim_temp, dim))

        if in_nogil_context:
            code.globalstate.use_utility_code(raise_indexerror_nogil)
            func = '__Pyx_RaiseBufferIndexErrorNogil'
        else:
            code.globalstate.use_utility_code(raise_indexerror_code)
            func = '__Pyx_RaiseBufferIndexError'

        code.putln("if (%s) {" % code.unlikely("%s != -1" % failed_dim_temp))
        code.putln('%s(%s);' % (func, failed_dim_temp))
        code.putln(code.error_goto(pos))
        code.putln('}')
        code.funcstate.release_temp(failed_dim_temp)
    elif negative_indices:
        # Only fix negative indices.
        for signed, cname, shape in zip(index_signeds, index_cnames,
                                        entry.get_buf_shapevars()):
            if signed != 0:
                code.putln("if (%s < 0) %s += %s;" % (cname, cname, shape))

    return entry.generate_buffer_lookup_code(code, index_cnames)


def use_bufstruct_declare_code(env):
    env.use_utility_code(buffer_struct_declare_code)


def buf_lookup_full_code(proto, defin, name, nd):
    """
    Generates a buffer lookup function for the right number
    of dimensions. The function gives back a void* at the right location.
""" # _i_ndex, _s_tride, sub_o_ffset macroargs = ", ".join(["i%d, s%d, o%d" % (i, i, i) for i in range(nd)]) proto.putln("#define %s(type, buf, %s) (type)(%s_imp(buf, %s))" % (name, macroargs, name, macroargs)) funcargs = ", ".join(["Py_ssize_t i%d, Py_ssize_t s%d, Py_ssize_t o%d" % (i, i, i) for i in range(nd)]) proto.putln("static CYTHON_INLINE void* %s_imp(void* buf, %s);" % (name, funcargs)) defin.putln(dedent(""" static CYTHON_INLINE void* %s_imp(void* buf, %s) { char* ptr = (char*)buf; """) % (name, funcargs) + "".join([dedent("""\ ptr += s%d * i%d; if (o%d >= 0) ptr = *((char**)ptr) + o%d; """) % (i, i, i, i) for i in range(nd)] ) + "\nreturn ptr;\n}") def buf_lookup_strided_code(proto, defin, name, nd): """ Generates a buffer lookup function for the right number of dimensions. The function gives back a void* at the right location. """ # _i_ndex, _s_tride args = ", ".join(["i%d, s%d" % (i, i) for i in range(nd)]) offset = " + ".join(["i%d * s%d" % (i, i) for i in range(nd)]) proto.putln("#define %s(type, buf, %s) (type)((char*)buf + %s)" % (name, args, offset)) def buf_lookup_c_code(proto, defin, name, nd): """ Similar to strided lookup, but can assume that the last dimension doesn't need a multiplication as long as. Still we keep the same signature for now. """ if nd == 1: proto.putln("#define %s(type, buf, i0, s0) ((type)buf + i0)" % name) else: args = ", ".join(["i%d, s%d" % (i, i) for i in range(nd)]) offset = " + ".join(["i%d * s%d" % (i, i) for i in range(nd - 1)]) proto.putln("#define %s(type, buf, %s) ((type)((char*)buf + %s) + i%d)" % (name, args, offset, nd - 1)) def buf_lookup_fortran_code(proto, defin, name, nd): """ Like C lookup, but the first index is optimized instead. """ if nd == 1: proto.putln("#define %s(type, buf, i0, s0) ((type)buf + i0)" % name) else: args = ", ".join(["i%d, s%d" % (i, i) for i in range(nd)]) offset = " + ".join(["i%d * s%d" % (i, i) for i in range(1, nd)]) proto.putln("#define %s(type, buf, %s) ((type)((char*)buf + %s) + i%d)" % (name, args, offset, 0)) def use_py2_buffer_functions(env): env.use_utility_code(GetAndReleaseBufferUtilityCode()) class GetAndReleaseBufferUtilityCode(object): # Emulation of PyObject_GetBuffer and PyBuffer_Release for Python 2. # For >= 2.6 we do double mode -- use the new buffer interface on objects # which has the right tp_flags set, but emulation otherwise. 
requires = None is_cython_utility = False def __init__(self): pass def __eq__(self, other): return isinstance(other, GetAndReleaseBufferUtilityCode) def __hash__(self): return 24342342 def get_tree(self, **kwargs): pass def put_code(self, output): code = output['utility_code_def'] proto_code = output['utility_code_proto'] env = output.module_node.scope cython_scope = env.context.cython_scope # Search all types for __getbuffer__ overloads types = [] visited_scopes = set() def find_buffer_types(scope): if scope in visited_scopes: return visited_scopes.add(scope) for m in scope.cimported_modules: find_buffer_types(m) for e in scope.type_entries: if isinstance(e.utility_code_definition, CythonUtilityCode): continue t = e.type if t.is_extension_type: if scope is cython_scope and not e.used: continue release = get = None for x in t.scope.pyfunc_entries: if x.name == u"__getbuffer__": get = x.func_cname elif x.name == u"__releasebuffer__": release = x.func_cname if get: types.append((t.typeptr_cname, get, release)) find_buffer_types(env) util_code = TempitaUtilityCode.load( "GetAndReleaseBuffer", from_file="Buffer.c", context=dict(types=types)) proto = util_code.format_code(util_code.proto) impl = util_code.format_code( util_code.inject_string_constants(util_code.impl, output)[1]) proto_code.putln(proto) code.putln(impl) def mangle_dtype_name(dtype): # Use prefixes to separate user defined types from builtins # (consider "typedef float unsigned_int") if dtype.is_pyobject: return "object" elif dtype.is_ptr: return "ptr" else: if dtype.is_typedef or dtype.is_struct_or_union: prefix = "nn_" else: prefix = "" return prefix + dtype.specialization_name() def get_type_information_cname(code, dtype, maxdepth=None): """ Output the run-time type information (__Pyx_TypeInfo) for given dtype, and return the name of the type info struct. Structs with two floats of the same size are encoded as complex numbers. One can separate between complex numbers declared as struct or with native encoding by inspecting to see if the fields field of the type is filled in. """ namesuffix = mangle_dtype_name(dtype) name = "__Pyx_TypeInfo_%s" % namesuffix structinfo_name = "__Pyx_StructFields_%s" % namesuffix if dtype.is_error: return "" # It's critical that walking the type info doesn't use more stack # depth than dtype.struct_nesting_depth() returns, so use an assertion for this if maxdepth is None: maxdepth = dtype.struct_nesting_depth() if maxdepth <= 0: assert False if name not in code.globalstate.utility_codes: code.globalstate.utility_codes.add(name) typecode = code.globalstate['typeinfo'] arraysizes = [] if dtype.is_array: while dtype.is_array: arraysizes.append(dtype.size) dtype = dtype.base_type complex_possible = dtype.is_struct_or_union and dtype.can_be_complex() declcode = dtype.empty_declaration_code() if dtype.is_simple_buffer_dtype(): structinfo_name = "NULL" elif dtype.is_struct: fields = dtype.scope.var_entries # Must pre-call all used types in order not to recurse utility code # writing. 
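            # Editor's note (illustrative, hypothetical dtype): for a struct
            # with two double fields "re" and "im", the loop below emits
            #     static __Pyx_StructField __Pyx_StructFields_nn_...[] = {
            #         {&__Pyx_TypeInfo_double, "re", offsetof(..., re)},
            #         {&__Pyx_TypeInfo_double, "im", offsetof(..., im)},
            #         {NULL, NULL, 0}
            #     };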
assert len(fields) > 0 types = [get_type_information_cname(code, f.type, maxdepth - 1) for f in fields] typecode.putln("static __Pyx_StructField %s[] = {" % structinfo_name, safe=True) for f, typeinfo in zip(fields, types): typecode.putln(' {&%s, "%s", offsetof(%s, %s)},' % (typeinfo, f.name, dtype.empty_declaration_code(), f.cname), safe=True) typecode.putln(' {NULL, NULL, 0}', safe=True) typecode.putln("};", safe=True) else: assert False rep = str(dtype) flags = "0" is_unsigned = "0" if dtype is PyrexTypes.c_char_type: is_unsigned = "IS_UNSIGNED(%s)" % declcode typegroup = "'H'" elif dtype.is_int: is_unsigned = "IS_UNSIGNED(%s)" % declcode typegroup = "%s ? 'U' : 'I'" % is_unsigned elif complex_possible or dtype.is_complex: typegroup = "'C'" elif dtype.is_float: typegroup = "'R'" elif dtype.is_struct: typegroup = "'S'" if dtype.packed: flags = "__PYX_BUF_FLAGS_PACKED_STRUCT" elif dtype.is_pyobject: typegroup = "'O'" else: assert False, dtype typeinfo = ('static __Pyx_TypeInfo %s = ' '{ "%s", %s, sizeof(%s), { %s }, %s, %s, %s, %s };') tup = (name, rep, structinfo_name, declcode, ', '.join([str(x) for x in arraysizes]) or '0', len(arraysizes), typegroup, is_unsigned, flags) typecode.putln(typeinfo % tup, safe=True) return name def load_buffer_utility(util_code_name, context=None, **kwargs): if context is None: return UtilityCode.load(util_code_name, "Buffer.c", **kwargs) else: return TempitaUtilityCode.load(util_code_name, "Buffer.c", context=context, **kwargs) context = dict(max_dims=Options.buffer_max_dims) buffer_struct_declare_code = load_buffer_utility("BufferStructDeclare", context=context) buffer_formats_declare_code = load_buffer_utility("BufferFormatStructs") # Utility function to set the right exception # The caller should immediately goto_error raise_indexerror_code = load_buffer_utility("BufferIndexError") raise_indexerror_nogil = load_buffer_utility("BufferIndexErrorNogil") raise_buffer_fallback_code = load_buffer_utility("BufferFallbackError") acquire_utility_code = load_buffer_utility("BufferGetAndValidate", context=context) buffer_format_check_code = load_buffer_utility("BufferFormatCheck", context=context) # See utility code BufferFormatFromTypeInfo _typeinfo_to_format_code = load_buffer_utility("TypeInfoToFormat") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Builtin.py0000644000175000017500000005466200000000000027254 0ustar00vincentvincent00000000000000# # Builtin Definitions # from __future__ import absolute_import from .Symtab import BuiltinScope, StructOrUnionScope from .Code import UtilityCode from .TypeSlots import Signature from . import PyrexTypes from . 
import Options # C-level implementations of builtin types, functions and methods iter_next_utility_code = UtilityCode.load("IterNext", "ObjectHandling.c") getattr_utility_code = UtilityCode.load("GetAttr", "ObjectHandling.c") getattr3_utility_code = UtilityCode.load("GetAttr3", "Builtins.c") pyexec_utility_code = UtilityCode.load("PyExec", "Builtins.c") pyexec_globals_utility_code = UtilityCode.load("PyExecGlobals", "Builtins.c") globals_utility_code = UtilityCode.load("Globals", "Builtins.c") builtin_utility_code = { 'StopAsyncIteration': UtilityCode.load_cached("StopAsyncIteration", "Coroutine.c"), } # mapping from builtins to their C-level equivalents class _BuiltinOverride(object): def __init__(self, py_name, args, ret_type, cname, py_equiv="*", utility_code=None, sig=None, func_type=None, is_strict_signature=False, builtin_return_type=None): self.py_name, self.cname, self.py_equiv = py_name, cname, py_equiv self.args, self.ret_type = args, ret_type self.func_type, self.sig = func_type, sig self.builtin_return_type = builtin_return_type self.is_strict_signature = is_strict_signature self.utility_code = utility_code def build_func_type(self, sig=None, self_arg=None): if sig is None: sig = Signature(self.args, self.ret_type) sig.exception_check = False # not needed for the current builtins func_type = sig.function_type(self_arg) if self.is_strict_signature: func_type.is_strict_signature = True if self.builtin_return_type: func_type.return_type = builtin_types[self.builtin_return_type] return func_type class BuiltinAttribute(object): def __init__(self, py_name, cname=None, field_type=None, field_type_name=None): self.py_name = py_name self.cname = cname or py_name self.field_type_name = field_type_name # can't do the lookup before the type is declared! 
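        # Editor's note: when only field_type_name is given, the actual type
        # is resolved lazily in declare_in_type() below via
        # builtin_scope.lookup(), once the builtin types table has been
        # processed.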
self.field_type = field_type def declare_in_type(self, self_type): if self.field_type_name is not None: # lazy type lookup field_type = builtin_scope.lookup(self.field_type_name).type else: field_type = self.field_type or PyrexTypes.py_object_type entry = self_type.scope.declare(self.py_name, self.cname, field_type, None, 'private') entry.is_variable = True class BuiltinFunction(_BuiltinOverride): def declare_in_scope(self, scope): func_type, sig = self.func_type, self.sig if func_type is None: func_type = self.build_func_type(sig) scope.declare_builtin_cfunction(self.py_name, func_type, self.cname, self.py_equiv, self.utility_code) class BuiltinMethod(_BuiltinOverride): def declare_in_type(self, self_type): method_type, sig = self.func_type, self.sig if method_type is None: # override 'self' type (first argument) self_arg = PyrexTypes.CFuncTypeArg("", self_type, None) self_arg.not_none = True self_arg.accept_builtin_subtypes = True method_type = self.build_func_type(sig, self_arg) self_type.scope.declare_builtin_cfunction( self.py_name, method_type, self.cname, utility_code=self.utility_code) builtin_function_table = [ # name, args, return, C API func, py equiv = "*" BuiltinFunction('abs', "d", "d", "fabs", is_strict_signature = True), BuiltinFunction('abs', "f", "f", "fabsf", is_strict_signature = True), BuiltinFunction('abs', "i", "i", "abs", is_strict_signature = True), BuiltinFunction('abs', "l", "l", "labs", is_strict_signature = True), BuiltinFunction('abs', None, None, "__Pyx_abs_longlong", utility_code = UtilityCode.load("abs_longlong", "Builtins.c"), func_type = PyrexTypes.CFuncType( PyrexTypes.c_longlong_type, [ PyrexTypes.CFuncTypeArg("arg", PyrexTypes.c_longlong_type, None) ], is_strict_signature = True, nogil=True)), ] + list( BuiltinFunction('abs', None, None, "/*abs_{0}*/".format(t.specialization_name()), func_type = PyrexTypes.CFuncType( t, [PyrexTypes.CFuncTypeArg("arg", t, None)], is_strict_signature = True, nogil=True)) for t in (PyrexTypes.c_uint_type, PyrexTypes.c_ulong_type, PyrexTypes.c_ulonglong_type) ) + list( BuiltinFunction('abs', None, None, "__Pyx_c_abs{0}".format(t.funcsuffix), func_type = PyrexTypes.CFuncType( t.real_type, [ PyrexTypes.CFuncTypeArg("arg", t, None) ], is_strict_signature = True, nogil=True)) for t in (PyrexTypes.c_float_complex_type, PyrexTypes.c_double_complex_type, PyrexTypes.c_longdouble_complex_type) ) + [ BuiltinFunction('abs', "O", "O", "__Pyx_PyNumber_Absolute", utility_code=UtilityCode.load("py_abs", "Builtins.c")), #('all', "", "", ""), #('any', "", "", ""), #('ascii', "", "", ""), #('bin', "", "", ""), BuiltinFunction('callable', "O", "b", "__Pyx_PyCallable_Check", utility_code = UtilityCode.load("CallableCheck", "ObjectHandling.c")), #('chr', "", "", ""), #('cmp', "", "", "", ""), # int PyObject_Cmp(PyObject *o1, PyObject *o2, int *result) #('compile', "", "", ""), # PyObject* Py_CompileString( char *str, char *filename, int start) BuiltinFunction('delattr', "OO", "r", "PyObject_DelAttr"), BuiltinFunction('dir', "O", "O", "PyObject_Dir"), BuiltinFunction('divmod', "OO", "O", "PyNumber_Divmod"), BuiltinFunction('exec', "O", "O", "__Pyx_PyExecGlobals", utility_code = pyexec_globals_utility_code), BuiltinFunction('exec', "OO", "O", "__Pyx_PyExec2", utility_code = pyexec_utility_code), BuiltinFunction('exec', "OOO", "O", "__Pyx_PyExec3", utility_code = pyexec_utility_code), #('eval', "", "", ""), #('execfile', "", "", ""), #('filter', "", "", ""), BuiltinFunction('getattr3', "OOO", "O", "__Pyx_GetAttr3", "getattr", 
utility_code=getattr3_utility_code), # Pyrex legacy BuiltinFunction('getattr', "OOO", "O", "__Pyx_GetAttr3", utility_code=getattr3_utility_code), BuiltinFunction('getattr', "OO", "O", "__Pyx_GetAttr", utility_code=getattr_utility_code), BuiltinFunction('hasattr', "OO", "b", "__Pyx_HasAttr", utility_code = UtilityCode.load("HasAttr", "Builtins.c")), BuiltinFunction('hash', "O", "h", "PyObject_Hash"), #('hex', "", "", ""), #('id', "", "", ""), #('input', "", "", ""), BuiltinFunction('intern', "O", "O", "__Pyx_Intern", utility_code = UtilityCode.load("Intern", "Builtins.c")), BuiltinFunction('isinstance', "OO", "b", "PyObject_IsInstance"), BuiltinFunction('issubclass', "OO", "b", "PyObject_IsSubclass"), BuiltinFunction('iter', "OO", "O", "PyCallIter_New"), BuiltinFunction('iter', "O", "O", "PyObject_GetIter"), BuiltinFunction('len', "O", "z", "PyObject_Length"), BuiltinFunction('locals', "", "O", "__pyx_locals"), #('map', "", "", ""), #('max', "", "", ""), #('min', "", "", ""), BuiltinFunction('next', "O", "O", "__Pyx_PyIter_Next", utility_code = iter_next_utility_code), # not available in Py2 => implemented here BuiltinFunction('next', "OO", "O", "__Pyx_PyIter_Next2", utility_code = iter_next_utility_code), # not available in Py2 => implemented here #('oct', "", "", ""), #('open', "ss", "O", "PyFile_FromString"), # not in Py3 ] + [ BuiltinFunction('ord', None, None, "__Pyx_long_cast", func_type=PyrexTypes.CFuncType( PyrexTypes.c_long_type, [PyrexTypes.CFuncTypeArg("c", c_type, None)], is_strict_signature=True)) for c_type in [PyrexTypes.c_py_ucs4_type, PyrexTypes.c_py_unicode_type] ] + [ BuiltinFunction('ord', None, None, "__Pyx_uchar_cast", func_type=PyrexTypes.CFuncType( PyrexTypes.c_uchar_type, [PyrexTypes.CFuncTypeArg("c", c_type, None)], is_strict_signature=True)) for c_type in [PyrexTypes.c_char_type, PyrexTypes.c_schar_type, PyrexTypes.c_uchar_type] ] + [ BuiltinFunction('ord', None, None, "__Pyx_PyObject_Ord", utility_code=UtilityCode.load_cached("object_ord", "Builtins.c"), func_type=PyrexTypes.CFuncType( PyrexTypes.c_long_type, [ PyrexTypes.CFuncTypeArg("c", PyrexTypes.py_object_type, None) ], exception_value="(long)(Py_UCS4)-1")), BuiltinFunction('pow', "OOO", "O", "PyNumber_Power"), BuiltinFunction('pow', "OO", "O", "__Pyx_PyNumber_Power2", utility_code = UtilityCode.load("pow2", "Builtins.c")), #('range', "", "", ""), #('raw_input', "", "", ""), #('reduce', "", "", ""), BuiltinFunction('reload', "O", "O", "PyImport_ReloadModule"), BuiltinFunction('repr', "O", "O", "PyObject_Repr", builtin_return_type='str'), #('round', "", "", ""), BuiltinFunction('setattr', "OOO", "r", "PyObject_SetAttr"), #('sum', "", "", ""), #('sorted', "", "", ""), #('type', "O", "O", "PyObject_Type"), #('unichr', "", "", ""), #('unicode', "", "", ""), #('vars', "", "", ""), #('zip', "", "", ""), # Can't do these easily until we have builtin type entries. #('typecheck', "OO", "i", "PyObject_TypeCheck", False), #('issubtype', "OO", "i", "PyType_IsSubtype", False), # Put in namespace append optimization. BuiltinFunction('__Pyx_PyObject_Append', "OO", "O", "__Pyx_PyObject_Append"), # This is conditionally looked up based on a compiler directive. 
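    # (Editor's note: like the rows above, this maps a Python-level name to
    # a C implementation via Signature codes -- e.g. the 'len' entry's
    # "O" -> "z" signature lets len(obj) compile to PyObject_Length(obj).)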
BuiltinFunction('__Pyx_Globals', "", "O", "__Pyx_Globals", utility_code=globals_utility_code), ] # Builtin types # bool # buffer # classmethod # dict # enumerate # file # float # int # list # long # object # property # slice # staticmethod # super # str # tuple # type # xrange builtin_types_table = [ ("type", "PyType_Type", []), # This conflicts with the C++ bool type, and unfortunately # C++ is too liberal about PyObject* <-> bool conversions, # resulting in unintuitive runtime behavior and segfaults. # ("bool", "PyBool_Type", []), ("int", "PyInt_Type", []), ("long", "PyLong_Type", []), ("float", "PyFloat_Type", []), ("complex", "PyComplex_Type", [BuiltinAttribute('cval', field_type_name = 'Py_complex'), BuiltinAttribute('real', 'cval.real', field_type = PyrexTypes.c_double_type), BuiltinAttribute('imag', 'cval.imag', field_type = PyrexTypes.c_double_type), ]), ("basestring", "PyBaseString_Type", [ BuiltinMethod("join", "TO", "T", "__Pyx_PyBaseString_Join", utility_code=UtilityCode.load("StringJoin", "StringTools.c")), ]), ("bytearray", "PyByteArray_Type", [ ]), ("bytes", "PyBytes_Type", [BuiltinMethod("__contains__", "TO", "b", "PySequence_Contains"), BuiltinMethod("join", "TO", "O", "__Pyx_PyBytes_Join", utility_code=UtilityCode.load("StringJoin", "StringTools.c")), ]), ("str", "PyString_Type", [BuiltinMethod("__contains__", "TO", "b", "PySequence_Contains"), BuiltinMethod("join", "TO", "O", "__Pyx_PyString_Join", builtin_return_type='basestring', utility_code=UtilityCode.load("StringJoin", "StringTools.c")), ]), ("unicode", "PyUnicode_Type", [BuiltinMethod("__contains__", "TO", "b", "PyUnicode_Contains"), BuiltinMethod("join", "TO", "T", "PyUnicode_Join"), ]), ("tuple", "PyTuple_Type", [BuiltinMethod("__contains__", "TO", "b", "PySequence_Contains"), ]), ("list", "PyList_Type", [BuiltinMethod("__contains__", "TO", "b", "PySequence_Contains"), BuiltinMethod("insert", "TzO", "r", "PyList_Insert"), BuiltinMethod("reverse", "T", "r", "PyList_Reverse"), BuiltinMethod("append", "TO", "r", "__Pyx_PyList_Append", utility_code=UtilityCode.load("ListAppend", "Optimize.c")), BuiltinMethod("extend", "TO", "r", "__Pyx_PyList_Extend", utility_code=UtilityCode.load("ListExtend", "Optimize.c")), ]), ("dict", "PyDict_Type", [BuiltinMethod("__contains__", "TO", "b", "PyDict_Contains"), BuiltinMethod("has_key", "TO", "b", "PyDict_Contains"), BuiltinMethod("items", "T", "O", "__Pyx_PyDict_Items", utility_code=UtilityCode.load("py_dict_items", "Builtins.c")), BuiltinMethod("keys", "T", "O", "__Pyx_PyDict_Keys", utility_code=UtilityCode.load("py_dict_keys", "Builtins.c")), BuiltinMethod("values", "T", "O", "__Pyx_PyDict_Values", utility_code=UtilityCode.load("py_dict_values", "Builtins.c")), BuiltinMethod("iteritems", "T", "O", "__Pyx_PyDict_IterItems", utility_code=UtilityCode.load("py_dict_iteritems", "Builtins.c")), BuiltinMethod("iterkeys", "T", "O", "__Pyx_PyDict_IterKeys", utility_code=UtilityCode.load("py_dict_iterkeys", "Builtins.c")), BuiltinMethod("itervalues", "T", "O", "__Pyx_PyDict_IterValues", utility_code=UtilityCode.load("py_dict_itervalues", "Builtins.c")), BuiltinMethod("viewitems", "T", "O", "__Pyx_PyDict_ViewItems", utility_code=UtilityCode.load("py_dict_viewitems", "Builtins.c")), BuiltinMethod("viewkeys", "T", "O", "__Pyx_PyDict_ViewKeys", utility_code=UtilityCode.load("py_dict_viewkeys", "Builtins.c")), BuiltinMethod("viewvalues", "T", "O", "__Pyx_PyDict_ViewValues", utility_code=UtilityCode.load("py_dict_viewvalues", "Builtins.c")), BuiltinMethod("clear", "T", "r", 
"__Pyx_PyDict_Clear", utility_code=UtilityCode.load("py_dict_clear", "Optimize.c")), BuiltinMethod("copy", "T", "T", "PyDict_Copy")]), ("slice", "PySlice_Type", [BuiltinAttribute('start'), BuiltinAttribute('stop'), BuiltinAttribute('step'), ]), # ("file", "PyFile_Type", []), # not in Py3 ("set", "PySet_Type", [BuiltinMethod("__contains__", "TO", "b", "PySequence_Contains"), BuiltinMethod("clear", "T", "r", "PySet_Clear"), # discard() and remove() have a special treatment for unhashable values BuiltinMethod("discard", "TO", "r", "__Pyx_PySet_Discard", utility_code=UtilityCode.load("py_set_discard", "Optimize.c")), BuiltinMethod("remove", "TO", "r", "__Pyx_PySet_Remove", utility_code=UtilityCode.load("py_set_remove", "Optimize.c")), # update is actually variadic (see Github issue #1645) # BuiltinMethod("update", "TO", "r", "__Pyx_PySet_Update", # utility_code=UtilityCode.load_cached("PySet_Update", "Builtins.c")), BuiltinMethod("add", "TO", "r", "PySet_Add"), BuiltinMethod("pop", "T", "O", "PySet_Pop")]), ("frozenset", "PyFrozenSet_Type", []), ("Exception", "((PyTypeObject*)PyExc_Exception)[0]", []), ("StopAsyncIteration", "((PyTypeObject*)__Pyx_PyExc_StopAsyncIteration)[0]", []), ] types_that_construct_their_instance = set([ # some builtin types do not always return an instance of # themselves - these do: 'type', 'bool', 'long', 'float', 'complex', 'bytes', 'unicode', 'bytearray', 'tuple', 'list', 'dict', 'set', 'frozenset' # 'str', # only in Py3.x # 'file', # only in Py2.x ]) builtin_structs_table = [ ('Py_buffer', 'Py_buffer', [("buf", PyrexTypes.c_void_ptr_type), ("obj", PyrexTypes.py_object_type), ("len", PyrexTypes.c_py_ssize_t_type), ("itemsize", PyrexTypes.c_py_ssize_t_type), ("readonly", PyrexTypes.c_bint_type), ("ndim", PyrexTypes.c_int_type), ("format", PyrexTypes.c_char_ptr_type), ("shape", PyrexTypes.c_py_ssize_t_ptr_type), ("strides", PyrexTypes.c_py_ssize_t_ptr_type), ("suboffsets", PyrexTypes.c_py_ssize_t_ptr_type), ("smalltable", PyrexTypes.CArrayType(PyrexTypes.c_py_ssize_t_type, 2)), ("internal", PyrexTypes.c_void_ptr_type), ]), ('Py_complex', 'Py_complex', [('real', PyrexTypes.c_double_type), ('imag', PyrexTypes.c_double_type), ]) ] # set up builtin scope builtin_scope = BuiltinScope() def init_builtin_funcs(): for bf in builtin_function_table: bf.declare_in_scope(builtin_scope) builtin_types = {} def init_builtin_types(): global builtin_types for name, cname, methods in builtin_types_table: utility = builtin_utility_code.get(name) if name == 'frozenset': objstruct_cname = 'PySetObject' elif name == 'bytearray': objstruct_cname = 'PyByteArrayObject' elif name == 'bool': objstruct_cname = None elif name == 'Exception': objstruct_cname = "PyBaseExceptionObject" elif name == 'StopAsyncIteration': objstruct_cname = "PyBaseExceptionObject" else: objstruct_cname = 'Py%sObject' % name.capitalize() the_type = builtin_scope.declare_builtin_type(name, cname, utility, objstruct_cname) builtin_types[name] = the_type for method in methods: method.declare_in_type(the_type) def init_builtin_structs(): for name, cname, attribute_types in builtin_structs_table: scope = StructOrUnionScope(name) for attribute_name, attribute_type in attribute_types: scope.declare_var(attribute_name, attribute_type, None, attribute_name, allow_pyobject=True) builtin_scope.declare_struct_or_union( name, "struct", scope, 1, None, cname = cname) def init_builtins(): init_builtin_structs() init_builtin_types() init_builtin_funcs() builtin_scope.declare_var( '__debug__', 
PyrexTypes.c_const_type(PyrexTypes.c_bint_type), pos=None, cname='(!Py_OptimizeFlag)', is_cdef=True) global list_type, tuple_type, dict_type, set_type, frozenset_type global bytes_type, str_type, unicode_type, basestring_type, slice_type global float_type, bool_type, type_type, complex_type, bytearray_type type_type = builtin_scope.lookup('type').type list_type = builtin_scope.lookup('list').type tuple_type = builtin_scope.lookup('tuple').type dict_type = builtin_scope.lookup('dict').type set_type = builtin_scope.lookup('set').type frozenset_type = builtin_scope.lookup('frozenset').type slice_type = builtin_scope.lookup('slice').type bytes_type = builtin_scope.lookup('bytes').type str_type = builtin_scope.lookup('str').type unicode_type = builtin_scope.lookup('unicode').type basestring_type = builtin_scope.lookup('basestring').type bytearray_type = builtin_scope.lookup('bytearray').type float_type = builtin_scope.lookup('float').type bool_type = builtin_scope.lookup('bool').type complex_type = builtin_scope.lookup('complex').type init_builtins() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/CmdLine.py0000644000175000017500000002332300000000000027147 0ustar00vincentvincent00000000000000# # Cython - Command Line Parsing # from __future__ import absolute_import import os import sys from . import Options usage = """\ Cython (http://cython.org) is a compiler for code written in the Cython language. Cython is based on Pyrex by Greg Ewing. Usage: cython [options] sourcefile.{pyx,py} ... Options: -V, --version Display version number of cython compiler -l, --create-listing Write error messages to a listing file -I, --include-dir Search for include files in named directory (multiple include directories are allowed). -o, --output-file Specify name of generated C file -t, --timestamps Only compile newer source files -f, --force Compile all source files (overrides implied -t) -v, --verbose Be verbose, print file names on multiple compilation -p, --embed-positions If specified, the positions in Cython files of each function definition is embedded in its docstring. --cleanup Release interned objects on python exit, for memory debugging. Level indicates aggressiveness, default 0 releases nothing. -w, --working Sets the working directory for Cython (the directory modules are searched from) --gdb Output debug information for cygdb --gdb-outdir Specify gdb debug information output directory. Implies --gdb. -D, --no-docstrings Strip docstrings from the compiled module. -a, --annotate Produce a colorized HTML version of the source. --annotate-coverage Annotate and include coverage information from cov.xml. --line-directives Produce #line directives pointing to the .pyx source --cplus Output a C++ rather than C file. --embed[=] Generate a main() function that embeds the Python interpreter. -2 Compile based on Python-2 syntax and code semantics. -3 Compile based on Python-3 syntax and code semantics. --3str Compile based on Python-3 syntax and code semantics without assuming unicode by default for string literals under Python 2. --lenient Change some compile time errors to runtime errors to improve Python compatibility --capi-reexport-cincludes Add cincluded headers to any auto-generated header files. 
--fast-fail Abort the compilation on the first error --warning-errors, -Werror Make all warnings into errors --warning-extra, -Wextra Enable extra warnings -X, --directive =[, 1: sys.stderr.write( "cython: Only one source file allowed when using -o\n") sys.exit(1) if len(sources) == 0 and not options.show_version: bad_usage() if Options.embed and len(sources) > 1: sys.stderr.write( "cython: Only one source file allowed when using -embed\n") sys.exit(1) return options, sources ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Code.pxd0000644000175000017500000000637100000000000026655 0ustar00vincentvincent00000000000000 from __future__ import absolute_import cimport cython from ..StringIOTree cimport StringIOTree cdef class UtilityCodeBase(object): cpdef format_code(self, code_string, replace_empty_lines=*) cdef class UtilityCode(UtilityCodeBase): cdef public object name cdef public object proto cdef public object impl cdef public object init cdef public object cleanup cdef public object proto_block cdef public object requires cdef public dict _cache cdef public list specialize_list cdef public object file cpdef none_or_sub(self, s, context) cdef class FunctionState: cdef public set names_taken cdef public object owner cdef public object scope cdef public object error_label cdef public size_t label_counter cdef public set labels_used cdef public object return_label cdef public object continue_label cdef public object break_label cdef public list yield_labels cdef public object return_from_error_cleanup_label # not used in __init__ ? cdef public object exc_vars cdef public object current_except cdef public bint in_try_finally cdef public bint can_trace cdef public bint gil_owned cdef public list temps_allocated cdef public dict temps_free cdef public dict temps_used_type cdef public size_t temp_counter cdef public list collect_temps_stack cdef public object closure_temps cdef public bint should_declare_error_indicator cdef public bint uses_error_indicator @cython.locals(n=size_t) cpdef new_label(self, name=*) cpdef tuple get_loop_labels(self) cpdef set_loop_labels(self, labels) cpdef tuple get_all_labels(self) cpdef set_all_labels(self, labels) cpdef start_collecting_temps(self) cpdef stop_collecting_temps(self) cpdef list temps_in_use(self) cdef class IntConst: cdef public object cname cdef public object value cdef public bint is_long cdef class PyObjectConst: cdef public object cname cdef public object type cdef class StringConst: cdef public object cname cdef public object text cdef public object escaped_value cdef public dict py_strings cdef public list py_versions @cython.locals(intern=bint, is_str=bint, is_unicode=bint) cpdef get_py_string_const(self, encoding, identifier=*, is_str=*, py3str_cstring=*) ## cdef class PyStringConst: ## cdef public object cname ## cdef public object encoding ## cdef public bint is_str ## cdef public bint is_unicode ## cdef public bint intern #class GlobalState(object): #def funccontext_property(name): cdef class CCodeWriter(object): cdef readonly StringIOTree buffer cdef readonly list pyclass_stack cdef readonly object globalstate cdef readonly object funcstate cdef object code_config cdef object last_pos cdef object last_marked_pos cdef Py_ssize_t level cdef public Py_ssize_t call_level # debug-only, see Nodes.py cdef bint bol cpdef write(self, s) cpdef put(self, code) cpdef put_safe(self, code) cpdef putln(self, code=*, bint safe=*) 
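    # Editor's note: in .pxd declaration files, `arg=*` marks an argument
    # that takes a default value in the corresponding implementation file
    # without restating the default here.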
@cython.final cdef increase_indent(self) @cython.final cdef decrease_indent(self) cdef class PyrexCodeWriter: cdef public object f cdef public Py_ssize_t level ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Code.py0000644000175000017500000027407000000000000026515 0ustar00vincentvincent00000000000000# cython: language_level = 2 # cython: auto_pickle=False # # Code output module # from __future__ import absolute_import import cython cython.declare(os=object, re=object, operator=object, textwrap=object, Template=object, Naming=object, Options=object, StringEncoding=object, Utils=object, SourceDescriptor=object, StringIOTree=object, DebugFlags=object, basestring=object, defaultdict=object, closing=object, partial=object) import os import re import shutil import sys import operator import textwrap from string import Template from functools import partial from contextlib import closing from collections import defaultdict try: import hashlib except ImportError: import md5 as hashlib from . import Naming from . import Options from . import DebugFlags from . import StringEncoding from . import Version from .. import Utils from .Scanning import SourceDescriptor from ..StringIOTree import StringIOTree try: from __builtin__ import basestring except ImportError: from builtins import str as basestring KEYWORDS_MUST_BE_BYTES = sys.version_info < (2, 7) non_portable_builtins_map = { # builtins that have different names in different Python versions 'bytes' : ('PY_MAJOR_VERSION < 3', 'str'), 'unicode' : ('PY_MAJOR_VERSION >= 3', 'str'), 'basestring' : ('PY_MAJOR_VERSION >= 3', 'str'), 'xrange' : ('PY_MAJOR_VERSION >= 3', 'range'), 'raw_input' : ('PY_MAJOR_VERSION >= 3', 'input'), } ctypedef_builtins_map = { # types of builtins in "ctypedef class" statements which we don't # import either because the names conflict with C types or because # the type simply is not exposed. 'py_int' : '&PyInt_Type', 'py_long' : '&PyLong_Type', 'py_float' : '&PyFloat_Type', 'wrapper_descriptor' : '&PyWrapperDescr_Type', } basicsize_builtins_map = { # builtins whose type has a different tp_basicsize than sizeof(...) 'PyTypeObject': 'PyHeapTypeObject', } uncachable_builtins = [ # Global/builtin names that cannot be cached because they may or may not # be available at import time, for various reasons: ## - Py3.7+ 'breakpoint', # might deserve an implementation in Cython ## - Py3.4+ '__loader__', '__spec__', ## - Py3+ 'BlockingIOError', 'BrokenPipeError', 'ChildProcessError', 'ConnectionAbortedError', 'ConnectionError', 'ConnectionRefusedError', 'ConnectionResetError', 'FileExistsError', 'FileNotFoundError', 'InterruptedError', 'IsADirectoryError', 'ModuleNotFoundError', 'NotADirectoryError', 'PermissionError', 'ProcessLookupError', 'RecursionError', 'ResourceWarning', #'StopAsyncIteration', # backported 'TimeoutError', '__build_class__', 'ascii', # might deserve an implementation in Cython #'exec', # implemented in Cython ## - Py2.7+ 'memoryview', ## - platform specific 'WindowsError', ## - others '_', # e.g. 
used by gettext ] special_py_methods = set([ '__cinit__', '__dealloc__', '__richcmp__', '__next__', '__await__', '__aiter__', '__anext__', '__getreadbuffer__', '__getwritebuffer__', '__getsegcount__', '__getcharbuffer__', '__getbuffer__', '__releasebuffer__' ]) modifier_output_mapper = { 'inline': 'CYTHON_INLINE' }.get class IncludeCode(object): """ An include file and/or verbatim C code to be included in the generated sources. """ # attributes: # # pieces {order: unicode}: pieces of C code to be generated. # For the included file, the key "order" is zero. # For verbatim include code, the "order" is the "order" # attribute of the original IncludeCode where this piece # of C code was first added. This is needed to prevent # duplication if the same include code is found through # multiple cimports. # location int: where to put this include in the C sources, one # of the constants INITIAL, EARLY, LATE # order int: sorting order (automatically set by increasing counter) # Constants for location. If the same include occurs with different # locations, the earliest one takes precedense. INITIAL = 0 EARLY = 1 LATE = 2 counter = 1 # Counter for "order" def __init__(self, include=None, verbatim=None, late=True, initial=False): self.order = self.counter type(self).counter += 1 self.pieces = {} if include: if include[0] == '<' and include[-1] == '>': self.pieces[0] = u'#include {0}'.format(include) late = False # system include is never late else: self.pieces[0] = u'#include "{0}"'.format(include) if verbatim: self.pieces[self.order] = verbatim if initial: self.location = self.INITIAL elif late: self.location = self.LATE else: self.location = self.EARLY def dict_update(self, d, key): """ Insert `self` in dict `d` with key `key`. If that key already exists, update the attributes of the existing value with `self`. """ if key in d: other = d[key] other.location = min(self.location, other.location) other.pieces.update(self.pieces) else: d[key] = self def sortkey(self): return self.order def mainpiece(self): """ Return the main piece of C code, corresponding to the include file. If there was no include file, return None. """ return self.pieces.get(0) def write(self, code): # Write values of self.pieces dict, sorted by the keys for k in sorted(self.pieces): code.putln(self.pieces[k]) def get_utility_dir(): # make this a function and not global variables: # http://trac.cython.org/cython_trac/ticket/475 Cython_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) return os.path.join(Cython_dir, "Utility") class UtilityCodeBase(object): """ Support for loading utility code from a file. Code sections in the file can be specified as follows: ##### MyUtility.proto ##### [proto declarations] ##### MyUtility.init ##### [code run at module initialization] ##### MyUtility ##### #@requires: MyOtherUtility #@substitute: naming [definitions] for prototypes and implementation respectively. For non-python or -cython files backslashes should be used instead. 5 to 30 comment characters may be used on either side. If the @cname decorator is not used and this is a CythonUtilityCode, one should pass in the 'name' keyword argument to be used for name mangling of such entries. 
""" is_cython_utility = False _utility_cache = {} @classmethod def _add_utility(cls, utility, type, lines, begin_lineno, tags=None): if utility is None: return code = '\n'.join(lines) if tags and 'substitute' in tags and tags['substitute'] == set(['naming']): del tags['substitute'] try: code = Template(code).substitute(vars(Naming)) except (KeyError, ValueError) as e: raise RuntimeError("Error parsing templated utility code of type '%s' at line %d: %s" % ( type, begin_lineno, e)) # remember correct line numbers at least until after templating code = '\n' * begin_lineno + code if type == 'proto': utility[0] = code elif type == 'impl': utility[1] = code else: all_tags = utility[2] if KEYWORDS_MUST_BE_BYTES: type = type.encode('ASCII') all_tags[type] = code if tags: all_tags = utility[2] for name, values in tags.items(): if KEYWORDS_MUST_BE_BYTES: name = name.encode('ASCII') all_tags.setdefault(name, set()).update(values) @classmethod def load_utilities_from_file(cls, path): utilities = cls._utility_cache.get(path) if utilities: return utilities filename = os.path.join(get_utility_dir(), path) _, ext = os.path.splitext(path) if ext in ('.pyx', '.py', '.pxd', '.pxi'): comment = '#' strip_comments = partial(re.compile(r'^\s*#.*').sub, '') rstrip = StringEncoding._unicode.rstrip else: comment = '/' strip_comments = partial(re.compile(r'^\s*//.*|/\*[^*]*\*/').sub, '') rstrip = partial(re.compile(r'\s+(\\?)$').sub, r'\1') match_special = re.compile( (r'^%(C)s{5,30}\s*(?P(?:\w|\.)+)\s*%(C)s{5,30}|' r'^%(C)s+@(?P\w+)\s*:\s*(?P(?:\w|[.:])+)') % {'C': comment}).match match_type = re.compile(r'(.+)[.](proto(?:[.]\S+)?|impl|init|cleanup)$').match with closing(Utils.open_source_file(filename, encoding='UTF-8')) as f: all_lines = f.readlines() utilities = defaultdict(lambda: [None, None, {}]) lines = [] tags = defaultdict(set) utility = type = None begin_lineno = 0 for lineno, line in enumerate(all_lines): m = match_special(line) if m: if m.group('name'): cls._add_utility(utility, type, lines, begin_lineno, tags) begin_lineno = lineno + 1 del lines[:] tags.clear() name = m.group('name') mtype = match_type(name) if mtype: name, type = mtype.groups() else: type = 'impl' utility = utilities[name] else: tags[m.group('tag')].add(m.group('value')) lines.append('') # keep line number correct else: lines.append(rstrip(strip_comments(line))) if utility is None: raise ValueError("Empty utility code file") # Don't forget to add the last utility code cls._add_utility(utility, type, lines, begin_lineno, tags) utilities = dict(utilities) # un-defaultdict-ify cls._utility_cache[path] = utilities return utilities @classmethod def load(cls, util_code_name, from_file=None, **kwargs): """ Load utility code from a file specified by from_file (relative to Cython/Utility) and name util_code_name. If from_file is not given, load it from the file util_code_name.*. There should be only one file matched by this pattern. """ if '::' in util_code_name: from_file, util_code_name = util_code_name.rsplit('::', 1) if not from_file: utility_dir = get_utility_dir() prefix = util_code_name + '.' 
try: listing = os.listdir(utility_dir) except OSError: # XXX the code below assumes as 'zipimport.zipimporter' instance # XXX should be easy to generalize, but too lazy right now to write it import zipfile global __loader__ loader = __loader__ archive = loader.archive with closing(zipfile.ZipFile(archive)) as fileobj: listing = [os.path.basename(name) for name in fileobj.namelist() if os.path.join(archive, name).startswith(utility_dir)] files = [filename for filename in listing if filename.startswith(prefix)] if not files: raise ValueError("No match found for utility code " + util_code_name) if len(files) > 1: raise ValueError("More than one filename match found for utility code " + util_code_name) from_file = files[0] utilities = cls.load_utilities_from_file(from_file) proto, impl, tags = utilities[util_code_name] if tags: orig_kwargs = kwargs.copy() for name, values in tags.items(): if name in kwargs: continue # only pass lists when we have to: most argument expect one value or None if name == 'requires': if orig_kwargs: values = [cls.load(dep, from_file, **orig_kwargs) for dep in sorted(values)] else: # dependencies are rarely unique, so use load_cached() when we can values = [cls.load_cached(dep, from_file) for dep in sorted(values)] elif not values: values = None elif len(values) == 1: values = list(values)[0] kwargs[name] = values if proto is not None: kwargs['proto'] = proto if impl is not None: kwargs['impl'] = impl if 'name' not in kwargs: kwargs['name'] = util_code_name if 'file' not in kwargs and from_file: kwargs['file'] = from_file return cls(**kwargs) @classmethod def load_cached(cls, utility_code_name, from_file=None, __cache={}): """ Calls .load(), but using a per-type cache based on utility name and file name. """ key = (cls, from_file, utility_code_name) try: return __cache[key] except KeyError: pass code = __cache[key] = cls.load(utility_code_name, from_file) return code @classmethod def load_as_string(cls, util_code_name, from_file=None, **kwargs): """ Load a utility code as a string. Returns (proto, implementation) """ util = cls.load(util_code_name, from_file, **kwargs) proto, impl = util.proto, util.impl return util.format_code(proto), util.format_code(impl) def format_code(self, code_string, replace_empty_lines=re.compile(r'\n\n+').sub): """ Format a code section for output. """ if code_string: code_string = replace_empty_lines('\n', code_string.strip()) + '\n\n' return code_string def __str__(self): return "<%s(%s)>" % (type(self).__name__, self.name) def get_tree(self, **kwargs): pass def __deepcopy__(self, memodict=None): # No need to deep-copy utility code since it's essentially immutable. return self class UtilityCode(UtilityCodeBase): """ Stores utility code to add during code generation. See GlobalState.put_utility_code. hashes/equals by instance proto C prototypes impl implementation code init code to call on module initialization requires utility code dependencies proto_block the place in the resulting file where the prototype should end up name name of the utility code (or None) file filename of the utility code file this utility was loaded from (or None) """ def __init__(self, proto=None, impl=None, init=None, cleanup=None, requires=None, proto_block='utility_code_proto', name=None, file=None): # proto_block: Which code block to dump prototype in. See GlobalState. 
self.proto = proto self.impl = impl self.init = init self.cleanup = cleanup self.requires = requires self._cache = {} self.specialize_list = [] self.proto_block = proto_block self.name = name self.file = file def __hash__(self): return hash((self.proto, self.impl)) def __eq__(self, other): if self is other: return True self_type, other_type = type(self), type(other) if self_type is not other_type and not (isinstance(other, self_type) or isinstance(self, other_type)): return False self_proto = getattr(self, 'proto', None) other_proto = getattr(other, 'proto', None) return (self_proto, self.impl) == (other_proto, other.impl) def none_or_sub(self, s, context): """ Format a string in this utility code with context. If None, do nothing. """ if s is None: return None return s % context def specialize(self, pyrex_type=None, **data): # Dicts aren't hashable... if pyrex_type is not None: data['type'] = pyrex_type.empty_declaration_code() data['type_name'] = pyrex_type.specialization_name() key = tuple(sorted(data.items())) try: return self._cache[key] except KeyError: if self.requires is None: requires = None else: requires = [r.specialize(data) for r in self.requires] s = self._cache[key] = UtilityCode( self.none_or_sub(self.proto, data), self.none_or_sub(self.impl, data), self.none_or_sub(self.init, data), self.none_or_sub(self.cleanup, data), requires, self.proto_block) self.specialize_list.append(s) return s def inject_string_constants(self, impl, output): """Replace 'PYIDENT("xyz")' by a constant Python identifier cname. """ if 'PYIDENT(' not in impl and 'PYUNICODE(' not in impl: return False, impl replacements = {} def externalise(matchobj): key = matchobj.groups() try: cname = replacements[key] except KeyError: str_type, name = key cname = replacements[key] = output.get_py_string_const( StringEncoding.EncodedString(name), identifier=str_type == 'IDENT').cname return cname impl = re.sub(r'PY(IDENT|UNICODE)\("([^"]+)"\)', externalise, impl) assert 'PYIDENT(' not in impl and 'PYUNICODE(' not in impl return bool(replacements), impl def inject_unbound_methods(self, impl, output): """Replace 'UNBOUND_METHOD(type, "name")' by a constant Python identifier cname. 
""" if 'CALL_UNBOUND_METHOD(' not in impl: return False, impl utility_code = set() def externalise(matchobj): type_cname, method_name, obj_cname, args = matchobj.groups() args = [arg.strip() for arg in args[1:].split(',')] if args else [] assert len(args) < 3, "CALL_UNBOUND_METHOD() does not support %d call arguments" % len(args) return output.cached_unbound_method_call_code(obj_cname, type_cname, method_name, args) impl = re.sub( r'CALL_UNBOUND_METHOD\(' r'([a-zA-Z_]+),' # type cname r'\s*"([^"]+)",' # method name r'\s*([^),]+)' # object cname r'((?:,\s*[^),]+)*)' # args* r'\)', externalise, impl) assert 'CALL_UNBOUND_METHOD(' not in impl for helper in sorted(utility_code): output.use_utility_code(UtilityCode.load_cached(helper, "ObjectHandling.c")) return bool(utility_code), impl def wrap_c_strings(self, impl): """Replace CSTRING('''xyz''') by a C compatible string """ if 'CSTRING(' not in impl: return impl def split_string(matchobj): content = matchobj.group(1).replace('"', '\042') return ''.join( '"%s\\n"\n' % line if not line.endswith('\\') or line.endswith('\\\\') else '"%s"\n' % line[:-1] for line in content.splitlines()) impl = re.sub(r'CSTRING\(\s*"""([^"]*(?:"[^"]+)*)"""\s*\)', split_string, impl) assert 'CSTRING(' not in impl return impl def put_code(self, output): if self.requires: for dependency in self.requires: output.use_utility_code(dependency) if self.proto: writer = output[self.proto_block] writer.putln("/* %s.proto */" % self.name) writer.put_or_include( self.format_code(self.proto), '%s_proto' % self.name) if self.impl: impl = self.format_code(self.wrap_c_strings(self.impl)) is_specialised1, impl = self.inject_string_constants(impl, output) is_specialised2, impl = self.inject_unbound_methods(impl, output) writer = output['utility_code_def'] writer.putln("/* %s */" % self.name) if not (is_specialised1 or is_specialised2): # no module specific adaptations => can be reused writer.put_or_include(impl, '%s_impl' % self.name) else: writer.put(impl) if self.init: writer = output['init_globals'] writer.putln("/* %s.init */" % self.name) if isinstance(self.init, basestring): writer.put(self.format_code(self.init)) else: self.init(writer, output.module_pos) writer.putln(writer.error_goto_if_PyErr(output.module_pos)) writer.putln() if self.cleanup and Options.generate_cleanup_code: writer = output['cleanup_globals'] writer.putln("/* %s.cleanup */" % self.name) if isinstance(self.cleanup, basestring): writer.put_or_include( self.format_code(self.cleanup), '%s_cleanup' % self.name) else: self.cleanup(writer, output.module_pos) def sub_tempita(s, context, file=None, name=None): "Run tempita on string s with given context." 
def sub_tempita(s, context, file=None, name=None):
    "Run tempita on string s with given context."
    if not s:
        return None

    if file:
        context['__name'] = "%s:%s" % (file, name)
    elif name:
        context['__name'] = name

    from ..Tempita import sub
    return sub(s, **context)


class TempitaUtilityCode(UtilityCode):
    def __init__(self, name=None, proto=None, impl=None, init=None, file=None, context=None, **kwargs):
        if context is None:
            context = {}
        proto = sub_tempita(proto, context, file, name)
        impl = sub_tempita(impl, context, file, name)
        init = sub_tempita(init, context, file, name)
        super(TempitaUtilityCode, self).__init__(
            proto, impl, init=init, name=name, file=file, **kwargs)

    @classmethod
    def load_cached(cls, utility_code_name, from_file=None, context=None, __cache={}):
        context_key = tuple(sorted(context.items())) if context else None
        assert hash(context_key) is not None  # raise TypeError if not hashable
        key = (cls, from_file, utility_code_name, context_key)
        try:
            return __cache[key]
        except KeyError:
            pass
        code = __cache[key] = cls.load(utility_code_name, from_file, context=context)
        return code

    def none_or_sub(self, s, context):
        """
        Format a string in this utility code with context. If None, do nothing.
        """
        if s is None:
            return None
        return sub_tempita(s, context, self.file, self.name)


class LazyUtilityCode(UtilityCodeBase):
    """
    Utility code that calls a callback with the root code writer when
    available. Useful when you only have 'env' but not 'code'.
    """
    __name__ = '<lazy>'

    requires = None

    def __init__(self, callback):
        self.callback = callback

    def put_code(self, globalstate):
        utility = self.callback(globalstate.rootwriter)
        globalstate.use_utility_code(utility)
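# A brief usage sketch (the utility-code name, file and context are
# illustrative; load_cached() memoizes per (class, file, name, context) key):
#
#     util_a = TempitaUtilityCode.load_cached("SomeHelper", "SomeFile.c",
#                                             context={'type': 'int'})
#     util_b = TempitaUtilityCode.load_cached("SomeHelper", "SomeFile.c",
#                                             context={'type': 'int'})
#     assert util_a is util_b  # same cache entry for an equal context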
class FunctionState(object):
    # return_label     string          function return point label
    # error_label      string          error catch point label
    # continue_label   string          loop continue point label
    # break_label      string          loop break point label
    # return_from_error_cleanup_label string
    # label_counter    integer         counter for naming labels
    # in_try_finally   boolean         inside try of try...finally
    # exc_vars         (string * 3)    exception variables for reraise, or None
    # can_trace        boolean         line tracing is supported in the current context
    # scope            Scope           the scope object of the current function

    # Not used for now, perhaps later
    def __init__(self, owner, names_taken=set(), scope=None):
        self.names_taken = names_taken
        self.owner = owner
        self.scope = scope

        self.error_label = None
        self.label_counter = 0
        self.labels_used = set()
        self.return_label = self.new_label()
        self.new_error_label()
        self.continue_label = None
        self.break_label = None
        self.yield_labels = []

        self.in_try_finally = 0
        self.exc_vars = None
        self.current_except = None
        self.can_trace = False
        self.gil_owned = True

        self.temps_allocated = []  # of (name, type, manage_ref, static)
        self.temps_free = {}  # (type, manage_ref) -> list of free vars with same type/managed status
        self.temps_used_type = {}  # name -> (type, manage_ref)
        self.temp_counter = 0
        self.closure_temps = None

        # This is used to collect temporaries, useful to find out which temps
        # need to be privatized in parallel sections
        self.collect_temps_stack = []

        # This is used for the error indicator, which needs to be local to the
        # function. It used to be global, which relies on the GIL being held.
        # However, exceptions may need to be propagated through 'nogil'
        # sections, in which case we introduce a race condition.
        self.should_declare_error_indicator = False
        self.uses_error_indicator = False

    # labels

    def new_label(self, name=None):
        n = self.label_counter
        self.label_counter = n + 1
        label = "%s%d" % (Naming.label_prefix, n)
        if name is not None:
            label += '_' + name
        return label

    def new_yield_label(self, expr_type='yield'):
        label = self.new_label('resume_from_%s' % expr_type)
        num_and_label = (len(self.yield_labels) + 1, label)
        self.yield_labels.append(num_and_label)
        return num_and_label

    def new_error_label(self):
        old_err_lbl = self.error_label
        self.error_label = self.new_label('error')
        return old_err_lbl

    def get_loop_labels(self):
        return (
            self.continue_label,
            self.break_label)

    def set_loop_labels(self, labels):
        (self.continue_label,
         self.break_label) = labels

    def new_loop_labels(self):
        old_labels = self.get_loop_labels()
        self.set_loop_labels(
            (self.new_label("continue"),
             self.new_label("break")))
        return old_labels

    def get_all_labels(self):
        return (
            self.continue_label,
            self.break_label,
            self.return_label,
            self.error_label)

    def set_all_labels(self, labels):
        (self.continue_label,
         self.break_label,
         self.return_label,
         self.error_label) = labels

    def all_new_labels(self):
        old_labels = self.get_all_labels()
        new_labels = []
        for old_label, name in zip(old_labels, ['continue', 'break', 'return', 'error']):
            if old_label:
                new_labels.append(self.new_label(name))
            else:
                new_labels.append(old_label)
        self.set_all_labels(new_labels)
        return old_labels

    def use_label(self, lbl):
        self.labels_used.add(lbl)

    def label_used(self, lbl):
        return lbl in self.labels_used

    # temp handling

    def allocate_temp(self, type, manage_ref, static=False):
        """
        Allocates a temporary (which may create a new one or get a previously
        allocated and released one of the same type). Type is simply registered
        and handed back, but will usually be a PyrexType.

        If type.is_pyobject, manage_ref comes into play. If manage_ref is set to
        True, the temp will be decref-ed on return statements and in exception
        handling clauses. Otherwise the caller has to deal with any reference
        counting of the variable.

        If not type.is_pyobject, then manage_ref will be ignored, but it
        still has to be passed. It is recommended to pass False by convention
        if it is known that type will never be a Python object.

        static=True marks the temporary declaration with "static".
        This is only used when allocating backing store for
        module-level C array literals.

        A C string referring to the variable is returned.
        """
        if type.is_const and not type.is_reference:
            type = type.const_base_type
        elif type.is_reference and not type.is_fake_reference:
            type = type.ref_base_type
        if not type.is_pyobject and not type.is_memoryviewslice:
            # Make manage_ref canonical, so that manage_ref will always mean
            # a decref is needed.
            manage_ref = False

        freelist = self.temps_free.get((type, manage_ref))
        if freelist is not None and freelist[0]:
            result = freelist[0].pop()
            freelist[1].remove(result)
        else:
            while True:
                self.temp_counter += 1
                result = "%s%d" % (Naming.codewriter_temp_prefix, self.temp_counter)
                if result not in self.names_taken: break
            self.temps_allocated.append((result, type, manage_ref, static))
        self.temps_used_type[result] = (type, manage_ref)
        if DebugFlags.debug_temp_code_comments:
            self.owner.putln("/* %s allocated (%s) */" % (result, type))

        if self.collect_temps_stack:
            self.collect_temps_stack[-1].add((result, type))

        return result
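    # A minimal pairing sketch for the temp allocator (`state` is assumed to
    # be a FunctionState and `py_object_type` a PyrexType):
    #
    #     temp = state.allocate_temp(py_object_type, manage_ref=True)
    #     # ... emit C code that uses `temp` ...
    #     state.release_temp(temp)  # returns it to the freelist for reuse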
""" type, manage_ref = self.temps_used_type[name] freelist = self.temps_free.get((type, manage_ref)) if freelist is None: freelist = ([], set()) # keep order in list and make lookups in set fast self.temps_free[(type, manage_ref)] = freelist if name in freelist[1]: raise RuntimeError("Temp %s freed twice!" % name) freelist[0].append(name) freelist[1].add(name) if DebugFlags.debug_temp_code_comments: self.owner.putln("/* %s released */" % name) def temps_in_use(self): """Return a list of (cname,type,manage_ref) tuples of temp names and their type that are currently in use. """ used = [] for name, type, manage_ref, static in self.temps_allocated: freelist = self.temps_free.get((type, manage_ref)) if freelist is None or name not in freelist[1]: used.append((name, type, manage_ref and type.is_pyobject)) return used def temps_holding_reference(self): """Return a list of (cname,type) tuples of temp names and their type that are currently in use. This includes only temps of a Python object type which owns its reference. """ return [(name, type) for name, type, manage_ref in self.temps_in_use() if manage_ref and type.is_pyobject] def all_managed_temps(self): """Return a list of (cname, type) tuples of refcount-managed Python objects. """ return [(cname, type) for cname, type, manage_ref, static in self.temps_allocated if manage_ref] def all_free_managed_temps(self): """Return a list of (cname, type) tuples of refcount-managed Python objects that are not currently in use. This is used by try-except and try-finally blocks to clean up temps in the error case. """ return sorted([ # Enforce deterministic order. (cname, type) for (type, manage_ref), freelist in self.temps_free.items() if manage_ref for cname in freelist[0] ]) def start_collecting_temps(self): """ Useful to find out which temps were used in a code block """ self.collect_temps_stack.append(set()) def stop_collecting_temps(self): return self.collect_temps_stack.pop() def init_closure_temps(self, scope): self.closure_temps = ClosureTempAllocator(scope) class NumConst(object): """Global info about a Python number constant held by GlobalState. cname string value string py_type string int, long, float value_code string evaluation code if different from value """ def __init__(self, cname, value, py_type, value_code=None): self.cname = cname self.value = value self.py_type = py_type self.value_code = value_code or value class PyObjectConst(object): """Global info about a generic constant held by GlobalState. """ # cname string # type PyrexType def __init__(self, cname, type): self.cname = cname self.type = type cython.declare(possible_unicode_identifier=object, possible_bytes_identifier=object, replace_identifier=object, find_alphanums=object) possible_unicode_identifier = re.compile(br"(?![0-9])\w+$".decode('ascii'), re.U).match possible_bytes_identifier = re.compile(r"(?![0-9])\w+$".encode('ASCII')).match replace_identifier = re.compile(r'[^a-zA-Z0-9_]+').sub find_alphanums = re.compile('([a-zA-Z0-9]+)').findall class StringConst(object): """Global info about a C string constant held by GlobalState. 
""" # cname string # text EncodedString or BytesLiteral # py_strings {(identifier, encoding) : PyStringConst} def __init__(self, cname, text, byte_string): self.cname = cname self.text = text self.escaped_value = StringEncoding.escape_byte_string(byte_string) self.py_strings = None self.py_versions = [] def add_py_version(self, version): if not version: self.py_versions = [2, 3] elif version not in self.py_versions: self.py_versions.append(version) def get_py_string_const(self, encoding, identifier=None, is_str=False, py3str_cstring=None): py_strings = self.py_strings text = self.text is_str = bool(identifier or is_str) is_unicode = encoding is None and not is_str if encoding is None: # unicode string encoding_key = None else: # bytes or str encoding = encoding.lower() if encoding in ('utf8', 'utf-8', 'ascii', 'usascii', 'us-ascii'): encoding = None encoding_key = None else: encoding_key = ''.join(find_alphanums(encoding)) key = (is_str, is_unicode, encoding_key, py3str_cstring) if py_strings is not None: try: return py_strings[key] except KeyError: pass else: self.py_strings = {} if identifier: intern = True elif identifier is None: if isinstance(text, bytes): intern = bool(possible_bytes_identifier(text)) else: intern = bool(possible_unicode_identifier(text)) else: intern = False if intern: prefix = Naming.interned_prefixes['str'] else: prefix = Naming.py_const_prefix if encoding_key: encoding_prefix = '_%s' % encoding_key else: encoding_prefix = '' pystring_cname = "%s%s%s_%s" % ( prefix, (is_str and 's') or (is_unicode and 'u') or 'b', encoding_prefix, self.cname[len(Naming.const_prefix):]) py_string = PyStringConst( pystring_cname, encoding, is_unicode, is_str, py3str_cstring, intern) self.py_strings[key] = py_string return py_string class PyStringConst(object): """Global info about a Python string constant held by GlobalState. """ # cname string # py3str_cstring string # encoding string # intern boolean # is_unicode boolean # is_str boolean def __init__(self, cname, encoding, is_unicode, is_str=False, py3str_cstring=None, intern=False): self.cname = cname self.py3str_cstring = py3str_cstring self.encoding = encoding self.is_str = is_str self.is_unicode = is_unicode self.intern = intern def __lt__(self, other): return self.cname < other.cname class GlobalState(object): # filename_table {string : int} for finding filename table indexes # filename_list [string] filenames in filename table order # input_file_contents dict contents (=list of lines) of any file that was used as input # to create this output C code. This is # used to annotate the comments. # # utility_codes set IDs of used utility code (to avoid reinsertion) # # declared_cnames {string:Entry} used in a transition phase to merge pxd-declared # constants etc. into the pyx-declared ones (i.e, # check if constants are already added). # In time, hopefully the literals etc. will be # supplied directly instead. # # const_cnames_used dict global counter for unique constant identifiers # # parts {string:CCodeWriter} # interned_strings # consts # interned_nums # directives set Temporary variable used to track # the current set of directives in the code generation # process. directives = {} code_layout = [ 'h_code', 'filename_table', 'utility_code_proto_before_types', 'numeric_typedefs', # Let these detailed individual parts stay!, 'complex_type_declarations', # as the proper solution is to make a full DAG... 
class GlobalState(object):
    # filename_table   {string : int}  for finding filename table indexes
    # filename_list    [string]        filenames in filename table order
    # input_file_contents dict         contents (=list of lines) of any file that was used as input
    #                                  to create this output C code.  This is
    #                                  used to annotate the comments.
    #
    # utility_codes    set             IDs of used utility code (to avoid reinsertion)
    #
    # declared_cnames  {string:Entry}  used in a transition phase to merge pxd-declared
    #                                  constants etc. into the pyx-declared ones (i.e,
    #                                  check if constants are already added).
    #                                  In time, hopefully the literals etc. will be
    #                                  supplied directly instead.
    #
    # const_cnames_used  dict          global counter for unique constant identifiers
    #
    # parts            {string:CCodeWriter}
    #                  interned_strings
    #                  consts
    #                  interned_nums
    #
    # directives       set             Temporary variable used to track
    #                                  the current set of directives in the code generation
    #                                  process.

    directives = {}

    code_layout = [
        'h_code',
        'filename_table',
        'utility_code_proto_before_types',
        'numeric_typedefs',           # Let these detailed individual parts stay!,
        'complex_type_declarations',  # as the proper solution is to make a full DAG...
        'type_declarations',          # More coarse-grained blocks would simply hide
        'utility_code_proto',         # the ugliness, not fix it
        'module_declarations',
        'typeinfo',
        'before_global_var',
        'global_var',
        'string_decls',
        'decls',
        'late_includes',
        'all_the_rest',
        'pystring_table',
        'cached_builtins',
        'cached_constants',
        'init_globals',
        'init_module',
        'cleanup_globals',
        'cleanup_module',
        'main_method',
        'utility_code_def',
        'end'
    ]

    def __init__(self, writer, module_node, code_config, common_utility_include_dir=None):
        self.filename_table = {}
        self.filename_list = []
        self.input_file_contents = {}
        self.utility_codes = set()
        self.declared_cnames = {}
        self.in_utility_code_generation = False
        self.code_config = code_config
        self.common_utility_include_dir = common_utility_include_dir
        self.parts = {}
        self.module_node = module_node  # because some utility code generation needs it
                                        # (generating backwards-compatible Get/ReleaseBuffer)

        self.const_cnames_used = {}
        self.string_const_index = {}
        self.dedup_const_index = {}
        self.pyunicode_ptr_const_index = {}
        self.num_const_index = {}
        self.py_constants = []
        self.cached_cmethods = {}
        self.initialised_constants = set()

        writer.set_global_state(self)
        self.rootwriter = writer

    def initialize_main_c_code(self):
        rootwriter = self.rootwriter
        for part in self.code_layout:
            self.parts[part] = rootwriter.insertion_point()

        if not Options.cache_builtins:
            del self.parts['cached_builtins']
        else:
            w = self.parts['cached_builtins']
            w.enter_cfunc_scope()
            w.putln("static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {")

        w = self.parts['cached_constants']
        w.enter_cfunc_scope()
        w.putln("")
        w.putln("static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {")
        w.put_declare_refcount_context()
        w.put_setup_refcount_context("__Pyx_InitCachedConstants")

        w = self.parts['init_globals']
        w.enter_cfunc_scope()
        w.putln("")
        w.putln("static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {")

        if not Options.generate_cleanup_code:
            del self.parts['cleanup_globals']
        else:
            w = self.parts['cleanup_globals']
            w.enter_cfunc_scope()
            w.putln("")
            w.putln("static CYTHON_SMALL_CODE void __Pyx_CleanupGlobals(void) {")

        code = self.parts['utility_code_proto']
        code.putln("")
        code.putln("/* --- Runtime support code (head) --- */")

        code = self.parts['utility_code_def']
        if self.code_config.emit_linenums:
            code.write('\n#line 1 "cython_utility"\n')
        code.putln("")
        code.putln("/* --- Runtime support code --- */")

    def finalize_main_c_code(self):
        self.close_global_decls()

        #
        # utility_code_def
        #
        code = self.parts['utility_code_def']
        util = TempitaUtilityCode.load_cached("TypeConversions", "TypeConversion.c")
        code.put(util.format_code(util.impl))
        code.putln("")

    def __getitem__(self, key):
        return self.parts[key]
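    # A rough usage sketch (names are illustrative): after
    # initialize_main_c_code(), every entry of code_layout is an insertion
    # point, so later passes can append to any part out of order:
    #
    #     state = GlobalState(rootwriter, module_node, code_config)
    #     state.initialize_main_c_code()
    #     state['decls'].putln("static PyObject *__pyx_example;")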
    #
    # Global constants, interned objects, etc.
    #
    def close_global_decls(self):
        # This is called when it is known that no more global declarations
        # will be declared.
        self.generate_const_declarations()
        if Options.cache_builtins:
            w = self.parts['cached_builtins']
            w.putln("return 0;")
            if w.label_used(w.error_label):
                w.put_label(w.error_label)
                w.putln("return -1;")
            w.putln("}")
            w.exit_cfunc_scope()

        w = self.parts['cached_constants']
        w.put_finish_refcount_context()
        w.putln("return 0;")
        if w.label_used(w.error_label):
            w.put_label(w.error_label)
            w.put_finish_refcount_context()
            w.putln("return -1;")
        w.putln("}")
        w.exit_cfunc_scope()

        w = self.parts['init_globals']
        w.putln("return 0;")
        if w.label_used(w.error_label):
            w.put_label(w.error_label)
            w.putln("return -1;")
        w.putln("}")
        w.exit_cfunc_scope()

        if Options.generate_cleanup_code:
            w = self.parts['cleanup_globals']
            w.putln("}")
            w.exit_cfunc_scope()

        if Options.generate_cleanup_code:
            w = self.parts['cleanup_module']
            w.putln("}")
            w.exit_cfunc_scope()

    def put_pyobject_decl(self, entry):
        self['global_var'].putln("static PyObject *%s;" % entry.cname)

    # constant handling at code generation time

    def get_cached_constants_writer(self, target=None):
        if target is not None:
            if target in self.initialised_constants:
                # Return None on second/later calls to prevent duplicate creation code.
                return None
            self.initialised_constants.add(target)
        return self.parts['cached_constants']

    def get_int_const(self, str_value, longness=False):
        py_type = longness and 'long' or 'int'
        try:
            c = self.num_const_index[(str_value, py_type)]
        except KeyError:
            c = self.new_num_const(str_value, py_type)
        return c

    def get_float_const(self, str_value, value_code):
        try:
            c = self.num_const_index[(str_value, 'float')]
        except KeyError:
            c = self.new_num_const(str_value, 'float', value_code)
        return c

    def get_py_const(self, type, prefix='', cleanup_level=None, dedup_key=None):
        if dedup_key is not None:
            const = self.dedup_const_index.get(dedup_key)
            if const is not None:
                return const
        # create a new Python object constant
        const = self.new_py_const(type, prefix)
        if cleanup_level is not None \
                and cleanup_level <= Options.generate_cleanup_code:
            cleanup_writer = self.parts['cleanup_globals']
            cleanup_writer.putln('Py_CLEAR(%s);' % const.cname)
        if dedup_key is not None:
            self.dedup_const_index[dedup_key] = const
        return const

    def get_string_const(self, text, py_version=None):
        # return a C string constant, creating a new one if necessary
        if text.is_unicode:
            byte_string = text.utf8encode()
        else:
            byte_string = text.byteencode()
        try:
            c = self.string_const_index[byte_string]
        except KeyError:
            c = self.new_string_const(text, byte_string)
        c.add_py_version(py_version)
        return c

    def get_pyunicode_ptr_const(self, text):
        # return a Py_UNICODE[] constant, creating a new one if necessary
        assert text.is_unicode
        try:
            c = self.pyunicode_ptr_const_index[text]
        except KeyError:
            c = self.pyunicode_ptr_const_index[text] = self.new_const_cname()
        return c

    def get_py_string_const(self, text, identifier=None,
                            is_str=False, unicode_value=None):
        # return a Python string constant, creating a new one if necessary
        py3str_cstring = None
        if is_str and unicode_value is not None \
                and unicode_value.utf8encode() != text.byteencode():
            py3str_cstring = self.get_string_const(unicode_value, py_version=3)
            c_string = self.get_string_const(text, py_version=2)
        else:
            c_string = self.get_string_const(text)
        py_string = c_string.get_py_string_const(
            text.encoding, identifier, is_str, py3str_cstring)
        return py_string

    def get_interned_identifier(self, text):
        return self.get_py_string_const(text, identifier=True)
    def new_string_const(self, text, byte_string):
        cname = self.new_string_const_cname(byte_string)
        c = StringConst(cname, text, byte_string)
        self.string_const_index[byte_string] = c
        return c

    def new_num_const(self, value, py_type, value_code=None):
        cname = self.new_num_const_cname(value, py_type)
        c = NumConst(cname, value, py_type, value_code)
        self.num_const_index[(value, py_type)] = c
        return c

    def new_py_const(self, type, prefix=''):
        cname = self.new_const_cname(prefix)
        c = PyObjectConst(cname, type)
        self.py_constants.append(c)
        return c

    def new_string_const_cname(self, bytes_value):
        # Create a new globally-unique nice name for a C string constant.
        value = bytes_value.decode('ASCII', 'ignore')
        return self.new_const_cname(value=value)

    def new_num_const_cname(self, value, py_type):
        if py_type == 'long':
            value += 'L'
            py_type = 'int'
        prefix = Naming.interned_prefixes[py_type]
        cname = "%s%s" % (prefix, value)
        cname = cname.replace('+', '_').replace('-', 'neg_').replace('.', '_')
        return cname

    def new_const_cname(self, prefix='', value=''):
        value = replace_identifier('_', value)[:32].strip('_')
        used = self.const_cnames_used
        name_suffix = value
        while name_suffix in used:
            counter = used[value] = used[value] + 1
            name_suffix = '%s_%d' % (value, counter)
        used[name_suffix] = 1
        if prefix:
            prefix = Naming.interned_prefixes[prefix]
        else:
            prefix = Naming.const_prefix
        return "%s%s" % (prefix, name_suffix)

    def get_cached_unbound_method(self, type_cname, method_name):
        key = (type_cname, method_name)
        try:
            cname = self.cached_cmethods[key]
        except KeyError:
            cname = self.cached_cmethods[key] = self.new_const_cname(
                'umethod', '%s_%s' % (type_cname, method_name))
        return cname

    def cached_unbound_method_call_code(self, obj_cname, type_cname, method_name, arg_cnames):
        # admittedly, not the best place to put this method, but it is reused by UtilityCode and ExprNodes ...
        utility_code_name = "CallUnboundCMethod%d" % len(arg_cnames)
        self.use_utility_code(UtilityCode.load_cached(utility_code_name, "ObjectHandling.c"))
        cache_cname = self.get_cached_unbound_method(type_cname, method_name)
        args = [obj_cname] + arg_cnames
        return "__Pyx_%s(&%s, %s)" % (
            utility_code_name,
            cache_cname,
            ', '.join(args),
        )

    def add_cached_builtin_decl(self, entry):
        if entry.is_builtin and entry.is_const:
            if self.should_declare(entry.cname, entry):
                self.put_pyobject_decl(entry)
                w = self.parts['cached_builtins']
                condition = None
                if entry.name in non_portable_builtins_map:
                    condition, replacement = non_portable_builtins_map[entry.name]
                    w.putln('#if %s' % condition)
                    self.put_cached_builtin_init(
                        entry.pos, StringEncoding.EncodedString(replacement),
                        entry.cname)
                    w.putln('#else')
                self.put_cached_builtin_init(
                    entry.pos, StringEncoding.EncodedString(entry.name),
                    entry.cname)
                if condition:
                    w.putln('#endif')

    def put_cached_builtin_init(self, pos, name, cname):
        w = self.parts['cached_builtins']
        interned_cname = self.get_interned_identifier(name).cname
        self.use_utility_code(
            UtilityCode.load_cached("GetBuiltinName", "ObjectHandling.c"))
        w.putln('%s = __Pyx_GetBuiltinName(%s); if (!%s) %s' % (
            cname,
            interned_cname,
            cname,
            w.error_goto(pos)))

    def generate_const_declarations(self):
        self.generate_cached_methods_decls()
        self.generate_string_constants()
        self.generate_num_constants()
        self.generate_object_constant_decls()

    def generate_object_constant_decls(self):
        consts = [(len(c.cname), c.cname, c)
                  for c in self.py_constants]
        consts.sort()
        decls_writer = self.parts['decls']
        for _, cname, c in consts:
            decls_writer.putln(
                "static %s;" % c.type.declaration_code(cname))
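    # A small sketch of new_const_cname()'s uniquifying behaviour (prefixes
    # depend on Naming and are illustrative): equal sanitized values get a
    # numeric suffix on later requests:
    #
    #     self.new_const_cname(value='abc')  # e.g. __pyx_k_abc
    #     self.new_const_cname(value='abc')  # e.g. __pyx_k_abc_2 (counter bump)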
    def generate_cached_methods_decls(self):
        if not self.cached_cmethods:
            return

        decl = self.parts['decls']
        init = self.parts['init_globals']
        cnames = []
        for (type_cname, method_name), cname in sorted(self.cached_cmethods.items()):
            cnames.append(cname)
            method_name_cname = self.get_interned_identifier(StringEncoding.EncodedString(method_name)).cname
            decl.putln('static __Pyx_CachedCFunction %s = {0, &%s, 0, 0, 0};' % (
                cname, method_name_cname))
            # split type reference storage as it might not be static
            init.putln('%s.type = (PyObject*)&%s;' % (
                cname, type_cname))

        if Options.generate_cleanup_code:
            cleanup = self.parts['cleanup_globals']
            for cname in cnames:
                cleanup.putln("Py_CLEAR(%s.method);" % cname)

    def generate_string_constants(self):
        c_consts = [(len(c.cname), c.cname, c) for c in self.string_const_index.values()]
        c_consts.sort()
        py_strings = []

        decls_writer = self.parts['string_decls']
        for _, cname, c in c_consts:
            conditional = False
            if c.py_versions and (2 not in c.py_versions or 3 not in c.py_versions):
                conditional = True
                decls_writer.putln("#if PY_MAJOR_VERSION %s 3" % (
                    (2 in c.py_versions) and '<' or '>='))
            decls_writer.putln('static const char %s[] = "%s";' % (
                cname, StringEncoding.split_string_literal(c.escaped_value)))
            if conditional:
                decls_writer.putln("#endif")
            if c.py_strings is not None:
                for py_string in c.py_strings.values():
                    py_strings.append((c.cname, len(py_string.cname), py_string))

        for c, cname in sorted(self.pyunicode_ptr_const_index.items()):
            utf16_array, utf32_array = StringEncoding.encode_pyunicode_string(c)
            if utf16_array:
                # Narrow and wide representations differ
                decls_writer.putln("#ifdef Py_UNICODE_WIDE")
            decls_writer.putln("static Py_UNICODE %s[] = { %s };" % (cname, utf32_array))
            if utf16_array:
                decls_writer.putln("#else")
                decls_writer.putln("static Py_UNICODE %s[] = { %s };" % (cname, utf16_array))
                decls_writer.putln("#endif")

        if py_strings:
            self.use_utility_code(UtilityCode.load_cached("InitStrings", "StringTools.c"))
            py_strings.sort()
            w = self.parts['pystring_table']
            w.putln("")
            w.putln("static __Pyx_StringTabEntry %s[] = {" % Naming.stringtab_cname)
            for c_cname, _, py_string in py_strings:
                if not py_string.is_str or not py_string.encoding or \
                        py_string.encoding in ('ASCII', 'USASCII', 'US-ASCII',
                                               'UTF8', 'UTF-8'):
                    encoding = '0'
                else:
                    encoding = '"%s"' % py_string.encoding.lower()

                decls_writer.putln(
                    "static PyObject *%s;" % py_string.cname)
                if py_string.py3str_cstring:
                    w.putln("#if PY_MAJOR_VERSION >= 3")
                    w.putln("{&%s, %s, sizeof(%s), %s, %d, %d, %d}," % (
                        py_string.cname,
                        py_string.py3str_cstring.cname,
                        py_string.py3str_cstring.cname,
                        '0', 1, 0,
                        py_string.intern
                        ))
                    w.putln("#else")
                w.putln("{&%s, %s, sizeof(%s), %s, %d, %d, %d}," % (
                    py_string.cname,
                    c_cname,
                    c_cname,
                    encoding,
                    py_string.is_unicode,
                    py_string.is_str,
                    py_string.intern
                    ))
                if py_string.py3str_cstring:
                    w.putln("#endif")
            w.putln("{0, 0, 0, 0, 0, 0, 0}")
            w.putln("};")

            init_globals = self.parts['init_globals']
            init_globals.putln(
                "if (__Pyx_InitStrings(%s) < 0) %s;" % (
                    Naming.stringtab_cname,
                    init_globals.error_goto(self.module_pos)))
    def generate_num_constants(self):
        consts = [(c.py_type, c.value[0] == '-', len(c.value), c.value, c.value_code, c)
                  for c in self.num_const_index.values()]
        consts.sort()
        decls_writer = self.parts['decls']
        init_globals = self.parts['init_globals']
        for py_type, _, _, value, value_code, c in consts:
            cname = c.cname
            decls_writer.putln("static PyObject *%s;" % cname)
            if py_type == 'float':
                function = 'PyFloat_FromDouble(%s)'
            elif py_type == 'long':
                function = 'PyLong_FromString((char *)"%s", 0, 0)'
            elif Utils.long_literal(value):
                function = 'PyInt_FromString((char *)"%s", 0, 0)'
            elif len(value.lstrip('-')) > 4:
                function = "PyInt_FromLong(%sL)"
            else:
                function = "PyInt_FromLong(%s)"
            init_globals.putln('%s = %s; %s' % (
                cname, function % value_code,
                init_globals.error_goto_if_null(cname, self.module_pos)))

    # The functions below are there in a transition phase only
    # and will be deprecated. They are called from Nodes.BlockNode.
    # The copy&paste duplication is intentional in order to be able
    # to see quickly how BlockNode worked, until this is replaced.

    def should_declare(self, cname, entry):
        if cname in self.declared_cnames:
            other = self.declared_cnames[cname]
            assert str(entry.type) == str(other.type)
            assert entry.init == other.init
            return False
        else:
            self.declared_cnames[cname] = entry
            return True

    #
    # File name state
    #

    def lookup_filename(self, source_desc):
        entry = source_desc.get_filenametable_entry()
        try:
            index = self.filename_table[entry]
        except KeyError:
            index = len(self.filename_list)
            self.filename_list.append(source_desc)
            self.filename_table[entry] = index
        return index

    def commented_file_contents(self, source_desc):
        try:
            return self.input_file_contents[source_desc]
        except KeyError:
            pass
        source_file = source_desc.get_lines(encoding='ASCII',
                                            error_handling='ignore')
        try:
            F = [u' * ' + line.rstrip().replace(
                    u'*/', u'*[inserted by cython to avoid comment closer]/'
                ).replace(
                    u'/*', u'/[inserted by cython to avoid comment start]*'
                )
                for line in source_file]
        finally:
            if hasattr(source_file, 'close'):
                source_file.close()
        if not F:
            F.append(u'')
        self.input_file_contents[source_desc] = F
        return F

    #
    # Utility code state
    #

    def use_utility_code(self, utility_code):
        """
        Adds code to the C file. utility_code should
        a) implement __eq__/__hash__ for the purpose of knowing whether the same
           code has already been included
        b) implement put_code, which takes a globalstate instance

        See UtilityCode.
        """
        if utility_code and utility_code not in self.utility_codes:
            self.utility_codes.add(utility_code)
            utility_code.put_code(self)

    def use_entry_utility_code(self, entry):
        if entry is None:
            return
        if entry.utility_code:
            self.use_utility_code(entry.utility_code)
        if entry.utility_code_definition:
            self.use_utility_code(entry.utility_code_definition)


def funccontext_property(func):
    name = func.__name__
    attribute_of = operator.attrgetter(name)
    def get(self):
        return attribute_of(self.funcstate)
    def set(self, value):
        setattr(self.funcstate, name, value)
    return property(get, set)


class CCodeConfig(object):
    # emit_linenums       boolean   write #line pragmas?
    # emit_code_comments  boolean   copy the original code into C comments?
    # c_line_in_traceback boolean   append the c file and line number to the traceback for exceptions?

    def __init__(self, emit_linenums=True, emit_code_comments=True, c_line_in_traceback=True):
        self.emit_code_comments = emit_code_comments
        self.emit_linenums = emit_linenums
        self.c_line_in_traceback = c_line_in_traceback
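# What funccontext_property() buys us, in brief: on CCodeWriter below,
#
#     @funccontext_property
#     def error_label(self): pass
#
# makes `writer.error_label` read and write `writer.funcstate.error_label`,
# so per-function label state always lives on the current FunctionState.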
class CCodeWriter(object):
    """
    Utility class to output C code.

    When creating an insertion point one must care about the state that is
    kept:
    - formatting state (level, bol) is cloned and used in insertion points
      as well
    - labels, temps, exc_vars: One must construct a scope in which these can
      exist by calling enter_cfunc_scope/exit_cfunc_scope (these are for
      sanity checking and forward compatibility). Created insertion points
      lose this scope and cannot access it.
    - marker: Not copied to insertion point
    - filename_table, filename_list, input_file_contents: All codewriters
      coming from the same root share the same instances simultaneously.
    """

    # f                   file            output file
    # buffer              StringIOTree
    # level               int             indentation level
    # bol                 bool            beginning of line?
    # marker              string          comment to emit before next line
    # funcstate           FunctionState   contains state local to a C function used for code
    #                                     generation (labels and temps state etc.)
    # globalstate         GlobalState     contains state global for a C file (input file info,
    #                                     utility code, declared constants etc.)
    # pyclass_stack       list            used during recursive code generation to pass information
    #                                     about the current class one is in
    # code_config         CCodeConfig     configuration options for the C code writer

    @cython.locals(create_from='CCodeWriter')
    def __init__(self, create_from=None, buffer=None, copy_formatting=False):
        if buffer is None:
            buffer = StringIOTree()
        self.buffer = buffer
        self.last_pos = None
        self.last_marked_pos = None
        self.pyclass_stack = []

        self.funcstate = None
        self.globalstate = None
        self.code_config = None
        self.level = 0
        self.call_level = 0
        self.bol = 1

        if create_from is not None:
            # Use same global state
            self.set_global_state(create_from.globalstate)
            self.funcstate = create_from.funcstate
            # Clone formatting state
            if copy_formatting:
                self.level = create_from.level
                self.bol = create_from.bol
                self.call_level = create_from.call_level
            self.last_pos = create_from.last_pos
            self.last_marked_pos = create_from.last_marked_pos

    def create_new(self, create_from, buffer, copy_formatting):
        # polymorphic constructor -- very slightly more versatile
        # than using __class__
        result = CCodeWriter(create_from, buffer, copy_formatting)
        return result

    def set_global_state(self, global_state):
        assert self.globalstate is None  # prevent overwriting once it's set
        self.globalstate = global_state
        self.code_config = global_state.code_config

    def copyto(self, f):
        self.buffer.copyto(f)

    def getvalue(self):
        return self.buffer.getvalue()

    def write(self, s):
        # also put invalid markers (lineno 0), to indicate that those lines
        # have no Cython source code correspondence
        cython_lineno = self.last_marked_pos[1] if self.last_marked_pos else 0
        self.buffer.markers.extend([cython_lineno] * s.count('\n'))
        self.buffer.write(s)

    def insertion_point(self):
        other = self.create_new(create_from=self, buffer=self.buffer.insertion_point(), copy_formatting=True)
        return other

    def new_writer(self):
        """
        Creates a new CCodeWriter connected to the same global state, which
        can later be inserted using insert.
        """
        return CCodeWriter(create_from=self)
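    # A minimal usage sketch of insertion points (output shown is
    # illustrative): text written to an insertion point appears where the
    # point was created, even if it is written later:
    #
    #     w.putln("A")
    #     point = w.insertion_point()
    #     w.putln("C")
    #     point.putln("B")
    #     # w.getvalue() now yields the lines A, B, C in that order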
""" assert writer.globalstate is self.globalstate self.buffer.insert(writer.buffer) # Properties delegated to function scope @funccontext_property def label_counter(self): pass @funccontext_property def return_label(self): pass @funccontext_property def error_label(self): pass @funccontext_property def labels_used(self): pass @funccontext_property def continue_label(self): pass @funccontext_property def break_label(self): pass @funccontext_property def return_from_error_cleanup_label(self): pass @funccontext_property def yield_labels(self): pass # Functions delegated to function scope def new_label(self, name=None): return self.funcstate.new_label(name) def new_error_label(self): return self.funcstate.new_error_label() def new_yield_label(self, *args): return self.funcstate.new_yield_label(*args) def get_loop_labels(self): return self.funcstate.get_loop_labels() def set_loop_labels(self, labels): return self.funcstate.set_loop_labels(labels) def new_loop_labels(self): return self.funcstate.new_loop_labels() def get_all_labels(self): return self.funcstate.get_all_labels() def set_all_labels(self, labels): return self.funcstate.set_all_labels(labels) def all_new_labels(self): return self.funcstate.all_new_labels() def use_label(self, lbl): return self.funcstate.use_label(lbl) def label_used(self, lbl): return self.funcstate.label_used(lbl) def enter_cfunc_scope(self, scope=None): self.funcstate = FunctionState(self, scope=scope) def exit_cfunc_scope(self): self.funcstate = None # constant handling def get_py_int(self, str_value, longness): return self.globalstate.get_int_const(str_value, longness).cname def get_py_float(self, str_value, value_code): return self.globalstate.get_float_const(str_value, value_code).cname def get_py_const(self, type, prefix='', cleanup_level=None, dedup_key=None): return self.globalstate.get_py_const(type, prefix, cleanup_level, dedup_key).cname def get_string_const(self, text): return self.globalstate.get_string_const(text).cname def get_pyunicode_ptr_const(self, text): return self.globalstate.get_pyunicode_ptr_const(text) def get_py_string_const(self, text, identifier=None, is_str=False, unicode_value=None): return self.globalstate.get_py_string_const( text, identifier, is_str, unicode_value).cname def get_argument_default_const(self, type): return self.globalstate.get_py_const(type).cname def intern(self, text): return self.get_py_string_const(text) def intern_identifier(self, text): return self.get_py_string_const(text, identifier=True) def get_cached_constants_writer(self, target=None): return self.globalstate.get_cached_constants_writer(target) # code generation def putln(self, code="", safe=False): if self.last_pos and self.bol: self.emit_marker() if self.code_config.emit_linenums and self.last_marked_pos: source_desc, line, _ = self.last_marked_pos self.write('\n#line %s "%s"\n' % (line, source_desc.get_escaped_description())) if code: if safe: self.put_safe(code) else: self.put(code) self.write("\n") self.bol = 1 def mark_pos(self, pos, trace=True): if pos is None: return if self.last_marked_pos and self.last_marked_pos[:2] == pos[:2]: return self.last_pos = (pos, trace) def emit_marker(self): pos, trace = self.last_pos self.last_marked_pos = pos self.last_pos = None self.write("\n") if self.code_config.emit_code_comments: self.indent() self.write("/* %s */\n" % self._build_marker(pos)) if trace and self.funcstate and self.funcstate.can_trace and self.globalstate.directives['linetrace']: self.indent() self.write('__Pyx_TraceLine(%d,%d,%s)\n' % ( pos[1], 
    def emit_marker(self):
        pos, trace = self.last_pos
        self.last_marked_pos = pos
        self.last_pos = None
        self.write("\n")
        if self.code_config.emit_code_comments:
            self.indent()
            self.write("/* %s */\n" % self._build_marker(pos))
        if trace and self.funcstate and self.funcstate.can_trace and self.globalstate.directives['linetrace']:
            self.indent()
            self.write('__Pyx_TraceLine(%d,%d,%s)\n' % (
                pos[1], not self.funcstate.gil_owned, self.error_goto(pos)))

    def _build_marker(self, pos):
        source_desc, line, col = pos
        assert isinstance(source_desc, SourceDescriptor)
        contents = self.globalstate.commented_file_contents(source_desc)
        lines = contents[max(0, line-3):line]  # line numbers start at 1
        lines[-1] += u'             # <<<<<<<<<<<<<<'
        lines += contents[line:line+2]
        return u'"%s":%d\n%s\n' % (source_desc.get_escaped_description(), line, u'\n'.join(lines))

    def put_safe(self, code):
        # put code, but ignore {}
        self.write(code)
        self.bol = 0

    def put_or_include(self, code, name):
        include_dir = self.globalstate.common_utility_include_dir
        if include_dir and len(code) > 1024:
            include_file = "%s_%s.h" % (
                name, hashlib.md5(code.encode('utf8')).hexdigest())
            path = os.path.join(include_dir, include_file)
            if not os.path.exists(path):
                tmp_path = '%s.tmp%s' % (path, os.getpid())
                with closing(Utils.open_new_file(tmp_path)) as f:
                    f.write(code)
                shutil.move(tmp_path, path)
            code = '#include "%s"\n' % path
        self.put(code)

    def put(self, code):
        fix_indent = False
        if "{" in code:
            dl = code.count("{")
        else:
            dl = 0
        if "}" in code:
            dl -= code.count("}")
            if dl < 0:
                self.level += dl
            elif dl == 0 and code[0] == "}":
                # special cases like "} else {" need a temporary dedent
                fix_indent = True
                self.level -= 1
        if self.bol:
            self.indent()
        self.write(code)
        self.bol = 0
        if dl > 0:
            self.level += dl
        elif fix_indent:
            self.level += 1
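    # How put() tracks indentation (sketch): braces adjust self.level, so
    # generated C nests automatically:
    #
    #     w.putln("if (x) {")  # writes the line, then level += 1
    #     w.putln("y = 1;")    # written one level deeper
    #     w.putln("}")         # level -= 1 before the line is written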
    def putln_tempita(self, code, **context):
        from ..Tempita import sub
        self.putln(sub(code, **context))

    def put_tempita(self, code, **context):
        from ..Tempita import sub
        self.put(sub(code, **context))

    def increase_indent(self):
        self.level += 1

    def decrease_indent(self):
        self.level -= 1

    def begin_block(self):
        self.putln("{")
        self.increase_indent()

    def end_block(self):
        self.decrease_indent()
        self.putln("}")

    def indent(self):
        self.write("  " * self.level)

    def get_py_version_hex(self, pyversion):
        return "0x%02X%02X%02X%02X" % (tuple(pyversion) + (0,0,0,0))[:4]

    def put_label(self, lbl):
        if lbl in self.funcstate.labels_used:
            self.putln("%s:;" % lbl)

    def put_goto(self, lbl):
        self.funcstate.use_label(lbl)
        self.putln("goto %s;" % lbl)

    def put_var_declaration(self, entry, storage_class="",
                            dll_linkage=None, definition=True):
        #print "Code.put_var_declaration:", entry.name, "definition =", definition ###
        if entry.visibility == 'private' and not (definition or entry.defined_in_pxd):
            #print "...private and not definition, skipping", entry.cname ###
            return
        if entry.visibility == "private" and not entry.used:
            #print "...private and not used, skipping", entry.cname ###
            return
        if storage_class:
            self.put("%s " % storage_class)
        if not entry.cf_used:
            self.put('CYTHON_UNUSED ')
        self.put(entry.type.declaration_code(
            entry.cname, dll_linkage=dll_linkage))
        if entry.init is not None:
            self.put_safe(" = %s" % entry.type.literal_code(entry.init))
        elif entry.type.is_pyobject:
            self.put(" = NULL")
        self.putln(";")

    def put_temp_declarations(self, func_context):
        for name, type, manage_ref, static in func_context.temps_allocated:
            decl = type.declaration_code(name)
            if type.is_pyobject:
                self.putln("%s = NULL;" % decl)
            elif type.is_memoryviewslice:
                from . import MemoryView
                self.putln("%s = %s;" % (decl, MemoryView.memslice_entry_init))
            else:
                self.putln("%s%s;" % (static and "static " or "", decl))

        if func_context.should_declare_error_indicator:
            if self.funcstate.uses_error_indicator:
                unused = ''
            else:
                unused = 'CYTHON_UNUSED '
            # Initialize these variables to silence compiler warnings
            self.putln("%sint %s = 0;" % (unused, Naming.lineno_cname))
            self.putln("%sconst char *%s = NULL;" % (unused, Naming.filename_cname))
            self.putln("%sint %s = 0;" % (unused, Naming.clineno_cname))

    def put_generated_by(self):
        self.putln("/* Generated by Cython %s */" % Version.watermark)
        self.putln("")

    def put_h_guard(self, guard):
        self.putln("#ifndef %s" % guard)
        self.putln("#define %s" % guard)

    def unlikely(self, cond):
        if Options.gcc_branch_hints:
            return 'unlikely(%s)' % cond
        else:
            return cond

    def build_function_modifiers(self, modifiers, mapper=modifier_output_mapper):
        if not modifiers:
            return ''
        return '%s ' % ' '.join([mapper(m, m) for m in modifiers])

    # Python objects and reference counting

    def entry_as_pyobject(self, entry):
        type = entry.type
        if (not entry.is_self_arg and not entry.type.is_complete()
                or entry.type.is_extension_type):
            return "(PyObject *)" + entry.cname
        else:
            return entry.cname

    def as_pyobject(self, cname, type):
        from .PyrexTypes import py_object_type, typecast
        return typecast(py_object_type, type, cname)

    def put_gotref(self, cname):
        self.putln("__Pyx_GOTREF(%s);" % cname)

    def put_giveref(self, cname):
        self.putln("__Pyx_GIVEREF(%s);" % cname)

    def put_xgiveref(self, cname):
        self.putln("__Pyx_XGIVEREF(%s);" % cname)

    def put_xgotref(self, cname):
        self.putln("__Pyx_XGOTREF(%s);" % cname)

    def put_incref(self, cname, type, nanny=True):
        if nanny:
            self.putln("__Pyx_INCREF(%s);" % self.as_pyobject(cname, type))
        else:
            self.putln("Py_INCREF(%s);" % self.as_pyobject(cname, type))

    def put_decref(self, cname, type, nanny=True):
        self._put_decref(cname, type, nanny, null_check=False, clear=False)

    def put_var_gotref(self, entry):
        if entry.type.is_pyobject:
            self.putln("__Pyx_GOTREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_giveref(self, entry):
        if entry.type.is_pyobject:
            self.putln("__Pyx_GIVEREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_xgotref(self, entry):
        if entry.type.is_pyobject:
            self.putln("__Pyx_XGOTREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_xgiveref(self, entry):
        if entry.type.is_pyobject:
            self.putln("__Pyx_XGIVEREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_incref(self, entry, nanny=True):
        if entry.type.is_pyobject:
            if nanny:
                self.putln("__Pyx_INCREF(%s);" % self.entry_as_pyobject(entry))
            else:
                self.putln("Py_INCREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_xincref(self, entry):
        if entry.type.is_pyobject:
            self.putln("__Pyx_XINCREF(%s);" % self.entry_as_pyobject(entry))

    def put_decref_clear(self, cname, type, nanny=True, clear_before_decref=False):
        self._put_decref(cname, type, nanny, null_check=False,
                         clear=True, clear_before_decref=clear_before_decref)

    def put_xdecref(self, cname, type, nanny=True, have_gil=True):
        self._put_decref(cname, type, nanny, null_check=True,
                         have_gil=have_gil, clear=False)

    def put_xdecref_clear(self, cname, type, nanny=True, clear_before_decref=False):
        self._put_decref(cname, type, nanny, null_check=True,
                         clear=True, clear_before_decref=clear_before_decref)
    def _put_decref(self, cname, type, nanny=True, null_check=False,
                    have_gil=True, clear=False, clear_before_decref=False):
        if type.is_memoryviewslice:
            self.put_xdecref_memoryviewslice(cname, have_gil=have_gil)
            return

        prefix = '__Pyx' if nanny else 'Py'
        X = 'X' if null_check else ''

        if clear:
            if clear_before_decref:
                if not nanny:
                    X = ''  # CPython doesn't have a Py_XCLEAR()
                self.putln("%s_%sCLEAR(%s);" % (prefix, X, cname))
            else:
                self.putln("%s_%sDECREF(%s); %s = 0;" % (
                    prefix, X, self.as_pyobject(cname, type), cname))
        else:
            self.putln("%s_%sDECREF(%s);" % (
                prefix, X, self.as_pyobject(cname, type)))

    def put_decref_set(self, cname, rhs_cname):
        self.putln("__Pyx_DECREF_SET(%s, %s);" % (cname, rhs_cname))

    def put_xdecref_set(self, cname, rhs_cname):
        self.putln("__Pyx_XDECREF_SET(%s, %s);" % (cname, rhs_cname))

    def put_var_decref(self, entry):
        if entry.type.is_pyobject:
            self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_xdecref(self, entry, nanny=True):
        if entry.type.is_pyobject:
            if nanny:
                self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))
            else:
                self.putln("Py_XDECREF(%s);" % self.entry_as_pyobject(entry))

    def put_var_decref_clear(self, entry):
        self._put_var_decref_clear(entry, null_check=False)

    def put_var_xdecref_clear(self, entry):
        self._put_var_decref_clear(entry, null_check=True)

    def _put_var_decref_clear(self, entry, null_check):
        if entry.type.is_pyobject:
            if entry.in_closure:
                # reset before DECREF to make sure closure state is
                # consistent during call to DECREF()
                self.putln("__Pyx_%sCLEAR(%s);" % (
                    null_check and 'X' or '',
                    entry.cname))
            else:
                self.putln("__Pyx_%sDECREF(%s); %s = 0;" % (
                    null_check and 'X' or '',
                    self.entry_as_pyobject(entry),
                    entry.cname))

    def put_var_decrefs(self, entries, used_only = 0):
        for entry in entries:
            if not used_only or entry.used:
                if entry.xdecref_cleanup:
                    self.put_var_xdecref(entry)
                else:
                    self.put_var_decref(entry)

    def put_var_xdecrefs(self, entries):
        for entry in entries:
            self.put_var_xdecref(entry)

    def put_var_xdecrefs_clear(self, entries):
        for entry in entries:
            self.put_var_xdecref_clear(entry)

    def put_incref_memoryviewslice(self, slice_cname, have_gil=False):
        from . import MemoryView
        self.globalstate.use_utility_code(MemoryView.memviewslice_init_code)
        self.putln("__PYX_INC_MEMVIEW(&%s, %d);" % (slice_cname, int(have_gil)))

    def put_xdecref_memoryviewslice(self, slice_cname, have_gil=False):
        from . import MemoryView
        self.globalstate.use_utility_code(MemoryView.memviewslice_init_code)
        self.putln("__PYX_XDEC_MEMVIEW(&%s, %d);" % (slice_cname, int(have_gil)))

    def put_xgiveref_memoryviewslice(self, slice_cname):
        self.put_xgiveref("%s.memview" % slice_cname)

    def put_init_to_py_none(self, cname, type, nanny=True):
        from .PyrexTypes import py_object_type, typecast
        py_none = typecast(type, py_object_type, "Py_None")
        if nanny:
            self.putln("%s = %s; __Pyx_INCREF(Py_None);" % (cname, py_none))
        else:
            self.putln("%s = %s; Py_INCREF(Py_None);" % (cname, py_none))

    def put_init_var_to_py_none(self, entry, template = "%s", nanny=True):
        code = template % entry.cname
        #if entry.type.is_extension_type:
        #    code = "((PyObject*)%s)" % code
        self.put_init_to_py_none(code, entry.type, nanny)
        if entry.in_closure:
            self.put_giveref('Py_None')
    def put_pymethoddef(self, entry, term, allow_skip=True, wrapper_code_writer=None):
        if entry.is_special or entry.name == '__getattribute__':
            if entry.name not in special_py_methods:
                if entry.name == '__getattr__' and not self.globalstate.directives['fast_getattr']:
                    pass
                # Python's typeobject.c will automatically fill in our slot
                # in add_operators() (called by PyType_Ready) with a value
                # that's better than ours.
                elif allow_skip:
                    return

        method_flags = entry.signature.method_flags()
        if not method_flags:
            return
        if entry.is_special:
            from . import TypeSlots
            method_flags += [TypeSlots.method_coexist]
        func_ptr = wrapper_code_writer.put_pymethoddef_wrapper(entry) if wrapper_code_writer else entry.func_cname
        # Add required casts, but try not to shadow real warnings.
        cast = '__Pyx_PyCFunctionFast' if 'METH_FASTCALL' in method_flags else 'PyCFunction'
        if 'METH_KEYWORDS' in method_flags:
            cast += 'WithKeywords'
        if cast != 'PyCFunction':
            func_ptr = '(void*)(%s)%s' % (cast, func_ptr)
        self.putln(
            '{"%s", (PyCFunction)%s, %s, %s}%s' % (
                entry.name,
                func_ptr,
                "|".join(method_flags),
                entry.doc_cname if entry.doc else '0',
                term))

    def put_pymethoddef_wrapper(self, entry):
        func_cname = entry.func_cname
        if entry.is_special:
            method_flags = entry.signature.method_flags()
            if method_flags and 'METH_NOARGS' in method_flags:
                # Special NOARGS methods really take no arguments besides 'self', but PyCFunction expects one.
                func_cname = Naming.method_wrapper_prefix + func_cname
                self.putln("static PyObject *%s(PyObject *self, CYTHON_UNUSED PyObject *arg) {return %s(self);}" % (
                    func_cname, entry.func_cname))
        return func_cname

    # GIL methods

    def put_ensure_gil(self, declare_gilstate=True, variable=None):
        """
        Acquire the GIL. The generated code is safe even when no PyThreadState
        has been allocated for this thread (for threads not initialized by
        using the Python API). Additionally, the code generated by this method
        may be called recursively.
        """
        self.globalstate.use_utility_code(
            UtilityCode.load_cached("ForceInitThreads", "ModuleSetupCode.c"))
        if self.globalstate.directives['fast_gil']:
            self.globalstate.use_utility_code(UtilityCode.load_cached("FastGil", "ModuleSetupCode.c"))
        else:
            self.globalstate.use_utility_code(UtilityCode.load_cached("NoFastGil", "ModuleSetupCode.c"))
        self.putln("#ifdef WITH_THREAD")
        if not variable:
            variable = '__pyx_gilstate_save'
        if declare_gilstate:
            self.put("PyGILState_STATE ")
        self.putln("%s = __Pyx_PyGILState_Ensure();" % variable)
        self.putln("#endif")

    def put_release_ensured_gil(self, variable=None):
        """
        Releases the GIL, corresponds to `put_ensure_gil`.
        """
        if self.globalstate.directives['fast_gil']:
            self.globalstate.use_utility_code(UtilityCode.load_cached("FastGil", "ModuleSetupCode.c"))
        else:
            self.globalstate.use_utility_code(UtilityCode.load_cached("NoFastGil", "ModuleSetupCode.c"))
        if not variable:
            variable = '__pyx_gilstate_save'
        self.putln("#ifdef WITH_THREAD")
        self.putln("__Pyx_PyGILState_Release(%s);" % variable)
        self.putln("#endif")

    def put_acquire_gil(self, variable=None):
        """
        Acquire the GIL. The thread's thread state must have been initialized
        by a previous `put_release_gil`
        """
        if self.globalstate.directives['fast_gil']:
            self.globalstate.use_utility_code(UtilityCode.load_cached("FastGil", "ModuleSetupCode.c"))
        else:
            self.globalstate.use_utility_code(UtilityCode.load_cached("NoFastGil", "ModuleSetupCode.c"))
        self.putln("#ifdef WITH_THREAD")
        self.putln("__Pyx_FastGIL_Forget();")
        if variable:
            self.putln('_save = %s;' % variable)
        self.putln("Py_BLOCK_THREADS")
        self.putln("#endif")
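    # Pairing of the GIL helpers (sketch): put_ensure_gil() and
    # put_release_ensured_gil() bracket code that must hold the GIL even on
    # threads unknown to Python, while put_release_gil() and put_acquire_gil()
    # bracket a `nogil` section:
    #
    #     w.put_release_gil()   # emits Py_UNBLOCK_THREADS (under WITH_THREAD)
    #     # ... C code that runs without the GIL ...
    #     w.put_acquire_gil()   # emits Py_BLOCK_THREADS and restores _save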
    def put_release_gil(self, variable=None):
        "Release the GIL, corresponds to `put_acquire_gil`."
        if self.globalstate.directives['fast_gil']:
            self.globalstate.use_utility_code(UtilityCode.load_cached("FastGil", "ModuleSetupCode.c"))
        else:
            self.globalstate.use_utility_code(UtilityCode.load_cached("NoFastGil", "ModuleSetupCode.c"))
        self.putln("#ifdef WITH_THREAD")
        self.putln("PyThreadState *_save;")
        self.putln("Py_UNBLOCK_THREADS")
        if variable:
            self.putln('%s = _save;' % variable)
        self.putln("__Pyx_FastGIL_Remember();")
        self.putln("#endif")

    def declare_gilstate(self):
        self.putln("#ifdef WITH_THREAD")
        self.putln("PyGILState_STATE __pyx_gilstate_save;")
        self.putln("#endif")

    # error handling

    def put_error_if_neg(self, pos, value):
        # TODO this path is almost _never_ taken, yet this macro makes it slower!
        # return self.putln("if (unlikely(%s < 0)) %s" % (value, self.error_goto(pos)))
        return self.putln("if (%s < 0) %s" % (value, self.error_goto(pos)))

    def put_error_if_unbound(self, pos, entry, in_nogil_context=False):
        from . import ExprNodes
        if entry.from_closure:
            func = '__Pyx_RaiseClosureNameError'
            self.globalstate.use_utility_code(
                ExprNodes.raise_closure_name_error_utility_code)
        elif entry.type.is_memoryviewslice and in_nogil_context:
            func = '__Pyx_RaiseUnboundMemoryviewSliceNogil'
            self.globalstate.use_utility_code(
                ExprNodes.raise_unbound_memoryview_utility_code_nogil)
        else:
            func = '__Pyx_RaiseUnboundLocalError'
            self.globalstate.use_utility_code(
                ExprNodes.raise_unbound_local_error_utility_code)

        self.putln('if (unlikely(!%s)) { %s("%s"); %s }' % (
            entry.type.check_for_null_code(entry.cname),
            func,
            entry.name,
            self.error_goto(pos)))

    def set_error_info(self, pos, used=False):
        self.funcstate.should_declare_error_indicator = True
        if used:
            self.funcstate.uses_error_indicator = True
        if self.code_config.c_line_in_traceback:
            cinfo = " %s = %s;" % (Naming.clineno_cname, Naming.line_c_macro)
        else:
            cinfo = ""

        return "%s = %s[%s]; %s = %s;%s" % (
            Naming.filename_cname,
            Naming.filetable_cname,
            self.lookup_filename(pos[0]),
            Naming.lineno_cname,
            pos[1],
            cinfo)

    def error_goto(self, pos):
        lbl = self.funcstate.error_label
        self.funcstate.use_label(lbl)
        if pos is None:
            return 'goto %s;' % lbl
        return "__PYX_ERR(%s, %s, %s)" % (
            self.lookup_filename(pos[0]),
            pos[1],
            lbl)

    def error_goto_if(self, cond, pos):
        return "if (%s) %s" % (self.unlikely(cond), self.error_goto(pos))

    def error_goto_if_null(self, cname, pos):
        return self.error_goto_if("!%s" % cname, pos)

    def error_goto_if_neg(self, cname, pos):
        return self.error_goto_if("%s < 0" % cname, pos)

    def error_goto_if_PyErr(self, pos):
        return self.error_goto_if("PyErr_Occurred()", pos)

    def lookup_filename(self, filename):
        return self.globalstate.lookup_filename(filename)

    def put_declare_refcount_context(self):
        self.putln('__Pyx_RefNannyDeclarations')

    def put_setup_refcount_context(self, name, acquire_gil=False):
        if acquire_gil:
            self.globalstate.use_utility_code(
                UtilityCode.load_cached("ForceInitThreads", "ModuleSetupCode.c"))
        self.putln('__Pyx_RefNannySetupContext("%s", %d);' % (name, acquire_gil and 1 or 0))

    def put_finish_refcount_context(self):
        self.putln("__Pyx_RefNannyFinishContext();")
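    # Typical use of the error_goto helpers (generated C shown is
    # illustrative):
    #
    #     code.putln("t = PyNumber_Add(a, b); %s" %
    #                code.error_goto_if_null("t", pos))
    #     # -> t = PyNumber_Add(a, b); if (unlikely(!t)) __PYX_ERR(0, 12, __pyx_L1_error)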
""" format_tuple = ( qualified_name, Naming.clineno_cname if include_cline else 0, Naming.lineno_cname, Naming.filename_cname, ) self.funcstate.uses_error_indicator = True self.putln('__Pyx_AddTraceback("%s", %s, %s, %s);' % format_tuple) def put_unraisable(self, qualified_name, nogil=False): """ Generate code to print a Python warning for an unraisable exception. qualified_name should be the qualified name of the function. """ format_tuple = ( qualified_name, Naming.clineno_cname, Naming.lineno_cname, Naming.filename_cname, self.globalstate.directives['unraisable_tracebacks'], nogil, ) self.funcstate.uses_error_indicator = True self.putln('__Pyx_WriteUnraisable("%s", %s, %s, %s, %d, %d);' % format_tuple) self.globalstate.use_utility_code( UtilityCode.load_cached("WriteUnraisableException", "Exceptions.c")) def put_trace_declarations(self): self.putln('__Pyx_TraceDeclarations') def put_trace_frame_init(self, codeobj=None): if codeobj: self.putln('__Pyx_TraceFrameInit(%s)' % codeobj) def put_trace_call(self, name, pos, nogil=False): self.putln('__Pyx_TraceCall("%s", %s[%s], %s, %d, %s);' % ( name, Naming.filetable_cname, self.lookup_filename(pos[0]), pos[1], nogil, self.error_goto(pos))) def put_trace_exception(self): self.putln("__Pyx_TraceException();") def put_trace_return(self, retvalue_cname, nogil=False): self.putln("__Pyx_TraceReturn(%s, %d);" % (retvalue_cname, nogil)) def putln_openmp(self, string): self.putln("#ifdef _OPENMP") self.putln(string) self.putln("#endif /* _OPENMP */") def undef_builtin_expect(self, cond): """ Redefine the macros likely() and unlikely to no-ops, depending on condition 'cond' """ self.putln("#if %s" % cond) self.putln(" #undef likely") self.putln(" #undef unlikely") self.putln(" #define likely(x) (x)") self.putln(" #define unlikely(x) (x)") self.putln("#endif") def redef_builtin_expect(self, cond): self.putln("#if %s" % cond) self.putln(" #undef likely") self.putln(" #undef unlikely") self.putln(" #define likely(x) __builtin_expect(!!(x), 1)") self.putln(" #define unlikely(x) __builtin_expect(!!(x), 0)") self.putln("#endif") class PyrexCodeWriter(object): # f file output file # level int indentation level def __init__(self, outfile_name): self.f = Utils.open_new_file(outfile_name) self.level = 0 def putln(self, code): self.f.write("%s%s\n" % (" " * self.level, code)) def indent(self): self.level += 1 def dedent(self): self.level -= 1 class PyxCodeWriter(object): """ Can be used for writing out some Cython code. 
class PyxCodeWriter(object):
    """
    Can be used for writing out some Cython code. To use the indenter
    functionality, the Cython.Compiler.Importer module will have to be used
    to load the code to support python 2.4
    """

    def __init__(self, buffer=None, indent_level=0, context=None, encoding='ascii'):
        self.buffer = buffer or StringIOTree()
        self.level = indent_level
        self.context = context
        self.encoding = encoding

    def indent(self, levels=1):
        self.level += levels
        return True

    def dedent(self, levels=1):
        self.level -= levels

    def indenter(self, line):
        """
        Instead of

            with pyx_code.indenter("for i in range(10):"):
                pyx_code.putln("print i")

        write

            if pyx_code.indenter("for i in range(10):"):
                pyx_code.putln("print i")
                pyx_code.dedent()
        """
        self.putln(line)
        self.indent()
        return True

    def getvalue(self):
        result = self.buffer.getvalue()
        if isinstance(result, bytes):
            result = result.decode(self.encoding)
        return result

    def putln(self, line, context=None):
        context = context or self.context
        if context:
            line = sub_tempita(line, context)
        self._putln(line)

    def _putln(self, line):
        self.buffer.write("%s%s\n" % (self.level * "    ", line))

    def put_chunk(self, chunk, context=None):
        context = context or self.context
        if context:
            chunk = sub_tempita(chunk, context)

        chunk = textwrap.dedent(chunk)
        for line in chunk.splitlines():
            self._putln(line)

    def insertion_point(self):
        return PyxCodeWriter(self.buffer.insertion_point(), self.level, self.context)

    def named_insertion_point(self, name):
        setattr(self, name, self.insertion_point())


class ClosureTempAllocator(object):
    def __init__(self, klass):
        self.klass = klass
        self.temps_allocated = {}
        self.temps_free = {}
        self.temps_count = 0

    def reset(self):
        for type, cnames in self.temps_allocated.items():
            self.temps_free[type] = list(cnames)

    def allocate_temp(self, type):
        if type not in self.temps_allocated:
            self.temps_allocated[type] = []
            self.temps_free[type] = []
        elif self.temps_free[type]:
            return self.temps_free[type].pop(0)
        cname = '%s%d' % (Naming.codewriter_temp_prefix, self.temps_count)
        self.klass.declare_var(pos=None, name=cname, cname=cname, type=type, is_cdef=True)
        self.temps_allocated[type].append(cname)
        self.temps_count += 1
        return cname


# cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/CodeGeneration.py

from __future__ import absolute_import

from .Visitor import VisitorTransform
from .Nodes import StatListNode


class ExtractPxdCode(VisitorTransform):
    """
    Finds nodes in a pxd file that should generate code, and
    returns them in a StatListNode.

    The result is a tuple (StatListNode, ModuleScope), i.e.
    everything that is needed from the pxd after it is processed.

    A purer approach would be to separately compile the pxd code,
    but the result would have to be slightly more sophisticated
    than pure strings (functions + wanted interned strings +
    wanted utility code + wanted cached objects) so for now this
    approach is taken.
    """

    def __call__(self, root):
        self.funcs = []
        self.visitchildren(root)
        return (StatListNode(root.pos, stats=self.funcs), root.scope)

    def visit_FuncDefNode(self, node):
        self.funcs.append(node)
        # Do not visit children, nested funcdefnodes will
        # also be moved by this action...
        return node

    def visit_Node(self, node):
        self.visitchildren(node)
        return node
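
# --- Illustrative sketch (not part of the original module) ---
# How ExtractPxdCode is meant to be driven: call the transform on a parsed
# pxd module node and unpack the (StatListNode, ModuleScope) pair described
# in the class docstring.  'root' is a hypothetical parse tree root carrying
# .pos and .scope attributes.
def _example_extract_pxd_code(root):
    stats, scope = ExtractPxdCode()(root)
    return stats, scope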
""" if self._cythonscope_initialized: return self._cythonscope_initialized = True cython_testscope_utility_code.declare_in_scope( self, cython_scope=self) cython_test_extclass_utility_code.declare_in_scope( self, cython_scope=self) # # The view sub-scope # self.viewscope = viewscope = ModuleScope(u'view', self, None) self.declare_module('view', viewscope, None).as_module = viewscope viewscope.is_cython_builtin = True viewscope.pxd_file_loaded = True cythonview_testscope_utility_code.declare_in_scope( viewscope, cython_scope=self) view_utility_scope = MemoryView.view_utility_code.declare_in_scope( self.viewscope, cython_scope=self, whitelist=MemoryView.view_utility_whitelist) # self.entries["array"] = view_utility_scope.entries.pop("array") def create_cython_scope(context): # One could in fact probably make it a singleton, # but not sure yet whether any code mutates it (which would kill reusing # it across different contexts) return CythonScope(context) # Load test utilities for the cython scope def load_testscope_utility(cy_util_name, **kwargs): return CythonUtilityCode.load(cy_util_name, "TestCythonScope.pyx", **kwargs) undecorated_methods_protos = UtilityCode(proto=u""" /* These methods are undecorated and have therefore no prototype */ static PyObject *__pyx_TestClass_cdef_method( struct __pyx_TestClass_obj *self, int value); static PyObject *__pyx_TestClass_cpdef_method( struct __pyx_TestClass_obj *self, int value, int skip_dispatch); static PyObject *__pyx_TestClass_def_method( PyObject *self, PyObject *value); """) cython_testscope_utility_code = load_testscope_utility("TestScope") test_cython_utility_dep = load_testscope_utility("TestDep") cython_test_extclass_utility_code = \ load_testscope_utility("TestClass", name="TestClass", requires=[undecorated_methods_protos, test_cython_utility_dep]) cythonview_testscope_utility_code = load_testscope_utility("View.TestScope") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/DebugFlags.py0000644000175000017500000000115700000000000027640 0ustar00vincentvincent00000000000000# Can be enabled at the command line with --debug-xxx. debug_disposal_code = 0 debug_temp_alloc = 0 debug_coercion = 0 # Write comments into the C code that show where temporary variables # are allocated and released. debug_temp_code_comments = 0 # Write a call trace of the code generation phase into the C code. debug_trace_code_generation = 0 # Do not replace exceptions with user-friendly error messages. debug_no_exception_intercept = 0 # Print a message each time a new stage in the pipeline is entered. debug_verbose_pipeline = 0 # Raise an exception when an error is encountered. debug_exception_on_error = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Errors.py0000644000175000017500000001660200000000000027112 0ustar00vincentvincent00000000000000# # Errors # from __future__ import absolute_import try: from __builtin__ import basestring as any_string_type except ImportError: any_string_type = (bytes, str) import sys from contextlib import contextmanager from ..Utils import open_new_file from . import DebugFlags from . 

# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Errors.py =====

#
#   Errors
#

from __future__ import absolute_import

try:
    from __builtin__ import basestring as any_string_type
except ImportError:
    any_string_type = (bytes, str)

import sys
from contextlib import contextmanager

from ..Utils import open_new_file
from . import DebugFlags
from . import Options


class PyrexError(Exception):
    pass


class PyrexWarning(Exception):
    pass


def context(position):
    source = position[0]
    assert not (isinstance(source, any_string_type)), (
        "Please replace filename strings with Scanning.FileSourceDescriptor instances %r" % source)
    try:
        F = source.get_lines()
    except UnicodeDecodeError:
        # file has an encoding problem
        s = u"[unprintable code]\n"
    else:
        s = u''.join(F[max(0, position[1]-6):position[1]])
        s = u'...\n%s%s^\n' % (s, u' '*(position[2]-1))
    s = u'%s\n%s%s\n' % (u'-'*60, s, u'-'*60)
    return s


def format_position(position):
    if position:
        return u"%s:%d:%d: " % (position[0].get_error_description(),
                                position[1], position[2])
    return u''


def format_error(message, position):
    if position:
        pos_str = format_position(position)
        cont = context(position)
        message = u'\nError compiling Cython file:\n%s\n%s%s' % (cont, pos_str, message or u'')
    return message


class CompileError(PyrexError):

    def __init__(self, position=None, message=u""):
        self.position = position
        self.message_only = message
        self.formatted_message = format_error(message, position)
        self.reported = False
        # Deprecated and withdrawn in 2.6:
        #   self.message = message
        Exception.__init__(self, self.formatted_message)
        # Python Exception subclass pickling is broken,
        # see http://bugs.python.org/issue1692335
        self.args = (position, message)

    def __str__(self):
        return self.formatted_message


class CompileWarning(PyrexWarning):

    def __init__(self, position=None, message=""):
        self.position = position
        # Deprecated and withdrawn in 2.6:
        #   self.message = message
        Exception.__init__(self, format_position(position) + message)


class InternalError(Exception):
    # If this is ever raised, there is a bug in the compiler.

    def __init__(self, message):
        self.message_only = message
        Exception.__init__(self, u"Internal compiler error: %s" % message)


class AbortError(Exception):
    # Throw this to stop the compilation immediately.

    def __init__(self, message):
        self.message_only = message
        Exception.__init__(self, u"Abort error: %s" % message)


class CompilerCrash(CompileError):
    # raised when an unexpected exception occurs in a transform

    def __init__(self, pos, context, message, cause, stacktrace=None):
        if message:
            message = u'\n' + message
        else:
            message = u'\n'
        self.message_only = message
        if context:
            message = u"Compiler crash in %s%s" % (context, message)
        if stacktrace:
            import traceback
            message += (
                u'\n\nCompiler crash traceback from this point on:\n' +
                u''.join(traceback.format_tb(stacktrace)))
        if cause:
            if not stacktrace:
                message += u'\n'
            message += u'%s: %s' % (cause.__class__.__name__, cause)
        CompileError.__init__(self, pos, message)
        # Python Exception subclass pickling is broken,
        # see http://bugs.python.org/issue1692335
        self.args = (pos, context, message, cause, stacktrace)


class NoElementTreeInstalledException(PyrexError):
    """raised when the user enabled options.gdb_debug but no ElementTree
    implementation was found
    """


listing_file = None
num_errors = 0
echo_file = None


def open_listing_file(path, echo_to_stderr=1):
    # Begin a new error listing. If path is None, no file
    # is opened, the error counter is just reset.
    global listing_file, num_errors, echo_file
    if path is not None:
        listing_file = open_new_file(path)
    else:
        listing_file = None
    if echo_to_stderr:
        echo_file = sys.stderr
    else:
        echo_file = None
    num_errors = 0


def close_listing_file():
    global listing_file
    if listing_file:
        listing_file.close()
        listing_file = None


def report_error(err, use_stack=True):
    if error_stack and use_stack:
        error_stack[-1].append(err)
    else:
        global num_errors
        # See Main.py for why dual reporting occurs. Quick fix for now.
        if err.reported:
            return
        err.reported = True
        try:
            line = u"%s\n" % err
        except UnicodeEncodeError:
            # Python <= 2.5 does this for non-ASCII Unicode exceptions
            line = format_error(getattr(err, 'message_only', "[unprintable exception message]"),
                                getattr(err, 'position', None)) + u'\n'
        if listing_file:
            try:
                listing_file.write(line)
            except UnicodeEncodeError:
                listing_file.write(line.encode('ASCII', 'replace'))
        if echo_file:
            try:
                echo_file.write(line)
            except UnicodeEncodeError:
                echo_file.write(line.encode('ASCII', 'replace'))
        num_errors += 1
        if Options.fast_fail:
            raise AbortError("fatal errors")


def error(position, message):
    #print("Errors.error:", repr(position), repr(message))  ###
    if position is None:
        raise InternalError(message)
    err = CompileError(position, message)
    if DebugFlags.debug_exception_on_error:
        raise Exception(err)  # debug
    report_error(err)
    return err


LEVEL = 1  # warn about all errors level 1 or higher


def message(position, message, level=1):
    if level < LEVEL:
        return
    warn = CompileWarning(position, message)
    line = "note: %s\n" % warn
    if listing_file:
        listing_file.write(line)
    if echo_file:
        echo_file.write(line)
    return warn


def warning(position, message, level=0):
    if level < LEVEL:
        return
    if Options.warning_errors and position:
        return error(position, message)
    warn = CompileWarning(position, message)
    line = "warning: %s\n" % warn
    if listing_file:
        listing_file.write(line)
    if echo_file:
        echo_file.write(line)
    return warn


_warn_once_seen = {}


def warn_once(position, message, level=0):
    if level < LEVEL or message in _warn_once_seen:
        return
    warn = CompileWarning(position, message)
    line = "warning: %s\n" % warn
    if listing_file:
        listing_file.write(line)
    if echo_file:
        echo_file.write(line)
    _warn_once_seen[message] = True
    return warn
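
# --- Illustrative sketch (not part of the original module) ---
# warn_once() deduplicates on the message text via _warn_once_seen, so a
# diagnostic that fires repeatedly is only reported the first time.  'pos'
# is a hypothetical (source-descriptor, line, column) position tuple.
def _example_warn_once_dedup(pos):
    for _ in range(3):
        warn_once(pos, "repeated deprecation note", level=1)
    # only the first iteration writes "warning: ..."; the others are skipped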

# These functions can be used to momentarily suppress errors.

error_stack = []


def hold_errors():
    error_stack.append([])


def release_errors(ignore=False):
    held_errors = error_stack.pop()
    if not ignore:
        for err in held_errors:
            report_error(err)


def held_errors():
    return error_stack[-1]


# same as context manager:

@contextmanager
def local_errors(ignore=False):
    errors = []
    error_stack.append(errors)
    try:
        yield errors
    finally:
        release_errors(ignore=ignore)


# this module needs a redesign to support parallel cythonisation, but
# for now, the following works at least in sequential compiler runs

def reset():
    _warn_once_seen.clear()
    del error_stack[:]
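
# --- Illustrative sketch (not part of the original module) ---
# The two equivalent ways to momentarily buffer errors during a speculative
# analysis pass: the explicit hold/release pair, or the local_errors()
# context manager (used, e.g., by _analyse_name_as_type() in ExprNodes.py).
def _example_suppress_errors():
    hold_errors()                    # push a fresh collector onto error_stack
    try:
        pass                         # ... work that may call error() ...
    finally:
        release_errors(ignore=True)  # discard everything collected

    with local_errors(ignore=True) as errors:
        pass                         # collected errors land in 'errors'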

# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/ExprNodes.py =====

#
#   Parse tree nodes for expressions
#

from __future__ import absolute_import

import cython
cython.declare(error=object, warning=object, warn_once=object, InternalError=object,
               CompileError=object, UtilityCode=object, TempitaUtilityCode=object,
               StringEncoding=object, operator=object, local_errors=object, report_error=object,
               Naming=object, Nodes=object, PyrexTypes=object, py_object_type=object,
               list_type=object, tuple_type=object, set_type=object, dict_type=object,
               unicode_type=object, str_type=object, bytes_type=object, type_type=object,
               Builtin=object, Symtab=object, Utils=object, find_coercion_error=object,
               debug_disposal_code=object, debug_temp_alloc=object, debug_coercion=object,
               bytearray_type=object, slice_type=object, _py_int_types=object,
               IS_PYTHON3=cython.bint)

import re
import sys
import copy
import os.path
import operator

from .Errors import (
    error, warning, InternalError, CompileError, report_error, local_errors)
from .Code import UtilityCode, TempitaUtilityCode
from . import StringEncoding
from . import Naming
from . import Nodes
from .Nodes import Node, utility_code_for_imports, analyse_type_annotation
from . import PyrexTypes
from .PyrexTypes import py_object_type, c_long_type, typecast, error_type, \
    unspecified_type
from . import TypeSlots
from .Builtin import list_type, tuple_type, set_type, dict_type, type_type, \
    unicode_type, str_type, bytes_type, bytearray_type, basestring_type, slice_type
from . import Builtin
from . import Symtab
from .. import Utils
from .Annotate import AnnotationItem
from . import Future
from ..Debugging import print_call_chain
from .DebugFlags import debug_disposal_code, debug_temp_alloc, \
    debug_coercion
from .Pythran import (to_pythran, is_pythran_supported_type, is_pythran_supported_operation_type,
                      is_pythran_expr, pythran_func_type, pythran_binop_type, pythran_unaryop_type,
                      has_np_pythran, pythran_indexing_code, pythran_indexing_type,
                      is_pythran_supported_node_or_none, pythran_type,
                      pythran_is_numpy_func_supported, pythran_get_func_include_file, pythran_functor)
from .PyrexTypes import PythranExpr

try:
    from __builtin__ import basestring
except ImportError:
    # Python 3
    basestring = str
    any_string_type = (bytes, str)
else:
    # Python 2
    any_string_type = (bytes, unicode)


if sys.version_info[0] >= 3:
    IS_PYTHON3 = True
    _py_int_types = int
else:
    IS_PYTHON3 = False
    _py_int_types = (int, long)


class NotConstant(object):
    _obj = None

    def __new__(cls):
        if NotConstant._obj is None:
            NotConstant._obj = super(NotConstant, cls).__new__(cls)

        return NotConstant._obj

    def __repr__(self):
        return "<NOT CONSTANT>"

not_a_constant = NotConstant()
constant_value_not_set = object()

# error messages when coercing from key[0] to key[1]
coercion_error_dict = {
    # string related errors
    (unicode_type, str_type): ("Cannot convert Unicode string to 'str' implicitly."
                               " This is not portable and requires explicit encoding."),
    (unicode_type, bytes_type): "Cannot convert Unicode string to 'bytes' implicitly, encoding required.",
    (unicode_type, PyrexTypes.c_char_ptr_type): "Unicode objects only support coercion to Py_UNICODE*.",
    (unicode_type, PyrexTypes.c_const_char_ptr_type): "Unicode objects only support coercion to Py_UNICODE*.",
    (unicode_type, PyrexTypes.c_uchar_ptr_type): "Unicode objects only support coercion to Py_UNICODE*.",
    (unicode_type, PyrexTypes.c_const_uchar_ptr_type): "Unicode objects only support coercion to Py_UNICODE*.",
    (bytes_type, unicode_type): "Cannot convert 'bytes' object to unicode implicitly, decoding required",
    (bytes_type, str_type): "Cannot convert 'bytes' object to str implicitly. This is not portable to Py3.",
    (bytes_type, basestring_type): ("Cannot convert 'bytes' object to basestring implicitly."
                                    " This is not portable to Py3."),
    (bytes_type, PyrexTypes.c_py_unicode_ptr_type): "Cannot convert 'bytes' object to Py_UNICODE*, use 'unicode'.",
    (bytes_type, PyrexTypes.c_const_py_unicode_ptr_type): (
        "Cannot convert 'bytes' object to Py_UNICODE*, use 'unicode'."),
    (basestring_type, bytes_type): "Cannot convert 'basestring' object to bytes implicitly. This is not portable.",
    (str_type, unicode_type): ("str objects do not support coercion to unicode,"
                               " use a unicode string literal instead (u'')"),
    (str_type, bytes_type): "Cannot convert 'str' to 'bytes' implicitly. This is not portable.",
This is not portable.", (str_type, PyrexTypes.c_char_ptr_type): "'str' objects do not support coercion to C types (use 'bytes'?).", (str_type, PyrexTypes.c_const_char_ptr_type): "'str' objects do not support coercion to C types (use 'bytes'?).", (str_type, PyrexTypes.c_uchar_ptr_type): "'str' objects do not support coercion to C types (use 'bytes'?).", (str_type, PyrexTypes.c_const_uchar_ptr_type): "'str' objects do not support coercion to C types (use 'bytes'?).", (str_type, PyrexTypes.c_py_unicode_ptr_type): "'str' objects do not support coercion to C types (use 'unicode'?).", (str_type, PyrexTypes.c_const_py_unicode_ptr_type): ( "'str' objects do not support coercion to C types (use 'unicode'?)."), (PyrexTypes.c_char_ptr_type, unicode_type): "Cannot convert 'char*' to unicode implicitly, decoding required", (PyrexTypes.c_const_char_ptr_type, unicode_type): ( "Cannot convert 'char*' to unicode implicitly, decoding required"), (PyrexTypes.c_uchar_ptr_type, unicode_type): "Cannot convert 'char*' to unicode implicitly, decoding required", (PyrexTypes.c_const_uchar_ptr_type, unicode_type): ( "Cannot convert 'char*' to unicode implicitly, decoding required"), } def find_coercion_error(type_tuple, default, env): err = coercion_error_dict.get(type_tuple) if err is None: return default elif (env.directives['c_string_encoding'] and any(t in type_tuple for t in (PyrexTypes.c_char_ptr_type, PyrexTypes.c_uchar_ptr_type, PyrexTypes.c_const_char_ptr_type, PyrexTypes.c_const_uchar_ptr_type))): if type_tuple[1].is_pyobject: return default elif env.directives['c_string_encoding'] in ('ascii', 'default'): return default else: return "'%s' objects do not support coercion to C types with non-ascii or non-default c_string_encoding" % type_tuple[0].name else: return err def default_str_type(env): return { 'bytes': bytes_type, 'bytearray': bytearray_type, 'str': str_type, 'unicode': unicode_type }.get(env.directives['c_string_type']) def check_negative_indices(*nodes): """ Raise a warning on nodes that are known to have negative numeric values. Used to find (potential) bugs inside of "wraparound=False" sections. """ for node in nodes: if node is None or ( not isinstance(node.constant_result, _py_int_types) and not isinstance(node.constant_result, float)): continue if node.constant_result < 0: warning(node.pos, "the result of using negative indices inside of " "code sections marked as 'wraparound=False' is " "undefined", level=1) def infer_sequence_item_type(env, seq_node, index_node=None, seq_type=None): if not seq_node.is_sequence_constructor: if seq_type is None: seq_type = seq_node.infer_type(env) if seq_type is tuple_type: # tuples are immutable => we can safely follow assignments if seq_node.cf_state and len(seq_node.cf_state) == 1: try: seq_node = seq_node.cf_state[0].rhs except AttributeError: pass if seq_node is not None and seq_node.is_sequence_constructor: if index_node is not None and index_node.has_constant_result(): try: item = seq_node.args[index_node.constant_result] except (ValueError, TypeError, IndexError): pass else: return item.infer_type(env) # if we're lucky, all items have the same type item_types = set([item.infer_type(env) for item in seq_node.args]) if len(item_types) == 1: return item_types.pop() return None def make_dedup_key(outer_type, item_nodes): """ Recursively generate a deduplication key from a sequence of values. Includes Cython node types to work around the fact that (1, 2.0) == (1.0, 2), for example. @param outer_type: The type of the outer container. 

def make_dedup_key(outer_type, item_nodes):
    """
    Recursively generate a deduplication key from a sequence of values.

    Includes Cython node types to work around the fact that (1, 2.0) == (1.0, 2), for example.

    @param outer_type: The type of the outer container.
    @param item_nodes: A sequence of constant nodes that will be traversed recursively.
    @return: A tuple that can be used as a dict key for deduplication.
    """
    item_keys = [
        (py_object_type, None, type(None)) if node is None
        # For sequences and their "mult_factor", see TupleNode.
        else make_dedup_key(node.type, [node.mult_factor if node.is_literal else None] + node.args)
        if node.is_sequence_constructor
        else make_dedup_key(node.type, (node.start, node.stop, node.step)) if node.is_slice
        # For constants, look at the Python value type if we don't know the concrete Cython type.
        else (node.type, node.constant_result,
              type(node.constant_result) if node.type is py_object_type else None) if node.has_constant_result()
        else None  # something we cannot handle => short-circuit below
        for node in item_nodes
    ]
    if None in item_keys:
        return None
    return outer_type, tuple(item_keys)


# Returns a block of code to translate the exception,
# plus a boolean indicating whether to check for Python exceptions.
def get_exception_handler(exception_value):
    if exception_value is None:
        return "__Pyx_CppExn2PyErr();", False
    elif (exception_value.type == PyrexTypes.c_char_type
            and exception_value.value == '*'):
        return "__Pyx_CppExn2PyErr();", True
    elif exception_value.type.is_pyobject:
        return (
            'try { throw; } catch(const std::exception& exn) {'
            'PyErr_SetString(%s, exn.what());'
            '} catch(...) { PyErr_SetNone(%s); }' % (
                exception_value.entry.cname,
                exception_value.entry.cname),
            False)
    else:
        return (
            '%s(); if (!PyErr_Occurred())'
            'PyErr_SetString(PyExc_RuntimeError, '
            '"Error converting c++ exception.");' % (
                exception_value.entry.cname),
            False)


def maybe_check_py_error(code, check_py_exception, pos, nogil):
    if check_py_exception:
        if nogil:
            code.putln(code.error_goto_if("__Pyx_ErrOccurredWithGIL()", pos))
        else:
            code.putln(code.error_goto_if("PyErr_Occurred()", pos))


def translate_cpp_exception(code, pos, inside, py_result, exception_value, nogil):
    raise_py_exception, check_py_exception = get_exception_handler(exception_value)
    code.putln("try {")
    code.putln("%s" % inside)
    if py_result:
        code.putln(code.error_goto_if_null(py_result, pos))
    maybe_check_py_error(code, check_py_exception, pos, nogil)
    code.putln("} catch(...) {")
    if nogil:
        code.put_ensure_gil(declare_gilstate=True)
    code.putln(raise_py_exception)
    if nogil:
        code.put_release_ensured_gil()
    code.putln(code.error_goto(pos))
    code.putln("}")
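
# --- Illustrative sketch (not part of the original module) ---
# The C skeleton that translate_cpp_exception() above emits around a C++
# call, shown here for the default handler (no exception declaration).
# The temp name, call and error label are illustrative, not verbatim output.
_EXAMPLE_CPP_TRY_SKELETON = """\
try {
    __pyx_t_1 = some_cpp_call();
} catch(...) {
    __Pyx_CppExn2PyErr();
    goto __pyx_L1_error;
}
"""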
{") if nogil: code.put_ensure_gil(declare_gilstate=True) code.putln(handle_lhs_exc) if nogil: code.put_release_ensured_gil() code.putln(code.error_goto(pos)) code.putln('}') class ExprNode(Node): # subexprs [string] Class var holding names of subexpr node attrs # type PyrexType Type of the result # result_code string Code fragment # result_ctype string C type of result_code if different from type # is_temp boolean Result is in a temporary variable # is_sequence_constructor # boolean Is a list or tuple constructor expression # is_starred boolean Is a starred expression (e.g. '*a') # saved_subexpr_nodes # [ExprNode or [ExprNode or None] or None] # Cached result of subexpr_nodes() # use_managed_ref boolean use ref-counted temps/assignments/etc. # result_is_used boolean indicates that the result will be dropped and the # is_numpy_attribute boolean Is a Numpy module attribute # result_code/temp_result can safely be set to None # annotation ExprNode or None PEP526 annotation for names or expressions result_ctype = None type = None annotation = None temp_code = None old_temp = None # error checker for multiple frees etc. use_managed_ref = True # can be set by optimisation transforms result_is_used = True is_numpy_attribute = False # The Analyse Expressions phase for expressions is split # into two sub-phases: # # Analyse Types # Determines the result type of the expression based # on the types of its sub-expressions, and inserts # coercion nodes into the expression tree where needed. # Marks nodes which will need to have temporary variables # allocated. # # Allocate Temps # Allocates temporary variables where needed, and fills # in the result_code field of each node. # # ExprNode provides some convenience routines which # perform both of the above phases. These should only # be called from statement nodes, and only when no # coercion nodes need to be added around the expression # being analysed. In that case, the above two phases # should be invoked separately. # # Framework code in ExprNode provides much of the common # processing for the various phases. It makes use of the # 'subexprs' class attribute of ExprNodes, which should # contain a list of the names of attributes which can # hold sub-nodes or sequences of sub-nodes. # # The framework makes use of a number of abstract methods. # Their responsibilities are as follows. # # Declaration Analysis phase # # analyse_target_declaration # Called during the Analyse Declarations phase to analyse # the LHS of an assignment or argument of a del statement. # Nodes which cannot be the LHS of an assignment need not # implement it. # # Expression Analysis phase # # analyse_types # - Call analyse_types on all sub-expressions. # - Check operand types, and wrap coercion nodes around # sub-expressions where needed. # - Set the type of this node. # - If a temporary variable will be required for the # result, set the is_temp flag of this node. # # analyse_target_types # Called during the Analyse Types phase to analyse # the LHS of an assignment or argument of a del # statement. Similar responsibilities to analyse_types. # # target_code # Called by the default implementation of allocate_target_temps. # Should return a C lvalue for assigning to the node. The default # implementation calls calculate_result_code. # # check_const # - Check that this node and its subnodes form a # legal constant expression. If so, do nothing, # otherwise call not_const. # # The default implementation of check_const # assumes that the expression is not constant. 
    #
    #      check_const_addr
    #        - Same as check_const, except check that the
    #          expression is a C lvalue whose address is
    #          constant. Otherwise, call addr_not_const.
    #
    #        The default implementation of calc_const_addr
    #        assumes that the expression is not a constant
    #        lvalue.
    #
    #    Code Generation phase
    #
    #      generate_evaluation_code
    #        - Call generate_evaluation_code for sub-expressions.
    #        - Perform the functions of generate_result_code
    #          (see below).
    #        - If result is temporary, call generate_disposal_code
    #          on all sub-expressions.
    #
    #        A default implementation of generate_evaluation_code
    #        is provided which uses the following abstract methods:
    #
    #          generate_result_code
    #            - Generate any C statements necessary to calculate
    #              the result of this node from the results of its
    #              sub-expressions.
    #
    #          calculate_result_code
    #            - Should return a C code fragment evaluating to the
    #              result. This is only called when the result is not
    #              a temporary.
    #
    #      generate_assignment_code
    #        Called on the LHS of an assignment.
    #        - Call generate_evaluation_code for sub-expressions.
    #        - Generate code to perform the assignment.
    #        - If the assignment absorbed a reference, call
    #          generate_post_assignment_code on the RHS,
    #          otherwise call generate_disposal_code on it.
    #
    #      generate_deletion_code
    #        Called on an argument of a del statement.
    #        - Call generate_evaluation_code for sub-expressions.
    #        - Generate code to perform the deletion.
    #        - Call generate_disposal_code on all sub-expressions.

    is_sequence_constructor = False
    is_dict_literal = False
    is_set_literal = False
    is_string_literal = False
    is_attribute = False
    is_subscript = False
    is_slice = False

    is_buffer_access = False
    is_memview_index = False
    is_memview_slice = False
    is_memview_broadcast = False
    is_memview_copy_assignment = False

    saved_subexpr_nodes = None
    is_temp = False
    is_target = False
    is_starred = False

    constant_result = constant_value_not_set

    child_attrs = property(fget=operator.attrgetter('subexprs'))

    def not_implemented(self, method_name):
        print_call_chain(method_name, "not implemented")  ###
        raise InternalError(
            "%s.%s not implemented" % (self.__class__.__name__, method_name))

    def is_lvalue(self):
        return 0

    def is_addressable(self):
        return self.is_lvalue() and not self.type.is_memoryviewslice

    def is_ephemeral(self):
        #  An ephemeral node is one whose result is in
        #  a Python temporary and we suspect there are no
        #  other references to it. Certain operations are
        #  disallowed on such values, since they are
        #  likely to result in a dangling pointer.
        return self.type.is_pyobject and self.is_temp

    def subexpr_nodes(self):
        #  Extract a list of subexpression nodes based
        #  on the contents of the subexprs class attribute.
        nodes = []
        for name in self.subexprs:
            item = getattr(self, name)
            if item is not None:
                if type(item) is list:
                    nodes.extend(item)
                else:
                    nodes.append(item)
        return nodes

    def result(self):
        if self.is_temp:
            #if not self.temp_code:
            #    pos = (os.path.basename(self.pos[0].get_description()),) + self.pos[1:] if self.pos else '(?)'
            #    raise RuntimeError("temp result name not set in %s at %r" % (
            #        self.__class__.__name__, pos))
            return self.temp_code
        else:
            return self.calculate_result_code()

    def pythran_result(self, type_=None):
        if is_pythran_supported_node_or_none(self):
            return to_pythran(self)

        assert(type_ is not None)
        return to_pythran(self, type_)

    def is_c_result_required(self):
        """
        Subtypes may return False here if result temp allocation can be skipped.
        """
        return True

    def result_as(self, type=None):
        #  Return the result code cast to the specified C type.
        if (self.is_temp and self.type.is_pyobject and
                type != py_object_type):
            # Allocated temporaries are always PyObject *, which may not
            # reflect the actual type (e.g. an extension type)
            return typecast(type, py_object_type, self.result())
        return typecast(type, self.ctype(), self.result())

    def py_result(self):
        #  Return the result code cast to PyObject *.
        return self.result_as(py_object_type)

    def ctype(self):
        #  Return the native C type of the result (i.e. the
        #  C type of the result_code expression).
        return self.result_ctype or self.type

    def get_constant_c_result_code(self):
        # Return the constant value of this node as a result code
        # string, or None if the node is not constant.  This method
        # can be called when the constant result code is required
        # before the code generation phase.
        #
        # The return value is a string that can represent a simple C
        # value, a constant C name or a constant C expression.  If the
        # node type depends on Python code, this must return None.
        return None

    def calculate_constant_result(self):
        # Calculate the constant compile time result value of this
        # expression and store it in ``self.constant_result``.  Does
        # nothing by default, thus leaving ``self.constant_result``
        # unknown.  If valid, the result can be an arbitrary Python
        # value.
        #
        # This must only be called when it is assured that all
        # sub-expressions have a valid constant_result value.  The
        # ConstantFolding transform will do this.
        pass

    def has_constant_result(self):
        return self.constant_result is not constant_value_not_set and \
               self.constant_result is not not_a_constant

    def compile_time_value(self, denv):
        #  Return value of compile-time expression, or report error.
        error(self.pos, "Invalid compile-time expression")

    def compile_time_value_error(self, e):
        error(self.pos, "Error in compile-time expression: %s: %s" % (
            e.__class__.__name__, e))

    # ------------- Declaration Analysis ----------------

    def analyse_target_declaration(self, env):
        error(self.pos, "Cannot assign to or delete this")

    # ------------- Expression Analysis ----------------

    def analyse_const_expression(self, env):
        #  Called during the analyse_declarations phase of a
        #  constant expression. Analyses the expression's type,
        #  checks whether it is a legal const expression,
        #  and determines its value.
        node = self.analyse_types(env)
        node.check_const()
        return node

    def analyse_expressions(self, env):
        #  Convenience routine performing both the Type
        #  Analysis and Temp Allocation phases for a whole
        #  expression.
        return self.analyse_types(env)

    def analyse_target_expression(self, env, rhs):
        #  Convenience routine performing both the Type
        #  Analysis and Temp Allocation phases for the LHS of
        #  an assignment.
        return self.analyse_target_types(env)

    def analyse_boolean_expression(self, env):
        #  Analyse expression and coerce to a boolean.
        node = self.analyse_types(env)
        bool = node.coerce_to_boolean(env)
        return bool

    def analyse_temp_boolean_expression(self, env):
        #  Analyse boolean expression and coerce result into
        #  a temporary. This is used when a branch is to be
        #  performed on the result and we won't have an
        #  opportunity to ensure disposal code is executed
        #  afterwards. By forcing the result into a temporary,
        #  we ensure that all disposal has been done by the
        #  time we get the result.
        node = self.analyse_types(env)
        return node.coerce_to_boolean(env).coerce_to_simple(env)

    # --------------- Type Inference -----------------

    def type_dependencies(self, env):
        # Returns the list of entries whose types must be determined
        # before the type of self can be inferred.
        if hasattr(self, 'type') and self.type is not None:
            return ()
        return sum([node.type_dependencies(env) for node in self.subexpr_nodes()], ())

    def infer_type(self, env):
        # Attempt to deduce the type of self.
        # Differs from analyse_types as it avoids unnecessary
        # analysis of subexpressions, but can assume everything
        # in self.type_dependencies() has been resolved.
        if hasattr(self, 'type') and self.type is not None:
            return self.type
        elif hasattr(self, 'entry') and self.entry is not None:
            return self.entry.type
        else:
            self.not_implemented("infer_type")

    def nonlocally_immutable(self):
        # Returns whether this variable is a safe reference, i.e.
        # can't be modified as part of globals or closures.
        return self.is_literal or self.is_temp or self.type.is_array or self.type.is_cfunction

    def inferable_item_node(self, index=0):
        """
        Return a node that represents the (type) result of an indexing operation,
        e.g. for tuple unpacking or iteration.
        """
        return IndexNode(self.pos, base=self, index=IntNode(
            self.pos, value=str(index), constant_result=index, type=PyrexTypes.c_py_ssize_t_type))

    # --------------- Type Analysis ------------------

    def analyse_as_module(self, env):
        # If this node can be interpreted as a reference to a
        # cimported module, return its scope, else None.
        return None

    def analyse_as_type(self, env):
        # If this node can be interpreted as a reference to a
        # type, return that type, else None.
        return None

    def analyse_as_extension_type(self, env):
        # If this node can be interpreted as a reference to an
        # extension type or builtin type, return its type, else None.
        return None

    def analyse_types(self, env):
        self.not_implemented("analyse_types")

    def analyse_target_types(self, env):
        return self.analyse_types(env)

    def nogil_check(self, env):
        # By default, any expression based on Python objects is
        # prevented in nogil environments.  Subtypes must override
        # this if they can work without the GIL.
        if self.type and self.type.is_pyobject:
            self.gil_error()

    def gil_assignment_check(self, env):
        if env.nogil and self.type.is_pyobject:
            error(self.pos, "Assignment of Python object not allowed without gil")

    def check_const(self):
        self.not_const()
        return False

    def not_const(self):
        error(self.pos, "Not allowed in a constant expression")

    def check_const_addr(self):
        self.addr_not_const()
        return False

    def addr_not_const(self):
        error(self.pos, "Address is not constant")

    # ----------------- Result Allocation -----------------

    def result_in_temp(self):
        #  Return true if result is in a temporary owned by
        #  this node or one of its subexpressions.  Overridden
        #  by certain nodes which can share the result of
        #  a subnode.
        return self.is_temp

    def target_code(self):
        #  Return code fragment for use as LHS of a C assignment.
        return self.calculate_result_code()

    def calculate_result_code(self):
        self.not_implemented("calculate_result_code")

    # def release_target_temp(self, env):
    #    #  Release temporaries used by LHS of an assignment.
    #    self.release_subexpr_temps(env)

    def allocate_temp_result(self, code):
        if self.temp_code:
            raise RuntimeError("Temp allocated multiple times in %r: %r" % (self.__class__.__name__, self.pos))
        type = self.type
        if not type.is_void:
            if type.is_pyobject:
                type = PyrexTypes.py_object_type
            elif not (self.result_is_used or type.is_memoryviewslice or self.is_c_result_required()):
                self.temp_code = None
                return
            self.temp_code = code.funcstate.allocate_temp(
                type, manage_ref=self.use_managed_ref)
        else:
            self.temp_code = None

    def release_temp_result(self, code):
        if not self.temp_code:
            if not self.result_is_used:
                # not used anyway, so ignore if not set up
                return
            pos = (os.path.basename(self.pos[0].get_description()),) + self.pos[1:] if self.pos else '(?)'
            if self.old_temp:
                raise RuntimeError("temp %s released multiple times in %s at %r" % (
                    self.old_temp, self.__class__.__name__, pos))
            else:
                raise RuntimeError("no temp, but release requested in %s at %r" % (
                    self.__class__.__name__, pos))
        code.funcstate.release_temp(self.temp_code)
        self.old_temp = self.temp_code
        self.temp_code = None

    # ---------------- Code Generation -----------------

    def make_owned_reference(self, code):
        """
        If result is a pyobject, make sure we own a reference to it.
        If the result is in a temp, it is already a new reference.
        """
        if self.type.is_pyobject and not self.result_in_temp():
            code.put_incref(self.result(), self.ctype())

    def make_owned_memoryviewslice(self, code):
        """
        Make sure we own the reference to this memoryview slice.
        """
        if not self.result_in_temp():
            code.put_incref_memoryviewslice(self.result(),
                                            have_gil=self.in_nogil_context)

    def generate_evaluation_code(self, code):
        #  Generate code to evaluate this node and
        #  its sub-expressions, and dispose of any
        #  temporary results of its sub-expressions.
        self.generate_subexpr_evaluation_code(code)

        code.mark_pos(self.pos)
        if self.is_temp:
            self.allocate_temp_result(code)

        self.generate_result_code(code)
        if self.is_temp and not (self.type.is_string or self.type.is_pyunicode_ptr):
            # If we are temp we do not need to wait until this node is disposed
            # before disposing children.
            self.generate_subexpr_disposal_code(code)
            self.free_subexpr_temps(code)

    def generate_subexpr_evaluation_code(self, code):
        for node in self.subexpr_nodes():
            node.generate_evaluation_code(code)

    def generate_result_code(self, code):
        self.not_implemented("generate_result_code")

    def generate_disposal_code(self, code):
        if self.is_temp:
            if self.type.is_string or self.type.is_pyunicode_ptr:
                # postponed from self.generate_evaluation_code()
                self.generate_subexpr_disposal_code(code)
                self.free_subexpr_temps(code)
            if self.result():
                if self.type.is_pyobject:
                    code.put_decref_clear(self.result(), self.ctype())
                elif self.type.is_memoryviewslice:
                    code.put_xdecref_memoryviewslice(
                        self.result(), have_gil=not self.in_nogil_context)
                    code.putln("%s.memview = NULL;" % self.result())
                    code.putln("%s.data = NULL;" % self.result())
        else:
            # Already done if self.is_temp
            self.generate_subexpr_disposal_code(code)

    def generate_subexpr_disposal_code(self, code):
        #  Generate code to dispose of temporary results
        #  of all sub-expressions.
        for node in self.subexpr_nodes():
            node.generate_disposal_code(code)

    def generate_post_assignment_code(self, code):
        if self.is_temp:
            if self.type.is_string or self.type.is_pyunicode_ptr:
                # postponed from self.generate_evaluation_code()
                self.generate_subexpr_disposal_code(code)
                self.free_subexpr_temps(code)
            elif self.type.is_pyobject:
                code.putln("%s = 0;" % self.result())
            elif self.type.is_memoryviewslice:
                code.putln("%s.memview = NULL;" % self.result())
                code.putln("%s.data = NULL;" % self.result())
        else:
            self.generate_subexpr_disposal_code(code)

    def generate_assignment_code(self, rhs, code, overloaded_assignment=False,
                                 exception_check=None, exception_value=None):
        #  Stub method for nodes which are not legal as
        #  the LHS of an assignment. An error will have
        #  been reported earlier.
        pass

    def generate_deletion_code(self, code, ignore_nonexisting=False):
        #  Stub method for nodes that are not legal as
        #  the argument of a del statement. An error
        #  will have been reported earlier.
        pass

    def free_temps(self, code):
        if self.is_temp:
            if not self.type.is_void:
                self.release_temp_result(code)
        else:
            self.free_subexpr_temps(code)

    def free_subexpr_temps(self, code):
        for sub in self.subexpr_nodes():
            sub.free_temps(code)

    def generate_function_definitions(self, env, code):
        pass

    # ---------------- Annotation ---------------------

    def annotate(self, code):
        for node in self.subexpr_nodes():
            node.annotate(code)

    # ----------------- Coercion ----------------------

    def coerce_to(self, dst_type, env):
        #   Coerce the result so that it can be assigned to
        #   something of type dst_type. If processing is necessary,
        #   wraps this node in a coercion node and returns that.
        #   Otherwise, returns this node unchanged.
        #
        #   This method is called during the analyse_expressions
        #   phase of the src_node's processing.
        #
        #   Note that subclasses that override this (especially
        #   ConstNodes) must not (re-)set their own .type attribute
        #   here.  Since expression nodes may turn up in different
        #   places in the tree (e.g. inside of CloneNodes in cascaded
        #   assignments), this method must return a new node instance
        #   if it changes the type.
        #
        src = self
        src_type = self.type

        if self.check_for_coercion_error(dst_type, env):
            return self

        used_as_reference = dst_type.is_reference
        if used_as_reference and not src_type.is_reference:
            dst_type = dst_type.ref_base_type

        if src_type.is_const:
            src_type = src_type.const_base_type

        if src_type.is_fused or dst_type.is_fused:
            # See if we are coercing a fused function to a pointer to a
            # specialized function
            if (src_type.is_cfunction and not dst_type.is_fused and
                    dst_type.is_ptr and dst_type.base_type.is_cfunction):

                dst_type = dst_type.base_type

                for signature in src_type.get_all_specialized_function_types():
                    if signature.same_as(dst_type):
                        src.type = signature
                        src.entry = src.type.entry
                        src.entry.used = True
                        return self

            if src_type.is_fused:
                error(self.pos, "Type is not specialized")
            elif src_type.is_null_ptr and dst_type.is_ptr:
                # NULL can be implicitly cast to any pointer type
                return self
            else:
                error(self.pos, "Cannot coerce to a type that is not specialized")

            self.type = error_type
            return self

        if self.coercion_type is not None:
            # This is purely for error checking purposes!
            node = NameNode(self.pos, name='', type=self.coercion_type)
            node.coerce_to(dst_type, env)

        if dst_type.is_memoryviewslice:
            from . import MemoryView
            if not src.type.is_memoryviewslice:
                if src.type.is_pyobject:
                    src = CoerceToMemViewSliceNode(src, dst_type, env)
                elif src.type.is_array:
                    src = CythonArrayNode.from_carray(src, env).coerce_to(dst_type, env)
                elif not src_type.is_error:
                    error(self.pos,
                          "Cannot convert '%s' to memoryviewslice" % (src_type,))
            else:
                if src.type.writable_needed:
                    dst_type.writable_needed = True
                if not src.type.conforms_to(dst_type, broadcast=self.is_memview_broadcast,
                                            copying=self.is_memview_copy_assignment):
                    if src.type.dtype.same_as(dst_type.dtype):
                        msg = "Memoryview '%s' not conformable to memoryview '%s'."
                        tup = src.type, dst_type
                    else:
                        msg = "Different base types for memoryviews (%s, %s)"
                        tup = src.type.dtype, dst_type.dtype

                    error(self.pos, msg % tup)

        elif dst_type.is_pyobject:
            if not src.type.is_pyobject:
                if dst_type is bytes_type and src.type.is_int:
                    src = CoerceIntToBytesNode(src, env)
                else:
                    src = CoerceToPyTypeNode(src, env, type=dst_type)
            if not src.type.subtype_of(dst_type):
                if src.constant_result is not None:
                    src = PyTypeTestNode(src, dst_type, env)
        elif is_pythran_expr(dst_type) and is_pythran_supported_type(src.type):
            # We let the compiler decide whether this is valid
            return src
        elif is_pythran_expr(src.type):
            if is_pythran_supported_type(dst_type):
                # Match the case where a pythran expr is assigned to a value, or vice versa.
                # We let the C++ compiler decide whether this is valid or not!
                return src
            # Else, we need to convert the Pythran expression to a Python object
            src = CoerceToPyTypeNode(src, env, type=dst_type)
        elif src.type.is_pyobject:
            if used_as_reference and dst_type.is_cpp_class:
                warning(
                    self.pos,
                    "Cannot pass Python object as C++ data structure reference (%s &), will pass by copy." % dst_type)
            src = CoerceFromPyTypeNode(dst_type, src, env)
        elif (dst_type.is_complex
                and src_type != dst_type
                and dst_type.assignable_from(src_type)):
            src = CoerceToComplexNode(src, dst_type, env)
        else:  # neither src nor dst are py types
            # Added the string comparison, since for c types that
            # is enough, but Cython gets confused when the types are
            # in different pxi files.
            # TODO: Remove this hack and require shared declarations.
            if not (src.type == dst_type or str(src.type) == str(dst_type) or dst_type.assignable_from(src_type)):
                self.fail_assignment(dst_type)
        return src

    def fail_assignment(self, dst_type):
        error(self.pos, "Cannot assign type '%s' to '%s'" % (self.type, dst_type))

    def check_for_coercion_error(self, dst_type, env, fail=False, default=None):
        if fail and not default:
            default = "Cannot assign type '%(FROM)s' to '%(TO)s'"
        message = find_coercion_error((self.type, dst_type), default, env)
        if message is not None:
            error(self.pos, message % {'FROM': self.type, 'TO': dst_type})
            return True
        if fail:
            self.fail_assignment(dst_type)
            return True
        return False

    def coerce_to_pyobject(self, env):
        return self.coerce_to(PyrexTypes.py_object_type, env)

    def coerce_to_boolean(self, env):
        #  Coerce result to something acceptable as
        #  a boolean value.

        # if it's constant, calculate the result now
        if self.has_constant_result():
            bool_value = bool(self.constant_result)
            return BoolNode(self.pos, value=bool_value,
                            constant_result=bool_value)

        type = self.type
        if type.is_enum or type.is_error:
            return self
        elif type.is_pyobject or type.is_int or type.is_ptr or type.is_float:
            return CoerceToBooleanNode(self, env)
        elif type.is_cpp_class:
            return SimpleCallNode(
                self.pos,
                function=AttributeNode(
                    self.pos, obj=self, attribute='operator bool'),
                args=[]).analyse_types(env)
        elif type.is_ctuple:
            bool_value = len(type.components) == 0
            return BoolNode(self.pos, value=bool_value,
                            constant_result=bool_value)
        else:
            error(self.pos, "Type '%s' not acceptable as a boolean" % type)
            return self

    def coerce_to_integer(self, env):
        # If not already some C integer type, coerce to longint.
        if self.type.is_int:
            return self
        else:
            return self.coerce_to(PyrexTypes.c_long_type, env)

    def coerce_to_temp(self, env):
        #  Ensure that the result is in a temporary.
        if self.result_in_temp():
            return self
        else:
            return CoerceToTempNode(self, env)

    def coerce_to_simple(self, env):
        #  Ensure that the result is simple (see is_simple).
        if self.is_simple():
            return self
        else:
            return self.coerce_to_temp(env)

    def is_simple(self):
        #  A node is simple if its result is something that can
        #  be referred to without performing any operations, e.g.
        #  a constant, local var, C global var, struct member
        #  reference, or temporary.
        return self.result_in_temp()

    def may_be_none(self):
        if self.type and not (self.type.is_pyobject or
                              self.type.is_memoryviewslice):
            return False
        if self.has_constant_result():
            return self.constant_result is not None
        return True

    def as_cython_attribute(self):
        return None

    def as_none_safe_node(self, message, error="PyExc_TypeError", format_args=()):
        # Wraps the node in a NoneCheckNode if it is not known to be
        # not-None (e.g. because it is a Python literal).
        if self.may_be_none():
            return NoneCheckNode(self, error, message, format_args)
        else:
            return self

    @classmethod
    def from_node(cls, node, **kwargs):
        """Instantiate this node class from another node, properly
        copying over all attributes that one would forget otherwise.
        """
        attributes = "cf_state cf_maybe_null cf_is_null constant_result".split()
        for attr_name in attributes:
            if attr_name in kwargs:
                continue
            try:
                value = getattr(node, attr_name)
            except AttributeError:
                pass
            else:
                kwargs[attr_name] = value
        return cls(node.pos, **kwargs)


class AtomicExprNode(ExprNode):
    #  Abstract base class for expression nodes which have
    #  no sub-expressions.

    subexprs = []

    # Override to optimize -- we know we have no children
    def generate_subexpr_evaluation_code(self, code):
        pass

    def generate_subexpr_disposal_code(self, code):
        pass


class PyConstNode(AtomicExprNode):
    #  Abstract base class for constant Python values.

    is_literal = 1
    type = py_object_type

    def is_simple(self):
        return 1

    def may_be_none(self):
        return False

    def analyse_types(self, env):
        return self

    def calculate_result_code(self):
        return self.value

    def generate_result_code(self, code):
        pass


class NoneNode(PyConstNode):
    #  The constant value None

    is_none = 1
    value = "Py_None"

    constant_result = None

    nogil_check = None

    def compile_time_value(self, denv):
        return None

    def may_be_none(self):
        return True

    def coerce_to(self, dst_type, env):
        if not (dst_type.is_pyobject or dst_type.is_memoryviewslice or dst_type.is_error):
            # Catch this error early and loudly.
            error(self.pos, "Cannot assign None to %s" % dst_type)
        return super(NoneNode, self).coerce_to(dst_type, env)
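
# --- Illustrative sketch (not part of the original module) ---
# The minimal shape of a PyConstNode subclass, following NoneNode above: a
# class-level C expression in 'value' plus the compile-time value hooks.
# This hypothetical node would represent the constant NotImplemented.
class _ExampleNotImplementedNode(PyConstNode):
    value = "Py_NotImplemented"   # CPython's C name for the singleton
    constant_result = NotImplemented

    def compile_time_value(self, denv):
        return NotImplemented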

class EllipsisNode(PyConstNode):
    #  '...' in a subscript list.

    value = "Py_Ellipsis"
    constant_result = Ellipsis

    def compile_time_value(self, denv):
        return Ellipsis


class ConstNode(AtomicExprNode):
    # Abstract base type for literal constant nodes.
    #
    # value     string      C code fragment

    is_literal = 1
    nogil_check = None

    def is_simple(self):
        return 1

    def nonlocally_immutable(self):
        return 1

    def may_be_none(self):
        return False

    def analyse_types(self, env):
        return self  # Types are held in class variables

    def check_const(self):
        return True

    def get_constant_c_result_code(self):
        return self.calculate_result_code()

    def calculate_result_code(self):
        return str(self.value)

    def generate_result_code(self, code):
        pass


class BoolNode(ConstNode):
    type = PyrexTypes.c_bint_type
    #  The constant value True or False

    def calculate_constant_result(self):
        self.constant_result = self.value

    def compile_time_value(self, denv):
        return self.value

    def calculate_result_code(self):
        if self.type.is_pyobject:
            return self.value and 'Py_True' or 'Py_False'
        else:
            return str(int(self.value))

    def coerce_to(self, dst_type, env):
        if dst_type == self.type:
            return self
        if dst_type is py_object_type and self.type is Builtin.bool_type:
            return self
        if dst_type.is_pyobject and self.type.is_int:
            return BoolNode(
                self.pos, value=self.value,
                constant_result=self.constant_result,
                type=Builtin.bool_type)
        if dst_type.is_int and self.type.is_pyobject:
            return BoolNode(
                self.pos, value=self.value,
                constant_result=self.constant_result,
                type=PyrexTypes.c_bint_type)
        return ConstNode.coerce_to(self, dst_type, env)


class NullNode(ConstNode):
    type = PyrexTypes.c_null_ptr_type
    value = "NULL"
    constant_result = 0

    def get_constant_c_result_code(self):
        return self.value


class CharNode(ConstNode):
    type = PyrexTypes.c_char_type

    def calculate_constant_result(self):
        self.constant_result = ord(self.value)

    def compile_time_value(self, denv):
        return ord(self.value)

    def calculate_result_code(self):
        return "'%s'" % StringEncoding.escape_char(self.value)


class IntNode(ConstNode):
    # unsigned     ""          or "U"
    # longness     ""          or "L" or "LL"
    # is_c_literal   True/False/None   creator considers this a C integer literal

    unsigned = ""
    longness = ""
    is_c_literal = None  # unknown

    def __init__(self, pos, **kwds):
        ExprNode.__init__(self, pos, **kwds)
        if 'type' not in kwds:
            self.type = self.find_suitable_type_for_value()

    def find_suitable_type_for_value(self):
        if self.constant_result is constant_value_not_set:
            try:
                self.calculate_constant_result()
            except ValueError:
                pass
        # we ignore 'is_c_literal = True' and instead map signed 32bit
        # integers as C long values
        if self.is_c_literal or \
                not self.has_constant_result() or \
                self.unsigned or self.longness == 'LL':
            # clearly a C literal
            rank = (self.longness == 'LL') and 2 or 1
            suitable_type = PyrexTypes.modifiers_and_name_to_type[not self.unsigned, rank, "int"]
            if self.type:
                suitable_type = PyrexTypes.widest_numeric_type(suitable_type, self.type)
        else:
            # C literal or Python literal - split at 32bit boundary
            if -2**31 <= self.constant_result < 2**31:
                if self.type and self.type.is_int:
                    suitable_type = self.type
                else:
                    suitable_type = PyrexTypes.c_long_type
            else:
                suitable_type = PyrexTypes.py_object_type
        return suitable_type

    def coerce_to(self, dst_type, env):
        if self.type is dst_type:
            return self
        elif dst_type.is_float:
            if self.has_constant_result():
                return FloatNode(self.pos, value='%d.0' % int(self.constant_result), type=dst_type,
                                 constant_result=float(self.constant_result))
            else:
                return FloatNode(self.pos, value=self.value, type=dst_type,
                                 constant_result=not_a_constant)
        if dst_type.is_numeric and not dst_type.is_complex:
            node = IntNode(self.pos, value=self.value, constant_result=self.constant_result,
                           type=dst_type, is_c_literal=True,
                           unsigned=self.unsigned, longness=self.longness)
            return node
        elif dst_type.is_pyobject:
            node = IntNode(self.pos, value=self.value, constant_result=self.constant_result,
                           type=PyrexTypes.py_object_type, is_c_literal=False,
                           unsigned=self.unsigned, longness=self.longness)
        else:
            # FIXME: not setting the type here to keep it working with
            # complex numbers. Should they be special cased?
            node = IntNode(self.pos, value=self.value, constant_result=self.constant_result,
                           unsigned=self.unsigned, longness=self.longness)
        # We still need to perform normal coerce_to processing on the
        # result, because we might be coercing to an extension type,
        # in which case a type test node will be needed.
        return ConstNode.coerce_to(node, dst_type, env)

    def coerce_to_boolean(self, env):
        return IntNode(
            self.pos, value=self.value,
            constant_result=self.constant_result,
            type=PyrexTypes.c_bint_type,
            unsigned=self.unsigned, longness=self.longness)

    def generate_evaluation_code(self, code):
        if self.type.is_pyobject:
            # pre-allocate a Python version of the number
            plain_integer_string = str(Utils.str_to_number(self.value))
            self.result_code = code.get_py_int(plain_integer_string, self.longness)
        else:
            self.result_code = self.get_constant_c_result_code()

    def get_constant_c_result_code(self):
        unsigned, longness = self.unsigned, self.longness
        literal = self.value_as_c_integer_string()
        if not (unsigned or longness) and self.type.is_int and literal[0] == '-' and literal[1] != '0':
            # negative decimal literal => guess longness from type to prevent wrap-around
            if self.type.rank >= PyrexTypes.c_longlong_type.rank:
                longness = 'LL'
            elif self.type.rank >= PyrexTypes.c_long_type.rank:
                longness = 'L'
        return literal + unsigned + longness

    def value_as_c_integer_string(self):
        value = self.value
        if len(value) <= 2:
            # too short to go wrong (and simplifies code below)
            return value
        neg_sign = ''
        if value[0] == '-':
            neg_sign = '-'
            value = value[1:]
        if value[0] == '0':
            literal_type = value[1]  # 0'o' - 0'b' - 0'x'
            # 0x123 hex literals and 0123 octal literals work nicely in C
            # but C-incompatible Py3 oct/bin notations need conversion
            if neg_sign and literal_type in 'oOxX0123456789' and value[2:].isdigit():
                # negative hex/octal literal => prevent C compiler from using
                # unsigned integer types by converting to decimal (see C standard 6.4.4.1)
                value = str(Utils.str_to_number(value))
            elif literal_type in 'oO':
                value = '0' + value[2:]  # '0o123' => '0123'
            elif literal_type in 'bB':
                value = str(int(value[2:], 2))
        elif value.isdigit() and not self.unsigned and not self.longness:
            if not neg_sign:
                # C compilers do not consider unsigned types for decimal literals,
                # but they do for hex (see C standard 6.4.4.1)
                value = '0x%X' % int(value)
        return neg_sign + value

    def calculate_result_code(self):
        return self.result_code

    def calculate_constant_result(self):
        self.constant_result = Utils.str_to_number(self.value)

    def compile_time_value(self, denv):
        return Utils.str_to_number(self.value)
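
# --- Illustrative sketch (not part of the original module) ---
# The literal rewrites performed by IntNode.value_as_c_integer_string()
# above for a plain literal (no 'U'/'L' suffix), mapping Python-source
# spellings to C-compatible ones:
_EXAMPLE_INT_LITERAL_REWRITES = {
    "0o123":  "0123",   # Py3 octal  -> C octal
    "0b101":  "5",      # Py3 binary -> decimal
    "123":    "0x7B",   # positive decimal -> hex, so C may pick unsigned types
    "-0x123": "-291",   # negative hex -> decimal, avoiding unsigned wrap-around
}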
FloatNode( self.pos, value=self.value, constant_result=self.constant_result, type=dst_type) return ConstNode.coerce_to(self, dst_type, env) def calculate_result_code(self): return self.result_code def get_constant_c_result_code(self): strval = self.value assert isinstance(strval, basestring) cmpval = repr(float(strval)) if cmpval == 'nan': return "(Py_HUGE_VAL * 0)" elif cmpval == 'inf': return "Py_HUGE_VAL" elif cmpval == '-inf': return "(-Py_HUGE_VAL)" else: return strval def generate_evaluation_code(self, code): c_value = self.get_constant_c_result_code() if self.type.is_pyobject: self.result_code = code.get_py_float(self.value, c_value) else: self.result_code = c_value def _analyse_name_as_type(name, pos, env): type = PyrexTypes.parse_basic_type(name) if type is not None: return type global_entry = env.global_scope().lookup(name) if global_entry and global_entry.type and ( global_entry.type.is_extension_type or global_entry.type.is_struct_or_union or global_entry.type.is_builtin_type or global_entry.type.is_cpp_class): return global_entry.type from .TreeFragment import TreeFragment with local_errors(ignore=True): pos = (pos[0], pos[1], pos[2]-7) try: declaration = TreeFragment(u"sizeof(%s)" % name, name=pos[0].filename, initial_pos=pos) except CompileError: pass else: sizeof_node = declaration.root.stats[0].expr if isinstance(sizeof_node, SizeofTypeNode): sizeof_node = sizeof_node.analyse_types(env) if isinstance(sizeof_node, SizeofTypeNode): return sizeof_node.arg_type return None class BytesNode(ConstNode): # A char* or bytes literal # # value BytesLiteral is_string_literal = True # start off as Python 'bytes' to support len() in O(1) type = bytes_type def calculate_constant_result(self): self.constant_result = self.value def as_sliced_node(self, start, stop, step=None): value = StringEncoding.bytes_literal(self.value[start:stop:step], self.value.encoding) return BytesNode(self.pos, value=value, constant_result=value) def compile_time_value(self, denv): return self.value.byteencode() def analyse_as_type(self, env): return _analyse_name_as_type(self.value.decode('ISO8859-1'), self.pos, env) def can_coerce_to_char_literal(self): return len(self.value) == 1 def coerce_to_boolean(self, env): # This is special because testing a C char* for truth directly # would yield the wrong result. 
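# (Illustrative sketch, not part of this module: a C char* pointing at
# an empty string is non-NULL and therefore "true" in C, whereas Python
# treats an empty bytes literal as false:
#
#     >>> bool(b"")
#     False
#
# so the truth value is folded from the literal itself instead of being
# tested on the C pointer.)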
bool_value = bool(self.value) return BoolNode(self.pos, value=bool_value, constant_result=bool_value) def coerce_to(self, dst_type, env): if self.type == dst_type: return self if dst_type.is_int: if not self.can_coerce_to_char_literal(): error(self.pos, "Only single-character string literals can be coerced into ints.") return self if dst_type.is_unicode_char: error(self.pos, "Bytes literals cannot coerce to Py_UNICODE/Py_UCS4, use a unicode literal instead.") return self return CharNode(self.pos, value=self.value, constant_result=ord(self.value)) node = BytesNode(self.pos, value=self.value, constant_result=self.constant_result) if dst_type.is_pyobject: if dst_type in (py_object_type, Builtin.bytes_type): node.type = Builtin.bytes_type else: self.check_for_coercion_error(dst_type, env, fail=True) return node elif dst_type in (PyrexTypes.c_char_ptr_type, PyrexTypes.c_const_char_ptr_type): node.type = dst_type return node elif dst_type in (PyrexTypes.c_uchar_ptr_type, PyrexTypes.c_const_uchar_ptr_type, PyrexTypes.c_void_ptr_type): node.type = (PyrexTypes.c_const_char_ptr_type if dst_type == PyrexTypes.c_const_uchar_ptr_type else PyrexTypes.c_char_ptr_type) return CastNode(node, dst_type) elif dst_type.assignable_from(PyrexTypes.c_char_ptr_type): # Exclude the case of passing a C string literal into a non-const C++ string. if not dst_type.is_cpp_class or dst_type.is_const: node.type = dst_type return node # We still need to perform normal coerce_to processing on the # result, because we might be coercing to an extension type, # in which case a type test node will be needed. return ConstNode.coerce_to(node, dst_type, env) def generate_evaluation_code(self, code): if self.type.is_pyobject: result = code.get_py_string_const(self.value) elif self.type.is_const: result = code.get_string_const(self.value) else: # not const => use plain C string literal and cast to mutable type literal = self.value.as_c_string_literal() # C++ may require a cast result = typecast(self.type, PyrexTypes.c_void_ptr_type, literal) self.result_code = result def get_constant_c_result_code(self): return None # FIXME def calculate_result_code(self): return self.result_code class UnicodeNode(ConstNode): # A Py_UNICODE* or unicode literal # # value EncodedString # bytes_value BytesLiteral the literal parsed as bytes string # ('-3' unicode literals only) is_string_literal = True bytes_value = None type = unicode_type def calculate_constant_result(self): self.constant_result = self.value def analyse_as_type(self, env): return _analyse_name_as_type(self.value, self.pos, env) def as_sliced_node(self, start, stop, step=None): if StringEncoding.string_contains_surrogates(self.value[:stop]): # this is unsafe as it may give different results # in different runtimes return None value = StringEncoding.EncodedString(self.value[start:stop:step]) value.encoding = self.value.encoding if self.bytes_value is not None: bytes_value = StringEncoding.bytes_literal( self.bytes_value[start:stop:step], self.bytes_value.encoding) else: bytes_value = None return UnicodeNode( self.pos, value=value, bytes_value=bytes_value, constant_result=value) def coerce_to(self, dst_type, env): if dst_type is self.type: pass elif dst_type.is_unicode_char: if not self.can_coerce_to_char_literal(): error(self.pos, "Only single-character Unicode string literals or " "surrogate pairs can be coerced into Py_UCS4/Py_UNICODE.") return self int_value = ord(self.value) return IntNode(self.pos, type=dst_type, value=str(int_value), constant_result=int_value) elif not 
dst_type.is_pyobject: if dst_type.is_string and self.bytes_value is not None: # special case: '-3' enforced unicode literal used in a # C char* context return BytesNode(self.pos, value=self.bytes_value ).coerce_to(dst_type, env) if dst_type.is_pyunicode_ptr: node = UnicodeNode(self.pos, value=self.value) node.type = dst_type return node error(self.pos, "Unicode literals do not support coercion to C types other " "than Py_UNICODE/Py_UCS4 (for characters) or Py_UNICODE* " "(for strings).") elif dst_type not in (py_object_type, Builtin.basestring_type): self.check_for_coercion_error(dst_type, env, fail=True) return self def can_coerce_to_char_literal(self): return len(self.value) == 1 ## or (len(self.value) == 2 ## and (0xD800 <= self.value[0] <= 0xDBFF) ## and (0xDC00 <= self.value[1] <= 0xDFFF)) def coerce_to_boolean(self, env): bool_value = bool(self.value) return BoolNode(self.pos, value=bool_value, constant_result=bool_value) def contains_surrogates(self): return StringEncoding.string_contains_surrogates(self.value) def generate_evaluation_code(self, code): if self.type.is_pyobject: if self.contains_surrogates(): # surrogates are not really portable and cannot be # decoded by the UTF-8 codec in Py3.3 self.result_code = code.get_py_const(py_object_type, 'ustring') data_cname = code.get_pyunicode_ptr_const(self.value) const_code = code.get_cached_constants_writer(self.result_code) if const_code is None: return # already initialised const_code.mark_pos(self.pos) const_code.putln( "%s = PyUnicode_FromUnicode(%s, (sizeof(%s) / sizeof(Py_UNICODE))-1); %s" % ( self.result_code, data_cname, data_cname, const_code.error_goto_if_null(self.result_code, self.pos))) const_code.put_error_if_neg( self.pos, "__Pyx_PyUnicode_READY(%s)" % self.result_code) else: self.result_code = code.get_py_string_const(self.value) else: self.result_code = code.get_pyunicode_ptr_const(self.value) def calculate_result_code(self): return self.result_code def compile_time_value(self, env): return self.value class StringNode(PyConstNode): # A Python str object, i.e. 
a byte string in Python 2.x and a # unicode string in Python 3.x # # value BytesLiteral (or EncodedString with ASCII content) # unicode_value EncodedString or None # is_identifier boolean type = str_type is_string_literal = True is_identifier = None unicode_value = None def calculate_constant_result(self): if self.unicode_value is not None: # only the Unicode value is portable across Py2/3 self.constant_result = self.unicode_value def analyse_as_type(self, env): return _analyse_name_as_type(self.unicode_value or self.value.decode('ISO8859-1'), self.pos, env) def as_sliced_node(self, start, stop, step=None): value = type(self.value)(self.value[start:stop:step]) value.encoding = self.value.encoding if self.unicode_value is not None: if StringEncoding.string_contains_surrogates(self.unicode_value[:stop]): # this is unsafe as it may give different results in different runtimes return None unicode_value = StringEncoding.EncodedString( self.unicode_value[start:stop:step]) else: unicode_value = None return StringNode( self.pos, value=value, unicode_value=unicode_value, constant_result=value, is_identifier=self.is_identifier) def coerce_to(self, dst_type, env): if dst_type is not py_object_type and not str_type.subtype_of(dst_type): # if dst_type is Builtin.bytes_type: # # special case: bytes = 'str literal' # return BytesNode(self.pos, value=self.value) if not dst_type.is_pyobject: return BytesNode(self.pos, value=self.value).coerce_to(dst_type, env) if dst_type is not Builtin.basestring_type: self.check_for_coercion_error(dst_type, env, fail=True) return self def can_coerce_to_char_literal(self): return not self.is_identifier and len(self.value) == 1 def generate_evaluation_code(self, code): self.result_code = code.get_py_string_const( self.value, identifier=self.is_identifier, is_str=True, unicode_value=self.unicode_value) def get_constant_c_result_code(self): return None def calculate_result_code(self): return self.result_code def compile_time_value(self, env): if self.value.is_unicode: return self.value if not IS_PYTHON3: # use plain str/bytes object in Py2 return self.value.byteencode() # in Py3, always return a Unicode string if self.unicode_value is not None: return self.unicode_value return self.value.decode('iso8859-1') class IdentifierStringNode(StringNode): # A special str value that represents an identifier (bytes in Py2, # unicode in Py3). is_identifier = True class ImagNode(AtomicExprNode): # Imaginary number literal # # value string imaginary part (float value) type = PyrexTypes.c_double_complex_type def calculate_constant_result(self): self.constant_result = complex(0.0, float(self.value)) def compile_time_value(self, denv): return complex(0.0, float(self.value)) def analyse_types(self, env): self.type.create_declaration_utility_code(env) return self def may_be_none(self): return False def coerce_to(self, dst_type, env): if self.type is dst_type: return self node = ImagNode(self.pos, value=self.value) if dst_type.is_pyobject: node.is_temp = 1 node.type = Builtin.complex_type # We still need to perform normal coerce_to processing on the # result, because we might be coercing to an extension type, # in which case a type test node will be needed. 
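# (Illustrative sketch, not part of this module: an imaginary literal
# such as `5j` carries only its imaginary part as a string, so its value
# is reconstructed as complex(0.0, float("5")) == 5j, matching the
# calculate_constant_result()/compile_time_value() pair above.)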
return AtomicExprNode.coerce_to(node, dst_type, env) gil_message = "Constructing complex number" def calculate_result_code(self): if self.type.is_pyobject: return self.result() else: return "%s(0, %r)" % (self.type.from_parts, float(self.value)) def generate_result_code(self, code): if self.type.is_pyobject: code.putln( "%s = PyComplex_FromDoubles(0.0, %r); %s" % ( self.result(), float(self.value), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) class NewExprNode(AtomicExprNode): # C++ new statement # # cppclass node c++ class to create type = None def infer_type(self, env): type = self.cppclass.analyse_as_type(env) if type is None or not type.is_cpp_class: error(self.pos, "new operator can only be applied to a C++ class") self.type = error_type return self.cpp_check(env) constructor = type.get_constructor(self.pos) self.class_type = type self.entry = constructor self.type = constructor.type return self.type def analyse_types(self, env): if self.type is None: self.infer_type(env) return self def may_be_none(self): return False def generate_result_code(self, code): pass def calculate_result_code(self): return "new " + self.class_type.empty_declaration_code() class NameNode(AtomicExprNode): # Reference to a local or global variable name. # # name string Python name of the variable # entry Entry Symbol table entry # type_entry Entry For extension type names, the original type entry # cf_is_null boolean Is uninitialized before this node # cf_maybe_null boolean Maybe uninitialized before this node # allow_null boolean Don't raise UnboundLocalError # nogil boolean Whether it is used in a nogil context is_name = True is_cython_module = False cython_attribute = None lhs_of_first_assignment = False # TODO: remove me is_used_as_rvalue = 0 entry = None type_entry = None cf_maybe_null = True cf_is_null = False allow_null = False nogil = False inferred_type = None def as_cython_attribute(self): return self.cython_attribute def type_dependencies(self, env): if self.entry is None: self.entry = env.lookup(self.name) if self.entry is not None and self.entry.type.is_unspecified: return (self,) else: return () def infer_type(self, env): if self.entry is None: self.entry = env.lookup(self.name) if self.entry is None or self.entry.type is unspecified_type: if self.inferred_type is not None: return self.inferred_type return py_object_type elif (self.entry.type.is_extension_type or self.entry.type.is_builtin_type) and \ self.name == self.entry.type.name: # Unfortunately the type attribute of type objects # is used for the pointer to the type they represent. return type_type elif self.entry.type.is_cfunction: if self.entry.scope.is_builtin_scope: # special case: optimised builtin functions must be treated as Python objects return py_object_type else: # special case: referring to a C function must return its pointer return PyrexTypes.CPtrType(self.entry.type) else: # If entry is inferred as pyobject it's safe to use local # NameNode's inferred_type. 
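# (For example, a variable declared `object` but inferred to hold an
# int is only narrowed below when control flow analysis has proved the
# value cannot overflow a C integer; see the `might_overflow` check.)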
if self.entry.type.is_pyobject and self.inferred_type: # Overflow may happen if integer if not (self.inferred_type.is_int and self.entry.might_overflow): return self.inferred_type return self.entry.type def compile_time_value(self, denv): try: return denv.lookup(self.name) except KeyError: error(self.pos, "Compile-time name '%s' not defined" % self.name) def get_constant_c_result_code(self): if not self.entry or self.entry.type.is_pyobject: return None return self.entry.cname def coerce_to(self, dst_type, env): # If coercing to a generic pyobject and this is a builtin # C function with a Python equivalent, manufacture a NameNode # referring to the Python builtin. #print "NameNode.coerce_to:", self.name, dst_type ### if dst_type is py_object_type: entry = self.entry if entry and entry.is_cfunction: var_entry = entry.as_variable if var_entry: if var_entry.is_builtin and var_entry.is_const: var_entry = env.declare_builtin(var_entry.name, self.pos) node = NameNode(self.pos, name = self.name) node.entry = var_entry node.analyse_rvalue_entry(env) return node return super(NameNode, self).coerce_to(dst_type, env) def declare_from_annotation(self, env, as_target=False): """Implements PEP 526 annotation typing in a fairly relaxed way. Annotations are ignored for global variables, Python class attributes and already declared variables. String literals are allowed and ignored. The ambiguous Python types 'int' and 'long' are ignored and the 'cython.int' form must be used instead. """ if not env.directives['annotation_typing']: return if env.is_module_scope or env.is_py_class_scope: # annotations never create global cdef names and Python classes don't support them anyway return name = self.name if self.entry or env.lookup_here(name) is not None: # already declared => ignore annotation return annotation = self.annotation if annotation.is_string_literal: # name: "description" => not a type, but still a declared variable or attribute atype = None else: _, atype = analyse_type_annotation(annotation, env) if atype is None: atype = unspecified_type if as_target and env.directives['infer_types'] != False else py_object_type self.entry = env.declare_var(name, atype, self.pos, is_cdef=not as_target) self.entry.annotation = annotation def analyse_as_module(self, env): # Try to interpret this as a reference to a cimported module. # Returns the module scope, or None. entry = self.entry if not entry: entry = env.lookup(self.name) if entry and entry.as_module: return entry.as_module return None def analyse_as_type(self, env): if self.cython_attribute: type = PyrexTypes.parse_basic_type(self.cython_attribute) else: type = PyrexTypes.parse_basic_type(self.name) if type: return type entry = self.entry if not entry: entry = env.lookup(self.name) if entry and entry.is_type: return entry.type else: return None def analyse_as_extension_type(self, env): # Try to interpret this as a reference to an extension type. # Returns the extension type, or None. entry = self.entry if not entry: entry = env.lookup(self.name) if entry and entry.is_type: if entry.type.is_extension_type or entry.type.is_builtin_type: return entry.type return None def analyse_target_declaration(self, env): if not self.entry: self.entry = env.lookup_here(self.name) if not self.entry and self.annotation is not None: # name : type = ... 
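# (Illustrative sketch, not part of this module: under the relaxed
# PEP 526 rules documented in declare_from_annotation() above,
#
#     import cython
#     x: cython.double = 0.0    # declares a C double
#     y: "free text" = None     # string annotation: ordinary variable
#     z: int = 0                # ambiguous Python 'int' is ignored;
#                               # spell it cython.int for a C int
#
# and annotations on globals or Python class attributes are skipped.)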
self.declare_from_annotation(env, as_target=True) if not self.entry: if env.directives['warn.undeclared']: warning(self.pos, "implicit declaration of '%s'" % self.name, 1) if env.directives['infer_types'] != False: type = unspecified_type else: type = py_object_type self.entry = env.declare_var(self.name, type, self.pos) if self.entry.is_declared_generic: self.result_ctype = py_object_type if self.entry.as_module: # cimported modules namespace can shadow actual variables self.entry.is_variable = 1 def analyse_types(self, env): self.initialized_check = env.directives['initializedcheck'] entry = self.entry if entry is None: entry = env.lookup(self.name) if not entry: entry = env.declare_builtin(self.name, self.pos) if entry and entry.is_builtin and entry.is_const: self.is_literal = True if not entry: self.type = PyrexTypes.error_type return self self.entry = entry entry.used = 1 if entry.type.is_buffer: from . import Buffer Buffer.used_buffer_aux_vars(entry) self.analyse_rvalue_entry(env) return self def analyse_target_types(self, env): self.analyse_entry(env, is_target=True) entry = self.entry if entry.is_cfunction and entry.as_variable: # FIXME: unify "is_overridable" flags below if (entry.is_overridable or entry.type.is_overridable) or not self.is_lvalue() and entry.fused_cfunction: # We need this for assigning to cpdef names and for the fused 'def' TreeFragment entry = self.entry = entry.as_variable self.type = entry.type if self.type.is_const: error(self.pos, "Assignment to const '%s'" % self.name) if self.type.is_reference: error(self.pos, "Assignment to reference '%s'" % self.name) if not self.is_lvalue(): error(self.pos, "Assignment to non-lvalue '%s'" % self.name) self.type = PyrexTypes.error_type entry.used = 1 if entry.type.is_buffer: from . import Buffer Buffer.used_buffer_aux_vars(entry) return self def analyse_rvalue_entry(self, env): #print "NameNode.analyse_rvalue_entry:", self.name ### #print "Entry:", self.entry.__dict__ ### self.analyse_entry(env) entry = self.entry if entry.is_declared_generic: self.result_ctype = py_object_type if entry.is_pyglobal or entry.is_builtin: if entry.is_builtin and entry.is_const: self.is_temp = 0 else: self.is_temp = 1 self.is_used_as_rvalue = 1 elif entry.type.is_memoryviewslice: self.is_temp = False self.is_used_as_rvalue = True self.use_managed_ref = True return self def nogil_check(self, env): self.nogil = True if self.is_used_as_rvalue: entry = self.entry if entry.is_builtin: if not entry.is_const: # cached builtins are ok self.gil_error() elif entry.is_pyglobal: self.gil_error() gil_message = "Accessing Python global or builtin" def analyse_entry(self, env, is_target=False): #print "NameNode.analyse_entry:", self.name ### self.check_identifier_kind() entry = self.entry type = entry.type if (not is_target and type.is_pyobject and self.inferred_type and self.inferred_type.is_builtin_type): # assume that type inference is smarter than the static entry type = self.inferred_type self.type = type def check_identifier_kind(self): # Check that this is an appropriate kind of name for use in an # expression. Also finds the variable entry associated with # an extension type. 
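# (For instance, the bare name of a C enum used in Python code needs a
# Python-level wrapper entry, and a cdef class name used as a value
# resolves to the variable entry holding the type object; both cases
# are handled just below.)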
entry = self.entry if entry.is_type and entry.type.is_extension_type: self.type_entry = entry if entry.is_type and entry.type.is_enum: py_entry = Symtab.Entry(self.name, None, py_object_type) py_entry.is_pyglobal = True py_entry.scope = self.entry.scope self.entry = py_entry elif not (entry.is_const or entry.is_variable or entry.is_builtin or entry.is_cfunction or entry.is_cpp_class): if self.entry.as_variable: self.entry = self.entry.as_variable elif not self.is_cython_module: error(self.pos, "'%s' is not a constant, variable or function identifier" % self.name) def is_cimported_module_without_shadow(self, env): if self.is_cython_module or self.cython_attribute: return False entry = self.entry or env.lookup(self.name) return entry.as_module and not entry.is_variable def is_simple(self): # If it's not a C variable, it'll be in a temp. return 1 def may_be_none(self): if self.cf_state and self.type and (self.type.is_pyobject or self.type.is_memoryviewslice): # gard against infinite recursion on self-dependencies if getattr(self, '_none_checking', False): # self-dependency - either this node receives a None # value from *another* node, or it can not reference # None at this point => safe to assume "not None" return False self._none_checking = True # evaluate control flow state to see if there were any # potential None values assigned to the node so far may_be_none = False for assignment in self.cf_state: if assignment.rhs.may_be_none(): may_be_none = True break del self._none_checking return may_be_none return super(NameNode, self).may_be_none() def nonlocally_immutable(self): if ExprNode.nonlocally_immutable(self): return True entry = self.entry if not entry or entry.in_closure: return False return entry.is_local or entry.is_arg or entry.is_builtin or entry.is_readonly def calculate_target_results(self, env): pass def check_const(self): entry = self.entry if entry is not None and not ( entry.is_const or entry.is_cfunction or entry.is_builtin or entry.type.is_const): self.not_const() return False return True def check_const_addr(self): entry = self.entry if not (entry.is_cglobal or entry.is_cfunction or entry.is_builtin): self.addr_not_const() return False return True def is_lvalue(self): return ( self.entry.is_variable and not self.entry.is_readonly ) or ( self.entry.is_cfunction and self.entry.is_overridable ) def is_addressable(self): return self.entry.is_variable and not self.type.is_memoryviewslice def is_ephemeral(self): # Name nodes are never ephemeral, even if the # result is in a temporary. 
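# (Unlike, for example, indexing results, which can be ephemeral when
# the container they come from is; compare _IndexingBaseNode.is_ephemeral
# further below.)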
return 0 def calculate_result_code(self): entry = self.entry if not entry: return "" # There was an error earlier return entry.cname def generate_result_code(self, code): assert hasattr(self, 'entry') entry = self.entry if entry is None: return # There was an error earlier if entry.utility_code: code.globalstate.use_utility_code(entry.utility_code) if entry.is_builtin and entry.is_const: return # Lookup already cached elif entry.is_pyclass_attr: assert entry.type.is_pyobject, "Python global or builtin not a Python object" interned_cname = code.intern_identifier(self.entry.name) if entry.is_builtin: namespace = Naming.builtins_cname else: # entry.is_pyglobal namespace = entry.scope.namespace_cname if not self.cf_is_null: code.putln( '%s = PyObject_GetItem(%s, %s);' % ( self.result(), namespace, interned_cname)) code.putln('if (unlikely(!%s)) {' % self.result()) code.putln('PyErr_Clear();') code.globalstate.use_utility_code( UtilityCode.load_cached("GetModuleGlobalName", "ObjectHandling.c")) code.putln( '__Pyx_GetModuleGlobalName(%s, %s);' % ( self.result(), interned_cname)) if not self.cf_is_null: code.putln("}") code.putln(code.error_goto_if_null(self.result(), self.pos)) code.put_gotref(self.py_result()) elif entry.is_builtin and not entry.scope.is_module_scope: # known builtin assert entry.type.is_pyobject, "Python global or builtin not a Python object" interned_cname = code.intern_identifier(self.entry.name) code.globalstate.use_utility_code( UtilityCode.load_cached("GetBuiltinName", "ObjectHandling.c")) code.putln( '%s = __Pyx_GetBuiltinName(%s); %s' % ( self.result(), interned_cname, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) elif entry.is_pyglobal or (entry.is_builtin and entry.scope.is_module_scope): # name in class body, global name or unknown builtin assert entry.type.is_pyobject, "Python global or builtin not a Python object" interned_cname = code.intern_identifier(self.entry.name) if entry.scope.is_module_scope: code.globalstate.use_utility_code( UtilityCode.load_cached("GetModuleGlobalName", "ObjectHandling.c")) code.putln( '__Pyx_GetModuleGlobalName(%s, %s); %s' % ( self.result(), interned_cname, code.error_goto_if_null(self.result(), self.pos))) else: # FIXME: is_pyglobal is also used for class namespace code.globalstate.use_utility_code( UtilityCode.load_cached("GetNameInClass", "ObjectHandling.c")) code.putln( '__Pyx_GetNameInClass(%s, %s, %s); %s' % ( self.result(), entry.scope.namespace_cname, interned_cname, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) elif entry.is_local or entry.in_closure or entry.from_closure or entry.type.is_memoryviewslice: # Raise UnboundLocalError for objects and memoryviewslices raise_unbound = ( (self.cf_maybe_null or self.cf_is_null) and not self.allow_null) null_code = entry.type.check_for_null_code(entry.cname) memslice_check = entry.type.is_memoryviewslice and self.initialized_check if null_code and raise_unbound and (entry.type.is_pyobject or memslice_check): code.put_error_if_unbound(self.pos, entry, self.in_nogil_context) def generate_assignment_code(self, rhs, code, overloaded_assignment=False, exception_check=None, exception_value=None): #print "NameNode.generate_assignment_code:", self.name ### entry = self.entry if entry is None: return # There was an error earlier if (self.entry.type.is_ptr and isinstance(rhs, ListNode) and not self.lhs_of_first_assignment and not rhs.in_module_scope): error(self.pos, "Literal list must be assigned to pointer at time 
of declaration") # is_pyglobal seems to be True for module level-globals only. # We use this to access class->tp_dict if necessary. if entry.is_pyglobal: assert entry.type.is_pyobject, "Python global or builtin not a Python object" interned_cname = code.intern_identifier(self.entry.name) namespace = self.entry.scope.namespace_cname if entry.is_member: # if the entry is a member we have to cheat: SetAttr does not work # on types, so we create a descriptor which is then added to tp_dict setter = 'PyDict_SetItem' namespace = '%s->tp_dict' % namespace elif entry.scope.is_module_scope: setter = 'PyDict_SetItem' namespace = Naming.moddict_cname elif entry.is_pyclass_attr: code.globalstate.use_utility_code(UtilityCode.load_cached("SetNameInClass", "ObjectHandling.c")) setter = '__Pyx_SetNameInClass' else: assert False, repr(entry) code.put_error_if_neg( self.pos, '%s(%s, %s, %s)' % ( setter, namespace, interned_cname, rhs.py_result())) if debug_disposal_code: print("NameNode.generate_assignment_code:") print("...generating disposal code for %s" % rhs) rhs.generate_disposal_code(code) rhs.free_temps(code) if entry.is_member: # in Py2.6+, we need to invalidate the method cache code.putln("PyType_Modified(%s);" % entry.scope.parent_type.typeptr_cname) else: if self.type.is_memoryviewslice: self.generate_acquire_memoryviewslice(rhs, code) elif self.type.is_buffer: # Generate code for doing the buffer release/acquisition. # This might raise an exception in which case the assignment (done # below) will not happen. # # The reason this is not in a typetest-like node is because the # variables that the acquired buffer info is stored to is allocated # per entry and coupled with it. self.generate_acquire_buffer(rhs, code) assigned = False if self.type.is_pyobject: #print "NameNode.generate_assignment_code: to", self.name ### #print "...from", rhs ### #print "...LHS type", self.type, "ctype", self.ctype() ### #print "...RHS type", rhs.type, "ctype", rhs.ctype() ### if self.use_managed_ref: rhs.make_owned_reference(code) is_external_ref = entry.is_cglobal or self.entry.in_closure or self.entry.from_closure if is_external_ref: if not self.cf_is_null: if self.cf_maybe_null: code.put_xgotref(self.py_result()) else: code.put_gotref(self.py_result()) assigned = True if entry.is_cglobal: code.put_decref_set( self.result(), rhs.result_as(self.ctype())) else: if not self.cf_is_null: if self.cf_maybe_null: code.put_xdecref_set( self.result(), rhs.result_as(self.ctype())) else: code.put_decref_set( self.result(), rhs.result_as(self.ctype())) else: assigned = False if is_external_ref: code.put_giveref(rhs.py_result()) if not self.type.is_memoryviewslice: if not assigned: if overloaded_assignment: result = rhs.result() if exception_check == '+': translate_cpp_exception( code, self.pos, '%s = %s;' % (self.result(), result), self.result() if self.type.is_pyobject else None, exception_value, self.in_nogil_context) else: code.putln('%s = %s;' % (self.result(), result)) else: result = rhs.result_as(self.ctype()) if is_pythran_expr(self.type): code.putln('new (&%s) decltype(%s){%s};' % (self.result(), self.result(), result)) elif result != self.result(): code.putln('%s = %s;' % (self.result(), result)) if debug_disposal_code: print("NameNode.generate_assignment_code:") print("...generating post-assignment code for %s" % rhs) rhs.generate_post_assignment_code(code) elif rhs.result_in_temp(): rhs.generate_post_assignment_code(code) rhs.free_temps(code) def generate_acquire_memoryviewslice(self, rhs, code): """ Slices, coercions 
from objects, return values etc are new references. We have a borrowed reference in case of dst = src """ from . import MemoryView MemoryView.put_acquire_memoryviewslice( lhs_cname=self.result(), lhs_type=self.type, lhs_pos=self.pos, rhs=rhs, code=code, have_gil=not self.in_nogil_context, first_assignment=self.cf_is_null) def generate_acquire_buffer(self, rhs, code): # rhstmp is only used in case the rhs is a complicated expression leading to # the object, to avoid repeating the same C expression for every reference # to the rhs. It does NOT hold a reference. pretty_rhs = isinstance(rhs, NameNode) or rhs.is_temp if pretty_rhs: rhstmp = rhs.result_as(self.ctype()) else: rhstmp = code.funcstate.allocate_temp(self.entry.type, manage_ref=False) code.putln('%s = %s;' % (rhstmp, rhs.result_as(self.ctype()))) from . import Buffer Buffer.put_assign_to_buffer(self.result(), rhstmp, self.entry, is_initialized=not self.lhs_of_first_assignment, pos=self.pos, code=code) if not pretty_rhs: code.putln("%s = 0;" % rhstmp) code.funcstate.release_temp(rhstmp) def generate_deletion_code(self, code, ignore_nonexisting=False): if self.entry is None: return # There was an error earlier elif self.entry.is_pyclass_attr: namespace = self.entry.scope.namespace_cname interned_cname = code.intern_identifier(self.entry.name) if ignore_nonexisting: key_error_code = 'PyErr_Clear(); else' else: # minor hack: fake a NameError on KeyError key_error_code = ( '{ PyErr_Clear(); PyErr_Format(PyExc_NameError, "name \'%%s\' is not defined", "%s"); }' % self.entry.name) code.putln( 'if (unlikely(PyObject_DelItem(%s, %s) < 0)) {' ' if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) %s' ' %s ' '}' % (namespace, interned_cname, key_error_code, code.error_goto(self.pos))) elif self.entry.is_pyglobal: code.globalstate.use_utility_code( UtilityCode.load_cached("PyObjectSetAttrStr", "ObjectHandling.c")) interned_cname = code.intern_identifier(self.entry.name) del_code = '__Pyx_PyObject_DelAttrStr(%s, %s)' % ( Naming.module_cname, interned_cname) if ignore_nonexisting: code.putln( 'if (unlikely(%s < 0)) {' ' if (likely(PyErr_ExceptionMatches(PyExc_AttributeError))) PyErr_Clear(); else %s ' '}' % (del_code, code.error_goto(self.pos))) else: code.put_error_if_neg(self.pos, del_code) elif self.entry.type.is_pyobject or self.entry.type.is_memoryviewslice: if not self.cf_is_null: if self.cf_maybe_null and not ignore_nonexisting: code.put_error_if_unbound(self.pos, self.entry) if self.entry.type.is_pyobject: if self.entry.in_closure: # generator if ignore_nonexisting and self.cf_maybe_null: code.put_xgotref(self.result()) else: code.put_gotref(self.result()) if ignore_nonexisting and self.cf_maybe_null: code.put_xdecref(self.result(), self.ctype()) else: code.put_decref(self.result(), self.ctype()) code.putln('%s = NULL;' % self.result()) else: code.put_xdecref_memoryviewslice(self.entry.cname, have_gil=not self.nogil) else: error(self.pos, "Deletion of C names not supported") def annotate(self, code): if hasattr(self, 'is_called') and self.is_called: pos = (self.pos[0], self.pos[1], self.pos[2] - len(self.name) - 1) if self.type.is_pyobject: style, text = 'py_call', 'python function (%s)' else: style, text = 'c_call', 'c function (%s)' code.annotate(pos, AnnotationItem(style, text % self.type, size=len(self.name))) class BackquoteNode(ExprNode): # `expr` # # arg ExprNode type = py_object_type subexprs = ['arg'] def analyse_types(self, env): self.arg = self.arg.analyse_types(env) self.arg = self.arg.coerce_to_pyobject(env) self.is_temp = 1 
return self gil_message = "Backquote expression" def calculate_constant_result(self): self.constant_result = repr(self.arg.constant_result) def generate_result_code(self, code): code.putln( "%s = PyObject_Repr(%s); %s" % ( self.result(), self.arg.py_result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) class ImportNode(ExprNode): # Used as part of import statement implementation. # Implements result = # __import__(module_name, globals(), None, name_list, level) # # module_name StringNode dotted name of module. Empty module # name means importing the parent package according # to level # name_list ListNode or None list of names to be imported # level int relative import level: # -1: attempt both relative import and absolute import; # 0: absolute import; # >0: the number of parent directories to search # relative to the current module. # None: decide the level according to language level and # directives type = py_object_type subexprs = ['module_name', 'name_list'] def analyse_types(self, env): if self.level is None: if (env.directives['py2_import'] or Future.absolute_import not in env.global_scope().context.future_directives): self.level = -1 else: self.level = 0 module_name = self.module_name.analyse_types(env) self.module_name = module_name.coerce_to_pyobject(env) if self.name_list: name_list = self.name_list.analyse_types(env) self.name_list = name_list.coerce_to_pyobject(env) self.is_temp = 1 return self gil_message = "Python import" def generate_result_code(self, code): if self.name_list: name_list_code = self.name_list.py_result() else: name_list_code = "0" code.globalstate.use_utility_code(UtilityCode.load_cached("Import", "ImportExport.c")) import_code = "__Pyx_Import(%s, %s, %d)" % ( self.module_name.py_result(), name_list_code, self.level) if (self.level <= 0 and self.module_name.is_string_literal and self.module_name.value in utility_code_for_imports): helper_func, code_name, code_file = utility_code_for_imports[self.module_name.value] code.globalstate.use_utility_code(UtilityCode.load_cached(code_name, code_file)) import_code = '%s(%s)' % (helper_func, import_code) code.putln("%s = %s; %s" % ( self.result(), import_code, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) class IteratorNode(ExprNode): # Used as part of for statement implementation. 
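# (Illustrative sketch, not part of this module: together with NextNode
# further below, this lowers a for statement to the equivalent of
#
#     it = iter(sequence)
#     while True:
#         try:
#             item = next(it)
#         except StopIteration:
#             break
#         <loop body>
#
# with fast paths when the sequence is known to be a list or tuple and
# special handling for C arrays and C++ ranges.)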
# # Implements result = iter(sequence) # # sequence ExprNode type = py_object_type iter_func_ptr = None counter_cname = None cpp_iterator_cname = None reversed = False # currently only used for list/tuple types (see Optimize.py) is_async = False subexprs = ['sequence'] def analyse_types(self, env): self.sequence = self.sequence.analyse_types(env) if (self.sequence.type.is_array or self.sequence.type.is_ptr) and \ not self.sequence.type.is_string: # C array iteration will be transformed later on self.type = self.sequence.type elif self.sequence.type.is_cpp_class: self.analyse_cpp_types(env) else: self.sequence = self.sequence.coerce_to_pyobject(env) if self.sequence.type in (list_type, tuple_type): self.sequence = self.sequence.as_none_safe_node("'NoneType' object is not iterable") self.is_temp = 1 return self gil_message = "Iterating over Python object" _func_iternext_type = PyrexTypes.CPtrType(PyrexTypes.CFuncType( PyrexTypes.py_object_type, [ PyrexTypes.CFuncTypeArg("it", PyrexTypes.py_object_type, None), ])) def type_dependencies(self, env): return self.sequence.type_dependencies(env) def infer_type(self, env): sequence_type = self.sequence.infer_type(env) if sequence_type.is_array or sequence_type.is_ptr: return sequence_type elif sequence_type.is_cpp_class: begin = sequence_type.scope.lookup("begin") if begin is not None: return begin.type.return_type elif sequence_type.is_pyobject: return sequence_type return py_object_type def analyse_cpp_types(self, env): sequence_type = self.sequence.type if sequence_type.is_ptr: sequence_type = sequence_type.base_type begin = sequence_type.scope.lookup("begin") end = sequence_type.scope.lookup("end") if (begin is None or not begin.type.is_cfunction or begin.type.args): error(self.pos, "missing begin() on %s" % self.sequence.type) self.type = error_type return if (end is None or not end.type.is_cfunction or end.type.args): error(self.pos, "missing end() on %s" % self.sequence.type) self.type = error_type return iter_type = begin.type.return_type if iter_type.is_cpp_class: if env.lookup_operator_for_types( self.pos, "!=", [iter_type, end.type.return_type]) is None: error(self.pos, "missing operator!= on result of begin() on %s" % self.sequence.type) self.type = error_type return if env.lookup_operator_for_types(self.pos, '++', [iter_type]) is None: error(self.pos, "missing operator++ on result of begin() on %s" % self.sequence.type) self.type = error_type return if env.lookup_operator_for_types(self.pos, '*', [iter_type]) is None: error(self.pos, "missing operator* on result of begin() on %s" % self.sequence.type) self.type = error_type return self.type = iter_type elif iter_type.is_ptr: if not (iter_type == end.type.return_type): error(self.pos, "incompatible types for begin() and end()") self.type = iter_type else: error(self.pos, "result type of begin() on %s must be a C++ class or pointer" % self.sequence.type) self.type = error_type return def generate_result_code(self, code): sequence_type = self.sequence.type if sequence_type.is_cpp_class: if self.sequence.is_name: # safe: C++ won't allow you to reassign to class references begin_func = "%s.begin" % self.sequence.result() else: sequence_type = PyrexTypes.c_ptr_type(sequence_type) self.cpp_iterator_cname = code.funcstate.allocate_temp(sequence_type, manage_ref=False) code.putln("%s = &%s;" % (self.cpp_iterator_cname, self.sequence.result())) begin_func = "%s->begin" % self.cpp_iterator_cname # TODO: Limit scope. 
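# (Illustrative sketch, not part of this module: analyse_cpp_types()
# above accepts any C++ type exposing the usual range interface, e.g.
# a hypothetical declaration along the lines of
#
#     cdef extern from "ints.h":
#         cppclass IntRange:
#             Iter begin()
#             Iter end()    # Iter must support operator!=, ++ and *
#
# which is precisely the contract probed by the
# lookup_operator_for_types() calls.)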
code.putln("%s = %s();" % (self.result(), begin_func)) return if sequence_type.is_array or sequence_type.is_ptr: raise InternalError("for in carray slice not transformed") is_builtin_sequence = sequence_type in (list_type, tuple_type) if not is_builtin_sequence: # reversed() not currently optimised (see Optimize.py) assert not self.reversed, "internal error: reversed() only implemented for list/tuple objects" self.may_be_a_sequence = not sequence_type.is_builtin_type if self.may_be_a_sequence: code.putln( "if (likely(PyList_CheckExact(%s)) || PyTuple_CheckExact(%s)) {" % ( self.sequence.py_result(), self.sequence.py_result())) if is_builtin_sequence or self.may_be_a_sequence: self.counter_cname = code.funcstate.allocate_temp( PyrexTypes.c_py_ssize_t_type, manage_ref=False) if self.reversed: if sequence_type is list_type: init_value = 'PyList_GET_SIZE(%s) - 1' % self.result() else: init_value = 'PyTuple_GET_SIZE(%s) - 1' % self.result() else: init_value = '0' code.putln("%s = %s; __Pyx_INCREF(%s); %s = %s;" % ( self.result(), self.sequence.py_result(), self.result(), self.counter_cname, init_value)) if not is_builtin_sequence: self.iter_func_ptr = code.funcstate.allocate_temp(self._func_iternext_type, manage_ref=False) if self.may_be_a_sequence: code.putln("%s = NULL;" % self.iter_func_ptr) code.putln("} else {") code.put("%s = -1; " % self.counter_cname) code.putln("%s = PyObject_GetIter(%s); %s" % ( self.result(), self.sequence.py_result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) # PyObject_GetIter() fails if "tp_iternext" is not set, but the check below # makes it visible to the C compiler that the pointer really isn't NULL, so that # it can distinguish between the special cases and the generic case code.putln("%s = Py_TYPE(%s)->tp_iternext; %s" % ( self.iter_func_ptr, self.py_result(), code.error_goto_if_null(self.iter_func_ptr, self.pos))) if self.may_be_a_sequence: code.putln("}") def generate_next_sequence_item(self, test_name, result_name, code): assert self.counter_cname, "internal error: counter_cname temp not prepared" final_size = 'Py%s_GET_SIZE(%s)' % (test_name, self.py_result()) if self.sequence.is_sequence_constructor: item_count = len(self.sequence.args) if self.sequence.mult_factor is None: final_size = item_count elif isinstance(self.sequence.mult_factor.constant_result, _py_int_types): final_size = item_count * self.sequence.mult_factor.constant_result code.putln("if (%s >= %s) break;" % (self.counter_cname, final_size)) if self.reversed: inc_dec = '--' else: inc_dec = '++' code.putln("#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS") code.putln( "%s = Py%s_GET_ITEM(%s, %s); __Pyx_INCREF(%s); %s%s; %s" % ( result_name, test_name, self.py_result(), self.counter_cname, result_name, self.counter_cname, inc_dec, # use the error label to avoid C compiler warnings if we only use it below code.error_goto_if_neg('0', self.pos) )) code.putln("#else") code.putln( "%s = PySequence_ITEM(%s, %s); %s%s; %s" % ( result_name, self.py_result(), self.counter_cname, self.counter_cname, inc_dec, code.error_goto_if_null(result_name, self.pos))) code.put_gotref(result_name) code.putln("#endif") def generate_iter_next_result_code(self, result_name, code): sequence_type = self.sequence.type if self.reversed: code.putln("if (%s < 0) break;" % self.counter_cname) if sequence_type.is_cpp_class: if self.cpp_iterator_cname: end_func = "%s->end" % self.cpp_iterator_cname else: end_func = "%s.end" % self.sequence.result() # TODO: Cache end() 
call? code.putln("if (!(%s != %s())) break;" % ( self.result(), end_func)) code.putln("%s = *%s;" % ( result_name, self.result())) code.putln("++%s;" % self.result()) return elif sequence_type is list_type: self.generate_next_sequence_item('List', result_name, code) return elif sequence_type is tuple_type: self.generate_next_sequence_item('Tuple', result_name, code) return if self.may_be_a_sequence: code.putln("if (likely(!%s)) {" % self.iter_func_ptr) code.putln("if (likely(PyList_CheckExact(%s))) {" % self.py_result()) self.generate_next_sequence_item('List', result_name, code) code.putln("} else {") self.generate_next_sequence_item('Tuple', result_name, code) code.putln("}") code.put("} else ") code.putln("{") code.putln( "%s = %s(%s);" % ( result_name, self.iter_func_ptr, self.py_result())) code.putln("if (unlikely(!%s)) {" % result_name) code.putln("PyObject* exc_type = PyErr_Occurred();") code.putln("if (exc_type) {") code.putln("if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();") code.putln("else %s" % code.error_goto(self.pos)) code.putln("}") code.putln("break;") code.putln("}") code.put_gotref(result_name) code.putln("}") def free_temps(self, code): if self.counter_cname: code.funcstate.release_temp(self.counter_cname) if self.iter_func_ptr: code.funcstate.release_temp(self.iter_func_ptr) self.iter_func_ptr = None if self.cpp_iterator_cname: code.funcstate.release_temp(self.cpp_iterator_cname) ExprNode.free_temps(self, code) class NextNode(AtomicExprNode): # Used as part of for statement implementation. # Implements result = next(iterator) # Created during analyse_types phase. # The iterator is not owned by this node. # # iterator IteratorNode def __init__(self, iterator): AtomicExprNode.__init__(self, iterator.pos) self.iterator = iterator def nogil_check(self, env): # ignore - errors (if any) are already handled by IteratorNode pass def type_dependencies(self, env): return self.iterator.type_dependencies(env) def infer_type(self, env, iterator_type=None): if iterator_type is None: iterator_type = self.iterator.infer_type(env) if iterator_type.is_ptr or iterator_type.is_array: return iterator_type.base_type elif iterator_type.is_cpp_class: item_type = env.lookup_operator_for_types(self.pos, "*", [iterator_type]).type.return_type if item_type.is_reference: item_type = item_type.ref_base_type if item_type.is_const: item_type = item_type.const_base_type return item_type else: # Avoid duplication of complicated logic. fake_index_node = IndexNode( self.pos, base=self.iterator.sequence, index=IntNode(self.pos, value='PY_SSIZE_T_MAX', type=PyrexTypes.c_py_ssize_t_type)) return fake_index_node.infer_type(env) def analyse_types(self, env): self.type = self.infer_type(env, self.iterator.type) self.is_temp = 1 return self def generate_result_code(self, code): self.iterator.generate_iter_next_result_code(self.result(), code) class AsyncIteratorNode(ExprNode): # Used as part of 'async for' statement implementation. 
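# (Illustrative sketch, not part of this module: together with
# AsyncNextNode further below, this lowers `async for` roughly to
#
#     it = seq.__aiter__()
#     while True:
#         try:
#             item = await it.__anext__()
#         except StopAsyncIteration:
#             break
#         <loop body>
# )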
# # Implements result = sequence.__aiter__() # # sequence ExprNode subexprs = ['sequence'] is_async = True type = py_object_type is_temp = 1 def infer_type(self, env): return py_object_type def analyse_types(self, env): self.sequence = self.sequence.analyse_types(env) if not self.sequence.type.is_pyobject: error(self.pos, "async for loops not allowed on C/C++ types") self.sequence = self.sequence.coerce_to_pyobject(env) return self def generate_result_code(self, code): code.globalstate.use_utility_code(UtilityCode.load_cached("AsyncIter", "Coroutine.c")) code.putln("%s = __Pyx_Coroutine_GetAsyncIter(%s); %s" % ( self.result(), self.sequence.py_result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.result()) class AsyncNextNode(AtomicExprNode): # Used as part of 'async for' statement implementation. # Implements result = iterator.__anext__() # Created during analyse_types phase. # The iterator is not owned by this node. # # iterator IteratorNode type = py_object_type is_temp = 1 def __init__(self, iterator): AtomicExprNode.__init__(self, iterator.pos) self.iterator = iterator def infer_type(self, env): return py_object_type def analyse_types(self, env): return self def generate_result_code(self, code): code.globalstate.use_utility_code(UtilityCode.load_cached("AsyncIter", "Coroutine.c")) code.putln("%s = __Pyx_Coroutine_AsyncIterNext(%s); %s" % ( self.result(), self.iterator.py_result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.result()) class WithExitCallNode(ExprNode): # The __exit__() call of a 'with' statement. Used in both the # except and finally clauses. # with_stat WithStatNode the surrounding 'with' statement # args TupleNode or ResultStatNode the exception info tuple # await_expr AwaitExprNode the await expression of an 'async with' statement subexprs = ['args', 'await_expr'] test_if_run = True await_expr = None def analyse_types(self, env): self.args = self.args.analyse_types(env) if self.await_expr: self.await_expr = self.await_expr.analyse_types(env) self.type = PyrexTypes.c_bint_type self.is_temp = True return self def generate_evaluation_code(self, code): if self.test_if_run: # call only if it was not already called (and decref-cleared) code.putln("if (%s) {" % self.with_stat.exit_var) self.args.generate_evaluation_code(code) result_var = code.funcstate.allocate_temp(py_object_type, manage_ref=False) code.mark_pos(self.pos) code.globalstate.use_utility_code(UtilityCode.load_cached( "PyObjectCall", "ObjectHandling.c")) code.putln("%s = __Pyx_PyObject_Call(%s, %s, NULL);" % ( result_var, self.with_stat.exit_var, self.args.result())) code.put_decref_clear(self.with_stat.exit_var, type=py_object_type) self.args.generate_disposal_code(code) self.args.free_temps(code) code.putln(code.error_goto_if_null(result_var, self.pos)) code.put_gotref(result_var) if self.await_expr: # FIXME: result_var temp currently leaks into the closure self.await_expr.generate_evaluation_code(code, source_cname=result_var, decref_source=True) code.putln("%s = %s;" % (result_var, self.await_expr.py_result())) self.await_expr.generate_post_assignment_code(code) self.await_expr.free_temps(code) if self.result_is_used: self.allocate_temp_result(code) code.putln("%s = __Pyx_PyObject_IsTrue(%s);" % (self.result(), result_var)) code.put_decref_clear(result_var, type=py_object_type) if self.result_is_used: code.put_error_if_neg(self.pos, self.result()) code.funcstate.release_temp(result_var) if self.test_if_run: code.putln("}") class 
ExcValueNode(AtomicExprNode): # Node created during analyse_types phase # of an ExceptClauseNode to fetch the current # exception value. type = py_object_type def __init__(self, pos): ExprNode.__init__(self, pos) def set_var(self, var): self.var = var def calculate_result_code(self): return self.var def generate_result_code(self, code): pass def analyse_types(self, env): return self class TempNode(ExprNode): # Node created during analyse_types phase # of some nodes to hold a temporary value. # # Note: One must call "allocate" and "release" on # the node during code generation to get/release the temp. # This is because the temp result is often used outside of # the regular cycle. subexprs = [] def __init__(self, pos, type, env=None): ExprNode.__init__(self, pos) self.type = type if type.is_pyobject: self.result_ctype = py_object_type self.is_temp = 1 def analyse_types(self, env): return self def analyse_target_declaration(self, env): pass def generate_result_code(self, code): pass def allocate(self, code): self.temp_cname = code.funcstate.allocate_temp(self.type, manage_ref=True) def release(self, code): code.funcstate.release_temp(self.temp_cname) self.temp_cname = None def result(self): try: return self.temp_cname except: assert False, "Remember to call allocate/release on TempNode" raise # Do not participate in normal temp alloc/dealloc: def allocate_temp_result(self, code): pass def release_temp_result(self, code): pass class PyTempNode(TempNode): # TempNode holding a Python value. def __init__(self, pos, env): TempNode.__init__(self, pos, PyrexTypes.py_object_type, env) class RawCNameExprNode(ExprNode): subexprs = [] def __init__(self, pos, type=None, cname=None): ExprNode.__init__(self, pos, type=type) if cname is not None: self.cname = cname def analyse_types(self, env): return self def set_cname(self, cname): self.cname = cname def result(self): return self.cname def generate_result_code(self, code): pass #------------------------------------------------------------------- # # F-strings # #------------------------------------------------------------------- class JoinedStrNode(ExprNode): # F-strings # # values [UnicodeNode|FormattedValueNode] Substrings of the f-string # type = unicode_type is_temp = True subexprs = ['values'] def analyse_types(self, env): self.values = [v.analyse_types(env).coerce_to_pyobject(env) for v in self.values] return self def may_be_none(self): # PyUnicode_Join() always returns a Unicode string or raises an exception return False def generate_evaluation_code(self, code): code.mark_pos(self.pos) num_items = len(self.values) list_var = code.funcstate.allocate_temp(py_object_type, manage_ref=True) ulength_var = code.funcstate.allocate_temp(PyrexTypes.c_py_ssize_t_type, manage_ref=False) max_char_var = code.funcstate.allocate_temp(PyrexTypes.c_py_ucs4_type, manage_ref=False) code.putln('%s = PyTuple_New(%s); %s' % ( list_var, num_items, code.error_goto_if_null(list_var, self.pos))) code.put_gotref(list_var) code.putln("%s = 0;" % ulength_var) code.putln("%s = 127;" % max_char_var) # at least ASCII character range for i, node in enumerate(self.values): node.generate_evaluation_code(code) node.make_owned_reference(code) ulength = "__Pyx_PyUnicode_GET_LENGTH(%s)" % node.py_result() max_char_value = "__Pyx_PyUnicode_MAX_CHAR_VALUE(%s)" % node.py_result() is_ascii = False if isinstance(node, UnicodeNode): try: # most strings will be ASCII or at least Latin-1 node.value.encode('iso8859-1') max_char_value = '255' node.value.encode('us-ascii') is_ascii = True except 
UnicodeEncodeError: if max_char_value != '255': # not ISO8859-1 => check BMP limit max_char = max(map(ord, node.value)) if max_char < 0xD800: # BMP-only, no surrogate pairs used max_char_value = '65535' ulength = str(len(node.value)) elif max_char >= 65536: # cleary outside of BMP, and not on a 16-bit Unicode system max_char_value = '1114111' ulength = str(len(node.value)) else: # not really worth implementing a check for surrogate pairs here # drawback: C code can differ when generating on Py2 with 2-byte Unicode pass else: ulength = str(len(node.value)) elif isinstance(node, FormattedValueNode) and node.value.type.is_numeric: is_ascii = True # formatted C numbers are always ASCII if not is_ascii: code.putln("%s = (%s > %s) ? %s : %s;" % ( max_char_var, max_char_value, max_char_var, max_char_value, max_char_var)) code.putln("%s += %s;" % (ulength_var, ulength)) code.put_giveref(node.py_result()) code.putln('PyTuple_SET_ITEM(%s, %s, %s);' % (list_var, i, node.py_result())) node.generate_post_assignment_code(code) node.free_temps(code) code.mark_pos(self.pos) self.allocate_temp_result(code) code.globalstate.use_utility_code(UtilityCode.load_cached("JoinPyUnicode", "StringTools.c")) code.putln('%s = __Pyx_PyUnicode_Join(%s, %d, %s, %s); %s' % ( self.result(), list_var, num_items, ulength_var, max_char_var, code.error_goto_if_null(self.py_result(), self.pos))) code.put_gotref(self.py_result()) code.put_decref_clear(list_var, py_object_type) code.funcstate.release_temp(list_var) code.funcstate.release_temp(ulength_var) code.funcstate.release_temp(max_char_var) class FormattedValueNode(ExprNode): # {}-delimited portions of an f-string # # value ExprNode The expression itself # conversion_char str or None Type conversion (!s, !r, !a, or none) # format_spec JoinedStrNode or None Format string passed to __format__ # c_format_spec str or None If not None, formatting can be done at the C level subexprs = ['value', 'format_spec'] type = unicode_type is_temp = True c_format_spec = None find_conversion_func = { 's': 'PyObject_Unicode', 'r': 'PyObject_Repr', 'a': 'PyObject_ASCII', # NOTE: mapped to PyObject_Repr() in Py2 }.get def may_be_none(self): # PyObject_Format() always returns a Unicode string or raises an exception return False def analyse_types(self, env): self.value = self.value.analyse_types(env) if not self.format_spec or self.format_spec.is_string_literal: c_format_spec = self.format_spec.value if self.format_spec else self.value.type.default_format_spec if self.value.type.can_coerce_to_pystring(env, format_spec=c_format_spec): self.c_format_spec = c_format_spec if self.format_spec: self.format_spec = self.format_spec.analyse_types(env).coerce_to_pyobject(env) if self.c_format_spec is None: self.value = self.value.coerce_to_pyobject(env) if not self.format_spec and (not self.conversion_char or self.conversion_char == 's'): if self.value.type is unicode_type and not self.value.may_be_none(): # value is definitely a unicode string and we don't format it any special return self.value return self def generate_result_code(self, code): if self.c_format_spec is not None and not self.value.type.is_pyobject: convert_func_call = self.value.type.convert_to_pystring( self.value.result(), code, self.c_format_spec) code.putln("%s = %s; %s" % ( self.result(), convert_func_call, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) return value_result = self.value.py_result() value_is_unicode = self.value.type is unicode_type and not self.value.may_be_none() if 
self.format_spec: format_func = '__Pyx_PyObject_Format' format_spec = self.format_spec.py_result() else: # common case: expect simple Unicode pass-through if no format spec format_func = '__Pyx_PyObject_FormatSimple' # passing a Unicode format string in Py2 forces PyObject_Format() to also return a Unicode string format_spec = Naming.empty_unicode conversion_char = self.conversion_char if conversion_char == 's' and value_is_unicode: # no need to pipe unicode strings through str() conversion_char = None if conversion_char: fn = self.find_conversion_func(conversion_char) assert fn is not None, "invalid conversion character found: '%s'" % conversion_char value_result = '%s(%s)' % (fn, value_result) code.globalstate.use_utility_code( UtilityCode.load_cached("PyObjectFormatAndDecref", "StringTools.c")) format_func += 'AndDecref' elif self.format_spec: code.globalstate.use_utility_code( UtilityCode.load_cached("PyObjectFormat", "StringTools.c")) else: code.globalstate.use_utility_code( UtilityCode.load_cached("PyObjectFormatSimple", "StringTools.c")) code.putln("%s = %s(%s, %s); %s" % ( self.result(), format_func, value_result, format_spec, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) #------------------------------------------------------------------- # # Parallel nodes (cython.parallel.thread(savailable|id)) # #------------------------------------------------------------------- class ParallelThreadsAvailableNode(AtomicExprNode): """ Note: this is disabled and not a valid directive at this moment Implements cython.parallel.threadsavailable(). If we are called from the sequential part of the application, we need to call omp_get_max_threads(), and in the parallel part we can just call omp_get_num_threads() """ type = PyrexTypes.c_int_type def analyse_types(self, env): self.is_temp = True # env.add_include_file("omp.h") return self def generate_result_code(self, code): code.putln("#ifdef _OPENMP") code.putln("if (omp_in_parallel()) %s = omp_get_max_threads();" % self.temp_code) code.putln("else %s = omp_get_num_threads();" % self.temp_code) code.putln("#else") code.putln("%s = 1;" % self.temp_code) code.putln("#endif") def result(self): return self.temp_code class ParallelThreadIdNode(AtomicExprNode): #, Nodes.ParallelNode): """ Implements cython.parallel.threadid() """ type = PyrexTypes.c_int_type def analyse_types(self, env): self.is_temp = True # env.add_include_file("omp.h") return self def generate_result_code(self, code): code.putln("#ifdef _OPENMP") code.putln("%s = omp_get_thread_num();" % self.temp_code) code.putln("#else") code.putln("%s = 0;" % self.temp_code) code.putln("#endif") def result(self): return self.temp_code #------------------------------------------------------------------- # # Trailer nodes # #------------------------------------------------------------------- class _IndexingBaseNode(ExprNode): # Base class for indexing nodes. # # base ExprNode the value being indexed def is_ephemeral(self): # in most cases, indexing will return a safe reference to an object in a container, # so we consider the result safe if the base object is return self.base.is_ephemeral() or self.base.type in ( basestring_type, str_type, bytes_type, bytearray_type, unicode_type) def check_const_addr(self): return self.base.check_const_addr() and self.index.check_const() def is_lvalue(self): # NOTE: references currently have both is_reference and is_ptr # set. Since pointers and references have different lvalue # rules, we must be careful to separate the two. 
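# Illustrative sketch (added commentary, not from the original sources): for
# C++ containers, "v[0] = x" on a "vector[int] v" is an lvalue because
# operator[] returns "int&"; but when the reference's base type is a
# fixed-size array (roughly "int (&)[4]" in C++ terms), the array as a whole
# cannot be reassigned, which is what the first check below rejects. Plain
# pointer indexing such as "p[i] = x" is always assignable.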
if self.type.is_reference: if self.type.ref_base_type.is_array: # fixed-sized arrays aren't l-values return False elif self.type.is_ptr: # non-const pointers can always be reassigned return True # Just about everything else returned by the index operator # can be an lvalue. return True class IndexNode(_IndexingBaseNode): # Sequence indexing. # # base ExprNode # index ExprNode # type_indices [PyrexType] # # is_fused_index boolean Whether the index is used to specialize a # c(p)def function subexprs = ['base', 'index'] type_indices = None is_subscript = True is_fused_index = False def calculate_constant_result(self): self.constant_result = self.base.constant_result[self.index.constant_result] def compile_time_value(self, denv): base = self.base.compile_time_value(denv) index = self.index.compile_time_value(denv) try: return base[index] except Exception as e: self.compile_time_value_error(e) def is_simple(self): base = self.base return (base.is_simple() and self.index.is_simple() and base.type and (base.type.is_ptr or base.type.is_array)) def may_be_none(self): base_type = self.base.type if base_type: if base_type.is_string: return False if isinstance(self.index, SliceNode): # slicing! if base_type in (bytes_type, bytearray_type, str_type, unicode_type, basestring_type, list_type, tuple_type): return False return ExprNode.may_be_none(self) def analyse_target_declaration(self, env): pass def analyse_as_type(self, env): base_type = self.base.analyse_as_type(env) if base_type and not base_type.is_pyobject: if base_type.is_cpp_class: if isinstance(self.index, TupleNode): template_values = self.index.args else: template_values = [self.index] type_node = Nodes.TemplatedTypeNode( pos=self.pos, positional_args=template_values, keyword_args=None) return type_node.analyse(env, base_type=base_type) elif self.index.is_slice or self.index.is_sequence_constructor: # memory view from . import MemoryView env.use_utility_code(MemoryView.view_utility_code) axes = [self.index] if self.index.is_slice else list(self.index.args) return PyrexTypes.MemoryViewSliceType(base_type, MemoryView.get_axes_specs(env, axes)) else: # C array index = self.index.compile_time_value(env) if index is not None: try: index = int(index) except (ValueError, TypeError): pass else: return PyrexTypes.CArrayType(base_type, index) error(self.pos, "Array size must be a compile time constant") return None def type_dependencies(self, env): return self.base.type_dependencies(env) + self.index.type_dependencies(env) def infer_type(self, env): base_type = self.base.infer_type(env) if self.index.is_slice: # slicing! if base_type.is_string: # sliced C strings must coerce to Python return bytes_type elif base_type.is_pyunicode_ptr: # sliced Py_UNICODE* strings must coerce to Python return unicode_type elif base_type in (unicode_type, bytes_type, str_type, bytearray_type, list_type, tuple_type): # slicing these returns the same type return base_type else: # TODO: Handle buffers (hopefully without too much redundancy). return py_object_type index_type = self.index.infer_type(env) if index_type and index_type.is_int or isinstance(self.index, IntNode): # indexing! if base_type is unicode_type: # Py_UCS4 will automatically coerce to a unicode string # if required, so this is safe. We only infer Py_UCS4 # when the index is a C integer type. 
Otherwise, we may # need to use normal Python item access, in which case # it's faster to return the one-char unicode string than # to receive it, throw it away, and potentially rebuild it # on a subsequent PyObject coercion. return PyrexTypes.c_py_ucs4_type elif base_type is str_type: # always returns str - Py2: bytes, Py3: unicode return base_type elif base_type is bytearray_type: return PyrexTypes.c_uchar_type elif isinstance(self.base, BytesNode): #if env.global_scope().context.language_level >= 3: # # inferring 'char' can be made to work in Python 3 mode # return PyrexTypes.c_char_type # Py2/3 return different types on indexing bytes objects return py_object_type elif base_type in (tuple_type, list_type): # if base is a literal, take a look at its values item_type = infer_sequence_item_type( env, self.base, self.index, seq_type=base_type) if item_type is not None: return item_type elif base_type.is_ptr or base_type.is_array: return base_type.base_type elif base_type.is_ctuple and isinstance(self.index, IntNode): if self.index.has_constant_result(): index = self.index.constant_result if index < 0: index += base_type.size if 0 <= index < base_type.size: return base_type.components[index] if base_type.is_cpp_class: class FakeOperand: def __init__(self, **kwds): self.__dict__.update(kwds) operands = [ FakeOperand(pos=self.pos, type=base_type), FakeOperand(pos=self.pos, type=index_type), ] index_func = env.lookup_operator('[]', operands) if index_func is not None: return index_func.type.return_type if is_pythran_expr(base_type) and is_pythran_expr(index_type): index_with_type = (self.index, index_type) return PythranExpr(pythran_indexing_type(base_type, [index_with_type])) # may be slicing or indexing, we don't know if base_type in (unicode_type, str_type): # these types always return their own type on Python indexing/slicing return base_type else: # TODO: Handle buffers (hopefully without too much redundancy). return py_object_type def analyse_types(self, env): return self.analyse_base_and_index_types(env, getting=True) def analyse_target_types(self, env): node = self.analyse_base_and_index_types(env, setting=True) if node.type.is_const: error(self.pos, "Assignment to const dereference") if node is self and not node.is_lvalue(): error(self.pos, "Assignment to non-lvalue of type '%s'" % node.type) return node def analyse_base_and_index_types(self, env, getting=False, setting=False, analyse_base=True): # Note: This might be cleaned up by having IndexNode # parsed in a saner way and only constructing the tuple if # needed. if analyse_base: self.base = self.base.analyse_types(env) if self.base.type.is_error: # Do not visit child tree if base is undeclared to avoid confusing # error messages self.type = PyrexTypes.error_type return self is_slice = self.index.is_slice if not env.directives['wraparound']: if is_slice: check_negative_indices(self.index.start, self.index.stop) else: check_negative_indices(self.index) # Potentially overflowing index value. 
if not is_slice and isinstance(self.index, IntNode) and Utils.long_literal(self.index.value): self.index = self.index.coerce_to_pyobject(env) is_memslice = self.base.type.is_memoryviewslice # Handle the case where base is a literal char* (and we expect a string, not an int) if not is_memslice and (isinstance(self.base, BytesNode) or is_slice): if self.base.type.is_string or not (self.base.type.is_ptr or self.base.type.is_array): self.base = self.base.coerce_to_pyobject(env) replacement_node = self.analyse_as_buffer_operation(env, getting) if replacement_node is not None: return replacement_node self.nogil = env.nogil base_type = self.base.type if not base_type.is_cfunction: self.index = self.index.analyse_types(env) self.original_index_type = self.index.type if base_type.is_unicode_char: # we infer Py_UNICODE/Py_UCS4 for unicode strings in some # cases, but indexing must still work for them if setting: warning(self.pos, "cannot assign to Unicode string index", level=1) elif self.index.constant_result in (0, -1): # uchar[0] => uchar return self.base self.base = self.base.coerce_to_pyobject(env) base_type = self.base.type if base_type.is_pyobject: return self.analyse_as_pyobject(env, is_slice, getting, setting) elif base_type.is_ptr or base_type.is_array: return self.analyse_as_c_array(env, is_slice) elif base_type.is_cpp_class: return self.analyse_as_cpp(env, setting) elif base_type.is_cfunction: return self.analyse_as_c_function(env) elif base_type.is_ctuple: return self.analyse_as_c_tuple(env, getting, setting) else: error(self.pos, "Attempting to index non-array type '%s'" % base_type) self.type = PyrexTypes.error_type return self def analyse_as_pyobject(self, env, is_slice, getting, setting): base_type = self.base.type if self.index.type.is_unicode_char and base_type is not dict_type: # TODO: eventually fold into case below and remove warning, once people have adapted their code warning(self.pos, "Item lookup of unicode character codes now always converts to a Unicode string. 
" "Use an explicit C integer cast to get back the previous integer lookup behaviour.", level=1) self.index = self.index.coerce_to_pyobject(env) self.is_temp = 1 elif self.index.type.is_int and base_type is not dict_type: if (getting and (base_type in (list_type, tuple_type, bytearray_type)) and (not self.index.type.signed or not env.directives['wraparound'] or (isinstance(self.index, IntNode) and self.index.has_constant_result() and self.index.constant_result >= 0)) and not env.directives['boundscheck']): self.is_temp = 0 else: self.is_temp = 1 self.index = self.index.coerce_to(PyrexTypes.c_py_ssize_t_type, env).coerce_to_simple(env) self.original_index_type.create_to_py_utility_code(env) else: self.index = self.index.coerce_to_pyobject(env) self.is_temp = 1 if self.index.type.is_int and base_type is unicode_type: # Py_UNICODE/Py_UCS4 will automatically coerce to a unicode string # if required, so this is fast and safe self.type = PyrexTypes.c_py_ucs4_type elif self.index.type.is_int and base_type is bytearray_type: if setting: self.type = PyrexTypes.c_uchar_type else: # not using 'uchar' to enable fast and safe error reporting as '-1' self.type = PyrexTypes.c_int_type elif is_slice and base_type in (bytes_type, bytearray_type, str_type, unicode_type, list_type, tuple_type): self.type = base_type else: item_type = None if base_type in (list_type, tuple_type) and self.index.type.is_int: item_type = infer_sequence_item_type( env, self.base, self.index, seq_type=base_type) if item_type is None: item_type = py_object_type self.type = item_type if base_type in (list_type, tuple_type, dict_type): # do the None check explicitly (not in a helper) to allow optimising it away self.base = self.base.as_none_safe_node("'NoneType' object is not subscriptable") self.wrap_in_nonecheck_node(env, getting) return self def analyse_as_c_array(self, env, is_slice): base_type = self.base.type self.type = base_type.base_type if is_slice: self.type = base_type elif self.index.type.is_pyobject: self.index = self.index.coerce_to(PyrexTypes.c_py_ssize_t_type, env) elif not self.index.type.is_int: error(self.pos, "Invalid index type '%s'" % self.index.type) return self def analyse_as_cpp(self, env, setting): base_type = self.base.type function = env.lookup_operator("[]", [self.base, self.index]) if function is None: error(self.pos, "Indexing '%s' not supported for index type '%s'" % (base_type, self.index.type)) self.type = PyrexTypes.error_type self.result_code = "" return self func_type = function.type if func_type.is_ptr: func_type = func_type.base_type self.exception_check = func_type.exception_check self.exception_value = func_type.exception_value if self.exception_check: if not setting: self.is_temp = True if self.exception_value is None: env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp")) self.index = self.index.coerce_to(func_type.args[0].type, env) self.type = func_type.return_type if setting and not func_type.return_type.is_reference: error(self.pos, "Can't set non-reference result '%s'" % self.type) return self def analyse_as_c_function(self, env): base_type = self.base.type if base_type.is_fused: self.parse_indexed_fused_cdef(env) else: self.type_indices = self.parse_index_as_types(env) self.index = None # FIXME: use a dedicated Node class instead of generic IndexNode if base_type.templates is None: error(self.pos, "Can only parameterize template functions.") self.type = error_type elif self.type_indices is None: # Error recorded earlier. 
self.type = error_type elif len(base_type.templates) != len(self.type_indices): error(self.pos, "Wrong number of template arguments: expected %s, got %s" % ( (len(base_type.templates), len(self.type_indices)))) self.type = error_type else: self.type = base_type.specialize(dict(zip(base_type.templates, self.type_indices))) # FIXME: use a dedicated Node class instead of generic IndexNode return self def analyse_as_c_tuple(self, env, getting, setting): base_type = self.base.type if isinstance(self.index, IntNode) and self.index.has_constant_result(): index = self.index.constant_result if -base_type.size <= index < base_type.size: if index < 0: index += base_type.size self.type = base_type.components[index] else: error(self.pos, "Index %s out of bounds for '%s'" % (index, base_type)) self.type = PyrexTypes.error_type return self else: self.base = self.base.coerce_to_pyobject(env) return self.analyse_base_and_index_types(env, getting=getting, setting=setting, analyse_base=False) def analyse_as_buffer_operation(self, env, getting): """ Analyse buffer indexing and memoryview indexing/slicing """ if isinstance(self.index, TupleNode): indices = self.index.args else: indices = [self.index] base = self.base base_type = base.type replacement_node = None if base_type.is_memoryviewslice: # memoryviewslice indexing or slicing from . import MemoryView if base.is_memview_slice: # For memory views, "view[i][j]" is the same as "view[i, j]" => use the latter for speed. merged_indices = base.merged_indices(indices) if merged_indices is not None: base = base.base base_type = base.type indices = merged_indices have_slices, indices, newaxes = MemoryView.unellipsify(indices, base_type.ndim) if have_slices: replacement_node = MemoryViewSliceNode(self.pos, indices=indices, base=base) else: replacement_node = MemoryViewIndexNode(self.pos, indices=indices, base=base) elif base_type.is_buffer or base_type.is_pythran_expr: if base_type.is_pythran_expr or len(indices) == base_type.ndim: # Buffer indexing is_buffer_access = True indices = [index.analyse_types(env) for index in indices] if base_type.is_pythran_expr: do_replacement = all( index.type.is_int or index.is_slice or index.type.is_pythran_expr for index in indices) if do_replacement: for i,index in enumerate(indices): if index.is_slice: index = SliceIntNode(index.pos, start=index.start, stop=index.stop, step=index.step) index = index.analyse_types(env) indices[i] = index else: do_replacement = all(index.type.is_int for index in indices) if do_replacement: replacement_node = BufferIndexNode(self.pos, indices=indices, base=base) # On cloning, indices is cloned. Otherwise, unpack index into indices. assert not isinstance(self.index, CloneNode) if replacement_node is not None: replacement_node = replacement_node.analyse_types(env, getting) return replacement_node def wrap_in_nonecheck_node(self, env, getting): if not env.directives['nonecheck'] or not self.base.may_be_none(): return self.base = self.base.as_none_safe_node("'NoneType' object is not subscriptable") def parse_index_as_types(self, env, required=True): if isinstance(self.index, TupleNode): indices = self.index.args else: indices = [self.index] type_indices = [] for index in indices: type_indices.append(index.analyse_as_type(env)) if type_indices[-1] is None: if required: error(index.pos, "not parsable as a type") return None return type_indices def parse_indexed_fused_cdef(self, env): """ Interpret fused_cdef_func[specific_type1, ...] 
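For example (an illustrative sketch, not taken from this file):

    ctypedef fused number:
        int
        double

    cdef number twice(number x):
        return 2 * x

    twice[double](3.0)   # the index picks the 'double' specialization
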
Note that if this method is called, we are an indexed cdef function with fused argument types, and this IndexNode will be replaced by the NameNode with specific entry just after analysis of expressions by AnalyseExpressionsTransform. """ self.type = PyrexTypes.error_type self.is_fused_index = True base_type = self.base.type positions = [] if self.index.is_name or self.index.is_attribute: positions.append(self.index.pos) elif isinstance(self.index, TupleNode): for arg in self.index.args: positions.append(arg.pos) specific_types = self.parse_index_as_types(env, required=False) if specific_types is None: self.index = self.index.analyse_types(env) if not self.base.entry.as_variable: error(self.pos, "Can only index fused functions with types") else: # A cpdef function indexed with Python objects self.base.entry = self.entry = self.base.entry.as_variable self.base.type = self.type = self.entry.type self.base.is_temp = True self.is_temp = True self.entry.used = True self.is_fused_index = False return for i, type in enumerate(specific_types): specific_types[i] = type.specialize_fused(env) fused_types = base_type.get_fused_types() if len(specific_types) > len(fused_types): return error(self.pos, "Too many types specified") elif len(specific_types) < len(fused_types): t = fused_types[len(specific_types)] return error(self.pos, "Not enough types specified to specialize " "the function, %s is still fused" % t) # See if our index types form valid specializations for pos, specific_type, fused_type in zip(positions, specific_types, fused_types): if not any([specific_type.same_as(t) for t in fused_type.types]): return error(pos, "Type not in fused type") if specific_type is None or specific_type.is_error: return fused_to_specific = dict(zip(fused_types, specific_types)) type = base_type.specialize(fused_to_specific) if type.is_fused: # Only partially specific, this is invalid error(self.pos, "Index operation makes function only partially specific") else: # Fully specific, find the signature with the specialized entry for signature in self.base.type.get_all_specialized_function_types(): if type.same_as(signature): self.type = signature if self.base.is_attribute: # Pretend to be a normal attribute, for cdef extension # methods self.entry = signature.entry self.is_attribute = True self.obj = self.base.obj self.type.entry.used = True self.base.type = signature self.base.entry = signature.entry break else: # This is a bug raise InternalError("Couldn't find the right signature") gil_message = "Indexing Python object" def calculate_result_code(self): if self.base.type in (list_type, tuple_type, bytearray_type): if self.base.type is list_type: index_code = "PyList_GET_ITEM(%s, %s)" elif self.base.type is tuple_type: index_code = "PyTuple_GET_ITEM(%s, %s)" elif self.base.type is bytearray_type: index_code = "((unsigned char)(PyByteArray_AS_STRING(%s)[%s]))" else: assert False, "unexpected base type in indexing: %s" % self.base.type elif self.base.type.is_cfunction: return "%s<%s>" % ( self.base.result(), ",".join([param.empty_declaration_code() for param in self.type_indices])) elif self.base.type.is_ctuple: index = self.index.constant_result if index < 0: index += self.base.type.size return "%s.f%s" % (self.base.result(), index) else: if (self.type.is_ptr or self.type.is_array) and self.type == self.base.type: error(self.pos, "Invalid use of pointer slice") return index_code = "(%s[%s])" return index_code % (self.base.result(), self.index.result()) def extra_index_params(self, code): if self.index.type.is_int: 
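# Reconstructed sketch (names and values are illustrative, not a verbatim
# quote of the generated C) of the kind of helper call these extra
# parameters feed into via generate_result_code():
#
#     __Pyx_GetItemInt_List(obj, i,
#         Py_ssize_t,         /* declared index type */
#         1,                  /* index type is signed */
#         PyInt_FromSsize_t,  /* to-Python conversion for the index */
#         1, 1, 1)            /* is_list, wraparound, boundscheck */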
is_list = self.base.type is list_type wraparound = ( bool(code.globalstate.directives['wraparound']) and self.original_index_type.signed and not (isinstance(self.index.constant_result, _py_int_types) and self.index.constant_result >= 0)) boundscheck = bool(code.globalstate.directives['boundscheck']) return ", %s, %d, %s, %d, %d, %d" % ( self.original_index_type.empty_declaration_code(), self.original_index_type.signed and 1 or 0, self.original_index_type.to_py_function, is_list, wraparound, boundscheck) else: return "" def generate_result_code(self, code): if not self.is_temp: # all handled in self.calculate_result_code() return utility_code = None if self.type.is_pyobject: error_value = 'NULL' if self.index.type.is_int: if self.base.type is list_type: function = "__Pyx_GetItemInt_List" elif self.base.type is tuple_type: function = "__Pyx_GetItemInt_Tuple" else: function = "__Pyx_GetItemInt" utility_code = TempitaUtilityCode.load_cached("GetItemInt", "ObjectHandling.c") else: if self.base.type is dict_type: function = "__Pyx_PyDict_GetItem" utility_code = UtilityCode.load_cached("DictGetItem", "ObjectHandling.c") elif self.base.type is py_object_type and self.index.type in (str_type, unicode_type): # obj[str] is probably doing a dict lookup function = "__Pyx_PyObject_Dict_GetItem" utility_code = UtilityCode.load_cached("DictGetItem", "ObjectHandling.c") else: function = "__Pyx_PyObject_GetItem" code.globalstate.use_utility_code( TempitaUtilityCode.load_cached("GetItemInt", "ObjectHandling.c")) utility_code = UtilityCode.load_cached("ObjectGetItem", "ObjectHandling.c") elif self.type.is_unicode_char and self.base.type is unicode_type: assert self.index.type.is_int function = "__Pyx_GetItemInt_Unicode" error_value = '(Py_UCS4)-1' utility_code = UtilityCode.load_cached("GetItemIntUnicode", "StringTools.c") elif self.base.type is bytearray_type: assert self.index.type.is_int assert self.type.is_int function = "__Pyx_GetItemInt_ByteArray" error_value = '-1' utility_code = UtilityCode.load_cached("GetItemIntByteArray", "StringTools.c") elif not (self.base.type.is_cpp_class and self.exception_check): assert False, "unexpected type %s and base type %s for indexing" % ( self.type, self.base.type) if utility_code is not None: code.globalstate.use_utility_code(utility_code) if self.index.type.is_int: index_code = self.index.result() else: index_code = self.index.py_result() if self.base.type.is_cpp_class and self.exception_check: translate_cpp_exception(code, self.pos, "%s = %s[%s];" % (self.result(), self.base.result(), self.index.result()), self.result() if self.type.is_pyobject else None, self.exception_value, self.in_nogil_context) else: error_check = '!%s' if error_value == 'NULL' else '%%s == %s' % error_value code.putln( "%s = %s(%s, %s%s); %s" % ( self.result(), function, self.base.py_result(), index_code, self.extra_index_params(code), code.error_goto_if(error_check % self.result(), self.pos))) if self.type.is_pyobject: code.put_gotref(self.py_result()) def generate_setitem_code(self, value_code, code): if self.index.type.is_int: if self.base.type is bytearray_type: code.globalstate.use_utility_code( UtilityCode.load_cached("SetItemIntByteArray", "StringTools.c")) function = "__Pyx_SetItemInt_ByteArray" else: code.globalstate.use_utility_code( UtilityCode.load_cached("SetItemInt", "ObjectHandling.c")) function = "__Pyx_SetItemInt" index_code = self.index.result() else: index_code = self.index.py_result() if self.base.type is dict_type: function = "PyDict_SetItem" # It would seem that we 
could specialize lists/tuples, but that # shouldn't happen here. # Both PyList_SetItem() and PyTuple_SetItem() take a Py_ssize_t as # index instead of an object, and bad conversion here would give # the wrong exception. Also, tuples are supposed to be immutable, # and raise a TypeError when trying to set their entries # (PyTuple_SetItem() is for creating new tuples from scratch). else: function = "PyObject_SetItem" code.putln(code.error_goto_if_neg( "%s(%s, %s, %s%s)" % ( function, self.base.py_result(), index_code, value_code, self.extra_index_params(code)), self.pos)) def generate_assignment_code(self, rhs, code, overloaded_assignment=False, exception_check=None, exception_value=None): self.generate_subexpr_evaluation_code(code) if self.type.is_pyobject: self.generate_setitem_code(rhs.py_result(), code) elif self.base.type is bytearray_type: value_code = self._check_byte_value(code, rhs) self.generate_setitem_code(value_code, code) elif self.base.type.is_cpp_class and self.exception_check and self.exception_check == '+': if overloaded_assignment and exception_check and \ self.exception_value != exception_value: # Handle the case that both the index operator and the assignment # operator have a c++ exception handler and they are not the same. translate_double_cpp_exception(code, self.pos, self.type, self.result(), rhs.result(), self.exception_value, exception_value, self.in_nogil_context) else: # Handle the case that only the index operator has a # c++ exception handler, or that # both exception handlers are the same. translate_cpp_exception(code, self.pos, "%s = %s;" % (self.result(), rhs.result()), self.result() if self.type.is_pyobject else None, self.exception_value, self.in_nogil_context) else: code.putln( "%s = %s;" % (self.result(), rhs.result())) self.generate_subexpr_disposal_code(code) self.free_subexpr_temps(code) rhs.generate_disposal_code(code) rhs.free_temps(code) def _check_byte_value(self, code, rhs): # TODO: should we do this generally on downcasts, or just here? 
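# The user-level behaviour being implemented, as an illustrative sketch
# (some_int is a stand-in for any C integer expression):
#
#     cdef bytearray b = bytearray(b"abc")
#     b[0] = 65        # OK: constant already in range(0, 256)
#     b[0] = 300       # raises ValueError("byte must be in range(0, 256)")
#     b[0] = some_int  # range-checked at runtime (unless in a nogil block)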
assert rhs.type.is_int, repr(rhs.type) value_code = rhs.result() if rhs.has_constant_result(): if 0 <= rhs.constant_result < 256: return value_code needs_cast = True # make at least the C compiler happy warning(rhs.pos, "value outside of range(0, 256)" " when assigning to byte: %s" % rhs.constant_result, level=1) else: needs_cast = rhs.type != PyrexTypes.c_uchar_type if not self.nogil: conditions = [] if rhs.is_literal or rhs.type.signed: conditions.append('%s < 0' % value_code) if (rhs.is_literal or not (rhs.is_temp and rhs.type in ( PyrexTypes.c_uchar_type, PyrexTypes.c_char_type, PyrexTypes.c_schar_type))): conditions.append('%s > 255' % value_code) if conditions: code.putln("if (unlikely(%s)) {" % ' || '.join(conditions)) code.putln( 'PyErr_SetString(PyExc_ValueError,' ' "byte must be in range(0, 256)"); %s' % code.error_goto(self.pos)) code.putln("}") if needs_cast: value_code = '((unsigned char)%s)' % value_code return value_code def generate_deletion_code(self, code, ignore_nonexisting=False): self.generate_subexpr_evaluation_code(code) #if self.type.is_pyobject: if self.index.type.is_int: function = "__Pyx_DelItemInt" index_code = self.index.result() code.globalstate.use_utility_code( UtilityCode.load_cached("DelItemInt", "ObjectHandling.c")) else: index_code = self.index.py_result() if self.base.type is dict_type: function = "PyDict_DelItem" else: function = "PyObject_DelItem" code.putln(code.error_goto_if_neg( "%s(%s, %s%s)" % ( function, self.base.py_result(), index_code, self.extra_index_params(code)), self.pos)) self.generate_subexpr_disposal_code(code) self.free_subexpr_temps(code) class BufferIndexNode(_IndexingBaseNode): """ Indexing of buffers and memoryviews. This node is created during type analysis from IndexNode and replaces it. Attributes: base - base node being indexed indices - list of indexing expressions """ subexprs = ['base', 'indices'] is_buffer_access = True # Whether we're assigning to a buffer (in that case it needs to be writable) writable_needed = False def analyse_target_types(self, env): self.analyse_types(env, getting=False) def analyse_types(self, env, getting=True): """ Analyse types for buffer indexing only. 
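A typical triggering construct (illustrative, assuming the object buffer
syntax):

    cdef object[double, ndim=2] buf = ...
    buf[i, j]    # becomes a BufferIndexNode once both indices are C ints
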
Overridden by memoryview indexing and slicing subclasses """ # self.indices are already analyzed if not self.base.is_name and not is_pythran_expr(self.base.type): error(self.pos, "Can only index buffer variables") self.type = error_type return self if not getting: if not self.base.entry.type.writable: error(self.pos, "Writing to readonly buffer") else: self.writable_needed = True if self.base.type.is_buffer: self.base.entry.buffer_aux.writable_needed = True self.none_error_message = "'NoneType' object is not subscriptable" self.analyse_buffer_index(env, getting) self.wrap_in_nonecheck_node(env) return self def analyse_buffer_index(self, env, getting): if is_pythran_expr(self.base.type): index_with_type_list = [(idx, idx.type) for idx in self.indices] self.type = PythranExpr(pythran_indexing_type(self.base.type, index_with_type_list)) else: self.base = self.base.coerce_to_simple(env) self.type = self.base.type.dtype self.buffer_type = self.base.type if getting and (self.type.is_pyobject or self.type.is_pythran_expr): self.is_temp = True def analyse_assignment(self, rhs): """ Called by IndexNode when this node is assigned to, with the rhs of the assignment """ def wrap_in_nonecheck_node(self, env): if not env.directives['nonecheck'] or not self.base.may_be_none(): return self.base = self.base.as_none_safe_node(self.none_error_message) def nogil_check(self, env): if self.is_buffer_access or self.is_memview_index: if self.type.is_pyobject: error(self.pos, "Cannot access buffer with object dtype without gil") self.type = error_type def calculate_result_code(self): return "(*%s)" % self.buffer_ptr_code def buffer_entry(self): base = self.base if self.base.is_nonecheck: base = base.arg return base.type.get_entry(base) def get_index_in_temp(self, code, ivar): ret = code.funcstate.allocate_temp( PyrexTypes.widest_numeric_type( ivar.type, PyrexTypes.c_ssize_t_type if ivar.type.signed else PyrexTypes.c_size_t_type), manage_ref=False) code.putln("%s = %s;" % (ret, ivar.result())) return ret def buffer_lookup_code(self, code): """ ndarray[1, 2, 3] and memslice[1, 2, 3] """ if self.in_nogil_context: if self.is_buffer_access or self.is_memview_index: if code.globalstate.directives['boundscheck']: warning(self.pos, "Use boundscheck(False) for faster access", level=1) # Assign indices to temps of at least (s)size_t to allow further index calculations. index_temps = [self.get_index_in_temp(code,ivar) for ivar in self.indices] # Generate buffer access code using these temps from . 
import Buffer buffer_entry = self.buffer_entry() if buffer_entry.type.is_buffer: negative_indices = buffer_entry.type.negative_indices else: negative_indices = Buffer.buffer_defaults['negative_indices'] return buffer_entry, Buffer.put_buffer_lookup_code( entry=buffer_entry, index_signeds=[ivar.type.signed for ivar in self.indices], index_cnames=index_temps, directives=code.globalstate.directives, pos=self.pos, code=code, negative_indices=negative_indices, in_nogil_context=self.in_nogil_context) def generate_assignment_code(self, rhs, code, overloaded_assignment=False): self.generate_subexpr_evaluation_code(code) self.generate_buffer_setitem_code(rhs, code) self.generate_subexpr_disposal_code(code) self.free_subexpr_temps(code) rhs.generate_disposal_code(code) rhs.free_temps(code) def generate_buffer_setitem_code(self, rhs, code, op=""): base_type = self.base.type if is_pythran_expr(base_type) and is_pythran_supported_type(rhs.type): obj = code.funcstate.allocate_temp(PythranExpr(pythran_type(self.base.type)), manage_ref=False) # We have to do this because pythran objects must be declared # at the beginning of the function. # Indeed, Cython uses "goto" statements for error management, and # RAII doesn't work with that kind of construction. # Moreover, the way Pythran expressions are built means they don't # support move-assignment easily. # Thus, we explicitly destroy and then placement-new the objects in this # case. code.putln("__Pyx_call_destructor(%s);" % obj) code.putln("new (&%s) decltype(%s){%s};" % (obj, obj, self.base.pythran_result())) code.putln("%s%s %s= %s;" % ( obj, pythran_indexing_code(self.indices), op, rhs.pythran_result())) return # Used from generate_assignment_code and InPlaceAssignmentNode buffer_entry, ptrexpr = self.buffer_lookup_code(code) if self.buffer_type.dtype.is_pyobject: # Must manage refcounts. Decref what is already there # and incref what we put in. ptr = code.funcstate.allocate_temp(buffer_entry.buf_ptr_type, manage_ref=False) rhs_code = rhs.result() code.putln("%s = %s;" % (ptr, ptrexpr)) code.put_gotref("*%s" % ptr) code.putln("__Pyx_INCREF(%s); __Pyx_DECREF(*%s);" % ( rhs_code, ptr)) code.putln("*%s %s= %s;" % (ptr, op, rhs_code)) code.put_giveref("*%s" % ptr) code.funcstate.release_temp(ptr) else: # Simple case code.putln("*%s %s= %s;" % (ptrexpr, op, rhs.result())) def generate_result_code(self, code): if is_pythran_expr(self.base.type): res = self.result() code.putln("__Pyx_call_destructor(%s);" % res) code.putln("new (&%s) decltype(%s){%s%s};" % ( res, res, self.base.pythran_result(), pythran_indexing_code(self.indices))) return buffer_entry, self.buffer_ptr_code = self.buffer_lookup_code(code) if self.type.is_pyobject: # is_temp is True, so must pull out value and incref it. # NOTE: object temporary results for nodes are declared # as PyObject *, so we need a cast code.putln("%s = (PyObject *) *%s;" % (self.result(), self.buffer_ptr_code)) code.putln("__Pyx_INCREF((PyObject*)%s);" % self.result()) class MemoryViewIndexNode(BufferIndexNode): is_memview_index = True is_buffer_access = False warned_untyped_idx = False def analyse_types(self, env, getting=True): # memoryviewslice indexing or slicing from . 
import MemoryView self.is_pythran_mode = has_np_pythran(env) indices = self.indices have_slices, indices, newaxes = MemoryView.unellipsify(indices, self.base.type.ndim) if not getting: self.writable_needed = True if self.base.is_name or self.base.is_attribute: self.base.entry.type.writable_needed = True self.memslice_index = (not newaxes and len(indices) == self.base.type.ndim) axes = [] index_type = PyrexTypes.c_py_ssize_t_type new_indices = [] if len(indices) - len(newaxes) > self.base.type.ndim: self.type = error_type error(indices[self.base.type.ndim].pos, "Too many indices specified for type %s" % self.base.type) return self axis_idx = 0 for i, index in enumerate(indices[:]): index = index.analyse_types(env) if index.is_none: self.is_memview_slice = True new_indices.append(index) axes.append(('direct', 'strided')) continue access, packing = self.base.type.axes[axis_idx] axis_idx += 1 if index.is_slice: self.is_memview_slice = True if index.step.is_none: axes.append((access, packing)) else: axes.append((access, 'strided')) # Coerce start, stop and step to temps of the right type for attr in ('start', 'stop', 'step'): value = getattr(index, attr) if not value.is_none: value = value.coerce_to(index_type, env) #value = value.coerce_to_temp(env) setattr(index, attr, value) new_indices.append(value) elif index.type.is_int or index.type.is_pyobject: if index.type.is_pyobject and not self.warned_untyped_idx: warning(index.pos, "Index should be typed for more efficient access", level=2) MemoryViewIndexNode.warned_untyped_idx = True self.is_memview_index = True index = index.coerce_to(index_type, env) indices[i] = index new_indices.append(index) else: self.type = error_type error(index.pos, "Invalid index for memoryview specified, type %s" % index.type) return self ### FIXME: replace by MemoryViewSliceNode if is_memview_slice ? self.is_memview_index = self.is_memview_index and not self.is_memview_slice self.indices = new_indices # All indices with all start/stop/step for slices. # We need to keep this around. self.original_indices = indices self.nogil = env.nogil self.analyse_operation(env, getting, axes) self.wrap_in_nonecheck_node(env) return self def analyse_operation(self, env, getting, axes): self.none_error_message = "Cannot index None memoryview slice" self.analyse_buffer_index(env, getting) def analyse_broadcast_operation(self, rhs): """ Support broadcasting for slice assignment. E.g. m_2d[...] = m_1d # or, m_1d[...] = m_2d # if the leading dimension has extent 1 """ if self.type.is_memoryviewslice: lhs = self if lhs.is_memview_broadcast or rhs.is_memview_broadcast: lhs.is_memview_broadcast = True rhs.is_memview_broadcast = True def analyse_as_memview_scalar_assignment(self, rhs): lhs = self.analyse_assignment(rhs) if lhs: rhs.is_memview_copy_assignment = lhs.is_memview_copy_assignment return lhs return self class MemoryViewSliceNode(MemoryViewIndexNode): is_memview_slice = True # No-op slicing operation, this node will be replaced is_ellipsis_noop = False is_memview_scalar_assignment = False is_memview_index = False is_memview_broadcast = False def analyse_ellipsis_noop(self, env, getting): """Slicing operations needing no evaluation, i.e. m[...] or m[:, :]""" ### FIXME: replace directly self.is_ellipsis_noop = all( index.is_slice and index.start.is_none and index.stop.is_none and index.step.is_none for index in self.indices) if self.is_ellipsis_noop: self.type = self.base.type def analyse_operation(self, env, getting, axes): from . 
import MemoryView if not getting: self.is_memview_broadcast = True self.none_error_message = "Cannot assign to None memoryview slice" else: self.none_error_message = "Cannot slice None memoryview slice" self.analyse_ellipsis_noop(env, getting) if self.is_ellipsis_noop: return self.index = None self.is_temp = True self.use_managed_ref = True if not MemoryView.validate_axes(self.pos, axes): self.type = error_type return self.type = PyrexTypes.MemoryViewSliceType(self.base.type.dtype, axes) if not (self.base.is_simple() or self.base.result_in_temp()): self.base = self.base.coerce_to_temp(env) def analyse_assignment(self, rhs): if not rhs.type.is_memoryviewslice and ( self.type.dtype.assignable_from(rhs.type) or rhs.type.is_pyobject): # scalar assignment return MemoryCopyScalar(self.pos, self) else: return MemoryCopySlice(self.pos, self) def merged_indices(self, indices): """Return a new list of indices/slices with 'indices' merged into the current ones according to slicing rules. Is used to implement "view[i][j]" => "view[i, j]". Return None if the indices cannot (easily) be merged at compile time. """ if not indices: return None # NOTE: Need to evaluate "self.original_indices" here as they might differ from "self.indices". new_indices = self.original_indices[:] indices = indices[:] for i, s in enumerate(self.original_indices): if s.is_slice: if s.start.is_none and s.stop.is_none and s.step.is_none: # Full slice found, replace by index. new_indices[i] = indices[0] indices.pop(0) if not indices: return new_indices else: # Found something non-trivial, e.g. a partial slice. return None elif not s.type.is_int: # Not a slice, not an integer index => could be anything... return None if indices: if len(new_indices) + len(indices) > self.base.type.ndim: return None new_indices += indices return new_indices def is_simple(self): if self.is_ellipsis_noop: # TODO: fix SimpleCallNode.is_simple() return self.base.is_simple() or self.base.result_in_temp() return self.result_in_temp() def calculate_result_code(self): """This is called in case this is a no-op slicing node""" return self.base.result() def generate_result_code(self, code): if self.is_ellipsis_noop: return ### FIXME: remove buffer_entry = self.buffer_entry() have_gil = not self.in_nogil_context # TODO Mark: this is insane, do it better have_slices = False it = iter(self.indices) for index in self.original_indices: if index.is_slice: have_slices = True if not index.start.is_none: index.start = next(it) if not index.stop.is_none: index.stop = next(it) if not index.step.is_none: index.step = next(it) else: next(it) assert not list(it) buffer_entry.generate_buffer_slice_code( code, self.original_indices, self.result(), have_gil=have_gil, have_slices=have_slices, directives=code.globalstate.directives) def generate_assignment_code(self, rhs, code, overloaded_assignment=False): if self.is_ellipsis_noop: self.generate_subexpr_evaluation_code(code) else: self.generate_evaluation_code(code) if self.is_memview_scalar_assignment: self.generate_memoryviewslice_assign_scalar_code(rhs, code) else: self.generate_memoryviewslice_setslice_code(rhs, code) if self.is_ellipsis_noop: self.generate_subexpr_disposal_code(code) else: self.generate_disposal_code(code) rhs.generate_disposal_code(code) rhs.free_temps(code) class MemoryCopyNode(ExprNode): """ Wraps a memoryview slice for slice assignment. 
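For example (illustrative):

    m1[...] = m2     # slice-to-slice copy, handled by MemoryCopySlice
    m1[:] = 0.0      # scalar broadcast, handled by MemoryCopyScalar
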
dst: destination memoryview slice """ subexprs = ['dst'] def __init__(self, pos, dst): super(MemoryCopyNode, self).__init__(pos) self.dst = dst self.type = dst.type def generate_assignment_code(self, rhs, code, overloaded_assignment=False): self.dst.generate_evaluation_code(code) self._generate_assignment_code(rhs, code) self.dst.generate_disposal_code(code) rhs.generate_disposal_code(code) rhs.free_temps(code) class MemoryCopySlice(MemoryCopyNode): """ Copy the contents of slice src to slice dst. Does not support indirect slices. memslice1[...] = memslice2 memslice1[:] = memslice2 """ is_memview_copy_assignment = True copy_slice_cname = "__pyx_memoryview_copy_contents" def _generate_assignment_code(self, src, code): dst = self.dst src.type.assert_direct_dims(src.pos) dst.type.assert_direct_dims(dst.pos) code.putln(code.error_goto_if_neg( "%s(%s, %s, %d, %d, %d)" % (self.copy_slice_cname, src.result(), dst.result(), src.type.ndim, dst.type.ndim, dst.type.dtype.is_pyobject), dst.pos)) class MemoryCopyScalar(MemoryCopyNode): """ Assign a scalar to a slice. dst must be simple; the scalar will be assigned with the correct type, not just something assignable. memslice1[...] = 0.0 memslice1[:] = 0.0 """ def __init__(self, pos, dst): super(MemoryCopyScalar, self).__init__(pos, dst) self.type = dst.type.dtype def _generate_assignment_code(self, scalar, code): from . import MemoryView self.dst.type.assert_direct_dims(self.dst.pos) dtype = self.dst.type.dtype type_decl = dtype.declaration_code("") slice_decl = self.dst.type.declaration_code("") code.begin_block() code.putln("%s __pyx_temp_scalar = %s;" % (type_decl, scalar.result())) if self.dst.result_in_temp() or self.dst.is_simple(): dst_temp = self.dst.result() else: code.putln("%s __pyx_temp_slice = %s;" % (slice_decl, self.dst.result())) dst_temp = "__pyx_temp_slice" slice_iter_obj = MemoryView.slice_iter(self.dst.type, dst_temp, self.dst.type.ndim, code) p = slice_iter_obj.start_loops() if dtype.is_pyobject: code.putln("Py_DECREF(*(PyObject **) %s);" % p) code.putln("*((%s *) %s) = __pyx_temp_scalar;" % (type_decl, p)) if dtype.is_pyobject: code.putln("Py_INCREF(__pyx_temp_scalar);") slice_iter_obj.end_loops() code.end_block() class SliceIndexNode(ExprNode): # 2-element slice indexing # # base ExprNode # start ExprNode or None # stop ExprNode or None # slice ExprNode or None constant slice object subexprs = ['base', 'start', 'stop', 'slice'] slice = None def infer_type(self, env): base_type = self.base.infer_type(env) if base_type.is_string or base_type.is_cpp_class: return bytes_type elif base_type.is_pyunicode_ptr: return unicode_type elif base_type in (bytes_type, bytearray_type, str_type, unicode_type, basestring_type, list_type, tuple_type): return base_type elif base_type.is_ptr or base_type.is_array: return PyrexTypes.c_array_type(base_type.base_type, None) return py_object_type def inferable_item_node(self, index=0): # slicing shouldn't change the result type of the base, but the index might if index is not not_a_constant and self.start: if self.start.has_constant_result(): index += self.start.constant_result else: index = not_a_constant return self.base.inferable_item_node(index) def may_be_none(self): base_type = self.base.type if base_type: if base_type.is_string: return False if base_type in (bytes_type, str_type, unicode_type, basestring_type, list_type, tuple_type): return False return ExprNode.may_be_none(self) def calculate_constant_result(self): if self.start is None: start = None else: start = self.start.constant_result if 
self.stop is None: stop = None else: stop = self.stop.constant_result self.constant_result = self.base.constant_result[start:stop] def compile_time_value(self, denv): base = self.base.compile_time_value(denv) if self.start is None: start = 0 else: start = self.start.compile_time_value(denv) if self.stop is None: stop = None else: stop = self.stop.compile_time_value(denv) try: return base[start:stop] except Exception as e: self.compile_time_value_error(e) def analyse_target_declaration(self, env): pass def analyse_target_types(self, env): node = self.analyse_types(env, getting=False) # when assigning, we must accept any Python type if node.type.is_pyobject: node.type = py_object_type return node def analyse_types(self, env, getting=True): self.base = self.base.analyse_types(env) if self.base.type.is_buffer or self.base.type.is_pythran_expr or self.base.type.is_memoryviewslice: none_node = NoneNode(self.pos) index = SliceNode(self.pos, start=self.start or none_node, stop=self.stop or none_node, step=none_node) index_node = IndexNode(self.pos, index=index, base=self.base) return index_node.analyse_base_and_index_types( env, getting=getting, setting=not getting, analyse_base=False) if self.start: self.start = self.start.analyse_types(env) if self.stop: self.stop = self.stop.analyse_types(env) if not env.directives['wraparound']: check_negative_indices(self.start, self.stop) base_type = self.base.type if base_type.is_array and not getting: # cannot assign directly to C array => try to assign by making a copy if not self.start and not self.stop: self.type = base_type else: self.type = PyrexTypes.CPtrType(base_type.base_type) elif base_type.is_string or base_type.is_cpp_string: self.type = default_str_type(env) elif base_type.is_pyunicode_ptr: self.type = unicode_type elif base_type.is_ptr: self.type = base_type elif base_type.is_array: # we need a ptr type here instead of an array type, as # array types can result in invalid type casts in the C # code self.type = PyrexTypes.CPtrType(base_type.base_type) else: self.base = self.base.coerce_to_pyobject(env) self.type = py_object_type if base_type.is_builtin_type: # slicing builtin types returns something of the same type self.type = base_type self.base = self.base.as_none_safe_node("'NoneType' object is not subscriptable") if self.type is py_object_type: if (not self.start or self.start.is_literal) and \ (not self.stop or self.stop.is_literal): # cache the constant slice object, in case we need it none_node = NoneNode(self.pos) self.slice = SliceNode( self.pos, start=copy.deepcopy(self.start or none_node), stop=copy.deepcopy(self.stop or none_node), step=none_node ).analyse_types(env) else: c_int = PyrexTypes.c_py_ssize_t_type def allow_none(node, default_value, env): # Coerce to Py_ssize_t, but allow None as meaning the default slice bound. 
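# A plain-Python sketch (added commentary) of the node tree built below:
#
#     temp = bound_expr
#     result = default_value if temp is None else <Py_ssize_t> temp
#
# i.e. a None bound selects the default (0 for start, PY_SSIZE_T_MAX for stop).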
from .UtilNodes import EvalWithTempExprNode, ResultRefNode node_ref = ResultRefNode(node) new_expr = CondExprNode( node.pos, true_val=IntNode( node.pos, type=c_int, value=default_value, constant_result=int(default_value) if default_value.isdigit() else not_a_constant, ), false_val=node_ref.coerce_to(c_int, env), test=PrimaryCmpNode( node.pos, operand1=node_ref, operator='is', operand2=NoneNode(node.pos), ).analyse_types(env) ).analyse_result_type(env) return EvalWithTempExprNode(node_ref, new_expr) if self.start: if self.start.type.is_pyobject: self.start = allow_none(self.start, '0', env) self.start = self.start.coerce_to(c_int, env) if self.stop: if self.stop.type.is_pyobject: self.stop = allow_none(self.stop, 'PY_SSIZE_T_MAX', env) self.stop = self.stop.coerce_to(c_int, env) self.is_temp = 1 return self def analyse_as_type(self, env): base_type = self.base.analyse_as_type(env) if base_type and not base_type.is_pyobject: if not self.start and not self.stop: # memory view from . import MemoryView env.use_utility_code(MemoryView.view_utility_code) none_node = NoneNode(self.pos) slice_node = SliceNode( self.pos, start=none_node, stop=none_node, step=none_node, ) return PyrexTypes.MemoryViewSliceType( base_type, MemoryView.get_axes_specs(env, [slice_node])) return None nogil_check = Node.gil_error gil_message = "Slicing Python object" get_slice_utility_code = TempitaUtilityCode.load( "SliceObject", "ObjectHandling.c", context={'access': 'Get'}) set_slice_utility_code = TempitaUtilityCode.load( "SliceObject", "ObjectHandling.c", context={'access': 'Set'}) def coerce_to(self, dst_type, env): if ((self.base.type.is_string or self.base.type.is_cpp_string) and dst_type in (bytes_type, bytearray_type, str_type, unicode_type)): if (dst_type not in (bytes_type, bytearray_type) and not env.directives['c_string_encoding']): error(self.pos, "default encoding required for conversion from '%s' to '%s'" % (self.base.type, dst_type)) self.type = dst_type if dst_type.is_array and self.base.type.is_array: if not self.start and not self.stop: # redundant slice building, copy C arrays directly return self.base.coerce_to(dst_type, env) # else: check array size if possible return super(SliceIndexNode, self).coerce_to(dst_type, env) def generate_result_code(self, code): if not self.type.is_pyobject: error(self.pos, "Slicing is not currently supported for '%s'." 
% self.type) return base_result = self.base.result() result = self.result() start_code = self.start_code() stop_code = self.stop_code() if self.base.type.is_string: base_result = self.base.result() if self.base.type not in (PyrexTypes.c_char_ptr_type, PyrexTypes.c_const_char_ptr_type): base_result = '((const char*)%s)' % base_result if self.type is bytearray_type: type_name = 'ByteArray' else: type_name = self.type.name.title() if self.stop is None: code.putln( "%s = __Pyx_Py%s_FromString(%s + %s); %s" % ( result, type_name, base_result, start_code, code.error_goto_if_null(result, self.pos))) else: code.putln( "%s = __Pyx_Py%s_FromStringAndSize(%s + %s, %s - %s); %s" % ( result, type_name, base_result, start_code, stop_code, start_code, code.error_goto_if_null(result, self.pos))) elif self.base.type.is_pyunicode_ptr: base_result = self.base.result() if self.base.type != PyrexTypes.c_py_unicode_ptr_type: base_result = '((const Py_UNICODE*)%s)' % base_result if self.stop is None: code.putln( "%s = __Pyx_PyUnicode_FromUnicode(%s + %s); %s" % ( result, base_result, start_code, code.error_goto_if_null(result, self.pos))) else: code.putln( "%s = __Pyx_PyUnicode_FromUnicodeAndLength(%s + %s, %s - %s); %s" % ( result, base_result, start_code, stop_code, start_code, code.error_goto_if_null(result, self.pos))) elif self.base.type is unicode_type: code.globalstate.use_utility_code( UtilityCode.load_cached("PyUnicode_Substring", "StringTools.c")) code.putln( "%s = __Pyx_PyUnicode_Substring(%s, %s, %s); %s" % ( result, base_result, start_code, stop_code, code.error_goto_if_null(result, self.pos))) elif self.type is py_object_type: code.globalstate.use_utility_code(self.get_slice_utility_code) (has_c_start, has_c_stop, c_start, c_stop, py_start, py_stop, py_slice) = self.get_slice_config() code.putln( "%s = __Pyx_PyObject_GetSlice(%s, %s, %s, %s, %s, %s, %d, %d, %d); %s" % ( result, self.base.py_result(), c_start, c_stop, py_start, py_stop, py_slice, has_c_start, has_c_stop, bool(code.globalstate.directives['wraparound']), code.error_goto_if_null(result, self.pos))) else: if self.base.type is list_type: code.globalstate.use_utility_code( TempitaUtilityCode.load_cached("SliceTupleAndList", "ObjectHandling.c")) cfunc = '__Pyx_PyList_GetSlice' elif self.base.type is tuple_type: code.globalstate.use_utility_code( TempitaUtilityCode.load_cached("SliceTupleAndList", "ObjectHandling.c")) cfunc = '__Pyx_PyTuple_GetSlice' else: cfunc = 'PySequence_GetSlice' code.putln( "%s = %s(%s, %s, %s); %s" % ( result, cfunc, self.base.py_result(), start_code, stop_code, code.error_goto_if_null(result, self.pos))) code.put_gotref(self.py_result()) def generate_assignment_code(self, rhs, code, overloaded_assignment=False, exception_check=None, exception_value=None): self.generate_subexpr_evaluation_code(code) if self.type.is_pyobject: code.globalstate.use_utility_code(self.set_slice_utility_code) (has_c_start, has_c_stop, c_start, c_stop, py_start, py_stop, py_slice) = self.get_slice_config() code.put_error_if_neg(self.pos, "__Pyx_PyObject_SetSlice(%s, %s, %s, %s, %s, %s, %s, %d, %d, %d)" % ( self.base.py_result(), rhs.py_result(), c_start, c_stop, py_start, py_stop, py_slice, has_c_start, has_c_stop, bool(code.globalstate.directives['wraparound']))) else: start_offset = self.start_code() if self.start else '0' if rhs.type.is_array: array_length = rhs.type.size self.generate_slice_guard_code(code, array_length) else: array_length = '%s - %s' % (self.stop_code(), start_offset) 
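# Illustrative example of this C-array branch (reconstructed, not quoted
# from generated sources):
#
#     cdef int src[3]
#     cdef int dst[5]
#     dst[1:4] = src   # compiles to roughly:
#                      #   memcpy(&(dst[1]), src, sizeof(dst[0]) * (3));
#
# after generate_slice_guard_code() has verified that the lengths match.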
code.globalstate.use_utility_code(UtilityCode.load_cached("IncludeStringH", "StringTools.c")) code.putln("memcpy(&(%s[%s]), %s, sizeof(%s[0]) * (%s));" % ( self.base.result(), start_offset, rhs.result(), self.base.result(), array_length )) self.generate_subexpr_disposal_code(code) self.free_subexpr_temps(code) rhs.generate_disposal_code(code) rhs.free_temps(code) def generate_deletion_code(self, code, ignore_nonexisting=False): if not self.base.type.is_pyobject: error(self.pos, "Deleting slices is only supported for Python types, not '%s'." % self.type) return self.generate_subexpr_evaluation_code(code) code.globalstate.use_utility_code(self.set_slice_utility_code) (has_c_start, has_c_stop, c_start, c_stop, py_start, py_stop, py_slice) = self.get_slice_config() code.put_error_if_neg(self.pos, "__Pyx_PyObject_DelSlice(%s, %s, %s, %s, %s, %s, %d, %d, %d)" % ( self.base.py_result(), c_start, c_stop, py_start, py_stop, py_slice, has_c_start, has_c_stop, bool(code.globalstate.directives['wraparound']))) self.generate_subexpr_disposal_code(code) self.free_subexpr_temps(code) def get_slice_config(self): has_c_start, c_start, py_start = False, '0', 'NULL' if self.start: has_c_start = not self.start.type.is_pyobject if has_c_start: c_start = self.start.result() else: py_start = '&%s' % self.start.py_result() has_c_stop, c_stop, py_stop = False, '0', 'NULL' if self.stop: has_c_stop = not self.stop.type.is_pyobject if has_c_stop: c_stop = self.stop.result() else: py_stop = '&%s' % self.stop.py_result() py_slice = self.slice and '&%s' % self.slice.py_result() or 'NULL' return (has_c_start, has_c_stop, c_start, c_stop, py_start, py_stop, py_slice) def generate_slice_guard_code(self, code, target_size): if not self.base.type.is_array: return slice_size = self.base.type.size try: total_length = slice_size = int(slice_size) except ValueError: total_length = None start = stop = None if self.stop: stop = self.stop.result() try: stop = int(stop) if stop < 0: if total_length is None: slice_size = '%s + %d' % (slice_size, stop) else: slice_size += stop else: slice_size = stop stop = None except ValueError: pass if self.start: start = self.start.result() try: start = int(start) if start < 0: if total_length is None: start = '%s + %d' % (self.base.type.size, start) else: start += total_length if isinstance(slice_size, _py_int_types): slice_size -= start else: slice_size = '%s - (%s)' % (slice_size, start) start = None except ValueError: pass runtime_check = None compile_time_check = False try: int_target_size = int(target_size) except ValueError: int_target_size = None else: compile_time_check = isinstance(slice_size, _py_int_types) if compile_time_check and slice_size < 0: if int_target_size > 0: error(self.pos, "Assignment to empty slice.") elif compile_time_check and start is None and stop is None: # we know the exact slice length if int_target_size != slice_size: error(self.pos, "Assignment to slice of wrong length, expected %s, got %s" % ( slice_size, target_size)) elif start is not None: if stop is None: stop = slice_size runtime_check = "(%s)-(%s)" % (stop, start) elif stop is not None: runtime_check = stop else: runtime_check = slice_size if runtime_check: code.putln("if (unlikely((%s) != (%s))) {" % (runtime_check, target_size)) code.putln( 'PyErr_Format(PyExc_ValueError, "Assignment to slice of wrong length,' ' expected %%" CYTHON_FORMAT_SSIZE_T "d, got %%" CYTHON_FORMAT_SSIZE_T "d",' ' (Py_ssize_t)(%s), (Py_ssize_t)(%s));' % ( target_size, runtime_check)) code.putln(code.error_goto(self.pos)) 
            code.putln("}")

    def start_code(self):
        if self.start:
            return self.start.result()
        else:
            return "0"

    def stop_code(self):
        if self.stop:
            return self.stop.result()
        elif self.base.type.is_array:
            return self.base.type.size
        else:
            return "PY_SSIZE_T_MAX"

    def calculate_result_code(self):
        # self.result() is not used, but this method must exist
        return "<unused>"


class SliceNode(ExprNode):
    #  start:stop:step in subscript list
    #
    #  start     ExprNode
    #  stop      ExprNode
    #  step      ExprNode

    subexprs = ['start', 'stop', 'step']
    is_slice = True
    type = slice_type
    is_temp = 1

    def calculate_constant_result(self):
        self.constant_result = slice(
            self.start.constant_result,
            self.stop.constant_result,
            self.step.constant_result)

    def compile_time_value(self, denv):
        start = self.start.compile_time_value(denv)
        stop = self.stop.compile_time_value(denv)
        step = self.step.compile_time_value(denv)
        try:
            return slice(start, stop, step)
        except Exception as e:
            self.compile_time_value_error(e)

    def may_be_none(self):
        return False

    def analyse_types(self, env):
        start = self.start.analyse_types(env)
        stop = self.stop.analyse_types(env)
        step = self.step.analyse_types(env)
        self.start = start.coerce_to_pyobject(env)
        self.stop = stop.coerce_to_pyobject(env)
        self.step = step.coerce_to_pyobject(env)
        if self.start.is_literal and self.stop.is_literal and self.step.is_literal:
            self.is_literal = True
            self.is_temp = False
        return self

    gil_message = "Constructing Python slice object"

    def calculate_result_code(self):
        return self.result_code

    def generate_result_code(self, code):
        if self.is_literal:
            dedup_key = make_dedup_key(self.type, (self,))
            self.result_code = code.get_py_const(py_object_type, 'slice', cleanup_level=2, dedup_key=dedup_key)
            code = code.get_cached_constants_writer(self.result_code)
            if code is None:
                return  # already initialised
            code.mark_pos(self.pos)

        code.putln(
            "%s = PySlice_New(%s, %s, %s); %s" % (
                self.result(),
                self.start.py_result(),
                self.stop.py_result(),
                self.step.py_result(),
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())
        if self.is_literal:
            code.put_giveref(self.py_result())


class SliceIntNode(SliceNode):
    #  start:stop:step in subscript list
    #  This is just a node to hold start, stop and step nodes that can be
    #  converted to integers. This does not generate a slice python object.
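    #
    #  Illustrative note (not part of the original source): this lets an
    #  index expression such as  m[i:j]  on a memoryview keep i and j as
    #  plain C integers instead of boxing them into a Python slice object.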
    #
    #  start     ExprNode
    #  stop      ExprNode
    #  step      ExprNode

    is_temp = 0

    def calculate_constant_result(self):
        self.constant_result = slice(
            self.start.constant_result,
            self.stop.constant_result,
            self.step.constant_result)

    def compile_time_value(self, denv):
        start = self.start.compile_time_value(denv)
        stop = self.stop.compile_time_value(denv)
        step = self.step.compile_time_value(denv)
        try:
            return slice(start, stop, step)
        except Exception as e:
            self.compile_time_value_error(e)

    def may_be_none(self):
        return False

    def analyse_types(self, env):
        self.start = self.start.analyse_types(env)
        self.stop = self.stop.analyse_types(env)
        self.step = self.step.analyse_types(env)

        if not self.start.is_none:
            self.start = self.start.coerce_to_integer(env)
        if not self.stop.is_none:
            self.stop = self.stop.coerce_to_integer(env)
        if not self.step.is_none:
            self.step = self.step.coerce_to_integer(env)

        if self.start.is_literal and self.stop.is_literal and self.step.is_literal:
            self.is_literal = True
            self.is_temp = False
        return self

    def calculate_result_code(self):
        pass

    def generate_result_code(self, code):
        for a in self.start, self.stop, self.step:
            if isinstance(a, CloneNode):
                a.arg.result()


class CallNode(ExprNode):

    # allow overriding the default 'may_be_none' behaviour
    may_return_none = None

    def infer_type(self, env):
        # TODO(robertwb): Reduce redundancy with analyse_types.
        function = self.function
        func_type = function.infer_type(env)
        if isinstance(function, NewExprNode):
            # note: needs call to infer_type() above
            return PyrexTypes.CPtrType(function.class_type)
        if func_type is py_object_type:
            # function might have lied for safety => try to find better type
            entry = getattr(function, 'entry', None)
            if entry is not None:
                func_type = entry.type or func_type
        if func_type.is_ptr:
            func_type = func_type.base_type
        if func_type.is_cfunction:
            if getattr(self.function, 'entry', None) and hasattr(self, 'args'):
                alternatives = self.function.entry.all_alternatives()
                arg_types = [arg.infer_type(env) for arg in self.args]
                func_entry = PyrexTypes.best_match(arg_types, alternatives)
                if func_entry:
                    func_type = func_entry.type
                    if func_type.is_ptr:
                        func_type = func_type.base_type
                    return func_type.return_type
            return func_type.return_type
        elif func_type is type_type:
            if function.is_name and function.entry and function.entry.type:
                result_type = function.entry.type
                if result_type.is_extension_type:
                    return result_type
                elif result_type.is_builtin_type:
                    if function.entry.name == 'float':
                        return PyrexTypes.c_double_type
                    elif function.entry.name in Builtin.types_that_construct_their_instance:
                        return result_type
        return py_object_type

    def type_dependencies(self, env):
        # TODO: Update when Danilo's C++ code merged in to handle the
        # case of function overloading.
        return self.function.type_dependencies(env)

    def is_simple(self):
        # C function calls could be considered simple, but they may
        # have side-effects that may hit when multiple operations must
        # be effected in order, e.g. when constructing the argument
        # sequence for a function call or comparing values.
        return False

    def may_be_none(self):
        if self.may_return_none is not None:
            return self.may_return_none
        func_type = self.function.type
        if func_type is type_type and self.function.is_name:
            entry = self.function.entry
            if entry.type.is_extension_type:
                return False
            if (entry.type.is_builtin_type and
                    entry.name in Builtin.types_that_construct_their_instance):
                return False
        return ExprNode.may_be_none(self)

    def set_py_result_type(self, function, func_type=None):
        if func_type is None:
            func_type = function.type
        if func_type is Builtin.type_type and (
                function.is_name and
                function.entry and
                function.entry.is_builtin and
                function.entry.name in Builtin.types_that_construct_their_instance):
            # calling a builtin type that returns a specific object type
            if function.entry.name == 'float':
                # the following will come true later on in a transform
                self.type = PyrexTypes.c_double_type
                self.result_ctype = PyrexTypes.c_double_type
            else:
                self.type = Builtin.builtin_types[function.entry.name]
                self.result_ctype = py_object_type
            self.may_return_none = False
        elif function.is_name and function.type_entry:
            # We are calling an extension type constructor.  As long as we do not
            # support __new__(), the result type is clear
            self.type = function.type_entry.type
            self.result_ctype = py_object_type
            self.may_return_none = False
        else:
            self.type = py_object_type

    def analyse_as_type_constructor(self, env):
        type = self.function.analyse_as_type(env)
        if type and type.is_struct_or_union:
            args, kwds = self.explicit_args_kwds()
            items = []
            for arg, member in zip(args, type.scope.var_entries):
                items.append(DictItemNode(pos=arg.pos,
                                          key=StringNode(pos=arg.pos, value=member.name),
                                          value=arg))
            if kwds:
                items += kwds.key_value_pairs
            self.key_value_pairs = items
            self.__class__ = DictNode
            self.analyse_types(env)    # FIXME
            self.coerce_to(type, env)
            return True
        elif type and type.is_cpp_class:
            self.args = [ arg.analyse_types(env) for arg in self.args ]
            constructor = type.scope.lookup("<init>")
            if not constructor:
                error(self.function.pos, "no constructor found for C++ type '%s'" % self.function.name)
                self.type = error_type
                return self
            self.function = RawCNameExprNode(self.function.pos, constructor.type)
            self.function.entry = constructor
            self.function.set_cname(type.empty_declaration_code())
            self.analyse_c_function_call(env)
            self.type = type
            return True

    def is_lvalue(self):
        return self.type.is_reference

    def nogil_check(self, env):
        func_type = self.function_type()
        if func_type.is_pyobject:
            self.gil_error()
        elif not getattr(func_type, 'nogil', False):
            self.gil_error()

    gil_message = "Calling gil-requiring function"


class SimpleCallNode(CallNode):
    #  Function call without keyword, * or ** args.
    #
    #  function       ExprNode
    #  args           [ExprNode]
    #  arg_tuple      ExprNode or None     used internally
    #  self           ExprNode or None     used internally
    #  coerced_self   ExprNode or None     used internally
    #  wrapper_call   bool                 used internally
    #  has_optional_args   bool            used internally
    #  nogil          bool                 used internally

    subexprs = ['self', 'coerced_self', 'function', 'args', 'arg_tuple']

    self = None
    coerced_self = None
    arg_tuple = None
    wrapper_call = False
    has_optional_args = False
    nogil = False
    analysed = False
    overflowcheck = False

    def compile_time_value(self, denv):
        function = self.function.compile_time_value(denv)
        args = [arg.compile_time_value(denv) for arg in self.args]
        try:
            return function(*args)
        except Exception as e:
            self.compile_time_value_error(e)

    def analyse_as_type(self, env):
        attr = self.function.as_cython_attribute()
        if attr == 'pointer':
            if len(self.args) != 1:
                error(self.args.pos, "only one type allowed.")
            else:
                type = self.args[0].analyse_as_type(env)
                if not type:
                    error(self.args[0].pos, "Unknown type")
                else:
                    return PyrexTypes.CPtrType(type)
        elif attr == 'typeof':
            if len(self.args) != 1:
                error(self.args.pos, "only one type allowed.")
            operand = self.args[0].analyse_types(env)
            return operand.type

    def explicit_args_kwds(self):
        return self.args, None

    def analyse_types(self, env):
        if self.analyse_as_type_constructor(env):
            return self
        if self.analysed:
            return self
        self.analysed = True
        self.function.is_called = 1
        self.function = self.function.analyse_types(env)
        function = self.function

        if function.is_attribute and function.entry and function.entry.is_cmethod:
            # Take ownership of the object from which the attribute
            # was obtained, because we need to pass it as 'self'.
            self.self = function.obj
            function.obj = CloneNode(self.self)

        func_type = self.function_type()
        self.is_numpy_call_with_exprs = False
        if (has_np_pythran(env) and function.is_numpy_attribute and
                pythran_is_numpy_func_supported(function)):
            has_pythran_args = True
            self.arg_tuple = TupleNode(self.pos, args = self.args)
            self.arg_tuple = self.arg_tuple.analyse_types(env)
            for arg in self.arg_tuple.args:
                has_pythran_args &= is_pythran_supported_node_or_none(arg)
            self.is_numpy_call_with_exprs = bool(has_pythran_args)
        if self.is_numpy_call_with_exprs:
            env.add_include_file(pythran_get_func_include_file(function))
            return NumPyMethodCallNode.from_node(
                self,
                function=function,
                arg_tuple=self.arg_tuple,
                type=PythranExpr(pythran_func_type(function, self.arg_tuple.args)),
            )
        elif func_type.is_pyobject:
            self.arg_tuple = TupleNode(self.pos, args = self.args)
            self.arg_tuple = self.arg_tuple.analyse_types(env).coerce_to_pyobject(env)
            self.args = None
            self.set_py_result_type(function, func_type)
            self.is_temp = 1
        else:
            self.args = [ arg.analyse_types(env) for arg in self.args ]
            self.analyse_c_function_call(env)
            if func_type.exception_check == '+':
                self.is_temp = True
        return self

    def function_type(self):
        # Return the type of the function being called, coercing a function
        # pointer to a function if necessary. If the function has fused
        # arguments, return the specific type.
        func_type = self.function.type

        if func_type.is_ptr:
            func_type = func_type.base_type

        return func_type

    def analyse_c_function_call(self, env):
        func_type = self.function.type
        if func_type is error_type:
            self.type = error_type
            return

        if func_type.is_cfunction and func_type.is_static_method:
            if self.self and self.self.type.is_extension_type:
                # To support this we'd need to pass self to determine whether
                # it was overloaded in Python space (possibly via a Cython
                # superclass turning a cdef method into a cpdef one).
                error(self.pos, "Cannot call a static method on an instance variable.")
            args = self.args
        elif self.self:
            args = [self.self] + self.args
        else:
            args = self.args

        if func_type.is_cpp_class:
            overloaded_entry = self.function.type.scope.lookup("operator()")
            if overloaded_entry is None:
                self.type = PyrexTypes.error_type
                self.result_code = "<error>"
                return
        elif hasattr(self.function, 'entry'):
            overloaded_entry = self.function.entry
        elif self.function.is_subscript and self.function.is_fused_index:
            overloaded_entry = self.function.type.entry
        else:
            overloaded_entry = None

        if overloaded_entry:
            if self.function.type.is_fused:
                functypes = self.function.type.get_all_specialized_function_types()
                alternatives = [f.entry for f in functypes]
            else:
                alternatives = overloaded_entry.all_alternatives()

            entry = PyrexTypes.best_match(
                [arg.type for arg in args], alternatives, self.pos, env, args)

            if not entry:
                self.type = PyrexTypes.error_type
                self.result_code = "<error>"
                return

            entry.used = True
            if not func_type.is_cpp_class:
                self.function.entry = entry
            self.function.type = entry.type
            func_type = self.function_type()
        else:
            entry = None
            func_type = self.function_type()
            if not func_type.is_cfunction:
                error(self.pos, "Calling non-function type '%s'" % func_type)
                self.type = PyrexTypes.error_type
                self.result_code = "<error>"
                return

        # Check no. of args
        max_nargs = len(func_type.args)
        expected_nargs = max_nargs - func_type.optional_arg_count
        actual_nargs = len(args)
        if func_type.optional_arg_count and expected_nargs != actual_nargs:
            self.has_optional_args = 1
            self.is_temp = 1

        # check 'self' argument
        if entry and entry.is_cmethod and func_type.args and not func_type.is_static_method:
            formal_arg = func_type.args[0]
            arg = args[0]
            if formal_arg.not_none:
                if self.self:
                    self.self = self.self.as_none_safe_node(
                        "'NoneType' object has no attribute '%{0}s'".format('.30' if len(entry.name) <= 30 else ''),
                        error='PyExc_AttributeError',
                        format_args=[entry.name])
                else:
                    # unbound method
                    arg = arg.as_none_safe_node(
                        "descriptor '%s' requires a '%s' object but received a 'NoneType'",
                        format_args=[entry.name, formal_arg.type.name])
            if self.self:
                if formal_arg.accept_builtin_subtypes:
                    arg = CMethodSelfCloneNode(self.self)
                else:
                    arg = CloneNode(self.self)
                arg = self.coerced_self = arg.coerce_to(formal_arg.type, env)
            elif formal_arg.type.is_builtin_type:
                # special case: unbound methods of builtins accept subtypes
                arg = arg.coerce_to(formal_arg.type, env)
                if arg.type.is_builtin_type and isinstance(arg, PyTypeTestNode):
                    arg.exact_builtin_type = False
            args[0] = arg

        # Coerce arguments
        some_args_in_temps = False
        for i in range(min(max_nargs, actual_nargs)):
            formal_arg = func_type.args[i]
            formal_type = formal_arg.type
            arg = args[i].coerce_to(formal_type, env)
            if formal_arg.not_none:
                # C methods must do the None checks at *call* time
                arg = arg.as_none_safe_node(
                    "cannot pass None into a C function argument that is declared 'not None'")
            if arg.is_temp:
                if i > 0:
                    # first argument in temp doesn't impact subsequent arguments
                    some_args_in_temps = True
            elif arg.type.is_pyobject and not env.nogil:
                if i == 0 and self.self is not None:
                    # a method's cloned "self" argument is ok
                    pass
                elif arg.nonlocally_immutable():
                    # plain local variables are ok
                    pass
                else:
                    # we do not safely own the argument's reference,
                    # but we must make sure it cannot be collected
                    # before we return from the function, so we create
                    # an owned temp reference to it
                    if i > 0:  # first argument doesn't matter
                        some_args_in_temps = True
                    arg = arg.coerce_to_temp(env)
            args[i] = arg

        # handle additional varargs parameters
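        # Illustrative note (not part of the original source): e.g. given
        #     cdef extern from "stdio.h":
        #         int printf(const char *format, ...)
        # each extra argument must end up with a C type in the loop below:
        # str coerces to char*, other Python objects are rejected.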
        for i in range(max_nargs, actual_nargs):
            arg = args[i]
            if arg.type.is_pyobject:
                if arg.type is str_type:
                    arg_ctype = PyrexTypes.c_char_ptr_type
                else:
                    arg_ctype = arg.type.default_coerced_ctype()
                if arg_ctype is None:
                    error(self.args[i].pos,
                          "Python object cannot be passed as a varargs parameter")
                else:
                    args[i] = arg = arg.coerce_to(arg_ctype, env)
            if arg.is_temp and i > 0:
                some_args_in_temps = True

        if some_args_in_temps:
            # if some args are temps and others are not, they may get
            # constructed in the wrong order (temps first) => make
            # sure they are either all temps or all not temps (except
            # for the last argument, which is evaluated last in any
            # case)
            for i in range(actual_nargs-1):
                if i == 0 and self.self is not None:
                    continue  # self is ok
                arg = args[i]
                if arg.nonlocally_immutable():
                    # locals, C functions, unassignable types are safe.
                    pass
                elif arg.type.is_cpp_class:
                    # Assignment has side effects, avoid.
                    pass
                elif env.nogil and arg.type.is_pyobject:
                    # can't copy a Python reference into a temp in nogil
                    # env (this is safe: a construction would fail in
                    # nogil anyway)
                    pass
                else:
                    #self.args[i] = arg.coerce_to_temp(env)
                    # instead: issue a warning
                    if i > 0 or i == 1 and self.self is not None:  # skip first arg
                        warning(arg.pos, "Argument evaluation order in C function call is undefined and may not be as expected", 0)
                        break

        self.args[:] = args

        # Calc result type and code fragment
        if isinstance(self.function, NewExprNode):
            self.type = PyrexTypes.CPtrType(self.function.class_type)
        else:
            self.type = func_type.return_type

        if self.function.is_name or self.function.is_attribute:
            func_entry = self.function.entry
            if func_entry and (func_entry.utility_code or func_entry.utility_code_definition):
                self.is_temp = 1  # currently doesn't work for self.calculate_result_code()

        if self.type.is_pyobject:
            self.result_ctype = py_object_type
            self.is_temp = 1
        elif func_type.exception_value is not None or func_type.exception_check:
            self.is_temp = 1
        elif self.type.is_memoryviewslice:
            self.is_temp = 1
            # func_type.exception_check = True

        if self.is_temp and self.type.is_reference:
            self.type = PyrexTypes.CFakeReferenceType(self.type.ref_base_type)

        # Called in 'nogil' context?
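        # Illustrative note (not part of the original source): for a signature
        # like  cdef int f(int x) nogil except -1  the error check cannot call
        # PyErr_Occurred() without the GIL, so generate_result_code() emits
        # __Pyx_ErrOccurredWithGIL() instead, which re-acquires the GIL briefly.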
        self.nogil = env.nogil
        if (self.nogil and
                func_type.exception_check and
                func_type.exception_check != '+'):
            env.use_utility_code(pyerr_occurred_withgil_utility_code)
        # C++ exception handler
        if func_type.exception_check == '+':
            if func_type.exception_value is None:
                env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp"))

        self.overflowcheck = env.directives['overflowcheck']

    def calculate_result_code(self):
        return self.c_call_code()

    def c_call_code(self):
        func_type = self.function_type()
        if self.type is PyrexTypes.error_type or not func_type.is_cfunction:
            return "<error>"

        formal_args = func_type.args
        arg_list_code = []
        args = list(zip(formal_args, self.args))
        max_nargs = len(func_type.args)
        expected_nargs = max_nargs - func_type.optional_arg_count
        actual_nargs = len(self.args)
        for formal_arg, actual_arg in args[:expected_nargs]:
            arg_code = actual_arg.result_as(formal_arg.type)
            arg_list_code.append(arg_code)

        if func_type.is_overridable:
            arg_list_code.append(str(int(self.wrapper_call or self.function.entry.is_unbound_cmethod)))

        if func_type.optional_arg_count:
            if expected_nargs == actual_nargs:
                optional_args = 'NULL'
            else:
                optional_args = "&%s" % self.opt_arg_struct
            arg_list_code.append(optional_args)

        for actual_arg in self.args[len(formal_args):]:
            arg_list_code.append(actual_arg.result())

        result = "%s(%s)" % (self.function.result(), ', '.join(arg_list_code))
        return result

    def is_c_result_required(self):
        func_type = self.function_type()
        if not func_type.exception_value or func_type.exception_check == '+':
            return False  # skip allocation of unused result temp
        return True

    def generate_evaluation_code(self, code):
        function = self.function
        if function.is_name or function.is_attribute:
            code.globalstate.use_entry_utility_code(function.entry)

        if not function.type.is_pyobject or len(self.arg_tuple.args) > 1 or (
                self.arg_tuple.args and self.arg_tuple.is_literal):
            super(SimpleCallNode, self).generate_evaluation_code(code)
            return

        # Special case 0-args and try to avoid explicit tuple creation for Python calls with 1 arg.
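        # Illustrative note (not part of the original source): for a plain
        # Python callable f, the fast paths below emit roughly
        #     __pyx_t_1 = __Pyx_PyObject_CallNoArg(f);      /* f()  */
        #     __pyx_t_1 = __Pyx_PyObject_CallOneArg(f, x);  /* f(x) */
        # instead of packing an argument tuple; the temp name is hypothetical.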
        arg = self.arg_tuple.args[0] if self.arg_tuple.args else None
        subexprs = (self.self, self.coerced_self, function, arg)
        for subexpr in subexprs:
            if subexpr is not None:
                subexpr.generate_evaluation_code(code)

        code.mark_pos(self.pos)
        assert self.is_temp
        self.allocate_temp_result(code)

        if arg is None:
            code.globalstate.use_utility_code(UtilityCode.load_cached(
                "PyObjectCallNoArg", "ObjectHandling.c"))
            code.putln(
                "%s = __Pyx_PyObject_CallNoArg(%s); %s" % (
                    self.result(),
                    function.py_result(),
                    code.error_goto_if_null(self.result(), self.pos)))
        else:
            code.globalstate.use_utility_code(UtilityCode.load_cached(
                "PyObjectCallOneArg", "ObjectHandling.c"))
            code.putln(
                "%s = __Pyx_PyObject_CallOneArg(%s, %s); %s" % (
                    self.result(),
                    function.py_result(),
                    arg.py_result(),
                    code.error_goto_if_null(self.result(), self.pos)))

        code.put_gotref(self.py_result())

        for subexpr in subexprs:
            if subexpr is not None:
                subexpr.generate_disposal_code(code)
                subexpr.free_temps(code)

    def generate_result_code(self, code):
        func_type = self.function_type()
        if func_type.is_pyobject:
            arg_code = self.arg_tuple.py_result()
            code.globalstate.use_utility_code(UtilityCode.load_cached(
                "PyObjectCall", "ObjectHandling.c"))
            code.putln(
                "%s = __Pyx_PyObject_Call(%s, %s, NULL); %s" % (
                    self.result(),
                    self.function.py_result(),
                    arg_code,
                    code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
        elif func_type.is_cfunction:
            if self.has_optional_args:
                actual_nargs = len(self.args)
                expected_nargs = len(func_type.args) - func_type.optional_arg_count
                self.opt_arg_struct = code.funcstate.allocate_temp(
                    func_type.op_arg_struct.base_type, manage_ref=True)
                code.putln("%s.%s = %s;" % (
                        self.opt_arg_struct,
                        Naming.pyrex_prefix + "n",
                        len(self.args) - expected_nargs))
                args = list(zip(func_type.args, self.args))
                for formal_arg, actual_arg in args[expected_nargs:actual_nargs]:
                    code.putln("%s.%s = %s;" % (
                            self.opt_arg_struct,
                            func_type.opt_arg_cname(formal_arg.name),
                            actual_arg.result_as(formal_arg.type)))
            exc_checks = []
            if self.type.is_pyobject and self.is_temp:
                exc_checks.append("!%s" % self.result())
            elif self.type.is_memoryviewslice:
                assert self.is_temp
                exc_checks.append(self.type.error_condition(self.result()))
            elif func_type.exception_check != '+':
                exc_val = func_type.exception_value
                exc_check = func_type.exception_check
                if exc_val is not None:
                    exc_checks.append("%s == %s" % (
                        self.result(), func_type.return_type.cast_code(exc_val)))
                if exc_check:
                    if self.nogil:
                        exc_checks.append("__Pyx_ErrOccurredWithGIL()")
                    else:
                        exc_checks.append("PyErr_Occurred()")
            if self.is_temp or exc_checks:
                rhs = self.c_call_code()
                if self.result():
                    lhs = "%s = " % self.result()
                    if self.is_temp and self.type.is_pyobject:
                        #return_type = self.type # func_type.return_type
                        #print "SimpleCallNode.generate_result_code: casting", rhs, \
                        #    "from", return_type, "to pyobject" ###
                        rhs = typecast(py_object_type, self.type, rhs)
                else:
                    lhs = ""
                if func_type.exception_check == '+':
                    translate_cpp_exception(
                        code, self.pos, '%s%s;' % (lhs, rhs),
                        self.result() if self.type.is_pyobject else None,
                        func_type.exception_value, self.nogil)
                else:
                    if (self.overflowcheck
                            and self.type.is_int
                            and self.type.signed
                            and self.function.result() in ('abs', 'labs', '__Pyx_abs_longlong')):
                        goto_error = 'if (unlikely(%s < 0)) { PyErr_SetString(PyExc_OverflowError, "value too large"); %s; }' % (
                            self.result(), code.error_goto(self.pos))
                    elif exc_checks:
                        goto_error = code.error_goto_if(" && ".join(exc_checks), self.pos)
                    else:
                        goto_error = ""
                    code.putln("%s%s; %s" % (lhs, rhs, goto_error))
                if self.type.is_pyobject and self.result():
                    code.put_gotref(self.py_result())
            if self.has_optional_args:
                code.funcstate.release_temp(self.opt_arg_struct)


class NumPyMethodCallNode(SimpleCallNode):
    # Pythran call to a NumPy function or method.
    #
    # function    ExprNode      the function/method to call
    # arg_tuple   TupleNode     the arguments as an args tuple

    subexprs = ['function', 'arg_tuple']
    is_temp = True
    may_return_none = True

    def generate_evaluation_code(self, code):
        code.mark_pos(self.pos)
        self.allocate_temp_result(code)

        self.function.generate_evaluation_code(code)
        assert self.arg_tuple.mult_factor is None
        args = self.arg_tuple.args
        for arg in args:
            arg.generate_evaluation_code(code)

        code.putln("// function evaluation code for numpy function")
        code.putln("__Pyx_call_destructor(%s);" % self.result())
        code.putln("new (&%s) decltype(%s){%s{}(%s)};" % (
            self.result(),
            self.result(),
            pythran_functor(self.function),
            ", ".join(a.pythran_result() for a in args)))


class PyMethodCallNode(SimpleCallNode):
    # Specialised call to a (potential) PyMethodObject with non-constant argument tuple.
    # Allows the self argument to be injected directly instead of repacking a tuple for it.
    #
    # function    ExprNode      the function/method object to call
    # arg_tuple   TupleNode     the arguments for the args tuple

    subexprs = ['function', 'arg_tuple']
    is_temp = True

    def generate_evaluation_code(self, code):
        code.mark_pos(self.pos)
        self.allocate_temp_result(code)

        self.function.generate_evaluation_code(code)
        assert self.arg_tuple.mult_factor is None
        args = self.arg_tuple.args
        for arg in args:
            arg.generate_evaluation_code(code)

        # make sure function is in temp so that we can replace the reference below if it's a method
        reuse_function_temp = self.function.is_temp
        if reuse_function_temp:
            function = self.function.result()
        else:
            function = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
            self.function.make_owned_reference(code)
            code.put("%s = %s; " % (function, self.function.py_result()))
            self.function.generate_disposal_code(code)
            self.function.free_temps(code)

        self_arg = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
        code.putln("%s = NULL;" % self_arg)
        arg_offset_cname = None
        if len(args) > 1:
            arg_offset_cname = code.funcstate.allocate_temp(PyrexTypes.c_int_type, manage_ref=False)
            code.putln("%s = 0;" % arg_offset_cname)

        def attribute_is_likely_method(attr):
            obj = attr.obj
            if obj.is_name and obj.entry.is_pyglobal:
                return False  # more likely to be a function
            return True

        if self.function.is_attribute:
            likely_method = 'likely' if attribute_is_likely_method(self.function) else 'unlikely'
        elif self.function.is_name and self.function.cf_state:
            # not an attribute itself, but might have been assigned from one (e.g. bound method)
            for assignment in self.function.cf_state:
                value = assignment.rhs
                if value and value.is_attribute and value.obj.type.is_pyobject:
                    if attribute_is_likely_method(value):
                        likely_method = 'likely'
                        break
            else:
                likely_method = 'unlikely'
        else:
            likely_method = 'unlikely'

        code.putln("if (CYTHON_UNPACK_METHODS && %s(PyMethod_Check(%s))) {" % (likely_method, function))
        code.putln("%s = PyMethod_GET_SELF(%s);" % (self_arg, function))
        # the following is always true in Py3 (kept only for safety),
        # but is false for unbound methods in Py2
        code.putln("if (likely(%s)) {" % self_arg)
        code.putln("PyObject* function = PyMethod_GET_FUNCTION(%s);" % function)
        code.put_incref(self_arg, py_object_type)
        code.put_incref("function", py_object_type)
        # free method object as early as possible to enable reuse from CPython's freelist
        code.put_decref_set(function, "function")
        if len(args) > 1:
            code.putln("%s = 1;" % arg_offset_cname)
        code.putln("}")
        code.putln("}")

        if not args:
            # fastest special case: try to avoid tuple creation
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectCallNoArg", "ObjectHandling.c"))
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectCallOneArg", "ObjectHandling.c"))
            code.putln(
                "%s = (%s) ? __Pyx_PyObject_CallOneArg(%s, %s) : __Pyx_PyObject_CallNoArg(%s);" % (
                    self.result(), self_arg,
                    function, self_arg,
                    function))
            code.put_xdecref_clear(self_arg, py_object_type)
            code.funcstate.release_temp(self_arg)
            code.putln(code.error_goto_if_null(self.result(), self.pos))
            code.put_gotref(self.py_result())
        elif len(args) == 1:
            # fastest special case: try to avoid tuple creation
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectCall2Args", "ObjectHandling.c"))
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectCallOneArg", "ObjectHandling.c"))
            arg = args[0]
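            # Illustrative note (not part of the original source): if the
            # callable was unpacked as a bound method above, self_arg is
            # non-NULL and the generated C passes it as the leading argument:
            #     r = (self) ? __Pyx_PyObject_Call2Args(m, self, x)
            #                : __Pyx_PyObject_CallOneArg(m, x);
            # (names hypothetical), again avoiding an argument tuple.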
            code.putln(
                "%s = (%s) ? __Pyx_PyObject_Call2Args(%s, %s, %s) : __Pyx_PyObject_CallOneArg(%s, %s);" % (
                    self.result(), self_arg,
                    function, self_arg, arg.py_result(),
                    function, arg.py_result()))
            code.put_xdecref_clear(self_arg, py_object_type)
            code.funcstate.release_temp(self_arg)
            arg.generate_disposal_code(code)
            arg.free_temps(code)
            code.putln(code.error_goto_if_null(self.result(), self.pos))
            code.put_gotref(self.py_result())
        else:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyFunctionFastCall", "ObjectHandling.c"))
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyCFunctionFastCall", "ObjectHandling.c"))
            for test_func, call_prefix in [('PyFunction_Check', 'Py'), ('__Pyx_PyFastCFunction_Check', 'PyC')]:
                code.putln("#if CYTHON_FAST_%sCALL" % call_prefix.upper())
                code.putln("if (%s(%s)) {" % (test_func, function))
                code.putln("PyObject *%s[%d] = {%s, %s};" % (
                    Naming.quick_temp_cname,
                    len(args)+1,
                    self_arg,
                    ', '.join(arg.py_result() for arg in args)))
                code.putln("%s = __Pyx_%sFunction_FastCall(%s, %s+1-%s, %d+%s); %s" % (
                    self.result(),
                    call_prefix,
                    function,
                    Naming.quick_temp_cname,
                    arg_offset_cname,
                    len(args),
                    arg_offset_cname,
                    code.error_goto_if_null(self.result(), self.pos)))
                code.put_xdecref_clear(self_arg, py_object_type)
                code.put_gotref(self.py_result())
                for arg in args:
                    arg.generate_disposal_code(code)
                code.putln("} else")
                code.putln("#endif")

            code.putln("{")
            args_tuple = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
            code.putln("%s = PyTuple_New(%d+%s); %s" % (
                args_tuple, len(args), arg_offset_cname,
                code.error_goto_if_null(args_tuple, self.pos)))
            code.put_gotref(args_tuple)

            if len(args) > 1:
                code.putln("if (%s) {" % self_arg)
            code.putln("__Pyx_GIVEREF(%s); PyTuple_SET_ITEM(%s, 0, %s); %s = NULL;" % (
                self_arg, args_tuple, self_arg, self_arg))  # stealing owned ref in this case
            code.funcstate.release_temp(self_arg)
            if len(args) > 1:
                code.putln("}")

            for i, arg in enumerate(args):
                arg.make_owned_reference(code)
                code.put_giveref(arg.py_result())
                code.putln("PyTuple_SET_ITEM(%s, %d+%s, %s);" % (
                    args_tuple, i, arg_offset_cname, arg.py_result()))
            if len(args) > 1:
                code.funcstate.release_temp(arg_offset_cname)

            for arg in args:
                arg.generate_post_assignment_code(code)
                arg.free_temps(code)

            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectCall", "ObjectHandling.c"))
            code.putln(
                "%s = __Pyx_PyObject_Call(%s, %s, NULL); %s" % (
                    self.result(),
                    function, args_tuple,
                    code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())

            code.put_decref_clear(args_tuple, py_object_type)
            code.funcstate.release_temp(args_tuple)

            if len(args) == 1:
                code.putln("}")
            code.putln("}")  # !CYTHON_FAST_PYCALL

        if reuse_function_temp:
            self.function.generate_disposal_code(code)
            self.function.free_temps(code)
        else:
            code.put_decref_clear(function, py_object_type)
            code.funcstate.release_temp(function)


class InlinedDefNodeCallNode(CallNode):
    #  Inline call to defnode
    #
    #  function       PyCFunctionNode
    #  function_name  NameNode
    #  args           [ExprNode]

    subexprs = ['args', 'function_name']
    is_temp = 1
    type = py_object_type
    function = None
    function_name = None

    def can_be_inlined(self):
        func_type = self.function.def_node
        if func_type.star_arg or func_type.starstar_arg:
            return False
        if len(func_type.args) != len(self.args):
            return False
        if func_type.num_kwonly_args:
            return False  # actually wrong number of arguments
        return True

    def analyse_types(self, env):
        self.function_name = self.function_name.analyse_types(env)
        self.args = [ arg.analyse_types(env) for arg in self.args ]
        func_type = self.function.def_node
        actual_nargs = len(self.args)

        # Coerce arguments
        some_args_in_temps = False
        for i in range(actual_nargs):
            formal_type = func_type.args[i].type
            arg = self.args[i].coerce_to(formal_type, env)
            if arg.is_temp:
                if i > 0:
                    # first argument in temp doesn't impact subsequent arguments
                    some_args_in_temps = True
            elif arg.type.is_pyobject and not env.nogil:
                if arg.nonlocally_immutable():
                    # plain local variables are ok
                    pass
                else:
                    # we do not safely own the argument's reference,
                    # but we must make sure it cannot be collected
                    # before we return from the function, so we create
                    # an owned temp reference to it
                    if i > 0:  # first argument doesn't matter
                        some_args_in_temps = True
                    arg = arg.coerce_to_temp(env)
            self.args[i] = arg

        if some_args_in_temps:
            # if some args are temps and others are not, they may get
            # constructed in the wrong order (temps first) => make
            # sure they are either all temps or all not temps (except
            # for the last argument, which is evaluated last in any
            # case)
            for i in range(actual_nargs-1):
                arg = self.args[i]
                if arg.nonlocally_immutable():
                    # locals, C functions, unassignable types are safe.
                    pass
                elif arg.type.is_cpp_class:
                    # Assignment has side effects, avoid.
                    pass
                elif env.nogil and arg.type.is_pyobject:
                    # can't copy a Python reference into a temp in nogil
                    # env (this is safe: a construction would fail in
                    # nogil anyway)
                    pass
                else:
                    #self.args[i] = arg.coerce_to_temp(env)
                    # instead: issue a warning
                    if i > 0:
                        warning(arg.pos, "Argument evaluation order in C function call is undefined and may not be as expected", 0)
                        break
        return self

    def generate_result_code(self, code):
        arg_code = [self.function_name.py_result()]
        func_type = self.function.def_node
        for arg, proto_arg in zip(self.args, func_type.args):
            if arg.type.is_pyobject:
                arg_code.append(arg.result_as(proto_arg.type))
            else:
                arg_code.append(arg.result())
        arg_code = ', '.join(arg_code)
        code.putln(
            "%s = %s(%s); %s" % (
                self.result(),
                self.function.def_node.entry.pyfunc_cname,
                arg_code,
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())


class PythonCapiFunctionNode(ExprNode):
    subexprs = []

    def __init__(self, pos, py_name, cname, func_type, utility_code=None):
        ExprNode.__init__(self, pos, name=py_name, cname=cname,
                          type=func_type, utility_code=utility_code)

    def analyse_types(self, env):
        return self

    def generate_result_code(self, code):
        if self.utility_code:
            code.globalstate.use_utility_code(self.utility_code)

    def calculate_result_code(self):
        return self.cname


class PythonCapiCallNode(SimpleCallNode):
    # Python C-API Function call (only created in transforms)

    # By default, we assume that the call never returns None, as this
    # is true for most C-API functions in CPython. If this does not
    # apply to a call, set the following to True (or None to inherit
    # the default behaviour).
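    #
    # Illustrative note (not part of the original source): optimisation
    # transforms typically instantiate this node along the lines of
    #     PythonCapiCallNode(node.pos, "PyDict_Copy", copy_func_type,
    #                        args=[dict_arg], is_temp=node.is_temp)
    # where copy_func_type and dict_arg are hypothetical placeholders.
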
    may_return_none = False

    def __init__(self, pos, function_name, func_type, utility_code=None, py_name=None, **kwargs):
        self.type = func_type.return_type
        self.result_ctype = self.type
        self.function = PythonCapiFunctionNode(
            pos, py_name, function_name, func_type,
            utility_code=utility_code)
        # call this last so that we can override the constructed
        # attributes above with explicit keyword arguments if required
        SimpleCallNode.__init__(self, pos, **kwargs)


class CachedBuiltinMethodCallNode(CallNode):
    # Python call to a method of a known Python builtin (only created in transforms)

    subexprs = ['obj', 'args']
    is_temp = True

    def __init__(self, call_node, obj, method_name, args):
        super(CachedBuiltinMethodCallNode, self).__init__(
            call_node.pos,
            obj=obj, method_name=method_name, args=args,
            may_return_none=call_node.may_return_none,
            type=call_node.type)

    def may_be_none(self):
        if self.may_return_none is not None:
            return self.may_return_none
        return ExprNode.may_be_none(self)

    def generate_result_code(self, code):
        type_cname = self.obj.type.cname
        obj_cname = self.obj.py_result()
        args = [arg.py_result() for arg in self.args]
        call_code = code.globalstate.cached_unbound_method_call_code(
            obj_cname, type_cname, self.method_name, args)
        code.putln("%s = %s; %s" % (
            self.result(), call_code,
            code.error_goto_if_null(self.result(), self.pos)
        ))
        code.put_gotref(self.result())


class GeneralCallNode(CallNode):
    #  General Python function call, including keyword,
    #  * and ** arguments.
    #
    #  function         ExprNode
    #  positional_args  ExprNode          Tuple of positional arguments
    #  keyword_args     ExprNode or None  Dict of keyword arguments

    type = py_object_type

    subexprs = ['function', 'positional_args', 'keyword_args']

    nogil_check = Node.gil_error

    def compile_time_value(self, denv):
        function = self.function.compile_time_value(denv)
        positional_args = self.positional_args.compile_time_value(denv)
        keyword_args = self.keyword_args.compile_time_value(denv)
        try:
            return function(*positional_args, **keyword_args)
        except Exception as e:
            self.compile_time_value_error(e)

    def explicit_args_kwds(self):
        if (self.keyword_args and not self.keyword_args.is_dict_literal or
                not self.positional_args.is_sequence_constructor):
            raise CompileError(self.pos,
                'Compile-time keyword arguments must be explicit.')
        return self.positional_args.args, self.keyword_args

    def analyse_types(self, env):
        if self.analyse_as_type_constructor(env):
            return self
        self.function = self.function.analyse_types(env)
        if not self.function.type.is_pyobject:
            if self.function.type.is_error:
                self.type = error_type
                return self
            if hasattr(self.function, 'entry'):
                node = self.map_to_simple_call_node()
                if node is not None and node is not self:
                    return node.analyse_types(env)
                elif self.function.entry.as_variable:
                    self.function = self.function.coerce_to_pyobject(env)
                elif node is self:
                    error(self.pos,
                          "Non-trivial keyword arguments and starred "
                          "arguments not allowed in cdef functions.")
                else:
                    # error was already reported
                    pass
            else:
                self.function = self.function.coerce_to_pyobject(env)
        if self.keyword_args:
            self.keyword_args = self.keyword_args.analyse_types(env)
        self.positional_args = self.positional_args.analyse_types(env)
        self.positional_args = \
            self.positional_args.coerce_to_pyobject(env)
        self.set_py_result_type(self.function)
        self.is_temp = 1
        return self

    def map_to_simple_call_node(self):
        """
        Tries to map keyword arguments to declared positional arguments.
        Returns self to try a Python call, None to report an error
        or a SimpleCallNode if the mapping succeeds.
        """
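        # Illustrative example (not part of the original source): for
        #     cdef int add(int a, int b)
        # a call  add(b=2, a=1)  is rewritten here into the positional call
        # add(1, 2), moving non-simple argument expressions into temps so
        # their evaluation order is preserved.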
""" if not isinstance(self.positional_args, TupleNode): # has starred argument return self if not self.keyword_args.is_dict_literal: # keywords come from arbitrary expression => nothing to do here return self function = self.function entry = getattr(function, 'entry', None) if not entry: return self function_type = entry.type if function_type.is_ptr: function_type = function_type.base_type if not function_type.is_cfunction: return self pos_args = self.positional_args.args kwargs = self.keyword_args declared_args = function_type.args if entry.is_cmethod: declared_args = declared_args[1:] # skip 'self' if len(pos_args) > len(declared_args): error(self.pos, "function call got too many positional arguments, " "expected %d, got %s" % (len(declared_args), len(pos_args))) return None matched_args = set([ arg.name for arg in declared_args[:len(pos_args)] if arg.name ]) unmatched_args = declared_args[len(pos_args):] matched_kwargs_count = 0 args = list(pos_args) # check for duplicate keywords seen = set(matched_args) has_errors = False for arg in kwargs.key_value_pairs: name = arg.key.value if name in seen: error(arg.pos, "argument '%s' passed twice" % name) has_errors = True # continue to report more errors if there are any seen.add(name) # match keywords that are passed in order for decl_arg, arg in zip(unmatched_args, kwargs.key_value_pairs): name = arg.key.value if decl_arg.name == name: matched_args.add(name) matched_kwargs_count += 1 args.append(arg.value) else: break # match keyword arguments that are passed out-of-order, but keep # the evaluation of non-simple arguments in order by moving them # into temps from .UtilNodes import EvalWithTempExprNode, LetRefNode temps = [] if len(kwargs.key_value_pairs) > matched_kwargs_count: unmatched_args = declared_args[len(args):] keywords = dict([ (arg.key.value, (i+len(pos_args), arg)) for i, arg in enumerate(kwargs.key_value_pairs) ]) first_missing_keyword = None for decl_arg in unmatched_args: name = decl_arg.name if name not in keywords: # missing keyword argument => either done or error if not first_missing_keyword: first_missing_keyword = name continue elif first_missing_keyword: if entry.as_variable: # we might be able to convert the function to a Python # object, which then allows full calling semantics # with default values in gaps - currently, we only # support optional arguments at the end return self # wasn't the last keyword => gaps are not supported error(self.pos, "C function call is missing " "argument '%s'" % first_missing_keyword) return None pos, arg = keywords[name] matched_args.add(name) matched_kwargs_count += 1 if arg.value.is_simple(): args.append(arg.value) else: temp = LetRefNode(arg.value) assert temp.is_simple() args.append(temp) temps.append((pos, temp)) if temps: # may have to move preceding non-simple args into temps final_args = [] new_temps = [] first_temp_arg = temps[0][-1] for arg_value in args: if arg_value is first_temp_arg: break # done if arg_value.is_simple(): final_args.append(arg_value) else: temp = LetRefNode(arg_value) new_temps.append(temp) final_args.append(temp) if new_temps: args = final_args temps = new_temps + [ arg for i,arg in sorted(temps) ] # check for unexpected keywords for arg in kwargs.key_value_pairs: name = arg.key.value if name not in matched_args: has_errors = True error(arg.pos, "C function got unexpected keyword argument '%s'" % name) if has_errors: # error was reported already return None # all keywords mapped to positional arguments # if we are missing arguments, SimpleCallNode will 
        node = SimpleCallNode(self.pos, function=function, args=args)
        for temp in temps[::-1]:
            node = EvalWithTempExprNode(temp, node)
        return node

    def generate_result_code(self, code):
        if self.type.is_error: return
        if self.keyword_args:
            kwargs = self.keyword_args.py_result()
        else:
            kwargs = 'NULL'
        code.globalstate.use_utility_code(UtilityCode.load_cached(
            "PyObjectCall", "ObjectHandling.c"))
        code.putln(
            "%s = __Pyx_PyObject_Call(%s, %s, %s); %s" % (
                self.result(),
                self.function.py_result(),
                self.positional_args.py_result(),
                kwargs,
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())


class AsTupleNode(ExprNode):
    #  Convert argument to tuple. Used for normalising
    #  the * argument of a function call.
    #
    #  arg    ExprNode

    subexprs = ['arg']
    is_temp = 1

    def calculate_constant_result(self):
        self.constant_result = tuple(self.arg.constant_result)

    def compile_time_value(self, denv):
        arg = self.arg.compile_time_value(denv)
        try:
            return tuple(arg)
        except Exception as e:
            self.compile_time_value_error(e)

    def analyse_types(self, env):
        self.arg = self.arg.analyse_types(env).coerce_to_pyobject(env)
        if self.arg.type is tuple_type:
            return self.arg.as_none_safe_node("'NoneType' object is not iterable")
        self.type = tuple_type
        return self

    def may_be_none(self):
        return False

    nogil_check = Node.gil_error
    gil_message = "Constructing Python tuple"

    def generate_result_code(self, code):
        cfunc = "__Pyx_PySequence_Tuple" if self.arg.type in (py_object_type, tuple_type) \
            else "PySequence_Tuple"
        code.putln(
            "%s = %s(%s); %s" % (
                self.result(),
                cfunc, self.arg.py_result(),
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())


class MergedDictNode(ExprNode):
    #  Helper class for keyword arguments and other merged dicts.
    #
    #  keyword_args      [DictNode or other ExprNode]

    subexprs = ['keyword_args']
    is_temp = 1
    type = dict_type
    reject_duplicates = True

    def calculate_constant_result(self):
        result = {}
        reject_duplicates = self.reject_duplicates
        for item in self.keyword_args:
            if item.is_dict_literal:
                # process items in order
                items = ((key.constant_result, value.constant_result)
                         for key, value in item.key_value_pairs)
            else:
                items = item.constant_result.iteritems()

            for key, value in items:
                if reject_duplicates and key in result:
                    raise ValueError("duplicate keyword argument found: %s" % key)
                result[key] = value

        self.constant_result = result

    def compile_time_value(self, denv):
        result = {}
        reject_duplicates = self.reject_duplicates
        for item in self.keyword_args:
            if item.is_dict_literal:
                # process items in order
                items = [(key.compile_time_value(denv), value.compile_time_value(denv))
                         for key, value in item.key_value_pairs]
            else:
                items = item.compile_time_value(denv).iteritems()

            try:
                for key, value in items:
                    if reject_duplicates and key in result:
                        raise ValueError("duplicate keyword argument found: %s" % key)
                    result[key] = value
            except Exception as e:
                self.compile_time_value_error(e)

        return result

    def type_dependencies(self, env):
        return ()

    def infer_type(self, env):
        return dict_type

    def analyse_types(self, env):
        args = [
            arg.analyse_types(env).coerce_to_pyobject(env).as_none_safe_node(
                # FIXME: CPython's error message starts with the runtime function name
                'argument after ** must be a mapping, not NoneType')
            for arg in self.keyword_args
        ]

        if len(args) == 1 and args[0].type is dict_type:
            # strip this intermediate node and use the bare dict
            arg = args[0]
            if arg.is_name and arg.entry.is_arg and len(arg.entry.cf_assignments) == 1:
                # passing **kwargs through to function call => allow NULL
                arg.allow_null = True
            return arg

        self.keyword_args = args
        return self

    def may_be_none(self):
        return False

    gil_message = "Constructing Python dict"

    def generate_evaluation_code(self, code):
        code.mark_pos(self.pos)
        self.allocate_temp_result(code)

        args = iter(self.keyword_args)
        item = next(args)
        item.generate_evaluation_code(code)
        if item.type is not dict_type:
            # CPython supports calling functions with non-dicts, so do we
            code.putln('if (likely(PyDict_CheckExact(%s))) {' %
                       item.py_result())

        if item.is_dict_literal:
            item.make_owned_reference(code)
            code.putln("%s = %s;" % (self.result(), item.py_result()))
            item.generate_post_assignment_code(code)
        else:
            code.putln("%s = PyDict_Copy(%s); %s" % (
                self.result(),
                item.py_result(),
                code.error_goto_if_null(self.result(), item.pos)))
            code.put_gotref(self.result())
            item.generate_disposal_code(code)

        if item.type is not dict_type:
            code.putln('} else {')
            code.putln("%s = PyObject_CallFunctionObjArgs((PyObject*)&PyDict_Type, %s, NULL); %s" % (
                self.result(),
                item.py_result(),
                code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
            item.generate_disposal_code(code)
            code.putln('}')
        item.free_temps(code)

        helpers = set()
        for item in args:
            if item.is_dict_literal:
                # inline update instead of creating an intermediate dict
                for arg in item.key_value_pairs:
                    arg.generate_evaluation_code(code)
                    if self.reject_duplicates:
                        code.putln("if (unlikely(PyDict_Contains(%s, %s))) {" % (
                            self.result(),
                            arg.key.py_result()))
                        helpers.add("RaiseDoubleKeywords")
                        # FIXME: find out function name at runtime!
                        code.putln('__Pyx_RaiseDoubleKeywordsError("function", %s); %s' % (
                            arg.key.py_result(),
                            code.error_goto(self.pos)))
                        code.putln("}")
                    code.put_error_if_neg(arg.key.pos, "PyDict_SetItem(%s, %s, %s)" % (
                        self.result(),
                        arg.key.py_result(),
                        arg.value.py_result()))
                    arg.generate_disposal_code(code)
                    arg.free_temps(code)
            else:
                item.generate_evaluation_code(code)
                if self.reject_duplicates:
                    # merge mapping into kwdict one by one as we need to check for duplicates
                    helpers.add("MergeKeywords")
                    code.put_error_if_neg(item.pos, "__Pyx_MergeKeywords(%s, %s)" % (
                        self.result(), item.py_result()))
                else:
                    # simple case, just add all entries
                    helpers.add("RaiseMappingExpected")
                    code.putln("if (unlikely(PyDict_Update(%s, %s) < 0)) {" % (
                        self.result(), item.py_result()))
                    code.putln("if (PyErr_ExceptionMatches(PyExc_AttributeError)) "
                               "__Pyx_RaiseMappingExpectedError(%s);" % item.py_result())
                    code.putln(code.error_goto(item.pos))
                    code.putln("}")
                item.generate_disposal_code(code)
            item.free_temps(code)

        for helper in sorted(helpers):
            code.globalstate.use_utility_code(UtilityCode.load_cached(helper, "FunctionArguments.c"))

    def annotate(self, code):
        for item in self.keyword_args:
            item.annotate(code)


class AttributeNode(ExprNode):
    #  obj.attribute
    #
    #  obj          ExprNode
    #  attribute    string
    #  needs_none_check boolean        Used if obj is an extension type.
    #                                  If set to True, it is known that the type is not None.
    #
    #  Used internally:
    #
    #  is_py_attr           boolean   Is a Python getattr operation
    #  member               string    C name of struct member
    #  is_called            boolean   Function call is being done on result
    #  entry                Entry     Symbol table entry of attribute

    is_attribute = 1
    subexprs = ['obj']

    type = PyrexTypes.error_type
    entry = None
    is_called = 0
    needs_none_check = True
    is_memslice_transpose = False
    is_special_lookup = False
    is_py_attr = 0

    def as_cython_attribute(self):
        if (isinstance(self.obj, NameNode) and
                self.obj.is_cython_module and not
                self.attribute == u"parallel"):
            return self.attribute
        cy = self.obj.as_cython_attribute()
        if cy:
            return "%s.%s" % (cy, self.attribute)
        return None

    def coerce_to(self, dst_type, env):
        #  If coercing to a generic pyobject and this is a cpdef function
        #  we can create the corresponding attribute
        if dst_type is py_object_type:
            entry = self.entry
            if entry and entry.is_cfunction and entry.as_variable:
                # must be a cpdef function
                self.is_temp = 1
                self.entry = entry.as_variable
                self.analyse_as_python_attribute(env)
                return self
        return ExprNode.coerce_to(self, dst_type, env)

    def calculate_constant_result(self):
        attr = self.attribute
        if attr.startswith("__") and attr.endswith("__"):
            return
        self.constant_result = getattr(self.obj.constant_result, attr)

    def compile_time_value(self, denv):
        attr = self.attribute
        if attr.startswith("__") and attr.endswith("__"):
            error(self.pos,
                  "Invalid attribute name '%s' in compile-time expression" % attr)
            return None
        obj = self.obj.compile_time_value(denv)
        try:
            return getattr(obj, attr)
        except Exception as e:
            self.compile_time_value_error(e)

    def type_dependencies(self, env):
        return self.obj.type_dependencies(env)

    def infer_type(self, env):
        # FIXME: this is way too redundant with analyse_types()
        node = self.analyse_as_cimported_attribute_node(env, target=False)
        if node is not None:
            return node.entry.type
        node = self.analyse_as_type_attribute(env)
        if node is not None:
            return node.entry.type
        obj_type = self.obj.infer_type(env)
        self.analyse_attribute(env, obj_type=obj_type)
        if obj_type.is_builtin_type and self.type.is_cfunction:
            # special case: C-API replacements for C methods of
            # builtin types cannot be inferred as C functions as
            # that would prevent their use as bound methods
            return py_object_type
        elif self.entry and self.entry.is_cmethod:
            # special case: bound methods should not be inferred
            # as their unbound method types
            return py_object_type
        return self.type

    def analyse_target_declaration(self, env):
        pass

    def analyse_target_types(self, env):
        node = self.analyse_types(env, target = 1)
        if node.type.is_const:
            error(self.pos, "Assignment to const attribute '%s'" % self.attribute)
        if not node.is_lvalue():
            error(self.pos, "Assignment to non-lvalue of type '%s'" % self.type)
        return node

    def analyse_types(self, env, target = 0):
        self.initialized_check = env.directives['initializedcheck']
        node = self.analyse_as_cimported_attribute_node(env, target)
        if node is None and not target:
            node = self.analyse_as_type_attribute(env)
        if node is None:
            node = self.analyse_as_ordinary_attribute_node(env, target)
            assert node is not None
        if node.entry:
            node.entry.used = True
        if node.is_attribute:
            node.wrap_obj_in_nonecheck(env)
        return node

    def analyse_as_cimported_attribute_node(self, env, target):
        # Try to interpret this as a reference to an imported
        # C const, type, var or function. If successful, returns
        # the corresponding NameNode, otherwise returns None.
        module_scope = self.obj.analyse_as_module(env)
        if module_scope:
            entry = module_scope.lookup_here(self.attribute)
            if entry and (
                    entry.is_cglobal or entry.is_cfunction
                    or entry.is_type or entry.is_const):
                return self.as_name_node(env, entry, target)
            if self.is_cimported_module_without_shadow(env):
                error(self.pos, "cimported module has no attribute '%s'" % self.attribute)
                return self
        return None

    def analyse_as_type_attribute(self, env):
        # Try to interpret this as a reference to an unbound
        # C method of an extension type or builtin type.  If successful,
        # creates a corresponding NameNode and returns it, otherwise
        # returns None.
        if self.obj.is_string_literal:
            return
        type = self.obj.analyse_as_type(env)
        if type:
            if type.is_extension_type or type.is_builtin_type or type.is_cpp_class:
                entry = type.scope.lookup_here(self.attribute)
                if entry and (entry.is_cmethod or type.is_cpp_class and entry.type.is_cfunction):
                    if type.is_builtin_type:
                        if not self.is_called:
                            # must handle this as Python object
                            return None
                        ubcm_entry = entry
                    else:
                        # Create a temporary entry describing the C method
                        # as an ordinary function.
                        if entry.func_cname and not hasattr(entry.type, 'op_arg_struct'):
                            cname = entry.func_cname
                            if entry.type.is_static_method or (
                                    env.parent_scope and env.parent_scope.is_cpp_class_scope):
                                ctype = entry.type
                            elif type.is_cpp_class:
                                error(self.pos, "%s not a static member of %s" % (entry.name, type))
                                ctype = PyrexTypes.error_type
                            else:
                                # Fix self type.
                                ctype = copy.copy(entry.type)
                                ctype.args = ctype.args[:]
                                ctype.args[0] = PyrexTypes.CFuncTypeArg('self', type, 'self', None)
                        else:
                            cname = "%s->%s" % (type.vtabptr_cname, entry.cname)
                            ctype = entry.type
                        ubcm_entry = Symtab.Entry(entry.name, cname, ctype)
                        ubcm_entry.is_cfunction = 1
                        ubcm_entry.func_cname = entry.func_cname
                        ubcm_entry.is_unbound_cmethod = 1
                        ubcm_entry.scope = entry.scope
                    return self.as_name_node(env, ubcm_entry, target=False)
            elif type.is_enum:
                if self.attribute in type.values:
                    for entry in type.entry.enum_values:
                        if entry.name == self.attribute:
                            return self.as_name_node(env, entry, target=False)
                    else:
                        error(self.pos, "%s not a known value of %s" % (self.attribute, type))
                else:
                    error(self.pos, "%s not a known value of %s" % (self.attribute, type))
        return None

    def analyse_as_type(self, env):
        module_scope = self.obj.analyse_as_module(env)
        if module_scope:
            return module_scope.lookup_type(self.attribute)
        if not self.obj.is_string_literal:
            base_type = self.obj.analyse_as_type(env)
            if base_type and hasattr(base_type, 'scope') and base_type.scope is not None:
                return base_type.scope.lookup_type(self.attribute)
        return None

    def analyse_as_extension_type(self, env):
        # Try to interpret this as a reference to an extension type
        # in a cimported module. Returns the extension type, or None.
        module_scope = self.obj.analyse_as_module(env)
        if module_scope:
            entry = module_scope.lookup_here(self.attribute)
            if entry and entry.is_type:
                if entry.type.is_extension_type or entry.type.is_builtin_type:
                    return entry.type
        return None

    def analyse_as_module(self, env):
        # Try to interpret this as a reference to a cimported module
        # in another cimported module. Returns the module scope, or None.
        module_scope = self.obj.analyse_as_module(env)
        if module_scope:
            entry = module_scope.lookup_here(self.attribute)
            if entry and entry.as_module:
                return entry.as_module
        return None

    def as_name_node(self, env, entry, target):
        # Create a corresponding NameNode from this node and complete the
        # analyse_types phase.
        node = NameNode.from_node(self, name=self.attribute, entry=entry)
        if target:
            node = node.analyse_target_types(env)
        else:
            node = node.analyse_rvalue_entry(env)
        node.entry.used = 1
        return node

    def analyse_as_ordinary_attribute_node(self, env, target):
        self.obj = self.obj.analyse_types(env)
        self.analyse_attribute(env)
        if self.entry and self.entry.is_cmethod and not self.is_called:
            # error(self.pos, "C method can only be called")
            pass
        ## Reference to C array turns into pointer to first element.
        #while self.type.is_array:
        #    self.type = self.type.element_ptr_type()
        if self.is_py_attr:
            if not target:
                self.is_temp = 1
                self.result_ctype = py_object_type
        elif target and self.obj.type.is_builtin_type:
            error(self.pos, "Assignment to an immutable object field")
        #elif self.type.is_memoryviewslice and not target:
        #    self.is_temp = True
        return self

    def analyse_attribute(self, env, obj_type = None):
        # Look up attribute and set self.type and self.member.
        immutable_obj = obj_type is not None  # used during type inference
        self.is_py_attr = 0
        self.member = self.attribute
        if obj_type is None:
            if self.obj.type.is_string or self.obj.type.is_pyunicode_ptr:
                self.obj = self.obj.coerce_to_pyobject(env)
            obj_type = self.obj.type
        else:
            if obj_type.is_string or obj_type.is_pyunicode_ptr:
                obj_type = py_object_type
        if obj_type.is_ptr or obj_type.is_array:
            obj_type = obj_type.base_type
            self.op = "->"
        elif obj_type.is_extension_type or obj_type.is_builtin_type:
            self.op = "->"
        elif obj_type.is_reference and obj_type.is_fake_reference:
            self.op = "->"
        else:
            self.op = "."
        if obj_type.has_attributes:
            if obj_type.attributes_known():
                entry = obj_type.scope.lookup_here(self.attribute)
                if obj_type.is_memoryviewslice and not entry:
                    if self.attribute == 'T':
                        self.is_memslice_transpose = True
                        self.is_temp = True
                        self.use_managed_ref = True
                        self.type = self.obj.type.transpose(self.pos)
                        return
                    else:
                        obj_type.declare_attribute(self.attribute, env, self.pos)
                        entry = obj_type.scope.lookup_here(self.attribute)
                if entry and entry.is_member:
                    entry = None
            else:
                error(self.pos, "Cannot select attribute of incomplete type '%s'" % obj_type)
                self.type = PyrexTypes.error_type
                return
            self.entry = entry
            if entry:
                if obj_type.is_extension_type and entry.name == "__weakref__":
                    error(self.pos, "Illegal use of special attribute __weakref__")

                # def methods need the normal attribute lookup
                # because they do not have struct entries
                # fused functions go through assignment synthesis
                # (foo = pycfunction(foo_func_obj)) and need to go through
                # regular Python lookup as well
                if (entry.is_variable and not entry.fused_cfunction) or entry.is_cmethod:
                    self.type = entry.type
                    self.member = entry.cname
                    return
                else:
                    # If it's not a variable or C method, it must be a Python
                    # method of an extension type, so we treat it like a Python
                    # attribute.
                    pass
        # If we get here, the base object is not a struct/union/extension
        # type, or it is an extension type and the attribute is either not
        # declared or is declared as a Python method. Treat it as a Python
        # attribute reference.
        self.analyse_as_python_attribute(env, obj_type, immutable_obj)

    def analyse_as_python_attribute(self, env, obj_type=None, immutable_obj=False):
        if obj_type is None:
            obj_type = self.obj.type

        # mangle private '__*' Python attributes used inside of a class
        self.attribute = env.mangle_class_private_name(self.attribute)
        self.member = self.attribute
        self.type = py_object_type
        self.is_py_attr = 1

        if not obj_type.is_pyobject and not obj_type.is_error:
            # Expose python methods for immutable objects.
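            # Illustrative note (not part of the original source): e.g. with
            #     cdef int n = 5
            # an access like  n.bit_length  is a Python attribute lookup, so n
            # is coerced to a Python int below before the getattr takes place.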
if (obj_type.is_string or obj_type.is_cpp_string or obj_type.is_buffer or obj_type.is_memoryviewslice or obj_type.is_numeric or (obj_type.is_ctuple and obj_type.can_coerce_to_pyobject(env)) or (obj_type.is_struct and obj_type.can_coerce_to_pyobject(env))): if not immutable_obj: self.obj = self.obj.coerce_to_pyobject(env) elif (obj_type.is_cfunction and (self.obj.is_name or self.obj.is_attribute) and self.obj.entry.as_variable and self.obj.entry.as_variable.type.is_pyobject): # might be an optimised builtin function => unpack it if not immutable_obj: self.obj = self.obj.coerce_to_pyobject(env) else: error(self.pos, "Object of type '%s' has no attribute '%s'" % (obj_type, self.attribute)) def wrap_obj_in_nonecheck(self, env): if not env.directives['nonecheck']: return msg = None format_args = () if (self.obj.type.is_extension_type and self.needs_none_check and not self.is_py_attr): msg = "'NoneType' object has no attribute '%{0}s'".format('.30' if len(self.attribute) <= 30 else '') format_args = (self.attribute,) elif self.obj.type.is_memoryviewslice: if self.is_memslice_transpose: msg = "Cannot transpose None memoryview slice" else: entry = self.obj.type.scope.lookup_here(self.attribute) if entry: # copy/is_c_contig/shape/strides etc msg = "Cannot access '%s' attribute of None memoryview slice" format_args = (entry.name,) if msg: self.obj = self.obj.as_none_safe_node(msg, 'PyExc_AttributeError', format_args=format_args) def nogil_check(self, env): if self.is_py_attr: self.gil_error() gil_message = "Accessing Python attribute" def is_cimported_module_without_shadow(self, env): return self.obj.is_cimported_module_without_shadow(env) def is_simple(self): if self.obj: return self.result_in_temp() or self.obj.is_simple() else: return NameNode.is_simple(self) def is_lvalue(self): if self.obj: return True else: return NameNode.is_lvalue(self) def is_ephemeral(self): if self.obj: return self.obj.is_ephemeral() else: return NameNode.is_ephemeral(self) def calculate_result_code(self): #print "AttributeNode.calculate_result_code:", self.member ### #print "...obj node =", self.obj, "code", self.obj.result() ### #print "...obj type", self.obj.type, "ctype", self.obj.ctype() ### obj = self.obj obj_code = obj.result_as(obj.type) #print "...obj_code =", obj_code ### if self.entry and self.entry.is_cmethod: if obj.type.is_extension_type and not self.entry.is_builtin_cmethod: if self.entry.final_func_cname: return self.entry.final_func_cname if self.type.from_fused: # If the attribute was specialized through indexing, make # sure to get the right fused name, as our entry was # replaced by our parent index node # (AnalyseExpressionsTransform) self.member = self.entry.cname return "((struct %s *)%s%s%s)->%s" % ( obj.type.vtabstruct_cname, obj_code, self.op, obj.type.vtabslot_cname, self.member) elif self.result_is_used: return self.member # Generating no code at all for unused access to optimised builtin # methods fixes the problem that some optimisations only exist as # macros, i.e. there is no function pointer to them, so we would # generate invalid C code here. 
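    # --- Editorial illustration (not part of the original source). ---
    # The format() call in wrap_obj_in_nonecheck above assembles a C printf
    # pattern: for attribute names of at most 30 characters it inserts a
    # ".30" precision, yielding "%.30s" in the generated error message.
    # A quick check in plain Python:
    #
    #   >>> "'NoneType' object has no attribute '%{0}s'".format('.30')
    #   "'NoneType' object has no attribute '%.30s'"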
    def calculate_result_code(self):
        #print "AttributeNode.calculate_result_code:", self.member ###
        #print "...obj node =", self.obj, "code", self.obj.result() ###
        #print "...obj type", self.obj.type, "ctype", self.obj.ctype() ###
        obj = self.obj
        obj_code = obj.result_as(obj.type)
        #print "...obj_code =", obj_code ###
        if self.entry and self.entry.is_cmethod:
            if obj.type.is_extension_type and not self.entry.is_builtin_cmethod:
                if self.entry.final_func_cname:
                    return self.entry.final_func_cname
                if self.type.from_fused:
                    # If the attribute was specialized through indexing, make
                    # sure to get the right fused name, as our entry was
                    # replaced by our parent index node
                    # (AnalyseExpressionsTransform)
                    self.member = self.entry.cname
                return "((struct %s *)%s%s%s)->%s" % (
                    obj.type.vtabstruct_cname, obj_code, self.op,
                    obj.type.vtabslot_cname, self.member)
            elif self.result_is_used:
                return self.member
            # Generating no code at all for unused access to optimised builtin
            # methods fixes the problem that some optimisations only exist as
            # macros, i.e. there is no function pointer to them, so we would
            # generate invalid C code here.
            return
        elif obj.type.is_complex:
            return "__Pyx_C%s(%s)" % (self.member.upper(), obj_code)
        else:
            if obj.type.is_builtin_type and self.entry and self.entry.is_variable:
                # accessing a field of a builtin type, need to cast better than result_as() does
                obj_code = obj.type.cast_code(obj.result(), to_object_struct=True)
            return "%s%s%s" % (obj_code, self.op, self.member)

    def generate_result_code(self, code):
        if self.is_py_attr:
            if self.is_special_lookup:
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("PyObjectLookupSpecial", "ObjectHandling.c"))
                lookup_func_name = '__Pyx_PyObject_LookupSpecial'
            else:
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("PyObjectGetAttrStr", "ObjectHandling.c"))
                lookup_func_name = '__Pyx_PyObject_GetAttrStr'
            code.putln(
                '%s = %s(%s, %s); %s' % (
                    self.result(),
                    lookup_func_name,
                    self.obj.py_result(),
                    code.intern_identifier(self.attribute),
                    code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
        elif self.type.is_memoryviewslice:
            if self.is_memslice_transpose:
                # transpose the slice
                for access, packing in self.type.axes:
                    if access == 'ptr':
                        error(self.pos, "Transposing not supported for slices "
                                        "with indirect dimensions")
                        return
                code.putln("%s = %s;" % (self.result(), self.obj.result()))
                code.put_incref_memoryviewslice(self.result(), have_gil=True)
                T = "__pyx_memslice_transpose(&%s) == 0"
                code.putln(code.error_goto_if(T % self.result(), self.pos))
            elif self.initialized_check:
                code.putln(
                    'if (unlikely(!%s.memview)) {'
                    'PyErr_SetString(PyExc_AttributeError,'
                    '"Memoryview is not initialized");'
                    '%s'
                    '}' % (self.result(), code.error_goto(self.pos)))
        else:
            # result_code contains what is needed, but we may need to insert
            # a check and raise an exception
            if self.obj.type and self.obj.type.is_extension_type:
                pass
            elif self.entry and self.entry.is_cmethod:
                # C method implemented as function call with utility code
                code.globalstate.use_entry_utility_code(self.entry)

    def generate_disposal_code(self, code):
        if self.is_temp and self.type.is_memoryviewslice and self.is_memslice_transpose:
            # mirror condition for putting the memview incref here:
            code.put_xdecref_memoryviewslice(
                self.result(), have_gil=True)
            code.putln("%s.memview = NULL;" % self.result())
            code.putln("%s.data = NULL;" % self.result())
        else:
            ExprNode.generate_disposal_code(self, code)

    def generate_assignment_code(self, rhs, code, overloaded_assignment=False,
                                 exception_check=None, exception_value=None):
        self.obj.generate_evaluation_code(code)
        if self.is_py_attr:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectSetAttrStr", "ObjectHandling.c"))
            code.put_error_if_neg(self.pos,
                '__Pyx_PyObject_SetAttrStr(%s, %s, %s)' % (
                    self.obj.py_result(),
                    code.intern_identifier(self.attribute),
                    rhs.py_result()))
            rhs.generate_disposal_code(code)
            rhs.free_temps(code)
        elif self.obj.type.is_complex:
            code.putln("__Pyx_SET_C%s(%s, %s);" % (
                self.member.upper(),
                self.obj.result_as(self.obj.type),
                rhs.result_as(self.ctype())))
        else:
            select_code = self.result()
            if self.type.is_pyobject and self.use_managed_ref:
                rhs.make_owned_reference(code)
                code.put_giveref(rhs.py_result())
                code.put_gotref(select_code)
                code.put_decref(select_code, self.ctype())
            elif self.type.is_memoryviewslice:
                from . import MemoryView
                MemoryView.put_assign_to_memviewslice(
                    select_code, rhs, rhs.result(), self.type, code)
            if not self.type.is_memoryviewslice:
                code.putln(
                    "%s = %s;" % (
                        select_code,
                        rhs.result_as(self.ctype())))
                        #rhs.result()))
            rhs.generate_post_assignment_code(code)
            rhs.free_temps(code)
        self.obj.generate_disposal_code(code)
        self.obj.free_temps(code)

    def generate_deletion_code(self, code, ignore_nonexisting=False):
        self.obj.generate_evaluation_code(code)
        if self.is_py_attr or (self.entry.scope.is_property_scope
                               and u'__del__' in self.entry.scope.entries):
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("PyObjectSetAttrStr", "ObjectHandling.c"))
            code.put_error_if_neg(self.pos,
                '__Pyx_PyObject_DelAttrStr(%s, %s)' % (
                    self.obj.py_result(),
                    code.intern_identifier(self.attribute)))
        else:
            error(self.pos, "Cannot delete C attribute of extension type")
        self.obj.generate_disposal_code(code)
        self.obj.free_temps(code)

    def annotate(self, code):
        if self.is_py_attr:
            style, text = 'py_attr', 'python attribute (%s)'
        else:
            style, text = 'c_attr', 'c attribute (%s)'
        code.annotate(self.pos, AnnotationItem(style, text % self.type, size=len(self.attribute)))


#-------------------------------------------------------------------
#
#  Constructor nodes
#
#-------------------------------------------------------------------

class StarredUnpackingNode(ExprNode):
    #  A starred expression like "*a"
    #
    #  This is only allowed in sequence assignment or construction such as
    #
    #      a, *b = (1,2,3,4)    =>     a = 1 ; b = [2,3,4]
    #
    #  and will be special cased during type analysis (or generate an error
    #  if it's found at unexpected places).
    #
    #  target          ExprNode

    subexprs = ['target']
    is_starred = 1
    type = py_object_type
    is_temp = 1
    starred_expr_allowed_here = False

    def __init__(self, pos, target):
        ExprNode.__init__(self, pos, target=target)

    def analyse_declarations(self, env):
        if not self.starred_expr_allowed_here:
            error(self.pos, "starred expression is not allowed here")
        self.target.analyse_declarations(env)

    def infer_type(self, env):
        return self.target.infer_type(env)

    def analyse_types(self, env):
        if not self.starred_expr_allowed_here:
            error(self.pos, "starred expression is not allowed here")
        self.target = self.target.analyse_types(env)
        self.type = self.target.type
        return self

    def analyse_target_declaration(self, env):
        self.target.analyse_target_declaration(env)

    def analyse_target_types(self, env):
        self.target = self.target.analyse_target_types(env)
        self.type = self.target.type
        return self

    def calculate_result_code(self):
        return ""

    def generate_result_code(self, code):
        pass
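# --- Editorial illustration (not part of the original source). ---
# StarredUnpackingNode only ever appears as an assignment target; the
# semantics it implements are those of plain Python starred unpacking:
def _demo_starred_unpacking():
    # hypothetical demo helper, not used by the compiler itself
    a, *b = (1, 2, 3, 4)
    assert a == 1 and b == [2, 3, 4]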
class SequenceNode(ExprNode):
    #  Base class for list and tuple constructor nodes.
    #  Contains common code for performing sequence unpacking.
    #
    #  args                    [ExprNode]
    #  unpacked_items          [ExprNode] or None
    #  coerced_unpacked_items  [ExprNode] or None
    #  mult_factor             ExprNode     the integer number of content repetitions ([1,2]*3)

    subexprs = ['args', 'mult_factor']

    is_sequence_constructor = 1
    unpacked_items = None
    mult_factor = None
    slow = False  # trade speed for code size (e.g. use PyTuple_Pack())

    def compile_time_value_list(self, denv):
        return [arg.compile_time_value(denv) for arg in self.args]

    def replace_starred_target_node(self):
        # replace a starred node in the targets by the contained expression
        self.starred_assignment = False
        args = []
        for arg in self.args:
            if arg.is_starred:
                if self.starred_assignment:
                    error(arg.pos, "more than 1 starred expression in assignment")
                self.starred_assignment = True
                arg = arg.target
                arg.is_starred = True
            args.append(arg)
        self.args = args

    def analyse_target_declaration(self, env):
        self.replace_starred_target_node()
        for arg in self.args:
            arg.analyse_target_declaration(env)

    def analyse_types(self, env, skip_children=False):
        for i, arg in enumerate(self.args):
            if not skip_children:
                arg = arg.analyse_types(env)
            self.args[i] = arg.coerce_to_pyobject(env)
        if self.mult_factor:
            self.mult_factor = self.mult_factor.analyse_types(env)
            if not self.mult_factor.type.is_int:
                self.mult_factor = self.mult_factor.coerce_to_pyobject(env)
        self.is_temp = 1
        # not setting self.type here, subtypes do this
        return self

    def coerce_to_ctuple(self, dst_type, env):
        if self.type == dst_type:
            return self
        assert not self.mult_factor
        if len(self.args) != dst_type.size:
            error(self.pos, "trying to coerce sequence to ctuple of wrong length, expected %d, got %d" % (
                dst_type.size, len(self.args)))
        coerced_args = [arg.coerce_to(type, env) for arg, type in zip(self.args, dst_type.components)]
        return TupleNode(self.pos, args=coerced_args, type=dst_type, is_temp=True)

    def _create_merge_node_if_necessary(self, env):
        self._flatten_starred_args()
        if not any(arg.is_starred for arg in self.args):
            return self
        # convert into MergedSequenceNode by building partial sequences
        args = []
        values = []
        for arg in self.args:
            if arg.is_starred:
                if values:
                    args.append(TupleNode(values[0].pos, args=values).analyse_types(env, skip_children=True))
                    values = []
                args.append(arg.target)
            else:
                values.append(arg)
        if values:
            args.append(TupleNode(values[0].pos, args=values).analyse_types(env, skip_children=True))
        node = MergedSequenceNode(self.pos, args, self.type)
        if self.mult_factor:
            node = binop_node(
                self.pos, '*', node, self.mult_factor.coerce_to_pyobject(env),
                inplace=True, type=self.type, is_temp=True)
        return node

    def _flatten_starred_args(self):
        args = []
        for arg in self.args:
            if arg.is_starred and arg.target.is_sequence_constructor and not arg.target.mult_factor:
                args.extend(arg.target.args)
            else:
                args.append(arg)
        self.args[:] = args

    def may_be_none(self):
        return False

    def analyse_target_types(self, env):
        if self.mult_factor:
            error(self.pos, "can't assign to multiplied sequence")
        self.unpacked_items = []
        self.coerced_unpacked_items = []
        self.any_coerced_items = False
        for i, arg in enumerate(self.args):
            arg = self.args[i] = arg.analyse_target_types(env)
            if arg.is_starred:
                if not arg.type.assignable_from(list_type):
                    error(arg.pos,
                          "starred target must have Python object (list) type")
                if arg.type is py_object_type:
                    arg.type = list_type
            unpacked_item = PyTempNode(self.pos, env)
            coerced_unpacked_item = unpacked_item.coerce_to(arg.type, env)
            if unpacked_item is not coerced_unpacked_item:
                self.any_coerced_items = True
            self.unpacked_items.append(unpacked_item)
            self.coerced_unpacked_items.append(coerced_unpacked_item)
        self.type = py_object_type
        return self

    def generate_result_code(self, code):
        self.generate_operation_code(code)

    def generate_sequence_packing_code(self, code, target=None, plain=False):
        if target is None:
            target = self.result()
        size_factor = c_mult = ''
        mult_factor = None

        if self.mult_factor and not plain:
            mult_factor = self.mult_factor
            if mult_factor.type.is_int:
                c_mult = mult_factor.result()
                if (isinstance(mult_factor.constant_result, _py_int_types) and
                        mult_factor.constant_result > 0):
                    size_factor = ' * %s' % mult_factor.constant_result
                elif mult_factor.type.signed:
                    size_factor = ' * ((%s<0) ? 0:%s)' % (c_mult, c_mult)
                else:
                    size_factor = ' * (%s)' % (c_mult,)

        if self.type is tuple_type and (self.is_literal or self.slow) and not c_mult:
            # use PyTuple_Pack() to avoid generating huge amounts of one-time code
            code.putln('%s = PyTuple_Pack(%d, %s); %s' % (
                target,
                len(self.args),
                ', '.join(arg.py_result() for arg in self.args),
                code.error_goto_if_null(target, self.pos)))
            code.put_gotref(target)
        elif self.type.is_ctuple:
            for i, arg in enumerate(self.args):
                code.putln("%s.f%s = %s;" % (
                    target, i, arg.result()))
        else:
            # build the tuple/list step by step, potentially multiplying it as we go
            if self.type is list_type:
                create_func, set_item_func = 'PyList_New', 'PyList_SET_ITEM'
            elif self.type is tuple_type:
                create_func, set_item_func = 'PyTuple_New', 'PyTuple_SET_ITEM'
            else:
                raise InternalError("sequence packing for unexpected type %s" % self.type)
            arg_count = len(self.args)
            code.putln("%s = %s(%s%s); %s" % (
                target, create_func, arg_count, size_factor,
                code.error_goto_if_null(target, self.pos)))
            code.put_gotref(target)

            if c_mult:
                # FIXME: can't use a temp variable here as the code may
                # end up in the constant building function.  Temps
                # currently don't work there.
                #counter = code.funcstate.allocate_temp(mult_factor.type, manage_ref=False)
                counter = Naming.quick_temp_cname
                code.putln('{ Py_ssize_t %s;' % counter)
                if arg_count == 1:
                    offset = counter
                else:
                    offset = '%s * %s' % (counter, arg_count)
                code.putln('for (%s=0; %s < %s; %s++) {' % (
                    counter, counter, c_mult, counter
                ))
            else:
                offset = ''

            for i in range(arg_count):
                arg = self.args[i]
                if c_mult or not arg.result_in_temp():
                    code.put_incref(arg.result(), arg.ctype())
                code.put_giveref(arg.py_result())
                code.putln("%s(%s, %s, %s);" % (
                    set_item_func,
                    target,
                    (offset and i) and ('%s + %s' % (offset, i)) or (offset or i),
                    arg.py_result()))

            if c_mult:
                code.putln('}')
                #code.funcstate.release_temp(counter)
                code.putln('}')

        if mult_factor is not None and mult_factor.type.is_pyobject:
            code.putln('{ PyObject* %s = PyNumber_InPlaceMultiply(%s, %s); %s' % (
                Naming.quick_temp_cname, target, mult_factor.py_result(),
                code.error_goto_if_null(Naming.quick_temp_cname, self.pos)
                ))
            code.put_gotref(Naming.quick_temp_cname)
            code.put_decref(target, py_object_type)
            code.putln('%s = %s;' % (target, Naming.quick_temp_cname))
            code.putln('}')
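    # --- Editorial illustration (not part of the original source). ---
    # The size_factor / c_mult logic above pre-computes the allocation size
    # for multiplied sequence literals.  At the Python level this packs e.g.:
    #
    #   >>> [1, 2] * 3
    #   [1, 2, 1, 2, 1, 2]
    #
    # i.e. conceptually a PyList_New(2 * 3) followed by six item stores,
    # with a C loop when the factor is not a compile-time constant.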
    def generate_subexpr_disposal_code(self, code):
        if self.mult_factor and self.mult_factor.type.is_int:
            super(SequenceNode, self).generate_subexpr_disposal_code(code)
        elif self.type is tuple_type and (self.is_literal or self.slow):
            super(SequenceNode, self).generate_subexpr_disposal_code(code)
        else:
            # We call generate_post_assignment_code here instead
            # of generate_disposal_code, because values were stored
            # in the tuple using a reference-stealing operation.
            for arg in self.args:
                arg.generate_post_assignment_code(code)
            # Should NOT call free_temps -- this is invoked by the default
            # generate_evaluation_code which will do that.
            if self.mult_factor:
                self.mult_factor.generate_disposal_code(code)

    def generate_assignment_code(self, rhs, code, overloaded_assignment=False,
                                 exception_check=None, exception_value=None):
        if self.starred_assignment:
            self.generate_starred_assignment_code(rhs, code)
        else:
            self.generate_parallel_assignment_code(rhs, code)

        for item in self.unpacked_items:
            item.release(code)
        rhs.free_temps(code)

    _func_iternext_type = PyrexTypes.CPtrType(PyrexTypes.CFuncType(
        PyrexTypes.py_object_type, [
            PyrexTypes.CFuncTypeArg("it", PyrexTypes.py_object_type, None),
            ]))

    def generate_parallel_assignment_code(self, rhs, code):
        # Need to work around the fact that generate_evaluation_code
        # allocates the temps in a rather hacky way -- the assignment
        # is evaluated twice, within each if-block.
        for item in self.unpacked_items:
            item.allocate(code)
        special_unpack = (rhs.type is py_object_type
                          or rhs.type in (tuple_type, list_type)
                          or not rhs.type.is_builtin_type)
        long_enough_for_a_loop = len(self.unpacked_items) > 3

        if special_unpack:
            self.generate_special_parallel_unpacking_code(
                code, rhs, use_loop=long_enough_for_a_loop)
        else:
            code.putln("{")
            self.generate_generic_parallel_unpacking_code(
                code, rhs, self.unpacked_items, use_loop=long_enough_for_a_loop)
            code.putln("}")

        for value_node in self.coerced_unpacked_items:
            value_node.generate_evaluation_code(code)
        for i in range(len(self.args)):
            self.args[i].generate_assignment_code(
                self.coerced_unpacked_items[i], code)

    def generate_special_parallel_unpacking_code(self, code, rhs, use_loop):
        sequence_type_test = '1'
        none_check = "likely(%s != Py_None)" % rhs.py_result()
        if rhs.type is list_type:
            sequence_types = ['List']
            if rhs.may_be_none():
                sequence_type_test = none_check
        elif rhs.type is tuple_type:
            sequence_types = ['Tuple']
            if rhs.may_be_none():
                sequence_type_test = none_check
        else:
            sequence_types = ['Tuple', 'List']
            tuple_check = 'likely(PyTuple_CheckExact(%s))' % rhs.py_result()
            list_check = 'PyList_CheckExact(%s)' % rhs.py_result()
            sequence_type_test = "(%s) || (%s)" % (tuple_check, list_check)

        code.putln("if (%s) {" % sequence_type_test)
        code.putln("PyObject* sequence = %s;" % rhs.py_result())

        # list/tuple => check size
        code.putln("Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);")
        code.putln("if (unlikely(size != %d)) {" % len(self.args))
        code.globalstate.use_utility_code(raise_too_many_values_to_unpack)
        code.putln("if (size > %d) __Pyx_RaiseTooManyValuesError(%d);" % (
            len(self.args), len(self.args)))
        code.globalstate.use_utility_code(raise_need_more_values_to_unpack)
        code.putln("else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);")
        # < 0 => exception
        code.putln(code.error_goto(self.pos))
        code.putln("}")

        code.putln("#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS")
        # unpack items from list/tuple in unrolled loop (can't fail)
        if len(sequence_types) == 2:
            code.putln("if (likely(Py%s_CheckExact(sequence))) {" % sequence_types[0])
        for i, item in enumerate(self.unpacked_items):
            code.putln("%s = Py%s_GET_ITEM(sequence, %d); " % (
                item.result(), sequence_types[0], i))
        if len(sequence_types) == 2:
            code.putln("} else {")
            for i, item in enumerate(self.unpacked_items):
                code.putln("%s = Py%s_GET_ITEM(sequence, %d); " % (
                    item.result(), sequence_types[1], i))
            code.putln("}")
        for item in self.unpacked_items:
            code.put_incref(item.result(), item.ctype())
        code.putln("#else")
        # in non-CPython, use the PySequence protocol (which can fail)
        if not use_loop:
            for i, item in enumerate(self.unpacked_items):
                code.putln("%s = PySequence_ITEM(sequence, %d); %s" % (
                    item.result(), i,
                    code.error_goto_if_null(item.result(), self.pos)))
                code.put_gotref(item.result())
        else:
            code.putln("{")
            code.putln("Py_ssize_t i;")
            code.putln("PyObject** temps[%s] = {%s};" % (
                len(self.unpacked_items),
                ','.join(['&%s' % item.result() for item in self.unpacked_items])))
            code.putln("for (i=0; i < %s; i++) {" % len(self.unpacked_items))
            code.putln("PyObject* item = PySequence_ITEM(sequence, i); %s" % (
                code.error_goto_if_null('item', self.pos)))
            code.put_gotref('item')
            code.putln("*(temps[i]) = item;")
            code.putln("}")
            code.putln("}")

        code.putln("#endif")
        rhs.generate_disposal_code(code)

        if sequence_type_test == '1':
            code.putln("}")  # all done
        elif sequence_type_test == none_check:
            # either tuple/list or None => save some code by generating the error directly
            code.putln("} else {")
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("RaiseNoneIterError", "ObjectHandling.c"))
            code.putln("__Pyx_RaiseNoneNotIterableError(); %s" % code.error_goto(self.pos))
            code.putln("}")  # all done
        else:
            code.putln("} else {")  # needs iteration fallback code
            self.generate_generic_parallel_unpacking_code(
                code, rhs, self.unpacked_items, use_loop=use_loop)
            code.putln("}")

    def generate_generic_parallel_unpacking_code(self, code, rhs, unpacked_items, use_loop, terminate=True):
        code.globalstate.use_utility_code(raise_need_more_values_to_unpack)
        code.globalstate.use_utility_code(UtilityCode.load_cached("IterFinish", "ObjectHandling.c"))
        code.putln("Py_ssize_t index = -1;")  # must be at the start of a C block!

        if use_loop:
            code.putln("PyObject** temps[%s] = {%s};" % (
                len(self.unpacked_items),
                ','.join(['&%s' % item.result() for item in unpacked_items])))

        iterator_temp = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
        code.putln(
            "%s = PyObject_GetIter(%s); %s" % (
                iterator_temp,
                rhs.py_result(),
                code.error_goto_if_null(iterator_temp, self.pos)))
        code.put_gotref(iterator_temp)
        rhs.generate_disposal_code(code)

        iternext_func = code.funcstate.allocate_temp(self._func_iternext_type, manage_ref=False)
        code.putln("%s = Py_TYPE(%s)->tp_iternext;" % (
            iternext_func, iterator_temp))

        unpacking_error_label = code.new_label('unpacking_failed')
        unpack_code = "%s(%s)" % (iternext_func, iterator_temp)
        if use_loop:
            code.putln("for (index=0; index < %s; index++) {" % len(unpacked_items))
            code.put("PyObject* item = %s; if (unlikely(!item)) " % unpack_code)
            code.put_goto(unpacking_error_label)
            code.put_gotref("item")
            code.putln("*(temps[index]) = item;")
            code.putln("}")
        else:
            for i, item in enumerate(unpacked_items):
                code.put(
                    "index = %d; %s = %s; if (unlikely(!%s)) " % (
                        i,
                        item.result(),
                        unpack_code,
                        item.result()))
                code.put_goto(unpacking_error_label)
                code.put_gotref(item.py_result())

        if terminate:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("UnpackItemEndCheck", "ObjectHandling.c"))
            code.put_error_if_neg(self.pos, "__Pyx_IternextUnpackEndCheck(%s, %d)" % (
                unpack_code,
                len(unpacked_items)))
            code.putln("%s = NULL;" % iternext_func)
            code.put_decref_clear(iterator_temp, py_object_type)

        unpacking_done_label = code.new_label('unpacking_done')
        code.put_goto(unpacking_done_label)

        code.put_label(unpacking_error_label)
        code.put_decref_clear(iterator_temp, py_object_type)
        code.putln("%s = NULL;" % iternext_func)
        code.putln("if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);")
        code.putln(code.error_goto(self.pos))
        code.put_label(unpacking_done_label)

        code.funcstate.release_temp(iternext_func)
        if terminate:
            code.funcstate.release_temp(iterator_temp)
            iterator_temp = None

        return iterator_temp
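    # --- Editorial illustration (not part of the original source). ---
    # The generated unpacking code mirrors CPython's own error behaviour
    # (exact wording varies by Python version):
    #
    #   >>> a, b = (1, 2, 3)
    #   ValueError: too many values to unpack (expected 2)
    #   >>> a, b, c = (1, 2)
    #   ValueError: not enough values to unpack (expected 3, got 2)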
    def generate_starred_assignment_code(self, rhs, code):
        for i, arg in enumerate(self.args):
            if arg.is_starred:
                starred_target = self.unpacked_items[i]
                unpacked_fixed_items_left = self.unpacked_items[:i]
                unpacked_fixed_items_right = self.unpacked_items[i+1:]
                break
        else:
            assert False

        iterator_temp = None
        if unpacked_fixed_items_left:
            for item in unpacked_fixed_items_left:
                item.allocate(code)
            code.putln('{')
            iterator_temp = self.generate_generic_parallel_unpacking_code(
                code, rhs, unpacked_fixed_items_left,
                use_loop=True, terminate=False)
            for i, item in enumerate(unpacked_fixed_items_left):
                value_node = self.coerced_unpacked_items[i]
                value_node.generate_evaluation_code(code)
            code.putln('}')

        starred_target.allocate(code)
        target_list = starred_target.result()
        code.putln("%s = PySequence_List(%s); %s" % (
            target_list,
            iterator_temp or rhs.py_result(),
            code.error_goto_if_null(target_list, self.pos)))
        code.put_gotref(target_list)

        if iterator_temp:
            code.put_decref_clear(iterator_temp, py_object_type)
            code.funcstate.release_temp(iterator_temp)
        else:
            rhs.generate_disposal_code(code)

        if unpacked_fixed_items_right:
            code.globalstate.use_utility_code(raise_need_more_values_to_unpack)
            length_temp = code.funcstate.allocate_temp(PyrexTypes.c_py_ssize_t_type, manage_ref=False)
            code.putln('%s = PyList_GET_SIZE(%s);' % (length_temp, target_list))
            code.putln("if (unlikely(%s < %d)) {" % (length_temp, len(unpacked_fixed_items_right)))
            code.putln("__Pyx_RaiseNeedMoreValuesError(%d+%s); %s" % (
                 len(unpacked_fixed_items_left), length_temp,
                 code.error_goto(self.pos)))
            code.putln('}')

            for item in unpacked_fixed_items_right[::-1]:
                item.allocate(code)
            for i, (item, coerced_arg) in enumerate(zip(unpacked_fixed_items_right[::-1],
                                                        self.coerced_unpacked_items[::-1])):
                code.putln('#if CYTHON_COMPILING_IN_CPYTHON')
                code.putln("%s = PyList_GET_ITEM(%s, %s-%d); " % (
                    item.py_result(), target_list, length_temp, i+1))
                # resize the list the hard way
                code.putln("((PyVarObject*)%s)->ob_size--;" % target_list)
                code.putln('#else')
                code.putln("%s = PySequence_ITEM(%s, %s-%d); " % (
                    item.py_result(), target_list, length_temp, i+1))
                code.putln('#endif')
                code.put_gotref(item.py_result())
                coerced_arg.generate_evaluation_code(code)

            code.putln('#if !CYTHON_COMPILING_IN_CPYTHON')
            sublist_temp = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
            code.putln('%s = PySequence_GetSlice(%s, 0, %s-%d); %s' % (
                sublist_temp, target_list, length_temp, len(unpacked_fixed_items_right),
                code.error_goto_if_null(sublist_temp, self.pos)))
            code.put_gotref(sublist_temp)
            code.funcstate.release_temp(length_temp)
            code.put_decref(target_list, py_object_type)
            code.putln('%s = %s; %s = NULL;' % (target_list, sublist_temp, sublist_temp))
            code.putln('#else')
            code.putln('(void)%s;' % sublist_temp)  # avoid warning about unused variable
            code.funcstate.release_temp(sublist_temp)
            code.putln('#endif')

        for i, arg in enumerate(self.args):
            arg.generate_assignment_code(self.coerced_unpacked_items[i], code)

    def annotate(self, code):
        for arg in self.args:
            arg.annotate(code)
        if self.unpacked_items:
            for arg in self.unpacked_items:
                arg.annotate(code)
            for arg in self.coerced_unpacked_items:
                arg.annotate(code)
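# --- Editorial illustration (not part of the original source). ---
# generate_starred_assignment_code above handles fixed targets on both
# sides of the star; the plain-Python behaviour it implements:
def _demo_middle_star():
    # hypothetical demo helper, not used by the compiler itself
    first, *middle, last = range(5)
    assert first == 0 and middle == [1, 2, 3] and last == 4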
class TupleNode(SequenceNode):
    #  Tuple constructor.

    type = tuple_type
    is_partly_literal = False

    gil_message = "Constructing Python tuple"

    def infer_type(self, env):
        if self.mult_factor or not self.args:
            return tuple_type
        arg_types = [arg.infer_type(env) for arg in self.args]
        if any(type.is_pyobject or type.is_memoryviewslice or type.is_unspecified or type.is_fused
               for type in arg_types):
            return tuple_type
        return env.declare_tuple_type(self.pos, arg_types).type

    def analyse_types(self, env, skip_children=False):
        if len(self.args) == 0:
            self.is_temp = False
            self.is_literal = True
            return self

        if not skip_children:
            for i, arg in enumerate(self.args):
                if arg.is_starred:
                    arg.starred_expr_allowed_here = True
                self.args[i] = arg.analyse_types(env)
        if (not self.mult_factor and
                not any((arg.is_starred or arg.type.is_pyobject or arg.type.is_memoryviewslice or arg.type.is_fused)
                        for arg in self.args)):
            self.type = env.declare_tuple_type(self.pos, (arg.type for arg in self.args)).type
            self.is_temp = 1
            return self

        node = SequenceNode.analyse_types(self, env, skip_children=True)
        node = node._create_merge_node_if_necessary(env)
        if not node.is_sequence_constructor:
            return node

        if not all(child.is_literal for child in node.args):
            return node
        if not node.mult_factor or (
                node.mult_factor.is_literal and
                isinstance(node.mult_factor.constant_result, _py_int_types)):
            node.is_temp = False
            node.is_literal = True
        else:
            if not node.mult_factor.type.is_pyobject:
                node.mult_factor = node.mult_factor.coerce_to_pyobject(env)
            node.is_temp = True
            node.is_partly_literal = True
        return node

    def analyse_as_type(self, env):
        # ctuple type
        if not self.args:
            return None
        item_types = [arg.analyse_as_type(env) for arg in self.args]
        if any(t is None for t in item_types):
            return None
        entry = env.declare_tuple_type(self.pos, item_types)
        return entry.type

    def coerce_to(self, dst_type, env):
        if self.type.is_ctuple:
            if dst_type.is_ctuple and self.type.size == dst_type.size:
                return self.coerce_to_ctuple(dst_type, env)
            elif dst_type is tuple_type or dst_type is py_object_type:
                coerced_args = [arg.coerce_to_pyobject(env) for arg in self.args]
                return TupleNode(self.pos, args=coerced_args, type=tuple_type, is_temp=1).analyse_types(env, skip_children=True)
            else:
                return self.coerce_to_pyobject(env).coerce_to(dst_type, env)
        elif dst_type.is_ctuple and not self.mult_factor:
            return self.coerce_to_ctuple(dst_type, env)
        else:
            return SequenceNode.coerce_to(self, dst_type, env)

    def as_list(self):
        t = ListNode(self.pos, args=self.args, mult_factor=self.mult_factor)
        if isinstance(self.constant_result, tuple):
            t.constant_result = list(self.constant_result)
        return t

    def is_simple(self):
        # either temp or constant => always simple
        return True

    def nonlocally_immutable(self):
        # either temp or constant => always safe
        return True

    def calculate_result_code(self):
        if len(self.args) > 0:
            return self.result_code
        else:
            return Naming.empty_tuple

    def calculate_constant_result(self):
        self.constant_result = tuple([
            arg.constant_result for arg in self.args])

    def compile_time_value(self, denv):
        values = self.compile_time_value_list(denv)
        try:
            return tuple(values)
        except Exception as e:
            self.compile_time_value_error(e)

    def generate_operation_code(self, code):
        if len(self.args) == 0:
            # result_code is Naming.empty_tuple
            return

        if self.is_literal or self.is_partly_literal:
            # The "mult_factor" is part of the deduplication if it is also constant, i.e. when
            # we deduplicate the multiplied result.  Otherwise, only deduplicate the constant part.
            dedup_key = make_dedup_key(self.type, [self.mult_factor if self.is_literal else None] + self.args)
            tuple_target = code.get_py_const(py_object_type, 'tuple', cleanup_level=2, dedup_key=dedup_key)
            const_code = code.get_cached_constants_writer(tuple_target)
            if const_code is not None:
                # constant is not yet initialised
                const_code.mark_pos(self.pos)
                self.generate_sequence_packing_code(const_code, tuple_target, plain=not self.is_literal)
                const_code.put_giveref(tuple_target)
            if self.is_literal:
                self.result_code = tuple_target
            else:
                code.putln('%s = PyNumber_Multiply(%s, %s); %s' % (
                    self.result(), tuple_target, self.mult_factor.py_result(),
                    code.error_goto_if_null(self.result(), self.pos)
                ))
                code.put_gotref(self.py_result())
        else:
            self.type.entry.used = True
            self.generate_sequence_packing_code(code)
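# --- Editorial illustration (not part of the original source). ---
# TupleNode turns literal tuples into deduplicated module-level constants;
# a multiplied literal with a compile-time integer factor is folded into a
# single constant as well.  The value-level result it caches:
def _demo_tuple_literal():
    # hypothetical demo helper, not used by the compiler itself
    t = (1, 2) * 3
    assert t == (1, 2, 1, 2, 1, 2)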
class ListNode(SequenceNode):
    #  List constructor.
    #
    #  obj_conversion_errors    [PyrexError]   used internally
    #  original_args            [ExprNode]     used internally

    obj_conversion_errors = []
    type = list_type
    in_module_scope = False

    gil_message = "Constructing Python list"

    def type_dependencies(self, env):
        return ()

    def infer_type(self, env):
        # TODO: Infer non-object list arrays.
        return list_type

    def analyse_expressions(self, env):
        for arg in self.args:
            if arg.is_starred:
                arg.starred_expr_allowed_here = True
        node = SequenceNode.analyse_expressions(self, env)
        return node.coerce_to_pyobject(env)

    def analyse_types(self, env):
        with local_errors(ignore=True) as errors:
            self.original_args = list(self.args)
            node = SequenceNode.analyse_types(self, env)
        node.obj_conversion_errors = errors
        if env.is_module_scope:
            self.in_module_scope = True
        node = node._create_merge_node_if_necessary(env)
        return node

    def coerce_to(self, dst_type, env):
        if dst_type.is_pyobject:
            for err in self.obj_conversion_errors:
                report_error(err)
            self.obj_conversion_errors = []
            if not self.type.subtype_of(dst_type):
                error(self.pos, "Cannot coerce list to type '%s'" % dst_type)
        elif (dst_type.is_array or dst_type.is_ptr) and dst_type.base_type is not PyrexTypes.c_void_type:
            array_length = len(self.args)
            if self.mult_factor:
                if isinstance(self.mult_factor.constant_result, _py_int_types):
                    if self.mult_factor.constant_result <= 0:
                        error(self.pos, "Cannot coerce non-positively multiplied list to '%s'" % dst_type)
                    else:
                        array_length *= self.mult_factor.constant_result
                else:
                    error(self.pos, "Cannot coerce dynamically multiplied list to '%s'" % dst_type)
            base_type = dst_type.base_type
            self.type = PyrexTypes.CArrayType(base_type, array_length)
            for i in range(len(self.original_args)):
                arg = self.args[i]
                if isinstance(arg, CoerceToPyTypeNode):
                    arg = arg.arg
                self.args[i] = arg.coerce_to(base_type, env)
        elif dst_type.is_cpp_class:
            # TODO(robertwb): Avoid object conversion for vector/list/set.
            return TypecastNode(self.pos, operand=self, type=PyrexTypes.py_object_type).coerce_to(dst_type, env)
        elif self.mult_factor:
            error(self.pos, "Cannot coerce multiplied list to '%s'" % dst_type)
        elif dst_type.is_struct:
            if len(self.args) > len(dst_type.scope.var_entries):
                error(self.pos, "Too many members for '%s'" % dst_type)
            else:
                if len(self.args) < len(dst_type.scope.var_entries):
                    warning(self.pos, "Too few members for '%s'" % dst_type, 1)
                for i, (arg, member) in enumerate(zip(self.original_args, dst_type.scope.var_entries)):
                    if isinstance(arg, CoerceToPyTypeNode):
                        arg = arg.arg
                    self.args[i] = arg.coerce_to(member.type, env)
            self.type = dst_type
        elif dst_type.is_ctuple:
            return self.coerce_to_ctuple(dst_type, env)
        else:
            self.type = error_type
            error(self.pos, "Cannot coerce list to type '%s'" % dst_type)
        return self

    def as_list(self):  # dummy for compatibility with TupleNode
        return self

    def as_tuple(self):
        t = TupleNode(self.pos, args=self.args, mult_factor=self.mult_factor)
        if isinstance(self.constant_result, list):
            t.constant_result = tuple(self.constant_result)
        return t

    def allocate_temp_result(self, code):
        if self.type.is_array and self.in_module_scope:
            self.temp_code = code.funcstate.allocate_temp(
                self.type, manage_ref=False, static=True)
        else:
            SequenceNode.allocate_temp_result(self, code)

    def release_temp_result(self, env):
        if self.type.is_array:
            # To be valid C++, we must allocate the memory on the stack
            # manually and be sure not to reuse it for something else.
            # Yes, this means that we leak a temp array variable.
            pass
        else:
            SequenceNode.release_temp_result(self, env)

    def calculate_constant_result(self):
        if self.mult_factor:
            raise ValueError()  # may exceed the compile time memory
        self.constant_result = [
            arg.constant_result for arg in self.args]

    def compile_time_value(self, denv):
        l = self.compile_time_value_list(denv)
        if self.mult_factor:
            l *= self.mult_factor.compile_time_value(denv)
        return l

    def generate_operation_code(self, code):
        if self.type.is_pyobject:
            for err in self.obj_conversion_errors:
                report_error(err)
            self.generate_sequence_packing_code(code)
        elif self.type.is_array:
            if self.mult_factor:
                code.putln("{")
                code.putln("Py_ssize_t %s;" % Naming.quick_temp_cname)
                code.putln("for ({i} = 0; {i} < {count}; {i}++) {{".format(
                    i=Naming.quick_temp_cname, count=self.mult_factor.result()))
                offset = '+ (%d * %s)' % (len(self.args), Naming.quick_temp_cname)
            else:
                offset = ''
            for i, arg in enumerate(self.args):
                if arg.type.is_array:
                    code.globalstate.use_utility_code(UtilityCode.load_cached("IncludeStringH", "StringTools.c"))
                    code.putln("memcpy(&(%s[%s%s]), %s, sizeof(%s[0]));" % (
                        self.result(), i, offset,
                        arg.result(), self.result()
                    ))
                else:
                    code.putln("%s[%s%s] = %s;" % (
                        self.result(),
                        i,
                        offset,
                        arg.result()))
            if self.mult_factor:
                code.putln("}")
                code.putln("}")
        elif self.type.is_struct:
            for arg, member in zip(self.args, self.type.scope.var_entries):
                code.putln("%s.%s = %s;" % (
                    self.result(),
                    member.cname,
                    arg.result()))
        else:
            raise InternalError("List type never specified")


class ScopedExprNode(ExprNode):
    # Abstract base class for ExprNodes that have their own local
    # scope, such as generator expressions.
    #
    # expr_scope    Scope  the inner scope of the expression

    subexprs = []
    expr_scope = None

    # does this node really have a local scope, e.g. does it leak loop
    # variables or not?  non-leaking Py3 behaviour is default, except
    # for list comprehensions where the behaviour differs in Py2 and
    # Py3 (set in Parsing.py based on parser context)
    has_local_scope = True

    def init_scope(self, outer_scope, expr_scope=None):
        if expr_scope is not None:
            self.expr_scope = expr_scope
        elif self.has_local_scope:
            self.expr_scope = Symtab.GeneratorExpressionScope(outer_scope)
        else:
            self.expr_scope = None

    def analyse_declarations(self, env):
        self.init_scope(env)

    def analyse_scoped_declarations(self, env):
        # this is called with the expr_scope as env
        pass

    def analyse_types(self, env):
        # no recursion here, the children will be analysed separately below
        return self

    def analyse_scoped_expressions(self, env):
        # this is called with the expr_scope as env
        return self

    def generate_evaluation_code(self, code):
        # set up local variables and free their references on exit
        generate_inner_evaluation_code = super(ScopedExprNode, self).generate_evaluation_code
        if not self.has_local_scope or not self.expr_scope.var_entries:
            # no local variables => delegate, done
            generate_inner_evaluation_code(code)
            return

        code.putln('{ /* enter inner scope */')
        py_entries = []
        for _, entry in sorted(item for item in self.expr_scope.entries.items() if item[0]):
            if not entry.in_closure:
                if entry.type.is_pyobject and entry.used:
                    py_entries.append(entry)
        if not py_entries:
            # no local Python references => no cleanup required
            generate_inner_evaluation_code(code)
            code.putln('} /* exit inner scope */')
            return

        # must free all local Python references at each exit point
        old_loop_labels = code.new_loop_labels()
        old_error_label = code.new_error_label()

        generate_inner_evaluation_code(code)

        # normal (non-error) exit
        self._generate_vars_cleanup(code, py_entries)

        # error/loop body exit points
        exit_scope = code.new_label('exit_scope')
        code.put_goto(exit_scope)
        for label, old_label in ([(code.error_label, old_error_label)] +
                                 list(zip(code.get_loop_labels(), old_loop_labels))):
            if code.label_used(label):
                code.put_label(label)
                self._generate_vars_cleanup(code, py_entries)
                code.put_goto(old_label)
        code.put_label(exit_scope)
        code.putln('} /* exit inner scope */')

        code.set_loop_labels(old_loop_labels)
        code.error_label = old_error_label

    def _generate_vars_cleanup(self, code, py_entries):
        for entry in py_entries:
            if entry.is_cglobal:
                code.put_var_gotref(entry)
                code.put_decref_set(entry.cname, "Py_None")
            else:
                code.put_var_xdecref_clear(entry)
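# --- Editorial illustration (not part of the original source). ---
# ScopedExprNode's has_local_scope flag models the Py2/Py3 difference in
# comprehension scoping described above; the non-leaking Py3 behaviour:
def _demo_comprehension_scope():
    # hypothetical demo helper, not used by the compiler itself
    [x for x in range(3)]
    try:
        x  # in Py3 the loop variable lives in the comprehension's own scope
    except NameError:
        return True  # the loop variable did not leak
    return False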
class ComprehensionNode(ScopedExprNode):
    # A list/set/dict comprehension

    child_attrs = ["loop"]

    is_temp = True
    constant_result = not_a_constant

    def infer_type(self, env):
        return self.type

    def analyse_declarations(self, env):
        self.append.target = self  # this is used in the PyList_Append of the inner loop
        self.init_scope(env)

    def analyse_scoped_declarations(self, env):
        self.loop.analyse_declarations(env)

    def analyse_types(self, env):
        if not self.has_local_scope:
            self.loop = self.loop.analyse_expressions(env)
        return self

    def analyse_scoped_expressions(self, env):
        if self.has_local_scope:
            self.loop = self.loop.analyse_expressions(env)
        return self

    def may_be_none(self):
        return False

    def generate_result_code(self, code):
        self.generate_operation_code(code)

    def generate_operation_code(self, code):
        if self.type is Builtin.list_type:
            create_code = 'PyList_New(0)'
        elif self.type is Builtin.set_type:
            create_code = 'PySet_New(NULL)'
        elif self.type is Builtin.dict_type:
            create_code = 'PyDict_New()'
        else:
            raise InternalError("illegal type for comprehension: %s" % self.type)
        code.putln('%s = %s; %s' % (
            self.result(), create_code,
            code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.result())
        self.loop.generate_execution_code(code)

    def annotate(self, code):
        self.loop.annotate(code)


class ComprehensionAppendNode(Node):
    # Need to be careful to avoid infinite recursion:
    # target must not be in child_attrs/subexprs

    child_attrs = ['expr']
    target = None

    type = PyrexTypes.c_int_type

    def analyse_expressions(self, env):
        self.expr = self.expr.analyse_expressions(env)
        if not self.expr.type.is_pyobject:
            self.expr = self.expr.coerce_to_pyobject(env)
        return self

    def generate_execution_code(self, code):
        if self.target.type is list_type:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("ListCompAppend", "Optimize.c"))
            function = "__Pyx_ListComp_Append"
        elif self.target.type is set_type:
            function = "PySet_Add"
        else:
            raise InternalError(
                "Invalid type for comprehension node: %s" % self.target.type)

        self.expr.generate_evaluation_code(code)
        code.putln(code.error_goto_if("%s(%s, (PyObject*)%s)" % (
            function,
            self.target.result(),
            self.expr.result()
            ), self.pos))
        self.expr.generate_disposal_code(code)
        self.expr.free_temps(code)

    def generate_function_definitions(self, env, code):
        self.expr.generate_function_definitions(env, code)

    def annotate(self, code):
        self.expr.annotate(code)


class DictComprehensionAppendNode(ComprehensionAppendNode):
    child_attrs = ['key_expr', 'value_expr']

    def analyse_expressions(self, env):
        self.key_expr = self.key_expr.analyse_expressions(env)
        if not self.key_expr.type.is_pyobject:
            self.key_expr = self.key_expr.coerce_to_pyobject(env)
        self.value_expr = self.value_expr.analyse_expressions(env)
        if not self.value_expr.type.is_pyobject:
            self.value_expr = self.value_expr.coerce_to_pyobject(env)
        return self

    def generate_execution_code(self, code):
        self.key_expr.generate_evaluation_code(code)
        self.value_expr.generate_evaluation_code(code)
        code.putln(code.error_goto_if("PyDict_SetItem(%s, (PyObject*)%s, (PyObject*)%s)" % (
            self.target.result(),
            self.key_expr.result(),
            self.value_expr.result()
            ), self.pos))
        self.key_expr.generate_disposal_code(code)
        self.key_expr.free_temps(code)
        self.value_expr.generate_disposal_code(code)
        self.value_expr.free_temps(code)

    def generate_function_definitions(self, env, code):
        self.key_expr.generate_function_definitions(env, code)
        self.value_expr.generate_function_definitions(env, code)

    def annotate(self, code):
        self.key_expr.annotate(code)
        self.value_expr.annotate(code)
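# --- Editorial illustration (not part of the original source). ---
# DictComprehensionAppendNode emits one PyDict_SetItem per iteration; the
# Python-level behaviour it implements:
def _demo_dict_comprehension():
    # hypothetical demo helper, not used by the compiler itself
    d = {k: k * k for k in range(3)}
    assert d == {0: 0, 1: 1, 2: 4}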
class InlinedGeneratorExpressionNode(ExprNode):
    # An inlined generator expression for which the result is calculated
    # inside of the loop and returned as a single, first and only Generator
    # return value.
    # This will only be created by transforms when replacing safe builtin
    # calls on generator expressions.
    #
    # gen            GeneratorExpressionNode      the generator, not containing any YieldExprNodes
    # orig_func      String                       the name of the builtin function this node replaces
    # target         ExprNode or None             a 'target' for a ComprehensionAppend node

    subexprs = ["gen"]
    orig_func = None
    target = None
    is_temp = True
    type = py_object_type

    def __init__(self, pos, gen, comprehension_type=None, **kwargs):
        gbody = gen.def_node.gbody
        gbody.is_inlined = True
        if comprehension_type is not None:
            assert comprehension_type in (list_type, set_type, dict_type), comprehension_type
            gbody.inlined_comprehension_type = comprehension_type
            kwargs.update(
                target=RawCNameExprNode(pos, comprehension_type, Naming.retval_cname),
                type=comprehension_type,
            )
        super(InlinedGeneratorExpressionNode, self).__init__(pos, gen=gen, **kwargs)

    def may_be_none(self):
        return self.orig_func not in ('any', 'all', 'sorted')

    def infer_type(self, env):
        return self.type

    def analyse_types(self, env):
        self.gen = self.gen.analyse_expressions(env)
        return self

    def generate_result_code(self, code):
        code.putln("%s = __Pyx_Generator_Next(%s); %s" % (
            self.result(), self.gen.result(),
            code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.result())


class MergedSequenceNode(ExprNode):
    """
    Merge a sequence of iterables into a set/list/tuple.

    The target collection is determined by self.type, which must be set externally.

    args    [ExprNode]
    """
    subexprs = ['args']
    is_temp = True
    gil_message = "Constructing Python collection"

    def __init__(self, pos, args, type):
        if type in (list_type, tuple_type) and args and args[0].is_sequence_constructor:
            # construct a list directly from the first argument that we can then extend
            if args[0].type is not list_type:
                args[0] = ListNode(args[0].pos, args=args[0].args, is_temp=True)
        ExprNode.__init__(self, pos, args=args, type=type)

    def calculate_constant_result(self):
        result = []
        for item in self.args:
            if item.is_sequence_constructor and item.mult_factor:
                if item.mult_factor.constant_result <= 0:
                    continue
                # otherwise, adding each item once should be enough
            if item.is_set_literal or item.is_sequence_constructor:
                # process items in order
                items = (arg.constant_result for arg in item.args)
            else:
                items = item.constant_result
            result.extend(items)
        if self.type is set_type:
            result = set(result)
        elif self.type is tuple_type:
            result = tuple(result)
        else:
            assert self.type is list_type
        self.constant_result = result

    def compile_time_value(self, denv):
        result = []
        for item in self.args:
            if item.is_sequence_constructor and item.mult_factor:
                if item.mult_factor.compile_time_value(denv) <= 0:
                    continue
            if item.is_set_literal or item.is_sequence_constructor:
                # process items in order
                items = (arg.compile_time_value(denv) for arg in item.args)
            else:
                items = item.compile_time_value(denv)
            result.extend(items)
        if self.type is set_type:
            try:
                result = set(result)
            except Exception as e:
                self.compile_time_value_error(e)
        elif self.type is tuple_type:
            result = tuple(result)
        else:
            assert self.type is list_type
        return result

    def type_dependencies(self, env):
        return ()

    def infer_type(self, env):
        return self.type

    def analyse_types(self, env):
        args = [
            arg.analyse_types(env).coerce_to_pyobject(env).as_none_safe_node(
                # FIXME: CPython's error message starts with the runtime function name
                'argument after * must be an iterable, not NoneType')
            for arg in self.args
        ]

        if len(args) == 1 and args[0].type is self.type:
            # strip this intermediate node and use the bare collection
            return args[0]

        assert self.type in (set_type, list_type, tuple_type)

        self.args = args
        return self
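    # --- Editorial illustration (not part of the original source). ---
    # MergedSequenceNode implements PEP 448 style iterable merging; the
    # Python-level behaviour it generates code for:
    #
    #   >>> a, b = [1, 2], (3, 4)
    #   >>> [*a, *b]
    #   [1, 2, 3, 4]
    #   >>> {*a, *b} == {1, 2, 3, 4}
    #   True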
    def may_be_none(self):
        return False

    def generate_evaluation_code(self, code):
        code.mark_pos(self.pos)
        self.allocate_temp_result(code)

        is_set = self.type is set_type

        args = iter(self.args)
        item = next(args)
        item.generate_evaluation_code(code)
        if (is_set and item.is_set_literal or
                not is_set and item.is_sequence_constructor and item.type is list_type):
            code.putln("%s = %s;" % (self.result(), item.py_result()))
            item.generate_post_assignment_code(code)
        else:
            code.putln("%s = %s(%s); %s" % (
                self.result(),
                'PySet_New' if is_set else 'PySequence_List',
                item.py_result(),
                code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
            item.generate_disposal_code(code)
        item.free_temps(code)

        helpers = set()
        if is_set:
            add_func = "PySet_Add"
            extend_func = "__Pyx_PySet_Update"
        else:
            add_func = "__Pyx_ListComp_Append"
            extend_func = "__Pyx_PyList_Extend"

        for item in args:
            if (is_set and (item.is_set_literal or item.is_sequence_constructor) or
                    (item.is_sequence_constructor and not item.mult_factor)):
                if not is_set and item.args:
                    helpers.add(("ListCompAppend", "Optimize.c"))
                for arg in item.args:
                    arg.generate_evaluation_code(code)
                    code.put_error_if_neg(arg.pos, "%s(%s, %s)" % (
                        add_func,
                        self.result(),
                        arg.py_result()))
                    arg.generate_disposal_code(code)
                    arg.free_temps(code)
                continue

            if is_set:
                helpers.add(("PySet_Update", "Builtins.c"))
            else:
                helpers.add(("ListExtend", "Optimize.c"))

            item.generate_evaluation_code(code)
            code.put_error_if_neg(item.pos, "%s(%s, %s)" % (
                extend_func,
                self.result(),
                item.py_result()))
            item.generate_disposal_code(code)
            item.free_temps(code)

        if self.type is tuple_type:
            code.putln("{")
            code.putln("PyObject *%s = PyList_AsTuple(%s);" % (
                Naming.quick_temp_cname,
                self.result()))
            code.put_decref(self.result(), py_object_type)
            code.putln("%s = %s; %s" % (
                self.result(),
                Naming.quick_temp_cname,
                code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.result())
            code.putln("}")

        for helper in sorted(helpers):
            code.globalstate.use_utility_code(UtilityCode.load_cached(*helper))

    def annotate(self, code):
        for item in self.args:
            item.annotate(code)


class SetNode(ExprNode):
    """
    Set constructor.
    """
    subexprs = ['args']
    type = set_type
    is_set_literal = True
    gil_message = "Constructing Python set"

    def analyse_types(self, env):
        for i in range(len(self.args)):
            arg = self.args[i]
            arg = arg.analyse_types(env)
            self.args[i] = arg.coerce_to_pyobject(env)
        self.type = set_type
        self.is_temp = 1
        return self

    def may_be_none(self):
        return False

    def calculate_constant_result(self):
        self.constant_result = set([arg.constant_result for arg in self.args])

    def compile_time_value(self, denv):
        values = [arg.compile_time_value(denv) for arg in self.args]
        try:
            return set(values)
        except Exception as e:
            self.compile_time_value_error(e)

    def generate_evaluation_code(self, code):
        for arg in self.args:
            arg.generate_evaluation_code(code)
        self.allocate_temp_result(code)
        code.putln(
            "%s = PySet_New(0); %s" % (
                self.result(),
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())
        for arg in self.args:
            code.put_error_if_neg(
                self.pos,
                "PySet_Add(%s, %s)" % (self.result(), arg.py_result()))
            arg.generate_disposal_code(code)
            arg.free_temps(code)


class DictNode(ExprNode):
    #  Dictionary constructor.
    #
    #  key_value_pairs         [DictItemNode]
    #  exclude_null_values     [boolean]          Do not add NULL values to dict
    #
    #  obj_conversion_errors   [PyrexError]       used internally

    subexprs = ['key_value_pairs']
    is_temp = 1
    exclude_null_values = False
    type = dict_type
    is_dict_literal = True
    reject_duplicates = False

    obj_conversion_errors = []

    @classmethod
    def from_pairs(cls, pos, pairs):
        return cls(pos, key_value_pairs=[
            DictItemNode(pos, key=k, value=v) for k, v in pairs])

    def calculate_constant_result(self):
        self.constant_result = dict([
            item.constant_result for item in self.key_value_pairs])

    def compile_time_value(self, denv):
        pairs = [(item.key.compile_time_value(denv), item.value.compile_time_value(denv))
                 for item in self.key_value_pairs]
        try:
            return dict(pairs)
        except Exception as e:
            self.compile_time_value_error(e)

    def type_dependencies(self, env):
        return ()

    def infer_type(self, env):
        # TODO: Infer struct constructors.
        return dict_type

    def analyse_types(self, env):
        with local_errors(ignore=True) as errors:
            self.key_value_pairs = [
                item.analyse_types(env)
                for item in self.key_value_pairs
            ]
        self.obj_conversion_errors = errors
        return self

    def may_be_none(self):
        return False

    def coerce_to(self, dst_type, env):
        if dst_type.is_pyobject:
            self.release_errors()
            if self.type.is_struct_or_union:
                if not dict_type.subtype_of(dst_type):
                    error(self.pos, "Cannot interpret struct as non-dict type '%s'" % dst_type)
                return DictNode(self.pos, key_value_pairs=[
                    DictItemNode(item.pos, key=item.key.coerce_to_pyobject(env),
                                 value=item.value.coerce_to_pyobject(env))
                    for item in self.key_value_pairs])
            if not self.type.subtype_of(dst_type):
                error(self.pos, "Cannot interpret dict as type '%s'" % dst_type)
        elif dst_type.is_struct_or_union:
            self.type = dst_type
            if not dst_type.is_struct and len(self.key_value_pairs) != 1:
                error(self.pos, "Exactly one field must be specified to convert to union '%s'" % dst_type)
            elif dst_type.is_struct and len(self.key_value_pairs) < len(dst_type.scope.var_entries):
                warning(self.pos, "Not all members given for struct '%s'" % dst_type, 1)
            for item in self.key_value_pairs:
                if isinstance(item.key, CoerceToPyTypeNode):
                    item.key = item.key.arg
                if not item.key.is_string_literal:
                    error(item.key.pos, "Invalid struct field identifier")
                    item.key = StringNode(item.key.pos, value="")
                else:
                    key = str(item.key.value)  # converts string literals to unicode in Py3
                    member = dst_type.scope.lookup_here(key)
                    if not member:
                        error(item.key.pos, "struct '%s' has no field '%s'" % (dst_type, key))
                    else:
                        value = item.value
                        if isinstance(value, CoerceToPyTypeNode):
                            value = value.arg
                        item.value = value.coerce_to(member.type, env)
        else:
            self.type = error_type
            error(self.pos, "Cannot interpret dict as type '%s'" % dst_type)
        return self

    def release_errors(self):
        for err in self.obj_conversion_errors:
            report_error(err)
        self.obj_conversion_errors = []

    gil_message = "Constructing Python dict"
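    # --- Editorial illustration (not part of the original source). ---
    # The duplicate-key tracking in generate_evaluation_code below mirrors
    # CPython's behaviour for keyword dicts built from ** arguments (the
    # exact message wording varies by Python version):
    #
    #   >>> def f(**kw): pass
    #   >>> f(a=1, **{'a': 2})
    #   TypeError: f() got multiple values for keyword argument 'a'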
    def generate_evaluation_code(self, code):
        #  Custom method used here because key-value
        #  pairs are evaluated and used one at a time.
        code.mark_pos(self.pos)
        self.allocate_temp_result(code)

        is_dict = self.type.is_pyobject
        if is_dict:
            self.release_errors()
            code.putln(
                "%s = __Pyx_PyDict_NewPresized(%d); %s" % (
                    self.result(),
                    len(self.key_value_pairs),
                    code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())

        keys_seen = set()
        key_type = None
        needs_error_helper = False

        for item in self.key_value_pairs:
            item.generate_evaluation_code(code)
            if is_dict:
                if self.exclude_null_values:
                    code.putln('if (%s) {' % item.value.py_result())
                key = item.key
                if self.reject_duplicates:
                    if keys_seen is not None:
                        # avoid runtime 'in' checks for literals that we can do at compile time
                        if not key.is_string_literal:
                            keys_seen = None
                        elif key.value in keys_seen:
                            # FIXME: this could be a compile time error, at least in Cython code
                            keys_seen = None
                        elif key_type is not type(key.value):
                            if key_type is None:
                                key_type = type(key.value)
                                keys_seen.add(key.value)
                            else:
                                # different types => may not be able to compare at compile time
                                keys_seen = None
                        else:
                            keys_seen.add(key.value)

                    if keys_seen is None:
                        code.putln('if (unlikely(PyDict_Contains(%s, %s))) {' % (
                            self.result(), key.py_result()))
                        # currently only used in function calls
                        needs_error_helper = True
                        code.putln('__Pyx_RaiseDoubleKeywordsError("function", %s); %s' % (
                            key.py_result(),
                            code.error_goto(item.pos)))
                        code.putln("} else {")

                code.put_error_if_neg(self.pos, "PyDict_SetItem(%s, %s, %s)" % (
                    self.result(),
                    item.key.py_result(),
                    item.value.py_result()))
                if self.reject_duplicates and keys_seen is None:
                    code.putln('}')
                if self.exclude_null_values:
                    code.putln('}')
            else:
                code.putln("%s.%s = %s;" % (
                    self.result(),
                    item.key.value,
                    item.value.result()))
            item.generate_disposal_code(code)
            item.free_temps(code)

        if needs_error_helper:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("RaiseDoubleKeywords", "FunctionArguments.c"))

    def annotate(self, code):
        for item in self.key_value_pairs:
            item.annotate(code)


class DictItemNode(ExprNode):
    # Represents a single item in a DictNode
    #
    # key          ExprNode
    # value        ExprNode
    subexprs = ['key', 'value']

    nogil_check = None  # Parent DictNode takes care of it

    def calculate_constant_result(self):
        self.constant_result = (
            self.key.constant_result, self.value.constant_result)

    def analyse_types(self, env):
        self.key = self.key.analyse_types(env)
        self.value = self.value.analyse_types(env)
        self.key = self.key.coerce_to_pyobject(env)
        self.value = self.value.coerce_to_pyobject(env)
        return self

    def generate_evaluation_code(self, code):
        self.key.generate_evaluation_code(code)
        self.value.generate_evaluation_code(code)

    def generate_disposal_code(self, code):
        self.key.generate_disposal_code(code)
        self.value.generate_disposal_code(code)

    def free_temps(self, code):
        self.key.free_temps(code)
        self.value.free_temps(code)

    def __iter__(self):
        return iter([self.key, self.value])


class SortedDictKeysNode(ExprNode):
    # build sorted list of dict keys, e.g. for dir()
    subexprs = ['arg']

    is_temp = True

    def __init__(self, arg):
        ExprNode.__init__(self, arg.pos, arg=arg)
        self.type = Builtin.list_type

    def analyse_types(self, env):
        arg = self.arg.analyse_types(env)
        if arg.type is Builtin.dict_type:
            arg = arg.as_none_safe_node(
                "'NoneType' object is not iterable")
        self.arg = arg
        return self

    def may_be_none(self):
        return False

    def generate_result_code(self, code):
        dict_result = self.arg.py_result()
        if self.arg.type is Builtin.dict_type:
            code.putln('%s = PyDict_Keys(%s); %s' % (
                self.result(), dict_result,
                code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
        else:
            # originally used PyMapping_Keys() here, but that may return a tuple
            code.globalstate.use_utility_code(UtilityCode.load_cached(
                'PyObjectCallMethod0', 'ObjectHandling.c'))
            keys_cname = code.intern_identifier(StringEncoding.EncodedString("keys"))
            code.putln('%s = __Pyx_PyObject_CallMethod0(%s, %s); %s' % (
                self.result(), dict_result, keys_cname,
                code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
            code.putln("if (unlikely(!PyList_Check(%s))) {" % self.result())
            code.put_decref_set(self.result(), "PySequence_List(%s)" % self.result())
            code.putln(code.error_goto_if_null(self.result(), self.pos))
            code.put_gotref(self.py_result())
            code.putln("}")
        code.put_error_if_neg(
            self.pos, 'PyList_Sort(%s)' % self.py_result())


class ModuleNameMixin(object):
    def get_py_mod_name(self, code):
        return code.get_py_string_const(
            self.module_name, identifier=True)

    def get_py_qualified_name(self, code):
        return code.get_py_string_const(
            self.qualname, identifier=True)


class ClassNode(ExprNode, ModuleNameMixin):
    #  Helper class used in the implementation of Python
    #  class definitions. Constructs a class object given
    #  a name, tuple of bases and class dictionary.
    #
    #  name         EncodedString      Name of the class
    #  bases        ExprNode           Base class tuple
    #  dict         ExprNode           Class dict (not owned by this node)
    #  doc          ExprNode or None   Doc string
    #  module_name  EncodedString      Name of defining module

    subexprs = ['bases', 'doc']
    type = py_object_type
    is_temp = True

    def infer_type(self, env):
        # TODO: could return 'type' in some cases
        return py_object_type

    def analyse_types(self, env):
        self.bases = self.bases.analyse_types(env)
        if self.doc:
            self.doc = self.doc.analyse_types(env)
            self.doc = self.doc.coerce_to_pyobject(env)
        env.use_utility_code(UtilityCode.load_cached("CreateClass", "ObjectHandling.c"))
        return self

    def may_be_none(self):
        return True

    gil_message = "Constructing Python class"

    def generate_result_code(self, code):
        cname = code.intern_identifier(self.name)

        if self.doc:
            code.put_error_if_neg(self.pos,
                'PyDict_SetItem(%s, %s, %s)' % (
                    self.dict.py_result(),
                    code.intern_identifier(
                        StringEncoding.EncodedString("__doc__")),
                    self.doc.py_result()))
        py_mod_name = self.get_py_mod_name(code)
        qualname = self.get_py_qualified_name(code)
        code.putln(
            '%s = __Pyx_CreateClass(%s, %s, %s, %s, %s); %s' % (
                self.result(),
                self.bases.py_result(),
                self.dict.py_result(),
                cname,
                qualname,
                py_mod_name,
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())
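# --- Editorial illustration (not part of the original source). ---
# ClassNode builds the class object at runtime, conceptually much like the
# three-argument form of type():
def _demo_dynamic_class():
    # hypothetical demo helper, not used by the compiler itself
    C = type('C', (object,), {'__doc__': 'docstring', 'x': 1})
    assert C.x == 1 and C.__name__ == 'C'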
    #
    #  name                 EncodedString  Name of the class
    #  dict                 ExprNode       Class dict (not owned by this node)
    #  module_name          EncodedString  Name of defining module
    #  calculate_metaclass  bool           should call CalculateMetaclass()
    #  allow_py2_metaclass  bool           should look for Py2 metaclass

    subexprs = []
    type = py_object_type
    is_temp = True

    def infer_type(self, env):
        # TODO: could return 'type' in some cases
        return py_object_type

    def analyse_types(self, env):
        return self

    def may_be_none(self):
        return True

    gil_message = "Constructing Python class"

    def generate_result_code(self, code):
        code.globalstate.use_utility_code(UtilityCode.load_cached("Py3ClassCreate", "ObjectHandling.c"))
        cname = code.intern_identifier(self.name)
        if self.mkw:
            mkw = self.mkw.py_result()
        else:
            mkw = 'NULL'
        if self.metaclass:
            metaclass = self.metaclass.py_result()
        else:
            metaclass = "((PyObject*)&__Pyx_DefaultClassType)"
        code.putln(
            '%s = __Pyx_Py3ClassCreate(%s, %s, %s, %s, %s, %d, %d); %s' % (
                self.result(),
                metaclass,
                cname,
                self.bases.py_result(),
                self.dict.py_result(),
                mkw,
                self.calculate_metaclass,
                self.allow_py2_metaclass,
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())


class PyClassMetaclassNode(ExprNode):
    # Helper class that holds the Python3 metaclass object
    #
    #  bases  ExprNode  Base class tuple (not owned by this node)
    #  mkw    ExprNode  Class keyword arguments (not owned by this node)

    subexprs = []

    def analyse_types(self, env):
        self.type = py_object_type
        self.is_temp = True
        return self

    def may_be_none(self):
        return True

    def generate_result_code(self, code):
        if self.mkw:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("Py3MetaclassGet", "ObjectHandling.c"))
            call = "__Pyx_Py3MetaclassGet(%s, %s)" % (
                self.bases.result(),
                self.mkw.result())
        else:
            code.globalstate.use_utility_code(
                UtilityCode.load_cached("CalculateMetaclass", "ObjectHandling.c"))
            call = "__Pyx_CalculateMetaclass(NULL, %s)" % (
                self.bases.result())
        code.putln(
            "%s = %s; %s" % (
                self.result(), call,
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())


class PyClassNamespaceNode(ExprNode, ModuleNameMixin):
    # Helper class that holds the Python3 namespace object
    #
    # Apart from 'doc', none of these are owned by this node:
    #  metaclass  ExprNode          Metaclass object
    #  bases      ExprNode          Base class tuple
    #  mkw        ExprNode          Class keyword arguments
    #  doc        ExprNode or None  Doc string (owned)

    subexprs = ['doc']

    def analyse_types(self, env):
        if self.doc:
            self.doc = self.doc.analyse_types(env)
            self.doc = self.doc.coerce_to_pyobject(env)
        self.type = py_object_type
        self.is_temp = 1
        return self

    def may_be_none(self):
        return True

    def generate_result_code(self, code):
        cname = code.intern_identifier(self.name)
        py_mod_name = self.get_py_mod_name(code)
        qualname = self.get_py_qualified_name(code)
        if self.doc:
            doc_code = self.doc.result()
        else:
            doc_code = '(PyObject *) NULL'
        if self.mkw:
            mkw = self.mkw.py_result()
        else:
            mkw = '(PyObject *) NULL'
        if self.metaclass:
            metaclass = self.metaclass.py_result()
        else:
            metaclass = "(PyObject *) NULL"
        code.putln(
            "%s = __Pyx_Py3MetaclassPrepare(%s, %s, %s, %s, %s, %s, %s); %s" % (
                self.result(),
                metaclass,
                self.bases.result(),
                cname,
                qualname,
                mkw,
                py_mod_name,
                doc_code,
                code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.py_result())


class ClassCellInjectorNode(ExprNode):
    # Initialize CyFunction.func_classobj

    is_temp = True
    type = py_object_type
    subexprs = []
    is_active = False

    def analyse_expressions(self, env):
        return self

    def generate_evaluation_code(self, code):
        if self.is_active:
            self.allocate_temp_result(code)
code.putln( '%s = PyList_New(0); %s' % ( self.result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.result()) def generate_injection_code(self, code, classobj_cname): if self.is_active: code.globalstate.use_utility_code( UtilityCode.load_cached("CyFunctionClassCell", "CythonFunction.c")) code.put_error_if_neg(self.pos, '__Pyx_CyFunction_InitClassCell(%s, %s)' % ( self.result(), classobj_cname)) class ClassCellNode(ExprNode): # Class Cell for noargs super() subexprs = [] is_temp = True is_generator = False type = py_object_type def analyse_types(self, env): return self def generate_result_code(self, code): if not self.is_generator: code.putln('%s = __Pyx_CyFunction_GetClassObj(%s);' % ( self.result(), Naming.self_cname)) else: code.putln('%s = %s->classobj;' % ( self.result(), Naming.generator_cname)) code.putln( 'if (!%s) { PyErr_SetString(PyExc_SystemError, ' '"super(): empty __class__ cell"); %s }' % ( self.result(), code.error_goto(self.pos))) code.put_incref(self.result(), py_object_type) class PyCFunctionNode(ExprNode, ModuleNameMixin): # Helper class used in the implementation of Python # functions. Constructs a PyCFunction object # from a PyMethodDef struct. # # pymethdef_cname string PyMethodDef structure # self_object ExprNode or None # binding bool # def_node DefNode the Python function node # module_name EncodedString Name of defining module # code_object CodeObjectNode the PyCodeObject creator node subexprs = ['code_object', 'defaults_tuple', 'defaults_kwdict', 'annotations_dict'] self_object = None code_object = None binding = False def_node = None defaults = None defaults_struct = None defaults_pyobjects = 0 defaults_tuple = None defaults_kwdict = None annotations_dict = None type = py_object_type is_temp = 1 specialized_cpdefs = None is_specialization = False @classmethod def from_defnode(cls, node, binding): return cls(node.pos, def_node=node, pymethdef_cname=node.entry.pymethdef_cname, binding=binding or node.specialized_cpdefs, specialized_cpdefs=node.specialized_cpdefs, code_object=CodeObjectNode(node)) def analyse_types(self, env): if self.binding: self.analyse_default_args(env) return self def analyse_default_args(self, env): """ Handle non-literal function's default arguments. """ nonliteral_objects = [] nonliteral_other = [] default_args = [] default_kwargs = [] annotations = [] # For global cpdef functions and def/cpdef methods in cdef classes, we must use global constants # for default arguments to avoid the dependency on the CyFunction object as 'self' argument # in the underlying C function. Basically, cpdef functions/methods are static C functions, # so their optional arguments must be static, too. 
# TODO: change CyFunction implementation to pass both function object and owning object for method calls must_use_constants = env.is_c_class_scope or (self.def_node.is_wrapper and env.is_module_scope) for arg in self.def_node.args: if arg.default and not must_use_constants: if not arg.default.is_literal: arg.is_dynamic = True if arg.type.is_pyobject: nonliteral_objects.append(arg) else: nonliteral_other.append(arg) else: arg.default = DefaultLiteralArgNode(arg.pos, arg.default) if arg.kw_only: default_kwargs.append(arg) else: default_args.append(arg) if arg.annotation: arg.annotation = self.analyse_annotation(env, arg.annotation) annotations.append((arg.pos, arg.name, arg.annotation)) for arg in (self.def_node.star_arg, self.def_node.starstar_arg): if arg and arg.annotation: arg.annotation = self.analyse_annotation(env, arg.annotation) annotations.append((arg.pos, arg.name, arg.annotation)) annotation = self.def_node.return_type_annotation if annotation: annotation = self.analyse_annotation(env, annotation) self.def_node.return_type_annotation = annotation annotations.append((annotation.pos, StringEncoding.EncodedString("return"), annotation)) if nonliteral_objects or nonliteral_other: module_scope = env.global_scope() cname = module_scope.next_id(Naming.defaults_struct_prefix) scope = Symtab.StructOrUnionScope(cname) self.defaults = [] for arg in nonliteral_objects: entry = scope.declare_var(arg.name, arg.type, None, Naming.arg_prefix + arg.name, allow_pyobject=True) self.defaults.append((arg, entry)) for arg in nonliteral_other: entry = scope.declare_var(arg.name, arg.type, None, Naming.arg_prefix + arg.name, allow_pyobject=False, allow_memoryview=True) self.defaults.append((arg, entry)) entry = module_scope.declare_struct_or_union( None, 'struct', scope, 1, None, cname=cname) self.defaults_struct = scope self.defaults_pyobjects = len(nonliteral_objects) for arg, entry in self.defaults: arg.default_value = '%s->%s' % ( Naming.dynamic_args_cname, entry.cname) self.def_node.defaults_struct = self.defaults_struct.name if default_args or default_kwargs: if self.defaults_struct is None: if default_args: defaults_tuple = TupleNode(self.pos, args=[ arg.default for arg in default_args]) self.defaults_tuple = defaults_tuple.analyse_types(env).coerce_to_pyobject(env) if default_kwargs: defaults_kwdict = DictNode(self.pos, key_value_pairs=[ DictItemNode( arg.pos, key=IdentifierStringNode(arg.pos, value=arg.name), value=arg.default) for arg in default_kwargs]) self.defaults_kwdict = defaults_kwdict.analyse_types(env) else: if default_args: defaults_tuple = DefaultsTupleNode( self.pos, default_args, self.defaults_struct) else: defaults_tuple = NoneNode(self.pos) if default_kwargs: defaults_kwdict = DefaultsKwDictNode( self.pos, default_kwargs, self.defaults_struct) else: defaults_kwdict = NoneNode(self.pos) defaults_getter = Nodes.DefNode( self.pos, args=[], star_arg=None, starstar_arg=None, body=Nodes.ReturnStatNode( self.pos, return_type=py_object_type, value=TupleNode( self.pos, args=[defaults_tuple, defaults_kwdict])), decorators=None, name=StringEncoding.EncodedString("__defaults__")) # defaults getter must never live in class scopes, it's always a module function module_scope = env.global_scope() defaults_getter.analyse_declarations(module_scope) defaults_getter = defaults_getter.analyse_expressions(module_scope) defaults_getter.body = defaults_getter.body.analyse_expressions( defaults_getter.local_scope) defaults_getter.py_wrapper_required = False defaults_getter.pymethdef_required = False 
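        # Illustrative sketch (not part of the original source): for a
        # function with a non-literal default, e.g.
        #
        #     def f(x, lst=make_list()):   # hypothetical user code
        #         ...
        #
        # the evaluated default is stored in the module-level defaults
        # struct declared above, and the synthesized "__defaults__" getter
        # simply returns the (defaults_tuple, defaults_kwdict) pair read
        # back from that struct.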
self.def_node.defaults_getter = defaults_getter if annotations: annotations_dict = DictNode(self.pos, key_value_pairs=[ DictItemNode( pos, key=IdentifierStringNode(pos, value=name), value=value) for pos, name, value in annotations]) self.annotations_dict = annotations_dict.analyse_types(env) def analyse_annotation(self, env, annotation): if annotation is None: return None atype = annotation.analyse_as_type(env) if atype is not None: # Keep parsed types as strings as they might not be Python representable. annotation = UnicodeNode( annotation.pos, value=StringEncoding.EncodedString(atype.declaration_code('', for_display=True))) annotation = annotation.analyse_types(env) if not annotation.type.is_pyobject: annotation = annotation.coerce_to_pyobject(env) return annotation def may_be_none(self): return False gil_message = "Constructing Python function" def self_result_code(self): if self.self_object is None: self_result = "NULL" else: self_result = self.self_object.py_result() return self_result def generate_result_code(self, code): if self.binding: self.generate_cyfunction_code(code) else: self.generate_pycfunction_code(code) def generate_pycfunction_code(self, code): py_mod_name = self.get_py_mod_name(code) code.putln( '%s = PyCFunction_NewEx(&%s, %s, %s); %s' % ( self.result(), self.pymethdef_cname, self.self_result_code(), py_mod_name, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) def generate_cyfunction_code(self, code): if self.specialized_cpdefs: def_node = self.specialized_cpdefs[0] else: def_node = self.def_node if self.specialized_cpdefs or self.is_specialization: code.globalstate.use_utility_code( UtilityCode.load_cached("FusedFunction", "CythonFunction.c")) constructor = "__pyx_FusedFunction_NewEx" else: code.globalstate.use_utility_code( UtilityCode.load_cached("CythonFunction", "CythonFunction.c")) constructor = "__Pyx_CyFunction_NewEx" if self.code_object: code_object_result = self.code_object.py_result() else: code_object_result = 'NULL' flags = [] if def_node.is_staticmethod: flags.append('__Pyx_CYFUNCTION_STATICMETHOD') elif def_node.is_classmethod: flags.append('__Pyx_CYFUNCTION_CLASSMETHOD') if def_node.local_scope.parent_scope.is_c_class_scope and not def_node.entry.is_anonymous: flags.append('__Pyx_CYFUNCTION_CCLASS') if flags: flags = ' | '.join(flags) else: flags = '0' code.putln( '%s = %s(&%s, %s, %s, %s, %s, %s, %s); %s' % ( self.result(), constructor, self.pymethdef_cname, flags, self.get_py_qualified_name(code), self.self_result_code(), self.get_py_mod_name(code), Naming.moddict_cname, code_object_result, code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) if def_node.requires_classobj: assert code.pyclass_stack, "pyclass_stack is empty" class_node = code.pyclass_stack[-1] code.put_incref(self.py_result(), py_object_type) code.putln( 'PyList_Append(%s, %s);' % ( class_node.class_cell.result(), self.result())) code.put_giveref(self.py_result()) if self.defaults: code.putln( 'if (!__Pyx_CyFunction_InitDefaults(%s, sizeof(%s), %d)) %s' % ( self.result(), self.defaults_struct.name, self.defaults_pyobjects, code.error_goto(self.pos))) defaults = '__Pyx_CyFunction_Defaults(%s, %s)' % ( self.defaults_struct.name, self.result()) for arg, entry in self.defaults: arg.generate_assignment_code(code, target='%s->%s' % ( defaults, entry.cname)) if self.defaults_tuple: code.putln('__Pyx_CyFunction_SetDefaultsTuple(%s, %s);' % ( self.result(), self.defaults_tuple.py_result())) if self.defaults_kwdict: 
code.putln('__Pyx_CyFunction_SetDefaultsKwDict(%s, %s);' % ( self.result(), self.defaults_kwdict.py_result())) if def_node.defaults_getter and not self.specialized_cpdefs: # Fused functions do not support dynamic defaults, only their specialisations can have them for now. code.putln('__Pyx_CyFunction_SetDefaultsGetter(%s, %s);' % ( self.result(), def_node.defaults_getter.entry.pyfunc_cname)) if self.annotations_dict: code.putln('__Pyx_CyFunction_SetAnnotationsDict(%s, %s);' % ( self.result(), self.annotations_dict.py_result())) class InnerFunctionNode(PyCFunctionNode): # Special PyCFunctionNode that depends on a closure class # binding = True needs_self_code = True def self_result_code(self): if self.needs_self_code: return "((PyObject*)%s)" % Naming.cur_scope_cname return "NULL" class CodeObjectNode(ExprNode): # Create a PyCodeObject for a CyFunction instance. # # def_node DefNode the Python function node # varnames TupleNode a tuple with all local variable names subexprs = ['varnames'] is_temp = False result_code = None def __init__(self, def_node): ExprNode.__init__(self, def_node.pos, def_node=def_node) args = list(def_node.args) # if we have args/kwargs, then the first two in var_entries are those local_vars = [arg for arg in def_node.local_scope.var_entries if arg.name] self.varnames = TupleNode( def_node.pos, args=[IdentifierStringNode(arg.pos, value=arg.name) for arg in args + local_vars], is_temp=0, is_literal=1) def may_be_none(self): return False def calculate_result_code(self, code=None): if self.result_code is None: self.result_code = code.get_py_const(py_object_type, 'codeobj', cleanup_level=2) return self.result_code def generate_result_code(self, code): if self.result_code is None: self.result_code = code.get_py_const(py_object_type, 'codeobj', cleanup_level=2) code = code.get_cached_constants_writer(self.result_code) if code is None: return # already initialised code.mark_pos(self.pos) func = self.def_node func_name = code.get_py_string_const( func.name, identifier=True, is_str=False, unicode_value=func.name) # FIXME: better way to get the module file path at module init time? Encoding to use? file_path = StringEncoding.bytes_literal(func.pos[0].get_filenametable_entry().encode('utf8'), 'utf8') file_path_const = code.get_py_string_const(file_path, identifier=False, is_str=True) # This combination makes CPython create a new dict for "frame.f_locals" (see GH #1836). flags = ['CO_OPTIMIZED', 'CO_NEWLOCALS'] if self.def_node.star_arg: flags.append('CO_VARARGS') if self.def_node.starstar_arg: flags.append('CO_VARKEYWORDS') code.putln("%s = (PyObject*)__Pyx_PyCode_New(%d, %d, %d, 0, %s, %s, %s, %s, %s, %s, %s, %s, %s, %d, %s); %s" % ( self.result_code, len(func.args) - func.num_kwonly_args, # argcount func.num_kwonly_args, # kwonlyargcount (Py3 only) len(self.varnames.args), # nlocals '|'.join(flags) or '0', # flags Naming.empty_bytes, # code Naming.empty_tuple, # consts Naming.empty_tuple, # names (FIXME) self.varnames.result(), # varnames Naming.empty_tuple, # freevars (FIXME) Naming.empty_tuple, # cellvars (FIXME) file_path_const, # filename func_name, # name self.pos[1], # firstlineno Naming.empty_bytes, # lnotab code.error_goto_if_null(self.result_code, self.pos), )) class DefaultLiteralArgNode(ExprNode): # CyFunction's literal argument default value # # Evaluate literal only once. 
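    # Illustrative example (not part of the original source): for
    #
    #     def f(a=10, b=10):   # hypothetical user code
    #
    # each literal default is wrapped in a DefaultLiteralArgNode so that
    # its evaluation code is emitted at most once (see the 'evaluated'
    # flag below), no matter how often the node is referenced.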
    subexprs = []
    is_literal = True
    is_temp = False

    def __init__(self, pos, arg):
        super(DefaultLiteralArgNode, self).__init__(pos)
        self.arg = arg
        self.type = self.arg.type
        self.evaluated = False

    def analyse_types(self, env):
        return self

    def generate_result_code(self, code):
        pass

    def generate_evaluation_code(self, code):
        if not self.evaluated:
            self.arg.generate_evaluation_code(code)
            self.evaluated = True

    def result(self):
        return self.type.cast_code(self.arg.result())


class DefaultNonLiteralArgNode(ExprNode):
    # CyFunction's non-literal argument default value

    subexprs = []

    def __init__(self, pos, arg, defaults_struct):
        super(DefaultNonLiteralArgNode, self).__init__(pos)
        self.arg = arg
        self.defaults_struct = defaults_struct

    def analyse_types(self, env):
        self.type = self.arg.type
        self.is_temp = False
        return self

    def generate_result_code(self, code):
        pass

    def result(self):
        return '__Pyx_CyFunction_Defaults(%s, %s)->%s' % (
            self.defaults_struct.name, Naming.self_cname,
            self.defaults_struct.lookup(self.arg.name).cname)


class DefaultsTupleNode(TupleNode):
    # CyFunction's __defaults__ tuple

    def __init__(self, pos, defaults, defaults_struct):
        args = []
        for arg in defaults:
            if not arg.default.is_literal:
                arg = DefaultNonLiteralArgNode(pos, arg, defaults_struct)
            else:
                arg = arg.default
            args.append(arg)
        super(DefaultsTupleNode, self).__init__(pos, args=args)

    def analyse_types(self, env, skip_children=False):
        return super(DefaultsTupleNode, self).analyse_types(env, skip_children).coerce_to_pyobject(env)


class DefaultsKwDictNode(DictNode):
    # CyFunction's __kwdefaults__ dict

    def __init__(self, pos, defaults, defaults_struct):
        items = []
        for arg in defaults:
            name = IdentifierStringNode(arg.pos, value=arg.name)
            if not arg.default.is_literal:
                arg = DefaultNonLiteralArgNode(pos, arg, defaults_struct)
            else:
                arg = arg.default
            items.append(DictItemNode(arg.pos, key=name, value=arg))
        super(DefaultsKwDictNode, self).__init__(pos, key_value_pairs=items)


class LambdaNode(InnerFunctionNode):
    # Lambda expression node (only used as a function reference)
    #
    # args          [CArgDeclNode]         formal arguments
    # star_arg      PyArgDeclNode or None  * argument
    # starstar_arg  PyArgDeclNode or None  ** argument
    # lambda_name   string                 a module-globally unique lambda name
    # result_expr   ExprNode
    # def_node      DefNode                the underlying function 'def' node

    child_attrs = ['def_node']

    name = StringEncoding.EncodedString('<lambda>')

    def analyse_declarations(self, env):
        self.lambda_name = self.def_node.lambda_name = env.next_id('lambda')
        self.def_node.no_assignment_synthesis = True
        self.def_node.pymethdef_required = True
        self.def_node.analyse_declarations(env)
        self.def_node.is_cyfunction = True
        self.pymethdef_cname = self.def_node.entry.pymethdef_cname
        env.add_lambda_def(self.def_node)

    def analyse_types(self, env):
        self.def_node = self.def_node.analyse_expressions(env)
        return super(LambdaNode, self).analyse_types(env)

    def generate_result_code(self, code):
        self.def_node.generate_execution_code(code)
        super(LambdaNode, self).generate_result_code(code)


class GeneratorExpressionNode(LambdaNode):
    # A generator expression, e.g. (i for i in range(10))
    #
    # Result is a generator.
# # loop ForStatNode the for-loop, containing a YieldExprNode # def_node DefNode the underlying generator 'def' node name = StringEncoding.EncodedString('genexpr') binding = False def analyse_declarations(self, env): self.genexpr_name = env.next_id('genexpr') super(GeneratorExpressionNode, self).analyse_declarations(env) # No pymethdef required self.def_node.pymethdef_required = False self.def_node.py_wrapper_required = False self.def_node.is_cyfunction = False # Force genexpr signature self.def_node.entry.signature = TypeSlots.pyfunction_noargs def generate_result_code(self, code): code.putln( '%s = %s(%s); %s' % ( self.result(), self.def_node.entry.pyfunc_cname, self.self_result_code(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) class YieldExprNode(ExprNode): # Yield expression node # # arg ExprNode the value to return from the generator # label_num integer yield label number # is_yield_from boolean is a YieldFromExprNode to delegate to another generator subexprs = ['arg'] type = py_object_type label_num = 0 is_yield_from = False is_await = False in_async_gen = False expr_keyword = 'yield' def analyse_types(self, env): if not self.label_num or (self.is_yield_from and self.in_async_gen): error(self.pos, "'%s' not supported here" % self.expr_keyword) self.is_temp = 1 if self.arg is not None: self.arg = self.arg.analyse_types(env) if not self.arg.type.is_pyobject: self.coerce_yield_argument(env) return self def coerce_yield_argument(self, env): self.arg = self.arg.coerce_to_pyobject(env) def generate_evaluation_code(self, code): if self.arg: self.arg.generate_evaluation_code(code) self.arg.make_owned_reference(code) code.putln( "%s = %s;" % ( Naming.retval_cname, self.arg.result_as(py_object_type))) self.arg.generate_post_assignment_code(code) self.arg.free_temps(code) else: code.put_init_to_py_none(Naming.retval_cname, py_object_type) self.generate_yield_code(code) def generate_yield_code(self, code): """ Generate the code to return the argument in 'Naming.retval_cname' and to continue at the yield label. 
""" label_num, label_name = code.new_yield_label( self.expr_keyword.replace(' ', '_')) code.use_label(label_name) saved = [] code.funcstate.closure_temps.reset() for cname, type, manage_ref in code.funcstate.temps_in_use(): save_cname = code.funcstate.closure_temps.allocate_temp(type) saved.append((cname, save_cname, type)) if type.is_pyobject: code.put_xgiveref(cname) code.putln('%s->%s = %s;' % (Naming.cur_scope_cname, save_cname, cname)) code.put_xgiveref(Naming.retval_cname) profile = code.globalstate.directives['profile'] linetrace = code.globalstate.directives['linetrace'] if profile or linetrace: code.put_trace_return(Naming.retval_cname, nogil=not code.funcstate.gil_owned) code.put_finish_refcount_context() if code.funcstate.current_except is not None: # inside of an except block => save away currently handled exception code.putln("__Pyx_Coroutine_SwapException(%s);" % Naming.generator_cname) else: # no exceptions being handled => restore exception state of caller code.putln("__Pyx_Coroutine_ResetAndClearException(%s);" % Naming.generator_cname) code.putln("/* return from %sgenerator, %sing value */" % ( 'async ' if self.in_async_gen else '', 'await' if self.is_await else 'yield')) code.putln("%s->resume_label = %d;" % ( Naming.generator_cname, label_num)) if self.in_async_gen and not self.is_await: # __Pyx__PyAsyncGenValueWrapperNew() steals a reference to the return value code.putln("return __Pyx__PyAsyncGenValueWrapperNew(%s);" % Naming.retval_cname) else: code.putln("return %s;" % Naming.retval_cname) code.put_label(label_name) for cname, save_cname, type in saved: code.putln('%s = %s->%s;' % (cname, Naming.cur_scope_cname, save_cname)) if type.is_pyobject: code.putln('%s->%s = 0;' % (Naming.cur_scope_cname, save_cname)) code.put_xgotref(cname) self.generate_sent_value_handling_code(code, Naming.sent_value_cname) if self.result_is_used: self.allocate_temp_result(code) code.put('%s = %s; ' % (self.result(), Naming.sent_value_cname)) code.put_incref(self.result(), py_object_type) def generate_sent_value_handling_code(self, code, value_cname): code.putln(code.error_goto_if_null(value_cname, self.pos)) class _YieldDelegationExprNode(YieldExprNode): def yield_from_func(self, code): raise NotImplementedError() def generate_evaluation_code(self, code, source_cname=None, decref_source=False): if source_cname is None: self.arg.generate_evaluation_code(code) code.putln("%s = %s(%s, %s);" % ( Naming.retval_cname, self.yield_from_func(code), Naming.generator_cname, self.arg.py_result() if source_cname is None else source_cname)) if source_cname is None: self.arg.generate_disposal_code(code) self.arg.free_temps(code) elif decref_source: code.put_decref_clear(source_cname, py_object_type) code.put_xgotref(Naming.retval_cname) code.putln("if (likely(%s)) {" % Naming.retval_cname) self.generate_yield_code(code) code.putln("} else {") # either error or sub-generator has normally terminated: return value => node result if self.result_is_used: self.fetch_iteration_result(code) else: self.handle_iteration_exception(code) code.putln("}") def fetch_iteration_result(self, code): # YieldExprNode has allocated the result temp for us code.putln("%s = NULL;" % self.result()) code.put_error_if_neg(self.pos, "__Pyx_PyGen_FetchStopIterationValue(&%s)" % self.result()) code.put_gotref(self.result()) def handle_iteration_exception(self, code): code.putln("PyObject* exc_type = __Pyx_PyErr_Occurred();") code.putln("if (exc_type) {") code.putln("if (likely(exc_type == PyExc_StopIteration || (exc_type != 
PyExc_GeneratorExit &&" " __Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)))) PyErr_Clear();") code.putln("else %s" % code.error_goto(self.pos)) code.putln("}") class YieldFromExprNode(_YieldDelegationExprNode): # "yield from GEN" expression is_yield_from = True expr_keyword = 'yield from' def coerce_yield_argument(self, env): if not self.arg.type.is_string: # FIXME: support C arrays and C++ iterators? error(self.pos, "yielding from non-Python object not supported") self.arg = self.arg.coerce_to_pyobject(env) def yield_from_func(self, code): code.globalstate.use_utility_code(UtilityCode.load_cached("GeneratorYieldFrom", "Coroutine.c")) return "__Pyx_Generator_Yield_From" class AwaitExprNode(_YieldDelegationExprNode): # 'await' expression node # # arg ExprNode the Awaitable value to await # label_num integer yield label number is_await = True expr_keyword = 'await' def coerce_yield_argument(self, env): if self.arg is not None: # FIXME: use same check as in YieldFromExprNode.coerce_yield_argument() ? self.arg = self.arg.coerce_to_pyobject(env) def yield_from_func(self, code): code.globalstate.use_utility_code(UtilityCode.load_cached("CoroutineYieldFrom", "Coroutine.c")) return "__Pyx_Coroutine_Yield_From" class AwaitIterNextExprNode(AwaitExprNode): # 'await' expression node as part of 'async for' iteration # # Breaks out of loop on StopAsyncIteration exception. def _generate_break(self, code): code.globalstate.use_utility_code(UtilityCode.load_cached("StopAsyncIteration", "Coroutine.c")) code.putln("PyObject* exc_type = __Pyx_PyErr_Occurred();") code.putln("if (unlikely(exc_type && (exc_type == __Pyx_PyExc_StopAsyncIteration || (" " exc_type != PyExc_StopIteration && exc_type != PyExc_GeneratorExit &&" " __Pyx_PyErr_GivenExceptionMatches(exc_type, __Pyx_PyExc_StopAsyncIteration))))) {") code.putln("PyErr_Clear();") code.putln("break;") code.putln("}") def fetch_iteration_result(self, code): assert code.break_label, "AwaitIterNextExprNode outside of 'async for' loop" self._generate_break(code) super(AwaitIterNextExprNode, self).fetch_iteration_result(code) def generate_sent_value_handling_code(self, code, value_cname): assert code.break_label, "AwaitIterNextExprNode outside of 'async for' loop" code.putln("if (unlikely(!%s)) {" % value_cname) self._generate_break(code) # all non-break exceptions are errors, as in parent class code.putln(code.error_goto(self.pos)) code.putln("}") class GlobalsExprNode(AtomicExprNode): type = dict_type is_temp = 1 def analyse_types(self, env): env.use_utility_code(Builtin.globals_utility_code) return self gil_message = "Constructing globals dict" def may_be_none(self): return False def generate_result_code(self, code): code.putln('%s = __Pyx_Globals(); %s' % ( self.result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.result()) class LocalsDictItemNode(DictItemNode): def analyse_types(self, env): self.key = self.key.analyse_types(env) self.value = self.value.analyse_types(env) self.key = self.key.coerce_to_pyobject(env) if self.value.type.can_coerce_to_pyobject(env): self.value = self.value.coerce_to_pyobject(env) else: self.value = None return self class FuncLocalsExprNode(DictNode): def __init__(self, pos, env): local_vars = sorted([ entry.name for entry in env.entries.values() if entry.name]) items = [LocalsDictItemNode( pos, key=IdentifierStringNode(pos, value=var), value=NameNode(pos, name=var, allow_null=True)) for var in local_vars] DictNode.__init__(self, pos, key_value_pairs=items, exclude_null_values=True) 
def analyse_types(self, env): node = super(FuncLocalsExprNode, self).analyse_types(env) node.key_value_pairs = [ i for i in node.key_value_pairs if i.value is not None ] return node class PyClassLocalsExprNode(AtomicExprNode): def __init__(self, pos, pyclass_dict): AtomicExprNode.__init__(self, pos) self.pyclass_dict = pyclass_dict def analyse_types(self, env): self.type = self.pyclass_dict.type self.is_temp = False return self def may_be_none(self): return False def result(self): return self.pyclass_dict.result() def generate_result_code(self, code): pass def LocalsExprNode(pos, scope_node, env): if env.is_module_scope: return GlobalsExprNode(pos) if env.is_py_class_scope: return PyClassLocalsExprNode(pos, scope_node.dict) return FuncLocalsExprNode(pos, env) #------------------------------------------------------------------- # # Unary operator nodes # #------------------------------------------------------------------- compile_time_unary_operators = { 'not': operator.not_, '~': operator.inv, '-': operator.neg, '+': operator.pos, } class UnopNode(ExprNode): # operator string # operand ExprNode # # Processing during analyse_expressions phase: # # analyse_c_operation # Called when the operand is not a pyobject. # - Check operand type and coerce if needed. # - Determine result type and result code fragment. # - Allocate temporary for result if needed. subexprs = ['operand'] infix = True def calculate_constant_result(self): func = compile_time_unary_operators[self.operator] self.constant_result = func(self.operand.constant_result) def compile_time_value(self, denv): func = compile_time_unary_operators.get(self.operator) if not func: error(self.pos, "Unary '%s' not supported in compile-time expression" % self.operator) operand = self.operand.compile_time_value(denv) try: return func(operand) except Exception as e: self.compile_time_value_error(e) def infer_type(self, env): operand_type = self.operand.infer_type(env) if operand_type.is_cpp_class or operand_type.is_ptr: cpp_type = operand_type.find_cpp_operation_type(self.operator) if cpp_type is not None: return cpp_type return self.infer_unop_type(env, operand_type) def infer_unop_type(self, env, operand_type): if operand_type.is_pyobject: return py_object_type else: return operand_type def may_be_none(self): if self.operand.type and self.operand.type.is_builtin_type: if self.operand.type is not type_type: return False return ExprNode.may_be_none(self) def analyse_types(self, env): self.operand = self.operand.analyse_types(env) if self.is_pythran_operation(env): self.type = PythranExpr(pythran_unaryop_type(self.operator, self.operand.type)) self.is_temp = 1 elif self.is_py_operation(): self.coerce_operand_to_pyobject(env) self.type = py_object_type self.is_temp = 1 elif self.is_cpp_operation(): self.analyse_cpp_operation(env) else: self.analyse_c_operation(env) return self def check_const(self): return self.operand.check_const() def is_py_operation(self): return self.operand.type.is_pyobject or self.operand.type.is_ctuple def is_pythran_operation(self, env): np_pythran = has_np_pythran(env) op_type = self.operand.type return np_pythran and (op_type.is_buffer or op_type.is_pythran_expr) def nogil_check(self, env): if self.is_py_operation(): self.gil_error() def is_cpp_operation(self): type = self.operand.type return type.is_cpp_class def coerce_operand_to_pyobject(self, env): self.operand = self.operand.coerce_to_pyobject(env) def generate_result_code(self, code): if self.type.is_pythran_expr: code.putln("// Pythran unaryop") 
code.putln("__Pyx_call_destructor(%s);" % self.result()) code.putln("new (&%s) decltype(%s){%s%s};" % ( self.result(), self.result(), self.operator, self.operand.pythran_result())) elif self.operand.type.is_pyobject: self.generate_py_operation_code(code) elif self.is_temp: if self.is_cpp_operation() and self.exception_check == '+': translate_cpp_exception(code, self.pos, "%s = %s %s;" % (self.result(), self.operator, self.operand.result()), self.result() if self.type.is_pyobject else None, self.exception_value, self.in_nogil_context) else: code.putln("%s = %s %s;" % (self.result(), self.operator, self.operand.result())) def generate_py_operation_code(self, code): function = self.py_operation_function(code) code.putln( "%s = %s(%s); %s" % ( self.result(), function, self.operand.py_result(), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) def type_error(self): if not self.operand.type.is_error: error(self.pos, "Invalid operand type for '%s' (%s)" % (self.operator, self.operand.type)) self.type = PyrexTypes.error_type def analyse_cpp_operation(self, env, overload_check=True): entry = env.lookup_operator(self.operator, [self.operand]) if overload_check and not entry: self.type_error() return if entry: self.exception_check = entry.type.exception_check self.exception_value = entry.type.exception_value if self.exception_check == '+': self.is_temp = True if self.exception_value is None: env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp")) else: self.exception_check = '' self.exception_value = '' cpp_type = self.operand.type.find_cpp_operation_type(self.operator) if overload_check and cpp_type is None: error(self.pos, "'%s' operator not defined for %s" % ( self.operator, type)) self.type_error() return self.type = cpp_type class NotNode(UnopNode): # 'not' operator # # operand ExprNode operator = '!' 
type = PyrexTypes.c_bint_type def calculate_constant_result(self): self.constant_result = not self.operand.constant_result def compile_time_value(self, denv): operand = self.operand.compile_time_value(denv) try: return not operand except Exception as e: self.compile_time_value_error(e) def infer_unop_type(self, env, operand_type): return PyrexTypes.c_bint_type def analyse_types(self, env): self.operand = self.operand.analyse_types(env) operand_type = self.operand.type if operand_type.is_cpp_class: self.analyse_cpp_operation(env) else: self.operand = self.operand.coerce_to_boolean(env) return self def calculate_result_code(self): return "(!%s)" % self.operand.result() class UnaryPlusNode(UnopNode): # unary '+' operator operator = '+' def analyse_c_operation(self, env): self.type = PyrexTypes.widest_numeric_type( self.operand.type, PyrexTypes.c_int_type) def py_operation_function(self, code): return "PyNumber_Positive" def calculate_result_code(self): if self.is_cpp_operation(): return "(+%s)" % self.operand.result() else: return self.operand.result() class UnaryMinusNode(UnopNode): # unary '-' operator operator = '-' def analyse_c_operation(self, env): if self.operand.type.is_numeric: self.type = PyrexTypes.widest_numeric_type( self.operand.type, PyrexTypes.c_int_type) elif self.operand.type.is_enum: self.type = PyrexTypes.c_int_type else: self.type_error() if self.type.is_complex: self.infix = False def py_operation_function(self, code): return "PyNumber_Negative" def calculate_result_code(self): if self.infix: return "(-%s)" % self.operand.result() else: return "%s(%s)" % (self.operand.type.unary_op('-'), self.operand.result()) def get_constant_c_result_code(self): value = self.operand.get_constant_c_result_code() if value: return "(-%s)" % value class TildeNode(UnopNode): # unary '~' operator def analyse_c_operation(self, env): if self.operand.type.is_int: self.type = PyrexTypes.widest_numeric_type( self.operand.type, PyrexTypes.c_int_type) elif self.operand.type.is_enum: self.type = PyrexTypes.c_int_type else: self.type_error() def py_operation_function(self, code): return "PyNumber_Invert" def calculate_result_code(self): return "(~%s)" % self.operand.result() class CUnopNode(UnopNode): def is_py_operation(self): return False class DereferenceNode(CUnopNode): # unary * operator operator = '*' def infer_unop_type(self, env, operand_type): if operand_type.is_ptr: return operand_type.base_type else: return PyrexTypes.error_type def analyse_c_operation(self, env): if self.operand.type.is_ptr: self.type = self.operand.type.base_type else: self.type_error() def calculate_result_code(self): return "(*%s)" % self.operand.result() class DecrementIncrementNode(CUnopNode): # unary ++/-- operator def analyse_c_operation(self, env): if self.operand.type.is_numeric: self.type = PyrexTypes.widest_numeric_type( self.operand.type, PyrexTypes.c_int_type) elif self.operand.type.is_ptr: self.type = self.operand.type else: self.type_error() def calculate_result_code(self): if self.is_prefix: return "(%s%s)" % (self.operator, self.operand.result()) else: return "(%s%s)" % (self.operand.result(), self.operator) def inc_dec_constructor(is_prefix, operator): return lambda pos, **kwds: DecrementIncrementNode(pos, is_prefix=is_prefix, operator=operator, **kwds) class AmpersandNode(CUnopNode): # The C address-of operator. 
    #
    #  operand  ExprNode

    operator = '&'

    def infer_unop_type(self, env, operand_type):
        return PyrexTypes.c_ptr_type(operand_type)

    def analyse_types(self, env):
        self.operand = self.operand.analyse_types(env)
        argtype = self.operand.type
        if argtype.is_cpp_class:
            self.analyse_cpp_operation(env, overload_check=False)
        if not (argtype.is_cfunction or argtype.is_reference or self.operand.is_addressable()):
            if argtype.is_memoryviewslice:
                self.error("Cannot take address of memoryview slice")
            else:
                self.error("Taking address of non-lvalue (type %s)" % argtype)
            return self
        if argtype.is_pyobject:
            self.error("Cannot take address of Python %s" % (
                "variable '%s'" % self.operand.name if self.operand.is_name else
                "object attribute '%s'" % self.operand.attribute if self.operand.is_attribute else
                "object"))
            return self
        if not argtype.is_cpp_class or not self.type:
            self.type = PyrexTypes.c_ptr_type(argtype)
        return self

    def check_const(self):
        return self.operand.check_const_addr()

    def error(self, mess):
        error(self.pos, mess)
        self.type = PyrexTypes.error_type
        self.result_code = "<error>"

    def calculate_result_code(self):
        return "(&%s)" % self.operand.result()

    def generate_result_code(self, code):
        if (self.operand.type.is_cpp_class and self.exception_check == '+'):
            translate_cpp_exception(code, self.pos,
                "%s = %s %s;" % (self.result(), self.operator, self.operand.result()),
                self.result() if self.type.is_pyobject else None,
                self.exception_value, self.in_nogil_context)


unop_node_classes = {
    "+": UnaryPlusNode,
    "-": UnaryMinusNode,
    "~": TildeNode,
}

def unop_node(pos, operator, operand):
    # Construct unop node of appropriate class for
    # given operator.
    if isinstance(operand, IntNode) and operator == '-':
        return IntNode(pos=operand.pos, value=str(-Utils.str_to_number(operand.value)),
                       longness=operand.longness, unsigned=operand.unsigned)
    elif isinstance(operand, UnopNode) and operand.operator == operator in '+-':
        warning(pos, "Python has no increment/decrement operator: %s%sx == %s(%sx) == x" % ((operator,)*4), 5)
    return unop_node_classes[operator](pos,
                                       operator=operator,
                                       operand=operand)


class TypecastNode(ExprNode):
    #  C type cast
    #
    #  operand      ExprNode
    #  base_type    CBaseTypeNode
    #  declarator   CDeclaratorNode
    #  typecheck    boolean
    #
    #  If used from a transform, one can, if desired, specify the attribute
    #  "type" directly and leave base_type and declarator as None.

    subexprs = ['operand']
    base_type = declarator = type = None

    def type_dependencies(self, env):
        return ()

    def infer_type(self, env):
        if self.type is None:
            base_type = self.base_type.analyse(env)
            _, self.type = self.declarator.analyse(base_type, env)
        return self.type

    def analyse_types(self, env):
        if self.type is None:
            base_type = self.base_type.analyse(env)
            _, self.type = self.declarator.analyse(base_type, env)
        if self.operand.has_constant_result():
            # Must be done after self.type is resolved.
self.calculate_constant_result() if self.type.is_cfunction: error(self.pos, "Cannot cast to a function type") self.type = PyrexTypes.error_type self.operand = self.operand.analyse_types(env) if self.type is PyrexTypes.c_bint_type: # short circuit this to a coercion return self.operand.coerce_to_boolean(env) to_py = self.type.is_pyobject from_py = self.operand.type.is_pyobject if from_py and not to_py and self.operand.is_ephemeral(): if not self.type.is_numeric and not self.type.is_cpp_class: error(self.pos, "Casting temporary Python object to non-numeric non-Python type") if to_py and not from_py: if self.type is bytes_type and self.operand.type.is_int: return CoerceIntToBytesNode(self.operand, env) elif self.operand.type.can_coerce_to_pyobject(env): self.result_ctype = py_object_type self.operand = self.operand.coerce_to(self.type, env) else: if self.operand.type.is_ptr: if not (self.operand.type.base_type.is_void or self.operand.type.base_type.is_struct): error(self.pos, "Python objects cannot be cast from pointers of primitive types") else: # Should this be an error? warning(self.pos, "No conversion from %s to %s, python object pointer used." % ( self.operand.type, self.type)) self.operand = self.operand.coerce_to_simple(env) elif from_py and not to_py: if self.type.create_from_py_utility_code(env): self.operand = self.operand.coerce_to(self.type, env) elif self.type.is_ptr: if not (self.type.base_type.is_void or self.type.base_type.is_struct): error(self.pos, "Python objects cannot be cast to pointers of primitive types") else: warning(self.pos, "No conversion from %s to %s, python object pointer used." % ( self.type, self.operand.type)) elif from_py and to_py: if self.typecheck: self.operand = PyTypeTestNode(self.operand, self.type, env, notnone=True) elif isinstance(self.operand, SliceIndexNode): # This cast can influence the created type of string slices. 
            self.operand = self.operand.coerce_to(self.type, env)
        elif self.type.is_complex and self.operand.type.is_complex:
            self.operand = self.operand.coerce_to_simple(env)
        elif self.operand.type.is_fused:
            self.operand = self.operand.coerce_to(self.type, env)
            #self.type = self.operand.type
        if self.type.is_ptr and self.type.base_type.is_cfunction and self.type.base_type.nogil:
            op_type = self.operand.type
            if op_type.is_ptr:
                op_type = op_type.base_type
            if op_type.is_cfunction and not op_type.nogil:
                warning(self.pos,
                        "Casting a GIL-requiring function into a nogil function circumvents GIL validation", 1)
        return self

    def is_simple(self):
        # either temp or a C cast => no side effects other than the operand's
        return self.operand.is_simple()

    def is_ephemeral(self):
        # either temp or a C cast => no side effects other than the operand's
        return self.operand.is_ephemeral()

    def nonlocally_immutable(self):
        return self.is_temp or self.operand.nonlocally_immutable()

    def nogil_check(self, env):
        if self.type and self.type.is_pyobject and self.is_temp:
            self.gil_error()

    def check_const(self):
        return self.operand.check_const()

    def calculate_constant_result(self):
        self.constant_result = self.calculate_result_code(self.operand.constant_result)

    def calculate_result_code(self, operand_result=None):
        if operand_result is None:
            operand_result = self.operand.result()
        if self.type.is_complex:
            operand_result = self.operand.result()
            if self.operand.type.is_complex:
                real_part = self.type.real_type.cast_code("__Pyx_CREAL(%s)" % operand_result)
                imag_part = self.type.real_type.cast_code("__Pyx_CIMAG(%s)" % operand_result)
            else:
                real_part = self.type.real_type.cast_code(operand_result)
                imag_part = "0"
            return "%s(%s, %s)" % (
                self.type.from_parts,
                real_part,
                imag_part)
        else:
            return self.type.cast_code(operand_result)

    def get_constant_c_result_code(self):
        operand_result = self.operand.get_constant_c_result_code()
        if operand_result:
            return self.type.cast_code(operand_result)

    def result_as(self, type):
        if self.type.is_pyobject and not self.is_temp:
            # Optimise away some unnecessary casting
            return self.operand.result_as(type)
        else:
            return ExprNode.result_as(self, type)

    def generate_result_code(self, code):
        if self.is_temp:
            code.putln(
                "%s = (PyObject *)%s;" % (
                    self.result(),
                    self.operand.result()))
            code.put_incref(self.result(), self.ctype())


ERR_START = "Start may not be given"
ERR_NOT_STOP = "Stop must be provided to indicate shape"
ERR_STEPS = ("Strides may only be given to indicate contiguity. "
             "Consider slicing it after conversion")
ERR_NOT_POINTER = "Can only create cython.array from pointer or array"
ERR_BASE_TYPE = "Pointer base type does not match cython.array base type"


class CythonArrayNode(ExprNode):
    """
    Used when a pointer of base_type is cast to a memoryviewslice with that
    base type, i.e.

        <int[:M:1, :N]> p

    creates a fortran-contiguous cython.array.

    We leave the type set to object so coercions to object are more efficient
    and less work. Acquiring a memoryviewslice from this will be just as
    efficient. ExprNode.coerce_to() will do the additional typecheck on
    self.compile_time_type.

    This also handles <int[:, :]> my_c_array.

    operand         ExprNode                 the thing we're casting
    base_type_node  MemoryViewSliceTypeNode  the cast expression node
    """

    subexprs = ['operand', 'shapes']

    shapes = None
    is_temp = True
    mode = "c"
    array_dtype = None

    shape_type = PyrexTypes.c_py_ssize_t_type

    def analyse_types(self, env):
        from .
import MemoryView self.operand = self.operand.analyse_types(env) if self.array_dtype: array_dtype = self.array_dtype else: array_dtype = self.base_type_node.base_type_node.analyse(env) axes = self.base_type_node.axes self.type = error_type self.shapes = [] ndim = len(axes) # Base type of the pointer or C array we are converting base_type = self.operand.type if not self.operand.type.is_ptr and not self.operand.type.is_array: error(self.operand.pos, ERR_NOT_POINTER) return self # Dimension sizes of C array array_dimension_sizes = [] if base_type.is_array: while base_type.is_array: array_dimension_sizes.append(base_type.size) base_type = base_type.base_type elif base_type.is_ptr: base_type = base_type.base_type else: error(self.pos, "unexpected base type %s found" % base_type) return self if not (base_type.same_as(array_dtype) or base_type.is_void): error(self.operand.pos, ERR_BASE_TYPE) return self elif self.operand.type.is_array and len(array_dimension_sizes) != ndim: error(self.operand.pos, "Expected %d dimensions, array has %d dimensions" % (ndim, len(array_dimension_sizes))) return self # Verify the start, stop and step values # In case of a C array, use the size of C array in each dimension to # get an automatic cast for axis_no, axis in enumerate(axes): if not axis.start.is_none: error(axis.start.pos, ERR_START) return self if axis.stop.is_none: if array_dimension_sizes: dimsize = array_dimension_sizes[axis_no] axis.stop = IntNode(self.pos, value=str(dimsize), constant_result=dimsize, type=PyrexTypes.c_int_type) else: error(axis.pos, ERR_NOT_STOP) return self axis.stop = axis.stop.analyse_types(env) shape = axis.stop.coerce_to(self.shape_type, env) if not shape.is_literal: shape.coerce_to_temp(env) self.shapes.append(shape) first_or_last = axis_no in (0, ndim - 1) if not axis.step.is_none and first_or_last: # '1' in the first or last dimension denotes F or C contiguity axis.step = axis.step.analyse_types(env) if (not axis.step.type.is_int and axis.step.is_literal and not axis.step.type.is_error): error(axis.step.pos, "Expected an integer literal") return self if axis.step.compile_time_value(env) != 1: error(axis.step.pos, ERR_STEPS) return self if axis_no == 0: self.mode = "fortran" elif not axis.step.is_none and not first_or_last: # step provided in some other dimension error(axis.step.pos, ERR_STEPS) return self if not self.operand.is_name: self.operand = self.operand.coerce_to_temp(env) axes = [('direct', 'follow')] * len(axes) if self.mode == "fortran": axes[0] = ('direct', 'contig') else: axes[-1] = ('direct', 'contig') self.coercion_type = PyrexTypes.MemoryViewSliceType(array_dtype, axes) self.coercion_type.validate_memslice_dtype(self.pos) self.type = self.get_cython_array_type(env) MemoryView.use_cython_array_utility_code(env) env.use_utility_code(MemoryView.typeinfo_to_format_code) return self def allocate_temp_result(self, code): if self.temp_code: raise RuntimeError("temp allocated multiple times") self.temp_code = code.funcstate.allocate_temp(self.type, True) def infer_type(self, env): return self.get_cython_array_type(env) def get_cython_array_type(self, env): cython_scope = env.global_scope().context.cython_scope cython_scope.load_cythonscope() return cython_scope.viewscope.lookup("array").type def generate_result_code(self, code): from . 
import Buffer shapes = [self.shape_type.cast_code(shape.result()) for shape in self.shapes] dtype = self.coercion_type.dtype shapes_temp = code.funcstate.allocate_temp(py_object_type, True) format_temp = code.funcstate.allocate_temp(py_object_type, True) itemsize = "sizeof(%s)" % dtype.empty_declaration_code() type_info = Buffer.get_type_information_cname(code, dtype) if self.operand.type.is_ptr: code.putln("if (!%s) {" % self.operand.result()) code.putln( 'PyErr_SetString(PyExc_ValueError,' '"Cannot create cython.array from NULL pointer");') code.putln(code.error_goto(self.operand.pos)) code.putln("}") code.putln("%s = __pyx_format_from_typeinfo(&%s);" % (format_temp, type_info)) buildvalue_fmt = " __PYX_BUILD_PY_SSIZE_T " * len(shapes) code.putln('%s = Py_BuildValue((char*) "(" %s ")", %s);' % ( shapes_temp, buildvalue_fmt, ", ".join(shapes))) err = "!%s || !%s || !PyBytes_AsString(%s)" % (format_temp, shapes_temp, format_temp) code.putln(code.error_goto_if(err, self.pos)) code.put_gotref(format_temp) code.put_gotref(shapes_temp) tup = (self.result(), shapes_temp, itemsize, format_temp, self.mode, self.operand.result()) code.putln('%s = __pyx_array_new(' '%s, %s, PyBytes_AS_STRING(%s), ' '(char *) "%s", (char *) %s);' % tup) code.putln(code.error_goto_if_null(self.result(), self.pos)) code.put_gotref(self.result()) def dispose(temp): code.put_decref_clear(temp, py_object_type) code.funcstate.release_temp(temp) dispose(shapes_temp) dispose(format_temp) @classmethod def from_carray(cls, src_node, env): """ Given a C array type, return a CythonArrayNode """ pos = src_node.pos base_type = src_node.type none_node = NoneNode(pos) axes = [] while base_type.is_array: axes.append(SliceNode(pos, start=none_node, stop=none_node, step=none_node)) base_type = base_type.base_type axes[-1].step = IntNode(pos, value="1", is_c_literal=True) memslicenode = Nodes.MemoryViewSliceTypeNode(pos, axes=axes, base_type_node=base_type) result = CythonArrayNode(pos, base_type_node=memslicenode, operand=src_node, array_dtype=base_type) result = result.analyse_types(env) return result class SizeofNode(ExprNode): # Abstract base class for sizeof(x) expression nodes. 
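    # Illustrative example (not part of the original source): both
    #
    #     sizeof(unsigned long)   # type argument -> SizeofTypeNode
    #     sizeof(my_var)          # variable argument (hypothetical name) -> SizeofVarNode
    #
    # compile down to a plain C "sizeof(...)" expression; the result type
    # is size_t in either case, and the subclasses below differ only in
    # how the argument is analysed.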
type = PyrexTypes.c_size_t_type def check_const(self): return True def generate_result_code(self, code): pass class SizeofTypeNode(SizeofNode): # C sizeof function applied to a type # # base_type CBaseTypeNode # declarator CDeclaratorNode subexprs = [] arg_type = None def analyse_types(self, env): # we may have incorrectly interpreted a dotted name as a type rather than an attribute # this could be better handled by more uniformly treating types as runtime-available objects if 0 and self.base_type.module_path: path = self.base_type.module_path obj = env.lookup(path[0]) if obj.as_module is None: operand = NameNode(pos=self.pos, name=path[0]) for attr in path[1:]: operand = AttributeNode(pos=self.pos, obj=operand, attribute=attr) operand = AttributeNode(pos=self.pos, obj=operand, attribute=self.base_type.name) node = SizeofVarNode(self.pos, operand=operand).analyse_types(env) return node if self.arg_type is None: base_type = self.base_type.analyse(env) _, arg_type = self.declarator.analyse(base_type, env) self.arg_type = arg_type self.check_type() return self def check_type(self): arg_type = self.arg_type if not arg_type: return if arg_type.is_pyobject and not arg_type.is_extension_type: error(self.pos, "Cannot take sizeof Python object") elif arg_type.is_void: error(self.pos, "Cannot take sizeof void") elif not arg_type.is_complete(): error(self.pos, "Cannot take sizeof incomplete type '%s'" % arg_type) def calculate_result_code(self): if self.arg_type.is_extension_type: # the size of the pointer is boring # we want the size of the actual struct arg_code = self.arg_type.declaration_code("", deref=1) else: arg_code = self.arg_type.empty_declaration_code() return "(sizeof(%s))" % arg_code class SizeofVarNode(SizeofNode): # C sizeof function applied to a variable # # operand ExprNode subexprs = ['operand'] def analyse_types(self, env): # We may actually be looking at a type rather than a variable... # If we are, traditional analysis would fail... 
        operand_as_type = self.operand.analyse_as_type(env)
        if operand_as_type:
            self.arg_type = operand_as_type
            if self.arg_type.is_fused:
                self.arg_type = self.arg_type.specialize(env.fused_to_specific)
            self.__class__ = SizeofTypeNode
            self.check_type()
        else:
            self.operand = self.operand.analyse_types(env)
        return self

    def calculate_result_code(self):
        return "(sizeof(%s))" % self.operand.result()

    def generate_result_code(self, code):
        pass


class TypeidNode(ExprNode):
    # C++ typeid operator applied to a type or variable
    #
    #  operand      ExprNode
    #  arg_type     ExprNode
    #  is_variable  boolean

    type = PyrexTypes.error_type

    subexprs = ['operand']

    arg_type = None
    is_variable = None
    is_temp = 1

    def get_type_info_type(self, env):
        env_module = env
        while not env_module.is_module_scope:
            env_module = env_module.outer_scope
        typeinfo_module = env_module.find_module('libcpp.typeinfo', self.pos)
        typeinfo_entry = typeinfo_module.lookup('type_info')
        return PyrexTypes.CFakeReferenceType(PyrexTypes.c_const_type(typeinfo_entry.type))

    def analyse_types(self, env):
        type_info = self.get_type_info_type(env)
        if not type_info:
            self.error("The 'libcpp.typeinfo' module must be cimported to use the typeid() operator")
            return self
        self.type = type_info
        as_type = self.operand.analyse_as_type(env)
        if as_type:
            self.arg_type = as_type
            self.is_type = True
        else:
            self.arg_type = self.operand.analyse_types(env)
            self.is_type = False
            if self.arg_type.type.is_pyobject:
                self.error("Cannot use typeid on a Python object")
                return self
            elif self.arg_type.type.is_void:
                self.error("Cannot use typeid on void")
                return self
            elif not self.arg_type.type.is_complete():
                self.error("Cannot use typeid on incomplete type '%s'" % self.arg_type.type)
                return self
        env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp"))
        return self

    def error(self, mess):
        error(self.pos, mess)
        self.type = PyrexTypes.error_type
        self.result_code = "<error>"

    def check_const(self):
        return True

    def calculate_result_code(self):
        return self.temp_code

    def generate_result_code(self, code):
        if self.is_type:
            arg_code = self.arg_type.empty_declaration_code()
        else:
            arg_code = self.arg_type.result()
        translate_cpp_exception(code, self.pos,
            "%s = typeid(%s);" % (self.temp_code, arg_code),
            None, None, self.in_nogil_context)


class TypeofNode(ExprNode):
    #  Compile-time type of an expression, as a string.
    #
    #  operand   ExprNode
    #  literal   StringNode  # internal

    literal = None
    type = py_object_type

    subexprs = ['literal']  # 'operand' will be ignored after type analysis!
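    # Illustrative example (not part of the original source): given
    #
    #     cdef int i = 0
    #     print(cython.typeof(i))   # prints "int"
    #
    # the 'literal' child holds the constant string naming the operand's
    # compile-time type; the operand is analysed for its type only and is
    # never evaluated at runtime.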
def analyse_types(self, env): self.operand = self.operand.analyse_types(env) value = StringEncoding.EncodedString(str(self.operand.type)) #self.operand.type.typeof_name()) literal = StringNode(self.pos, value=value) literal = literal.analyse_types(env) self.literal = literal.coerce_to_pyobject(env) return self def analyse_as_type(self, env): self.operand = self.operand.analyse_types(env) return self.operand.type def may_be_none(self): return False def generate_evaluation_code(self, code): self.literal.generate_evaluation_code(code) def calculate_result_code(self): return self.literal.calculate_result_code() #------------------------------------------------------------------- # # Binary operator nodes # #------------------------------------------------------------------- try: matmul_operator = operator.matmul except AttributeError: def matmul_operator(a, b): try: func = a.__matmul__ except AttributeError: func = b.__rmatmul__ return func(a, b) compile_time_binary_operators = { '<': operator.lt, '<=': operator.le, '==': operator.eq, '!=': operator.ne, '>=': operator.ge, '>': operator.gt, 'is': operator.is_, 'is_not': operator.is_not, '+': operator.add, '&': operator.and_, '/': operator.truediv, '//': operator.floordiv, '<<': operator.lshift, '%': operator.mod, '*': operator.mul, '|': operator.or_, '**': operator.pow, '>>': operator.rshift, '-': operator.sub, '^': operator.xor, '@': matmul_operator, 'in': lambda x, seq: x in seq, 'not_in': lambda x, seq: x not in seq, } def get_compile_time_binop(node): func = compile_time_binary_operators.get(node.operator) if not func: error(node.pos, "Binary '%s' not supported in compile-time expression" % node.operator) return func class BinopNode(ExprNode): # operator string # operand1 ExprNode # operand2 ExprNode # # Processing during analyse_expressions phase: # # analyse_c_operation # Called when neither operand is a pyobject. # - Check operand types and coerce if needed. # - Determine result type and result code fragment. # - Allocate temporary for result if needed. 
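    # Illustrative sketch (not part of the original source), using
    # hypothetical operands:
    #
    #     cdef int a, b
    #     c = a + b        # pure C operands: compiles to "(a + b)"
    #     x = obj1 + obj2  # Python operands: compiles to a PyNumber_Add()-style call
    #
    # analyse_operation() below dispatches between the Pythran, Python
    # object, C++ and plain C code paths based on the operand types.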

class BinopNode(ExprNode):
    #  operator     string
    #  operand1     ExprNode
    #  operand2     ExprNode
    #
    #  Processing during analyse_expressions phase:
    #
    #    analyse_c_operation
    #      Called when neither operand is a pyobject.
    #      - Check operand types and coerce if needed.
    #      - Determine result type and result code fragment.
    #      - Allocate temporary for result if needed.

    subexprs = ['operand1', 'operand2']
    inplace = False

    def calculate_constant_result(self):
        func = compile_time_binary_operators[self.operator]
        self.constant_result = func(
            self.operand1.constant_result,
            self.operand2.constant_result)

    def compile_time_value(self, denv):
        func = get_compile_time_binop(self)
        operand1 = self.operand1.compile_time_value(denv)
        operand2 = self.operand2.compile_time_value(denv)
        try:
            return func(operand1, operand2)
        except Exception as e:
            self.compile_time_value_error(e)

    def infer_type(self, env):
        return self.result_type(self.operand1.infer_type(env),
                                self.operand2.infer_type(env), env)

    def analyse_types(self, env):
        self.operand1 = self.operand1.analyse_types(env)
        self.operand2 = self.operand2.analyse_types(env)
        self.analyse_operation(env)
        return self

    def analyse_operation(self, env):
        if self.is_pythran_operation(env):
            self.type = self.result_type(self.operand1.type, self.operand2.type, env)
            assert self.type.is_pythran_expr
            self.is_temp = 1
        elif self.is_py_operation():
            self.coerce_operands_to_pyobjects(env)
            self.type = self.result_type(self.operand1.type, self.operand2.type, env)
            assert self.type.is_pyobject
            self.is_temp = 1
        elif self.is_cpp_operation():
            self.analyse_cpp_operation(env)
        else:
            self.analyse_c_operation(env)

    def is_py_operation(self):
        return self.is_py_operation_types(self.operand1.type, self.operand2.type)

    def is_py_operation_types(self, type1, type2):
        return type1.is_pyobject or type2.is_pyobject or type1.is_ctuple or type2.is_ctuple

    def is_pythran_operation(self, env):
        return self.is_pythran_operation_types(self.operand1.type, self.operand2.type, env)

    def is_pythran_operation_types(self, type1, type2, env):
        # Support only expr op supported_type, or supported_type op expr
        return has_np_pythran(env) and \
            (is_pythran_supported_operation_type(type1) and is_pythran_supported_operation_type(type2)) and \
            (is_pythran_expr(type1) or is_pythran_expr(type2))

    def is_cpp_operation(self):
        return (self.operand1.type.is_cpp_class
                or self.operand2.type.is_cpp_class)

    def analyse_cpp_operation(self, env):
        entry = env.lookup_operator(self.operator, [self.operand1, self.operand2])
        if not entry:
            self.type_error()
            return
        func_type = entry.type
        self.exception_check = func_type.exception_check
        self.exception_value = func_type.exception_value
        if self.exception_check == '+':
            # Used by NumBinopNodes to break up expressions involving multiple
            # operators so that exceptions can be handled properly.
            self.is_temp = 1
            if self.exception_value is None:
                env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp"))
        if func_type.is_ptr:
            func_type = func_type.base_type
        if len(func_type.args) == 1:
            self.operand2 = self.operand2.coerce_to(func_type.args[0].type, env)
        else:
            self.operand1 = self.operand1.coerce_to(func_type.args[0].type, env)
            self.operand2 = self.operand2.coerce_to(func_type.args[1].type, env)
        self.type = func_type.return_type

    def result_type(self, type1, type2, env):
        if self.is_pythran_operation_types(type1, type2, env):
            return PythranExpr(pythran_binop_type(self.operator, type1, type2))
        if self.is_py_operation_types(type1, type2):
            if type2.is_string:
                type2 = Builtin.bytes_type
            elif type2.is_pyunicode_ptr:
                type2 = Builtin.unicode_type
            if type1.is_string:
                type1 = Builtin.bytes_type
            elif type1.is_pyunicode_ptr:
                type1 = Builtin.unicode_type
            if type1.is_builtin_type or type2.is_builtin_type:
                if type1 is type2 and self.operator in '**%+|&^':
                    # FIXME: at least these operators should be safe - others?
                    return type1
                result_type = self.infer_builtin_types_operation(type1, type2)
                if result_type is not None:
                    return result_type
            return py_object_type
        elif type1.is_error or type2.is_error:
            return PyrexTypes.error_type
        else:
            return self.compute_c_result_type(type1, type2)

    def infer_builtin_types_operation(self, type1, type2):
        return None

    def nogil_check(self, env):
        if self.is_py_operation():
            self.gil_error()

    def coerce_operands_to_pyobjects(self, env):
        self.operand1 = self.operand1.coerce_to_pyobject(env)
        self.operand2 = self.operand2.coerce_to_pyobject(env)

    def check_const(self):
        return self.operand1.check_const() and self.operand2.check_const()

    def is_ephemeral(self):
        return (super(BinopNode, self).is_ephemeral() or
                self.operand1.is_ephemeral() or self.operand2.is_ephemeral())

    def generate_result_code(self, code):
        if self.type.is_pythran_expr:
            code.putln("// Pythran binop")
            code.putln("__Pyx_call_destructor(%s);" % self.result())
            if self.operator == '**':
                code.putln("new (&%s) decltype(%s){pythonic::numpy::functor::power{}(%s, %s)};" % (
                    self.result(),
                    self.result(),
                    self.operand1.pythran_result(),
                    self.operand2.pythran_result()))
            else:
                code.putln("new (&%s) decltype(%s){%s %s %s};" % (
                    self.result(),
                    self.result(),
                    self.operand1.pythran_result(),
                    self.operator,
                    self.operand2.pythran_result()))
        elif self.operand1.type.is_pyobject:
            function = self.py_operation_function(code)
            if self.operator == '**':
                extra_args = ", Py_None"
            else:
                extra_args = ""
            code.putln(
                "%s = %s(%s, %s%s); %s" % (
                    self.result(),
                    function,
                    self.operand1.py_result(),
                    self.operand2.py_result(),
                    extra_args,
                    code.error_goto_if_null(self.result(), self.pos)))
            code.put_gotref(self.py_result())
        elif self.is_temp:
            # C++ overloaded operators with exception values are currently all
            # handled through temporaries.
            if self.is_cpp_operation() and self.exception_check == '+':
                translate_cpp_exception(code, self.pos,
                                        "%s = %s;" % (self.result(), self.calculate_result_code()),
                                        self.result() if self.type.is_pyobject else None,
                                        self.exception_value, self.in_nogil_context)
            else:
                code.putln("%s = %s;" % (self.result(), self.calculate_result_code()))

    def type_error(self):
        if not (self.operand1.type.is_error
                or self.operand2.type.is_error):
            error(self.pos, "Invalid operand types for '%s' (%s; %s)" % (
                self.operator, self.operand1.type, self.operand2.type))
        self.type = PyrexTypes.error_type


class CBinopNode(BinopNode):

    def analyse_types(self, env):
        node = BinopNode.analyse_types(self, env)
        if node.is_py_operation():
            node.type = PyrexTypes.error_type
        return node

    def py_operation_function(self, code):
        return ""

    def calculate_result_code(self):
        return "(%s %s %s)" % (
            self.operand1.result(),
            self.operator,
            self.operand2.result())

    def compute_c_result_type(self, type1, type2):
        cpp_type = None
        if type1.is_cpp_class or type1.is_ptr:
            cpp_type = type1.find_cpp_operation_type(self.operator, type2)
        if cpp_type is None and (type2.is_cpp_class or type2.is_ptr):
            cpp_type = type2.find_cpp_operation_type(self.operator, type1)
        # FIXME: do we need to handle other cases here?
        return cpp_type


def c_binop_constructor(operator):
    def make_binop_node(pos, **operands):
        return CBinopNode(pos, operator=operator, **operands)
    return make_binop_node
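
# Illustrative sketch (not part of the compiler): compile_time_value() in
# BinopNode simply looks up the operator in the table defined above and
# applies it to the folded operand values, so compile-time expression
# folding follows Python semantics exactly.
def _demo_compile_time_fold():
    fold = compile_time_binary_operators
    assert fold['+'](2, 3) == 5
    assert fold['//'](7, 2) == 3
    assert fold['in'](1, (1, 2, 3)) is True
    return fold['**'](2, 10)  # -> 1024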

class NumBinopNode(BinopNode):
    #  Binary operation taking numeric arguments.

    infix = True
    overflow_check = False
    overflow_bit_node = None

    def analyse_c_operation(self, env):
        type1 = self.operand1.type
        type2 = self.operand2.type
        self.type = self.compute_c_result_type(type1, type2)
        if not self.type:
            self.type_error()
            return
        if self.type.is_complex:
            self.infix = False
        if (self.type.is_int
                and env.directives['overflowcheck']
                and self.operator in self.overflow_op_names):
            if (self.operator in ('+', '*')
                    and self.operand1.has_constant_result()
                    and not self.operand2.has_constant_result()):
                self.operand1, self.operand2 = self.operand2, self.operand1
            self.overflow_check = True
            self.overflow_fold = env.directives['overflowcheck.fold']
            self.func = self.type.overflow_check_binop(
                self.overflow_op_names[self.operator],
                env,
                const_rhs=self.operand2.has_constant_result())
            self.is_temp = True
        if not self.infix or (type1.is_numeric and type2.is_numeric):
            self.operand1 = self.operand1.coerce_to(self.type, env)
            self.operand2 = self.operand2.coerce_to(self.type, env)

    def compute_c_result_type(self, type1, type2):
        if self.c_types_okay(type1, type2):
            widest_type = PyrexTypes.widest_numeric_type(type1, type2)
            if widest_type is PyrexTypes.c_bint_type:
                if self.operator not in '|^&':
                    # False + False == 0  # not False!
                    widest_type = PyrexTypes.c_int_type
            else:
                widest_type = PyrexTypes.widest_numeric_type(
                    widest_type, PyrexTypes.c_int_type)
            return widest_type
        else:
            return None

    def may_be_none(self):
        if self.type and self.type.is_builtin_type:
            # if we know the result type, we know the operation, so it can't be None
            return False
        type1 = self.operand1.type
        type2 = self.operand2.type
        if type1 and type1.is_builtin_type and type2 and type2.is_builtin_type:
            # XXX: I can't think of any case where a binary operation
            # on builtin types evaluates to None - add a special case
            # here if there is one.
            return False
        return super(NumBinopNode, self).may_be_none()

    def get_constant_c_result_code(self):
        value1 = self.operand1.get_constant_c_result_code()
        value2 = self.operand2.get_constant_c_result_code()
        if value1 and value2:
            return "(%s %s %s)" % (value1, self.operator, value2)
        else:
            return None

    def c_types_okay(self, type1, type2):
        #print "NumBinopNode.c_types_okay:", type1, type2 ###
        return (type1.is_numeric or type1.is_enum) \
            and (type2.is_numeric or type2.is_enum)

    def generate_evaluation_code(self, code):
        if self.overflow_check:
            self.overflow_bit_node = self
            self.overflow_bit = code.funcstate.allocate_temp(PyrexTypes.c_int_type, manage_ref=False)
            code.putln("%s = 0;" % self.overflow_bit)
        super(NumBinopNode, self).generate_evaluation_code(code)
        if self.overflow_check:
            code.putln("if (unlikely(%s)) {" % self.overflow_bit)
            code.putln('PyErr_SetString(PyExc_OverflowError, "value too large");')
            code.putln(code.error_goto(self.pos))
            code.putln("}")
            code.funcstate.release_temp(self.overflow_bit)

    def calculate_result_code(self):
        if self.overflow_bit_node is not None:
            return "%s(%s, %s, &%s)" % (
                self.func,
                self.operand1.result(),
                self.operand2.result(),
                self.overflow_bit_node.overflow_bit)
        elif self.type.is_cpp_class or self.infix:
            if is_pythran_expr(self.type):
                result1, result2 = self.operand1.pythran_result(), self.operand2.pythran_result()
            else:
                result1, result2 = self.operand1.result(), self.operand2.result()
            return "(%s %s %s)" % (result1, self.operator, result2)
        else:
            func = self.type.binary_op(self.operator)
            if func is None:
                error(self.pos, "binary operator %s not supported for %s" % (self.operator, self.type))
            return "%s(%s, %s)" % (
                func,
                self.operand1.result(),
                self.operand2.result())

    def is_py_operation_types(self, type1, type2):
        return (type1.is_unicode_char or
                type2.is_unicode_char or
                BinopNode.is_py_operation_types(self, type1, type2))

    def py_operation_function(self, code):
        function_name = self.py_functions[self.operator]
        if self.inplace:
            function_name = function_name.replace('PyNumber_', 'PyNumber_InPlace')
        return function_name

    py_functions = {
        "|":  "PyNumber_Or",
        "^":  "PyNumber_Xor",
        "&":  "PyNumber_And",
        "<<": "PyNumber_Lshift",
        ">>": "PyNumber_Rshift",
        "+":  "PyNumber_Add",
        "-":  "PyNumber_Subtract",
        "*":  "PyNumber_Multiply",
        "@":  "__Pyx_PyNumber_MatrixMultiply",
        "/":  "__Pyx_PyNumber_Divide",
        "//": "PyNumber_FloorDivide",
        "%":  "PyNumber_Remainder",
        "**": "PyNumber_Power",
    }

    overflow_op_names = {
        "+": "add",
        "-": "sub",
        "*": "mul",
        "<<": "lshift",
    }


class IntBinopNode(NumBinopNode):
    #  Binary operation taking integer arguments.

    def c_types_okay(self, type1, type2):
        #print "IntBinopNode.c_types_okay:", type1, type2 ###
        return (type1.is_int or type1.is_enum) \
            and (type2.is_int or type2.is_enum)
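
# Illustrative sketch (hypothetical helper, not part of the compiler): with
# 'overflowcheck' enabled, NumBinopNode above makes the generated C code call
# a checked helper that sets an overflow flag instead of silently wrapping.
# For a signed 32-bit add, the behaviour is roughly this, in Python terms:
def _demo_checked_add_int32(a, b):
    result = a + b
    overflowed = not (-2**31 <= result < 2**31)  # flag mirrors 'overflow_bit'
    return result, overflowed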

class AddNode(NumBinopNode):
    #  '+' operator.

    def is_py_operation_types(self, type1, type2):
        if type1.is_string and type2.is_string or type1.is_pyunicode_ptr and type2.is_pyunicode_ptr:
            return 1
        else:
            return NumBinopNode.is_py_operation_types(self, type1, type2)

    def infer_builtin_types_operation(self, type1, type2):
        # b'abc' + 'abc' raises an exception in Py3,
        # so we can safely infer the Py2 type for bytes here
        string_types = (bytes_type, bytearray_type, str_type, basestring_type, unicode_type)
        if type1 in string_types and type2 in string_types:
            return string_types[max(string_types.index(type1),
                                    string_types.index(type2))]
        return None

    def compute_c_result_type(self, type1, type2):
        #print "AddNode.compute_c_result_type:", type1, self.operator, type2 ###
        if (type1.is_ptr or type1.is_array) and (type2.is_int or type2.is_enum):
            return type1
        elif (type2.is_ptr or type2.is_array) and (type1.is_int or type1.is_enum):
            return type2
        else:
            return NumBinopNode.compute_c_result_type(
                self, type1, type2)

    def py_operation_function(self, code):
        is_unicode_concat = False
        if isinstance(self.operand1, FormattedValueNode) or isinstance(self.operand2, FormattedValueNode):
            is_unicode_concat = True
        else:
            type1, type2 = self.operand1.type, self.operand2.type
            if type1 is unicode_type or type2 is unicode_type:
                is_unicode_concat = type1.is_builtin_type and type2.is_builtin_type

        if is_unicode_concat:
            if self.operand1.may_be_none() or self.operand2.may_be_none():
                return '__Pyx_PyUnicode_ConcatSafe'
            else:
                return '__Pyx_PyUnicode_Concat'
        return super(AddNode, self).py_operation_function(code)


class SubNode(NumBinopNode):
    #  '-' operator.

    def compute_c_result_type(self, type1, type2):
        if (type1.is_ptr or type1.is_array) and (type2.is_int or type2.is_enum):
            return type1
        elif (type1.is_ptr or type1.is_array) and (type2.is_ptr or type2.is_array):
            return PyrexTypes.c_ptrdiff_t_type
        else:
            return NumBinopNode.compute_c_result_type(
                self, type1, type2)


class MulNode(NumBinopNode):
    #  '*' operator.

    def is_py_operation_types(self, type1, type2):
        if ((type1.is_string and type2.is_int) or
                (type2.is_string and type1.is_int)):
            return 1
        else:
            return NumBinopNode.is_py_operation_types(self, type1, type2)

    def infer_builtin_types_operation(self, type1, type2):
        # let's assume that whatever builtin type you multiply a string with
        # will either return a string of the same type or fail with an exception
        string_types = (bytes_type, bytearray_type, str_type, basestring_type, unicode_type)
        if type1 in string_types and type2.is_builtin_type:
            return type1
        if type2 in string_types and type1.is_builtin_type:
            return type2
        # multiplication of containers/numbers with an integer value
        # always (?) returns the same type
        if type1.is_int:
            return type2
        if type2.is_int:
            return type1
        return None


class MatMultNode(NumBinopNode):
    #  '@' operator.

    def is_py_operation_types(self, type1, type2):
        return True

    def generate_evaluation_code(self, code):
        code.globalstate.use_utility_code(UtilityCode.load_cached("MatrixMultiply", "ObjectHandling.c"))
        super(MatMultNode, self).generate_evaluation_code(code)
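
# Illustrative sketch (not part of the compiler): AddNode's
# infer_builtin_types_operation() above orders the string-ish builtin types
# from narrowest to widest and infers the later of the two operand types for
# a concatenation.  The stand-in names below mirror that index trick.
def _demo_widest_string_type():
    string_types = ('bytes', 'bytearray', 'str', 'basestring', 'unicode')
    def widest(t1, t2):
        return string_types[max(string_types.index(t1), string_types.index(t2))]
    assert widest('bytes', 'bytearray') == 'bytearray'
    assert widest('str', 'unicode') == 'unicode'
    return widest('str', 'basestring')  # -> 'basestring'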

class DivNode(NumBinopNode):
    #  '/' or '//' operator.

    cdivision = None
    truedivision = None   # == "unknown" if operator == '/'
    ctruedivision = False
    cdivision_warnings = False
    zerodivision_check = None

    def find_compile_time_binary_operator(self, op1, op2):
        func = compile_time_binary_operators[self.operator]
        if self.operator == '/' and self.truedivision is None:
            # => true div for floats, floor div for integers
            if isinstance(op1, _py_int_types) and isinstance(op2, _py_int_types):
                func = compile_time_binary_operators['//']
        return func

    def calculate_constant_result(self):
        op1 = self.operand1.constant_result
        op2 = self.operand2.constant_result
        func = self.find_compile_time_binary_operator(op1, op2)
        self.constant_result = func(
            self.operand1.constant_result,
            self.operand2.constant_result)

    def compile_time_value(self, denv):
        operand1 = self.operand1.compile_time_value(denv)
        operand2 = self.operand2.compile_time_value(denv)
        try:
            func = self.find_compile_time_binary_operator(
                operand1, operand2)
            return func(operand1, operand2)
        except Exception as e:
            self.compile_time_value_error(e)

    def _check_truedivision(self, env):
        if self.cdivision or env.directives['cdivision']:
            self.ctruedivision = False
        else:
            self.ctruedivision = self.truedivision

    def infer_type(self, env):
        self._check_truedivision(env)
        return self.result_type(
            self.operand1.infer_type(env),
            self.operand2.infer_type(env), env)

    def analyse_operation(self, env):
        self._check_truedivision(env)
        NumBinopNode.analyse_operation(self, env)
        if self.is_cpp_operation():
            self.cdivision = True
        if not self.type.is_pyobject:
            self.zerodivision_check = (
                self.cdivision is None and not env.directives['cdivision']
                and (not self.operand2.has_constant_result() or
                     self.operand2.constant_result == 0))
            if self.zerodivision_check or env.directives['cdivision_warnings']:
                # Need to check ahead of time to warn or raise zero division error
                self.operand1 = self.operand1.coerce_to_simple(env)
                self.operand2 = self.operand2.coerce_to_simple(env)

    def compute_c_result_type(self, type1, type2):
        if self.operator == '/' and self.ctruedivision and not type1.is_cpp_class and not type2.is_cpp_class:
            if not type1.is_float and not type2.is_float:
                widest_type = PyrexTypes.widest_numeric_type(type1, PyrexTypes.c_double_type)
                widest_type = PyrexTypes.widest_numeric_type(type2, widest_type)
                return widest_type
        return NumBinopNode.compute_c_result_type(self, type1, type2)

    def zero_division_message(self):
        if self.type.is_int:
            return "integer division or modulo by zero"
        else:
            return "float division"

    def generate_evaluation_code(self, code):
        if not self.type.is_pyobject and not self.type.is_complex:
            if self.cdivision is None:
                self.cdivision = (
                    code.globalstate.directives['cdivision']
                    or self.type.is_float
                    or ((self.type.is_numeric or self.type.is_enum) and not self.type.signed)
                )
            if not self.cdivision:
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("DivInt", "CMath.c").specialize(self.type))
        NumBinopNode.generate_evaluation_code(self, code)
        self.generate_div_warning_code(code)

    def generate_div_warning_code(self, code):
        in_nogil = self.in_nogil_context
        if not self.type.is_pyobject:
            if self.zerodivision_check:
                if not self.infix:
                    zero_test = "%s(%s)" % (self.type.unary_op('zero'), self.operand2.result())
                else:
                    zero_test = "%s == 0" % self.operand2.result()
                code.putln("if (unlikely(%s)) {" % zero_test)
                if in_nogil:
                    code.put_ensure_gil()
                code.putln('PyErr_SetString(PyExc_ZeroDivisionError, "%s");' % self.zero_division_message())
                if in_nogil:
                    code.put_release_ensured_gil()
                code.putln(code.error_goto(self.pos))
                code.putln("}")
                if self.type.is_int and self.type.signed and self.operator != '%':
                    code.globalstate.use_utility_code(UtilityCode.load_cached("UnaryNegOverflows", "Overflow.c"))
                    if self.operand2.type.signed == 2:
                        # explicitly signed, no runtime check needed
                        minus1_check = 'unlikely(%s == -1)' % self.operand2.result()
                    else:
                        type_of_op2 = self.operand2.type.empty_declaration_code()
                        minus1_check = '(!(((%s)-1) > 0)) && unlikely(%s == (%s)-1)' % (
                            type_of_op2, self.operand2.result(), type_of_op2)
                    code.putln("else if (sizeof(%s) == sizeof(long) && %s "
                               " && unlikely(UNARY_NEG_WOULD_OVERFLOW(%s))) {" % (
                               self.type.empty_declaration_code(),
                               minus1_check,
                               self.operand1.result()))
                    if in_nogil:
                        code.put_ensure_gil()
                    code.putln('PyErr_SetString(PyExc_OverflowError, "value too large to perform division");')
                    if in_nogil:
                        code.put_release_ensured_gil()
                    code.putln(code.error_goto(self.pos))
                    code.putln("}")
            if code.globalstate.directives['cdivision_warnings'] and self.operator != '/':
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("CDivisionWarning", "CMath.c"))
                code.putln("if (unlikely((%s < 0) ^ (%s < 0))) {" % (
                    self.operand1.result(),
                    self.operand2.result()))
                warning_code = "__Pyx_cdivision_warning(%(FILENAME)s, %(LINENO)s)" % {
                    'FILENAME': Naming.filename_cname,
                    'LINENO': Naming.lineno_cname,
                }

                if in_nogil:
                    result_code = 'result'
                    code.putln("int %s;" % result_code)
                    code.put_ensure_gil()
                    code.putln(code.set_error_info(self.pos, used=True))
                    code.putln("%s = %s;" % (result_code, warning_code))
                    code.put_release_ensured_gil()
                else:
                    result_code = warning_code
                    code.putln(code.set_error_info(self.pos, used=True))

                code.put("if (unlikely(%s)) " % result_code)
                code.put_goto(code.error_label)
                code.putln("}")

    def calculate_result_code(self):
        if self.type.is_complex or self.is_cpp_operation():
            return NumBinopNode.calculate_result_code(self)
        elif self.type.is_float and self.operator == '//':
            return "floor(%s / %s)" % (
                self.operand1.result(),
                self.operand2.result())
        elif self.truedivision or self.cdivision:
            op1 = self.operand1.result()
            op2 = self.operand2.result()
            if self.truedivision:
                if self.type != self.operand1.type:
                    op1 = self.type.cast_code(op1)
                if self.type != self.operand2.type:
                    op2 = self.type.cast_code(op2)
            return "(%s / %s)" % (op1, op2)
        else:
            return "__Pyx_div_%s(%s, %s)" % (
                self.type.specialization_name(),
                self.operand1.result(),
                self.operand2.result())


_find_formatting_types = re.compile(
    br"%"
    br"(?:%|"               # %%
    br"(?:\([^)]+\))?"      # %(name)
    br"[-+#,0-9 ]*([a-z])"  # %.2f  etc.
    br")").findall

# These format conversion types can never trigger a Unicode string conversion in Py2.
_safe_bytes_formats = set([
    # Excludes 's' and 'r', which can generate non-bytes strings.
    b'd', b'i', b'o', b'u', b'x', b'X', b'e', b'E', b'f', b'F', b'g', b'G', b'c', b'b', b'a',
])
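
# Illustrative sketch (not part of the compiler): the __Pyx_div_* helper that
# DivNode falls back to exists because C truncates integer division towards
# zero while Python floors it, so the two disagree whenever the operand signs
# differ.
def _demo_cdivision_vs_python():
    def c_div(a, b):
        # C semantics: quotient truncated towards zero
        q = abs(a) // abs(b)
        return q if (a < 0) == (b < 0) else -q
    assert c_div(-7, 2) == -3   # what plain C '/' computes
    assert (-7) // 2 == -4      # what Python (and the non-cdivision path) compute
    return c_div(-7, 2), (-7) // 2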

class ModNode(DivNode):
    #  '%' operator.

    def is_py_operation_types(self, type1, type2):
        return (type1.is_string
                or type2.is_string
                or NumBinopNode.is_py_operation_types(self, type1, type2))

    def infer_builtin_types_operation(self, type1, type2):
        # b'%s' % xyz  raises an exception in Py3<3.5, so it's safe to infer the type for Py2 and later Py3's.
        if type1 is unicode_type:
            # None + xyz  may be implemented by RHS
            if type2.is_builtin_type or not self.operand1.may_be_none():
                return type1
        elif type1 in (bytes_type, str_type, basestring_type):
            if type2 is unicode_type:
                return type2
            elif type2.is_numeric:
                return type1
            elif self.operand1.is_string_literal:
                if type1 is str_type or type1 is bytes_type:
                    if set(_find_formatting_types(self.operand1.value)) <= _safe_bytes_formats:
                        return type1
                return basestring_type
            elif type1 is bytes_type and not type2.is_builtin_type:
                return None   # RHS might implement '%' operator differently in Py3
            else:
                return basestring_type  # either str or unicode, can't tell
        return None

    def zero_division_message(self):
        if self.type.is_int:
            return "integer division or modulo by zero"
        else:
            return "float divmod()"

    def analyse_operation(self, env):
        DivNode.analyse_operation(self, env)
        if not self.type.is_pyobject:
            if self.cdivision is None:
                self.cdivision = env.directives['cdivision'] or not self.type.signed
            if not self.cdivision and not self.type.is_int and not self.type.is_float:
                error(self.pos, "mod operator not supported for type '%s'" % self.type)

    def generate_evaluation_code(self, code):
        if not self.type.is_pyobject and not self.cdivision:
            if self.type.is_int:
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("ModInt", "CMath.c").specialize(self.type))
            else:  # float
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("ModFloat", "CMath.c").specialize(
                        self.type, math_h_modifier=self.type.math_h_modifier))
        # NOTE: skipping over DivNode here
        NumBinopNode.generate_evaluation_code(self, code)
        self.generate_div_warning_code(code)

    def calculate_result_code(self):
        if self.cdivision:
            if self.type.is_float:
                return "fmod%s(%s, %s)" % (
                    self.type.math_h_modifier,
                    self.operand1.result(),
                    self.operand2.result())
            else:
                return "(%s %% %s)" % (
                    self.operand1.result(),
                    self.operand2.result())
        else:
            return "__Pyx_mod_%s(%s, %s)" % (
                self.type.specialization_name(),
                self.operand1.result(),
                self.operand2.result())

    def py_operation_function(self, code):
        type1, type2 = self.operand1.type, self.operand2.type
        # ("..." % x)  must call  "x.__rmod__()"  for string subtypes.
        if type1 is unicode_type:
            if self.operand1.may_be_none() or (
                    type2.is_extension_type and type2.subtype_of(type1) or
                    type2 is py_object_type and not isinstance(self.operand2, CoerceToPyTypeNode)):
                return '__Pyx_PyUnicode_FormatSafe'
            else:
                return 'PyUnicode_Format'
        elif type1 is str_type:
            if self.operand1.may_be_none() or (
                    type2.is_extension_type and type2.subtype_of(type1) or
                    type2 is py_object_type and not isinstance(self.operand2, CoerceToPyTypeNode)):
                return '__Pyx_PyString_FormatSafe'
            else:
                return '__Pyx_PyString_Format'
        return super(ModNode, self).py_operation_function(code)
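
# Illustrative sketch (not part of the compiler): like division, C's '%' (and
# fmod()) keeps the sign of the dividend, while Python's '%' keeps the sign
# of the divisor; the __Pyx_mod_* path papers over that difference.
def _demo_cmod_vs_python():
    import math
    assert math.fmod(-7.0, 2.0) == -1.0   # C-style result, sign of dividend
    assert -7 % 2 == 1                    # Python result, sign of divisor
    return math.fmod(-7.0, 2.0), -7 % 2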

class PowNode(NumBinopNode):
    #  '**' operator.

    def analyse_c_operation(self, env):
        NumBinopNode.analyse_c_operation(self, env)
        if self.type.is_complex:
            if self.type.real_type.is_float:
                self.operand1 = self.operand1.coerce_to(self.type, env)
                self.operand2 = self.operand2.coerce_to(self.type, env)
                self.pow_func = self.type.binary_op('**')
            else:
                error(self.pos, "complex int powers not supported")
                self.pow_func = "<error>"
        elif self.type.is_float:
            self.pow_func = "pow" + self.type.math_h_modifier
        elif self.type.is_int:
            self.pow_func = "__Pyx_pow_%s" % self.type.empty_declaration_code().replace(' ', '_')
            env.use_utility_code(
                UtilityCode.load_cached("IntPow", "CMath.c").specialize(
                    func_name=self.pow_func,
                    type=self.type.empty_declaration_code(),
                    signed=self.type.signed and 1 or 0))
        elif not self.type.is_error:
            error(self.pos, "got unexpected types for C power operator: %s, %s" %
                  (self.operand1.type, self.operand2.type))

    def calculate_result_code(self):
        # Work around MSVC overloading ambiguity.
        def typecast(operand):
            if self.type == operand.type:
                return operand.result()
            else:
                return self.type.cast_code(operand.result())
        return "%s(%s, %s)" % (
            self.pow_func,
            typecast(self.operand1),
            typecast(self.operand2))

    def py_operation_function(self, code):
        if (self.type.is_pyobject and
                self.operand1.constant_result == 2 and
                isinstance(self.operand1.constant_result, _py_int_types) and
                self.operand2.type is py_object_type):
            code.globalstate.use_utility_code(UtilityCode.load_cached('PyNumberPow2', 'Optimize.c'))
            if self.inplace:
                return '__Pyx_PyNumber_InPlacePowerOf2'
            else:
                return '__Pyx_PyNumber_PowerOf2'
        return super(PowNode, self).py_operation_function(code)


class BoolBinopNode(ExprNode):
    """
    Short-circuiting boolean operation.

    Note that this node provides the same code generation method as
    BoolBinopResultNode to simplify expression nesting.

    operator  string                              "and"/"or"
    operand1  BoolBinopNode/BoolBinopResultNode   left operand
    operand2  BoolBinopNode/BoolBinopResultNode   right operand
    """
    subexprs = ['operand1', 'operand2']
    is_temp = True
    operator = None
    operand1 = None
    operand2 = None

    def infer_type(self, env):
        type1 = self.operand1.infer_type(env)
        type2 = self.operand2.infer_type(env)
        return PyrexTypes.independent_spanning_type(type1, type2)

    def may_be_none(self):
        if self.operator == 'or':
            return self.operand2.may_be_none()
        else:
            return self.operand1.may_be_none() or self.operand2.may_be_none()

    def calculate_constant_result(self):
        operand1 = self.operand1.constant_result
        operand2 = self.operand2.constant_result
        if self.operator == 'and':
            self.constant_result = operand1 and operand2
        else:
            self.constant_result = operand1 or operand2

    def compile_time_value(self, denv):
        operand1 = self.operand1.compile_time_value(denv)
        operand2 = self.operand2.compile_time_value(denv)
        if self.operator == 'and':
            return operand1 and operand2
        else:
            return operand1 or operand2

    def is_ephemeral(self):
        return self.operand1.is_ephemeral() or self.operand2.is_ephemeral()

    def analyse_types(self, env):
        # Note: we do not do any coercion here as we most likely do not know the final type anyway.
        # We even accept to set self.type to ErrorType if both operands do not have a spanning type.
        # The coercion to the final type and to a "simple" value is left to coerce_to().
        operand1 = self.operand1.analyse_types(env)
        operand2 = self.operand2.analyse_types(env)
        self.type = PyrexTypes.independent_spanning_type(
            operand1.type, operand2.type)
        self.operand1 = self._wrap_operand(operand1, env)
        self.operand2 = self._wrap_operand(operand2, env)
        return self

    def _wrap_operand(self, operand, env):
        if not isinstance(operand, (BoolBinopNode, BoolBinopResultNode)):
            operand = BoolBinopResultNode(operand, self.type, env)
        return operand

    def wrap_operands(self, env):
        """
        Must get called by transforms that want to create a correct BoolBinopNode
        after the type analysis phase.
        """
        self.operand1 = self._wrap_operand(self.operand1, env)
        self.operand2 = self._wrap_operand(self.operand2, env)

    def coerce_to_boolean(self, env):
        return self.coerce_to(PyrexTypes.c_bint_type, env)

    def coerce_to(self, dst_type, env):
        operand1 = self.operand1.coerce_to(dst_type, env)
        operand2 = self.operand2.coerce_to(dst_type, env)
        return BoolBinopNode.from_node(
            self, type=dst_type,
            operator=self.operator,
            operand1=operand1, operand2=operand2)

    def generate_bool_evaluation_code(self, code, final_result_temp, final_result_type,
                                      and_label, or_label, end_label, fall_through):
        code.mark_pos(self.pos)

        outer_labels = (and_label, or_label)
        if self.operator == 'and':
            my_label = and_label = code.new_label('next_and')
        else:
            my_label = or_label = code.new_label('next_or')
        self.operand1.generate_bool_evaluation_code(
            code, final_result_temp, final_result_type, and_label, or_label, end_label, my_label)

        and_label, or_label = outer_labels

        code.put_label(my_label)
        self.operand2.generate_bool_evaluation_code(
            code, final_result_temp, final_result_type, and_label, or_label, end_label, fall_through)

    def generate_evaluation_code(self, code):
        self.allocate_temp_result(code)
        result_type = PyrexTypes.py_object_type if self.type.is_pyobject else self.type
        or_label = and_label = None
        end_label = code.new_label('bool_binop_done')
        self.generate_bool_evaluation_code(code, self.result(), result_type,
                                           and_label, or_label, end_label, end_label)
        code.put_label(end_label)

    gil_message = "Truth-testing Python object"

    def check_const(self):
        return self.operand1.check_const() and self.operand2.check_const()

    def generate_subexpr_disposal_code(self, code):
        pass  # nothing to do here, all done in generate_evaluation_code()

    def free_subexpr_temps(self, code):
        pass  # nothing to do here, all done in generate_evaluation_code()

    def generate_operand1_test(self, code):
        #  Generate code to test the truth of the first operand.
        if self.type.is_pyobject:
            test_result = code.funcstate.allocate_temp(
                PyrexTypes.c_bint_type, manage_ref=False)
            code.putln(
                "%s = __Pyx_PyObject_IsTrue(%s); %s" % (
                    test_result,
                    self.operand1.py_result(),
                    code.error_goto_if_neg(test_result, self.pos)))
        else:
            test_result = self.operand1.result()
        return (test_result, self.type.is_pyobject)
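
# Illustrative sketch (not part of the compiler): the label juggling in
# BoolBinopNode above flattens nested and/or chains so that each operand is
# evaluated at most once and evaluation stops as soon as the outcome is
# decided - the same short-circuit rules Python itself applies:
def _demo_short_circuit():
    calls = []
    def record(name, value):
        calls.append(name)
        return value
    result = record('a', False) and record('b', True) or record('c', True)
    assert calls == ['a', 'c']  # 'b' was short-circuited away
    return result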

class BoolBinopResultNode(ExprNode):
    """
    Intermediate result of a short-circuiting and/or expression.
    Tests the result for 'truthiness' and takes care of coercing the final result
    of the overall expression to the target type.

    Note that this node provides the same code generation method as
    BoolBinopNode to simplify expression nesting.

    arg      ExprNode    the argument to test
    value    ExprNode    the coerced result value node
    """
    subexprs = ['arg', 'value']
    is_temp = True
    arg = None
    value = None

    def __init__(self, arg, result_type, env):
        # using 'arg' multiple times, so it must be a simple/temp value
        arg = arg.coerce_to_simple(env)
        # wrap in ProxyNode, in case a transform wants to replace self.arg later
        arg = ProxyNode(arg)
        super(BoolBinopResultNode, self).__init__(
            arg.pos, arg=arg, type=result_type,
            value=CloneNode(arg).coerce_to(result_type, env))

    def coerce_to_boolean(self, env):
        return self.coerce_to(PyrexTypes.c_bint_type, env)

    def coerce_to(self, dst_type, env):
        # unwrap, coerce, rewrap
        arg = self.arg.arg
        if dst_type is PyrexTypes.c_bint_type:
            arg = arg.coerce_to_boolean(env)
        # TODO: unwrap more coercion nodes?
        return BoolBinopResultNode(arg, dst_type, env)

    def nogil_check(self, env):
        # let's leave all errors to BoolBinopNode
        pass

    def generate_operand_test(self, code):
        #  Generate code to test the truth of the first operand.
        if self.arg.type.is_pyobject:
            test_result = code.funcstate.allocate_temp(
                PyrexTypes.c_bint_type, manage_ref=False)
            code.putln(
                "%s = __Pyx_PyObject_IsTrue(%s); %s" % (
                    test_result,
                    self.arg.py_result(),
                    code.error_goto_if_neg(test_result, self.pos)))
        else:
            test_result = self.arg.result()
        return (test_result, self.arg.type.is_pyobject)

    def generate_bool_evaluation_code(self, code, final_result_temp, final_result_type,
                                      and_label, or_label, end_label, fall_through):
        code.mark_pos(self.pos)

        # x => x
        # x and ... or ... => next 'and' / 'or'
        # False ... or x => next 'or'
        # True and x => next 'and'
        # True or x => True (operand)

        self.arg.generate_evaluation_code(code)
        if and_label or or_label:
            test_result, uses_temp = self.generate_operand_test(code)
            if uses_temp and (and_label and or_label):
                # cannot become final result => free early
                # disposal: uses_temp and (and_label and or_label)
                self.arg.generate_disposal_code(code)
            sense = '!' if or_label else ''
            code.putln("if (%s%s) {" % (sense, test_result))
            if uses_temp:
                code.funcstate.release_temp(test_result)
            if not uses_temp or not (and_label and or_label):
                # disposal: (not uses_temp) or {not (and_label and or_label) [if]}
                self.arg.generate_disposal_code(code)

            if or_label and or_label != fall_through:
                # value is false => short-circuit to next 'or'
                code.put_goto(or_label)
            if and_label:
                # value is true => go to next 'and'
                if or_label:
                    code.putln("} else {")
                    if not uses_temp:
                        # disposal: (not uses_temp) and {(and_label and or_label) [else]}
                        self.arg.generate_disposal_code(code)
                if and_label != fall_through:
                    code.put_goto(and_label)

        if not and_label or not or_label:
            # if no next 'and' or 'or', we provide the result
            if and_label or or_label:
                code.putln("} else {")
            self.value.generate_evaluation_code(code)
            self.value.make_owned_reference(code)
            code.putln("%s = %s;" % (final_result_temp, self.value.result_as(final_result_type)))
            self.value.generate_post_assignment_code(code)
            # disposal: {not (and_label and or_label) [else]}
            self.arg.generate_disposal_code(code)
            self.value.free_temps(code)
            if end_label != fall_through:
                code.put_goto(end_label)

        if and_label or or_label:
            code.putln("}")
        self.arg.free_temps(code)


class CondExprNode(ExprNode):
    #  Short-circuiting conditional expression.
    #
    #  test        ExprNode
    #  true_val    ExprNode
    #  false_val   ExprNode

    true_val = None
    false_val = None
    is_temp = True

    subexprs = ['test', 'true_val', 'false_val']

    def type_dependencies(self, env):
        return self.true_val.type_dependencies(env) + self.false_val.type_dependencies(env)

    def infer_type(self, env):
        return PyrexTypes.independent_spanning_type(
            self.true_val.infer_type(env),
            self.false_val.infer_type(env))

    def calculate_constant_result(self):
        if self.test.constant_result:
            self.constant_result = self.true_val.constant_result
        else:
            self.constant_result = self.false_val.constant_result

    def is_ephemeral(self):
        return self.true_val.is_ephemeral() or self.false_val.is_ephemeral()

    def analyse_types(self, env):
        self.test = self.test.analyse_types(env).coerce_to_boolean(env)
        self.true_val = self.true_val.analyse_types(env)
        self.false_val = self.false_val.analyse_types(env)
        return self.analyse_result_type(env)

    def analyse_result_type(self, env):
        self.type = PyrexTypes.independent_spanning_type(
            self.true_val.type, self.false_val.type)
        if self.type.is_reference:
            self.type = PyrexTypes.CFakeReferenceType(self.type.ref_base_type)
        if self.type.is_pyobject:
            self.result_ctype = py_object_type
        elif self.true_val.is_ephemeral() or self.false_val.is_ephemeral():
            error(self.pos, "Unsafe C derivative of temporary Python reference used in conditional expression")

        if self.true_val.type.is_pyobject or self.false_val.type.is_pyobject:
            self.true_val = self.true_val.coerce_to(self.type, env)
            self.false_val = self.false_val.coerce_to(self.type, env)

        if self.type.is_error:
            self.type_error()
        return self

    def coerce_to_integer(self, env):
        self.true_val = self.true_val.coerce_to_integer(env)
        self.false_val = self.false_val.coerce_to_integer(env)
        self.result_ctype = None
        return self.analyse_result_type(env)

    def coerce_to(self, dst_type, env):
        self.true_val = self.true_val.coerce_to(dst_type, env)
        self.false_val = self.false_val.coerce_to(dst_type, env)
        self.result_ctype = None
        return self.analyse_result_type(env)

    def type_error(self):
        if not (self.true_val.type.is_error or self.false_val.type.is_error):
            error(self.pos, "Incompatible types in conditional expression (%s; %s)" %
                (self.true_val.type, self.false_val.type))
        self.type = PyrexTypes.error_type

    def check_const(self):
        return (self.test.check_const()
            and self.true_val.check_const()
            and self.false_val.check_const())

    def generate_evaluation_code(self, code):
        # Because subexprs may not be evaluated we can use a more optimal
        # subexpr allocation strategy than the default, so override evaluation_code.
        code.mark_pos(self.pos)
        self.allocate_temp_result(code)
        self.test.generate_evaluation_code(code)
        code.putln("if (%s) {" % self.test.result())
        self.eval_and_get(code, self.true_val)
        code.putln("} else {")
        self.eval_and_get(code, self.false_val)
        code.putln("}")
        self.test.generate_disposal_code(code)
        self.test.free_temps(code)

    def eval_and_get(self, code, expr):
        expr.generate_evaluation_code(code)
        if self.type.is_memoryviewslice:
            expr.make_owned_memoryviewslice(code)
        else:
            expr.make_owned_reference(code)
        code.putln('%s = %s;' % (self.result(), expr.result_as(self.ctype())))
        expr.generate_post_assignment_code(code)
        expr.free_temps(code)

    def generate_subexpr_disposal_code(self, code):
        pass  # done explicitly above (cleanup must separately happen within the if/else blocks)

    def free_subexpr_temps(self, code):
        pass  # done explicitly above (cleanup must separately happen within the if/else blocks)


richcmp_constants = {
    "<" : "Py_LT",
    "<=": "Py_LE",
    "==": "Py_EQ",
    "!=": "Py_NE",
    "<>": "Py_NE",
    ">" : "Py_GT",
    ">=": "Py_GE",
    # the following are faked by special compare functions
    "in": "Py_EQ",
    "not_in": "Py_NE",
}
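
# Illustrative sketch (not part of the compiler): the cascading comparison
# machinery below shares the middle operand between two comparisons, so it is
# coerced to a simple/temp value and evaluated only once - the same guarantee
# Python itself gives for chained comparisons:
def _demo_cascaded_compare():
    calls = []
    def mid():
        calls.append('mid')
        return 5
    result = 1 < mid() < 10
    assert result and calls == ['mid']  # the shared operand ran exactly once
    return result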

class CmpNode(object):
    #  Mixin class containing code common to PrimaryCmpNodes
    #  and CascadedCmpNodes.

    special_bool_cmp_function = None
    special_bool_cmp_utility_code = None

    def infer_type(self, env):
        # TODO: Actually implement this (after merging with -unstable).
        return py_object_type

    def calculate_cascaded_constant_result(self, operand1_result):
        func = compile_time_binary_operators[self.operator]
        operand2_result = self.operand2.constant_result
        if (isinstance(operand1_result, any_string_type) and
                isinstance(operand2_result, any_string_type) and
                type(operand1_result) != type(operand2_result)):
            # string comparison of different types isn't portable
            return

        if self.operator in ('in', 'not_in'):
            if isinstance(self.operand2, (ListNode, TupleNode, SetNode)):
                if not self.operand2.args:
                    self.constant_result = self.operator == 'not_in'
                    return
                elif isinstance(self.operand2, ListNode) and not self.cascade:
                    # tuples are more efficient to store than lists
                    self.operand2 = self.operand2.as_tuple()
            elif isinstance(self.operand2, DictNode):
                if not self.operand2.key_value_pairs:
                    self.constant_result = self.operator == 'not_in'
                    return

        self.constant_result = func(operand1_result, operand2_result)

    def cascaded_compile_time_value(self, operand1, denv):
        func = get_compile_time_binop(self)
        operand2 = self.operand2.compile_time_value(denv)
        try:
            result = func(operand1, operand2)
        except Exception as e:
            self.compile_time_value_error(e)
            result = None
        if result:
            cascade = self.cascade
            if cascade:
                result = result and cascade.cascaded_compile_time_value(operand2, denv)
        return result

    def is_cpp_comparison(self):
        return self.operand1.type.is_cpp_class or self.operand2.type.is_cpp_class

    def find_common_int_type(self, env, op, operand1, operand2):
        # type1 != type2 and at least one of the types is not a C int
        type1 = operand1.type
        type2 = operand2.type
        type1_can_be_int = False
        type2_can_be_int = False

        if operand1.is_string_literal and operand1.can_coerce_to_char_literal():
            type1_can_be_int = True
        if operand2.is_string_literal and operand2.can_coerce_to_char_literal():
            type2_can_be_int = True

        if type1.is_int:
            if type2_can_be_int:
                return type1
        elif type2.is_int:
            if type1_can_be_int:
                return type2
        elif type1_can_be_int:
            if type2_can_be_int:
                if Builtin.unicode_type in (type1, type2):
                    return PyrexTypes.c_py_ucs4_type
                else:
                    return PyrexTypes.c_uchar_type

        return None

    def find_common_type(self, env, op, operand1, common_type=None):
        operand2 = self.operand2
        type1 = operand1.type
        type2 = operand2.type

        new_common_type = None

        # catch general errors
        if (type1 == str_type and (type2.is_string or type2 in (bytes_type, unicode_type)) or
                type2 == str_type and (type1.is_string or type1 in (bytes_type, unicode_type))):
            error(self.pos, "Comparisons between bytes/unicode and str are not portable to Python 3")
            new_common_type = error_type

        # try to use numeric comparisons where possible
        elif type1.is_complex or type2.is_complex:
            if (op not in ('==', '!=')
                    and (type1.is_complex or type1.is_numeric)
                    and (type2.is_complex or type2.is_numeric)):
                error(self.pos, "complex types are unordered")
                new_common_type = error_type
            elif type1.is_pyobject:
                new_common_type = Builtin.complex_type if type1.subtype_of(Builtin.complex_type) else py_object_type
            elif type2.is_pyobject:
                new_common_type = Builtin.complex_type if type2.subtype_of(Builtin.complex_type) else py_object_type
            else:
                new_common_type = PyrexTypes.widest_numeric_type(type1, type2)
        elif type1.is_numeric and type2.is_numeric:
            new_common_type = PyrexTypes.widest_numeric_type(type1, type2)
        elif common_type is None or not common_type.is_pyobject:
            new_common_type = self.find_common_int_type(env, op, operand1, operand2)

        if new_common_type is None:
            # fall back to generic type compatibility tests
            if type1.is_ctuple or type2.is_ctuple:
                new_common_type = py_object_type
            elif type1 == type2:
                new_common_type = type1
            elif type1.is_pyobject or type2.is_pyobject:
                if type2.is_numeric or type2.is_string:
                    if operand2.check_for_coercion_error(type1, env):
                        new_common_type = error_type
                    else:
                        new_common_type = py_object_type
                elif type1.is_numeric or type1.is_string:
                    if operand1.check_for_coercion_error(type2, env):
                        new_common_type = error_type
                    else:
                        new_common_type = py_object_type
                elif py_object_type.assignable_from(type1) and py_object_type.assignable_from(type2):
                    new_common_type = py_object_type
                else:
                    # one Python type and one non-Python type, not assignable
                    self.invalid_types_error(operand1, op, operand2)
                    new_common_type = error_type
            elif type1.assignable_from(type2):
                new_common_type = type1
            elif type2.assignable_from(type1):
                new_common_type = type2
            else:
                # C types that we couldn't handle up to here are an error
                self.invalid_types_error(operand1, op, operand2)
                new_common_type = error_type

        if new_common_type.is_string and (isinstance(operand1, BytesNode) or
                                          isinstance(operand2, BytesNode)):
            # special case when comparing char* to bytes literal: must
            # compare string values!
            new_common_type = bytes_type

        # recursively merge types
        if common_type is None or new_common_type.is_error:
            common_type = new_common_type
        else:
            # we could do a lot better by splitting the comparison
            # into a non-Python part and a Python part, but this is
            # safer for now
            common_type = PyrexTypes.spanning_type(common_type, new_common_type)

        if self.cascade:
            common_type = self.cascade.find_common_type(env, self.operator, operand2, common_type)

        return common_type

    def invalid_types_error(self, operand1, op, operand2):
        error(self.pos, "Invalid types for '%s' (%s, %s)" %
              (op, operand1.type, operand2.type))

    def is_python_comparison(self):
        return (not self.is_ptr_contains()
            and not self.is_c_string_contains()
            and (self.has_python_operands()
                 or (self.cascade and self.cascade.is_python_comparison())
                 or self.operator in ('in', 'not_in')))

    def coerce_operands_to(self, dst_type, env):
        operand2 = self.operand2
        if operand2.type != dst_type:
            self.operand2 = operand2.coerce_to(dst_type, env)
        if self.cascade:
            self.cascade.coerce_operands_to(dst_type, env)

    def is_python_result(self):
        return ((self.has_python_operands() and
                 self.special_bool_cmp_function is None and
                 self.operator not in ('is', 'is_not', 'in', 'not_in') and
                 not self.is_c_string_contains() and
                 not self.is_ptr_contains())
            or (self.cascade and self.cascade.is_python_result()))

    def is_c_string_contains(self):
        return self.operator in ('in', 'not_in') and \
               ((self.operand1.type.is_int
                 and (self.operand2.type.is_string or self.operand2.type is bytes_type)) or
                (self.operand1.type.is_unicode_char
                 and self.operand2.type is unicode_type))

    def is_ptr_contains(self):
        if self.operator in ('in', 'not_in'):
            container_type = self.operand2.type
            return (container_type.is_ptr or container_type.is_array) \
                and not container_type.is_string

    def find_special_bool_compare_function(self, env, operand1, result_is_bool=False):
        # note: currently operand1 must get coerced to a Python object if we succeed here!
        if self.operator in ('==', '!='):
            type1, type2 = operand1.type, self.operand2.type
            if result_is_bool or (type1.is_builtin_type and type2.is_builtin_type):
                if type1 is Builtin.unicode_type or type2 is Builtin.unicode_type:
                    self.special_bool_cmp_utility_code = UtilityCode.load_cached("UnicodeEquals", "StringTools.c")
                    self.special_bool_cmp_function = "__Pyx_PyUnicode_Equals"
                    return True
                elif type1 is Builtin.bytes_type or type2 is Builtin.bytes_type:
                    self.special_bool_cmp_utility_code = UtilityCode.load_cached("BytesEquals", "StringTools.c")
                    self.special_bool_cmp_function = "__Pyx_PyBytes_Equals"
                    return True
                elif type1 is Builtin.basestring_type or type2 is Builtin.basestring_type:
                    self.special_bool_cmp_utility_code = UtilityCode.load_cached("UnicodeEquals", "StringTools.c")
                    self.special_bool_cmp_function = "__Pyx_PyUnicode_Equals"
                    return True
                elif type1 is Builtin.str_type or type2 is Builtin.str_type:
                    self.special_bool_cmp_utility_code = UtilityCode.load_cached("StrEquals", "StringTools.c")
                    self.special_bool_cmp_function = "__Pyx_PyString_Equals"
                    return True
        elif self.operator in ('in', 'not_in'):
            if self.operand2.type is Builtin.dict_type:
                self.operand2 = self.operand2.as_none_safe_node("'NoneType' object is not iterable")
                self.special_bool_cmp_utility_code = UtilityCode.load_cached("PyDictContains", "ObjectHandling.c")
                self.special_bool_cmp_function = "__Pyx_PyDict_ContainsTF"
                return True
            elif self.operand2.type is Builtin.set_type:
                self.operand2 = self.operand2.as_none_safe_node("'NoneType' object is not iterable")
                self.special_bool_cmp_utility_code = UtilityCode.load_cached("PySetContains", "ObjectHandling.c")
                self.special_bool_cmp_function = "__Pyx_PySet_ContainsTF"
                return True
            elif self.operand2.type is Builtin.unicode_type:
                self.operand2 = self.operand2.as_none_safe_node("'NoneType' object is not iterable")
                self.special_bool_cmp_utility_code = UtilityCode.load_cached("PyUnicodeContains", "StringTools.c")
                self.special_bool_cmp_function = "__Pyx_PyUnicode_ContainsTF"
                return True
            else:
                if not self.operand2.type.is_pyobject:
                    self.operand2 = self.operand2.coerce_to_pyobject(env)
                self.special_bool_cmp_utility_code = UtilityCode.load_cached("PySequenceContains", "ObjectHandling.c")
                self.special_bool_cmp_function = "__Pyx_PySequence_ContainsTF"
                return True
        return False

    def generate_operation_code(self, code, result_code,
            operand1, op, operand2):
        if self.type.is_pyobject:
            error_clause = code.error_goto_if_null
            got_ref = "__Pyx_XGOTREF(%s); " % result_code
            if self.special_bool_cmp_function:
                code.globalstate.use_utility_code(
                    UtilityCode.load_cached("PyBoolOrNullFromLong", "ObjectHandling.c"))
                coerce_result = "__Pyx_PyBoolOrNull_FromLong"
            else:
                coerce_result = "__Pyx_PyBool_FromLong"
        else:
            error_clause = code.error_goto_if_neg
            got_ref = ""
            coerce_result = ""

        if self.special_bool_cmp_function:
            if operand1.type.is_pyobject:
                result1 = operand1.py_result()
            else:
                result1 = operand1.result()
            if operand2.type.is_pyobject:
                result2 = operand2.py_result()
            else:
                result2 = operand2.result()
            if self.special_bool_cmp_utility_code:
                code.globalstate.use_utility_code(self.special_bool_cmp_utility_code)
            code.putln(
                "%s = %s(%s(%s, %s, %s)); %s%s" % (
                    result_code,
                    coerce_result,
                    self.special_bool_cmp_function,
                    result1, result2, richcmp_constants[op],
                    got_ref,
                    error_clause(result_code, self.pos)))

        elif operand1.type.is_pyobject and op not in ('is', 'is_not'):
            assert op not in ('in', 'not_in'), op
            code.putln("%s = PyObject_RichCompare(%s, %s, %s); %s%s" % (
                    result_code,
                    operand1.py_result(),
                    operand2.py_result(),
                    richcmp_constants[op],
                    got_ref,
                    error_clause(result_code, self.pos)))

        elif operand1.type.is_complex:
            code.putln("%s = %s(%s%s(%s, %s));" % (
                result_code,
                coerce_result,
                op == "!=" and "!" or "",
                operand1.type.unary_op('eq'),
                operand1.result(),
                operand2.result()))

        else:
            type1 = operand1.type
            type2 = operand2.type
            if (type1.is_extension_type or type2.is_extension_type) \
                    and not type1.same_as(type2):
                common_type = py_object_type
            elif type1.is_numeric:
                common_type = PyrexTypes.widest_numeric_type(type1, type2)
            else:
                common_type = type1
            code1 = operand1.result_as(common_type)
            code2 = operand2.result_as(common_type)
            statement = "%s = %s(%s %s %s);" % (
                result_code,
                coerce_result,
                code1,
                self.c_operator(op),
                code2)
            if self.is_cpp_comparison() and self.exception_check == '+':
                translate_cpp_exception(
                    code,
                    self.pos,
                    statement,
                    result_code if self.type.is_pyobject else None,
                    self.exception_value,
                    self.in_nogil_context)
            else:
                code.putln(statement)

    def c_operator(self, op):
        if op == 'is':
            return "=="
        elif op == 'is_not':
            return "!="
        else:
            return op


class PrimaryCmpNode(ExprNode, CmpNode):
    #  Non-cascaded comparison or first comparison of
    #  a cascaded sequence.
    #
    #  operator      string
    #  operand1      ExprNode
    #  operand2      ExprNode
    #  cascade       CascadedCmpNode

    #  We don't use the subexprs mechanism, because
    #  things here are too complicated for it to handle.
    #  Instead, we override all the framework methods
    #  which use it.

    child_attrs = ['operand1', 'operand2', 'coerced_operand2', 'cascade']

    cascade = None
    coerced_operand2 = None
    is_memslice_nonecheck = False

    def infer_type(self, env):
        type1 = self.operand1.infer_type(env)
        type2 = self.operand2.infer_type(env)

        if is_pythran_expr(type1) or is_pythran_expr(type2):
            if is_pythran_supported_type(type1) and is_pythran_supported_type(type2):
                return PythranExpr(pythran_binop_type(self.operator, type1, type2))

        # TODO: implement this for other types.
        return py_object_type

    def type_dependencies(self, env):
        return ()

    def calculate_constant_result(self):
        assert not self.cascade
        self.calculate_cascaded_constant_result(self.operand1.constant_result)

    def compile_time_value(self, denv):
        operand1 = self.operand1.compile_time_value(denv)
        return self.cascaded_compile_time_value(operand1, denv)

    def analyse_types(self, env):
        self.operand1 = self.operand1.analyse_types(env)
        self.operand2 = self.operand2.analyse_types(env)
        if self.is_cpp_comparison():
            self.analyse_cpp_comparison(env)
            if self.cascade:
                error(self.pos, "Cascading comparison not yet supported for cpp types.")
            return self

        type1 = self.operand1.type
        type2 = self.operand2.type
        if is_pythran_expr(type1) or is_pythran_expr(type2):
            if is_pythran_supported_type(type1) and is_pythran_supported_type(type2):
                self.type = PythranExpr(pythran_binop_type(self.operator, type1, type2))
                self.is_pycmp = False
                return self

        if self.analyse_memoryviewslice_comparison(env):
            return self

        if self.cascade:
            self.cascade = self.cascade.analyse_types(env)

        if self.operator in ('in', 'not_in'):
            if self.is_c_string_contains():
                self.is_pycmp = False
                common_type = None
                if self.cascade:
                    error(self.pos, "Cascading comparison not yet supported for 'int_val in string'.")
                    return self
                if self.operand2.type is unicode_type:
                    env.use_utility_code(UtilityCode.load_cached("PyUCS4InUnicode", "StringTools.c"))
                else:
                    if self.operand1.type is PyrexTypes.c_uchar_type:
                        self.operand1 = self.operand1.coerce_to(PyrexTypes.c_char_type, env)
                    if self.operand2.type is not bytes_type:
                        self.operand2 = self.operand2.coerce_to(bytes_type, env)
                    env.use_utility_code(UtilityCode.load_cached("BytesContains", "StringTools.c"))
                self.operand2 = self.operand2.as_none_safe_node(
                    "argument of type 'NoneType' is not iterable")
            elif self.is_ptr_contains():
                if self.cascade:
                    error(self.pos, "Cascading comparison not supported for 'val in sliced pointer'.")
                self.type = PyrexTypes.c_bint_type
                # Will be transformed by IterationTransform
                return self
            elif self.find_special_bool_compare_function(env, self.operand1):
                if not self.operand1.type.is_pyobject:
                    self.operand1 = self.operand1.coerce_to_pyobject(env)
                common_type = None  # if coercion needed, the method call above has already done it
                self.is_pycmp = False  # result is bint
            else:
                common_type = py_object_type
                self.is_pycmp = True
        elif self.find_special_bool_compare_function(env, self.operand1):
            if not self.operand1.type.is_pyobject:
                self.operand1 = self.operand1.coerce_to_pyobject(env)
            common_type = None  # if coercion needed, the method call above has already done it
            self.is_pycmp = False  # result is bint
        else:
            common_type = self.find_common_type(env, self.operator, self.operand1)
            self.is_pycmp = common_type.is_pyobject

        if common_type is not None and not common_type.is_error:
            if self.operand1.type != common_type:
                self.operand1 = self.operand1.coerce_to(common_type, env)
            self.coerce_operands_to(common_type, env)

        if self.cascade:
            self.operand2 = self.operand2.coerce_to_simple(env)
            self.cascade.coerce_cascaded_operands_to_temp(env)
            operand2 = self.cascade.optimise_comparison(self.operand2, env)
            if operand2 is not self.operand2:
                self.coerced_operand2 = operand2
        if self.is_python_result():
            self.type = PyrexTypes.py_object_type
        else:
            self.type = PyrexTypes.c_bint_type
        cdr = self.cascade
        while cdr:
            cdr.type = self.type
            cdr = cdr.cascade
        if self.is_pycmp or self.cascade or self.special_bool_cmp_function:
            # 1) owned reference, 2) reused value, 3) potential function error return value
            self.is_temp = 1
        return self
    def analyse_cpp_comparison(self, env):
        type1 = self.operand1.type
        type2 = self.operand2.type
        self.is_pycmp = False
        entry = env.lookup_operator(self.operator, [self.operand1, self.operand2])
        if entry is None:
            error(self.pos, "Invalid types for '%s' (%s, %s)" %
                (self.operator, type1, type2))
            self.type = PyrexTypes.error_type
            self.result_code = "<error>"
            return
        func_type = entry.type
        if func_type.is_ptr:
            func_type = func_type.base_type
        self.exception_check = func_type.exception_check
        self.exception_value = func_type.exception_value
        if self.exception_check == '+':
            self.is_temp = True
            if self.exception_value is None:
                env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp"))
        if len(func_type.args) == 1:
            self.operand2 = self.operand2.coerce_to(func_type.args[0].type, env)
        else:
            self.operand1 = self.operand1.coerce_to(func_type.args[0].type, env)
            self.operand2 = self.operand2.coerce_to(func_type.args[1].type, env)
        self.type = func_type.return_type

    def analyse_memoryviewslice_comparison(self, env):
        have_none = self.operand1.is_none or self.operand2.is_none
        have_slice = (self.operand1.type.is_memoryviewslice or
                      self.operand2.type.is_memoryviewslice)
        ops = ('==', '!=', 'is', 'is_not')
        if have_slice and have_none and self.operator in ops:
            self.is_pycmp = False
            self.type = PyrexTypes.c_bint_type
            self.is_memslice_nonecheck = True
            return True

        return False

    def coerce_to_boolean(self, env):
        if self.is_pycmp:
            # coercing to bool => may allow for more efficient comparison code
            if self.find_special_bool_compare_function(
                    env, self.operand1, result_is_bool=True):
                self.is_pycmp = False
                self.type = PyrexTypes.c_bint_type
                self.is_temp = 1
                if self.cascade:
                    operand2 = self.cascade.optimise_comparison(
                        self.operand2, env, result_is_bool=True)
                    if operand2 is not self.operand2:
                        self.coerced_operand2 = operand2
                return self
        # TODO: check if we can optimise parts of the cascade here
        return ExprNode.coerce_to_boolean(self, env)

    def has_python_operands(self):
        return (self.operand1.type.is_pyobject
            or self.operand2.type.is_pyobject)

    def check_const(self):
        if self.cascade:
            self.not_const()
            return False
        else:
            return self.operand1.check_const() and self.operand2.check_const()

    def calculate_result_code(self):
        operand1, operand2 = self.operand1, self.operand2
        if operand1.type.is_complex:
            if self.operator == "!=":
                negation = "!"
            else:
                negation = ""
            return "(%s%s(%s, %s))" % (
                negation,
                operand1.type.binary_op('=='),
                operand1.result(),
                operand2.result())
        elif self.is_c_string_contains():
            if operand2.type is unicode_type:
                method = "__Pyx_UnicodeContainsUCS4"
            else:
                method = "__Pyx_BytesContains"
            if self.operator == "not_in":
                negation = "!"
            else:
                negation = ""
            return "(%s%s(%s, %s))" % (
                negation,
                method,
                operand2.result(),
                operand1.result())
        else:
            if is_pythran_expr(self.type):
                result1, result2 = operand1.pythran_result(), operand2.pythran_result()
            else:
                result1, result2 = operand1.result(), operand2.result()
            if self.is_memslice_nonecheck:
                if operand1.type.is_memoryviewslice:
                    result1 = "((PyObject *) %s.memview)" % result1
                else:
                    result2 = "((PyObject *) %s.memview)" % result2
            return "(%s %s %s)" % (
                result1,
                self.c_operator(self.operator),
                result2)

    def generate_evaluation_code(self, code):
        self.operand1.generate_evaluation_code(code)
        self.operand2.generate_evaluation_code(code)
        if self.is_temp:
            self.allocate_temp_result(code)
        self.generate_operation_code(code, self.result(),
            self.operand1, self.operator, self.operand2)
        if self.cascade:
            self.cascade.generate_evaluation_code(
                code, self.result(), self.coerced_operand2 or self.operand2,
                needs_evaluation=self.coerced_operand2 is not None)
        self.operand1.generate_disposal_code(code)
        self.operand1.free_temps(code)
        self.operand2.generate_disposal_code(code)
        self.operand2.free_temps(code)

    def generate_subexpr_disposal_code(self, code):
        #  If this is called, it is a non-cascaded cmp,
        #  so only need to dispose of the two main operands.
        self.operand1.generate_disposal_code(code)
        self.operand2.generate_disposal_code(code)

    def free_subexpr_temps(self, code):
        #  If this is called, it is a non-cascaded cmp,
        #  so only need to dispose of the two main operands.
        self.operand1.free_temps(code)
        self.operand2.free_temps(code)

    def annotate(self, code):
        self.operand1.annotate(code)
        self.operand2.annotate(code)
        if self.cascade:
            self.cascade.annotate(code)


class CascadedCmpNode(Node, CmpNode):
    #  A CascadedCmpNode is not a complete expression node. It
    #  hangs off the side of another comparison node, shares
    #  its left operand with that node, and shares its result
    #  with the PrimaryCmpNode at the head of the chain.
    #
    #  operator      string
    #  operand2      ExprNode
    #  cascade       CascadedCmpNode

    child_attrs = ['operand2', 'coerced_operand2', 'cascade']

    cascade = None
    coerced_operand2 = None
    constant_result = constant_value_not_set  # FIXME: where to calculate this?

    def infer_type(self, env):
        # TODO: Actually implement this (after merging with -unstable).
        return py_object_type

    def type_dependencies(self, env):
        return ()

    def has_constant_result(self):
        return self.constant_result is not constant_value_not_set and \
               self.constant_result is not not_a_constant

    def analyse_types(self, env):
        self.operand2 = self.operand2.analyse_types(env)
        if self.cascade:
            self.cascade = self.cascade.analyse_types(env)
        return self

    def has_python_operands(self):
        return self.operand2.type.is_pyobject

    def is_cpp_comparison(self):
        # cascaded comparisons aren't currently implemented for c++ classes.
return False def optimise_comparison(self, operand1, env, result_is_bool=False): if self.find_special_bool_compare_function(env, operand1, result_is_bool): self.is_pycmp = False self.type = PyrexTypes.c_bint_type if not operand1.type.is_pyobject: operand1 = operand1.coerce_to_pyobject(env) if self.cascade: operand2 = self.cascade.optimise_comparison(self.operand2, env, result_is_bool) if operand2 is not self.operand2: self.coerced_operand2 = operand2 return operand1 def coerce_operands_to_pyobjects(self, env): self.operand2 = self.operand2.coerce_to_pyobject(env) if self.operand2.type is dict_type and self.operator in ('in', 'not_in'): self.operand2 = self.operand2.as_none_safe_node("'NoneType' object is not iterable") if self.cascade: self.cascade.coerce_operands_to_pyobjects(env) def coerce_cascaded_operands_to_temp(self, env): if self.cascade: #self.operand2 = self.operand2.coerce_to_temp(env) #CTT self.operand2 = self.operand2.coerce_to_simple(env) self.cascade.coerce_cascaded_operands_to_temp(env) def generate_evaluation_code(self, code, result, operand1, needs_evaluation=False): if self.type.is_pyobject: code.putln("if (__Pyx_PyObject_IsTrue(%s)) {" % result) code.put_decref(result, self.type) else: code.putln("if (%s) {" % result) if needs_evaluation: operand1.generate_evaluation_code(code) self.operand2.generate_evaluation_code(code) self.generate_operation_code(code, result, operand1, self.operator, self.operand2) if self.cascade: self.cascade.generate_evaluation_code( code, result, self.coerced_operand2 or self.operand2, needs_evaluation=self.coerced_operand2 is not None) if needs_evaluation: operand1.generate_disposal_code(code) operand1.free_temps(code) # Cascaded cmp result is always temp self.operand2.generate_disposal_code(code) self.operand2.free_temps(code) code.putln("}") def annotate(self, code): self.operand2.annotate(code) if self.cascade: self.cascade.annotate(code) binop_node_classes = { "or": BoolBinopNode, "and": BoolBinopNode, "|": IntBinopNode, "^": IntBinopNode, "&": IntBinopNode, "<<": IntBinopNode, ">>": IntBinopNode, "+": AddNode, "-": SubNode, "*": MulNode, "@": MatMultNode, "/": DivNode, "//": DivNode, "%": ModNode, "**": PowNode, } def binop_node(pos, operator, operand1, operand2, inplace=False, **kwargs): # Construct binop node of appropriate class for # given operator. return binop_node_classes[operator]( pos, operator=operator, operand1=operand1, operand2=operand2, inplace=inplace, **kwargs) #------------------------------------------------------------------- # # Coercion nodes # # Coercion nodes are special in that they are created during # the analyse_types phase of parse tree processing. # Their __init__ methods consequently incorporate some aspects # of that phase. # #------------------------------------------------------------------- class CoercionNode(ExprNode): # Abstract base class for coercion nodes. 
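    # Illustrative sketch (not from the original source): a coercion node
    # wraps its argument during the analyse_types phase, e.g. assigning a
    # Python object to a "cdef double x" conceptually rewrites the
    # right-hand side as
    #
    #     rhs = CoerceFromPyTypeNode(PyrexTypes.c_double_type, rhs, env)
    #
    # after which the node's generate_result_code() emits the C conversion
    # call together with its error check.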
    #  arg       ExprNode       node being coerced

    subexprs = ['arg']
    constant_result = not_a_constant

    def __init__(self, arg):
        super(CoercionNode, self).__init__(arg.pos)
        self.arg = arg
        if debug_coercion:
            print("%s Coercing %s" % (self, self.arg))

    def calculate_constant_result(self):
        # constant folding can break type coercion, so this is disabled
        pass

    def annotate(self, code):
        self.arg.annotate(code)
        if self.arg.type != self.type:
            file, line, col = self.pos
            code.annotate((file, line, col-1), AnnotationItem(
                style='coerce', tag='coerce',
                text='[%s] to [%s]' % (self.arg.type, self.type)))


class CoerceToMemViewSliceNode(CoercionNode):
    """
    Coerce an object to a memoryview slice. This holds a new reference in
    a managed temp.
    """

    def __init__(self, arg, dst_type, env):
        assert dst_type.is_memoryviewslice
        assert not arg.type.is_memoryviewslice
        CoercionNode.__init__(self, arg)
        self.type = dst_type
        self.is_temp = 1
        self.env = env
        self.use_managed_ref = True
        self.arg = arg

    def generate_result_code(self, code):
        self.type.create_from_py_utility_code(self.env)
        code.putln(self.type.from_py_call_code(
            self.arg.py_result(),
            self.result(),
            self.pos,
            code
        ))


class CastNode(CoercionNode):
    #  Wrap a node in a C type cast.

    def __init__(self, arg, new_type):
        CoercionNode.__init__(self, arg)
        self.type = new_type

    def may_be_none(self):
        return self.arg.may_be_none()

    def calculate_result_code(self):
        return self.arg.result_as(self.type)

    def generate_result_code(self, code):
        self.arg.generate_result_code(code)


class PyTypeTestNode(CoercionNode):
    #  This node is used to check that a generic Python
    #  object is an instance of a particular extension type.
    #  This node borrows the result of its argument node.

    exact_builtin_type = True

    def __init__(self, arg, dst_type, env, notnone=False):
        #  The arg is known to be a Python object, and
        #  the dst_type is known to be an extension type.
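        # Hedged example (not from the original source): this is the node
        # behind checked casts like "<MyExtType?>obj", where MyExtType is a
        # hypothetical extension type; generate_result_code() below emits a
        # guard along the lines of
        #     if (!(__Pyx_TypeTest(obj, __pyx_ptype_MyExtType))) goto error;
        # while merely borrowing the argument's result.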
assert dst_type.is_extension_type or dst_type.is_builtin_type, "PyTypeTest on non extension type" CoercionNode.__init__(self, arg) self.type = dst_type self.result_ctype = arg.ctype() self.notnone = notnone nogil_check = Node.gil_error gil_message = "Python type test" def analyse_types(self, env): return self def may_be_none(self): if self.notnone: return False return self.arg.may_be_none() def is_simple(self): return self.arg.is_simple() def result_in_temp(self): return self.arg.result_in_temp() def is_ephemeral(self): return self.arg.is_ephemeral() def nonlocally_immutable(self): return self.arg.nonlocally_immutable() def reanalyse(self): if self.type != self.arg.type or not self.arg.is_temp: return self if not self.type.typeobj_is_available(): return self if self.arg.may_be_none() and self.notnone: return self.arg.as_none_safe_node("Cannot convert NoneType to %.200s" % self.type.name) return self.arg def calculate_constant_result(self): # FIXME pass def calculate_result_code(self): return self.arg.result() def generate_result_code(self, code): if self.type.typeobj_is_available(): if self.type.is_builtin_type: type_test = self.type.type_test_code( self.arg.py_result(), self.notnone, exact=self.exact_builtin_type) else: type_test = self.type.type_test_code( self.arg.py_result(), self.notnone) code.globalstate.use_utility_code( UtilityCode.load_cached("ExtTypeTest", "ObjectHandling.c")) code.putln("if (!(%s)) %s" % ( type_test, code.error_goto(self.pos))) else: error(self.pos, "Cannot test type of extern C class " "without type object name specification") def generate_post_assignment_code(self, code): self.arg.generate_post_assignment_code(code) def free_temps(self, code): self.arg.free_temps(code) class NoneCheckNode(CoercionNode): # This node is used to check that a Python object is not None and # raises an appropriate exception (as specified by the creating # transform). 
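    # Hedged example (not from the original source): callers typically go
    # through the generate_if_needed() classmethod below, e.g.
    #     NoneCheckNode.generate_if_needed(arg, code,
    #         "'NoneType' object has no attribute 'foo'")
    # where 'foo' stands in for an arbitrary attribute name; the check is
    # skipped entirely when arg.may_be_none() is False.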
is_nonecheck = True def __init__(self, arg, exception_type_cname, exception_message, exception_format_args=()): CoercionNode.__init__(self, arg) self.type = arg.type self.result_ctype = arg.ctype() self.exception_type_cname = exception_type_cname self.exception_message = exception_message self.exception_format_args = tuple(exception_format_args or ()) nogil_check = None # this node only guards an operation that would fail already def analyse_types(self, env): return self def may_be_none(self): return False def is_simple(self): return self.arg.is_simple() def result_in_temp(self): return self.arg.result_in_temp() def nonlocally_immutable(self): return self.arg.nonlocally_immutable() def calculate_result_code(self): return self.arg.result() def condition(self): if self.type.is_pyobject: return self.arg.py_result() elif self.type.is_memoryviewslice: return "((PyObject *) %s.memview)" % self.arg.result() else: raise Exception("unsupported type") @classmethod def generate(cls, arg, code, exception_message, exception_type_cname="PyExc_TypeError", exception_format_args=(), in_nogil_context=False): node = cls(arg, exception_type_cname, exception_message, exception_format_args) node.in_nogil_context = in_nogil_context node.put_nonecheck(code) @classmethod def generate_if_needed(cls, arg, code, exception_message, exception_type_cname="PyExc_TypeError", exception_format_args=(), in_nogil_context=False): if arg.may_be_none(): cls.generate(arg, code, exception_message, exception_type_cname, exception_format_args, in_nogil_context) def put_nonecheck(self, code): code.putln( "if (unlikely(%s == Py_None)) {" % self.condition()) if self.in_nogil_context: code.put_ensure_gil() escape = StringEncoding.escape_byte_string if self.exception_format_args: code.putln('PyErr_Format(%s, "%s", %s);' % ( self.exception_type_cname, StringEncoding.escape_byte_string( self.exception_message.encode('UTF-8')), ', '.join([ '"%s"' % escape(str(arg).encode('UTF-8')) for arg in self.exception_format_args ]))) else: code.putln('PyErr_SetString(%s, "%s");' % ( self.exception_type_cname, escape(self.exception_message.encode('UTF-8')))) if self.in_nogil_context: code.put_release_ensured_gil() code.putln(code.error_goto(self.pos)) code.putln("}") def generate_result_code(self, code): self.put_nonecheck(code) def generate_post_assignment_code(self, code): self.arg.generate_post_assignment_code(code) def free_temps(self, code): self.arg.free_temps(code) class CoerceToPyTypeNode(CoercionNode): # This node is used to convert a C data type # to a Python object. 
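    # Illustrative sketch (not from the original source): returning a C
    # integer from a "def" function wraps the value roughly as
    #     CoerceToPyTypeNode(int_arg_node, env)
    # and generate_result_code() then emits a __Pyx_PyInt_From_int()-style
    # to-Python call followed by the NULL/error check.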
type = py_object_type target_type = py_object_type is_temp = 1 def __init__(self, arg, env, type=py_object_type): if not arg.type.create_to_py_utility_code(env): error(arg.pos, "Cannot convert '%s' to Python object" % arg.type) elif arg.type.is_complex: # special case: complex coercion is so complex that it # uses a macro ("__pyx_PyComplex_FromComplex()"), for # which the argument must be simple arg = arg.coerce_to_simple(env) CoercionNode.__init__(self, arg) if type is py_object_type: # be specific about some known types if arg.type.is_string or arg.type.is_cpp_string: self.type = default_str_type(env) elif arg.type.is_pyunicode_ptr or arg.type.is_unicode_char: self.type = unicode_type elif arg.type.is_complex: self.type = Builtin.complex_type self.target_type = self.type elif arg.type.is_string or arg.type.is_cpp_string: if (type not in (bytes_type, bytearray_type) and not env.directives['c_string_encoding']): error(arg.pos, "default encoding required for conversion from '%s' to '%s'" % (arg.type, type)) self.type = self.target_type = type else: # FIXME: check that the target type and the resulting type are compatible self.target_type = type gil_message = "Converting to Python object" def may_be_none(self): # FIXME: is this always safe? return False def coerce_to_boolean(self, env): arg_type = self.arg.type if (arg_type == PyrexTypes.c_bint_type or (arg_type.is_pyobject and arg_type.name == 'bool')): return self.arg.coerce_to_temp(env) else: return CoerceToBooleanNode(self, env) def coerce_to_integer(self, env): # If not already some C integer type, coerce to longint. if self.arg.type.is_int: return self.arg else: return self.arg.coerce_to(PyrexTypes.c_long_type, env) def analyse_types(self, env): # The arg is always already analysed return self def generate_result_code(self, code): code.putln('%s; %s' % ( self.arg.type.to_py_call_code( self.arg.result(), self.result(), self.target_type), code.error_goto_if_null(self.result(), self.pos))) code.put_gotref(self.py_result()) class CoerceIntToBytesNode(CoerceToPyTypeNode): # This node is used to convert a C int type to a Python bytes # object. is_temp = 1 def __init__(self, arg, env): arg = arg.coerce_to_simple(env) CoercionNode.__init__(self, arg) self.type = Builtin.bytes_type def generate_result_code(self, code): arg = self.arg arg_result = arg.result() if arg.type not in (PyrexTypes.c_char_type, PyrexTypes.c_uchar_type, PyrexTypes.c_schar_type): if arg.type.signed: code.putln("if ((%s < 0) || (%s > 255)) {" % ( arg_result, arg_result)) else: code.putln("if (%s > 255) {" % arg_result) code.putln('PyErr_SetString(PyExc_OverflowError, ' '"value too large to pack into a byte"); %s' % ( code.error_goto(self.pos))) code.putln('}') temp = None if arg.type is not PyrexTypes.c_char_type: temp = code.funcstate.allocate_temp(PyrexTypes.c_char_type, manage_ref=False) code.putln("%s = (char)%s;" % (temp, arg_result)) arg_result = temp code.putln('%s = PyBytes_FromStringAndSize(&%s, 1); %s' % ( self.result(), arg_result, code.error_goto_if_null(self.result(), self.pos))) if temp is not None: code.funcstate.release_temp(temp) code.put_gotref(self.py_result()) class CoerceFromPyTypeNode(CoercionNode): # This node is used to convert a Python object # to a C data type. 
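    # Note (not from the original source): generate_result_code() below
    # special-cases bytes -> C string conversion by swapping the generic
    # "__Pyx_PyObject_As..." converter for its "__Pyx_PyBytes..."
    # counterpart once the argument is known to be bytes, guarded by a
    # NoneCheckNode for possibly-None arguments.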
def __init__(self, result_type, arg, env): CoercionNode.__init__(self, arg) self.type = result_type self.is_temp = 1 if not result_type.create_from_py_utility_code(env): error(arg.pos, "Cannot convert Python object to '%s'" % result_type) if self.type.is_string or self.type.is_pyunicode_ptr: if self.arg.is_name and self.arg.entry and self.arg.entry.is_pyglobal: warning(arg.pos, "Obtaining '%s' from externally modifiable global Python value" % result_type, level=1) def analyse_types(self, env): # The arg is always already analysed return self def is_ephemeral(self): return (self.type.is_ptr and not self.type.is_array) and self.arg.is_ephemeral() def generate_result_code(self, code): from_py_function = None # for certain source types, we can do better than the generic coercion if self.type.is_string and self.arg.type is bytes_type: if self.type.from_py_function.startswith('__Pyx_PyObject_As'): from_py_function = '__Pyx_PyBytes' + self.type.from_py_function[len('__Pyx_PyObject'):] NoneCheckNode.generate_if_needed(self.arg, code, "expected bytes, NoneType found") code.putln(self.type.from_py_call_code( self.arg.py_result(), self.result(), self.pos, code, from_py_function=from_py_function)) if self.type.is_pyobject: code.put_gotref(self.py_result()) def nogil_check(self, env): error(self.pos, "Coercion from Python not allowed without the GIL") class CoerceToBooleanNode(CoercionNode): # This node is used when a result needs to be used # in a boolean context. type = PyrexTypes.c_bint_type _special_builtins = { Builtin.list_type: 'PyList_GET_SIZE', Builtin.tuple_type: 'PyTuple_GET_SIZE', Builtin.set_type: 'PySet_GET_SIZE', Builtin.frozenset_type: 'PySet_GET_SIZE', Builtin.bytes_type: 'PyBytes_GET_SIZE', Builtin.bytearray_type: 'PyByteArray_GET_SIZE', Builtin.unicode_type: '__Pyx_PyUnicode_IS_TRUE', } def __init__(self, arg, env): CoercionNode.__init__(self, arg) if arg.type.is_pyobject: self.is_temp = 1 def nogil_check(self, env): if self.arg.type.is_pyobject and self._special_builtins.get(self.arg.type) is None: self.gil_error() gil_message = "Truth-testing Python object" def check_const(self): if self.is_temp: self.not_const() return False return self.arg.check_const() def calculate_result_code(self): return "(%s != 0)" % self.arg.result() def generate_result_code(self, code): if not self.is_temp: return test_func = self._special_builtins.get(self.arg.type) if test_func is not None: checks = ["(%s != Py_None)" % self.arg.py_result()] if self.arg.may_be_none() else [] checks.append("(%s(%s) != 0)" % (test_func, self.arg.py_result())) code.putln("%s = %s;" % (self.result(), '&&'.join(checks))) else: code.putln( "%s = __Pyx_PyObject_IsTrue(%s); %s" % ( self.result(), self.arg.py_result(), code.error_goto_if_neg(self.result(), self.pos))) class CoerceToComplexNode(CoercionNode): def __init__(self, arg, dst_type, env): if arg.type.is_complex: arg = arg.coerce_to_simple(env) self.type = dst_type CoercionNode.__init__(self, arg) dst_type.create_declaration_utility_code(env) def calculate_result_code(self): if self.arg.type.is_complex: real_part = "__Pyx_CREAL(%s)" % self.arg.result() imag_part = "__Pyx_CIMAG(%s)" % self.arg.result() else: real_part = self.arg.result() imag_part = "0" return "%s(%s, %s)" % ( self.type.from_parts, real_part, imag_part) def generate_result_code(self, code): pass class CoerceToTempNode(CoercionNode): # This node is used to force the result of another node # to be stored in a temporary. It is only used if the # argument node's result is not already in a temporary. 
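    # Note (not from the original source): conceptually the generated code
    # is just an assignment into a fresh temp plus reference bookkeeping,
    #     __pyx_t_1 = x;  Py_INCREF(__pyx_t_1);
    # (with __pyx_t_1 as a hypothetical temp name), so that the temporary
    # owns its own reference; see generate_result_code() below.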
def __init__(self, arg, env): CoercionNode.__init__(self, arg) self.type = self.arg.type.as_argument_type() self.constant_result = self.arg.constant_result self.is_temp = 1 if self.type.is_pyobject: self.result_ctype = py_object_type gil_message = "Creating temporary Python reference" def analyse_types(self, env): # The arg is always already analysed return self def coerce_to_boolean(self, env): self.arg = self.arg.coerce_to_boolean(env) if self.arg.is_simple(): return self.arg self.type = self.arg.type self.result_ctype = self.type return self def generate_result_code(self, code): #self.arg.generate_evaluation_code(code) # Already done # by generic generate_subexpr_evaluation_code! code.putln("%s = %s;" % ( self.result(), self.arg.result_as(self.ctype()))) if self.use_managed_ref: if self.type.is_pyobject: code.put_incref(self.result(), self.ctype()) elif self.type.is_memoryviewslice: code.put_incref_memoryviewslice(self.result(), not self.in_nogil_context) class ProxyNode(CoercionNode): """ A node that should not be replaced by transforms or other means, and hence can be useful to wrap the argument to a clone node MyNode -> ProxyNode -> ArgNode CloneNode -^ """ nogil_check = None def __init__(self, arg): super(ProxyNode, self).__init__(arg) self.constant_result = arg.constant_result self._proxy_type() def analyse_types(self, env): self.arg = self.arg.analyse_expressions(env) self._proxy_type() return self def infer_type(self, env): return self.arg.infer_type(env) def _proxy_type(self): if hasattr(self.arg, 'type'): self.type = self.arg.type self.result_ctype = self.arg.result_ctype if hasattr(self.arg, 'entry'): self.entry = self.arg.entry def generate_result_code(self, code): self.arg.generate_result_code(code) def result(self): return self.arg.result() def is_simple(self): return self.arg.is_simple() def may_be_none(self): return self.arg.may_be_none() def generate_evaluation_code(self, code): self.arg.generate_evaluation_code(code) def generate_disposal_code(self, code): self.arg.generate_disposal_code(code) def free_temps(self, code): self.arg.free_temps(code) class CloneNode(CoercionNode): # This node is employed when the result of another node needs # to be used multiple times. The argument node's result must # be in a temporary. This node "borrows" the result from the # argument node, and does not generate any evaluation or # disposal code for it. The original owner of the argument # node is responsible for doing those things. 
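    # Note (not from the original source): ProxyNode and CloneNode are
    # typically paired when one evaluation must feed several consumers,
    # mirroring the diagram above:
    #     arg = ProxyNode(arg)        # stable wrapper, evaluated once
    #     use1 = CloneNode(arg)       # borrows arg's temp result
    #     use2 = CloneNode(arg)
    # Only the owner of the original argument generates evaluation and
    # disposal code; the clones simply reuse its result().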
    subexprs = []  # Arg is not considered a subexpr
    nogil_check = None

    def __init__(self, arg):
        CoercionNode.__init__(self, arg)
        self.constant_result = arg.constant_result
        if hasattr(arg, 'type'):
            self.type = arg.type
            self.result_ctype = arg.result_ctype
        if hasattr(arg, 'entry'):
            self.entry = arg.entry

    def result(self):
        return self.arg.result()

    def may_be_none(self):
        return self.arg.may_be_none()

    def type_dependencies(self, env):
        return self.arg.type_dependencies(env)

    def infer_type(self, env):
        return self.arg.infer_type(env)

    def analyse_types(self, env):
        self.type = self.arg.type
        self.result_ctype = self.arg.result_ctype
        self.is_temp = 1
        if hasattr(self.arg, 'entry'):
            self.entry = self.arg.entry
        return self

    def coerce_to(self, dest_type, env):
        if self.arg.is_literal:
            return self.arg.coerce_to(dest_type, env)
        return super(CloneNode, self).coerce_to(dest_type, env)

    def is_simple(self):
        return True  # result is always in a temp (or a name)

    def generate_evaluation_code(self, code):
        pass

    def generate_result_code(self, code):
        pass

    def generate_disposal_code(self, code):
        pass

    def free_temps(self, code):
        pass


class CMethodSelfCloneNode(CloneNode):
    # Special CloneNode for the self argument of builtin C methods
    # that accepts subtypes of the builtin type.  This is safe only
    # for 'final' subtypes, as subtypes of the declared type may
    # override the C method.

    def coerce_to(self, dst_type, env):
        if dst_type.is_builtin_type and self.type.subtype_of(dst_type):
            return self
        return CloneNode.coerce_to(self, dst_type, env)


class ModuleRefNode(ExprNode):
    # Simply returns the module object

    type = py_object_type
    is_temp = False
    subexprs = []

    def analyse_types(self, env):
        return self

    def may_be_none(self):
        return False

    def calculate_result_code(self):
        return Naming.module_cname

    def generate_result_code(self, code):
        pass


class DocstringRefNode(ExprNode):
    # Extracts the docstring of the body element

    subexprs = ['body']
    type = py_object_type
    is_temp = True

    def __init__(self, pos, body):
        ExprNode.__init__(self, pos)
        assert body.type.is_pyobject
        self.body = body

    def analyse_types(self, env):
        return self

    def generate_result_code(self, code):
        code.putln('%s = __Pyx_GetAttr(%s, %s); %s' % (
            self.result(), self.body.result(),
            code.intern_identifier(StringEncoding.EncodedString("__doc__")),
            code.error_goto_if_null(self.result(), self.pos)))
        code.put_gotref(self.result())


#------------------------------------------------------------------------------------
#
#  Runtime support code
#
#------------------------------------------------------------------------------------

pyerr_occurred_withgil_utility_code = UtilityCode(
proto = """
static CYTHON_INLINE int __Pyx_ErrOccurredWithGIL(void); /* proto */
""",
impl = """
static CYTHON_INLINE int __Pyx_ErrOccurredWithGIL(void) {
  int err;
  #ifdef WITH_THREAD
  PyGILState_STATE _save = PyGILState_Ensure();
  #endif
  err = !!PyErr_Occurred();
  #ifdef WITH_THREAD
  PyGILState_Release(_save);
  #endif
  return err;
}
"""
)

#------------------------------------------------------------------------------------

raise_unbound_local_error_utility_code = UtilityCode(
proto = """
static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname);
""",
impl = """
static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) {
    PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname);
}
""")

raise_closure_name_error_utility_code = UtilityCode(
proto = """
static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname);
""",
impl = """
static
CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname) {
    PyErr_Format(PyExc_NameError, "free variable '%s' referenced before assignment in enclosing scope", varname);
}
""")

# Don't inline the function, it should really never be called in production
raise_unbound_memoryview_utility_code_nogil = UtilityCode(
proto = """
static void __Pyx_RaiseUnboundMemoryviewSliceNogil(const char *varname);
""",
impl = """
static void __Pyx_RaiseUnboundMemoryviewSliceNogil(const char *varname) {
    #ifdef WITH_THREAD
    PyGILState_STATE gilstate = PyGILState_Ensure();
    #endif
    __Pyx_RaiseUnboundLocalError(varname);
    #ifdef WITH_THREAD
    PyGILState_Release(gilstate);
    #endif
}
""",
requires = [raise_unbound_local_error_utility_code])

#------------------------------------------------------------------------------------

raise_too_many_values_to_unpack = UtilityCode.load_cached("RaiseTooManyValuesToUnpack", "ObjectHandling.c")
raise_need_more_values_to_unpack = UtilityCode.load_cached("RaiseNeedMoreValuesToUnpack", "ObjectHandling.c")
tuple_unpacking_error_code = UtilityCode.load_cached("UnpackTupleError", "ObjectHandling.c")

# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/FlowControl.pxd =====

from __future__ import absolute_import

cimport cython

from .Visitor cimport CythonTransform, TreeVisitor

cdef class ControlBlock:
    cdef public set children
    cdef public set parents
    cdef public set positions
    cdef public list stats
    cdef public dict gen
    cdef public set bounded

    # Big integer bitsets
    cdef public object i_input
    cdef public object i_output
    cdef public object i_gen
    cdef public object i_kill
    cdef public object i_state

    cpdef bint empty(self)
    cpdef detach(self)
    cpdef add_child(self, block)

cdef class ExitBlock(ControlBlock):
    cpdef bint empty(self)

cdef class NameAssignment:
    cdef public bint is_arg
    cdef public bint is_deletion
    cdef public object lhs
    cdef public object rhs
    cdef public object entry
    cdef public object pos
    cdef public set refs
    cdef public object bit
    cdef public object inferred_type

cdef class AssignmentList:
    cdef public object bit
    cdef public object mask
    cdef public list stats

cdef class AssignmentCollector(TreeVisitor):
    cdef list assignments

@cython.final
cdef class ControlFlow:
    cdef public set blocks
    cdef public set entries
    cdef public list loops
    cdef public list exceptions

    cdef public ControlBlock entry_point
    cdef public ExitBlock exit_point
    cdef public ControlBlock block

    cdef public dict assmts

    cpdef newblock(self, ControlBlock parent=*)
    cpdef nextblock(self, ControlBlock parent=*)
    cpdef bint is_tracked(self, entry)
    cpdef bint is_statically_assigned(self, entry)
    cpdef mark_position(self, node)
    cpdef mark_assignment(self, lhs, rhs, entry)
    cpdef mark_argument(self, lhs, rhs, entry)
    cpdef mark_deletion(self, node, entry)
    cpdef mark_reference(self, node, entry)

    @cython.locals(block=ControlBlock, parent=ControlBlock, unreachable=set)
    cpdef normalize(self)

    @cython.locals(bit=object, assmts=AssignmentList, block=ControlBlock)
    cpdef initialize(self)

    @cython.locals(assmts=AssignmentList, assmt=NameAssignment)
    cpdef set map_one(self, istate, entry)

    @cython.locals(block=ControlBlock, parent=ControlBlock)
    cdef reaching_definitions(self)

cdef class Uninitialized:
    pass

cdef class Unknown:
    pass

cdef class MessageCollection:
    cdef set messages

@cython.locals(dirty=bint, block=ControlBlock, parent=ControlBlock,
               assmt=NameAssignment)
cdef check_definitions(ControlFlow flow, dict compiler_directives)

@cython.final
cdef class ControlFlowAnalysis(CythonTransform):
    cdef object gv_ctx
    cdef object constant_folder
    cdef set reductions
    cdef list env_stack
    cdef list stack
    cdef object env
    cdef ControlFlow flow
    cdef bint in_inplace_assignment

    cpdef mark_assignment(self, lhs, rhs=*)
    cpdef mark_position(self, node)

# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/FlowControl.py =====

from __future__ import absolute_import

import cython
cython.declare(PyrexTypes=object, ExprNodes=object, Nodes=object,
               Builtin=object, InternalError=object, error=object,
               warning=object, py_object_type=object, unspecified_type=object,
               object_expr=object, fake_rhs_expr=object, TypedExprNode=object)

from . import Builtin
from . import ExprNodes
from . import Nodes
from . import Options
from .PyrexTypes import py_object_type, unspecified_type
from . import PyrexTypes

from .Visitor import TreeVisitor, CythonTransform
from .Errors import error, warning, InternalError
from .Optimize import ConstantFolding


class TypedExprNode(ExprNodes.ExprNode):
    # Used for declaring assignments of a specified type without a known entry.
    def __init__(self, type, may_be_none=None, pos=None):
        super(TypedExprNode, self).__init__(pos)
        self.type = type
        self._may_be_none = may_be_none

    def may_be_none(self):
        return self._may_be_none != False

object_expr = TypedExprNode(py_object_type, may_be_none=True)
# Fake rhs to silence "unused variable" warning
fake_rhs_expr = TypedExprNode(unspecified_type)


class ControlBlock(object):
    """Control flow graph node. Sequence of assignments and name references.

       children  set of children nodes
       parents   set of parent nodes
       positions set of position markers

       stats     list of block statements
       gen       dict of assignments generated by this block
       bounded   set of entries that are definitely bounded in this block

       Example:

        a = 1
        b = a + c # 'c' is already bounded or exception here

        stats = [Assignment(a), NameReference(a), NameReference(c),
                     Assignment(b)]
        gen = {Entry(a): Assignment(a), Entry(b): Assignment(b)}
        bounded = set([Entry(a), Entry(c)])
    """

    def __init__(self):
        self.children = set()
        self.parents = set()
        self.positions = set()

        self.stats = []
        self.gen = {}
        self.bounded = set()

        self.i_input = 0
        self.i_output = 0
        self.i_gen = 0
        self.i_kill = 0
        self.i_state = 0

    def empty(self):
        return (not self.stats and not self.positions)

    def detach(self):
        """Detach block from parents and children."""
        for child in self.children:
            child.parents.remove(self)
        for parent in self.parents:
            parent.children.remove(self)
        self.parents.clear()
        self.children.clear()

    def add_child(self, block):
        self.children.add(block)
        block.parents.add(self)


class ExitBlock(ControlBlock):
    """Non-empty exit point block."""

    def empty(self):
        return False


class AssignmentList(object):
    def __init__(self):
        self.stats = []


class ControlFlow(object):
    """Control-flow graph.
       entry_point  ControlBlock  entry point for this graph
       exit_point   ControlBlock  normal exit point
       block        ControlBlock  current block
       blocks       set           children nodes
       entries      set           tracked entries
       loops        list          stack for loop descriptors
       exceptions   list          stack for exception descriptors
    """

    def __init__(self):
        self.blocks = set()
        self.entries = set()
        self.loops = []
        self.exceptions = []

        self.entry_point = ControlBlock()
        self.exit_point = ExitBlock()
        self.blocks.add(self.exit_point)
        self.block = self.entry_point

    def newblock(self, parent=None):
        """Create a floating block linked to `parent` if given.

           NOTE: The block is added to self.blocks, but it does not
           become the current block.
        """
        block = ControlBlock()
        self.blocks.add(block)
        if parent:
            parent.add_child(block)
        return block

    def nextblock(self, parent=None):
        """Create a child block linked to the current block, or to
           `parent` if given, and make it the new current block.

           NOTE: The block is added to self.blocks.
        """
        block = ControlBlock()
        self.blocks.add(block)
        if parent:
            parent.add_child(block)
        elif self.block:
            self.block.add_child(block)
        self.block = block
        return self.block

    def is_tracked(self, entry):
        if entry.is_anonymous:
            return False
        return (entry.is_local or entry.is_pyclass_attr or entry.is_arg or
                entry.from_closure or entry.in_closure or
                entry.error_on_uninitialized)

    def is_statically_assigned(self, entry):
        if (entry.is_local and entry.is_variable and
                (entry.type.is_struct_or_union or
                 entry.type.is_complex or
                 entry.type.is_array or
                 entry.type.is_cpp_class)):
            # stack allocated structured variable => never uninitialised
            return True
        return False

    def mark_position(self, node):
        """Mark position, will be used to draw graph nodes."""
        if self.block:
            self.block.positions.add(node.pos[:2])

    def mark_assignment(self, lhs, rhs, entry):
        if self.block and self.is_tracked(entry):
            assignment = NameAssignment(lhs, rhs, entry)
            self.block.stats.append(assignment)
            self.block.gen[entry] = assignment
            self.entries.add(entry)

    def mark_argument(self, lhs, rhs, entry):
        if self.block and self.is_tracked(entry):
            assignment = Argument(lhs, rhs, entry)
            self.block.stats.append(assignment)
            self.block.gen[entry] = assignment
            self.entries.add(entry)

    def mark_deletion(self, node, entry):
        if self.block and self.is_tracked(entry):
            assignment = NameDeletion(node, entry)
            self.block.stats.append(assignment)
            self.block.gen[entry] = Uninitialized
            self.entries.add(entry)

    def mark_reference(self, node, entry):
        if self.block and self.is_tracked(entry):
            self.block.stats.append(NameReference(node, entry))
            ## XXX: We don't track expression evaluation order so we can't use
            ## XXX: successful reference as initialization sign.
## # Local variable is definitely bound after this reference ## if not node.allow_null: ## self.block.bounded.add(entry) self.entries.add(entry) def normalize(self): """Delete unreachable and orphan blocks.""" queue = set([self.entry_point]) visited = set() while queue: root = queue.pop() visited.add(root) for child in root.children: if child not in visited: queue.add(child) unreachable = self.blocks - visited for block in unreachable: block.detach() visited.remove(self.entry_point) for block in visited: if block.empty(): for parent in block.parents: # Re-parent for child in block.children: parent.add_child(child) block.detach() unreachable.add(block) self.blocks -= unreachable def initialize(self): """Set initial state, map assignments to bits.""" self.assmts = {} bit = 1 for entry in self.entries: assmts = AssignmentList() assmts.mask = assmts.bit = bit self.assmts[entry] = assmts bit <<= 1 for block in self.blocks: for stat in block.stats: if isinstance(stat, NameAssignment): stat.bit = bit assmts = self.assmts[stat.entry] assmts.stats.append(stat) assmts.mask |= bit bit <<= 1 for block in self.blocks: for entry, stat in block.gen.items(): assmts = self.assmts[entry] if stat is Uninitialized: block.i_gen |= assmts.bit else: block.i_gen |= stat.bit block.i_kill |= assmts.mask block.i_output = block.i_gen for entry in block.bounded: block.i_kill |= self.assmts[entry].bit for assmts in self.assmts.values(): self.entry_point.i_gen |= assmts.bit self.entry_point.i_output = self.entry_point.i_gen def map_one(self, istate, entry): ret = set() assmts = self.assmts[entry] if istate & assmts.bit: if self.is_statically_assigned(entry): ret.add(StaticAssignment(entry)) elif entry.from_closure: ret.add(Unknown) else: ret.add(Uninitialized) for assmt in assmts.stats: if istate & assmt.bit: ret.add(assmt) return ret def reaching_definitions(self): """Per-block reaching definitions analysis.""" dirty = True while dirty: dirty = False for block in self.blocks: i_input = 0 for parent in block.parents: i_input |= parent.i_output i_output = (i_input & ~block.i_kill) | block.i_gen if i_output != block.i_output: dirty = True block.i_input = i_input block.i_output = i_output class LoopDescr(object): def __init__(self, next_block, loop_block): self.next_block = next_block self.loop_block = loop_block self.exceptions = [] class ExceptionDescr(object): """Exception handling helper. entry_point ControlBlock Exception handling entry point finally_enter ControlBlock Normal finally clause entry point finally_exit ControlBlock Normal finally clause exit point """ def __init__(self, entry_point, finally_enter=None, finally_exit=None): self.entry_point = entry_point self.finally_enter = finally_enter self.finally_exit = finally_exit class NameAssignment(object): def __init__(self, lhs, rhs, entry): if lhs.cf_state is None: lhs.cf_state = set() self.lhs = lhs self.rhs = rhs self.entry = entry self.pos = lhs.pos self.refs = set() self.is_arg = False self.is_deletion = False self.inferred_type = None def __repr__(self): return '%s(entry=%r)' % (self.__class__.__name__, self.entry) def infer_type(self): self.inferred_type = self.rhs.infer_type(self.entry.scope) return self.inferred_type def type_dependencies(self): return self.rhs.type_dependencies(self.entry.scope) @property def type(self): if not self.entry.type.is_unspecified: return self.entry.type return self.inferred_type class StaticAssignment(NameAssignment): """Initialised at declaration time, e.g. 
stack allocation.""" def __init__(self, entry): if not entry.type.is_pyobject: may_be_none = False else: may_be_none = None # unknown lhs = TypedExprNode( entry.type, may_be_none=may_be_none, pos=entry.pos) super(StaticAssignment, self).__init__(lhs, lhs, entry) def infer_type(self): return self.entry.type def type_dependencies(self): return () class Argument(NameAssignment): def __init__(self, lhs, rhs, entry): NameAssignment.__init__(self, lhs, rhs, entry) self.is_arg = True class NameDeletion(NameAssignment): def __init__(self, lhs, entry): NameAssignment.__init__(self, lhs, lhs, entry) self.is_deletion = True def infer_type(self): inferred_type = self.rhs.infer_type(self.entry.scope) if (not inferred_type.is_pyobject and inferred_type.can_coerce_to_pyobject(self.entry.scope)): return py_object_type self.inferred_type = inferred_type return inferred_type class Uninitialized(object): """Definitely not initialised yet.""" class Unknown(object): """Coming from outer closure, might be initialised or not.""" class NameReference(object): def __init__(self, node, entry): if node.cf_state is None: node.cf_state = set() self.node = node self.entry = entry self.pos = node.pos def __repr__(self): return '%s(entry=%r)' % (self.__class__.__name__, self.entry) class ControlFlowState(list): # Keeps track of Node's entry assignments # # cf_is_null [boolean] It is uninitialized # cf_maybe_null [boolean] May be uninitialized # is_single [boolean] Has only one assignment at this point cf_maybe_null = False cf_is_null = False is_single = False def __init__(self, state): if Uninitialized in state: state.discard(Uninitialized) self.cf_maybe_null = True if not state: self.cf_is_null = True elif Unknown in state: state.discard(Unknown) self.cf_maybe_null = True else: if len(state) == 1: self.is_single = True # XXX: Remove fake_rhs_expr super(ControlFlowState, self).__init__( [i for i in state if i.rhs is not fake_rhs_expr]) def one(self): return self[0] class GVContext(object): """Graphviz subgraph object.""" def __init__(self): self.blockids = {} self.nextid = 0 self.children = [] self.sources = {} def add(self, child): self.children.append(child) def nodeid(self, block): if block not in self.blockids: self.blockids[block] = 'block%d' % self.nextid self.nextid += 1 return self.blockids[block] def extract_sources(self, block): if not block.positions: return '' start = min(block.positions) stop = max(block.positions) srcdescr = start[0] if not srcdescr in self.sources: self.sources[srcdescr] = list(srcdescr.get_lines()) lines = self.sources[srcdescr] return '\\n'.join([l.strip() for l in lines[start[1] - 1:stop[1]]]) def render(self, fp, name, annotate_defs=False): """Render graphviz dot graph""" fp.write('digraph %s {\n' % name) fp.write(' node [shape=box];\n') for child in self.children: child.render(fp, self, annotate_defs) fp.write('}\n') def escape(self, text): return text.replace('"', '\\"').replace('\n', '\\n') class GV(object): """Graphviz DOT renderer.""" def __init__(self, name, flow): self.name = name self.flow = flow def render(self, fp, ctx, annotate_defs=False): fp.write(' subgraph %s {\n' % self.name) for block in self.flow.blocks: label = ctx.extract_sources(block) if annotate_defs: for stat in block.stats: if isinstance(stat, NameAssignment): label += '\n %s [%s %s]' % ( stat.entry.name, 'deletion' if stat.is_deletion else 'definition', stat.pos[1]) elif isinstance(stat, NameReference): if stat.entry: label += '\n %s [reference %s]' % (stat.entry.name, stat.pos[1]) if not label: label = 'empty' 
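            # Hedged note (not from the original source): each basic block
            # becomes one DOT node here.  With the 'control_flow.dot_output'
            # directive pointing at a file (see visit_ModuleNode below), the
            # result can be rendered with the standard Graphviz CLI, e.g.
            #     dot -Tpng cfg.dot -o cfg.png
            # where 'cfg.dot' is whatever path the directive was given.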
pid = ctx.nodeid(block) fp.write(' %s [label="%s"];\n' % (pid, ctx.escape(label))) for block in self.flow.blocks: pid = ctx.nodeid(block) for child in block.children: fp.write(' %s -> %s;\n' % (pid, ctx.nodeid(child))) fp.write(' }\n') class MessageCollection(object): """Collect error/warnings messages first then sort""" def __init__(self): self.messages = set() def error(self, pos, message): self.messages.add((pos, True, message)) def warning(self, pos, message): self.messages.add((pos, False, message)) def report(self): for pos, is_error, message in sorted(self.messages): if is_error: error(pos, message) else: warning(pos, message, 2) def check_definitions(flow, compiler_directives): flow.initialize() flow.reaching_definitions() # Track down state assignments = set() # Node to entry map references = {} assmt_nodes = set() for block in flow.blocks: i_state = block.i_input for stat in block.stats: i_assmts = flow.assmts[stat.entry] state = flow.map_one(i_state, stat.entry) if isinstance(stat, NameAssignment): stat.lhs.cf_state.update(state) assmt_nodes.add(stat.lhs) i_state = i_state & ~i_assmts.mask if stat.is_deletion: i_state |= i_assmts.bit else: i_state |= stat.bit assignments.add(stat) if stat.rhs is not fake_rhs_expr: stat.entry.cf_assignments.append(stat) elif isinstance(stat, NameReference): references[stat.node] = stat.entry stat.entry.cf_references.append(stat) stat.node.cf_state.update(state) ## if not stat.node.allow_null: ## i_state &= ~i_assmts.bit ## # after successful read, the state is known to be initialised state.discard(Uninitialized) state.discard(Unknown) for assmt in state: assmt.refs.add(stat) # Check variable usage warn_maybe_uninitialized = compiler_directives['warn.maybe_uninitialized'] warn_unused_result = compiler_directives['warn.unused_result'] warn_unused = compiler_directives['warn.unused'] warn_unused_arg = compiler_directives['warn.unused_arg'] messages = MessageCollection() # assignment hints for node in assmt_nodes: if Uninitialized in node.cf_state: node.cf_maybe_null = True if len(node.cf_state) == 1: node.cf_is_null = True else: node.cf_is_null = False elif Unknown in node.cf_state: node.cf_maybe_null = True else: node.cf_is_null = False node.cf_maybe_null = False # Find uninitialized references and cf-hints for node, entry in references.items(): if Uninitialized in node.cf_state: node.cf_maybe_null = True if not entry.from_closure and len(node.cf_state) == 1: node.cf_is_null = True if (node.allow_null or entry.from_closure or entry.is_pyclass_attr or entry.type.is_error): pass # Can be uninitialized here elif node.cf_is_null: if entry.error_on_uninitialized or ( Options.error_on_uninitialized and ( entry.type.is_pyobject or entry.type.is_unspecified)): messages.error( node.pos, "local variable '%s' referenced before assignment" % entry.name) else: messages.warning( node.pos, "local variable '%s' referenced before assignment" % entry.name) elif warn_maybe_uninitialized: messages.warning( node.pos, "local variable '%s' might be referenced before assignment" % entry.name) elif Unknown in node.cf_state: # TODO: better cross-closure analysis to know when inner functions # are being called before a variable is being set, and when # a variable is known to be set before even defining the # inner function, etc. 
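                # Illustrative example (not from the original source) of a
                # reference whose state is Unknown because it comes from an
                # enclosing closure:
                #     def outer():
                #         def inner():
                #             return x      # 'x' bound later in outer()
                #         x = 1
                #         return inner()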
node.cf_maybe_null = True else: node.cf_is_null = False node.cf_maybe_null = False # Unused result for assmt in assignments: if (not assmt.refs and not assmt.entry.is_pyclass_attr and not assmt.entry.in_closure): if assmt.entry.cf_references and warn_unused_result: if assmt.is_arg: messages.warning(assmt.pos, "Unused argument value '%s'" % assmt.entry.name) else: messages.warning(assmt.pos, "Unused result in '%s'" % assmt.entry.name) assmt.lhs.cf_used = False # Unused entries for entry in flow.entries: if (not entry.cf_references and not entry.is_pyclass_attr): if entry.name != '_' and not entry.name.startswith('unused'): # '_' is often used for unused variables, e.g. in loops if entry.is_arg: if warn_unused_arg: messages.warning(entry.pos, "Unused argument '%s'" % entry.name) else: if warn_unused: messages.warning(entry.pos, "Unused entry '%s'" % entry.name) entry.cf_used = False messages.report() for node in assmt_nodes: node.cf_state = ControlFlowState(node.cf_state) for node in references: node.cf_state = ControlFlowState(node.cf_state) class AssignmentCollector(TreeVisitor): def __init__(self): super(AssignmentCollector, self).__init__() self.assignments = [] def visit_Node(self): self._visitchildren(self, None) def visit_SingleAssignmentNode(self, node): self.assignments.append((node.lhs, node.rhs)) def visit_CascadedAssignmentNode(self, node): for lhs in node.lhs_list: self.assignments.append((lhs, node.rhs)) class ControlFlowAnalysis(CythonTransform): def visit_ModuleNode(self, node): self.gv_ctx = GVContext() self.constant_folder = ConstantFolding() # Set of NameNode reductions self.reductions = set() self.in_inplace_assignment = False self.env_stack = [] self.env = node.scope self.stack = [] self.flow = ControlFlow() self.visitchildren(node) check_definitions(self.flow, self.current_directives) dot_output = self.current_directives['control_flow.dot_output'] if dot_output: annotate_defs = self.current_directives['control_flow.dot_annotate_defs'] fp = open(dot_output, 'wt') try: self.gv_ctx.render(fp, 'module', annotate_defs=annotate_defs) finally: fp.close() return node def visit_FuncDefNode(self, node): for arg in node.args: if arg.default: self.visitchildren(arg) self.visitchildren(node, ('decorators',)) self.env_stack.append(self.env) self.env = node.local_scope self.stack.append(self.flow) self.flow = ControlFlow() # Collect all entries for entry in node.local_scope.entries.values(): if self.flow.is_tracked(entry): self.flow.entries.add(entry) self.mark_position(node) # Function body block self.flow.nextblock() for arg in node.args: self._visit(arg) if node.star_arg: self.flow.mark_argument(node.star_arg, TypedExprNode(Builtin.tuple_type, may_be_none=False), node.star_arg.entry) if node.starstar_arg: self.flow.mark_argument(node.starstar_arg, TypedExprNode(Builtin.dict_type, may_be_none=False), node.starstar_arg.entry) self._visit(node.body) # Workaround for generators if node.is_generator: self._visit(node.gbody.body) # Exit point if self.flow.block: self.flow.block.add_child(self.flow.exit_point) # Cleanup graph self.flow.normalize() check_definitions(self.flow, self.current_directives) self.flow.blocks.add(self.flow.entry_point) self.gv_ctx.add(GV(node.local_scope.name, self.flow)) self.flow = self.stack.pop() self.env = self.env_stack.pop() return node def visit_DefNode(self, node): node.used = True return self.visit_FuncDefNode(node) def visit_GeneratorBodyDefNode(self, node): return node def visit_CTypeDefNode(self, node): return node def mark_assignment(self, lhs, 
rhs=None): if not self.flow.block: return if self.flow.exceptions: exc_descr = self.flow.exceptions[-1] self.flow.block.add_child(exc_descr.entry_point) self.flow.nextblock() if not rhs: rhs = object_expr if lhs.is_name: if lhs.entry is not None: entry = lhs.entry else: entry = self.env.lookup(lhs.name) if entry is None: # TODO: This shouldn't happen... return self.flow.mark_assignment(lhs, rhs, entry) elif lhs.is_sequence_constructor: for i, arg in enumerate(lhs.args): if not rhs or arg.is_starred: item_node = None else: item_node = rhs.inferable_item_node(i) self.mark_assignment(arg, item_node) else: self._visit(lhs) if self.flow.exceptions: exc_descr = self.flow.exceptions[-1] self.flow.block.add_child(exc_descr.entry_point) self.flow.nextblock() def mark_position(self, node): """Mark position if DOT output is enabled.""" if self.current_directives['control_flow.dot_output']: self.flow.mark_position(node) def visit_FromImportStatNode(self, node): for name, target in node.items: if name != "*": self.mark_assignment(target) self.visitchildren(node) return node def visit_AssignmentNode(self, node): raise InternalError("Unhandled assignment node") def visit_SingleAssignmentNode(self, node): self._visit(node.rhs) self.mark_assignment(node.lhs, node.rhs) return node def visit_CascadedAssignmentNode(self, node): self._visit(node.rhs) for lhs in node.lhs_list: self.mark_assignment(lhs, node.rhs) return node def visit_ParallelAssignmentNode(self, node): collector = AssignmentCollector() collector.visitchildren(node) for lhs, rhs in collector.assignments: self._visit(rhs) for lhs, rhs in collector.assignments: self.mark_assignment(lhs, rhs) return node def visit_InPlaceAssignmentNode(self, node): self.in_inplace_assignment = True self.visitchildren(node) self.in_inplace_assignment = False self.mark_assignment(node.lhs, self.constant_folder(node.create_binop_node())) return node def visit_DelStatNode(self, node): for arg in node.args: if arg.is_name: entry = arg.entry or self.env.lookup(arg.name) if entry.in_closure or entry.from_closure: error(arg.pos, "can not delete variable '%s' " "referenced in nested scope" % entry.name) if not node.ignore_nonexisting: self._visit(arg) # mark reference self.flow.mark_deletion(arg, entry) else: self._visit(arg) return node def visit_CArgDeclNode(self, node): entry = self.env.lookup(node.name) if entry: may_be_none = not node.not_none self.flow.mark_argument( node, TypedExprNode(entry.type, may_be_none), entry) return node def visit_NameNode(self, node): if self.flow.block: entry = node.entry or self.env.lookup(node.name) if entry: self.flow.mark_reference(node, entry) if entry in self.reductions and not self.in_inplace_assignment: error(node.pos, "Cannot read reduction variable in loop body") return node def visit_StatListNode(self, node): if self.flow.block: for stat in node.stats: self._visit(stat) if not self.flow.block: stat.is_terminator = True break return node def visit_Node(self, node): self.visitchildren(node) self.mark_position(node) return node def visit_IfStatNode(self, node): next_block = self.flow.newblock() parent = self.flow.block # If clauses for clause in node.if_clauses: parent = self.flow.nextblock(parent) self._visit(clause.condition) self.flow.nextblock() self._visit(clause.body) if self.flow.block: self.flow.block.add_child(next_block) # Else clause if node.else_clause: self.flow.nextblock(parent=parent) self._visit(node.else_clause) if self.flow.block: self.flow.block.add_child(next_block) else: parent.add_child(next_block) if 
next_block.parents: self.flow.block = next_block else: self.flow.block = None return node def visit_WhileStatNode(self, node): condition_block = self.flow.nextblock() next_block = self.flow.newblock() # Condition block self.flow.loops.append(LoopDescr(next_block, condition_block)) if node.condition: self._visit(node.condition) # Body block self.flow.nextblock() self._visit(node.body) self.flow.loops.pop() # Loop it if self.flow.block: self.flow.block.add_child(condition_block) self.flow.block.add_child(next_block) # Else clause if node.else_clause: self.flow.nextblock(parent=condition_block) self._visit(node.else_clause) if self.flow.block: self.flow.block.add_child(next_block) else: condition_block.add_child(next_block) if next_block.parents: self.flow.block = next_block else: self.flow.block = None return node def mark_forloop_target(self, node): # TODO: Remove redundancy with range optimization... is_special = False sequence = node.iterator.sequence target = node.target if isinstance(sequence, ExprNodes.SimpleCallNode): function = sequence.function if sequence.self is None and function.is_name: entry = self.env.lookup(function.name) if not entry or entry.is_builtin: if function.name == 'reversed' and len(sequence.args) == 1: sequence = sequence.args[0] elif function.name == 'enumerate' and len(sequence.args) == 1: if target.is_sequence_constructor and len(target.args) == 2: iterator = sequence.args[0] if iterator.is_name: iterator_type = iterator.infer_type(self.env) if iterator_type.is_builtin_type: # assume that builtin types have a length within Py_ssize_t self.mark_assignment( target.args[0], ExprNodes.IntNode(target.pos, value='PY_SSIZE_T_MAX', type=PyrexTypes.c_py_ssize_t_type)) target = target.args[1] sequence = sequence.args[0] if isinstance(sequence, ExprNodes.SimpleCallNode): function = sequence.function if sequence.self is None and function.is_name: entry = self.env.lookup(function.name) if not entry or entry.is_builtin: if function.name in ('range', 'xrange'): is_special = True for arg in sequence.args[:2]: self.mark_assignment(target, arg) if len(sequence.args) > 2: self.mark_assignment(target, self.constant_folder( ExprNodes.binop_node(node.pos, '+', sequence.args[0], sequence.args[2]))) if not is_special: # A for-loop basically translates to subsequent calls to # __getitem__(), so using an IndexNode here allows us to # naturally infer the base type of pointers, C arrays, # Python strings, etc., while correctly falling back to an # object type when the base type cannot be handled. 
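        # Note (not from the original source): for the special-cased
        # "for i in range(a, b, c)" form above, the target was instead
        # marked as assigned from 'a', 'b' and the constant-folded 'a + c',
        # which is enough for type inference without modelling the
        # iteration itself.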
self.mark_assignment(target, node.item) def visit_AsyncForStatNode(self, node): return self.visit_ForInStatNode(node) def visit_ForInStatNode(self, node): condition_block = self.flow.nextblock() next_block = self.flow.newblock() # Condition with iterator self.flow.loops.append(LoopDescr(next_block, condition_block)) self._visit(node.iterator) # Target assignment self.flow.nextblock() if isinstance(node, Nodes.ForInStatNode): self.mark_forloop_target(node) elif isinstance(node, Nodes.AsyncForStatNode): # not entirely correct, but good enough for now self.mark_assignment(node.target, node.item) else: # Parallel self.mark_assignment(node.target) # Body block if isinstance(node, Nodes.ParallelRangeNode): # In case of an invalid self._delete_privates(node, exclude=node.target.entry) self.flow.nextblock() self._visit(node.body) self.flow.loops.pop() # Loop it if self.flow.block: self.flow.block.add_child(condition_block) # Else clause if node.else_clause: self.flow.nextblock(parent=condition_block) self._visit(node.else_clause) if self.flow.block: self.flow.block.add_child(next_block) else: condition_block.add_child(next_block) if next_block.parents: self.flow.block = next_block else: self.flow.block = None return node def _delete_privates(self, node, exclude=None): for private_node in node.assigned_nodes: if not exclude or private_node.entry is not exclude: self.flow.mark_deletion(private_node, private_node.entry) def visit_ParallelRangeNode(self, node): reductions = self.reductions # if node.target is None or not a NameNode, an error will have # been previously issued if hasattr(node.target, 'entry'): self.reductions = set(reductions) for private_node in node.assigned_nodes: private_node.entry.error_on_uninitialized = True pos, reduction = node.assignments[private_node.entry] if reduction: self.reductions.add(private_node.entry) node = self.visit_ForInStatNode(node) self.reductions = reductions return node def visit_ParallelWithBlockNode(self, node): for private_node in node.assigned_nodes: private_node.entry.error_on_uninitialized = True self._delete_privates(node) self.visitchildren(node) self._delete_privates(node) return node def visit_ForFromStatNode(self, node): condition_block = self.flow.nextblock() next_block = self.flow.newblock() # Condition with iterator self.flow.loops.append(LoopDescr(next_block, condition_block)) self._visit(node.bound1) self._visit(node.bound2) if node.step is not None: self._visit(node.step) # Target assignment self.flow.nextblock() self.mark_assignment(node.target, node.bound1) if node.step is not None: self.mark_assignment(node.target, self.constant_folder( ExprNodes.binop_node(node.pos, '+', node.bound1, node.step))) # Body block self.flow.nextblock() self._visit(node.body) self.flow.loops.pop() # Loop it if self.flow.block: self.flow.block.add_child(condition_block) # Else clause if node.else_clause: self.flow.nextblock(parent=condition_block) self._visit(node.else_clause) if self.flow.block: self.flow.block.add_child(next_block) else: condition_block.add_child(next_block) if next_block.parents: self.flow.block = next_block else: self.flow.block = None return node def visit_LoopNode(self, node): raise InternalError("Generic loops are not supported") def visit_WithTargetAssignmentStatNode(self, node): self.mark_assignment(node.lhs, node.with_node.enter_call) return node def visit_WithStatNode(self, node): self._visit(node.manager) self._visit(node.enter_call) self._visit(node.body) return node def visit_TryExceptStatNode(self, node): # After exception 
handling next_block = self.flow.newblock() # Body block self.flow.newblock() # Exception entry point entry_point = self.flow.newblock() self.flow.exceptions.append(ExceptionDescr(entry_point)) self.flow.nextblock() ## XXX: links to exception handling point should be added by ## XXX: children nodes self.flow.block.add_child(entry_point) self.flow.nextblock() self._visit(node.body) self.flow.exceptions.pop() # After exception if self.flow.block: if node.else_clause: self.flow.nextblock() self._visit(node.else_clause) if self.flow.block: self.flow.block.add_child(next_block) for clause in node.except_clauses: self.flow.block = entry_point if clause.pattern: for pattern in clause.pattern: self._visit(pattern) else: # TODO: handle * pattern pass entry_point = self.flow.newblock(parent=self.flow.block) self.flow.nextblock() if clause.target: self.mark_assignment(clause.target) self._visit(clause.body) if self.flow.block: self.flow.block.add_child(next_block) if self.flow.exceptions: entry_point.add_child(self.flow.exceptions[-1].entry_point) if next_block.parents: self.flow.block = next_block else: self.flow.block = None return node def visit_TryFinallyStatNode(self, node): body_block = self.flow.nextblock() # Exception entry point entry_point = self.flow.newblock() self.flow.block = entry_point self._visit(node.finally_except_clause) if self.flow.block and self.flow.exceptions: self.flow.block.add_child(self.flow.exceptions[-1].entry_point) # Normal execution finally_enter = self.flow.newblock() self.flow.block = finally_enter self._visit(node.finally_clause) finally_exit = self.flow.block descr = ExceptionDescr(entry_point, finally_enter, finally_exit) self.flow.exceptions.append(descr) if self.flow.loops: self.flow.loops[-1].exceptions.append(descr) self.flow.block = body_block ## XXX: Is it still required body_block.add_child(entry_point) self.flow.nextblock() self._visit(node.body) self.flow.exceptions.pop() if self.flow.loops: self.flow.loops[-1].exceptions.pop() if self.flow.block: self.flow.block.add_child(finally_enter) if finally_exit: self.flow.block = self.flow.nextblock(parent=finally_exit) else: self.flow.block = None return node def visit_RaiseStatNode(self, node): self.mark_position(node) self.visitchildren(node) if self.flow.exceptions: self.flow.block.add_child(self.flow.exceptions[-1].entry_point) self.flow.block = None return node def visit_ReraiseStatNode(self, node): self.mark_position(node) if self.flow.exceptions: self.flow.block.add_child(self.flow.exceptions[-1].entry_point) self.flow.block = None return node def visit_ReturnStatNode(self, node): self.mark_position(node) self.visitchildren(node) for exception in self.flow.exceptions[::-1]: if exception.finally_enter: self.flow.block.add_child(exception.finally_enter) if exception.finally_exit: exception.finally_exit.add_child(self.flow.exit_point) break else: if self.flow.block: self.flow.block.add_child(self.flow.exit_point) self.flow.block = None return node def visit_BreakStatNode(self, node): if not self.flow.loops: #error(node.pos, "break statement not inside loop") return node loop = self.flow.loops[-1] self.mark_position(node) for exception in loop.exceptions[::-1]: if exception.finally_enter: self.flow.block.add_child(exception.finally_enter) if exception.finally_exit: exception.finally_exit.add_child(loop.next_block) break else: self.flow.block.add_child(loop.next_block) self.flow.block = None return node def visit_ContinueStatNode(self, node): if not self.flow.loops: #error(node.pos, "continue statement not 
inside loop") return node loop = self.flow.loops[-1] self.mark_position(node) for exception in loop.exceptions[::-1]: if exception.finally_enter: self.flow.block.add_child(exception.finally_enter) if exception.finally_exit: exception.finally_exit.add_child(loop.loop_block) break else: self.flow.block.add_child(loop.loop_block) self.flow.block = None return node def visit_ComprehensionNode(self, node): if node.expr_scope: self.env_stack.append(self.env) self.env = node.expr_scope # Skip append node here self._visit(node.loop) if node.expr_scope: self.env = self.env_stack.pop() return node def visit_ScopedExprNode(self, node): if node.expr_scope: self.env_stack.append(self.env) self.env = node.expr_scope self.visitchildren(node) if node.expr_scope: self.env = self.env_stack.pop() return node def visit_PyClassDefNode(self, node): self.visitchildren(node, ('dict', 'metaclass', 'mkw', 'bases', 'class_result')) self.flow.mark_assignment(node.target, node.classobj, self.env.lookup(node.name)) self.env_stack.append(self.env) self.env = node.scope self.flow.nextblock() self.visitchildren(node, ('body',)) self.flow.nextblock() self.env = self.env_stack.pop() return node def visit_AmpersandNode(self, node): if node.operand.is_name: # Fake assignment to silence warning self.mark_assignment(node.operand, fake_rhs_expr) self.visitchildren(node) return node ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/FusedNode.py0000644000175000017500000011121200000000000027503 0ustar00vincentvincent00000000000000from __future__ import absolute_import import copy from . import (ExprNodes, PyrexTypes, MemoryView, ParseTreeTransforms, StringEncoding, Errors) from .ExprNodes import CloneNode, ProxyNode, TupleNode from .Nodes import FuncDefNode, CFuncDefNode, StatListNode, DefNode from ..Utils import OrderedSet class FusedCFuncDefNode(StatListNode): """ This node replaces a function with fused arguments. It deep-copies the function for every permutation of fused types, and allocates a new local scope for it. It keeps track of the original function in self.node, and the entry of the original function in the symbol table is given the 'fused_cfunction' attribute which points back to us. Then when a function lookup occurs (to e.g. call it), the call can be dispatched to the right function. node FuncDefNode the original function nodes [FuncDefNode] list of copies of node with different specific types py_func DefNode the fused python function subscriptable from Python space __signatures__ A DictNode mapping signature specialization strings to PyCFunction nodes resulting_fused_function PyCFunction for the fused DefNode that delegates to specializations fused_func_assignment Assignment of the fused function to the function name defaults_tuple TupleNode of defaults (letting PyCFunctionNode build defaults would result in many different tuples) specialized_pycfuncs List of synthesized pycfunction nodes for the specializations code_object CodeObjectNode shared by all specializations and the fused function fused_compound_types All fused (compound) types (e.g. 
floating[:]) """ __signatures__ = None resulting_fused_function = None fused_func_assignment = None defaults_tuple = None decorators = None child_attrs = StatListNode.child_attrs + [ '__signatures__', 'resulting_fused_function', 'fused_func_assignment'] def __init__(self, node, env): super(FusedCFuncDefNode, self).__init__(node.pos) self.nodes = [] self.node = node is_def = isinstance(self.node, DefNode) if is_def: # self.node.decorators = [] self.copy_def(env) else: self.copy_cdef(env) # Perform some sanity checks. If anything fails, it's a bug for n in self.nodes: assert not n.entry.type.is_fused assert not n.local_scope.return_type.is_fused if node.return_type.is_fused: assert not n.return_type.is_fused if not is_def and n.cfunc_declarator.optional_arg_count: assert n.type.op_arg_struct node.entry.fused_cfunction = self # Copy the nodes as AnalyseDeclarationsTransform will prepend # self.py_func to self.stats, as we only want specialized # CFuncDefNodes in self.nodes self.stats = self.nodes[:] def copy_def(self, env): """ Create a copy of the original def or lambda function for specialized versions. """ fused_compound_types = PyrexTypes.unique( [arg.type for arg in self.node.args if arg.type.is_fused]) fused_types = self._get_fused_base_types(fused_compound_types) permutations = PyrexTypes.get_all_specialized_permutations(fused_types) self.fused_compound_types = fused_compound_types if self.node.entry in env.pyfunc_entries: env.pyfunc_entries.remove(self.node.entry) for cname, fused_to_specific in permutations: copied_node = copy.deepcopy(self.node) # keep signature object identity for special casing in DefNode.analyse_declarations() copied_node.entry.signature = self.node.entry.signature self._specialize_function_args(copied_node.args, fused_to_specific) copied_node.return_type = self.node.return_type.specialize( fused_to_specific) copied_node.analyse_declarations(env) # copied_node.is_staticmethod = self.node.is_staticmethod # copied_node.is_classmethod = self.node.is_classmethod self.create_new_local_scope(copied_node, env, fused_to_specific) self.specialize_copied_def(copied_node, cname, self.node.entry, fused_to_specific, fused_compound_types) PyrexTypes.specialize_entry(copied_node.entry, cname) copied_node.entry.used = True env.entries[copied_node.entry.name] = copied_node.entry if not self.replace_fused_typechecks(copied_node): break self.orig_py_func = self.node self.py_func = self.make_fused_cpdef(self.node, env, is_def=True) def copy_cdef(self, env): """ Create a copy of the original c(p)def function for all specialized versions. """ permutations = self.node.type.get_all_specialized_permutations() # print 'Node %s has %d specializations:' % (self.node.entry.name, # len(permutations)) # import pprint; pprint.pprint([d for cname, d in permutations]) # Prevent copying of the python function self.orig_py_func = orig_py_func = self.node.py_func self.node.py_func = None if orig_py_func: env.pyfunc_entries.remove(orig_py_func.entry) fused_types = self.node.type.get_fused_types() self.fused_compound_types = fused_types new_cfunc_entries = [] for cname, fused_to_specific in permutations: copied_node = copy.deepcopy(self.node) # Make the types in our CFuncType specific. type = copied_node.type.specialize(fused_to_specific) entry = copied_node.entry type.specialize_entry(entry, cname) # Reuse existing Entries (e.g. from .pxd files). 
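# Aside: a minimal, self-contained sketch of the idea behind
# get_all_specialized_permutations() used above -- every base fused type is
# one axis, and each specialization picks one concrete type per axis. The
# helper and its inputs below are hypothetical toy data, not Cython's
# actual PyrexTypes API.
import itertools

def toy_specialized_permutations(fused_types):
    # fused_types: {fused type name: [concrete type names]}
    names = sorted(fused_types)
    for combo in itertools.product(*(fused_types[n] for n in names)):
        fused_to_specific = dict(zip(names, combo))
        cname = "__".join("%s_%s" % kv for kv in sorted(fused_to_specific.items()))
        yield cname, fused_to_specific

# Two independent fused types with two concrete types each yield
# 2 * 2 = 4 specialized copies of the function:
assert len(list(toy_specialized_permutations(
    {"floating": ["float", "double"], "integral": ["int", "long"]}))) == 4

# The loop below scans env.cfunc_entries so that an entry that already
# exists (e.g. created while processing a .pxd file) is reused rather
# than duplicated.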
for i, orig_entry in enumerate(env.cfunc_entries): if entry.cname == orig_entry.cname and type.same_as_resolved_type(orig_entry.type): copied_node.entry = env.cfunc_entries[i] if not copied_node.entry.func_cname: copied_node.entry.func_cname = entry.func_cname entry = copied_node.entry type = entry.type break else: new_cfunc_entries.append(entry) copied_node.type = type entry.type, type.entry = type, entry entry.used = (entry.used or self.node.entry.defined_in_pxd or env.is_c_class_scope or entry.is_cmethod) if self.node.cfunc_declarator.optional_arg_count: self.node.cfunc_declarator.declare_optional_arg_struct( type, env, fused_cname=cname) copied_node.return_type = type.return_type self.create_new_local_scope(copied_node, env, fused_to_specific) # Make the argument types in the CFuncDeclarator specific self._specialize_function_args(copied_node.cfunc_declarator.args, fused_to_specific) # If a cpdef, declare all specialized cpdefs (this # also calls analyse_declarations) copied_node.declare_cpdef_wrapper(env) if copied_node.py_func: env.pyfunc_entries.remove(copied_node.py_func.entry) self.specialize_copied_def( copied_node.py_func, cname, self.node.entry.as_variable, fused_to_specific, fused_types) if not self.replace_fused_typechecks(copied_node): break # replace old entry with new entries try: cindex = env.cfunc_entries.index(self.node.entry) except ValueError: env.cfunc_entries.extend(new_cfunc_entries) else: env.cfunc_entries[cindex:cindex+1] = new_cfunc_entries if orig_py_func: self.py_func = self.make_fused_cpdef(orig_py_func, env, is_def=False) else: self.py_func = orig_py_func def _get_fused_base_types(self, fused_compound_types): """ Get a list of unique basic fused types, from a list of (possibly) compound fused types. """ base_types = [] seen = set() for fused_type in fused_compound_types: fused_type.get_fused_types(result=base_types, seen=seen) return base_types def _specialize_function_args(self, args, fused_to_specific): for arg in args: if arg.type.is_fused: arg.type = arg.type.specialize(fused_to_specific) if arg.type.is_memoryviewslice: arg.type.validate_memslice_dtype(arg.pos) def create_new_local_scope(self, node, env, f2s): """ Create a new local scope for the copied node and append it to self.nodes. A new local scope is needed because the arguments with the fused types are already in the local scope, and we need the specialized entries created after analyse_declarations on each specialized version of the (CFunc)DefNode. f2s is a dict mapping each fused type to its specialized version """ node.create_local_scope(env) node.local_scope.fused_to_specific = f2s # This is copied from the original function, set it to false to # stop recursion node.has_fused_arguments = False self.nodes.append(node) def specialize_copied_def(self, node, cname, py_entry, f2s, fused_compound_types): """Specialize the copy of a DefNode given the copied node, the specialization cname and the original DefNode entry""" fused_types = self._get_fused_base_types(fused_compound_types) type_strings = [ PyrexTypes.specialization_signature_string(fused_type, f2s) for fused_type in fused_types ] node.specialized_signature_string = '|'.join(type_strings) node.entry.pymethdef_cname = PyrexTypes.get_fused_cname( cname, node.entry.pymethdef_cname) node.entry.doc = py_entry.doc node.entry.doc_cname = py_entry.doc_cname def replace_fused_typechecks(self, copied_node): """ Branch-prune fused type checks like if fused_t is int: ... 
Returns whether an error was issued and whether we should stop in in order to prevent a flood of errors. """ num_errors = Errors.num_errors transform = ParseTreeTransforms.ReplaceFusedTypeChecks( copied_node.local_scope) transform(copied_node) if Errors.num_errors > num_errors: return False return True def _fused_instance_checks(self, normal_types, pyx_code, env): """ Generate Cython code for instance checks, matching an object to specialized types. """ for specialized_type in normal_types: # all_numeric = all_numeric and specialized_type.is_numeric pyx_code.context.update( py_type_name=specialized_type.py_type_name(), specialized_type_name=specialized_type.specialization_string, ) pyx_code.put_chunk( u""" if isinstance(arg, {{py_type_name}}): dest_sig[{{dest_sig_idx}}] = '{{specialized_type_name}}'; break """) def _dtype_name(self, dtype): if dtype.is_typedef: return '___pyx_%s' % dtype return str(dtype).replace(' ', '_') def _dtype_type(self, dtype): if dtype.is_typedef: return self._dtype_name(dtype) return str(dtype) def _sizeof_dtype(self, dtype): if dtype.is_pyobject: return 'sizeof(void *)' else: return "sizeof(%s)" % self._dtype_type(dtype) def _buffer_check_numpy_dtype_setup_cases(self, pyx_code): "Setup some common cases to match dtypes against specializations" if pyx_code.indenter("if kind in b'iu':"): pyx_code.putln("pass") pyx_code.named_insertion_point("dtype_int") pyx_code.dedent() if pyx_code.indenter("elif kind == b'f':"): pyx_code.putln("pass") pyx_code.named_insertion_point("dtype_float") pyx_code.dedent() if pyx_code.indenter("elif kind == b'c':"): pyx_code.putln("pass") pyx_code.named_insertion_point("dtype_complex") pyx_code.dedent() if pyx_code.indenter("elif kind == b'O':"): pyx_code.putln("pass") pyx_code.named_insertion_point("dtype_object") pyx_code.dedent() match = "dest_sig[{{dest_sig_idx}}] = '{{specialized_type_name}}'" no_match = "dest_sig[{{dest_sig_idx}}] = None" def _buffer_check_numpy_dtype(self, pyx_code, specialized_buffer_types, pythran_types): """ Match a numpy dtype object to the individual specializations. """ self._buffer_check_numpy_dtype_setup_cases(pyx_code) for specialized_type in pythran_types+specialized_buffer_types: final_type = specialized_type if specialized_type.is_pythran_expr: specialized_type = specialized_type.org_buffer dtype = specialized_type.dtype pyx_code.context.update( itemsize_match=self._sizeof_dtype(dtype) + " == itemsize", signed_match="not (%s_is_signed ^ dtype_signed)" % self._dtype_name(dtype), dtype=dtype, specialized_type_name=final_type.specialization_string) dtypes = [ (dtype.is_int, pyx_code.dtype_int), (dtype.is_float, pyx_code.dtype_float), (dtype.is_complex, pyx_code.dtype_complex) ] for dtype_category, codewriter in dtypes: if dtype_category: cond = '{{itemsize_match}} and (arg.ndim) == %d' % ( specialized_type.ndim,) if dtype.is_int: cond += ' and {{signed_match}}' if final_type.is_pythran_expr: cond += ' and arg_is_pythran_compatible' if codewriter.indenter("if %s:" % cond): #codewriter.putln("print 'buffer match found based on numpy dtype'") codewriter.putln(self.match) codewriter.putln("break") codewriter.dedent() def _buffer_parse_format_string_check(self, pyx_code, decl_code, specialized_type, env): """ For each specialized type, try to coerce the object to a memoryview slice of that type. This means obtaining a buffer and parsing the format string. 
TODO: separate buffer acquisition from format parsing """ dtype = specialized_type.dtype if specialized_type.is_buffer: axes = [('direct', 'strided')] * specialized_type.ndim else: axes = specialized_type.axes memslice_type = PyrexTypes.MemoryViewSliceType(dtype, axes) memslice_type.create_from_py_utility_code(env) pyx_code.context.update( coerce_from_py_func=memslice_type.from_py_function, dtype=dtype) decl_code.putln( "{{memviewslice_cname}} {{coerce_from_py_func}}(object, int)") pyx_code.context.update( specialized_type_name=specialized_type.specialization_string, sizeof_dtype=self._sizeof_dtype(dtype)) pyx_code.put_chunk( u""" # try {{dtype}} if itemsize == -1 or itemsize == {{sizeof_dtype}}: memslice = {{coerce_from_py_func}}(arg, 0) if memslice.memview: __PYX_XDEC_MEMVIEW(&memslice, 1) # print 'found a match for the buffer through format parsing' %s break else: __pyx_PyErr_Clear() """ % self.match) def _buffer_checks(self, buffer_types, pythran_types, pyx_code, decl_code, env): """ Generate Cython code to match objects to buffer specializations. First try to get a numpy dtype object and match it against the individual specializations. If that fails, try naively to coerce the object to each specialization, which obtains the buffer each time and tries to match the format string. """ # The first thing to find a match in this loop breaks out of the loop pyx_code.put_chunk( u""" """ + (u"arg_is_pythran_compatible = False" if pythran_types else u"") + u""" if ndarray is not None: if isinstance(arg, ndarray): dtype = arg.dtype """ + (u"arg_is_pythran_compatible = True" if pythran_types else u"") + u""" elif __pyx_memoryview_check(arg): arg_base = arg.base if isinstance(arg_base, ndarray): dtype = arg_base.dtype else: dtype = None else: dtype = None itemsize = -1 if dtype is not None: itemsize = dtype.itemsize kind = ord(dtype.kind) dtype_signed = kind == 'i' """) pyx_code.indent(2) if pythran_types: pyx_code.put_chunk( u""" # Pythran only supports the endianness of the current compiler byteorder = dtype.byteorder if byteorder == "<" and not __Pyx_Is_Little_Endian(): arg_is_pythran_compatible = False elif byteorder == ">" and __Pyx_Is_Little_Endian(): arg_is_pythran_compatible = False if arg_is_pythran_compatible: cur_stride = itemsize shape = arg.shape strides = arg.strides for i in range(arg.ndim-1, -1, -1): if (strides[i]) != cur_stride: arg_is_pythran_compatible = False break cur_stride *= shape[i] else: arg_is_pythran_compatible = not (arg.flags.f_contiguous and (arg.ndim) > 1) """) pyx_code.named_insertion_point("numpy_dtype_checks") self._buffer_check_numpy_dtype(pyx_code, buffer_types, pythran_types) pyx_code.dedent(2) for specialized_type in buffer_types: self._buffer_parse_format_string_check( pyx_code, decl_code, specialized_type, env) def _buffer_declarations(self, pyx_code, decl_code, all_buffer_types, pythran_types): """ If we have any buffer specializations, write out some variable declarations and imports. 
""" decl_code.put_chunk( u""" ctypedef struct {{memviewslice_cname}}: void *memview void __PYX_XDEC_MEMVIEW({{memviewslice_cname}} *, int have_gil) bint __pyx_memoryview_check(object) """) pyx_code.local_variable_declarations.put_chunk( u""" cdef {{memviewslice_cname}} memslice cdef Py_ssize_t itemsize cdef bint dtype_signed cdef char kind itemsize = -1 """) if pythran_types: pyx_code.local_variable_declarations.put_chunk(u""" cdef bint arg_is_pythran_compatible cdef Py_ssize_t cur_stride """) pyx_code.imports.put_chunk( u""" cdef type ndarray ndarray = __Pyx_ImportNumPyArrayTypeIfAvailable() """) seen_int_dtypes = set() for buffer_type in all_buffer_types: dtype = buffer_type.dtype if dtype.is_typedef: #decl_code.putln("ctypedef %s %s" % (dtype.resolve(), # self._dtype_name(dtype))) decl_code.putln('ctypedef %s %s "%s"' % (dtype.resolve(), self._dtype_name(dtype), dtype.empty_declaration_code())) if buffer_type.dtype.is_int: if str(dtype) not in seen_int_dtypes: seen_int_dtypes.add(str(dtype)) pyx_code.context.update(dtype_name=self._dtype_name(dtype), dtype_type=self._dtype_type(dtype)) pyx_code.local_variable_declarations.put_chunk( u""" cdef bint {{dtype_name}}_is_signed {{dtype_name}}_is_signed = not (<{{dtype_type}}> -1 > 0) """) def _split_fused_types(self, arg): """ Specialize fused types and split into normal types and buffer types. """ specialized_types = PyrexTypes.get_specialized_types(arg.type) # Prefer long over int, etc by sorting (see type classes in PyrexTypes.py) specialized_types.sort() seen_py_type_names = set() normal_types, buffer_types, pythran_types = [], [], [] has_object_fallback = False for specialized_type in specialized_types: py_type_name = specialized_type.py_type_name() if py_type_name: if py_type_name in seen_py_type_names: continue seen_py_type_names.add(py_type_name) if py_type_name == 'object': has_object_fallback = True else: normal_types.append(specialized_type) elif specialized_type.is_pythran_expr: pythran_types.append(specialized_type) elif specialized_type.is_buffer or specialized_type.is_memoryviewslice: buffer_types.append(specialized_type) return normal_types, buffer_types, pythran_types, has_object_fallback def _unpack_argument(self, pyx_code): pyx_code.put_chunk( u""" # PROCESSING ARGUMENT {{arg_tuple_idx}} if {{arg_tuple_idx}} < len(args): arg = (args)[{{arg_tuple_idx}}] elif kwargs is not None and '{{arg.name}}' in kwargs: arg = (kwargs)['{{arg.name}}'] else: {{if arg.default}} arg = (defaults)[{{default_idx}}] {{else}} {{if arg_tuple_idx < min_positional_args}} raise TypeError("Expected at least %d argument%s, got %d" % ( {{min_positional_args}}, {{'"s"' if min_positional_args != 1 else '""'}}, len(args))) {{else}} raise TypeError("Missing keyword-only argument: '%s'" % "{{arg.default}}") {{endif}} {{endif}} """) def make_fused_cpdef(self, orig_py_func, env, is_def): """ This creates the function that is indexable from Python and does runtime dispatch based on the argument types. The function gets the arg tuple and kwargs dict (or None) and the defaults tuple as arguments from the Binding Fused Function's tp_call. """ from . 
import TreeFragment, Code, UtilityCode fused_types = self._get_fused_base_types([ arg.type for arg in self.node.args if arg.type.is_fused]) context = { 'memviewslice_cname': MemoryView.memviewslice_cname, 'func_args': self.node.args, 'n_fused': len(fused_types), 'min_positional_args': self.node.num_required_args - self.node.num_required_kw_args if is_def else sum(1 for arg in self.node.args if arg.default is None), 'name': orig_py_func.entry.name, } pyx_code = Code.PyxCodeWriter(context=context) decl_code = Code.PyxCodeWriter(context=context) decl_code.put_chunk( u""" cdef extern from *: void __pyx_PyErr_Clear "PyErr_Clear" () type __Pyx_ImportNumPyArrayTypeIfAvailable() int __Pyx_Is_Little_Endian() """) decl_code.indent() pyx_code.put_chunk( u""" def __pyx_fused_cpdef(signatures, args, kwargs, defaults): # FIXME: use a typed signature - currently fails badly because # default arguments inherit the types we specify here! dest_sig = [None] * {{n_fused}} if kwargs is not None and not kwargs: kwargs = None cdef Py_ssize_t i # instance check body """) pyx_code.indent() # indent following code to function body pyx_code.named_insertion_point("imports") pyx_code.named_insertion_point("func_defs") pyx_code.named_insertion_point("local_variable_declarations") fused_index = 0 default_idx = 0 all_buffer_types = OrderedSet() seen_fused_types = set() for i, arg in enumerate(self.node.args): if arg.type.is_fused: arg_fused_types = arg.type.get_fused_types() if len(arg_fused_types) > 1: raise NotImplementedError("Determination of more than one fused base " "type per argument is not implemented.") fused_type = arg_fused_types[0] if arg.type.is_fused and fused_type not in seen_fused_types: seen_fused_types.add(fused_type) context.update( arg_tuple_idx=i, arg=arg, dest_sig_idx=fused_index, default_idx=default_idx, ) normal_types, buffer_types, pythran_types, has_object_fallback = self._split_fused_types(arg) self._unpack_argument(pyx_code) # 'unrolled' loop, first match breaks out of it if pyx_code.indenter("while 1:"): if normal_types: self._fused_instance_checks(normal_types, pyx_code, env) if buffer_types or pythran_types: env.use_utility_code(Code.UtilityCode.load_cached("IsLittleEndian", "ModuleSetupCode.c")) self._buffer_checks(buffer_types, pythran_types, pyx_code, decl_code, env) if has_object_fallback: pyx_code.context.update(specialized_type_name='object') pyx_code.putln(self.match) else: pyx_code.putln(self.no_match) pyx_code.putln("break") pyx_code.dedent() fused_index += 1 all_buffer_types.update(buffer_types) all_buffer_types.update(ty.org_buffer for ty in pythran_types) if arg.default: default_idx += 1 if all_buffer_types: self._buffer_declarations(pyx_code, decl_code, all_buffer_types, pythran_types) env.use_utility_code(Code.UtilityCode.load_cached("Import", "ImportExport.c")) env.use_utility_code(Code.UtilityCode.load_cached("ImportNumPyArray", "ImportExport.c")) pyx_code.put_chunk( u""" candidates = [] for sig in signatures: match_found = False src_sig = sig.strip('()').split('|') for i in range(len(dest_sig)): dst_type = dest_sig[i] if dst_type is not None: if src_sig[i] == dst_type: match_found = True else: match_found = False break if match_found: candidates.append(sig) if not candidates: raise TypeError("No matching signature found") elif len(candidates) > 1: raise TypeError("Function call with ambiguous argument types") else: return (signatures)[candidates[0]] """) fragment_code = pyx_code.getvalue() # print decl_code.getvalue() # print fragment_code from .Optimize import 
ConstantFolding fragment = TreeFragment.TreeFragment( fragment_code, level='module', pipeline=[ConstantFolding()]) ast = TreeFragment.SetPosTransform(self.node.pos)(fragment.root) UtilityCode.declare_declarations_in_scope( decl_code.getvalue(), env.global_scope()) ast.scope = env # FIXME: for static methods of cdef classes, we build the wrong signature here: first arg becomes 'self' ast.analyse_declarations(env) py_func = ast.stats[-1] # the DefNode self.fragment_scope = ast.scope if isinstance(self.node, DefNode): py_func.specialized_cpdefs = self.nodes[:] else: py_func.specialized_cpdefs = [n.py_func for n in self.nodes] return py_func def update_fused_defnode_entry(self, env): copy_attributes = ( 'name', 'pos', 'cname', 'func_cname', 'pyfunc_cname', 'pymethdef_cname', 'doc', 'doc_cname', 'is_member', 'scope' ) entry = self.py_func.entry for attr in copy_attributes: setattr(entry, attr, getattr(self.orig_py_func.entry, attr)) self.py_func.name = self.orig_py_func.name self.py_func.doc = self.orig_py_func.doc env.entries.pop('__pyx_fused_cpdef', None) if isinstance(self.node, DefNode): env.entries[entry.name] = entry else: env.entries[entry.name].as_variable = entry env.pyfunc_entries.append(entry) self.py_func.entry.fused_cfunction = self for node in self.nodes: if isinstance(self.node, DefNode): node.fused_py_func = self.py_func else: node.py_func.fused_py_func = self.py_func node.entry.as_variable = entry self.synthesize_defnodes() self.stats.append(self.__signatures__) def analyse_expressions(self, env): """ Analyse the expressions. Take care to only evaluate default arguments once and clone the result for all specializations """ for fused_compound_type in self.fused_compound_types: for fused_type in fused_compound_type.get_fused_types(): for specialization_type in fused_type.types: if specialization_type.is_complex: specialization_type.create_declaration_utility_code(env) if self.py_func: self.__signatures__ = self.__signatures__.analyse_expressions(env) self.py_func = self.py_func.analyse_expressions(env) self.resulting_fused_function = self.resulting_fused_function.analyse_expressions(env) self.fused_func_assignment = self.fused_func_assignment.analyse_expressions(env) self.defaults = defaults = [] for arg in self.node.args: if arg.default: arg.default = arg.default.analyse_expressions(env) defaults.append(ProxyNode(arg.default)) else: defaults.append(None) for i, stat in enumerate(self.stats): stat = self.stats[i] = stat.analyse_expressions(env) if isinstance(stat, FuncDefNode): for arg, default in zip(stat.args, defaults): if default is not None: arg.default = CloneNode(default).coerce_to(arg.type, env) if self.py_func: args = [CloneNode(default) for default in defaults if default] self.defaults_tuple = TupleNode(self.pos, args=args) self.defaults_tuple = self.defaults_tuple.analyse_types(env, skip_children=True).coerce_to_pyobject(env) self.defaults_tuple = ProxyNode(self.defaults_tuple) self.code_object = ProxyNode(self.specialized_pycfuncs[0].code_object) fused_func = self.resulting_fused_function.arg fused_func.defaults_tuple = CloneNode(self.defaults_tuple) fused_func.code_object = CloneNode(self.code_object) for i, pycfunc in enumerate(self.specialized_pycfuncs): pycfunc.code_object = CloneNode(self.code_object) pycfunc = self.specialized_pycfuncs[i] = pycfunc.analyse_types(env) pycfunc.defaults_tuple = CloneNode(self.defaults_tuple) return self def synthesize_defnodes(self): """ Create the __signatures__ dict of PyCFunctionNode specializations. 
""" if isinstance(self.nodes[0], CFuncDefNode): nodes = [node.py_func for node in self.nodes] else: nodes = self.nodes signatures = [StringEncoding.EncodedString(node.specialized_signature_string) for node in nodes] keys = [ExprNodes.StringNode(node.pos, value=sig) for node, sig in zip(nodes, signatures)] values = [ExprNodes.PyCFunctionNode.from_defnode(node, binding=True) for node in nodes] self.__signatures__ = ExprNodes.DictNode.from_pairs(self.pos, zip(keys, values)) self.specialized_pycfuncs = values for pycfuncnode in values: pycfuncnode.is_specialization = True def generate_function_definitions(self, env, code): if self.py_func: self.py_func.pymethdef_required = True self.fused_func_assignment.generate_function_definitions(env, code) for stat in self.stats: if isinstance(stat, FuncDefNode) and stat.entry.used: code.mark_pos(stat.pos) stat.generate_function_definitions(env, code) def generate_execution_code(self, code): # Note: all def function specialization are wrapped in PyCFunction # nodes in the self.__signatures__ dictnode. for default in self.defaults: if default is not None: default.generate_evaluation_code(code) if self.py_func: self.defaults_tuple.generate_evaluation_code(code) self.code_object.generate_evaluation_code(code) for stat in self.stats: code.mark_pos(stat.pos) if isinstance(stat, ExprNodes.ExprNode): stat.generate_evaluation_code(code) else: stat.generate_execution_code(code) if self.__signatures__: self.resulting_fused_function.generate_evaluation_code(code) code.putln( "((__pyx_FusedFunctionObject *) %s)->__signatures__ = %s;" % (self.resulting_fused_function.result(), self.__signatures__.result())) code.put_giveref(self.__signatures__.result()) self.fused_func_assignment.generate_execution_code(code) # Dispose of results self.resulting_fused_function.generate_disposal_code(code) self.defaults_tuple.generate_disposal_code(code) self.code_object.generate_disposal_code(code) for default in self.defaults: if default is not None: default.generate_disposal_code(code) def annotate(self, code): for stat in self.stats: stat.annotate(code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Future.py0000644000175000017500000000111300000000000027077 0ustar00vincentvincent00000000000000def _get_feature(name): import __future__ # fall back to a unique fake object for earlier Python versions or Python 3 return getattr(__future__, name, object()) unicode_literals = _get_feature("unicode_literals") with_statement = _get_feature("with_statement") # dummy division = _get_feature("division") print_function = _get_feature("print_function") absolute_import = _get_feature("absolute_import") nested_scopes = _get_feature("nested_scopes") # dummy generators = _get_feature("generators") # dummy generator_stop = _get_feature("generator_stop") del _get_feature ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Interpreter.py0000644000175000017500000000407200000000000030137 0ustar00vincentvincent00000000000000""" This module deals with interpreting the parse tree as Python would have done, in the compiler. For now this only covers parse tree to value conversion of compile-time values. 
""" from __future__ import absolute_import from .Nodes import * from .ExprNodes import * from .Errors import CompileError class EmptyScope(object): def lookup(self, name): return None empty_scope = EmptyScope() def interpret_compiletime_options(optlist, optdict, type_env=None, type_args=()): """ Tries to interpret a list of compile time option nodes. The result will be a tuple (optlist, optdict) but where all expression nodes have been interpreted. The result is in the form of tuples (value, pos). optlist is a list of nodes, while optdict is a DictNode (the result optdict is a dict) If type_env is set, all type nodes will be analysed and the resulting type set. Otherwise only interpretateable ExprNodes are allowed, other nodes raises errors. A CompileError will be raised if there are problems. """ def interpret(node, ix): if ix in type_args: if type_env: type = node.analyse_as_type(type_env) if not type: raise CompileError(node.pos, "Invalid type.") return (type, node.pos) else: raise CompileError(node.pos, "Type not allowed here.") else: if (sys.version_info[0] >=3 and isinstance(node, StringNode) and node.unicode_value is not None): return (node.unicode_value, node.pos) return (node.compile_time_value(empty_scope), node.pos) if optlist: optlist = [interpret(x, ix) for ix, x in enumerate(optlist)] if optdict: assert isinstance(optdict, DictNode) new_optdict = {} for item in optdict.key_value_pairs: new_key, dummy = interpret(item.key, None) new_optdict[new_key] = interpret(item.value, item.key.value) optdict = new_optdict return (optlist, new_optdict) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Lexicon.py0000644000175000017500000001136700000000000027242 0ustar00vincentvincent00000000000000# cython: language_level=3, py2_import=True # # Cython Scanner - Lexical Definitions # from __future__ import absolute_import, unicode_literals raw_prefixes = "rR" bytes_prefixes = "bB" string_prefixes = "fFuU" + bytes_prefixes char_prefixes = "cC" any_string_prefix = raw_prefixes + string_prefixes + char_prefixes IDENT = 'IDENT' def make_lexicon(): from ..Plex import \ Str, Any, AnyBut, AnyChar, Rep, Rep1, Opt, Bol, Eol, Eof, \ TEXT, IGNORE, State, Lexicon from .Scanning import Method letter = Any("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_") digit = Any("0123456789") bindigit = Any("01") octdigit = Any("01234567") hexdigit = Any("0123456789ABCDEFabcdef") indentation = Bol + Rep(Any(" \t")) def underscore_digits(d): return Rep1(d) + Rep(Str("_") + Rep1(d)) decimal = underscore_digits(digit) dot = Str(".") exponent = Any("Ee") + Opt(Any("+-")) + decimal decimal_fract = (decimal + dot + Opt(decimal)) | (dot + decimal) name = letter + Rep(letter | digit) intconst = decimal | (Str("0") + ((Any("Xx") + underscore_digits(hexdigit)) | (Any("Oo") + underscore_digits(octdigit)) | (Any("Bb") + underscore_digits(bindigit)) )) intsuffix = (Opt(Any("Uu")) + Opt(Any("Ll")) + Opt(Any("Ll"))) | (Opt(Any("Ll")) + Opt(Any("Ll")) + Opt(Any("Uu"))) intliteral = intconst + intsuffix fltconst = (decimal_fract + Opt(exponent)) | (decimal + exponent) imagconst = (intconst | fltconst) + Any("jJ") # invalid combinations of prefixes are caught in p_string_literal beginstring = Opt(Rep(Any(string_prefixes + raw_prefixes)) | Any(char_prefixes) ) + (Str("'") | Str('"') | Str("'''") | Str('"""')) two_oct = octdigit + octdigit three_oct = octdigit + octdigit + octdigit two_hex = hexdigit + hexdigit 
four_hex = two_hex + two_hex escapeseq = Str("\\") + (two_oct | three_oct | Str('N{') + Rep(AnyBut('}')) + Str('}') | Str('u') + four_hex | Str('x') + two_hex | Str('U') + four_hex + four_hex | AnyChar) bra = Any("([{") ket = Any(")]}") punct = Any(":,;+-*/|&<>=.%`~^?!@") diphthong = Str("==", "<>", "!=", "<=", ">=", "<<", ">>", "**", "//", "+=", "-=", "*=", "/=", "%=", "|=", "^=", "&=", "<<=", ">>=", "**=", "//=", "->", "@=") spaces = Rep1(Any(" \t\f")) escaped_newline = Str("\\\n") lineterm = Eol + Opt(Str("\n")) comment = Str("#") + Rep(AnyBut("\n")) return Lexicon([ (name, IDENT), (intliteral, Method('strip_underscores', symbol='INT')), (fltconst, Method('strip_underscores', symbol='FLOAT')), (imagconst, Method('strip_underscores', symbol='IMAG')), (punct | diphthong, TEXT), (bra, Method('open_bracket_action')), (ket, Method('close_bracket_action')), (lineterm, Method('newline_action')), (beginstring, Method('begin_string_action')), (comment, IGNORE), (spaces, IGNORE), (escaped_newline, IGNORE), State('INDENT', [ (comment + lineterm, Method('commentline')), (Opt(spaces) + Opt(comment) + lineterm, IGNORE), (indentation, Method('indentation_action')), (Eof, Method('eof_action')) ]), State('SQ_STRING', [ (escapeseq, 'ESCAPE'), (Rep1(AnyBut("'\"\n\\")), 'CHARS'), (Str('"'), 'CHARS'), (Str("\n"), Method('unclosed_string_action')), (Str("'"), Method('end_string_action')), (Eof, 'EOF') ]), State('DQ_STRING', [ (escapeseq, 'ESCAPE'), (Rep1(AnyBut('"\n\\')), 'CHARS'), (Str("'"), 'CHARS'), (Str("\n"), Method('unclosed_string_action')), (Str('"'), Method('end_string_action')), (Eof, 'EOF') ]), State('TSQ_STRING', [ (escapeseq, 'ESCAPE'), (Rep1(AnyBut("'\"\n\\")), 'CHARS'), (Any("'\""), 'CHARS'), (Str("\n"), 'NEWLINE'), (Str("'''"), Method('end_string_action')), (Eof, 'EOF') ]), State('TDQ_STRING', [ (escapeseq, 'ESCAPE'), (Rep1(AnyBut('"\'\n\\')), 'CHARS'), (Any("'\""), 'CHARS'), (Str("\n"), 'NEWLINE'), (Str('"""'), Method('end_string_action')), (Eof, 'EOF') ]), (Eof, Method('eof_action')) ], # FIXME: Plex 1.9 needs different args here from Plex 1.1.4 #debug_flags = scanner_debug_flags, #debug_file = scanner_dump_file ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Compiler/Main.py0000644000175000017500000010721000000000000026516 0ustar00vincentvincent00000000000000# # Cython Top Level # from __future__ import absolute_import import os import re import sys import io if sys.version_info[:2] < (2, 6) or (3, 0) <= sys.version_info[:2] < (3, 3): sys.stderr.write("Sorry, Cython requires Python 2.6+ or 3.3+, found %d.%d\n" % tuple(sys.version_info[:2])) sys.exit(1) try: from __builtin__ import basestring except ImportError: basestring = str # Do not import Parsing here, import it when needed, because Parsing imports # Nodes, which globally needs debug command line options initialized to set a # conditional metaclass. These options are processed by CmdLine called from # main() in this file. # import Parsing from . import Errors from .StringEncoding import EncodedString from .Scanning import PyrexScanner, FileSourceDescriptor from .Errors import PyrexError, CompileError, error, warning from .Symtab import ModuleScope from .. import Utils from . import Options from . 
import Version # legacy import needed by old PyTables versions version = Version.version # legacy attribute - use "Cython.__version__" instead module_name_pattern = re.compile(r"[A-Za-z_][A-Za-z0-9_]*(\.[A-Za-z_][A-Za-z0-9_]*)*$") verbose = 0 standard_include_path = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir, 'Includes')) class CompilationData(object): # Bundles the information that is passed from transform to transform. # (For now, this is only) # While Context contains every pxd ever loaded, path information etc., # this only contains the data related to a single compilation pass # # pyx ModuleNode Main code tree of this compilation. # pxds {string : ModuleNode} Trees for the pxds used in the pyx. # codewriter CCodeWriter Where to output final code. # options CompilationOptions # result CompilationResult pass class Context(object): # This class encapsulates the context needed for compiling # one or more Cython implementation files along with their # associated and imported declaration files. It includes # the root of the module import namespace and the list # of directories to search for include files. # # modules {string : ModuleScope} # include_directories [string] # future_directives [object] # language_level int currently 2 or 3 for Python 2/3 cython_scope = None language_level = None # warn when not set but default to Py2 def __init__(self, include_directories, compiler_directives, cpp=False, language_level=None, options=None): # cython_scope is a hack, set to False by subclasses, in order to break # an infinite loop. # Better code organization would fix it. from . import Builtin, CythonScope self.modules = {"__builtin__" : Builtin.builtin_scope} self.cython_scope = CythonScope.create_cython_scope(self) self.modules["cython"] = self.cython_scope self.include_directories = include_directories self.future_directives = set() self.compiler_directives = compiler_directives self.cpp = cpp self.options = options self.pxds = {} # full name -> node tree self._interned = {} # (type(value), value, *key_args) -> interned_value if language_level is not None: self.set_language_level(language_level) self.gdb_debug_outputwriter = None def set_language_level(self, level): from .Future import print_function, unicode_literals, absolute_import, division future_directives = set() if level == '3str': level = 3 else: level = int(level) if level >= 3: future_directives.add(unicode_literals) if level >= 3: future_directives.update([print_function, absolute_import, division]) self.language_level = level self.future_directives = future_directives if level >= 3: self.modules['builtins'] = self.modules['__builtin__'] def intern_ustring(self, value, encoding=None): key = (EncodedString, value, encoding) try: return self._interned[key] except KeyError: pass value = EncodedString(value) if encoding: value.encoding = encoding self._interned[key] = value return value def intern_value(self, value, *key): key = (type(value), value) + key try: return self._interned[key] except KeyError: pass self._interned[key] = value return value # pipeline creation functions can now be found in Pipeline.py def process_pxd(self, source_desc, scope, module_name): from . 
import Pipeline if isinstance(source_desc, FileSourceDescriptor) and source_desc._file_type == 'pyx': source = CompilationSource(source_desc, module_name, os.getcwd()) result_sink = create_default_resultobj(source, self.options) pipeline = Pipeline.create_pyx_as_pxd_pipeline(self, result_sink) result = Pipeline.run_pipeline(pipeline, source) else: pipeline = Pipeline.create_pxd_pipeline(self, scope, module_name) result = Pipeline.run_pipeline(pipeline, source_desc) return result def nonfatal_error(self, exc): return Errors.report_error(exc) def find_module(self, module_name, relative_to=None, pos=None, need_pxd=1, absolute_fallback=True): # Finds and returns the module scope corresponding to # the given relative or absolute module name. If this # is the first time the module has been requested, finds # the corresponding .pxd file and process it. # If relative_to is not None, it must be a module scope, # and the module will first be searched for relative to # that module, provided its name is not a dotted name. debug_find_module = 0 if debug_find_module: print("Context.find_module: module_name = %s, relative_to = %s, pos = %s, need_pxd = %s" % ( module_name, relative_to, pos, need_pxd)) scope = None pxd_pathname = None if relative_to: if module_name: # from .module import ... qualified_name = relative_to.qualify_name(module_name) else: # from . import ... qualified_name = relative_to.qualified_name scope = relative_to relative_to = None else: qualified_name = module_name if not module_name_pattern.match(qualified_name): raise CompileError(pos or (module_name, 0, 0), "'%s' is not a valid module name" % module_name) if relative_to: if debug_find_module: print("...trying relative import") scope = relative_to.lookup_submodule(module_name) if not scope: pxd_pathname = self.find_pxd_file(qualified_name, pos) if pxd_pathname: scope = relative_to.find_submodule(module_name) if not scope: if debug_find_module: print("...trying absolute import") if absolute_fallback: qualified_name = module_name scope = self for name in qualified_name.split("."): scope = scope.find_submodule(name) if debug_find_module: print("...scope = %s" % scope) if not scope.pxd_file_loaded: if debug_find_module: print("...pxd not loaded") if not pxd_pathname: if debug_find_module: print("...looking for pxd file") # Only look in sys.path if we are explicitly looking # for a .pxd file. pxd_pathname = self.find_pxd_file(qualified_name, pos, sys_path=need_pxd) if debug_find_module: print("......found %s" % pxd_pathname) if not pxd_pathname and need_pxd: # Set pxd_file_loaded such that we don't need to # look for the non-existing pxd file next time. 
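# Aside: the qualified-name check performed by find_module() above uses
# module_name_pattern (defined near the top of this file). A quick
# illustration of what it accepts and rejects:
#
#     >>> bool(module_name_pattern.match("pkg.sub.mod"))
#     True
#     >>> bool(module_name_pattern.match("pkg..mod"))   # empty component
#     False
#     >>> bool(module_name_pattern.match("1pkg.mod"))   # invalid identifier
#     False
#
# Names that fail this check raise CompileError before any lookup starts.
# The assignment below records that the missing .pxd file has already been
# searched for, so later lookups of the same module can skip the file system.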
scope.pxd_file_loaded = True package_pathname = self.search_include_directories(qualified_name, ".py", pos) if package_pathname and package_pathname.endswith('__init__.py'): pass else: error(pos, "'%s.pxd' not found" % qualified_name.replace('.', os.sep)) if pxd_pathname: scope.pxd_file_loaded = True try: if debug_find_module: print("Context.find_module: Parsing %s" % pxd_pathname) rel_path = module_name.replace('.', os.sep) + os.path.splitext(pxd_pathname)[1] if not pxd_pathname.endswith(rel_path): rel_path = pxd_pathname # safety measure to prevent printing incorrect paths source_desc = FileSourceDescriptor(pxd_pathname, rel_path) err, result = self.process_pxd(source_desc, scope, qualified_name) if err: raise err (pxd_codenodes, pxd_scope) = result self.pxds[module_name] = (pxd_codenodes, pxd_scope) except CompileError: pass return scope def find_pxd_file(self, qualified_name, pos, sys_path=True): # Search include path (and sys.path if sys_path is True) for # the .pxd file corresponding to the given fully-qualified # module name. # Will find either a dotted filename or a file in a # package directory. If a source file position is given, # the directory containing the source file is searched first # for a dotted filename, and its containing package root # directory is searched first for a non-dotted filename. pxd = self.search_include_directories(qualified_name, ".pxd", pos, sys_path=sys_path) if pxd is None: # XXX Keep this until Includes/Deprecated is removed if (qualified_name.startswith('python') or qualified_name in ('stdlib', 'stdio', 'stl')): standard_include_path = os.path.abspath(os.path.normpath( os.path.join(os.path.dirname(__file__), os.path.pardir, 'Includes'))) deprecated_include_path = os.path.join(standard_include_path, 'Deprecated') self.include_directories.append(deprecated_include_path) try: pxd = self.search_include_directories(qualified_name, ".pxd", pos) finally: self.include_directories.pop() if pxd: name = qualified_name if name.startswith('python'): warning(pos, "'%s' is deprecated, use 'cpython'" % name, 1) elif name in ('stdlib', 'stdio'): warning(pos, "'%s' is deprecated, use 'libc.%s'" % (name, name), 1) elif name in ('stl'): warning(pos, "'%s' is deprecated, use 'libcpp.*.*'" % name, 1) if pxd is None and Options.cimport_from_pyx: return self.find_pyx_file(qualified_name, pos) return pxd def find_pyx_file(self, qualified_name, pos): # Search include path for the .pyx file corresponding to the # given fully-qualified module name, as for find_pxd_file(). return self.search_include_directories(qualified_name, ".pyx", pos) def find_include_file(self, filename, pos): # Search list of include directories for filename. # Reports an error and returns None if not found. 
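# Aside: a simplified, hypothetical model of the include-path probing that
# search_include_directories() performs (the real helper also consults
# sys.path, caches results, and understands package __init__ files). As the
# docstring above says, a qualified name is tried both as a file in a
# package directory and as a single dotted filename:
import os

def _toy_find_file(qualified_name, suffix, include_dirs):
    package_path = os.path.join(*qualified_name.split(".")) + suffix
    dotted_path = qualified_name + suffix
    for d in include_dirs:
        for candidate in (os.path.join(d, package_path),
                          os.path.join(d, dotted_path)):
            if os.path.exists(candidate):
                return candidate
    return None   # callers such as find_include_file() report the error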
path = self.search_include_directories(filename, "", pos, include=True) if not path: error(pos, "'%s' not found" % filename) return path def search_include_directories(self, qualified_name, suffix, pos, include=False, sys_path=False): include_dirs = self.include_directories if sys_path: include_dirs = include_dirs + sys.path # include_dirs must be hashable for caching in @cached_function include_dirs = tuple(include_dirs + [standard_include_path]) return search_include_directories(include_dirs, qualified_name, suffix, pos, include) def find_root_package_dir(self, file_path): return Utils.find_root_package_dir(file_path) def check_package_dir(self, dir, package_names): return Utils.check_package_dir(dir, tuple(package_names)) def c_file_out_of_date(self, source_path, output_path): if not os.path.exists(output_path): return 1 c_time = Utils.modification_time(output_path) if Utils.file_newer_than(source_path, c_time): return 1 pos = [source_path] pxd_path = Utils.replace_suffix(source_path, ".pxd") if os.path.exists(pxd_path) and Utils.file_newer_than(pxd_path, c_time): return 1 for kind, name in self.read_dependency_file(source_path): if kind == "cimport": dep_path = self.find_pxd_file(name, pos) elif kind == "include": dep_path = self.search_include_directories(name, pos) else: continue if dep_path and Utils.file_newer_than(dep_path, c_time): return 1 return 0 def find_cimported_module_names(self, source_path): return [ name for kind, name in self.read_dependency_file(source_path) if kind == "cimport" ] def is_package_dir(self, dir_path): return Utils.is_package_dir(dir_path) def read_dependency_file(self, source_path): dep_path = Utils.replace_suffix(source_path, ".dep") if os.path.exists(dep_path): f = open(dep_path, "rU") chunks = [ line.strip().split(" ", 1) for line in f.readlines() if " " in line.strip() ] f.close() return chunks else: return () def lookup_submodule(self, name): # Look up a top-level module. Returns None if not found. return self.modules.get(name, None) def find_submodule(self, name): # Find a top-level module, creating a new one if needed. scope = self.lookup_submodule(name) if not scope: scope = ModuleScope(name, parent_module = None, context = self) self.modules[name] = scope return scope def parse(self, source_desc, scope, pxd, full_module_name): if not isinstance(source_desc, FileSourceDescriptor): raise RuntimeError("Only file sources for code supported") source_filename = source_desc.filename scope.cpp = self.cpp # Parse the given source file and return a parse tree. num_errors = Errors.num_errors try: with Utils.open_source_file(source_filename) as f: from . 
import Parsing s = PyrexScanner(f, source_desc, source_encoding = f.encoding, scope = scope, context = self) tree = Parsing.p_module(s, pxd, full_module_name) if self.options.formal_grammar: try: from ..Parser import ConcreteSyntaxTree except ImportError: raise RuntimeError( "Formal grammar can only be used with compiled Cython with an available pgen.") ConcreteSyntaxTree.p_module(source_filename) except UnicodeDecodeError as e: #import traceback #traceback.print_exc() raise self._report_decode_error(source_desc, e) if Errors.num_errors > num_errors: raise CompileError() return tree def _report_decode_error(self, source_desc, exc): msg = exc.args[-1] position = exc.args[2] encoding = exc.args[0] line = 1 column = idx = 0 with io.open(source_desc.filename, "r", encoding='iso8859-1', newline='') as f: for line, data in enumerate(f, 1): idx += len(data) if idx >= position: column = position - (idx - len(data)) + 1 break return error((source_desc, line, column), "Decoding error, missing or incorrect coding= " "at top of source (cannot decode with encoding %r: %s)" % (encoding, msg)) def extract_module_name(self, path, options): # Find fully_qualified module name from the full pathname # of a source file. dir, filename = os.path.split(path) module_name, _ = os.path.splitext(filename) if "." in module_name: return module_name names = [module_name] while self.is_package_dir(dir): parent, package_name = os.path.split(dir) if parent == dir: break names.append(package_name) dir = parent names.reverse() return ".".join(names) def setup_errors(self, options, result): Errors.reset() # clear any remaining error state if options.use_listing_file: path = result.listing_file = Utils.replace_suffix(result.main_source_file, ".lis") else: path = None Errors.open_listing_file(path=path, echo_to_stderr=options.errors_to_stderr) def teardown_errors(self, err, options, result): source_desc = result.compilation_source.source_desc if not isinstance(source_desc, FileSourceDescriptor): raise RuntimeError("Only file sources for code supported") Errors.close_listing_file() result.num_errors = Errors.num_errors if result.num_errors > 0: err = True if err and result.c_file: try: Utils.castrate_file(result.c_file, os.stat(source_desc.filename)) except EnvironmentError: pass result.c_file = None def get_output_filename(source_filename, cwd, options): if options.cplus: c_suffix = ".cpp" else: c_suffix = ".c" suggested_file_name = Utils.replace_suffix(source_filename, c_suffix) if options.output_file: out_path = os.path.join(cwd, options.output_file) if os.path.isdir(out_path): return os.path.join(out_path, os.path.basename(suggested_file_name)) else: return out_path else: return suggested_file_name def create_default_resultobj(compilation_source, options): result = CompilationResult() result.main_source_file = compilation_source.source_desc.filename result.compilation_source = compilation_source source_desc = compilation_source.source_desc result.c_file = get_output_filename(source_desc.filename, compilation_source.cwd, options) result.embedded_metadata = options.embedded_metadata return result def run_pipeline(source, options, full_module_name=None, context=None): from . 
import Pipeline source_ext = os.path.splitext(source)[1] options.configure_language_defaults(source_ext[1:]) # py/pyx if context is None: context = options.create_context() # Set up source object cwd = os.getcwd() abs_path = os.path.abspath(source) full_module_name = full_module_name or context.extract_module_name(source, options) Utils.raise_error_if_module_name_forbidden(full_module_name) if options.relative_path_in_code_position_comments: rel_path = full_module_name.replace('.', os.sep) + source_ext if not abs_path.endswith(rel_path): rel_path = source # safety measure to prevent printing incorrect paths else: rel_path = abs_path source_desc = FileSourceDescriptor(abs_path, rel_path) source = CompilationSource(source_desc, full_module_name, cwd) # Set up result object result = create_default_resultobj(source, options) if options.annotate is None: # By default, decide based on whether an html file already exists. html_filename = os.path.splitext(result.c_file)[0] + ".html" if os.path.exists(html_filename): with io.open(html_filename, "r", encoding="UTF-8") as html_file: if u' State %d\n" % (key, state['number'])) for key in ('bol', 'eol', 'eof', 'else'): state = special_to_state.get(key, None) if state: file.write(" %s --> State %d\n" % (key, state['number'])) def chars_to_ranges(self, char_list): char_list.sort() i = 0 n = len(char_list) result = [] while i < n: c1 = ord(char_list[i]) c2 = c1 i += 1 while i < n and ord(char_list[i]) == c2 + 1: i += 1 c2 += 1 result.append((chr(c1), chr(c2))) return tuple(result) def ranges_to_string(self, range_list): return ','.join(map(self.range_to_string, range_list)) def range_to_string(self, range_tuple): (c1, c2) = range_tuple if c1 == c2: return repr(c1) else: return "%s..%s" % (repr(c1), repr(c2)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/Regexps.py0000644000175000017500000003752000000000000026413 0ustar00vincentvincent00000000000000#======================================================================= # # Python Lexical Analyser # # Regular Expressions # #======================================================================= from __future__ import absolute_import import types try: from sys import maxsize as maxint except ImportError: from sys import maxint from . import Errors # # Constants # BOL = 'bol' EOL = 'eol' EOF = 'eof' nl_code = ord('\n') # # Helper functions # def chars_to_ranges(s): """ Return a list of character codes consisting of pairs [code1a, code1b, code2a, code2b,...] which cover all the characters in |s|. """ char_list = list(s) char_list.sort() i = 0 n = len(char_list) result = [] while i < n: code1 = ord(char_list[i]) code2 = code1 + 1 i += 1 while i < n and code2 >= ord(char_list[i]): code2 += 1 i += 1 result.append(code1) result.append(code2) return result def uppercase_range(code1, code2): """ If the range of characters from code1 to code2-1 includes any lower case letters, return the corresponding upper case range. """ code3 = max(code1, ord('a')) code4 = min(code2, ord('z') + 1) if code3 < code4: d = ord('A') - ord('a') return (code3 + d, code4 + d) else: return None def lowercase_range(code1, code2): """ If the range of characters from code1 to code2-1 includes any upper case letters, return the corresponding lower case range. 
""" code3 = max(code1, ord('A')) code4 = min(code2, ord('Z') + 1) if code3 < code4: d = ord('a') - ord('A') return (code3 + d, code4 + d) else: return None def CodeRanges(code_list): """ Given a list of codes as returned by chars_to_ranges, return an RE which will match a character in any of the ranges. """ re_list = [CodeRange(code_list[i], code_list[i + 1]) for i in range(0, len(code_list), 2)] return Alt(*re_list) def CodeRange(code1, code2): """ CodeRange(code1, code2) is an RE which matches any character with a code |c| in the range |code1| <= |c| < |code2|. """ if code1 <= nl_code < code2: return Alt(RawCodeRange(code1, nl_code), RawNewline, RawCodeRange(nl_code + 1, code2)) else: return RawCodeRange(code1, code2) # # Abstract classes # class RE(object): """RE is the base class for regular expression constructors. The following operators are defined on REs: re1 + re2 is an RE which matches |re1| followed by |re2| re1 | re2 is an RE which matches either |re1| or |re2| """ nullable = 1 # True if this RE can match 0 input symbols match_nl = 1 # True if this RE can match a string ending with '\n' str = None # Set to a string to override the class's __str__ result def build_machine(self, machine, initial_state, final_state, match_bol, nocase): """ This method should add states to |machine| to implement this RE, starting at |initial_state| and ending at |final_state|. If |match_bol| is true, the RE must be able to match at the beginning of a line. If nocase is true, upper and lower case letters should be treated as equivalent. """ raise NotImplementedError("%s.build_machine not implemented" % self.__class__.__name__) def build_opt(self, m, initial_state, c): """ Given a state |s| of machine |m|, return a new state reachable from |s| on character |c| or epsilon. """ s = m.new_state() initial_state.link_to(s) initial_state.add_transition(c, s) return s def __add__(self, other): return Seq(self, other) def __or__(self, other): return Alt(self, other) def __str__(self): if self.str: return self.str else: return self.calc_str() def check_re(self, num, value): if not isinstance(value, RE): self.wrong_type(num, value, "Plex.RE instance") def check_string(self, num, value): if type(value) != type(''): self.wrong_type(num, value, "string") def check_char(self, num, value): self.check_string(num, value) if len(value) != 1: raise Errors.PlexValueError("Invalid value for argument %d of Plex.%s." "Expected a string of length 1, got: %s" % ( num, self.__class__.__name__, repr(value))) def wrong_type(self, num, value, expected): if type(value) == types.InstanceType: got = "%s.%s instance" % ( value.__class__.__module__, value.__class__.__name__) else: got = type(value).__name__ raise Errors.PlexTypeError("Invalid type for argument %d of Plex.%s " "(expected %s, got %s" % ( num, self.__class__.__name__, expected, got)) # # Primitive RE constructors # ------------------------- # # These are the basic REs from which all others are built. # ## class Char(RE): ## """ ## Char(c) is an RE which matches the character |c|. 
## """ ## nullable = 0 ## def __init__(self, char): ## self.char = char ## self.match_nl = char == '\n' ## def build_machine(self, m, initial_state, final_state, match_bol, nocase): ## c = self.char ## if match_bol and c != BOL: ## s1 = self.build_opt(m, initial_state, BOL) ## else: ## s1 = initial_state ## if c == '\n' or c == EOF: ## s1 = self.build_opt(m, s1, EOL) ## if len(c) == 1: ## code = ord(self.char) ## s1.add_transition((code, code+1), final_state) ## if nocase and is_letter_code(code): ## code2 = other_case_code(code) ## s1.add_transition((code2, code2+1), final_state) ## else: ## s1.add_transition(c, final_state) ## def calc_str(self): ## return "Char(%s)" % repr(self.char) def Char(c): """ Char(c) is an RE which matches the character |c|. """ if len(c) == 1: result = CodeRange(ord(c), ord(c) + 1) else: result = SpecialSymbol(c) result.str = "Char(%s)" % repr(c) return result class RawCodeRange(RE): """ RawCodeRange(code1, code2) is a low-level RE which matches any character with a code |c| in the range |code1| <= |c| < |code2|, where the range does not include newline. For internal use only. """ nullable = 0 match_nl = 0 range = None # (code, code) uppercase_range = None # (code, code) or None lowercase_range = None # (code, code) or None def __init__(self, code1, code2): self.range = (code1, code2) self.uppercase_range = uppercase_range(code1, code2) self.lowercase_range = lowercase_range(code1, code2) def build_machine(self, m, initial_state, final_state, match_bol, nocase): if match_bol: initial_state = self.build_opt(m, initial_state, BOL) initial_state.add_transition(self.range, final_state) if nocase: if self.uppercase_range: initial_state.add_transition(self.uppercase_range, final_state) if self.lowercase_range: initial_state.add_transition(self.lowercase_range, final_state) def calc_str(self): return "CodeRange(%d,%d)" % (self.code1, self.code2) class _RawNewline(RE): """ RawNewline is a low-level RE which matches a newline character. For internal use only. """ nullable = 0 match_nl = 1 def build_machine(self, m, initial_state, final_state, match_bol, nocase): if match_bol: initial_state = self.build_opt(m, initial_state, BOL) s = self.build_opt(m, initial_state, EOL) s.add_transition((nl_code, nl_code + 1), final_state) RawNewline = _RawNewline() class SpecialSymbol(RE): """ SpecialSymbol(sym) is an RE which matches the special input symbol |sym|, which is one of BOL, EOL or EOF. """ nullable = 0 match_nl = 0 sym = None def __init__(self, sym): self.sym = sym def build_machine(self, m, initial_state, final_state, match_bol, nocase): # Sequences 'bol bol' and 'bol eof' are impossible, so only need # to allow for bol if sym is eol if match_bol and self.sym == EOL: initial_state = self.build_opt(m, initial_state, BOL) initial_state.add_transition(self.sym, final_state) class Seq(RE): """Seq(re1, re2, re3...) 
is an RE which matches |re1| followed by |re2| followed by |re3|...""" def __init__(self, *re_list): nullable = 1 for i, re in enumerate(re_list): self.check_re(i, re) nullable = nullable and re.nullable self.re_list = re_list self.nullable = nullable i = len(re_list) match_nl = 0 while i: i -= 1 re = re_list[i] if re.match_nl: match_nl = 1 break if not re.nullable: break self.match_nl = match_nl def build_machine(self, m, initial_state, final_state, match_bol, nocase): re_list = self.re_list if len(re_list) == 0: initial_state.link_to(final_state) else: s1 = initial_state n = len(re_list) for i, re in enumerate(re_list): if i < n - 1: s2 = m.new_state() else: s2 = final_state re.build_machine(m, s1, s2, match_bol, nocase) s1 = s2 match_bol = re.match_nl or (match_bol and re.nullable) def calc_str(self): return "Seq(%s)" % ','.join(map(str, self.re_list)) class Alt(RE): """Alt(re1, re2, re3...) is an RE which matches either |re1| or |re2| or |re3|...""" def __init__(self, *re_list): self.re_list = re_list nullable = 0 match_nl = 0 nullable_res = [] non_nullable_res = [] i = 1 for re in re_list: self.check_re(i, re) if re.nullable: nullable_res.append(re) nullable = 1 else: non_nullable_res.append(re) if re.match_nl: match_nl = 1 i += 1 self.nullable_res = nullable_res self.non_nullable_res = non_nullable_res self.nullable = nullable self.match_nl = match_nl def build_machine(self, m, initial_state, final_state, match_bol, nocase): for re in self.nullable_res: re.build_machine(m, initial_state, final_state, match_bol, nocase) if self.non_nullable_res: if match_bol: initial_state = self.build_opt(m, initial_state, BOL) for re in self.non_nullable_res: re.build_machine(m, initial_state, final_state, 0, nocase) def calc_str(self): return "Alt(%s)" % ','.join(map(str, self.re_list)) class Rep1(RE): """Rep1(re) is an RE which matches one or more repetitions of |re|.""" def __init__(self, re): self.check_re(1, re) self.re = re self.nullable = re.nullable self.match_nl = re.match_nl def build_machine(self, m, initial_state, final_state, match_bol, nocase): s1 = m.new_state() s2 = m.new_state() initial_state.link_to(s1) self.re.build_machine(m, s1, s2, match_bol or self.re.match_nl, nocase) s2.link_to(s1) s2.link_to(final_state) def calc_str(self): return "Rep1(%s)" % self.re class SwitchCase(RE): """ SwitchCase(re, nocase) is an RE which matches the same strings as RE, but treating upper and lower case letters according to |nocase|. If |nocase| is true, case is ignored, otherwise it is not. """ re = None nocase = None def __init__(self, re, nocase): self.re = re self.nocase = nocase self.nullable = re.nullable self.match_nl = re.match_nl def build_machine(self, m, initial_state, final_state, match_bol, nocase): self.re.build_machine(m, initial_state, final_state, match_bol, self.nocase) def calc_str(self): if self.nocase: name = "NoCase" else: name = "Case" return "%s(%s)" % (name, self.re) # # Composite RE constructors # ------------------------- # # These REs are defined in terms of the primitive REs. # Empty = Seq() Empty.__doc__ = \ """ Empty is an RE which matches the empty string. """ Empty.str = "Empty" def Str1(s): """ Str1(s) is an RE which matches the literal string |s|. """ result = Seq(*tuple(map(Char, s))) result.str = "Str(%s)" % repr(s) return result def Str(*strs): """ Str(s) is an RE which matches the literal string |s|. Str(s1, s2, s3, ...) is an RE which matches any of |s1| or |s2| or |s3|... 
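
    A hedged sketch of the resulting pattern objects (the reprs follow
    from Str1/Alt as defined in this module):

    >>> str(Str('if'))
    "Str('if')"
    >>> str(Str('if', 'elif'))
    "Str('if','elif')"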
""" if len(strs) == 1: return Str1(strs[0]) else: result = Alt(*tuple(map(Str1, strs))) result.str = "Str(%s)" % ','.join(map(repr, strs)) return result def Any(s): """ Any(s) is an RE which matches any character in the string |s|. """ #result = apply(Alt, tuple(map(Char, s))) result = CodeRanges(chars_to_ranges(s)) result.str = "Any(%s)" % repr(s) return result def AnyBut(s): """ AnyBut(s) is an RE which matches any character (including newline) which is not in the string |s|. """ ranges = chars_to_ranges(s) ranges.insert(0, -maxint) ranges.append(maxint) result = CodeRanges(ranges) result.str = "AnyBut(%s)" % repr(s) return result AnyChar = AnyBut("") AnyChar.__doc__ = \ """ AnyChar is an RE which matches any single character (including a newline). """ AnyChar.str = "AnyChar" def Range(s1, s2=None): """ Range(c1, c2) is an RE which matches any single character in the range |c1| to |c2| inclusive. Range(s) where |s| is a string of even length is an RE which matches any single character in the ranges |s[0]| to |s[1]|, |s[2]| to |s[3]|,... """ if s2: result = CodeRange(ord(s1), ord(s2) + 1) result.str = "Range(%s,%s)" % (s1, s2) else: ranges = [] for i in range(0, len(s1), 2): ranges.append(CodeRange(ord(s1[i]), ord(s1[i + 1]) + 1)) result = Alt(*ranges) result.str = "Range(%s)" % repr(s1) return result def Opt(re): """ Opt(re) is an RE which matches either |re| or the empty string. """ result = Alt(re, Empty) result.str = "Opt(%s)" % re return result def Rep(re): """ Rep(re) is an RE which matches zero or more repetitions of |re|. """ result = Opt(Rep1(re)) result.str = "Rep(%s)" % re return result def NoCase(re): """ NoCase(re) is an RE which matches the same strings as RE, but treating upper and lower case letters as equivalent. """ return SwitchCase(re, nocase=1) def Case(re): """ Case(re) is an RE which matches the same strings as RE, but treating upper and lower case letters as distinct, i.e. it cancels the effect of any enclosing NoCase(). """ return SwitchCase(re, nocase=0) # # RE Constants # Bol = Char(BOL) Bol.__doc__ = \ """ Bol is an RE which matches the beginning of a line. """ Bol.str = "Bol" Eol = Char(EOL) Eol.__doc__ = \ """ Eol is an RE which matches the end of a line. """ Eol.str = "Eol" Eof = Char(EOF) Eof.__doc__ = \ """ Eof is an RE which matches the end of the file. """ Eof.str = "Eof" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/Scanners.pxd0000644000175000017500000000271100000000000026707 0ustar00vincentvincent00000000000000from __future__ import absolute_import import cython from Cython.Plex.Actions cimport Action cdef class Scanner: cdef public lexicon cdef public stream cdef public name cdef public unicode buffer cdef public Py_ssize_t buf_start_pos cdef public Py_ssize_t next_pos cdef public Py_ssize_t cur_pos cdef public Py_ssize_t cur_line cdef public Py_ssize_t cur_line_start cdef public Py_ssize_t start_pos cdef public Py_ssize_t start_line cdef public Py_ssize_t start_col cdef public text cdef public initial_state # int? 
cdef public state_name cdef public list queue cdef public bint trace cdef public cur_char cdef public long input_state cdef public level @cython.final @cython.locals(input_state=long) cdef next_char(self) @cython.locals(action=Action) cpdef tuple read(self) @cython.final cdef tuple scan_a_token(self) ##cdef tuple position(self) # used frequently by Parsing.py @cython.final @cython.locals(cur_pos=Py_ssize_t, cur_line=Py_ssize_t, cur_line_start=Py_ssize_t, input_state=long, next_pos=Py_ssize_t, state=dict, buf_start_pos=Py_ssize_t, buf_len=Py_ssize_t, buf_index=Py_ssize_t, trace=bint, discard=Py_ssize_t, data=unicode, buffer=unicode) cdef run_machine_inlined(self) @cython.final cdef begin(self, state) @cython.final cdef produce(self, value, text = *) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/Scanners.py0000644000175000017500000002774300000000000026560 0ustar00vincentvincent00000000000000# cython: auto_pickle=False #======================================================================= # # Python Lexical Analyser # # # Scanning an input stream # #======================================================================= from __future__ import absolute_import import cython cython.declare(BOL=object, EOL=object, EOF=object, NOT_FOUND=object) from . import Errors from .Regexps import BOL, EOL, EOF NOT_FOUND = object() class Scanner(object): """ A Scanner is used to read tokens from a stream of characters using the token set specified by a Plex.Lexicon. Constructor: Scanner(lexicon, stream, name = '') See the docstring of the __init__ method for details. Methods: See the docstrings of the individual methods for more information. read() --> (value, text) Reads the next lexical token from the stream. position() --> (name, line, col) Returns the position of the last token read using the read() method. begin(state_name) Causes scanner to change state. produce(value [, text]) Causes return of a token value to the caller of the Scanner. """ # lexicon = None # Lexicon # stream = None # file-like object # name = '' # buffer = '' # buf_start_pos = 0 # position in input of start of buffer # next_pos = 0 # position in input of next char to read # cur_pos = 0 # position in input of current char # cur_line = 1 # line number of current char # cur_line_start = 0 # position in input of start of current line # start_pos = 0 # position in input of start of token # start_line = 0 # line number of start of token # start_col = 0 # position in line of start of token # text = None # text of last token read # initial_state = None # Node # state_name = '' # Name of initial state # queue = None # list of tokens to be returned # trace = 0 def __init__(self, lexicon, stream, name='', initial_pos=None): """ Scanner(lexicon, stream, name = '') |lexicon| is a Plex.Lexicon instance specifying the lexical tokens to be recognised. |stream| can be a file object or anything which implements a compatible read() method. |name| is optional, and may be the name of the file being scanned or any other identifying string. 
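
        A minimal usage sketch (hedged; ``my_lexicon`` stands for a
        Plex.Lexicon built elsewhere, and the token shown is hypothetical):

            from io import StringIO
            scanner = Scanner(my_lexicon, StringIO(u'if x'), name='demo')
            value, text = scanner.read()   # e.g. ('keyword', 'if')
            print(scanner.position())      # ('demo', 1, 0)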
""" self.trace = 0 self.buffer = u'' self.buf_start_pos = 0 self.next_pos = 0 self.cur_pos = 0 self.cur_line = 1 self.start_pos = 0 self.start_line = 0 self.start_col = 0 self.text = None self.state_name = None self.lexicon = lexicon self.stream = stream self.name = name self.queue = [] self.initial_state = None self.begin('') self.next_pos = 0 self.cur_pos = 0 self.cur_line_start = 0 self.cur_char = BOL self.input_state = 1 if initial_pos is not None: self.cur_line, self.cur_line_start = initial_pos[1], -initial_pos[2] def read(self): """ Read the next lexical token from the stream and return a tuple (value, text), where |value| is the value associated with the token as specified by the Lexicon, and |text| is the actual string read from the stream. Returns (None, '') on end of file. """ queue = self.queue while not queue: self.text, action = self.scan_a_token() if action is None: self.produce(None) self.eof() else: value = action.perform(self, self.text) if value is not None: self.produce(value) result = queue[0] del queue[0] return result def scan_a_token(self): """ Read the next input sequence recognised by the machine and return (text, action). Returns ('', None) on end of file. """ self.start_pos = self.cur_pos self.start_line = self.cur_line self.start_col = self.cur_pos - self.cur_line_start action = self.run_machine_inlined() if action is not None: if self.trace: print("Scanner: read: Performing %s %d:%d" % ( action, self.start_pos, self.cur_pos)) text = self.buffer[ self.start_pos - self.buf_start_pos: self.cur_pos - self.buf_start_pos] return (text, action) else: if self.cur_pos == self.start_pos: if self.cur_char is EOL: self.next_char() if self.cur_char is None or self.cur_char is EOF: return (u'', None) raise Errors.UnrecognizedInput(self, self.state_name) def run_machine_inlined(self): """ Inlined version of run_machine for speed. 
""" state = self.initial_state cur_pos = self.cur_pos cur_line = self.cur_line cur_line_start = self.cur_line_start cur_char = self.cur_char input_state = self.input_state next_pos = self.next_pos buffer = self.buffer buf_start_pos = self.buf_start_pos buf_len = len(buffer) b_action, b_cur_pos, b_cur_line, b_cur_line_start, b_cur_char, b_input_state, b_next_pos = \ None, 0, 0, 0, u'', 0, 0 trace = self.trace while 1: if trace: #TRACE# print("State %d, %d/%d:%s -->" % ( #TRACE# state['number'], input_state, cur_pos, repr(cur_char))) #TRACE# # Begin inlined self.save_for_backup() #action = state.action #@slow action = state['action'] #@fast if action is not None: b_action, b_cur_pos, b_cur_line, b_cur_line_start, b_cur_char, b_input_state, b_next_pos = \ action, cur_pos, cur_line, cur_line_start, cur_char, input_state, next_pos # End inlined self.save_for_backup() c = cur_char #new_state = state.new_state(c) #@slow new_state = state.get(c, NOT_FOUND) #@fast if new_state is NOT_FOUND: #@fast new_state = c and state.get('else') #@fast if new_state: if trace: #TRACE# print("State %d" % new_state['number']) #TRACE# state = new_state # Begin inlined: self.next_char() if input_state == 1: cur_pos = next_pos # Begin inlined: c = self.read_char() buf_index = next_pos - buf_start_pos if buf_index < buf_len: c = buffer[buf_index] next_pos += 1 else: discard = self.start_pos - buf_start_pos data = self.stream.read(0x1000) buffer = self.buffer[discard:] + data self.buffer = buffer buf_start_pos += discard self.buf_start_pos = buf_start_pos buf_len = len(buffer) buf_index -= discard if data: c = buffer[buf_index] next_pos += 1 else: c = u'' # End inlined: c = self.read_char() if c == u'\n': cur_char = EOL input_state = 2 elif not c: cur_char = EOL input_state = 4 else: cur_char = c elif input_state == 2: cur_char = u'\n' input_state = 3 elif input_state == 3: cur_line += 1 cur_line_start = cur_pos = next_pos cur_char = BOL input_state = 1 elif input_state == 4: cur_char = EOF input_state = 5 else: # input_state = 5 cur_char = u'' # End inlined self.next_char() else: # not new_state if trace: #TRACE# print("blocked") #TRACE# # Begin inlined: action = self.back_up() if b_action is not None: (action, cur_pos, cur_line, cur_line_start, cur_char, input_state, next_pos) = \ (b_action, b_cur_pos, b_cur_line, b_cur_line_start, b_cur_char, b_input_state, b_next_pos) else: action = None break # while 1 # End inlined: action = self.back_up() self.cur_pos = cur_pos self.cur_line = cur_line self.cur_line_start = cur_line_start self.cur_char = cur_char self.input_state = input_state self.next_pos = next_pos if trace: #TRACE# if action is not None: #TRACE# print("Doing %s" % action) #TRACE# return action def next_char(self): input_state = self.input_state if self.trace: print("Scanner: next: %s [%d] %d" % (" " * 20, input_state, self.cur_pos)) if input_state == 1: self.cur_pos = self.next_pos c = self.read_char() if c == u'\n': self.cur_char = EOL self.input_state = 2 elif not c: self.cur_char = EOL self.input_state = 4 else: self.cur_char = c elif input_state == 2: self.cur_char = u'\n' self.input_state = 3 elif input_state == 3: self.cur_line += 1 self.cur_line_start = self.cur_pos = self.next_pos self.cur_char = BOL self.input_state = 1 elif input_state == 4: self.cur_char = EOF self.input_state = 5 else: # input_state = 5 self.cur_char = u'' if self.trace: print("--> [%d] %d %r" % (input_state, self.cur_pos, self.cur_char)) def position(self): """ Return a tuple (name, line, col) representing the location of the 
last token read using the read() method. |name| is the name that was provided to the Scanner constructor; |line| is the line number in the stream (1-based); |col| is the position within the line of the first character of the token (0-based). """ return (self.name, self.start_line, self.start_col) def get_position(self): """Python accessible wrapper around position(), only for error reporting. """ return self.position() def begin(self, state_name): """Set the current state of the scanner to the named state.""" self.initial_state = ( self.lexicon.get_initial_state(state_name)) self.state_name = state_name def produce(self, value, text=None): """ Called from an action procedure, causes |value| to be returned as the token value from read(). If |text| is supplied, it is returned in place of the scanned text. produce() can be called more than once during a single call to an action procedure, in which case the tokens are queued up and returned one at a time by subsequent calls to read(), until the queue is empty, whereupon scanning resumes. """ if text is None: text = self.text self.queue.append((value, text)) def eof(self): """ Override this method if you want something to be done at end of file. """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/Timing.py0000644000175000017500000000073000000000000026216 0ustar00vincentvincent00000000000000# # Get time in platform-dependent way # from __future__ import absolute_import import os from sys import platform, exit, stderr if platform == 'mac': import MacOS def time(): return MacOS.GetTicks() / 60.0 timekind = "real" elif hasattr(os, 'times'): def time(): t = os.times() return t[0] + t[1] timekind = "cpu" else: stderr.write( "Don't know how to get time on platform %s\n" % repr(platform)) exit(1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/Traditional.py0000644000175000017500000001003000000000000027233 0ustar00vincentvincent00000000000000#======================================================================= # # Python Lexical Analyser # # Traditional Regular Expression Syntax # #======================================================================= from __future__ import absolute_import from .Regexps import Alt, Seq, Rep, Rep1, Opt, Any, AnyBut, Bol, Eol, Char from .Errors import PlexError class RegexpSyntaxError(PlexError): pass def re(s): """ Convert traditional string representation of regular expression |s| into Plex representation. """ return REParser(s).parse_re() class REParser(object): def __init__(self, s): self.s = s self.i = -1 self.end = 0 self.next() def parse_re(self): re = self.parse_alt() if not self.end: self.error("Unexpected %s" % repr(self.c)) return re def parse_alt(self): """Parse a set of alternative regexps.""" re = self.parse_seq() if self.c == '|': re_list = [re] while self.c == '|': self.next() re_list.append(self.parse_seq()) re = Alt(*re_list) return re def parse_seq(self): """Parse a sequence of regexps.""" re_list = [] while not self.end and not self.c in "|)": re_list.append(self.parse_mod()) return Seq(*re_list) def parse_mod(self): """Parse a primitive regexp followed by *, +, ? modifiers.""" re = self.parse_prim() while not self.end and self.c in "*+?": if self.c == '*': re = Rep(re) elif self.c == '+': re = Rep1(re) else: # self.c == '?' 
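                # '?' maps to Opt (zero or one), just as '*' and '+' above
                # map to Rep and Rep1.  (Added sketch, not upstream code.)
                # End to end, the module-level re() helper turns, e.g.,
                #     re('(a|b)*c+')
                # into roughly Seq(Rep(Alt(..a.., ..b..)), Rep1(Char('c'))).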
re = Opt(re) self.next() return re def parse_prim(self): """Parse a primitive regexp.""" c = self.get() if c == '.': re = AnyBut("\n") elif c == '^': re = Bol elif c == '$': re = Eol elif c == '(': re = self.parse_alt() self.expect(')') elif c == '[': re = self.parse_charset() self.expect(']') else: if c == '\\': c = self.get() re = Char(c) return re def parse_charset(self): """Parse a charset. Does not include the surrounding [].""" char_list = [] invert = 0 if self.c == '^': invert = 1 self.next() if self.c == ']': char_list.append(']') self.next() while not self.end and self.c != ']': c1 = self.get() if self.c == '-' and self.lookahead(1) != ']': self.next() c2 = self.get() for a in range(ord(c1), ord(c2) + 1): char_list.append(chr(a)) else: char_list.append(c1) chars = ''.join(char_list) if invert: return AnyBut(chars) else: return Any(chars) def next(self): """Advance to the next char.""" s = self.s i = self.i = self.i + 1 if i < len(s): self.c = s[i] else: self.c = '' self.end = 1 def get(self): if self.end: self.error("Premature end of string") c = self.c self.next() return c def lookahead(self, n): """Look ahead n chars.""" j = self.i + n if j < len(self.s): return self.s[j] else: return '' def expect(self, c): """ Expect to find character |c| at current position. Raises an exception otherwise. """ if self.c == c: self.next() else: self.error("Missing %s" % repr(c)) def error(self, mess): """Raise exception to signal syntax error in regexp.""" raise RegexpSyntaxError("Syntax error in regexp %s at position %d: %s" % ( repr(self.s), self.i, mess)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/Transitions.py0000644000175000017500000001602300000000000027306 0ustar00vincentvincent00000000000000# # Plex - Transition Maps # # This version represents state sets directly as dicts for speed. # from __future__ import absolute_import try: from sys import maxsize as maxint except ImportError: from sys import maxint class TransitionMap(object): """ A TransitionMap maps an input event to a set of states. An input event is one of: a range of character codes, the empty string (representing an epsilon move), or one of the special symbols BOL, EOL, EOF. For characters, this implementation compactly represents the map by means of a list: [code_0, states_0, code_1, states_1, code_2, states_2, ..., code_n-1, states_n-1, code_n] where |code_i| is a character code, and |states_i| is a set of states corresponding to characters with codes |c| in the range |code_i| <= |c| <= |code_i+1|. The following invariants hold: n >= 1 code_0 == -maxint code_n == maxint code_i < code_i+1 for i in 0..n-1 states_0 == states_n-1 Mappings for the special events '', BOL, EOL, EOF are kept separately in a dictionary. """ map = None # The list of codes and states special = None # Mapping for special events def __init__(self, map=None, special=None): if not map: map = [-maxint, {}, maxint] if not special: special = {} self.map = map self.special = special #self.check() ### def add(self, event, new_state, TupleType=tuple): """ Add transition to |new_state| on |event|. """ if type(event) is TupleType: code0, code1 = event i = self.split(code0) j = self.split(code1) map = self.map while i < j: map[i + 1][new_state] = 1 i += 2 else: self.get_special(event)[new_state] = 1 def add_set(self, event, new_set, TupleType=tuple): """ Add transitions to the states in |new_set| on |event|. 
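
        A hedged sketch of the list layout (per the class docstring):
        starting from a fresh map ``[-maxint, {}, maxint]``, a call like
        ``m.add_set((97, 100), {s: 1})`` splits the list into
        ``[-maxint, {}, 97, {s: 1}, 100, {}, maxint]``, where ``s`` is a
        hypothetical state object.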
""" if type(event) is TupleType: code0, code1 = event i = self.split(code0) j = self.split(code1) map = self.map while i < j: map[i + 1].update(new_set) i += 2 else: self.get_special(event).update(new_set) def get_epsilon(self, none=None): """ Return the mapping for epsilon, or None. """ return self.special.get('', none) def iteritems(self, len=len): """ Return the mapping as an iterable of ((code1, code2), state_set) and (special_event, state_set) pairs. """ result = [] map = self.map else_set = map[1] i = 0 n = len(map) - 1 code0 = map[0] while i < n: set = map[i + 1] code1 = map[i + 2] if set or else_set: result.append(((code0, code1), set)) code0 = code1 i += 2 for event, set in self.special.items(): if set: result.append((event, set)) return iter(result) items = iteritems # ------------------- Private methods -------------------- def split(self, code, len=len, maxint=maxint): """ Search the list for the position of the split point for |code|, inserting a new split point if necessary. Returns index |i| such that |code| == |map[i]|. """ # We use a funky variation on binary search. map = self.map hi = len(map) - 1 # Special case: code == map[-1] if code == maxint: return hi # General case lo = 0 # loop invariant: map[lo] <= code < map[hi] and hi - lo >= 2 while hi - lo >= 4: # Find midpoint truncated to even index mid = ((lo + hi) // 2) & ~1 if code < map[mid]: hi = mid else: lo = mid # map[lo] <= code < map[hi] and hi - lo == 2 if map[lo] == code: return lo else: map[hi:hi] = [code, map[hi - 1].copy()] #self.check() ### return hi def get_special(self, event): """ Get state set for special event, adding a new entry if necessary. """ special = self.special set = special.get(event, None) if not set: set = {} special[event] = set return set # --------------------- Conversion methods ----------------------- def __str__(self): map_strs = [] map = self.map n = len(map) i = 0 while i < n: code = map[i] if code == -maxint: code_str = "-inf" elif code == maxint: code_str = "inf" else: code_str = str(code) map_strs.append(code_str) i += 1 if i < n: map_strs.append(state_set_str(map[i])) i += 1 special_strs = {} for event, set in self.special.items(): special_strs[event] = state_set_str(set) return "[%s]+%s" % ( ','.join(map_strs), special_strs ) # --------------------- Debugging methods ----------------------- def check(self): """Check data structure integrity.""" if not self.map[-3] < self.map[-1]: print(self) assert 0 def dump(self, file): map = self.map i = 0 n = len(map) - 1 while i < n: self.dump_range(map[i], map[i + 2], map[i + 1], file) i += 2 for event, set in self.special.items(): if set: if not event: event = 'empty' self.dump_trans(event, set, file) def dump_range(self, code0, code1, set, file): if set: if code0 == -maxint: if code1 == maxint: k = "any" else: k = "< %s" % self.dump_char(code1) elif code1 == maxint: k = "> %s" % self.dump_char(code0 - 1) elif code0 == code1 - 1: k = self.dump_char(code0) else: k = "%s..%s" % (self.dump_char(code0), self.dump_char(code1 - 1)) self.dump_trans(k, set, file) def dump_char(self, code): if 0 <= code <= 255: return repr(chr(code)) else: return "chr(%d)" % code def dump_trans(self, key, set, file): file.write(" %s --> %s\n" % (key, self.dump_set(set))) def dump_set(self, set): return state_set_str(set) # # State set manipulation functions # #def merge_state_sets(set1, set2): # for state in set2.keys(): # set1[state] = 1 def state_set_str(set): return "[%s]" % ','.join(["S%d" % state.number for state in set]) 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Plex/__init__.py0000644000175000017500000000240200000000000026524 0ustar00vincentvincent00000000000000#======================================================================= # # Python Lexical Analyser # #======================================================================= """ The Plex module provides lexical analysers with similar capabilities to GNU Flex. The following classes and functions are exported; see the attached docstrings for more information. Scanner For scanning a character stream under the direction of a Lexicon. Lexicon For constructing a lexical definition to be used by a Scanner. Str, Any, AnyBut, AnyChar, Seq, Alt, Opt, Rep, Rep1, Bol, Eol, Eof, Empty Regular expression constructors, for building pattern definitions for a Lexicon. State For defining scanner states when creating a Lexicon. TEXT, IGNORE, Begin Actions for associating with patterns when creating a Lexicon. """ from __future__ import absolute_import from .Actions import TEXT, IGNORE, Begin from .Lexicons import Lexicon, State from .Regexps import RE, Seq, Alt, Rep1, Empty, Str, Any, AnyBut, AnyChar, Range from .Regexps import Opt, Rep, Bol, Eol, Eof, Case, NoCase from .Scanners import Scanner ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.8966608 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Runtime/0000755000175000017500000000000000000000000025130 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Runtime/__init__.py0000644000175000017500000000001500000000000027235 0ustar00vincentvincent00000000000000# empty file ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Runtime/refnanny.pyx0000644000175000017500000001420700000000000027516 0ustar00vincentvincent00000000000000# cython: language_level=3, auto_pickle=False from cpython.ref cimport PyObject, Py_INCREF, Py_DECREF, Py_XDECREF, Py_XINCREF from cpython.exc cimport PyErr_Fetch, PyErr_Restore from cpython.pystate cimport PyThreadState_Get cimport cython loglevel = 0 reflog = [] cdef log(level, action, obj, lineno): if loglevel >= level: reflog.append((lineno, action, id(obj))) LOG_NONE, LOG_ALL = range(2) @cython.final cdef class Context(object): cdef readonly object name, filename cdef readonly dict refs cdef readonly list errors cdef readonly Py_ssize_t start def __cinit__(self, name, line=0, filename=None): self.name = name self.start = line self.filename = filename self.refs = {} # id -> (count, [lineno]) self.errors = [] cdef regref(self, obj, lineno, bint is_null): log(LOG_ALL, u'regref', u"" if is_null else obj, lineno) if is_null: self.errors.append(f"NULL argument on line {lineno}") return id_ = id(obj) count, linenumbers = self.refs.get(id_, (0, [])) self.refs[id_] = (count + 1, linenumbers) linenumbers.append(lineno) cdef bint delref(self, obj, lineno, bint is_null) except -1: # returns whether it is ok to do the decref operation log(LOG_ALL, u'delref', u"" if is_null else obj, lineno) if is_null: self.errors.append(f"NULL argument on line {lineno}") return False id_ = id(obj) count, linenumbers = self.refs.get(id_, (0, [])) if count == 0: self.errors.append(f"Too 
many decrefs on line {lineno}, reference acquired on lines {linenumbers!r}") return False elif count == 1: del self.refs[id_] return True else: self.refs[id_] = (count - 1, linenumbers) return True cdef end(self): if self.refs: msg = u"References leaked:" for count, linenos in self.refs.itervalues(): msg += f"\n ({count}) acquired on lines: {u', '.join([f'{x}' for x in linenos])}" self.errors.append(msg) if self.errors: return u"\n".join([u'REFNANNY: '+error for error in self.errors]) else: return None cdef void report_unraisable(object e=None): try: if e is None: import sys e = sys.exc_info()[1] print(f"refnanny raised an exception: {e}") except: pass # We absolutely cannot exit with an exception # All Python operations must happen after any existing # exception has been fetched, in case we are called from # exception-handling code. cdef PyObject* SetupContext(char* funcname, int lineno, char* filename) except NULL: if Context is None: # Context may be None during finalize phase. # In that case, we don't want to be doing anything fancy # like caching and resetting exceptions. return NULL cdef (PyObject*) type = NULL, value = NULL, tb = NULL, result = NULL PyThreadState_Get() PyErr_Fetch(&type, &value, &tb) try: ctx = Context(funcname, lineno, filename) Py_INCREF(ctx) result = ctx except Exception, e: report_unraisable(e) PyErr_Restore(type, value, tb) return result cdef void GOTREF(PyObject* ctx, PyObject* p_obj, int lineno): if ctx == NULL: return cdef (PyObject*) type = NULL, value = NULL, tb = NULL PyErr_Fetch(&type, &value, &tb) try: try: if p_obj is NULL: (ctx).regref(None, lineno, True) else: (ctx).regref(p_obj, lineno, False) except: report_unraisable() except: # __Pyx_GetException may itself raise errors pass PyErr_Restore(type, value, tb) cdef int GIVEREF_and_report(PyObject* ctx, PyObject* p_obj, int lineno): if ctx == NULL: return 1 cdef (PyObject*) type = NULL, value = NULL, tb = NULL cdef bint decref_ok = False PyErr_Fetch(&type, &value, &tb) try: try: if p_obj is NULL: decref_ok = (ctx).delref(None, lineno, True) else: decref_ok = (ctx).delref(p_obj, lineno, False) except: report_unraisable() except: # __Pyx_GetException may itself raise errors pass PyErr_Restore(type, value, tb) return decref_ok cdef void GIVEREF(PyObject* ctx, PyObject* p_obj, int lineno): GIVEREF_and_report(ctx, p_obj, lineno) cdef void INCREF(PyObject* ctx, PyObject* obj, int lineno): Py_XINCREF(obj) PyThreadState_Get() GOTREF(ctx, obj, lineno) cdef void DECREF(PyObject* ctx, PyObject* obj, int lineno): if GIVEREF_and_report(ctx, obj, lineno): Py_XDECREF(obj) PyThreadState_Get() cdef void FinishContext(PyObject** ctx): if ctx == NULL or ctx[0] == NULL: return cdef (PyObject*) type = NULL, value = NULL, tb = NULL cdef object errors = None cdef Context context PyThreadState_Get() PyErr_Fetch(&type, &value, &tb) try: try: context = ctx[0] errors = context.end() if errors: print(f"{context.filename.decode('latin1')}: {context.name.decode('latin1')}()") print(errors) context = None except: report_unraisable() except: # __Pyx_GetException may itself raise errors pass Py_XDECREF(ctx[0]) ctx[0] = NULL PyErr_Restore(type, value, tb) ctypedef struct RefNannyAPIStruct: void (*INCREF)(PyObject*, PyObject*, int) void (*DECREF)(PyObject*, PyObject*, int) void (*GOTREF)(PyObject*, PyObject*, int) void (*GIVEREF)(PyObject*, PyObject*, int) PyObject* (*SetupContext)(char*, int, char*) except NULL void (*FinishContext)(PyObject**) cdef RefNannyAPIStruct api api.INCREF = INCREF api.DECREF = DECREF api.GOTREF = GOTREF 
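# Annotation added for this dump (not upstream): the function-pointer
# struct filled in here is handed out below as the plain integer
# RefNannyAPI (via PyLong_FromVoidPtr), presumably so that generated code
# can cast it back to a RefNannyAPIStruct* and call SetupContext/GOTREF/
# GIVEREF/FinishContext around each traced function.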
api.GIVEREF = GIVEREF api.SetupContext = SetupContext api.FinishContext = FinishContext cdef extern from "Python.h": object PyLong_FromVoidPtr(void*) RefNannyAPI = PyLong_FromVoidPtr(&api) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431446.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Shadow.py0000644000175000017500000003122500000000000025307 0ustar00vincentvincent00000000000000# cython.* namespace for pure mode. from __future__ import absolute_import __version__ = "0.29.13" try: from __builtin__ import basestring except ImportError: basestring = str # BEGIN shameless copy from Cython/minivect/minitypes.py class _ArrayType(object): is_array = True subtypes = ['dtype'] def __init__(self, dtype, ndim, is_c_contig=False, is_f_contig=False, inner_contig=False, broadcasting=None): self.dtype = dtype self.ndim = ndim self.is_c_contig = is_c_contig self.is_f_contig = is_f_contig self.inner_contig = inner_contig or is_c_contig or is_f_contig self.broadcasting = broadcasting def __repr__(self): axes = [":"] * self.ndim if self.is_c_contig: axes[-1] = "::1" elif self.is_f_contig: axes[0] = "::1" return "%s[%s]" % (self.dtype, ", ".join(axes)) def index_type(base_type, item): """ Support array type creation by slicing, e.g. double[:, :] specifies a 2D strided array of doubles. The syntax is the same as for Cython memoryviews. """ class InvalidTypeSpecification(Exception): pass def verify_slice(s): if s.start or s.stop or s.step not in (None, 1): raise InvalidTypeSpecification( "Only a step of 1 may be provided to indicate C or " "Fortran contiguity") if isinstance(item, tuple): step_idx = None for idx, s in enumerate(item): verify_slice(s) if s.step and (step_idx or idx not in (0, len(item) - 1)): raise InvalidTypeSpecification( "Step may only be provided once, and only in the " "first or last dimension.") if s.step == 1: step_idx = idx return _ArrayType(base_type, len(item), is_c_contig=step_idx == len(item) - 1, is_f_contig=step_idx == 0) elif isinstance(item, slice): verify_slice(item) return _ArrayType(base_type, 1, is_c_contig=bool(item.step)) else: # int[8] etc. 
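        # Examples of the slicing syntax handled above (added sketch,
        # mirroring the Cython memoryview notation named in the docstring):
        #     double[:, :]    2D strided array of doubles
        #     double[:, ::1]  2D, C-contiguous in the last dimension
        #     int[8]          plain integer size -> this branch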
assert int(item) == item # array size must be a plain integer array(base_type, item) # END shameless copy compiled = False _Unspecified = object() # Function decorators def _empty_decorator(x): return x def locals(**arg_types): return _empty_decorator def test_assert_path_exists(*paths): return _empty_decorator def test_fail_if_path_exists(*paths): return _empty_decorator class _EmptyDecoratorAndManager(object): def __call__(self, x): return x def __enter__(self): pass def __exit__(self, exc_type, exc_value, traceback): pass class _Optimization(object): pass cclass = ccall = cfunc = _EmptyDecoratorAndManager() returns = wraparound = boundscheck = initializedcheck = nonecheck = \ embedsignature = cdivision = cdivision_warnings = \ always_allows_keywords = profile = linetrace = infer_types = \ unraisable_tracebacks = freelist = \ lambda _: _EmptyDecoratorAndManager() exceptval = lambda _=None, check=True: _EmptyDecoratorAndManager() overflowcheck = lambda _: _EmptyDecoratorAndManager() optimization = _Optimization() overflowcheck.fold = optimization.use_switch = \ optimization.unpack_method_calls = lambda arg: _EmptyDecoratorAndManager() final = internal = type_version_tag = no_gc_clear = no_gc = _empty_decorator _cython_inline = None def inline(f, *args, **kwds): if isinstance(f, basestring): global _cython_inline if _cython_inline is None: from Cython.Build.Inline import cython_inline as _cython_inline return _cython_inline(f, *args, **kwds) else: assert len(args) == len(kwds) == 0 return f def compile(f): from Cython.Build.Inline import RuntimeCompiledFunction return RuntimeCompiledFunction(f) # Special functions def cdiv(a, b): q = a / b if q < 0: q += 1 return q def cmod(a, b): r = a % b if (a*b) < 0: r -= b return r # Emulated language constructs def cast(type, *args, **kwargs): kwargs.pop('typecheck', None) assert not kwargs if hasattr(type, '__call__'): return type(*args) else: return args[0] def sizeof(arg): return 1 def typeof(arg): return arg.__class__.__name__ # return type(arg) def address(arg): return pointer(type(arg))([arg]) def declare(type=None, value=_Unspecified, **kwds): if type not in (None, object) and hasattr(type, '__call__'): if value is not _Unspecified: return type(value) else: return type() else: return value class _nogil(object): """Support for 'with nogil' statement and @nogil decorator. """ def __call__(self, x): if callable(x): # Used as function decorator => return the function unchanged. return x # Used as conditional context manager or to create an "@nogil(True/False)" decorator => keep going. 
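        # (Added sketch, not upstream.)  Both pure-mode spellings end up
        # here and are no-ops:
        #     with cython.nogil: ...     # context manager protocol below
        #     @cython.nogil(True)        # returns self, which decorates f
        #     def f(): ...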
return self def __enter__(self): pass def __exit__(self, exc_class, exc, tb): return exc_class is None nogil = _nogil() gil = _nogil() del _nogil # Emulated types class CythonMetaType(type): def __getitem__(type, ix): return array(type, ix) CythonTypeObject = CythonMetaType('CythonTypeObject', (object,), {}) class CythonType(CythonTypeObject): def _pointer(self, n=1): for i in range(n): self = pointer(self) return self class PointerType(CythonType): def __init__(self, value=None): if isinstance(value, (ArrayType, PointerType)): self._items = [cast(self._basetype, a) for a in value._items] elif isinstance(value, list): self._items = [cast(self._basetype, a) for a in value] elif value is None or value == 0: self._items = [] else: raise ValueError def __getitem__(self, ix): if ix < 0: raise IndexError("negative indexing not allowed in C") return self._items[ix] def __setitem__(self, ix, value): if ix < 0: raise IndexError("negative indexing not allowed in C") self._items[ix] = cast(self._basetype, value) def __eq__(self, value): if value is None and not self._items: return True elif type(self) != type(value): return False else: return not self._items and not value._items def __repr__(self): return "%s *" % (self._basetype,) class ArrayType(PointerType): def __init__(self): self._items = [None] * self._n class StructType(CythonType): def __init__(self, cast_from=_Unspecified, **data): if cast_from is not _Unspecified: # do cast if len(data) > 0: raise ValueError('Cannot accept keyword arguments when casting.') if type(cast_from) is not type(self): raise ValueError('Cannot cast from %s'%cast_from) for key, value in cast_from.__dict__.items(): setattr(self, key, value) else: for key, value in data.items(): setattr(self, key, value) def __setattr__(self, key, value): if key in self._members: self.__dict__[key] = cast(self._members[key], value) else: raise AttributeError("Struct has no member '%s'" % key) class UnionType(CythonType): def __init__(self, cast_from=_Unspecified, **data): if cast_from is not _Unspecified: # do type cast if len(data) > 0: raise ValueError('Cannot accept keyword arguments when casting.') if isinstance(cast_from, dict): datadict = cast_from elif type(cast_from) is type(self): datadict = cast_from.__dict__ else: raise ValueError('Cannot cast from %s'%cast_from) else: datadict = data if len(datadict) > 1: raise AttributeError("Union can only store one field at a time.") for key, value in datadict.items(): setattr(self, key, value) def __setattr__(self, key, value): if key in '__dict__': CythonType.__setattr__(self, key, value) elif key in self._members: self.__dict__ = {key: cast(self._members[key], value)} else: raise AttributeError("Union has no member '%s'" % key) def pointer(basetype): class PointerInstance(PointerType): _basetype = basetype return PointerInstance def array(basetype, n): class ArrayInstance(ArrayType): _basetype = basetype _n = n return ArrayInstance def struct(**members): class StructInstance(StructType): _members = members for key in members: setattr(StructInstance, key, None) return StructInstance def union(**members): class UnionInstance(UnionType): _members = members for key in members: setattr(UnionInstance, key, None) return UnionInstance class typedef(CythonType): def __init__(self, type, name=None): self._basetype = type self.name = name def __call__(self, *arg): value = cast(self._basetype, *arg) return value def __repr__(self): return self.name or str(self._basetype) __getitem__ = index_type class _FusedType(CythonType): pass def 
fused_type(*args): if not args: raise TypeError("Expected at least one type as argument") # Find the numeric type with biggest rank if all types are numeric rank = -1 for type in args: if type not in (py_int, py_long, py_float, py_complex): break if type_ordering.index(type) > rank: result_type = type else: return result_type # Not a simple numeric type, return a fused type instance. The result # isn't really meant to be used, as we can't keep track of the context in # pure-mode. Casting won't do anything in this case. return _FusedType() def _specialized_from_args(signatures, args, kwargs): "Perhaps this should be implemented in a TreeFragment in Cython code" raise Exception("yet to be implemented") py_int = typedef(int, "int") try: py_long = typedef(long, "long") except NameError: # Py3 py_long = typedef(int, "long") py_float = typedef(float, "float") py_complex = typedef(complex, "double complex") # Predefined types int_types = ['char', 'short', 'Py_UNICODE', 'int', 'Py_UCS4', 'long', 'longlong', 'Py_ssize_t', 'size_t'] float_types = ['longdouble', 'double', 'float'] complex_types = ['longdoublecomplex', 'doublecomplex', 'floatcomplex', 'complex'] other_types = ['bint', 'void', 'Py_tss_t'] to_repr = { 'longlong': 'long long', 'longdouble': 'long double', 'longdoublecomplex': 'long double complex', 'doublecomplex': 'double complex', 'floatcomplex': 'float complex', }.get gs = globals() # note: cannot simply name the unicode type here as 2to3 gets in the way and replaces it by str try: import __builtin__ as builtins except ImportError: # Py3 import builtins gs['unicode'] = typedef(getattr(builtins, 'unicode', str), 'unicode') del builtins for name in int_types: reprname = to_repr(name, name) gs[name] = typedef(py_int, reprname) if name not in ('Py_UNICODE', 'Py_UCS4') and not name.endswith('size_t'): gs['u'+name] = typedef(py_int, "unsigned " + reprname) gs['s'+name] = typedef(py_int, "signed " + reprname) for name in float_types: gs[name] = typedef(py_float, to_repr(name, name)) for name in complex_types: gs[name] = typedef(py_complex, to_repr(name, name)) bint = typedef(bool, "bint") void = typedef(None, "void") Py_tss_t = typedef(None, "Py_tss_t") for t in int_types + float_types + complex_types + other_types: for i in range(1, 4): gs["%s_%s" % ('p'*i, t)] = gs[t]._pointer(i) NULL = gs['p_void'](0) # looks like 'gs' has some users out there by now... #del gs integral = floating = numeric = _FusedType() type_ordering = [py_int, py_long, py_float, py_complex] class CythonDotParallel(object): """ The cython.parallel module. """ __all__ = ['parallel', 'prange', 'threadid'] def parallel(self, num_threads=None): return nogil def prange(self, start=0, stop=None, step=1, nogil=False, schedule=None, chunksize=None, num_threads=None): if stop is None: stop = start start = 0 return range(start, stop, step) def threadid(self): return 0 # def threadsavailable(self): # return 1 import sys sys.modules['cython.parallel'] = CythonDotParallel() del sys ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/StringIOTree.py0000644000175000017500000000641000000000000026376 0ustar00vincentvincent00000000000000# cython: auto_pickle=False r""" Implements a buffer with insertion points. When you know you need to "get back" to a place and write more later, simply call insertion_point() at that spot and get a new StringIOTree object that is "left behind". 
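(Annotation added for this dump.)  Internally each tree node keeps a list
of prepended children plus its own stream; insertion_point() commits the
current stream as a child and splices in a fresh node, so later writes to
self render after everything written via that insertion point -- see the
methods below.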
EXAMPLE: >>> a = StringIOTree() >>> _= a.write('first\n') >>> b = a.insertion_point() >>> _= a.write('third\n') >>> _= b.write('second\n') >>> a.getvalue().split() ['first', 'second', 'third'] >>> c = b.insertion_point() >>> d = c.insertion_point() >>> _= d.write('alpha\n') >>> _= b.write('gamma\n') >>> _= c.write('beta\n') >>> b.getvalue().split() ['second', 'alpha', 'beta', 'gamma'] >>> i = StringIOTree() >>> d.insert(i) >>> _= i.write('inserted\n') >>> out = StringIO() >>> a.copyto(out) >>> out.getvalue().split() ['first', 'second', 'alpha', 'inserted', 'beta', 'gamma', 'third'] """ from __future__ import absolute_import #, unicode_literals try: # Prefer cStringIO since io.StringIO() does not support writing 'str' in Py2. from cStringIO import StringIO except ImportError: from io import StringIO class StringIOTree(object): """ See module docs. """ def __init__(self, stream=None): self.prepended_children = [] if stream is None: stream = StringIO() self.stream = stream self.write = stream.write self.markers = [] def getvalue(self): content = [x.getvalue() for x in self.prepended_children] content.append(self.stream.getvalue()) return "".join(content) def copyto(self, target): """Potentially cheaper than getvalue as no string concatenation needs to happen.""" for child in self.prepended_children: child.copyto(target) stream_content = self.stream.getvalue() if stream_content: target.write(stream_content) def commit(self): # Save what we have written until now so that the buffer # itself is empty -- this makes it ready for insertion if self.stream.tell(): self.prepended_children.append(StringIOTree(self.stream)) self.prepended_children[-1].markers = self.markers self.markers = [] self.stream = StringIO() self.write = self.stream.write def insert(self, iotree): """ Insert a StringIOTree (and all of its contents) at this location. Further writing to self appears after what is inserted. """ self.commit() self.prepended_children.append(iotree) def insertion_point(self): """ Returns a new StringIOTree, which is left behind at the current position (it what is written to the result will appear right before whatever is next written to self). Calling getvalue() or copyto() on the result will only return the contents written to it. """ # Save what we have written until now # This is so that getvalue on the result doesn't include it. self.commit() # Construct the new forked object to return other = StringIOTree() self.prepended_children.append(other) return other def allmarkers(self): children = self.prepended_children return [m for c in children for m in c.allmarkers()] + self.markers ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.8999941 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tempita/0000755000175000017500000000000000000000000025110 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tempita/__init__.py0000644000175000017500000000023000000000000027214 0ustar00vincentvincent00000000000000# The original Tempita implements all of its templating code here. # Moved it to _tempita.py to make the compilation portable. 
from ._tempita import * ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tempita/_looper.py0000644000175000017500000001011000000000000027112 0ustar00vincentvincent00000000000000""" Helper for looping over sequences, particular in templates. Often in a loop in a template it's handy to know what's next up, previously up, if this is the first or last item in the sequence, etc. These can be awkward to manage in a normal Python loop, but using the looper you can get a better sense of the context. Use like:: >>> for loop, item in looper(['a', 'b', 'c']): ... print loop.number, item ... if not loop.last: ... print '---' 1 a --- 2 b --- 3 c """ import sys from Cython.Tempita.compat3 import basestring_ __all__ = ['looper'] class looper(object): """ Helper for looping (particularly in templates) Use this like:: for loop, item in looper(seq): if loop.first: ... """ def __init__(self, seq): self.seq = seq def __iter__(self): return looper_iter(self.seq) def __repr__(self): return '<%s for %r>' % ( self.__class__.__name__, self.seq) class looper_iter(object): def __init__(self, seq): self.seq = list(seq) self.pos = 0 def __iter__(self): return self def __next__(self): if self.pos >= len(self.seq): raise StopIteration result = loop_pos(self.seq, self.pos), self.seq[self.pos] self.pos += 1 return result if sys.version < "3": next = __next__ class loop_pos(object): def __init__(self, seq, pos): self.seq = seq self.pos = pos def __repr__(self): return '' % ( self.seq[self.pos], self.pos) def index(self): return self.pos index = property(index) def number(self): return self.pos + 1 number = property(number) def item(self): return self.seq[self.pos] item = property(item) def __next__(self): try: return self.seq[self.pos + 1] except IndexError: return None __next__ = property(__next__) if sys.version < "3": next = __next__ def previous(self): if self.pos == 0: return None return self.seq[self.pos - 1] previous = property(previous) def odd(self): return not self.pos % 2 odd = property(odd) def even(self): return self.pos % 2 even = property(even) def first(self): return self.pos == 0 first = property(first) def last(self): return self.pos == len(self.seq) - 1 last = property(last) def length(self): return len(self.seq) length = property(length) def first_group(self, getter=None): """ Returns true if this item is the start of a new group, where groups mean that some attribute has changed. The getter can be None (the item itself changes), an attribute name like ``'.attr'``, a function, or a dict key or list index. """ if self.first: return True return self._compare_group(self.item, self.previous, getter) def last_group(self, getter=None): """ Returns true if this item is the end of a new group, where groups mean that some attribute has changed. The getter can be None (the item itself changes), an attribute name like ``'.attr'``, a function, or a dict key or list index. 
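
        A hedged doctest sketch (the values follow from _compare_group
        below):

            >>> rows = [{'k': 1}, {'k': 1}, {'k': 2}]
            >>> [loop.last_group('k') for loop, row in looper(rows)]
            [False, True, True]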
""" if self.last: return True return self._compare_group(self.item, self.__next__, getter) def _compare_group(self, item, other, getter): if getter is None: return item != other elif (isinstance(getter, basestring_) and getter.startswith('.')): getter = getter[1:] if getter.endswith('()'): getter = getter[:-2] return getattr(item, getter)() != getattr(other, getter)() else: return getattr(item, getter) != getattr(other, getter) elif hasattr(getter, '__call__'): return getter(item) != getter(other) else: return item[getter] != other[getter] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tempita/_tempita.py0000644000175000017500000011524400000000000027273 0ustar00vincentvincent00000000000000""" A small templating language This implements a small templating language. This language implements if/elif/else, for/continue/break, expressions, and blocks of Python code. The syntax is:: {{any expression (function calls etc)}} {{any expression | filter}} {{for x in y}}...{{endfor}} {{if x}}x{{elif y}}y{{else}}z{{endif}} {{py:x=1}} {{py: def foo(bar): return 'baz' }} {{default var = default_value}} {{# comment}} You use this with the ``Template`` class or the ``sub`` shortcut. The ``Template`` class takes the template string and the name of the template (for errors) and a default namespace. Then (like ``string.Template``) you can call the ``tmpl.substitute(**kw)`` method to make a substitution (or ``tmpl.substitute(a_dict)``). ``sub(content, **kw)`` substitutes the template immediately. You can use ``__name='tmpl.html'`` to set the name of the template. If there are syntax errors ``TemplateError`` will be raised. """ from __future__ import absolute_import import re import sys import cgi try: from urllib import quote as url_quote except ImportError: # Py3 from urllib.parse import quote as url_quote import os import tokenize from io import StringIO from ._looper import looper from .compat3 import bytes, unicode_, basestring_, next, is_unicode, coerce_text __all__ = ['TemplateError', 'Template', 'sub', 'HTMLTemplate', 'sub_html', 'html', 'bunch'] in_re = re.compile(r'\s+in\s+') var_re = re.compile(r'^[a-z_][a-z0-9_]*$', re.I) class TemplateError(Exception): """Exception raised while parsing a template """ def __init__(self, message, position, name=None): Exception.__init__(self, message) self.position = position self.name = name def __str__(self): msg = ' '.join(self.args) if self.position: msg = '%s at line %s column %s' % ( msg, self.position[0], self.position[1]) if self.name: msg += ' in %s' % self.name return msg class _TemplateContinue(Exception): pass class _TemplateBreak(Exception): pass def get_file_template(name, from_template): path = os.path.join(os.path.dirname(from_template.name), name) return from_template.__class__.from_filename( path, namespace=from_template.namespace, get_template=from_template.get_template) class Template(object): default_namespace = { 'start_braces': '{{', 'end_braces': '}}', 'looper': looper, } default_encoding = 'utf8' default_inherit = None def __init__(self, content, name=None, namespace=None, stacklevel=None, get_template=None, default_inherit=None, line_offset=0, delimeters=None): self.content = content # set delimeters if delimeters is None: delimeters = (self.default_namespace['start_braces'], self.default_namespace['end_braces']) else: #assert len(delimeters) == 2 and all([isinstance(delimeter, basestring) # for delimeter in delimeters]) 
self.default_namespace = self.__class__.default_namespace.copy() self.default_namespace['start_braces'] = delimeters[0] self.default_namespace['end_braces'] = delimeters[1] self.delimeters = delimeters self._unicode = is_unicode(content) if name is None and stacklevel is not None: try: caller = sys._getframe(stacklevel) except ValueError: pass else: globals = caller.f_globals lineno = caller.f_lineno if '__file__' in globals: name = globals['__file__'] if name.endswith('.pyc') or name.endswith('.pyo'): name = name[:-1] elif '__name__' in globals: name = globals['__name__'] else: name = '' if lineno: name += ':%s' % lineno self.name = name self._parsed = parse(content, name=name, line_offset=line_offset, delimeters=self.delimeters) if namespace is None: namespace = {} self.namespace = namespace self.get_template = get_template if default_inherit is not None: self.default_inherit = default_inherit def from_filename(cls, filename, namespace=None, encoding=None, default_inherit=None, get_template=get_file_template): f = open(filename, 'rb') c = f.read() f.close() if encoding: c = c.decode(encoding) return cls(content=c, name=filename, namespace=namespace, default_inherit=default_inherit, get_template=get_template) from_filename = classmethod(from_filename) def __repr__(self): return '<%s %s name=%r>' % ( self.__class__.__name__, hex(id(self))[2:], self.name) def substitute(self, *args, **kw): if args: if kw: raise TypeError( "You can only give positional *or* keyword arguments") if len(args) > 1: raise TypeError( "You can only give one positional argument") if not hasattr(args[0], 'items'): raise TypeError( "If you pass in a single argument, you must pass in a dictionary-like object (with a .items() method); you gave %r" % (args[0],)) kw = args[0] ns = kw ns['__template_name__'] = self.name if self.namespace: ns.update(self.namespace) result, defs, inherit = self._interpret(ns) if not inherit: inherit = self.default_inherit if inherit: result = self._interpret_inherit(result, defs, inherit, ns) return result def _interpret(self, ns): __traceback_hide__ = True parts = [] defs = {} self._interpret_codes(self._parsed, ns, out=parts, defs=defs) if '__inherit__' in defs: inherit = defs.pop('__inherit__') else: inherit = None return ''.join(parts), defs, inherit def _interpret_inherit(self, body, defs, inherit_template, ns): __traceback_hide__ = True if not self.get_template: raise TemplateError( 'You cannot use inheritance without passing in get_template', position=None, name=self.name) templ = self.get_template(inherit_template, self) self_ = TemplateObject(self.name) for name, value in defs.items(): setattr(self_, name, value) self_.body = body ns = ns.copy() ns['self'] = self_ return templ.substitute(ns) def _interpret_codes(self, codes, ns, out, defs): __traceback_hide__ = True for item in codes: if isinstance(item, basestring_): out.append(item) else: self._interpret_code(item, ns, out, defs) def _interpret_code(self, code, ns, out, defs): __traceback_hide__ = True name, pos = code[0], code[1] if name == 'py': self._exec(code[2], ns, pos) elif name == 'continue': raise _TemplateContinue() elif name == 'break': raise _TemplateBreak() elif name == 'for': vars, expr, content = code[2], code[3], code[4] expr = self._eval(expr, ns, pos) self._interpret_for(vars, expr, content, ns, out, defs) elif name == 'cond': parts = code[2:] self._interpret_if(parts, ns, out, defs) elif name == 'expr': parts = code[2].split('|') base = self._eval(parts[0], ns, pos) for part in parts[1:]: func = self._eval(part, 
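# A minimal sketch (not original code) of the two calling conventions that
# ``substitute`` accepts -- keyword arguments *or* a single dict-like
# positional argument, never both:
#
#     t = Template('{{x}} and {{y}}', name='demo.txt')
#     t.substitute(x=1, y=2)           # keyword form -> '1 and 2'
#     t.substitute({'x': 1, 'y': 2})   # mapping form; needs an .items() method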
ns, pos) base = func(base) out.append(self._repr(base, pos)) elif name == 'default': var, expr = code[2], code[3] if var not in ns: result = self._eval(expr, ns, pos) ns[var] = result elif name == 'inherit': expr = code[2] value = self._eval(expr, ns, pos) defs['__inherit__'] = value elif name == 'def': name = code[2] signature = code[3] parts = code[4] ns[name] = defs[name] = TemplateDef(self, name, signature, body=parts, ns=ns, pos=pos) elif name == 'comment': return else: assert 0, "Unknown code: %r" % name def _interpret_for(self, vars, expr, content, ns, out, defs): __traceback_hide__ = True for item in expr: if len(vars) == 1: ns[vars[0]] = item else: if len(vars) != len(item): raise ValueError( 'Need %i items to unpack (got %i items)' % (len(vars), len(item))) for name, value in zip(vars, item): ns[name] = value try: self._interpret_codes(content, ns, out, defs) except _TemplateContinue: continue except _TemplateBreak: break def _interpret_if(self, parts, ns, out, defs): __traceback_hide__ = True # @@: if/else/else gets through for part in parts: assert not isinstance(part, basestring_) name, pos = part[0], part[1] if name == 'else': result = True else: result = self._eval(part[2], ns, pos) if result: self._interpret_codes(part[3], ns, out, defs) break def _eval(self, code, ns, pos): __traceback_hide__ = True try: try: value = eval(code, self.default_namespace, ns) except SyntaxError as e: raise SyntaxError( 'invalid syntax in expression: %s' % code) return value except Exception as e: if getattr(e, 'args', None): arg0 = e.args[0] else: arg0 = coerce_text(e) e.args = (self._add_line_info(arg0, pos),) raise def _exec(self, code, ns, pos): __traceback_hide__ = True try: exec(code, self.default_namespace, ns) except Exception as e: if e.args: e.args = (self._add_line_info(e.args[0], pos),) else: e.args = (self._add_line_info(None, pos),) raise def _repr(self, value, pos): __traceback_hide__ = True try: if value is None: return '' if self._unicode: try: value = unicode_(value) except UnicodeDecodeError: value = bytes(value) else: if not isinstance(value, basestring_): value = coerce_text(value) if (is_unicode(value) and self.default_encoding): value = value.encode(self.default_encoding) except Exception as e: e.args = (self._add_line_info(e.args[0], pos),) raise else: if self._unicode and isinstance(value, bytes): if not self.default_encoding: raise UnicodeDecodeError( 'Cannot decode bytes value %r into unicode ' '(no default_encoding provided)' % value) try: value = value.decode(self.default_encoding) except UnicodeDecodeError as e: raise UnicodeDecodeError( e.encoding, e.object, e.start, e.end, e.reason + ' in string %r' % value) elif not self._unicode and is_unicode(value): if not self.default_encoding: raise UnicodeEncodeError( 'Cannot encode unicode value %r into bytes ' '(no default_encoding provided)' % value) value = value.encode(self.default_encoding) return value def _add_line_info(self, msg, pos): msg = "%s at line %s column %s" % ( msg, pos[0], pos[1]) if self.name: msg += " in file %s" % self.name return msg def sub(content, delimeters=None, **kw): name = kw.get('__name') tmpl = Template(content, name=name, delimeters=delimeters) return tmpl.substitute(kw) def paste_script_template_renderer(content, vars, filename=None): tmpl = Template(content, name=filename) return tmpl.substitute(vars) class bunch(dict): def __init__(self, **kw): for name, value in kw.items(): setattr(self, name, value) def __setattr__(self, name, value): self[name] = value def __getattr__(self, name): 
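# A minimal sketch of ``bunch`` behaviour (illustrative, assumes nothing
# beyond the methods defined in this class): attribute and item access are
# interchangeable, and a 'default' key backs any missing lookup:
#
#     b = bunch(default=0, x=1)
#     b.x           # -> 1
#     b['missing']  # -> 0 (falls back to b['default'], see __getitem__ below)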
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, key):
        if 'default' in self:
            try:
                return dict.__getitem__(self, key)
            except KeyError:
                return dict.__getitem__(self, 'default')
        else:
            return dict.__getitem__(self, key)

    def __repr__(self):
        return '<%s %s>' % (
            self.__class__.__name__,
            ' '.join(['%s=%r' % (k, v) for k, v in sorted(self.items())]))

############################################################
## HTML Templating
############################################################


class html(object):

    def __init__(self, value):
        self.value = value

    def __str__(self):
        return self.value

    def __html__(self):
        return self.value

    def __repr__(self):
        return '<%s %r>' % (
            self.__class__.__name__, self.value)


def html_quote(value, force=True):
    if not force and hasattr(value, '__html__'):
        return value.__html__()
    if value is None:
        return ''
    if not isinstance(value, basestring_):
        value = coerce_text(value)
    if sys.version >= "3" and isinstance(value, bytes):
        value = cgi.escape(value.decode('latin1'), 1)
        value = value.encode('latin1')
    else:
        value = cgi.escape(value, 1)
    if sys.version < "3":
        if is_unicode(value):
            value = value.encode('ascii', 'xmlcharrefreplace')
    return value


def url(v):
    v = coerce_text(v)
    if is_unicode(v):
        v = v.encode('utf8')
    return url_quote(v)


def attr(**kw):
    parts = []
    for name, value in sorted(kw.items()):
        if value is None:
            continue
        if name.endswith('_'):
            name = name[:-1]
        parts.append('%s="%s"' % (html_quote(name), html_quote(value)))
    return html(' '.join(parts))


class HTMLTemplate(Template):

    default_namespace = Template.default_namespace.copy()
    default_namespace.update(dict(
        html=html,
        attr=attr,
        url=url,
        html_quote=html_quote,
    ))

    def _repr(self, value, pos):
        if hasattr(value, '__html__'):
            value = value.__html__()
            quote = False
        else:
            quote = True
        plain = Template._repr(self, value, pos)
        if quote:
            return html_quote(plain)
        else:
            return plain


def sub_html(content, **kw):
    name = kw.get('__name')
    tmpl = HTMLTemplate(content, name=name)
    return tmpl.substitute(kw)


class TemplateDef(object):
    def __init__(self, template, func_name, func_signature,
                 body, ns, pos, bound_self=None):
        self._template = template
        self._func_name = func_name
        self._func_signature = func_signature
        self._body = body
        self._ns = ns
        self._pos = pos
        self._bound_self = bound_self

    def __repr__(self):
        return '<tempita function %s(%s) at %s:%s>' % (
            self._func_name, self._func_signature,
            self._template.name, self._pos)

    def __str__(self):
        return self()

    def __call__(self, *args, **kw):
        values = self._parse_signature(args, kw)
        ns = self._ns.copy()
        ns.update(values)
        if self._bound_self is not None:
            ns['self'] = self._bound_self
        out = []
        subdefs = {}
        self._template._interpret_codes(self._body, ns, out, subdefs)
        return ''.join(out)

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        return self.__class__(
            self._template, self._func_name, self._func_signature,
            self._body, self._ns, self._pos, bound_self=obj)

    def _parse_signature(self, args, kw):
        values = {}
        sig_args, var_args, var_kw, defaults = self._func_signature
        extra_kw = {}
        for name, value in kw.items():
            if not var_kw and name not in sig_args:
                raise TypeError(
                    'Unexpected argument %s' % name)
            if name in sig_args:
                values[name] = value
            else:
                extra_kw[name] = value
        args = list(args)
        sig_args = list(sig_args)
        while args:
            while sig_args and sig_args[0] in values:
                sig_args.pop(0)
            if sig_args:
                name = sig_args.pop(0)
                values[name] = args.pop(0)
            elif var_args:
                values[var_args] = tuple(args)
                break
            else:
                raise TypeError(
                    'Extra positional arguments: %s'
                    % ', '.join([repr(v)
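# A minimal sketch of the {{def}} feature whose argument binding is
# implemented above ('greet' is a hypothetical template function name):
#
#     t = Template("{{def greet(name, punct='!')}}"
#                  "Hi {{name}}{{punct}}{{enddef}}{{greet('Ada')}}")
#     t.substitute()   # -> 'Hi Ada!'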
for v in args])) for name, value_expr in defaults.items(): if name not in values: values[name] = self._template._eval( value_expr, self._ns, self._pos) for name in sig_args: if name not in values: raise TypeError( 'Missing argument: %s' % name) if var_kw: values[var_kw] = extra_kw return values class TemplateObject(object): def __init__(self, name): self.__name = name self.get = TemplateObjectGetter(self) def __repr__(self): return '<%s %s>' % (self.__class__.__name__, self.__name) class TemplateObjectGetter(object): def __init__(self, template_obj): self.__template_obj = template_obj def __getattr__(self, attr): return getattr(self.__template_obj, attr, Empty) def __repr__(self): return '<%s around %r>' % (self.__class__.__name__, self.__template_obj) class _Empty(object): def __call__(self, *args, **kw): return self def __str__(self): return '' def __repr__(self): return 'Empty' def __unicode__(self): return u'' def __iter__(self): return iter(()) def __bool__(self): return False if sys.version < "3": __nonzero__ = __bool__ Empty = _Empty() del _Empty ############################################################ ## Lexing and Parsing ############################################################ def lex(s, name=None, trim_whitespace=True, line_offset=0, delimeters=None): """ Lex a string into chunks: >>> lex('hey') ['hey'] >>> lex('hey {{you}}') ['hey ', ('you', (1, 7))] >>> lex('hey {{') Traceback (most recent call last): ... TemplateError: No }} to finish last expression at line 1 column 7 >>> lex('hey }}') Traceback (most recent call last): ... TemplateError: }} outside expression at line 1 column 7 >>> lex('hey {{ {{') Traceback (most recent call last): ... TemplateError: {{ inside expression at line 1 column 10 """ if delimeters is None: delimeters = ( Template.default_namespace['start_braces'], Template.default_namespace['end_braces'] ) in_expr = False chunks = [] last = 0 last_pos = (line_offset + 1, 1) token_re = re.compile(r'%s|%s' % (re.escape(delimeters[0]), re.escape(delimeters[1]))) for match in token_re.finditer(s): expr = match.group(0) pos = find_position(s, match.end(), last, last_pos) if expr == delimeters[0] and in_expr: raise TemplateError('%s inside expression' % delimeters[0], position=pos, name=name) elif expr == delimeters[1] and not in_expr: raise TemplateError('%s outside expression' % delimeters[1], position=pos, name=name) if expr == delimeters[0]: part = s[last:match.start()] if part: chunks.append(part) in_expr = True else: chunks.append((s[last:match.start()], last_pos)) in_expr = False last = match.end() last_pos = pos if in_expr: raise TemplateError('No %s to finish last expression' % delimeters[1], name=name, position=last_pos) part = s[last:] if part: chunks.append(part) if trim_whitespace: chunks = trim_lex(chunks) return chunks statement_re = re.compile(r'^(?:if |elif |for |def |inherit |default |py:)') single_statements = ['else', 'endif', 'endfor', 'enddef', 'continue', 'break'] trail_whitespace_re = re.compile(r'\n\r?[\t ]*$') lead_whitespace_re = re.compile(r'^[\t ]*\n') def trim_lex(tokens): r""" Takes a lexed set of tokens, and removes whitespace when there is a directive on a line by itself: >>> tokens = lex('{{if x}}\nx\n{{endif}}\ny', trim_whitespace=False) >>> tokens [('if x', (1, 3)), '\nx\n', ('endif', (3, 3)), '\ny'] >>> trim_lex(tokens) [('if x', (1, 3)), 'x\n', ('endif', (3, 3)), 'y'] """ last_trim = None for i, current in enumerate(tokens): if isinstance(current, basestring_): # we don't trim this continue item = current[0] if not 
statement_re.search(item) and item not in single_statements: continue if not i: prev = '' else: prev = tokens[i - 1] if i + 1 >= len(tokens): next_chunk = '' else: next_chunk = tokens[i + 1] if (not isinstance(next_chunk, basestring_) or not isinstance(prev, basestring_)): continue prev_ok = not prev or trail_whitespace_re.search(prev) if i == 1 and not prev.strip(): prev_ok = True if last_trim is not None and last_trim + 2 == i and not prev.strip(): prev_ok = 'last' if (prev_ok and (not next_chunk or lead_whitespace_re.search(next_chunk) or (i == len(tokens) - 2 and not next_chunk.strip()))): if prev: if ((i == 1 and not prev.strip()) or prev_ok == 'last'): tokens[i - 1] = '' else: m = trail_whitespace_re.search(prev) # +1 to leave the leading \n on: prev = prev[:m.start() + 1] tokens[i - 1] = prev if next_chunk: last_trim = i if i == len(tokens) - 2 and not next_chunk.strip(): tokens[i + 1] = '' else: m = lead_whitespace_re.search(next_chunk) next_chunk = next_chunk[m.end():] tokens[i + 1] = next_chunk return tokens def find_position(string, index, last_index, last_pos): """Given a string and index, return (line, column)""" lines = string.count('\n', last_index, index) if lines > 0: column = index - string.rfind('\n', last_index, index) else: column = last_pos[1] + (index - last_index) return (last_pos[0] + lines, column) def parse(s, name=None, line_offset=0, delimeters=None): r""" Parses a string into a kind of AST >>> parse('{{x}}') [('expr', (1, 3), 'x')] >>> parse('foo') ['foo'] >>> parse('{{if x}}test{{endif}}') [('cond', (1, 3), ('if', (1, 3), 'x', ['test']))] >>> parse('series->{{for x in y}}x={{x}}{{endfor}}') ['series->', ('for', (1, 11), ('x',), 'y', ['x=', ('expr', (1, 27), 'x')])] >>> parse('{{for x, y in z:}}{{continue}}{{endfor}}') [('for', (1, 3), ('x', 'y'), 'z', [('continue', (1, 21))])] >>> parse('{{py:x=1}}') [('py', (1, 3), 'x=1')] >>> parse('{{if x}}a{{elif y}}b{{else}}c{{endif}}') [('cond', (1, 3), ('if', (1, 3), 'x', ['a']), ('elif', (1, 12), 'y', ['b']), ('else', (1, 23), None, ['c']))] Some exceptions:: >>> parse('{{continue}}') Traceback (most recent call last): ... TemplateError: continue outside of for loop at line 1 column 3 >>> parse('{{if x}}foo') Traceback (most recent call last): ... TemplateError: No {{endif}} at line 1 column 3 >>> parse('{{else}}') Traceback (most recent call last): ... TemplateError: else outside of an if block at line 1 column 3 >>> parse('{{if x}}{{for x in y}}{{endif}}{{endfor}}') Traceback (most recent call last): ... TemplateError: Unexpected endif at line 1 column 25 >>> parse('{{if}}{{endif}}') Traceback (most recent call last): ... TemplateError: if with no expression at line 1 column 3 >>> parse('{{for x y}}{{endfor}}') Traceback (most recent call last): ... TemplateError: Bad for (no "in") in 'x y' at line 1 column 3 >>> parse('{{py:x=1\ny=2}}') Traceback (most recent call last): ... 
TemplateError: Multi-line py blocks must start with a newline at line 1 column 3 """ if delimeters is None: delimeters = ( Template.default_namespace['start_braces'], Template.default_namespace['end_braces'] ) tokens = lex(s, name=name, line_offset=line_offset, delimeters=delimeters) result = [] while tokens: next_chunk, tokens = parse_expr(tokens, name) result.append(next_chunk) return result def parse_expr(tokens, name, context=()): if isinstance(tokens[0], basestring_): return tokens[0], tokens[1:] expr, pos = tokens[0] expr = expr.strip() if expr.startswith('py:'): expr = expr[3:].lstrip(' \t') if expr.startswith('\n') or expr.startswith('\r'): expr = expr.lstrip('\r\n') if '\r' in expr: expr = expr.replace('\r\n', '\n') expr = expr.replace('\r', '') expr += '\n' else: if '\n' in expr: raise TemplateError( 'Multi-line py blocks must start with a newline', position=pos, name=name) return ('py', pos, expr), tokens[1:] elif expr in ('continue', 'break'): if 'for' not in context: raise TemplateError( 'continue outside of for loop', position=pos, name=name) return (expr, pos), tokens[1:] elif expr.startswith('if '): return parse_cond(tokens, name, context) elif (expr.startswith('elif ') or expr == 'else'): raise TemplateError( '%s outside of an if block' % expr.split()[0], position=pos, name=name) elif expr in ('if', 'elif', 'for'): raise TemplateError( '%s with no expression' % expr, position=pos, name=name) elif expr in ('endif', 'endfor', 'enddef'): raise TemplateError( 'Unexpected %s' % expr, position=pos, name=name) elif expr.startswith('for '): return parse_for(tokens, name, context) elif expr.startswith('default '): return parse_default(tokens, name, context) elif expr.startswith('inherit '): return parse_inherit(tokens, name, context) elif expr.startswith('def '): return parse_def(tokens, name, context) elif expr.startswith('#'): return ('comment', pos, tokens[0][0]), tokens[1:] return ('expr', pos, tokens[0][0]), tokens[1:] def parse_cond(tokens, name, context): start = tokens[0][1] pieces = [] context = context + ('if',) while 1: if not tokens: raise TemplateError( 'Missing {{endif}}', position=start, name=name) if (isinstance(tokens[0], tuple) and tokens[0][0] == 'endif'): return ('cond', start) + tuple(pieces), tokens[1:] next_chunk, tokens = parse_one_cond(tokens, name, context) pieces.append(next_chunk) def parse_one_cond(tokens, name, context): (first, pos), tokens = tokens[0], tokens[1:] content = [] if first.endswith(':'): first = first[:-1] if first.startswith('if '): part = ('if', pos, first[3:].lstrip(), content) elif first.startswith('elif '): part = ('elif', pos, first[5:].lstrip(), content) elif first == 'else': part = ('else', pos, None, content) else: assert 0, "Unexpected token %r at %s" % (first, pos) while 1: if not tokens: raise TemplateError( 'No {{endif}}', position=pos, name=name) if (isinstance(tokens[0], tuple) and (tokens[0][0] == 'endif' or tokens[0][0].startswith('elif ') or tokens[0][0] == 'else')): return part, tokens next_chunk, tokens = parse_expr(tokens, name, context) content.append(next_chunk) def parse_for(tokens, name, context): first, pos = tokens[0] tokens = tokens[1:] context = ('for',) + context content = [] assert first.startswith('for ') if first.endswith(':'): first = first[:-1] first = first[3:].strip() match = in_re.search(first) if not match: raise TemplateError( 'Bad for (no "in") in %r' % first, position=pos, name=name) vars = first[:match.start()] if '(' in vars: raise TemplateError( 'You cannot have () in the variable section of a 
for loop (%r)' % vars, position=pos, name=name) vars = tuple([ v.strip() for v in first[:match.start()].split(',') if v.strip()]) expr = first[match.end():] while 1: if not tokens: raise TemplateError( 'No {{endfor}}', position=pos, name=name) if (isinstance(tokens[0], tuple) and tokens[0][0] == 'endfor'): return ('for', pos, vars, expr, content), tokens[1:] next_chunk, tokens = parse_expr(tokens, name, context) content.append(next_chunk) def parse_default(tokens, name, context): first, pos = tokens[0] assert first.startswith('default ') first = first.split(None, 1)[1] parts = first.split('=', 1) if len(parts) == 1: raise TemplateError( "Expression must be {{default var=value}}; no = found in %r" % first, position=pos, name=name) var = parts[0].strip() if ',' in var: raise TemplateError( "{{default x, y = ...}} is not supported", position=pos, name=name) if not var_re.search(var): raise TemplateError( "Not a valid variable name for {{default}}: %r" % var, position=pos, name=name) expr = parts[1].strip() return ('default', pos, var, expr), tokens[1:] def parse_inherit(tokens, name, context): first, pos = tokens[0] assert first.startswith('inherit ') expr = first.split(None, 1)[1] return ('inherit', pos, expr), tokens[1:] def parse_def(tokens, name, context): first, start = tokens[0] tokens = tokens[1:] assert first.startswith('def ') first = first.split(None, 1)[1] if first.endswith(':'): first = first[:-1] if '(' not in first: func_name = first sig = ((), None, None, {}) elif not first.endswith(')'): raise TemplateError("Function definition doesn't end with ): %s" % first, position=start, name=name) else: first = first[:-1] func_name, sig_text = first.split('(', 1) sig = parse_signature(sig_text, name, start) context = context + ('def',) content = [] while 1: if not tokens: raise TemplateError( 'Missing {{enddef}}', position=start, name=name) if (isinstance(tokens[0], tuple) and tokens[0][0] == 'enddef'): return ('def', start, func_name, sig, content), tokens[1:] next_chunk, tokens = parse_expr(tokens, name, context) content.append(next_chunk) def parse_signature(sig_text, name, pos): tokens = tokenize.generate_tokens(StringIO(sig_text).readline) sig_args = [] var_arg = None var_kw = None defaults = {} def get_token(pos=False): try: tok_type, tok_string, (srow, scol), (erow, ecol), line = next(tokens) except StopIteration: return tokenize.ENDMARKER, '' if pos: return tok_type, tok_string, (srow, scol), (erow, ecol) else: return tok_type, tok_string while 1: var_arg_type = None tok_type, tok_string = get_token() if tok_type == tokenize.ENDMARKER: break if tok_type == tokenize.OP and (tok_string == '*' or tok_string == '**'): var_arg_type = tok_string tok_type, tok_string = get_token() if tok_type != tokenize.NAME: raise TemplateError('Invalid signature: (%s)' % sig_text, position=pos, name=name) var_name = tok_string tok_type, tok_string = get_token() if tok_type == tokenize.ENDMARKER or (tok_type == tokenize.OP and tok_string == ','): if var_arg_type == '*': var_arg = var_name elif var_arg_type == '**': var_kw = var_name else: sig_args.append(var_name) if tok_type == tokenize.ENDMARKER: break continue if var_arg_type is not None: raise TemplateError('Invalid signature: (%s)' % sig_text, position=pos, name=name) if tok_type == tokenize.OP and tok_string == '=': nest_type = None unnest_type = None nest_count = 0 start_pos = end_pos = None parts = [] while 1: tok_type, tok_string, s, e = get_token(True) if start_pos is None: start_pos = s end_pos = e if tok_type == tokenize.ENDMARKER and 
nest_count: raise TemplateError('Invalid signature: (%s)' % sig_text, position=pos, name=name) if (not nest_count and (tok_type == tokenize.ENDMARKER or (tok_type == tokenize.OP and tok_string == ','))): default_expr = isolate_expression(sig_text, start_pos, end_pos) defaults[var_name] = default_expr sig_args.append(var_name) break parts.append((tok_type, tok_string)) if nest_count and tok_type == tokenize.OP and tok_string == nest_type: nest_count += 1 elif nest_count and tok_type == tokenize.OP and tok_string == unnest_type: nest_count -= 1 if not nest_count: nest_type = unnest_type = None elif not nest_count and tok_type == tokenize.OP and tok_string in ('(', '[', '{'): nest_type = tok_string nest_count = 1 unnest_type = {'(': ')', '[': ']', '{': '}'}[nest_type] return sig_args, var_arg, var_kw, defaults def isolate_expression(string, start_pos, end_pos): srow, scol = start_pos srow -= 1 erow, ecol = end_pos erow -= 1 lines = string.splitlines(True) if srow == erow: return lines[srow][scol:ecol] parts = [lines[srow][scol:]] parts.extend(lines[srow+1:erow]) if erow < len(lines): # It'll sometimes give (end_row_past_finish, 0) parts.append(lines[erow][:ecol]) return ''.join(parts) _fill_command_usage = """\ %prog [OPTIONS] TEMPLATE arg=value Use py:arg=value to set a Python value; otherwise all values are strings. """ def fill_command(args=None): import sys import optparse import pkg_resources import os if args is None: args = sys.argv[1:] dist = pkg_resources.get_distribution('Paste') parser = optparse.OptionParser( version=coerce_text(dist), usage=_fill_command_usage) parser.add_option( '-o', '--output', dest='output', metavar="FILENAME", help="File to write output to (default stdout)") parser.add_option( '--html', dest='use_html', action='store_true', help="Use HTML style filling (including automatic HTML quoting)") parser.add_option( '--env', dest='use_env', action='store_true', help="Put the environment in as top-level variables") options, args = parser.parse_args(args) if len(args) < 1: print('You must give a template filename') sys.exit(2) template_name = args[0] args = args[1:] vars = {} if options.use_env: vars.update(os.environ) for value in args: if '=' not in value: print('Bad argument: %r' % value) sys.exit(2) name, value = value.split('=', 1) if name.startswith('py:'): name = name[:3] value = eval(value) vars[name] = value if template_name == '-': template_content = sys.stdin.read() template_name = '' else: f = open(template_name, 'rb') template_content = f.read() f.close() if options.use_html: TemplateClass = HTMLTemplate else: TemplateClass = Template template = TemplateClass(template_content, name=template_name) result = template.substitute(vars) if options.output: f = open(options.output, 'wb') f.write(result) f.close() else: sys.stdout.write(result) if __name__ == '__main__': fill_command() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tempita/compat3.py0000644000175000017500000000160700000000000027034 0ustar00vincentvincent00000000000000import sys __all__ = ['b', 'basestring_', 'bytes', 'unicode_', 'next', 'is_unicode'] if sys.version < "3": b = bytes = str basestring_ = basestring unicode_ = unicode else: def b(s): if isinstance(s, str): return s.encode('latin1') return bytes(s) basestring_ = (bytes, str) bytes = bytes unicode_ = str text = str if sys.version < "3": def next(obj): return obj.next() else: next = next if sys.version < "3": def 
is_unicode(obj): return isinstance(obj, unicode) else: def is_unicode(obj): return isinstance(obj, str) def coerce_text(v): if not isinstance(v, basestring_): if sys.version < "3": attr = '__unicode__' else: attr = '__str__' if hasattr(v, attr): return unicode(v) else: return bytes(v) return v ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/TestUtils.py0000644000175000017500000001745300000000000026031 0ustar00vincentvincent00000000000000from __future__ import absolute_import import os import unittest import tempfile from .Compiler import Errors from .CodeWriter import CodeWriter from .Compiler.TreeFragment import TreeFragment, strip_common_indent from .Compiler.Visitor import TreeVisitor, VisitorTransform from .Compiler import TreePath class NodeTypeWriter(TreeVisitor): def __init__(self): super(NodeTypeWriter, self).__init__() self._indents = 0 self.result = [] def visit_Node(self, node): if not self.access_path: name = u"(root)" else: tip = self.access_path[-1] if tip[2] is not None: name = u"%s[%d]" % tip[1:3] else: name = tip[1] self.result.append(u" " * self._indents + u"%s: %s" % (name, node.__class__.__name__)) self._indents += 1 self.visitchildren(node) self._indents -= 1 def treetypes(root): """Returns a string representing the tree by class names. There's a leading and trailing whitespace so that it can be compared by simple string comparison while still making test cases look ok.""" w = NodeTypeWriter() w.visit(root) return u"\n".join([u""] + w.result + [u""]) class CythonTest(unittest.TestCase): def setUp(self): self.listing_file = Errors.listing_file self.echo_file = Errors.echo_file Errors.listing_file = Errors.echo_file = None def tearDown(self): Errors.listing_file = self.listing_file Errors.echo_file = self.echo_file def assertLines(self, expected, result): "Checks that the given strings or lists of strings are equal line by line" if not isinstance(expected, list): expected = expected.split(u"\n") if not isinstance(result, list): result = result.split(u"\n") for idx, (expected_line, result_line) in enumerate(zip(expected, result)): self.assertEqual(expected_line, result_line, "Line %d:\nExp: %s\nGot: %s" % (idx, expected_line, result_line)) self.assertEqual(len(expected), len(result), "Unmatched lines. Got:\n%s\nExpected:\n%s" % ("\n".join(expected), u"\n".join(result))) def codeToLines(self, tree): writer = CodeWriter() writer.write(tree) return writer.result.lines def codeToString(self, tree): return "\n".join(self.codeToLines(tree)) def assertCode(self, expected, result_tree): result_lines = self.codeToLines(result_tree) expected_lines = strip_common_indent(expected.split("\n")) for idx, (line, expected_line) in enumerate(zip(result_lines, expected_lines)): self.assertEqual(expected_line, line, "Line %d:\nGot: %s\nExp: %s" % (idx, line, expected_line)) self.assertEqual(len(result_lines), len(expected_lines), "Unmatched lines. Got:\n%s\nExpected:\n%s" % ("\n".join(result_lines), expected)) def assertNodeExists(self, path, result_tree): self.assertNotEqual(TreePath.find_first(result_tree, path), None, "Path '%s' not found in result tree" % path) def fragment(self, code, pxds=None, pipeline=None): "Simply create a tree fragment using the name of the test-case in parse errors." 
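# A minimal usage sketch (hypothetical test case, mirroring the patterns
# used by the round-trip tests elsewhere in this package):
#
#     class MyRoundTripTest(CythonTest):
#         def test_def(self):
#             frag = self.fragment(u"def f(x):\n    pass")
#             self.assertCode(u"def f(x):\n    pass", frag.root)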
if pxds is None: pxds = {} if pipeline is None: pipeline = [] name = self.id() if name.startswith("__main__."): name = name[len("__main__."):] name = name.replace(".", "_") return TreeFragment(code, name, pxds, pipeline=pipeline) def treetypes(self, root): return treetypes(root) def should_fail(self, func, exc_type=Exception): """Calls "func" and fails if it doesn't raise the right exception (any exception by default). Also returns the exception in question. """ try: func() self.fail("Expected an exception of type %r" % exc_type) except exc_type as e: self.assertTrue(isinstance(e, exc_type)) return e def should_not_fail(self, func): """Calls func and succeeds if and only if no exception is raised (i.e. converts exception raising into a failed testcase). Returns the return value of func.""" try: return func() except Exception as exc: self.fail(str(exc)) class TransformTest(CythonTest): """ Utility base class for transform unit tests. It is based around constructing test trees (either explicitly or by parsing a Cython code string); running the transform, serialize it using a customized Cython serializer (with special markup for nodes that cannot be represented in Cython), and do a string-comparison line-by-line of the result. To create a test case: - Call run_pipeline. The pipeline should at least contain the transform you are testing; pyx should be either a string (passed to the parser to create a post-parse tree) or a node representing input to pipeline. The result will be a transformed result. - Check that the tree is correct. If wanted, assertCode can be used, which takes a code string as expected, and a ModuleNode in result_tree (it serializes the ModuleNode to a string and compares line-by-line). All code strings are first stripped for whitespace lines and then common indentation. Plans: One could have a pxd dictionary parameter to run_pipeline. 
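    A minimal sketch of that call pattern (``MyTransform`` is a
    hypothetical transform, not part of this module)::

        class TestMyTransform(TransformTest):
            def test_simple(self):
                tree = self.run_pipeline([MyTransform()], u"x = 1")
                self.assertCode(u"x = 1", tree)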
""" def run_pipeline(self, pipeline, pyx, pxds=None): if pxds is None: pxds = {} tree = self.fragment(pyx, pxds).root # Run pipeline for T in pipeline: tree = T(tree) return tree class TreeAssertVisitor(VisitorTransform): # actually, a TreeVisitor would be enough, but this needs to run # as part of the compiler pipeline def visit_CompilerDirectivesNode(self, node): directives = node.directives if 'test_assert_path_exists' in directives: for path in directives['test_assert_path_exists']: if TreePath.find_first(node, path) is None: Errors.error( node.pos, "Expected path '%s' not found in result tree" % path) if 'test_fail_if_path_exists' in directives: for path in directives['test_fail_if_path_exists']: if TreePath.find_first(node, path) is not None: Errors.error( node.pos, "Unexpected path '%s' found in result tree" % path) self.visitchildren(node) return node visit_Node = VisitorTransform.recurse_to_children def unpack_source_tree(tree_file, dir=None): if dir is None: dir = tempfile.mkdtemp() header = [] cur_file = None f = open(tree_file) try: lines = f.readlines() finally: f.close() del f try: for line in lines: if line[:5] == '#####': filename = line.strip().strip('#').strip().replace('/', os.path.sep) path = os.path.join(dir, filename) if not os.path.exists(os.path.dirname(path)): os.makedirs(os.path.dirname(path)) if cur_file is not None: f, cur_file = cur_file, None f.close() cur_file = open(path, 'w') elif cur_file is not None: cur_file.write(line) elif line.strip() and not line.lstrip().startswith('#'): if line.strip() not in ('"""', "'''"): header.append(line) finally: if cur_file is not None: cur_file.close() return dir, ''.join(header) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.8999941 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/0000755000175000017500000000000000000000000024607 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/TestCodeWriter.py0000644000175000017500000000441400000000000030073 0ustar00vincentvincent00000000000000from Cython.TestUtils import CythonTest class TestCodeWriter(CythonTest): # CythonTest uses the CodeWriter heavily, so do some checking by # roundtripping Cython code through the test framework. # Note that this test is dependent upon the normal Cython parser # to generate the input trees to the CodeWriter. This save *a lot* # of time; better to spend that time writing other tests than perfecting # this one... # Whitespace is very significant in this process: # - always newline on new block (!) 
# - indent 4 spaces # - 1 space around every operator def t(self, codestr): self.assertCode(codestr, self.fragment(codestr).root) def test_print(self): self.t(u""" print x, y print x + y ** 2 print x, y, z, """) def test_if(self): self.t(u"if x:\n pass") def test_ifelifelse(self): self.t(u""" if x: pass elif y: pass elif z + 34 ** 34 - 2: pass else: pass """) def test_def(self): self.t(u""" def f(x, y, z): pass def f(x = 34, y = 54, z): pass """) def test_longness_and_signedness(self): self.t(u"def f(unsigned long long long long long int y):\n pass") def test_signed_short(self): self.t(u"def f(signed short int y):\n pass") def test_typed_args(self): self.t(u"def f(int x, unsigned long int y):\n pass") def test_cdef_var(self): self.t(u""" cdef int hello cdef int hello = 4, x = 3, y, z """) def test_for_loop(self): self.t(u""" for x, y, z in f(g(h(34) * 2) + 23): print x, y, z else: print 43 """) def test_inplace_assignment(self): self.t(u"x += 43") def test_attribute(self): self.t(u"a.x") if __name__ == "__main__": import unittest unittest.main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/TestCythonUtils.py0000644000175000017500000000073200000000000030310 0ustar00vincentvincent00000000000000import unittest from ..Utils import build_hex_version class TestCythonUtils(unittest.TestCase): def test_build_hex_version(self): self.assertEqual('0x001D00A1', build_hex_version('0.29a1')) self.assertEqual('0x001D00A1', build_hex_version('0.29a1')) self.assertEqual('0x001D03C4', build_hex_version('0.29.3rc4')) self.assertEqual('0x001D00F0', build_hex_version('0.29')) self.assertEqual('0x040000F0', build_hex_version('4.0')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/TestJediTyper.py0000644000175000017500000001552400000000000027727 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- # tag: jedi from __future__ import absolute_import import sys import os.path from textwrap import dedent from contextlib import contextmanager from tempfile import NamedTemporaryFile from Cython.Compiler.ParseTreeTransforms import NormalizeTree, InterpretCompilerDirectives from Cython.Compiler import Main, Symtab, Visitor from Cython.TestUtils import TransformTest TOOLS_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', 'Tools')) @contextmanager def _tempfile(code): code = dedent(code) if not isinstance(code, bytes): code = code.encode('utf8') with NamedTemporaryFile(suffix='.py') as f: f.write(code) f.seek(0) yield f def _test_typing(code, inject=False): sys.path.insert(0, TOOLS_DIR) try: import jedityper finally: sys.path.remove(TOOLS_DIR) lines = [] with _tempfile(code) as f: types = jedityper.analyse(f.name) if inject: lines = jedityper.inject_types(f.name, types) return types, lines class DeclarationsFinder(Visitor.VisitorTransform): directives = None visit_Node = Visitor.VisitorTransform.recurse_to_children def visit_CompilerDirectivesNode(self, node): if not self.directives: self.directives = [] self.directives.append(node) self.visitchildren(node) return node class TestJediTyper(TransformTest): def _test(self, code): return _test_typing(code)[0] def test_typing_global_int_loop(self): code = '''\ for i in range(10): a = i + 1 ''' types = self._test(code) self.assertIn((None, (1, 0)), types) variables = types.pop((None, (1, 0))) 
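# Descriptive note (added for clarity): ``types`` maps a
# (scope_name, position) key to a {variable: set(type_names)} dict, so
# the pop/assert pairs in these tests unpack e.g.
# {(None, (1, 0)): {'a': {'int'}, 'i': {'int'}}} for the loop above.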
self.assertFalse(types) self.assertEqual({'a': set(['int']), 'i': set(['int'])}, variables) def test_typing_function_int_loop(self): code = '''\ def func(x): for i in range(x): a = i + 1 return a ''' types = self._test(code) self.assertIn(('func', (1, 0)), types) variables = types.pop(('func', (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['int']), 'i': set(['int'])}, variables) def test_conflicting_types_in_function(self): code = '''\ def func(a, b): print(a) a = 1 b += a a = 'abc' return a, str(b) print(func(1.5, 2)) ''' types = self._test(code) self.assertIn(('func', (1, 0)), types) variables = types.pop(('func', (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['float', 'int', 'str']), 'b': set(['int'])}, variables) def _test_typing_function_char_loop(self): code = '''\ def func(x): l = [] for c in x: l.append(c) return l print(func('abcdefg')) ''' types = self._test(code) self.assertIn(('func', (1, 0)), types) variables = types.pop(('func', (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['int']), 'i': set(['int'])}, variables) def test_typing_global_list(self): code = '''\ a = [x for x in range(10)] b = list(range(10)) c = a + b d = [0]*10 ''' types = self._test(code) self.assertIn((None, (1, 0)), types) variables = types.pop((None, (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['list']), 'b': set(['list']), 'c': set(['list']), 'd': set(['list'])}, variables) def test_typing_function_list(self): code = '''\ def func(x): a = [[], []] b = [0]* 10 + a c = a[0] print(func([0]*100)) ''' types = self._test(code) self.assertIn(('func', (1, 0)), types) variables = types.pop(('func', (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['list']), 'b': set(['list']), 'c': set(['list']), 'x': set(['list'])}, variables) def test_typing_global_dict(self): code = '''\ a = dict() b = {i: i**2 for i in range(10)} c = a ''' types = self._test(code) self.assertIn((None, (1, 0)), types) variables = types.pop((None, (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['dict']), 'b': set(['dict']), 'c': set(['dict'])}, variables) def test_typing_function_dict(self): code = '''\ def func(x): a = dict() b = {i: i**2 for i in range(10)} c = x print(func({1:2, 'x':7})) ''' types = self._test(code) self.assertIn(('func', (1, 0)), types) variables = types.pop(('func', (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['dict']), 'b': set(['dict']), 'c': set(['dict']), 'x': set(['dict'])}, variables) def test_typing_global_set(self): code = '''\ a = set() # b = {i for i in range(10)} # jedi does not support set comprehension yet c = a d = {1,2,3} e = a | b ''' types = self._test(code) self.assertIn((None, (1, 0)), types) variables = types.pop((None, (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['set']), 'c': set(['set']), 'd': set(['set']), 'e': set(['set'])}, variables) def test_typing_function_set(self): code = '''\ def func(x): a = set() # b = {i for i in range(10)} # jedi does not support set comprehension yet c = a d = a | b print(func({1,2,3})) ''' types = self._test(code) self.assertIn(('func', (1, 0)), types) variables = types.pop(('func', (1, 0))) self.assertFalse(types) self.assertEqual({'a': set(['set']), 'c': set(['set']), 'd': set(['set']), 'x': set(['set'])}, variables) class TestTypeInjection(TestJediTyper): """ Subtype of TestJediTyper that additionally tests type injection and compilation. 
""" def setUp(self): super(TestTypeInjection, self).setUp() compilation_options = Main.CompilationOptions(Main.default_options) ctx = compilation_options.create_context() transform = InterpretCompilerDirectives(ctx, ctx.compiler_directives) transform.module_scope = Symtab.ModuleScope('__main__', None, ctx) self.declarations_finder = DeclarationsFinder() self.pipeline = [NormalizeTree(None), transform, self.declarations_finder] def _test(self, code): types, lines = _test_typing(code, inject=True) tree = self.run_pipeline(self.pipeline, ''.join(lines)) directives = self.declarations_finder.directives # TODO: validate directives return types ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/TestStringIOTree.py0000644000175000017500000000363200000000000030343 0ustar00vincentvincent00000000000000import unittest from Cython import StringIOTree as stringtree code = """ cdef int spam # line 1 cdef ham(): a = 1 b = 2 c = 3 d = 4 def eggs(): pass cpdef bacon(): print spam print 'scotch' print 'tea?' print 'or coffee?' # line 16 """ linemap = dict(enumerate(code.splitlines())) class TestStringIOTree(unittest.TestCase): def setUp(self): self.tree = stringtree.StringIOTree() def test_markers(self): assert not self.tree.allmarkers() def test_insertion(self): self.write_lines((1, 2, 3)) line_4_to_6_insertion_point = self.tree.insertion_point() self.write_lines((7, 8)) line_9_to_13_insertion_point = self.tree.insertion_point() self.write_lines((14, 15, 16)) line_4_insertion_point = line_4_to_6_insertion_point.insertion_point() self.write_lines((5, 6), tree=line_4_to_6_insertion_point) line_9_to_12_insertion_point = ( line_9_to_13_insertion_point.insertion_point()) self.write_line(13, tree=line_9_to_13_insertion_point) self.write_line(4, tree=line_4_insertion_point) self.write_line(9, tree=line_9_to_12_insertion_point) line_10_insertion_point = line_9_to_12_insertion_point.insertion_point() self.write_line(11, tree=line_9_to_12_insertion_point) self.write_line(10, tree=line_10_insertion_point) self.write_line(12, tree=line_9_to_12_insertion_point) self.assertEqual(self.tree.allmarkers(), list(range(1, 17))) self.assertEqual(code.strip(), self.tree.getvalue().strip()) def write_lines(self, linenos, tree=None): for lineno in linenos: self.write_line(lineno, tree=tree) def write_line(self, lineno, tree=None): if tree is None: tree = self.tree tree.markers.append(lineno) tree.write(linemap[lineno] + '\n') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/__init__.py0000644000175000017500000000001500000000000026714 0ustar00vincentvincent00000000000000# empty file ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Tests/xmlrunner.py0000644000175000017500000003471700000000000027227 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- """unittest-xml-reporting is a PyUnit-based TestRunner that can export test results to XML files that can be consumed by a wide range of tools, such as build systems, IDEs and Continuous Integration servers. This module provides the XMLTestRunner class, which is heavily based on the default TextTestRunner. This makes the XMLTestRunner very simple to use. 
The script below, adapted from the unittest documentation, shows how to use XMLTestRunner in a very simple way. In fact, the only difference between this script and the original one is the last line: import random import unittest import xmlrunner class TestSequenceFunctions(unittest.TestCase): def setUp(self): self.seq = range(10) def test_shuffle(self): # make sure the shuffled sequence does not lose any elements random.shuffle(self.seq) self.seq.sort() self.assertEqual(self.seq, range(10)) def test_choice(self): element = random.choice(self.seq) self.assert_(element in self.seq) def test_sample(self): self.assertRaises(ValueError, random.sample, self.seq, 20) for element in random.sample(self.seq, 5): self.assert_(element in self.seq) if __name__ == '__main__': unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports')) """ from __future__ import absolute_import import os import sys import time from unittest import TestResult, _TextTestResult, TextTestRunner import xml.dom.minidom try: from StringIO import StringIO except ImportError: from io import StringIO # doesn't accept 'str' in Py2 class XMLDocument(xml.dom.minidom.Document): def createCDATAOrText(self, data): if ']]>' in data: return self.createTextNode(data) return self.createCDATASection(data) class _TestInfo(object): """This class is used to keep useful information about the execution of a test method. """ # Possible test outcomes (SUCCESS, FAILURE, ERROR) = range(3) def __init__(self, test_result, test_method, outcome=SUCCESS, err=None): "Create a new instance of _TestInfo." self.test_result = test_result self.test_method = test_method self.outcome = outcome self.err = err self.stdout = test_result.stdout and test_result.stdout.getvalue().strip() or '' self.stderr = test_result.stdout and test_result.stderr.getvalue().strip() or '' def get_elapsed_time(self): """Return the time that shows how long the test method took to execute. """ return self.test_result.stop_time - self.test_result.start_time def get_description(self): "Return a text representation of the test method." return self.test_result.getDescription(self.test_method) def get_error_info(self): """Return a text representation of an exception thrown by a test method. """ if not self.err: return '' return self.test_result._exc_info_to_string( self.err, self.test_method) class _XMLTestResult(_TextTestResult): """A test result class that can express test results in a XML report. Used by XMLTestRunner. """ def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1, elapsed_times=True): "Create a new instance of _XMLTestResult." _TextTestResult.__init__(self, stream, descriptions, verbosity) self.successes = [] self.callback = None self.elapsed_times = elapsed_times self.output_patched = False def _prepare_callback(self, test_info, target_list, verbose_str, short_str): """Append a _TestInfo to the given target list and sets a callback method to be called by stopTest method. """ target_list.append(test_info) def callback(): """This callback prints the test method outcome to the stream, as well as the elapsed time. """ # Ignore the elapsed times for a more reliable unit testing if not self.elapsed_times: self.start_time = self.stop_time = 0 if self.showAll: self.stream.writeln('(%.3fs) %s' % \ (test_info.get_elapsed_time(), verbose_str)) elif self.dots: self.stream.write(short_str) self.callback = callback def _patch_standard_output(self): """Replace the stdout and stderr streams with string-based streams in order to capture the tests' output. 
""" if not self.output_patched: (self.old_stdout, self.old_stderr) = (sys.stdout, sys.stderr) self.output_patched = True (sys.stdout, sys.stderr) = (self.stdout, self.stderr) = \ (StringIO(), StringIO()) def _restore_standard_output(self): "Restore the stdout and stderr streams." (sys.stdout, sys.stderr) = (self.old_stdout, self.old_stderr) self.output_patched = False def startTest(self, test): "Called before execute each test method." self._patch_standard_output() self.start_time = time.time() TestResult.startTest(self, test) if self.showAll: self.stream.write(' ' + self.getDescription(test)) self.stream.write(" ... ") def stopTest(self, test): "Called after execute each test method." self._restore_standard_output() _TextTestResult.stopTest(self, test) self.stop_time = time.time() if self.callback and callable(self.callback): self.callback() self.callback = None def addSuccess(self, test): "Called when a test executes successfully." self._prepare_callback(_TestInfo(self, test), self.successes, 'OK', '.') def addFailure(self, test, err): "Called when a test method fails." self._prepare_callback(_TestInfo(self, test, _TestInfo.FAILURE, err), self.failures, 'FAIL', 'F') def addError(self, test, err): "Called when a test method raises an error." self._prepare_callback(_TestInfo(self, test, _TestInfo.ERROR, err), self.errors, 'ERROR', 'E') def printErrorList(self, flavour, errors): "Write some information about the FAIL or ERROR to the stream." for test_info in errors: if isinstance(test_info, tuple): test_info, exc_info = test_info try: t = test_info.get_elapsed_time() except AttributeError: t = 0 try: descr = test_info.get_description() except AttributeError: try: descr = test_info.getDescription() except AttributeError: descr = str(test_info) try: err_info = test_info.get_error_info() except AttributeError: err_info = str(test_info) self.stream.writeln(self.separator1) self.stream.writeln('%s [%.3fs]: %s' % (flavour, t, descr)) self.stream.writeln(self.separator2) self.stream.writeln('%s' % err_info) def _get_info_by_testcase(self): """This method organizes test results by TestCase module. This information is used during the report generation, where a XML report will be generated for each TestCase. """ tests_by_testcase = {} for tests in (self.successes, self.failures, self.errors): for test_info in tests: if not isinstance(test_info, _TestInfo): print("Unexpected test result type: %r" % (test_info,)) continue testcase = type(test_info.test_method) # Ignore module name if it is '__main__' module = testcase.__module__ + '.' if module == '__main__.': module = '' testcase_name = module + testcase.__name__ if testcase_name not in tests_by_testcase: tests_by_testcase[testcase_name] = [] tests_by_testcase[testcase_name].append(test_info) return tests_by_testcase def _report_testsuite(suite_name, tests, xml_document): "Appends the testsuite section to the XML document." 
testsuite = xml_document.createElement('testsuite') xml_document.appendChild(testsuite) testsuite.setAttribute('name', str(suite_name)) testsuite.setAttribute('tests', str(len(tests))) testsuite.setAttribute('time', '%.3f' % sum([e.get_elapsed_time() for e in tests])) failures = len([1 for e in tests if e.outcome == _TestInfo.FAILURE]) testsuite.setAttribute('failures', str(failures)) errors = len([1 for e in tests if e.outcome == _TestInfo.ERROR]) testsuite.setAttribute('errors', str(errors)) return testsuite _report_testsuite = staticmethod(_report_testsuite) def _report_testcase(suite_name, test_result, xml_testsuite, xml_document): "Appends a testcase section to the XML document." testcase = xml_document.createElement('testcase') xml_testsuite.appendChild(testcase) testcase.setAttribute('classname', str(suite_name)) testcase.setAttribute('name', test_result.test_method.shortDescription() or getattr(test_result.test_method, '_testMethodName', str(test_result.test_method))) testcase.setAttribute('time', '%.3f' % test_result.get_elapsed_time()) if (test_result.outcome != _TestInfo.SUCCESS): elem_name = ('failure', 'error')[test_result.outcome-1] failure = xml_document.createElement(elem_name) testcase.appendChild(failure) failure.setAttribute('type', str(test_result.err[0].__name__)) failure.setAttribute('message', str(test_result.err[1])) error_info = test_result.get_error_info() failureText = xml_document.createCDATAOrText(error_info) failure.appendChild(failureText) _report_testcase = staticmethod(_report_testcase) def _report_output(test_runner, xml_testsuite, xml_document, stdout, stderr): "Appends the system-out and system-err sections to the XML document." systemout = xml_document.createElement('system-out') xml_testsuite.appendChild(systemout) systemout_text = xml_document.createCDATAOrText(stdout) systemout.appendChild(systemout_text) systemerr = xml_document.createElement('system-err') xml_testsuite.appendChild(systemerr) systemerr_text = xml_document.createCDATAOrText(stderr) systemerr.appendChild(systemerr_text) _report_output = staticmethod(_report_output) def generate_reports(self, test_runner): "Generates the XML reports to a given XMLTestRunner object." all_results = self._get_info_by_testcase() if type(test_runner.output) == str and not \ os.path.exists(test_runner.output): os.makedirs(test_runner.output) for suite, tests in all_results.items(): doc = XMLDocument() # Build the XML file testsuite = _XMLTestResult._report_testsuite(suite, tests, doc) stdout, stderr = [], [] for test in tests: _XMLTestResult._report_testcase(suite, test, testsuite, doc) if test.stdout: stdout.extend(['*****************', test.get_description(), test.stdout]) if test.stderr: stderr.extend(['*****************', test.get_description(), test.stderr]) _XMLTestResult._report_output(test_runner, testsuite, doc, '\n'.join(stdout), '\n'.join(stderr)) xml_content = doc.toprettyxml(indent='\t') if type(test_runner.output) is str: report_file = open('%s%sTEST-%s.xml' % \ (test_runner.output, os.sep, suite), 'w') try: report_file.write(xml_content) finally: report_file.close() else: # Assume that test_runner.output is a stream test_runner.output.write(xml_content) class XMLTestRunner(TextTestRunner): """A test runner class that outputs the results in JUnit like XML files. """ def __init__(self, output='.', stream=None, descriptions=True, verbose=False, elapsed_times=True): "Create a new instance of XMLTestRunner." 
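# A minimal usage sketch (``SomeTest`` is hypothetical): ``output`` may be
# a directory name -- one TEST-<suite>.xml file per test case class -- or
# any object with a .write() method:
#
#     import io, unittest
#     buf = io.StringIO()
#     runner = XMLTestRunner(output=buf, verbose=True)
#     runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(SomeTest))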
if stream is None: stream = sys.stderr verbosity = (1, 2)[verbose] TextTestRunner.__init__(self, stream, descriptions, verbosity) self.output = output self.elapsed_times = elapsed_times def _make_result(self): """Create the TestResult object which will be used to store information about the executed tests. """ return _XMLTestResult(self.stream, self.descriptions, \ self.verbosity, self.elapsed_times) def run(self, test): "Run the given test case or test suite." # Prepare the test execution result = self._make_result() # Print a nice header self.stream.writeln() self.stream.writeln('Running tests...') self.stream.writeln(result.separator2) # Execute tests start_time = time.time() test(result) stop_time = time.time() time_taken = stop_time - start_time # Generate reports self.stream.writeln() self.stream.writeln('Generating XML reports...') result.generate_reports(self) # Print results result.printErrors() self.stream.writeln(result.separator2) run = result.testsRun self.stream.writeln("Ran %d test%s in %.3fs" % (run, run != 1 and "s" or "", time_taken)) self.stream.writeln() # Error traces if not result.wasSuccessful(): self.stream.write("FAILED (") failed, errored = (len(result.failures), len(result.errors)) if failed: self.stream.write("failures=%d" % failed) if errored: if failed: self.stream.write(", ") self.stream.write("errors=%d" % errored) self.stream.writeln(")") else: self.stream.writeln("OK") return result ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9033275 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/0000755000175000017500000000000000000000000025150 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/CConvert.pyx0000644000175000017500000001036200000000000027437 0ustar00vincentvincent00000000000000#################### FromPyStructUtility #################### cdef extern from *: ctypedef struct PyTypeObject: char* tp_name PyTypeObject *Py_TYPE(obj) bint PyMapping_Check(obj) object PyErr_Format(exc, const char *format, ...) @cname("{{funcname}}") cdef {{struct_type}} {{funcname}}(obj) except *: cdef {{struct_type}} result if not PyMapping_Check(obj): PyErr_Format(TypeError, b"Expected %.16s, got %.200s", b"a mapping", Py_TYPE(obj).tp_name) {{for member in var_entries:}} try: value = obj['{{member.name}}'] except KeyError: raise ValueError("No value specified for struct attribute '{{member.name}}'") result.{{member.cname}} = value {{endfor}} return result #################### FromPyUnionUtility #################### cdef extern from *: ctypedef struct PyTypeObject: char* tp_name PyTypeObject *Py_TYPE(obj) bint PyMapping_Check(obj) object PyErr_Format(exc, const char *format, ...) 
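# Illustrative expansion (an assumption for clarity, not original code):
# this .pyx file is itself a Tempita template, and for a struct member
# named 'x' the {{for member in var_entries}} block in the
# FromPyStructUtility section above expands to roughly
#
#     try:
#         value = obj['x']
#     except KeyError:
#         raise ValueError("No value specified for struct attribute 'x'")
#     result.x = value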
@cname("{{funcname}}") cdef {{struct_type}} {{funcname}}(obj) except *: cdef {{struct_type}} result cdef Py_ssize_t length if not PyMapping_Check(obj): PyErr_Format(TypeError, b"Expected %.16s, got %.200s", b"a mapping", Py_TYPE(obj).tp_name) last_found = None length = len(obj) if length: {{for member in var_entries:}} if '{{member.name}}' in obj: if last_found is not None: raise ValueError("More than one union attribute passed: '%s' and '%s'" % (last_found, '{{member.name}}')) last_found = '{{member.name}}' result.{{member.cname}} = obj['{{member.name}}'] length -= 1 if not length: return result {{endfor}} if last_found is None: raise ValueError("No value specified for any of the union attributes (%s)" % '{{", ".join(member.name for member in var_entries)}}') return result #################### cfunc.to_py #################### @cname("{{cname}}") cdef object {{cname}}({{return_type.ctype}} (*f)({{ ', '.join(arg.type_cname for arg in args) }}) {{except_clause}}): def wrap({{ ', '.join('{arg.ctype} {arg.name}'.format(arg=arg) for arg in args) }}): """wrap({{', '.join(('{arg.name}: {arg.type_displayname}'.format(arg=arg) if arg.type_displayname else arg.name) for arg in args)}}){{if return_type.type_displayname}} -> {{return_type.type_displayname}}{{endif}}""" {{'' if return_type.type.is_void else 'return '}}f({{ ', '.join(arg.name for arg in args) }}) return wrap #################### carray.from_py #################### cdef extern from *: object PyErr_Format(exc, const char *format, ...) @cname("{{cname}}") cdef int {{cname}}(object o, {{base_type}} *v, Py_ssize_t length) except -1: cdef Py_ssize_t i = length try: i = len(o) except (TypeError, OverflowError): pass if i == length: for i, item in enumerate(o): if i >= length: break v[i] = item else: i += 1 # convert index to length if i == length: return 0 PyErr_Format( IndexError, ("too many values found during array assignment, expected %zd" if i >= length else "not enough values found during array assignment, expected %zd, got %zd"), length, i) #################### carray.to_py #################### cdef extern from *: void Py_INCREF(object o) tuple PyTuple_New(Py_ssize_t size) list PyList_New(Py_ssize_t size) void PyTuple_SET_ITEM(object p, Py_ssize_t pos, object o) void PyList_SET_ITEM(object p, Py_ssize_t pos, object o) @cname("{{cname}}") cdef inline list {{cname}}({{base_type}} *v, Py_ssize_t length): cdef size_t i cdef object value l = PyList_New(length) for i in range(length): value = v[i] Py_INCREF(value) PyList_SET_ITEM(l, i, value) return l @cname("{{to_tuple_cname}}") cdef inline tuple {{to_tuple_cname}}({{base_type}} *v, Py_ssize_t length): cdef size_t i cdef object value t = PyTuple_New(length) for i in range(length): value = v[i] Py_INCREF(value) PyTuple_SET_ITEM(t, i, value) return t ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/CpdefEnums.pyx0000644000175000017500000000354500000000000027752 0ustar00vincentvincent00000000000000#################### EnumBase #################### cimport cython cdef extern from *: int PY_VERSION_HEX cdef object __Pyx_OrderedDict if PY_VERSION_HEX >= 0x02070000: from collections import OrderedDict as __Pyx_OrderedDict else: __Pyx_OrderedDict = dict @cython.internal cdef class __Pyx_EnumMeta(type): def __init__(cls, name, parents, dct): type.__init__(cls, name, parents, dct) cls.__members__ = __Pyx_OrderedDict() def __iter__(cls): return iter(cls.__members__.values()) def 
__getitem__(cls, name): return cls.__members__[name] # @cython.internal cdef object __Pyx_EnumBase class __Pyx_EnumBase(int): __metaclass__ = __Pyx_EnumMeta def __new__(cls, value, name=None): for v in cls: if v == value: return v if name is None: raise ValueError("Unknown enum value: '%s'" % value) res = int.__new__(cls, value) res.name = name setattr(cls, name, res) cls.__members__[name] = res return res def __repr__(self): return "<%s.%s: %d>" % (self.__class__.__name__, self.name, self) def __str__(self): return "%s.%s" % (self.__class__.__name__, self.name) if PY_VERSION_HEX >= 0x03040000: from enum import IntEnum as __Pyx_EnumBase #################### EnumType #################### #@requires: EnumBase cdef dict __Pyx_globals = globals() if PY_VERSION_HEX >= 0x03040000: # create new IntEnum() {{name}} = __Pyx_EnumBase('{{name}}', __Pyx_OrderedDict([ {{for item in items}} ('{{item}}', {{item}}), {{endfor}} ])) {{for item in items}} __Pyx_globals['{{item}}'] = {{name}}.{{item}} {{endfor}} else: class {{name}}(__Pyx_EnumBase): pass {{for item in items}} __Pyx_globals['{{item}}'] = {{name}}({{item}}, '{{item}}') {{endfor}} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/CppConvert.pyx0000644000175000017500000001371600000000000030005 0ustar00vincentvincent00000000000000# TODO: Figure out how many of the pass-by-value copies the compiler can eliminate. #################### string.from_py #################### cdef extern from *: cdef cppclass string "{{type}}": string() string(char* c_str, size_t size) cdef const char* __Pyx_PyObject_AsStringAndSize(object, Py_ssize_t*) except NULL @cname("{{cname}}") cdef string {{cname}}(object o) except *: cdef Py_ssize_t length cdef const char* data = __Pyx_PyObject_AsStringAndSize(o, &length) return string(data, length) #################### string.to_py #################### #cimport cython #from libcpp.string cimport string cdef extern from *: cdef cppclass string "{{type}}": char* data() size_t size() {{for py_type in ['PyObject', 'PyUnicode', 'PyStr', 'PyBytes', 'PyByteArray']}} cdef extern from *: cdef object __Pyx_{{py_type}}_FromStringAndSize(const char*, size_t) @cname("{{cname.replace("PyObject", py_type, 1)}}") cdef inline object {{cname.replace("PyObject", py_type, 1)}}(const string& s): return __Pyx_{{py_type}}_FromStringAndSize(s.data(), s.size()) {{endfor}} #################### vector.from_py #################### cdef extern from *: cdef cppclass vector "std::vector" [T]: void push_back(T&) @cname("{{cname}}") cdef vector[X] {{cname}}(object o) except *: cdef vector[X] v for item in o: v.push_back(item) return v #################### vector.to_py #################### cdef extern from *: cdef cppclass vector "const std::vector" [T]: size_t size() T& operator[](size_t) @cname("{{cname}}") cdef object {{cname}}(vector[X]& v): return [v[i] for i in range(v.size())] #################### list.from_py #################### cdef extern from *: cdef cppclass cpp_list "std::list" [T]: void push_back(T&) @cname("{{cname}}") cdef cpp_list[X] {{cname}}(object o) except *: cdef cpp_list[X] l for item in o: l.push_back(item) return l #################### list.to_py #################### cimport cython cdef extern from *: cdef cppclass cpp_list "std::list" [T]: cppclass const_iterator: T& operator*() const_iterator operator++() bint operator!=(const_iterator) const_iterator begin() const_iterator end() @cname("{{cname}}") cdef object 
{{cname}}(const cpp_list[X]& v): o = [] cdef cpp_list[X].const_iterator iter = v.begin() while iter != v.end(): o.append(cython.operator.dereference(iter)) cython.operator.preincrement(iter) return o #################### set.from_py #################### cdef extern from *: cdef cppclass set "std::{{maybe_unordered}}set" [T]: void insert(T&) @cname("{{cname}}") cdef set[X] {{cname}}(object o) except *: cdef set[X] s for item in o: s.insert(item) return s #################### set.to_py #################### cimport cython cdef extern from *: cdef cppclass cpp_set "std::{{maybe_unordered}}set" [T]: cppclass const_iterator: T& operator*() const_iterator operator++() bint operator!=(const_iterator) const_iterator begin() const_iterator end() @cname("{{cname}}") cdef object {{cname}}(const cpp_set[X]& s): o = set() cdef cpp_set[X].const_iterator iter = s.begin() while iter != s.end(): o.add(cython.operator.dereference(iter)) cython.operator.preincrement(iter) return o #################### pair.from_py #################### cdef extern from *: cdef cppclass pair "std::pair" [T, U]: pair() pair(T&, U&) @cname("{{cname}}") cdef pair[X,Y] {{cname}}(object o) except *: x, y = o return pair[X,Y](x, y) #################### pair.to_py #################### cdef extern from *: cdef cppclass pair "std::pair" [T, U]: T first U second @cname("{{cname}}") cdef object {{cname}}(const pair[X,Y]& p): return p.first, p.second #################### map.from_py #################### cdef extern from *: cdef cppclass pair "std::pair" [T, U]: pair(T&, U&) cdef cppclass map "std::{{maybe_unordered}}map" [T, U]: void insert(pair[T, U]&) cdef cppclass vector "std::vector" [T]: pass @cname("{{cname}}") cdef map[X,Y] {{cname}}(object o) except *: cdef dict d = o cdef map[X,Y] m for key, value in d.iteritems(): m.insert(pair[X,Y](key, value)) return m #################### map.to_py #################### # TODO: Work out const so that this can take a const # reference rather than pass by value. 
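# --- Added note (not part of the original templates) -------------------------
# Every from_py/to_py pair in this file follows the same pattern: a cname'd
# helper that Cython inserts implicitly whenever a value crosses the
# Python/C++ boundary. A minimal sketch of the user-visible effect, assuming a
# .pyx module compiled with language="c++":
#
#     from libcpp.map cimport map
#     cdef map[int, int] m = {1: 2, 3: 4}   # map.from_py (above)
#     d = m                                 # map.to_py (below) -> {1: 2, 3: 4}
#
# The same round-trip works for string, vector, list, set and pair.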
cimport cython cdef extern from *: cdef cppclass map "std::{{maybe_unordered}}map" [T, U]: cppclass value_type: T first U second cppclass const_iterator: value_type& operator*() const_iterator operator++() bint operator!=(const_iterator) const_iterator begin() const_iterator end() @cname("{{cname}}") cdef object {{cname}}(const map[X,Y]& s): o = {} cdef const map[X,Y].value_type *key_value cdef map[X,Y].const_iterator iter = s.begin() while iter != s.end(): key_value = &cython.operator.dereference(iter) o[key_value.first] = key_value.second cython.operator.preincrement(iter) return o #################### complex.from_py #################### cdef extern from *: cdef cppclass std_complex "std::complex" [T]: std_complex() std_complex(T, T) except + @cname("{{cname}}") cdef std_complex[X] {{cname}}(object o) except *: cdef double complex z = o return std_complex[X](z.real, z.imag) #################### complex.to_py #################### cdef extern from *: cdef cppclass std_complex "std::complex" [T]: X real() X imag() @cname("{{cname}}") cdef object {{cname}}(const std_complex[X]& z): cdef double complex tmp tmp.real = z.real() tmp.imag = z.imag() return tmp ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/MemoryView.pyx0000644000175000017500000014072400000000000030025 0ustar00vincentvincent00000000000000#################### View.MemoryView #################### # This utility provides cython.array and cython.view.memoryview from __future__ import absolute_import cimport cython # from cpython cimport ... cdef extern from "Python.h": int PyIndex_Check(object) object PyLong_FromVoidPtr(void *) cdef extern from "pythread.h": ctypedef void *PyThread_type_lock PyThread_type_lock PyThread_allocate_lock() void PyThread_free_lock(PyThread_type_lock) int PyThread_acquire_lock(PyThread_type_lock, int mode) nogil void PyThread_release_lock(PyThread_type_lock) nogil cdef extern from "": void *memset(void *b, int c, size_t len) cdef extern from *: int __Pyx_GetBuffer(object, Py_buffer *, int) except -1 void __Pyx_ReleaseBuffer(Py_buffer *) ctypedef struct PyObject ctypedef Py_ssize_t Py_intptr_t void Py_INCREF(PyObject *) void Py_DECREF(PyObject *) void* PyMem_Malloc(size_t n) void PyMem_Free(void *p) void* PyObject_Malloc(size_t n) void PyObject_Free(void *p) cdef struct __pyx_memoryview "__pyx_memoryview_obj": Py_buffer view PyObject *obj __Pyx_TypeInfo *typeinfo ctypedef struct {{memviewslice_name}}: __pyx_memoryview *memview char *data Py_ssize_t shape[{{max_dims}}] Py_ssize_t strides[{{max_dims}}] Py_ssize_t suboffsets[{{max_dims}}] void __PYX_INC_MEMVIEW({{memviewslice_name}} *memslice, int have_gil) void __PYX_XDEC_MEMVIEW({{memviewslice_name}} *memslice, int have_gil) ctypedef struct __pyx_buffer "Py_buffer": PyObject *obj PyObject *Py_None cdef enum: PyBUF_C_CONTIGUOUS, PyBUF_F_CONTIGUOUS, PyBUF_ANY_CONTIGUOUS PyBUF_FORMAT PyBUF_WRITABLE PyBUF_STRIDES PyBUF_INDIRECT PyBUF_ND PyBUF_RECORDS PyBUF_RECORDS_RO ctypedef struct __Pyx_TypeInfo: pass cdef object capsule "__pyx_capsule_create" (void *p, char *sig) cdef int __pyx_array_getbuffer(PyObject *obj, Py_buffer view, int flags) cdef int __pyx_memoryview_getbuffer(PyObject *obj, Py_buffer view, int flags) cdef extern from *: ctypedef int __pyx_atomic_int {{memviewslice_name}} slice_copy_contig "__pyx_memoryview_copy_new_contig"( __Pyx_memviewslice *from_mvs, char *mode, int ndim, size_t sizeof_dtype, int contig_flag, bint 
dtype_is_object) nogil except * bint slice_is_contig "__pyx_memviewslice_is_contig" ( {{memviewslice_name}} mvs, char order, int ndim) nogil bint slices_overlap "__pyx_slices_overlap" ({{memviewslice_name}} *slice1, {{memviewslice_name}} *slice2, int ndim, size_t itemsize) nogil cdef extern from "": void *malloc(size_t) nogil void free(void *) nogil void *memcpy(void *dest, void *src, size_t n) nogil # ### cython.array class # @cname("__pyx_array") cdef class array: cdef: char *data Py_ssize_t len char *format int ndim Py_ssize_t *_shape Py_ssize_t *_strides Py_ssize_t itemsize unicode mode # FIXME: this should have been a simple 'char' bytes _format void (*callback_free_data)(void *data) # cdef object _memview cdef bint free_data cdef bint dtype_is_object def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, mode="c", bint allocate_buffer=True): cdef int idx cdef Py_ssize_t i, dim cdef PyObject **p self.ndim = len(shape) self.itemsize = itemsize if not self.ndim: raise ValueError("Empty shape tuple for cython.array") if itemsize <= 0: raise ValueError("itemsize <= 0 for cython.array") if not isinstance(format, bytes): format = format.encode('ASCII') self._format = format # keep a reference to the byte string self.format = self._format # use single malloc() for both shape and strides self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) self._strides = self._shape + self.ndim if not self._shape: raise MemoryError("unable to allocate shape and strides.") # cdef Py_ssize_t dim, stride for idx, dim in enumerate(shape): if dim <= 0: raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) self._shape[idx] = dim cdef char order if mode == 'fortran': order = b'F' self.mode = u'fortran' elif mode == 'c': order = b'C' self.mode = u'c' else: raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) self.free_data = allocate_buffer self.dtype_is_object = format == b'O' if allocate_buffer: # use malloc() for backwards compatibility # in case external code wants to change the data pointer self.data = malloc(self.len) if not self.data: raise MemoryError("unable to allocate array data.") if self.dtype_is_object: p = self.data for i in range(self.len / itemsize): p[i] = Py_None Py_INCREF(Py_None) @cname('getbuffer') def __getbuffer__(self, Py_buffer *info, int flags): cdef int bufmode = -1 if self.mode == u"c": bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS elif self.mode == u"fortran": bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS if not (flags & bufmode): raise ValueError("Can only create a buffer that is contiguous in memory.") info.buf = self.data info.len = self.len info.ndim = self.ndim info.shape = self._shape info.strides = self._strides info.suboffsets = NULL info.itemsize = self.itemsize info.readonly = 0 if flags & PyBUF_FORMAT: info.format = self.format else: info.format = NULL info.obj = self __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") def __dealloc__(array self): if self.callback_free_data != NULL: self.callback_free_data(self.data) elif self.free_data: if self.dtype_is_object: refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, False) free(self.data) PyObject_Free(self._shape) @property def memview(self): return self.get_memview() @cname('get_memview') cdef get_memview(self): flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE return memoryview(self, flags, 
self.dtype_is_object) def __len__(self): return self._shape[0] def __getattr__(self, attr): return getattr(self.memview, attr) def __getitem__(self, item): return self.memview[item] def __setitem__(self, item, value): self.memview[item] = value @cname("__pyx_array_new") cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *mode, char *buf): cdef array result if buf == NULL: result = array(shape, itemsize, format, mode.decode('ASCII')) else: result = array(shape, itemsize, format, mode.decode('ASCII'), allocate_buffer=False) result.data = buf return result # ### Memoryview constants and cython.view.memoryview class # # Disable generic_contiguous, as it makes trouble verifying contiguity: # - 'contiguous' or '::1' means the dimension is contiguous with dtype # - 'indirect_contiguous' means a contiguous list of pointers # - dtype contiguous must be contiguous in the first or last dimension # from the start, or from the dimension following the last indirect dimension # # e.g. # int[::indirect_contiguous, ::contiguous, :] # # is valid (list of pointers to 2d fortran-contiguous array), but # # int[::generic_contiguous, ::contiguous, :] # # would mean you'd have assert dimension 0 to be indirect (and pointer contiguous) at runtime. # So it doesn't bring any performance benefit, and it's only confusing. @cname('__pyx_MemviewEnum') cdef class Enum(object): cdef object name def __init__(self, name): self.name = name def __repr__(self): return self.name cdef generic = Enum("") cdef strided = Enum("") # default cdef indirect = Enum("") # Disable generic_contiguous, as it is a troublemaker #cdef generic_contiguous = Enum("") cdef contiguous = Enum("") cdef indirect_contiguous = Enum("") # 'follow' is implied when the first or last axis is ::1 @cname('__pyx_align_pointer') cdef void *align_pointer(void *memory, size_t alignment) nogil: "Align pointer memory on a given boundary" cdef Py_intptr_t aligned_p = memory cdef size_t offset with cython.cdivision(True): offset = aligned_p % alignment if offset > 0: aligned_p += alignment - offset return aligned_p # pre-allocate thread locks for reuse ## note that this could be implemented in a more beautiful way in "normal" Cython, ## but this code gets merged into the user module and not everything works there. 
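# --- Added note (not part of the original source) ----------------------------
# Worked example for align_pointer() above: with memory = 0x1003 and
# alignment = 8, offset = 0x1003 % 8 = 3, so the pointer is advanced by
# 8 - 3 = 5 bytes to the next 8-byte boundary, 0x1008. The pre-allocated locks
# below avoid a PyThread_allocate_lock() call per memoryview; only when more
# than THREAD_LOCKS_PREALLOCATED views are alive at once does __cinit__ fall
# back to allocating a fresh lock.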
DEF THREAD_LOCKS_PREALLOCATED = 8
cdef int __pyx_memoryview_thread_locks_used = 0
cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
    PyThread_allocate_lock(),
]


@cname('__pyx_memoryview')
cdef class memoryview(object):

    cdef object obj
    cdef object _size
    cdef object _array_interface
    cdef PyThread_type_lock lock
    # the following array will contain a single __pyx_atomic int with
    # suitable alignment
    cdef __pyx_atomic_int acquisition_count[2]
    cdef __pyx_atomic_int *acquisition_count_aligned_p
    cdef Py_buffer view
    cdef int flags
    cdef bint dtype_is_object
    cdef __Pyx_TypeInfo *typeinfo

    def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
        self.obj = obj
        self.flags = flags
        if type(self) is memoryview or obj is not None:
            __Pyx_GetBuffer(obj, &self.view, flags)
            if self.view.obj == NULL:
                (<__pyx_buffer *> &self.view).obj = Py_None
                Py_INCREF(Py_None)

        global __pyx_memoryview_thread_locks_used
        if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
            self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
            __pyx_memoryview_thread_locks_used += 1
        if self.lock is NULL:
            self.lock = PyThread_allocate_lock()
            if self.lock is NULL:
                raise MemoryError

        if flags & PyBUF_FORMAT:
            self.dtype_is_object = (self.view.format[0] == b'O' and
                                    self.view.format[1] == b'\0')
        else:
            self.dtype_is_object = dtype_is_object

        self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
            &self.acquisition_count[0], sizeof(__pyx_atomic_int))
        self.typeinfo = NULL

    def __dealloc__(memoryview self):
        if self.obj is not None:
            __Pyx_ReleaseBuffer(&self.view)
        elif (<__pyx_buffer *> &self.view).obj == Py_None:
            # Undo the incref in __cinit__() above.
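            # --- Added note (not part of the original source) ----------------
            # __cinit__ stored Py_None (with an incref) in view.obj when the
            # buffer provider left no owner, so the Py_buffer always carries a
            # valid object pointer. Resetting the field to NULL before the
            # Py_DECREF below keeps None's refcount balanced exactly once per
            # memoryview.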
(<__pyx_buffer *> &self.view).obj = NULL Py_DECREF(Py_None) cdef int i global __pyx_memoryview_thread_locks_used if self.lock != NULL: for i in range(__pyx_memoryview_thread_locks_used): if __pyx_memoryview_thread_locks[i] is self.lock: __pyx_memoryview_thread_locks_used -= 1 if i != __pyx_memoryview_thread_locks_used: __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) break else: PyThread_free_lock(self.lock) cdef char *get_item_pointer(memoryview self, object index) except NULL: cdef Py_ssize_t dim cdef char *itemp = self.view.buf for dim, idx in enumerate(index): itemp = pybuffer_index(&self.view, itemp, idx, dim) return itemp #@cname('__pyx_memoryview_getitem') def __getitem__(memoryview self, object index): if index is Ellipsis: return self have_slices, indices = _unellipsify(index, self.view.ndim) cdef char *itemp if have_slices: return memview_slice(self, indices) else: itemp = self.get_item_pointer(indices) return self.convert_item_to_object(itemp) def __setitem__(memoryview self, object index, object value): if self.view.readonly: raise TypeError("Cannot assign to read-only memoryview") have_slices, index = _unellipsify(index, self.view.ndim) if have_slices: obj = self.is_slice(value) if obj: self.setitem_slice_assignment(self[index], obj) else: self.setitem_slice_assign_scalar(self[index], value) else: self.setitem_indexed(index, value) cdef is_slice(self, obj): if not isinstance(obj, memoryview): try: obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, self.dtype_is_object) except TypeError: return None return obj cdef setitem_slice_assignment(self, dst, src): cdef {{memviewslice_name}} dst_slice cdef {{memviewslice_name}} src_slice memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], get_slice_from_memview(dst, &dst_slice)[0], src.ndim, dst.ndim, self.dtype_is_object) cdef setitem_slice_assign_scalar(self, memoryview dst, value): cdef int array[128] cdef void *tmp = NULL cdef void *item cdef {{memviewslice_name}} *dst_slice cdef {{memviewslice_name}} tmp_slice dst_slice = get_slice_from_memview(dst, &tmp_slice) if self.view.itemsize > sizeof(array): tmp = PyMem_Malloc(self.view.itemsize) if tmp == NULL: raise MemoryError item = tmp else: item = array try: if self.dtype_is_object: ( item)[0] = value else: self.assign_item_from_object( item, value) # It would be easy to support indirect dimensions, but it's easier # to disallow :) if self.view.suboffsets != NULL: assert_direct_dimensions(self.view.suboffsets, self.view.ndim) slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, item, self.dtype_is_object) finally: PyMem_Free(tmp) cdef setitem_indexed(self, index, value): cdef char *itemp = self.get_item_pointer(index) self.assign_item_from_object(itemp, value) cdef convert_item_to_object(self, char *itemp): """Only used if instantiated manually by the user, or if Cython doesn't know how to convert the type""" import struct cdef bytes bytesitem # Do a manual and complete check here instead of this easy hack bytesitem = itemp[:self.view.itemsize] try: result = struct.unpack(self.view.format, bytesitem) except struct.error: raise ValueError("Unable to convert item to object") else: if len(self.view.format) == 1: return result[0] return result cdef assign_item_from_object(self, char *itemp, object value): """Only used if instantiated manually by the user, or if Cython doesn't know 
how to convert the type""" import struct cdef char c cdef bytes bytesvalue cdef Py_ssize_t i if isinstance(value, tuple): bytesvalue = struct.pack(self.view.format, *value) else: bytesvalue = struct.pack(self.view.format, value) for i, c in enumerate(bytesvalue): itemp[i] = c @cname('getbuffer') def __getbuffer__(self, Py_buffer *info, int flags): if flags & PyBUF_WRITABLE and self.view.readonly: raise ValueError("Cannot create writable memory view from read-only memoryview") if flags & PyBUF_ND: info.shape = self.view.shape else: info.shape = NULL if flags & PyBUF_STRIDES: info.strides = self.view.strides else: info.strides = NULL if flags & PyBUF_INDIRECT: info.suboffsets = self.view.suboffsets else: info.suboffsets = NULL if flags & PyBUF_FORMAT: info.format = self.view.format else: info.format = NULL info.buf = self.view.buf info.ndim = self.view.ndim info.itemsize = self.view.itemsize info.len = self.view.len info.readonly = self.view.readonly info.obj = self __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # Some properties that have the same semantics as in NumPy @property def T(self): cdef _memoryviewslice result = memoryview_copy(self) transpose_memslice(&result.from_slice) return result @property def base(self): return self.obj @property def shape(self): return tuple([length for length in self.view.shape[:self.view.ndim]]) @property def strides(self): if self.view.strides == NULL: # Note: we always ask for strides, so if this is not set it's a bug raise ValueError("Buffer view does not expose strides") return tuple([stride for stride in self.view.strides[:self.view.ndim]]) @property def suboffsets(self): if self.view.suboffsets == NULL: return (-1,) * self.view.ndim return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) @property def ndim(self): return self.view.ndim @property def itemsize(self): return self.view.itemsize @property def nbytes(self): return self.size * self.view.itemsize @property def size(self): if self._size is None: result = 1 for length in self.view.shape[:self.view.ndim]: result *= length self._size = result return self._size def __len__(self): if self.view.ndim >= 1: return self.view.shape[0] return 0 def __repr__(self): return "" % (self.base.__class__.__name__, id(self)) def __str__(self): return "" % (self.base.__class__.__name__,) # Support the same attributes as memoryview slices def is_c_contig(self): cdef {{memviewslice_name}} *mslice cdef {{memviewslice_name}} tmp mslice = get_slice_from_memview(self, &tmp) return slice_is_contig(mslice[0], 'C', self.view.ndim) def is_f_contig(self): cdef {{memviewslice_name}} *mslice cdef {{memviewslice_name}} tmp mslice = get_slice_from_memview(self, &tmp) return slice_is_contig(mslice[0], 'F', self.view.ndim) def copy(self): cdef {{memviewslice_name}} mslice cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS slice_copy(self, &mslice) mslice = slice_copy_contig(&mslice, "c", self.view.ndim, self.view.itemsize, flags|PyBUF_C_CONTIGUOUS, self.dtype_is_object) return memoryview_copy_from_slice(self, &mslice) def copy_fortran(self): cdef {{memviewslice_name}} src, dst cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS slice_copy(self, &src) dst = slice_copy_contig(&src, "fortran", self.view.ndim, self.view.itemsize, flags|PyBUF_F_CONTIGUOUS, self.dtype_is_object) return memoryview_copy_from_slice(self, &dst) @cname('__pyx_memoryview_new') cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): cdef memoryview result = 
memoryview(o, flags, dtype_is_object) result.typeinfo = typeinfo return result @cname('__pyx_memoryview_check') cdef inline bint memoryview_check(object o): return isinstance(o, memoryview) cdef tuple _unellipsify(object index, int ndim): """ Replace all ellipses with full slices and fill incomplete indices with full slices. """ if not isinstance(index, tuple): tup = (index,) else: tup = index result = [] have_slices = False seen_ellipsis = False for idx, item in enumerate(tup): if item is Ellipsis: if not seen_ellipsis: result.extend([slice(None)] * (ndim - len(tup) + 1)) seen_ellipsis = True else: result.append(slice(None)) have_slices = True else: if not isinstance(item, slice) and not PyIndex_Check(item): raise TypeError("Cannot index with type '%s'" % type(item)) have_slices = have_slices or isinstance(item, slice) result.append(item) nslices = ndim - len(result) if nslices: result.extend([slice(None)] * nslices) return have_slices or nslices, tuple(result) cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): for suboffset in suboffsets[:ndim]: if suboffset >= 0: raise ValueError("Indirect dimensions not supported") # ### Slicing a memoryview # @cname('__pyx_memview_slice') cdef memoryview memview_slice(memoryview memview, object indices): cdef int new_ndim = 0, suboffset_dim = -1, dim cdef bint negative_step cdef {{memviewslice_name}} src, dst cdef {{memviewslice_name}} *p_src # dst is copied by value in memoryview_fromslice -- initialize it # src is never copied memset(&dst, 0, sizeof(dst)) cdef _memoryviewslice memviewsliceobj assert memview.view.ndim > 0 if isinstance(memview, _memoryviewslice): memviewsliceobj = memview p_src = &memviewsliceobj.from_slice else: slice_copy(memview, &src) p_src = &src # Note: don't use variable src at this point # SubNote: we should be able to declare variables in blocks... # memoryview_fromslice() will inc our dst slice dst.memview = p_src.memview dst.data = p_src.data # Put everything in temps to avoid this bloody warning: # "Argument evaluation order in C function call is undefined and # may not be as expected" cdef {{memviewslice_name}} *p_dst = &dst cdef int *p_suboffset_dim = &suboffset_dim cdef Py_ssize_t start, stop, step cdef bint have_start, have_stop, have_step for dim, index in enumerate(indices): if PyIndex_Check(index): slice_memviewslice( p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], dim, new_ndim, p_suboffset_dim, index, 0, 0, # start, stop, step 0, 0, 0, # have_{start,stop,step} False) elif index is None: p_dst.shape[new_ndim] = 1 p_dst.strides[new_ndim] = 0 p_dst.suboffsets[new_ndim] = -1 new_ndim += 1 else: start = index.start or 0 stop = index.stop or 0 step = index.step or 0 have_start = index.start is not None have_stop = index.stop is not None have_step = index.step is not None slice_memviewslice( p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], dim, new_ndim, p_suboffset_dim, start, stop, step, have_start, have_stop, have_step, True) new_ndim += 1 if isinstance(memview, _memoryviewslice): return memoryview_fromslice(dst, new_ndim, memviewsliceobj.to_object_func, memviewsliceobj.to_dtype_func, memview.dtype_is_object) else: return memoryview_fromslice(dst, new_ndim, NULL, NULL, memview.dtype_is_object) # ### Slicing in a single dimension of a memoryviewslice # cdef extern from "": void abort() nogil void printf(char *s, ...) 
nogil cdef extern from "": ctypedef struct FILE FILE *stderr int fputs(char *s, FILE *stream) cdef extern from "pystate.h": void PyThreadState_Get() nogil # These are not actually nogil, but we check for the GIL before calling them void PyErr_SetString(PyObject *type, char *msg) nogil PyObject *PyErr_Format(PyObject *exc, char *msg, ...) nogil @cname('__pyx_memoryview_slice_memviewslice') cdef int slice_memviewslice( {{memviewslice_name}} *dst, Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, int dim, int new_ndim, int *suboffset_dim, Py_ssize_t start, Py_ssize_t stop, Py_ssize_t step, int have_start, int have_stop, int have_step, bint is_slice) nogil except -1: """ Create a new slice dst given slice src. dim - the current src dimension (indexing will make dimensions disappear) new_dim - the new dst dimension suboffset_dim - pointer to a single int initialized to -1 to keep track of where slicing offsets should be added """ cdef Py_ssize_t new_shape cdef bint negative_step if not is_slice: # index is a normal integer-like index if start < 0: start += shape if not 0 <= start < shape: _err_dim(IndexError, "Index out of bounds (axis %d)", dim) else: # index is a slice negative_step = have_step != 0 and step < 0 if have_step and step == 0: _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # check our bounds and set defaults if have_start: if start < 0: start += shape if start < 0: start = 0 elif start >= shape: if negative_step: start = shape - 1 else: start = shape else: if negative_step: start = shape - 1 else: start = 0 if have_stop: if stop < 0: stop += shape if stop < 0: stop = 0 elif stop > shape: stop = shape else: if negative_step: stop = -1 else: stop = shape if not have_step: step = 1 # len = ceil( (stop - start) / step ) with cython.cdivision(True): new_shape = (stop - start) // step if (stop - start) - step * new_shape: new_shape += 1 if new_shape < 0: new_shape = 0 # shape/strides/suboffsets dst.strides[new_ndim] = stride * step dst.shape[new_ndim] = new_shape dst.suboffsets[new_ndim] = suboffset # Add the slicing or idexing offsets to the right suboffset or base data * if suboffset_dim[0] < 0: dst.data += start * stride else: dst.suboffsets[suboffset_dim[0]] += start * stride if suboffset >= 0: if not is_slice: if new_ndim == 0: dst.data = ( dst.data)[0] + suboffset else: _err_dim(IndexError, "All dimensions preceding dimension %d " "must be indexed and not sliced", dim) else: suboffset_dim[0] = new_ndim return 0 # ### Index a memoryview # @cname('__pyx_pybuffer_index') cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, Py_ssize_t dim) except NULL: cdef Py_ssize_t shape, stride, suboffset = -1 cdef Py_ssize_t itemsize = view.itemsize cdef char *resultp if view.ndim == 0: shape = view.len / itemsize stride = itemsize else: shape = view.shape[dim] stride = view.strides[dim] if view.suboffsets != NULL: suboffset = view.suboffsets[dim] if index < 0: index += view.shape[dim] if index < 0: raise IndexError("Out of bounds on buffer access (axis %d)" % dim) if index >= shape: raise IndexError("Out of bounds on buffer access (axis %d)" % dim) resultp = bufp + index * stride if suboffset >= 0: resultp = ( resultp)[0] + suboffset return resultp # ### Transposing a memoryviewslice # @cname('__pyx_memslice_transpose') cdef int transpose_memslice({{memviewslice_name}} *memslice) nogil except 0: cdef int ndim = memslice.memview.view.ndim cdef Py_ssize_t *shape = memslice.shape cdef Py_ssize_t *strides = memslice.strides # reverse strides and shape cdef 
int i, j for i in range(ndim / 2): j = ndim - 1 - i strides[i], strides[j] = strides[j], strides[i] shape[i], shape[j] = shape[j], shape[i] if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: _err(ValueError, "Cannot transpose memoryview with indirect dimensions") return 1 # ### Creating new memoryview objects from slices and memoryviews # @cname('__pyx_memoryviewslice') cdef class _memoryviewslice(memoryview): "Internal class for passing memoryview slices to Python" # We need this to keep our shape/strides/suboffset pointers valid cdef {{memviewslice_name}} from_slice # We need this only to print it's class' name cdef object from_object cdef object (*to_object_func)(char *) cdef int (*to_dtype_func)(char *, object) except 0 def __dealloc__(self): __PYX_XDEC_MEMVIEW(&self.from_slice, 1) cdef convert_item_to_object(self, char *itemp): if self.to_object_func != NULL: return self.to_object_func(itemp) else: return memoryview.convert_item_to_object(self, itemp) cdef assign_item_from_object(self, char *itemp, object value): if self.to_dtype_func != NULL: self.to_dtype_func(itemp, value) else: memoryview.assign_item_from_object(self, itemp, value) @property def base(self): return self.from_object __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") @cname('__pyx_memoryview_fromslice') cdef memoryview_fromslice({{memviewslice_name}} memviewslice, int ndim, object (*to_object_func)(char *), int (*to_dtype_func)(char *, object) except 0, bint dtype_is_object): cdef _memoryviewslice result if memviewslice.memview == Py_None: return None # assert 0 < ndim <= memviewslice.memview.view.ndim, ( # ndim, memviewslice.memview.view.ndim) result = _memoryviewslice(None, 0, dtype_is_object) result.from_slice = memviewslice __PYX_INC_MEMVIEW(&memviewslice, 1) result.from_object = ( memviewslice.memview).base result.typeinfo = memviewslice.memview.typeinfo result.view = memviewslice.memview.view result.view.buf = memviewslice.data result.view.ndim = ndim (<__pyx_buffer *> &result.view).obj = Py_None Py_INCREF(Py_None) if (memviewslice.memview).flags & PyBUF_WRITABLE: result.flags = PyBUF_RECORDS else: result.flags = PyBUF_RECORDS_RO result.view.shape = result.from_slice.shape result.view.strides = result.from_slice.strides # only set suboffsets if actually used, otherwise set to NULL to improve compatibility result.view.suboffsets = NULL for suboffset in result.from_slice.suboffsets[:ndim]: if suboffset >= 0: result.view.suboffsets = result.from_slice.suboffsets break result.view.len = result.view.itemsize for length in result.view.shape[:ndim]: result.view.len *= length result.to_object_func = to_object_func result.to_dtype_func = to_dtype_func return result @cname('__pyx_memoryview_get_slice_from_memoryview') cdef {{memviewslice_name}} *get_slice_from_memview(memoryview memview, {{memviewslice_name}} *mslice): cdef _memoryviewslice obj if isinstance(memview, _memoryviewslice): obj = memview return &obj.from_slice else: slice_copy(memview, mslice) return mslice @cname('__pyx_memoryview_slice_copy') cdef void slice_copy(memoryview memview, {{memviewslice_name}} *dst): cdef int dim cdef (Py_ssize_t*) shape, strides, suboffsets shape = memview.view.shape strides = memview.view.strides suboffsets = memview.view.suboffsets dst.memview = <__pyx_memoryview *> memview dst.data = memview.view.buf for dim in range(memview.view.ndim): dst.shape[dim] = shape[dim] dst.strides[dim] = strides[dim] dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 
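# --- Added note (not part of the original source) ----------------------------
# slice_copy() above flattens the Py_buffer geometry of a memoryview into the
# fixed-size {{memviewslice_name}} struct that the rest of this file works
# with; absent suboffsets become -1, marking a "direct" dimension. A sketch of
# the resulting layout for a C-contiguous 2x3 float32 buffer (itemsize 4):
#
#     shape      = [2, 3]
#     strides    = [12, 4]      # row stride = 3 * itemsize
#     suboffsets = [-1, -1]     # no indirection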
@cname('__pyx_memoryview_copy_object') cdef memoryview_copy(memoryview memview): "Create a new memoryview object" cdef {{memviewslice_name}} memviewslice slice_copy(memview, &memviewslice) return memoryview_copy_from_slice(memview, &memviewslice) @cname('__pyx_memoryview_copy_object_from_slice') cdef memoryview_copy_from_slice(memoryview memview, {{memviewslice_name}} *memviewslice): """ Create a new memoryview object from a given memoryview object and slice. """ cdef object (*to_object_func)(char *) cdef int (*to_dtype_func)(char *, object) except 0 if isinstance(memview, _memoryviewslice): to_object_func = (<_memoryviewslice> memview).to_object_func to_dtype_func = (<_memoryviewslice> memview).to_dtype_func else: to_object_func = NULL to_dtype_func = NULL return memoryview_fromslice(memviewslice[0], memview.view.ndim, to_object_func, to_dtype_func, memview.dtype_is_object) # ### Copy the contents of a memoryview slices # cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: if arg < 0: return -arg else: return arg @cname('__pyx_get_best_slice_order') cdef char get_best_order({{memviewslice_name}} *mslice, int ndim) nogil: """ Figure out the best memory access order for a given slice. """ cdef int i cdef Py_ssize_t c_stride = 0 cdef Py_ssize_t f_stride = 0 for i in range(ndim - 1, -1, -1): if mslice.shape[i] > 1: c_stride = mslice.strides[i] break for i in range(ndim): if mslice.shape[i] > 1: f_stride = mslice.strides[i] break if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): return 'C' else: return 'F' @cython.cdivision(True) cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, char *dst_data, Py_ssize_t *dst_strides, Py_ssize_t *src_shape, Py_ssize_t *dst_shape, int ndim, size_t itemsize) nogil: # Note: src_extent is 1 if we're broadcasting # dst_extent always >= src_extent as we don't do reductions cdef Py_ssize_t i cdef Py_ssize_t src_extent = src_shape[0] cdef Py_ssize_t dst_extent = dst_shape[0] cdef Py_ssize_t src_stride = src_strides[0] cdef Py_ssize_t dst_stride = dst_strides[0] if ndim == 1: if (src_stride > 0 and dst_stride > 0 and src_stride == itemsize == dst_stride): memcpy(dst_data, src_data, itemsize * dst_extent) else: for i in range(dst_extent): memcpy(dst_data, src_data, itemsize) src_data += src_stride dst_data += dst_stride else: for i in range(dst_extent): _copy_strided_to_strided(src_data, src_strides + 1, dst_data, dst_strides + 1, src_shape + 1, dst_shape + 1, ndim - 1, itemsize) src_data += src_stride dst_data += dst_stride cdef void copy_strided_to_strided({{memviewslice_name}} *src, {{memviewslice_name}} *dst, int ndim, size_t itemsize) nogil: _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, src.shape, dst.shape, ndim, itemsize) @cname('__pyx_memoryview_slice_get_size') cdef Py_ssize_t slice_get_size({{memviewslice_name}} *src, int ndim) nogil: "Return the size of the memory occupied by the slice in number of bytes" cdef int i cdef Py_ssize_t size = src.memview.view.itemsize for i in range(ndim): size *= src.shape[i] return size @cname('__pyx_fill_contig_strides_array') cdef Py_ssize_t fill_contig_strides_array( Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, int ndim, char order) nogil: """ Fill the strides array for a slice with C or F contiguous strides. 
This is like PyBuffer_FillContiguousStrides, but compatible with py < 2.6 """ cdef int idx if order == 'F': for idx in range(ndim): strides[idx] = stride stride = stride * shape[idx] else: for idx in range(ndim - 1, -1, -1): strides[idx] = stride stride = stride * shape[idx] return stride @cname('__pyx_memoryview_copy_data_to_temp') cdef void *copy_data_to_temp({{memviewslice_name}} *src, {{memviewslice_name}} *tmpslice, char order, int ndim) nogil except NULL: """ Copy a direct slice to temporary contiguous memory. The caller should free the result when done. """ cdef int i cdef void *result cdef size_t itemsize = src.memview.view.itemsize cdef size_t size = slice_get_size(src, ndim) result = malloc(size) if not result: _err(MemoryError, NULL) # tmpslice[0] = src tmpslice.data = result tmpslice.memview = src.memview for i in range(ndim): tmpslice.shape[i] = src.shape[i] tmpslice.suboffsets[i] = -1 fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, ndim, order) # We need to broadcast strides again for i in range(ndim): if tmpslice.shape[i] == 1: tmpslice.strides[i] = 0 if slice_is_contig(src[0], order, ndim): memcpy(result, src.data, size) else: copy_strided_to_strided(src, tmpslice, ndim, itemsize) return result # Use 'with gil' functions and avoid 'with gil' blocks, as the code within the blocks # has temporaries that need the GIL to clean up @cname('__pyx_memoryview_err_extents') cdef int _err_extents(int i, Py_ssize_t extent1, Py_ssize_t extent2) except -1 with gil: raise ValueError("got differing extents in dimension %d (got %d and %d)" % (i, extent1, extent2)) @cname('__pyx_memoryview_err_dim') cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: raise error(msg.decode('ascii') % dim) @cname('__pyx_memoryview_err') cdef int _err(object error, char *msg) except -1 with gil: if msg != NULL: raise error(msg.decode('ascii')) else: raise error @cname('__pyx_memoryview_copy_contents') cdef int memoryview_copy_contents({{memviewslice_name}} src, {{memviewslice_name}} dst, int src_ndim, int dst_ndim, bint dtype_is_object) nogil except -1: """ Copy memory from slice src to slice dst. Check for overlapping memory and verify the shapes. """ cdef void *tmpdata = NULL cdef size_t itemsize = src.memview.view.itemsize cdef int i cdef char order = get_best_order(&src, src_ndim) cdef bint broadcasting = False cdef bint direct_copy = False cdef {{memviewslice_name}} tmp if src_ndim < dst_ndim: broadcast_leading(&src, src_ndim, dst_ndim) elif dst_ndim < src_ndim: broadcast_leading(&dst, dst_ndim, src_ndim) cdef int ndim = max(src_ndim, dst_ndim) for i in range(ndim): if src.shape[i] != dst.shape[i]: if src.shape[i] == 1: broadcasting = True src.strides[i] = 0 else: _err_extents(i, dst.shape[i], src.shape[i]) if src.suboffsets[i] >= 0: _err_dim(ValueError, "Dimension %d is not direct", i) if slices_overlap(&src, &dst, ndim, itemsize): # slices overlap, copy to temp, copy temp to dst if not slice_is_contig(src, order, ndim): order = get_best_order(&dst, ndim) tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) src = tmp if not broadcasting: # See if both slices have equal contiguity, in that case perform a # direct copy. This only works when we are not broadcasting. 
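            # --- Added note (not part of the original source) ----------------
            # e.g. copying a C-contiguous (2, 3) slice into another
            # C-contiguous (2, 3) slice degenerates to a single memcpy of
            # 2 * 3 * itemsize bytes; any mismatch in order, or a broadcast
            # dimension (stride 0), falls through to the strided element-wise
            # copy further below.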
if slice_is_contig(src, 'C', ndim): direct_copy = slice_is_contig(dst, 'C', ndim) elif slice_is_contig(src, 'F', ndim): direct_copy = slice_is_contig(dst, 'F', ndim) if direct_copy: # Contiguous slices with same order refcount_copying(&dst, dtype_is_object, ndim, False) memcpy(dst.data, src.data, slice_get_size(&src, ndim)) refcount_copying(&dst, dtype_is_object, ndim, True) free(tmpdata) return 0 if order == 'F' == get_best_order(&dst, ndim): # see if both slices have Fortran order, transpose them to match our # C-style indexing order transpose_memslice(&src) transpose_memslice(&dst) refcount_copying(&dst, dtype_is_object, ndim, False) copy_strided_to_strided(&src, &dst, ndim, itemsize) refcount_copying(&dst, dtype_is_object, ndim, True) free(tmpdata) return 0 @cname('__pyx_memoryview_broadcast_leading') cdef void broadcast_leading({{memviewslice_name}} *mslice, int ndim, int ndim_other) nogil: cdef int i cdef int offset = ndim_other - ndim for i in range(ndim - 1, -1, -1): mslice.shape[i + offset] = mslice.shape[i] mslice.strides[i + offset] = mslice.strides[i] mslice.suboffsets[i + offset] = mslice.suboffsets[i] for i in range(offset): mslice.shape[i] = 1 mslice.strides[i] = mslice.strides[0] mslice.suboffsets[i] = -1 # ### Take care of refcounting the objects in slices. Do this separately from any copying, ### to minimize acquiring the GIL # @cname('__pyx_memoryview_refcount_copying') cdef void refcount_copying({{memviewslice_name}} *dst, bint dtype_is_object, int ndim, bint inc) nogil: # incref or decref the objects in the destination slice if the dtype is # object if dtype_is_object: refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, Py_ssize_t *strides, int ndim, bint inc) with gil: refcount_objects_in_slice(data, shape, strides, ndim, inc) @cname('__pyx_memoryview_refcount_objects_in_slice') cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, Py_ssize_t *strides, int ndim, bint inc): cdef Py_ssize_t i for i in range(shape[0]): if ndim == 1: if inc: Py_INCREF(( data)[0]) else: Py_DECREF(( data)[0]) else: refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) data += strides[0] # ### Scalar to slice assignment # @cname('__pyx_memoryview_slice_assign_scalar') cdef void slice_assign_scalar({{memviewslice_name}} *dst, int ndim, size_t itemsize, void *item, bint dtype_is_object) nogil: refcount_copying(dst, dtype_is_object, ndim, False) _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) refcount_copying(dst, dtype_is_object, ndim, True) @cname('__pyx_memoryview__slice_assign_scalar') cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, Py_ssize_t *strides, int ndim, size_t itemsize, void *item) nogil: cdef Py_ssize_t i cdef Py_ssize_t stride = strides[0] cdef Py_ssize_t extent = shape[0] if ndim == 1: for i in range(extent): memcpy(data, item, itemsize) data += stride else: for i in range(extent): _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) data += stride ############### BufferFormatFromTypeInfo ############### cdef extern from *: ctypedef struct __Pyx_StructField cdef enum: __PYX_BUF_FLAGS_PACKED_STRUCT __PYX_BUF_FLAGS_INTEGER_COMPLEX ctypedef struct __Pyx_TypeInfo: char* name __Pyx_StructField* fields size_t size size_t arraysize[8] int ndim char typegroup char is_unsigned int flags ctypedef struct __Pyx_StructField: 
__Pyx_TypeInfo* type char* name size_t offset ctypedef struct __Pyx_BufFmt_StackElem: __Pyx_StructField* field size_t parent_offset #ctypedef struct __Pyx_BufFmt_Context: # __Pyx_StructField root __Pyx_BufFmt_StackElem* head struct __pyx_typeinfo_string: char string[3] __pyx_typeinfo_string __Pyx_TypeInfoToFormat(__Pyx_TypeInfo *) @cname('__pyx_format_from_typeinfo') cdef bytes format_from_typeinfo(__Pyx_TypeInfo *type): cdef __Pyx_StructField *field cdef __pyx_typeinfo_string fmt cdef bytes part, result if type.typegroup == 'S': assert type.fields != NULL and type.fields.type != NULL if type.flags & __PYX_BUF_FLAGS_PACKED_STRUCT: alignment = b'^' else: alignment = b'' parts = [b"T{"] field = type.fields while field.type: part = format_from_typeinfo(field.type) parts.append(part + b':' + field.name + b':') field += 1 result = alignment.join(parts) + b'}' else: fmt = __Pyx_TypeInfoToFormat(type) if type.arraysize[0]: extents = [unicode(type.arraysize[i]) for i in range(type.ndim)] result = (u"(%s)" % u','.join(extents)).encode('ascii') + fmt.string else: result = fmt.string return result ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/TestCyUtilityLoader.pyx0000644000175000017500000000023000000000000031633 0ustar00vincentvincent00000000000000########## TestCyUtilityLoader ########## #@requires: OtherUtility test {{cy_loader}} impl ########## OtherUtility ########## req {{cy_loader}} impl ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/TestCythonScope.pyx0000644000175000017500000000307300000000000031013 0ustar00vincentvincent00000000000000########## TestClass ########## # These utilities are for testing purposes cdef extern from *: cdef object __pyx_test_dep(object) @cname('__pyx_TestClass') cdef class TestClass(object): cdef public int value def __init__(self, int value): self.value = value def __str__(self): return 'TestClass(%d)' % self.value cdef cdef_method(self, int value): print 'Hello from cdef_method', value cpdef cpdef_method(self, int value): print 'Hello from cpdef_method', value def def_method(self, int value): print 'Hello from def_method', value @cname('cdef_cname') cdef cdef_cname_method(self, int value): print "Hello from cdef_cname_method", value @cname('cpdef_cname') cpdef cpdef_cname_method(self, int value): print "Hello from cpdef_cname_method", value @cname('def_cname') def def_cname_method(self, int value): print "Hello from def_cname_method", value @cname('__pyx_test_call_other_cy_util') cdef test_call(obj): print 'test_call' __pyx_test_dep(obj) @cname('__pyx_TestClass_New') cdef _testclass_new(int value): return TestClass(value) ########### TestDep ########## @cname('__pyx_test_dep') cdef test_dep(obj): print 'test_dep', obj ########## TestScope ########## @cname('__pyx_testscope') cdef object _testscope(int value): return "hello from cython scope, value=%d" % value ########## View.TestScope ########## @cname('__pyx_view_testscope') cdef object _testscope(int value): return "hello from cython.view scope, value=%d" % value ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utility/__init__.py0000644000175000017500000000220700000000000027262 0ustar00vincentvincent00000000000000 def pylong_join(count, digits_ptr='digits', 
join_type='unsigned long'): """ Generate an unrolled shift-then-or loop over the first 'count' digits. Assumes that they fit into 'join_type'. (((d[2] << n) | d[1]) << n) | d[0] """ return ('(' * (count * 2) + ' | '.join( "(%s)%s[%d])%s)" % (join_type, digits_ptr, _i, " << PyLong_SHIFT" if _i else '') for _i in range(count-1, -1, -1))) # although it could potentially make use of data independence, # this implementation is a bit slower than the simpler one above def _pylong_join(count, digits_ptr='digits', join_type='unsigned long'): """ Generate an or-ed series of shifts for the first 'count' digits. Assumes that they fit into 'join_type'. (d[2] << 2*n) | (d[1] << 1*n) | d[0] """ def shift(n): # avoid compiler warnings for overly large shifts that will be discarded anyway return " << (%d * PyLong_SHIFT < 8 * sizeof(%s) ? %d * PyLong_SHIFT : 0)" % (n, join_type, n) if n else '' return '(%s)' % ' | '.join( "(((%s)%s[%d])%s)" % (join_type, digits_ptr, i, shift(i)) for i in range(count-1, -1, -1)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/Utils.py0000644000175000017500000003266300000000000025171 0ustar00vincentvincent00000000000000# # Cython -- Things that don't belong # anywhere else in particular # from __future__ import absolute_import try: from __builtin__ import basestring except ImportError: basestring = str try: FileNotFoundError except NameError: FileNotFoundError = OSError import os import sys import re import io import codecs import shutil from contextlib import contextmanager modification_time = os.path.getmtime _function_caches = [] def clear_function_caches(): for cache in _function_caches: cache.clear() def cached_function(f): cache = {} _function_caches.append(cache) uncomputed = object() def wrapper(*args): res = cache.get(args, uncomputed) if res is uncomputed: res = cache[args] = f(*args) return res wrapper.uncached = f return wrapper def cached_method(f): cache_name = '__%s_cache' % f.__name__ def wrapper(self, *args): cache = getattr(self, cache_name, None) if cache is None: cache = {} setattr(self, cache_name, cache) if args in cache: return cache[args] res = cache[args] = f(self, *args) return res return wrapper def replace_suffix(path, newsuf): base, _ = os.path.splitext(path) return base + newsuf def open_new_file(path): if os.path.exists(path): # Make sure to create a new file here so we can # safely hard link the output files. os.unlink(path) # we use the ISO-8859-1 encoding here because we only write pure # ASCII strings or (e.g. for file names) byte encoded strings as # Unicode, so we need a direct mapping from the first 256 Unicode # characters to a byte sequence, which ISO-8859-1 provides # note: can't use io.open() in Py2 as we may be writing str objects return codecs.open(path, "w", encoding="ISO-8859-1") def castrate_file(path, st): # Remove junk contents from an output file after a # failed compilation. # Also sets access and modification times back to # those specified by st (a stat struct). 
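    # --- Added note (not part of the original source) ------------------------
    # The "st_mtime - 1" in the os.utime() call below appears deliberate: the
    # truncated file is stamped one second older than the stat record so that
    # file_newer_than() (defined further down in this module) treats it as
    # stale and the next build regenerates it rather than trusting the junk
    # contents.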
try: f = open_new_file(path) except EnvironmentError: pass else: f.write( "#error Do not use this file, it is the result of a failed Cython compilation.\n") f.close() if st: os.utime(path, (st.st_atime, st.st_mtime-1)) def file_newer_than(path, time): ftime = modification_time(path) return ftime > time def safe_makedirs(path): try: os.makedirs(path) except OSError: if not os.path.isdir(path): raise def copy_file_to_dir_if_newer(sourcefile, destdir): """ Copy file sourcefile to directory destdir (creating it if needed), preserving metadata. If the destination file exists and is not older than the source file, the copying is skipped. """ destfile = os.path.join(destdir, os.path.basename(sourcefile)) try: desttime = modification_time(destfile) except OSError: # New file does not exist, destdir may or may not exist safe_makedirs(destdir) else: # New file already exists if not file_newer_than(sourcefile, desttime): return shutil.copy2(sourcefile, destfile) @cached_function def find_root_package_dir(file_path): dir = os.path.dirname(file_path) if file_path == dir: return dir elif is_package_dir(dir): return find_root_package_dir(dir) else: return dir @cached_function def check_package_dir(dir, package_names): for dirname in package_names: dir = os.path.join(dir, dirname) if not is_package_dir(dir): return None return dir @cached_function def is_package_dir(dir_path): for filename in ("__init__.py", "__init__.pyc", "__init__.pyx", "__init__.pxd"): path = os.path.join(dir_path, filename) if path_exists(path): return 1 @cached_function def path_exists(path): # try on the filesystem first if os.path.exists(path): return True # figure out if a PEP 302 loader is around try: loader = __loader__ # XXX the code below assumes a 'zipimport.zipimporter' instance # XXX should be easy to generalize, but too lazy right now to write it archive_path = getattr(loader, 'archive', None) if archive_path: normpath = os.path.normpath(path) if normpath.startswith(archive_path): arcname = normpath[len(archive_path)+1:] try: loader.get_data(arcname) return True except IOError: return False except NameError: pass return False # file name encodings def decode_filename(filename): if isinstance(filename, bytes): try: filename_encoding = sys.getfilesystemencoding() if filename_encoding is None: filename_encoding = sys.getdefaultencoding() filename = filename.decode(filename_encoding) except UnicodeDecodeError: pass return filename # support for source file encoding detection _match_file_encoding = re.compile(br"(\w*coding)[:=]\s*([-\w.]+)").search def detect_opened_file_encoding(f): # PEPs 263 and 3120 # Most of the time the first two lines fall in the first couple of hundred chars, # and this bulk read/split is much faster. lines = () start = b'' while len(lines) < 3: data = f.read(500) start += data lines = start.split(b"\n") if not data: break m = _match_file_encoding(lines[0]) if m and m.group(1) != b'c_string_encoding': return m.group(2).decode('iso8859-1') elif len(lines) > 1: m = _match_file_encoding(lines[1]) if m: return m.group(2).decode('iso8859-1') return "UTF-8" def skip_bom(f): """ Read past a BOM at the beginning of a source file. This could be added to the scanner, but it's *substantially* easier to keep it at this level. """ if f.read(1) != u'\uFEFF': f.seek(0) def open_source_file(source_filename, encoding=None, error_handling=None): stream = None try: if encoding is None: # Most of the time the encoding is not specified, so try hard to open the file only once. 
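        # --- Added note (not part of the original source) --------------------
        # The dance below implements the PEP 263/3120 source-encoding
        # detection referenced above: open the file once in binary mode, let
        # detect_opened_file_encoding() sniff a "coding:" declaration from the
        # first lines (defaulting to UTF-8), rewind the raw stream, and wrap
        # it in a TextIOWrapper with the detected encoding, so no second
        # open() is needed.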
f = io.open(source_filename, 'rb') encoding = detect_opened_file_encoding(f) f.seek(0) stream = io.TextIOWrapper(f, encoding=encoding, errors=error_handling) else: stream = io.open(source_filename, encoding=encoding, errors=error_handling) except OSError: if os.path.exists(source_filename): raise # File is there, but something went wrong reading from it. # Allow source files to be in zip files etc. try: loader = __loader__ if source_filename.startswith(loader.archive): stream = open_source_from_loader( loader, source_filename, encoding, error_handling) except (NameError, AttributeError): pass if stream is None: raise FileNotFoundError(source_filename) skip_bom(stream) return stream def open_source_from_loader(loader, source_filename, encoding=None, error_handling=None): nrmpath = os.path.normpath(source_filename) arcname = nrmpath[len(loader.archive)+1:] data = loader.get_data(arcname) return io.TextIOWrapper(io.BytesIO(data), encoding=encoding, errors=error_handling) def str_to_number(value): # note: this expects a string as input that was accepted by the # parser already, with an optional "-" sign in front is_neg = False if value[:1] == '-': is_neg = True value = value[1:] if len(value) < 2: value = int(value, 0) elif value[0] == '0': literal_type = value[1] # 0'o' - 0'b' - 0'x' if literal_type in 'xX': # hex notation ('0x1AF') value = int(value[2:], 16) elif literal_type in 'oO': # Py3 octal notation ('0o136') value = int(value[2:], 8) elif literal_type in 'bB': # Py3 binary notation ('0b101') value = int(value[2:], 2) else: # Py2 octal notation ('0136') value = int(value, 8) else: value = int(value, 0) return -value if is_neg else value def long_literal(value): if isinstance(value, basestring): value = str_to_number(value) return not -2**31 <= value < 2**31 @cached_function def get_cython_cache_dir(): r""" Return the base directory containing Cython's caches. Priority: 1. CYTHON_CACHE_DIR 2. (OS X): ~/Library/Caches/Cython (posix not OS X): XDG_CACHE_HOME/cython if XDG_CACHE_HOME defined 3. ~/.cython """ if 'CYTHON_CACHE_DIR' in os.environ: return os.environ['CYTHON_CACHE_DIR'] parent = None if os.name == 'posix': if sys.platform == 'darwin': parent = os.path.expanduser('~/Library/Caches') else: # this could fallback on ~/.cache parent = os.environ.get('XDG_CACHE_HOME') if parent and os.path.isdir(parent): return os.path.join(parent, 'cython') # last fallback: ~/.cython return os.path.expanduser(os.path.join('~', '.cython')) @contextmanager def captured_fd(stream=2, encoding=None): pipe_in = t = None orig_stream = os.dup(stream) # keep copy of original stream try: pipe_in, pipe_out = os.pipe() os.dup2(pipe_out, stream) # replace stream by copy of pipe try: os.close(pipe_out) # close original pipe-out stream data = [] def copy(): try: while True: d = os.read(pipe_in, 1000) if d: data.append(d) else: break finally: os.close(pipe_in) def get_output(): output = b''.join(data) if encoding: output = output.decode(encoding) return output from threading import Thread t = Thread(target=copy) t.daemon = True # just in case t.start() yield get_output finally: os.dup2(orig_stream, stream) # restore original stream if t is not None: t.join() finally: os.close(orig_stream) def print_bytes(s, header_text=None, end=b'\n', file=sys.stdout, flush=True): if header_text: file.write(header_text) # note: text! 
=> file.write() instead of out.write() file.flush() try: out = file.buffer # Py3 except AttributeError: out = file # Py2 out.write(s) if end: out.write(end) if flush: out.flush() class LazyStr: def __init__(self, callback): self.callback = callback def __str__(self): return self.callback() def __repr__(self): return self.callback() def __add__(self, right): return self.callback() + right def __radd__(self, left): return left + self.callback() class OrderedSet(object): def __init__(self, elements=()): self._list = [] self._set = set() self.update(elements) def __iter__(self): return iter(self._list) def update(self, elements): for e in elements: self.add(e) def add(self, e): if e not in self._set: self._list.append(e) self._set.add(e) # Class decorator that adds a metaclass and recreates the class with it. # Copied from 'six'. def add_metaclass(metaclass): """Class decorator for creating a class with a metaclass.""" def wrapper(cls): orig_vars = cls.__dict__.copy() slots = orig_vars.get('__slots__') if slots is not None: if isinstance(slots, str): slots = [slots] for slots_var in slots: orig_vars.pop(slots_var) orig_vars.pop('__dict__', None) orig_vars.pop('__weakref__', None) return metaclass(cls.__name__, cls.__bases__, orig_vars) return wrapper def raise_error_if_module_name_forbidden(full_module_name): #it is bad idea to call the pyx-file cython.pyx, so fail early if full_module_name == 'cython' or full_module_name.startswith('cython.'): raise ValueError('cython is a special module, cannot be used as a module name') def build_hex_version(version_string): """ Parse and translate '4.3a1' into the readable hex representation '0x040300A1' (like PY_HEX_VERSION). """ # First, parse '4.12a1' into [4, 12, 0, 0xA01]. digits = [] release_status = 0xF0 for digit in re.split('([.abrc]+)', version_string): if digit in ('a', 'b', 'rc'): release_status = {'a': 0xA0, 'b': 0xB0, 'rc': 0xC0}[digit] digits = (digits + [0, 0])[:3] # 1.2a1 -> 1.2.0a1 elif digit != '.': digits.append(int(digit)) digits = (digits + [0] * 3)[:4] digits[3] += release_status # Then, build a single hex value, two hex digits per version part. hexversion = 0 for digit in digits: hexversion = (hexversion << 8) + digit return '0x%08X' % hexversion ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431447.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/Cython/__init__.py0000644000175000017500000000054600000000000025623 0ustar00vincentvincent00000000000000from __future__ import absolute_import from .Shadow import __version__ # Void cython.* directives (for case insensitive operating systems). 
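# (On case-insensitive filesystems "import cython" can resolve to this
# package; the star-import below re-exports Shadow's pure-Python no-op
# stand-ins for the cython.* names so that such code keeps working.)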
from .Shadow import * def load_ipython_extension(ip): """Load the extension in IPython.""" from .Build.IpythonMagic import CythonMagics # pylint: disable=cyclic-import ip.register_magics(CythonMagics) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9233274 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/0000755000175000017500000000000000000000000023612 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/__init__.py0000644000175000017500000000013100000000000025716 0ustar00vincentvincent00000000000000from .pari_instance import Pari from .handle_error import PariError from .gen import Gen ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/auto_paridecl.pxd0000644000175000017500000007007300000000000027151 0ustar00vincentvincent00000000000000# This file is auto-generated by autogen/generator.py from .types cimport * cdef extern from *: GEN mpcatalan(long) GEN gtocol0(GEN, long) GEN gtocolrev0(GEN, long) GEN mpeuler(long) GEN gen_I() GEN gtolist(GEN) GEN gtomap(GEN) GEN gtomat(GEN) GEN gmodulo(GEN, GEN) GEN mppi(long) GEN gtopoly(GEN, long) GEN gtopolyrev(GEN, long) GEN Qfb0(GEN, GEN, GEN, GEN, long) GEN Ser0(GEN, long, GEN, long) GEN gtoset(GEN) GEN pari_strchr(GEN) GEN gtovec0(GEN, long) GEN gtovecrev0(GEN, long) GEN gtovecsmall0(GEN, long) GEN gabs(GEN, long) GEN gacos(GEN, long) GEN gacosh(GEN, long) void addhelp(char *, char *) GEN addprimes(GEN) GEN agm(GEN, GEN, long) GEN airy(GEN, long) GEN algadd(GEN, GEN, GEN) GEN algalgtobasis(GEN, GEN) GEN algaut(GEN) GEN algb(GEN) GEN algbasis(GEN) GEN algbasistoalg(GEN, GEN) GEN algcenter(GEN) GEN alg_centralproj(GEN, GEN, long) GEN algchar(GEN) GEN algcharpoly(GEN, GEN, long, long) long algdegree(GEN) GEN algdep0(GEN, long, long) long algdim(GEN, long) GEN algdisc(GEN) GEN algdivl(GEN, GEN, GEN) GEN algdivr(GEN, GEN, GEN) GEN alggroup(GEN, GEN) GEN alghasse(GEN, GEN) GEN alghassef(GEN) GEN alghassei(GEN) long algindex(GEN, GEN) GEN alginit(GEN, GEN, long, long) GEN alginv(GEN, GEN) GEN alginvbasis(GEN) int algisassociative(GEN, GEN) int algiscommutative(GEN) int algisdivision(GEN, GEN) int algisramified(GEN, GEN) int algissemisimple(GEN) int algissimple(GEN, long) int algissplit(GEN, GEN) GEN alglatelement(GEN, GEN, GEN) GEN alglathnf(GEN, GEN, GEN) GEN alglatindex(GEN, GEN, GEN) GEN alglatlefttransporter(GEN, GEN, GEN) GEN alglatmul(GEN, GEN, GEN) GEN alglatrighttransporter(GEN, GEN, GEN) GEN algmakeintegral(GEN, long) GEN algmul(GEN, GEN, GEN) GEN algmultable(GEN) GEN algneg(GEN, GEN) GEN algnorm(GEN, GEN, long) GEN algpoleval(GEN, GEN, GEN) GEN algpow(GEN, GEN, GEN) GEN algprimesubalg(GEN) GEN alg_quotient(GEN, GEN, long) GEN algradical(GEN) GEN algramifiedplaces(GEN) GEN algrandom(GEN, GEN) GEN algrelmultable(GEN) GEN algsimpledec(GEN, long) GEN algsplit(GEN, long) GEN algsplittingdata(GEN) GEN algsplittingfield(GEN) GEN algsqr(GEN, GEN) GEN algsub(GEN, GEN, GEN) GEN algsubalg(GEN, GEN) GEN algtableinit(GEN, GEN) GEN algtensor(GEN, GEN, long) GEN algtomatrix(GEN, GEN, long) GEN algtrace(GEN, GEN, long) long algtype(GEN) GEN apply0(GEN, GEN) GEN garg(GEN, long) GEN arity0(GEN) GEN gasin(GEN, long) GEN gasinh(GEN, long) GEN asympnum0(GEN, GEN, long) GEN gatan(GEN, long) GEN gatanh(GEN, long) GEN bernfrac(long) GEN bernpol(long, 
long) GEN bernreal(long, long) GEN bernvec(long) GEN hbessel1(GEN, GEN, long) GEN hbessel2(GEN, GEN, long) GEN ibessel(GEN, GEN, long) GEN jbessel(GEN, GEN, long) GEN jbesselh(GEN, GEN, long) GEN kbessel(GEN, GEN, long) GEN nbessel(GEN, GEN, long) GEN nbessel(GEN, GEN, long) GEN bestappr(GEN, GEN) GEN bestapprPade(GEN, long) GEN bestapprnf(GEN, GEN, GEN, long) GEN gcdext0(GEN, GEN) GEN polresultantext0(GEN, GEN, long) long bigomega(GEN) GEN binaire(GEN) GEN binomial0(GEN, GEN) GEN gbitand(GEN, GEN) GEN gbitneg(GEN, long) GEN gbitnegimply(GEN, GEN) GEN gbitor(GEN, GEN) GEN bitprecision00(GEN, GEN) GEN gbittest(GEN, long) GEN gbitxor(GEN, GEN) long bnfcertify0(GEN, long) GEN bnfcompress(GEN) GEN decodemodule(GEN, GEN) GEN bnfinit0(GEN, long, GEN, long) GEN bnfisintnorm(GEN, GEN) GEN bnfisnorm(GEN, GEN, long) GEN bnfisprincipal0(GEN, GEN, long) GEN bnfissunit(GEN, GEN, GEN) GEN bnfisunit(GEN, GEN) GEN bnflog(GEN, GEN) GEN bnflogdegree(GEN, GEN, GEN) GEN bnflogef(GEN, GEN) GEN bnfnarrow(GEN) GEN signunits(GEN) GEN bnfsunit(GEN, GEN, long) GEN bnrL1(GEN, GEN, long, long) GEN bnrchar(GEN, GEN, GEN) GEN bnrclassfield(GEN, GEN, long, long) GEN bnrclassno0(GEN, GEN, GEN) GEN bnrclassnolist(GEN, GEN) GEN bnrconductor0(GEN, GEN, GEN, long) GEN bnrconductorofchar(GEN, GEN) GEN bnrdisc0(GEN, GEN, GEN, long) GEN bnrdisclist0(GEN, GEN, GEN) GEN bnrgaloisapply(GEN, GEN, GEN) GEN bnrgaloismatrix(GEN, GEN) GEN bnrinit0(GEN, GEN, long) long bnrisconductor0(GEN, GEN, GEN) long bnrisgalois(GEN, GEN, GEN) GEN bnrisprincipal(GEN, GEN, long) GEN bnrrootnumber(GEN, GEN, long, long) GEN bnrstark(GEN, GEN, long) GEN call0(GEN, GEN) GEN gceil(GEN) GEN centerlift0(GEN, long) GEN characteristic(GEN) GEN charconj0(GEN, GEN) GEN chardiv0(GEN, GEN, GEN) GEN chareval(GEN, GEN, GEN, GEN) GEN chargalois(GEN, GEN) GEN charker0(GEN, GEN) GEN charmul0(GEN, GEN, GEN) GEN charorder0(GEN, GEN) GEN charpoly0(GEN, long, long) GEN charpow0(GEN, GEN, GEN) GEN chinese(GEN, GEN) int cmp_universal(GEN, GEN) GEN compo(GEN, long) GEN gconcat(GEN, GEN) GEN gconj(GEN) GEN conjvec(GEN, long) GEN content0(GEN, GEN) GEN contfrac0(GEN, GEN, long) GEN contfraceval(GEN, GEN, long) GEN contfracinit(GEN, long) GEN contfracpnqn(GEN, long) GEN core0(GEN, long) GEN coredisc0(GEN, long) GEN gcos(GEN, long) GEN gcosh(GEN, long) GEN gcotan(GEN, long) GEN gcotanh(GEN, long) GEN default0(char *, char *) GEN denominator(GEN, GEN) GEN deriv(GEN, long) GEN derivn(GEN, long, long) GEN diffop0(GEN, GEN, GEN, long) GEN digits(GEN, GEN) GEN dilog(GEN, long) GEN dirdiv(GEN, GEN) GEN dirmul(GEN, GEN) GEN dirpowers(long, GEN, long) GEN dirzetak(GEN, GEN) GEN divisors0(GEN, long) GEN divisorslenstra(GEN, GEN, GEN) GEN divrem(GEN, GEN, long) GEN veceint1(GEN, GEN, long) GEN ellE(GEN, long) GEN ellK(GEN, long) GEN ellL1_bitprec(GEN, long, long) GEN elladd(GEN, GEN, GEN) GEN akell(GEN, GEN) GEN ellan(GEN, long) GEN ellanalyticrank_bitprec(GEN, GEN, long) GEN ellap(GEN, GEN) GEN bilhell(GEN, GEN, GEN, long) GEN ellbsd(GEN, long) GEN ellcard(GEN, GEN) GEN ellchangecurve(GEN, GEN) GEN ellchangepoint(GEN, GEN) GEN ellchangepointinv(GEN, GEN) GEN ellconvertname(GEN) GEN elldivpol(GEN, long, long) GEN elleisnum(GEN, long, long, long) GEN elleta(GEN, long) GEN ellformaldifferential(GEN, long, long) GEN ellformalexp(GEN, long, long) GEN ellformallog(GEN, long, long) GEN ellformalpoint(GEN, long, long) GEN ellformalw(GEN, long, long) GEN ellfromeqn(GEN) GEN ellfromj(GEN) GEN ellgenerators(GEN) GEN ellglobalred(GEN) GEN ellgroup0(GEN, GEN, long) GEN ellheegner(GEN) GEN 
ellheight0(GEN, GEN, GEN, long) GEN ellheightmatrix(GEN, GEN, long) GEN ellidentify(GEN) GEN ellinit(GEN, GEN, long) GEN ellisogeny(GEN, GEN, long, long, long) GEN ellisogenyapply(GEN, GEN) GEN ellisomat(GEN, long, long) GEN ellisoncurve(GEN, GEN) GEN ellisotree(GEN) int ellissupersingular(GEN, GEN) GEN jell(GEN, long) GEN elllocalred(GEN, GEN) GEN elllog(GEN, GEN, GEN, GEN) GEN elllseries(GEN, GEN, GEN, long) GEN ellminimaldisc(GEN) GEN ellminimaltwist0(GEN, long) GEN ellmoddegree(GEN) GEN ellmodulareqn(long, long, long) GEN ellmul(GEN, GEN, GEN) GEN ellneg(GEN, GEN) GEN ellnonsingularmultiple(GEN, GEN) GEN ellorder(GEN, GEN, GEN) GEN ellordinate(GEN, GEN, long) GEN ellpadicL(GEN, GEN, long, GEN, long, GEN) GEN ellpadicbsd(GEN, GEN, long, GEN) GEN ellpadicfrobenius(GEN, long, long) GEN ellpadicheight0(GEN, GEN, long, GEN, GEN) GEN ellpadicheightmatrix(GEN, GEN, long, GEN) GEN ellpadiclog(GEN, GEN, long, GEN) GEN ellpadicregulator(GEN, GEN, long, GEN) GEN ellpadics2(GEN, GEN, long) GEN ellperiods(GEN, long, long) GEN zell(GEN, GEN, long) GEN ellmul(GEN, GEN, GEN) GEN ellratpoints(GEN, GEN, long) long ellrootno(GEN, GEN) GEN ellsea(GEN, long) GEN ellsearch(GEN) GEN ellsigma(GEN, GEN, long, long) GEN ellsub(GEN, GEN, GEN) GEN elltamagawa(GEN) GEN elltaniyama(GEN, long) GEN elltatepairing(GEN, GEN, GEN, GEN) GEN elltors(GEN) GEN elltwist(GEN, GEN) GEN ellweilpairing(GEN, GEN, GEN, GEN) GEN ellwp0(GEN, GEN, long, long) GEN ellxn(GEN, long, long) GEN ellzeta(GEN, GEN, long) GEN pointell(GEN, GEN, long) GEN gerfc(GEN, long) GEN errname(GEN) GEN eta0(GEN, long, long) GEN eulerphi(GEN) GEN gexp(GEN, long) GEN gexpm1(GEN, long) GEN gpexponent(GEN) void exportall() GEN gpextern(char *) GEN externstr(char *) GEN factor0(GEN, GEN) GEN factorback2(GEN, GEN) GEN factmod(GEN, GEN) GEN factorff(GEN, GEN, GEN) GEN mpfactr(long, long) GEN factorint(GEN, long) GEN factormod0(GEN, GEN, long) GEN factormodDDF(GEN, GEN) GEN factormodSQF(GEN, GEN) GEN polfnf(GEN, GEN) GEN factorpadic(GEN, GEN, long) GEN ffcompomap(GEN, GEN) GEN ffembed(GEN, GEN) GEN ffextend(GEN, GEN, long) GEN fffrobenius(GEN, long) GEN ffgen(GEN, long) GEN ffinit(GEN, long, long) GEN ffinvmap(GEN) GEN fflog(GEN, GEN, GEN) GEN ffmap(GEN, GEN) GEN ffmaprel(GEN, GEN) GEN ffnbirred0(GEN, long, long) GEN fforder(GEN, GEN) GEN fibo(long) void gp_fileclose(long) long gp_fileextern(char *) void gp_fileflush0(GEN) long gp_fileopen(char *, char *) GEN gp_fileread(long) GEN gp_filereadstr(long) void gp_filewrite(long, char *) void gp_filewrite1(long, char *) GEN gfloor(GEN) GEN fold0(GEN, GEN) GEN gfrac(GEN) GEN fromdigits(GEN, GEN) GEN galoischardet(GEN, GEN, long) GEN galoischarpoly(GEN, GEN, long) GEN galoischartable(GEN) GEN galoisconjclasses(GEN) GEN galoisexport(GEN, long) GEN galoisfixedfield(GEN, GEN, long, long) GEN galoisgetgroup(long, long) GEN galoisgetname(long, long) GEN galoisgetpol(long, long, long) GEN galoisidentify(GEN) GEN galoisinit(GEN, GEN) GEN galoisisabelian(GEN, long) long galoisisnormal(GEN, GEN) GEN galoispermtopol(GEN, GEN) GEN galoissubcyclo(GEN, GEN, long, long) GEN galoissubfields(GEN, long, long) GEN galoissubgroups(GEN) GEN ggamma(GEN, long) GEN ggammah(GEN, long) GEN gammamellininv(GEN, GEN, long, long) GEN gammamellininvasymp(GEN, long, long) GEN gammamellininvinit(GEN, long, long) GEN ggcd0(GEN, GEN) GEN gcdext0(GEN, GEN) GEN genus2red(GEN, GEN) long getabstime() GEN getcache() GEN gp_getenv(char *) GEN getheap() long getlocalbitprec(long) long getlocalprec(long) GEN getrand() long getstack() long gettime() GEN 
getwalltime() long hammingweight(GEN) long hilbert(GEN, GEN, GEN) GEN hyperellcharpoly(GEN) GEN hyperellpadicfrobenius0(GEN, GEN, long) GEN hyperellratpoints(GEN, GEN, long) GEN hypergeom(GEN, GEN, GEN, long) GEN hyperu(GEN, GEN, GEN, long) GEN idealadd(GEN, GEN, GEN) GEN idealaddtoone0(GEN, GEN, GEN) GEN idealappr0(GEN, GEN, long) GEN idealchinese(GEN, GEN, GEN) GEN idealcoprime(GEN, GEN, GEN) GEN idealdiv0(GEN, GEN, GEN, long) GEN idealdown(GEN, GEN) GEN gpidealfactor(GEN, GEN, GEN) GEN idealfactorback(GEN, GEN, GEN, long) GEN idealfrobenius(GEN, GEN, GEN) GEN idealhnf0(GEN, GEN, GEN) GEN idealintersect(GEN, GEN, GEN) GEN idealinv(GEN, GEN) GEN idealismaximal(GEN, GEN) GEN ideallist0(GEN, long, long) GEN ideallistarch(GEN, GEN, GEN) GEN ideallog(GEN, GEN, GEN) GEN idealmin(GEN, GEN, GEN) GEN idealmul0(GEN, GEN, GEN, long) GEN idealnorm(GEN, GEN) GEN idealnumden(GEN, GEN) GEN idealpow0(GEN, GEN, GEN, long) GEN idealprimedec_limit_f(GEN, GEN, long) GEN idealprincipalunits(GEN, GEN, long) GEN idealramgroups(GEN, GEN, GEN) GEN idealred0(GEN, GEN, GEN) GEN idealredmodpower(GEN, GEN, unsigned long, unsigned long) GEN idealstar0(GEN, GEN, long) GEN idealtwoelt0(GEN, GEN, GEN) GEN gpidealval(GEN, GEN, GEN) GEN gimag(GEN) GEN incgam0(GEN, GEN, GEN, long) GEN incgamc(GEN, GEN, long) GEN gp_input() void gpinstall(char *, char *, char *, char *) GEN integ(GEN, long) GEN intnumgaussinit(long, long) GEN intnuminit(GEN, GEN, long, long) long isfundamental(GEN) long ispowerful(GEN) GEN gisprime(GEN, long) GEN gispseudoprime(GEN, long) long issquarefree(GEN) void kill0(char *) long kronecker(GEN, GEN) GEN glambertW(GEN, long) GEN laurentseries0(GEN, long, long, long) GEN glcm0(GEN, GEN) long glength(GEN) int lexcmp(GEN, GEN) GEN lfun0(GEN, GEN, long, long) GEN lfunabelianrelinit(GEN, GEN, GEN, GEN, long, long) GEN lfunan(GEN, long, long) GEN lfunartin(GEN, GEN, GEN, long, long) long lfuncheckfeq(GEN, GEN, long) GEN lfunconductor(GEN, GEN, long, long) GEN lfuncost0(GEN, GEN, long, long) GEN lfuncreate(GEN) GEN lfundiv(GEN, GEN, long) GEN lfunetaquo(GEN) GEN lfungenus2(GEN) GEN lfunhardy(GEN, GEN, long) GEN lfuninit0(GEN, GEN, long, long) GEN lfunlambda0(GEN, GEN, long, long) GEN lfunmf(GEN, GEN, long) GEN lfunmfspec(GEN, long) GEN lfunmul(GEN, GEN, long) long lfunorderzero(GEN, long, long) GEN lfunqf(GEN, long) GEN lfunrootres(GEN, long) GEN lfunsympow(GEN, unsigned long) GEN lfuntheta(GEN, GEN, long, long) long lfunthetacost0(GEN, GEN, long, long) GEN lfunthetainit(GEN, GEN, long, long) GEN lfuntwist(GEN, GEN) GEN lfunzeros(GEN, GEN, long, long) GEN lift0(GEN, long) GEN liftall(GEN) GEN liftint(GEN) GEN liftpol(GEN) GEN limitnum0(GEN, GEN, long) GEN lindep0(GEN, long) GEN listinsert(GEN, GEN, long) void listkill(GEN) void listpop0(GEN, long) GEN listput0(GEN, GEN, long) void listsort(GEN, long) GEN glngamma(GEN, long) void localbitprec(GEN) void localprec(GEN) GEN glog(GEN, long) GEN glog1p(GEN, long) void mapdelete(GEN, GEN) GEN mapget(GEN, GEN) void mapput(GEN, GEN, GEN) GEN matadjoint0(GEN, long) GEN matalgtobasis(GEN, GEN) GEN matbasistoalg(GEN, GEN) GEN matcompanion(GEN) GEN matconcat(GEN) GEN det0(GEN, long) GEN detint(GEN) GEN matdetmod(GEN, GEN) GEN diagonal(GEN) GEN mateigen(GEN, long, long) GEN matfrobenius(GEN, long, long) GEN hess(GEN) GEN mathilbert(long) GEN mathnf0(GEN, long) GEN hnfmod(GEN, GEN) GEN hnfmodid(GEN, GEN) GEN mathouseholder(GEN, GEN) GEN matid(long) GEN matimage0(GEN, long) GEN imagecompl(GEN) GEN indexrank(GEN) GEN intersect(GEN, GEN) GEN inverseimage(GEN, GEN) GEN 
matinvmod(GEN, GEN) int isdiagonal(GEN) GEN matker0(GEN, long) GEN matkerint0(GEN, long) GEN matmuldiagonal(GEN, GEN) GEN matmultodiagonal(GEN, GEN) GEN matqpascal(long, GEN) GEN matpermanent(GEN) GEN matqr(GEN, long, long) long rank(GEN) GEN matrixqz0(GEN, GEN) GEN matsize(GEN) GEN matsnf0(GEN, long) GEN gauss(GEN, GEN) GEN matsolvemod(GEN, GEN, GEN, long) GEN suppl(GEN) GEN gtrans(GEN) GEN gmax(GEN, GEN) GEN mfDelta() GEN mfEH(GEN) GEN mfEk(long) GEN mfTheta(GEN) GEN mfatkin(GEN, GEN) GEN mfatkineigenvalues(GEN, long, long) GEN mfatkininit(GEN, long, long) GEN mfbasis(GEN, long) GEN mfbd(GEN, long) GEN mfbracket(GEN, GEN, long) GEN mfcoef(GEN, long) GEN mfcoefs(GEN, long, long) long mfconductor(GEN, GEN) GEN mfcosets(GEN) long mfcuspisregular(GEN, GEN) GEN mfcusps(GEN) GEN mfcuspval(GEN, GEN, GEN, long) long mfcuspwidth(GEN, GEN) GEN mfderiv(GEN, long) GEN mfderivE2(GEN, long) GEN mfdim(GEN, long) GEN mfdiv(GEN, GEN) GEN mfeigenbasis(GEN) GEN mfeigensearch(GEN, GEN) GEN mfeisenstein(long, GEN, GEN) GEN mfembed0(GEN, GEN, long) GEN mfeval(GEN, GEN, GEN, long) GEN mffields(GEN) GEN mffromell(GEN) GEN mffrometaquo(GEN, long) GEN mffromlfun(GEN, long) GEN mffromqf(GEN, GEN) GEN mfgaloisprojrep(GEN, GEN, long) GEN mfgaloistype(GEN, GEN) GEN mfhecke(GEN, GEN, long) GEN mfheckemat(GEN, GEN) GEN mfinit(GEN, long) GEN mfisCM(GEN) long mfisequal(GEN, GEN, long) GEN mfkohnenbasis(GEN) GEN mfkohnenbijection(GEN) GEN mfkohneneigenbasis(GEN, GEN) GEN mflinear(GEN, GEN) GEN mfmanin(GEN, long) GEN mfmul(GEN, GEN) GEN mfnumcusps(GEN) GEN mfparams(GEN) GEN mfperiodpol(GEN, GEN, long, long) GEN mfperiodpolbasis(long, long) GEN mfpetersson(GEN, GEN) GEN mfpow(GEN, long) GEN mfsearch(GEN, GEN, long) GEN mfshift(GEN, long) GEN mfshimura(GEN, GEN, long) long mfspace(GEN, GEN) GEN mfsplit(GEN, long, long) long mfsturm(GEN) GEN mfsymbol(GEN, GEN, long) GEN mfsymboleval(GEN, GEN, GEN, long) GEN mftaylor(GEN, long, long, long) GEN mftobasis(GEN, GEN, long) GEN mftocoset(long, GEN, GEN) GEN mftonew(GEN, GEN) GEN mftraceform(GEN, long) GEN mftwist(GEN, GEN) GEN gmin(GEN, GEN) GEN minpoly(GEN, long) GEN modreverse(GEN) long moebius(GEN) GEN msatkinlehner(GEN, long, GEN) GEN mscuspidal(GEN, long) long msdim(GEN) GEN mseisenstein(GEN) GEN mseval(GEN, GEN, GEN) GEN msfromcusp(GEN, GEN) GEN msfromell(GEN, long) GEN msfromhecke(GEN, GEN, GEN) long msgetlevel(GEN) long msgetsign(GEN) long msgetweight(GEN) GEN mshecke(GEN, long, GEN) GEN msinit(GEN, GEN, long) GEN msissymbol(GEN, GEN) GEN mslattice(GEN, GEN) GEN msnew(GEN) GEN msomseval(GEN, GEN, GEN) GEN mspadicL(GEN, GEN, long) GEN mspadicinit(GEN, long, long, long) GEN mspadicmoments(GEN, GEN, long) GEN mspadicseries(GEN, long) GEN mspathgens(GEN) GEN mspathlog(GEN, GEN) GEN mspetersson(GEN, GEN, GEN) GEN mspolygon(GEN, long) GEN msqexpansion(GEN, GEN, long) GEN mssplit(GEN, GEN, long) GEN msstar(GEN, GEN) GEN mstooms(GEN, GEN) GEN newtonpoly(GEN, GEN) GEN nextprime(GEN) GEN algtobasis(GEN, GEN) GEN basistoalg(GEN, GEN) GEN nfcertify(GEN) GEN nfcompositum(GEN, GEN, GEN, long) GEN nfdetint(GEN, GEN) GEN nfdisc(GEN) GEN nfdiscfactors(GEN) GEN nfadd(GEN, GEN, GEN) GEN nfdiv(GEN, GEN, GEN) GEN nfdiveuc(GEN, GEN, GEN) GEN nfdivmodpr(GEN, GEN, GEN, GEN) GEN nfdivrem(GEN, GEN, GEN) GEN nfeltembed(GEN, GEN, GEN, long) GEN nfmod(GEN, GEN, GEN) GEN nfmul(GEN, GEN, GEN) GEN nfmulmodpr(GEN, GEN, GEN, GEN) GEN nfnorm(GEN, GEN) GEN nfpow(GEN, GEN, GEN) GEN nfpowmodpr(GEN, GEN, GEN, GEN) GEN nfreduce(GEN, GEN, GEN) GEN nfreducemodpr(GEN, GEN, GEN) GEN nfeltsign(GEN, GEN, GEN) GEN 
nftrace(GEN, GEN) GEN nffactor(GEN, GEN) GEN nffactorback(GEN, GEN, GEN) GEN nffactormod(GEN, GEN, GEN) GEN galoisapply(GEN, GEN, GEN) GEN galoisconj0(GEN, long, GEN, long) GEN nfgrunwaldwang(GEN, GEN, GEN, GEN, long) long nfhilbert0(GEN, GEN, GEN, GEN) GEN nfhnf0(GEN, GEN, long) GEN nfhnfmod(GEN, GEN, GEN) GEN nfinit0(GEN, long, long) long isideal(GEN, GEN) GEN nfisincl(GEN, GEN) GEN nfisisom(GEN, GEN) long nfislocalpower(GEN, GEN, GEN, GEN) GEN nfkermodpr(GEN, GEN, GEN) GEN nfmodpr(GEN, GEN, GEN) GEN nfmodprinit0(GEN, GEN, long) GEN nfmodprlift(GEN, GEN, GEN) GEN nfnewprec(GEN, long) GEN nfpolsturm(GEN, GEN, GEN) GEN nfroots(GEN, GEN) GEN nfrootsof1(GEN) GEN nfsnf0(GEN, GEN, long) GEN nfsolvemodpr(GEN, GEN, GEN, GEN) GEN nfsplitting(GEN, GEN) GEN nfsubfields(GEN, long) GEN gnorm(GEN) GEN gnorml2(GEN) GEN gnormlp(GEN, GEN, long) GEN numbpart(GEN) GEN numdiv(GEN) GEN numerator(GEN, GEN) GEN numtoperm(long, GEN) long omega(GEN) GEN mkoo() GEN padicappr(GEN, GEN) GEN padicfields0(GEN, GEN, long) GEN gppadicprec(GEN, GEN) GEN parapply(GEN, GEN) GEN pareval(GEN) GEN parselect(GEN, GEN, long) GEN partitions(long, GEN, GEN) long permorder(GEN) long permsign(GEN) GEN permtonum(GEN) void plotbox(long, GEN, GEN, long) void plotclip(long) GEN plotcolor(long, GEN) void plotcopy(long, long, GEN, GEN, long) GEN plotcursor(long) void plotdraw(GEN, long) GEN plotexport(GEN, GEN, long) GEN plothraw(GEN, GEN, long) GEN plothrawexport(GEN, GEN, GEN, long) GEN plothsizes(long) void plotinit(long, GEN, GEN, long) void plotkill(long) void plotlines(long, GEN, GEN, long) void plotlinetype(long, long) void plotmove(long, GEN, GEN) void plotpoints(long, GEN, GEN) void plotpointsize(long, GEN) void plotpointtype(long, long) void plotrbox(long, GEN, GEN, long) GEN plotrecthraw(long, GEN, long) void plotrline(long, GEN, GEN) void plotrmove(long, GEN, GEN) void plotrpoint(long, GEN, GEN) void plotscale(long, GEN, GEN, GEN, GEN) void plotstring(long, char *, long) GEN polchebyshev_eval(long, long, GEN) GEN polclass(GEN, long, long) GEN polcoef(GEN, long, long) GEN polcoeff0(GEN, long, long) GEN polcompositum0(GEN, GEN, long) GEN polcyclo_eval(long, GEN) GEN polcyclofactors(GEN) GEN gppoldegree(GEN, long) GEN poldisc0(GEN, long) GEN poldiscfactors(GEN, long) GEN reduceddiscsmith(GEN) GEN polgalois(GEN, long) GEN polgraeffe(GEN) GEN polhensellift(GEN, GEN, GEN, long) GEN polhermite_eval0(long, GEN, long) long poliscyclo(GEN) long poliscycloprod(GEN) long polisirreducible(GEN) GEN pollaguerre_eval0(long, GEN, GEN, long) GEN pollead(GEN, long) GEN pollegendre_eval0(long, GEN, long) GEN polmodular(long, long, GEN, long, long) GEN polrecip(GEN) GEN polred0(GEN, long, GEN) GEN polredabs0(GEN, long) GEN polredbest(GEN, long) GEN polredord(GEN) GEN polresultant0(GEN, GEN, long, long) GEN polresultantext0(GEN, GEN, long) GEN roots(GEN, long) GEN polrootsbound(GEN, GEN) GEN polrootsff(GEN, GEN, GEN) GEN polrootsmod(GEN, GEN) GEN polrootspadic(GEN, GEN, long) GEN realroots(GEN, GEN, long) long sturmpart(GEN, GEN, GEN) GEN polsubcyclo(long, long, long) GEN sylvestermatrix(GEN, GEN) GEN polsym(GEN, long) GEN polchebyshev1(long, long) GEN polteichmuller(GEN, unsigned long, long) GEN tschirnhaus(GEN) GEN polylog0(long, GEN, long, long) GEN polzag(long, long) GEN gpowers0(GEN, long, GEN) GEN precision00(GEN, GEN) GEN precprime(GEN) GEN prime(long) GEN primecert(GEN, long) GEN primecertexport(GEN, long) long primecertisvalid(GEN) GEN primepi(GEN) GEN primes0(GEN) GEN prodeulerrat(GEN, GEN, long, long) GEN prodnumrat(GEN, long, long) 
void psdraw(GEN, long) GEN gpsi(GEN, long) GEN psplothraw(GEN, GEN, long) GEN qfauto0(GEN, GEN) GEN qfautoexport(GEN, long) GEN qfbclassno0(GEN, long) GEN qfbcompraw(GEN, GEN) GEN hclassno(GEN) GEN qfbil(GEN, GEN, GEN) GEN nucomp(GEN, GEN, GEN) GEN nupow(GEN, GEN, GEN) GEN qfbpowraw(GEN, long) GEN primeform(GEN, GEN, long) GEN qfbred0(GEN, long, GEN, GEN, GEN) GEN qfbredsl2(GEN, GEN) GEN qfbsolve(GEN, GEN) GEN qfeval0(GEN, GEN, GEN) GEN qfgaussred(GEN) GEN qfisom0(GEN, GEN, GEN, GEN) GEN qfisominit0(GEN, GEN, GEN) GEN jacobi(GEN, long) GEN qflll0(GEN, long) GEN qflllgram0(GEN, long) GEN qfminim0(GEN, GEN, GEN, long, long) GEN qfnorm(GEN, GEN) GEN qforbits(GEN, GEN) GEN qfparam(GEN, GEN, long) GEN perf(GEN) GEN qfrep0(GEN, GEN, long) GEN qfsign(GEN) GEN qfsolve(GEN) GEN quadclassunit0(GEN, long, GEN, long) GEN quaddisc(GEN) GEN quadgen0(GEN, long) GEN quadhilbert(GEN, long) GEN quadpoly0(GEN, long) GEN quadray(GEN, GEN, long) GEN quadregulator(GEN, long) GEN quadunit0(GEN, long) GEN ramanujantau(GEN) GEN genrand(GEN) GEN randomprime(GEN) GEN gp_read_file(char *) GEN readstr(char *) GEN gp_readvec_file(char *) GEN greal(GEN) GEN removeprimes(GEN) GEN rnfalgtobasis(GEN, GEN) GEN rnfbasis(GEN, GEN) GEN rnfbasistoalg(GEN, GEN) GEN rnfcharpoly(GEN, GEN, GEN, long) GEN rnfconductor(GEN, GEN) GEN rnfdedekind(GEN, GEN, GEN, long) GEN rnfdet(GEN, GEN) GEN rnfdiscf(GEN, GEN) GEN rnfeltabstorel(GEN, GEN) GEN rnfeltdown0(GEN, GEN, long) GEN rnfeltnorm(GEN, GEN) GEN rnfeltreltoabs(GEN, GEN) GEN rnfelttrace(GEN, GEN) GEN rnfeltup0(GEN, GEN, long) GEN rnfequation0(GEN, GEN, long) GEN rnfhnfbasis(GEN, GEN) GEN rnfidealabstorel(GEN, GEN) GEN rnfidealdown(GEN, GEN) GEN rnfidealfactor(GEN, GEN) GEN rnfidealhnf(GEN, GEN) GEN rnfidealmul(GEN, GEN, GEN) GEN rnfidealnormabs(GEN, GEN) GEN rnfidealnormrel(GEN, GEN) GEN rnfidealprimedec(GEN, GEN) GEN rnfidealreltoabs0(GEN, GEN, long) GEN rnfidealtwoelement(GEN, GEN) GEN rnfidealup0(GEN, GEN, long) GEN rnfinit0(GEN, GEN, long) long rnfisabelian(GEN, GEN) long rnfisfree(GEN, GEN) long rnfislocalcyclo(GEN) GEN rnfisnorm(GEN, GEN, long) GEN rnfisnorminit(GEN, GEN, long) GEN rnfkummer(GEN, GEN, long, long) GEN rnflllgram(GEN, GEN, GEN, long) GEN rnfnormgroup(GEN, GEN) GEN rnfpolred(GEN, GEN, long) GEN rnfpolredabs(GEN, GEN, long) GEN rnfpolredbest(GEN, GEN, long) GEN rnfpseudobasis(GEN, GEN) GEN rnfsteinitz(GEN, GEN) GEN grootsof1(long, long) GEN select0(GEN, GEN, long) GEN pari_self() GEN seralgdep(GEN, long, long) GEN serchop(GEN, long) GEN convol(GEN, GEN) GEN laplace(GEN) GEN gpserprec(GEN, long) GEN serreverse(GEN) GEN setbinop(GEN, GEN, GEN) GEN setintersect(GEN, GEN) long setisset(GEN) GEN setminus(GEN, GEN) void setrand(GEN) long setsearch(GEN, GEN, long) GEN setunion(GEN, GEN) GEN gshift(GEN, long) GEN gmul2n(GEN, long) GEN sumdivk(GEN, long) int gsigne(GEN) GEN simplify(GEN) GEN gsin(GEN, long) GEN gsinc(GEN, long) GEN gsinh(GEN, long) long gsizebyte(GEN) long sizedigit(GEN) GEN gsqr(GEN) GEN gsqrt(GEN, long) GEN sqrtint(GEN) GEN sqrtnint(GEN, long) GEN stirling(long, long, long) GEN pari_strchr(GEN) GEN strjoin(GEN, GEN) GEN strsplit(GEN, GEN) GEN strtime(long) GEN subgrouplist0(GEN, GEN, long) GEN gsubst(GEN, long, GEN) GEN gsubstpol(GEN, GEN, GEN) GEN gsubstvec(GEN, GEN, GEN) GEN sumdedekind(GEN, GEN) GEN sumdigits0(GEN, GEN) GEN sumeulerrat(GEN, GEN, long, long) GEN sumformal(GEN, long) GEN sumnumapinit(GEN, long) GEN sumnuminit(GEN, long) GEN sumnumlagrangeinit(GEN, GEN, long) GEN sumnummonieninit(GEN, GEN, GEN, long) GEN sumnumrat(GEN, GEN, long) void 
gpsystem(char *) GEN gtan(GEN, long) GEN gtanh(GEN, long) GEN tayl(GEN, long, long) GEN teichmuller(GEN, GEN) GEN theta(GEN, GEN, long) GEN thetanullk(GEN, long, long) GEN thue(GEN, GEN, GEN) GEN thueinit(GEN, long, long) GEN gtrace(GEN) GEN type0(GEN) void unexportall() GEN gpvaluation(GEN, GEN) GEN varhigher(char *, long) GEN gpolvar(GEN) GEN variables_vec(GEN) GEN varlower(char *, long) GEN extract0(GEN, GEN, GEN) GEN vecprod(GEN) long vecsearch(GEN, GEN, GEN) GEN vecsort0(GEN, GEN, long) GEN vecsum(GEN) GEN pari_version() GEN weber0(GEN, long, long) void gpwritebin(char *, GEN) GEN gzeta(GEN, long) GEN zetahurwitz(GEN, GEN, long, long) GEN zetamult0(GEN, GEN, long) GEN zetamultall(long, long) GEN zetamultconvert(GEN, long) GEN zetamultinit(long, long) GEN znchar(GEN) GEN zncharconductor(GEN, GEN) GEN znchardecompose(GEN, GEN, GEN) GEN znchargauss(GEN, GEN, GEN, long) GEN zncharinduce(GEN, GEN, GEN) long zncharisodd(GEN, GEN) GEN znchartokronecker(GEN, GEN, long) GEN znchartoprimitive(GEN, GEN) GEN znconreychar(GEN, GEN) GEN znconreyexp(GEN, GEN) GEN znconreylog(GEN, GEN) GEN zncoppersmith(GEN, GEN, GEN, GEN) GEN znlog0(GEN, GEN, GEN) GEN znorder(GEN, GEN) GEN znprimroot(GEN) GEN znstar0(GEN, long) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/closure.pxd0000644000175000017500000000013000000000000025775 0ustar00vincentvincent00000000000000from .gen cimport Gen cpdef Gen objtoclosure(f) cdef int _pari_init_closure() except -1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/convert.pxd0000644000175000017500000000403400000000000026010 0ustar00vincentvincent00000000000000from .paridecl cimport (GEN, t_COMPLEX, dbltor, real_0_bit, stoi, cgetg, set_gel, gen_0) from .gen cimport Gen from cpython.int cimport PyInt_AS_LONG from cpython.float cimport PyFloat_AS_DOUBLE from cpython.complex cimport PyComplex_RealAsDouble, PyComplex_ImagAsDouble # Conversion PARI -> Python cdef GEN gtoi(GEN g0) except NULL cdef PyObject_FromGEN(GEN g) cdef PyInt_FromGEN(GEN g) cpdef gen_to_python(Gen z) cpdef gen_to_integer(Gen x) # Conversion C -> PARI cdef inline GEN double_to_REAL(double x): # PARI has an odd concept where it attempts to track the accuracy # of floating-point 0; a floating-point zero might be 0.0e-20 # (meaning roughly that it might represent any number in the # range -1e-20 <= x <= 1e20). # PARI's dbltor converts a floating-point 0 into the PARI real # 0.0e-307; PARI treats this as an extremely precise 0. This # can cause problems; for instance, the PARI incgam() function can # be very slow if the first argument is very precise. # So we translate 0 into a floating-point 0 with 53 bits # of precision (that's the number of mantissa bits in an IEEE # double). 
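# The two branches below in a nutshell (illustrative values):
#   double_to_REAL(0.0) -> real_0_bit(-53)  # zero accurate to 53 bits
#   double_to_REAL(1.5) -> dbltor(1.5)      # exact t_REAL conversion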
if x == 0: return real_0_bit(-53) else: return dbltor(x) cdef inline GEN doubles_to_COMPLEX(double re, double im): cdef GEN g = cgetg(3, t_COMPLEX) if re == 0: set_gel(g, 1, gen_0) else: set_gel(g, 1, dbltor(re)) if im == 0: set_gel(g, 2, gen_0) else: set_gel(g, 2, dbltor(im)) return g # Conversion Python -> PARI cdef inline GEN PyInt_AS_GEN(x): return stoi(PyInt_AS_LONG(x)) cdef GEN PyLong_AS_GEN(x) cdef inline GEN PyFloat_AS_GEN(x): return double_to_REAL(PyFloat_AS_DOUBLE(x)) cdef inline GEN PyComplex_AS_GEN(x): return doubles_to_COMPLEX( PyComplex_RealAsDouble(x), PyComplex_ImagAsDouble(x)) cdef GEN PyObject_AsGEN(x) except? NULL # Deprecated functions still used by SageMath cdef Gen new_gen_from_double(double) cdef Gen new_t_COMPLEX_from_double(double re, double im) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/gen.pxd0000644000175000017500000000375700000000000025114 0ustar00vincentvincent00000000000000from cpython.object cimport PyObject from .types cimport GEN, pari_sp cdef class Gen_base: # The actual PARI GEN cdef GEN g cdef class Gen(Gen_base): # There are 3 kinds of memory management for a GEN: # * stack: GEN on the PARI stack # * clone: refcounted clone on the PARI heap # * constant: universal constant such as gen_0 # # A priori, it makes sense to have separate classes for these cases. # However, a GEN may be moved from the stack to the heap. This is # easier to support when there is just one class. Second, the # differences between the cases are really implementation details # which should not affect the user. # Base address of the GEN that we wrap. On the stack, this is the # value of avma when new_gen() was called. For clones, this is the # memory allocated by gclone(). For constants, this is NULL. cdef GEN address cdef inline pari_sp sp(self): return self.address # The Gen objects on the PARI stack form a linked list, from the # bottom to the top of the stack. This makes sense since we can only # deallocate a Gen which is on the bottom of the PARI stack. If this # is the last object on the stack, then next = top_of_stack # (a singleton object). # # In the clone and constant cases, this is None. cdef Gen next # A cache for __getitem__. Initially, this is None but it will be # turned into a dict when needed. cdef dict itemcache cdef inline int cache(self, key, value) except -1: """ Add ``(key, value)`` to ``self.itemcache``. 
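Hypothetical call site: ``__getitem__`` may invoke
``self.cache(i, val)`` after building a sub-``Gen`` for index
``i`` so that repeated ``g[i]`` lookups return the same wrapper.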
""" if self.itemcache is None: self.itemcache = {} self.itemcache[key] = value cdef Gen new_ref(self, GEN g) cdef GEN fixGEN(self) except NULL cdef GEN ref_target(self) except NULL cdef inline Gen Gen_new(GEN g, GEN addr): z = Gen.__new__(Gen) z.g = g z.address = addr return z cdef Gen list_of_Gens_to_Gen(list s) cpdef Gen objtogen(s) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/handle_error.pxd0000644000175000017500000000022200000000000026767 0ustar00vincentvincent00000000000000from .types cimport GEN cdef void _pari_init_error_handling() cdef int _pari_err_handle(GEN E) except 0 cdef void _pari_err_recover(long errnum) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/pari_instance.pxd0000644000175000017500000000066100000000000027151 0ustar00vincentvincent00000000000000from .types cimport * cimport cython from .gen cimport Gen cpdef long prec_bits_to_words(unsigned long prec_in_bits) cpdef long prec_words_to_bits(long prec_in_words) cpdef long default_bitprec() cdef class Pari_auto: pass cdef class Pari(Pari_auto): cdef readonly Gen PARI_ZERO, PARI_ONE, PARI_TWO cpdef Gen zero(self) cpdef Gen one(self) cdef Gen _empty_vector(self, long n) cdef long get_var(v) except -2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564779356.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/paridecl.pxd0000644000175000017500000061456600000000000026134 0ustar00vincentvincent00000000000000# distutils: libraries = gmp pari """ Declarations for non-inline functions from PARI. This file contains all declarations from headers/paridecl.h from the PARI distribution, except the inline functions which are in declinl.pxi (that file is automatically included by this file). AUTHORS: - (unknown authors before 2010) - Robert Bradshaw, Jeroen Demeyer, William Stein (2010-08-15): Upgrade to PARI 2.4.3 (:trac:`9343`) - Jeroen Demeyer (2010-08-15): big clean up (:trac:`9898`) - Jeroen Demeyer (2014-02-09): upgrade to PARI 2.7 (:trac:`15767`) - Jeroen Demeyer (2014-09-19): upgrade to PARI 2.8 (:trac:`16997`) """ #***************************************************************************** # Distributed under the terms of the GNU General Public License (GPL) # as published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. 
# http://www.gnu.org/licenses/ #***************************************************************************** from __future__ import print_function from libc.stdio cimport FILE from cpython.getargs cimport va_list from .types cimport * cdef extern from *: # PARI headers already included by types.pxd char* PARIVERSION # parierr.h enum err_list: e_SYNTAX, e_BUG e_ALARM, e_FILE e_MISC, e_FLAG, e_IMPL, e_ARCH, e_PACKAGE, e_NOTFUNC e_PREC, e_TYPE, e_DIM, e_VAR, e_PRIORITY, e_USER e_STACK, e_OVERFLOW, e_DOMAIN, e_COMPONENT e_MAXPRIME e_CONSTPOL, e_IRREDPOL, e_COPRIME, e_PRIME, e_MODULUS, e_ROOTS0 e_OP, e_TYPE2, e_INV e_MEM e_SQRTN e_NONE int warner, warnprec, warnfile, warnmem, warnuser # paricast.h long mael2(GEN, long, long) long mael3(GEN, long, long, long) long mael4(GEN, long, long, long, long) long mael5(GEN, long, long, long, long, long) long mael(GEN, long, long) GEN gmael1(GEN, long) GEN gmael2(GEN, long, long) GEN gmael3(GEN, long, long, long) GEN gmael4(GEN, long, long, long, long) GEN gmael5(GEN, long, long, long, long, long) GEN gmael(GEN, long, long) GEN gel(GEN, long) GEN gcoeff(GEN, long, long) long coeff(GEN, long, long) char* GSTR(GEN) # paricom.h GEN gen_1 GEN gen_m1 GEN gen_2 GEN gen_m2 GEN ghalf GEN gen_0 GEN gnil GEN err_e_STACK int PARI_SIGINT_block, PARI_SIGINT_pending void NEXT_PRIME_VIADIFF(long, byteptr) void PREC_PRIME_VIADIFF(long, byteptr) int INIT_JMPm, INIT_SIGm, INIT_DFTm, INIT_noPRIMEm, INIT_noIMTm int new_galois_format, factor_add_primes, factor_proven int precdl # The "except 0" here is to ensure compatibility with # _pari_err_handle() in handle_error.pyx int (*cb_pari_err_handle)(GEN) except 0 int (*cb_pari_handle_exception)(long) except 0 void (*cb_pari_err_recover)(long) # kernel/gmp/int.h GEN int_MSW(GEN z) GEN int_LSW(GEN z) GEN int_W(GEN z, long i) GEN int_W_lg(GEN z, long i, long lz) GEN int_precW(GEN z) GEN int_nextW(GEN z) # paristio.h extern pari_sp avma struct _pari_mainstack "pari_mainstack": pari_sp top, bot, vbot size_t size, rsize, vsize, memused extern _pari_mainstack* pari_mainstack struct PariOUT: void (*putch)(char) void (*puts)(char*) void (*flush)() extern PariOUT* pariOut extern PariOUT* pariErr extern byteptr diffptr ############################################### # # # All declarations below come from paridecl.h # # # ############################################### # OBSOLETE GEN bernvec(long nomb) # F2x.c GEN F2c_to_ZC(GEN x) GEN F2c_to_mod(GEN x) GEN F2m_rowslice(GEN x, long a, long b) GEN F2m_to_Flm(GEN z) GEN F2m_to_ZM(GEN z) GEN F2m_to_mod(GEN z) void F2v_add_inplace(GEN x, GEN y) ulong F2v_dotproduct(GEN x, GEN y) GEN F2v_slice(GEN x, long a, long b) GEN F2v_to_Flv(GEN x) GEN F2x_F2xq_eval(GEN Q, GEN x, GEN T) GEN F2x_F2xqV_eval(GEN P, GEN V, GEN T) GEN F2x_Frobenius(GEN T) GEN F2x_1_add(GEN y) GEN F2x_add(GEN x, GEN y) GEN F2x_deflate(GEN x0, long d) long F2x_degree(GEN x) GEN F2x_deriv(GEN x) GEN F2x_divrem(GEN x, GEN y, GEN *pr) ulong F2x_eval(GEN P, ulong x) void F2x_even_odd(GEN p, GEN *pe, GEN *po) GEN F2x_extgcd(GEN a, GEN b, GEN *ptu, GEN *ptv) GEN F2x_gcd(GEN a, GEN b) GEN F2x_halfgcd(GEN a, GEN b) int F2x_issquare(GEN a) GEN F2x_matFrobenius(GEN T) GEN F2x_mul(GEN x, GEN y) GEN F2x_rem(GEN x, GEN y) GEN F2x_shift(GEN y, long d) GEN F2x_sqr(GEN x) GEN F2x_sqrt(GEN x) GEN F2x_to_F2v(GEN x, long n) GEN F2x_to_Flx(GEN x) GEN F2x_to_ZX(GEN x) long F2x_valrem(GEN x, GEN *Z) GEN F2xC_to_FlxC(GEN v) GEN F2xC_to_ZXC(GEN x) GEN F2xV_to_F2m(GEN v, long n) GEN F2xX_F2x_mul(GEN P, GEN U) GEN F2xX_add(GEN x, GEN y) GEN F2xX_deriv(GEN 
z) GEN F2xX_renormalize(GEN x, long lx) GEN F2xX_to_Kronecker(GEN P, long d) GEN F2xX_to_ZXX(GEN B) GEN F2xY_F2xq_evalx(GEN P, GEN x, GEN T) GEN F2xY_F2xqV_evalx(GEN P, GEN x, GEN T) long F2xY_degreex(GEN b) GEN F2xq_Artin_Schreier(GEN a, GEN T) GEN F2xq_autpow(GEN x, long n, GEN T) GEN F2xq_conjvec(GEN x, GEN T) GEN F2xq_div(GEN x, GEN y, GEN T) GEN F2xq_inv(GEN x, GEN T) GEN F2xq_invsafe(GEN x, GEN T) GEN F2xq_log(GEN a, GEN g, GEN ord, GEN T) GEN F2xq_matrix_pow(GEN y, long n, long m, GEN P) GEN F2xq_mul(GEN x, GEN y, GEN pol) GEN F2xq_order(GEN a, GEN ord, GEN T) GEN F2xq_pow(GEN x, GEN n, GEN pol) GEN F2xq_powu(GEN x, ulong n, GEN pol) GEN F2xq_powers(GEN x, long l, GEN T) GEN F2xq_sqr(GEN x, GEN pol) GEN F2xq_sqrt(GEN a, GEN T) GEN F2xq_sqrt_fast(GEN c, GEN sqx, GEN T) GEN F2xq_sqrtn(GEN a, GEN n, GEN T, GEN *zeta) ulong F2xq_trace(GEN x, GEN T) GEN F2xqX_F2xq_mul(GEN P, GEN U, GEN T) GEN F2xqX_F2xq_mul_to_monic(GEN P, GEN U, GEN T) GEN F2xqX_F2xqXQ_eval(GEN Q, GEN x, GEN S, GEN T) GEN F2xqX_F2xqXQV_eval(GEN P, GEN V, GEN S, GEN T) GEN F2xqX_divrem(GEN x, GEN y, GEN T, GEN *pr) GEN F2xqX_gcd(GEN a, GEN b, GEN T) GEN F2xqX_mul(GEN x, GEN y, GEN T) GEN F2xqX_normalize(GEN z, GEN T) GEN F2xqX_red(GEN z, GEN T) GEN F2xqX_rem(GEN x, GEN S, GEN T) GEN F2xqX_sqr(GEN x, GEN T) GEN F2xqXQ_mul(GEN x, GEN y, GEN S, GEN T) GEN F2xqXQ_sqr(GEN x, GEN S, GEN T) GEN F2xqXQ_pow(GEN x, GEN n, GEN S, GEN T) GEN F2xqXQ_powers(GEN x, long l, GEN S, GEN T) GEN F2xqXQV_autpow(GEN aut, long n, GEN S, GEN T) GEN F2xqXQV_auttrace(GEN aut, long n, GEN S, GEN T) GEN Flm_to_F2m(GEN x) GEN Flv_to_F2v(GEN x) GEN Flx_to_F2x(GEN x) GEN FlxC_to_F2xC(GEN x) GEN FlxX_to_F2xX(GEN B) GEN Kronecker_to_F2xqX(GEN z, GEN T) GEN Rg_to_F2xq(GEN x, GEN T) GEN RgM_to_F2m(GEN x) GEN RgV_to_F2v(GEN x) GEN RgX_to_F2x(GEN x) GEN Z_to_F2x(GEN x, long v) GEN ZM_to_F2m(GEN x) GEN ZV_to_F2v(GEN x) GEN ZX_to_F2x(GEN x) GEN ZXX_to_F2xX(GEN B, long v) GEN const_F2v(long m) GEN gener_F2xq(GEN T, GEN *po) const bb_field *get_F2xq_field(void **E, GEN T) GEN monomial_F2x(long d, long vs) GEN pol1_F2xX(long v, long sv) GEN polx_F2xX(long v, long sv) GEN random_F2x(long d, long vs) GEN random_F2xqX(long d1, long v, GEN T) # F2xqE.c GEN F2xq_ellcard(GEN a2, GEN a6, GEN T) GEN F2xq_ellgens(GEN a2, GEN a6, GEN ch, GEN D, GEN m, GEN T) GEN F2xq_ellgroup(GEN a2, GEN a6, GEN N, GEN T, GEN *pt_m) void F2xq_elltwist(GEN a, GEN a6, GEN T, GEN *pt_a, GEN *pt_a6) GEN F2xqE_add(GEN P, GEN Q, GEN a2, GEN T) GEN F2xqE_changepoint(GEN x, GEN ch, GEN T) GEN F2xqE_changepointinv(GEN x, GEN ch, GEN T) GEN F2xqE_dbl(GEN P, GEN a2, GEN T) GEN F2xqE_log(GEN a, GEN b, GEN o, GEN a2, GEN T) GEN F2xqE_mul(GEN P, GEN n, GEN a2, GEN T) GEN F2xqE_neg(GEN P, GEN a2, GEN T) GEN F2xqE_order(GEN z, GEN o, GEN a2, GEN T) GEN F2xqE_sub(GEN P, GEN Q, GEN a2, GEN T) GEN F2xqE_tatepairing(GEN t, GEN s, GEN m, GEN a2, GEN T) GEN F2xqE_weilpairing(GEN t, GEN s, GEN m, GEN a2, GEN T) const bb_group * get_F2xqE_group(void **E, GEN a2, GEN a6, GEN T) GEN RgE_to_F2xqE(GEN x, GEN T) GEN random_F2xqE(GEN a2, GEN a6, GEN T) # Fle.c ulong Fl_elldisc(ulong a4, ulong a6, ulong p) ulong Fl_elldisc_pre(ulong a4, ulong a6, ulong p, ulong pi) ulong Fl_ellj(ulong a4, ulong a6, ulong p) void Fl_ellj_to_a4a6(ulong j, ulong p, ulong *pt_a4, ulong *pt_a6) void Fl_elltwist(ulong a4, ulong a6, ulong p, ulong *pt_a4, ulong *pt_a6) void Fl_elltwist_disc(ulong a4, ulong a6, ulong D, ulong p, ulong *pt_a4, ulong *pt_a6) GEN Fle_add(GEN P, GEN Q, ulong a4, ulong p) GEN Fle_dbl(GEN P, ulong a4, ulong p) GEN 
Fle_changepoint(GEN x, GEN ch, ulong p) GEN Fle_changepointinv(GEN x, GEN ch, ulong p) GEN Fle_log(GEN a, GEN b, GEN o, ulong a4, ulong p) GEN Fle_mul(GEN P, GEN n, ulong a4, ulong p) GEN Fle_mulu(GEN P, ulong n, ulong a4, ulong p) GEN Fle_order(GEN z, GEN o, ulong a4, ulong p) GEN Fle_sub(GEN P, GEN Q, ulong a4, ulong p) GEN Fle_to_Flj(GEN P) GEN Flj_add_pre(GEN P, GEN Q, ulong a4, ulong p, ulong pi) GEN Flj_dbl_pre(GEN P, ulong a4, ulong p, ulong pi) GEN Flj_mulu_pre(GEN P, ulong n, ulong a4, ulong p, ulong pi) GEN Flj_neg(GEN Q, ulong p) GEN Flj_to_Fle_pre(GEN P, ulong p, ulong pi) GEN random_Fle(ulong a4, ulong a6, ulong p) GEN random_Fle_pre(ulong a4, ulong a6, ulong p, ulong pi) GEN random_Flj_pre(ulong a4, ulong a6, ulong p, ulong pi) # Flx.c GEN Fl_to_Flx(ulong x, long sv) int Fl2_equal1(GEN x) GEN Fl2_inv_pre(GEN x, ulong D, ulong p, ulong pi) GEN Fl2_mul_pre(GEN x, GEN y, ulong D, ulong p, ulong pi) ulong Fl2_norm_pre(GEN x, ulong D, ulong p, ulong pi) GEN Fl2_pow_pre(GEN x, GEN n, ulong D, ulong p, ulong pi) GEN Fl2_sqr_pre(GEN x, ulong D, ulong p, ulong pi) GEN Fl2_sqrtn_pre(GEN a, GEN n, ulong D, ulong p, ulong pi, GEN *zeta) GEN Flc_to_ZC(GEN z) GEN Flm_to_FlxV(GEN x, long sv) GEN Flm_to_FlxX(GEN x, long v, long w) GEN Flm_to_ZM(GEN z) GEN Flv_Flm_polint(GEN xa, GEN ya, ulong p, long vs) GEN Flv_inv(GEN x, ulong p) void Flv_inv_inplace(GEN x, ulong p) void Flv_inv_pre_inplace(GEN x, ulong p, ulong pi) GEN Flv_inv_pre(GEN x, ulong p, ulong pi) GEN Flv_polint(GEN xa, GEN ya, ulong p, long vs) ulong Flv_prod(GEN v, ulong p) ulong Flv_prod_pre(GEN x, ulong p, ulong pi) GEN Flv_roots_to_pol(GEN a, ulong p, long vs) GEN Flv_to_Flx(GEN x, long vs) GEN Fly_to_FlxY(GEN B, long v) GEN Flv_to_ZV(GEN z) GEN Flx_Fl_add(GEN y, ulong x, ulong p) GEN Flx_Fl_mul(GEN y, ulong x, ulong p) GEN Flx_Fl_mul_to_monic(GEN y, ulong x, ulong p) GEN Flx_Fl2_eval_pre(GEN x, GEN y, ulong D, ulong p, ulong pi) GEN Flx_Flv_multieval(GEN P, GEN v, ulong p) GEN Flx_Flxq_eval(GEN f, GEN x, GEN T, ulong p) GEN Flx_FlxqV_eval(GEN f, GEN x, GEN T, ulong p) GEN Flx_Frobenius(GEN T, ulong p) GEN Flx_add(GEN x, GEN y, ulong p) GEN Flx_deflate(GEN x0, long d) GEN Flx_deriv(GEN z, ulong p) GEN Flx_double(GEN y, ulong p) GEN Flx_div_by_X_x(GEN a, ulong x, ulong p, ulong *rem) GEN Flx_divrem(GEN x, GEN y, ulong p, GEN *pr) int Flx_equal(GEN V, GEN W) ulong Flx_eval(GEN x, ulong y, ulong p) ulong Flx_eval_powers_pre(GEN x, GEN y, ulong p, ulong pi) ulong Flx_eval_pre(GEN x, ulong y, ulong p, ulong pi) GEN Flx_extgcd(GEN a, GEN b, ulong p, GEN *ptu, GEN *ptv) ulong Flx_extresultant(GEN a, GEN b, ulong p, GEN *ptU, GEN *ptV) GEN Flx_gcd(GEN a, GEN b, ulong p) GEN Flx_get_red(GEN T, ulong p) GEN Flx_halfgcd(GEN a, GEN b, ulong p) GEN Flx_halve(GEN y, ulong p) GEN Flx_inflate(GEN x0, long d) GEN Flx_invBarrett(GEN T, ulong p) int Flx_is_squarefree(GEN z, ulong p) int Flx_is_smooth(GEN g, long r, ulong p) GEN Flx_matFrobenius(GEN T, ulong p) GEN Flx_mod_Xn1(GEN T, ulong n, ulong p) GEN Flx_mod_Xnm1(GEN T, ulong n, ulong p) GEN Flx_mul(GEN x, GEN y, ulong p) GEN Flx_neg(GEN x, ulong p) GEN Flx_neg_inplace(GEN x, ulong p) GEN Flx_normalize(GEN z, ulong p) GEN Flx_powu(GEN x, ulong n, ulong p) GEN Flx_recip(GEN x) GEN Flx_red(GEN z, ulong p) GEN Flx_rem(GEN x, GEN y, ulong p) GEN Flx_renormalize(GEN x, long l) GEN Flx_rescale(GEN P, ulong h, ulong p) ulong Flx_resultant(GEN a, GEN b, ulong p) GEN Flx_shift(GEN a, long n) GEN Flx_splitting(GEN p, long k) GEN Flx_sqr(GEN x, ulong p) GEN Flx_sub(GEN x, GEN y, ulong p) GEN 
Flx_to_Flv(GEN x, long N) GEN Flx_to_FlxX(GEN z, long v) GEN Flx_to_ZX(GEN z) GEN Flx_to_ZX_inplace(GEN z) GEN Flx_triple(GEN y, ulong p) long Flx_val(GEN x) long Flx_valrem(GEN x, GEN *Z) GEN FlxC_to_ZXC(GEN x) GEN FlxM_Flx_add_shallow(GEN x, GEN y, ulong p) GEN FlxM_to_ZXM(GEN z) GEN FlxT_red(GEN z, ulong p) GEN FlxV_Flc_mul(GEN V, GEN W, ulong p) GEN FlxV_prod(GEN V, ulong p) GEN FlxV_red(GEN z, ulong p) GEN FlxV_to_Flm(GEN v, long n) GEN FlxV_to_ZXV(GEN x) GEN FlxX_Fl_mul(GEN x, ulong y, ulong p) GEN FlxX_Flx_add(GEN y, GEN x, ulong p) GEN FlxX_Flx_mul(GEN x, GEN y, ulong p) GEN FlxX_add(GEN P, GEN Q, ulong p) GEN FlxX_deriv(GEN z, ulong p) GEN FlxX_double(GEN x, ulong p) GEN FlxX_neg(GEN x, ulong p) GEN FlxX_sub(GEN P, GEN Q, ulong p) GEN FlxX_swap(GEN x, long n, long ws) GEN FlxX_renormalize(GEN x, long lx) GEN FlxX_shift(GEN a, long n) GEN FlxX_to_Flm(GEN v, long n) GEN FlxX_to_FlxC(GEN x, long N, long sv) GEN FlxX_to_ZXX(GEN B) GEN FlxX_triple(GEN x, ulong p) GEN FlxXC_to_ZXXC(GEN B) GEN FlxXM_to_ZXXM(GEN B) GEN FlxXV_to_FlxM(GEN v, long n, long sv) GEN FlxY_Flx_div(GEN x, GEN y, ulong p) GEN FlxY_Flx_translate(GEN P, GEN c, ulong p) GEN FlxY_Flxq_evalx(GEN P, GEN x, GEN T, ulong p) GEN FlxY_FlxqV_evalx(GEN P, GEN x, GEN T, ulong p) ulong FlxY_eval_powers_pre(GEN pol, GEN ypowers, GEN xpowers, ulong p, ulong pi) GEN FlxY_evalx(GEN Q, ulong x, ulong p) GEN FlxY_evalx_powers_pre(GEN pol, GEN ypowers, ulong p, ulong pi) GEN FlxYqq_pow(GEN x, GEN n, GEN S, GEN T, ulong p) GEN Flxq_autpow(GEN x, ulong n, GEN T, ulong p) GEN Flxq_autsum(GEN x, ulong n, GEN T, ulong p) GEN Flxq_auttrace(GEN x, ulong n, GEN T, ulong p) GEN Flxq_charpoly(GEN x, GEN T, ulong p) GEN Flxq_conjvec(GEN x, GEN T, ulong p) GEN Flxq_div(GEN x, GEN y, GEN T, ulong p) GEN Flxq_inv(GEN x, GEN T, ulong p) GEN Flxq_invsafe(GEN x, GEN T, ulong p) int Flxq_issquare(GEN x, GEN T, ulong p) int Flxq_is2npower(GEN x, long n, GEN T, ulong p) GEN Flxq_log(GEN a, GEN g, GEN ord, GEN T, ulong p) GEN Flxq_lroot(GEN a, GEN T, long p) GEN Flxq_lroot_fast(GEN a, GEN sqx, GEN T, long p) GEN Flxq_matrix_pow(GEN y, long n, long m, GEN P, ulong l) GEN Flxq_minpoly(GEN x, GEN T, ulong p) GEN Flxq_mul(GEN x, GEN y, GEN T, ulong p) ulong Flxq_norm(GEN x, GEN T, ulong p) GEN Flxq_order(GEN a, GEN ord, GEN T, ulong p) GEN Flxq_pow(GEN x, GEN n, GEN T, ulong p) GEN Flxq_powu(GEN x, ulong n, GEN T, ulong p) GEN Flxq_powers(GEN x, long l, GEN T, ulong p) GEN Flxq_sqr(GEN y, GEN T, ulong p) GEN Flxq_sqrt(GEN a, GEN T, ulong p) GEN Flxq_sqrtn(GEN a, GEN n, GEN T, ulong p, GEN *zetan) ulong Flxq_trace(GEN x, GEN T, ulong p) GEN FlxqV_dotproduct(GEN x, GEN y, GEN T, ulong p) GEN FlxqV_roots_to_pol(GEN V, GEN T, ulong p, long v) GEN FlxqX_FlxqXQ_eval(GEN Q, GEN x, GEN S, GEN T, ulong p) GEN FlxqX_FlxqXQV_eval(GEN P, GEN V, GEN S, GEN T, ulong p) GEN FlxqX_Flxq_mul(GEN P, GEN U, GEN T, ulong p) GEN FlxqX_Flxq_mul_to_monic(GEN P, GEN U, GEN T, ulong p) GEN FlxqX_divrem(GEN x, GEN y, GEN T, ulong p, GEN *pr) GEN FlxqX_extgcd(GEN a, GEN b, GEN T, ulong p, GEN *ptu, GEN *ptv) GEN FlxqX_gcd(GEN P, GEN Q, GEN T, ulong p) GEN FlxqX_get_red(GEN S, GEN T, ulong p) GEN FlxqX_halfgcd(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_invBarrett(GEN T, GEN Q, ulong p) GEN FlxqX_mul(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_normalize(GEN z, GEN T, ulong p) GEN FlxqX_pow(GEN V, long n, GEN T, ulong p) GEN FlxqX_red(GEN z, GEN T, ulong p) GEN FlxqX_rem(GEN x, GEN y, GEN T, ulong p) GEN FlxqX_safegcd(GEN P, GEN Q, GEN T, ulong p) GEN FlxqX_sqr(GEN x, GEN T, ulong p) GEN 
FlxqXQ_div(GEN x, GEN y, GEN S, GEN T, ulong p) GEN FlxqXQ_inv(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQ_invsafe(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQ_matrix_pow(GEN x, long n, long m, GEN S, GEN T, ulong p) GEN FlxqXQ_mul(GEN x, GEN y, GEN S, GEN T, ulong p) GEN FlxqXQ_pow(GEN x, GEN n, GEN S, GEN T, ulong p) GEN FlxqXQ_powu(GEN x, ulong n, GEN S, GEN T, ulong p) GEN FlxqXQ_powers(GEN x, long n, GEN S, GEN T, ulong p) GEN FlxqXQ_sqr(GEN x, GEN S, GEN T, ulong p) GEN FlxqXQV_autpow(GEN x, long n, GEN S, GEN T, ulong p) GEN FlxqXQV_autsum(GEN aut, long n, GEN S, GEN T, ulong p) GEN FlxqXV_prod(GEN V, GEN T, ulong p) GEN Kronecker_to_FlxqX(GEN z, GEN T, ulong p) ulong Rg_to_F2(GEN x) ulong Rg_to_Fl(GEN x, ulong p) GEN Rg_to_Flxq(GEN x, GEN T, ulong p) GEN RgX_to_Flx(GEN x, ulong p) GEN Z_to_Flx(GEN x, ulong p, long sv) GEN ZX_to_Flx(GEN x, ulong p) GEN ZXV_to_FlxV(GEN v, ulong p) GEN ZXT_to_FlxT(GEN z, ulong p) GEN ZXX_to_FlxX(GEN B, ulong p, long v) GEN ZXXT_to_FlxXT(GEN z, ulong p, long v) GEN ZXXV_to_FlxXV(GEN V, ulong p, long v) GEN gener_Flxq(GEN T, ulong p, GEN *o) long get_Flx_degree(GEN T) GEN get_Flx_mod(GEN T) long get_Flx_var(GEN T) long get_FlxqX_degree(GEN T) GEN get_FlxqX_mod(GEN T) long get_FlxqX_var(GEN T) const bb_field *get_Flxq_field(void **E, GEN T, ulong p) const bb_group *get_Flxq_star(void **E, GEN T, ulong p) GEN monomial_Flx(ulong a, long d, long vs) GEN pol1_FlxX(long v, long sv) GEN polx_FlxX(long v, long sv) GEN random_Flx(long d1, long v, ulong p) GEN random_FlxqX(long d1, long v, GEN T, ulong p) GEN zx_to_Flx(GEN x, ulong p) GEN zxX_to_FlxX(GEN B, ulong p) GEN zxX_to_Kronecker(GEN P, GEN Q) # FlxqE.c GEN Flxq_ellcard(GEN a4, GEN a6, GEN T, ulong p) GEN Flxq_ellgens(GEN a4, GEN a6, GEN ch, GEN D, GEN m, GEN T, ulong p) GEN Flxq_ellgroup(GEN a4, GEN a6, GEN N, GEN T, ulong p, GEN *pt_m) void Flxq_elltwist(GEN a, GEN a6, GEN T, ulong p, GEN *pt_a, GEN *pt_a6) GEN Flxq_ellj(GEN a4, GEN a6, GEN T, ulong p) void Flxq_ellj_to_a4a6(GEN j, GEN T, ulong p, GEN *pt_a4, GEN *pt_a6) GEN FlxqE_add(GEN P, GEN Q, GEN a4, GEN T, ulong p) GEN FlxqE_changepoint(GEN x, GEN ch, GEN T, ulong p) GEN FlxqE_changepointinv(GEN x, GEN ch, GEN T, ulong p) GEN FlxqE_dbl(GEN P, GEN a4, GEN T, ulong p) GEN FlxqE_log(GEN a, GEN b, GEN o, GEN a4, GEN T, ulong p) GEN FlxqE_mul(GEN P, GEN n, GEN a4, GEN T, ulong p) GEN FlxqE_neg(GEN P, GEN T, ulong p) GEN FlxqE_order(GEN z, GEN o, GEN a4, GEN T, ulong p) GEN FlxqE_sub(GEN P, GEN Q, GEN a4, GEN T, ulong p) GEN FlxqE_tatepairing(GEN t, GEN s, GEN m, GEN a4, GEN T, ulong p) GEN FlxqE_weilpairing(GEN t, GEN s, GEN m, GEN a4, GEN T, ulong p) const bb_group * get_FlxqE_group(void **E, GEN a4, GEN a6, GEN T, ulong p) GEN RgE_to_FlxqE(GEN x, GEN T, ulong p) GEN random_FlxqE(GEN a4, GEN a6, GEN T, ulong p) # FpE.c long Fl_elltrace(ulong a4, ulong a6, ulong p) long Fl_elltrace_CM(long CM, ulong a4, ulong a6, ulong p) GEN Fp_ellcard(GEN a4, GEN a6, GEN p) GEN Fp_elldivpol(GEN a4, GEN a6, long n, GEN p) GEN Fp_ellgens(GEN a4, GEN a6, GEN ch, GEN D, GEN m, GEN p) GEN Fp_ellgroup(GEN a4, GEN a6, GEN N, GEN p, GEN *pt_m) GEN Fp_ellj(GEN a4, GEN a6, GEN p) int Fp_elljissupersingular(GEN j, GEN p) void Fp_elltwist(GEN a4, GEN a6, GEN p, GEN *pt_a4, GEN *pt_a6) GEN Fp_ffellcard(GEN a4, GEN a6, GEN q, long n, GEN p) GEN FpE_add(GEN P, GEN Q, GEN a4, GEN p) GEN FpE_changepoint(GEN x, GEN ch, GEN p) GEN FpE_changepointinv(GEN x, GEN ch, GEN p) GEN FpE_dbl(GEN P, GEN a4, GEN p) GEN FpE_log(GEN a, GEN b, GEN o, GEN a4, GEN p) GEN FpE_mul(GEN P, GEN n, GEN a4, GEN p) 
GEN FpE_neg(GEN P, GEN p) GEN FpE_order(GEN z, GEN o, GEN a4, GEN p) GEN FpE_sub(GEN P, GEN Q, GEN a4, GEN p) GEN FpE_to_mod(GEN P, GEN p) GEN FpE_tatepairing(GEN t, GEN s, GEN m, GEN a4, GEN p) GEN FpE_weilpairing(GEN t, GEN s, GEN m, GEN a4, GEN p) GEN FpXQ_ellcard(GEN a4, GEN a6, GEN T, GEN p) GEN FpXQ_elldivpol(GEN a4, GEN a6, long n, GEN T, GEN p) GEN FpXQ_ellgens(GEN a4, GEN a6, GEN ch, GEN D, GEN m, GEN T, GEN p) GEN FpXQ_ellgroup(GEN a4, GEN a6, GEN N, GEN T, GEN p, GEN *pt_m) GEN FpXQ_ellj(GEN a4, GEN a6, GEN T, GEN p) int FpXQ_elljissupersingular(GEN j, GEN T, GEN p) void FpXQ_elltwist(GEN a4, GEN a6, GEN T, GEN p, GEN *pt_a4, GEN *pt_a6) GEN FpXQE_add(GEN P, GEN Q, GEN a4, GEN T, GEN p) GEN FpXQE_changepoint(GEN x, GEN ch, GEN T, GEN p) GEN FpXQE_changepointinv(GEN x, GEN ch, GEN T, GEN p) GEN FpXQE_dbl(GEN P, GEN a4, GEN T, GEN p) GEN FpXQE_log(GEN a, GEN b, GEN o, GEN a4, GEN T, GEN p) GEN FpXQE_mul(GEN P, GEN n, GEN a4, GEN T, GEN p) GEN FpXQE_neg(GEN P, GEN T, GEN p) GEN FpXQE_order(GEN z, GEN o, GEN a4, GEN T, GEN p) GEN FpXQE_sub(GEN P, GEN Q, GEN a4, GEN T, GEN p) GEN FpXQE_tatepairing(GEN t, GEN s, GEN m, GEN a4, GEN T, GEN p) GEN FpXQE_weilpairing(GEN t, GEN s, GEN m, GEN a4, GEN T, GEN p) GEN Fq_elldivpolmod(GEN a4, GEN a6, long n, GEN h, GEN T, GEN p) GEN RgE_to_FpE(GEN x, GEN p) GEN RgE_to_FpXQE(GEN x, GEN T, GEN p) const bb_group * get_FpE_group(void **E, GEN a4, GEN a6, GEN p) const bb_group * get_FpXQE_group(void **E, GEN a4, GEN a6, GEN T, GEN p) GEN elltrace_extension(GEN t, long n, GEN p) GEN random_FpE(GEN a4, GEN a6, GEN p) GEN random_FpXQE(GEN a4, GEN a6, GEN T, GEN p) # FpX.c int Fp_issquare(GEN x, GEN p) GEN Fp_FpX_sub(GEN x, GEN y, GEN p) GEN Fp_FpXQ_log(GEN a, GEN g, GEN ord, GEN T, GEN p) GEN FpV_FpM_polint(GEN xa, GEN ya, GEN p, long vs) GEN FpV_inv(GEN x, GEN p) GEN FpV_invVandermonde(GEN L, GEN den, GEN p) GEN FpV_polint(GEN xa, GEN ya, GEN p, long v) GEN FpV_roots_to_pol(GEN V, GEN p, long v) GEN FpX_Fp_add(GEN x, GEN y, GEN p) GEN FpX_Fp_add_shallow(GEN y, GEN x, GEN p) GEN FpX_Fp_mul(GEN x, GEN y, GEN p) GEN FpX_Fp_mul_to_monic(GEN y, GEN x, GEN p) GEN FpX_Fp_mulspec(GEN y, GEN x, GEN p, long ly) GEN FpX_Fp_sub(GEN x, GEN y, GEN p) GEN FpX_Fp_sub_shallow(GEN y, GEN x, GEN p) GEN FpX_FpV_multieval(GEN P, GEN xa, GEN p) GEN FpX_FpXQ_eval(GEN f, GEN x, GEN T, GEN p) GEN FpX_FpXQV_eval(GEN f, GEN x, GEN T, GEN p) GEN FpX_Frobenius(GEN T, GEN p) GEN FpX_add(GEN x, GEN y, GEN p) GEN FpX_center(GEN x, GEN p, GEN pov2) GEN FpX_chinese_coprime(GEN x, GEN y, GEN Tx, GEN Ty, GEN Tz, GEN p) GEN FpX_deriv(GEN x, GEN p) GEN FpX_digits(GEN x, GEN y, GEN p) GEN FpX_disc(GEN x, GEN p) GEN FpX_div_by_X_x(GEN a, GEN x, GEN p, GEN *r) GEN FpX_divrem(GEN x, GEN y, GEN p, GEN *pr) GEN FpX_dotproduct(GEN x, GEN y, GEN p) GEN FpX_eval(GEN x, GEN y, GEN p) GEN FpX_extgcd(GEN x, GEN y, GEN p, GEN *ptu, GEN *ptv) GEN FpX_fromdigits(GEN x, GEN T, GEN p) GEN FpX_gcd(GEN x, GEN y, GEN p) GEN FpX_get_red(GEN T, GEN p) GEN FpX_halve(GEN y, GEN p) GEN FpX_halfgcd(GEN x, GEN y, GEN p) GEN FpX_invBarrett(GEN T, GEN p) int FpX_is_squarefree(GEN f, GEN p) GEN FpX_matFrobenius(GEN T, GEN p) GEN FpX_mul(GEN x, GEN y, GEN p) GEN FpX_mulspec(GEN a, GEN b, GEN p, long na, long nb) GEN FpX_mulu(GEN x, ulong y, GEN p) GEN FpX_neg(GEN x, GEN p) GEN FpX_normalize(GEN z, GEN p) GEN FpX_powu(GEN x, ulong n, GEN p) GEN FpX_red(GEN z, GEN p) GEN FpX_rem(GEN x, GEN y, GEN p) GEN FpX_rescale(GEN P, GEN h, GEN p) GEN FpX_resultant(GEN a, GEN b, GEN p) GEN FpX_sqr(GEN x, GEN p) GEN FpX_sub(GEN x, GEN 
# FpX.c
int Fp_issquare(GEN x, GEN p)
GEN Fp_FpX_sub(GEN x, GEN y, GEN p)
GEN Fp_FpXQ_log(GEN a, GEN g, GEN ord, GEN T, GEN p)
GEN FpV_FpM_polint(GEN xa, GEN ya, GEN p, long vs)
GEN FpV_inv(GEN x, GEN p)
GEN FpV_invVandermonde(GEN L, GEN den, GEN p)
GEN FpV_polint(GEN xa, GEN ya, GEN p, long v)
GEN FpV_roots_to_pol(GEN V, GEN p, long v)
GEN FpX_Fp_add(GEN x, GEN y, GEN p)
GEN FpX_Fp_add_shallow(GEN y, GEN x, GEN p)
GEN FpX_Fp_mul(GEN x, GEN y, GEN p)
GEN FpX_Fp_mul_to_monic(GEN y, GEN x, GEN p)
GEN FpX_Fp_mulspec(GEN y, GEN x, GEN p, long ly)
GEN FpX_Fp_sub(GEN x, GEN y, GEN p)
GEN FpX_Fp_sub_shallow(GEN y, GEN x, GEN p)
GEN FpX_FpV_multieval(GEN P, GEN xa, GEN p)
GEN FpX_FpXQ_eval(GEN f, GEN x, GEN T, GEN p)
GEN FpX_FpXQV_eval(GEN f, GEN x, GEN T, GEN p)
GEN FpX_Frobenius(GEN T, GEN p)
GEN FpX_add(GEN x, GEN y, GEN p)
GEN FpX_center(GEN x, GEN p, GEN pov2)
GEN FpX_chinese_coprime(GEN x, GEN y, GEN Tx, GEN Ty, GEN Tz, GEN p)
GEN FpX_deriv(GEN x, GEN p)
GEN FpX_digits(GEN x, GEN y, GEN p)
GEN FpX_disc(GEN x, GEN p)
GEN FpX_div_by_X_x(GEN a, GEN x, GEN p, GEN *r)
GEN FpX_divrem(GEN x, GEN y, GEN p, GEN *pr)
GEN FpX_dotproduct(GEN x, GEN y, GEN p)
GEN FpX_eval(GEN x, GEN y, GEN p)
GEN FpX_extgcd(GEN x, GEN y, GEN p, GEN *ptu, GEN *ptv)
GEN FpX_fromdigits(GEN x, GEN T, GEN p)
GEN FpX_gcd(GEN x, GEN y, GEN p)
GEN FpX_get_red(GEN T, GEN p)
GEN FpX_halve(GEN y, GEN p)
GEN FpX_halfgcd(GEN x, GEN y, GEN p)
GEN FpX_invBarrett(GEN T, GEN p)
int FpX_is_squarefree(GEN f, GEN p)
GEN FpX_matFrobenius(GEN T, GEN p)
GEN FpX_mul(GEN x, GEN y, GEN p)
GEN FpX_mulspec(GEN a, GEN b, GEN p, long na, long nb)
GEN FpX_mulu(GEN x, ulong y, GEN p)
GEN FpX_neg(GEN x, GEN p)
GEN FpX_normalize(GEN z, GEN p)
GEN FpX_powu(GEN x, ulong n, GEN p)
GEN FpX_red(GEN z, GEN p)
GEN FpX_rem(GEN x, GEN y, GEN p)
GEN FpX_rescale(GEN P, GEN h, GEN p)
GEN FpX_resultant(GEN a, GEN b, GEN p)
GEN FpX_sqr(GEN x, GEN p)
GEN FpX_sub(GEN x, GEN y, GEN p)
long FpX_valrem(GEN x0, GEN t, GEN p, GEN *py)
GEN FpXC_FpXQV_eval(GEN Q, GEN x, GEN T, GEN p)
GEN FpXM_FpXQV_eval(GEN Q, GEN x, GEN T, GEN p)
GEN FpXQ_autpow(GEN x, ulong n, GEN T, GEN p)
GEN FpXQ_autpowers(GEN aut, long f, GEN T, GEN p)
GEN FpXQ_autsum(GEN x, ulong n, GEN T, GEN p)
GEN FpXQ_auttrace(GEN x, ulong n, GEN T, GEN p)
GEN FpXQ_charpoly(GEN x, GEN T, GEN p)
GEN FpXQ_conjvec(GEN x, GEN T, GEN p)
GEN FpXQ_div(GEN x, GEN y, GEN T, GEN p)
GEN FpXQ_inv(GEN x, GEN T, GEN p)
GEN FpXQ_invsafe(GEN x, GEN T, GEN p)
int FpXQ_issquare(GEN x, GEN T, GEN p)
GEN FpXQ_log(GEN a, GEN g, GEN ord, GEN T, GEN p)
GEN FpXQ_matrix_pow(GEN y, long n, long m, GEN P, GEN l)
GEN FpXQ_minpoly(GEN x, GEN T, GEN p)
GEN FpXQ_mul(GEN y, GEN x, GEN T, GEN p)
GEN FpXQ_norm(GEN x, GEN T, GEN p)
GEN FpXQ_order(GEN a, GEN ord, GEN T, GEN p)
GEN FpXQ_pow(GEN x, GEN n, GEN T, GEN p)
GEN FpXQ_powu(GEN x, ulong n, GEN T, GEN p)
GEN FpXQ_powers(GEN x, long l, GEN T, GEN p)
GEN FpXQ_red(GEN x, GEN T, GEN p)
GEN FpXQ_sqr(GEN y, GEN T, GEN p)
GEN FpXQ_sqrt(GEN a, GEN T, GEN p)
GEN FpXQ_sqrtn(GEN a, GEN n, GEN T, GEN p, GEN *zetan)
GEN FpXQ_trace(GEN x, GEN T, GEN p)
GEN FpXQC_to_mod(GEN z, GEN T, GEN p)
GEN FpXQM_autsum(GEN x, ulong n, GEN T, GEN p)
GEN FpXT_red(GEN z, GEN p)
GEN FpXV_prod(GEN V, GEN p)
GEN FpXV_red(GEN z, GEN p)
int Fq_issquare(GEN x, GEN T, GEN p)
long Fq_ispower(GEN x, GEN K, GEN T, GEN p)
GEN Fq_log(GEN a, GEN g, GEN ord, GEN T, GEN p)
GEN Fq_sqrtn(GEN a, GEN n, GEN T, GEN p, GEN *z)
GEN Fq_sqrt(GEN a, GEN T, GEN p)
long FqX_ispower(GEN f, ulong k, GEN T, GEN p, GEN *pt)
GEN FqV_inv(GEN x, GEN T, GEN p)
GEN Z_to_FpX(GEN a, GEN p, long v)
GEN gener_FpXQ(GEN T, GEN p, GEN *o)
GEN gener_FpXQ_local(GEN T, GEN p, GEN L)
long get_FpX_degree(GEN T)
GEN get_FpX_mod(GEN T)
long get_FpX_var(GEN T)
const bb_group *get_FpXQ_star(void **E, GEN T, GEN p)
GEN random_FpX(long d, long v, GEN p)
# FpX_factor.c
GEN F2x_factor(GEN f)
int F2x_is_irred(GEN f)
void F2xV_to_FlxV_inplace(GEN v)
void F2xV_to_ZXV_inplace(GEN v)
int Flx_is_irred(GEN f, ulong p)
GEN Flx_degfact(GEN f, ulong p)
GEN Flx_factor(GEN f, ulong p)
long Flx_nbfact(GEN z, ulong p)
long Flx_nbfact_Frobenius(GEN T, GEN XP, ulong p)
GEN Flx_nbfact_by_degree(GEN z, long *nb, ulong p)
long Flx_nbroots(GEN f, ulong p)
ulong Flx_oneroot(GEN f, ulong p)
ulong Flx_oneroot_split(GEN f, ulong p)
GEN Flx_roots(GEN f, ulong p)
GEN Flx_rootsff(GEN P, GEN T, ulong p)
void FlxV_to_ZXV_inplace(GEN v)
GEN FpX_degfact(GEN f, GEN p)
int FpX_is_irred(GEN f, GEN p)
int FpX_is_totally_split(GEN f, GEN p)
GEN FpX_factor(GEN f, GEN p)
GEN FpX_factorff(GEN P, GEN T, GEN p)
long FpX_nbfact(GEN f, GEN p)
long FpX_nbfact_Frobenius(GEN T, GEN XP, GEN p)
long FpX_nbroots(GEN f, GEN p)
GEN FpX_oneroot(GEN f, GEN p)
GEN FpX_roots(GEN f, GEN p)
GEN FpX_rootsff(GEN P, GEN T, GEN p)
GEN FpX_split_part(GEN f, GEN p)
GEN factcantor(GEN x, GEN p)
GEN factormod0(GEN f, GEN p, long flag)
GEN rootmod0(GEN f, GEN p, long flag)
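FpX_roots and FpX_factor act on FpX objects: t_POLs with t_INT coefficients reduced mod p. A small sketch (same pari_init preamble as above); f = x^3 - x splits completely over F_13:

    GEN p = utoi(13);
    GEN f = FpX_red(mkpoln(4, gen_1, gen_0, stoi(-1), gen_0), p); /* x^3 - x mod 13 */
    pari_printf("roots: %Ps\n", FpX_roots(f, p));          /* the three roots 0, 1, 12 */
    pari_printf("squarefree: %d\n", FpX_is_squarefree(f, p));
    pari_printf("factors: %Ps\n", FpX_factor(f, p));       /* factors with multiplicities */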
# FpXQX_factor.c
GEN F2xqX_roots(GEN x, GEN T)
GEN FlxqX_Frobenius(GEN S, GEN T, ulong p)
GEN FlxqXQ_halfFrobenius(GEN a, GEN S, GEN T, ulong p)
GEN FlxqX_roots(GEN S, GEN T, ulong p)
long FlxqX_nbroots(GEN f, GEN T, ulong p)
GEN FpXQX_Frobenius(GEN S, GEN T, GEN p)
GEN FpXQX_factor(GEN x, GEN T, GEN p)
long FpXQX_nbfact(GEN u, GEN T, GEN p)
long FpXQX_nbroots(GEN f, GEN T, GEN p)
GEN FpXQX_roots(GEN f, GEN T, GEN p)
GEN FpXQXQ_halfFrobenius(GEN a, GEN S, GEN T, GEN p)
long FqX_is_squarefree(GEN P, GEN T, GEN p)
long FqX_nbfact(GEN u, GEN T, GEN p)
long FqX_nbroots(GEN f, GEN T, GEN p)
GEN factorff(GEN f, GEN p, GEN a)
GEN polrootsff(GEN f, GEN p, GEN T)
# FpXX.c
GEN FpXQX_FpXQ_mul(GEN P, GEN U, GEN T, GEN p)
GEN FpXQX_FpXQXQV_eval(GEN P, GEN V, GEN S, GEN T, GEN p)
GEN FpXQX_FpXQXQ_eval(GEN P, GEN x, GEN S, GEN T, GEN p)
GEN FpXQX_div_by_X_x(GEN a, GEN x, GEN T, GEN p, GEN *pr)
GEN FpXQX_divrem(GEN x, GEN y, GEN T, GEN p, GEN *pr)
GEN FpXQX_digits(GEN x, GEN B, GEN T, GEN p)
GEN FpXQX_extgcd(GEN x, GEN y, GEN T, GEN p, GEN *ptu, GEN *ptv)
GEN FpXQX_fromdigits(GEN x, GEN B, GEN T, GEN p)
GEN FpXQX_gcd(GEN P, GEN Q, GEN T, GEN p)
GEN FpXQX_get_red(GEN S, GEN T, GEN p)
GEN FpXQX_halfgcd(GEN x, GEN y, GEN T, GEN p)
GEN FpXQX_invBarrett(GEN S, GEN T, GEN p)
GEN FpXQX_mul(GEN x, GEN y, GEN T, GEN p)
GEN FpXQX_powu(GEN x, ulong n, GEN T, GEN p)
GEN FpXQX_red(GEN z, GEN T, GEN p)
GEN FpXQX_rem(GEN x, GEN S, GEN T, GEN p)
GEN FpXQX_sqr(GEN x, GEN T, GEN p)
GEN FpXQXQ_div(GEN x, GEN y, GEN S, GEN T, GEN p)
GEN FpXQXQ_inv(GEN x, GEN S, GEN T, GEN p)
GEN FpXQXQ_invsafe(GEN x, GEN S, GEN T, GEN p)
GEN FpXQXQ_matrix_pow(GEN y, long n, long m, GEN S, GEN T, GEN p)
GEN FpXQXQ_mul(GEN x, GEN y, GEN S, GEN T, GEN p)
GEN FpXQXQ_pow(GEN x, GEN n, GEN S, GEN T, GEN p)
GEN FpXQXQ_powers(GEN x, long n, GEN S, GEN T, GEN p)
GEN FpXQXQ_sqr(GEN x, GEN S, GEN T, GEN p)
GEN FpXQXQV_autpow(GEN aut, long n, GEN S, GEN T, GEN p)
GEN FpXQXQV_autsum(GEN aut, long n, GEN S, GEN T, GEN p)
GEN FpXQXQV_auttrace(GEN aut, long n, GEN S, GEN T, GEN p)
GEN FpXQXV_prod(GEN V, GEN Tp, GEN p)
GEN FpXX_Fp_mul(GEN x, GEN y, GEN p)
GEN FpXX_FpX_mul(GEN x, GEN y, GEN p)
GEN FpXX_add(GEN x, GEN y, GEN p)
GEN FpXX_deriv(GEN P, GEN p)
GEN FpXX_mulu(GEN P, ulong u, GEN p)
GEN FpXX_neg(GEN x, GEN p)
GEN FpXX_red(GEN z, GEN p)
GEN FpXX_sub(GEN x, GEN y, GEN p)
GEN FpXY_FpXQ_evalx(GEN P, GEN x, GEN T, GEN p)
GEN FpXY_FpXQV_evalx(GEN P, GEN x, GEN T, GEN p)
GEN FpXY_eval(GEN Q, GEN y, GEN x, GEN p)
GEN FpXY_evalx(GEN Q, GEN x, GEN p)
GEN FpXY_evaly(GEN Q, GEN y, GEN p, long vy)
GEN FpXYQQ_pow(GEN x, GEN n, GEN S, GEN T, GEN p)
GEN Kronecker_to_FpXQX(GEN z, GEN pol, GEN p)
GEN Kronecker_to_ZXX(GEN z, long N, long v)
GEN ZXX_mul_Kronecker(GEN x, GEN y, long n)
GEN ZXX_sqr_Kronecker(GEN x, long n)
long get_FpXQX_degree(GEN T)
GEN get_FpXQX_mod(GEN T)
long get_FpXQX_var(GEN T)
GEN random_FpXQX(long d1, long v, GEN T, GEN p)
# FpV.c
GEN F2m_F2c_mul(GEN x, GEN y)
GEN F2m_mul(GEN x, GEN y)
GEN F2m_powu(GEN x, ulong n)
GEN Flc_to_mod(GEN z, ulong pp)
GEN Flm_Fl_add(GEN x, ulong y, ulong p)
GEN Flm_Fl_mul(GEN y, ulong x, ulong p)
void Flm_Fl_mul_inplace(GEN y, ulong x, ulong p)
GEN Flm_Flc_mul(GEN x, GEN y, ulong p)
GEN Flm_Flc_mul_pre(GEN x, GEN y, ulong p, ulong pi)
GEN Flm_add(GEN x, GEN y, ulong p)
GEN Flm_center(GEN z, ulong p, ulong ps2)
GEN Flm_mul(GEN x, GEN y, ulong p)
GEN Flm_neg(GEN y, ulong p)
GEN Flm_powu(GEN x, ulong n, ulong p)
GEN Flm_sub(GEN x, GEN y, ulong p)
GEN Flm_to_mod(GEN z, ulong pp)
GEN Flm_transpose(GEN x)
GEN Flv_Fl_div(GEN x, ulong y, ulong p)
void Flv_Fl_div_inplace(GEN x, ulong y, ulong p)
GEN Flv_Fl_mul(GEN x, ulong y, ulong p)
void Flv_Fl_mul_inplace(GEN x, ulong y, ulong p)
void Flv_Fl_mul_part_inplace(GEN x, ulong y, ulong p, long l)
GEN Flv_add(GEN x, GEN y, ulong p)
void Flv_add_inplace(GEN x, GEN y, ulong p)
GEN Flv_center(GEN z, ulong p, ulong ps2)
ulong Flv_dotproduct(GEN x, GEN y, ulong p)
ulong Flv_dotproduct_pre(GEN x, GEN y, ulong p, ulong pi)
GEN Flv_neg(GEN v, ulong p)
void Flv_neg_inplace(GEN v, ulong p)
GEN Flv_sub(GEN x, GEN y, ulong p)
void Flv_sub_inplace(GEN x, GEN y, ulong p)
ulong Flv_sum(GEN x, ulong p)
ulong Flx_dotproduct(GEN x, GEN y, ulong p)
GEN Fp_to_mod(GEN z, GEN p)
GEN FpC_FpV_mul(GEN x, GEN y, GEN p)
GEN FpC_Fp_mul(GEN x, GEN y, GEN p)
GEN FpC_center(GEN z, GEN p, GEN pov2)
void FpC_center_inplace(GEN z, GEN p, GEN pov2)
GEN FpC_red(GEN z, GEN p)
GEN FpC_to_mod(GEN z, GEN p)
GEN FpM_add(GEN x, GEN y, GEN p)
GEN FpM_Fp_mul(GEN X, GEN c, GEN p)
GEN FpM_FpC_mul(GEN x, GEN y, GEN p)
GEN FpM_FpC_mul_FpX(GEN x, GEN y, GEN p, long v)
GEN FpM_center(GEN z, GEN p, GEN pov2)
void FpM_center_inplace(GEN z, GEN p, GEN pov2)
GEN FpM_mul(GEN x, GEN y, GEN p)
GEN FpM_powu(GEN x, ulong n, GEN p)
GEN FpM_red(GEN z, GEN p)
GEN FpM_sub(GEN x, GEN y, GEN p)
GEN FpM_to_mod(GEN z, GEN p)
GEN FpMs_FpC_mul(GEN M, GEN B, GEN p)
GEN FpMs_FpCs_solve(GEN M, GEN B, long nbrow, GEN p)
GEN FpMs_FpCs_solve_safe(GEN M, GEN A, long nbrow, GEN p)
GEN FpMs_leftkernel_elt(GEN M, long nbrow, GEN p)
GEN FpC_add(GEN x, GEN y, GEN p)
GEN FpC_sub(GEN x, GEN y, GEN p)
GEN FpV_FpMs_mul(GEN B, GEN M, GEN p)
GEN FpV_add(GEN x, GEN y, GEN p)
GEN FpV_sub(GEN x, GEN y, GEN p)
GEN FpV_dotproduct(GEN x, GEN y, GEN p)
GEN FpV_dotsquare(GEN x, GEN p)
GEN FpV_red(GEN z, GEN p)
GEN FpV_to_mod(GEN z, GEN p)
GEN FpVV_to_mod(GEN z, GEN p)
GEN FpX_to_mod(GEN z, GEN p)
GEN ZV_zMs_mul(GEN B, GEN M)
GEN ZpMs_ZpCs_solve(GEN M, GEN B, long nbrow, GEN p, long e)
GEN gen_FpM_Wiedemann(void *E, GEN (*f)(void*, GEN), GEN B, GEN p)
GEN gen_ZpM_Dixon(void *E, GEN (*f)(void*, GEN), GEN B, GEN p, long e)
GEN gen_matid(long n, void *E, const bb_field *S)
GEN matid_F2m(long n)
GEN matid_Flm(long n)
GEN matid_F2xqM(long n, GEN T)
GEN matid_FlxqM(long n, GEN T, ulong p)
GEN scalar_Flm(long s, long n)
GEN zCs_to_ZC(GEN C, long nbrow)
GEN zMs_to_ZM(GEN M, long nbrow)
GEN zMs_ZC_mul(GEN M, GEN B)
GEN ZMV_to_FlmV(GEN z, ulong m)
# Hensel.c
GEN Z2_sqrt(GEN x, long e)
GEN Zp_sqrt(GEN x, GEN p, long e)
GEN Zp_sqrtlift(GEN b, GEN a, GEN p, long e)
GEN Zp_sqrtnlift(GEN b, GEN n, GEN a, GEN p, long e)
GEN ZpX_Frobenius(GEN T, GEN p, long e)
GEN ZpX_ZpXQ_liftroot(GEN P, GEN S, GEN T, GEN p, long e)
GEN ZpX_ZpXQ_liftroot_ea(GEN P, GEN S, GEN T, GEN p, long n, void *E, int early(void *E, GEN x, GEN q))
GEN ZpX_liftfact(GEN pol, GEN Q, GEN T, GEN p, long e, GEN pe)
GEN ZpX_liftroot(GEN f, GEN a, GEN p, long e)
GEN ZpX_liftroots(GEN f, GEN S, GEN p, long e)
GEN ZpX_roots(GEN f, GEN p, long e)
GEN ZpXQ_inv(GEN a, GEN T, GEN p, long e)
GEN ZpXQ_invlift(GEN b, GEN a, GEN T, GEN p, long e)
GEN ZpXQ_log(GEN a, GEN T, GEN p, long N)
GEN ZpXQ_sqrt(GEN a, GEN T, GEN p, long e)
GEN ZpXQ_sqrtnlift(GEN b, GEN n, GEN a, GEN T, GEN p, long e)
GEN ZpXQM_prodFrobenius(GEN M, GEN T, GEN p, long e)
GEN ZpXQX_liftroot(GEN f, GEN a, GEN T, GEN p, long e)
GEN ZpXQX_liftroot_vald(GEN f, GEN a, long v, GEN T, GEN p, long e)
GEN gen_ZpX_Dixon(GEN F, GEN V, GEN q, GEN p, long N, void *E, GEN lin(void *E, GEN F, GEN d, GEN q), GEN invl(void *E, GEN d))
GEN gen_ZpX_Newton(GEN x, GEN p, long n, void *E, GEN eval(void *E, GEN f, GEN q), GEN invd(void *E, GEN V, GEN v, GEN q, long M))
GEN polhensellift(GEN pol, GEN fct, GEN p, long exp)
ulong quadratic_prec_mask(long n)
# QX_factor.c
GEN QX_factor(GEN x)
GEN ZX_factor(GEN x)
long ZX_is_irred(GEN x)
GEN ZX_squff(GEN f, GEN *ex)
GEN polcyclofactors(GEN f)
long poliscyclo(GEN f)
long poliscycloprod(GEN f)
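ZX_factor factors a t_POL with integer coefficients and returns the usual two-column factorization matrix; the cyclotomic predicates above then report on its structure. A sketch (same preamble as before) with x^4 - 1, which is the product of the cyclotomic polynomials Phi_1, Phi_2 and Phi_4:

    GEN T = mkpoln(5, gen_1, gen_0, gen_0, gen_0, stoi(-1)); /* x^4 - 1 */
    pari_printf("%Ps\n", ZX_factor(T));        /* irreducible factors and exponents */
    pari_printf("%ld\n", poliscycloprod(T));   /* 1: a product of cyclotomics */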
# RgV.c
GEN RgC_Rg_add(GEN x, GEN y)
GEN RgC_Rg_div(GEN x, GEN y)
GEN RgC_Rg_mul(GEN x, GEN y)
GEN RgC_RgM_mul(GEN x, GEN y)
GEN RgC_RgV_mul(GEN x, GEN y)
GEN RgC_add(GEN x, GEN y)
long RgC_is_ei(GEN x)
GEN RgC_neg(GEN x)
GEN RgC_sub(GEN x, GEN y)
GEN RgM_Rg_add(GEN x, GEN y)
GEN RgM_Rg_add_shallow(GEN x, GEN y)
GEN RgM_Rg_div(GEN x, GEN y)
GEN RgM_Rg_mul(GEN x, GEN y)
GEN RgM_Rg_sub(GEN x, GEN y)
GEN RgM_Rg_sub_shallow(GEN x, GEN y)
GEN RgM_RgC_mul(GEN x, GEN y)
GEN RgM_RgV_mul(GEN x, GEN y)
GEN RgM_add(GEN x, GEN y)
GEN RgM_det_triangular(GEN x)
int RgM_is_ZM(GEN x)
int RgM_isdiagonal(GEN x)
int RgM_isidentity(GEN x)
int RgM_isscalar(GEN x, GEN s)
GEN RgM_mul(GEN x, GEN y)
GEN RgM_multosym(GEN x, GEN y)
GEN RgM_neg(GEN x)
GEN RgM_powers(GEN x, long l)
GEN RgM_sqr(GEN x)
GEN RgM_sub(GEN x, GEN y)
GEN RgM_sumcol(GEN A)
GEN RgM_transmul(GEN x, GEN y)
GEN RgM_transmultosym(GEN x, GEN y)
GEN RgM_zc_mul(GEN x, GEN y)
GEN RgM_zm_mul(GEN x, GEN y)
GEN RgMrow_RgC_mul(GEN x, GEN y, long i)
GEN RgV_RgM_mul(GEN x, GEN y)
GEN RgV_RgC_mul(GEN x, GEN y)
GEN RgV_Rg_mul(GEN x, GEN y)
GEN RgV_add(GEN x, GEN y)
GEN RgV_dotproduct(GEN x, GEN y)
GEN RgV_dotsquare(GEN x)
int RgV_is_ZMV(GEN V)
long RgV_isin(GEN v, GEN x)
GEN RgV_kill0(GEN v)
GEN RgV_neg(GEN x)
GEN RgV_prod(GEN v)
GEN RgV_sub(GEN x, GEN y)
GEN RgV_sum(GEN v)
GEN RgV_sumpart(GEN v, long n)
GEN RgV_sumpart2(GEN v, long m, long n)
GEN RgV_zc_mul(GEN x, GEN y)
GEN RgV_zm_mul(GEN x, GEN y)
GEN RgX_RgM_eval(GEN x, GEN y)
GEN RgX_RgMV_eval(GEN x, GEN y)
int isdiagonal(GEN x)
GEN matid(long n)
GEN scalarcol(GEN x, long n)
GEN scalarcol_shallow(GEN x, long n)
GEN scalarmat(GEN x, long n)
GEN scalarmat_shallow(GEN x, long n)
GEN scalarmat_s(long x, long n)
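The RgM/RgV entries are the generic matrix and vector routines, working over any scalar type. PARI matrices are stored by columns, so mkmat2(mkcol2(...), mkcol2(...)) builds a 2 x 2 matrix column by column. A sketch:

    GEN A = mkmat2(mkcol2(gen_1, gen_0), mkcol2(gen_1, gen_1)); /* [1,1; 0,1] */
    GEN B = RgM_mul(A, A);                                      /* [1,2; 0,1] */
    pari_printf("A^2 = %Ps, identity? %d\n", B, RgM_isidentity(B));
    pari_printf("Id = %Ps\n", matid(2));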
# RgX.c
GEN Kronecker_to_mod(GEN z, GEN pol)
GEN QX_ZXQV_eval(GEN P, GEN V, GEN dV)
GEN QXQ_charpoly(GEN A, GEN T, long v)
GEN QXQ_powers(GEN a, long n, GEN T)
GEN QXQX_to_mod_shallow(GEN z, GEN T)
GEN QXQV_to_mod(GEN V, GEN T)
GEN QXQXV_to_mod(GEN V, GEN T)
GEN QXV_QXQ_eval(GEN v, GEN a, GEN T)
GEN QXX_QXQ_eval(GEN v, GEN a, GEN T)
GEN Rg_RgX_sub(GEN x, GEN y)
GEN Rg_to_RgC(GEN x, long N)
GEN RgM_to_RgXV(GEN x, long v)
GEN RgM_to_RgXX(GEN x, long v, long w)
GEN RgV_to_RgX(GEN x, long v)
GEN RgV_to_RgM(GEN v, long n)
GEN RgV_to_RgX_reverse(GEN x, long v)
GEN RgX_RgXQ_eval(GEN f, GEN x, GEN T)
GEN RgX_RgXQV_eval(GEN P, GEN V, GEN T)
GEN RgX_RgXn_eval(GEN Q, GEN x, long n)
GEN RgX_RgXnV_eval(GEN Q, GEN x, long n)
GEN RgX_Rg_add(GEN y, GEN x)
GEN RgX_Rg_add_shallow(GEN y, GEN x)
GEN RgX_Rg_div(GEN y, GEN x)
GEN RgX_Rg_divexact(GEN x, GEN y)
GEN RgX_Rg_eval_bk(GEN Q, GEN x)
GEN RgX_Rg_mul(GEN y, GEN x)
GEN RgX_Rg_sub(GEN y, GEN x)
GEN RgX_RgV_eval(GEN Q, GEN x)
GEN RgX_add(GEN x, GEN y)
GEN RgX_blocks(GEN P, long n, long m)
GEN RgX_deflate(GEN x0, long d)
GEN RgX_deriv(GEN x)
GEN RgX_div_by_X_x(GEN a, GEN x, GEN *r)
GEN RgX_divrem(GEN x, GEN y, GEN *r)
GEN RgX_divs(GEN y, long x)
long RgX_equal(GEN x, GEN y)
void RgX_even_odd(GEN p, GEN *pe, GEN *po)
GEN RgX_get_0(GEN x)
GEN RgX_get_1(GEN x)
GEN RgX_inflate(GEN x0, long d)
GEN RgX_mul(GEN x, GEN y)
GEN RgX_mul_normalized(GEN A, long a, GEN B, long b)
GEN RgX_mulXn(GEN x, long d)
GEN RgX_muls(GEN y, long x)
GEN RgX_mulspec(GEN a, GEN b, long na, long nb)
GEN RgX_neg(GEN x)
GEN RgX_normalize(GEN x)
GEN RgX_pseudodivrem(GEN x, GEN y, GEN *ptr)
GEN RgX_pseudorem(GEN x, GEN y)
GEN RgX_recip(GEN x)
GEN RgX_recip_shallow(GEN x)
GEN RgX_renormalize_lg(GEN x, long lx)
GEN RgX_rescale(GEN P, GEN h)
GEN RgX_rotate_shallow(GEN P, long k, long p)
GEN RgX_shift(GEN a, long n)
GEN RgX_shift_shallow(GEN x, long n)
GEN RgX_splitting(GEN p, long k)
GEN RgX_sqr(GEN x)
GEN RgX_sqrspec(GEN a, long na)
GEN RgX_sub(GEN x, GEN y)
GEN RgX_to_RgC(GEN x, long N)
GEN RgX_translate(GEN P, GEN c)
GEN RgX_unscale(GEN P, GEN h)
GEN RgXQ_matrix_pow(GEN y, long n, long m, GEN P)
GEN RgXQ_norm(GEN x, GEN T)
GEN RgXQ_pow(GEN x, GEN n, GEN T)
GEN RgXQ_powers(GEN x, long l, GEN T)
GEN RgXQ_powu(GEN x, ulong n, GEN T)
GEN RgXQC_red(GEN P, GEN T)
GEN RgXQV_RgXQ_mul(GEN v, GEN x, GEN T)
GEN RgXQV_red(GEN P, GEN T)
GEN RgXQX_RgXQ_mul(GEN x, GEN y, GEN T)
GEN RgXQX_divrem(GEN x, GEN y, GEN T, GEN *r)
GEN RgXQX_mul(GEN x, GEN y, GEN T)
GEN RgXQX_pseudodivrem(GEN x, GEN y, GEN T, GEN *ptr)
GEN RgXQX_pseudorem(GEN x, GEN y, GEN T)
GEN RgXQX_red(GEN P, GEN T)
GEN RgXQX_sqr(GEN x, GEN T)
GEN RgXQX_translate(GEN P, GEN c, GEN T)
GEN RgXV_RgV_eval(GEN Q, GEN x)
GEN RgXV_to_RgM(GEN v, long n)
GEN RgXV_unscale(GEN v, GEN h)
GEN RgXX_to_RgM(GEN v, long n)
long RgXY_degreex(GEN bpol)
GEN RgXY_swap(GEN x, long n, long w)
GEN RgXY_swapspec(GEN x, long n, long w, long nx)
GEN RgXn_eval(GEN Q, GEN x, long n)
GEN RgXn_exp(GEN f, long e)
GEN RgXn_inv(GEN f, long e)
GEN RgXn_mul(GEN f, GEN g, long n)
GEN RgXn_powers(GEN f, long m, long n)
GEN RgXn_red_shallow(GEN a, long n)
GEN RgXn_reverse(GEN f, long e)
GEN RgXn_sqr(GEN f, long n)
GEN RgXnV_red_shallow(GEN P, long n)
GEN RgXn_powu(GEN x, ulong m, long n)
GEN RgXn_powu_i(GEN x, ulong m, long n)
GEN ZX_translate(GEN P, GEN c)
GEN ZX_unscale2n(GEN P, long n)
GEN ZX_unscale(GEN P, GEN h)
GEN ZX_unscale_div(GEN P, GEN h)
int ZXQX_dvd(GEN x, GEN y, GEN T)
long brent_kung_optpow(long d, long n, long m)
GEN gen_bkeval(GEN Q, long d, GEN x, int use_sqr, void *E, const bb_algebra *ff, GEN cmul(void *E, GEN P, long a, GEN x))
GEN gen_bkeval_powers(GEN P, long d, GEN V, void *E, const bb_algebra *ff, GEN cmul(void *E, GEN P, long a, GEN x))
const bb_algebra * get_Rg_algebra()
# ZG.c
void ZGC_G_mul_inplace(GEN v, GEN x)
void ZGC_add_inplace(GEN x, GEN y)
GEN ZGC_add_sparse(GEN x, GEN y)
void ZGM_add_inplace(GEN x, GEN y)
GEN ZGM_add_sparse(GEN x, GEN y)
GEN G_ZGC_mul(GEN x, GEN v)
GEN G_ZG_mul(GEN x, GEN y)
GEN ZGC_G_mul(GEN v, GEN x)
GEN ZGC_Z_mul(GEN v, GEN x)
GEN ZG_G_mul(GEN x, GEN y)
GEN ZG_Z_mul(GEN x, GEN c)
GEN ZG_add(GEN x, GEN y)
GEN ZG_mul(GEN x, GEN y)
GEN ZG_neg(GEN x)
GEN ZG_normalize(GEN x)
GEN ZG_sub(GEN x, GEN y)
# ZV.c
void Flc_lincomb1_inplace(GEN X, GEN Y, ulong v, ulong q)
void RgM_check_ZM(GEN A, const char *s)
void RgV_check_ZV(GEN A, const char *s)
GEN ZV_zc_mul(GEN x, GEN y)
GEN ZC_ZV_mul(GEN x, GEN y)
GEN ZC_Z_add(GEN x, GEN y)
GEN ZC_Z_divexact(GEN X, GEN c)
GEN ZC_Z_mul(GEN X, GEN c)
GEN ZC_Z_sub(GEN x, GEN y)
GEN ZC_add(GEN x, GEN y)
GEN ZC_copy(GEN x)
GEN ZC_hnfremdiv(GEN x, GEN y, GEN *Q)
long ZC_is_ei(GEN x)
GEN ZC_lincomb(GEN u, GEN v, GEN X, GEN Y)
void ZC_lincomb1_inplace(GEN X, GEN Y, GEN v)
GEN ZC_neg(GEN M)
GEN ZC_reducemodlll(GEN x, GEN y)
GEN ZC_reducemodmatrix(GEN v, GEN y)
GEN ZC_sub(GEN x, GEN y)
GEN ZC_z_mul(GEN X, long c)
GEN ZM_ZC_mul(GEN x, GEN y)
GEN ZM_Z_div(GEN X, GEN c)
GEN ZM_Z_divexact(GEN X, GEN c)
GEN ZM_Z_mul(GEN X, GEN c)
GEN ZM_add(GEN x, GEN y)
GEN ZM_copy(GEN x)
GEN ZM_det_triangular(GEN mat)
GEN ZM_diag_mul(GEN m, GEN d)
int ZM_equal(GEN A, GEN B)
GEN ZM_hnfdivrem(GEN x, GEN y, GEN *Q)
int ZM_ishnf(GEN x)
int ZM_isidentity(GEN x)
int ZM_isscalar(GEN x, GEN s)
long ZM_max_lg(GEN x)
GEN ZM_mul(GEN x, GEN y)
GEN ZM_mul_diag(GEN m, GEN d)
GEN ZM_multosym(GEN x, GEN y)
GEN ZM_neg(GEN x)
GEN ZM_nm_mul(GEN x, GEN y)
GEN ZM_pow(GEN x, GEN n)
GEN ZM_powu(GEN x, ulong n)
GEN ZM_reducemodlll(GEN x, GEN y)
GEN ZM_reducemodmatrix(GEN v, GEN y)
GEN ZM_sqr(GEN x)
GEN ZM_sub(GEN x, GEN y)
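For integer entries, the specialized ZM routines just listed are the counterparts of the generic RgM ones. A sketch powering the same unipotent matrix with ZM_pow (same conventions as above):

    GEN A = mkmat2(mkcol2(gen_1, gen_0), mkcol2(gen_1, gen_1)); /* [1,1; 0,1] */
    pari_printf("A^5 = %Ps\n", ZM_pow(A, utoi(5)));             /* [1,5; 0,1] */
    pari_printf("A*A = %Ps\n", ZM_mul(A, A));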
GEN ZM_supnorm(GEN x) GEN ZM_to_Flm(GEN x, ulong p) GEN ZM_to_zm(GEN z) GEN ZM_transmul(GEN x, GEN y) GEN ZM_transmultosym(GEN x, GEN y) GEN ZMV_to_zmV(GEN z) void ZM_togglesign(GEN M) GEN ZM_zc_mul(GEN x, GEN y) GEN ZM_zm_mul(GEN x, GEN y) GEN ZMrow_ZC_mul(GEN x, GEN y, long i) GEN ZV_ZM_mul(GEN x, GEN y) int ZV_abscmp(GEN x, GEN y) int ZV_cmp(GEN x, GEN y) GEN ZV_content(GEN x) GEN ZV_dotproduct(GEN x, GEN y) GEN ZV_dotsquare(GEN x) int ZV_equal(GEN V, GEN W) int ZV_equal0(GEN V) long ZV_max_lg(GEN x) void ZV_neg_inplace(GEN M) GEN ZV_prod(GEN v) GEN ZV_sum(GEN v) GEN ZV_to_Flv(GEN x, ulong p) GEN ZV_to_nv(GEN z) void ZV_togglesign(GEN M) GEN gram_matrix(GEN M) GEN nm_Z_mul(GEN X, GEN c) GEN zm_mul(GEN x, GEN y) GEN zm_to_Flm(GEN z, ulong p) GEN zm_to_ZM(GEN z) GEN zm_zc_mul(GEN x, GEN y) GEN zmV_to_ZMV(GEN z) long zv_content(GEN x) long zv_dotproduct(GEN x, GEN y) int zv_equal(GEN V, GEN W) int zv_equal0(GEN V) GEN zv_neg(GEN x) GEN zv_neg_inplace(GEN M) long zv_prod(GEN v) GEN zv_prod_Z(GEN v) long zv_sum(GEN v) GEN zv_to_Flv(GEN z, ulong p) GEN zv_z_mul(GEN v, long n) GEN zv_ZM_mul(GEN x, GEN y) int zvV_equal(GEN V, GEN W) # ZX.c void RgX_check_QX(GEN x, const char *s) void RgX_check_ZX(GEN x, const char *s) void RgX_check_ZXX(GEN x, const char *s) GEN Z_ZX_sub(GEN x, GEN y) GEN ZX_Z_add(GEN y, GEN x) GEN ZX_Z_add_shallow(GEN y, GEN x) GEN ZX_Z_divexact(GEN y, GEN x) GEN ZX_Z_mul(GEN y, GEN x) GEN ZX_Z_sub(GEN y, GEN x) GEN ZX_add(GEN x, GEN y) GEN ZX_copy(GEN x) GEN ZX_deriv(GEN x) GEN ZX_div_by_X_1(GEN a, GEN *r) int ZX_equal(GEN V, GEN W) GEN ZX_eval1(GEN x) long ZX_max_lg(GEN x) GEN ZX_mod_Xnm1(GEN T, ulong n) GEN ZX_mul(GEN x, GEN y) GEN ZX_mulspec(GEN a, GEN b, long na, long nb) GEN ZX_mulu(GEN y, ulong x) GEN ZX_neg(GEN x) GEN ZX_rem(GEN x, GEN y) GEN ZX_remi2n(GEN y, long n) GEN ZX_rescale2n(GEN P, long n) GEN ZX_rescale(GEN P, GEN h) GEN ZX_rescale_lt(GEN P) GEN ZX_shifti(GEN x, long n) GEN ZX_sqr(GEN x) GEN ZX_sqrspec(GEN a, long na) GEN ZX_sub(GEN x, GEN y) long ZX_val(GEN x) long ZX_valrem(GEN x, GEN *Z) GEN ZXT_remi2n(GEN z, long n) GEN ZXV_Z_mul(GEN y, GEN x) GEN ZXV_dotproduct(GEN V, GEN W) int ZXV_equal(GEN V, GEN W) GEN ZXV_remi2n(GEN x, long n) GEN ZXX_Z_divexact(GEN y, GEN x) GEN ZXX_Z_mul(GEN y, GEN x) GEN ZXX_Z_add_shallow(GEN x, GEN y) long ZXX_max_lg(GEN x) GEN ZXX_renormalize(GEN x, long lx) GEN ZXX_to_Kronecker(GEN P, long n) GEN ZXX_to_Kronecker_spec(GEN P, long lP, long n) GEN scalar_ZX(GEN x, long v) GEN scalar_ZX_shallow(GEN x, long v) GEN zx_to_ZX(GEN z) # algebras.c GEN alg_centralproj(GEN al, GEN z, int maps) GEN alg_change_overorder_shallow(GEN al, GEN ord) GEN alg_complete(GEN rnf, GEN aut, GEN hi, GEN hf, long maxord) GEN alg_csa_table(GEN nf, GEN mt, long v, long maxord) GEN alg_cyclic(GEN rnf, GEN aut, GEN b, long maxord) GEN alg_decomposition(GEN al) long alg_get_absdim(GEN al) long algabsdim(GEN al) GEN alg_get_abssplitting(GEN al) GEN alg_get_aut(GEN al) GEN algaut(GEN al) GEN alg_get_auts(GEN al) GEN alg_get_b(GEN al) GEN algb(GEN al) GEN algcenter(GEN al) GEN alg_get_center(GEN al) GEN alg_get_char(GEN al) GEN algchar(GEN al) long alg_get_degree(GEN al) long algdegree(GEN al) long alg_get_dim(GEN al) long algdim(GEN al) GEN alg_get_hasse_f(GEN al) GEN alghassef(GEN al) GEN alg_get_hasse_i(GEN al) GEN alghassei(GEN al) GEN alg_get_invbasis(GEN al) GEN alginvbasis(GEN al) GEN alg_get_multable(GEN al) GEN alg_get_basis(GEN al) GEN algbasis(GEN al) GEN alg_get_relmultable(GEN al) GEN algrelmultable(GEN al) GEN alg_get_splitpol(GEN al) GEN 
alg_get_splittingfield(GEN al) GEN algsplittingfield(GEN al) GEN alg_get_splittingbasis(GEN al) GEN alg_get_splittingbasisinv(GEN al) GEN alg_get_splittingdata(GEN al) GEN algsplittingdata(GEN al) GEN alg_get_tracebasis(GEN al) GEN alg_hasse(GEN nf, long n, GEN hi, GEN hf, long var, long flag) GEN alg_hilbert(GEN nf, GEN a, GEN b, long v, long flag) GEN alg_matrix(GEN nf, long n, long v, GEN L, long flag) long alg_model(GEN al, GEN x) GEN alg_ordermodp(GEN al, GEN p) GEN alg_quotient(GEN al, GEN I, int maps) GEN algradical(GEN al) GEN algsimpledec(GEN al, int maps) GEN algsubalg(GEN al, GEN basis) long alg_type(GEN al) GEN algadd(GEN al, GEN x, GEN y) GEN algalgtobasis(GEN al, GEN x) GEN algbasischarpoly(GEN al, GEN x, long v) GEN algbasismul(GEN al, GEN x, GEN y) GEN algbasismultable(GEN al, GEN x) GEN algbasismultable_Flm(GEN mt, GEN x, ulong m) GEN algbasistoalg(GEN al, GEN x) GEN algcharpoly(GEN al, GEN x, long v) GEN algdisc(GEN al) GEN algdivl(GEN al, GEN x, GEN y) GEN algdivr(GEN al, GEN x, GEN y) GEN alggroup(GEN gal, GEN p) GEN alghasse(GEN al, GEN pl) GEN alginit(GEN A, GEN B, long v, long flag) long algindex(GEN al, GEN pl) GEN alginv(GEN al, GEN x) int algisassociative(GEN mt0, GEN p) int algiscommutative(GEN al) int algisdivision(GEN al, GEN pl) int algisramified(GEN al, GEN pl) int algissemisimple(GEN al) int algissimple(GEN al, long ss) int algissplit(GEN al, GEN pl) int algisdivl(GEN al, GEN x, GEN y, GEN* ptz) int algisinv(GEN al, GEN x, GEN* ptix) GEN algleftordermodp(GEN al, GEN Ip, GEN p) GEN algmul(GEN al, GEN x, GEN y) GEN algmultable(GEN al) GEN alglathnf(GEN al, GEN m) GEN algleftmultable(GEN al, GEN x) GEN algneg(GEN al, GEN x) GEN algnorm(GEN al, GEN x) GEN algpoleval(GEN al, GEN pol, GEN x) GEN algpow(GEN al, GEN x, GEN n) GEN algprimesubalg(GEN al) GEN algramifiedplaces(GEN al) GEN algrandom(GEN al, GEN b) GEN algsplittingmatrix(GEN al, GEN x) GEN algsqr(GEN al, GEN x) GEN algsub(GEN al, GEN x, GEN y) GEN algtableinit(GEN mt, GEN p) GEN algtensor(GEN al1, GEN al2, long maxord) GEN algtrace(GEN al, GEN x) long algtype(GEN al) GEN bnfgwgeneric(GEN bnf, GEN Lpr, GEN Ld, GEN pl, long var) GEN bnrgwsearch(GEN bnr, GEN Lpr, GEN Ld, GEN pl) void checkalg(GEN x) void checkhasse(GEN nf, GEN hi, GEN hf, long n) long cyclicrelfrob(GEN rnf, GEN auts, GEN pr) GEN hassecoprime(GEN hi, GEN hf, long n) GEN hassedown(GEN nf, long n, GEN hi, GEN hf) GEN hassewedderburn(GEN hi, GEN hf, long n) long localhasse(GEN rnf, GEN cnd, GEN pl, GEN auts, GEN b, long k) GEN nfgrunwaldwang(GEN nf0, GEN Lpr, GEN Ld, GEN pl, long var) GEN nfgwkummer(GEN nf, GEN Lpr, GEN Ld, GEN pl, long var) # alglin1.c GEN F2m_F2c_gauss(GEN a, GEN b) GEN F2m_F2c_invimage(GEN A, GEN y) GEN F2m_deplin(GEN x) ulong F2m_det(GEN x) ulong F2m_det_sp(GEN x) GEN F2m_gauss(GEN a, GEN b) GEN F2m_image(GEN x) GEN F2m_indexrank(GEN x) GEN F2m_inv(GEN x) GEN F2m_invimage(GEN A, GEN B) GEN F2m_ker(GEN x) GEN F2m_ker_sp(GEN x, long deplin) long F2m_rank(GEN x) GEN F2m_suppl(GEN x) GEN F2xqM_F2xqC_mul(GEN a, GEN b, GEN T) GEN F2xqM_det(GEN a, GEN T) GEN F2xqM_ker(GEN x, GEN T) GEN F2xqM_image(GEN x, GEN T) GEN F2xqM_inv(GEN a, GEN T) GEN F2xqM_mul(GEN a, GEN b, GEN T) long F2xqM_rank(GEN x, GEN T) GEN Flm_Flc_gauss(GEN a, GEN b, ulong p) GEN Flm_Flc_invimage(GEN mat, GEN y, ulong p) GEN Flm_deplin(GEN x, ulong p) ulong Flm_det(GEN x, ulong p) ulong Flm_det_sp(GEN x, ulong p) GEN Flm_gauss(GEN a, GEN b, ulong p) GEN Flm_image(GEN x, ulong p) GEN Flm_invimage(GEN m, GEN v, ulong p) GEN Flm_indexrank(GEN x, ulong p) GEN 
Flm_intersect(GEN x, GEN y, ulong p) GEN Flm_inv(GEN x, ulong p) GEN Flm_ker(GEN x, ulong p) GEN Flm_ker_sp(GEN x, ulong p, long deplin) long Flm_rank(GEN x, ulong p) GEN Flm_suppl(GEN x, ulong p) GEN FlxqM_FlxqC_gauss(GEN a, GEN b, GEN T, ulong p) GEN FlxqM_FlxqC_mul(GEN a, GEN b, GEN T, ulong p) GEN FlxqM_det(GEN a, GEN T, ulong p) GEN FlxqM_gauss(GEN a, GEN b, GEN T, ulong p) GEN FlxqM_ker(GEN x, GEN T, ulong p) GEN FlxqM_image(GEN x, GEN T, ulong p) GEN FlxqM_inv(GEN x, GEN T, ulong p) GEN FlxqM_mul(GEN a, GEN b, GEN T, ulong p) long FlxqM_rank(GEN x, GEN T, ulong p) GEN FpM_FpC_gauss(GEN a, GEN b, GEN p) GEN FpM_FpC_invimage(GEN m, GEN v, GEN p) GEN FpM_deplin(GEN x, GEN p) GEN FpM_det(GEN x, GEN p) GEN FpM_gauss(GEN a, GEN b, GEN p) GEN FpM_image(GEN x, GEN p) GEN FpM_indexrank(GEN x, GEN p) GEN FpM_intersect(GEN x, GEN y, GEN p) GEN FpM_inv(GEN x, GEN p) GEN FpM_invimage(GEN m, GEN v, GEN p) GEN FpM_ker(GEN x, GEN p) long FpM_rank(GEN x, GEN p) GEN FpM_suppl(GEN x, GEN p) GEN FqM_FqC_gauss(GEN a, GEN b, GEN T, GEN p) GEN FqM_FqC_mul(GEN a, GEN b, GEN T, GEN p) GEN FqM_deplin(GEN x, GEN T, GEN p) GEN FqM_det(GEN x, GEN T, GEN p) GEN FqM_gauss(GEN a, GEN b, GEN T, GEN p) GEN FqM_ker(GEN x, GEN T, GEN p) GEN FqM_image(GEN x, GEN T, GEN p) GEN FqM_inv(GEN x, GEN T, GEN p) GEN FqM_mul(GEN a, GEN b, GEN T, GEN p) long FqM_rank(GEN a, GEN T, GEN p) GEN FqM_suppl(GEN x, GEN T, GEN p) GEN QM_inv(GEN M, GEN dM) GEN RgM_Fp_init(GEN a, GEN p, ulong *pp) GEN RgM_RgC_invimage(GEN A, GEN B) GEN RgM_diagonal(GEN m) GEN RgM_diagonal_shallow(GEN m) GEN RgM_Hadamard(GEN a) GEN RgM_inv_upper(GEN a) GEN RgM_invimage(GEN A, GEN B) GEN RgM_solve(GEN a, GEN b) GEN RgM_solve_realimag(GEN x, GEN y) void RgMs_structelim(GEN M, long nbrow, GEN A, GEN *p_col, GEN *p_lin) GEN ZM_det(GEN a) GEN ZM_detmult(GEN A) GEN ZM_gauss(GEN a, GEN b) GEN ZM_imagecompl(GEN x) GEN ZM_indeximage(GEN x) GEN ZM_indexrank(GEN x) GEN ZM_inv(GEN M, GEN dM) GEN ZM_inv_ratlift(GEN M, GEN *pden) long ZM_rank(GEN x) GEN ZlM_gauss(GEN a, GEN b, ulong p, long e, GEN C) GEN closemodinvertible(GEN x, GEN y) GEN deplin(GEN x) GEN det(GEN a) GEN det0(GEN a, long flag) GEN det2(GEN a) GEN detint(GEN x) GEN eigen(GEN x, long prec) GEN gauss(GEN a, GEN b) GEN gaussmodulo(GEN M, GEN D, GEN Y) GEN gaussmodulo2(GEN M, GEN D, GEN Y) GEN gen_Gauss(GEN a, GEN b, void *E, const bb_field *ff) GEN gen_Gauss_pivot(GEN x, long *rr, void *E, const bb_field *ff) GEN gen_det(GEN a, void *E, const bb_field *ff) GEN gen_ker(GEN x, long deplin, void *E, const bb_field *ff) GEN gen_matcolmul(GEN a, GEN b, void *E, const bb_field *ff) GEN gen_matmul(GEN a, GEN b, void *E, const bb_field *ff) GEN image(GEN x) GEN image2(GEN x) GEN imagecompl(GEN x) GEN indexrank(GEN x) GEN inverseimage(GEN mat, GEN y) GEN ker(GEN x) GEN keri(GEN x) GEN mateigen(GEN x, long flag, long prec) GEN matimage0(GEN x, long flag) GEN matker0(GEN x, long flag) GEN matsolvemod0(GEN M, GEN D, GEN Y, long flag) long rank(GEN x) GEN reducemodinvertible(GEN x, GEN y) GEN reducemodlll(GEN x, GEN y) GEN split_realimag(GEN x, long r1, long r2) GEN suppl(GEN x) # alglin2.c GEN Flm_charpoly(GEN x, ulong p) GEN Flm_hess(GEN x, ulong p) GEN FpM_charpoly(GEN x, GEN p) GEN FpM_hess(GEN x, GEN p) GEN Frobeniusform(GEN V, long n) GEN QM_minors_coprime(GEN x, GEN pp) GEN QM_ImZ_hnf(GEN x) GEN QM_ImQ_hnf(GEN x) GEN QM_charpoly_ZX(GEN M) GEN QM_charpoly_ZX_bound(GEN M, long bit) GEN QM_charpoly_ZX2_bound(GEN M, GEN M2, long bit) GEN ZM_charpoly(GEN x) GEN adj(GEN x) GEN adjsafe(GEN x) GEN caract(GEN x, long v) 
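The FpM routines listed above do exact linear algebra mod p; by the usual convention of this interface, FpM_inv returns NULL on a singular matrix rather than raising an error. A hedged sketch over F_7 (FpM_red and FpM_mul are declared in FpV.c further up):

    GEN p = utoi(7);
    GEN M = FpM_red(mkmat2(mkcol2(gen_1, gen_2), mkcol2(stoi(3), stoi(4))), p);
    GEN Mi = FpM_inv(M, p);              /* NULL if M is not invertible mod p */
    if (Mi) pari_printf("M^-1 = %Ps, check: %Ps\n", Mi, FpM_mul(M, Mi, p));
    pari_printf("kernel: %Ps\n", FpM_ker(M, p));         /* empty here */
    pari_printf("charpoly: %Ps\n", FpM_charpoly(M, p));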
GEN caradj(GEN x, long v, GEN *py) GEN carberkowitz(GEN x, long v) GEN carhess(GEN x, long v) GEN charpoly(GEN x, long v) GEN charpoly0(GEN x, long v, long flag) GEN gnorm(GEN x) GEN gnorml1(GEN x, long prec) GEN gnorml1_fake(GEN x) GEN gnormlp(GEN x, GEN p, long prec) GEN gnorml2(GEN x) GEN gsupnorm(GEN x, long prec) void gsupnorm_aux(GEN x, GEN *m, GEN *msq, long prec) GEN gtrace(GEN x) GEN hess(GEN x) GEN intersect(GEN x, GEN y) GEN jacobi(GEN a, long prec) GEN matadjoint0(GEN x, long flag) GEN matcompanion(GEN x) GEN matrixqz0(GEN x, GEN pp) GEN minpoly(GEN x, long v) GEN qfgaussred(GEN a) GEN qfgaussred_positive(GEN a) GEN qfsign(GEN a) # alglin3.c GEN apply0(GEN f, GEN A) GEN diagonal(GEN x) GEN diagonal_shallow(GEN x) GEN extract0(GEN x, GEN l1, GEN l2) GEN fold0(GEN f, GEN A) GEN genapply(void *E, GEN (*f)(void *E, GEN x), GEN A) GEN genfold(void *E, GEN (*f)(void *E, GEN x, GEN y), GEN A) GEN genindexselect(void *E, long (*f)(void *E, GEN x), GEN A) GEN genselect(void *E, long (*f)(void *E, GEN x), GEN A) GEN gtomat(GEN x) GEN gtrans(GEN x) GEN matmuldiagonal(GEN x, GEN d) GEN matmultodiagonal(GEN x, GEN y) GEN matslice0(GEN A, long x1, long x2, long y1, long y2) GEN parapply(GEN V, GEN C) void parfor(GEN a, GEN b, GEN code, void *E, long call(void*, GEN, GEN)) void parforprime(GEN a, GEN b, GEN code, void *E, long call(void*, GEN, GEN)) void parforvec(GEN x, GEN code, long flag, void *E, long call(void*, GEN, GEN)) GEN parselect(GEN C, GEN D, long flag) GEN select0(GEN A, GEN f, long flag) GEN shallowextract(GEN x, GEN L) GEN shallowtrans(GEN x) GEN vecapply(void *E, GEN (*f)(void* E, GEN x), GEN x) GEN veccatapply(void *E, GEN (*f)(void* E, GEN x), GEN x) GEN veccatselapply(void *Epred, long (*pred)(void* E, GEN x), void *Efun, GEN (*fun)(void* E, GEN x), GEN A) GEN vecrange(GEN a, GEN b) GEN vecrangess(long a, long b) GEN vecselapply(void *Epred, long (*pred)(void* E, GEN x), void *Efun, GEN (*fun)(void* E, GEN x), GEN A) GEN vecselect(void *E, long (*f)(void* E, GEN x), GEN A) GEN vecslice0(GEN A, long y1, long y2) GEN vecsum(GEN v) # anal.c void addhelp(const char *e, char *s) void alias0(const char *s, const char *old) GEN compile_str(const char *s) GEN chartoGENstr(char c) long delete_var() long fetch_user_var(const char *s) long fetch_var() long fetch_var_higher() GEN fetch_var_value(long vx, GEN t) char * gp_embedded(const char *s) void gp_embedded_init(long rsize, long vsize) GEN gp_read_str(const char *t) entree* install(void *f, const char *name, const char *code) entree* is_entry(const char *s) void kill0(const char *e) void pari_var_close() void pari_var_init() long pari_var_next() long pari_var_next_temp() long pari_var_create(entree *ep) void name_var(long n, const char *s) GEN readseq(char *t) GEN* safegel(GEN x, long l) long* safeel(GEN x, long l) GEN* safelistel(GEN x, long l) GEN* safegcoeff(GEN x, long a, long b) GEN strntoGENstr(const char *s, long n0) GEN strtoGENstr(const char *s) GEN strtoi(const char *s) GEN strtor(const char *s, long prec) GEN type0(GEN x) GEN varhigher(const char *s, long v) GEN varlower(const char *s, long v) # aprcl.c long isprimeAPRCL(GEN N) # Qfb.c GEN Qfb0(GEN x, GEN y, GEN z, GEN d, long prec) void check_quaddisc(GEN x, long *s, long *r, const char *f) void check_quaddisc_imag(GEN x, long *r, const char *f) void check_quaddisc_real(GEN x, long *r, const char *f) long cornacchia(GEN d, GEN p, GEN *px, GEN *py) long cornacchia2(GEN d, GEN p, GEN *px, GEN *py) GEN nucomp(GEN x, GEN y, GEN L) GEN nudupl(GEN x, GEN L) GEN nupow(GEN x, 
GEN n, GEN L) GEN primeform(GEN x, GEN p, long prec) GEN primeform_u(GEN x, ulong p) int qfb_equal1(GEN f) GEN qfbcompraw(GEN x, GEN y) GEN qfbpowraw(GEN x, long n) GEN qfbred0(GEN x, long flag, GEN D, GEN isqrtD, GEN sqrtD) GEN qfbredsl2(GEN q, GEN S) GEN qfbsolve(GEN Q, GEN n) GEN qfi(GEN x, GEN y, GEN z) GEN qfi_1(GEN x) GEN qfi_Shanks(GEN a, GEN g, long n) GEN qfi_log(GEN a, GEN g, GEN o) GEN qfi_order(GEN q, GEN o) GEN qficomp(GEN x, GEN y) GEN qficompraw(GEN x, GEN y) GEN qfipowraw(GEN x, long n) GEN qfisolvep(GEN Q, GEN p) GEN qfisqr(GEN x) GEN qfisqrraw(GEN x) GEN qfr(GEN x, GEN y, GEN z, GEN d) GEN qfr3_comp(GEN x, GEN y, qfr_data *S) GEN qfr3_pow(GEN x, GEN n, qfr_data *S) GEN qfr3_red(GEN x, qfr_data *S) GEN qfr3_rho(GEN x, qfr_data *S) GEN qfr3_to_qfr(GEN x, GEN z) GEN qfr5_comp(GEN x, GEN y, qfr_data *S) GEN qfr5_dist(GEN e, GEN d, long prec) GEN qfr5_pow(GEN x, GEN n, qfr_data *S) GEN qfr5_red(GEN x, qfr_data *S) GEN qfr5_rho(GEN x, qfr_data *S) GEN qfr5_to_qfr(GEN x, GEN d0) GEN qfr_1(GEN x) void qfr_data_init(GEN D, long prec, qfr_data *S) GEN qfr_to_qfr5(GEN x, long prec) GEN qfrcomp(GEN x, GEN y) GEN qfrcompraw(GEN x, GEN y) GEN qfrpow(GEN x, GEN n) GEN qfrpowraw(GEN x, long n) GEN qfrsolvep(GEN Q, GEN p) GEN qfrsqr(GEN x) GEN qfrsqrraw(GEN x) GEN quadgen(GEN x) GEN quadpoly(GEN x) GEN quadpoly0(GEN x, long v) GEN redimag(GEN x) GEN redreal(GEN x) GEN redrealnod(GEN x, GEN isqrtD) GEN rhoreal(GEN x) GEN rhorealnod(GEN x, GEN isqrtD) # arith1.c ulong Fl_order(ulong a, ulong o, ulong p) GEN Fl_powers(ulong x, long n, ulong p) GEN Fl_powers_pre(ulong x, long n, ulong p, ulong pi) ulong Fl_powu(ulong x, ulong n, ulong p) ulong Fl_powu_pre(ulong x, ulong n, ulong p, ulong pi) ulong Fl_sqrt(ulong a, ulong p) ulong Fl_sqrt_pre(ulong a, ulong p, ulong pi) ulong Fl_sqrtl(ulong a, ulong l, ulong p) ulong Fl_sqrtl_pre(ulong a, ulong l, ulong p, ulong pi) GEN Fp_factored_order(GEN a, GEN o, GEN p) int Fp_ispower(GEN x, GEN K, GEN p) GEN Fp_log(GEN a, GEN g, GEN ord, GEN p) GEN Fp_order(GEN a, GEN o, GEN p) GEN Fp_pow(GEN a, GEN n, GEN m) GEN Fp_powers(GEN x, long n, GEN p) GEN Fp_pows(GEN A, long k, GEN N) GEN Fp_powu(GEN x, ulong k, GEN p) GEN Fp_sqrt(GEN a, GEN p) GEN Fp_sqrtn(GEN a, GEN n, GEN p, GEN *zetan) GEN Z_ZV_mod(GEN P, GEN xa) GEN Z_chinese(GEN a, GEN b, GEN A, GEN B) GEN Z_chinese_all(GEN a, GEN b, GEN A, GEN B, GEN *pC) GEN Z_chinese_coprime(GEN a, GEN b, GEN A, GEN B, GEN C) GEN Z_chinese_post(GEN a, GEN b, GEN C, GEN U, GEN d) void Z_chinese_pre(GEN A, GEN B, GEN *pC, GEN *pU, GEN *pd) GEN Z_factor_listP(GEN N, GEN L) long Z_isanypower(GEN x, GEN *y) long Z_isfundamental(GEN x) long Z_ispow2(GEN x) long Z_ispowerall(GEN x, ulong k, GEN *pt) long Z_issquareall(GEN x, GEN *pt) GEN Z_nv_mod(GEN P, GEN xa) GEN ZV_allpnqn(GEN x) GEN ZV_chinese(GEN A, GEN P, GEN *pt_mod) GEN ZV_chinese_tree(GEN A, GEN P, GEN tree, GEN *pt_mod) GEN ZV_producttree(GEN xa) GEN ZX_nv_mod_tree(GEN P, GEN xa, GEN T) GEN Zideallog(GEN bid, GEN x) long Zp_issquare(GEN a, GEN p) GEN bestappr(GEN x, GEN k) GEN bestapprPade(GEN x, long B) long cgcd(long a, long b) GEN chinese(GEN x, GEN y) GEN chinese1(GEN x) GEN chinese1_coprime_Z(GEN x) GEN classno(GEN x) GEN classno2(GEN x) long clcm(long a, long b) GEN contfrac0(GEN x, GEN b, long flag) GEN contfracpnqn(GEN x, long n) GEN fibo(long n) GEN gboundcf(GEN x, long k) GEN gcf(GEN x) GEN gcf2(GEN b, GEN x) const bb_field *get_Fp_field(void **E, GEN p) ulong pgener_Fl(ulong p) ulong pgener_Fl_local(ulong p, GEN L) GEN pgener_Fp(GEN p) GEN 
pgener_Fp_local(GEN p, GEN L) ulong pgener_Zl(ulong p) GEN pgener_Zp(GEN p) long gisanypower(GEN x, GEN *pty) GEN gissquare(GEN x) GEN gissquareall(GEN x, GEN *pt) GEN hclassno(GEN x) long hilbert(GEN x, GEN y, GEN p) long hilbertii(GEN x, GEN y, GEN p) long isfundamental(GEN x) long ispolygonal(GEN x, GEN S, GEN *N) long ispower(GEN x, GEN k, GEN *pty) long isprimepower(GEN x, GEN *pty) long ispseudoprimepower(GEN n, GEN *pt) long issquare(GEN x) long issquareall(GEN x, GEN *pt) long krois(GEN x, long y) long kroiu(GEN x, ulong y) long kronecker(GEN x, GEN y) long krosi(long s, GEN x) long kross(long x, long y) long kroui(ulong x, GEN y) long krouu(ulong x, ulong y) GEN lcmii(GEN a, GEN b) long logint0(GEN B, GEN y, GEN *ptq) GEN mpfact(long n) GEN muls_interval(long a, long b) GEN mulu_interval(ulong a, ulong b) GEN ncV_chinese_center(GEN A, GEN P, GEN *pt_mod) GEN nmV_chinese_center(GEN A, GEN P, GEN *pt_mod) GEN odd_prime_divisors(GEN q) GEN order(GEN x) GEN pnqn(GEN x) GEN qfbclassno0(GEN x, long flag) GEN quaddisc(GEN x) GEN quadregulator(GEN x, long prec) GEN quadunit(GEN x) ulong rootsof1_Fl(ulong n, ulong p) GEN rootsof1_Fp(GEN n, GEN p) GEN rootsof1u_Fp(ulong n, GEN p) long sisfundamental(long x) GEN sqrtint(GEN a) GEN ramanujantau(GEN n) ulong ugcd(ulong a, ulong b) long uisprimepower(ulong n, ulong *p) long uissquare(ulong A) long uissquareall(ulong A, ulong *sqrtA) long unegisfundamental(ulong x) long uposisfundamental(ulong x) GEN znlog(GEN x, GEN g, GEN o) GEN znorder(GEN x, GEN o) GEN znprimroot(GEN m) GEN znstar(GEN x) GEN znstar0(GEN N, long flag) # arith2.c GEN Z_smoothen(GEN N, GEN L, GEN *pP, GEN *pe) GEN boundfact(GEN n, ulong lim) GEN check_arith_pos(GEN n, const char *f) GEN check_arith_non0(GEN n, const char *f) GEN check_arith_all(GEN n, const char *f) GEN clean_Z_factor(GEN f) GEN corepartial(GEN n, long l) GEN core0(GEN n, long flag) GEN core2(GEN n) GEN core2partial(GEN n, long l) GEN coredisc(GEN n) GEN coredisc0(GEN n, long flag) GEN coredisc2(GEN n) long corediscs(long D, ulong *f) GEN digits(GEN N, GEN B) GEN divisors(GEN n) GEN divisorsu(ulong n) GEN divisorsu_fact(GEN P, GEN e) GEN factor_pn_1(GEN p, ulong n) GEN factor_pn_1_limit(GEN p, long n, ulong lim) GEN factoru_pow(ulong n) GEN fromdigits(GEN x, GEN B) GEN fuse_Z_factor(GEN f, GEN B) GEN gen_digits(GEN x, GEN B, long n, void *E, bb_ring *r, GEN (*div)(void *E, GEN x, GEN y, GEN *r)) GEN gen_fromdigits(GEN x, GEN B, void *E, bb_ring *r) byteptr initprimes(ulong maxnum, long *lenp, ulong *lastp) void initprimetable(ulong maxnum) ulong init_primepointer_geq(ulong a, byteptr *pd) ulong init_primepointer_gt(ulong a, byteptr *pd) ulong init_primepointer_leq(ulong a, byteptr *pd) ulong init_primepointer_lt(ulong a, byteptr *pd) int is_Z_factor(GEN f) int is_Z_factornon0(GEN f) int is_Z_factorpos(GEN f) ulong maxprime() void maxprime_check(ulong c) GEN sumdigits(GEN n) GEN sumdigits0(GEN n, GEN B) ulong sumdigitsu(ulong n) GEN usumdiv_fact(GEN f) GEN usumdivk_fact(GEN f, ulong k) # base1.c GEN FpX_FpC_nfpoleval(GEN nf, GEN pol, GEN a, GEN p) GEN embed_T2(GEN x, long r1) GEN embednorm_T2(GEN x, long r1) GEN embed_norm(GEN x, long r1) void check_ZKmodule(GEN x, const char *s) void checkbid(GEN bid) GEN checkbnf(GEN bnf) void checkbnr(GEN bnr) void checkbnrgen(GEN bnr) void checkabgrp(GEN v) void checksqmat(GEN x, long N) GEN checknf(GEN nf) GEN checknfelt_mod(GEN nf, GEN x, const char *s) void checkprid(GEN bid) void checkrnf(GEN rnf) GEN factoredpolred(GEN x, GEN fa) GEN factoredpolred2(GEN x, GEN fa) GEN 
galoisapply(GEN nf, GEN aut, GEN x) GEN get_bnf(GEN x, long *t) GEN get_bnfpol(GEN x, GEN *bnf, GEN *nf) GEN get_nf(GEN x, long *t) GEN get_nfpol(GEN x, GEN *nf) GEN get_prid(GEN x) GEN idealfrobenius(GEN nf, GEN gal, GEN pr) GEN idealfrobenius_aut(GEN nf, GEN gal, GEN pr, GEN aut) GEN idealramfrobenius(GEN nf, GEN gal, GEN pr, GEN ram) GEN idealramgroups(GEN nf, GEN gal, GEN pr) GEN nf_get_allroots(GEN nf) long nf_get_prec(GEN x) GEN nfcertify(GEN x) GEN nfgaloismatrix(GEN nf, GEN s) GEN nfgaloispermtobasis(GEN nf, GEN gal) void nfinit_step1(nfbasic_t *T, GEN x, long flag) GEN nfinit_step2(nfbasic_t *T, long flag, long prec) GEN nfinit(GEN x, long prec) GEN nfinit0(GEN x, long flag, long prec) GEN nfinitall(GEN x, long flag, long prec) GEN nfinitred(GEN x, long prec) GEN nfinitred2(GEN x, long prec) GEN nfisincl(GEN a, GEN b) GEN nfisisom(GEN a, GEN b) GEN nfnewprec(GEN nf, long prec) GEN nfnewprec_shallow(GEN nf, long prec) GEN nfpoleval(GEN nf, GEN pol, GEN a) long nftyp(GEN x) GEN polredord(GEN x) GEN polgalois(GEN x, long prec) GEN polred(GEN x) GEN polred0(GEN x, long flag, GEN fa) GEN polred2(GEN x) GEN polredabs(GEN x) GEN polredabs0(GEN x, long flag) GEN polredabs2(GEN x) GEN polredabsall(GEN x, long flun) GEN polredbest(GEN x, long flag) GEN rnfpolredabs(GEN nf, GEN pol, long flag) GEN rnfpolredbest(GEN nf, GEN relpol, long flag) GEN smallpolred(GEN x) GEN smallpolred2(GEN x) GEN tschirnhaus(GEN x) GEN ZX_Q_normalize(GEN pol, GEN *ptlc) GEN ZX_Z_normalize(GEN pol, GEN *ptk) GEN ZX_to_monic(GEN pol, GEN *lead) GEN ZX_primitive_to_monic(GEN pol, GEN *lead) # base2.c GEN Fq_to_nf(GEN x, GEN modpr) GEN FqM_to_nfM(GEN z, GEN modpr) GEN FqV_to_nfV(GEN z, GEN modpr) GEN FqX_to_nfX(GEN x, GEN modpr) GEN Rg_nffix(const char *f, GEN T, GEN c, int lift) GEN RgV_nffix(const char *f, GEN T, GEN P, int lift) GEN RgX_nffix(const char *s, GEN nf, GEN x, int lift) long ZpX_disc_val(GEN f, GEN p) GEN ZpX_gcd(GEN f1, GEN f2, GEN p, GEN pm) GEN ZpX_reduced_resultant(GEN x, GEN y, GEN p, GEN pm) GEN ZpX_reduced_resultant_fast(GEN f, GEN g, GEN p, long M) long ZpX_resultant_val(GEN f, GEN g, GEN p, long M) void checkmodpr(GEN modpr) GEN ZX_compositum_disjoint(GEN A, GEN B) GEN compositum(GEN P, GEN Q) GEN compositum2(GEN P, GEN Q) GEN nfdisc(GEN x) GEN get_modpr(GEN x) GEN indexpartial(GEN P, GEN DP) GEN modpr_genFq(GEN modpr) GEN nf_to_Fq_init(GEN nf, GEN *pr, GEN *T, GEN *p) GEN nf_to_Fq(GEN nf, GEN x, GEN modpr) GEN nfM_to_FqM(GEN z, GEN nf, GEN modpr) GEN nfV_to_FqV(GEN z, GEN nf, GEN modpr) GEN nfX_to_FqX(GEN x, GEN nf, GEN modpr) GEN nfbasis(GEN x, GEN *y, GEN p) GEN nfcompositum(GEN nf, GEN A, GEN B, long flag) void nfmaxord(nfmaxord_t *S, GEN T, long flag) GEN nfmodprinit(GEN nf, GEN pr) GEN nfreducemodpr(GEN nf, GEN x, GEN modpr) GEN nfsplitting(GEN T, GEN D) GEN polcompositum0(GEN P, GEN Q, long flag) GEN idealprimedec(GEN nf, GEN p) GEN idealprimedec_limit_f(GEN nf, GEN p, long f) GEN idealprimedec_limit_norm(GEN nf, GEN p, GEN B) GEN rnfbasis(GEN bnf, GEN order) GEN rnfdedekind(GEN nf, GEN T, GEN pr, long flag) GEN rnfdet(GEN nf, GEN order) GEN rnfdiscf(GEN nf, GEN pol) GEN rnfequation(GEN nf, GEN pol) GEN rnfequation0(GEN nf, GEN pol, long flall) GEN rnfequation2(GEN nf, GEN pol) GEN nf_rnfeq(GEN nf, GEN relpol) GEN nf_rnfeqsimple(GEN nf, GEN relpol) GEN rnfequationall(GEN A, GEN B, long *pk, GEN *pLPRS) GEN rnfhnfbasis(GEN bnf, GEN order) long rnfisfree(GEN bnf, GEN order) GEN rnflllgram(GEN nf, GEN pol, GEN order, long prec) GEN rnfpolred(GEN nf, GEN pol, long prec) GEN 
rnfpseudobasis(GEN nf, GEN pol) GEN rnfsimplifybasis(GEN bnf, GEN order) GEN rnfsteinitz(GEN nf, GEN order) long factorial_lval(ulong n, ulong p) GEN zk_to_Fq_init(GEN nf, GEN *pr, GEN *T, GEN *p) GEN zk_to_Fq(GEN x, GEN modpr) GEN zkmodprinit(GEN nf, GEN pr) # base3.c GEN Idealstar(GEN nf, GEN x, long flun) GEN RgC_to_nfC(GEN nf, GEN x) GEN RgM_to_nfM(GEN nf, GEN x) GEN RgX_to_nfX(GEN nf, GEN pol) GEN algtobasis(GEN nf, GEN x) GEN basistoalg(GEN nf, GEN x) GEN gpnfvalrem(GEN nf, GEN x, GEN pr, GEN *py) GEN ideallist(GEN nf, long bound) GEN ideallist0(GEN nf, long bound, long flag) GEN ideallistarch(GEN nf, GEN list, GEN arch) GEN idealprincipalunits(GEN nf, GEN pr, long e) GEN idealstar0(GEN nf, GEN x, long flag) GEN indices_to_vec01(GEN archp, long r) GEN matalgtobasis(GEN nf, GEN x) GEN matbasistoalg(GEN nf, GEN x) GEN nf_to_scalar_or_alg(GEN nf, GEN x) GEN nf_to_scalar_or_basis(GEN nf, GEN x) GEN nfadd(GEN nf, GEN x, GEN y) GEN nfarchstar(GEN nf, GEN x, GEN arch) GEN nfdiv(GEN nf, GEN x, GEN y) GEN nfdiveuc(GEN nf, GEN a, GEN b) GEN nfdivrem(GEN nf, GEN a, GEN b) GEN nfembed(GEN nf, GEN x, long k) GEN nfinv(GEN nf, GEN x) GEN nfinvmodideal(GEN nf, GEN x, GEN ideal) int nfchecksigns(GEN nf, GEN x, GEN pl) GEN nfmod(GEN nf, GEN a, GEN b) GEN nfmul(GEN nf, GEN x, GEN y) GEN nfmuli(GEN nf, GEN x, GEN y) GEN nfnorm(GEN nf, GEN x) GEN nfpow(GEN nf, GEN x, GEN k) GEN nfpow_u(GEN nf, GEN z, ulong n) GEN nfpowmodideal(GEN nf, GEN x, GEN k, GEN ideal) GEN nfsign(GEN nf, GEN alpha) GEN nfsign_arch(GEN nf, GEN alpha, GEN arch) GEN nfsign_from_logarch(GEN Larch, GEN invpi, GEN archp) GEN nfsqr(GEN nf, GEN x) GEN nfsqri(GEN nf, GEN x) GEN nftrace(GEN nf, GEN x) long nfval(GEN nf, GEN x, GEN vp) long nfvalrem(GEN nf, GEN x, GEN pr, GEN *py) GEN polmod_nffix(const char *f, GEN rnf, GEN x, int lift) GEN polmod_nffix2(const char *f, GEN T, GEN relpol, GEN x, int lift) int pr_equal(GEN nf, GEN P, GEN Q) GEN rnfalgtobasis(GEN rnf, GEN x) GEN rnfbasistoalg(GEN rnf, GEN x) GEN rnfeltnorm(GEN rnf, GEN x) GEN rnfelttrace(GEN rnf, GEN x) GEN set_sign_mod_divisor(GEN nf, GEN x, GEN y, GEN idele, GEN sarch) GEN vec01_to_indices(GEN arch) GEN vecsmall01_to_indices(GEN v) GEN vecmodii(GEN a, GEN b) GEN ideallog(GEN nf, GEN x, GEN bigideal) GEN multable(GEN nf, GEN x) GEN tablemul(GEN TAB, GEN x, GEN y) GEN tablemul_ei(GEN M, GEN x, long i) GEN tablemul_ei_ej(GEN M, long i, long j) GEN tablemulvec(GEN M, GEN x, GEN v) GEN tablesqr(GEN tab, GEN x) GEN ei_multable(GEN nf, long i) long ZC_nfval(GEN nf, GEN x, GEN P) long ZC_nfvalrem(GEN nf, GEN x, GEN P, GEN *t) GEN zk_multable(GEN nf, GEN x) GEN zk_scalar_or_multable(GEN, GEN x) int ZC_prdvd(GEN nf, GEN x, GEN P) # base4.c GEN RM_round_maxrank(GEN G) GEN ZM_famat_limit(GEN fa, GEN limit) GEN famat_Z_gcd(GEN M, GEN n) GEN famat_inv(GEN f) GEN famat_inv_shallow(GEN f) GEN famat_makecoprime(GEN nf, GEN g, GEN e, GEN pr, GEN prk, GEN EX) GEN famat_mul(GEN f, GEN g) GEN famat_pow(GEN f, GEN n) GEN famat_sqr(GEN f) GEN famat_reduce(GEN fa) GEN famat_to_nf(GEN nf, GEN f) GEN famat_to_nf_modideal_coprime(GEN nf, GEN g, GEN e, GEN id, GEN EX) GEN famat_to_nf_moddivisor(GEN nf, GEN g, GEN e, GEN bid) GEN famatsmall_reduce(GEN fa) GEN gpidealval(GEN nf, GEN ix, GEN P) GEN idealtwoelt(GEN nf, GEN ix) GEN idealtwoelt0(GEN nf, GEN ix, GEN a) GEN idealtwoelt2(GEN nf, GEN x, GEN a) GEN idealadd(GEN nf, GEN x, GEN y) GEN idealaddmultoone(GEN nf, GEN list) GEN idealaddtoone(GEN nf, GEN x, GEN y) GEN idealaddtoone_i(GEN nf, GEN x, GEN y) GEN idealaddtoone0(GEN nf, GEN x, GEN y) GEN 
idealappr(GEN nf, GEN x) GEN idealappr0(GEN nf, GEN x, long fl) GEN idealapprfact(GEN nf, GEN x) GEN idealchinese(GEN nf, GEN x, GEN y) GEN idealcoprime(GEN nf, GEN x, GEN y) GEN idealcoprimefact(GEN nf, GEN x, GEN fy) GEN idealdiv(GEN nf, GEN x, GEN y) GEN idealdiv0(GEN nf, GEN x, GEN y, long flag) GEN idealdivexact(GEN nf, GEN x, GEN y) GEN idealdivpowprime(GEN nf, GEN x, GEN vp, GEN n) GEN idealmulpowprime(GEN nf, GEN x, GEN vp, GEN n) GEN idealfactor(GEN nf, GEN x) GEN idealhnf(GEN nf, GEN x) GEN idealhnf_principal(GEN nf, GEN x) GEN idealhnf_shallow(GEN nf, GEN x) GEN idealhnf_two(GEN nf, GEN vp) GEN idealhnf0(GEN nf, GEN a, GEN b) GEN idealintersect(GEN nf, GEN x, GEN y) GEN idealinv(GEN nf, GEN ix) GEN idealinv_HNF(GEN nf, GEN I) GEN idealred0(GEN nf, GEN I, GEN vdir) GEN idealmul(GEN nf, GEN ix, GEN iy) GEN idealmul0(GEN nf, GEN ix, GEN iy, long flag) GEN idealmul_HNF(GEN nf, GEN ix, GEN iy) GEN idealmulred(GEN nf, GEN ix, GEN iy) GEN idealnorm(GEN nf, GEN x) GEN idealnumden(GEN nf, GEN x) GEN idealpow(GEN nf, GEN ix, GEN n) GEN idealpow0(GEN nf, GEN ix, GEN n, long flag) GEN idealpowred(GEN nf, GEN ix, GEN n) GEN idealpows(GEN nf, GEN ideal, long iexp) GEN idealprodprime(GEN nf, GEN L) GEN idealsqr(GEN nf, GEN x) long idealtyp(GEN *ideal, GEN *arch) long idealval(GEN nf, GEN ix, GEN vp) long isideal(GEN nf, GEN x) GEN idealmin(GEN nf, GEN ix, GEN vdir) GEN nf_get_Gtwist(GEN nf, GEN vdir) GEN nf_get_Gtwist1(GEN nf, long i) GEN nfC_nf_mul(GEN nf, GEN v, GEN x) GEN nfdetint(GEN nf, GEN pseudo) GEN nfdivmodpr(GEN nf, GEN x, GEN y, GEN modpr) GEN nfhnf(GEN nf, GEN x) GEN nfhnf0(GEN nf, GEN x, long flag) GEN nfhnfmod(GEN nf, GEN x, GEN d) GEN nfkermodpr(GEN nf, GEN x, GEN modpr) GEN nfmulmodpr(GEN nf, GEN x, GEN y, GEN modpr) GEN nfpowmodpr(GEN nf, GEN x, GEN k, GEN modpr) GEN nfreduce(GEN nf, GEN x, GEN ideal) GEN nfsnf(GEN nf, GEN x) GEN nfsnf0(GEN nf, GEN x, long flag) GEN nfsolvemodpr(GEN nf, GEN a, GEN b, GEN modpr) GEN to_famat(GEN x, GEN y) GEN to_famat_shallow(GEN x, GEN y) GEN vecdiv(GEN x, GEN y) GEN vecinv(GEN x) GEN vecmul(GEN x, GEN y) GEN vecpow(GEN x, GEN n) # base5.c GEN eltreltoabs(GEN rnfeq, GEN x) GEN eltabstorel(GEN eq, GEN P) GEN eltabstorel_lift(GEN rnfeq, GEN P) GEN rnfeltabstorel(GEN rnf, GEN x) GEN rnfeltdown(GEN rnf, GEN x) GEN rnfeltdown0(GEN rnf, GEN x, long flag) GEN rnfeltreltoabs(GEN rnf, GEN x) GEN rnfeltup(GEN rnf, GEN x) GEN rnfeltup0(GEN rnf, GEN x, long flag) GEN rnfidealabstorel(GEN rnf, GEN x) GEN rnfidealdown(GEN rnf, GEN x) GEN rnfidealhnf(GEN rnf, GEN x) GEN rnfidealmul(GEN rnf, GEN x, GEN y) GEN rnfidealnormabs(GEN rnf, GEN x) GEN rnfidealnormrel(GEN rnf, GEN x) GEN rnfidealprimedec(GEN rnf, GEN pr) GEN rnfidealreltoabs(GEN rnf, GEN x) GEN rnfidealreltoabs0(GEN rnf, GEN x, long flag) GEN rnfidealtwoelement(GEN rnf, GEN x) GEN rnfidealup(GEN rnf, GEN x) GEN rnfidealup0(GEN rnf, GEN x, long flag) GEN rnfinit(GEN nf, GEN pol) GEN rnfinit0(GEN nf, GEN pol, long flag) # bb_group.c GEN dlog_get_ordfa(GEN o) GEN dlog_get_ord(GEN o) GEN gen_PH_log(GEN a, GEN g, GEN ord, void *E, const bb_group *grp) GEN gen_Shanks_init(GEN g, long n, void *E, const bb_group *grp) GEN gen_Shanks(GEN T, GEN x, ulong N, void *E, const bb_group *grp) GEN gen_Shanks_sqrtn(GEN a, GEN n, GEN q, GEN *zetan, void *E, const bb_group *grp) GEN gen_gener(GEN o, void *E, const bb_group *grp) GEN gen_ellgens(GEN d1, GEN d2, GEN m, void *E, const bb_group *grp, GEN pairorder(void *E, GEN P, GEN Q, GEN m, GEN F)) GEN gen_ellgroup(GEN N, GEN F, GEN *pt_m, void *E, const bb_group *grp, 
GEN pairorder(void *E, GEN P, GEN Q, GEN m, GEN F)) GEN gen_factored_order(GEN a, GEN o, void *E, const bb_group *grp) GEN gen_order(GEN x, GEN o, void *E, const bb_group *grp) GEN gen_select_order(GEN o, void *E, const bb_group *grp) GEN gen_plog(GEN x, GEN g0, GEN q, void *E, const bb_group *grp) GEN gen_pow(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_pow_i(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_pow_fold(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_pow_fold_i(GEN x, GEN n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_powers(GEN x, long l, int use_sqr, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN), GEN (*one)(void*)) GEN gen_powu(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_powu_i(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*mul)(void*, GEN, GEN)) GEN gen_powu_fold(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_powu_fold_i(GEN x, ulong n, void *E, GEN (*sqr)(void*, GEN), GEN (*msqr)(void*, GEN)) GEN gen_product(GEN x, void *data, GEN (*mul)(void*, GEN, GEN)) # bibli1.c int QR_init(GEN x, GEN *pB, GEN *pQ, GEN *pL, long prec) GEN R_from_QR(GEN x, long prec) GEN RgM_Babai(GEN B, GEN t) int RgM_QR_init(GEN x, GEN *pB, GEN *pQ, GEN *pL, long prec) GEN RgM_gram_schmidt(GEN e, GEN *ptB) GEN Xadic_lindep(GEN x) GEN algdep(GEN x, long n) GEN algdep0(GEN x, long n, long bit) void forqfvec(void *E, long (*fun)(void *, GEN, GEN, double), GEN a, GEN BORNE) void forqfvec0(GEN a, GEN BORNE, GEN code) GEN gaussred_from_QR(GEN x, long prec) GEN lindep0(GEN x, long flag) GEN lindep(GEN x) GEN lindep2(GEN x, long bit) GEN mathouseholder(GEN Q, GEN v) GEN matqr(GEN x, long flag, long prec) GEN minim(GEN a, GEN borne, GEN stockmax) GEN minim_raw(GEN a, GEN borne, GEN stockmax) GEN minim2(GEN a, GEN borne, GEN stockmax) GEN padic_lindep(GEN x) GEN perf(GEN a) GEN qfrep0(GEN a, GEN borne, long flag) GEN qfminim0(GEN a, GEN borne, GEN stockmax, long flag, long prec) GEN seralgdep(GEN s, long p, long r) GEN zncoppersmith(GEN P0, GEN N, GEN X, GEN B) # bibli2.c GEN QXQ_reverse(GEN a, GEN T) GEN RgV_polint(GEN X, GEN Y, long v) GEN RgXQ_reverse(GEN a, GEN T) GEN ZV_indexsort(GEN L) long ZV_search(GEN x, GEN y) GEN ZV_sort(GEN L) GEN ZV_sort_uniq(GEN L) GEN ZV_union_shallow(GEN x, GEN y) GEN binomial(GEN x, long k) GEN binomialuu(ulong n, ulong k) int cmp_Flx(GEN x, GEN y) int cmp_RgX(GEN x, GEN y) int cmp_nodata(void *data, GEN x, GEN y) int cmp_prime_ideal(GEN x, GEN y) int cmp_prime_over_p(GEN x, GEN y) int cmp_universal(GEN x, GEN y) GEN convol(GEN x, GEN y) int gen_cmp_RgX(void *data, GEN x, GEN y) GEN polcyclo(long n, long v) GEN polcyclo_eval(long n, GEN x) GEN dirdiv(GEN x, GEN y) GEN dirmul(GEN x, GEN y) GEN gen_indexsort(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) GEN gen_indexsort_uniq(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) long gen_search(GEN x, GEN y, long flag, void *data, int (*cmp)(void*, GEN, GEN)) GEN gen_setminus(GEN set1, GEN set2, int (*cmp)(GEN, GEN)) GEN gen_sort(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) void gen_sort_inplace(GEN x, void *E, int (*cmp)(void*, GEN, GEN), GEN *perm) GEN gen_sort_uniq(GEN x, void *E, int (*cmp)(void*, GEN, GEN)) long getstack() long gettime() long getabstime() GEN getwalltime() GEN gprec(GEN x, long l) GEN gprec_wtrunc(GEN x, long pr) GEN gprec_w(GEN x, long pr) GEN gtoset(GEN x) GEN indexlexsort(GEN x) GEN 
indexsort(GEN x) GEN indexvecsort(GEN x, GEN k) GEN laplace(GEN x) GEN lexsort(GEN x) GEN mathilbert(long n) GEN matqpascal(long n, GEN q) GEN merge_factor(GEN fx, GEN fy, void *data, int (*cmp)(void *, GEN, GEN)) GEN merge_sort_uniq(GEN x, GEN y, void *data, int (*cmp)(void *, GEN, GEN)) GEN modreverse(GEN x) GEN numtoperm(long n, GEN x) GEN permtonum(GEN x) GEN polhermite(long n, long v) GEN polhermite_eval(long n, GEN x) GEN pollegendre(long n, long v) GEN pollegendre_eval(long n, GEN x) GEN polint(GEN xa, GEN ya, GEN x, GEN *dy) GEN polchebyshev(long n, long kind, long v) GEN polchebyshev_eval(long n, long kind, GEN x) GEN polchebyshev1(long n, long v) GEN polchebyshev2(long n, long v) GEN polrecip(GEN x) GEN setbinop(GEN f, GEN x, GEN y) GEN setintersect(GEN x, GEN y) long setisset(GEN x) GEN setminus(GEN x, GEN y) long setsearch(GEN x, GEN y, long flag) GEN setunion(GEN x, GEN y) GEN sort(GEN x) GEN sort_factor(GEN y, void *data, int (*cmp)(void*, GEN, GEN)) GEN stirling(long n, long m, long flag) GEN stirling1(ulong n, ulong m) GEN stirling2(ulong n, ulong m) long tablesearch(GEN T, GEN x, int (*cmp)(GEN, GEN)) GEN vecbinome(long n) long vecsearch(GEN v, GEN x, GEN k) GEN vecsort(GEN x, GEN k) GEN vecsort0(GEN x, GEN k, long flag) long zv_search(GEN x, long y) # bit.c GEN bits_to_int(GEN x, long l) ulong bits_to_u(GEN v, long l) GEN binaire(GEN x) GEN binary_2k(GEN x, long k) GEN binary_2k_nv(GEN x, long k) GEN binary_zv(GEN x) long bittest(GEN x, long n) GEN fromdigits_2k(GEN x, long k) GEN gbitand(GEN x, GEN y) GEN gbitneg(GEN x, long n) GEN gbitnegimply(GEN x, GEN y) GEN gbitor(GEN x, GEN y) GEN gbittest(GEN x, long n) GEN gbitxor(GEN x, GEN y) long hammingweight(GEN n) GEN ibitand(GEN x, GEN y) GEN ibitnegimply(GEN x, GEN y) GEN ibitor(GEN x, GEN y) GEN ibitxor(GEN x, GEN y) GEN nv_fromdigits_2k(GEN x, long k) # buch1.c GEN Buchquad(GEN D, double c1, double c2, long prec) GEN quadclassunit0(GEN x, long flag, GEN data, long prec) GEN quadhilbert(GEN D, long prec) GEN quadray(GEN bnf, GEN f, long prec) # buch2.c GEN Buchall(GEN P, long flag, long prec) GEN Buchall_param(GEN P, double bach, double bach2, long nbrelpid, long flun, long prec) GEN bnfcompress(GEN bnf) GEN bnfinit0(GEN P, long flag, GEN data, long prec) GEN bnfisprincipal0(GEN bnf, GEN x, long flall) GEN bnfisunit(GEN bignf, GEN x) GEN bnfnewprec(GEN nf, long prec) GEN bnfnewprec_shallow(GEN nf, long prec) GEN bnrnewprec(GEN bnr, long prec) GEN bnrnewprec_shallow(GEN bnr, long prec) GEN isprincipalfact(GEN bnf, GEN C, GEN L, GEN f, long flag) GEN isprincipalfact_or_fail(GEN bnf, GEN C, GEN P, GEN e) GEN isprincipal(GEN bnf, GEN x) GEN nfcyclotomicunits(GEN nf, GEN zu) GEN nfsign_units(GEN bnf, GEN archp, int add_zu) GEN signunits(GEN bignf) # buch3.c GEN ABC_to_bnr(GEN A, GEN B, GEN C, GEN *H, int gen) GEN Buchray(GEN bnf, GEN module, long flag) GEN bnrautmatrix(GEN bnr, GEN aut) GEN bnrchar(GEN bnr, GEN g, GEN v) GEN bnrchar_primitive(GEN bnr, GEN chi, GEN bnrc) GEN bnrclassno(GEN bignf, GEN ideal) GEN bnrclassno0(GEN A, GEN B, GEN C) GEN bnrclassnolist(GEN bnf, GEN listes) GEN bnrconductor0(GEN A, GEN B, GEN C, long flag) GEN bnrconductor(GEN bnr, GEN H0, long flag) GEN bnrconductor_i(GEN bnr, GEN H0, long flag) GEN bnrconductorofchar(GEN bnr, GEN chi) GEN bnrdisc0(GEN A, GEN B, GEN C, long flag) GEN bnrdisc(GEN bnr, GEN H, long flag) GEN bnrdisclist0(GEN bnf, GEN borne, GEN arch) GEN bnrgaloismatrix(GEN bnr, GEN aut) GEN bnrgaloisapply(GEN bnr, GEN mat, GEN x) GEN bnrinit0(GEN bignf, GEN ideal, long flag) long 
bnrisconductor0(GEN A, GEN B, GEN C) long bnrisconductor(GEN bnr, GEN H) long bnrisgalois(GEN bnr, GEN M, GEN H) GEN bnrisprincipal(GEN bnf, GEN x, long flag) GEN bnrsurjection(GEN bnr1, GEN bnr2) GEN buchnarrow(GEN bignf) long bnfcertify(GEN bnf) long bnfcertify0(GEN bnf, long flag) GEN decodemodule(GEN nf, GEN fa) GEN discrayabslist(GEN bnf, GEN listes) GEN discrayabslistarch(GEN bnf, GEN arch, ulong bound) GEN discrayabslistlong(GEN bnf, ulong bound) GEN idealmoddivisor(GEN bnr, GEN x) GEN isprincipalray(GEN bnf, GEN x) GEN isprincipalraygen(GEN bnf, GEN x) GEN rnfconductor(GEN bnf, GEN polrel) long rnfisabelian(GEN nf, GEN pol) GEN rnfnormgroup(GEN bnr, GEN polrel) GEN subgrouplist0(GEN bnr, GEN indexbound, long all) # buch4.c GEN bnfisnorm(GEN bnf, GEN x, long flag) GEN rnfisnorm(GEN S, GEN x, long flag) GEN rnfisnorminit(GEN bnf, GEN relpol, int galois) GEN bnfissunit(GEN bnf, GEN suni, GEN x) GEN bnfsunit(GEN bnf, GEN s, long PREC) long nfhilbert(GEN bnf, GEN a, GEN b) long nfhilbert0(GEN bnf, GEN a, GEN b, GEN p) long hyperell_locally_soluble(GEN pol, GEN p) long nf_hyperell_locally_soluble(GEN nf, GEN pol, GEN p) # char.c int char_check(GEN cyc, GEN chi) GEN charconj(GEN cyc, GEN chi) GEN charconj0(GEN cyc, GEN chi) GEN chardiv(GEN x, GEN a, GEN b) GEN chardiv0(GEN x, GEN a, GEN b) GEN chareval(GEN G, GEN chi, GEN n, GEN z) GEN charker(GEN cyc, GEN chi) GEN charker0(GEN cyc, GEN chi) GEN charmul(GEN x, GEN a, GEN b) GEN charmul0(GEN x, GEN a, GEN b) GEN charorder(GEN cyc, GEN x) GEN charorder0(GEN x, GEN chi) GEN char_denormalize(GEN cyc, GEN D, GEN chic) GEN char_normalize(GEN chi, GEN ncyc) GEN char_rootof1(GEN d, long prec) GEN char_rootof1_u(ulong d, long prec) GEN char_simplify(GEN D, GEN C) GEN cyc_normalize(GEN c) int zncharcheck(GEN G, GEN chi) GEN zncharconj(GEN G, GEN chi) GEN znchardiv(GEN G, GEN a, GEN b) GEN zncharker(GEN G, GEN chi) GEN znchareval(GEN G, GEN chi, GEN n, GEN z) GEN zncharinduce(GEN G, GEN chi, GEN N) long zncharisodd(GEN G, GEN chi) GEN zncharmul(GEN G, GEN a, GEN b) GEN zncharorder(GEN G, GEN chi) int znconrey_check(GEN cyc, GEN chi) GEN znconrey_normalized(GEN G, GEN chi) GEN znconreychar(GEN bid, GEN m) GEN znconreyfromchar_normalized(GEN bid, GEN chi) GEN znconreyconductor(GEN bid, GEN co, GEN *pm) GEN znconreyexp(GEN bid, GEN x) GEN znconreyfromchar(GEN bid, GEN chi) GEN znconreylog(GEN bid, GEN x) GEN znconreylog_normalize(GEN G, GEN m) # compile.c GEN closure_deriv(GEN G) long localvars_find(GEN pack, entree *ep) GEN localvars_read_str(const char *str, GEN pack) GEN snm_closure(entree *ep, GEN data) GEN strtoclosure(const char *s, long n, ...) 
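The znchar* functions in char.c above take a znstar structure built with flag 1 (znstar0 is declared in arith1.c further up) and a character given by its exponents on the group generators; the exact chi encoding below follows my reading of the GP-level znchar conventions, so treat it as a hedged sketch:

    GEN G = znstar0(utoi(8), 1);      /* (Z/8Z)^*, initialized for character work */
    GEN chi = mkvec2(gen_1, gen_0);   /* exponents on the two generators of G */
    pari_printf("order: %Ps\n", zncharorder(G, chi));
    pari_printf("odd? %ld\n", zncharisodd(G, chi));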
GEN strtofunction(const char *s) # concat.c GEN gconcat(GEN x, GEN y) GEN gconcat1(GEN x) GEN matconcat(GEN v) GEN shallowconcat(GEN x, GEN y) GEN shallowconcat1(GEN x) GEN shallowmatconcat(GEN v) GEN vconcat(GEN A, GEN B) # default.c extern int d_SILENT, d_ACKNOWLEDGE, d_INITRC, d_RETURN GEN default0(const char *a, const char *b) long getrealprecision() entree *pari_is_default(const char *s) GEN sd_TeXstyle(const char *v, long flag) GEN sd_colors(const char *v, long flag) GEN sd_compatible(const char *v, long flag) GEN sd_datadir(const char *v, long flag) GEN sd_debug(const char *v, long flag) GEN sd_debugfiles(const char *v, long flag) GEN sd_debugmem(const char *v, long flag) GEN sd_factor_add_primes(const char *v, long flag) GEN sd_factor_proven(const char *v, long flag) GEN sd_format(const char *v, long flag) GEN sd_histsize(const char *v, long flag) GEN sd_log(const char *v, long flag) GEN sd_logfile(const char *v, long flag) GEN sd_nbthreads(const char *v, long flag) GEN sd_new_galois_format(const char *v, long flag) GEN sd_output(const char *v, long flag) GEN sd_parisize(const char *v, long flag) GEN sd_parisizemax(const char *v, long flag) GEN sd_path(const char *v, long flag) GEN sd_prettyprinter(const char *v, long flag) GEN sd_primelimit(const char *v, long flag) GEN sd_realbitprecision(const char *v, long flag) GEN sd_realprecision(const char *v, long flag) GEN sd_secure(const char *v, long flag) GEN sd_seriesprecision(const char *v, long flag) GEN sd_simplify(const char *v, long flag) GEN sd_sopath(char *v, int flag) GEN sd_strictargs(const char *v, long flag) GEN sd_strictmatch(const char *v, long flag) GEN sd_string(const char *v, long flag, const char *s, char **f) GEN sd_threadsize(const char *v, long flag) GEN sd_threadsizemax(const char *v, long flag) GEN sd_toggle(const char *v, long flag, const char *s, int *ptn) GEN sd_ulong(const char *v, long flag, const char *s, ulong *ptn, ulong Min, ulong Max, const char **msg) GEN setdefault(const char *s, const char *v, long flag) long setrealprecision(long n, long *prec) # gplib.c GEN sd_breakloop(const char *v, long flag) GEN sd_echo(const char *v, long flag) GEN sd_graphcolormap(const char *v, long flag) GEN sd_graphcolors(const char *v, long flag) GEN sd_help(const char *v, long flag) GEN sd_histfile(const char *v, long flag) GEN sd_lines(const char *v, long flag) GEN sd_linewrap(const char *v, long flag) GEN sd_prompt(const char *v, long flag) GEN sd_prompt_cont(const char *v, long flag) GEN sd_psfile(const char *v, long flag) GEN sd_readline(const char *v, long flag) GEN sd_recover(const char *v, long flag) GEN sd_timer(const char *v, long flag) void pari_hit_return() void gp_load_gprc() int gp_meta(const char *buf, int ismain) const char **gphelp_keyword_list() void pari_center(const char *s) void pari_print_version() const char *gp_format_time(long delay) const char *gp_format_prompt(const char *p) void pari_alarm(long s) GEN gp_alarm(long s, GEN code) GEN gp_input() void gp_allocatemem(GEN z) int gp_handle_exception(long numerr) void gp_alarm_handler(int sig) void gp_sigint_fun() extern int h_REGULAR, h_LONG, h_APROPOS, h_RL void gp_help(const char *s, long flag) void gp_echo_and_log(const char *prompt, const char *s) void print_fun_list(char **list, long nbli) # dirichlet.c GEN direxpand(GEN a, long L) GEN direuler(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, GEN c) # ellanal.c GEN ellanalyticrank(GEN e, GEN eps, long prec) GEN ellanalyticrank_bitprec(GEN e, GEN eps, long bitprec) GEN ellanal_globalred_all(GEN e, 
GEN *N, GEN *cb, GEN *tam) GEN ellheegner(GEN e) GEN ellL1(GEN E, long r, long prec) GEN ellL1_bitprec(GEN E, long r, long bitprec) # elldata.c GEN ellconvertname(GEN s) GEN elldatagenerators(GEN E) GEN ellidentify(GEN E) GEN ellsearch(GEN A) GEN ellsearchcurve(GEN name) void forell(void *E, long call(void*, GEN), long a, long b, long flag) # ellfromeqn.c GEN ellfromeqn(GEN s) # elliptic.c extern int t_ELL_Rg, t_ELL_Q, t_ELL_Qp, t_ELL_Fp, t_ELL_Fq, t_ELL_NF long ellQ_get_CM(GEN e) int ell_is_integral(GEN E) GEN ellbasechar(GEN E) GEN akell(GEN e, GEN n) GEN anell(GEN e, long n) GEN anellsmall(GEN e, long n) GEN bilhell(GEN e, GEN z1, GEN z2, long prec) void checkell(GEN e) void checkell_Fq(GEN e) void checkell_Q(GEN e) void checkell_Qp(GEN e) void checkellisog(GEN v) void checkellpt(GEN z) void checkell5(GEN e) GEN ec_bmodel(GEN e) GEN ec_f_evalx(GEN E, GEN x) GEN ec_h_evalx(GEN e, GEN x) GEN ec_dFdx_evalQ(GEN E, GEN Q) GEN ec_dFdy_evalQ(GEN E, GEN Q) GEN ec_dmFdy_evalQ(GEN e, GEN Q) GEN ec_2divpol_evalx(GEN E, GEN x) GEN ec_half_deriv_2divpol_evalx(GEN E, GEN x) GEN ellanal_globalred(GEN e, GEN *gr) GEN ellQ_get_N(GEN e) void ellQ_get_Nfa(GEN e, GEN *N, GEN *faN) GEN ellQp_Tate_uniformization(GEN E, long prec) GEN ellQp_u(GEN E, long prec) GEN ellQp_u2(GEN E, long prec) GEN ellQp_q(GEN E, long prec) GEN ellQp_ab(GEN E, long prec) GEN ellQp_root(GEN E, long prec) GEN ellR_ab(GEN E, long prec) GEN ellR_eta(GEN E, long prec) GEN ellR_omega(GEN x, long prec) GEN ellR_roots(GEN E, long prec) GEN elladd(GEN e, GEN z1, GEN z2) GEN ellap(GEN e, GEN p) long ellap_CM_fast(GEN E, ulong p, long CM) GEN ellcard(GEN E, GEN p) GEN ellchangecurve(GEN e, GEN ch) GEN ellchangeinvert(GEN w) GEN ellchangepoint(GEN x, GEN ch) GEN ellchangepointinv(GEN x, GEN ch) GEN elldivpol(GEN e, long n, long v) GEN elleisnum(GEN om, long k, long flag, long prec) GEN elleta(GEN om, long prec) GEN ellff_get_card(GEN E) GEN ellff_get_gens(GEN E) GEN ellff_get_group(GEN E) GEN ellff_get_o(GEN x) GEN ellff_get_p(GEN E) GEN ellfromj(GEN j) GEN ellformaldifferential(GEN e, long n, long v) GEN ellformalexp(GEN e, long n, long v) GEN ellformallog(GEN e, long n, long v) GEN ellformalpoint(GEN e, long n, long v) GEN ellformalw(GEN e, long n, long v) GEN ellgenerators(GEN E) GEN ellglobalred(GEN e1) GEN ellgroup(GEN E, GEN p) GEN ellgroup0(GEN E, GEN p, long flag) GEN ellheight0(GEN e, GEN a, GEN b, long prec) GEN ellheight(GEN e, GEN a, long prec) GEN ellheightmatrix(GEN E, GEN x, long n) GEN ellheightoo(GEN e, GEN z, long prec) GEN ellinit(GEN x, GEN p, long prec) GEN ellintegralmodel(GEN e, GEN *pv) GEN ellisoncurve(GEN e, GEN z) int ellissupersingular(GEN x, GEN p) int elljissupersingular(GEN x) GEN elllseries(GEN e, GEN s, GEN A, long prec) GEN elllocalred(GEN e, GEN p1) GEN elllog(GEN e, GEN a, GEN g, GEN o) GEN ellminimalmodel(GEN E, GEN *ptv) GEN ellminimaltwist(GEN e) GEN ellminimaltwist0(GEN e, long fl) GEN ellminimaltwistcond(GEN e) GEN ellmul(GEN e, GEN z, GEN n) GEN ellnonsingularmultiple(GEN e, GEN P) GEN ellneg(GEN e, GEN z) GEN ellorder(GEN e, GEN p, GEN o) long ellorder_Q(GEN E, GEN P) GEN ellordinate(GEN e, GEN x, long prec) GEN ellpadicfrobenius(GEN E, ulong p, long n) GEN ellpadicheight(GEN e, GEN p, long n, GEN P) GEN ellpadicheight0(GEN e, GEN p, long n, GEN P, GEN Q) GEN ellpadicheightmatrix(GEN e, GEN p, long n, GEN P) GEN ellpadiclog(GEN E, GEN p, long n, GEN P) GEN ellpadics2(GEN E, GEN p, long n) GEN ellperiods(GEN w, long flag, long prec) GEN elltwist(GEN E, GEN D) GEN ellrandom(GEN e) long ellrootno(GEN 
e, GEN p) long ellrootno_global(GEN e) GEN ellsea(GEN E, ulong smallfact) GEN ellsigma(GEN om, GEN z, long flag, long prec) GEN ellsub(GEN e, GEN z1, GEN z2) GEN elltaniyama(GEN e, long prec) GEN elltatepairing(GEN E, GEN t, GEN s, GEN m) GEN elltors(GEN e) GEN elltors0(GEN e, long flag) GEN ellweilpairing(GEN E, GEN t, GEN s, GEN m) GEN ellwp(GEN w, GEN z, long prec) GEN ellwp0(GEN w, GEN z, long flag, long prec) GEN ellwpseries(GEN e, long v, long PRECDL) GEN ellxn(GEN e, long n, long v) GEN ellzeta(GEN om, GEN z, long prec) GEN expIxy(GEN x, GEN y, long prec) int oncurve(GEN e, GEN z) GEN orderell(GEN e, GEN p) GEN pointell(GEN e, GEN z, long prec) GEN point_to_a4a6(GEN E, GEN P, GEN p, GEN *pa4) GEN point_to_a4a6_Fl(GEN E, GEN P, ulong p, ulong *pa4) GEN zell(GEN e, GEN z, long prec) # elltors.c long ellisdivisible(GEN E, GEN P, GEN n, GEN *Q) # ellisogeny.c GEN ellisogenyapply(GEN f, GEN P) GEN ellisogeny(GEN e, GEN G, long only_image, long vx, long vy) GEN ellisomat(GEN E, long flag) # ellsea.c GEN Fp_ellcard_SEA(GEN a4, GEN a6, GEN p, long early_abort) GEN Fq_ellcard_SEA(GEN a4, GEN a6, GEN q, GEN T, GEN p, long early_abort) GEN ellmodulareqn(long l, long vx, long vy) # es.c GEN externstr(const char *cmd) char *gp_filter(const char *s) GEN gpextern(const char *cmd) void gpsystem(const char *s) GEN readstr(const char *s) GEN GENtoGENstr_nospace(GEN x) GEN GENtoGENstr(GEN x) char* GENtoTeXstr(GEN x) char* GENtostr(GEN x) char* GENtostr_raw(GEN x) char* GENtostr_unquoted(GEN x) GEN Str(GEN g) GEN Strchr(GEN g) GEN Strexpand(GEN g) GEN Strtex(GEN g) void brute(GEN g, char format, long dec) void dbgGEN(GEN x, long nb) void error0(GEN g) void dbg_pari_heap() int file_is_binary(FILE *f) void err_flush() void err_printf(const char* pat, ...) GEN gp_getenv(const char *s) GEN gp_read_file(const char *s) GEN gp_read_str_multiline(const char *s, char *last) GEN gp_read_stream(FILE *f) GEN gp_readvec_file(char *s) GEN gp_readvec_stream(FILE *f) void gpinstall(const char *s, const char *code, const char *gpname, const char *lib) GEN gsprintf(const char *fmt, ...) GEN gvsprintf(const char *fmt, va_list ap) char* itostr(GEN x) void matbrute(GEN g, char format, long dec) char* os_getenv(const char *s) void (*os_signal(int sig, void (*f)(int)))(int) void outmat(GEN x) void output(GEN x) char* RgV_to_str(GEN g, long flag) void pari_add_hist(GEN z, long t) void pari_ask_confirm(const char *s) void pari_fclose(pariFILE *f) void pari_flush() pariFILE* pari_fopen(const char *s, const char *mode) pariFILE* pari_fopen_or_fail(const char *s, const char *mode) pariFILE* pari_fopengz(const char *s) void pari_fprintf(FILE *file, const char *fmt, ...) void pari_fread_chars(void *b, size_t n, FILE *f) GEN pari_get_hist(long p) long pari_get_histtime(long p) char* pari_get_homedir(const char *user) int pari_is_dir(const char *name) int pari_is_file(const char *name) int pari_last_was_newline() void pari_set_last_newline(int last) ulong pari_nb_hist() void pari_printf(const char *fmt, ...) void pari_putc(char c) void pari_puts(const char *s) pariFILE* pari_safefopen(const char *s, const char *mode) char* pari_sprintf(const char *fmt, ...) 
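A small sketch of the es.c string conversions above, assuming a session already initialized with pari_init as in the earlier example; GENtostr and GENtoTeXstr return malloc'ed buffers, released here with libpari's pari_free:

#include <pari/pari.h>

void print_both(GEN x)
{
  char *s = GENtostr(x);      /* plain-text rendering of x */
  char *t = GENtoTeXstr(x);   /* TeX rendering of x */
  pari_printf("text: %s, TeX: %s\n", s, t);
  pari_free(s);
  pari_free(t);
}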
int pari_stdin_isatty() char* pari_strdup(const char *s) char* pari_strndup(const char *s, long n) char* pari_unique_dir(const char *s) char* pari_unique_filename(const char *s) void pari_unlink(const char *s) void pari_vfprintf(FILE *file, const char *fmt, va_list ap) void pari_vprintf(const char *fmt, va_list ap) char* pari_vsprintf(const char *fmt, va_list ap) char* path_expand(const char *s) void out_print0(PariOUT *out, const char *sep, GEN g, long flag) void out_printf(PariOUT *out, const char *fmt, ...) void out_putc(PariOUT *out, char c) void out_puts(PariOUT *out, const char *s) void out_term_color(PariOUT *out, long c) void out_vprintf(PariOUT *out, const char *fmt, va_list ap) char* pari_sprint0(const char *msg, GEN g, long flag) extern int f_RAW, f_PRETTYMAT, f_PRETTY, f_TEX void print0(GEN g, long flag) void print1(GEN g) void printf0(const char *fmt, GEN args) void printsep(const char *s, GEN g) void printsep1(const char *s, GEN g) void printtex(GEN g) char* stack_sprintf(const char *fmt, ...) char* stack_strcat(const char *s, const char *t) char* stack_strdup(const char *s) void strftime_expand(const char *s, char *buf, long max) GEN Strprintf(const char *fmt, GEN args) FILE* switchin(const char *name) void switchout(const char *name) void term_color(long c) char* term_get_color(char *s, long c) void texe(GEN g, char format, long dec) const char* type_name(long t) void warning0(GEN g) void write0(const char *s, GEN g) void write1(const char *s, GEN g) void writebin(const char *name, GEN x) void writetex(const char *s, GEN g) # eval.c extern int br_NONE, br_BREAK, br_NEXT, br_MULTINEXT, br_RETURN void bincopy_relink(GEN C, GEN vi) GEN break0(long n) GEN call0(GEN fun, GEN args) GEN closure_callgen1(GEN C, GEN x) GEN closure_callgen1prec(GEN C, GEN x, long prec) GEN closure_callgen2(GEN C, GEN x, GEN y) GEN closure_callgenall(GEN C, long n, ...) 
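One plausible idiom (a sketch, not prescribed by this header) for calling GP code through the eval.c closure_* evaluators below: pari_compile_str, declared under init.c further down, compiles an expression whose value here is a t_CLOSURE, which closure_callgen1 then applies. Assumes an initialized session:

#include <pari/pari.h>

GEN apply_square_plus_one(void)
{
  GEN f = closure_evalres(pari_compile_str("x -> x^2 + 1"));
  return closure_callgen1(f, stoi(3));   /* 3^2 + 1 = 10 */
}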
GEN closure_callgenvec(GEN C, GEN args) GEN closure_callgenvecprec(GEN C, GEN args, long prec) void closure_callvoid1(GEN C, GEN x) long closure_context(long start, long level) void closure_disassemble(GEN n) void closure_err(long level) GEN closure_evalbrk(GEN C, long *status) GEN closure_evalgen(GEN C) GEN closure_evalnobrk(GEN C) GEN closure_evalres(GEN C) void closure_evalvoid(GEN C) GEN closure_trapgen(GEN C, long numerr) GEN copybin_unlink(GEN C) GEN get_lex(long vn) long get_localprec() long get_localbitprec() GEN gp_call(void *E, GEN x) GEN gp_callprec(void *E, GEN x, long prec) GEN gp_call2(void *E, GEN x, GEN y) long gp_callbool(void *E, GEN x) long gp_callvoid(void *E, GEN x) GEN gp_eval(void *E, GEN x) long gp_evalbool(void *E, GEN x) GEN gp_evalprec(void *E, GEN x, long prec) GEN gp_evalupto(void *E, GEN x) long gp_evalvoid(void *E, GEN x) void localprec(long p) void localbitprec(long p) long loop_break() GEN next0(long n) GEN pareval(GEN C) GEN pari_self() GEN parsum(GEN a, GEN b, GEN code, GEN x) GEN parvector(long n, GEN code) void pop_lex(long n) void pop_localprec() void push_lex(GEN a, GEN C) void push_localbitprec(long p) void push_localprec(long p) GEN return0(GEN x) void set_lex(long vn, GEN x) # FF.c GEN FF_1(GEN a) GEN FF_Z_Z_muldiv(GEN x, GEN y, GEN z) GEN FF_Q_add(GEN x, GEN y) GEN FF_Z_add(GEN a, GEN b) GEN FF_Z_mul(GEN a, GEN b) GEN FF_add(GEN a, GEN b) GEN FF_charpoly(GEN x) GEN FF_conjvec(GEN x) GEN FF_div(GEN a, GEN b) GEN FF_ellcard(GEN E) GEN FF_ellcard_SEA(GEN E, long smallfact) GEN FF_ellgens(GEN E) GEN FF_ellgroup(GEN E) GEN FF_elllog(GEN E, GEN P, GEN Q, GEN o) GEN FF_ellmul(GEN E, GEN P, GEN n) GEN FF_ellorder(GEN E, GEN P, GEN o) GEN FF_elltwist(GEN E) GEN FF_ellrandom(GEN E) GEN FF_elltatepairing(GEN E, GEN P, GEN Q, GEN m) GEN FF_ellweilpairing(GEN E, GEN P, GEN Q, GEN m) int FF_equal(GEN a, GEN b) int FF_equal0(GEN x) int FF_equal1(GEN x) int FF_equalm1(GEN x) long FF_f(GEN x) GEN FF_inv(GEN a) long FF_issquare(GEN x) long FF_issquareall(GEN x, GEN *pt) long FF_ispower(GEN x, GEN K, GEN *pt) GEN FF_log(GEN a, GEN b, GEN o) GEN FF_minpoly(GEN x) GEN FF_mod(GEN x) GEN FF_mul(GEN a, GEN b) GEN FF_mul2n(GEN a, long n) GEN FF_neg(GEN a) GEN FF_neg_i(GEN a) GEN FF_norm(GEN x) GEN FF_order(GEN x, GEN o) GEN FF_p(GEN x) GEN FF_p_i(GEN x) GEN FF_pow(GEN x, GEN n) GEN FF_primroot(GEN x, GEN *o) GEN FF_q(GEN x) int FF_samefield(GEN x, GEN y) GEN FF_sqr(GEN a) GEN FF_sqrt(GEN a) GEN FF_sqrtn(GEN x, GEN n, GEN *zetan) GEN FF_sub(GEN x, GEN y) GEN FF_to_F2xq(GEN x) GEN FF_to_F2xq_i(GEN x) GEN FF_to_Flxq(GEN x) GEN FF_to_Flxq_i(GEN x) GEN FF_to_FpXQ(GEN x) GEN FF_to_FpXQ_i(GEN x) GEN FF_trace(GEN x) GEN FF_zero(GEN a) GEN FFM_FFC_mul(GEN M, GEN C, GEN ff) GEN FFM_det(GEN M, GEN ff) GEN FFM_image(GEN M, GEN ff) GEN FFM_inv(GEN M, GEN ff) GEN FFM_ker(GEN M, GEN ff) GEN FFM_mul(GEN M, GEN N, GEN ff) long FFM_rank(GEN M, GEN ff) GEN FFX_factor(GEN f, GEN x) GEN FFX_roots(GEN f, GEN x) GEN FqX_to_FFX(GEN x, GEN ff) GEN Fq_to_FF(GEN x, GEN ff) GEN Z_FF_div(GEN a, GEN b) GEN ffgen(GEN T, long v) GEN fflog(GEN x, GEN g, GEN o) GEN fforder(GEN x, GEN o) GEN ffprimroot(GEN x, GEN *o) GEN ffrandom(GEN ff) int Rg_is_FF(GEN c, GEN *ff) int RgC_is_FFC(GEN x, GEN *ff) int RgM_is_FFM(GEN x, GEN *ff) GEN p_to_FF(GEN p, long v) GEN Tp_to_FF(GEN T, GEN p) # galconj.c GEN checkgal(GEN gal) GEN checkgroup(GEN g, GEN *S) GEN embed_disc(GEN r, long r1, long prec) GEN embed_roots(GEN r, long r1) GEN galois_group(GEN gal) GEN galoisconj(GEN nf, GEN d) GEN galoisconj0(GEN nf, long flag, 
GEN d, long prec) GEN galoisexport(GEN gal, long format) GEN galoisfixedfield(GEN gal, GEN v, long flag, long y) GEN galoisidentify(GEN gal) GEN galoisinit(GEN nf, GEN den) GEN galoisisabelian(GEN gal, long flag) long galoisisnormal(GEN gal, GEN sub) GEN galoispermtopol(GEN gal, GEN perm) GEN galoissubgroups(GEN G) GEN galoissubfields(GEN G, long flag, long v) long numberofconjugates(GEN T, long pdepart) GEN vandermondeinverse(GEN L, GEN T, GEN den, GEN prep) # galpol.c GEN galoisnbpol(long a) GEN galoisgetpol(long a, long b, long s) # gen1.c GEN conjvec(GEN x, long prec) GEN gadd(GEN x, GEN y) GEN gaddsg(long x, GEN y) GEN gconj(GEN x) GEN gdiv(GEN x, GEN y) GEN gdivgs(GEN x, long s) GEN ginv(GEN x) GEN gmul(GEN x, GEN y) GEN gmul2n(GEN x, long n) GEN gmulsg(long s, GEN y) GEN gsqr(GEN x) GEN gsub(GEN x, GEN y) GEN gsubsg(long x, GEN y) GEN inv_ser(GEN b) GEN mulcxI(GEN x) GEN mulcxmI(GEN x) GEN ser_normalize(GEN x) # gen2.c GEN gassoc_proto(GEN f(GEN, GEN), GEN, GEN) GEN map_proto_G(GEN f(GEN), GEN x) GEN map_proto_lG(long f(GEN), GEN x) GEN map_proto_lGL(long f(GEN, long), GEN x, long y) long Q_pval(GEN x, GEN p) long Q_pvalrem(GEN x, GEN p, GEN *y) long RgX_val(GEN x) long RgX_valrem(GEN x, GEN *z) long RgX_valrem_inexact(GEN x, GEN *Z) int ZV_Z_dvd(GEN v, GEN p) long ZV_pval(GEN x, GEN p) long ZV_pvalrem(GEN x, GEN p, GEN *px) long ZV_lval(GEN x, ulong p) long ZV_lvalrem(GEN x, ulong p, GEN *px) long ZX_lvalrem(GEN x, ulong p, GEN *px) long ZX_lval(GEN x, ulong p) long ZX_pval(GEN x, GEN p) long ZX_pvalrem(GEN x, GEN p, GEN *px) long Z_lval(GEN n, ulong p) long Z_lvalrem(GEN n, ulong p, GEN *py) long Z_lvalrem_stop(GEN *n, ulong p, int *stop) long Z_pval(GEN n, GEN p) long Z_pvalrem(GEN x, GEN p, GEN *py) GEN cgetp(GEN x) GEN cvstop2(long s, GEN y) GEN cvtop(GEN x, GEN p, long l) GEN cvtop2(GEN x, GEN y) GEN gabs(GEN x, long prec) void gaffect(GEN x, GEN y) void gaffsg(long s, GEN x) int gcmp(GEN x, GEN y) int gequal0(GEN x) int gequal1(GEN x) int gequalX(GEN x) int gequalm1(GEN x) int gcmpsg(long x, GEN y) GEN gcvtop(GEN x, GEN p, long r) int gequal(GEN x, GEN y) int gequalsg(long s, GEN x) long gexpo(GEN x) GEN gpvaluation(GEN x, GEN p) long gvaluation(GEN x, GEN p) int gidentical(GEN x, GEN y) long glength(GEN x) GEN gmax(GEN x, GEN y) GEN gmaxgs(GEN x, long y) GEN gmin(GEN x, GEN y) GEN gmings(GEN x, long y) GEN gneg(GEN x) GEN gneg_i(GEN x) GEN RgX_to_ser(GEN x, long l) GEN RgX_to_ser_inexact(GEN x, long l) int gsigne(GEN x) GEN gtolist(GEN x) long gtolong(GEN x) int lexcmp(GEN x, GEN y) GEN listcreate_typ(long t) GEN listcreate() GEN listinsert(GEN list, GEN object, long index) void listpop(GEN L, long index) void listpop0(GEN L, long index) GEN listput(GEN list, GEN object, long index) GEN listput0(GEN list, GEN object, long index) void listsort(GEN list, long flag) GEN matsize(GEN x) GEN mklist() GEN mklistcopy(GEN x) GEN normalize(GEN x) GEN normalizepol(GEN x) GEN normalizepol_approx(GEN x, long lx) GEN normalizepol_lg(GEN x, long lx) ulong padic_to_Fl(GEN x, ulong p) GEN padic_to_Fp(GEN x, GEN Y) GEN quadtofp(GEN x, long l) GEN rfrac_to_ser(GEN x, long l) long sizedigit(GEN x) long u_lval(ulong x, ulong p) long u_lvalrem(ulong x, ulong p, ulong *py) long u_lvalrem_stop(ulong *n, ulong p, int *stop) long u_pval(ulong x, GEN p) long u_pvalrem(ulong x, GEN p, ulong *py) long vecindexmax(GEN x) long vecindexmin(GEN x) GEN vecmax0(GEN x, GEN *pv) GEN vecmax(GEN x) GEN vecmin0(GEN x, GEN *pv) GEN vecmin(GEN x) long z_lval(long s, ulong p) long z_lvalrem(long s, ulong p, long *py) 
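The gen1.c operations above are type-generic: the same gadd/gdiv/gsqr calls accept integers, fractions, polynomials, series, and so on. A short sketch, assuming an initialized session:

#include <pari/pari.h>

void frac_demo(void)
{
  GEN a = gdiv(stoi(22), stoi(7));   /* t_FRAC 22/7 */
  GEN b = gadd(a, gsqr(a));          /* 22/7 + (22/7)^2 = 638/49 */
  pari_printf("b = %Ps, v_7(b) = %ld\n", b, Q_pval(b, stoi(7)));  /* -2 */
}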
long z_pval(long n, GEN p) long z_pvalrem(long n, GEN p, long *py) # gen3.c GEN padic_to_Q(GEN x) GEN padic_to_Q_shallow(GEN x) GEN QpV_to_QV(GEN v) GEN RgM_mulreal(GEN x, GEN y) GEN RgX_cxeval(GEN T, GEN u, GEN ui) GEN RgX_deflate_max(GEN x0, long *m) long RgX_deflate_order(GEN x) long ZX_deflate_order(GEN x) GEN ZX_deflate_max(GEN x, long *m) long RgX_degree(GEN x, long v) GEN RgX_integ(GEN x) GEN bitprecision0(GEN x, long n) GEN ceil_safe(GEN x) GEN ceilr(GEN x) GEN centerlift(GEN x) GEN centerlift0(GEN x, long v) GEN compo(GEN x, long n) GEN deg1pol(GEN x1, GEN x0, long v) GEN deg1pol_shallow(GEN x1, GEN x0, long v) long degree(GEN x) GEN denom(GEN x) GEN deriv(GEN x, long v) GEN derivser(GEN x) GEN diffop(GEN x, GEN v, GEN dv) GEN diffop0(GEN x, GEN v, GEN dv, long n) GEN diviiround(GEN x, GEN y) GEN divrem(GEN x, GEN y, long v) GEN floor_safe(GEN x) GEN gceil(GEN x) GEN gcvtoi(GEN x, long *e) GEN gdeflate(GEN x, long v, long d) GEN gdivent(GEN x, GEN y) GEN gdiventgs(GEN x, long y) GEN gdiventsg(long x, GEN y) GEN gdiventres(GEN x, GEN y) GEN gdivmod(GEN x, GEN y, GEN *pr) GEN gdivround(GEN x, GEN y) int gdvd(GEN x, GEN y) GEN geq(GEN x, GEN y) GEN geval(GEN x) GEN gfloor(GEN x) GEN gtrunc2n(GEN x, long s) GEN gfrac(GEN x) GEN gge(GEN x, GEN y) GEN ggrando(GEN x, long n) GEN ggt(GEN x, GEN y) GEN gimag(GEN x) GEN gisexactzero(GEN g) GEN gle(GEN x, GEN y) GEN glt(GEN x, GEN y) GEN gmod(GEN x, GEN y) GEN gmodgs(GEN x, long y) GEN gmodsg(long x, GEN y) GEN gmodulo(GEN x, GEN y) GEN gmodulsg(long x, GEN y) GEN gmodulss(long x, long y) GEN gne(GEN x, GEN y) GEN gnot(GEN x) GEN gpolvar(GEN y) GEN gppadicprec(GEN x, GEN p) GEN gppoldegree(GEN x, long v) long gprecision(GEN x) GEN gpserprec(GEN x, long v) GEN greal(GEN x) GEN grndtoi(GEN x, long *e) GEN ground(GEN x) GEN gshift(GEN x, long n) GEN gsubst(GEN x, long v, GEN y) GEN gsubstpol(GEN x, GEN v, GEN y) GEN gsubstvec(GEN x, GEN v, GEN y) GEN gtocol(GEN x) GEN gtocol0(GEN x, long n) GEN gtocolrev(GEN x) GEN gtocolrev0(GEN x, long n) GEN gtopoly(GEN x, long v) GEN gtopolyrev(GEN x, long v) GEN gtoser(GEN x, long v, long precdl) GEN gtovec(GEN x) GEN gtovec0(GEN x, long n) GEN gtovecrev(GEN x) GEN gtovecrev0(GEN x, long n) GEN gtovecsmall(GEN x) GEN gtovecsmall0(GEN x, long n) GEN gtrunc(GEN x) long gvar(GEN x) long gvar2(GEN x) GEN hqfeval(GEN q, GEN x) GEN imag_i(GEN x) GEN integ(GEN x, long v) GEN integser(GEN x) GEN inv_ser(GEN b) int iscomplex(GEN x) int isexactzero(GEN g) int isrationalzeroscalar(GEN g) int isinexact(GEN x) int isinexactreal(GEN x) int isint(GEN n, GEN *ptk) int isrationalzero(GEN g) int issmall(GEN n, long *ptk) GEN lift(GEN x) GEN lift0(GEN x, long v) GEN liftall(GEN x) GEN liftall_shallow(GEN x) GEN liftint(GEN x) GEN liftint_shallow(GEN x) GEN liftpol(GEN x) GEN liftpol_shallow(GEN x) GEN mkcoln(long n, ...) GEN mkintn(long n, ...) GEN mkpoln(long n, ...) GEN mkvecn(long n, ...) GEN mkvecsmalln(long n, ...) 
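A sketch of the gen3.c constructors and conversions just listed, assuming an initialized session (gen_0, gen_1, gen_m1 are libpari's global constants; variable number 0 denotes x):

#include <pari/pari.h>

void poly_demo(void)
{
  GEN P = mkpoln(3, gen_1, gen_0, gen_m1);   /* x^2 - 1, leading coeff first */
  GEN y = gsubst(P, 0, stoi(5));             /* P(5) = 24 */
  GEN m = gmodulo(stoi(7), stoi(10));        /* Mod(7, 10) */
  pari_printf("%Ps %Ps %Ps\n", y, lift(m), gtovec(P));
}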
GEN mulreal(GEN x, GEN y) GEN numer(GEN x) long padicprec(GEN x, GEN p) long padicprec_relative(GEN x) GEN polcoeff0(GEN x, long n, long v) GEN polcoeff_i(GEN x, long n, long v) long poldegree(GEN x, long v) GEN poleval(GEN x, GEN y) GEN pollead(GEN x, long v) long precision(GEN x) GEN precision0(GEN x, long n) GEN qf_apply_RgM(GEN q, GEN M) GEN qf_apply_ZM(GEN q, GEN M) GEN qfbil(GEN x, GEN y, GEN q) GEN qfeval(GEN q, GEN x) GEN qfevalb(GEN q, GEN x, GEN y) GEN qfnorm(GEN x, GEN q) GEN real_i(GEN x) GEN round0(GEN x, GEN *pte) GEN roundr(GEN x) GEN roundr_safe(GEN x) GEN scalarpol(GEN x, long v) GEN scalarpol_shallow(GEN x, long v) GEN scalarser(GEN x, long v, long prec) GEN ser_unscale(GEN P, GEN h) long serprec(GEN x, long v) GEN serreverse(GEN x) GEN sertoser(GEN x, long prec) GEN simplify(GEN x) GEN simplify_shallow(GEN x) GEN tayl(GEN x, long v, long precdl) GEN toser_i(GEN x) GEN trunc0(GEN x, GEN *pte) GEN uu32toi(ulong a, ulong b) GEN vars_sort_inplace(GEN z) GEN vars_to_RgXV(GEN h) GEN variables_vecsmall(GEN x) GEN variables_vec(GEN x) # genus2red.c GEN genus2red(GEN PQ, GEN p) # groupid.c long group_ident(GEN G, GEN S) long group_ident_trans(GEN G, GEN S) # hash.c hashtable *hash_create_ulong(ulong s, long stack) hashtable *hash_create_str(ulong s, long stack) hashtable *hash_create(ulong minsize, ulong (*hash)(void*), int (*eq)(void*, void*), int use_stack) void hash_insert(hashtable *h, void *k, void *v) void hash_insert2(hashtable *h, void *k, void *v, ulong hash) GEN hash_keys(hashtable *h) GEN hash_values(hashtable *h) hashentry *hash_search(hashtable *h, void *k) hashentry *hash_search2(hashtable *h, void *k, ulong hash) hashentry *hash_select(hashtable *h, void *k, void *E, int(*select)(void *, hashentry *)) hashentry *hash_remove(hashtable *h, void *k) hashentry *hash_remove_select(hashtable *h, void *k, void *E, int (*select)(void*, hashentry*)) void hash_destroy(hashtable *h) ulong hash_str(const char *str) ulong hash_str2(const char *s) ulong hash_GEN(GEN x) # hyperell.c GEN hyperellpadicfrobenius(GEN x, ulong p, long e) GEN hyperellcharpoly(GEN x) GEN nfhyperellpadicfrobenius(GEN H, GEN T, ulong p, long n) # hnf_snf.c GEN RgM_hnfall(GEN A, GEN *pB, long remove) GEN ZM_hnf(GEN x) GEN ZM_hnfall(GEN A, GEN *ptB, long remove) GEN ZM_hnfcenter(GEN M) GEN ZM_hnflll(GEN A, GEN *ptB, int remove) GEN ZV_extgcd(GEN A) GEN ZV_snfall(GEN D, GEN *pU, GEN *pV) GEN ZV_snf_group(GEN d, GEN *newU, GEN *newUi) GEN ZM_hnfmod(GEN x, GEN d) GEN ZM_hnfmodall(GEN x, GEN dm, long flag) GEN ZM_hnfmodid(GEN x, GEN d) GEN ZM_hnfperm(GEN A, GEN *ptU, GEN *ptperm) void ZM_snfclean(GEN d, GEN u, GEN v) GEN ZM_snf(GEN x) GEN ZM_snf_group(GEN H, GEN *newU, GEN *newUi) GEN ZM_snfall(GEN x, GEN *ptU, GEN *ptV) GEN ZM_snfall_i(GEN x, GEN *ptU, GEN *ptV, int return_vec) GEN zlm_echelon(GEN x, long early_abort, ulong p, ulong pm) GEN ZpM_echelon(GEN x, long early_abort, GEN p, GEN pm) GEN gsmith(GEN x) GEN gsmithall(GEN x) GEN hnf(GEN x) GEN hnf_divscale(GEN A, GEN B, GEN t) GEN hnf_solve(GEN A, GEN B) GEN hnf_invimage(GEN A, GEN b) GEN hnfall(GEN x) int hnfdivide(GEN A, GEN B) GEN hnflll(GEN x) GEN hnfmerge_get_1(GEN A, GEN B) GEN hnfmod(GEN x, GEN d) GEN hnfmodid(GEN x, GEN p) GEN hnfperm(GEN x) GEN matfrobenius(GEN M, long flag, long v) GEN mathnf0(GEN x, long flag) GEN matsnf0(GEN x, long flag) GEN smith(GEN x) GEN smithall(GEN x) GEN smithclean(GEN z) # ifactor1.c GEN Z_factor(GEN n) GEN Z_factor_limit(GEN n, ulong all) GEN Z_factor_until(GEN n, GEN limit) long Z_issmooth(GEN m, ulong lim) GEN 
Z_issmooth_fact(GEN m, ulong lim) long Z_issquarefree(GEN x) GEN absi_factor(GEN n) GEN absi_factor_limit(GEN n, ulong all) long bigomega(GEN n) long bigomegau(ulong n) GEN core(GEN n) ulong coreu(ulong n) GEN eulerphi(GEN n) ulong eulerphiu(ulong n) ulong eulerphiu_fact(GEN f) GEN factorint(GEN n, long flag) GEN factoru(ulong n) int ifac_isprime(GEN x) int ifac_next(GEN *part, GEN *p, long *e) int ifac_read(GEN part, GEN *p, long *e) void ifac_skip(GEN part) GEN ifac_start(GEN n, int moebius) int is_357_power(GEN x, GEN *pt, ulong *mask) int is_pth_power(GEN x, GEN *pt, forprime_t *T, ulong cutoffbits) long ispowerful(GEN n) long issquarefree(GEN x) long istotient(GEN n, GEN *px) long moebius(GEN n) long moebiusu(ulong n) GEN nextprime(GEN n) GEN numdiv(GEN n) long omega(GEN n) long omegau(ulong n) GEN precprime(GEN n) GEN sumdiv(GEN n) GEN sumdivk(GEN n, long k) ulong tridiv_bound(GEN n) int uis_357_power(ulong x, ulong *pt, ulong *mask) int uis_357_powermod(ulong x, ulong *mask) long uissquarefree(ulong n) long uissquarefree_fact(GEN f) ulong unextprime(ulong n) ulong uprecprime(ulong n) GEN usumdivkvec(ulong n, GEN K) # init.c long timer_delay(pari_timer *T) long timer_get(pari_timer *T) void timer_start(pari_timer *T) int chk_gerepileupto(GEN x) GENbin* copy_bin(GEN x) GENbin* copy_bin_canon(GEN x) void dbg_gerepile(pari_sp av) void dbg_gerepileupto(GEN q) GEN errname(GEN err) GEN gclone(GEN x) GEN gcloneref(GEN x) void gclone_refc(GEN x) GEN gcopy(GEN x) GEN gcopy_avma(GEN x, pari_sp *AVMA) GEN gcopy_lg(GEN x, long lx) GEN gerepile(pari_sp ltop, pari_sp lbot, GEN q) void gerepileallsp(pari_sp av, pari_sp tetpil, int n, ...) void gerepilecoeffssp(pari_sp av, pari_sp tetpil, long *g, int n) void gerepilemanysp(pari_sp av, pari_sp tetpil, GEN* g[], int n) GEN getheap() void gp_context_save(gp_context* rec) void gp_context_restore(gp_context* rec) long gsizeword(GEN x) long gsizebyte(GEN x) void gunclone(GEN x) void gunclone_deep(GEN x) GEN listcopy(GEN x) void timer_printf(pari_timer *T, const char *format, ...) void msgtimer(const char *format, ...) long name_numerr(const char *s) void new_chunk_resize(size_t x) GEN newblock(size_t n) const char * numerr_name(long errnum) GEN obj_check(GEN S, long K) GEN obj_checkbuild(GEN S, long tag, GEN (*build)(GEN)) GEN obj_checkbuild_padicprec(GEN S, long tag, GEN (*build)(GEN, long), long prec) GEN obj_checkbuild_realprec(GEN S, long tag, GEN (*build)(GEN, long), long prec) GEN obj_checkbuild_prec(GEN S, long tag, GEN (*build)(GEN, long), long (*pr)(GEN), long prec) void obj_free(GEN S) GEN obj_init(long d, long n) GEN obj_insert(GEN S, long K, GEN O) GEN obj_insert_shallow(GEN S, long K, GEN O) void pari_add_function(entree *ep) void pari_add_module(entree *ep) void pari_add_defaults_module(entree *ep) void pari_close() void pari_close_opts(ulong init_opts) GEN pari_compile_str(const char *lex) int pari_daemon() void pari_err(int numerr, ...) GEN pari_err_last() char * pari_err2str(GEN err) void pari_init_opts(size_t parisize, ulong maxprime, ulong init_opts) void pari_init(size_t parisize, ulong maxprime) void pari_stackcheck_init(void *pari_stack_base) void pari_sighandler(int sig) void pari_sig_init(void (*f)(int)) void pari_thread_alloc(pari_thread *t, size_t s, GEN arg) void pari_thread_close() void pari_thread_free(pari_thread *t) void pari_thread_init() GEN pari_thread_start(pari_thread *t) void pari_thread_valloc(pari_thread *t, size_t s, size_t v, GEN arg) GEN pari_version() void pari_warn(int numerr, ...) 
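The gerepile* functions above implement libpari's stack discipline: record avma before a computation, then copy the final result down in one sweep. A minimal sketch using gerepileupto, the common wrapper from the level1 inlines (not shown in this listing):

#include <pari/pari.h>

GEN sum_first(long n)
{
  pari_sp av = avma;                 /* remember the stack top */
  GEN s = gen_0;
  long i;
  for (i = 1; i <= n; i++) s = gaddsg(i, s);
  return gerepileupto(av, s);        /* discard intermediates, keep s */
}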
void paristack_newrsize(ulong newsize) void paristack_resize(ulong newsize) void paristack_setsize(size_t rsize, size_t vsize) void parivstack_resize(ulong newsize) void parivstack_reset() GEN trap0(const char *e, GEN f, GEN r) void shiftaddress(GEN x, long dec) void shiftaddress_canon(GEN x, long dec) long timer() long timer2() void traverseheap(void (*f)(GEN, void *), void *data) # intnum.c GEN contfraceval(GEN CF, GEN t, long nlim) GEN contfracinit(GEN M, long lim) GEN intcirc(void *E, GEN (*eval) (void *, GEN), GEN a, GEN R, GEN tab, long prec) GEN intfuncinit(void *E, GEN (*eval) (void *, GEN), GEN a, GEN b, long m, long prec) GEN intnum(void *E, GEN (*eval) (void *, GEN), GEN a, GEN b, GEN tab, long prec) GEN intnumgauss(void *E, GEN (*eval)(void*, GEN), GEN a, GEN b, GEN tab, long prec) GEN intnumgaussinit(long n, long prec) GEN intnuminit(GEN a, GEN b, long m, long prec) GEN intnumromb(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, long flag, long prec) GEN intnumromb_bitprec(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, long flag, long bit) GEN sumnum(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec) GEN sumnuminit(GEN fast, long prec) GEN sumnummonien(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec) GEN sumnummonieninit(GEN asymp, GEN w, GEN n0, long prec) # krasner.c GEN padicfields0(GEN p, GEN n, long flag) GEN padicfields(GEN p, long m, long d, long flag) # kummer.c GEN rnfkummer(GEN bnr, GEN subgroup, long all, long prec) # lfun.c long is_linit(GEN data) GEN ldata_get_an(GEN ldata) GEN ldata_get_dual(GEN ldata) GEN ldata_get_gammavec(GEN ldata) long ldata_get_degree(GEN ldata) long ldata_get_k(GEN ldata) GEN ldata_get_conductor(GEN ldata) GEN ldata_get_rootno(GEN ldata) GEN ldata_get_residue(GEN ldata) long ldata_get_type(GEN ldata) long ldata_isreal(GEN ldata) GEN ldata_vecan(GEN ldata, long L, long prec) long linit_get_type(GEN linit) GEN linit_get_ldata(GEN linit) GEN linit_get_tech(GEN linit) GEN lfun_get_domain(GEN tech) GEN lfun_get_dom(GEN tech) long lfun_get_bitprec(GEN tech) GEN lfun_get_factgammavec(GEN tech) GEN lfun_get_step(GEN tech) GEN lfun_get_pol(GEN tech) GEN lfun_get_Residue(GEN tech) GEN lfun_get_k2(GEN tech) GEN lfun_get_w2(GEN tech) GEN lfun_get_expot(GEN tech) long lfun_get_der(GEN tech) GEN lfun(GEN ldata, GEN s, long bitprec) GEN lfun0(GEN ldata, GEN s, long der, long bitprec) long lfuncheckfeq(GEN data, GEN t0, long bitprec) GEN lfunconductor(GEN data, GEN maxcond, long flag, long bitprec) GEN lfuncost(GEN lmisc, GEN dom, long der, long bitprec) GEN lfuncost0(GEN L, GEN dom, long der, long bitprec) GEN lfuncreate(GEN obj) GEN lfunan(GEN ldata, long L, long prec) GEN lfunhardy(GEN ldata, GEN t, long bitprec) GEN lfuninit(GEN ldata, GEN dom, long der, long bitprec) GEN lfuninit0(GEN ldata, GEN dom, long der, long bitprec) GEN lfuninit_make(long t, GEN ldata, GEN molin, GEN domain) long lfunisvgaell(GEN Vga, long flag) GEN lfunlambda(GEN ldata, GEN s, long bitprec) GEN lfunlambda0(GEN ldata, GEN s, long der, long bitprec) GEN lfunmisc_to_ldata(GEN ldata) GEN lfunmisc_to_ldata_shallow(GEN ldata) long lfunorderzero(GEN ldata, long m, long bitprec) GEN lfunprod_get_fact(GEN tech) GEN lfunrootno(GEN data, long bitprec) GEN lfunrootres(GEN data, long bitprec) GEN lfunrtopoles(GEN r) GEN lfuntheta(GEN data, GEN t, long m, long bitprec) long lfunthetacost0(GEN L, GEN tdom, long m, long bitprec) long lfunthetacost(GEN ldata, GEN tdom, long m, long bitprec) GEN lfunthetainit(GEN ldata, GEN tdom, long
m, long bitprec) GEN lfunthetacheckinit(GEN data, GEN tinf, long m, long *ptbitprec, long fl) GEN lfunzeros(GEN ldata, GEN lim, long divz, long bitprec) int sdomain_isincl(long k, GEN dom, GEN dom0) GEN theta_get_an(GEN tdata) GEN theta_get_K(GEN tdata) GEN theta_get_R(GEN tdata) long theta_get_bitprec(GEN tdata) long theta_get_m(GEN tdata) GEN theta_get_tdom(GEN tdata) GEN theta_get_sqrtN(GEN tdata) # lfunutils.c GEN dirzetak(GEN nf, GEN b) GEN ellmoddegree(GEN e, long bitprec) GEN lfunabelianrelinit(GEN bnfabs, GEN bnf, GEN polrel, GEN dom, long der, long bitprec) GEN lfunartin(GEN N, GEN G, GEN M, long o) GEN lfundiv(GEN ldata1, GEN ldata2, long bitprec) GEN lfunellmfpeters(GEN E, long bitprec) GEN lfunetaquo(GEN eta) GEN lfungenus2(GEN PS) GEN lfunmfspec(GEN lmisc, long bitprec) GEN lfunmfpeters(GEN ldata, long bitprec) GEN lfunmul(GEN ldata1, GEN ldata2, long bitprec) GEN lfunqf(GEN ldata, long prec) GEN lfunsymsq(GEN ldata, GEN known, long prec) GEN lfunsymsqspec(GEN lmisc, long bitprec) GEN lfunzetakinit(GEN pol, GEN dom, long der, long flag, long bitprec) # lll.c GEN ZM_lll_norms(GEN x, double D, long flag, GEN *B) GEN kerint(GEN x) GEN lll(GEN x) GEN lllfp(GEN x, double D, long flag) GEN lllgen(GEN x) GEN lllgram(GEN x) GEN lllgramgen(GEN x) GEN lllgramint(GEN x) GEN lllgramkerim(GEN x) GEN lllgramkerimgen(GEN x) GEN lllint(GEN x) GEN lllintpartial(GEN mat) GEN lllintpartial_inplace(GEN mat) GEN lllkerim(GEN x) GEN lllkerimgen(GEN x) GEN matkerint0(GEN x, long flag) GEN qflll0(GEN x, long flag) GEN qflllgram0(GEN x, long flag) # map.c GEN gtomap(GEN M) void mapdelete(GEN T, GEN a) GEN mapdomain(GEN T) GEN mapdomain_shallow(GEN T) GEN mapget(GEN T, GEN a) int mapisdefined(GEN T, GEN a, GEN *pt_z) void mapput(GEN T, GEN a, GEN b) GEN maptomat(GEN T) GEN maptomat_shallow(GEN T) # mellininv.c double dbllambertW0(double a) double dbllambertW_1(double a) double dbllemma526(double a, double b, double c, double B) double dblcoro526(double a, double c, double B) GEN gammamellininv(GEN Vga, GEN s, long m, long bitprec) GEN gammamellininvasymp(GEN Vga, long nlimmax, long m) GEN gammamellininvinit(GEN Vga, long m, long bitprec) GEN gammamellininvrt(GEN K, GEN s, long bitprec) # members.c GEN member_a1(GEN x) GEN member_a2(GEN x) GEN member_a3(GEN x) GEN member_a4(GEN x) GEN member_a6(GEN x) GEN member_area(GEN x) GEN member_b2(GEN x) GEN member_b4(GEN x) GEN member_b6(GEN x) GEN member_b8(GEN x) GEN member_bid(GEN x) GEN member_bnf(GEN x) GEN member_c4(GEN x) GEN member_c6(GEN x) GEN member_clgp(GEN x) GEN member_codiff(GEN x) GEN member_cyc(GEN clg) GEN member_diff(GEN x) GEN member_disc(GEN x) GEN member_e(GEN x) GEN member_eta(GEN x) GEN member_f(GEN x) GEN member_fu(GEN x) GEN member_futu(GEN x) GEN member_gen(GEN x) GEN member_group(GEN x) GEN member_index(GEN x) GEN member_j(GEN x) GEN member_mod(GEN x) GEN member_nf(GEN x) GEN member_no(GEN clg) GEN member_omega(GEN x) GEN member_orders(GEN x) GEN member_p(GEN x) GEN member_pol(GEN x) GEN member_polabs(GEN x) GEN member_reg(GEN x) GEN member_r1(GEN x) GEN member_r2(GEN x) GEN member_roots(GEN x) GEN member_sign(GEN x) GEN member_t2(GEN x) GEN member_tate(GEN x) GEN member_tufu(GEN x) GEN member_tu(GEN x) GEN member_zk(GEN x) GEN member_zkst(GEN bid) # mp.c GEN addmulii(GEN x, GEN y, GEN z) GEN addmulii_inplace(GEN x, GEN y, GEN z) ulong Fl_inv(ulong x, ulong p) ulong Fl_invgen(ulong x, ulong p, ulong *pg) ulong Fl_invsafe(ulong x, ulong p) int Fp_ratlift(GEN x, GEN m, GEN amax, GEN bmax, GEN *a, GEN *b) int absi_cmp(GEN x, GEN y) int 
absi_equal(GEN x, GEN y) int absr_cmp(GEN x, GEN y) GEN addii_sign(GEN x, long sx, GEN y, long sy) GEN addir_sign(GEN x, long sx, GEN y, long sy) GEN addrr_sign(GEN x, long sx, GEN y, long sy) GEN addsi_sign(long x, GEN y, long sy) GEN addui_sign(ulong x, GEN y, long sy) GEN addsr(long x, GEN y) GEN addumului(ulong a, ulong b, GEN Y) void affir(GEN x, GEN y) void affrr(GEN x, GEN y) GEN bezout(GEN a, GEN b, GEN *u, GEN *v) long cbezout(long a, long b, long *uu, long *vv) GEN cbrtr_abs(GEN x) int cmpii(GEN x, GEN y) int cmprr(GEN x, GEN y) long dblexpo(double x) ulong dblmantissa(double x) GEN dbltor(double x) GEN diviiexact(GEN x, GEN y) GEN divir(GEN x, GEN y) GEN divis(GEN y, long x) GEN divis_rem(GEN x, long y, long *rem) GEN diviu_rem(GEN y, ulong x, ulong *rem) GEN diviuuexact(GEN x, ulong y, ulong z) GEN diviuexact(GEN x, ulong y) GEN divri(GEN x, GEN y) GEN divrr(GEN x, GEN y) GEN divrs(GEN x, long y) GEN divru(GEN x, ulong y) GEN divsi(long x, GEN y) GEN divsr(long x, GEN y) GEN divur(ulong x, GEN y) GEN dvmdii(GEN x, GEN y, GEN *z) int equalii(GEN x, GEN y) int equalrr(GEN x, GEN y) GEN floorr(GEN x) GEN gcdii(GEN x, GEN y) GEN int2n(long n) GEN int2u(ulong n) GEN int_normalize(GEN x, long known_zero_words) int invmod(GEN a, GEN b, GEN *res) ulong invmod2BIL(ulong b) GEN invr(GEN b) GEN mantissa_real(GEN x, long *e) GEN modii(GEN x, GEN y) void modiiz(GEN x, GEN y, GEN z) GEN mulii(GEN x, GEN y) GEN mulir(GEN x, GEN y) GEN mulrr(GEN x, GEN y) GEN mulsi(long x, GEN y) GEN mulsr(long x, GEN y) GEN mulss(long x, long y) GEN mului(ulong x, GEN y) GEN mulur(ulong x, GEN y) GEN muluu(ulong x, ulong y) GEN muluui(ulong x, ulong y, GEN z) GEN remi2n(GEN x, long n) double rtodbl(GEN x) GEN shifti(GEN x, long n) GEN sqri(GEN x) GEN sqrr(GEN x) GEN sqrs(long x) GEN sqrtr_abs(GEN x) GEN sqrtremi(GEN S, GEN *R) GEN sqru(ulong x) GEN subsr(long x, GEN y) GEN truedvmdii(GEN x, GEN y, GEN *z) GEN truedvmdis(GEN x, long y, GEN *z) GEN truedvmdsi(long x, GEN y, GEN *z) GEN trunc2nr(GEN x, long n) GEN mantissa2nr(GEN x, long n) GEN truncr(GEN x) ulong umodiu(GEN y, ulong x) long vals(ulong x) # nffactor.c GEN FpC_ratlift(GEN P, GEN mod, GEN amax, GEN bmax, GEN denom) GEN FpM_ratlift(GEN M, GEN mod, GEN amax, GEN bmax, GEN denom) GEN FpX_ratlift(GEN P, GEN mod, GEN amax, GEN bmax, GEN denom) GEN nffactor(GEN nf, GEN x) GEN nffactormod(GEN nf, GEN pol, GEN pr) GEN nfgcd(GEN P, GEN Q, GEN nf, GEN den) GEN nfgcd_all(GEN P, GEN Q, GEN T, GEN den, GEN *Pnew) int nfissquarefree(GEN nf, GEN x) GEN nfroots(GEN nf, GEN pol) GEN polfnf(GEN a, GEN t) GEN rootsof1(GEN x) GEN rootsof1_kannan(GEN nf) # paricfg.c extern const char *paricfg_datadir extern const char *paricfg_version extern const char *paricfg_buildinfo extern const long paricfg_version_code extern const char *paricfg_vcsversion extern const char *paricfg_compiledate extern const char *paricfg_mt_engine # part.c void forpart(void *E, long call(void*, GEN), long k, GEN nbound, GEN abound) void forpart_init(forpart_t *T, long k, GEN abound, GEN nbound) GEN forpart_next(forpart_t *T) GEN forpart_prev(forpart_t *T) GEN numbpart(GEN x) GEN partitions(long k, GEN nbound, GEN abound) # perm.c GEN abelian_group(GEN G) GEN cyclicgroup(GEN g, long s) GEN cyc_pow(GEN cyc, long exp) GEN cyc_pow_perm(GEN cyc, long exp) GEN dicyclicgroup(GEN g1, GEN g2, long s1, long s2) GEN group_abelianHNF(GEN G, GEN L) GEN group_abelianSNF(GEN G, GEN L) long group_domain(GEN G) GEN group_elts(GEN G, long n) GEN group_export(GEN G, long format) long group_isA4S4(GEN G) long 
group_isabelian(GEN G) GEN group_leftcoset(GEN G, GEN g) long group_order(GEN G) long group_perm_normalize(GEN N, GEN g) GEN group_quotient(GEN G, GEN H) GEN group_rightcoset(GEN G, GEN g) GEN group_set(GEN G, long n) long group_subgroup_isnormal(GEN G, GEN H) GEN group_subgroups(GEN G) GEN groupelts_abelian_group(GEN S) GEN groupelts_center(GEN S) GEN groupelts_set(GEN G, long n) int perm_commute(GEN p, GEN q) GEN perm_cycles(GEN v) long perm_order(GEN perm) GEN perm_pow(GEN perm, long exp) GEN quotient_group(GEN C, GEN G) GEN quotient_perm(GEN C, GEN p) GEN quotient_subgroup_lift(GEN C, GEN H, GEN S) GEN subgroups_tableset(GEN S, long n) long tableset_find_index(GEN tbl, GEN set) GEN trivialgroup() GEN vecperm_orbits(GEN v, long n) GEN vec_insert(GEN v, long n, GEN x) int vec_is1to1(GEN v) int vec_isconst(GEN v) long vecsmall_duplicate(GEN x) long vecsmall_duplicate_sorted(GEN x) GEN vecsmall_indexsort(GEN V) void vecsmall_sort(GEN V) GEN vecsmall_uniq(GEN V) GEN vecsmall_uniq_sorted(GEN V) GEN vecvecsmall_indexsort(GEN x) long vecvecsmall_search(GEN x, GEN y, long flag) GEN vecvecsmall_sort(GEN x) GEN vecvecsmall_sort_uniq(GEN x) # mt.c void mt_broadcast(GEN code) void mt_sigint_block() void mt_sigint_unblock() void mt_queue_end(pari_mt *pt) GEN mt_queue_get(pari_mt *pt, long *jobid, long *pending) void mt_queue_start(pari_mt *pt, GEN worker) void mt_queue_submit(pari_mt *pt, long jobid, GEN work) void pari_mt_init() void pari_mt_close() # plotport.c void color_to_rgb(GEN c, int *r, int *g, int *b) void colorname_to_rgb(const char *s, int *r, int *g, int *b) void pari_plot_by_file(const char *env, const char *suf, const char *img) void pari_set_plot_engine(void (*plot)(PARI_plot *)) void pari_kill_plot_engine() void plotbox(long ne, GEN gx2, GEN gy2) void plotclip(long rect) void plotcolor(long ne, long color) void plotcopy(long source, long dest, GEN xoff, GEN yoff, long flag) GEN plotcursor(long ne) void plotdraw(GEN list, long flag) GEN ploth(void *E, GEN(*f)(void*, GEN), GEN a, GEN b, long flags, long n, long prec) GEN plothraw(GEN listx, GEN listy, long flag) GEN plothsizes(long flag) void plotinit(long ne, GEN x, GEN y, long flag) void plotkill(long ne) void plotline(long ne, GEN gx2, GEN gy2) void plotlines(long ne, GEN listx, GEN listy, long flag) void plotlinetype(long ne, long t) void plotmove(long ne, GEN x, GEN y) void plotpoints(long ne, GEN listx, GEN listy) void plotpointsize(long ne, GEN size) void plotpointtype(long ne, long t) void plotrbox(long ne, GEN x2, GEN y2) GEN plotrecth(void *E, GEN(*f)(void*, GEN), long ne, GEN a, GEN b, ulong flags, long n, long prec) GEN plotrecthraw(long ne, GEN data, long flags) void plotrline(long ne, GEN x2, GEN y2) void plotrmove(long ne, GEN x, GEN y) void plotrpoint(long ne, GEN x, GEN y) void plotscale(long ne, GEN x1, GEN x2, GEN y1, GEN y2) void plotstring(long ne, char *x, long dir) void psdraw(GEN list, long flag) GEN psploth(void *E, GEN(*f)(void*, GEN), GEN a, GEN b, long flags, long n, long prec) GEN psplothraw(GEN listx, GEN listy, long flag) char* rect2ps(GEN w, GEN x, GEN y, PARI_plot *T) char* rect2ps_i(GEN w, GEN x, GEN y, PARI_plot *T, int plotps) char* rect2svg(GEN w, GEN x, GEN y, PARI_plot *T) # polarit1.c GEN ZX_Zp_root(GEN f, GEN a, GEN p, long prec) GEN Zp_appr(GEN f, GEN a) GEN factorpadic(GEN x, GEN p, long r) GEN gdeuc(GEN x, GEN y) GEN grem(GEN x, GEN y) GEN padicappr(GEN f, GEN a) GEN poldivrem(GEN x, GEN y, GEN *pr) GEN rootpadic(GEN f, GEN p, long r) # polarit2.c GEN FpV_factorback(GEN L, GEN e, GEN p) GEN 
Q_content(GEN x) GEN Q_denom(GEN x) GEN Q_div_to_int(GEN x, GEN c) GEN Q_gcd(GEN x, GEN y) GEN Q_mul_to_int(GEN x, GEN c) GEN Q_muli_to_int(GEN x, GEN d) GEN Q_primitive_part(GEN x, GEN *ptc) GEN Q_primpart(GEN x) GEN Q_remove_denom(GEN x, GEN *ptd) GEN RgXQ_charpoly(GEN x, GEN T, long v) GEN RgXQ_inv(GEN x, GEN y) GEN RgX_disc(GEN x) GEN RgX_extgcd(GEN x, GEN y, GEN *U, GEN *V) GEN RgX_extgcd_simple(GEN a, GEN b, GEN *pu, GEN *pv) GEN RgX_gcd(GEN x, GEN y) GEN RgX_gcd_simple(GEN x, GEN y) int RgXQ_ratlift(GEN y, GEN x, long amax, long bmax, GEN *P, GEN *Q) GEN RgX_resultant_all(GEN P, GEN Q, GEN *sol) long RgX_sturmpart(GEN x, GEN ab) long RgX_type(GEN x, GEN *ptp, GEN *ptpol, long *ptpa) void RgX_type_decode(long x, long *t1, long *t2) int RgX_type_is_composite(long t) GEN ZX_content(GEN x) GEN centermod(GEN x, GEN p) GEN centermod_i(GEN x, GEN p, GEN ps2) GEN centermodii(GEN x, GEN p, GEN po2) GEN content(GEN x) GEN deg1_from_roots(GEN L, long v) GEN factor(GEN x) GEN factor0(GEN x, long flag) GEN factorback(GEN fa) GEN factorback2(GEN fa, GEN e) GEN famat_mul_shallow(GEN f, GEN g) GEN gbezout(GEN x, GEN y, GEN *u, GEN *v) GEN gdivexact(GEN x, GEN y) GEN gen_factorback(GEN L, GEN e, GEN (*_mul)(void*, GEN, GEN), GEN (*_pow)(void*, GEN, GEN), void *data) GEN ggcd(GEN x, GEN y) GEN ggcd0(GEN x, GEN y) GEN ginvmod(GEN x, GEN y) GEN glcm(GEN x, GEN y) GEN glcm0(GEN x, GEN y) GEN gp_factor0(GEN x, GEN flag) GEN idealfactorback(GEN nf, GEN L, GEN e, int red) GEN newtonpoly(GEN x, GEN p) GEN nffactorback(GEN nf, GEN L, GEN e) GEN nfrootsQ(GEN x) GEN poldisc0(GEN x, long v) GEN polresultant0(GEN x, GEN y, long v, long flag) GEN polsym(GEN x, long n) GEN primitive_part(GEN x, GEN *c) GEN primpart(GEN x) GEN reduceddiscsmith(GEN pol) GEN resultant2(GEN x, GEN y) GEN resultant_all(GEN u, GEN v, GEN *sol) GEN rnfcharpoly(GEN nf, GEN T, GEN alpha, long v) GEN roots_from_deg1(GEN x) GEN roots_to_pol(GEN a, long v) GEN roots_to_pol_r1(GEN a, long v, long r1) long sturmpart(GEN x, GEN a, GEN b) GEN subresext(GEN x, GEN y, GEN *U, GEN *V) GEN sylvestermatrix(GEN x, GEN y) GEN trivial_fact() GEN gcdext0(GEN x, GEN y) GEN polresultantext0(GEN x, GEN y, long v) GEN polresultantext(GEN x, GEN y) GEN prime_fact(GEN x) # polarit3.c GEN Flx_FlxY_resultant(GEN a, GEN b, ulong pp) GEN Flx_factorff_irred(GEN P, GEN Q, ulong p) void Flx_ffintersect(GEN P, GEN Q, long n, ulong l, GEN *SP, GEN *SQ, GEN MA, GEN MB) GEN Flx_ffisom(GEN P, GEN Q, ulong l) GEN Flx_roots_naive(GEN f, ulong p) GEN FlxX_resultant(GEN u, GEN v, ulong p, long sx) GEN Flxq_ffisom_inv(GEN S, GEN Tp, ulong p) GEN FpX_FpXY_resultant(GEN a, GEN b0, GEN p) GEN FpX_factorff_irred(GEN P, GEN Q, GEN p) void FpX_ffintersect(GEN P, GEN Q, long n, GEN l, GEN *SP, GEN *SQ, GEN MA, GEN MB) GEN FpX_ffisom(GEN P, GEN Q, GEN l) GEN FpX_translate(GEN P, GEN c, GEN p) GEN FpXQ_ffisom_inv(GEN S, GEN Tp, GEN p) GEN FpXQX_normalize(GEN z, GEN T, GEN p) GEN FpXV_FpC_mul(GEN V, GEN W, GEN p) GEN FpXY_Fq_evaly(GEN Q, GEN y, GEN T, GEN p, long vx) GEN Fq_Fp_mul(GEN x, GEN y, GEN T, GEN p) GEN Fq_add(GEN x, GEN y, GEN T, GEN p) GEN Fq_div(GEN x, GEN y, GEN T, GEN p) GEN Fq_halve(GEN x, GEN T, GEN p) GEN Fq_inv(GEN x, GEN T, GEN p) GEN Fq_invsafe(GEN x, GEN T, GEN p) GEN Fq_mul(GEN x, GEN y, GEN T, GEN p) GEN Fq_mulu(GEN x, ulong y, GEN T, GEN p) GEN Fq_neg(GEN x, GEN T, GEN p) GEN Fq_neg_inv(GEN x, GEN T, GEN p) GEN Fq_pow(GEN x, GEN n, GEN T, GEN p) GEN Fq_powu(GEN x, ulong n, GEN pol, GEN p) GEN Fq_sub(GEN x, GEN y, GEN T, GEN p) GEN Fq_sqr(GEN x, GEN T, GEN p) GEN 
Fq_sqrt(GEN x, GEN T, GEN p) GEN Fq_sqrtn(GEN x, GEN n, GEN T, GEN p, GEN *zeta) GEN FqC_add(GEN x, GEN y, GEN T, GEN p) GEN FqC_sub(GEN x, GEN y, GEN T, GEN p) GEN FqC_Fq_mul(GEN x, GEN y, GEN T, GEN p) GEN FqC_to_FlxC(GEN v, GEN T, GEN pp) GEN FqM_to_FlxM(GEN x, GEN T, GEN pp) GEN FqV_roots_to_pol(GEN V, GEN T, GEN p, long v) GEN FqV_red(GEN z, GEN T, GEN p) GEN FqV_to_FlxV(GEN v, GEN T, GEN pp) GEN FqX_Fq_add(GEN y, GEN x, GEN T, GEN p) GEN FqX_Fq_mul_to_monic(GEN P, GEN U, GEN T, GEN p) GEN FqX_eval(GEN x, GEN y, GEN T, GEN p) GEN FqX_translate(GEN P, GEN c, GEN T, GEN p) GEN FqXQ_powers(GEN x, long l, GEN S, GEN T, GEN p) GEN FqXQ_matrix_pow(GEN y, long n, long m, GEN S, GEN T, GEN p) GEN FqXY_eval(GEN Q, GEN y, GEN x, GEN T, GEN p) GEN FqXY_evalx(GEN Q, GEN x, GEN T, GEN p) GEN QX_disc(GEN x) GEN QX_gcd(GEN a, GEN b) GEN QX_resultant(GEN A, GEN B) GEN QXQ_intnorm(GEN A, GEN B) GEN QXQ_inv(GEN A, GEN B) GEN QXQ_norm(GEN A, GEN B) int Rg_is_Fp(GEN x, GEN *p) int Rg_is_FpXQ(GEN x, GEN *pT, GEN *pp) GEN Rg_to_Fp(GEN x, GEN p) GEN Rg_to_FpXQ(GEN x, GEN T, GEN p) GEN RgC_to_FpC(GEN x, GEN p) int RgM_is_FpM(GEN x, GEN *p) GEN RgM_to_Flm(GEN x, ulong p) GEN RgM_to_FpM(GEN x, GEN p) int RgV_is_FpV(GEN x, GEN *p) GEN RgV_to_Flv(GEN x, ulong p) GEN RgV_to_FpV(GEN x, GEN p) int RgX_is_FpX(GEN x, GEN *p) GEN RgX_to_FpX(GEN x, GEN p) int RgX_is_FpXQX(GEN x, GEN *pT, GEN *pp) GEN RgX_to_FpXQX(GEN x, GEN T, GEN p) GEN RgX_to_FqX(GEN x, GEN T, GEN p) GEN ZX_ZXY_rnfequation(GEN A, GEN B, long *Lambda) GEN ZXQ_charpoly(GEN A, GEN T, long v) GEN ZX_disc(GEN x) int ZX_is_squarefree(GEN x) GEN ZX_gcd(GEN A, GEN B) GEN ZX_gcd_all(GEN A, GEN B, GEN *Anew) GEN ZX_resultant(GEN A, GEN B) int Z_incremental_CRT(GEN *H, ulong Hp, GEN *q, ulong p) GEN Z_init_CRT(ulong Hp, ulong p) int ZM_incremental_CRT(GEN *H, GEN Hp, GEN *q, ulong p) GEN ZM_init_CRT(GEN Hp, ulong p) int ZX_incremental_CRT(GEN *ptH, GEN Hp, GEN *q, ulong p) GEN ZX_init_CRT(GEN Hp, ulong p, long v) GEN characteristic(GEN x) GEN ffinit(GEN p, long n, long v) GEN ffnbirred(GEN p, long n) GEN ffnbirred0(GEN p, long n, long flag) GEN ffsumnbirred(GEN p, long n) const bb_field *get_Fq_field(void **E, GEN T, GEN p) GEN init_Fq(GEN p, long n, long v) GEN pol_x_powers(long N, long v) GEN residual_characteristic(GEN x) # polclass.c GEN polclass(GEN D, long inv, long xvar) # polmodular.c GEN Flm_Fl_polmodular_evalx(GEN phi, long L, ulong j, ulong p, ulong pi) GEN Fp_polmodular_evalx(long L, long inv, GEN J, GEN P, long v, int compute_derivs) GEN polmodular(long L, long inv, GEN x, long yvar, long compute_derivs) GEN polmodular_ZM(long L, long inv) GEN polmodular_ZXX(long L, long inv, long xvar, long yvar) # prime.c long BPSW_isprime(GEN x) long BPSW_psp(GEN N) GEN addprimes(GEN primes) GEN gisprime(GEN x, long flag) GEN gispseudoprime(GEN x, long flag) GEN gprimepi_upper_bound(GEN x) GEN gprimepi_lower_bound(GEN x) long isprime(GEN x) long ispseudoprime(GEN x, long flag) long millerrabin(GEN n, long k) GEN prime(long n) GEN primepi(GEN x) double primepi_upper_bound(double x) double primepi_lower_bound(double x) GEN primes(long n) GEN primes_interval(GEN a, GEN b) GEN primes_interval_zv(ulong a, ulong b) GEN primes_upto_zv(ulong b) GEN primes0(GEN n) GEN primes_zv(long m) GEN randomprime(GEN N) GEN removeprimes(GEN primes) int uislucaspsp(ulong n) int uisprime(ulong n) ulong uprime(long n) ulong uprimepi(ulong n) # qfisom.c GEN qfauto(GEN g, GEN flags) GEN qfauto0(GEN g, GEN flags) GEN qfautoexport(GEN g, long flag) GEN qfisom(GEN g, GEN h, GEN flags) GEN 
qfisom0(GEN g, GEN h, GEN flags) GEN qfisominit(GEN g, GEN flags, GEN minvec) GEN qfisominit0(GEN g, GEN flags, GEN minvec) GEN qforbits(GEN G, GEN V) # qfparam.c GEN qfsolve(GEN G) GEN qfparam(GEN G, GEN sol, long fl) # random.c GEN genrand(GEN N) GEN getrand() ulong pari_rand() GEN randomi(GEN x) GEN randomr(long prec) ulong random_Fl(ulong n) long random_bits(long k) void setrand(GEN seed) # rootpol.c GEN QX_complex_roots(GEN p, long l) GEN ZX_graeffe(GEN p) GEN cleanroots(GEN x, long l) double fujiwara_bound(GEN p) double fujiwara_bound_real(GEN p, long sign) int isrealappr(GEN x, long l) GEN polgraeffe(GEN p) GEN polmod_to_embed(GEN x, long prec) GEN roots(GEN x, long l) GEN realroots(GEN P, GEN ab, long prec) long ZX_sturm(GEN P) long ZX_sturmpart(GEN P, GEN ab) GEN ZX_Uspensky(GEN P, GEN ab, long flag, long prec) # subcyclo.c GEN factor_Aurifeuille(GEN p, long n) GEN factor_Aurifeuille_prime(GEN p, long n) GEN galoissubcyclo(GEN N, GEN sg, long flag, long v) GEN polsubcyclo(long n, long d, long v) # subfield.c GEN nfsubfields(GEN nf, long d) # subgroup.c GEN subgrouplist(GEN cyc, GEN bound) void forsubgroup(void *E, long fun(void*, GEN), GEN cyc, GEN B) # stark.c GEN bnrL1(GEN bnr, GEN sbgrp, long flag, long prec) GEN bnrrootnumber(GEN bnr, GEN chi, long flag, long prec) GEN bnrstark(GEN bnr, GEN subgroup, long prec) GEN cyc2elts(GEN cyc) # sumiter.c GEN asympnum(void *E, GEN (*f)(void *, GEN, long), long muli, GEN alpha, long prec) GEN derivnum(void *E, GEN (*eval)(void *, GEN, long prec), GEN x, long prec) GEN derivfun(void *E, GEN (*eval)(void *, GEN, long prec), GEN x, long prec) int forcomposite_init(forcomposite_t *C, GEN a, GEN b) GEN forcomposite_next(forcomposite_t *C) GEN forprime_next(forprime_t *T) int forprime_init(forprime_t *T, GEN a, GEN b) int forvec_init(forvec_t *T, GEN x, long flag) GEN forvec_next(forvec_t *T) GEN limitnum(void *E, GEN (*f)(void *, GEN, long), long muli, GEN alpha, long prec) GEN polzag(long n, long m) GEN prodeuler(void *E, GEN (*eval)(void *, GEN), GEN ga, GEN gb, long prec) GEN prodinf(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN prodinf1(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumalt(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumalt2(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumpos(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN sumpos2(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) GEN suminf(void *E, GEN (*eval)(void *, GEN), GEN a, long prec) ulong u_forprime_next(forprime_t *T) int u_forprime_init(forprime_t *T, ulong a, ulong b) void u_forprime_restrict(forprime_t *T, ulong c) int u_forprime_arith_init(forprime_t *T, ulong a, ulong b, ulong c, ulong q) GEN zbrent(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, long prec) GEN solvestep(void *E, GEN (*eval)(void *, GEN), GEN a, GEN b, GEN step, long flag, long prec) # thue.c GEN bnfisintnorm(GEN x, GEN y) GEN bnfisintnormabs(GEN bnf, GEN a) GEN thue(GEN thueres, GEN rhs, GEN ne) GEN thueinit(GEN pol, long flag, long prec) # trans1.c GEN Pi2n(long n, long prec) GEN PiI2(long prec) GEN PiI2n(long n, long prec) GEN Qp_exp(GEN x) GEN Qp_log(GEN x) GEN Qp_sqrt(GEN x) GEN Qp_sqrtn(GEN x, GEN n, GEN *zetan) long Zn_ispower(GEN a, GEN q, GEN K, GEN *pt) long Zn_issquare(GEN x, GEN n) GEN Zn_sqrt(GEN x, GEN n) GEN Zp_teichmuller(GEN x, GEN p, long n, GEN q) GEN agm(GEN x, GEN y, long prec) GEN constcatalan(long prec) GEN consteuler(long prec) GEN constlog2(long prec) GEN constpi(long prec) GEN cxexpm1(GEN z, long prec) GEN 
expIr(GEN x) GEN exp1r_abs(GEN x) GEN gcos(GEN x, long prec) GEN gcotan(GEN x, long prec) GEN gcotanh(GEN x, long prec) GEN gexp(GEN x, long prec) GEN gexpm1(GEN x, long prec) GEN glog(GEN x, long prec) GEN gpow(GEN x, GEN n, long prec) GEN gpowers(GEN x, long n) GEN gpowers0(GEN x, long n, GEN x0) GEN gpowgs(GEN x, long n) GEN grootsof1(long N, long prec) GEN gsin(GEN x, long prec) GEN gsinc(GEN x, long prec) void gsincos(GEN x, GEN *s, GEN *c, long prec) GEN gsqrpowers(GEN q, long n) GEN gsqrt(GEN x, long prec) GEN gsqrtn(GEN x, GEN n, GEN *zetan, long prec) GEN gtan(GEN x, long prec) GEN logr_abs(GEN x) GEN mpcos(GEN x) GEN mpeuler(long prec) GEN mpcatalan(long prec) void mpsincosm1(GEN x, GEN *s, GEN *c) GEN mpexp(GEN x) GEN mpexpm1(GEN x) GEN mplog(GEN x) GEN mplog2(long prec) GEN mppi(long prec) GEN mpsin(GEN x) void mpsincos(GEN x, GEN *s, GEN *c) GEN powersr(GEN a, long n) GEN powis(GEN x, long n) GEN powiu(GEN p, ulong k) GEN powrfrac(GEN x, long n, long d) GEN powrs(GEN x, long n) GEN powrshalf(GEN x, long s) GEN powru(GEN x, ulong n) GEN powruhalf(GEN x, ulong s) GEN powuu(ulong p, ulong k) GEN powgi(GEN x, GEN n) GEN serchop0(GEN s) GEN sqrtnint(GEN a, long n) GEN teich(GEN x) GEN teichmullerinit(long p, long n) GEN teichmuller(GEN x, GEN tab) GEN trans_eval(const char *fun, GEN (*f) (GEN, long), GEN x, long prec) ulong upowuu(ulong p, ulong k) ulong usqrtn(ulong a, ulong n) ulong usqrt(ulong a) # trans2.c GEN Qp_gamma(GEN x) GEN bernfrac(long n) GEN bernpol(long k, long v) GEN bernreal(long n, long prec) GEN gacosh(GEN x, long prec) GEN gacos(GEN x, long prec) GEN garg(GEN x, long prec) GEN gasinh(GEN x, long prec) GEN gasin(GEN x, long prec) GEN gatan(GEN x, long prec) GEN gatanh(GEN x, long prec) GEN gcosh(GEN x, long prec) GEN ggammah(GEN x, long prec) GEN ggamma(GEN x, long prec) GEN ggamma1m1(GEN x, long prec) GEN glngamma(GEN x, long prec) GEN gpsi(GEN x, long prec) GEN gsinh(GEN x, long prec) GEN gtanh(GEN x, long prec) void mpbern(long nomb, long prec) GEN mpfactr(long n, long prec) GEN sumformal(GEN T, long v) # trans3.c double dblmodulus(GEN x) GEN dilog(GEN x, long prec) GEN eint1(GEN x, long prec) GEN eta(GEN x, long prec) GEN eta0(GEN x, long flag, long prec) GEN gerfc(GEN x, long prec) GEN glambertW(GEN y, long prec) GEN gpolylog(long m, GEN x, long prec) GEN gzeta(GEN x, long prec) GEN hbessel1(GEN n, GEN z, long prec) GEN hbessel2(GEN n, GEN z, long prec) GEN hyperu(GEN a, GEN b, GEN gx, long prec) GEN ibessel(GEN n, GEN z, long prec) GEN incgam(GEN a, GEN x, long prec) GEN incgam0(GEN a, GEN x, GEN z, long prec) GEN incgamc(GEN a, GEN x, long prec) GEN jbessel(GEN n, GEN z, long prec) GEN jbesselh(GEN n, GEN z, long prec) GEN jell(GEN x, long prec) GEN kbessel(GEN nu, GEN gx, long prec) GEN mpeint1(GEN x, GEN expx) GEN mplambertW(GEN y) GEN mpveceint1(GEN C, GEN eC, long n) GEN nbessel(GEN n, GEN z, long prec) GEN polylog0(long m, GEN x, long flag, long prec) GEN sumdedekind(GEN h, GEN k) GEN sumdedekind_coprime(GEN h, GEN k) GEN szeta(long x, long prec) GEN theta(GEN q, GEN z, long prec) GEN thetanullk(GEN q, long k, long prec) GEN trueeta(GEN x, long prec) GEN u_sumdedekind_coprime(long h, long k) GEN veceint1(GEN nmax, GEN C, long prec) GEN vecthetanullk(GEN q, long k, long prec) GEN vecthetanullk_tau(GEN tau, long k, long prec) GEN veczeta(GEN a, GEN b, long N, long prec) GEN weber0(GEN x, long flag, long prec) GEN weberf(GEN x, long prec) GEN weberf1(GEN x, long prec) GEN weberf2(GEN x, long prec) # modsym.c GEN Qevproj_apply(GEN T, GEN pro) GEN 
Qevproj_apply_vecei(GEN T, GEN pro, long k) GEN Qevproj_init(GEN M) GEN RgX_act_Gl2Q(GEN g, long k) GEN RgX_act_ZGl2Q(GEN z, long k) void checkms(GEN W) void checkmspadic(GEN W) GEN ellpadicL(GEN E, GEN p, long n, GEN s, long r, GEN D) GEN msfromcusp(GEN W, GEN c) GEN msfromell(GEN E, long signe) GEN msfromhecke(GEN W, GEN v, GEN H) long msgetlevel(GEN W) long msgetsign(GEN W) long msgetweight(GEN W) GEN msatkinlehner(GEN W, long Q, GEN) GEN mscuspidal(GEN W, long flag) GEN mseisenstein(GEN W) GEN mseval(GEN W, GEN s, GEN p) GEN mshecke(GEN W, long p, GEN H) GEN msinit(GEN N, GEN k, long sign) long msissymbol(GEN W, GEN s) GEN msomseval(GEN W, GEN phi, GEN path) GEN mspadicinit(GEN W, long p, long n, long flag) GEN mspadicL(GEN oms, GEN s, long r) GEN mspadicmoments(GEN W, GEN phi, long D) GEN mspadicseries(GEN M, long teichi) GEN mspathgens(GEN W) GEN mspathlog(GEN W, GEN path) GEN msnew(GEN W) GEN msstar(GEN W, GEN) GEN msqexpansion(GEN W, GEN proV, ulong B) GEN mssplit(GEN W, GEN H, long deglim) GEN mstooms(GEN W, GEN phi) # zetamult.c GEN zetamult(GEN avec, long prec) # level1.h ulong Fl_add(ulong a, ulong b, ulong p) ulong Fl_addmul_pre(ulong x0, ulong x1, ulong y0, ulong p, ulong pi) ulong Fl_addmulmul_pre(ulong x0, ulong y0, ulong x1, ulong y1, ulong p, ulong pi) long Fl_center(ulong u, ulong p, ulong ps2) ulong Fl_div(ulong a, ulong b, ulong p) ulong Fl_double(ulong a, ulong p) ulong Fl_ellj_pre(ulong a4, ulong a6, ulong p, ulong pi) ulong Fl_halve(ulong y, ulong p) ulong Fl_mul(ulong a, ulong b, ulong p) ulong Fl_mul_pre(ulong a, ulong b, ulong p, ulong pi) ulong Fl_neg(ulong x, ulong p) ulong Fl_sqr(ulong a, ulong p) ulong Fl_sqr_pre(ulong a, ulong p, ulong pi) ulong Fl_sub(ulong a, ulong b, ulong p) ulong Fl_triple(ulong a, ulong p) ulong Mod2(GEN x) ulong Mod4(GEN x) ulong Mod8(GEN x) ulong Mod16(GEN x) ulong Mod32(GEN x) ulong Mod64(GEN x) int abscmpiu(GEN x, ulong y) int abscmpui(ulong x, GEN y) int absequaliu(GEN x, ulong y) GEN absi(GEN x) GEN absi_shallow(GEN x) GEN absr(GEN x) int absrnz_equal1(GEN x) int absrnz_equal2n(GEN x) GEN addii(GEN x, GEN y) void addiiz(GEN x, GEN y, GEN z) GEN addir(GEN x, GEN y) void addirz(GEN x, GEN y, GEN z) GEN addis(GEN x, long s) GEN addri(GEN x, GEN y) void addriz(GEN x, GEN y, GEN z) GEN addrr(GEN x, GEN y) void addrrz(GEN x, GEN y, GEN z) GEN addrs(GEN x, long s) GEN addsi(long x, GEN y) void addsiz(long s, GEN y, GEN z) void addsrz(long s, GEN y, GEN z) GEN addss(long x, long y) void addssz(long s, long y, GEN z) GEN adduu(ulong x, ulong y) void affgr(GEN x, GEN y) void affii(GEN x, GEN y) void affiz(GEN x, GEN y) void affrr_fixlg(GEN y, GEN z) void affsi(long s, GEN x) void affsr(long s, GEN x) void affsz(long x, GEN y) void affui(ulong s, GEN x) void affur(ulong s, GEN x) GEN cgetg(long x, long y) GEN cgetg_block(long x, long y) GEN cgetg_copy(GEN x, long *plx) GEN cgeti(long x) GEN cgetineg(long x) GEN cgetipos(long x) GEN cgetr(long x) GEN cgetr_block(long prec) int cmpir(GEN x, GEN y) int cmpis(GEN x, long y) int cmpri(GEN x, GEN y) int cmprs(GEN x, long y) int cmpsi(long x, GEN y) int cmpsr(long x, GEN y) GEN cxtofp(GEN x, long prec) GEN divii(GEN a, GEN b) void diviiz(GEN x, GEN y, GEN z) void divirz(GEN x, GEN y, GEN z) void divisz(GEN x, long s, GEN z) void divriz(GEN x, GEN y, GEN z) void divrrz(GEN x, GEN y, GEN z) void divrsz(GEN y, long s, GEN z) GEN divsi_rem(long x, GEN y, long *rem) void divsiz(long x, GEN y, GEN z) void divsrz(long s, GEN y, GEN z) GEN divss(long x, long y) GEN divss_rem(long x, long y, long *rem) 
void divssz(long x, long y, GEN z) int dvdii(GEN x, GEN y) int dvdiiz(GEN x, GEN y, GEN z) int dvdis(GEN x, long y) int dvdisz(GEN x, long y, GEN z) int dvdiu(GEN x, ulong y) int dvdiuz(GEN x, ulong y, GEN z) int dvdsi(long x, GEN y) int dvdui(ulong x, GEN y) void dvmdiiz(GEN x, GEN y, GEN z, GEN t) GEN dvmdis(GEN x, long y, GEN *z) void dvmdisz(GEN x, long y, GEN z, GEN t) long dvmdsBIL(long n, long *r) GEN dvmdsi(long x, GEN y, GEN *z) void dvmdsiz(long x, GEN y, GEN z, GEN t) GEN dvmdss(long x, long y, GEN *z) void dvmdssz(long x, long y, GEN z, GEN t) ulong dvmduBIL(ulong n, ulong *r) int equalis(GEN x, long y) int equalsi(long x, GEN y) int absequalui(ulong x, GEN y) ulong ceildivuu(ulong a, ulong b) long evalexpo(long x) long evallg(long x) long evalprecp(long x) long evalvalp(long x) long expi(GEN x) long expu(ulong x) void fixlg(GEN z, long ly) GEN fractor(GEN x, long prec) GEN icopy(GEN x) GEN icopyspec(GEN x, long nx) GEN icopy_avma(GEN x, pari_sp av) ulong int_bit(GEN x, long n) GEN itor(GEN x, long prec) long itos(GEN x) long itos_or_0(GEN x) ulong itou(GEN x) ulong itou_or_0(GEN x) GEN leafcopy(GEN x) GEN leafcopy_avma(GEN x, pari_sp av) double maxdd(double x, double y) long maxss(long x, long y) long maxuu(ulong x, ulong y) double mindd(double x, double y) long minss(long x, long y) long minuu(ulong x, ulong y) long mod16(GEN x) long mod2(GEN x) ulong mod2BIL(GEN x) long mod32(GEN x) long mod4(GEN x) long mod64(GEN x) long mod8(GEN x) GEN modis(GEN x, long y) void modisz(GEN y, long s, GEN z) GEN modsi(long x, GEN y) void modsiz(long s, GEN y, GEN z) GEN modss(long x, long y) void modssz(long s, long y, GEN z) GEN mpabs(GEN x) GEN mpabs_shallow(GEN x) GEN mpadd(GEN x, GEN y) void mpaddz(GEN x, GEN y, GEN z) void mpaff(GEN x, GEN y) GEN mpceil(GEN x) int mpcmp(GEN x, GEN y) GEN mpcopy(GEN x) GEN mpdiv(GEN x, GEN y) long mpexpo(GEN x) GEN mpfloor(GEN x) GEN mpmul(GEN x, GEN y) void mpmulz(GEN x, GEN y, GEN z) GEN mpneg(GEN x) int mpodd(GEN x) GEN mpround(GEN x) GEN mpshift(GEN x, long s) GEN mpsqr(GEN x) GEN mpsub(GEN x, GEN y) void mpsubz(GEN x, GEN y, GEN z) GEN mptrunc(GEN x) void muliiz(GEN x, GEN y, GEN z) void mulirz(GEN x, GEN y, GEN z) GEN mulis(GEN x, long s) GEN muliu(GEN x, ulong s) GEN mulri(GEN x, GEN s) void mulriz(GEN x, GEN y, GEN z) void mulrrz(GEN x, GEN y, GEN z) GEN mulrs(GEN x, long s) GEN mulru(GEN x, ulong s) void mulsiz(long s, GEN y, GEN z) void mulsrz(long s, GEN y, GEN z) void mulssz(long s, long y, GEN z) GEN negi(GEN x) GEN negr(GEN x) GEN new_chunk(size_t x) GEN rcopy(GEN x) GEN rdivii(GEN x, GEN y, long prec) void rdiviiz(GEN x, GEN y, GEN z) GEN rdivis(GEN x, long y, long prec) GEN rdivsi(long x, GEN y, long prec) GEN rdivss(long x, long y, long prec) GEN real2n(long n, long prec) GEN real_m2n(long n, long prec) GEN real_0(long prec) GEN real_0_bit(long bitprec) GEN real_1(long prec) GEN real_1_bit(long bit) GEN real_m1(long prec) GEN remii(GEN a, GEN b) void remiiz(GEN x, GEN y, GEN z) GEN remis(GEN x, long y) void remisz(GEN y, long s, GEN z) ulong remlll_pre(ulong u2, ulong u1, ulong u0, ulong p, ulong pi) GEN remsi(long x, GEN y) void remsiz(long s, GEN y, GEN z) GEN remss(long x, long y) void remssz(long s, long y, GEN z) GEN rtor(GEN x, long prec) long sdivsi(long x, GEN y) long sdivsi_rem(long x, GEN y, long *rem) long sdivss_rem(long x, long y, long *rem) ulong udiviu_rem(GEN n, ulong d, ulong *r) ulong udivuu_rem(ulong x, ulong y, ulong *r) ulong umodi2n(GEN x, long n) void setabssign(GEN x) void shift_left(GEN z2, GEN z1, long min, long 
M, ulong f, ulong sh) void shift_right(GEN z2, GEN z1, long min, long M, ulong f, ulong sh) ulong shiftl(ulong x, ulong y) ulong shiftlr(ulong x, ulong y) GEN shiftr(GEN x, long n) void shiftr_inplace(GEN z, long d) long smodis(GEN x, long y) long smodss(long x, long y) void stackdummy(pari_sp av, pari_sp ltop) char *stack_malloc(size_t N) char *stack_malloc_align(size_t N, long k) char *stack_calloc(size_t N) GEN stoi(long x) GEN stor(long x, long prec) GEN subii(GEN x, GEN y) void subiiz(GEN x, GEN y, GEN z) GEN subir(GEN x, GEN y) void subirz(GEN x, GEN y, GEN z) GEN subis(GEN x, long y) void subisz(GEN y, long s, GEN z) GEN subri(GEN x, GEN y) void subriz(GEN x, GEN y, GEN z) GEN subrr(GEN x, GEN y) void subrrz(GEN x, GEN y, GEN z) GEN subrs(GEN x, long y) void subrsz(GEN y, long s, GEN z) GEN subsi(long x, GEN y) void subsiz(long s, GEN y, GEN z) void subsrz(long s, GEN y, GEN z) GEN subss(long x, long y) void subssz(long x, long y, GEN z) GEN subuu(ulong x, ulong y) void togglesign(GEN x) void togglesign_safe(GEN *px) void affectsign(GEN x, GEN y) void affectsign_safe(GEN x, GEN *py) GEN truedivii(GEN a, GEN b) GEN truedivis(GEN a, long b) GEN truedivsi(long a, GEN b) ulong udivui_rem(ulong x, GEN y, ulong *rem) ulong umodsu(long x, ulong y) ulong umodui(ulong x, GEN y) ulong umuluu_or_0(ulong x, ulong y) GEN utoi(ulong x) GEN utoineg(ulong x) GEN utoipos(ulong x) GEN utor(ulong s, long prec) GEN uutoi(ulong x, ulong y) GEN uutoineg(ulong x, ulong y) long vali(GEN x) int varncmp(long x, long y) long varnmax(long x, long y) long varnmin(long x, long y) # pariinl.h GEN abgrp_get_cyc(GEN x) GEN abgrp_get_gen(GEN x) GEN abgrp_get_no(GEN x) GEN bid_get_arch(GEN bid) GEN bid_get_archp(GEN bid) GEN bid_get_cyc(GEN bid) GEN bid_get_fact(GEN bid) GEN bid_get_gen(GEN bid) GEN bid_get_gen_nocheck(GEN bid) GEN bid_get_grp(GEN bid) GEN bid_get_ideal(GEN bid) GEN bid_get_mod(GEN bid) GEN bid_get_no(GEN bid) GEN bid_get_sarch(GEN bid) GEN bid_get_sprk(GEN bid) GEN bid_get_U(GEN bid) GEN bnf_get_clgp(GEN bnf) GEN bnf_get_cyc(GEN bnf) GEN bnf_get_fu(GEN bnf) GEN bnf_get_fu_nocheck(GEN bnf) GEN bnf_get_gen(GEN bnf) GEN bnf_get_logfu(GEN bnf) GEN bnf_get_nf(GEN bnf) GEN bnf_get_no(GEN bnf) GEN bnf_get_reg(GEN bnf) GEN bnf_get_tuU(GEN bnf) long bnf_get_tuN(GEN bnf) GEN bnr_get_bnf(GEN bnr) GEN bnr_get_clgp(GEN bnr) GEN bnr_get_cyc(GEN bnr) GEN bnr_get_gen(GEN bnr) GEN bnr_get_gen_nocheck(GEN bnr) GEN bnr_get_no(GEN bnr) GEN bnr_get_bid(GEN bnr) GEN bnr_get_mod(GEN bnr) GEN bnr_get_nf(GEN bnr) GEN ell_get_a1(GEN e) GEN ell_get_a2(GEN e) GEN ell_get_a3(GEN e) GEN ell_get_a4(GEN e) GEN ell_get_a6(GEN e) GEN ell_get_b2(GEN e) GEN ell_get_b4(GEN e) GEN ell_get_b6(GEN e) GEN ell_get_b8(GEN e) GEN ell_get_c4(GEN e) GEN ell_get_c6(GEN e) GEN ell_get_disc(GEN e) GEN ell_get_j(GEN e) long ell_get_type(GEN e) int ell_is_inf(GEN z) GEN ellinf() GEN ellff_get_field(GEN x) GEN ellff_get_a4a6(GEN x) GEN ellnf_get_nf(GEN x) GEN ellQp_get_p(GEN E) long ellQp_get_prec(GEN E) GEN ellQp_get_zero(GEN x) long ellR_get_prec(GEN x) long ellR_get_sign(GEN x) GEN gal_get_pol(GEN gal) GEN gal_get_p(GEN gal) GEN gal_get_e(GEN gal) GEN gal_get_mod(GEN gal) GEN gal_get_roots(GEN gal) GEN gal_get_invvdm(GEN gal) GEN gal_get_den(GEN gal) GEN gal_get_group(GEN gal) GEN gal_get_gen(GEN gal) GEN gal_get_orders(GEN gal) GEN idealchineseinit(GEN nf, GEN x) GEN idealpseudomin(GEN I, GEN G) GEN idealpseudomin_nonscalar(GEN I, GEN G) GEN idealpseudored(GEN I, GEN G) GEN idealred_elt(GEN nf, GEN I) GEN idealred(GEN nf, GEN I) long logint(GEN 
B, GEN y) ulong ulogint(ulong B, ulong y) GEN modpr_get_pr(GEN x) GEN modpr_get_p(GEN x) GEN modpr_get_T(GEN x) GEN nf_get_M(GEN nf) GEN nf_get_G(GEN nf) GEN nf_get_Tr(GEN nf) GEN nf_get_diff(GEN nf) long nf_get_degree(GEN nf) GEN nf_get_disc(GEN nf) GEN nf_get_index(GEN nf) GEN nf_get_invzk(GEN nf) GEN nf_get_pol(GEN nf) long nf_get_r1(GEN nf) long nf_get_r2(GEN nf) GEN nf_get_ramified_primes(GEN nf) GEN nf_get_roots(GEN nf) GEN nf_get_roundG(GEN nf) void nf_get_sign(GEN nf, long *r1, long *r2) long nf_get_varn(GEN nf) GEN nf_get_zk(GEN nf) GEN nf_get_zkden(GEN nf) GEN nf_get_zkprimpart(GEN nf) long pr_get_e(GEN pr) long pr_get_f(GEN pr) GEN pr_get_gen(GEN pr) GEN pr_get_p(GEN pr) GEN pr_get_tau(GEN pr) int pr_is_inert(GEN P) GEN pr_norm(GEN pr) GEN rnf_get_alpha(GEN rnf) long rnf_get_absdegree(GEN rnf) long rnf_get_degree(GEN rnf) GEN rnf_get_idealdisc(GEN rnf) GEN rnf_get_invzk(GEN rnf) GEN rnf_get_k(GEN rnf) GEN rnf_get_map(GEN rnf) GEN rnf_get_nf(GEN rnf) long rnf_get_nfdegree(GEN rnf) GEN rnf_get_nfpol(GEN rnf) long rnf_get_nfvarn(GEN rnf) GEN rnf_get_pol(GEN rnf) GEN rnf_get_polabs(GEN rnf) GEN rnf_get_zk(GEN nf) void rnf_get_nfzk(GEN rnf, GEN *b, GEN *cb) long rnf_get_varn(GEN rnf) ulong upr_norm(GEN pr) GEN znstar_get_N(GEN G) GEN znstar_get_conreycyc(GEN G) GEN znstar_get_conreygen(GEN G) GEN znstar_get_faN(GEN G) GEN znstar_get_no(GEN G) GEN znstar_get_pe(GEN G) GEN znstar_get_Ui(GEN G) long closure_arity(GEN C) const char * closure_codestr(GEN C) GEN closure_get_code(GEN C) GEN closure_get_oper(GEN C) GEN closure_get_data(GEN C) GEN closure_get_dbg(GEN C) GEN closure_get_text(GEN C) GEN closure_get_frame(GEN C) long closure_is_variadic(GEN C) GEN addmuliu(GEN x, GEN y, ulong u) GEN addmuliu_inplace(GEN x, GEN y, ulong u) GEN lincombii(GEN u, GEN v, GEN x, GEN y) GEN mulsubii(GEN y, GEN z, GEN x) GEN submulii(GEN x, GEN y, GEN z) GEN submuliu(GEN x, GEN y, ulong u) GEN submuliu_inplace(GEN x, GEN y, ulong u) GEN FpXQ_add(GEN x, GEN y, GEN T, GEN p) GEN FpXQ_sub(GEN x, GEN y, GEN T, GEN p) GEN Flxq_add(GEN x, GEN y, GEN T, ulong p) GEN Flxq_sub(GEN x, GEN y, GEN T, ulong p) GEN FpXQX_div(GEN x, GEN y, GEN T, GEN p) GEN FlxqX_div(GEN x, GEN y, GEN T, ulong p) GEN F2xqX_div(GEN x, GEN y, GEN T) GEN Fq_red(GEN x, GEN T, GEN p) GEN Fq_to_FpXQ(GEN x, GEN T, GEN p) GEN gener_Fq_local(GEN T, GEN p, GEN L) GEN FqX_Fp_mul(GEN P, GEN U, GEN T, GEN p) GEN FqX_Fq_mul(GEN P, GEN U, GEN T, GEN p) GEN FqX_add(GEN x, GEN y, GEN T, GEN p) GEN FqX_deriv(GEN f, GEN T, GEN p) GEN FqX_div(GEN x, GEN y, GEN T, GEN p) GEN FqX_div_by_X_x(GEN x, GEN y, GEN T, GEN p, GEN *z) GEN FqX_divrem(GEN x, GEN y, GEN T, GEN p, GEN *z) GEN FqX_extgcd(GEN P, GEN Q, GEN T, GEN p, GEN *U, GEN *V) GEN FqX_factor(GEN f, GEN T, GEN p) GEN FqX_gcd(GEN P, GEN Q, GEN T, GEN p) GEN FqX_get_red(GEN S, GEN T, GEN p) GEN FqX_halfgcd(GEN P, GEN Q, GEN T, GEN p) GEN FqX_mul(GEN x, GEN y, GEN T, GEN p) GEN FqX_mulu(GEN x, ulong y, GEN T, GEN p) GEN FqX_neg(GEN x, GEN T, GEN p) GEN FqX_normalize(GEN z, GEN T, GEN p) GEN FqX_powu(GEN x, ulong n, GEN T, GEN p) GEN FqX_red(GEN z, GEN T, GEN p) GEN FqX_rem(GEN x, GEN y, GEN T, GEN p) GEN FqX_roots(GEN f, GEN T, GEN p) GEN FqX_sqr(GEN x, GEN T, GEN p) GEN FqX_sub(GEN x, GEN y, GEN T, GEN p) GEN FqXQ_add(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FqXQ_div(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FqXQ_inv(GEN x, GEN S, GEN T, GEN p) GEN FqXQ_invsafe(GEN x, GEN S, GEN T, GEN p) GEN FqXQ_mul(GEN x, GEN y, GEN S, GEN T, GEN p) GEN FqXQ_pow(GEN x, GEN n, GEN S, GEN T, GEN p) GEN FqXQ_sqr(GEN x, GEN 
S, GEN T, GEN p) GEN FqXQ_sub(GEN x, GEN y, GEN S, GEN T, GEN p) long get_F2x_degree(GEN T) GEN get_F2x_mod(GEN T) long get_F2x_var(GEN T) long get_F2xqX_degree(GEN T) GEN get_F2xqX_mod(GEN T) long get_F2xqX_var(GEN T) long get_Flx_degree(GEN T) GEN get_Flx_mod(GEN T) long get_Flx_var(GEN T) long get_FlxqX_degree(GEN T) GEN get_FlxqX_mod(GEN T) long get_FlxqX_var(GEN T) long get_FpX_degree(GEN T) GEN get_FpX_mod(GEN T) long get_FpX_var(GEN T) long get_FpXQX_degree(GEN T) GEN get_FpXQX_mod(GEN T) long get_FpXQX_var(GEN T) ulong F2m_coeff(GEN x, long a, long b) void F2m_clear(GEN x, long a, long b) void F2m_flip(GEN x, long a, long b) void F2m_set(GEN x, long a, long b) void F2v_clear(GEN x, long v) ulong F2v_coeff(GEN x, long v) void F2v_flip(GEN x, long v) GEN F2v_to_F2x(GEN x, long sv) void F2v_set(GEN x, long v) void F2x_clear(GEN x, long v) ulong F2x_coeff(GEN x, long v) void F2x_flip(GEN x, long v) void F2x_set(GEN x, long v) int F2x_equal1(GEN x) int F2x_equal(GEN V, GEN W) GEN F2x_div(GEN x, GEN y) GEN F2x_renormalize(GEN x, long lx) GEN F2m_copy(GEN x) GEN F2v_copy(GEN x) GEN F2x_copy(GEN x) GEN F2v_ei(long n, long i) GEN Flm_copy(GEN x) GEN Flv_copy(GEN x) int Flx_equal1(GEN x) GEN Flx_copy(GEN x) GEN Flx_div(GEN x, GEN y, ulong p) ulong Flx_lead(GEN x) GEN Flx_mulu(GEN x, ulong a, ulong p) GEN FpV_FpC_mul(GEN x, GEN y, GEN p) GEN FpXQX_renormalize(GEN x, long lx) GEN FpXX_renormalize(GEN x, long lx) GEN FpX_div(GEN x, GEN y, GEN p) GEN FpX_renormalize(GEN x, long lx) GEN Fp_add(GEN a, GEN b, GEN m) GEN Fp_addmul(GEN x, GEN y, GEN z, GEN p) GEN Fp_center(GEN u, GEN p, GEN ps2) GEN Fp_div(GEN a, GEN b, GEN m) GEN Fp_halve(GEN y, GEN p) GEN Fp_inv(GEN a, GEN m) GEN Fp_invsafe(GEN a, GEN m) GEN Fp_mul(GEN a, GEN b, GEN m) GEN Fp_muls(GEN a, long b, GEN m) GEN Fp_mulu(GEN a, ulong b, GEN m) GEN Fp_neg(GEN b, GEN m) GEN Fp_red(GEN x, GEN p) GEN Fp_sqr(GEN a, GEN m) GEN Fp_sub(GEN a, GEN b, GEN m) GEN GENbinbase(GENbin *p) GEN Q_abs(GEN x) GEN Q_abs_shallow(GEN x) int QV_isscalar(GEN x) void Qtoss(GEN q, long *n, long *d) GEN R_abs(GEN x) GEN R_abs_shallow(GEN x) GEN RgC_fpnorml2(GEN x, long prec) GEN RgC_gtofp(GEN x, long prec) GEN RgC_gtomp(GEN x, long prec) void RgM_dimensions(GEN x, long *m, long *n) GEN RgM_fpnorml2(GEN x, long prec) GEN RgM_gtofp(GEN x, long prec) GEN RgM_gtomp(GEN x, long prec) GEN RgM_inv(GEN a) GEN RgM_minor(GEN a, long i, long j) GEN RgM_shallowcopy(GEN x) GEN RgV_gtofp(GEN x, long prec) int RgV_isscalar(GEN x) int RgV_is_ZV(GEN x) int RgV_is_QV(GEN x) long RgX_equal_var(GEN x, GEN y) int RgX_is_monomial(GEN x) int RgX_is_rational(GEN x) int RgX_is_QX(GEN x) int RgX_is_ZX(GEN x) int RgX_isscalar(GEN x) GEN RgX_shift_inplace(GEN x, long v) void RgX_shift_inplace_init(long v) GEN RgXQ_mul(GEN x, GEN y, GEN T) GEN RgXQ_sqr(GEN x, GEN T) GEN RgXQX_div(GEN x, GEN y, GEN T) GEN RgXQX_rem(GEN x, GEN y, GEN T) GEN RgX_coeff(GEN x, long n) GEN RgX_copy(GEN x) GEN RgX_div(GEN x, GEN y) GEN RgX_fpnorml2(GEN x, long prec) GEN RgX_gtofp(GEN x, long prec) GEN RgX_rem(GEN x, GEN y) GEN RgX_renormalize(GEN x) GEN Rg_col_ei(GEN x, long n, long i) GEN ZC_hnfrem(GEN x, GEN y) GEN ZM_hnfrem(GEN x, GEN y) GEN ZM_lll(GEN x, double D, long f) int ZV_dvd(GEN x, GEN y) int ZV_isscalar(GEN x) GEN ZV_to_zv(GEN x) int ZX_equal1(GEN x) GEN ZX_renormalize(GEN x, long lx) GEN ZXQ_mul(GEN x, GEN y, GEN T) GEN ZXQ_sqr(GEN x, GEN T) long Z_ispower(GEN x, ulong k) long Z_issquare(GEN x) GEN absfrac(GEN x) GEN absfrac_shallow(GEN x) GEN affc_fixlg(GEN x, GEN res) GEN bin_copy(GENbin *p) long 
bit_accuracy(long x) double bit_accuracy_mul(long x, double y) long bit_prec(GEN x) int both_odd(long x, long y) GEN cbrtr(GEN x) GEN cbrtr_abs(GEN x) GEN cgetc(long x) GEN cgetalloc(long t, size_t l) GEN cxcompotor(GEN z, long prec) void cgiv(GEN x) GEN col_ei(long n, long i) GEN const_col(long n, GEN x) GEN const_vec(long n, GEN x) GEN const_vecsmall(long n, long c) GEN constant_coeff(GEN x) GEN cxnorm(GEN x) GEN cyclic_perm(long l, long d) double dbllog2r(GEN x) long degpol(GEN x) long divsBIL(long n) void gabsz(GEN x, long prec, GEN z) GEN gaddgs(GEN y, long s) void gaddz(GEN x, GEN y, GEN z) int gcmpgs(GEN y, long s) void gdiventz(GEN x, GEN y, GEN z) GEN gdivsg(long s, GEN y) void gdivz(GEN x, GEN y, GEN z) GEN gen_I() void gerepileall(pari_sp av, int n, ...) void gerepilecoeffs(pari_sp av, GEN x, int n) GEN gerepilecopy(pari_sp av, GEN x) void gerepilemany(pari_sp av, GEN* g[], int n) int gequalgs(GEN y, long s) GEN gerepileupto(pari_sp av, GEN q) GEN gerepileuptoint(pari_sp av, GEN q) GEN gerepileuptoleaf(pari_sp av, GEN q) GEN gmaxsg(long s, GEN y) GEN gminsg(long s, GEN y) void gmodz(GEN x, GEN y, GEN z) void gmul2nz(GEN x, long s, GEN z) GEN gmulgs(GEN y, long s) void gmulz(GEN x, GEN y, GEN z) void gnegz(GEN x, GEN z) void gshiftz(GEN x, long s, GEN z) GEN gsubgs(GEN y, long s) void gsubz(GEN x, GEN y, GEN z) double gtodouble(GEN x) GEN gtofp(GEN z, long prec) GEN gtomp(GEN z, long prec) long gtos(GEN x) long gval(GEN x, long v) GEN identity_perm(long l) int equali1(GEN n) int equalim1(GEN n) long inf_get_sign(GEN x) int is_bigint(GEN n) int is_const_t(long t) int is_extscalar_t(long t) int is_intreal_t(long t) int is_matvec_t(long t) int is_noncalc_t(long tx) int is_pm1(GEN n) int is_rational_t(long t) int is_real_t(long t) int is_recursive_t(long t) int is_scalar_t(long t) int _is_universal_constant "is_universal_constant"(GEN x) int is_vec_t(long t) int isint1(GEN x) int isintm1(GEN x) int isintzero(GEN x) int ismpzero(GEN x) int isonstack(GEN x) void killblock(GEN x) GEN leading_coeff(GEN x) void lg_increase(GEN x) long lgcols(GEN x) long lgpol(GEN x) GEN matpascal(long n) GEN matslice(GEN A, long x1, long x2, long y1, long y2) GEN mkcol(GEN x) GEN mkcol2(GEN x, GEN y) GEN mkcol2s(long x, long y) GEN mkcol3(GEN x, GEN y, GEN z) GEN mkcol3s(long x, long y, long z) GEN mkcol4(GEN x, GEN y, GEN z, GEN t) GEN mkcol4s(long x, long y, long z, long t) GEN mkcol5(GEN x, GEN y, GEN z, GEN t, GEN u) GEN mkcol6(GEN x, GEN y, GEN z, GEN t, GEN u, GEN v) GEN mkcolcopy(GEN x) GEN mkcols(long x) GEN mkcomplex(GEN x, GEN y) GEN mkerr(long n) GEN mkmoo() GEN mkoo() GEN mkfrac(GEN x, GEN y) GEN mkfracss(long x, long y) GEN mkfraccopy(GEN x, GEN y) GEN mkintmod(GEN x, GEN y) GEN mkintmodu(ulong x, ulong y) GEN mkmat(GEN x) GEN mkmat2(GEN x, GEN y) GEN mkmat3(GEN x, GEN y, GEN z) GEN mkmat4(GEN x, GEN y, GEN z, GEN t) GEN mkmat5(GEN x, GEN y, GEN z, GEN t, GEN u) GEN mkmatcopy(GEN x) GEN mkpolmod(GEN x, GEN y) GEN mkqfi(GEN x, GEN y, GEN z) GEN mkquad(GEN n, GEN x, GEN y) GEN mkrfrac(GEN x, GEN y) GEN mkrfraccopy(GEN x, GEN y) GEN mkvec(GEN x) GEN mkvec2(GEN x, GEN y) GEN mkvec2copy(GEN x, GEN y) GEN mkvec2s(long x, long y) GEN mkvec3(GEN x, GEN y, GEN z) GEN mkvec3s(long x, long y, long z) GEN mkvec4(GEN x, GEN y, GEN z, GEN t) GEN mkvec4s(long x, long y, long z, long t) GEN mkvec5(GEN x, GEN y, GEN z, GEN t, GEN u) GEN mkveccopy(GEN x) GEN mkvecs(long x) GEN mkvecsmall(long x) GEN mkvecsmall2(long x, long y) GEN mkvecsmall3(long x, long y, long z) GEN mkvecsmall4(long x, long y, long z, long 
t) void mpcosz(GEN x, GEN z) void mpexpz(GEN x, GEN z) void mplogz(GEN x, GEN z) void mpsinz(GEN x, GEN z) GEN mul_content(GEN cx, GEN cy) GEN mul_denom(GEN cx, GEN cy) long nbits2nlong(long x) long nbits2extraprec(long x) long nbits2ndec(long x) long nbits2prec(long x) long nbits2lg(long x) long nbrows(GEN x) long nchar2nlong(long x) long ndec2nbits(long x) long ndec2nlong(long x) long ndec2prec(long x) void normalize_frac(GEN z) int odd(long x) void pari_free(void *pointer) void* pari_calloc(size_t size) void* pari_malloc(size_t bytes) void* pari_realloc(void *pointer, size_t size) GEN perm_conj(GEN s, GEN t) GEN perm_inv(GEN x) GEN perm_mul(GEN s, GEN t) GEN pol_0(long v) GEN pol_1(long v) GEN pol_x(long v) GEN pol_xn(long n, long v) GEN pol_xnall(long n, long v) GEN pol0_F2x(long sv) GEN pol1_F2x(long sv) GEN polx_F2x(long sv) GEN pol0_Flx(long sv) GEN pol1_Flx(long sv) GEN polx_Flx(long sv) GEN polx_zx(long sv) GEN powii(GEN x, GEN n) GEN powIs(long n) long prec2nbits(long x) double prec2nbits_mul(long x, double y) long prec2ndec(long x) long precdbl(long x) GEN quad_disc(GEN x) GEN qfb_disc(GEN x) GEN qfb_disc3(GEN x, GEN y, GEN z) GEN quadnorm(GEN q) long remsBIL(long n) GEN row(GEN A, long x1) GEN Flm_row(GEN A, long x0) GEN row_i(GEN A, long x0, long x1, long x2) GEN zm_row(GEN x, long i) GEN rowcopy(GEN A, long x0) GEN rowpermute(GEN A, GEN p) GEN rowslice(GEN A, long x1, long x2) GEN rowslicepermute(GEN A, GEN p, long x1, long x2) GEN rowsplice(GEN a, long j) int ser_isexactzero(GEN x) GEN shallowcopy(GEN x) GEN sqrfrac(GEN x) GEN sqrti(GEN x) GEN sqrtnr(GEN x, long n) GEN sqrtr(GEN x) GEN sstoQ(long n, long d) void pari_stack_alloc(pari_stack *s, long nb) void** pari_stack_base(pari_stack *s) void pari_stack_delete(pari_stack *s) void pari_stack_init(pari_stack *s, size_t size, void **data) long pari_stack_new(pari_stack *s) void pari_stack_pushp(pari_stack *s, void *u) long sturm(GEN x) GEN truecoeff(GEN x, long n) GEN trunc_safe(GEN x) GEN vec_ei(long n, long i) GEN vec_append(GEN v, GEN s) GEN vec_lengthen(GEN v, long n) GEN vec_setconst(GEN v, GEN x) GEN vec_shorten(GEN v, long n) GEN vec_to_vecsmall(GEN z) GEN vecpermute(GEN A, GEN p) GEN vecreverse(GEN A) void vecreverse_inplace(GEN y) GEN vecsmallpermute(GEN A, GEN p) GEN vecslice(GEN A, long y1, long y2) GEN vecslicepermute(GEN A, GEN p, long y1, long y2) GEN vecsplice(GEN a, long j) GEN vecsmall_append(GEN V, long s) long vecsmall_coincidence(GEN u, GEN v) GEN vecsmall_concat(GEN u, GEN v) GEN vecsmall_copy(GEN x) GEN vecsmall_ei(long n, long i) long vecsmall_indexmax(GEN x) long vecsmall_indexmin(GEN x) long vecsmall_isin(GEN v, long x) GEN vecsmall_lengthen(GEN v, long n) int vecsmall_lexcmp(GEN x, GEN y) long vecsmall_max(GEN v) long vecsmall_min(GEN v) long vecsmall_pack(GEN V, long base, long mod) int vecsmall_prefixcmp(GEN x, GEN y) GEN vecsmall_prepend(GEN V, long s) GEN vecsmall_reverse(GEN A) GEN vecsmall_shorten(GEN v, long n) GEN vecsmall_to_col(GEN z) GEN vecsmall_to_vec(GEN z) GEN vecsmall_to_vec_inplace(GEN z) void vecsmalltrunc_append(GEN x, long t) GEN vecsmalltrunc_init(long l) void vectrunc_append(GEN x, GEN t) void vectrunc_append_batch(GEN x, GEN y) GEN vectrunc_init(long l) GEN zc_to_ZC(GEN x) GEN zero_F2m(long n, long m) GEN zero_F2m_copy(long n, long m) GEN zero_F2v(long m) GEN zero_F2x(long sv) GEN zero_Flm(long m, long n) GEN zero_Flm_copy(long m, long n) GEN zero_Flv(long n) GEN zero_Flx(long sv) GEN zero_zm(long x, long y) GEN zero_zv(long x) GEN zero_zx(long sv) GEN zerocol(long n) GEN 
zeromat(long m, long n) GEN zeromatcopy(long m, long n) GEN zeropadic(GEN p, long e) GEN zeropadic_shallow(GEN p, long e) GEN zeropol(long v) GEN zeroser(long v, long e) GEN zerovec(long n) GEN zerovec_block(long len) GEN zm_copy(GEN x) GEN zm_to_zxV(GEN x, long sv) GEN zm_transpose(GEN x) GEN zv_copy(GEN x) GEN zv_to_ZV(GEN x) GEN zv_to_zx(GEN x, long sv) GEN zx_renormalize(GEN x, long l) GEN zx_shift(GEN x, long n) GEN zx_to_zv(GEN x, long N) GEN err_get_compo(GEN e, long i) long err_get_num(GEN e) void pari_err_BUG(const char *f) void pari_err_COMPONENT(const char *f, const char *op, GEN l, GEN x) void pari_err_CONSTPOL(const char *f) void pari_err_COPRIME(const char *f, GEN x, GEN y) void pari_err_DIM(const char *f) void pari_err_DOMAIN(const char *f, const char *v, const char *op, GEN l, GEN x) void pari_err_FILE(const char *f, const char *g) void pari_err_FLAG(const char *f) void pari_err_IMPL(const char *f) void pari_err_INV(const char *f, GEN x) void pari_err_IRREDPOL(const char *f, GEN x) void pari_err_MAXPRIME(ulong c) void pari_err_MODULUS(const char *f, GEN x, GEN y) void pari_err_OP(const char *f, GEN x, GEN y) void pari_err_OVERFLOW(const char *f) void pari_err_PACKAGE(const char *f) void pari_err_PREC(const char *f) void pari_err_PRIME(const char *f, GEN x) void pari_err_PRIORITY(const char *f, GEN x, const char *op, long v) void pari_err_SQRTN(const char *f, GEN x) void pari_err_TYPE(const char *f, GEN x) void pari_err_TYPE2(const char *f, GEN x, GEN y) void pari_err_VAR(const char *f, GEN x, GEN y) void pari_err_ROOTS0(const char *f)

# Work around a bug in PARI's is_universal_constant()
cdef inline int is_universal_constant(GEN x):
    return _is_universal_constant(x) or (x is err_e_STACK)

# Auto-generated declarations. These are taken from the PARI version
# on the system, so they are more up-to-date than the above. In case of
# conflicting declarations, auto_paridecl should have priority.
from .auto_paridecl cimport *

cdef inline int is_on_stack(GEN x) except -1:
    """
    Is the GEN ``x`` stored on the PARI stack?
    """
    s = <pari_sp>x
    if avma <= s:
        return s < pari_mainstack.top
    if pari_mainstack.vbot <= s:
        raise SystemError("PARI object in unused part of PARI stack")
    return 0

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/paripriv.pxd ----
# distutils: libraries = gmp pari
"""
Declarations for private functions from PARI

Ideally, we shouldn't use these, but for technical reasons, we have to.
"""

from .types cimport *

cdef extern from "pari/paripriv.h":
    int t_FF_FpXQ, t_FF_Flxq, t_FF_F2xq
    int gpd_QUIET, gpd_TEST, gpd_EMACS, gpd_TEXMACS

    struct pariout_t:
        char format    # e,f,g
        long fieldw    # 0 (ignored) or field width
        long sigd      # -1 (all) or number of significant digits printed
        int sp         # 0 = suppress whitespace from output
        int prettyp    # output style: raw, prettyprint, etc
        int TeXstyle

    struct gp_data:
        pariout_t *fmt
        unsigned long flags

    extern gp_data* GP_DATA

# In older versions of PARI, this is declared in the private
# non-installed PARI header file "anal.h". More recently, this is
# declared in "paripriv.h". Since a double declaration does not hurt,
# we declare it here regardless.
cdef extern const char* closure_func_err()
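
# A minimal sketch (not part of the original sources) of how the PARI
# declarations above are typically consumed: wrap the library call in
# sig_on() from cysignals (declared further below) and hand the
# resulting GEN to new_gen() from stack.pxd (next file), which in
# cypari2 copies the result off the PARI stack and calls sig_off().
# The function name `gen_add` is hypothetical.
#
#     from cysignals.signals cimport sig_on
#     from .gen cimport Gen
#     from .stack cimport new_gen
#
#     cdef gen_add(Gen x, Gen y):
#         sig_on()
#         return new_gen(addii(x.g, y.g))   # addii is declared above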
# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/stack.pxd ----
from .types cimport GEN, pari_sp
from .gen cimport Gen_base, Gen

cdef Gen new_gen(GEN x)
cdef new_gens2(GEN x, GEN y)
cdef Gen new_gen_noclear(GEN x)
cdef Gen clone_gen(GEN x)
cdef Gen clone_gen_noclear(GEN x)
cdef void clear_stack()
cdef void reset_avma()
cdef void remove_from_pari_stack(Gen self)
cdef int move_gens_to_heap(pari_sp lim) except -1
cdef int before_resize() except -1
cdef int set_pari_stack_size(size_t size, size_t sizemax) except -1
cdef void after_resize()

cdef class DetachGen:
    cdef source
    cdef GEN detach(self) except NULL

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/string_utils.pxd ----
cdef extern from *:
    int PY_MAJOR_VERSION

cpdef bytes to_bytes(s)
cpdef unicode to_unicode(s)

cpdef inline to_string(s):
    r"""
    Convert bytes or unicode ``s`` to a string.

    String here means bytes in Python 2 and unicode in Python 3.

    Examples:

    >>> from cypari2.string_utils import to_string
    >>> s1 = to_string(b'hello')
    >>> s2 = to_string('hello')
    >>> s3 = to_string(u'hello')
    >>> type(s1) == type(s2) == type(s3) == str
    True
    >>> s1 == s2 == s3 == 'hello'
    True
    """
    if PY_MAJOR_VERSION <= 2:
        return to_bytes(s)
    else:
        return to_unicode(s)

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/cypari2/types.pxd ----
"""
Declarations for types used by PARI

This includes both the C types as well as the PARI types (and a few
macros for dealing with those).

It is important that the functionality in this file does not call any
PARI library functions. The reason is that we want to allow just using
these types (for example, to define a Cython extension type) without
linking to PARI. This file should consist only of typedefs and macros
from PARI's include files.
"""

#*****************************************************************************
#       Distributed under the terms of the GNU General Public License (GPL)
#       as published by the Free Software Foundation; either version 2 of
#       the License, or (at your option) any later version.
#                  http://www.gnu.org/licenses/
#*****************************************************************************

cdef extern from "pari/pari.h":
    ctypedef unsigned long ulong "pari_ulong"
    ctypedef long* GEN
    ctypedef char* byteptr
    ctypedef unsigned long pari_sp

    # PARI types
    enum:
        t_INT
        t_REAL
        t_INTMOD
        t_FRAC
        t_FFELT
        t_COMPLEX
        t_PADIC
        t_QUAD
        t_POLMOD
        t_POL
        t_SER
        t_RFRAC
        t_QFR
        t_QFI
        t_VEC
        t_COL
        t_MAT
        t_LIST
        t_STR
        t_VECSMALL
        t_CLOSURE
        t_ERROR
        t_INFINITY

    int BITS_IN_LONG
    long DEFAULTPREC       # 64 bits precision
    long MEDDEFAULTPREC    # 128 bits precision
    long BIGDEFAULTPREC    # 192 bits precision

    ulong CLONEBIT

    long typ(GEN x)
    long settyp(GEN x, long s)
    long isclone(GEN x)
    long setisclone(GEN x)
    long unsetisclone(GEN x)
    long lg(GEN x)
    long setlg(GEN x, long s)
    long signe(GEN x)
    long setsigne(GEN x, long s)
    long lgefint(GEN x)
    long setlgefint(GEN x, long s)
    long expo(GEN x)
    long setexpo(GEN x, long s)
    long valp(GEN x)
    long setvalp(GEN x, long s)
    long precp(GEN x)
    long setprecp(GEN x, long s)
    long varn(GEN x)
    long setvarn(GEN x, long s)
    long evaltyp(long x)
    long evallg(long x)
    long evalvarn(long x)
    long evalsigne(long x)
    long evalprecp(long x)
    long evalvalp(long x)
    long evalexpo(long x)
    long evallgefint(long x)

    ctypedef struct PARI_plot:
        long width
        long height
        long hunit
        long vunit
        long fwidth
        long fheight
        void (*draw)(PARI_plot *T, GEN w, GEN x, GEN y)

    ctypedef struct entree:
        const char *name
        ulong valence
        void *value
        long menu
        const char *code
        const char *help
        void *pvalue
        long arity
        ulong hash
        entree *next

    # Various structures that we don't interface but which need to be
    # declared, such that Cython understands the declarations of
    # functions using these types.
    struct bb_group
    struct bb_field
    struct bb_ring
    struct bb_algebra
    struct qfr_data
    struct nfmaxord_t
    struct forcomposite_t
    struct forpart_t
    struct forprime_t
    struct forvec_t
    struct gp_context
    struct nfbasic_t
    struct pariFILE
    struct pari_mt
    struct pari_stack
    struct pari_thread
    struct pari_timer
    struct GENbin
    struct hashentry
    struct hashtable

    # These are actually defined in cypari.h but we put them here to
    # prevent Cython from reordering the includes.
    GEN set_gel(GEN x, long n, GEN z)             # gel(x, n) = z
    GEN set_gmael(GEN x, long i, long j, GEN z)   # gmael(x, i, j) = z
    GEN set_gcoeff(GEN x, long i, long j, GEN z)  # gcoeff(x, i, j) = z
    GEN set_uel(GEN x, long n, ulong z)           # uel(x, n) = z

# It is important that this gets included *after* all PARI includes
cdef extern from "cypari.h":
    pass

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/cysignals/__init__.py ----
from .signals import AlarmInterrupt, SignalError, init_cysignals  # noqa

init_cysignals()

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/cysignals/memory.pxd ----
"""
Memory allocation functions which are interrupt-safe

The ``sig_`` variants are simple wrappers around the corresponding C
functions.
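
For example (an illustrative sketch, not part of the original file),
``sig_malloc`` is used like ``malloc``, with manual error handling::

    from cysignals.memory cimport sig_malloc, sig_free

    cdef size_t n = 1000
    cdef double* buf = <double*>sig_malloc(n * sizeof(double))
    if buf == NULL:
        raise MemoryError("failed to allocate buffer")
    try:
        pass  # ... work with buf ...
    finally:
        sig_free(buf)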
The ``check_`` variants check the return value and raise
``MemoryError`` in case of failure.
"""

#*****************************************************************************
#       Copyright (C) 2011-2016 Jeroen Demeyer
#
#  cysignals is free software: you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published
#  by the Free Software Foundation, either version 3 of the License, or
#  (at your option) any later version.
#
#  cysignals is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU Lesser General Public License for more details.
#
#  You should have received a copy of the GNU Lesser General Public License
#  along with cysignals. If not, see <http://www.gnu.org/licenses/>.
#
#*****************************************************************************

cimport cython
from libc.stdlib cimport malloc, calloc, realloc, free

from .signals cimport sig_block, sig_unblock

cdef extern from *:
    int unlikely(int) nogil  # Defined by Cython


cdef inline void* sig_malloc "sig_malloc"(size_t n) nogil:
    sig_block()
    cdef void* ret = malloc(n)
    sig_unblock()
    return ret


cdef inline void* sig_realloc "sig_realloc"(void* ptr, size_t size) nogil:
    sig_block()
    cdef void* ret = realloc(ptr, size)
    sig_unblock()
    return ret


cdef inline void* sig_calloc "sig_calloc"(size_t nmemb, size_t size) nogil:
    sig_block()
    cdef void* ret = calloc(nmemb, size)
    sig_unblock()
    return ret


cdef inline void sig_free "sig_free"(void* ptr) nogil:
    sig_block()
    free(ptr)
    sig_unblock()


@cython.cdivision(True)
cdef inline size_t mul_overflowcheck(size_t a, size_t b) nogil:
    """
    Return a*b, checking for overflow. Assume that a > 0.
    If overflow occurs, return ``<size_t>(-1)``. We assume that
    malloc(<size_t>(-1)) always fails.
    """
    # If a and b both less than MUL_NO_OVERFLOW, no overflow can occur
    cdef size_t MUL_NO_OVERFLOW = ((<size_t>1) << (4*sizeof(size_t)))
    if a >= MUL_NO_OVERFLOW or b >= MUL_NO_OVERFLOW:
        if unlikely(b > (<size_t>-1) // a):
            return <size_t>(-1)
    return a*b


cdef inline void* check_allocarray(size_t nmemb, size_t size) except? NULL:
    """
    Allocate memory for ``nmemb`` elements of size ``size``.
    """
    if nmemb == 0:
        return NULL
    cdef size_t n = mul_overflowcheck(nmemb, size)
    cdef void* ret = sig_malloc(n)
    if unlikely(ret == NULL):
        raise MemoryError("failed to allocate %s * %s bytes" % (nmemb, size))
    return ret


cdef inline void* check_reallocarray(void* ptr, size_t nmemb, size_t size) except? NULL:
    """
    Re-allocate memory at ``ptr`` to hold ``nmemb`` elements of size
    ``size``. If ``ptr`` equals ``NULL``, this behaves as
    ``check_allocarray``.

    When ``nmemb`` equals 0, the memory at ``ptr`` is freed.
    """
    if nmemb == 0:
        sig_free(ptr)
        return NULL
    cdef size_t n = mul_overflowcheck(nmemb, size)
    cdef void* ret = sig_realloc(ptr, n)
    if unlikely(ret == NULL):
        raise MemoryError("failed to allocate %s * %s bytes" % (nmemb, size))
    return ret


cdef inline void* check_malloc(size_t n) except? NULL:
    """
    Allocate ``n`` bytes of memory.
    """
    if n == 0:
        return NULL
    cdef void* ret = sig_malloc(n)
    if unlikely(ret == NULL):
        raise MemoryError("failed to allocate %s bytes" % n)
    return ret


cdef inline void* check_realloc(void* ptr, size_t n) except? NULL:
    """
    Re-allocate memory at ``ptr`` to hold ``n`` bytes.

    If ``ptr`` equals ``NULL``, this behaves as ``check_malloc``.
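
    When ``n`` equals 0, the memory at ``ptr`` is freed and ``NULL`` is
    returned, mirroring the behavior of ``check_reallocarray``.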
""" if n == 0: sig_free(ptr) return NULL cdef void* ret = sig_realloc(ptr, n) if unlikely(ret == NULL): raise MemoryError("failed to allocate %s bytes" % n) return ret cdef inline void* check_calloc(size_t nmemb, size_t size) except? NULL: """ Allocate memory for ``nmemb`` elements of size ``size``. The resulting memory is zeroed. """ if nmemb == 0: return NULL cdef void* ret = sig_calloc(nmemb, size) if unlikely(ret == NULL): raise MemoryError("failed to allocate %s * %s bytes" % (nmemb, size)) return ret ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431446.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cysignals/memory.pxi0000644000175000017500000000040100000000000026262 0ustar00vincentvincent00000000000000cdef extern from "pxi_warning.h": pass from cysignals.signals cimport * from cysignals.memory cimport ( sig_malloc, sig_realloc, sig_calloc, sig_free, check_allocarray, check_reallocarray, check_malloc, check_realloc, check_calloc) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431446.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cysignals/pysignals.pxd0000644000175000017500000000012600000000000026762 0ustar00vincentvincent00000000000000from posix.signal cimport sigaction_t cdef class SigAction: cdef sigaction_t act ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431446.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cysignals/signals.pxd0000644000175000017500000000667700000000000026432 0ustar00vincentvincent00000000000000# cython: preliminary_late_includes_cy28=True # distutils: extra_link_args = -lpari #***************************************************************************** # cysignals is free software: you can redistribute it and/or modify it # under the terms of the GNU Lesser General Public License as published # by the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # cysignals is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with cysignals. If not, see . # #***************************************************************************** from cpython.object cimport PyObject cdef extern from *: int unlikely(int) nogil # Defined by Cython cdef extern from "struct_signals.h": ctypedef int cy_atomic_int ctypedef struct cysigs_t: cy_atomic_int sig_on_count cy_atomic_int block_sigint const char* s PyObject* exc_value cdef extern from "macros.h" nogil: int sig_on() except 0 int sig_str(const char*) except 0 int sig_check() except 0 void sig_off() void sig_retry() # Does not return void sig_error() # Does not return void sig_block() void sig_unblock() # Macros behaving exactly like sig_on, sig_str and sig_check but # which are *not* declared "except 0". This is useful if some # low-level Cython code wants to do its own exception handling. int sig_on_no_except "sig_on"() int sig_str_no_except "sig_str"(const char*) int sig_check_no_except "sig_check"() # This function does nothing, but it is declared cdef except *, so it # can be used to make Cython check whether there is a pending exception # (PyErr_Occurred() is non-NULL). To Cython, it will look like # cython_check_exception() actually raised the exception. 
cdef inline void cython_check_exception() nogil except *: pass cdef void verify_exc_value() cdef inline PyObject* sig_occurred(): """ Borrowed reference to the exception which is currently being propagated from cysignals. If there is no exception or if we are done handling the exception, return ``NULL``. This is meant for Cython code to check whether objects may be in an invalid state. Typically, this would be used in an ``except`` or ``finally`` block or in ``__dealloc__``. The implementation is based on reference counting: it checks whether the exception has been deleted. This means that it will break if the exception is stored somewhere. """ if unlikely(cysigs.exc_value is not NULL): verify_exc_value() return cysigs.exc_value # Variables and functions which are implemented in implementation.c # and used by macros.h. We use the Cython cimport mechanism to make # these available to every Cython module cimporting this file. cdef nogil: cysigs_t cysigs "cysigs" void _sig_on_interrupt_received "_sig_on_interrupt_received"() void _sig_on_recover "_sig_on_recover"() void _sig_off_warning "_sig_off_warning"(const char*, int) void print_backtrace "print_backtrace"() cdef inline void __generate_declarations(): cysigs _sig_on_interrupt_received _sig_on_recover _sig_off_warning print_backtrace ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431446.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cysignals/signals.pxi0000644000175000017500000000011100000000000026410 0ustar00vincentvincent00000000000000cdef extern from "pxi_warning.h": pass from cysignals.signals cimport * ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564431446.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/cython.py0000644000175000017500000000101000000000000024107 0ustar00vincentvincent00000000000000#!/usr/bin/env python # # Cython -- Main Program, generic # if __name__ == '__main__': import os import sys # Make sure we import the right Cython cythonpath, _ = os.path.split(os.path.realpath(__file__)) sys.path.insert(0, cythonpath) from Cython.Compiler.Main import main main(command_line = 1) else: # Void cython.* directives. 
from Cython.Shadow import * ## and bring in the __version__ from Cython import __version__ from Cython import load_ipython_extension ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/easy_install.py0000644000175000017500000000017600000000000025306 0ustar00vincentvincent00000000000000"""Run the EasyInstall command""" if __name__ == '__main__': from setuptools.command.easy_install import main main() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9266608 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/0000755000175000017500000000000000000000000023031 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/__init__.py0000644000175000017500000000002700000000000025141 0ustar00vincentvincent00000000000000__version__ = "19.2.1" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/__main__.py0000644000175000017500000000115700000000000025127 0ustar00vincentvincent00000000000000from __future__ import absolute_import import os import sys # If we are running from a wheel, add the wheel to sys.path # This allows the usage python pip-*.whl/pip install pip-*.whl if __package__ == '': # __file__ is pip-*.whl/pip/__main__.py # first dirname call strips of '/__main__.py', second strips off '/pip' # Resulting path is the name of the wheel itself # Add that to sys.path so we can import pip path = os.path.dirname(os.path.dirname(__file__)) sys.path.insert(0, path) from pip._internal import main as _main # isort:skip # noqa if __name__ == '__main__': sys.exit(_main()) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.929994 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/0000755000175000017500000000000000000000000025004 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/__init__.py0000644000175000017500000000535500000000000027125 0ustar00vincentvincent00000000000000#!/usr/bin/env python from __future__ import absolute_import import locale import logging import os import warnings import sys # 2016-06-17 barry@debian.org: urllib3 1.14 added optional support for socks, # but if invoked (i.e. imported), it will issue a warning to stderr if socks # isn't available. requests unconditionally imports urllib3's socks contrib # module, triggering this warning. The warning breaks DEP-8 tests (because of # the stderr output) and is just plain annoying in normal usage. I don't want # to add socks as yet another dependency for pip, nor do I want to allow-stderr # in the DEP-8 tests, so just suppress the warning. pdb tells me this has to # be done before the import of pip.vcs. 
from pip._vendor.urllib3.exceptions import DependencyWarning warnings.filterwarnings("ignore", category=DependencyWarning) # noqa # We want to inject the use of SecureTransport as early as possible so that any # references or sessions or what have you are ensured to have it, however we # only want to do this in the case that we're running on macOS and the linked # OpenSSL is too old to handle TLSv1.2 try: import ssl except ImportError: pass else: # Checks for OpenSSL 1.0.1 on MacOS if sys.platform == "darwin" and ssl.OPENSSL_VERSION_NUMBER < 0x1000100f: try: from pip._vendor.urllib3.contrib import securetransport except (ImportError, OSError): pass else: securetransport.inject_into_urllib3() from pip._internal.cli.autocompletion import autocomplete from pip._internal.cli.main_parser import parse_command from pip._internal.commands import commands_dict from pip._internal.exceptions import PipError from pip._internal.utils import deprecation from pip._vendor.urllib3.exceptions import InsecureRequestWarning logger = logging.getLogger(__name__) # Hide the InsecureRequestWarning from urllib3 warnings.filterwarnings("ignore", category=InsecureRequestWarning) def main(args=None): if args is None: args = sys.argv[1:] # Configure our deprecation warnings to be sent through loggers deprecation.install_warning_logger() autocomplete() try: cmd_name, cmd_args = parse_command(args) except PipError as exc: sys.stderr.write("ERROR: %s" % exc) sys.stderr.write(os.linesep) sys.exit(1) # Needed for locale.getpreferredencoding(False) to work # in pip._internal.utils.encoding.auto_decode try: locale.setlocale(locale.LC_ALL, '') except locale.Error as e: # setlocale can apparently crash if locale are uninitialized logger.debug("Ignoring error %s when setting locale", e) command = commands_dict[cmd_name](isolated=("--isolated" in cmd_args)) return command.main(cmd_args) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/build_env.py0000644000175000017500000001635500000000000027337 0ustar00vincentvincent00000000000000"""Build Environment used for isolation during sdist building """ import logging import os import sys import textwrap from collections import OrderedDict from distutils.sysconfig import get_python_lib from sysconfig import get_paths from pip._vendor.pkg_resources import Requirement, VersionConflict, WorkingSet from pip import __file__ as pip_location from pip._internal.utils.misc import call_subprocess from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.ui import open_spinner if MYPY_CHECK_RUNNING: from typing import Tuple, Set, Iterable, Optional, List from pip._internal.index import PackageFinder logger = logging.getLogger(__name__) class _Prefix: def __init__(self, path): # type: (str) -> None self.path = path self.setup = False self.bin_dir = get_paths( 'nt' if os.name == 'nt' else 'posix_prefix', vars={'base': path, 'platbase': path} )['scripts'] # Note: prefer distutils' sysconfig to get the # library paths so PyPy is correctly supported. 
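        # Illustrative values (not from the original source): with
        # path="/tmp/env" on a typical POSIX install, this yields
        # bin_dir="/tmp/env/bin" and lib_dirs such as
        # ["/tmp/env/lib/python3.7/site-packages"].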
purelib = get_python_lib(plat_specific=False, prefix=path) platlib = get_python_lib(plat_specific=True, prefix=path) if purelib == platlib: self.lib_dirs = [purelib] else: self.lib_dirs = [purelib, platlib] class BuildEnvironment(object): """Creates and manages an isolated environment to install build deps """ def __init__(self): # type: () -> None self._temp_dir = TempDirectory(kind="build-env") self._temp_dir.create() self._prefixes = OrderedDict(( (name, _Prefix(os.path.join(self._temp_dir.path, name))) for name in ('normal', 'overlay') )) self._bin_dirs = [] # type: List[str] self._lib_dirs = [] # type: List[str] for prefix in reversed(list(self._prefixes.values())): self._bin_dirs.append(prefix.bin_dir) self._lib_dirs.extend(prefix.lib_dirs) # Customize site to: # - ensure .pth files are honored # - prevent access to system site packages system_sites = { os.path.normcase(site) for site in ( get_python_lib(plat_specific=False), get_python_lib(plat_specific=True), ) } self._site_dir = os.path.join(self._temp_dir.path, 'site') if not os.path.exists(self._site_dir): os.mkdir(self._site_dir) with open(os.path.join(self._site_dir, 'sitecustomize.py'), 'w') as fp: fp.write(textwrap.dedent( ''' import os, site, sys # First, drop system-sites related paths. original_sys_path = sys.path[:] known_paths = set() for path in {system_sites!r}: site.addsitedir(path, known_paths=known_paths) system_paths = set( os.path.normcase(path) for path in sys.path[len(original_sys_path):] ) original_sys_path = [ path for path in original_sys_path if os.path.normcase(path) not in system_paths ] sys.path = original_sys_path # Second, add lib directories. # ensuring .pth file are processed. for path in {lib_dirs!r}: assert not path in sys.path site.addsitedir(path) ''' ).format(system_sites=system_sites, lib_dirs=self._lib_dirs)) def __enter__(self): self._save_env = { name: os.environ.get(name, None) for name in ('PATH', 'PYTHONNOUSERSITE', 'PYTHONPATH') } path = self._bin_dirs[:] old_path = self._save_env['PATH'] if old_path: path.extend(old_path.split(os.pathsep)) pythonpath = [self._site_dir] os.environ.update({ 'PATH': os.pathsep.join(path), 'PYTHONNOUSERSITE': '1', 'PYTHONPATH': os.pathsep.join(pythonpath), }) def __exit__(self, exc_type, exc_val, exc_tb): for varname, old_value in self._save_env.items(): if old_value is None: os.environ.pop(varname, None) else: os.environ[varname] = old_value def cleanup(self): # type: () -> None self._temp_dir.cleanup() def check_requirements(self, reqs): # type: (Iterable[str]) -> Tuple[Set[Tuple[str, str]], Set[str]] """Return 2 sets: - conflicting requirements: set of (installed, wanted) reqs tuples - missing requirements: set of reqs """ missing = set() conflicting = set() if reqs: ws = WorkingSet(self._lib_dirs) for req in reqs: try: if ws.find(Requirement.parse(req)) is None: missing.add(req) except VersionConflict as e: conflicting.add((str(e.args[0].as_requirement()), str(e.args[1]))) return conflicting, missing def install_requirements( self, finder, # type: PackageFinder requirements, # type: Iterable[str] prefix_as_string, # type: str message # type: Optional[str] ): # type: (...) 
-> None prefix = self._prefixes[prefix_as_string] assert not prefix.setup prefix.setup = True if not requirements: return args = [ sys.executable, os.path.dirname(pip_location), 'install', '--ignore-installed', '--no-user', '--prefix', prefix.path, '--no-warn-script-location', ] # type: List[str] if logger.getEffectiveLevel() <= logging.DEBUG: args.append('-v') for format_control in ('no_binary', 'only_binary'): formats = getattr(finder.format_control, format_control) args.extend(('--' + format_control.replace('_', '-'), ','.join(sorted(formats or {':none:'})))) index_urls = finder.index_urls if index_urls: args.extend(['-i', index_urls[0]]) for extra_index in index_urls[1:]: args.extend(['--extra-index-url', extra_index]) else: args.append('--no-index') for link in finder.find_links: args.extend(['--find-links', link]) for host in finder.trusted_hosts: args.extend(['--trusted-host', host]) if finder.allow_all_prereleases: args.append('--pre') args.append('--') args.extend(requirements) with open_spinner(message) as spinner: call_subprocess(args, spinner=spinner) class NoOpBuildEnvironment(BuildEnvironment): """A no-op drop-in replacement for BuildEnvironment """ def __init__(self): pass def __enter__(self): pass def __exit__(self, exc_type, exc_val, exc_tb): pass def cleanup(self): pass def install_requirements(self, finder, requirements, prefix, message): raise NotImplementedError() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cache.py0000644000175000017500000001675200000000000026434 0ustar00vincentvincent00000000000000"""Cache Management """ import errno import hashlib import logging import os from pip._vendor.packaging.utils import canonicalize_name from pip._internal.models.link import Link from pip._internal.utils.compat import expanduser from pip._internal.utils.misc import path_to_url from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.wheel import InvalidWheelFilename, Wheel if MYPY_CHECK_RUNNING: from typing import Optional, Set, List, Any from pip._internal.index import FormatControl logger = logging.getLogger(__name__) class Cache(object): """An abstract class - provides cache directories for data from links :param cache_dir: The root of the cache. :param format_control: An object of FormatControl class to limit binaries being read from the cache. :param allowed_formats: which formats of files the cache should store. ('binary' and 'source' are the only allowed values) """ def __init__(self, cache_dir, format_control, allowed_formats): # type: (str, FormatControl, Set[str]) -> None super(Cache, self).__init__() self.cache_dir = expanduser(cache_dir) if cache_dir else None self.format_control = format_control self.allowed_formats = allowed_formats _valid_formats = {"source", "binary"} assert self.allowed_formats.union(_valid_formats) == _valid_formats def _get_cache_path_parts(self, link): # type: (Link) -> List[str] """Get parts of part that must be os.path.joined with cache_dir """ # We want to generate an url to use as our cache key, we don't want to # just re-use the URL because it might have other items in the fragment # and we don't care about those. 
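        # For example (hypothetical values): for a link
        #   https://example.com/pkg-1.0.tar.gz#sha256=deadbeef&egg=pkg
        # the key becomes
        #   https://example.com/pkg-1.0.tar.gz#sha256=deadbeef
        # i.e. the hash is kept but the rest of the fragment is not.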
key_parts = [link.url_without_fragment] if link.hash_name is not None and link.hash is not None: key_parts.append("=".join([link.hash_name, link.hash])) key_url = "#".join(key_parts) # Encode our key url with sha224, we'll use this because it has similar # security properties to sha256, but with a shorter total output (and # thus less secure). However the differences don't make a lot of # difference for our use case here. hashed = hashlib.sha224(key_url.encode()).hexdigest() # We want to nest the directories some to prevent having a ton of top # level directories where we might run out of sub directories on some # FS. parts = [hashed[:2], hashed[2:4], hashed[4:6], hashed[6:]] return parts def _get_candidates(self, link, package_name): # type: (Link, Optional[str]) -> List[Any] can_not_cache = ( not self.cache_dir or not package_name or not link ) if can_not_cache: return [] canonical_name = canonicalize_name(package_name) formats = self.format_control.get_allowed_formats( canonical_name ) if not self.allowed_formats.intersection(formats): return [] root = self.get_path_for_link(link) try: return os.listdir(root) except OSError as err: if err.errno in {errno.ENOENT, errno.ENOTDIR}: return [] raise def get_path_for_link(self, link): # type: (Link) -> str """Return a directory to store cached items in for link. """ raise NotImplementedError() def get(self, link, package_name): # type: (Link, Optional[str]) -> Link """Returns a link to a cached item if it exists, otherwise returns the passed link. """ raise NotImplementedError() def _link_for_candidate(self, link, candidate): # type: (Link, str) -> Link root = self.get_path_for_link(link) path = os.path.join(root, candidate) return Link(path_to_url(path)) def cleanup(self): # type: () -> None pass class SimpleWheelCache(Cache): """A cache of wheels for future installs. """ def __init__(self, cache_dir, format_control): # type: (str, FormatControl) -> None super(SimpleWheelCache, self).__init__( cache_dir, format_control, {"binary"} ) def get_path_for_link(self, link): # type: (Link) -> str """Return a directory to store cached wheels for link Because there are M wheels for any one sdist, we provide a directory to cache them in, and then consult that directory when looking up cache hits. We only insert things into the cache if they have plausible version numbers, so that we don't contaminate the cache with things that were not unique. E.g. ./package might have dozens of installs done for it and build a version of 0.0...and if we built and cached a wheel, we'd end up using the same wheel even if the source has been edited. :param link: The link of the sdist for which this will cache wheels. 
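
        For example (hypothetical digest), a key whose sha224 hash
        starts with "abcdef" is stored under
        ``<cache_dir>/wheels/ab/cd/ef/<remaining-hash-digits>/``.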
""" parts = self._get_cache_path_parts(link) # Store wheels within the root cache_dir return os.path.join(self.cache_dir, "wheels", *parts) def get(self, link, package_name): # type: (Link, Optional[str]) -> Link candidates = [] for wheel_name in self._get_candidates(link, package_name): try: wheel = Wheel(wheel_name) except InvalidWheelFilename: continue if not wheel.supported(): # Built for a different python/arch/etc continue candidates.append((wheel.support_index_min(), wheel_name)) if not candidates: return link return self._link_for_candidate(link, min(candidates)[1]) class EphemWheelCache(SimpleWheelCache): """A SimpleWheelCache that creates it's own temporary cache directory """ def __init__(self, format_control): # type: (FormatControl) -> None self._temp_dir = TempDirectory(kind="ephem-wheel-cache") self._temp_dir.create() super(EphemWheelCache, self).__init__( self._temp_dir.path, format_control ) def cleanup(self): # type: () -> None self._temp_dir.cleanup() class WheelCache(Cache): """Wraps EphemWheelCache and SimpleWheelCache into a single Cache This Cache allows for gracefully degradation, using the ephem wheel cache when a certain link is not found in the simple wheel cache first. """ def __init__(self, cache_dir, format_control): # type: (str, FormatControl) -> None super(WheelCache, self).__init__( cache_dir, format_control, {'binary'} ) self._wheel_cache = SimpleWheelCache(cache_dir, format_control) self._ephem_cache = EphemWheelCache(format_control) def get_path_for_link(self, link): # type: (Link) -> str return self._wheel_cache.get_path_for_link(link) def get_ephem_path_for_link(self, link): # type: (Link) -> str return self._ephem_cache.get_path_for_link(link) def get(self, link, package_name): # type: (Link, Optional[str]) -> Link retval = self._wheel_cache.get(link, package_name) if retval is link: retval = self._ephem_cache.get(link, package_name) return retval def cleanup(self): # type: () -> None self._wheel_cache.cleanup() self._ephem_cache.cleanup() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9333274 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/0000755000175000017500000000000000000000000025553 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/__init__.py0000644000175000017500000000020400000000000027660 0ustar00vincentvincent00000000000000"""Subpackage containing all of pip's command line interface related code """ # This file intentionally does not import submodules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py0000644000175000017500000001370300000000000031173 0ustar00vincentvincent00000000000000"""Logic that powers autocompletion installed by ``pip completion``. """ import optparse import os import sys from pip._internal.cli.main_parser import create_main_parser from pip._internal.commands import commands_dict, get_summaries from pip._internal.utils.misc import get_installed_distributions def autocomplete(): """Entry Point for completion of main and subcommand options. """ # Don't complete if user hasn't sourced bash_completion file. 
if 'PIP_AUTO_COMPLETE' not in os.environ: return cwords = os.environ['COMP_WORDS'].split()[1:] cword = int(os.environ['COMP_CWORD']) try: current = cwords[cword - 1] except IndexError: current = '' subcommands = [cmd for cmd, summary in get_summaries()] options = [] # subcommand try: subcommand_name = [w for w in cwords if w in subcommands][0] except IndexError: subcommand_name = None parser = create_main_parser() # subcommand options if subcommand_name: # special case: 'help' subcommand has no options if subcommand_name == 'help': sys.exit(1) # special case: list locally installed dists for show and uninstall should_list_installed = ( subcommand_name in ['show', 'uninstall'] and not current.startswith('-') ) if should_list_installed: installed = [] lc = current.lower() for dist in get_installed_distributions(local_only=True): if dist.key.startswith(lc) and dist.key not in cwords[1:]: installed.append(dist.key) # if there are no dists installed, fall back to option completion if installed: for dist in installed: print(dist) sys.exit(1) subcommand = commands_dict[subcommand_name]() for opt in subcommand.parser.option_list_all: if opt.help != optparse.SUPPRESS_HELP: for opt_str in opt._long_opts + opt._short_opts: options.append((opt_str, opt.nargs)) # filter out previously specified options from available options prev_opts = [x.split('=')[0] for x in cwords[1:cword - 1]] options = [(x, v) for (x, v) in options if x not in prev_opts] # filter options by current input options = [(k, v) for k, v in options if k.startswith(current)] # get completion type given cwords and available subcommand options completion_type = get_path_completion_type( cwords, cword, subcommand.parser.option_list_all, ) # get completion files and directories if ``completion_type`` is # ``<file>``, ``<dir>`` or ``<path>`` if completion_type: options = auto_complete_paths(current, completion_type) options = ((opt, 0) for opt in options) for option in options: opt_label = option[0] # append '=' to options which require args if option[1] and option[0][:2] == "--": opt_label += '=' print(opt_label) else: # show main parser options only when necessary opts = [i.option_list for i in parser.option_groups] opts.append(parser.option_list) opts = (o for it in opts for o in it) if current.startswith('-'): for opt in opts: if opt.help != optparse.SUPPRESS_HELP: subcommands += opt._long_opts + opt._short_opts else: # get completion type given cwords and all available options completion_type = get_path_completion_type(cwords, cword, opts) if completion_type: subcommands = auto_complete_paths(current, completion_type) print(' '.join([x for x in subcommands if x.startswith(current)])) sys.exit(1) def get_path_completion_type(cwords, cword, opts): """Get the type of path completion (``file``, ``dir``, ``path`` or None) :param cwords: same as the environment variable ``COMP_WORDS`` :param cword: same as the environment variable ``COMP_CWORD`` :param opts: The available options to check :return: path completion type (``file``, ``dir``, ``path`` or None) """ if cword < 2 or not cwords[cword - 2].startswith('-'): return for opt in opts: if opt.help == optparse.SUPPRESS_HELP: continue for o in str(opt).split('/'): if cwords[cword - 2].split('=')[0] == o: if not opt.metavar or any( x in ('path', 'file', 'dir') for x in opt.metavar.split('/')): return opt.metavar def auto_complete_paths(current, completion_type): """If ``completion_type`` is ``file`` or ``path``, list all regular files and directories starting with ``current``; otherwise only list
directories starting with ``current``. :param current: The word to be completed :param completion_type: path completion type (``file``, ``path`` or ``dir``) :return: A generator of regular files and/or directories """ directory, filename = os.path.split(current) current_path = os.path.abspath(directory) # Don't complete paths if they can't be accessed if not os.access(current_path, os.R_OK): return filename = os.path.normcase(filename) # list all files that start with ``filename`` file_list = (x for x in os.listdir(current_path) if os.path.normcase(x).startswith(filename)) for f in file_list: opt = os.path.join(current_path, f) comp_file = os.path.normcase(os.path.join(directory, f)) # complete regular files when there is not ``<dir>`` after the option # complete directories when there is ``<file>``, ``<path>`` or # ``<dir>`` after the option if completion_type != 'dir' and os.path.isfile(opt): yield comp_file elif os.path.isdir(opt): yield os.path.join(comp_file, '') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/base_command.py0000644000175000017500000003134500000000000030543 0ustar00vincentvincent00000000000000"""Base Command class, and related routines""" from __future__ import absolute_import, print_function import logging import logging.config import optparse import os import platform import sys import traceback from pip._internal.cli import cmdoptions from pip._internal.cli.cmdoptions import make_search_scope from pip._internal.cli.parser import ( ConfigOptionParser, UpdatingDefaultsHelpFormatter, ) from pip._internal.cli.status_codes import ( ERROR, PREVIOUS_BUILD_DIR_ERROR, SUCCESS, UNKNOWN_ERROR, VIRTUALENV_NOT_FOUND, ) from pip._internal.download import PipSession from pip._internal.exceptions import ( BadCommand, CommandError, InstallationError, PreviousBuildDirError, UninstallationError, ) from pip._internal.index import PackageFinder from pip._internal.models.selection_prefs import SelectionPreferences from pip._internal.models.target_python import TargetPython from pip._internal.req.constructors import ( install_req_from_editable, install_req_from_line, ) from pip._internal.req.req_file import parse_requirements from pip._internal.utils.deprecation import deprecated from pip._internal.utils.logging import BrokenStdoutLoggingError, setup_logging from pip._internal.utils.misc import get_prog, normalize_path from pip._internal.utils.outdated import pip_version_check from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.virtualenv import running_under_virtualenv if MYPY_CHECK_RUNNING: from typing import Optional, List, Tuple, Any from optparse import Values from pip._internal.cache import WheelCache from pip._internal.req.req_set import RequirementSet __all__ = ['Command'] logger = logging.getLogger(__name__) class Command(object): name = None # type: Optional[str] usage = None # type: Optional[str] ignore_require_venv = False # type: bool def __init__(self, isolated=False): # type: (bool) -> None parser_kw = { 'usage': self.usage, 'prog': '%s %s' % (get_prog(), self.name), 'formatter': UpdatingDefaultsHelpFormatter(), 'add_help_option': False, 'name': self.name, 'description': self.__doc__, 'isolated': isolated, } self.parser = ConfigOptionParser(**parser_kw) # Commands should add options to this option group optgroup_name = '%s Options' % self.name.capitalize() self.cmd_opts = optparse.OptionGroup(self.parser, optgroup_name) # Add the general options gen_opts =
cmdoptions.make_option_group( cmdoptions.general_group, self.parser, ) self.parser.add_option_group(gen_opts) def run(self, options, args): # type: (Values, List[Any]) -> Any raise NotImplementedError @classmethod def _get_index_urls(cls, options): """Return a list of index urls from user-provided options.""" index_urls = [] if not getattr(options, "no_index", False): url = getattr(options, "index_url", None) if url: index_urls.append(url) urls = getattr(options, "extra_index_urls", None) if urls: index_urls.extend(urls) # Return None rather than an empty list return index_urls or None def _build_session(self, options, retries=None, timeout=None): # type: (Values, Optional[int], Optional[int]) -> PipSession session = PipSession( cache=( normalize_path(os.path.join(options.cache_dir, "http")) if options.cache_dir else None ), retries=retries if retries is not None else options.retries, insecure_hosts=options.trusted_hosts, index_urls=self._get_index_urls(options), ) # Handle custom ca-bundles from the user if options.cert: session.verify = options.cert # Handle SSL client certificate if options.client_cert: session.cert = options.client_cert # Handle timeouts if options.timeout or timeout: session.timeout = ( timeout if timeout is not None else options.timeout ) # Handle configured proxies if options.proxy: session.proxies = { "http": options.proxy, "https": options.proxy, } # Determine if we can prompt the user for authentication or not session.auth.prompting = not options.no_input return session def parse_args(self, args): # type: (List[str]) -> Tuple # factored out for testability return self.parser.parse_args(args) def main(self, args): # type: (List[str]) -> int options, args = self.parse_args(args) # Set verbosity so that it can be used elsewhere. self.verbosity = options.verbose - options.quiet level_number = setup_logging( verbosity=self.verbosity, no_color=options.no_color, user_log_file=options.log, ) if sys.version_info[:2] == (2, 7): message = ( "A future version of pip will drop support for Python 2.7. " "More details about Python 2 support in pip can be found at " "https://pip.pypa.io/en/latest/development/release-process/#python-2-support" # noqa ) if platform.python_implementation() == "CPython": message = ( "Python 2.7 will reach the end of its life on January " "1st, 2020. Please upgrade your Python as Python 2.7 " "won't be maintained after that date. " ) + message deprecated(message, replacement=None, gone_in=None) # TODO: Try to get these passing down from the command? # without resorting to os.environ to hold these. # This also affects isolated builds and it should. if options.no_input: os.environ['PIP_NO_INPUT'] = '1' if options.exists_action: os.environ['PIP_EXISTS_ACTION'] = ' '.join(options.exists_action) if options.require_venv and not self.ignore_require_venv: # If a venv is required check if it can really be found if not running_under_virtualenv(): logger.critical( 'Could not find an activated virtualenv (required).'
) sys.exit(VIRTUALENV_NOT_FOUND) try: status = self.run(options, args) # FIXME: all commands should return an exit status # and when it is done, isinstance is not needed anymore if isinstance(status, int): return status except PreviousBuildDirError as exc: logger.critical(str(exc)) logger.debug('Exception information:', exc_info=True) return PREVIOUS_BUILD_DIR_ERROR except (InstallationError, UninstallationError, BadCommand) as exc: logger.critical(str(exc)) logger.debug('Exception information:', exc_info=True) return ERROR except CommandError as exc: logger.critical('%s', exc) logger.debug('Exception information:', exc_info=True) return ERROR except BrokenStdoutLoggingError: # Bypass our logger and write any remaining messages to stderr # because stdout no longer works. print('ERROR: Pipe to stdout was broken', file=sys.stderr) if level_number <= logging.DEBUG: traceback.print_exc(file=sys.stderr) return ERROR except KeyboardInterrupt: logger.critical('Operation cancelled by user') logger.debug('Exception information:', exc_info=True) return ERROR except BaseException: logger.critical('Exception:', exc_info=True) return UNKNOWN_ERROR finally: allow_version_check = ( # Does this command have the index_group options? hasattr(options, "no_index") and # Is this command allowed to perform this check? not (options.disable_pip_version_check or options.no_index) ) # Check if we're using the latest version of pip available if allow_version_check: session = self._build_session( options, retries=0, timeout=min(5, options.timeout) ) with session: pip_version_check(session, options) # Shutdown the logging module logging.shutdown() return SUCCESS class RequirementCommand(Command): @staticmethod def populate_requirement_set(requirement_set, # type: RequirementSet args, # type: List[str] options, # type: Values finder, # type: PackageFinder session, # type: PipSession name, # type: str wheel_cache # type: Optional[WheelCache] ): # type: (...) -> None """ Marshal cmd line args into a requirement set. 
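        Requirements are gathered, in order, from constraint files (-c),
        positional args, editables (-e), and requirements files (-r);
        e.g. a hypothetical ``pip install -c c.txt -e ./proj pkg -r r.txt``
        feeds all four loops below.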
""" # NOTE: As a side-effect, options.require_hashes and # requirement_set.require_hashes may be updated for filename in options.constraints: for req_to_add in parse_requirements( filename, constraint=True, finder=finder, options=options, session=session, wheel_cache=wheel_cache): req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for req in args: req_to_add = install_req_from_line( req, None, isolated=options.isolated_mode, use_pep517=options.use_pep517, wheel_cache=wheel_cache ) req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for req in options.editables: req_to_add = install_req_from_editable( req, isolated=options.isolated_mode, use_pep517=options.use_pep517, wheel_cache=wheel_cache ) req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for filename in options.requirements: for req_to_add in parse_requirements( filename, finder=finder, options=options, session=session, wheel_cache=wheel_cache, use_pep517=options.use_pep517): req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) # If --require-hashes was a line in a requirements file, tell # RequirementSet about it: requirement_set.require_hashes = options.require_hashes if not (args or options.editables or options.requirements): opts = {'name': name} if options.find_links: raise CommandError( 'You must give at least one requirement to %(name)s ' '(maybe you meant "pip %(name)s %(links)s"?)' % dict(opts, links=' '.join(options.find_links))) else: raise CommandError( 'You must give at least one requirement to %(name)s ' '(see "pip help %(name)s")' % opts) def _build_package_finder( self, options, # type: Values session, # type: PipSession target_python=None, # type: Optional[TargetPython] ignore_requires_python=None, # type: Optional[bool] ): # type: (...) -> PackageFinder """ Create a package finder appropriate to this requirement command. :param ignore_requires_python: Whether to ignore incompatible "Requires-Python" values in links. Defaults to False. """ search_scope = make_search_scope(options) selection_prefs = SelectionPreferences( allow_yanked=True, format_control=options.format_control, allow_all_prereleases=options.pre, prefer_binary=options.prefer_binary, ignore_requires_python=ignore_requires_python, ) return PackageFinder.create( search_scope=search_scope, selection_prefs=selection_prefs, trusted_hosts=options.trusted_hosts, session=session, target_python=target_python, ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/cmdoptions.py0000644000175000017500000006546400000000000030323 0ustar00vincentvincent00000000000000""" shared options and groups The principle here is to define options once, but *not* instantiate them globally. One reason being that options with action='append' can carry state between parses. pip parses general options twice internally, and shouldn't pass on state. To be consistent, all options will follow this design. 
""" from __future__ import absolute_import import logging import textwrap import warnings from distutils.util import strtobool from functools import partial from optparse import SUPPRESS_HELP, Option, OptionGroup from textwrap import dedent from pip._internal.exceptions import CommandError from pip._internal.locations import USER_CACHE_DIR, get_src_prefix from pip._internal.models.format_control import FormatControl from pip._internal.models.index import PyPI from pip._internal.models.search_scope import SearchScope from pip._internal.models.target_python import TargetPython from pip._internal.utils.hashes import STRONG_HASHES from pip._internal.utils.misc import redact_password_from_url from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.ui import BAR_TYPES if MYPY_CHECK_RUNNING: from typing import Any, Callable, Dict, Optional, Tuple from optparse import OptionParser, Values from pip._internal.cli.parser import ConfigOptionParser logger = logging.getLogger(__name__) def raise_option_error(parser, option, msg): """ Raise an option parsing error using parser.error(). Args: parser: an OptionParser instance. option: an Option instance. msg: the error text. """ msg = '{} error: {}'.format(option, msg) msg = textwrap.fill(' '.join(msg.split())) parser.error(msg) def make_option_group(group, parser): # type: (Dict[str, Any], ConfigOptionParser) -> OptionGroup """ Return an OptionGroup object group -- assumed to be dict with 'name' and 'options' keys parser -- an optparse Parser """ option_group = OptionGroup(parser, group['name']) for option in group['options']: option_group.add_option(option()) return option_group def check_install_build_global(options, check_options=None): # type: (Values, Optional[Values]) -> None """Disable wheels if per-setup.py call options are set. :param options: The OptionParser options to update. :param check_options: The options to check, if not supplied defaults to options. """ if check_options is None: check_options = options def getname(n): return getattr(check_options, n, None) names = ["build_options", "global_options", "install_options"] if any(map(getname, names)): control = options.format_control control.disallow_binaries() warnings.warn( 'Disabling all use of wheels due to the use of --build-options ' '/ --global-options / --install-options.', stacklevel=2, ) def check_dist_restriction(options, check_target=False): # type: (Values, bool) -> None """Function for determining if custom platform options are allowed. :param options: The OptionParser options. :param check_target: Whether or not to check if --target is being used. """ dist_restriction_set = any([ options.python_version, options.platform, options.abi, options.implementation, ]) binary_only = FormatControl(set(), {':all:'}) sdist_dependencies_allowed = ( options.format_control != binary_only and not options.ignore_dependencies ) # Installations or downloads using dist restrictions must not combine # source distributions and dist-specific wheels, as they are not # guaranteed to be locally compatible. if dist_restriction_set and sdist_dependencies_allowed: raise CommandError( "When restricting platform and interpreter constraints using " "--python-version, --platform, --abi, or --implementation, " "either --no-deps must be set, or --only-binary=:all: must be " "set and --no-binary must not be set (or must be set to " ":none:)." 
) if check_target: if dist_restriction_set and not options.target_dir: raise CommandError( "Can not use any platform or abi specific options unless " "installing via '--target'" ) ########### # options # ########### help_ = partial( Option, '-h', '--help', dest='help', action='help', help='Show help.', ) # type: Callable[..., Option] isolated_mode = partial( Option, "--isolated", dest="isolated_mode", action="store_true", default=False, help=( "Run pip in an isolated mode, ignoring environment variables and user " "configuration." ), ) # type: Callable[..., Option] require_virtualenv = partial( Option, # Run only if inside a virtualenv, bail if not. '--require-virtualenv', '--require-venv', dest='require_venv', action='store_true', default=False, help=SUPPRESS_HELP ) # type: Callable[..., Option] verbose = partial( Option, '-v', '--verbose', dest='verbose', action='count', default=0, help='Give more output. Option is additive, and can be used up to 3 times.' ) # type: Callable[..., Option] no_color = partial( Option, '--no-color', dest='no_color', action='store_true', default=False, help="Suppress colored output", ) # type: Callable[..., Option] version = partial( Option, '-V', '--version', dest='version', action='store_true', help='Show version and exit.', ) # type: Callable[..., Option] quiet = partial( Option, '-q', '--quiet', dest='quiet', action='count', default=0, help=( 'Give less output. Option is additive, and can be used up to 3' ' times (corresponding to WARNING, ERROR, and CRITICAL logging' ' levels).' ), ) # type: Callable[..., Option] progress_bar = partial( Option, '--progress-bar', dest='progress_bar', type='choice', choices=list(BAR_TYPES.keys()), default='on', help=( 'Specify type of progress to be displayed [' + '|'.join(BAR_TYPES.keys()) + '] (default: %default)' ), ) # type: Callable[..., Option] log = partial( Option, "--log", "--log-file", "--local-log", dest="log", metavar="path", help="Path to a verbose appending log." ) # type: Callable[..., Option] no_input = partial( Option, # Don't ask for input '--no-input', dest='no_input', action='store_true', default=False, help=SUPPRESS_HELP ) # type: Callable[..., Option] proxy = partial( Option, '--proxy', dest='proxy', type='str', default='', help="Specify a proxy in the form [user:passwd@]proxy.server:port." 
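    # Hypothetical example value (illustrative only):
    #   --proxy user:passwd@proxy.example.com:3128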
) # type: Callable[..., Option] retries = partial( Option, '--retries', dest='retries', type='int', default=5, help="Maximum number of retries each connection should attempt " "(default %default times).", ) # type: Callable[..., Option] timeout = partial( Option, '--timeout', '--default-timeout', metavar='sec', dest='timeout', type='float', default=15, help='Set the socket timeout (default %default seconds).', ) # type: Callable[..., Option] skip_requirements_regex = partial( Option, # A regex to be used to skip requirements '--skip-requirements-regex', dest='skip_requirements_regex', type='str', default='', help=SUPPRESS_HELP, ) # type: Callable[..., Option] def exists_action(): # type: () -> Option return Option( # Option when path already exist '--exists-action', dest='exists_action', type='choice', choices=['s', 'i', 'w', 'b', 'a'], default=[], action='append', metavar='action', help="Default action when a path already exists: " "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.", ) cert = partial( Option, '--cert', dest='cert', type='str', metavar='path', help="Path to alternate CA bundle.", ) # type: Callable[..., Option] client_cert = partial( Option, '--client-cert', dest='client_cert', type='str', default=None, metavar='path', help="Path to SSL client certificate, a single file containing the " "private key and the certificate in PEM format.", ) # type: Callable[..., Option] index_url = partial( Option, '-i', '--index-url', '--pypi-url', dest='index_url', metavar='URL', default=PyPI.simple_url, help="Base URL of the Python Package Index (default %default). " "This should point to a repository compliant with PEP 503 " "(the simple repository API) or a local directory laid out " "in the same format.", ) # type: Callable[..., Option] def extra_index_url(): return Option( '--extra-index-url', dest='extra_index_urls', metavar='URL', action='append', default=[], help="Extra URLs of package indexes to use in addition to " "--index-url. Should follow the same rules as " "--index-url.", ) no_index = partial( Option, '--no-index', dest='no_index', action='store_true', default=False, help='Ignore package index (only looking at --find-links URLs instead).', ) # type: Callable[..., Option] def find_links(): # type: () -> Option return Option( '-f', '--find-links', dest='find_links', action='append', default=[], metavar='url', help="If a url or path to an html file, then parse for links to " "archives. If a local path or file:// url that's a directory, " "then look for archives in the directory listing.", ) def make_search_scope(options, suppress_no_index=False): # type: (Values, bool) -> SearchScope """ :param suppress_no_index: Whether to ignore the --no-index option when constructing the SearchScope object. """ index_urls = [options.index_url] + options.extra_index_urls if options.no_index and not suppress_no_index: logger.debug( 'Ignoring indexes: %s', ','.join(redact_password_from_url(url) for url in index_urls), ) index_urls = [] search_scope = SearchScope( find_links=options.find_links, index_urls=index_urls, ) return search_scope def trusted_host(): # type: () -> Option return Option( "--trusted-host", dest="trusted_hosts", action="append", metavar="HOSTNAME", default=[], help="Mark this host as trusted, even though it does not have valid " "or any HTTPS.", ) def constraints(): # type: () -> Option return Option( '-c', '--constraint', dest='constraints', action='append', default=[], metavar='file', help='Constrain versions using the given constraints file. 
' 'This option can be used multiple times.' ) def requirements(): # type: () -> Option return Option( '-r', '--requirement', dest='requirements', action='append', default=[], metavar='file', help='Install from the given requirements file. ' 'This option can be used multiple times.' ) def editable(): # type: () -> Option return Option( '-e', '--editable', dest='editables', action='append', default=[], metavar='path/url', help=('Install a project in editable mode (i.e. setuptools ' '"develop mode") from a local project path or a VCS url.'), ) src = partial( Option, '--src', '--source', '--source-dir', '--source-directory', dest='src_dir', metavar='dir', default=get_src_prefix(), help='Directory to check out editable projects into. ' 'The default in a virtualenv is "<venv path>/src". ' 'The default for global installs is "<current dir>/src".' ) # type: Callable[..., Option] def _get_format_control(values, option): # type: (Values, Option) -> Any """Get a format_control object.""" return getattr(values, option.dest) def _handle_no_binary(option, opt_str, value, parser): # type: (Option, str, str, OptionParser) -> None existing = _get_format_control(parser.values, option) FormatControl.handle_mutual_excludes( value, existing.no_binary, existing.only_binary, ) def _handle_only_binary(option, opt_str, value, parser): # type: (Option, str, str, OptionParser) -> None existing = _get_format_control(parser.values, option) FormatControl.handle_mutual_excludes( value, existing.only_binary, existing.no_binary, ) def no_binary(): # type: () -> Option format_control = FormatControl(set(), set()) return Option( "--no-binary", dest="format_control", action="callback", callback=_handle_no_binary, type="str", default=format_control, help="Do not use binary packages. Can be supplied multiple times, and " "each time adds to the existing value. Accepts either :all: to " "disable all binary packages, :none: to empty the set, or one or " "more package names with commas between them. Note that some " "packages are tricky to compile and may fail to install when " "this option is used on them.", ) def only_binary(): # type: () -> Option format_control = FormatControl(set(), set()) return Option( "--only-binary", dest="format_control", action="callback", callback=_handle_only_binary, type="str", default=format_control, help="Do not use source packages. Can be supplied multiple times, and " "each time adds to the existing value. Accepts either :all: to " "disable all source packages, :none: to empty the set, or one or " "more package names with commas between them. Packages without " "binary distributions will fail to install when this option is " "used on them.", ) platform = partial( Option, '--platform', dest='platform', metavar='platform', default=None, help=("Only use wheels compatible with <platform>. " "Defaults to the platform of the running system."), ) # type: Callable[..., Option] # This was made a separate function for unit-testing purposes. def _convert_python_version(value): # type: (str) -> Tuple[Tuple[int, ...], Optional[str]] """ Convert a version string like "3", "37", or "3.7.3" into a tuple of ints. :return: A 2-tuple (version_info, error_msg), where `error_msg` is non-None if and only if there was a parsing error. """ if not value: # The empty string is the same as not providing a value. return (None, None) parts = value.split('.') if len(parts) > 3: return ((), 'at most three version parts are allowed') if len(parts) == 1: # Then we are in the case of "3" or "37".
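# A sketch of the conversions performed here (illustrative values):
#   "3"     -> parts ["3"]           -> version_info (3,)
#   "37"    -> parts ["3", "7"]      -> version_info (3, 7)
#   "3.7.3" -> parts ["3", "7", "3"] -> version_info (3, 7, 3)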
value = parts[0] if len(value) > 1: parts = [value[0], value[1:]] try: version_info = tuple(int(part) for part in parts) except ValueError: return ((), 'each version part must be an integer') return (version_info, None) def _handle_python_version(option, opt_str, value, parser): # type: (Option, str, str, OptionParser) -> None """ Handle a provided --python-version value. """ version_info, error_msg = _convert_python_version(value) if error_msg is not None: msg = ( 'invalid --python-version value: {!r}: {}'.format( value, error_msg, ) ) raise_option_error(parser, option=option, msg=msg) parser.values.python_version = version_info python_version = partial( Option, '--python-version', dest='python_version', metavar='python_version', action='callback', callback=_handle_python_version, type='str', default=None, help=dedent("""\ The Python interpreter version to use for wheel and "Requires-Python" compatibility checks. Defaults to a version derived from the running interpreter. The version can be specified using up to three dot-separated integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor version can also be given as a string without dots (e.g. "37" for 3.7.0). """), ) # type: Callable[..., Option] implementation = partial( Option, '--implementation', dest='implementation', metavar='implementation', default=None, help=("Only use wheels compatible with Python " "implementation <implementation>, e.g. 'pp', 'jy', 'cp', " " or 'ip'. If not specified, then the current " "interpreter implementation is used. Use 'py' to force " "implementation-agnostic wheels."), ) # type: Callable[..., Option] abi = partial( Option, '--abi', dest='abi', metavar='abi', default=None, help=("Only use wheels compatible with Python " "abi <abi>, e.g. 'pypy_41'. If not specified, then the " "current interpreter abi tag is used. Generally " "you will need to specify --implementation, " "--platform, and --python-version when using " "this option."), ) # type: Callable[..., Option] def add_target_python_options(cmd_opts): # type: (OptionGroup) -> None cmd_opts.add_option(platform()) cmd_opts.add_option(python_version()) cmd_opts.add_option(implementation()) cmd_opts.add_option(abi()) def make_target_python(options): # type: (Values) -> TargetPython target_python = TargetPython( platform=options.platform, py_version_info=options.python_version, abi=options.abi, implementation=options.implementation, ) return target_python def prefer_binary(): # type: () -> Option return Option( "--prefer-binary", dest="prefer_binary", action="store_true", default=False, help="Prefer older binary packages over newer source packages." ) cache_dir = partial( Option, "--cache-dir", dest="cache_dir", default=USER_CACHE_DIR, metavar="dir", help="Store the cache data in <dir>." ) # type: Callable[..., Option] def _handle_no_cache_dir(option, opt, value, parser): # type: (Option, str, str, OptionParser) -> None """ Process a value provided for the --no-cache-dir option. This is an optparse.Option callback for the --no-cache-dir option. """ # The value argument will be None if --no-cache-dir is passed via the # command-line, since the option doesn't accept arguments. However, # the value can be non-None if the option is triggered e.g. by an # environment variable, like PIP_NO_CACHE_DIR=true. if value is not None: # Then parse the string value to get argument error-checking.
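# e.g. a hypothetical PIP_NO_CACHE_DIR=off reaches here with value == "off";
# strtobool("off") returns 0 without raising, so no error is reported.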
try: strtobool(value) except ValueError as exc: raise_option_error(parser, option=option, msg=str(exc)) # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool() # converted to 0 (like "false" or "no") caused cache_dir to be disabled # rather than enabled (logic would say the latter). Thus, we disable # the cache directory not just on values that parse to True, but (for # backwards compatibility reasons) also on values that parse to False. # In other words, always set it to False if the option is provided in # some (valid) form. parser.values.cache_dir = False no_cache = partial( Option, "--no-cache-dir", dest="cache_dir", action="callback", callback=_handle_no_cache_dir, help="Disable the cache.", ) # type: Callable[..., Option] no_deps = partial( Option, '--no-deps', '--no-dependencies', dest='ignore_dependencies', action='store_true', default=False, help="Don't install package dependencies.", ) # type: Callable[..., Option] build_dir = partial( Option, '-b', '--build', '--build-dir', '--build-directory', dest='build_dir', metavar='dir', help='Directory to unpack packages into and build in. Note that ' 'an initial build still takes place in a temporary directory. ' 'The location of temporary directories can be controlled by setting ' 'the TMPDIR environment variable (TEMP on Windows) appropriately. ' 'When passed, build directories are not cleaned in case of failures.' ) # type: Callable[..., Option] ignore_requires_python = partial( Option, '--ignore-requires-python', dest='ignore_requires_python', action='store_true', help='Ignore the Requires-Python information.' ) # type: Callable[..., Option] no_build_isolation = partial( Option, '--no-build-isolation', dest='build_isolation', action='store_false', default=True, help='Disable isolation when building a modern source distribution. ' 'Build dependencies specified by PEP 518 must be already installed ' 'if this option is used.' ) # type: Callable[..., Option] def _handle_no_use_pep517(option, opt, value, parser): # type: (Option, str, str, OptionParser) -> None """ Process a value provided for the --no-use-pep517 option. This is an optparse.Option callback for the no_use_pep517 option. """ # Since --no-use-pep517 doesn't accept arguments, the value argument # will be None if --no-use-pep517 is passed via the command-line. # However, the value can be non-None if the option is triggered e.g. # by an environment variable, for example "PIP_NO_USE_PEP517=true". if value is not None: msg = """A value was passed for --no-use-pep517, probably using either the PIP_NO_USE_PEP517 environment variable or the "no-use-pep517" config file option. Use an appropriate value of the PIP_USE_PEP517 environment variable or the "use-pep517" config file option instead. """ raise_option_error(parser, option=option, msg=msg) # Otherwise, --no-use-pep517 was passed via the command-line. parser.values.use_pep517 = False use_pep517 = partial( Option, '--use-pep517', dest='use_pep517', action='store_true', default=None, help='Use PEP 517 for building source distributions ' '(use --no-use-pep517 to force legacy behaviour).' ) # type: Any no_use_pep517 = partial( Option, '--no-use-pep517', dest='use_pep517', action='callback', callback=_handle_no_use_pep517, default=None, help=SUPPRESS_HELP ) # type: Any install_options = partial( Option, '--install-option', dest='install_options', action='append', metavar='options', help="Extra arguments to be supplied to the setup.py install " "command (use like --install-option=\"--install-scripts=/usr/local/" "bin\"). 
Use multiple --install-option options to pass multiple " "options to setup.py install. If you are using an option with a " "directory path, be sure to use absolute path.", ) # type: Callable[..., Option] global_options = partial( Option, '--global-option', dest='global_options', action='append', metavar='options', help="Extra global options to be supplied to the setup.py " "call before the install command.", ) # type: Callable[..., Option] no_clean = partial( Option, '--no-clean', action='store_true', default=False, help="Don't clean up build directories." ) # type: Callable[..., Option] pre = partial( Option, '--pre', action='store_true', default=False, help="Include pre-release and development versions. By default, " "pip only finds stable versions.", ) # type: Callable[..., Option] disable_pip_version_check = partial( Option, "--disable-pip-version-check", dest="disable_pip_version_check", action="store_true", default=False, help="Don't periodically check PyPI to determine whether a new version " "of pip is available for download. Implied with --no-index.", ) # type: Callable[..., Option] # Deprecated, Remove later always_unzip = partial( Option, '-Z', '--always-unzip', dest='always_unzip', action='store_true', help=SUPPRESS_HELP, ) # type: Callable[..., Option] def _handle_merge_hash(option, opt_str, value, parser): # type: (Option, str, str, OptionParser) -> None """Given a value spelled "algo:digest", append the digest to a list pointed to in a dict by the algo name.""" if not parser.values.hashes: parser.values.hashes = {} try: algo, digest = value.split(':', 1) except ValueError: parser.error('Arguments to %s must be a hash name ' 'followed by a value, like --hash=sha256:abcde...' % opt_str) if algo not in STRONG_HASHES: parser.error('Allowed hash algorithms for %s are %s.' % (opt_str, ', '.join(STRONG_HASHES))) parser.values.hashes.setdefault(algo, []).append(digest) hash = partial( Option, '--hash', # Hash values eventually end up in InstallRequirement.hashes due to # __dict__ copying in process_line(). dest='hashes', action='callback', callback=_handle_merge_hash, type='string', help="Verify that the package's archive matches this " 'hash before installing. Example: --hash=sha256:abcdef...', ) # type: Callable[..., Option] require_hashes = partial( Option, '--require-hashes', dest='require_hashes', action='store_true', default=False, help='Require a hash to check each requirement against, for ' 'repeatable installs. This option is implied when any package in a ' 'requirements file has a --hash option.', ) # type: Callable[..., Option] list_path = partial( Option, '--path', dest='path', action='append', help='Restrict to the specified installation path for listing ' 'packages (can be used multiple times).' 
) # type: Callable[..., Option] def check_list_path_option(options): # type: (Values) -> None if options.path and (options.user or options.local): raise CommandError( "Cannot combine '--path' with '--user' or '--local'" ) ########## # groups # ########## general_group = { 'name': 'General Options', 'options': [ help_, isolated_mode, require_virtualenv, verbose, version, quiet, log, no_input, proxy, retries, timeout, skip_requirements_regex, exists_action, trusted_host, cert, client_cert, cache_dir, no_cache, disable_pip_version_check, no_color, ] } # type: Dict[str, Any] index_group = { 'name': 'Package Index Options', 'options': [ index_url, extra_index_url, no_index, find_links, ] } # type: Dict[str, Any] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py0000644000175000017500000000540100000000000030425 0ustar00vincentvincent00000000000000"""A single place for constructing and exposing the main parser """ import os import sys from pip._internal.cli import cmdoptions from pip._internal.cli.parser import ( ConfigOptionParser, UpdatingDefaultsHelpFormatter, ) from pip._internal.commands import ( commands_dict, get_similar_commands, get_summaries, ) from pip._internal.exceptions import CommandError from pip._internal.utils.misc import get_pip_version, get_prog from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Tuple, List __all__ = ["create_main_parser", "parse_command"] def create_main_parser(): # type: () -> ConfigOptionParser """Creates and returns the main parser for pip's CLI """ parser_kw = { 'usage': '\n%prog [options]', 'add_help_option': False, 'formatter': UpdatingDefaultsHelpFormatter(), 'name': 'global', 'prog': get_prog(), } parser = ConfigOptionParser(**parser_kw) parser.disable_interspersed_args() parser.version = get_pip_version() # add the general options gen_opts = cmdoptions.make_option_group(cmdoptions.general_group, parser) parser.add_option_group(gen_opts) # so the help formatter knows parser.main = True # type: ignore # create command listing for description command_summaries = get_summaries() description = [''] + ['%-27s %s' % (i, j) for i, j in command_summaries] parser.description = '\n'.join(description) return parser def parse_command(args): # type: (List[str]) -> Tuple[str, List[str]] parser = create_main_parser() # Note: parser calls disable_interspersed_args(), so the result of this # call is to split the initial args into the general options before the # subcommand and everything else. 
# For example: # args: ['--timeout=5', 'install', '--user', 'INITools'] # general_options: ['--timeout=5'] # args_else: ['install', '--user', 'INITools'] general_options, args_else = parser.parse_args(args) # --version if general_options.version: sys.stdout.write(parser.version) # type: ignore sys.stdout.write(os.linesep) sys.exit() # pip || pip help -> print_help() if not args_else or (args_else[0] == 'help' and len(args_else) == 1): parser.print_help() sys.exit() # the subcommand name cmd_name = args_else[0] if cmd_name not in commands_dict: guess = get_similar_commands(cmd_name) msg = ['unknown command "%s"' % cmd_name] if guess: msg.append('maybe you meant "%s"' % guess) raise CommandError(' - '.join(msg)) # all the args without the subcommand cmd_args = args[:] cmd_args.remove(cmd_name) return cmd_name, cmd_args ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/parser.py0000644000175000017500000002224200000000000027423 0ustar00vincentvincent00000000000000"""Base option parser setup""" from __future__ import absolute_import import logging import optparse import sys import textwrap from distutils.util import strtobool from pip._vendor.six import string_types from pip._internal.cli.status_codes import UNKNOWN_ERROR from pip._internal.configuration import Configuration, ConfigurationError from pip._internal.utils.compat import get_terminal_size logger = logging.getLogger(__name__) class PrettyHelpFormatter(optparse.IndentedHelpFormatter): """A prettier/less verbose help formatter for optparse.""" def __init__(self, *args, **kwargs): # help position must be aligned with __init__.parseopts.description kwargs['max_help_position'] = 30 kwargs['indent_increment'] = 1 kwargs['width'] = get_terminal_size()[0] - 2 optparse.IndentedHelpFormatter.__init__(self, *args, **kwargs) def format_option_strings(self, option): return self._format_option_strings(option, ' <%s>', ', ') def _format_option_strings(self, option, mvarfmt=' <%s>', optsep=', '): """ Return a comma-separated list of option strings and metavars. :param option: tuple of (short opt, long opt), e.g: ('-f', '--format') :param mvarfmt: metavar format string - evaluated as mvarfmt % metavar :param optsep: separator """ opts = [] if option._short_opts: opts.append(option._short_opts[0]) if option._long_opts: opts.append(option._long_opts[0]) if len(opts) > 1: opts.insert(1, optsep) if option.takes_value(): metavar = option.metavar or option.dest.lower() opts.append(mvarfmt % metavar.lower()) return ''.join(opts) def format_heading(self, heading): if heading == 'Options': return '' return heading + ':\n' def format_usage(self, usage): """ Ensure there is only one newline between usage and the first heading if there is no description.
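        For illustration, a hypothetical usage string "%prog [options]" is
        dedented, indented by two spaces, and wrapped as
        '\nUsage: <indented usage>\n' below.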
""" msg = '\nUsage: %s\n' % self.indent_lines(textwrap.dedent(usage), " ") return msg def format_description(self, description): # leave full control over description to us if description: if hasattr(self.parser, 'main'): label = 'Commands' else: label = 'Description' # some doc strings have initial newlines, some don't description = description.lstrip('\n') # some doc strings have final newlines and spaces, some don't description = description.rstrip() # dedent, then reindent description = self.indent_lines(textwrap.dedent(description), " ") description = '%s:\n%s\n' % (label, description) return description else: return '' def format_epilog(self, epilog): # leave full control over epilog to us if epilog: return epilog else: return '' def indent_lines(self, text, indent): new_lines = [indent + line for line in text.split('\n')] return "\n".join(new_lines) class UpdatingDefaultsHelpFormatter(PrettyHelpFormatter): """Custom help formatter for use in ConfigOptionParser. This is updates the defaults before expanding them, allowing them to show up correctly in the help listing. """ def expand_default(self, option): if self.parser is not None: self.parser._update_defaults(self.parser.defaults) return optparse.IndentedHelpFormatter.expand_default(self, option) class CustomOptionParser(optparse.OptionParser): def insert_option_group(self, idx, *args, **kwargs): """Insert an OptionGroup at a given position.""" group = self.add_option_group(*args, **kwargs) self.option_groups.pop() self.option_groups.insert(idx, group) return group @property def option_list_all(self): """Get a list of all options, including those in option groups.""" res = self.option_list[:] for i in self.option_groups: res.extend(i.option_list) return res class ConfigOptionParser(CustomOptionParser): """Custom option parser which updates its defaults by checking the configuration files and environmental variables""" def __init__(self, *args, **kwargs): self.name = kwargs.pop('name') isolated = kwargs.pop("isolated", False) self.config = Configuration(isolated) assert self.name optparse.OptionParser.__init__(self, *args, **kwargs) def check_default(self, option, key, val): try: return option.check_value(key, val) except optparse.OptionValueError as exc: print("An error occurred during configuration: %s" % exc) sys.exit(3) def _get_ordered_configuration_items(self): # Configuration gives keys in an unordered manner. Order them. override_order = ["global", self.name, ":env:"] # Pool the options into different groups section_items = {name: [] for name in override_order} for section_key, val in self.config.items(): # ignore empty values if not val: logger.debug( "Ignoring configuration key '%s' as it's value is empty.", section_key ) continue section, key = section_key.split(".", 1) if section in override_order: section_items[section].append((key, val)) # Yield each group in their override order for section in override_order: for key, val in section_items[section]: yield key, val def _update_defaults(self, defaults): """Updates the given defaults with values from the config files and the environ. Does a little special handling for certain types of options (lists).""" # Accumulate complex default state. self.values = optparse.Values(self.defaults) late_eval = set() # Then set the options with those values for key, val in self._get_ordered_configuration_items(): # '--' because configuration supports only long names option = self.get_option('--' + key) # Ignore options not present in this parser. E.g. 
non-globals put # in [global] by users that want them to apply to all applicable # commands. if option is None: continue if option.action in ('store_true', 'store_false', 'count'): try: val = strtobool(val) except ValueError: error_msg = invalid_config_error_message( option.action, key, val ) self.error(error_msg) elif option.action == 'append': val = val.split() val = [self.check_default(option, key, v) for v in val] elif option.action == 'callback': late_eval.add(option.dest) opt_str = option.get_opt_string() val = option.convert_value(opt_str, val) # From take_action args = option.callback_args or () kwargs = option.callback_kwargs or {} option.callback(option, opt_str, val, self, *args, **kwargs) else: val = self.check_default(option, key, val) defaults[option.dest] = val for key in late_eval: defaults[key] = getattr(self.values, key) self.values = None return defaults def get_default_values(self): """Overriding to make updating the defaults after instantiation of the option parser possible, _update_defaults() does the dirty work.""" if not self.process_default_values: # Old, pre-Optik 1.5 behaviour. return optparse.Values(self.defaults) # Load the configuration, or error out in case of an error try: self.config.load() except ConfigurationError as err: self.exit(UNKNOWN_ERROR, str(err)) defaults = self._update_defaults(self.defaults.copy()) # ours for option in self._get_all_options(): default = defaults.get(option.dest) if isinstance(default, string_types): opt_str = option.get_opt_string() defaults[option.dest] = option.check_value(opt_str, default) return optparse.Values(defaults) def error(self, msg): self.print_usage(sys.stderr) self.exit(UNKNOWN_ERROR, "%s\n" % msg) def invalid_config_error_message(action, key, val): """Returns a better error message when invalid configuration option is provided.""" if action in ('store_true', 'store_false'): return ("{0} is not a valid value for {1} option, " "please specify a boolean value like yes/no, " "true/false or 1/0 instead.").format(val, key) return ("{0} is not a valid value for {1} option, " "please specify a numerical value like 1/0 " "instead.").format(val, key) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/cli/status_codes.py0000644000175000017500000000023400000000000030624 0ustar00vincentvincent00000000000000from __future__ import absolute_import SUCCESS = 0 ERROR = 1 UNKNOWN_ERROR = 2 VIRTUALENV_NOT_FOUND = 3 PREVIOUS_BUILD_DIR_ERROR = 4 NO_MATCHES_FOUND = 23 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.939994 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/0000755000175000017500000000000000000000000026605 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/__init__.py0000644000175000017500000000436700000000000030730 0ustar00vincentvincent00000000000000""" Package containing all pip commands """ from __future__ import absolute_import from pip._internal.commands.completion import CompletionCommand from pip._internal.commands.configuration import ConfigurationCommand from pip._internal.commands.debug import DebugCommand from pip._internal.commands.download import DownloadCommand from pip._internal.commands.freeze import FreezeCommand from pip._internal.commands.hash 
import HashCommand from pip._internal.commands.help import HelpCommand from pip._internal.commands.list import ListCommand from pip._internal.commands.check import CheckCommand from pip._internal.commands.search import SearchCommand from pip._internal.commands.show import ShowCommand from pip._internal.commands.install import InstallCommand from pip._internal.commands.uninstall import UninstallCommand from pip._internal.commands.wheel import WheelCommand from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import List, Type from pip._internal.cli.base_command import Command commands_order = [ InstallCommand, DownloadCommand, UninstallCommand, FreezeCommand, ListCommand, ShowCommand, CheckCommand, ConfigurationCommand, SearchCommand, WheelCommand, HashCommand, CompletionCommand, DebugCommand, HelpCommand, ] # type: List[Type[Command]] commands_dict = {c.name: c for c in commands_order} def get_summaries(ordered=True): """Yields sorted (command name, command summary) tuples.""" if ordered: cmditems = _sort_commands(commands_dict, commands_order) else: cmditems = commands_dict.items() for name, command_class in cmditems: yield (name, command_class.summary) def get_similar_commands(name): """Command name auto-correct.""" from difflib import get_close_matches name = name.lower() close_commands = get_close_matches(name, commands_dict.keys()) if close_commands: return close_commands[0] else: return False def _sort_commands(cmddict, order): def keyfn(key): try: return order.index(key[1]) except ValueError: # unordered items should come last return 0xff return sorted(cmddict.items(), key=keyfn) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/check.py0000644000175000017500000000262600000000000030242 0ustar00vincentvincent00000000000000import logging from pip._internal.cli.base_command import Command from pip._internal.operations.check import ( check_package_set, create_package_set_from_installed, ) logger = logging.getLogger(__name__) class CheckCommand(Command): """Verify installed packages have compatible dependencies.""" name = 'check' usage = """ %prog [options]""" summary = 'Verify installed packages have compatible dependencies.' 
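    # Illustrative output (hypothetical package names and versions):
    #   $ pip check
    #   somepkg 1.0 requires missingdep, which is not installed.
    #   somepkg 1.0 has requirement otherdep>=2.0, but you have otherdep 1.5.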
def run(self, options, args): package_set, parsing_probs = create_package_set_from_installed() missing, conflicting = check_package_set(package_set) for project_name in missing: version = package_set[project_name].version for dependency in missing[project_name]: logger.info( "%s %s requires %s, which is not installed.", project_name, version, dependency[0], ) for project_name in conflicting: version = package_set[project_name].version for dep_name, dep_version, req in conflicting[project_name]: logger.info( "%s %s has requirement %s, but you have %s %s.", project_name, version, req, dep_name, dep_version, ) if missing or conflicting or parsing_probs: return 1 else: logger.info("No broken requirements found.") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/completion.py0000644000175000017500000000556100000000000031337 0ustar00vincentvincent00000000000000from __future__ import absolute_import import sys import textwrap from pip._internal.cli.base_command import Command from pip._internal.utils.misc import get_prog BASE_COMPLETION = """ # pip %(shell)s completion start%(script)s# pip %(shell)s completion end """ COMPLETION_SCRIPTS = { 'bash': """ _pip_completion() { COMPREPLY=( $( COMP_WORDS="${COMP_WORDS[*]}" \\ COMP_CWORD=$COMP_CWORD \\ PIP_AUTO_COMPLETE=1 $1 ) ) } complete -o default -F _pip_completion %(prog)s """, 'zsh': """ function _pip_completion { local words cword read -Ac words read -cn cword reply=( $( COMP_WORDS="$words[*]" \\ COMP_CWORD=$(( cword-1 )) \\ PIP_AUTO_COMPLETE=1 $words[1] ) ) } compctl -K _pip_completion %(prog)s """, 'fish': """ function __fish_complete_pip set -lx COMP_WORDS (commandline -o) "" set -lx COMP_CWORD ( \\ math (contains -i -- (commandline -t) $COMP_WORDS)-1 \\ ) set -lx PIP_AUTO_COMPLETE 1 string split \\ -- (eval $COMP_WORDS[1]) end complete -fa "(__fish_complete_pip)" -c %(prog)s """, } class CompletionCommand(Command): """A helper command to be used for command completion.""" name = 'completion' summary = 'A helper command used for command completion.' 
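    # Typical use (illustrative): source the emitted script in the shell,
    # e.g. ``eval "$(pip completion --bash)"`` in ~/.bashrc.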
ignore_require_venv = True def __init__(self, *args, **kw): super(CompletionCommand, self).__init__(*args, **kw) cmd_opts = self.cmd_opts cmd_opts.add_option( '--bash', '-b', action='store_const', const='bash', dest='shell', help='Emit completion code for bash') cmd_opts.add_option( '--zsh', '-z', action='store_const', const='zsh', dest='shell', help='Emit completion code for zsh') cmd_opts.add_option( '--fish', '-f', action='store_const', const='fish', dest='shell', help='Emit completion code for fish') self.parser.insert_option_group(0, cmd_opts) def run(self, options, args): """Prints the completion code of the given shell""" shells = COMPLETION_SCRIPTS.keys() shell_options = ['--' + shell for shell in sorted(shells)] if options.shell in shells: script = textwrap.dedent( COMPLETION_SCRIPTS.get(options.shell, '') % { 'prog': get_prog(), } ) print(BASE_COMPLETION % {'script': script, 'shell': options.shell}) else: sys.stderr.write( 'ERROR: You must pass %s\n' % ' or '.join(shell_options) ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/configuration.py0000644000175000017500000001773400000000000032042 0ustar00vincentvincent00000000000000import logging import os import subprocess from pip._internal.cli.base_command import Command from pip._internal.cli.status_codes import ERROR, SUCCESS from pip._internal.configuration import ( Configuration, get_configuration_files, kinds, ) from pip._internal.exceptions import PipError from pip._internal.utils.deprecation import deprecated from pip._internal.utils.misc import get_prog from pip._internal.utils.virtualenv import running_under_virtualenv logger = logging.getLogger(__name__) class ConfigurationCommand(Command): """Manage local and global configuration. Subcommands: list: List the active configuration (or from the file specified) edit: Edit the configuration file in an editor get: Get the value associated with name set: Set the name=value unset: Unset the value associated with name If none of --user, --global and --site are passed, a virtual environment configuration file is used if one is active and the file exists. Otherwise, all modifications happen on the to the user file by default. """ name = 'config' usage = """ %prog [] list %prog [] [--editor ] edit %prog [] get name %prog [] set name value %prog [] unset name """ summary = "Manage local and global configuration." def __init__(self, *args, **kwargs): super(ConfigurationCommand, self).__init__(*args, **kwargs) self.configuration = None self.cmd_opts.add_option( '--editor', dest='editor', action='store', default=None, help=( 'Editor to use to edit the file. Uses VISUAL or EDITOR ' 'environment variables if not provided.' 
) ) self.cmd_opts.add_option( '--global', dest='global_file', action='store_true', default=False, help='Use the system-wide configuration file only' ) self.cmd_opts.add_option( '--user', dest='user_file', action='store_true', default=False, help='Use the user configuration file only' ) self.cmd_opts.add_option( '--site', dest='site_file', action='store_true', default=False, help='Use the current environment configuration file only' ) self.cmd_opts.add_option( '--venv', dest='venv_file', action='store_true', default=False, help=( '[Deprecated] Use the current environment configuration ' 'file in a virtual environment only' ) ) self.parser.insert_option_group(0, self.cmd_opts) def run(self, options, args): handlers = { "list": self.list_values, "edit": self.open_in_editor, "get": self.get_name, "set": self.set_name_value, "unset": self.unset_name } # Determine action if not args or args[0] not in handlers: logger.error("Need an action ({}) to perform.".format( ", ".join(sorted(handlers))) ) return ERROR action = args[0] # Determine which configuration files are to be loaded # Depends on whether the command is modifying. try: load_only = self._determine_file( options, need_value=(action in ["get", "set", "unset", "edit"]) ) except PipError as e: logger.error(e.args[0]) return ERROR # Load a new configuration self.configuration = Configuration( isolated=options.isolated_mode, load_only=load_only ) self.configuration.load() # Error handling happens here, not in the action-handlers. try: handlers[action](options, args[1:]) except PipError as e: logger.error(e.args[0]) return ERROR return SUCCESS def _determine_file(self, options, need_value): # Convert legacy venv_file option to site_file or error if options.venv_file and not options.site_file: if running_under_virtualenv(): options.site_file = True deprecated( "The --venv option has been deprecated.", replacement="--site", gone_in="19.3", ) else: raise PipError( "Legacy --venv option requires a virtual environment. " "Use --site instead." ) file_options = [key for key, value in ( (kinds.USER, options.user_file), (kinds.GLOBAL, options.global_file), (kinds.SITE, options.site_file), ) if value] if not file_options: if not need_value: return None # Default to user, unless there's a site file. elif any( os.path.exists(site_config_file) for site_config_file in get_configuration_files()[kinds.SITE] ): return kinds.SITE else: return kinds.USER elif len(file_options) == 1: return file_options[0] raise PipError( "Need exactly one file to operate upon " "(--user, --site, --global) to perform." 
) def list_values(self, options, args): self._get_n_args(args, "list", n=0) for key, value in sorted(self.configuration.items()): logger.info("%s=%r", key, value) def get_name(self, options, args): key = self._get_n_args(args, "get [name]", n=1) value = self.configuration.get_value(key) logger.info("%s", value) def set_name_value(self, options, args): key, value = self._get_n_args(args, "set [name] [value]", n=2) self.configuration.set_value(key, value) self._save_configuration() def unset_name(self, options, args): key = self._get_n_args(args, "unset [name]", n=1) self.configuration.unset_value(key) self._save_configuration() def open_in_editor(self, options, args): editor = self._determine_editor(options) fname = self.configuration.get_file_to_edit() if fname is None: raise PipError("Could not determine appropriate file.") try: subprocess.check_call([editor, fname]) except subprocess.CalledProcessError as e: raise PipError( "Editor Subprocess exited with exit code {}" .format(e.returncode) ) def _get_n_args(self, args, example, n): """Helper to make sure the command got the right number of arguments """ if len(args) != n: msg = ( 'Got unexpected number of arguments, expected {}. ' '(example: "{} config {}")' ).format(n, get_prog(), example) raise PipError(msg) if n == 1: return args[0] else: return args def _save_configuration(self): # We successfully ran a modifying command. Need to save the # configuration. try: self.configuration.save() except Exception: logger.error( "Unable to save configuration. Please report this as a bug.", exc_info=1 ) raise PipError("Internal Error.") def _determine_editor(self, options): if options.editor is not None: return options.editor elif "VISUAL" in os.environ: return os.environ["VISUAL"] elif "EDITOR" in os.environ: return os.environ["EDITOR"] else: raise PipError("Could not determine editor to use.") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/debug.py0000644000175000017500000000644000000000000030251 0ustar00vincentvincent00000000000000from __future__ import absolute_import import locale import logging import sys from pip._internal.cli import cmdoptions from pip._internal.cli.base_command import Command from pip._internal.cli.cmdoptions import make_target_python from pip._internal.cli.status_codes import SUCCESS from pip._internal.utils.logging import indent_log from pip._internal.utils.misc import get_pip_version from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.wheel import format_tag if MYPY_CHECK_RUNNING: from typing import Any, List from optparse import Values logger = logging.getLogger(__name__) def show_value(name, value): # type: (str, str) -> None logger.info('{}: {}'.format(name, value)) def show_sys_implementation(): # type: () -> None logger.info('sys.implementation:') if hasattr(sys, 'implementation'): implementation = sys.implementation # type: ignore implementation_name = implementation.name else: implementation_name = '' with indent_log(): show_value('name', implementation_name) def show_tags(options): # type: (Values) -> None tag_limit = 10 target_python = make_target_python(options) tags = target_python.get_tags() # Display the target options that were explicitly provided. 
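# (format_given() produces the "(target: ...)" suffix used just below; it is
# empty when no target options were passed on the command line.)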
formatted_target = target_python.format_given() suffix = '' if formatted_target: suffix = ' (target: {})'.format(formatted_target) msg = 'Compatible tags: {}{}'.format(len(tags), suffix) logger.info(msg) if options.verbose < 1 and len(tags) > tag_limit: tags_limited = True tags = tags[:tag_limit] else: tags_limited = False with indent_log(): for tag in tags: logger.info(format_tag(tag)) if tags_limited: msg = ( '...\n' '[First {tag_limit} tags shown. Pass --verbose to show all.]' ).format(tag_limit=tag_limit) logger.info(msg) class DebugCommand(Command): """ Display debug information. """ name = 'debug' usage = """ %prog """ summary = 'Show information useful for debugging.' ignore_require_venv = True def __init__(self, *args, **kw): super(DebugCommand, self).__init__(*args, **kw) cmd_opts = self.cmd_opts cmdoptions.add_target_python_options(cmd_opts) self.parser.insert_option_group(0, cmd_opts) def run(self, options, args): # type: (Values, List[Any]) -> int logger.warning( "This command is only meant for debugging. " "Do not use this with automation for parsing and getting these " "details, since the output and options of this command may " "change without notice." ) show_value('pip version', get_pip_version()) show_value('sys.version', sys.version) show_value('sys.executable', sys.executable) show_value('sys.getdefaultencoding', sys.getdefaultencoding()) show_value('sys.getfilesystemencoding', sys.getfilesystemencoding()) show_value( 'locale.getpreferredencoding', locale.getpreferredencoding(), ) show_value('sys.platform', sys.platform) show_sys_implementation() show_tags(options) return SUCCESS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/download.py0000644000175000017500000001434700000000000030777 0ustar00vincentvincent00000000000000from __future__ import absolute_import import logging import os from pip._internal.cli import cmdoptions from pip._internal.cli.base_command import RequirementCommand from pip._internal.cli.cmdoptions import make_target_python from pip._internal.legacy_resolve import Resolver from pip._internal.operations.prepare import RequirementPreparer from pip._internal.req import RequirementSet from pip._internal.req.req_tracker import RequirementTracker from pip._internal.utils.filesystem import check_path_owner from pip._internal.utils.misc import ensure_dir, normalize_path from pip._internal.utils.temp_dir import TempDirectory logger = logging.getLogger(__name__) class DownloadCommand(RequirementCommand): """ Download packages from: - PyPI (and other indexes) using requirement specifiers. - VCS project urls. - Local project directories. - Local or remote source archives. pip also supports downloading from "requirements files", which provide an easy way to specify a whole environment to be downloaded. """ name = 'download' usage = """ %prog [options] [package-index-options] ... %prog [options] -r [package-index-options] ... %prog [options] ... %prog [options] ... %prog [options] ...""" summary = 'Download packages.' 
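# Illustrative sketch, standalone (example_download is hypothetical): pip's
# internals are not a stable API, so scripted downloads usually shell out to
# the CLI entry point that this class implements.
import subprocess
import sys

def example_download(requirement, dest='downloads'):
    # Mirrors `pip download --dest <dir> <requirement>`.
    return subprocess.call(
        [sys.executable, '-m', 'pip', 'download', '--dest', dest, requirement]
    )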
def __init__(self, *args, **kw): super(DownloadCommand, self).__init__(*args, **kw) cmd_opts = self.cmd_opts cmd_opts.add_option(cmdoptions.constraints()) cmd_opts.add_option(cmdoptions.requirements()) cmd_opts.add_option(cmdoptions.build_dir()) cmd_opts.add_option(cmdoptions.no_deps()) cmd_opts.add_option(cmdoptions.global_options()) cmd_opts.add_option(cmdoptions.no_binary()) cmd_opts.add_option(cmdoptions.only_binary()) cmd_opts.add_option(cmdoptions.prefer_binary()) cmd_opts.add_option(cmdoptions.src()) cmd_opts.add_option(cmdoptions.pre()) cmd_opts.add_option(cmdoptions.no_clean()) cmd_opts.add_option(cmdoptions.require_hashes()) cmd_opts.add_option(cmdoptions.progress_bar()) cmd_opts.add_option(cmdoptions.no_build_isolation()) cmd_opts.add_option(cmdoptions.use_pep517()) cmd_opts.add_option(cmdoptions.no_use_pep517()) cmd_opts.add_option( '-d', '--dest', '--destination-dir', '--destination-directory', dest='download_dir', metavar='dir', default=os.curdir, help=("Download packages into ."), ) cmdoptions.add_target_python_options(cmd_opts) index_opts = cmdoptions.make_option_group( cmdoptions.index_group, self.parser, ) self.parser.insert_option_group(0, index_opts) self.parser.insert_option_group(0, cmd_opts) def run(self, options, args): options.ignore_installed = True # editable doesn't really make sense for `pip download`, but the bowels # of the RequirementSet code require that property. options.editables = [] cmdoptions.check_dist_restriction(options) options.src_dir = os.path.abspath(options.src_dir) options.download_dir = normalize_path(options.download_dir) ensure_dir(options.download_dir) with self._build_session(options) as session: target_python = make_target_python(options) finder = self._build_package_finder( options=options, session=session, target_python=target_python, ) build_delete = (not (options.no_clean or options.build_dir)) if options.cache_dir and not check_path_owner(options.cache_dir): logger.warning( "The directory '%s' or its parent directory is not owned " "by the current user and caching wheels has been " "disabled. check the permissions and owner of that " "directory. 
If executing pip with sudo, you may want " "sudo's -H flag.", options.cache_dir, ) options.cache_dir = None with RequirementTracker() as req_tracker, TempDirectory( options.build_dir, delete=build_delete, kind="download" ) as directory: requirement_set = RequirementSet( require_hashes=options.require_hashes, ) self.populate_requirement_set( requirement_set, args, options, finder, session, self.name, None ) preparer = RequirementPreparer( build_dir=directory.path, src_dir=options.src_dir, download_dir=options.download_dir, wheel_download_dir=None, progress_bar=options.progress_bar, build_isolation=options.build_isolation, req_tracker=req_tracker, ) resolver = Resolver( preparer=preparer, finder=finder, session=session, wheel_cache=None, use_user_site=False, upgrade_strategy="to-satisfy-only", force_reinstall=False, ignore_dependencies=options.ignore_dependencies, py_version_info=options.python_version, ignore_requires_python=False, ignore_installed=True, isolated=options.isolated_mode, ) resolver.resolve(requirement_set) downloaded = ' '.join([ req.name for req in requirement_set.successfully_downloaded ]) if downloaded: logger.info('Successfully downloaded %s', downloaded) # Clean up if not options.no_clean: requirement_set.cleanup_files() return requirement_set ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/freeze.py0000644000175000017500000000656100000000000030447 0ustar00vincentvincent00000000000000from __future__ import absolute_import import sys from pip._internal.cache import WheelCache from pip._internal.cli import cmdoptions from pip._internal.cli.base_command import Command from pip._internal.models.format_control import FormatControl from pip._internal.operations.freeze import freeze from pip._internal.utils.compat import stdlib_pkgs DEV_PKGS = {'pip', 'setuptools', 'distribute', 'wheel'} class FreezeCommand(Command): """ Output installed packages in requirements format. packages are listed in a case-insensitive sorted order. """ name = 'freeze' usage = """ %prog [options]""" summary = 'Output installed packages in requirements format.' log_streams = ("ext://sys.stderr", "ext://sys.stderr") def __init__(self, *args, **kw): super(FreezeCommand, self).__init__(*args, **kw) self.cmd_opts.add_option( '-r', '--requirement', dest='requirements', action='append', default=[], metavar='file', help="Use the order in the given requirements file and its " "comments when generating output. 
This option can be " "used multiple times.") self.cmd_opts.add_option( '-f', '--find-links', dest='find_links', action='append', default=[], metavar='URL', help='URL for finding packages, which will be added to the ' 'output.') self.cmd_opts.add_option( '-l', '--local', dest='local', action='store_true', default=False, help='If in a virtualenv that has global access, do not output ' 'globally-installed packages.') self.cmd_opts.add_option( '--user', dest='user', action='store_true', default=False, help='Only output packages installed in user-site.') self.cmd_opts.add_option(cmdoptions.list_path()) self.cmd_opts.add_option( '--all', dest='freeze_all', action='store_true', help='Do not skip these packages in the output:' ' %s' % ', '.join(DEV_PKGS)) self.cmd_opts.add_option( '--exclude-editable', dest='exclude_editable', action='store_true', help='Exclude editable package from output.') self.parser.insert_option_group(0, self.cmd_opts) def run(self, options, args): format_control = FormatControl(set(), set()) wheel_cache = WheelCache(options.cache_dir, format_control) skip = set(stdlib_pkgs) if not options.freeze_all: skip.update(DEV_PKGS) cmdoptions.check_list_path_option(options) freeze_kwargs = dict( requirement=options.requirements, find_links=options.find_links, local_only=options.local, user_only=options.user, paths=options.path, skip_regex=options.skip_requirements_regex, isolated=options.isolated_mode, wheel_cache=wheel_cache, skip=skip, exclude_editable=options.exclude_editable, ) try: for line in freeze(**freeze_kwargs): sys.stdout.write(line + '\n') finally: wheel_cache.cleanup() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/hash.py0000644000175000017500000000322100000000000030100 0ustar00vincentvincent00000000000000from __future__ import absolute_import import hashlib import logging import sys from pip._internal.cli.base_command import Command from pip._internal.cli.status_codes import ERROR from pip._internal.utils.hashes import FAVORITE_HASH, STRONG_HASHES from pip._internal.utils.misc import read_chunks logger = logging.getLogger(__name__) class HashCommand(Command): """ Compute a hash of a local package archive. These can be used with --hash in a requirements file to do repeatable installs. """ name = 'hash' usage = '%prog [options] ...' summary = 'Compute hashes of package archives.' 
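# Illustrative sketch, standalone: building a requirements-file line in the
# same `--hash=<algorithm>:<digest>` form this command prints. The helper
# example_hash_line is hypothetical; _hash_of_file below is the real one.
import hashlib

def example_hash_line(name_version, path, algorithm='sha256'):
    digest = hashlib.new(algorithm)
    with open(path, 'rb') as f:
        # Hash in chunks so large archives never sit fully in memory.
        for chunk in iter(lambda: f.read(1 << 20), b''):
            digest.update(chunk)
    return '%s --hash=%s:%s' % (name_version, algorithm, digest.hexdigest())

# e.g. example_hash_line('cypari2==2.1.2', 'cypari2-2.1.2.tar.gz')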
ignore_require_venv = True def __init__(self, *args, **kw): super(HashCommand, self).__init__(*args, **kw) self.cmd_opts.add_option( '-a', '--algorithm', dest='algorithm', choices=STRONG_HASHES, action='store', default=FAVORITE_HASH, help='The hash algorithm to use: one of %s' % ', '.join(STRONG_HASHES)) self.parser.insert_option_group(0, self.cmd_opts) def run(self, options, args): if not args: self.parser.print_usage(sys.stderr) return ERROR algorithm = options.algorithm for path in args: logger.info('%s:\n--hash=%s:%s', path, algorithm, _hash_of_file(path, algorithm)) def _hash_of_file(path, algorithm): """Return the hash digest of a file.""" with open(path, 'rb') as archive: hash = hashlib.new(algorithm) for chunk in read_chunks(archive): hash.update(chunk) return hash.hexdigest() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/help.py0000644000175000017500000000210200000000000030102 0ustar00vincentvincent00000000000000from __future__ import absolute_import from pip._internal.cli.base_command import Command from pip._internal.cli.status_codes import SUCCESS from pip._internal.exceptions import CommandError class HelpCommand(Command): """Show help for commands""" name = 'help' usage = """ %prog """ summary = 'Show help for commands.' ignore_require_venv = True def run(self, options, args): from pip._internal.commands import commands_dict, get_similar_commands try: # 'pip help' with no args is handled by pip.__init__.parseopt() cmd_name = args[0] # the command we need help for except IndexError: return SUCCESS if cmd_name not in commands_dict: guess = get_similar_commands(cmd_name) msg = ['unknown command "%s"' % cmd_name] if guess: msg.append('maybe you meant "%s"' % guess) raise CommandError(' - '.join(msg)) command = commands_dict[cmd_name]() command.parser.print_help() return SUCCESS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/install.py0000644000175000017500000005416600000000000030641 0ustar00vincentvincent00000000000000from __future__ import absolute_import import errno import logging import operator import os import shutil from optparse import SUPPRESS_HELP from pip._vendor import pkg_resources from pip._internal.cache import WheelCache from pip._internal.cli import cmdoptions from pip._internal.cli.base_command import RequirementCommand from pip._internal.cli.cmdoptions import make_target_python from pip._internal.cli.status_codes import ERROR from pip._internal.exceptions import ( CommandError, InstallationError, PreviousBuildDirError, ) from pip._internal.legacy_resolve import Resolver from pip._internal.locations import distutils_scheme from pip._internal.operations.check import check_install_conflicts from pip._internal.operations.prepare import RequirementPreparer from pip._internal.req import RequirementSet, install_given_reqs from pip._internal.req.req_tracker import RequirementTracker from pip._internal.utils.filesystem import check_path_owner from pip._internal.utils.misc import ( ensure_dir, get_installed_version, protect_pip_from_modification_on_windows, ) from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.virtualenv import virtualenv_no_global from pip._internal.wheel import WheelBuilder logger = logging.getLogger(__name__) def is_wheel_installed(): """ Return whether the wheel 
package is installed. """ try: import wheel # noqa: F401 except ImportError: return False return True def build_wheels(builder, pep517_requirements, legacy_requirements, session): """ Build wheels for requirements, depending on whether wheel is installed. """ # We don't build wheels for legacy requirements if wheel is not installed. should_build_legacy = is_wheel_installed() # Always build PEP 517 requirements build_failures = builder.build( pep517_requirements, session=session, autobuilding=True ) if should_build_legacy: # We don't care about failures building legacy # requirements, as we'll fall through to a direct # install for those. builder.build( legacy_requirements, session=session, autobuilding=True ) return build_failures class InstallCommand(RequirementCommand): """ Install packages from: - PyPI (and other indexes) using requirement specifiers. - VCS project urls. - Local project directories. - Local or remote source archives. pip also supports installing from "requirements files," which provide an easy way to specify a whole environment to be installed. """ name = 'install' usage = """ %prog [options] [package-index-options] ... %prog [options] -r [package-index-options] ... %prog [options] [-e] ... %prog [options] [-e] ... %prog [options] ...""" summary = 'Install packages.' def __init__(self, *args, **kw): super(InstallCommand, self).__init__(*args, **kw) cmd_opts = self.cmd_opts cmd_opts.add_option(cmdoptions.requirements()) cmd_opts.add_option(cmdoptions.constraints()) cmd_opts.add_option(cmdoptions.no_deps()) cmd_opts.add_option(cmdoptions.pre()) cmd_opts.add_option(cmdoptions.editable()) cmd_opts.add_option( '-t', '--target', dest='target_dir', metavar='dir', default=None, help='Install packages into . ' 'By default this will not replace existing files/folders in ' '. Use --upgrade to replace existing packages in ' 'with new versions.' ) cmdoptions.add_target_python_options(cmd_opts) cmd_opts.add_option( '--user', dest='use_user_site', action='store_true', help="Install to the Python user install directory for your " "platform. Typically ~/.local/, or %APPDATA%\\Python on " "Windows. (See the Python documentation for site.USER_BASE " "for full details.)") cmd_opts.add_option( '--no-user', dest='use_user_site', action='store_false', help=SUPPRESS_HELP) cmd_opts.add_option( '--root', dest='root_path', metavar='dir', default=None, help="Install everything relative to this alternate root " "directory.") cmd_opts.add_option( '--prefix', dest='prefix_path', metavar='dir', default=None, help="Installation prefix where lib, bin and other top-level " "folders are placed") cmd_opts.add_option(cmdoptions.build_dir()) cmd_opts.add_option(cmdoptions.src()) cmd_opts.add_option( '-U', '--upgrade', dest='upgrade', action='store_true', help='Upgrade all specified packages to the newest available ' 'version. The handling of dependencies depends on the ' 'upgrade-strategy used.' ) cmd_opts.add_option( '--upgrade-strategy', dest='upgrade_strategy', default='only-if-needed', choices=['only-if-needed', 'eager'], help='Determines how dependency upgrading should be handled ' '[default: %default]. ' '"eager" - dependencies are upgraded regardless of ' 'whether the currently installed version satisfies the ' 'requirements of the upgraded package(s). ' '"only-if-needed" - are upgraded only when they do not ' 'satisfy the requirements of the upgraded package(s).' 
) cmd_opts.add_option( '--force-reinstall', dest='force_reinstall', action='store_true', help='Reinstall all packages even if they are already ' 'up-to-date.') cmd_opts.add_option( '-I', '--ignore-installed', dest='ignore_installed', action='store_true', help='Ignore the installed packages (reinstalling instead).') cmd_opts.add_option(cmdoptions.ignore_requires_python()) cmd_opts.add_option(cmdoptions.no_build_isolation()) cmd_opts.add_option(cmdoptions.use_pep517()) cmd_opts.add_option(cmdoptions.no_use_pep517()) cmd_opts.add_option(cmdoptions.install_options()) cmd_opts.add_option(cmdoptions.global_options()) cmd_opts.add_option( "--compile", action="store_true", dest="compile", default=True, help="Compile Python source files to bytecode", ) cmd_opts.add_option( "--no-compile", action="store_false", dest="compile", help="Do not compile Python source files to bytecode", ) cmd_opts.add_option( "--no-warn-script-location", action="store_false", dest="warn_script_location", default=True, help="Do not warn when installing scripts outside PATH", ) cmd_opts.add_option( "--no-warn-conflicts", action="store_false", dest="warn_about_conflicts", default=True, help="Do not warn about broken dependencies", ) cmd_opts.add_option(cmdoptions.no_binary()) cmd_opts.add_option(cmdoptions.only_binary()) cmd_opts.add_option(cmdoptions.prefer_binary()) cmd_opts.add_option(cmdoptions.no_clean()) cmd_opts.add_option(cmdoptions.require_hashes()) cmd_opts.add_option(cmdoptions.progress_bar()) index_opts = cmdoptions.make_option_group( cmdoptions.index_group, self.parser, ) self.parser.insert_option_group(0, index_opts) self.parser.insert_option_group(0, cmd_opts) def run(self, options, args): cmdoptions.check_install_build_global(options) upgrade_strategy = "to-satisfy-only" if options.upgrade: upgrade_strategy = options.upgrade_strategy if options.build_dir: options.build_dir = os.path.abspath(options.build_dir) cmdoptions.check_dist_restriction(options, check_target=True) options.src_dir = os.path.abspath(options.src_dir) install_options = options.install_options or [] if options.use_user_site: if options.prefix_path: raise CommandError( "Can not combine '--user' and '--prefix' as they imply " "different installation locations" ) if virtualenv_no_global(): raise InstallationError( "Can not perform a '--user' install. User site-packages " "are not visible in this virtualenv." ) install_options.append('--user') install_options.append('--prefix=') target_temp_dir = TempDirectory(kind="target") if options.target_dir: options.ignore_installed = True options.target_dir = os.path.abspath(options.target_dir) if (os.path.exists(options.target_dir) and not os.path.isdir(options.target_dir)): raise CommandError( "Target path exists but is not a directory, will not " "continue." 
) # Create a target directory for using with the target option target_temp_dir.create() install_options.append('--home=' + target_temp_dir.path) global_options = options.global_options or [] with self._build_session(options) as session: target_python = make_target_python(options) finder = self._build_package_finder( options=options, session=session, target_python=target_python, ignore_requires_python=options.ignore_requires_python, ) build_delete = (not (options.no_clean or options.build_dir)) wheel_cache = WheelCache(options.cache_dir, options.format_control) if options.cache_dir and not check_path_owner(options.cache_dir): logger.warning( "The directory '%s' or its parent directory is not owned " "by the current user and caching wheels has been " "disabled. check the permissions and owner of that " "directory. If executing pip with sudo, you may want " "sudo's -H flag.", options.cache_dir, ) options.cache_dir = None with RequirementTracker() as req_tracker, TempDirectory( options.build_dir, delete=build_delete, kind="install" ) as directory: requirement_set = RequirementSet( require_hashes=options.require_hashes, check_supported_wheels=not options.target_dir, ) try: self.populate_requirement_set( requirement_set, args, options, finder, session, self.name, wheel_cache ) preparer = RequirementPreparer( build_dir=directory.path, src_dir=options.src_dir, download_dir=None, wheel_download_dir=None, progress_bar=options.progress_bar, build_isolation=options.build_isolation, req_tracker=req_tracker, ) resolver = Resolver( preparer=preparer, finder=finder, session=session, wheel_cache=wheel_cache, use_user_site=options.use_user_site, upgrade_strategy=upgrade_strategy, force_reinstall=options.force_reinstall, ignore_dependencies=options.ignore_dependencies, ignore_requires_python=options.ignore_requires_python, ignore_installed=options.ignore_installed, isolated=options.isolated_mode, use_pep517=options.use_pep517 ) resolver.resolve(requirement_set) protect_pip_from_modification_on_windows( modifying_pip=requirement_set.has_requirement("pip") ) # Consider legacy and PEP517-using requirements separately legacy_requirements = [] pep517_requirements = [] for req in requirement_set.requirements.values(): if req.use_pep517: pep517_requirements.append(req) else: legacy_requirements.append(req) wheel_builder = WheelBuilder( finder, preparer, wheel_cache, build_options=[], global_options=[], ) build_failures = build_wheels( builder=wheel_builder, pep517_requirements=pep517_requirements, legacy_requirements=legacy_requirements, session=session, ) # If we're using PEP 517, we cannot do a direct install # so we fail here. if build_failures: raise InstallationError( "Could not build wheels for {} which use" " PEP 517 and cannot be installed directly".format( ", ".join(r.name for r in build_failures))) to_install = resolver.get_installation_order( requirement_set ) # Consistency Checking of the package set we're installing. 
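# (Missing or conflicting dependencies found here are reported as warnings
# via _warn_about_conflicts; they do not abort the install.)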
should_warn_about_conflicts = ( not options.ignore_dependencies and options.warn_about_conflicts ) if should_warn_about_conflicts: self._warn_about_conflicts(to_install) # Don't warn about script install locations if # --target has been specified warn_script_location = options.warn_script_location if options.target_dir: warn_script_location = False installed = install_given_reqs( to_install, install_options, global_options, root=options.root_path, home=target_temp_dir.path, prefix=options.prefix_path, pycompile=options.compile, warn_script_location=warn_script_location, use_user_site=options.use_user_site, ) lib_locations = get_lib_location_guesses( user=options.use_user_site, home=target_temp_dir.path, root=options.root_path, prefix=options.prefix_path, isolated=options.isolated_mode, ) working_set = pkg_resources.WorkingSet(lib_locations) reqs = sorted(installed, key=operator.attrgetter('name')) items = [] for req in reqs: item = req.name try: installed_version = get_installed_version( req.name, working_set=working_set ) if installed_version: item += '-' + installed_version except Exception: pass items.append(item) installed = ' '.join(items) if installed: logger.info('Successfully installed %s', installed) except EnvironmentError as error: show_traceback = (self.verbosity >= 1) message = create_env_error_message( error, show_traceback, options.use_user_site, ) logger.error(message, exc_info=show_traceback) return ERROR except PreviousBuildDirError: options.no_clean = True raise finally: # Clean up if not options.no_clean: requirement_set.cleanup_files() wheel_cache.cleanup() if options.target_dir: self._handle_target_dir( options.target_dir, target_temp_dir, options.upgrade ) return requirement_set def _handle_target_dir(self, target_dir, target_temp_dir, upgrade): ensure_dir(target_dir) # Checking both purelib and platlib directories for installed # packages to be moved to target directory lib_dir_list = [] with target_temp_dir: # Checking both purelib and platlib directories for installed # packages to be moved to target directory scheme = distutils_scheme('', home=target_temp_dir.path) purelib_dir = scheme['purelib'] platlib_dir = scheme['platlib'] data_dir = scheme['data'] if os.path.exists(purelib_dir): lib_dir_list.append(purelib_dir) if os.path.exists(platlib_dir) and platlib_dir != purelib_dir: lib_dir_list.append(platlib_dir) if os.path.exists(data_dir): lib_dir_list.append(data_dir) for lib_dir in lib_dir_list: for item in os.listdir(lib_dir): if lib_dir == data_dir: ddir = os.path.join(data_dir, item) if any(s.startswith(ddir) for s in lib_dir_list[:-1]): continue target_item_dir = os.path.join(target_dir, item) if os.path.exists(target_item_dir): if not upgrade: logger.warning( 'Target directory %s already exists. Specify ' '--upgrade to force replacement.', target_item_dir ) continue if os.path.islink(target_item_dir): logger.warning( 'Target directory %s already exists and is ' 'a link. 
Pip will not automatically replace ' 'links, please remove if replacement is ' 'desired.', target_item_dir ) continue if os.path.isdir(target_item_dir): shutil.rmtree(target_item_dir) else: os.remove(target_item_dir) shutil.move( os.path.join(lib_dir, item), target_item_dir ) def _warn_about_conflicts(self, to_install): try: package_set, _dep_info = check_install_conflicts(to_install) except Exception: logger.error("Error checking for conflicts.", exc_info=True) return missing, conflicting = _dep_info # NOTE: There is some duplication here from pip check for project_name in missing: version = package_set[project_name][0] for dependency in missing[project_name]: logger.critical( "%s %s requires %s, which is not installed.", project_name, version, dependency[1], ) for project_name in conflicting: version = package_set[project_name][0] for dep_name, dep_version, req in conflicting[project_name]: logger.critical( "%s %s has requirement %s, but you'll have %s %s which is " "incompatible.", project_name, version, req, dep_name, dep_version, ) def get_lib_location_guesses(*args, **kwargs): scheme = distutils_scheme('', *args, **kwargs) return [scheme['purelib'], scheme['platlib']] def create_env_error_message(error, show_traceback, using_user_site): """Format an error message for an EnvironmentError It may occur anytime during the execution of the install command. """ parts = [] # Mention the error if we are not going to show a traceback parts.append("Could not install packages due to an EnvironmentError") if not show_traceback: parts.append(": ") parts.append(str(error)) else: parts.append(".") # Spilt the error indication from a helper message (if any) parts[-1] += "\n" # Suggest useful actions to the user: # (1) using user site-packages or (2) verifying the permissions if error.errno == errno.EACCES: user_option_part = "Consider using the `--user` option" permissions_part = "Check the permissions" if not using_user_site: parts.extend([ user_option_part, " or ", permissions_part.lower(), ]) else: parts.append(permissions_part) parts.append(".\n") return "".join(parts).strip() + "\n" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/list.py0000644000175000017500000002440100000000000030133 0ustar00vincentvincent00000000000000from __future__ import absolute_import import json import logging from pip._vendor import six from pip._vendor.six.moves import zip_longest from pip._internal.cli import cmdoptions from pip._internal.cli.base_command import Command from pip._internal.cli.cmdoptions import make_search_scope from pip._internal.exceptions import CommandError from pip._internal.index import PackageFinder from pip._internal.models.selection_prefs import SelectionPreferences from pip._internal.utils.misc import ( dist_is_editable, get_installed_distributions, ) from pip._internal.utils.packaging import get_installer logger = logging.getLogger(__name__) class ListCommand(Command): """ List installed packages, including editables. Packages are listed in a case-insensitive sorted order. """ name = 'list' usage = """ %prog [options]""" summary = 'List installed packages.' 
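# Illustrative sketch, standalone (example_installed is hypothetical): the
# json format handled below is the stable way for other tools to consume
# this listing, rather than scraping the column output.
import json
import subprocess
import sys

def example_installed():
    out = subprocess.check_output(
        [sys.executable, '-m', 'pip', 'list', '--format', 'json']
    )
    # Maps each installed project name to its version string.
    return {pkg['name']: pkg['version'] for pkg in json.loads(out)}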
def __init__(self, *args, **kw): super(ListCommand, self).__init__(*args, **kw) cmd_opts = self.cmd_opts cmd_opts.add_option( '-o', '--outdated', action='store_true', default=False, help='List outdated packages') cmd_opts.add_option( '-u', '--uptodate', action='store_true', default=False, help='List uptodate packages') cmd_opts.add_option( '-e', '--editable', action='store_true', default=False, help='List editable projects.') cmd_opts.add_option( '-l', '--local', action='store_true', default=False, help=('If in a virtualenv that has global access, do not list ' 'globally-installed packages.'), ) self.cmd_opts.add_option( '--user', dest='user', action='store_true', default=False, help='Only output packages installed in user-site.') cmd_opts.add_option(cmdoptions.list_path()) cmd_opts.add_option( '--pre', action='store_true', default=False, help=("Include pre-release and development versions. By default, " "pip only finds stable versions."), ) cmd_opts.add_option( '--format', action='store', dest='list_format', default="columns", choices=('columns', 'freeze', 'json'), help="Select the output format among: columns (default), freeze, " "or json", ) cmd_opts.add_option( '--not-required', action='store_true', dest='not_required', help="List packages that are not dependencies of " "installed packages.", ) cmd_opts.add_option( '--exclude-editable', action='store_false', dest='include_editable', help='Exclude editable package from output.', ) cmd_opts.add_option( '--include-editable', action='store_true', dest='include_editable', help='Include editable package from output.', default=True, ) index_opts = cmdoptions.make_option_group( cmdoptions.index_group, self.parser ) self.parser.insert_option_group(0, index_opts) self.parser.insert_option_group(0, cmd_opts) def _build_package_finder(self, options, session): """ Create a package finder appropriate to this list command. """ search_scope = make_search_scope(options) # Pass allow_yanked=False to ignore yanked versions. selection_prefs = SelectionPreferences( allow_yanked=False, allow_all_prereleases=options.pre, ) return PackageFinder.create( search_scope=search_scope, selection_prefs=selection_prefs, trusted_hosts=options.trusted_hosts, session=session, ) def run(self, options, args): if options.outdated and options.uptodate: raise CommandError( "Options --outdated and --uptodate cannot be combined.") cmdoptions.check_list_path_option(options) packages = get_installed_distributions( local_only=options.local, user_only=options.user, editables_only=options.editable, include_editables=options.include_editable, paths=options.path, ) # get_not_required must be called firstly in order to find and # filter out all dependencies correctly. Otherwise a package # can't be identified as requirement because some parent packages # could be filtered out before. 
if options.not_required: packages = self.get_not_required(packages, options) if options.outdated: packages = self.get_outdated(packages, options) elif options.uptodate: packages = self.get_uptodate(packages, options) self.output_package_listing(packages, options) def get_outdated(self, packages, options): return [ dist for dist in self.iter_packages_latest_infos(packages, options) if dist.latest_version > dist.parsed_version ] def get_uptodate(self, packages, options): return [ dist for dist in self.iter_packages_latest_infos(packages, options) if dist.latest_version == dist.parsed_version ] def get_not_required(self, packages, options): dep_keys = set() for dist in packages: dep_keys.update(requirement.key for requirement in dist.requires()) return {pkg for pkg in packages if pkg.key not in dep_keys} def iter_packages_latest_infos(self, packages, options): with self._build_session(options) as session: finder = self._build_package_finder(options, session) for dist in packages: typ = 'unknown' all_candidates = finder.find_all_candidates(dist.key) if not options.pre: # Remove prereleases all_candidates = [candidate for candidate in all_candidates if not candidate.version.is_prerelease] evaluator = finder.make_candidate_evaluator( project_name=dist.project_name, ) best_candidate = evaluator.get_best_candidate(all_candidates) if best_candidate is None: continue remote_version = best_candidate.version if best_candidate.link.is_wheel: typ = 'wheel' else: typ = 'sdist' # This is dirty but makes the rest of the code much cleaner dist.latest_version = remote_version dist.latest_filetype = typ yield dist def output_package_listing(self, packages, options): packages = sorted( packages, key=lambda dist: dist.project_name.lower(), ) if options.list_format == 'columns' and packages: data, header = format_for_columns(packages, options) self.output_package_listing_columns(data, header) elif options.list_format == 'freeze': for dist in packages: if options.verbose >= 1: logger.info("%s==%s (%s)", dist.project_name, dist.version, dist.location) else: logger.info("%s==%s", dist.project_name, dist.version) elif options.list_format == 'json': logger.info(format_for_json(packages, options)) def output_package_listing_columns(self, data, header): # insert the header first: we need to know the size of column names if len(data) > 0: data.insert(0, header) pkg_strings, sizes = tabulate(data) # Create and add a separator. if len(data) > 0: pkg_strings.insert(1, " ".join(map(lambda x: '-' * x, sizes))) for val in pkg_strings: logger.info(val) def tabulate(vals): # From pfmoore on GitHub: # https://github.com/pypa/pip/issues/3651#issuecomment-216932564 assert len(vals) > 0 sizes = [0] * max(len(x) for x in vals) for row in vals: sizes = [max(s, len(str(c))) for s, c in zip_longest(sizes, row)] result = [] for row in vals: display = " ".join([str(c).ljust(s) if c is not None else '' for s, c in zip_longest(sizes, row)]) result.append(display) return result, sizes def format_for_columns(pkgs, options): """ Convert the package data into something usable by output_package_listing_columns. """ running_outdated = options.outdated # Adjust the header for the `pip list --outdated` case. 
if running_outdated: header = ["Package", "Version", "Latest", "Type"] else: header = ["Package", "Version"] data = [] if options.verbose >= 1 or any(dist_is_editable(x) for x in pkgs): header.append("Location") if options.verbose >= 1: header.append("Installer") for proj in pkgs: # if we're working on the 'outdated' list, separate out the # latest_version and type row = [proj.project_name, proj.version] if running_outdated: row.append(proj.latest_version) row.append(proj.latest_filetype) if options.verbose >= 1 or dist_is_editable(proj): row.append(proj.location) if options.verbose >= 1: row.append(get_installer(proj)) data.append(row) return data, header def format_for_json(packages, options): data = [] for dist in packages: info = { 'name': dist.project_name, 'version': six.text_type(dist.version), } if options.verbose >= 1: info['location'] = dist.location info['installer'] = get_installer(dist) if options.outdated: info['latest_version'] = six.text_type(dist.latest_version) info['latest_filetype'] = dist.latest_filetype data.append(info) return json.dumps(data) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/search.py0000644000175000017500000001155400000000000030432 0ustar00vincentvincent00000000000000from __future__ import absolute_import import logging import sys import textwrap from collections import OrderedDict from pip._vendor import pkg_resources from pip._vendor.packaging.version import parse as parse_version # NOTE: XMLRPC Client is not annotated in typeshed as on 2017-07-17, which is # why we ignore the type on this import from pip._vendor.six.moves import xmlrpc_client # type: ignore from pip._internal.cli.base_command import Command from pip._internal.cli.status_codes import NO_MATCHES_FOUND, SUCCESS from pip._internal.download import PipXmlrpcTransport from pip._internal.exceptions import CommandError from pip._internal.models.index import PyPI from pip._internal.utils.compat import get_terminal_size from pip._internal.utils.logging import indent_log logger = logging.getLogger(__name__) class SearchCommand(Command): """Search for PyPI packages whose name or summary contains .""" name = 'search' usage = """ %prog [options] """ summary = 'Search PyPI for packages.' ignore_require_venv = True def __init__(self, *args, **kw): super(SearchCommand, self).__init__(*args, **kw) self.cmd_opts.add_option( '-i', '--index', dest='index', metavar='URL', default=PyPI.pypi_url, help='Base URL of Python Package Index (default %default)') self.parser.insert_option_group(0, self.cmd_opts) def run(self, options, args): if not args: raise CommandError('Missing required argument (search query).') query = args pypi_hits = self.search(query, options) hits = transform_hits(pypi_hits) terminal_width = None if sys.stdout.isatty(): terminal_width = get_terminal_size()[0] print_results(hits, terminal_width=terminal_width) if pypi_hits: return SUCCESS return NO_MATCHES_FOUND def search(self, query, options): index_url = options.index with self._build_session(options) as session: transport = PipXmlrpcTransport(index_url, session) pypi = xmlrpc_client.ServerProxy(index_url, transport) hits = pypi.search({'name': query, 'summary': query}, 'or') return hits def transform_hits(hits): """ The list from pypi is really a list of versions. We want a list of packages with the list of versions stored inline. This converts the list from pypi into one we can use. 
""" packages = OrderedDict() for hit in hits: name = hit['name'] summary = hit['summary'] version = hit['version'] if name not in packages.keys(): packages[name] = { 'name': name, 'summary': summary, 'versions': [version], } else: packages[name]['versions'].append(version) # if this is the highest version, replace summary and score if version == highest_version(packages[name]['versions']): packages[name]['summary'] = summary return list(packages.values()) def print_results(hits, name_column_width=None, terminal_width=None): if not hits: return if name_column_width is None: name_column_width = max([ len(hit['name']) + len(highest_version(hit.get('versions', ['-']))) for hit in hits ]) + 4 installed_packages = [p.project_name for p in pkg_resources.working_set] for hit in hits: name = hit['name'] summary = hit['summary'] or '' latest = highest_version(hit.get('versions', ['-'])) if terminal_width is not None: target_width = terminal_width - name_column_width - 5 if target_width > 10: # wrap and indent summary to fit terminal summary = textwrap.wrap(summary, target_width) summary = ('\n' + ' ' * (name_column_width + 3)).join(summary) line = '%-*s - %s' % (name_column_width, '%s (%s)' % (name, latest), summary) try: logger.info(line) if name in installed_packages: dist = pkg_resources.get_distribution(name) with indent_log(): if dist.version == latest: logger.info('INSTALLED: %s (latest)', dist.version) else: logger.info('INSTALLED: %s', dist.version) if parse_version(latest).pre: logger.info('LATEST: %s (pre-release; install' ' with "pip install --pre")', latest) else: logger.info('LATEST: %s', latest) except UnicodeEncodeError: pass def highest_version(versions): return max(versions, key=parse_version) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/show.py0000644000175000017500000001420100000000000030135 0ustar00vincentvincent00000000000000from __future__ import absolute_import import logging import os from email.parser import FeedParser from pip._vendor import pkg_resources from pip._vendor.packaging.utils import canonicalize_name from pip._internal.cli.base_command import Command from pip._internal.cli.status_codes import ERROR, SUCCESS logger = logging.getLogger(__name__) class ShowCommand(Command): """ Show information about one or more installed packages. The output is in RFC-compliant mail header format. """ name = 'show' usage = """ %prog [options] ...""" summary = 'Show information about installed packages.' ignore_require_venv = True def __init__(self, *args, **kw): super(ShowCommand, self).__init__(*args, **kw) self.cmd_opts.add_option( '-f', '--files', dest='files', action='store_true', default=False, help='Show the full list of installed files for each package.') self.parser.insert_option_group(0, self.cmd_opts) def run(self, options, args): if not args: logger.warning('ERROR: Please provide a package name or names.') return ERROR query = args results = search_packages_info(query) if not print_results( results, list_files=options.files, verbose=options.verbose): return ERROR return SUCCESS def search_packages_info(query): """ Gather details from installed distributions. Print distribution name, version, location, and installed files. Installed files requires a pip generated 'installed-files.txt' in the distributions '.egg-info' directory. 
""" installed = {} for p in pkg_resources.working_set: installed[canonicalize_name(p.project_name)] = p query_names = [canonicalize_name(name) for name in query] for dist in [installed[pkg] for pkg in query_names if pkg in installed]: package = { 'name': dist.project_name, 'version': dist.version, 'location': dist.location, 'requires': [dep.project_name for dep in dist.requires()], } file_list = None metadata = None if isinstance(dist, pkg_resources.DistInfoDistribution): # RECORDs should be part of .dist-info metadatas if dist.has_metadata('RECORD'): lines = dist.get_metadata_lines('RECORD') paths = [l.split(',')[0] for l in lines] paths = [os.path.join(dist.location, p) for p in paths] file_list = [os.path.relpath(p, dist.location) for p in paths] if dist.has_metadata('METADATA'): metadata = dist.get_metadata('METADATA') else: # Otherwise use pip's log for .egg-info's if dist.has_metadata('installed-files.txt'): paths = dist.get_metadata_lines('installed-files.txt') paths = [os.path.join(dist.egg_info, p) for p in paths] file_list = [os.path.relpath(p, dist.location) for p in paths] if dist.has_metadata('PKG-INFO'): metadata = dist.get_metadata('PKG-INFO') if dist.has_metadata('entry_points.txt'): entry_points = dist.get_metadata_lines('entry_points.txt') package['entry_points'] = entry_points if dist.has_metadata('INSTALLER'): for line in dist.get_metadata_lines('INSTALLER'): if line.strip(): package['installer'] = line.strip() break # @todo: Should pkg_resources.Distribution have a # `get_pkg_info` method? feed_parser = FeedParser() feed_parser.feed(metadata) pkg_info_dict = feed_parser.close() for key in ('metadata-version', 'summary', 'home-page', 'author', 'author-email', 'license'): package[key] = pkg_info_dict.get(key) # It looks like FeedParser cannot deal with repeated headers classifiers = [] for line in metadata.splitlines(): if line.startswith('Classifier: '): classifiers.append(line[len('Classifier: '):]) package['classifiers'] = classifiers if file_list: package['files'] = sorted(file_list) yield package def print_results(distributions, list_files=False, verbose=False): """ Print the informations from installed distributions found. 
""" results_printed = False for i, dist in enumerate(distributions): results_printed = True if i > 0: logger.info("---") name = dist.get('name', '') required_by = [ pkg.project_name for pkg in pkg_resources.working_set if name in [required.name for required in pkg.requires()] ] logger.info("Name: %s", name) logger.info("Version: %s", dist.get('version', '')) logger.info("Summary: %s", dist.get('summary', '')) logger.info("Home-page: %s", dist.get('home-page', '')) logger.info("Author: %s", dist.get('author', '')) logger.info("Author-email: %s", dist.get('author-email', '')) logger.info("License: %s", dist.get('license', '')) logger.info("Location: %s", dist.get('location', '')) logger.info("Requires: %s", ', '.join(dist.get('requires', []))) logger.info("Required-by: %s", ', '.join(required_by)) if verbose: logger.info("Metadata-Version: %s", dist.get('metadata-version', '')) logger.info("Installer: %s", dist.get('installer', '')) logger.info("Classifiers:") for classifier in dist.get('classifiers', []): logger.info(" %s", classifier) logger.info("Entry-points:") for entry in dist.get('entry_points', []): logger.info(" %s", entry.strip()) if list_files: logger.info("Files:") for line in dist.get('files', []): logger.info(" %s", line.strip()) if "files" not in dist: logger.info("Cannot locate installed-files.txt") return results_printed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/uninstall.py0000644000175000017500000000562300000000000031176 0ustar00vincentvincent00000000000000from __future__ import absolute_import from pip._vendor.packaging.utils import canonicalize_name from pip._internal.cli.base_command import Command from pip._internal.exceptions import InstallationError from pip._internal.req import parse_requirements from pip._internal.req.constructors import install_req_from_line from pip._internal.utils.misc import protect_pip_from_modification_on_windows class UninstallCommand(Command): """ Uninstall packages. pip is able to uninstall most installed packages. Known exceptions are: - Pure distutils packages installed with ``python setup.py install``, which leave behind no metadata to determine what files were installed. - Script wrappers installed by ``python setup.py develop``. """ name = 'uninstall' usage = """ %prog [options] ... %prog [options] -r ...""" summary = 'Uninstall packages.' def __init__(self, *args, **kw): super(UninstallCommand, self).__init__(*args, **kw) self.cmd_opts.add_option( '-r', '--requirement', dest='requirements', action='append', default=[], metavar='file', help='Uninstall all the packages listed in the given requirements ' 'file. 
This option can be used multiple times.', ) self.cmd_opts.add_option( '-y', '--yes', dest='yes', action='store_true', help="Don't ask for confirmation of uninstall deletions.") self.parser.insert_option_group(0, self.cmd_opts) def run(self, options, args): with self._build_session(options) as session: reqs_to_uninstall = {} for name in args: req = install_req_from_line( name, isolated=options.isolated_mode, ) if req.name: reqs_to_uninstall[canonicalize_name(req.name)] = req for filename in options.requirements: for req in parse_requirements( filename, options=options, session=session): if req.name: reqs_to_uninstall[canonicalize_name(req.name)] = req if not reqs_to_uninstall: raise InstallationError( 'You must give at least one requirement to %(name)s (see ' '"pip help %(name)s")' % dict(name=self.name) ) protect_pip_from_modification_on_windows( modifying_pip="pip" in reqs_to_uninstall ) for req in reqs_to_uninstall.values(): uninstall_pathset = req.uninstall( auto_confirm=options.yes, verbose=self.verbosity > 0, ) if uninstall_pathset: uninstall_pathset.commit() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/commands/wheel.py0000644000175000017500000001550100000000000030265 0ustar00vincentvincent00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import import logging import os from pip._internal.cache import WheelCache from pip._internal.cli import cmdoptions from pip._internal.cli.base_command import RequirementCommand from pip._internal.exceptions import CommandError, PreviousBuildDirError from pip._internal.legacy_resolve import Resolver from pip._internal.operations.prepare import RequirementPreparer from pip._internal.req import RequirementSet from pip._internal.req.req_tracker import RequirementTracker from pip._internal.utils.temp_dir import TempDirectory from pip._internal.wheel import WheelBuilder logger = logging.getLogger(__name__) class WheelCommand(RequirementCommand): """ Build Wheel archives for your requirements and dependencies. Wheel is a built-package format, and offers the advantage of not recompiling your software during every install. For more details, see the wheel docs: https://wheel.readthedocs.io/en/latest/ Requirements: setuptools>=0.8, and wheel. 'pip wheel' uses the bdist_wheel setuptools extension from the wheel package to build individual wheels. """ name = 'wheel' usage = """ %prog [options] ... %prog [options] -r ... %prog [options] [-e] ... %prog [options] [-e] ... %prog [options] ...""" summary = 'Build wheels from your requirements.' 
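# Illustrative sketch, standalone (example_wheelhouse is hypothetical): the
# typical workflow for this command -- build wheels into a local directory,
# then install offline from it with --no-index --find-links=<dir>.
import subprocess
import sys

def example_wheelhouse(req_file='requirements.txt', wheel_dir='wheelhouse'):
    # Mirrors `pip wheel -w <dir> -r <file>`.
    subprocess.check_call(
        [sys.executable, '-m', 'pip', 'wheel', '-w', wheel_dir, '-r', req_file]
    )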
def __init__(self, *args, **kw): super(WheelCommand, self).__init__(*args, **kw) cmd_opts = self.cmd_opts cmd_opts.add_option( '-w', '--wheel-dir', dest='wheel_dir', metavar='dir', default=os.curdir, help=("Build wheels into , where the default is the " "current working directory."), ) cmd_opts.add_option(cmdoptions.no_binary()) cmd_opts.add_option(cmdoptions.only_binary()) cmd_opts.add_option(cmdoptions.prefer_binary()) cmd_opts.add_option( '--build-option', dest='build_options', metavar='options', action='append', help="Extra arguments to be supplied to 'setup.py bdist_wheel'.", ) cmd_opts.add_option(cmdoptions.no_build_isolation()) cmd_opts.add_option(cmdoptions.use_pep517()) cmd_opts.add_option(cmdoptions.no_use_pep517()) cmd_opts.add_option(cmdoptions.constraints()) cmd_opts.add_option(cmdoptions.editable()) cmd_opts.add_option(cmdoptions.requirements()) cmd_opts.add_option(cmdoptions.src()) cmd_opts.add_option(cmdoptions.ignore_requires_python()) cmd_opts.add_option(cmdoptions.no_deps()) cmd_opts.add_option(cmdoptions.build_dir()) cmd_opts.add_option(cmdoptions.progress_bar()) cmd_opts.add_option( '--global-option', dest='global_options', action='append', metavar='options', help="Extra global options to be supplied to the setup.py " "call before the 'bdist_wheel' command.") cmd_opts.add_option( '--pre', action='store_true', default=False, help=("Include pre-release and development versions. By default, " "pip only finds stable versions."), ) cmd_opts.add_option(cmdoptions.no_clean()) cmd_opts.add_option(cmdoptions.require_hashes()) index_opts = cmdoptions.make_option_group( cmdoptions.index_group, self.parser, ) self.parser.insert_option_group(0, index_opts) self.parser.insert_option_group(0, cmd_opts) def run(self, options, args): cmdoptions.check_install_build_global(options) if options.build_dir: options.build_dir = os.path.abspath(options.build_dir) options.src_dir = os.path.abspath(options.src_dir) with self._build_session(options) as session: finder = self._build_package_finder(options, session) build_delete = (not (options.no_clean or options.build_dir)) wheel_cache = WheelCache(options.cache_dir, options.format_control) with RequirementTracker() as req_tracker, TempDirectory( options.build_dir, delete=build_delete, kind="wheel" ) as directory: requirement_set = RequirementSet( require_hashes=options.require_hashes, ) try: self.populate_requirement_set( requirement_set, args, options, finder, session, self.name, wheel_cache ) preparer = RequirementPreparer( build_dir=directory.path, src_dir=options.src_dir, download_dir=None, wheel_download_dir=options.wheel_dir, progress_bar=options.progress_bar, build_isolation=options.build_isolation, req_tracker=req_tracker, ) resolver = Resolver( preparer=preparer, finder=finder, session=session, wheel_cache=wheel_cache, use_user_site=False, upgrade_strategy="to-satisfy-only", force_reinstall=False, ignore_dependencies=options.ignore_dependencies, ignore_requires_python=options.ignore_requires_python, ignore_installed=True, isolated=options.isolated_mode, use_pep517=options.use_pep517 ) resolver.resolve(requirement_set) # build wheels wb = WheelBuilder( finder, preparer, wheel_cache, build_options=options.build_options or [], global_options=options.global_options or [], no_clean=options.no_clean, ) build_failures = wb.build( requirement_set.requirements.values(), session=session, ) if len(build_failures) != 0: raise CommandError( "Failed to build one or more wheels" ) except PreviousBuildDirError: options.no_clean = True raise finally: if 
not options.no_clean: requirement_set.cleanup_files() wheel_cache.cleanup() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/configuration.py0000644000175000017500000003337400000000000030237 0ustar00vincentvincent00000000000000"""Configuration management setup Some terminology: - name As written in config files. - value Value associated with a name - key Name combined with its section (section.name) - variant A single word describing where the configuration key-value pair came from """ import locale import logging import os import sys from pip._vendor.six.moves import configparser from pip._internal.exceptions import ( ConfigurationError, ConfigurationFileCouldNotBeLoaded, ) from pip._internal.utils import appdirs from pip._internal.utils.compat import WINDOWS, expanduser from pip._internal.utils.misc import ensure_dir, enum from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import ( Any, Dict, Iterable, List, NewType, Optional, Tuple ) RawConfigParser = configparser.RawConfigParser # Shorthand Kind = NewType("Kind", str) logger = logging.getLogger(__name__) # NOTE: Maybe use the optionxform attribute to normalize keynames. def _normalize_name(name): # type: (str) -> str """Make a name consistent regardless of source (environment or file) """ name = name.lower().replace('_', '-') if name.startswith('--'): name = name[2:] # only prefer long opts return name def _disassemble_key(name): # type: (str) -> List[str] if "." not in name: error_message = ( "Key does not contain dot separated section and key. " "Perhaps you wanted to use 'global.{}' instead?" ).format(name) raise ConfigurationError(error_message) return name.split(".", 1) # The kinds of configurations there are. kinds = enum( USER="user", # User Specific GLOBAL="global", # System Wide SITE="site", # [Virtual] Environment Specific ENV="env", # from PIP_CONFIG_FILE ENV_VAR="env-var", # from Environment Variables ) CONFIG_BASENAME = 'pip.ini' if WINDOWS else 'pip.conf' def get_configuration_files(): global_config_files = [ os.path.join(path, CONFIG_BASENAME) for path in appdirs.site_config_dirs('pip') ] site_config_file = os.path.join(sys.prefix, CONFIG_BASENAME) legacy_config_file = os.path.join( expanduser('~'), 'pip' if WINDOWS else '.pip', CONFIG_BASENAME, ) new_config_file = os.path.join( appdirs.user_config_dir("pip"), CONFIG_BASENAME ) return { kinds.GLOBAL: global_config_files, kinds.SITE: [site_config_file], kinds.USER: [legacy_config_file, new_config_file], } class Configuration(object): """Handles management of configuration. Provides an interface for accessing and managing configuration files. This class provides an API that takes "section.key-name" style keys and stores the value associated with it as "key-name" under the section "section". This allows for a clean interface wherein both the section and the key-name are preserved in an easy-to-manage form in the configuration files, and the stored data stays tidy.
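    Example (editor's illustration of the class's own API as implemented
    below; the index URL is a hypothetical value)::

        config = Configuration(isolated=False, load_only=kinds.USER)
        config.load()
        config.set_value("global.index-url", "https://example.org/simple")
        config.save()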
""" def __init__(self, isolated, load_only=None): # type: (bool, Kind) -> None super(Configuration, self).__init__() _valid_load_only = [kinds.USER, kinds.GLOBAL, kinds.SITE, None] if load_only not in _valid_load_only: raise ConfigurationError( "Got invalid value for load_only - should be one of {}".format( ", ".join(map(repr, _valid_load_only[:-1])) ) ) self.isolated = isolated # type: bool self.load_only = load_only # type: Optional[Kind] # The order here determines the override order. self._override_order = [ kinds.GLOBAL, kinds.USER, kinds.SITE, kinds.ENV, kinds.ENV_VAR ] self._ignore_env_names = ["version", "help"] # Because we keep track of where we got the data from self._parsers = { variant: [] for variant in self._override_order } # type: Dict[Kind, List[Tuple[str, RawConfigParser]]] self._config = { variant: {} for variant in self._override_order } # type: Dict[Kind, Dict[str, Any]] self._modified_parsers = [] # type: List[Tuple[str, RawConfigParser]] def load(self): # type: () -> None """Loads configuration from configuration files and environment """ self._load_config_files() if not self.isolated: self._load_environment_vars() def get_file_to_edit(self): # type: () -> Optional[str] """Returns the file with highest priority in configuration """ assert self.load_only is not None, \ "Need to be specified a file to be editing" try: return self._get_parser_to_modify()[0] except IndexError: return None def items(self): # type: () -> Iterable[Tuple[str, Any]] """Returns key-value pairs like dict.items() representing the loaded configuration """ return self._dictionary.items() def get_value(self, key): # type: (str) -> Any """Get a value from the configuration. """ try: return self._dictionary[key] except KeyError: raise ConfigurationError("No such key - {}".format(key)) def set_value(self, key, value): # type: (str, Any) -> None """Modify a value in the configuration. """ self._ensure_have_load_only() fname, parser = self._get_parser_to_modify() if parser is not None: section, name = _disassemble_key(key) # Modify the parser and the configuration if not parser.has_section(section): parser.add_section(section) parser.set(section, name, value) self._config[self.load_only][key] = value self._mark_as_modified(fname, parser) def unset_value(self, key): # type: (str) -> None """Unset a value in the configuration. """ self._ensure_have_load_only() if key not in self._config[self.load_only]: raise ConfigurationError("No such key - {}".format(key)) fname, parser = self._get_parser_to_modify() if parser is not None: section, name = _disassemble_key(key) # Remove the key in the parser modified_something = False if parser.has_section(section): # Returns whether the option was removed or not modified_something = parser.remove_option(section, name) if modified_something: # name removed from parser, section may now be empty section_iter = iter(parser.items(section)) try: val = next(section_iter) except StopIteration: val = None if val is None: parser.remove_section(section) self._mark_as_modified(fname, parser) else: raise ConfigurationError( "Fatal Internal error [id=1]. Please report as a bug." ) del self._config[self.load_only][key] def save(self): # type: () -> None """Save the current in-memory state. """ self._ensure_have_load_only() for fname, parser in self._modified_parsers: logger.info("Writing to %s", fname) # Ensure directory exists. 
ensure_dir(os.path.dirname(fname)) with open(fname, "w") as f: parser.write(f) # # Private routines # def _ensure_have_load_only(self): # type: () -> None if self.load_only is None: raise ConfigurationError("Needed a specific file to be modifying.") logger.debug("Will be working with %s variant only", self.load_only) @property def _dictionary(self): # type: () -> Dict[str, Any] """A dictionary representing the loaded configuration. """ # NOTE: Dictionaries are not populated if not loaded. So, conditionals # are not needed here. retval = {} for variant in self._override_order: retval.update(self._config[variant]) return retval def _load_config_files(self): # type: () -> None """Loads configuration from configuration files """ config_files = dict(self._iter_config_files()) if config_files[kinds.ENV][0:1] == [os.devnull]: logger.debug( "Skipping loading configuration files due to " "environment's PIP_CONFIG_FILE being os.devnull" ) return for variant, files in config_files.items(): for fname in files: # If there's specific variant set in `load_only`, load only # that variant, not the others. if self.load_only is not None and variant != self.load_only: logger.debug( "Skipping file '%s' (variant: %s)", fname, variant ) continue parser = self._load_file(variant, fname) # Keeping track of the parsers used self._parsers[variant].append((fname, parser)) def _load_file(self, variant, fname): # type: (Kind, str) -> RawConfigParser logger.debug("For variant '%s', will try loading '%s'", variant, fname) parser = self._construct_parser(fname) for section in parser.sections(): items = parser.items(section) self._config[variant].update(self._normalized_keys(section, items)) return parser def _construct_parser(self, fname): # type: (str) -> RawConfigParser parser = configparser.RawConfigParser() # If there is no such file, don't bother reading it but create the # parser anyway, to hold the data. # Doing this is useful when modifying and saving files, where we don't # need to construct a parser. if os.path.exists(fname): try: parser.read(fname) except UnicodeDecodeError: # See https://github.com/pypa/pip/issues/4963 raise ConfigurationFileCouldNotBeLoaded( reason="contains invalid {} characters".format( locale.getpreferredencoding(False) ), fname=fname, ) except configparser.Error as error: # See https://github.com/pypa/pip/issues/4893 raise ConfigurationFileCouldNotBeLoaded(error=error) return parser def _load_environment_vars(self): # type: () -> None """Loads configuration from environment variables """ self._config[kinds.ENV_VAR].update( self._normalized_keys(":env:", self._get_environ_vars()) ) def _normalized_keys(self, section, items): # type: (str, Iterable[Tuple[str, Any]]) -> Dict[str, Any] """Normalizes items to construct a dictionary with normalized keys. This routine is where the names become keys and are made the same regardless of source - configuration files or environment. """ normalized = {} for name, val in items: key = section + "." + _normalize_name(name) normalized[key] = val return normalized def _get_environ_vars(self): # type: () -> Iterable[Tuple[str, str]] """Returns a generator with all environmental vars with prefix PIP_""" for key, val in os.environ.items(): should_be_yielded = ( key.startswith("PIP_") and key[4:].lower() not in self._ignore_env_names ) if should_be_yielded: yield key[4:].lower(), val # XXX: This is patched in the tests. def _iter_config_files(self): # type: () -> Iterable[Tuple[Kind, List[str]]] """Yields variant and configuration files associated with it. 
This should be treated like items of a dictionary. """ # SMELL: Move the conditions out of this function # environment variables have the lowest priority config_file = os.environ.get('PIP_CONFIG_FILE', None) if config_file is not None: yield kinds.ENV, [config_file] else: yield kinds.ENV, [] config_files = get_configuration_files() # at the base we have any global configuration yield kinds.GLOBAL, config_files[kinds.GLOBAL] # per-user configuration next should_load_user_config = not self.isolated and not ( config_file and os.path.exists(config_file) ) if should_load_user_config: # The legacy config file is overridden by the new config file yield kinds.USER, config_files[kinds.USER] # finally virtualenv configuration first trumping others yield kinds.SITE, config_files[kinds.SITE] def _get_parser_to_modify(self): # type: () -> Tuple[str, RawConfigParser] # Determine which parser to modify parsers = self._parsers[self.load_only] if not parsers: # This should not happen if everything works correctly. raise ConfigurationError( "Fatal Internal error [id=2]. Please report as a bug." ) # Use the highest priority parser. return parsers[-1] # XXX: This is patched in the tests. def _mark_as_modified(self, fname, parser): # type: (str, RawConfigParser) -> None file_parser_tuple = (fname, parser) if file_parser_tuple not in self._modified_parsers: self._modified_parsers.append(file_parser_tuple) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.939994 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/distributions/0000755000175000017500000000000000000000000027706 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/distributions/__init__.py0000644000175000017500000000153500000000000032023 0ustar00vincentvincent00000000000000from pip._internal.distributions.source import SourceDistribution from pip._internal.distributions.wheel import WheelDistribution from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from pip._internal.distributions.base import AbstractDistribution from pip._internal.req.req_install import InstallRequirement def make_distribution_for_install_requirement(install_req): # type: (InstallRequirement) -> AbstractDistribution """Returns a Distribution for the given InstallRequirement """ # If it's not an editable, is a wheel, it's a WheelDistribution if install_req.editable: return SourceDistribution(install_req) if install_req.link and install_req.is_wheel: return WheelDistribution(install_req) # Otherwise, a SourceDistribution return SourceDistribution(install_req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/distributions/base.py0000644000175000017500000000175000000000000031175 0ustar00vincentvincent00000000000000import abc from pip._vendor.six import add_metaclass @add_metaclass(abc.ABCMeta) class AbstractDistribution(object): """A base class for handling installable artifacts. The requirements for anything installable are as follows: - we must be able to determine the requirement name (or we can't correctly handle the non-upgrade case). 
- for packages with setup requirements, we must also be able to determine their requirements without installing additional packages (for the same reason as run-time dependencies) - we must be able to create a Distribution object exposing the above metadata. """ def __init__(self, req): super(AbstractDistribution, self).__init__() self.req = req @abc.abstractmethod def get_pkg_resources_distribution(self): raise NotImplementedError() @abc.abstractmethod def prepare_distribution_metadata(self, finder, build_isolation): raise NotImplementedError() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/distributions/installed.py0000644000175000017500000000066200000000000032243 0ustar00vincentvincent00000000000000from pip._internal.distributions.base import AbstractDistribution class InstalledDistribution(AbstractDistribution): """Represents an installed package. This does not need any preparation as the required information has already been computed. """ def get_pkg_resources_distribution(self): return self.req.satisfied_by def prepare_distribution_metadata(self, finder, build_isolation): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/distributions/source.py0000644000175000017500000000667200000000000031567 0ustar00vincentvincent00000000000000import logging from pip._internal.build_env import BuildEnvironment from pip._internal.distributions.base import AbstractDistribution from pip._internal.exceptions import InstallationError logger = logging.getLogger(__name__) class SourceDistribution(AbstractDistribution): """Represents a source distribution. The preparation step for these needs metadata for the packages to be generated, either using PEP 517 or using the legacy `setup.py egg_info`. NOTE from @pradyunsg (14 June 2019) I expect SourceDistribution class will need to be split into `legacy_source` (setup.py based) and `source` (PEP 517 based) when we start bringing logic for preparation out of InstallRequirement into this class. """ def get_pkg_resources_distribution(self): return self.req.get_dist() def prepare_distribution_metadata(self, finder, build_isolation): # Prepare for building. We need to: # 1. Load pyproject.toml (if it exists) # 2. Set up the build environment self.req.load_pyproject_toml() should_isolate = self.req.use_pep517 and build_isolation def _raise_conflicts(conflicting_with, conflicting_reqs): raise InstallationError( "Some build dependencies for %s conflict with %s: %s." % ( self.req, conflicting_with, ', '.join( '%s is incompatible with %s' % (installed, wanted) for installed, wanted in sorted(conflicting_reqs)))) if should_isolate: # Isolate in a BuildEnvironment and install the build-time # requirements.
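            # (Editor's sketch, condensed from the code that follows -- not an
            #  additional API. The isolated flow is roughly:
            #
            #      env = BuildEnvironment()           # scratch prefix on sys.path
            #      env.install_requirements(finder, pyproject_requires, 'overlay', ...)
            #      with env:                          # env active while the hook runs
            #          backend.get_requires_for_build_wheel()
            #  )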
self.req.build_env = BuildEnvironment() self.req.build_env.install_requirements( finder, self.req.pyproject_requires, 'overlay', "Installing build dependencies" ) conflicting, missing = self.req.build_env.check_requirements( self.req.requirements_to_check ) if conflicting: _raise_conflicts("PEP 517/518 supported requirements", conflicting) if missing: logger.warning( "Missing build requirements in pyproject.toml for %s.", self.req, ) logger.warning( "The project does not specify a build backend, and " "pip cannot fall back to setuptools without %s.", " and ".join(map(repr, sorted(missing))) ) # Install any extra build dependencies that the backend requests. # This must be done in a second pass, as the pyproject.toml # dependencies must be installed before we can call the backend. with self.req.build_env: # We need to have the env active when calling the hook. self.req.spin_message = "Getting requirements to build wheel" reqs = self.req.pep517_backend.get_requires_for_build_wheel() conflicting, missing = self.req.build_env.check_requirements(reqs) if conflicting: _raise_conflicts("the backend dependencies", conflicting) self.req.build_env.install_requirements( finder, missing, 'normal', "Installing backend dependencies" ) self.req.prepare_metadata() self.req.assert_source_matches_version() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/distributions/wheel.py0000644000175000017500000000077400000000000031374 0ustar00vincentvincent00000000000000from pip._vendor import pkg_resources from pip._internal.distributions.base import AbstractDistribution class WheelDistribution(AbstractDistribution): """Represents a wheel distribution. This does not need any preparation as wheels can be directly unpacked. """ def get_pkg_resources_distribution(self): return list(pkg_resources.find_distributions( self.req.source_dir))[0] def prepare_distribution_metadata(self, finder, build_isolation): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/download.py0000644000175000017500000012323100000000000027167 0ustar00vincentvincent00000000000000from __future__ import absolute_import import cgi import email.utils import json import logging import mimetypes import os import platform import re import shutil import sys from pip._vendor import requests, urllib3 from pip._vendor.cachecontrol import CacheControlAdapter from pip._vendor.cachecontrol.caches import FileCache from pip._vendor.lockfile import LockError from pip._vendor.requests.adapters import BaseAdapter, HTTPAdapter from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response from pip._vendor.requests.structures import CaseInsensitiveDict from pip._vendor.requests.utils import get_netrc_auth # NOTE: XMLRPC Client is not annotated in typeshed as on 2017-07-17, which is # why we ignore the type on this import from pip._vendor.six.moves import xmlrpc_client # type: ignore from pip._vendor.six.moves.urllib import parse as urllib_parse from pip._vendor.six.moves.urllib import request as urllib_request import pip from pip._internal.exceptions import HashMismatch, InstallationError from pip._internal.models.index import PyPI # Import ssl from compat so the initial import occurs in only one place. 
from pip._internal.utils.compat import HAS_TLS, ssl from pip._internal.utils.encoding import auto_decode from pip._internal.utils.filesystem import check_path_owner from pip._internal.utils.glibc import libc_ver from pip._internal.utils.marker_files import write_delete_marker_file from pip._internal.utils.misc import ( ARCHIVE_EXTENSIONS, ask, ask_input, ask_password, ask_path_exists, backup_dir, consume, display_path, format_size, get_installed_version, path_to_url, remove_auth_from_url, rmtree, split_auth_netloc_from_url, splitext, unpack_file, ) from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.ui import DownloadProgressProvider from pip._internal.vcs import vcs if MYPY_CHECK_RUNNING: from typing import ( Optional, Tuple, Dict, IO, Text, Union ) from optparse import Values from pip._internal.models.link import Link from pip._internal.utils.hashes import Hashes from pip._internal.vcs.versioncontrol import AuthInfo, VersionControl Credentials = Tuple[str, str, str] __all__ = ['get_file_content', 'is_url', 'url_to_path', 'path_to_url', 'is_archive_file', 'unpack_vcs_link', 'unpack_file_url', 'is_vcs_url', 'is_file_url', 'unpack_http_url', 'unpack_url', 'parse_content_disposition', 'sanitize_content_filename'] logger = logging.getLogger(__name__) try: import keyring # noqa except ImportError: keyring = None except Exception as exc: logger.warning("Keyring is skipped due to an exception: %s", str(exc)) keyring = None # These are environment variables present when running under various # CI systems. For each variable, some CI systems that use the variable # are indicated. The collection was chosen so that for each of a number # of popular systems, at least one of the environment variables is used. # This list is used to provide some indication of and lower bound for # CI traffic to PyPI. Thus, it is okay if the list is not comprehensive. # For more background, see: https://github.com/pypa/pip/issues/5499 CI_ENVIRONMENT_VARIABLES = ( # Azure Pipelines 'BUILD_BUILDID', # Jenkins 'BUILD_ID', # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI 'CI', # Explicit environment variable. 'PIP_IS_CI', ) def looks_like_ci(): # type: () -> bool """ Return whether it looks like pip is running under CI. """ # We don't use the method of checking for a tty (e.g. using isatty()) # because some CI systems mimic a tty (e.g. Travis CI). Thus that # method doesn't provide definitive information in either direction. return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES) def user_agent(): """ Return a string representing the user agent. 
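    The result is "pip/<version>" followed by a compact JSON object of
    environment details, e.g. (editor's illustration only; the exact
    fields and values vary by platform and pip version)::

        pip/19.1.1 {"ci":null,"cpu":"x86_64","implementation":{"name":"CPython","version":"3.7.3"},...}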
""" data = { "installer": {"name": "pip", "version": pip.__version__}, "python": platform.python_version(), "implementation": { "name": platform.python_implementation(), }, } if data["implementation"]["name"] == 'CPython': data["implementation"]["version"] = platform.python_version() elif data["implementation"]["name"] == 'PyPy': if sys.pypy_version_info.releaselevel == 'final': pypy_version_info = sys.pypy_version_info[:3] else: pypy_version_info = sys.pypy_version_info data["implementation"]["version"] = ".".join( [str(x) for x in pypy_version_info] ) elif data["implementation"]["name"] == 'Jython': # Complete Guess data["implementation"]["version"] = platform.python_version() elif data["implementation"]["name"] == 'IronPython': # Complete Guess data["implementation"]["version"] = platform.python_version() if sys.platform.startswith("linux"): from pip._vendor import distro distro_infos = dict(filter( lambda x: x[1], zip(["name", "version", "id"], distro.linux_distribution()), )) libc = dict(filter( lambda x: x[1], zip(["lib", "version"], libc_ver()), )) if libc: distro_infos["libc"] = libc if distro_infos: data["distro"] = distro_infos if sys.platform.startswith("darwin") and platform.mac_ver()[0]: data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]} if platform.system(): data.setdefault("system", {})["name"] = platform.system() if platform.release(): data.setdefault("system", {})["release"] = platform.release() if platform.machine(): data["cpu"] = platform.machine() if HAS_TLS: data["openssl_version"] = ssl.OPENSSL_VERSION setuptools_version = get_installed_version("setuptools") if setuptools_version is not None: data["setuptools_version"] = setuptools_version # Use None rather than False so as not to give the impression that # pip knows it is not being run under CI. Rather, it is a null or # inconclusive result. Also, we include some value rather than no # value to make it easier to know that the check has been run. data["ci"] = True if looks_like_ci() else None user_data = os.environ.get("PIP_USER_AGENT_USER_DATA") if user_data is not None: data["user_data"] = user_data return "{data[installer][name]}/{data[installer][version]} {json}".format( data=data, json=json.dumps(data, separators=(",", ":"), sort_keys=True), ) def _get_keyring_auth(url, username): """Return the tuple auth for a given url from keyring.""" if not url or not keyring: return None try: try: get_credential = keyring.get_credential except AttributeError: pass else: logger.debug("Getting credentials from keyring for %s", url) cred = get_credential(url, username) if cred is not None: return cred.username, cred.password return None if username: logger.debug("Getting password from keyring for %s", url) password = keyring.get_password(url, username) if password: return username, password except Exception as exc: logger.warning("Keyring is skipped due to an exception: %s", str(exc)) class MultiDomainBasicAuth(AuthBase): def __init__(self, prompting=True, index_urls=None): # type: (bool, Optional[Values]) -> None self.prompting = prompting self.index_urls = index_urls self.passwords = {} # type: Dict[str, AuthInfo] # When the user is prompted to enter credentials and keyring is # available, we will offer to save them. If the user accepts, # this value is set to the credentials they entered. After the # request authenticates, the caller should call # ``save_credentials`` to save these. 
self._credentials_to_save = None # type: Optional[Credentials] def _get_index_url(self, url): """Return the original index URL matching the requested URL. Cached or dynamically generated credentials may work against the original index URL rather than just the netloc. The provided url should have had its username and password removed already. If the original index url had credentials then they will be included in the return value. Returns None if no matching index was found, or if --no-index was specified by the user. """ if not url or not self.index_urls: return None for u in self.index_urls: prefix = remove_auth_from_url(u).rstrip("/") + "/" if url.startswith(prefix): return u def _get_new_credentials(self, original_url, allow_netrc=True, allow_keyring=True): """Find and return credentials for the specified URL.""" # Split the credentials and netloc from the url. url, netloc, url_user_password = split_auth_netloc_from_url( original_url) # Start with the credentials embedded in the url username, password = url_user_password if username is not None and password is not None: logger.debug("Found credentials in url for %s", netloc) return url_user_password # Find a matching index url for this request index_url = self._get_index_url(url) if index_url: # Split the credentials from the url. index_info = split_auth_netloc_from_url(index_url) if index_info: index_url, _, index_url_user_password = index_info logger.debug("Found index url %s", index_url) # If an index URL was found, try its embedded credentials if index_url and index_url_user_password[0] is not None: username, password = index_url_user_password if username is not None and password is not None: logger.debug("Found credentials in index url for %s", netloc) return index_url_user_password # Get creds from netrc if we still don't have them if allow_netrc: netrc_auth = get_netrc_auth(original_url) if netrc_auth: logger.debug("Found credentials in netrc for %s", netloc) return netrc_auth # If we don't have a password and keyring is available, use it. if allow_keyring: # The index url is more specific than the netloc, so try it first kr_auth = (_get_keyring_auth(index_url, username) or _get_keyring_auth(netloc, username)) if kr_auth: logger.debug("Found credentials in keyring for %s", netloc) return kr_auth return None, None def _get_url_and_credentials(self, original_url): """Return the credentials to use for the provided URL. If allowed, netrc and keyring may be used to obtain the correct credentials. Returns (url_without_credentials, username, password). Note that even if the original URL contains credentials, this function may return a different username and password. """ url, netloc, _ = split_auth_netloc_from_url(original_url) # Use any stored credentials that we have for this netloc username, password = self.passwords.get(netloc, (None, None)) # If nothing cached, acquire new credentials without prompting # the user (e.g. from netrc, keyring, or similar). 
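        # (Editor's summary of the lookup order _get_new_credentials implements
        #  above -- no new behavior:
        #      1. credentials embedded in the requested URL itself
        #      2. credentials embedded in a matching index URL
        #      3. the user's netrc file
        #      4. the keyring, when that package is importable)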
if username is None or password is None: username, password = self._get_new_credentials(original_url) if username is not None and password is not None: # Store the username and password self.passwords[netloc] = (username, password) return url, username, password def __call__(self, req): # Get credentials for this request url, username, password = self._get_url_and_credentials(req.url) # Set the url of the request to the url without any credentials req.url = url if username is not None and password is not None: # Send the basic auth with this request req = HTTPBasicAuth(username, password)(req) # Attach a hook to handle 401 responses req.register_hook("response", self.handle_401) return req # Factored out to allow for easy patching in tests def _prompt_for_password(self, netloc): username = ask_input("User for %s: " % netloc) if not username: return None, None, False # three-element tuple, matching the unpacking in handle_401 auth = _get_keyring_auth(netloc, username) if auth: return auth[0], auth[1], False password = ask_password("Password: ") return username, password, True # Factored out to allow for easy patching in tests def _should_save_password_to_keyring(self): if not keyring: return False return ask("Save credentials to keyring [y/N]: ", ["y", "n"]) == "y" def handle_401(self, resp, **kwargs): # We only care about 401 responses, anything else we want to just # pass through the actual response if resp.status_code != 401: return resp # We are not able to prompt the user so simply return the response if not self.prompting: return resp parsed = urllib_parse.urlparse(resp.url) # Prompt the user for a new username and password username, password, save = self._prompt_for_password(parsed.netloc) # Store the new username and password to use for future requests self._credentials_to_save = None if username is not None and password is not None: self.passwords[parsed.netloc] = (username, password) # Prompt to save the password to keyring if save and self._should_save_password_to_keyring(): self._credentials_to_save = (parsed.netloc, username, password) # Consume content and release the original connection to allow our new # request to reuse the same one. resp.content resp.raw.release_conn() # Add our new username and password to the request req = HTTPBasicAuth(username or "", password or "")(resp.request) req.register_hook("response", self.warn_on_401) # On successful request, save the credentials that were used to # keyring. (Note that if the user responded "no" above, this member # is not set and nothing will be saved.)
if self._credentials_to_save: req.register_hook("response", self.save_credentials) # Send our new request new_resp = resp.connection.send(req, **kwargs) new_resp.history.append(resp) return new_resp def warn_on_401(self, resp, **kwargs): """Response callback to warn about incorrect credentials.""" if resp.status_code == 401: logger.warning('401 Error, Credentials not correct for %s', resp.request.url) def save_credentials(self, resp, **kwargs): """Response callback to save credentials on success.""" assert keyring is not None, "should never reach here without keyring" if not keyring: return creds = self._credentials_to_save self._credentials_to_save = None if creds and resp.status_code < 400: try: logger.info('Saving credentials to keyring') keyring.set_password(*creds) except Exception: logger.exception('Failed to save credentials') class LocalFSAdapter(BaseAdapter): def send(self, request, stream=None, timeout=None, verify=None, cert=None, proxies=None): pathname = url_to_path(request.url) resp = Response() resp.status_code = 200 resp.url = request.url try: stats = os.stat(pathname) except OSError as exc: resp.status_code = 404 resp.raw = exc else: modified = email.utils.formatdate(stats.st_mtime, usegmt=True) content_type = mimetypes.guess_type(pathname)[0] or "text/plain" resp.headers = CaseInsensitiveDict({ "Content-Type": content_type, "Content-Length": stats.st_size, "Last-Modified": modified, }) resp.raw = open(pathname, "rb") resp.close = resp.raw.close return resp def close(self): pass class SafeFileCache(FileCache): """ A file based cache which is safe to use even when the target directory may not be accessible or writable. """ def __init__(self, *args, **kwargs): super(SafeFileCache, self).__init__(*args, **kwargs) # Check to ensure that the directory containing our cache directory # is owned by the user current executing pip. If it does not exist # we will check the parent directory until we find one that does exist. # If it is not owned by the user executing pip then we will disable # the cache and log a warning. if not check_path_owner(self.directory): logger.warning( "The directory '%s' or its parent directory is not owned by " "the current user and the cache has been disabled. Please " "check the permissions and owner of that directory. If " "executing pip with sudo, you may want sudo's -H flag.", self.directory, ) # Set our directory to None to disable the Cache self.directory = None def get(self, *args, **kwargs): # If we don't have a directory, then the cache should be a no-op. if self.directory is None: return try: return super(SafeFileCache, self).get(*args, **kwargs) except (LockError, OSError, IOError): # We intentionally silence this error, if we can't access the cache # then we can just skip caching and process the request as if # caching wasn't enabled. pass def set(self, *args, **kwargs): # If we don't have a directory, then the cache should be a no-op. if self.directory is None: return try: return super(SafeFileCache, self).set(*args, **kwargs) except (LockError, OSError, IOError): # We intentionally silence this error, if we can't access the cache # then we can just skip caching and process the request as if # caching wasn't enabled. pass def delete(self, *args, **kwargs): # If we don't have a directory, then the cache should be a no-op. 
if self.directory is None: return try: return super(SafeFileCache, self).delete(*args, **kwargs) except (LockError, OSError, IOError): # We intentionally silence this error, if we can't access the cache # then we can just skip caching and process the request as if # caching wasn't enabled. pass class InsecureHTTPAdapter(HTTPAdapter): def cert_verify(self, conn, url, verify, cert): conn.cert_reqs = 'CERT_NONE' conn.ca_certs = None class PipSession(requests.Session): timeout = None # type: Optional[int] def __init__(self, *args, **kwargs): retries = kwargs.pop("retries", 0) cache = kwargs.pop("cache", None) insecure_hosts = kwargs.pop("insecure_hosts", []) index_urls = kwargs.pop("index_urls", None) super(PipSession, self).__init__(*args, **kwargs) # Attach our User Agent to the request self.headers["User-Agent"] = user_agent() # Attach our Authentication handler to the session self.auth = MultiDomainBasicAuth(index_urls=index_urls) # Create our urllib3.Retry instance which will allow us to customize # how we handle retries. retries = urllib3.Retry( # Set the total number of retries that a particular request can # have. total=retries, # A 503 error from PyPI typically means that the Fastly -> Origin # connection got interrupted in some way. A 503 error in general # is typically considered a transient error so we'll go ahead and # retry it. # A 500 may indicate transient error in Amazon S3 # A 520 or 527 - may indicate transient error in CloudFlare status_forcelist=[500, 503, 520, 527], # Add a small amount of back off between failed requests in # order to prevent hammering the service. backoff_factor=0.25, ) # We want to _only_ cache responses on securely fetched origins. We do # this because we can't validate the response of an insecurely fetched # origin, and we don't want someone to be able to poison the cache and # require manual eviction from the cache to fix it. if cache: secure_adapter = CacheControlAdapter( cache=SafeFileCache(cache, use_dir_lock=True), max_retries=retries, ) else: secure_adapter = HTTPAdapter(max_retries=retries) # Our Insecure HTTPAdapter disables HTTPS validation. It does not # support caching (see above) so we'll use it for all http:// URLs as # well as any https:// host that we've marked as ignoring TLS errors # for. insecure_adapter = InsecureHTTPAdapter(max_retries=retries) # Save this for later use in add_insecure_host(). self._insecure_adapter = insecure_adapter self.mount("https://", secure_adapter) self.mount("http://", insecure_adapter) # Enable file:// urls self.mount("file://", LocalFSAdapter()) # We want to use a non-validating adapter for any requests which are # deemed insecure. for host in insecure_hosts: self.add_insecure_host(host) def add_insecure_host(self, host): # type: (str) -> None self.mount('https://{}/'.format(host), self._insecure_adapter) def request(self, method, url, *args, **kwargs): # Allow setting a default timeout on a session kwargs.setdefault("timeout", self.timeout) # Dispatch the actual request return super(PipSession, self).request(method, url, *args, **kwargs) def get_file_content(url, comes_from=None, session=None): # type: (str, Optional[str], Optional[PipSession]) -> Tuple[str, Text] """Gets the content of a file; it may be a filename, file: URL, or http: URL. Returns (location, content). Content is unicode. :param url: File path or url. :param comes_from: Origin description of requirements. :param session: Instance of pip.download.PipSession. 
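    Example (editor's illustration; the file name is a hypothetical local
    path, which takes the plain-file branch below)::

        session = PipSession()
        location, content = get_file_content('requirements.txt',
                                             session=session)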
""" if session is None: raise TypeError( "get_file_content() missing 1 required keyword argument: 'session'" ) match = _scheme_re.search(url) if match: scheme = match.group(1).lower() if (scheme == 'file' and comes_from and comes_from.startswith('http')): raise InstallationError( 'Requirements file %s references URL %s, which is local' % (comes_from, url)) if scheme == 'file': path = url.split(':', 1)[1] path = path.replace('\\', '/') match = _url_slash_drive_re.match(path) if match: path = match.group(1) + ':' + path.split('|', 1)[1] path = urllib_parse.unquote(path) if path.startswith('/'): path = '/' + path.lstrip('/') url = path else: # FIXME: catch some errors resp = session.get(url) resp.raise_for_status() return resp.url, resp.text try: with open(url, 'rb') as f: content = auto_decode(f.read()) except IOError as exc: raise InstallationError( 'Could not open requirements file: %s' % str(exc) ) return url, content _scheme_re = re.compile(r'^(http|https|file):', re.I) _url_slash_drive_re = re.compile(r'/*([a-z])\|', re.I) def is_url(name): # type: (Union[str, Text]) -> bool """Returns true if the name looks like a URL""" if ':' not in name: return False scheme = name.split(':', 1)[0].lower() return scheme in ['http', 'https', 'file', 'ftp'] + vcs.all_schemes def url_to_path(url): # type: (str) -> str """ Convert a file: URL to a path. """ assert url.startswith('file:'), ( "You can only turn file: urls into filenames (not %r)" % url) _, netloc, path, _, _ = urllib_parse.urlsplit(url) if not netloc or netloc == 'localhost': # According to RFC 8089, same as empty authority. netloc = '' elif sys.platform == 'win32': # If we have a UNC path, prepend UNC share notation. netloc = '\\\\' + netloc else: raise ValueError( 'non-local file URIs are not supported on this platform: %r' % url ) path = urllib_request.url2pathname(netloc + path) return path def is_archive_file(name): # type: (str) -> bool """Return True if `name` is a considered as an archive file.""" ext = splitext(name)[1].lower() if ext in ARCHIVE_EXTENSIONS: return True return False def unpack_vcs_link(link, location): vcs_backend = _get_used_vcs_backend(link) vcs_backend.unpack(location, url=link.url) def _get_used_vcs_backend(link): # type: (Link) -> Optional[VersionControl] """ Return a VersionControl object or None. """ for vcs_backend in vcs.backends: if link.scheme in vcs_backend.schemes: return vcs_backend return None def is_vcs_url(link): # type: (Link) -> bool return bool(_get_used_vcs_backend(link)) def is_file_url(link): # type: (Link) -> bool return link.url.lower().startswith('file:') def is_dir_url(link): # type: (Link) -> bool """Return whether a file:// Link points to a directory. ``link`` must not have any other scheme but file://. Call is_file_url() first. """ link_path = url_to_path(link.url_without_fragment) return os.path.isdir(link_path) def _progress_indicator(iterable, *args, **kwargs): return iterable def _download_url( resp, # type: Response link, # type: Link content_file, # type: IO hashes, # type: Optional[Hashes] progress_bar # type: str ): # type: (...) 
-> None try: total_length = int(resp.headers['content-length']) except (ValueError, KeyError, TypeError): total_length = 0 cached_resp = getattr(resp, "from_cache", False) if logger.getEffectiveLevel() > logging.INFO: show_progress = False elif cached_resp: show_progress = False elif total_length > (40 * 1000): show_progress = True elif not total_length: show_progress = True else: show_progress = False show_url = link.show_url def resp_read(chunk_size): try: # Special case for urllib3. for chunk in resp.raw.stream( chunk_size, # We use decode_content=False here because we don't # want urllib3 to mess with the raw bytes we get # from the server. If we decompress inside of # urllib3 then we cannot verify the checksum # because the checksum will be of the compressed # file. This breakage will only occur if the # server adds a Content-Encoding header, which # depends on how the server was configured: # - Some servers will notice that the file isn't a # compressible file and will leave the file alone # and with an empty Content-Encoding # - Some servers will notice that the file is # already compressed and will leave the file # alone and will add a Content-Encoding: gzip # header # - Some servers won't notice anything at all and # will take a file that's already been compressed # and compress it again and set the # Content-Encoding: gzip header # # By setting this not to decode automatically we # hope to eliminate problems with the second case. decode_content=False): yield chunk except AttributeError: # Standard file-like object. while True: chunk = resp.raw.read(chunk_size) if not chunk: break yield chunk def written_chunks(chunks): for chunk in chunks: content_file.write(chunk) yield chunk progress_indicator = _progress_indicator if link.netloc == PyPI.netloc: url = show_url else: url = link.url_without_fragment if show_progress: # We don't show progress on cached responses progress_indicator = DownloadProgressProvider(progress_bar, max=total_length) if total_length: logger.info("Downloading %s (%s)", url, format_size(total_length)) else: logger.info("Downloading %s", url) elif cached_resp: logger.info("Using cached %s", url) else: logger.info("Downloading %s", url) logger.debug('Downloading from URL %s', link) downloaded_chunks = written_chunks( progress_indicator( resp_read(CONTENT_CHUNK_SIZE), CONTENT_CHUNK_SIZE ) ) if hashes: hashes.check_against_chunks(downloaded_chunks) else: consume(downloaded_chunks) def _copy_file(filename, location, link): copy = True download_location = os.path.join(location, link.filename) if os.path.exists(download_location): response = ask_path_exists( 'The file %s exists. (i)gnore, (w)ipe, (b)ackup, (a)abort' % display_path(download_location), ('i', 'w', 'b', 'a')) if response == 'i': copy = False elif response == 'w': logger.warning('Deleting %s', display_path(download_location)) os.remove(download_location) elif response == 'b': dest_file = backup_dir(download_location) logger.warning( 'Backing up %s to %s', display_path(download_location), display_path(dest_file), ) shutil.move(download_location, dest_file) elif response == 'a': sys.exit(-1) if copy: shutil.copy(filename, download_location) logger.info('Saved %s', display_path(download_location)) def unpack_http_url( link, # type: Link location, # type: str download_dir=None, # type: Optional[str] session=None, # type: Optional[PipSession] hashes=None, # type: Optional[Hashes] progress_bar="on" # type: str ): # type: (...) 
-> None if session is None: raise TypeError( "unpack_http_url() missing 1 required keyword argument: 'session'" ) with TempDirectory(kind="unpack") as temp_dir: # If a download dir is specified, is the file already downloaded there? already_downloaded_path = None if download_dir: already_downloaded_path = _check_download_dir(link, download_dir, hashes) if already_downloaded_path: from_path = already_downloaded_path content_type = mimetypes.guess_type(from_path)[0] else: # let's download to a tmp dir from_path, content_type = _download_http_url(link, session, temp_dir.path, hashes, progress_bar) # unpack the archive to the build dir location. even when only # downloading archives, they have to be unpacked to parse dependencies unpack_file(from_path, location, content_type, link) # a download dir is specified; let's copy the archive there if download_dir and not already_downloaded_path: _copy_file(from_path, download_dir, link) if not already_downloaded_path: os.unlink(from_path) def unpack_file_url( link, # type: Link location, # type: str download_dir=None, # type: Optional[str] hashes=None # type: Optional[Hashes] ): # type: (...) -> None """Unpack link into location. If download_dir is provided and link points to a file, make a copy of the link file inside download_dir. """ link_path = url_to_path(link.url_without_fragment) # If it's a url to a local directory if is_dir_url(link): if os.path.isdir(location): rmtree(location) shutil.copytree(link_path, location, symlinks=True) if download_dir: logger.info('Link is a directory, ignoring download_dir') return # If --require-hashes is off, `hashes` is either empty, the # link's embedded hash, or MissingHashes; it is required to # match. If --require-hashes is on, we are satisfied by any # hash in `hashes` matching: a URL-based or an option-based # one; no internet-sourced hash will be in `hashes`. if hashes: hashes.check_against_path(link_path) # If a download dir is specified, is the file already there and valid? already_downloaded_path = None if download_dir: already_downloaded_path = _check_download_dir(link, download_dir, hashes) if already_downloaded_path: from_path = already_downloaded_path else: from_path = link_path content_type = mimetypes.guess_type(from_path)[0] # unpack the archive to the build dir location. even when only downloading # archives, they have to be unpacked to parse dependencies unpack_file(from_path, location, content_type, link) # a download dir is specified and not already downloaded if download_dir and not already_downloaded_path: _copy_file(from_path, download_dir, link) class PipXmlrpcTransport(xmlrpc_client.Transport): """Provide a `xmlrpclib.Transport` implementation via a `PipSession` object. 
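    pip's search command constructs it roughly like this (editor's
    condensed illustration)::

        transport = PipXmlrpcTransport(index_url, session)
        pypi = xmlrpc_client.ServerProxy(index_url, transport)
        hits = pypi.search({'name': query, 'summary': query}, 'or')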
""" def __init__(self, index_url, session, use_datetime=False): xmlrpc_client.Transport.__init__(self, use_datetime) index_parts = urllib_parse.urlparse(index_url) self._scheme = index_parts.scheme self._session = session def request(self, host, handler, request_body, verbose=False): parts = (self._scheme, host, handler, None, None, None) url = urllib_parse.urlunparse(parts) try: headers = {'Content-Type': 'text/xml'} response = self._session.post(url, data=request_body, headers=headers, stream=True) response.raise_for_status() self.verbose = verbose return self.parse_response(response.raw) except requests.HTTPError as exc: logger.critical( "HTTP error %s while getting %s", exc.response.status_code, url, ) raise def unpack_url( link, # type: Link location, # type: str download_dir=None, # type: Optional[str] only_download=False, # type: bool session=None, # type: Optional[PipSession] hashes=None, # type: Optional[Hashes] progress_bar="on" # type: str ): # type: (...) -> None """Unpack link. If link is a VCS link: if only_download, export into download_dir and ignore location else unpack into location for other types of link: - unpack into location - if download_dir, copy the file into download_dir - if only_download, mark location for deletion :param hashes: A Hashes object, one of whose embedded hashes must match, or HashMismatch will be raised. If the Hashes is empty, no matches are required, and unhashable types of requirements (like VCS ones, which would ordinarily raise HashUnsupported) are allowed. """ # non-editable vcs urls if is_vcs_url(link): unpack_vcs_link(link, location) # file urls elif is_file_url(link): unpack_file_url(link, location, download_dir, hashes=hashes) # http urls else: if session is None: session = PipSession() unpack_http_url( link, location, download_dir, session, hashes=hashes, progress_bar=progress_bar ) if only_download: write_delete_marker_file(location) def sanitize_content_filename(filename): # type: (str) -> str """ Sanitize the "filename" value from a Content-Disposition header. """ return os.path.basename(filename) def parse_content_disposition(content_disposition, default_filename): # type: (str, str) -> str """ Parse the "filename" value from a Content-Disposition header, and return the default filename if the result is empty. """ _type, params = cgi.parse_header(content_disposition) filename = params.get('filename') if filename: # We need to sanitize the filename to prevent directory traversal # in case the filename contains ".." path parts. filename = sanitize_content_filename(filename) return filename or default_filename def _download_http_url( link, # type: Link session, # type: PipSession temp_dir, # type: str hashes, # type: Optional[Hashes] progress_bar # type: str ): # type: (...) -> Tuple[str, str] """Download link url into temp_dir using provided session""" target_url = link.url.split('#', 1)[0] try: resp = session.get( target_url, # We use Accept-Encoding: identity here because requests # defaults to accepting compressed responses. This breaks in # a variety of ways depending on how the server is configured. 
# - Some servers will notice that the file isn't a compressible # file and will leave the file alone and with an empty # Content-Encoding # - Some servers will notice that the file is already # compressed and will leave the file alone and will add a # Content-Encoding: gzip header # - Some servers won't notice anything at all and will take # a file that's already been compressed and compress it again # and set the Content-Encoding: gzip header # By setting this to request only the identity encoding We're # hoping to eliminate the third case. Hopefully there does not # exist a server which when given a file will notice it is # already compressed and that you're not asking for a # compressed file and will then decompress it before sending # because if that's the case I don't think it'll ever be # possible to make this work. headers={"Accept-Encoding": "identity"}, stream=True, ) resp.raise_for_status() except requests.HTTPError as exc: logger.critical( "HTTP error %s while getting %s", exc.response.status_code, link, ) raise content_type = resp.headers.get('content-type', '') filename = link.filename # fallback # Have a look at the Content-Disposition header for a better guess content_disposition = resp.headers.get('content-disposition') if content_disposition: filename = parse_content_disposition(content_disposition, filename) ext = splitext(filename)[1] # type: Optional[str] if not ext: ext = mimetypes.guess_extension(content_type) if ext: filename += ext if not ext and link.url != resp.url: ext = os.path.splitext(resp.url)[1] if ext: filename += ext file_path = os.path.join(temp_dir, filename) with open(file_path, 'wb') as content_file: _download_url(resp, link, content_file, hashes, progress_bar) return file_path, content_type def _check_download_dir(link, download_dir, hashes): # type: (Link, str, Optional[Hashes]) -> Optional[str] """ Check download_dir for previously downloaded file with correct hash If a correct file is found return its path else None """ download_path = os.path.join(download_dir, link.filename) if os.path.exists(download_path): # If already downloaded, does its hash match? logger.info('File was already downloaded %s', download_path) if hashes: try: hashes.check_against_path(download_path) except HashMismatch: logger.warning( 'Previously-downloaded file %s has bad hash. 
                    ' 'Re-downloading.',
                    download_path
                )
                os.unlink(download_path)
                return None
        return download_path
    return None


# ----- File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/exceptions.py -----

"""Exceptions used throughout package"""
from __future__ import absolute_import

from itertools import chain, groupby, repeat

from pip._vendor.six import iteritems

from pip._internal.utils.typing import MYPY_CHECK_RUNNING

if MYPY_CHECK_RUNNING:
    from typing import Optional
    from pip._vendor.pkg_resources import Distribution
    from pip._internal.req.req_install import InstallRequirement


class PipError(Exception):
    """Base pip exception"""


class ConfigurationError(PipError):
    """General exception in configuration"""


class InstallationError(PipError):
    """General exception during installation"""


class UninstallationError(PipError):
    """General exception during uninstallation"""


class NoneMetadataError(PipError):
    """
    Raised when accessing "METADATA" or "PKG-INFO" metadata for a
    pip._vendor.pkg_resources.Distribution object and
    `dist.has_metadata('METADATA')` returns True but
    `dist.get_metadata('METADATA')` returns None (and similarly for
    "PKG-INFO").
    """

    def __init__(self, dist, metadata_name):
        # type: (Distribution, str) -> None
        """
        :param dist: A Distribution object.
        :param metadata_name: The name of the metadata being accessed
            (can be "METADATA" or "PKG-INFO").
        """
        self.dist = dist
        self.metadata_name = metadata_name

    def __str__(self):
        # type: () -> str
        # Use `dist` in the error message because its stringification
        # includes more information, like the version and location.
        return (
            'None {} metadata found for distribution: {}'.format(
                self.metadata_name, self.dist,
            )
        )


class DistributionNotFound(InstallationError):
    """Raised when a distribution cannot be found to satisfy a requirement"""


class RequirementsFileParseError(InstallationError):
    """Raised when a general error occurs parsing a requirements file line."""


class BestVersionAlreadyInstalled(PipError):
    """Raised when the most up-to-date version of a package is already
    installed."""


class BadCommand(PipError):
    """Raised when virtualenv or a command is not found"""


class CommandError(PipError):
    """Raised when there is an error in command-line arguments"""


class PreviousBuildDirError(PipError):
    """Raised when there's a previous conflicting build directory"""


class InvalidWheelFilename(InstallationError):
    """Invalid wheel filename."""


class UnsupportedWheel(InstallationError):
    """Unsupported wheel."""


class HashErrors(InstallationError):
    """Multiple HashError instances rolled into one for reporting"""

    def __init__(self):
        self.errors = []

    def append(self, error):
        self.errors.append(error)

    def __str__(self):
        lines = []
        self.errors.sort(key=lambda e: e.order)
        for cls, errors_of_cls in groupby(self.errors, lambda e: e.__class__):
            lines.append(cls.head)
            lines.extend(e.body() for e in errors_of_cls)
        if lines:
            return '\n'.join(lines)

    def __nonzero__(self):
        return bool(self.errors)

    def __bool__(self):
        return self.__nonzero__()


class HashError(InstallationError):
    """
    A failure to verify a package against known-good hashes.

    :cvar order: An int sorting hash exception classes by difficulty of
        recovery (lower being harder), so the user doesn't bother fretting
        about unpinned packages when there are deeper issues, like VCS
        dependencies, to deal with. Also keeps error reports in a
        deterministic order.
    :cvar head: A section heading for display above potentially
        many exceptions of this kind
    :ivar req: The InstallRequirement that triggered this error. This is
        pasted on after the exception is instantiated, because it's not
        typically available earlier.

    """
    req = None  # type: Optional[InstallRequirement]
    head = ''

    def body(self):
        """Return a summary of me for display under the heading.

        This default implementation simply prints a description of the
        triggering requirement.

        :param req: The InstallRequirement that provoked this error, with
            populate_link() having already been called

        """
        return '    %s' % self._requirement_name()

    def __str__(self):
        return '%s\n%s' % (self.head, self.body())

    def _requirement_name(self):
        """Return a description of the requirement that triggered me.

        This default implementation returns the long description of the
        req, with line numbers

        """
        return str(self.req) if self.req else 'unknown package'


class VcsHashUnsupported(HashError):
    """A hash was provided for a version-control-system-based requirement, but
    we don't have a method for hashing those."""

    order = 0
    head = ("Can't verify hashes for these requirements because we don't "
            "have a way to hash version control repositories:")


class DirectoryUrlHashUnsupported(HashError):
    """A hash was provided for a file:// requirement that points to a
    directory, but we don't have a method for hashing those."""

    order = 1
    head = ("Can't verify hashes for these file:// requirements because they "
            "point to directories:")


class HashMissing(HashError):
    """A hash was needed for a requirement but is absent."""

    order = 2
    head = ('Hashes are required in --require-hashes mode, but they are '
            'missing from some requirements. Here is a list of those '
            'requirements along with the hashes their downloaded archives '
            'actually had. Add lines like these to your requirements files to '
            'prevent tampering. (If you did not enable --require-hashes '
            'manually, note that it turns on automatically when any package '
            'has a hash.)')

    def __init__(self, gotten_hash):
        """
        :param gotten_hash: The hash of the (possibly malicious) archive we
            just downloaded
        """
        self.gotten_hash = gotten_hash

    def body(self):
        # Dodge circular import.
        from pip._internal.utils.hashes import FAVORITE_HASH

        package = None
        if self.req:
            # In the case of URL-based requirements, display the original URL
            # seen in the requirements file rather than the package name,
            # so the output can be directly copied into the requirements file.
            package = (self.req.original_link if self.req.original_link
                       # In case someone feeds something downright stupid
                       # to InstallRequirement's constructor.
                       else getattr(self.req, 'req', None))
        return '    %s --hash=%s:%s' % (package or 'unknown package',
                                        FAVORITE_HASH, self.gotten_hash)


class HashUnpinned(HashError):
    """A requirement had a hash specified but was not pinned to a specific
    version."""

    order = 3
    head = ('In --require-hashes mode, all requirements must have their '
            'versions pinned with ==. These do not:')


class HashMismatch(HashError):
    """
    Distribution file hash values don't match.

    :ivar package_name: The name of the package that triggered the hash
        mismatch. Feel free to write to this after the exception is raised
        to improve its error message.
    """

    order = 4
    head = ('THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS '
            'FILE. If you have updated the package versions, please update '
            'the hashes. Otherwise, examine the package contents carefully; '
            'someone may have tampered with them.')

    def __init__(self, allowed, gots):
        """
        :param allowed: A dict of algorithm names pointing to lists of allowed
            hex digests
        :param gots: A dict of algorithm names pointing to hashes we
            actually got from the files under suspicion
        """
        self.allowed = allowed
        self.gots = gots

    def body(self):
        return '    %s:\n%s' % (self._requirement_name(),
                                self._hash_comparison())

    def _hash_comparison(self):
        """
        Return a comparison of actual and expected hash values.

        Example::

            Expected sha256 abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde
                        or 123451234512345123451234512345123451234512345
            Got        bcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdef

        """
        def hash_then_or(hash_name):
            # For now, all the decent hashes have 6-char names, so we can get
            # away with hard-coding space literals.
            return chain([hash_name], repeat('    or'))

        lines = []
        for hash_name, expecteds in iteritems(self.allowed):
            prefix = hash_then_or(hash_name)
            lines.extend(('        Expected %s %s' % (next(prefix), e))
                         for e in expecteds)
            lines.append('             Got        %s\n' %
                         self.gots[hash_name].hexdigest())
            prefix = '    or'
        return '\n'.join(lines)


class UnsupportedPythonVersion(InstallationError):
    """Unsupported Python version according to Requires-Python package
    metadata."""


class ConfigurationFileCouldNotBeLoaded(ConfigurationError):
    """Raised when there are errors while loading a configuration file.
    """

    def __init__(self, reason="could not be loaded", fname=None, error=None):
        super(ConfigurationFileCouldNotBeLoaded, self).__init__(error)
        self.reason = reason
        self.fname = fname
        self.error = error

    def __str__(self):
        if self.fname is not None:
            message_part = " in {}.".format(self.fname)
        else:
            assert self.error is not None
            message_part = ".\n{}\n".format(self.error.message)
        return "Configuration file {}{}".format(self.reason, message_part)


# ----- File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/index.py -----

"""Routines related to PyPI, indexes"""
from __future__ import absolute_import

import cgi
import itertools
import logging
import mimetypes
import os
import re

from pip._vendor import html5lib, requests, six
from pip._vendor.distlib.compat import unescape
from pip._vendor.packaging import specifiers
from pip._vendor.packaging.utils import canonicalize_name
from pip._vendor.packaging.version import parse as parse_version
from pip._vendor.requests.exceptions import HTTPError, RetryError, SSLError
from pip._vendor.six.moves.urllib import parse as urllib_parse
from pip._vendor.six.moves.urllib import request as urllib_request

from pip._internal.download import is_url, url_to_path
from pip._internal.exceptions import (
    BestVersionAlreadyInstalled, DistributionNotFound, InvalidWheelFilename,
    UnsupportedWheel,
)
from pip._internal.models.candidate import InstallationCandidate
from pip._internal.models.format_control import FormatControl
from pip._internal.models.link import Link
from pip._internal.models.selection_prefs import SelectionPreferences
from pip._internal.models.target_python import TargetPython
from pip._internal.utils.compat import ipaddress
from pip._internal.utils.logging import indent_log
from pip._internal.utils.misc import (
    ARCHIVE_EXTENSIONS, SUPPORTED_EXTENSIONS, WHEEL_EXTENSION, path_to_url,
    redact_password_from_url,
)
from pip._internal.utils.packaging import check_requires_python
from
pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.wheel import Wheel if MYPY_CHECK_RUNNING: from logging import Logger from typing import ( Any, Callable, FrozenSet, Iterable, Iterator, List, MutableMapping, Optional, Sequence, Set, Text, Tuple, Union, ) import xml.etree.ElementTree from pip._vendor.packaging.version import _BaseVersion from pip._vendor.requests import Response from pip._internal.models.search_scope import SearchScope from pip._internal.req import InstallRequirement from pip._internal.download import PipSession from pip._internal.pep425tags import Pep425Tag from pip._internal.utils.hashes import Hashes BuildTag = Tuple[Any, ...] # either empty tuple or Tuple[int, str] CandidateSortingKey = ( Tuple[int, int, int, _BaseVersion, BuildTag, Optional[int]] ) HTMLElement = xml.etree.ElementTree.Element SecureOrigin = Tuple[str, str, Optional[str]] __all__ = ['FormatControl', 'FoundCandidates', 'PackageFinder'] SECURE_ORIGINS = [ # protocol, hostname, port # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC) ("https", "*", "*"), ("*", "localhost", "*"), ("*", "127.0.0.0/8", "*"), ("*", "::1/128", "*"), ("file", "*", None), # ssh is always secure. ("ssh", "*", "*"), ] # type: List[SecureOrigin] logger = logging.getLogger(__name__) def _match_vcs_scheme(url): # type: (str) -> Optional[str] """Look for VCS schemes in the URL. Returns the matched VCS scheme, or None if there's no match. """ from pip._internal.vcs import vcs for scheme in vcs.schemes: if url.lower().startswith(scheme) and url[len(scheme)] in '+:': return scheme return None def _is_url_like_archive(url): # type: (str) -> bool """Return whether the URL looks like an archive. """ filename = Link(url).filename for bad_ext in ARCHIVE_EXTENSIONS: if filename.endswith(bad_ext): return True return False class _NotHTML(Exception): def __init__(self, content_type, request_desc): # type: (str, str) -> None super(_NotHTML, self).__init__(content_type, request_desc) self.content_type = content_type self.request_desc = request_desc def _ensure_html_header(response): # type: (Response) -> None """Check the Content-Type header to ensure the response contains HTML. Raises `_NotHTML` if the content type is not text/html. """ content_type = response.headers.get("Content-Type", "") if not content_type.lower().startswith("text/html"): raise _NotHTML(content_type, response.request.method) class _NotHTTP(Exception): pass def _ensure_html_response(url, session): # type: (str, PipSession) -> None """Send a HEAD request to the URL, and ensure the response contains HTML. Raises `_NotHTTP` if the URL is not available for a HEAD request, or `_NotHTML` if the content type is not text/html. """ scheme, netloc, path, query, fragment = urllib_parse.urlsplit(url) if scheme not in {'http', 'https'}: raise _NotHTTP() resp = session.head(url, allow_redirects=True) resp.raise_for_status() _ensure_html_header(resp) def _get_html_response(url, session): # type: (str, PipSession) -> Response """Access an HTML page with GET, and return the response. This consists of three parts: 1. If the URL looks suspiciously like an archive, send a HEAD first to check the Content-Type is HTML, to avoid downloading a large file. Raise `_NotHTTP` if the content type cannot be determined, or `_NotHTML` if it is not HTML. 2. Actually perform the request. Raise HTTP exceptions on network failures. 3. Check the Content-Type header to make sure we got HTML, and raise `_NotHTML` otherwise. 
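    A rough usage sketch (editor's illustration; the session object and the
    URL are hypothetical, not part of the original docstring)::

        resp = _get_html_response(
            'https://pypi.org/simple/pip/', session=session,
        )
        # resp.content now holds the simple-index HTML for `pip`.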
""" if _is_url_like_archive(url): _ensure_html_response(url, session=session) logger.debug('Getting page %s', redact_password_from_url(url)) resp = session.get( url, headers={ "Accept": "text/html", # We don't want to blindly returned cached data for # /simple/, because authors generally expecting that # twine upload && pip install will function, but if # they've done a pip install in the last ~10 minutes # it won't. Thus by setting this to zero we will not # blindly use any cached data, however the benefit of # using max-age=0 instead of no-cache, is that we will # still support conditional requests, so we will still # minimize traffic sent in cases where the page hasn't # changed at all, we will just always incur the round # trip for the conditional GET now instead of only # once per 10 minutes. # For more information, please see pypa/pip#5670. "Cache-Control": "max-age=0", }, ) resp.raise_for_status() # The check for archives above only works if the url ends with # something that looks like an archive. However that is not a # requirement of an url. Unless we issue a HEAD request on every # url we cannot know ahead of time for sure if something is HTML # or not. However we can check after we've downloaded it. _ensure_html_header(resp) return resp def _handle_get_page_fail( link, # type: Link reason, # type: Union[str, Exception] meth=None # type: Optional[Callable[..., None]] ): # type: (...) -> None if meth is None: meth = logger.debug meth("Could not fetch URL %s: %s - skipping", link, reason) def _get_html_page(link, session=None): # type: (Link, Optional[PipSession]) -> Optional[HTMLPage] if session is None: raise TypeError( "_get_html_page() missing 1 required keyword argument: 'session'" ) url = link.url.split('#', 1)[0] # Check for VCS schemes that do not support lookup as web pages. vcs_scheme = _match_vcs_scheme(url) if vcs_scheme: logger.debug('Cannot look at %s URL %s', vcs_scheme, link) return None # Tack index.html onto file:// URLs that point to directories scheme, _, path, _, _, _ = urllib_parse.urlparse(url) if (scheme == 'file' and os.path.isdir(urllib_request.url2pathname(path))): # add trailing slash if not present so urljoin doesn't trim # final segment if not url.endswith('/'): url += '/' url = urllib_parse.urljoin(url, 'index.html') logger.debug(' file: URL is directory, getting %s', url) try: resp = _get_html_response(url, session=session) except _NotHTTP: logger.debug( 'Skipping page %s because it looks like an archive, and cannot ' 'be checked by HEAD.', link, ) except _NotHTML as exc: logger.debug( 'Skipping page %s because the %s request got Content-Type: %s', link, exc.request_desc, exc.content_type, ) except HTTPError as exc: _handle_get_page_fail(link, exc) except RetryError as exc: _handle_get_page_fail(link, exc) except SSLError as exc: reason = "There was a problem confirming the ssl certificate: " reason += str(exc) _handle_get_page_fail(link, reason, meth=logger.info) except requests.ConnectionError as exc: _handle_get_page_fail(link, "connection error: %s" % exc) except requests.Timeout: _handle_get_page_fail(link, "timed out") else: return HTMLPage(resp.content, resp.url, resp.headers) return None def _check_link_requires_python( link, # type: Link version_info, # type: Tuple[int, int, int] ignore_requires_python=False, # type: bool ): # type: (...) -> bool """ Return whether the given Python version is compatible with a link's "Requires-Python" value. :param version_info: A 3-tuple of ints representing the Python major-minor-micro version to check. 
:param ignore_requires_python: Whether to ignore the "Requires-Python" value if the given Python version isn't compatible. """ try: is_compatible = check_requires_python( link.requires_python, version_info=version_info, ) except specifiers.InvalidSpecifier: logger.debug( "Ignoring invalid Requires-Python (%r) for link: %s", link.requires_python, link, ) else: if not is_compatible: version = '.'.join(map(str, version_info)) if not ignore_requires_python: logger.debug( 'Link requires a different Python (%s not in: %r): %s', version, link.requires_python, link, ) return False logger.debug( 'Ignoring failed Requires-Python check (%s not in: %r) ' 'for link: %s', version, link.requires_python, link, ) return True class LinkEvaluator(object): """ Responsible for evaluating links for a particular project. """ _py_version_re = re.compile(r'-py([123]\.?[0-9]?)$') # Don't include an allow_yanked default value to make sure each call # site considers whether yanked releases are allowed. This also causes # that decision to be made explicit in the calling code, which helps # people when reading the code. def __init__( self, project_name, # type: str canonical_name, # type: str formats, # type: FrozenSet target_python, # type: TargetPython allow_yanked, # type: bool ignore_requires_python=None, # type: Optional[bool] ): # type: (...) -> None """ :param project_name: The user supplied package name. :param canonical_name: The canonical package name. :param formats: The formats allowed for this package. Should be a set with 'binary' or 'source' or both in it. :param target_python: The target Python interpreter to use when evaluating link compatibility. This is used, for example, to check wheel compatibility, as well as when checking the Python version, e.g. the Python version embedded in a link filename (or egg fragment) and against an HTML link's optional PEP 503 "data-requires-python" attribute. :param allow_yanked: Whether files marked as yanked (in the sense of PEP 592) are permitted to be candidates for install. :param ignore_requires_python: Whether to ignore incompatible PEP 503 "data-requires-python" values in HTML links. Defaults to False. """ if ignore_requires_python is None: ignore_requires_python = False self._allow_yanked = allow_yanked self._canonical_name = canonical_name self._ignore_requires_python = ignore_requires_python self._formats = formats self._target_python = target_python self.project_name = project_name def evaluate_link(self, link): # type: (Link) -> Tuple[bool, Optional[Text]] """ Determine whether a link is a candidate for installation. :return: A tuple (is_candidate, result), where `result` is (1) a version string if `is_candidate` is True, and (2) if `is_candidate` is False, an optional string to log the reason the link fails to qualify. """ version = None if link.is_yanked and not self._allow_yanked: reason = link.yanked_reason or '' # Mark this as a unicode string to prevent "UnicodeEncodeError: # 'ascii' codec can't encode character" in Python 2 when # the reason contains non-ascii characters. 
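        # Editor's note (illustrative, not in the original source): the
        # (is_candidate, result) tuples returned below look like:
        #   (True, '19.2')         - usable candidate; result is the version
        #   (False, 'not a file')  - skipped; result is logged as the reason
        #   (False, None)          - skipped silently (no log entry)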
return (False, u'yanked for reason: {}'.format(reason)) if link.egg_fragment: egg_info = link.egg_fragment ext = link.ext else: egg_info, ext = link.splitext() if not ext: return (False, 'not a file') if ext not in SUPPORTED_EXTENSIONS: return (False, 'unsupported archive format: %s' % ext) if "binary" not in self._formats and ext == WHEEL_EXTENSION: reason = 'No binaries permitted for %s' % self.project_name return (False, reason) if "macosx10" in link.path and ext == '.zip': return (False, 'macosx10 one') if ext == WHEEL_EXTENSION: try: wheel = Wheel(link.filename) except InvalidWheelFilename: return (False, 'invalid wheel filename') if canonicalize_name(wheel.name) != self._canonical_name: reason = 'wrong project name (not %s)' % self.project_name return (False, reason) supported_tags = self._target_python.get_tags() if not wheel.supported(supported_tags): # Include the wheel's tags in the reason string to # simplify troubleshooting compatibility issues. file_tags = wheel.get_formatted_file_tags() reason = ( "none of the wheel's tags match: {}".format( ', '.join(file_tags) ) ) return (False, reason) version = wheel.version # This should be up by the self.ok_binary check, but see issue 2700. if "source" not in self._formats and ext != WHEEL_EXTENSION: return (False, 'No sources permitted for %s' % self.project_name) if not version: version = _extract_version_from_fragment( egg_info, self._canonical_name, ) if not version: return ( False, 'Missing project version for %s' % self.project_name, ) match = self._py_version_re.search(version) if match: version = version[:match.start()] py_version = match.group(1) if py_version != self._target_python.py_version: return (False, 'Python version is incorrect') supports_python = _check_link_requires_python( link, version_info=self._target_python.py_version_info, ignore_requires_python=self._ignore_requires_python, ) if not supports_python: # Return None for the reason text to suppress calling # _log_skipped_link(). return (False, None) logger.debug('Found link %s, version: %s', link, version) return (True, version) def filter_unallowed_hashes( candidates, # type: List[InstallationCandidate] hashes, # type: Hashes project_name, # type: str ): # type: (...) -> List[InstallationCandidate] """ Filter out candidates whose hashes aren't allowed, and return a new list of candidates. If at least one candidate has an allowed hash, then all candidates with either an allowed hash or no hash specified are returned. Otherwise, the given candidates are returned. Including the candidates with no hash specified when there is a match allows a warning to be logged if there is a more preferred candidate with no hash specified. Returning all candidates in the case of no matches lets pip report the hash of the candidate that would otherwise have been installed (e.g. permitting the user to more easily update their requirements file with the desired hash). """ if not hashes: logger.debug( 'Given no hashes to check %s links for project %r: ' 'discarding no candidates', len(candidates), project_name, ) # Make sure we're not returning back the given value. return list(candidates) matches_or_no_digest = [] # Collect the non-matches for logging purposes. 
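        # Worked example (editor's addition): given
        # hashes == Hashes({'sha256': ['abc...']}) and three candidates --
        # one matching link, one link with no hash, and one mismatch -- the
        # match and the hash-less link are kept; only the mismatch is
        # discarded.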
non_matches = [] match_count = 0 for candidate in candidates: link = candidate.link if not link.has_hash: pass elif link.is_hash_allowed(hashes=hashes): match_count += 1 else: non_matches.append(candidate) continue matches_or_no_digest.append(candidate) if match_count: filtered = matches_or_no_digest else: # Make sure we're not returning back the given value. filtered = list(candidates) if len(filtered) == len(candidates): discard_message = 'discarding no candidates' else: discard_message = 'discarding {} non-matches:\n {}'.format( len(non_matches), '\n '.join(str(candidate.link) for candidate in non_matches) ) logger.debug( 'Checked %s links for project %r against %s hashes ' '(%s matches, %s no digest): %s', len(candidates), project_name, hashes.digest_count, match_count, len(matches_or_no_digest) - match_count, discard_message ) return filtered class CandidatePreferences(object): """ Encapsulates some of the preferences for filtering and sorting InstallationCandidate objects. """ def __init__( self, prefer_binary=False, # type: bool allow_all_prereleases=False, # type: bool ): # type: (...) -> None """ :param allow_all_prereleases: Whether to allow all pre-releases. """ self.allow_all_prereleases = allow_all_prereleases self.prefer_binary = prefer_binary class CandidateEvaluator(object): """ Responsible for filtering and sorting candidates for installation based on what tags are valid. """ @classmethod def create( cls, project_name, # type: str target_python=None, # type: Optional[TargetPython] prefer_binary=False, # type: bool allow_all_prereleases=False, # type: bool specifier=None, # type: Optional[specifiers.BaseSpecifier] hashes=None, # type: Optional[Hashes] ): # type: (...) -> CandidateEvaluator """Create a CandidateEvaluator object. :param target_python: The target Python interpreter to use when checking compatibility. If None (the default), a TargetPython object will be constructed from the running Python. :param hashes: An optional collection of allowed hashes. """ if target_python is None: target_python = TargetPython() if specifier is None: specifier = specifiers.SpecifierSet() supported_tags = target_python.get_tags() return cls( project_name=project_name, supported_tags=supported_tags, specifier=specifier, prefer_binary=prefer_binary, allow_all_prereleases=allow_all_prereleases, hashes=hashes, ) def __init__( self, project_name, # type: str supported_tags, # type: List[Pep425Tag] specifier, # type: specifiers.BaseSpecifier prefer_binary=False, # type: bool allow_all_prereleases=False, # type: bool hashes=None, # type: Optional[Hashes] ): # type: (...) -> None """ :param supported_tags: The PEP 425 tags supported by the target Python in order of preference (most preferred first). """ self._allow_all_prereleases = allow_all_prereleases self._hashes = hashes self._prefer_binary = prefer_binary self._project_name = project_name self._specifier = specifier self._supported_tags = supported_tags def get_applicable_candidates( self, candidates, # type: List[InstallationCandidate] ): # type: (...) -> List[InstallationCandidate] """ Return the applicable candidates from a list of candidates. """ # Using None infers from the specifier instead. 
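        # Editor's note (illustrative): when allow_all_prereleases is False,
        # the expression below yields None, which lets packaging's
        # SpecifierSet.filter() decide on its own -- e.g. a specifier such
        # as '>=1.0a1' then admits pre-releases, while '>=1.0' does not.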
allow_prereleases = self._allow_all_prereleases or None specifier = self._specifier versions = { str(v) for v in specifier.filter( # We turn the version object into a str here because otherwise # when we're debundled but setuptools isn't, Python will see # packaging.version.Version and # pkg_resources._vendor.packaging.version.Version as different # types. This way we'll use a str as a common data interchange # format. If we stop using the pkg_resources provided specifier # and start using our own, we can drop the cast to str(). (str(c.version) for c in candidates), prereleases=allow_prereleases, ) } # Again, converting version to str to deal with debundling. applicable_candidates = [ c for c in candidates if str(c.version) in versions ] return filter_unallowed_hashes( candidates=applicable_candidates, hashes=self._hashes, project_name=self._project_name, ) def make_found_candidates( self, candidates, # type: List[InstallationCandidate] ): # type: (...) -> FoundCandidates """ Create and return a `FoundCandidates` instance. :param specifier: An optional object implementing `filter` (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable versions. """ applicable_candidates = self.get_applicable_candidates(candidates) return FoundCandidates( candidates, applicable_candidates=applicable_candidates, evaluator=self, ) def _sort_key(self, candidate): # type: (InstallationCandidate) -> CandidateSortingKey """ Function to pass as the `key` argument to a call to sorted() to sort InstallationCandidates by preference. Returns a tuple such that tuples sorting as greater using Python's default comparison operator are more preferred. The preference is as follows: First and foremost, candidates with allowed (matching) hashes are always preferred over candidates without matching hashes. This is because e.g. if the only candidate with an allowed hash is yanked, we still want to use that candidate. Second, excepting hash considerations, candidates that have been yanked (in the sense of PEP 592) are always less preferred than candidates that haven't been yanked. Then: If not finding wheels, they are sorted by version only. If finding wheels, then the sort order is by version, then: 1. existing installs 2. wheels ordered via Wheel.support_index_min(self._supported_tags) 3. source archives If prefer_binary was set, then all wheels are sorted above sources. Note: it was considered to embed this logic into the Link comparison operators, but then different sdist links with the same version, would have to be considered equal """ valid_tags = self._supported_tags support_num = len(valid_tags) build_tag = tuple() # type: BuildTag binary_preference = 0 link = candidate.link if link.is_wheel: # can raise InvalidWheelFilename wheel = Wheel(link.filename) if not wheel.supported(valid_tags): raise UnsupportedWheel( "%s is not a supported wheel for this platform. It " "can't be sorted." % wheel.filename ) if self._prefer_binary: binary_preference = 1 pri = -(wheel.support_index_min(valid_tags)) if wheel.build_tag is not None: match = re.match(r'^(\d+)(.*)$', wheel.build_tag) build_tag_groups = match.groups() build_tag = (int(build_tag_groups[0]), build_tag_groups[1]) else: # sdist pri = -(support_num) has_allowed_hash = int(link.is_hash_allowed(self._hashes)) yank_value = -1 * int(link.is_yanked) # -1 for yanked. return ( has_allowed_hash, yank_value, binary_preference, candidate.version, build_tag, pri, ) def get_best_candidate( self, candidates, # type: List[InstallationCandidate] ): # type: (...) 
-> Optional[InstallationCandidate] """ Return the best candidate per the instance's sort order, or None if no candidate is acceptable. """ if not candidates: return None best_candidate = max(candidates, key=self._sort_key) # Log a warning per PEP 592 if necessary before returning. link = best_candidate.link if link.is_yanked: reason = link.yanked_reason or '' msg = ( # Mark this as a unicode string to prevent # "UnicodeEncodeError: 'ascii' codec can't encode character" # in Python 2 when the reason contains non-ascii characters. u'The candidate selected for download or install is a ' 'yanked version: {candidate}\n' 'Reason for being yanked: {reason}' ).format(candidate=best_candidate, reason=reason) logger.warning(msg) return best_candidate class FoundCandidates(object): """A collection of candidates, returned by `PackageFinder.find_candidates`. This class is only intended to be instantiated by CandidateEvaluator's `make_found_candidates()` method. """ def __init__( self, candidates, # type: List[InstallationCandidate] applicable_candidates, # type: List[InstallationCandidate] evaluator, # type: CandidateEvaluator ): # type: (...) -> None """ :param candidates: A sequence of all available candidates found. :param applicable_candidates: The applicable candidates. :param evaluator: A CandidateEvaluator object to sort applicable candidates by order of preference. """ self._applicable_candidates = applicable_candidates self._candidates = candidates self._evaluator = evaluator def iter_all(self): # type: () -> Iterable[InstallationCandidate] """Iterate through all candidates. """ return iter(self._candidates) def iter_applicable(self): # type: () -> Iterable[InstallationCandidate] """Iterate through the applicable candidates. """ return iter(self._applicable_candidates) def get_best(self): # type: () -> Optional[InstallationCandidate] """Return the best candidate available, or None if no applicable candidates are found. """ candidates = list(self.iter_applicable()) return self._evaluator.get_best_candidate(candidates) class PackageFinder(object): """This finds packages. This is meant to match easy_install's technique for looking for packages, by reading pages and looking for appropriate links. """ def __init__( self, search_scope, # type: SearchScope session, # type: PipSession target_python, # type: TargetPython allow_yanked, # type: bool format_control=None, # type: Optional[FormatControl] trusted_hosts=None, # type: Optional[List[str]] candidate_prefs=None, # type: CandidatePreferences ignore_requires_python=None, # type: Optional[bool] ): # type: (...) -> None """ This constructor is primarily meant to be used by the create() class method and from tests. :param session: The Session to use to make requests. :param format_control: A FormatControl object, used to control the selection of source packages / binary packages when consulting the index and links. :param candidate_prefs: Options to use when creating a CandidateEvaluator object. """ if trusted_hosts is None: trusted_hosts = [] if candidate_prefs is None: candidate_prefs = CandidatePreferences() format_control = format_control or FormatControl(set(), set()) self._allow_yanked = allow_yanked self._candidate_prefs = candidate_prefs self._ignore_requires_python = ignore_requires_python self._target_python = target_python self.search_scope = search_scope self.session = session self.format_control = format_control self.trusted_hosts = trusted_hosts # These are boring links that have already been logged somehow. 
self._logged_links = set() # type: Set[Link] # Don't include an allow_yanked default value to make sure each call # site considers whether yanked releases are allowed. This also causes # that decision to be made explicit in the calling code, which helps # people when reading the code. @classmethod def create( cls, search_scope, # type: SearchScope selection_prefs, # type: SelectionPreferences trusted_hosts=None, # type: Optional[List[str]] session=None, # type: Optional[PipSession] target_python=None, # type: Optional[TargetPython] ): # type: (...) -> PackageFinder """Create a PackageFinder. :param selection_prefs: The candidate selection preferences, as a SelectionPreferences object. :param trusted_hosts: Domains not to emit warnings for when not using HTTPS. :param session: The Session to use to make requests. :param target_python: The target Python interpreter to use when checking compatibility. If None (the default), a TargetPython object will be constructed from the running Python. """ if session is None: raise TypeError( "PackageFinder.create() missing 1 required keyword argument: " "'session'" ) if target_python is None: target_python = TargetPython() candidate_prefs = CandidatePreferences( prefer_binary=selection_prefs.prefer_binary, allow_all_prereleases=selection_prefs.allow_all_prereleases, ) return cls( candidate_prefs=candidate_prefs, search_scope=search_scope, session=session, target_python=target_python, allow_yanked=selection_prefs.allow_yanked, format_control=selection_prefs.format_control, trusted_hosts=trusted_hosts, ignore_requires_python=selection_prefs.ignore_requires_python, ) @property def find_links(self): # type: () -> List[str] return self.search_scope.find_links @property def index_urls(self): # type: () -> List[str] return self.search_scope.index_urls @property def allow_all_prereleases(self): # type: () -> bool return self._candidate_prefs.allow_all_prereleases def set_allow_all_prereleases(self): # type: () -> None self._candidate_prefs.allow_all_prereleases = True def add_trusted_host(self, host, source=None): # type: (str, Optional[str]) -> None """ :param source: An optional source string, for logging where the host string came from. """ # It is okay to add a previously added host because PipSession stores # the resulting prefixes in a dict. 
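        # Usage sketch (editor's addition; host value is hypothetical):
        #   finder.add_trusted_host('example.com', source='--trusted-host')
        # logs "adding trusted host: 'example.com' (from --trusted-host)"
        # and makes plain-HTTP links on that host pass
        # _validate_secure_origin().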
msg = 'adding trusted host: {!r}'.format(host) if source is not None: msg += ' (from {})'.format(source) logger.info(msg) self.session.add_insecure_host(host) if host in self.trusted_hosts: return self.trusted_hosts.append(host) def iter_secure_origins(self): # type: () -> Iterator[SecureOrigin] for secure_origin in SECURE_ORIGINS: yield secure_origin for host in self.trusted_hosts: yield ('*', host, '*') @staticmethod def _sort_locations(locations, expand_dir=False): # type: (Sequence[str], bool) -> Tuple[List[str], List[str]] """ Sort locations into "files" (archives) and "urls", and return a pair of lists (files,urls) """ files = [] urls = [] # puts the url for the given file path into the appropriate list def sort_path(path): url = path_to_url(path) if mimetypes.guess_type(url, strict=False)[0] == 'text/html': urls.append(url) else: files.append(url) for url in locations: is_local_path = os.path.exists(url) is_file_url = url.startswith('file:') if is_local_path or is_file_url: if is_local_path: path = url else: path = url_to_path(url) if os.path.isdir(path): if expand_dir: path = os.path.realpath(path) for item in os.listdir(path): sort_path(os.path.join(path, item)) elif is_file_url: urls.append(url) else: logger.warning( "Path '{0}' is ignored: " "it is a directory.".format(path), ) elif os.path.isfile(path): sort_path(path) else: logger.warning( "Url '%s' is ignored: it is neither a file " "nor a directory.", url, ) elif is_url(url): # Only add url with clear scheme urls.append(url) else: logger.warning( "Url '%s' is ignored. It is either a non-existing " "path or lacks a specific scheme.", url, ) return files, urls def _validate_secure_origin(self, logger, location): # type: (Logger, Link) -> bool # Determine if this url used a secure transport mechanism parsed = urllib_parse.urlparse(str(location)) origin = (parsed.scheme, parsed.hostname, parsed.port) # The protocol to use to see if the protocol matches. # Don't count the repository type as part of the protocol: in # cases such as "git+ssh", only use "ssh". (I.e., Only verify against # the last scheme.) protocol = origin[0].rsplit('+', 1)[-1] # Determine if our origin is a secure origin by looking through our # hardcoded list of secure origins, as well as any additional ones # configured on this PackageFinder instance. for secure_origin in self.iter_secure_origins(): if protocol != secure_origin[0] and secure_origin[0] != "*": continue try: # We need to do this decode dance to ensure that we have a # unicode object, even on Python 2.x. addr = ipaddress.ip_address( origin[1] if ( isinstance(origin[1], six.text_type) or origin[1] is None ) else origin[1].decode("utf8") ) network = ipaddress.ip_network( secure_origin[1] if isinstance(secure_origin[1], six.text_type) # setting secure_origin[1] to proper Union[bytes, str] # creates problems in other places else secure_origin[1].decode("utf8") # type: ignore ) except ValueError: # We don't have both a valid address or a valid network, so # we'll check this origin against hostnames. if (origin[1] and origin[1].lower() != secure_origin[1].lower() and secure_origin[1] != "*"): continue else: # We have a valid address and network, so see if the address # is contained within the network. 
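                # Illustrative example (editor's addition): for the built-in
                # secure origin ('*', '127.0.0.0/8', '*'), a location such as
                # http://127.0.0.1:8080/simple/ gives
                # addr == ip_address(u'127.0.0.1') and
                # network == ip_network(u'127.0.0.0/8'), so the membership
                # test below succeeds and the origin is accepted.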
if addr not in network: continue # Check to see if the port patches if (origin[2] != secure_origin[2] and secure_origin[2] != "*" and secure_origin[2] is not None): continue # If we've gotten here, then this origin matches the current # secure origin and we should return True return True # If we've gotten to this point, then the origin isn't secure and we # will not accept it as a valid location to search. We will however # log a warning that we are ignoring it. logger.warning( "The repository located at %s is not a trusted or secure host and " "is being ignored. If this repository is available via HTTPS we " "recommend you use HTTPS instead, otherwise you may silence " "this warning and allow it anyway with '--trusted-host %s'.", parsed.hostname, parsed.hostname, ) return False def make_link_evaluator(self, project_name): # type: (str) -> LinkEvaluator canonical_name = canonicalize_name(project_name) formats = self.format_control.get_allowed_formats(canonical_name) return LinkEvaluator( project_name=project_name, canonical_name=canonical_name, formats=formats, target_python=self._target_python, allow_yanked=self._allow_yanked, ignore_requires_python=self._ignore_requires_python, ) def find_all_candidates(self, project_name): # type: (str) -> List[InstallationCandidate] """Find all available InstallationCandidate for project_name This checks index_urls and find_links. All versions found are returned as an InstallationCandidate list. See LinkEvaluator.evaluate_link() for details on which files are accepted. """ search_scope = self.search_scope index_locations = search_scope.get_index_urls_locations(project_name) index_file_loc, index_url_loc = self._sort_locations(index_locations) fl_file_loc, fl_url_loc = self._sort_locations( self.find_links, expand_dir=True, ) file_locations = (Link(url) for url in itertools.chain( index_file_loc, fl_file_loc, )) # We trust every url that the user has given us whether it was given # via --index-url or --find-links. # We want to filter out any thing which does not have a secure origin. url_locations = [ link for link in itertools.chain( (Link(url) for url in index_url_loc), (Link(url) for url in fl_url_loc), ) if self._validate_secure_origin(logger, link) ] logger.debug('%d location(s) to search for versions of %s:', len(url_locations), project_name) for location in url_locations: logger.debug('* %s', location) link_evaluator = self.make_link_evaluator(project_name) find_links_versions = self._package_versions( link_evaluator, # We trust every directly linked archive in find_links (Link(url, '-f') for url in self.find_links), ) page_versions = [] for page in self._get_pages(url_locations, project_name): logger.debug('Analyzing links from page %s', page.url) with indent_log(): page_versions.extend( self._package_versions(link_evaluator, page.iter_links()) ) file_versions = self._package_versions(link_evaluator, file_locations) if file_versions: file_versions.sort(reverse=True) logger.debug( 'Local files found: %s', ', '.join([ url_to_path(candidate.link.url) for candidate in file_versions ]) ) # This is an intentional priority ordering return file_versions + find_links_versions + page_versions def make_candidate_evaluator( self, project_name, # type: str specifier=None, # type: Optional[specifiers.BaseSpecifier] hashes=None, # type: Optional[Hashes] ): # type: (...) -> CandidateEvaluator """Create a CandidateEvaluator object to use. 
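        A hypothetical call (editor's illustration; the project name and
        specifier are examples only)::

            evaluator = finder.make_candidate_evaluator(
                project_name='requests',
                specifier=specifiers.SpecifierSet('>=2.0'),
            )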
""" candidate_prefs = self._candidate_prefs return CandidateEvaluator.create( project_name=project_name, target_python=self._target_python, prefer_binary=candidate_prefs.prefer_binary, allow_all_prereleases=candidate_prefs.allow_all_prereleases, specifier=specifier, hashes=hashes, ) def find_candidates( self, project_name, # type: str specifier=None, # type: Optional[specifiers.BaseSpecifier] hashes=None, # type: Optional[Hashes] ): # type: (...) -> FoundCandidates """Find matches for the given project and specifier. :param specifier: An optional object implementing `filter` (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable versions. :return: A `FoundCandidates` instance. """ candidates = self.find_all_candidates(project_name) candidate_evaluator = self.make_candidate_evaluator( project_name=project_name, specifier=specifier, hashes=hashes, ) return candidate_evaluator.make_found_candidates(candidates) def find_requirement(self, req, upgrade): # type: (InstallRequirement, bool) -> Optional[Link] """Try to find a Link matching req Expects req, an InstallRequirement and upgrade, a boolean Returns a Link if found, Raises DistributionNotFound or BestVersionAlreadyInstalled otherwise """ hashes = req.hashes(trust_internet=False) candidates = self.find_candidates( req.name, specifier=req.specifier, hashes=hashes, ) best_candidate = candidates.get_best() installed_version = None # type: Optional[_BaseVersion] if req.satisfied_by is not None: installed_version = parse_version(req.satisfied_by.version) def _format_versions(cand_iter): # This repeated parse_version and str() conversion is needed to # handle different vendoring sources from pip and pkg_resources. # If we stop using the pkg_resources provided specifier and start # using our own, we can drop the cast to str(). return ", ".join(sorted( {str(c.version) for c in cand_iter}, key=parse_version, )) or "none" if installed_version is None and best_candidate is None: logger.critical( 'Could not find a version that satisfies the requirement %s ' '(from versions: %s)', req, _format_versions(candidates.iter_all()), ) raise DistributionNotFound( 'No matching distribution found for %s' % req ) best_installed = False if installed_version and ( best_candidate is None or best_candidate.version <= installed_version): best_installed = True if not upgrade and installed_version is not None: if best_installed: logger.debug( 'Existing installed version (%s) is most up-to-date and ' 'satisfies requirement', installed_version, ) else: logger.debug( 'Existing installed version (%s) satisfies requirement ' '(most up-to-date version is %s)', installed_version, best_candidate.version, ) return None if best_installed: # We have an existing version, and its the best version logger.debug( 'Installed version (%s) is most up-to-date (past versions: ' '%s)', installed_version, _format_versions(candidates.iter_applicable()), ) raise BestVersionAlreadyInstalled logger.debug( 'Using version %s (newest of versions: %s)', best_candidate.version, _format_versions(candidates.iter_applicable()), ) return best_candidate.link def _get_pages(self, locations, project_name): # type: (Iterable[Link], str) -> Iterable[HTMLPage] """ Yields (page, page_url) from the given locations, skipping locations that have errors. 
""" seen = set() # type: Set[Link] for location in locations: if location in seen: continue seen.add(location) page = _get_html_page(location, session=self.session) if page is None: continue yield page def _sort_links(self, links): # type: (Iterable[Link]) -> List[Link] """ Returns elements of links in order, non-egg links first, egg links second, while eliminating duplicates """ eggs, no_eggs = [], [] seen = set() # type: Set[Link] for link in links: if link not in seen: seen.add(link) if link.egg_fragment: eggs.append(link) else: no_eggs.append(link) return no_eggs + eggs def _log_skipped_link(self, link, reason): # type: (Link, Text) -> None if link not in self._logged_links: # Mark this as a unicode string to prevent "UnicodeEncodeError: # 'ascii' codec can't encode character" in Python 2 when # the reason contains non-ascii characters. # Also, put the link at the end so the reason is more visible # and because the link string is usually very long. logger.debug(u'Skipping link: %s: %s', reason, link) self._logged_links.add(link) def get_install_candidate(self, link_evaluator, link): # type: (LinkEvaluator, Link) -> Optional[InstallationCandidate] """ If the link is a candidate for install, convert it to an InstallationCandidate and return it. Otherwise, return None. """ is_candidate, result = link_evaluator.evaluate_link(link) if not is_candidate: if result: self._log_skipped_link(link, reason=result) return None return InstallationCandidate( project=link_evaluator.project_name, link=link, # Convert the Text result to str since InstallationCandidate # accepts str. version=str(result), ) def _package_versions(self, link_evaluator, links): # type: (LinkEvaluator, Iterable[Link]) -> List[InstallationCandidate] result = [] for link in self._sort_links(links): candidate = self.get_install_candidate(link_evaluator, link) if candidate is not None: result.append(candidate) return result def _find_name_version_sep(fragment, canonical_name): # type: (str, str) -> int """Find the separator's index based on the package's canonical name. :param fragment: A + filename "fragment" (stem) or egg fragment. :param canonical_name: The package's canonical name. This function is needed since the canonicalized name does not necessarily have the same length as the egg info's name part. An example:: >>> fragment = 'foo__bar-1.0' >>> canonical_name = 'foo-bar' >>> _find_name_version_sep(fragment, canonical_name) 8 """ # Project name and version must be separated by one single dash. Find all # occurrences of dashes; if the string in front of it matches the canonical # name, this is the one separating the name and version parts. for i, c in enumerate(fragment): if c != "-": continue if canonicalize_name(fragment[:i]) == canonical_name: return i raise ValueError("{} does not match {}".format(fragment, canonical_name)) def _extract_version_from_fragment(fragment, canonical_name): # type: (str, str) -> Optional[str] """Parse the version string from a + filename "fragment" (stem) or egg fragment. :param fragment: The string to parse. E.g. foo-2.1 :param canonical_name: The canonicalized name of the package this belongs to. """ try: version_start = _find_name_version_sep(fragment, canonical_name) + 1 except ValueError: return None version = fragment[version_start:] if not version: return None return version def _determine_base_url(document, page_url): """Determine the HTML document's base URL. This looks for a ```` tag in the HTML document. 
If present, its href attribute denotes the base URL of anchor tags in the document. If there is no such tag (or if it does not have a valid href attribute), the HTML file's URL is used as the base URL. :param document: An HTML document representation. The current implementation expects the result of ``html5lib.parse()``. :param page_url: The URL of the HTML document. """ for base in document.findall(".//base"): href = base.get("href") if href is not None: return href return page_url def _get_encoding_from_headers(headers): """Determine if we have any encoding information in our headers. """ if headers and "Content-Type" in headers: content_type, params = cgi.parse_header(headers["Content-Type"]) if "charset" in params: return params['charset'] return None def _clean_link(url): # type: (str) -> str """Makes sure a link is fully encoded. That is, if a ' ' shows up in the link, it will be rewritten to %20 (while not over-quoting % or other characters).""" # Split the URL into parts according to the general structure # `scheme://netloc/path;parameters?query#fragment`. Note that the # `netloc` can be empty and the URI will then refer to a local # filesystem path. result = urllib_parse.urlparse(url) # In both cases below we unquote prior to quoting to make sure # nothing is double quoted. if result.netloc == "": # On Windows the path part might contain a drive letter which # should not be quoted. On Linux where drive letters do not # exist, the colon should be quoted. We rely on urllib.request # to do the right thing here. path = urllib_request.pathname2url( urllib_request.url2pathname(result.path)) else: # In addition to the `/` character we protect `@` so that # revision strings in VCS URLs are properly parsed. path = urllib_parse.quote(urllib_parse.unquote(result.path), safe="/@") return urllib_parse.urlunparse(result._replace(path=path)) def _create_link_from_element( anchor, # type: HTMLElement page_url, # type: str base_url, # type: str ): # type: (...) -> Optional[Link] """ Convert an anchor element in a simple repository page to a Link. """ href = anchor.get("href") if not href: return None url = _clean_link(urllib_parse.urljoin(base_url, href)) pyrequire = anchor.get('data-requires-python') pyrequire = unescape(pyrequire) if pyrequire else None yanked_reason = anchor.get('data-yanked') if yanked_reason: # This is a unicode string in Python 2 (and 3). 
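        # Illustrative example (editor's addition): an anchor such as
        #   <a href="pkg-1.0.tar.gz" data-yanked="broken on Windows">
        # yields yanked_reason == 'broken on Windows' after the HTML-entity
        # unescape below; an empty data-yanked attribute still marks the
        # link as yanked, just with an empty reason.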
        yanked_reason = unescape(yanked_reason)

    link = Link(
        url,
        comes_from=page_url,
        requires_python=pyrequire,
        yanked_reason=yanked_reason,
    )

    return link


class HTMLPage(object):
    """Represents one page, along with its URL"""

    def __init__(self, content, url, headers=None):
        # type: (bytes, str, MutableMapping[str, str]) -> None
        self.content = content
        self.url = url
        self.headers = headers

    def __str__(self):
        return redact_password_from_url(self.url)

    def iter_links(self):
        # type: () -> Iterable[Link]
        """Yields all links in the page"""
        document = html5lib.parse(
            self.content,
            transport_encoding=_get_encoding_from_headers(self.headers),
            namespaceHTMLElements=False,
        )
        base_url = _determine_base_url(document, self.url)
        for anchor in document.findall(".//a"):
            link = _create_link_from_element(
                anchor,
                page_url=self.url,
                base_url=base_url,
            )
            if link is None:
                continue
            yield link


# ----- File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/legacy_resolve.py -----

"""Dependency Resolution

The dependency resolution in pip is performed as follows:

for top-level requirements:
    a. only one spec allowed per project, regardless of conflicts or not.
       otherwise a "double requirement" exception is raised
    b. they override sub-dependency requirements.
for sub-dependencies
    a. "first found, wins" (where the order is breadth first)
"""

import logging
import sys
from collections import defaultdict
from itertools import chain

from pip._vendor.packaging import specifiers

from pip._internal.exceptions import (
    BestVersionAlreadyInstalled, DistributionNotFound, HashError, HashErrors,
    UnsupportedPythonVersion,
)
from pip._internal.req.constructors import install_req_from_req_string
from pip._internal.utils.logging import indent_log
from pip._internal.utils.misc import (
    dist_in_usersite, ensure_dir, normalize_version_info,
)
from pip._internal.utils.packaging import (
    check_requires_python, get_requires_python,
)
from pip._internal.utils.typing import MYPY_CHECK_RUNNING

if MYPY_CHECK_RUNNING:
    from typing import DefaultDict, List, Optional, Set, Tuple
    from pip._vendor import pkg_resources

    from pip._internal.cache import WheelCache
    from pip._internal.distributions import AbstractDistribution
    from pip._internal.download import PipSession
    from pip._internal.index import PackageFinder
    from pip._internal.operations.prepare import RequirementPreparer
    from pip._internal.req.req_install import InstallRequirement
    from pip._internal.req.req_set import RequirementSet

logger = logging.getLogger(__name__)


def _check_dist_requires_python(
    dist,  # type: pkg_resources.Distribution
    version_info,  # type: Tuple[int, int, int]
    ignore_requires_python=False,  # type: bool
):
    # type: (...) -> None
    """
    Check whether the given Python version is compatible with a
    distribution's "Requires-Python" value.

    :param version_info: A 3-tuple of ints representing the Python
        major-minor-micro version to check.
    :param ignore_requires_python: Whether to ignore the "Requires-Python"
        value if the given Python version isn't compatible.

    :raises UnsupportedPythonVersion: When the given Python version isn't
        compatible.
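    For example (editor's illustration): a distribution whose metadata
    declares Requires-Python ">=3.5", checked with version_info=(2, 7, 16),
    raises UnsupportedPythonVersion unless ignore_requires_python is True.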
""" requires_python = get_requires_python(dist) try: is_compatible = check_requires_python( requires_python, version_info=version_info, ) except specifiers.InvalidSpecifier as exc: logger.warning( "Package %r has an invalid Requires-Python: %s", dist.project_name, exc, ) return if is_compatible: return version = '.'.join(map(str, version_info)) if ignore_requires_python: logger.debug( 'Ignoring failed Requires-Python check for package %r: ' '%s not in %r', dist.project_name, version, requires_python, ) return raise UnsupportedPythonVersion( 'Package {!r} requires a different Python: {} not in {!r}'.format( dist.project_name, version, requires_python, )) class Resolver(object): """Resolves which packages need to be installed/uninstalled to perform \ the requested operation without breaking the requirements of any package. """ _allowed_strategies = {"eager", "only-if-needed", "to-satisfy-only"} def __init__( self, preparer, # type: RequirementPreparer session, # type: PipSession finder, # type: PackageFinder wheel_cache, # type: Optional[WheelCache] use_user_site, # type: bool ignore_dependencies, # type: bool ignore_installed, # type: bool ignore_requires_python, # type: bool force_reinstall, # type: bool isolated, # type: bool upgrade_strategy, # type: str use_pep517=None, # type: Optional[bool] py_version_info=None, # type: Optional[Tuple[int, ...]] ): # type: (...) -> None super(Resolver, self).__init__() assert upgrade_strategy in self._allowed_strategies if py_version_info is None: py_version_info = sys.version_info[:3] else: py_version_info = normalize_version_info(py_version_info) self._py_version_info = py_version_info self.preparer = preparer self.finder = finder self.session = session # NOTE: This would eventually be replaced with a cache that can give # information about both sdist and wheels transparently. self.wheel_cache = wheel_cache # This is set in resolve self.require_hashes = None # type: Optional[bool] self.upgrade_strategy = upgrade_strategy self.force_reinstall = force_reinstall self.isolated = isolated self.ignore_dependencies = ignore_dependencies self.ignore_installed = ignore_installed self.ignore_requires_python = ignore_requires_python self.use_user_site = use_user_site self.use_pep517 = use_pep517 self._discovered_dependencies = \ defaultdict(list) # type: DefaultDict[str, List] def resolve(self, requirement_set): # type: (RequirementSet) -> None """Resolve what operations need to be done As a side-effect of this method, the packages (and their dependencies) are downloaded, unpacked and prepared for installation. This preparation is done by ``pip.operations.prepare``. Once PyPI has static dependency metadata available, it would be possible to move the preparation to become a step separated from dependency resolution. """ # make the wheelhouse if self.preparer.wheel_download_dir: ensure_dir(self.preparer.wheel_download_dir) # If any top-level requirement has a hash specified, enter # hash-checking mode, which requires hashes from all. root_reqs = ( requirement_set.unnamed_requirements + list(requirement_set.requirements.values()) ) self.require_hashes = ( requirement_set.require_hashes or any(req.has_hash_options for req in root_reqs) ) # Display where finder is looking for packages search_scope = self.finder.search_scope locations = search_scope.get_formatted_locations() if locations: logger.info(locations) # Actually prepare the files, and collect any exceptions. 
Most hash # exceptions cannot be checked ahead of time, because # req.populate_link() needs to be called before we can make decisions # based on link type. discovered_reqs = [] # type: List[InstallRequirement] hash_errors = HashErrors() for req in chain(root_reqs, discovered_reqs): try: discovered_reqs.extend( self._resolve_one(requirement_set, req) ) except HashError as exc: exc.req = req hash_errors.append(exc) if hash_errors: raise hash_errors def _is_upgrade_allowed(self, req): # type: (InstallRequirement) -> bool if self.upgrade_strategy == "to-satisfy-only": return False elif self.upgrade_strategy == "eager": return True else: assert self.upgrade_strategy == "only-if-needed" return req.is_direct def _set_req_to_reinstall(self, req): # type: (InstallRequirement) -> None """ Set a requirement to be installed. """ # Don't uninstall the conflict if doing a user install and the # conflict is not a user install. if not self.use_user_site or dist_in_usersite(req.satisfied_by): req.conflicts_with = req.satisfied_by req.satisfied_by = None # XXX: Stop passing requirement_set for options def _check_skip_installed(self, req_to_install): # type: (InstallRequirement) -> Optional[str] """Check if req_to_install should be skipped. This will check if the req is installed, and whether we should upgrade or reinstall it, taking into account all the relevant user options. After calling this req_to_install will only have satisfied_by set to None if the req_to_install is to be upgraded/reinstalled etc. Any other value will be a dist recording the current thing installed that satisfies the requirement. Note that for vcs urls and the like we can't assess skipping in this routine - we simply identify that we need to pull the thing down, then later on it is pulled down and introspected to assess upgrade/ reinstalls etc. :return: A text reason for why it was skipped, or None. """ if self.ignore_installed: return None req_to_install.check_if_exists(self.use_user_site) if not req_to_install.satisfied_by: return None if self.force_reinstall: self._set_req_to_reinstall(req_to_install) return None if not self._is_upgrade_allowed(req_to_install): if self.upgrade_strategy == "only-if-needed": return 'already satisfied, skipping upgrade' return 'already satisfied' # Check for the possibility of an upgrade. For link-based # requirements we have to pull the tree down and inspect to assess # the version #, so it's handled way down. if not req_to_install.link: try: self.finder.find_requirement(req_to_install, upgrade=True) except BestVersionAlreadyInstalled: # Then the best version is installed. return 'already up-to-date' except DistributionNotFound: # No distribution found, so we squash the error. It will # be raised later when we re-try later to do the install. # Why don't we just raise here? pass self._set_req_to_reinstall(req_to_install) return None def _get_abstract_dist_for(self, req): # type: (InstallRequirement) -> AbstractDistribution """Takes a InstallRequirement and returns a single AbstractDist \ representing a prepared variant of the same. """ assert self.require_hashes is not None, ( "require_hashes should have been set in Resolver.resolve()" ) if req.editable: return self.preparer.prepare_editable_requirement( req, self.require_hashes, self.use_user_site, self.finder, ) # satisfied_by is only evaluated by calling _check_skip_installed, # so it must be None here. 
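        # Editor's note (illustrative): _check_skip_installed() returns a
        # human-readable skip reason such as 'already satisfied' or
        # 'already up-to-date', or None when the requirement still has to be
        # prepared; it also repopulates req.satisfied_by as a side effect.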
assert req.satisfied_by is None skip_reason = self._check_skip_installed(req) if req.satisfied_by: return self.preparer.prepare_installed_requirement( req, self.require_hashes, skip_reason ) upgrade_allowed = self._is_upgrade_allowed(req) abstract_dist = self.preparer.prepare_linked_requirement( req, self.session, self.finder, upgrade_allowed, self.require_hashes ) # NOTE # The following portion is for determining if a certain package is # going to be re-installed/upgraded or not and reporting to the user. # This should probably get cleaned up in a future refactor. # req.req is only avail after unpack for URL # pkgs repeat check_if_exists to uninstall-on-upgrade # (#14) if not self.ignore_installed: req.check_if_exists(self.use_user_site) if req.satisfied_by: should_modify = ( self.upgrade_strategy != "to-satisfy-only" or self.force_reinstall or self.ignore_installed or req.link.scheme == 'file' ) if should_modify: self._set_req_to_reinstall(req) else: logger.info( 'Requirement already satisfied (use --upgrade to upgrade):' ' %s', req, ) return abstract_dist def _resolve_one( self, requirement_set, # type: RequirementSet req_to_install # type: InstallRequirement ): # type: (...) -> List[InstallRequirement] """Prepare a single requirements file. :return: A list of additional InstallRequirements to also install. """ # Tell user what we are doing for this requirement: # obtain (editable), skipping, processing (local url), collecting # (remote url or package name) if req_to_install.constraint or req_to_install.prepared: return [] req_to_install.prepared = True # register tmp src for cleanup in case something goes wrong requirement_set.reqs_to_cleanup.append(req_to_install) abstract_dist = self._get_abstract_dist_for(req_to_install) # Parse and return dependencies dist = abstract_dist.get_pkg_resources_distribution() # This will raise UnsupportedPythonVersion if the given Python # version isn't compatible with the distribution's Requires-Python. _check_dist_requires_python( dist, version_info=self._py_version_info, ignore_requires_python=self.ignore_requires_python, ) more_reqs = [] # type: List[InstallRequirement] def add_req(subreq, extras_requested): sub_install_req = install_req_from_req_string( str(subreq), req_to_install, isolated=self.isolated, wheel_cache=self.wheel_cache, use_pep517=self.use_pep517 ) parent_req_name = req_to_install.name to_scan_again, add_to_parent = requirement_set.add_requirement( sub_install_req, parent_req_name=parent_req_name, extras_requested=extras_requested, ) if parent_req_name and add_to_parent: self._discovered_dependencies[parent_req_name].append( add_to_parent ) more_reqs.extend(to_scan_again) with indent_log(): # We add req_to_install before its dependencies, so that we # can refer to it when adding dependencies. 
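# --- Illustrative aside (not part of pip). The extras bookkeeping a few
# lines below splits the extras the user requested against the extras the
# distribution actually declares; a minimal sketch with hypothetical values:
requested_extras = {'ssl', 'gui'}   # stands in for req_to_install.extras
declared_extras = {'ssl', 'docs'}   # stands in for dist.extras
missing_requested = sorted(requested_extras - declared_extras)
# -> ['gui']: each entry triggers a "does not provide the extra" warning
available_requested = sorted(declared_extras & requested_extras)
# -> ['ssl']: only these are passed on to dist.requires(...)
# --- end of aside -----------------------------------------------------------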
if not requirement_set.has_requirement(req_to_install.name): # 'unnamed' requirements will get added here req_to_install.is_direct = True requirement_set.add_requirement( req_to_install, parent_req_name=None, ) if not self.ignore_dependencies: if req_to_install.extras: logger.debug( "Installing extra requirements: %r", ','.join(req_to_install.extras), ) missing_requested = sorted( set(req_to_install.extras) - set(dist.extras) ) for missing in missing_requested: logger.warning( '%s does not provide the extra \'%s\'', dist, missing ) available_requested = sorted( set(dist.extras) & set(req_to_install.extras) ) for subreq in dist.requires(available_requested): add_req(subreq, extras_requested=available_requested) if not req_to_install.editable and not req_to_install.satisfied_by: # XXX: --no-install leads this to report 'Successfully # downloaded' for only non-editable reqs, even though we took # action on them. requirement_set.successfully_downloaded.append(req_to_install) return more_reqs def get_installation_order(self, req_set): # type: (RequirementSet) -> List[InstallRequirement] """Create the installation order. The installation order is topological - requirements are installed before the requiring thing. We break cycles at an arbitrary point, and make no other guarantees. """ # The current implementation, which we may change at any point # installs the user specified things in the order given, except when # dependencies must come earlier to achieve topological order. order = [] ordered_reqs = set() # type: Set[InstallRequirement] def schedule(req): if req.satisfied_by or req in ordered_reqs: return if req.constraint: return ordered_reqs.add(req) for dep in self._discovered_dependencies[req.name]: schedule(dep) order.append(req) for install_req in req_set.requirements.values(): schedule(install_req) return order ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/locations.py0000644000175000017500000001166500000000000027362 0ustar00vincentvincent00000000000000"""Locations where we look for configs, install stuff, etc""" from __future__ import absolute_import import os import os.path import platform import site import sys import sysconfig from distutils import sysconfig as distutils_sysconfig from distutils.command.install import SCHEME_KEYS # type: ignore from pip._internal.utils import appdirs from pip._internal.utils.compat import WINDOWS from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.virtualenv import running_under_virtualenv if MYPY_CHECK_RUNNING: from typing import Any, Union, Dict, List, Optional # Application Directories USER_CACHE_DIR = appdirs.user_cache_dir("pip") def get_src_prefix(): if running_under_virtualenv(): src_prefix = os.path.join(sys.prefix, 'src') else: # FIXME: keep src in cwd for now (it is not a temporary folder) try: src_prefix = os.path.join(os.getcwd(), 'src') except OSError: # In case the current working directory has been renamed or deleted sys.exit( "The folder you are executing pip from can no longer be found." ) # under macOS + virtualenv sys.prefix is not properly resolved # it is something like /path/to/python/bin/.. 
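    # --- Illustrative aside (not part of pip). os.path.abspath() normalizes
    # the trailing "bin/.." component away, e.g. on POSIX:
    #
    #     >>> import os.path
    #     >>> os.path.abspath('/path/to/python/bin/..')
    #     '/path/to/python'
    #
    # which is why the resolved prefix is returned below.
    # --- end of aside -------------------------------------------------------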
    return os.path.abspath(src_prefix)


# FIXME doesn't account for venv linked to global site-packages
site_packages = sysconfig.get_path("purelib")  # type: Optional[str]

# This is because of a bug in PyPy's sysconfig module, see
# https://bitbucket.org/pypy/pypy/issues/2506/sysconfig-returns-incorrect-paths
# for more information.
if platform.python_implementation().lower() == "pypy":
    site_packages = distutils_sysconfig.get_python_lib()
try:
    # Use getusersitepackages if this is present, as it ensures that the
    # value is initialised properly.
    user_site = site.getusersitepackages()
except AttributeError:
    user_site = site.USER_SITE

if WINDOWS:
    bin_py = os.path.join(sys.prefix, 'Scripts')
    bin_user = os.path.join(user_site, 'Scripts')
    # buildout uses 'bin' on Windows too?
    if not os.path.exists(bin_py):
        bin_py = os.path.join(sys.prefix, 'bin')
        bin_user = os.path.join(user_site, 'bin')
else:
    bin_py = os.path.join(sys.prefix, 'bin')
    bin_user = os.path.join(user_site, 'bin')

# Forcing to use /usr/local/bin for standard macOS framework installs
# Also log to ~/Library/Logs/ for use with the Console.app log viewer
if sys.platform[:6] == 'darwin' and sys.prefix[:16] == '/System/Library/':
    bin_py = '/usr/local/bin'


def distutils_scheme(dist_name, user=False, home=None, root=None,
                     isolated=False, prefix=None):
    # type:(str, bool, str, str, bool, str) -> dict
    """
    Return a distutils install scheme
    """
    from distutils.dist import Distribution

    scheme = {}

    if isolated:
        extra_dist_args = {"script_args": ["--no-user-cfg"]}
    else:
        extra_dist_args = {}
    dist_args = {'name': dist_name}  # type: Dict[str, Union[str, List[str]]]
    dist_args.update(extra_dist_args)

    d = Distribution(dist_args)
    # Ignoring, typeshed issue reported python/typeshed/issues/2567
    d.parse_config_files()
    # NOTE: Ignoring type since mypy can't find attributes on 'Command'
    i = d.get_command_obj('install', create=True)  # type: Any
    assert i is not None

    # NOTE: setting user or home has the side-effect of creating the home dir
    # or user base for installations during finalize_options()
    # ideally, we'd prefer a scheme class that has no side-effects.
    assert not (user and prefix), "user={} prefix={}".format(user, prefix)
    assert not (home and prefix), "home={} prefix={}".format(home, prefix)
    i.user = user or i.user
    if user or home:
        i.prefix = ""
    i.prefix = prefix or i.prefix
    i.home = home or i.home
    i.root = root or i.root
    i.finalize_options()
    for key in SCHEME_KEYS:
        scheme[key] = getattr(i, 'install_' + key)

    # install_lib specified in setup.cfg should install *everything*
    # into there (i.e. it takes precedence over both purelib and
    # platlib).
Note, i.install_lib is *always* set after # finalize_options(); we only want to override here if the user # has explicitly requested it hence going back to the config # Ignoring, typeshed issue reported python/typeshed/issues/2567 if 'install_lib' in d.get_option_dict('install'): # type: ignore scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib)) if running_under_virtualenv(): scheme['headers'] = os.path.join( sys.prefix, 'include', 'site', 'python' + sys.version[:3], dist_name, ) if root is not None: path_no_drive = os.path.splitdrive( os.path.abspath(scheme["headers"]))[1] scheme["headers"] = os.path.join( root, path_no_drive[1:], ) return scheme ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9433274 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/0000755000175000017500000000000000000000000026267 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/__init__.py0000644000175000017500000000007700000000000030404 0ustar00vincentvincent00000000000000"""A package that contains models that represent entities. """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/candidate.py0000644000175000017500000000222100000000000030552 0ustar00vincentvincent00000000000000from pip._vendor.packaging.version import parse as parse_version from pip._internal.utils.models import KeyBasedCompareMixin from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from pip._vendor.packaging.version import _BaseVersion from pip._internal.models.link import Link from typing import Any class InstallationCandidate(KeyBasedCompareMixin): """Represents a potential "candidate" for installation. """ def __init__(self, project, version, link): # type: (Any, str, Link) -> None self.project = project self.version = parse_version(version) # type: _BaseVersion self.link = link super(InstallationCandidate, self).__init__( key=(self.project, self.version, self.link), defining_class=InstallationCandidate ) def __repr__(self): # type: () -> str return "".format( self.project, self.version, self.link, ) def __str__(self): return '{!r} candidate (version {} at {})'.format( self.project, self.version, self.link, ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/format_control.py0000644000175000017500000000431200000000000031671 0ustar00vincentvincent00000000000000from pip._vendor.packaging.utils import canonicalize_name from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Optional, Set, FrozenSet class FormatControl(object): """Helper for managing formats from which a package can be installed. 
""" def __init__(self, no_binary=None, only_binary=None): # type: (Optional[Set], Optional[Set]) -> None if no_binary is None: no_binary = set() if only_binary is None: only_binary = set() self.no_binary = no_binary self.only_binary = only_binary def __eq__(self, other): return self.__dict__ == other.__dict__ def __ne__(self, other): return not self.__eq__(other) def __repr__(self): return "{}({}, {})".format( self.__class__.__name__, self.no_binary, self.only_binary ) @staticmethod def handle_mutual_excludes(value, target, other): # type: (str, Optional[Set], Optional[Set]) -> None new = value.split(',') while ':all:' in new: other.clear() target.clear() target.add(':all:') del new[:new.index(':all:') + 1] # Without a none, we want to discard everything as :all: covers it if ':none:' not in new: return for name in new: if name == ':none:': target.clear() continue name = canonicalize_name(name) other.discard(name) target.add(name) def get_allowed_formats(self, canonical_name): # type: (str) -> FrozenSet result = {"binary", "source"} if canonical_name in self.only_binary: result.discard('source') elif canonical_name in self.no_binary: result.discard('binary') elif ':all:' in self.only_binary: result.discard('source') elif ':all:' in self.no_binary: result.discard('binary') return frozenset(result) def disallow_binaries(self): # type: () -> None self.handle_mutual_excludes( ':all:', self.no_binary, self.only_binary, ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/index.py0000644000175000017500000000204400000000000027750 0ustar00vincentvincent00000000000000from pip._vendor.six.moves.urllib import parse as urllib_parse class PackageIndex(object): """Represents a Package Index and provides easier access to endpoints """ def __init__(self, url, file_storage_domain): # type: (str, str) -> None super(PackageIndex, self).__init__() self.url = url self.netloc = urllib_parse.urlsplit(url).netloc self.simple_url = self._url_for_path('simple') self.pypi_url = self._url_for_path('pypi') # This is part of a temporary hack used to block installs of PyPI # packages which depend on external urls only necessary until PyPI can # block such packages themselves self.file_storage_domain = file_storage_domain def _url_for_path(self, path): # type: (str) -> str return urllib_parse.urljoin(self.url, path) PyPI = PackageIndex( 'https://pypi.org/', file_storage_domain='files.pythonhosted.org' ) TestPyPI = PackageIndex( 'https://test.pypi.org/', file_storage_domain='test-files.pythonhosted.org' ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/link.py0000644000175000017500000001463100000000000027603 0ustar00vincentvincent00000000000000import posixpath import re from pip._vendor.six.moves.urllib import parse as urllib_parse from pip._internal.utils.misc import ( WHEEL_EXTENSION, path_to_url, redact_password_from_url, split_auth_from_netloc, splitext, ) from pip._internal.utils.models import KeyBasedCompareMixin from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Optional, Text, Tuple, Union from pip._internal.index import HTMLPage from pip._internal.utils.hashes import Hashes class Link(KeyBasedCompareMixin): """Represents a parsed link from a Package Index's simple URL """ def __init__( self, url, # type: str 
comes_from=None, # type: Optional[Union[str, HTMLPage]] requires_python=None, # type: Optional[str] yanked_reason=None, # type: Optional[Text] ): # type: (...) -> None """ :param url: url of the resource pointed to (href of the link) :param comes_from: instance of HTMLPage where the link was found, or string. :param requires_python: String containing the `Requires-Python` metadata field, specified in PEP 345. This may be specified by a data-requires-python attribute in the HTML link tag, as described in PEP 503. :param yanked_reason: the reason the file has been yanked, if the file has been yanked, or None if the file hasn't been yanked. This is the value of the "data-yanked" attribute, if present, in a simple repository HTML link. If the file has been yanked but no reason was provided, this should be the empty string. See PEP 592 for more information and the specification. """ # url can be a UNC windows share if url.startswith('\\\\'): url = path_to_url(url) self._parsed_url = urllib_parse.urlsplit(url) # Store the url as a private attribute to prevent accidentally # trying to set a new value. self._url = url self.comes_from = comes_from self.requires_python = requires_python if requires_python else None self.yanked_reason = yanked_reason super(Link, self).__init__(key=url, defining_class=Link) def __str__(self): if self.requires_python: rp = ' (requires-python:%s)' % self.requires_python else: rp = '' if self.comes_from: return '%s (from %s)%s' % (redact_password_from_url(self._url), self.comes_from, rp) else: return redact_password_from_url(str(self._url)) def __repr__(self): return '' % self @property def url(self): # type: () -> str return self._url @property def filename(self): # type: () -> str path = self.path.rstrip('/') name = posixpath.basename(path) if not name: # Make sure we don't leak auth information if the netloc # includes a username and password. netloc, user_pass = split_auth_from_netloc(self.netloc) return netloc name = urllib_parse.unquote(name) assert name, ('URL %r produced no filename' % self._url) return name @property def scheme(self): # type: () -> str return self._parsed_url.scheme @property def netloc(self): # type: () -> str """ This can contain auth information. 
""" return self._parsed_url.netloc @property def path(self): # type: () -> str return urllib_parse.unquote(self._parsed_url.path) def splitext(self): # type: () -> Tuple[str, str] return splitext(posixpath.basename(self.path.rstrip('/'))) @property def ext(self): # type: () -> str return self.splitext()[1] @property def url_without_fragment(self): # type: () -> str scheme, netloc, path, query, fragment = self._parsed_url return urllib_parse.urlunsplit((scheme, netloc, path, query, None)) _egg_fragment_re = re.compile(r'[#&]egg=([^&]*)') @property def egg_fragment(self): # type: () -> Optional[str] match = self._egg_fragment_re.search(self._url) if not match: return None return match.group(1) _subdirectory_fragment_re = re.compile(r'[#&]subdirectory=([^&]*)') @property def subdirectory_fragment(self): # type: () -> Optional[str] match = self._subdirectory_fragment_re.search(self._url) if not match: return None return match.group(1) _hash_re = re.compile( r'(sha1|sha224|sha384|sha256|sha512|md5)=([a-f0-9]+)' ) @property def hash(self): # type: () -> Optional[str] match = self._hash_re.search(self._url) if match: return match.group(2) return None @property def hash_name(self): # type: () -> Optional[str] match = self._hash_re.search(self._url) if match: return match.group(1) return None @property def show_url(self): # type: () -> Optional[str] return posixpath.basename(self._url.split('#', 1)[0].split('?', 1)[0]) @property def is_wheel(self): # type: () -> bool return self.ext == WHEEL_EXTENSION @property def is_artifact(self): # type: () -> bool """ Determines if this points to an actual artifact (e.g. a tarball) or if it points to an "abstract" thing like a path or a VCS location. """ from pip._internal.vcs import vcs if self.scheme in vcs.all_schemes: return False return True @property def is_yanked(self): # type: () -> bool return self.yanked_reason is not None @property def has_hash(self): return self.hash_name is not None def is_hash_allowed(self, hashes): # type: (Optional[Hashes]) -> bool """ Return True if the link has a hash and it is allowed. """ if hashes is None or not self.has_hash: return False # Assert non-None so mypy knows self.hash_name and self.hash are str. assert self.hash_name is not None assert self.hash is not None return hashes.is_hash_allowed(self.hash_name, hex_digest=self.hash) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/search_scope.py0000644000175000017500000000744300000000000031307 0ustar00vincentvincent00000000000000import itertools import logging import os import posixpath from pip._vendor.packaging.utils import canonicalize_name from pip._vendor.six.moves.urllib import parse as urllib_parse from pip._internal.models.index import PyPI from pip._internal.utils.compat import HAS_TLS from pip._internal.utils.misc import normalize_path, redact_password_from_url from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import List logger = logging.getLogger(__name__) class SearchScope(object): """ Encapsulates the locations that pip is configured to search. """ @classmethod def create( cls, find_links, # type: List[str] index_urls, # type: List[str] ): # type: (...) -> SearchScope """ Create a SearchScope object after normalizing the `find_links`. """ # Build find_links. If an argument starts with ~, it may be # a local file relative to a home directory. 
So try normalizing # it and if it exists, use the normalized version. # This is deliberately conservative - it might be fine just to # blindly normalize anything starting with a ~... built_find_links = [] # type: List[str] for link in find_links: if link.startswith('~'): new_link = normalize_path(link) if os.path.exists(new_link): link = new_link built_find_links.append(link) # If we don't have TLS enabled, then WARN if anyplace we're looking # relies on TLS. if not HAS_TLS: for link in itertools.chain(index_urls, built_find_links): parsed = urllib_parse.urlparse(link) if parsed.scheme == 'https': logger.warning( 'pip is configured with locations that require ' 'TLS/SSL, however the ssl module in Python is not ' 'available.' ) break return cls( find_links=built_find_links, index_urls=index_urls, ) def __init__( self, find_links, # type: List[str] index_urls, # type: List[str] ): # type: (...) -> None self.find_links = find_links self.index_urls = index_urls def get_formatted_locations(self): # type: () -> str lines = [] if self.index_urls and self.index_urls != [PyPI.simple_url]: lines.append( 'Looking in indexes: {}'.format(', '.join( redact_password_from_url(url) for url in self.index_urls)) ) if self.find_links: lines.append( 'Looking in links: {}'.format(', '.join( redact_password_from_url(url) for url in self.find_links)) ) return '\n'.join(lines) def get_index_urls_locations(self, project_name): # type: (str) -> List[str] """Returns the locations found via self.index_urls Checks the url_name on the main (first in the list) index and use this url_name to produce all locations """ def mkurl_pypi_url(url): loc = posixpath.join( url, urllib_parse.quote(canonicalize_name(project_name))) # For maximum compatibility with easy_install, ensure the path # ends in a trailing slash. Although this isn't in the spec # (and PyPI can handle it without the slash) some other index # implementations might break if they relied on easy_install's # behavior. if not loc.endswith('/'): loc = loc + '/' return loc return [mkurl_pypi_url(url) for url in self.index_urls] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/selection_prefs.py0000644000175000017500000000356400000000000032035 0ustar00vincentvincent00000000000000from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Optional from pip._internal.models.format_control import FormatControl class SelectionPreferences(object): """ Encapsulates the candidate selection preferences for downloading and installing files. """ # Don't include an allow_yanked default value to make sure each call # site considers whether yanked releases are allowed. This also causes # that decision to be made explicit in the calling code, which helps # people when reading the code. def __init__( self, allow_yanked, # type: bool allow_all_prereleases=False, # type: bool format_control=None, # type: Optional[FormatControl] prefer_binary=False, # type: bool ignore_requires_python=None, # type: Optional[bool] ): # type: (...) -> None """Create a SelectionPreferences object. :param allow_yanked: Whether files marked as yanked (in the sense of PEP 592) are permitted to be candidates for install. :param format_control: A FormatControl object or None. Used to control the selection of source packages / binary packages when consulting the index and links. 
:param prefer_binary: Whether to prefer an old, but valid, binary dist over a new source dist. :param ignore_requires_python: Whether to ignore incompatible "Requires-Python" values in links. Defaults to False. """ if ignore_requires_python is None: ignore_requires_python = False self.allow_yanked = allow_yanked self.allow_all_prereleases = allow_all_prereleases self.format_control = format_control self.prefer_binary = prefer_binary self.ignore_requires_python = ignore_requires_python ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/models/target_python.py0000644000175000017500000000735400000000000031541 0ustar00vincentvincent00000000000000import sys from pip._internal.pep425tags import get_supported, version_info_to_nodot from pip._internal.utils.misc import normalize_version_info from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import List, Optional, Tuple from pip._internal.pep425tags import Pep425Tag class TargetPython(object): """ Encapsulates the properties of a Python interpreter one is targeting for a package install, download, etc. """ def __init__( self, platform=None, # type: Optional[str] py_version_info=None, # type: Optional[Tuple[int, ...]] abi=None, # type: Optional[str] implementation=None, # type: Optional[str] ): # type: (...) -> None """ :param platform: A string or None. If None, searches for packages that are supported by the current system. Otherwise, will find packages that can be built on the platform passed in. These packages will only be downloaded for distribution: they will not be built locally. :param py_version_info: An optional tuple of ints representing the Python version information to use (e.g. `sys.version_info[:3]`). This can have length 1, 2, or 3 when provided. :param abi: A string or None. This is passed to pep425tags.py's get_supported() function as is. :param implementation: A string or None. This is passed to pep425tags.py's get_supported() function as is. """ # Store the given py_version_info for when we call get_supported(). self._given_py_version_info = py_version_info if py_version_info is None: py_version_info = sys.version_info[:3] else: py_version_info = normalize_version_info(py_version_info) py_version = '.'.join(map(str, py_version_info[:2])) self.abi = abi self.implementation = implementation self.platform = platform self.py_version = py_version self.py_version_info = py_version_info # This is used to cache the return value of get_tags(). self._valid_tags = None # type: Optional[List[Pep425Tag]] def format_given(self): # type: () -> str """ Format the given, non-None attributes for display. """ display_version = None if self._given_py_version_info is not None: display_version = '.'.join( str(part) for part in self._given_py_version_info ) key_values = [ ('platform', self.platform), ('version_info', display_version), ('abi', self.abi), ('implementation', self.implementation), ] return ' '.join( '{}={!r}'.format(key, value) for key, value in key_values if value is not None ) def get_tags(self): # type: () -> List[Pep425Tag] """ Return the supported PEP 425 tags to check wheel candidates against. The tags are returned in order of preference (most preferred first). """ if self._valid_tags is None: # Pass versions=None if no py_version_info was given since # versions=None uses special default logic. 
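        # --- Illustrative aside (not part of pip). version_info_to_nodot()
        # keeps only the first two components and joins them without a dot:
        #
        #     >>> ''.join(map(str, (3, 7, 3)[:2]))
        #     '37'
        #
        # so a py_version_info of (3, 7, 3) restricts get_supported() to
        # cp37-compatible tags.
        # --- end of aside ---------------------------------------------------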
py_version_info = self._given_py_version_info if py_version_info is None: versions = None else: versions = [version_info_to_nodot(py_version_info)] tags = get_supported( versions=versions, platform=self.platform, abi=self.abi, impl=self.implementation, ) self._valid_tags = tags return self._valid_tags ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9433274 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/operations/0000755000175000017500000000000000000000000027167 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/operations/__init__.py0000644000175000017500000000000000000000000031266 0ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/operations/check.py0000644000175000017500000001215000000000000030615 0ustar00vincentvincent00000000000000"""Validation of dependencies of packages """ import logging from collections import namedtuple from pip._vendor.packaging.utils import canonicalize_name from pip._vendor.pkg_resources import RequirementParseError from pip._internal.distributions import ( make_distribution_for_install_requirement, ) from pip._internal.utils.misc import get_installed_distributions from pip._internal.utils.typing import MYPY_CHECK_RUNNING logger = logging.getLogger(__name__) if MYPY_CHECK_RUNNING: from pip._internal.req.req_install import InstallRequirement from typing import ( Any, Callable, Dict, Optional, Set, Tuple, List ) # Shorthands PackageSet = Dict[str, 'PackageDetails'] Missing = Tuple[str, Any] Conflicting = Tuple[str, str, Any] MissingDict = Dict[str, List[Missing]] ConflictingDict = Dict[str, List[Conflicting]] CheckResult = Tuple[MissingDict, ConflictingDict] PackageDetails = namedtuple('PackageDetails', ['version', 'requires']) def create_package_set_from_installed(**kwargs): # type: (**Any) -> Tuple[PackageSet, bool] """Converts a list of distributions into a PackageSet. """ # Default to using all packages installed on the system if kwargs == {}: kwargs = {"local_only": False, "skip": ()} package_set = {} problems = False for dist in get_installed_distributions(**kwargs): name = canonicalize_name(dist.project_name) try: package_set[name] = PackageDetails(dist.version, dist.requires()) except RequirementParseError as e: # Don't crash on broken metadata logging.warning("Error parsing requirements for %s: %s", name, e) problems = True return package_set, problems def check_package_set(package_set, should_ignore=None): # type: (PackageSet, Optional[Callable[[str], bool]]) -> CheckResult """Check if a package set is consistent If should_ignore is passed, it should be a callable that takes a package name and returns a boolean. 
""" if should_ignore is None: def should_ignore(name): return False missing = dict() conflicting = dict() for package_name in package_set: # Info about dependencies of package_name missing_deps = set() # type: Set[Missing] conflicting_deps = set() # type: Set[Conflicting] if should_ignore(package_name): continue for req in package_set[package_name].requires: name = canonicalize_name(req.project_name) # type: str # Check if it's missing if name not in package_set: missed = True if req.marker is not None: missed = req.marker.evaluate() if missed: missing_deps.add((name, req)) continue # Check if there's a conflict version = package_set[name].version # type: str if not req.specifier.contains(version, prereleases=True): conflicting_deps.add((name, version, req)) if missing_deps: missing[package_name] = sorted(missing_deps, key=str) if conflicting_deps: conflicting[package_name] = sorted(conflicting_deps, key=str) return missing, conflicting def check_install_conflicts(to_install): # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult] """For checking if the dependency graph would be consistent after \ installing given requirements """ # Start from the current state package_set, _ = create_package_set_from_installed() # Install packages would_be_installed = _simulate_installation_of(to_install, package_set) # Only warn about directly-dependent packages; create a whitelist of them whitelist = _create_whitelist(would_be_installed, package_set) return ( package_set, check_package_set( package_set, should_ignore=lambda name: name not in whitelist ) ) def _simulate_installation_of(to_install, package_set): # type: (List[InstallRequirement], PackageSet) -> Set[str] """Computes the version of packages after installing to_install. """ # Keep track of packages that were installed installed = set() # Modify it as installing requirement_set would (assuming no errors) for inst_req in to_install: abstract_dist = make_distribution_for_install_requirement(inst_req) dist = abstract_dist.get_pkg_resources_distribution() name = canonicalize_name(dist.key) package_set[name] = PackageDetails(dist.version, dist.requires()) installed.add(name) return installed def _create_whitelist(would_be_installed, package_set): # type: (Set[str], PackageSet) -> Set[str] packages_affected = set(would_be_installed) for package_name in package_set: if package_name in packages_affected: continue for req in package_set[package_name].requires: if canonicalize_name(req.name) in packages_affected: packages_affected.add(package_name) break return packages_affected ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/operations/freeze.py0000644000175000017500000002272000000000000031024 0ustar00vincentvincent00000000000000from __future__ import absolute_import import collections import logging import os import re from pip._vendor import six from pip._vendor.packaging.utils import canonicalize_name from pip._vendor.pkg_resources import RequirementParseError from pip._internal.exceptions import BadCommand, InstallationError from pip._internal.req.constructors import ( install_req_from_editable, install_req_from_line, ) from pip._internal.req.req_file import COMMENT_RE from pip._internal.utils.misc import ( dist_is_editable, get_installed_distributions, ) from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import ( Iterator, Optional, List, Container, Set, Dict, Tuple, 
Iterable, Union ) from pip._internal.cache import WheelCache from pip._vendor.pkg_resources import ( Distribution, Requirement ) RequirementInfo = Tuple[Optional[Union[str, Requirement]], bool, List[str]] logger = logging.getLogger(__name__) def freeze( requirement=None, # type: Optional[List[str]] find_links=None, # type: Optional[List[str]] local_only=None, # type: Optional[bool] user_only=None, # type: Optional[bool] paths=None, # type: Optional[List[str]] skip_regex=None, # type: Optional[str] isolated=False, # type: bool wheel_cache=None, # type: Optional[WheelCache] exclude_editable=False, # type: bool skip=() # type: Container[str] ): # type: (...) -> Iterator[str] find_links = find_links or [] skip_match = None if skip_regex: skip_match = re.compile(skip_regex).search for link in find_links: yield '-f %s' % link installations = {} # type: Dict[str, FrozenRequirement] for dist in get_installed_distributions(local_only=local_only, skip=(), user_only=user_only, paths=paths): try: req = FrozenRequirement.from_dist(dist) except RequirementParseError as exc: # We include dist rather than dist.project_name because the # dist string includes more information, like the version and # location. We also include the exception message to aid # troubleshooting. logger.warning( 'Could not generate requirement for distribution %r: %s', dist, exc ) continue if exclude_editable and req.editable: continue installations[req.name] = req if requirement: # the options that don't get turned into an InstallRequirement # should only be emitted once, even if the same option is in multiple # requirements files, so we need to keep track of what has been emitted # so that we don't emit it again if it's seen again emitted_options = set() # type: Set[str] # keep track of which files a requirement is in so that we can # give an accurate warning if a requirement appears multiple times. 
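# --- Illustrative aside (not part of pip). A defaultdict(list) gives every
# requirement name an initially-empty list of the files it was seen in, so
# duplicates are cheap to detect:
#
#     >>> import collections
#     >>> seen = collections.defaultdict(list)
#     >>> seen['requests'].append('requirements.txt')
#     >>> seen['requests'].append('dev-requirements.txt')
#     >>> len(seen['requests']) > 1
#     True
#
# names with more than one file feed the "included multiple times" warning
# emitted at the end of freeze().
# --- end of aside -----------------------------------------------------------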
req_files = collections.defaultdict(list) # type: Dict[str, List[str]] for req_file_path in requirement: with open(req_file_path) as req_file: for line in req_file: if (not line.strip() or line.strip().startswith('#') or (skip_match and skip_match(line)) or line.startswith(( '-r', '--requirement', '-Z', '--always-unzip', '-f', '--find-links', '-i', '--index-url', '--pre', '--trusted-host', '--process-dependency-links', '--extra-index-url'))): line = line.rstrip() if line not in emitted_options: emitted_options.add(line) yield line continue if line.startswith('-e') or line.startswith('--editable'): if line.startswith('-e'): line = line[2:].strip() else: line = line[len('--editable'):].strip().lstrip('=') line_req = install_req_from_editable( line, isolated=isolated, wheel_cache=wheel_cache, ) else: line_req = install_req_from_line( COMMENT_RE.sub('', line).strip(), isolated=isolated, wheel_cache=wheel_cache, ) if not line_req.name: logger.info( "Skipping line in requirement file [%s] because " "it's not clear what it would install: %s", req_file_path, line.strip(), ) logger.info( " (add #egg=PackageName to the URL to avoid" " this warning)" ) elif line_req.name not in installations: # either it's not installed, or it is installed # but has been processed already if not req_files[line_req.name]: logger.warning( "Requirement file [%s] contains %s, but " "package %r is not installed", req_file_path, COMMENT_RE.sub('', line).strip(), line_req.name ) else: req_files[line_req.name].append(req_file_path) else: yield str(installations[line_req.name]).rstrip() del installations[line_req.name] req_files[line_req.name].append(req_file_path) # Warn about requirements that were included multiple times (in a # single requirements file or in different requirements files). for name, files in six.iteritems(req_files): if len(files) > 1: logger.warning("Requirement %s included multiple times [%s]", name, ', '.join(sorted(set(files)))) yield( '## The following requirements were added by ' 'pip freeze:' ) for installation in sorted( installations.values(), key=lambda x: x.name.lower()): if canonicalize_name(installation.name) not in skip: yield str(installation).rstrip() def get_requirement_info(dist): # type: (Distribution) -> RequirementInfo """ Compute and return values (req, editable, comments) for use in FrozenRequirement.from_dist(). 
""" if not dist_is_editable(dist): return (None, False, []) location = os.path.normcase(os.path.abspath(dist.location)) from pip._internal.vcs import vcs, RemoteNotFoundError vcs_backend = vcs.get_backend_for_dir(location) if vcs_backend is None: req = dist.as_requirement() logger.debug( 'No VCS found for editable requirement "%s" in: %r', req, location, ) comments = [ '# Editable install with no version control ({})'.format(req) ] return (location, True, comments) try: req = vcs_backend.get_src_requirement(location, dist.project_name) except RemoteNotFoundError: req = dist.as_requirement() comments = [ '# Editable {} install with no remote ({})'.format( type(vcs_backend).__name__, req, ) ] return (location, True, comments) except BadCommand: logger.warning( 'cannot determine version of editable source in %s ' '(%s command not found in path)', location, vcs_backend.name, ) return (None, True, []) except InstallationError as exc: logger.warning( "Error when trying to get requirement for VCS system %s, " "falling back to uneditable format", exc ) else: if req is not None: return (req, True, []) logger.warning( 'Could not determine repository location of %s', location ) comments = ['## !! Could not determine repository location'] return (None, False, comments) class FrozenRequirement(object): def __init__(self, name, req, editable, comments=()): # type: (str, Union[str, Requirement], bool, Iterable[str]) -> None self.name = name self.req = req self.editable = editable self.comments = comments @classmethod def from_dist(cls, dist): # type: (Distribution) -> FrozenRequirement req, editable, comments = get_requirement_info(dist) if req is None: req = dist.as_requirement() return cls(dist.project_name, req, editable, comments=comments) def __str__(self): req = self.req if self.editable: req = '-e %s' % req return '\n'.join(list(self.comments) + [str(req)]) + '\n' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/operations/prepare.py0000644000175000017500000002672000000000000031206 0ustar00vincentvincent00000000000000"""Prepares a distribution for installation """ import logging import os from pip._vendor import requests from pip._internal.distributions import ( make_distribution_for_install_requirement, ) from pip._internal.distributions.installed import InstalledDistribution from pip._internal.download import ( is_dir_url, is_file_url, is_vcs_url, unpack_url, url_to_path, ) from pip._internal.exceptions import ( DirectoryUrlHashUnsupported, HashUnpinned, InstallationError, PreviousBuildDirError, VcsHashUnsupported, ) from pip._internal.utils.compat import expanduser from pip._internal.utils.hashes import MissingHashes from pip._internal.utils.logging import indent_log from pip._internal.utils.misc import display_path, normalize_path from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Optional from pip._internal.distributions import AbstractDistribution from pip._internal.download import PipSession from pip._internal.index import PackageFinder from pip._internal.req.req_install import InstallRequirement from pip._internal.req.req_tracker import RequirementTracker logger = logging.getLogger(__name__) class RequirementPreparer(object): """Prepares a Requirement """ def __init__( self, build_dir, # type: str download_dir, # type: Optional[str] src_dir, # type: str wheel_download_dir, # type: Optional[str] progress_bar, # type: str 
        build_isolation,  # type: bool
        req_tracker  # type: RequirementTracker
    ):
        # type: (...) -> None
        super(RequirementPreparer, self).__init__()

        self.src_dir = src_dir
        self.build_dir = build_dir
        self.req_tracker = req_tracker

        # Where still packed archives should be written to. If None, they are
        # not saved, and are deleted immediately after unpacking.
        self.download_dir = download_dir

        # Where still-packed .whl files should be written to. If None, they
        # are written to the download_dir parameter. Separate to download_dir
        # to permit only keeping wheel archives for pip wheel.
        if wheel_download_dir:
            wheel_download_dir = normalize_path(wheel_download_dir)
        self.wheel_download_dir = wheel_download_dir

        # NOTE
        # download_dir and wheel_download_dir overlap semantically and may
        # be combined if we're willing to have non-wheel archives present in
        # the wheelhouse output by 'pip wheel'.

        self.progress_bar = progress_bar

        # Is build isolation allowed?
        self.build_isolation = build_isolation

    @property
    def _download_should_save(self):
        # type: () -> bool
        # TODO: Modify to reduce indentation needed
        if self.download_dir:
            self.download_dir = expanduser(self.download_dir)
            if os.path.exists(self.download_dir):
                return True
            else:
                logger.critical('Could not find download directory')
                raise InstallationError(
                    "Could not find or access download directory '%s'"
                    % display_path(self.download_dir))
        return False

    def prepare_linked_requirement(
        self,
        req,  # type: InstallRequirement
        session,  # type: PipSession
        finder,  # type: PackageFinder
        upgrade_allowed,  # type: bool
        require_hashes  # type: bool
    ):
        # type: (...) -> AbstractDistribution
        """Prepare a requirement that would be obtained from req.link
        """
        # TODO: Breakup into smaller functions
        if req.link and req.link.scheme == 'file':
            path = url_to_path(req.link.url)
            logger.info('Processing %s', display_path(path))
        else:
            logger.info('Collecting %s', req)

        with indent_log():
            # @@ if filesystem packages are not marked
            # editable in a req, a non deterministic error
            # occurs when the script attempts to unpack the
            # build directory
            req.ensure_has_source_dir(self.build_dir)
            # If a checkout exists, it's unwise to keep going. version
            # inconsistencies are logged later, but do not fail the
            # installation.
            # FIXME: this won't upgrade when there's an existing
            # package unpacked in `req.source_dir`
            if os.path.exists(os.path.join(req.source_dir, 'setup.py')):
                raise PreviousBuildDirError(
                    "pip can't proceed with requirements '%s' due to a"
                    " pre-existing build directory (%s). This is "
                    "likely due to a previous installation that failed"
                    ". pip is being responsible and not assuming it "
                    "can delete this. Please delete it and try again."
                    % (req, req.source_dir)
                )
            req.populate_link(finder, upgrade_allowed, require_hashes)

            # We can't hit this spot and have populate_link return None.
            # req.satisfied_by is None here (because we're
            # guarded) and upgrade has no impact except when satisfied_by
            # is not None.
            # Then inside find_requirement existing_applicable -> False
            # If no new versions are found, DistributionNotFound is raised,
            # otherwise a result is guaranteed.
            assert req.link
            link = req.link

            # Now that we have the real link, we can tell what kind of
            # requirements we have and raise some more informative errors
            # than otherwise. (For example, we can raise VcsHashUnsupported
            # for a VCS URL rather than HashMissing.)
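            # --- Illustrative aside (not part of pip). The hash-checking
            # policy enforced below, condensed into a hypothetical helper
            # (`link_kind` stands in for the is_vcs_url()/is_dir_url()
            # checks, `pinned` for req.is_pinned):
            def _hash_policy_sketch(link_kind, pinned):
                # type: (str, bool) -> str
                if link_kind == 'vcs':
                    return 'error: VCS URLs cannot be hash-checked'
                if link_kind == 'directory':
                    return 'error: local directories cannot be hash-checked'
                if not pinned:
                    return 'error: requirement must be pinned, e.g. pkg==1.0'
                return 'ok: verify the downloaded archive against the hashes'
            # --- end of aside -----------------------------------------------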
if require_hashes: # We could check these first 2 conditions inside # unpack_url and save repetition of conditions, but then # we would report less-useful error messages for # unhashable requirements, complaining that there's no # hash provided. if is_vcs_url(link): raise VcsHashUnsupported() elif is_file_url(link) and is_dir_url(link): raise DirectoryUrlHashUnsupported() if not req.original_link and not req.is_pinned: # Unpinned packages are asking for trouble when a new # version is uploaded. This isn't a security check, but # it saves users a surprising hash mismatch in the # future. # # file:/// URLs aren't pinnable, so don't complain # about them not being pinned. raise HashUnpinned() hashes = req.hashes(trust_internet=not require_hashes) if require_hashes and not hashes: # Known-good hashes are missing for this requirement, so # shim it with a facade object that will provoke hash # computation and then raise a HashMissing exception # showing the user what the hash should be. hashes = MissingHashes() try: download_dir = self.download_dir # We always delete unpacked sdists after pip ran. autodelete_unpacked = True if req.link.is_wheel and self.wheel_download_dir: # when doing 'pip wheel` we download wheels to a # dedicated dir. download_dir = self.wheel_download_dir if req.link.is_wheel: if download_dir: # When downloading, we only unpack wheels to get # metadata. autodelete_unpacked = True else: # When installing a wheel, we use the unpacked # wheel. autodelete_unpacked = False unpack_url( req.link, req.source_dir, download_dir, autodelete_unpacked, session=session, hashes=hashes, progress_bar=self.progress_bar ) except requests.HTTPError as exc: logger.critical( 'Could not install requirement %s because of error %s', req, exc, ) raise InstallationError( 'Could not install requirement %s because of HTTP ' 'error %s for URL %s' % (req, exc, req.link) ) abstract_dist = make_distribution_for_install_requirement(req) with self.req_tracker.track(req): abstract_dist.prepare_distribution_metadata( finder, self.build_isolation, ) if self._download_should_save: # Make a .zip of the source_dir we already created. if not req.link.is_artifact: req.archive(self.download_dir) return abstract_dist def prepare_editable_requirement( self, req, # type: InstallRequirement require_hashes, # type: bool use_user_site, # type: bool finder # type: PackageFinder ): # type: (...) -> AbstractDistribution """Prepare an editable requirement """ assert req.editable, "cannot prepare a non-editable req as editable" logger.info('Obtaining %s', req) with indent_log(): if require_hashes: raise InstallationError( 'The editable requirement %s cannot be installed when ' 'requiring hashes, because there is no single file to ' 'hash.' % req ) req.ensure_has_source_dir(self.src_dir) req.update_editable(not self._download_should_save) abstract_dist = make_distribution_for_install_requirement(req) with self.req_tracker.track(req): abstract_dist.prepare_distribution_metadata( finder, self.build_isolation, ) if self._download_should_save: req.archive(self.download_dir) req.check_if_exists(use_user_site) return abstract_dist def prepare_installed_requirement( self, req, # type: InstallRequirement require_hashes, # type: bool skip_reason # type: str ): # type: (...) 
-> AbstractDistribution """Prepare an already-installed requirement """ assert req.satisfied_by, "req should have been satisfied but isn't" assert skip_reason is not None, ( "did not get skip reason skipped but req.satisfied_by " "is set to %r" % (req.satisfied_by,) ) logger.info( 'Requirement %s: %s (%s)', skip_reason, req, req.satisfied_by.version ) with indent_log(): if require_hashes: logger.debug( 'Since it is already installed, we are trusting this ' 'package without checking its hash. To ensure a ' 'completely repeatable environment, install into an ' 'empty virtualenv.' ) abstract_dist = InstalledDistribution(req) return abstract_dist ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/pep425tags.py0000644000175000017500000003176300000000000027266 0ustar00vincentvincent00000000000000"""Generate and work with PEP 425 Compatibility Tags.""" from __future__ import absolute_import import distutils.util import logging import platform import re import sys import sysconfig import warnings from collections import OrderedDict import pip._internal.utils.glibc from pip._internal.utils.compat import get_extension_suffixes from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import ( Tuple, Callable, List, Optional, Union, Dict ) Pep425Tag = Tuple[str, str, str] logger = logging.getLogger(__name__) _osx_arch_pat = re.compile(r'(.+)_(\d+)_(\d+)_(.+)') def get_config_var(var): # type: (str) -> Optional[str] try: return sysconfig.get_config_var(var) except IOError as e: # Issue #1074 warnings.warn("{}".format(e), RuntimeWarning) return None def get_abbr_impl(): # type: () -> str """Return abbreviated implementation name.""" if hasattr(sys, 'pypy_version_info'): pyimpl = 'pp' elif sys.platform.startswith('java'): pyimpl = 'jy' elif sys.platform == 'cli': pyimpl = 'ip' else: pyimpl = 'cp' return pyimpl def version_info_to_nodot(version_info): # type: (Tuple[int, ...]) -> str # Only use up to the first two numbers. return ''.join(map(str, version_info[:2])) def get_impl_ver(): # type: () -> str """Return implementation version.""" impl_ver = get_config_var("py_version_nodot") if not impl_ver or get_abbr_impl() == 'pp': impl_ver = ''.join(map(str, get_impl_version_info())) return impl_ver def get_impl_version_info(): # type: () -> Tuple[int, ...] """Return sys.version_info-like tuple for use in decrementing the minor version.""" if get_abbr_impl() == 'pp': # as per https://github.com/pypa/pip/issues/2882 # attrs exist only on pypy return (sys.version_info[0], sys.pypy_version_info.major, # type: ignore sys.pypy_version_info.minor) # type: ignore else: return sys.version_info[0], sys.version_info[1] def get_impl_tag(): # type: () -> str """ Returns the Tag for this specific implementation. 
""" return "{}{}".format(get_abbr_impl(), get_impl_ver()) def get_flag(var, fallback, expected=True, warn=True): # type: (str, Callable[..., bool], Union[bool, int], bool) -> bool """Use a fallback method for determining SOABI flags if the needed config var is unset or unavailable.""" val = get_config_var(var) if val is None: if warn: logger.debug("Config variable '%s' is unset, Python ABI tag may " "be incorrect", var) return fallback() return val == expected def get_abi_tag(): # type: () -> Optional[str] """Return the ABI tag based on SOABI (if available) or emulate SOABI (CPython 2, PyPy).""" soabi = get_config_var('SOABI') impl = get_abbr_impl() if not soabi and impl in {'cp', 'pp'} and hasattr(sys, 'maxunicode'): d = '' m = '' u = '' if get_flag('Py_DEBUG', lambda: hasattr(sys, 'gettotalrefcount'), warn=(impl == 'cp')): d = 'd' if get_flag('WITH_PYMALLOC', lambda: impl == 'cp', warn=(impl == 'cp')): m = 'm' if get_flag('Py_UNICODE_SIZE', lambda: sys.maxunicode == 0x10ffff, expected=4, warn=(impl == 'cp' and sys.version_info < (3, 3))) \ and sys.version_info < (3, 3): u = 'u' abi = '%s%s%s%s%s' % (impl, get_impl_ver(), d, m, u) elif soabi and soabi.startswith('cpython-'): abi = 'cp' + soabi.split('-')[1] elif soabi: abi = soabi.replace('.', '_').replace('-', '_') else: abi = None return abi def _is_running_32bit(): # type: () -> bool return sys.maxsize == 2147483647 def get_platform(): # type: () -> str """Return our platform name 'win32', 'linux_x86_64'""" if sys.platform == 'darwin': # distutils.util.get_platform() returns the release based on the value # of MACOSX_DEPLOYMENT_TARGET on which Python was built, which may # be significantly older than the user's current machine. release, _, machine = platform.mac_ver() split_ver = release.split('.') if machine == "x86_64" and _is_running_32bit(): machine = "i386" elif machine == "ppc64" and _is_running_32bit(): machine = "ppc" return 'macosx_{}_{}_{}'.format(split_ver[0], split_ver[1], machine) # XXX remove distutils dependency result = distutils.util.get_platform().replace('.', '_').replace('-', '_') if result == "linux_x86_64" and _is_running_32bit(): # 32 bit Python program (running on a 64 bit Linux): pip should only # install and run 32 bit compiled extensions in that case. result = "linux_i686" return result def is_manylinux1_compatible(): # type: () -> bool # Only Linux, and only x86-64 / i686 if get_platform() not in {"linux_x86_64", "linux_i686"}: return False # Check for presence of _manylinux module try: import _manylinux return bool(_manylinux.manylinux1_compatible) except (ImportError, AttributeError): # Fall through to heuristic check below pass # Check glibc version. CentOS 5 uses glibc 2.5. return pip._internal.utils.glibc.have_compatible_glibc(2, 5) def is_manylinux2010_compatible(): # type: () -> bool # Only Linux, and only x86-64 / i686 if get_platform() not in {"linux_x86_64", "linux_i686"}: return False # Check for presence of _manylinux module try: import _manylinux return bool(_manylinux.manylinux2010_compatible) except (ImportError, AttributeError): # Fall through to heuristic check below pass # Check glibc version. CentOS 6 uses glibc 2.12. return pip._internal.utils.glibc.have_compatible_glibc(2, 12) def get_darwin_arches(major, minor, machine): # type: (int, int, str) -> List[str] """Return a list of supported arches (including group arches) for the given major, minor and machine architecture of an macOS machine. 
""" arches = [] def _supports_arch(major, minor, arch): # type: (int, int, str) -> bool # Looking at the application support for macOS versions in the chart # provided by https://en.wikipedia.org/wiki/OS_X#Versions it appears # our timeline looks roughly like: # # 10.0 - Introduces ppc support. # 10.4 - Introduces ppc64, i386, and x86_64 support, however the ppc64 # and x86_64 support is CLI only, and cannot be used for GUI # applications. # 10.5 - Extends ppc64 and x86_64 support to cover GUI applications. # 10.6 - Drops support for ppc64 # 10.7 - Drops support for ppc # # Given that we do not know if we're installing a CLI or a GUI # application, we must be conservative and assume it might be a GUI # application and behave as if ppc64 and x86_64 support did not occur # until 10.5. # # Note: The above information is taken from the "Application support" # column in the chart not the "Processor support" since I believe # that we care about what instruction sets an application can use # not which processors the OS supports. if arch == 'ppc': return (major, minor) <= (10, 5) if arch == 'ppc64': return (major, minor) == (10, 5) if arch == 'i386': return (major, minor) >= (10, 4) if arch == 'x86_64': return (major, minor) >= (10, 5) if arch in groups: for garch in groups[arch]: if _supports_arch(major, minor, garch): return True return False groups = OrderedDict([ ("fat", ("i386", "ppc")), ("intel", ("x86_64", "i386")), ("fat64", ("x86_64", "ppc64")), ("fat32", ("x86_64", "i386", "ppc")), ]) # type: Dict[str, Tuple[str, ...]] if _supports_arch(major, minor, machine): arches.append(machine) for garch in groups: if machine in groups[garch] and _supports_arch(major, minor, garch): arches.append(garch) arches.append('universal') return arches def get_all_minor_versions_as_strings(version_info): # type: (Tuple[int, ...]) -> List[str] versions = [] major = version_info[:-1] # Support all previous minor Python versions. for minor in range(version_info[-1], -1, -1): versions.append(''.join(map(str, major + (minor,)))) return versions def get_supported( versions=None, # type: Optional[List[str]] noarch=False, # type: bool platform=None, # type: Optional[str] impl=None, # type: Optional[str] abi=None # type: Optional[str] ): # type: (...) -> List[Pep425Tag] """Return a list of supported tags for each version specified in `versions`. :param versions: a list of string versions, of the form ["33", "32"], or None. The first version will be assumed to support our ABI. :param platform: specify the exact platform you want valid tags for, or None. If None, use the local system platform. :param impl: specify the exact implementation you want valid tags for, or None. If None, use the local interpreter impl. :param abi: specify the exact abi you want valid tags for, or None. If None, use the local interpreter abi. 
""" supported = [] # Versions must be given with respect to the preference if versions is None: version_info = get_impl_version_info() versions = get_all_minor_versions_as_strings(version_info) impl = impl or get_abbr_impl() abis = [] # type: List[str] abi = abi or get_abi_tag() if abi: abis[0:0] = [abi] abi3s = set() for suffix in get_extension_suffixes(): if suffix.startswith('.abi'): abi3s.add(suffix.split('.', 2)[1]) abis.extend(sorted(list(abi3s))) abis.append('none') if not noarch: arch = platform or get_platform() arch_prefix, arch_sep, arch_suffix = arch.partition('_') if arch.startswith('macosx'): # support macosx-10.6-intel on macosx-10.9-x86_64 match = _osx_arch_pat.match(arch) if match: name, major, minor, actual_arch = match.groups() tpl = '{}_{}_%i_%s'.format(name, major) arches = [] for m in reversed(range(int(minor) + 1)): for a in get_darwin_arches(int(major), m, actual_arch): arches.append(tpl % (m, a)) else: # arch pattern didn't match (?!) arches = [arch] elif arch_prefix == 'manylinux2010': # manylinux1 wheels run on most manylinux2010 systems with the # exception of wheels depending on ncurses. PEP 571 states # manylinux1 wheels should be considered manylinux2010 wheels: # https://www.python.org/dev/peps/pep-0571/#backwards-compatibility-with-manylinux1-wheels arches = [arch, 'manylinux1' + arch_sep + arch_suffix] elif platform is None: arches = [] if is_manylinux2010_compatible(): arches.append('manylinux2010' + arch_sep + arch_suffix) if is_manylinux1_compatible(): arches.append('manylinux1' + arch_sep + arch_suffix) arches.append(arch) else: arches = [arch] # Current version, current API (built specifically for our Python): for abi in abis: for arch in arches: supported.append(('%s%s' % (impl, versions[0]), abi, arch)) # abi3 modules compatible with older version of Python for version in versions[1:]: # abi3 was introduced in Python 3.2 if version in {'31', '30'}: break for abi in abi3s: # empty set if not Python 3 for arch in arches: supported.append(("%s%s" % (impl, version), abi, arch)) # Has binaries, does not use the Python API: for arch in arches: supported.append(('py%s' % (versions[0][0]), 'none', arch)) # No abi / arch, but requires our implementation: supported.append(('%s%s' % (impl, versions[0]), 'none', 'any')) # Tagged specifically as being cross-version compatible # (with just the major version specified) supported.append(('%s%s' % (impl, versions[0][0]), 'none', 'any')) # No abi / arch, generic Python for i, version in enumerate(versions): supported.append(('py%s' % (version,), 'none', 'any')) if i == 0: supported.append(('py%s' % (version[0]), 'none', 'any')) return supported implementation_tag = get_impl_tag() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/pyproject.py0000644000175000017500000001450000000000000027375 0ustar00vincentvincent00000000000000from __future__ import absolute_import import io import os import sys from pip._vendor import pytoml, six from pip._internal.exceptions import InstallationError from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Any, Tuple, Optional, List def _is_list_of_str(obj): # type: (Any) -> bool return ( isinstance(obj, list) and all(isinstance(item, six.string_types) for item in obj) ) def make_pyproject_path(setup_py_dir): # type: (str) -> str path = os.path.join(setup_py_dir, 'pyproject.toml') # Python2 __file__ should not be unicode if 
six.PY2 and isinstance(path, six.text_type): path = path.encode(sys.getfilesystemencoding()) return path def load_pyproject_toml( use_pep517, # type: Optional[bool] pyproject_toml, # type: str setup_py, # type: str req_name # type: str ): # type: (...) -> Optional[Tuple[List[str], str, List[str]]] """Load the pyproject.toml file. Parameters: use_pep517 - Has the user requested PEP 517 processing? None means the user hasn't explicitly specified. pyproject_toml - Location of the project's pyproject.toml file setup_py - Location of the project's setup.py file req_name - The name of the requirement we're processing (for error reporting) Returns: None if we should use the legacy code path, otherwise a tuple ( requirements from pyproject.toml, name of PEP 517 backend, requirements we should check are installed after setting up the build environment ) """ has_pyproject = os.path.isfile(pyproject_toml) has_setup = os.path.isfile(setup_py) if has_pyproject: with io.open(pyproject_toml, encoding="utf-8") as f: pp_toml = pytoml.load(f) build_system = pp_toml.get("build-system") else: build_system = None # The following cases must use PEP 517 # We check for use_pep517 being non-None and falsey because that means # the user explicitly requested --no-use-pep517. The value 0 as # opposed to False can occur when the value is provided via an # environment variable or config file option (due to the quirk of # strtobool() returning an integer in pip's configuration code). if has_pyproject and not has_setup: if use_pep517 is not None and not use_pep517: raise InstallationError( "Disabling PEP 517 processing is invalid: " "project does not have a setup.py" ) use_pep517 = True elif build_system and "build-backend" in build_system: if use_pep517 is not None and not use_pep517: raise InstallationError( "Disabling PEP 517 processing is invalid: " "project specifies a build backend of {} " "in pyproject.toml".format( build_system["build-backend"] ) ) use_pep517 = True # If we haven't worked out whether to use PEP 517 yet, # and the user hasn't explicitly stated a preference, # we do so if the project has a pyproject.toml file. elif use_pep517 is None: use_pep517 = has_pyproject # At this point, we know whether we're going to use PEP 517. assert use_pep517 is not None # If we're using the legacy code path, there is nothing further # for us to do here. if not use_pep517: return None if build_system is None: # Either the user has a pyproject.toml with no build-system # section, or the user has no pyproject.toml, but has opted in # explicitly via --use-pep517. # In the absence of any explicit backend specification, we # assume the setuptools backend that most closely emulates the # traditional direct setup.py execution, and require wheel and # a version of setuptools that supports that backend. build_system = { "requires": ["setuptools>=40.8.0", "wheel"], "build-backend": "setuptools.build_meta:__legacy__", } # If we're using PEP 517, we have build system information (either # from pyproject.toml, or defaulted by the code above). # Note that at this point, we do not know if the user has actually # specified a backend, though. assert build_system is not None # Ensure that the build-system section in pyproject.toml conforms # to PEP 518. 
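# Illustrative sketch (hypothetical project, per PEP 518): a
# build-system table that passes the conformance checks below looks
# like
#
#   [build-system]
#   requires = ["setuptools>=40.8.0", "wheel"]
#   build-backend = "setuptools.build_meta"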
error_template = ( "{package} has a pyproject.toml file that does not comply " "with PEP 518: {reason}" ) # Specifying the build-system table but not the requires key is invalid if "requires" not in build_system: raise InstallationError( error_template.format(package=req_name, reason=( "it has a 'build-system' table but not " "'build-system.requires' which is mandatory in the table" )) ) # Error out if requires is not a list of strings requires = build_system["requires"] if not _is_list_of_str(requires): raise InstallationError(error_template.format( package=req_name, reason="'build-system.requires' is not a list of strings.", )) backend = build_system.get("build-backend") check = [] # type: List[str] if backend is None: # If the user didn't specify a backend, we assume they want to use # the setuptools backend. But we can't be sure they have included # a version of setuptools which supplies the backend, or wheel # (which is needed by the backend) in their requirements. So we # make a note to check that those requirements are present once # we have set up the environment. # This is quite a lot of work to check for a very specific case. But # the problem is, that case is potentially quite common - projects that # adopted PEP 518 early for the ability to specify requirements to # execute setup.py, but never considered needing to mention the build # tools themselves. The original PEP 518 code had a similar check (but # implemented in a different way). backend = "setuptools.build_meta:__legacy__" check = ["setuptools>=40.8.0", "wheel"] return (requires, backend, check) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735099.9466608 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/0000755000175000017500000000000000000000000025573 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/__init__.py0000644000175000017500000000447400000000000027715 0ustar00vincentvincent00000000000000from __future__ import absolute_import import logging from .req_install import InstallRequirement from .req_set import RequirementSet from .req_file import parse_requirements from pip._internal.utils.logging import indent_log from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Any, List, Sequence __all__ = [ "RequirementSet", "InstallRequirement", "parse_requirements", "install_given_reqs", ] logger = logging.getLogger(__name__) def install_given_reqs( to_install, # type: List[InstallRequirement] install_options, # type: List[str] global_options=(), # type: Sequence[str] *args, # type: Any **kwargs # type: Any ): # type: (...) -> List[InstallRequirement] """ Install everything in the given list. 
(to be called after having downloaded and unpacked the packages) """ if to_install: logger.info( 'Installing collected packages: %s', ', '.join([req.name for req in to_install]), ) with indent_log(): for requirement in to_install: if requirement.conflicts_with: logger.info( 'Found existing installation: %s', requirement.conflicts_with, ) with indent_log(): uninstalled_pathset = requirement.uninstall( auto_confirm=True ) try: requirement.install( install_options, global_options, *args, **kwargs ) except Exception: should_rollback = ( requirement.conflicts_with and not requirement.install_succeeded ) # if install did not succeed, rollback previous uninstall if should_rollback: uninstalled_pathset.rollback() raise else: should_commit = ( requirement.conflicts_with and requirement.install_succeeded ) if should_commit: uninstalled_pathset.commit() requirement.remove_temporary_source() return to_install ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/constructors.py0000644000175000017500000002712200000000000030721 0ustar00vincentvincent00000000000000"""Backing implementation for InstallRequirement's various constructors The idea here is that these formed a major chunk of InstallRequirement's size, so moving them and their support code outside of that class helps make the rest of the code easier to understand. These are meant to be used elsewhere within pip to create instances of InstallRequirement. """ import logging import os import re from pip._vendor.packaging.markers import Marker from pip._vendor.packaging.requirements import InvalidRequirement, Requirement from pip._vendor.packaging.specifiers import Specifier from pip._vendor.pkg_resources import RequirementParseError, parse_requirements from pip._internal.download import is_archive_file, is_url, url_to_path from pip._internal.exceptions import InstallationError from pip._internal.models.index import PyPI, TestPyPI from pip._internal.models.link import Link from pip._internal.pyproject import make_pyproject_path from pip._internal.req.req_install import InstallRequirement from pip._internal.utils.misc import is_installable_dir, path_to_url from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.vcs import vcs from pip._internal.wheel import Wheel if MYPY_CHECK_RUNNING: from typing import ( Any, Dict, Optional, Set, Tuple, Union, ) from pip._internal.cache import WheelCache __all__ = [ "install_req_from_editable", "install_req_from_line", "parse_editable" ] logger = logging.getLogger(__name__) operators = Specifier._operators.keys() def _strip_extras(path): # type: (str) -> Tuple[str, Optional[str]] m = re.match(r'^(.+)(\[[^\]]+\])$', path) extras = None if m: path_no_extras = m.group(1) extras = m.group(2) else: path_no_extras = path return path_no_extras, extras def parse_editable(editable_req): # type: (str) -> Tuple[Optional[str], str, Optional[Set[str]]] """Parses an editable requirement into: - a requirement name - a URL - extras Accepted requirements: svn+http://blahblah@rev#egg=Foobar[baz]&subdirectory=version_subdir .[some_extra] """ url = editable_req # If a file path is specified with extras, strip off the extras. url_no_extras, extras = _strip_extras(url) if os.path.isdir(url_no_extras): if not os.path.exists(os.path.join(url_no_extras, 'setup.py')): msg = ( 'File "setup.py" not found.
Directory cannot be installed ' 'in editable mode: {}'.format(os.path.abspath(url_no_extras)) ) pyproject_path = make_pyproject_path(url_no_extras) if os.path.isfile(pyproject_path): msg += ( '\n(A "pyproject.toml" file was found, but editable ' 'mode currently requires a setup.py based build.)' ) raise InstallationError(msg) # Treating it as code that has already been checked out url_no_extras = path_to_url(url_no_extras) if url_no_extras.lower().startswith('file:'): package_name = Link(url_no_extras).egg_fragment if extras: return ( package_name, url_no_extras, Requirement("placeholder" + extras.lower()).extras, ) else: return package_name, url_no_extras, None for version_control in vcs: if url.lower().startswith('%s:' % version_control): url = '%s+%s' % (version_control, url) break if '+' not in url: raise InstallationError( '{} is not a valid editable requirement. ' 'It should either be a path to a local project or a VCS URL ' '(beginning with svn+, git+, hg+, or bzr+).'.format(editable_req) ) vc_type = url.split('+', 1)[0].lower() if not vcs.get_backend(vc_type): error_message = 'For --editable=%s only ' % editable_req + \ ', '.join([backend.name + '+URL' for backend in vcs.backends]) + \ ' is currently supported' raise InstallationError(error_message) package_name = Link(url).egg_fragment if not package_name: raise InstallationError( "Could not detect requirement name for '%s', please specify one " "with #egg=your_package_name" % editable_req ) return package_name, url, None def deduce_helpful_msg(req): # type: (str) -> str """Returns helpful msg in case requirements file does not exist, or cannot be parsed. :params req: Requirements file path """ msg = "" if os.path.exists(req): msg = " It does exist." # Try to parse and check if it is a requirements file. try: with open(req, 'r') as fp: # parse first line only next(parse_requirements(fp.read())) msg += " The argument you provided " + \ "(%s) appears to be a" % (req) + \ " requirements file. If that is the" + \ " case, use the '-r' flag to install" + \ " the packages specified within it." except RequirementParseError: logger.debug("Cannot parse '%s' as requirements \ file" % (req), exc_info=True) else: msg += " File '%s' does not exist." % (req) return msg # ---- The actual constructors follow ---- def install_req_from_editable( editable_req, # type: str comes_from=None, # type: Optional[str] use_pep517=None, # type: Optional[bool] isolated=False, # type: bool options=None, # type: Optional[Dict[str, Any]] wheel_cache=None, # type: Optional[WheelCache] constraint=False # type: bool ): # type: (...) -> InstallRequirement name, url, extras_override = parse_editable(editable_req) if url.startswith('file:'): source_dir = url_to_path(url) else: source_dir = None if name is not None: try: req = Requirement(name) except InvalidRequirement: raise InstallationError("Invalid requirement: '%s'" % name) else: req = None return InstallRequirement( req, comes_from, source_dir=source_dir, editable=True, link=Link(url), constraint=constraint, use_pep517=use_pep517, isolated=isolated, options=options if options else {}, wheel_cache=wheel_cache, extras=extras_override or (), ) def install_req_from_line( name, # type: str comes_from=None, # type: Optional[Union[str, InstallRequirement]] use_pep517=None, # type: Optional[bool] isolated=False, # type: bool options=None, # type: Optional[Dict[str, Any]] wheel_cache=None, # type: Optional[WheelCache] constraint=False, # type: bool line_source=None, # type: Optional[str] ): # type: (...) 
-> InstallRequirement """Creates an InstallRequirement from a name, which might be a requirement, directory containing 'setup.py', filename, or URL. :param line_source: An optional string describing where the line is from, for logging purposes in case of an error. """ if is_url(name): marker_sep = '; ' else: marker_sep = ';' if marker_sep in name: name, markers_as_string = name.split(marker_sep, 1) markers_as_string = markers_as_string.strip() if not markers_as_string: markers = None else: markers = Marker(markers_as_string) else: markers = None name = name.strip() req_as_string = None path = os.path.normpath(os.path.abspath(name)) link = None extras_as_string = None if is_url(name): link = Link(name) else: p, extras_as_string = _strip_extras(path) looks_like_dir = os.path.isdir(p) and ( os.path.sep in name or (os.path.altsep is not None and os.path.altsep in name) or name.startswith('.') ) if looks_like_dir: if not is_installable_dir(p): raise InstallationError( "Directory %r is not installable. Neither 'setup.py' " "nor 'pyproject.toml' found." % name ) link = Link(path_to_url(p)) elif is_archive_file(p): if not os.path.isfile(p): logger.warning( 'Requirement %r looks like a filename, but the ' 'file does not exist', name ) link = Link(path_to_url(p)) # it's a local file, dir, or url if link: # Handle relative file URLs if link.scheme == 'file' and re.search(r'\.\./', link.url): link = Link( path_to_url(os.path.normpath(os.path.abspath(link.path)))) # wheel file if link.is_wheel: wheel = Wheel(link.filename) # can raise InvalidWheelFilename req_as_string = "%s==%s" % (wheel.name, wheel.version) else: # set the req to the egg fragment. when it's not there, this # will become an 'unnamed' requirement req_as_string = link.egg_fragment # a requirement specifier else: req_as_string = name if extras_as_string: extras = Requirement("placeholder" + extras_as_string.lower()).extras else: extras = () if req_as_string is not None: try: req = Requirement(req_as_string) except InvalidRequirement: if os.path.sep in req_as_string: add_msg = "It looks like a path." add_msg += deduce_helpful_msg(req_as_string) elif ('=' in req_as_string and not any(op in req_as_string for op in operators)): add_msg = "= is not a valid operator. Did you mean == ?" else: add_msg = '' if line_source is None: source = '' else: source = ' (from {})'.format(line_source) msg = ( 'Invalid requirement: {!r}{}'.format(req_as_string, source) ) if add_msg: msg += '\nHint: {}'.format(add_msg) raise InstallationError(msg) else: req = None return InstallRequirement( req, comes_from, link=link, markers=markers, use_pep517=use_pep517, isolated=isolated, options=options if options else {}, wheel_cache=wheel_cache, constraint=constraint, extras=extras, ) def install_req_from_req_string( req_string, # type: str comes_from=None, # type: Optional[InstallRequirement] isolated=False, # type: bool wheel_cache=None, # type: Optional[WheelCache] use_pep517=None # type: Optional[bool] ): # type: (...) 
-> InstallRequirement try: req = Requirement(req_string) except InvalidRequirement: raise InstallationError("Invalid requirement: '%s'" % req_string) domains_not_allowed = [ PyPI.file_storage_domain, TestPyPI.file_storage_domain, ] if (req.url and comes_from and comes_from.link and comes_from.link.netloc in domains_not_allowed): # Explicitly disallow pypi packages that depend on external urls raise InstallationError( "Packages installed from PyPI cannot depend on packages " "which are not also hosted on PyPI.\n" "%s depends on %s " % (comes_from.name, req) ) return InstallRequirement( req, comes_from, isolated=isolated, wheel_cache=wheel_cache, use_pep517=use_pep517 ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/req_file.py0000644000175000017500000003354400000000000027744 0ustar00vincentvincent00000000000000""" Requirements file parsing """ from __future__ import absolute_import import optparse import os import re import shlex import sys from pip._vendor.six.moves import filterfalse from pip._vendor.six.moves.urllib import parse as urllib_parse from pip._internal.cli import cmdoptions from pip._internal.download import get_file_content from pip._internal.exceptions import RequirementsFileParseError from pip._internal.models.search_scope import SearchScope from pip._internal.req.constructors import ( install_req_from_editable, install_req_from_line, ) from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import ( Any, Callable, Iterator, List, NoReturn, Optional, Text, Tuple, ) from pip._internal.req import InstallRequirement from pip._internal.cache import WheelCache from pip._internal.index import PackageFinder from pip._internal.download import PipSession ReqFileLines = Iterator[Tuple[int, Text]] __all__ = ['parse_requirements'] SCHEME_RE = re.compile(r'^(http|https|file):', re.I) COMMENT_RE = re.compile(r'(^|\s+)#.*$') # Matches environment variable-style values in '${MY_VARIABLE_1}' with the # variable name consisting of only uppercase letters, digits or the '_' # (underscore). This follows the POSIX standard defined in IEEE Std 1003.1, # 2013 Edition. ENV_VAR_RE = re.compile(r'(?P<var>\$\{(?P<name>[A-Z0-9_]+)\})') SUPPORTED_OPTIONS = [ cmdoptions.constraints, cmdoptions.editable, cmdoptions.requirements, cmdoptions.no_index, cmdoptions.index_url, cmdoptions.find_links, cmdoptions.extra_index_url, cmdoptions.always_unzip, cmdoptions.no_binary, cmdoptions.only_binary, cmdoptions.pre, cmdoptions.trusted_host, cmdoptions.require_hashes, ] # type: List[Callable[..., optparse.Option]] # options to be passed to requirements SUPPORTED_OPTIONS_REQ = [ cmdoptions.install_options, cmdoptions.global_options, cmdoptions.hash, ] # type: List[Callable[..., optparse.Option]] # the 'dest' string values SUPPORTED_OPTIONS_REQ_DEST = [str(o().dest) for o in SUPPORTED_OPTIONS_REQ] def parse_requirements( filename, # type: str finder=None, # type: Optional[PackageFinder] comes_from=None, # type: Optional[str] options=None, # type: Optional[optparse.Values] session=None, # type: Optional[PipSession] constraint=False, # type: bool wheel_cache=None, # type: Optional[WheelCache] use_pep517=None # type: Optional[bool] ): # type: (...) -> Iterator[InstallRequirement] """Parse a requirements file and yield InstallRequirement instances. :param filename: Path or url of requirements file. :param finder: Instance of pip.index.PackageFinder.
:param comes_from: Origin description of requirements. :param options: cli options. :param session: Instance of pip.download.PipSession. :param constraint: If true, parsing a constraints file rather than a requirements file. :param wheel_cache: Instance of pip.wheel.WheelCache :param use_pep517: Value of the --use-pep517 option. """ if session is None: raise TypeError( "parse_requirements() missing 1 required keyword argument: " "'session'" ) _, content = get_file_content( filename, comes_from=comes_from, session=session ) lines_enum = preprocess(content, options) for line_number, line in lines_enum: req_iter = process_line(line, filename, line_number, finder, comes_from, options, session, wheel_cache, use_pep517=use_pep517, constraint=constraint) for req in req_iter: yield req def preprocess(content, options): # type: (Text, Optional[optparse.Values]) -> ReqFileLines """Split, filter, and join lines, and return a line iterator :param content: the content of the requirements file :param options: cli options """ lines_enum = enumerate(content.splitlines(), start=1) # type: ReqFileLines lines_enum = join_lines(lines_enum) lines_enum = ignore_comments(lines_enum) lines_enum = skip_regex(lines_enum, options) lines_enum = expand_env_variables(lines_enum) return lines_enum def process_line( line, # type: Text filename, # type: str line_number, # type: int finder=None, # type: Optional[PackageFinder] comes_from=None, # type: Optional[str] options=None, # type: Optional[optparse.Values] session=None, # type: Optional[PipSession] wheel_cache=None, # type: Optional[WheelCache] use_pep517=None, # type: Optional[bool] constraint=False, # type: bool ): # type: (...) -> Iterator[InstallRequirement] """Process a single requirements line; this can result in creating/yielding requirements, or updating the finder. For lines that contain requirements, the only options that have an effect are from SUPPORTED_OPTIONS_REQ, and they are scoped to the requirement. Other options from SUPPORTED_OPTIONS may be present, but are ignored. For lines that do not contain requirements, the only options that have an effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may be present, but are ignored. These lines may contain multiple options (although our docs imply only one is supported), and all are parsed and affect the finder. :param constraint: If True, parsing a constraints file.
:param options: OptionParser options that we may update """ parser = build_parser(line) defaults = parser.get_default_values() defaults.index_url = None if finder: defaults.format_control = finder.format_control args_str, options_str = break_args_options(line) # Prior to 2.7.3, shlex cannot deal with unicode entries if sys.version_info < (2, 7, 3): # https://github.com/python/mypy/issues/1174 options_str = options_str.encode('utf8') # type: ignore # https://github.com/python/mypy/issues/1174 opts, _ = parser.parse_args( shlex.split(options_str), defaults) # type: ignore # preserve for the nested code path line_comes_from = '%s %s (line %s)' % ( '-c' if constraint else '-r', filename, line_number, ) # yield a line requirement if args_str: isolated = options.isolated_mode if options else False if options: cmdoptions.check_install_build_global(options, opts) # get the options that apply to requirements req_options = {} for dest in SUPPORTED_OPTIONS_REQ_DEST: if dest in opts.__dict__ and opts.__dict__[dest]: req_options[dest] = opts.__dict__[dest] line_source = 'line {} of {}'.format(line_number, filename) yield install_req_from_line( args_str, comes_from=line_comes_from, use_pep517=use_pep517, isolated=isolated, options=req_options, wheel_cache=wheel_cache, constraint=constraint, line_source=line_source, ) # yield an editable requirement elif opts.editables: isolated = options.isolated_mode if options else False yield install_req_from_editable( opts.editables[0], comes_from=line_comes_from, use_pep517=use_pep517, constraint=constraint, isolated=isolated, wheel_cache=wheel_cache ) # parse a nested requirements file elif opts.requirements or opts.constraints: if opts.requirements: req_path = opts.requirements[0] nested_constraint = False else: req_path = opts.constraints[0] nested_constraint = True # original file is over http if SCHEME_RE.search(filename): # do a url join so relative paths work req_path = urllib_parse.urljoin(filename, req_path) # original file and nested file are paths elif not SCHEME_RE.search(req_path): # do a join so relative paths work req_path = os.path.join(os.path.dirname(filename), req_path) # TODO: Why not use `comes_from='-r {} (line {})'` here as well? parsed_reqs = parse_requirements( req_path, finder, comes_from, options, session, constraint=nested_constraint, wheel_cache=wheel_cache ) for req in parsed_reqs: yield req # percolate hash-checking option upward elif opts.require_hashes: options.require_hashes = opts.require_hashes # set finder options elif finder: find_links = finder.find_links index_urls = finder.index_urls if opts.index_url: index_urls = [opts.index_url] if opts.no_index is True: index_urls = [] if opts.extra_index_urls: index_urls.extend(opts.extra_index_urls) if opts.find_links: # FIXME: it would be nice to keep track of the source # of the find_links: support a find-links local path # relative to a requirements file. 
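# Illustrative sketch (hypothetical paths): for the line
# "--find-links ./wheels" in /work/requirements.txt, the join below
# produces /work/./wheels; if that path exists on disk, it replaces
# the raw "./wheels" value before being added to find_links.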
value = opts.find_links[0] req_dir = os.path.dirname(os.path.abspath(filename)) relative_to_reqs_file = os.path.join(req_dir, value) if os.path.exists(relative_to_reqs_file): value = relative_to_reqs_file find_links.append(value) search_scope = SearchScope( find_links=find_links, index_urls=index_urls, ) finder.search_scope = search_scope if opts.pre: finder.set_allow_all_prereleases() for host in opts.trusted_hosts or []: source = 'line {} of {}'.format(line_number, filename) finder.add_trusted_host(host, source=source) def break_args_options(line): # type: (Text) -> Tuple[str, Text] """Break up the line into an args and options string. We only want to shlex (and then optparse) the options, not the args. args can contain markers which are corrupted by shlex. """ tokens = line.split(' ') args = [] options = tokens[:] for token in tokens: if token.startswith('-') or token.startswith('--'): break else: args.append(token) options.pop(0) return ' '.join(args), ' '.join(options) # type: ignore def build_parser(line): # type: (Text) -> optparse.OptionParser """ Return a parser for parsing requirement lines """ parser = optparse.OptionParser(add_help_option=False) option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ for option_factory in option_factories: option = option_factory() parser.add_option(option) # By default optparse sys.exits on parsing errors. We want to wrap # that in our own exception. def parser_exit(self, msg): # type: (Any, str) -> NoReturn # add offending line msg = 'Invalid requirement: %s\n%s' % (line, msg) raise RequirementsFileParseError(msg) # NOTE: mypy disallows assigning to a method # https://github.com/python/mypy/issues/2427 parser.exit = parser_exit # type: ignore return parser def join_lines(lines_enum): # type: (ReqFileLines) -> ReqFileLines """Joins a line ending in '\' with the previous line (except when following comments). The joined line takes on the index of the first line. """ primary_line_number = None new_line = [] # type: List[Text] for line_number, line in lines_enum: if not line.endswith('\\') or COMMENT_RE.match(line): if COMMENT_RE.match(line): # this ensures comments are always matched later line = ' ' + line if new_line: new_line.append(line) yield primary_line_number, ''.join(new_line) new_line = [] else: yield line_number, line else: if not new_line: primary_line_number = line_number new_line.append(line.strip('\\')) # last line contains \ if new_line: yield primary_line_number, ''.join(new_line) # TODO: handle space after '\'. def ignore_comments(lines_enum): # type: (ReqFileLines) -> ReqFileLines """ Strips comments and filter empty lines. """ for line_number, line in lines_enum: line = COMMENT_RE.sub('', line) line = line.strip() if line: yield line_number, line def skip_regex(lines_enum, options): # type: (ReqFileLines, Optional[optparse.Values]) -> ReqFileLines """ Skip lines that match '--skip-requirements-regex' pattern Note: the regex pattern is only built once """ skip_regex = options.skip_requirements_regex if options else None if skip_regex: pattern = re.compile(skip_regex) lines_enum = filterfalse(lambda e: pattern.search(e[1]), lines_enum) return lines_enum def expand_env_variables(lines_enum): # type: (ReqFileLines) -> ReqFileLines """Replace all environment variables that can be retrieved via `os.getenv`. The only allowed format for environment variables defined in the requirement file is `${MY_VARIABLE_1}` to ensure two things: 1. Strings that contain a `$` aren't accidentally (partially) expanded. 2. 
Ensure consistency across platforms for requirement files. These points are the result of a discussion in github pull request #3514 (https://github.com/pypa/pip/pull/3514). Valid characters in variable names follow the POSIX standard and are limited to uppercase letters, digits and the `_` (underscore). """ for line_number, line in lines_enum: for env_var, var_name in ENV_VAR_RE.findall(line): value = os.getenv(var_name) if not value: continue line = line.replace(env_var, value) yield line_number, line ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/req_install.py0000644000175000017500000011655000000000000030472 0ustar00vincentvincent00000000000000from __future__ import absolute_import import logging import os import shutil import sys import sysconfig import zipfile from distutils.util import change_root from pip._vendor import pkg_resources, six from pip._vendor.packaging.requirements import Requirement from pip._vendor.packaging.utils import canonicalize_name from pip._vendor.packaging.version import Version from pip._vendor.packaging.version import parse as parse_version from pip._vendor.pep517.wrappers import Pep517HookCaller from pip._internal import wheel from pip._internal.build_env import NoOpBuildEnvironment from pip._internal.exceptions import InstallationError from pip._internal.models.link import Link from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path from pip._internal.req.req_uninstall import UninstallPathSet from pip._internal.utils.compat import native_str from pip._internal.utils.hashes import Hashes from pip._internal.utils.logging import indent_log from pip._internal.utils.marker_files import PIP_DELETE_MARKER_FILENAME from pip._internal.utils.misc import ( _make_build_dir, ask_path_exists, backup_dir, call_subprocess, display_path, dist_in_site_packages, dist_in_usersite, ensure_dir, get_installed_version, redact_password_from_url, rmtree, ) from pip._internal.utils.packaging import get_metadata from pip._internal.utils.setuptools_build import make_setuptools_shim_args from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.ui import open_spinner from pip._internal.utils.virtualenv import running_under_virtualenv from pip._internal.vcs import vcs if MYPY_CHECK_RUNNING: from typing import ( Any, Dict, Iterable, List, Mapping, Optional, Sequence, Union, ) from pip._internal.build_env import BuildEnvironment from pip._internal.cache import WheelCache from pip._internal.index import PackageFinder from pip._vendor.pkg_resources import Distribution from pip._vendor.packaging.specifiers import SpecifierSet from pip._vendor.packaging.markers import Marker logger = logging.getLogger(__name__) class InstallRequirement(object): """ Represents something that may be installed later on, may have information about where to fetch the relevant requirement and also contains logic for installing said requirement.
""" def __init__( self, req, # type: Optional[Requirement] comes_from, # type: Optional[Union[str, InstallRequirement]] source_dir=None, # type: Optional[str] editable=False, # type: bool link=None, # type: Optional[Link] update=True, # type: bool markers=None, # type: Optional[Marker] use_pep517=None, # type: Optional[bool] isolated=False, # type: bool options=None, # type: Optional[Dict[str, Any]] wheel_cache=None, # type: Optional[WheelCache] constraint=False, # type: bool extras=() # type: Iterable[str] ): # type: (...) -> None assert req is None or isinstance(req, Requirement), req self.req = req self.comes_from = comes_from self.constraint = constraint if source_dir is None: self.source_dir = None # type: Optional[str] else: self.source_dir = os.path.normpath(os.path.abspath(source_dir)) self.editable = editable self._wheel_cache = wheel_cache if link is None and req and req.url: # PEP 508 URL requirement link = Link(req.url) self.link = self.original_link = link if extras: self.extras = extras elif req: self.extras = { pkg_resources.safe_extra(extra) for extra in req.extras } else: self.extras = set() if markers is None and req: markers = req.marker self.markers = markers self._egg_info_path = None # type: Optional[str] # This holds the pkg_resources.Distribution object if this requirement # is already available: self.satisfied_by = None # This hold the pkg_resources.Distribution object if this requirement # conflicts with another installed distribution: self.conflicts_with = None # Temporary build location self._temp_build_dir = TempDirectory(kind="req-build") # Used to store the global directory where the _temp_build_dir should # have been created. Cf _correct_build_location method. self._ideal_build_dir = None # type: Optional[str] # True if the editable should be updated: self.update = update # Set to True after successful installation self.install_succeeded = None # type: Optional[bool] # UninstallPathSet of uninstalled distribution (for possible rollback) self.uninstalled_pathset = None self.options = options if options else {} # Set to True after successful preparation of this requirement self.prepared = False self.is_direct = False self.isolated = isolated self.build_env = NoOpBuildEnvironment() # type: BuildEnvironment # For PEP 517, the directory where we request the project metadata # gets stored. We need this to pass to build_wheel, so the backend # can ensure that the wheel matches the metadata (see the PEP for # details). self.metadata_directory = None # type: Optional[str] # The static build requirements (from pyproject.toml) self.pyproject_requires = None # type: Optional[List[str]] # Build requirements that we will check are available self.requirements_to_check = [] # type: List[str] # The PEP 517 backend we should use to build the project self.pep517_backend = None # type: Optional[Pep517HookCaller] # Are we using PEP 517 for this requirement? # After pyproject.toml has been loaded, the only valid values are True # and False. Before loading, None is valid (meaning "use the default"). # Setting an explicit value before loading pyproject.toml is supported, # but after loading this flag should be treated as read only. 
self.use_pep517 = use_pep517 def __str__(self): # type: () -> str if self.req: s = str(self.req) if self.link: s += ' from %s' % redact_password_from_url(self.link.url) elif self.link: s = redact_password_from_url(self.link.url) else: s = '' if self.satisfied_by is not None: s += ' in %s' % display_path(self.satisfied_by.location) if self.comes_from: if isinstance(self.comes_from, six.string_types): comes_from = self.comes_from # type: Optional[str] else: comes_from = self.comes_from.from_path() if comes_from: s += ' (from %s)' % comes_from return s def __repr__(self): # type: () -> str return '<%s object: %s editable=%r>' % ( self.__class__.__name__, str(self), self.editable) def format_debug(self): # type: () -> str """An untested helper for getting state, for debugging. """ attributes = vars(self) names = sorted(attributes) state = ( "{}={!r}".format(attr, attributes[attr]) for attr in sorted(names) ) return '<{name} object: {{{state}}}>'.format( name=self.__class__.__name__, state=", ".join(state), ) def populate_link(self, finder, upgrade, require_hashes): # type: (PackageFinder, bool, bool) -> None """Ensure that if a link can be found for this, that it is found. Note that self.link may still be None - if upgrade is False and the requirement is already installed. If require_hashes is True, don't use the wheel cache, because cached wheels, always built locally, have different hashes than the files downloaded from the index server and thus throw false hash mismatches. Furthermore, cached wheels at present have nondeterministic contents due to file modification times. """ if self.link is None: self.link = finder.find_requirement(self, upgrade) if self._wheel_cache is not None and not require_hashes: old_link = self.link self.link = self._wheel_cache.get(self.link, self.name) if old_link != self.link: logger.debug('Using cached wheel link: %s', self.link) # Things that are valid for all kinds of requirements? @property def name(self): # type: () -> Optional[str] if self.req is None: return None return native_str(pkg_resources.safe_name(self.req.name)) @property def specifier(self): # type: () -> SpecifierSet return self.req.specifier @property def is_pinned(self): # type: () -> bool """Return whether I am pinned to an exact version. For example, some-package==1.2 is pinned; some-package>1.2 is not. """ specifiers = self.specifier return (len(specifiers) == 1 and next(iter(specifiers)).operator in {'==', '==='}) @property def installed_version(self): # type: () -> Optional[str] return get_installed_version(self.name) def match_markers(self, extras_requested=None): # type: (Optional[Iterable[str]]) -> bool if not extras_requested: # Provide an extra to safely evaluate the markers # without matching any extra extras_requested = ('',) if self.markers is not None: return any( self.markers.evaluate({'extra': extra}) for extra in extras_requested) else: return True @property def has_hash_options(self): # type: () -> bool """Return whether any known-good hashes are specified as options. These activate --require-hashes mode; hashes specified as part of a URL do not. """ return bool(self.options.get('hashes', {})) def hashes(self, trust_internet=True): # type: (bool) -> Hashes """Return a hash-comparer that considers my option- and URL-based hashes to be known-good. Hashes in URLs--ones embedded in the requirements file, not ones downloaded from an index server--are almost peers with ones from flags.
They satisfy --require-hashes (whether it was implicitly or explicitly activated) but do not activate it. md5 and sha224 are not allowed in flags, which should nudge people toward good algos. We always OR all hashes together, even ones from URLs. :param trust_internet: Whether to trust URL-based (#md5=...) hashes downloaded from the internet, as by populate_link() """ good_hashes = self.options.get('hashes', {}).copy() link = self.link if trust_internet else self.original_link if link and link.hash: good_hashes.setdefault(link.hash_name, []).append(link.hash) return Hashes(good_hashes) def from_path(self): # type: () -> Optional[str] """Format a nice indicator to show where this "comes from" """ if self.req is None: return None s = str(self.req) if self.comes_from: if isinstance(self.comes_from, six.string_types): comes_from = self.comes_from else: comes_from = self.comes_from.from_path() if comes_from: s += '->' + comes_from return s def build_location(self, build_dir): # type: (str) -> str assert build_dir is not None if self._temp_build_dir.path is not None: return self._temp_build_dir.path if self.req is None: # for requirement via a path to a directory: the name of the # package is not available yet so we create a temp directory # Once run_egg_info will have run, we'll be able # to fix it via _correct_build_location # Some systems have /tmp as a symlink which confuses custom # builds (such as numpy). Thus, we ensure that the real path # is returned. self._temp_build_dir.create() self._ideal_build_dir = build_dir return self._temp_build_dir.path if self.editable: name = self.name.lower() else: name = self.name # FIXME: Is there a better place to create the build_dir? (hg and bzr # need this) if not os.path.exists(build_dir): logger.debug('Creating directory %s', build_dir) _make_build_dir(build_dir) return os.path.join(build_dir, name) def _correct_build_location(self): # type: () -> None """Move self._temp_build_dir to self._ideal_build_dir/self.req.name For some requirements (e.g. a path to a directory), the name of the package is not available until we run egg_info, so the build_location will return a temporary directory and store the _ideal_build_dir. This is only called by self.run_egg_info to fix the temporary build directory. 
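For illustration (hypothetical paths): a requirement prepared in the temporary directory /tmp/pip-req-build-abc123 whose metadata turns out to name the package "example" is moved by this method to the subdirectory "example" of the stored ideal build dir.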
""" if self.source_dir is not None: return assert self.req is not None assert self._temp_build_dir.path assert (self._ideal_build_dir is not None and self._ideal_build_dir.path) # type: ignore old_location = self._temp_build_dir.path self._temp_build_dir.path = None new_location = self.build_location(self._ideal_build_dir) if os.path.exists(new_location): raise InstallationError( 'A package already exists in %s; please remove it to continue' % display_path(new_location)) logger.debug( 'Moving package %s from %s to new location %s', self, display_path(old_location), display_path(new_location), ) shutil.move(old_location, new_location) self._temp_build_dir.path = new_location self._ideal_build_dir = None self.source_dir = os.path.normpath(os.path.abspath(new_location)) self._egg_info_path = None # Correct the metadata directory, if it exists if self.metadata_directory: old_meta = self.metadata_directory rel = os.path.relpath(old_meta, start=old_location) new_meta = os.path.join(new_location, rel) new_meta = os.path.normpath(os.path.abspath(new_meta)) self.metadata_directory = new_meta def remove_temporary_source(self): # type: () -> None """Remove the source files from this requirement, if they are marked for deletion""" if self.source_dir and os.path.exists( os.path.join(self.source_dir, PIP_DELETE_MARKER_FILENAME)): logger.debug('Removing source in %s', self.source_dir) rmtree(self.source_dir) self.source_dir = None self._temp_build_dir.cleanup() self.build_env.cleanup() def check_if_exists(self, use_user_site): # type: (bool) -> bool """Find an installed distribution that satisfies or conflicts with this requirement, and set self.satisfied_by or self.conflicts_with appropriately. """ if self.req is None: return False try: # get_distribution() will resolve the entire list of requirements # anyway, and we've already determined that we need the requirement # in question, so strip the marker so that we don't try to # evaluate it. no_marker = Requirement(str(self.req)) no_marker.marker = None self.satisfied_by = pkg_resources.get_distribution(str(no_marker)) if self.editable and self.satisfied_by: self.conflicts_with = self.satisfied_by # when installing editables, nothing pre-existing should ever # satisfy self.satisfied_by = None return True except pkg_resources.DistributionNotFound: return False except pkg_resources.VersionConflict: existing_dist = pkg_resources.get_distribution( self.req.name ) if use_user_site: if dist_in_usersite(existing_dist): self.conflicts_with = existing_dist elif (running_under_virtualenv() and dist_in_site_packages(existing_dist)): raise InstallationError( "Will not install to the user site because it will " "lack sys.path precedence to %s in %s" % (existing_dist.project_name, existing_dist.location) ) else: self.conflicts_with = existing_dist return True # Things valid for wheels @property def is_wheel(self): # type: () -> bool if not self.link: return False return self.link.is_wheel def move_wheel_files( self, wheeldir, # type: str root=None, # type: Optional[str] home=None, # type: Optional[str] prefix=None, # type: Optional[str] warn_script_location=True, # type: bool use_user_site=False, # type: bool pycompile=True # type: bool ): # type: (...) 
-> None wheel.move_wheel_files( self.name, self.req, wheeldir, user=use_user_site, home=home, root=root, prefix=prefix, pycompile=pycompile, isolated=self.isolated, warn_script_location=warn_script_location, ) # Things valid for sdists @property def setup_py_dir(self): # type: () -> str return os.path.join( self.source_dir, self.link and self.link.subdirectory_fragment or '') @property def setup_py_path(self): # type: () -> str assert self.source_dir, "No source dir for %s" % self setup_py = os.path.join(self.setup_py_dir, 'setup.py') # Python2 __file__ should not be unicode if six.PY2 and isinstance(setup_py, six.text_type): setup_py = setup_py.encode(sys.getfilesystemencoding()) return setup_py @property def pyproject_toml_path(self): # type: () -> str assert self.source_dir, "No source dir for %s" % self return make_pyproject_path(self.setup_py_dir) def load_pyproject_toml(self): # type: () -> None """Load the pyproject.toml file. After calling this routine, all of the attributes related to PEP 517 processing for this requirement have been set. In particular, the use_pep517 attribute can be used to determine whether we should follow the PEP 517 or legacy (setup.py) code path. """ pyproject_toml_data = load_pyproject_toml( self.use_pep517, self.pyproject_toml_path, self.setup_py_path, str(self) ) self.use_pep517 = (pyproject_toml_data is not None) if not self.use_pep517: return requires, backend, check = pyproject_toml_data self.requirements_to_check = check self.pyproject_requires = requires self.pep517_backend = Pep517HookCaller(self.setup_py_dir, backend) # Use a custom function to call subprocesses self.spin_message = "" def runner( cmd, # type: List[str] cwd=None, # type: Optional[str] extra_environ=None # type: Optional[Mapping[str, Any]] ): # type: (...) -> None with open_spinner(self.spin_message) as spinner: call_subprocess( cmd, cwd=cwd, extra_environ=extra_environ, spinner=spinner ) self.spin_message = "" self.pep517_backend._subprocess_runner = runner def prepare_metadata(self): # type: () -> None """Ensure that project metadata is available. Under PEP 517, call the backend hook to prepare the metadata. Under legacy processing, call setup.py egg-info. """ assert self.source_dir with indent_log(): if self.use_pep517: self.prepare_pep517_metadata() else: self.run_egg_info() if not self.req: if isinstance(parse_version(self.metadata["Version"]), Version): op = "==" else: op = "===" self.req = Requirement( "".join([ self.metadata["Name"], op, self.metadata["Version"], ]) ) self._correct_build_location() else: metadata_name = canonicalize_name(self.metadata["Name"]) if canonicalize_name(self.req.name) != metadata_name: logger.warning( 'Generating metadata for package %s ' 'produced metadata for project name %s. Fix your ' '#egg=%s fragments.', self.name, metadata_name, self.name ) self.req = Requirement(metadata_name) def prepare_pep517_metadata(self): # type: () -> None assert self.pep517_backend is not None metadata_dir = os.path.join( self.setup_py_dir, 'pip-wheel-metadata' ) ensure_dir(metadata_dir) with self.build_env: # Note that Pep517HookCaller implements a fallback for # prepare_metadata_for_build_wheel, so we don't have to # consider the possibility that this hook doesn't exist. 
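# Note (behaviour assumed from the pep517 wrapper, not stated in
# this file): if the backend does not define
# prepare_metadata_for_build_wheel, Pep517HookCaller falls back to
# building a wheel and extracting its .dist-info directory, so this
# call still succeeds.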
backend = self.pep517_backend self.spin_message = "Preparing wheel metadata" distinfo_dir = backend.prepare_metadata_for_build_wheel( metadata_dir ) self.metadata_directory = os.path.join(metadata_dir, distinfo_dir) def run_egg_info(self): # type: () -> None if self.name: logger.debug( 'Running setup.py (path:%s) egg_info for package %s', self.setup_py_path, self.name, ) else: logger.debug( 'Running setup.py (path:%s) egg_info for package from %s', self.setup_py_path, self.link, ) base_cmd = make_setuptools_shim_args(self.setup_py_path) if self.isolated: base_cmd += ["--no-user-cfg"] egg_info_cmd = base_cmd + ['egg_info'] # We can't put the .egg-info files at the root, because then the # source code will be mistaken for an installed egg, causing # problems if self.editable: egg_base_option = [] # type: List[str] else: egg_info_dir = os.path.join(self.setup_py_dir, 'pip-egg-info') ensure_dir(egg_info_dir) egg_base_option = ['--egg-base', 'pip-egg-info'] with self.build_env: call_subprocess( egg_info_cmd + egg_base_option, cwd=self.setup_py_dir, command_desc='python setup.py egg_info') @property def egg_info_path(self): # type: () -> str if self._egg_info_path is None: if self.editable: base = self.source_dir else: base = os.path.join(self.setup_py_dir, 'pip-egg-info') filenames = os.listdir(base) if self.editable: filenames = [] for root, dirs, files in os.walk(base): for dir in vcs.dirnames: if dir in dirs: dirs.remove(dir) # Iterate over a copy of ``dirs``, since mutating # a list while iterating over it can cause trouble. # (See https://github.com/pypa/pip/pull/462.) for dir in list(dirs): # Don't search in anything that looks like a virtualenv # environment if ( os.path.lexists( os.path.join(root, dir, 'bin', 'python') ) or os.path.exists( os.path.join( root, dir, 'Scripts', 'Python.exe' ) )): dirs.remove(dir) # Also don't search through tests elif dir == 'test' or dir == 'tests': dirs.remove(dir) filenames.extend([os.path.join(root, dir) for dir in dirs]) filenames = [f for f in filenames if f.endswith('.egg-info')] if not filenames: raise InstallationError( "Files/directories not found in %s" % base ) # if we have more than one match, we pick the toplevel one. This # can easily be the case if there is a dist folder which contains # an extracted tarball for testing purposes. if len(filenames) > 1: filenames.sort( key=lambda x: x.count(os.path.sep) + (os.path.altsep and x.count(os.path.altsep) or 0) ) self._egg_info_path = os.path.join(base, filenames[0]) return self._egg_info_path @property def metadata(self): # type: () -> Any if not hasattr(self, '_metadata'): self._metadata = get_metadata(self.get_dist()) return self._metadata def get_dist(self): # type: () -> Distribution """Return a pkg_resources.Distribution for this requirement""" if self.metadata_directory: dist_dir = self.metadata_directory dist_cls = pkg_resources.DistInfoDistribution else: dist_dir = self.egg_info_path.rstrip(os.path.sep) # https://github.com/python/mypy/issues/1174 dist_cls = pkg_resources.Distribution # type: ignore # dist_dir_name can be of the form ".dist-info" or # e.g. ".egg-info". 
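# Illustrative sketch (hypothetical path): for a dist_dir of
# /build/example/pip-egg-info/example.egg-info, the split below gives
# base_dir /build/example/pip-egg-info and dist_name "example".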
base_dir, dist_dir_name = os.path.split(dist_dir) dist_name = os.path.splitext(dist_dir_name)[0] metadata = pkg_resources.PathMetadata(base_dir, dist_dir) return dist_cls( base_dir, project_name=dist_name, metadata=metadata, ) def assert_source_matches_version(self): # type: () -> None assert self.source_dir version = self.metadata['version'] if self.req.specifier and version not in self.req.specifier: logger.warning( 'Requested %s, but installing version %s', self, version, ) else: logger.debug( 'Source in %s has version %s, which satisfies requirement %s', display_path(self.source_dir), version, self, ) # For both source distributions and editables def ensure_has_source_dir(self, parent_dir): # type: (str) -> str """Ensure that a source_dir is set. This will create a temporary build dir if the name of the requirement isn't known yet. :param parent_dir: The ideal pip parent_dir for the source_dir. Generally src_dir for editables and build_dir for sdists. :return: self.source_dir """ if self.source_dir is None: self.source_dir = self.build_location(parent_dir) return self.source_dir # For editable installations def install_editable( self, install_options, # type: List[str] global_options=(), # type: Sequence[str] prefix=None # type: Optional[str] ): # type: (...) -> None logger.info('Running setup.py develop for %s', self.name) if self.isolated: global_options = list(global_options) + ["--no-user-cfg"] if prefix: prefix_param = ['--prefix={}'.format(prefix)] install_options = list(install_options) + prefix_param with indent_log(): # FIXME: should we do --install-headers here too? with self.build_env: call_subprocess( make_setuptools_shim_args(self.setup_py_path) + list(global_options) + ['develop', '--no-deps'] + list(install_options), cwd=self.setup_py_dir, ) self.install_succeeded = True def update_editable(self, obtain=True): # type: (bool) -> None if not self.link: logger.debug( "Cannot update repository at %s; repository location is " "unknown", self.source_dir, ) return assert self.editable assert self.source_dir if self.link.scheme == 'file': # Static paths don't get updated return assert '+' in self.link.url, "bad url: %r" % self.link.url if not self.update: return vc_type, url = self.link.url.split('+', 1) vcs_backend = vcs.get_backend(vc_type) if vcs_backend: url = self.link.url if obtain: vcs_backend.obtain(self.source_dir, url=url) else: vcs_backend.export(self.source_dir, url=url) else: assert 0, ( 'Unexpected version control type (in %s): %s' % (self.link, vc_type)) # Top-level Actions def uninstall(self, auto_confirm=False, verbose=False, use_user_site=False): # type: (bool, bool, bool) -> Optional[UninstallPathSet] """ Uninstall the distribution currently satisfying this requirement. Prompts before removing or modifying files unless ``auto_confirm`` is True. Refuses to delete or modify files outside of ``sys.prefix`` - thus uninstallation within a virtual environment can only modify that virtual environment, even if the virtualenv is linked to global site-packages. """ if not self.check_if_exists(use_user_site): logger.warning("Skipping %s as it is not installed.", self.name) return None dist = self.satisfied_by or self.conflicts_with uninstalled_pathset = UninstallPathSet.from_dist(dist) uninstalled_pathset.remove(auto_confirm, verbose) return uninstalled_pathset def _clean_zip_name(self, name, prefix): # only used by archive. 
# type: (str, str) -> str assert name.startswith(prefix + os.path.sep), ( "name %r doesn't start with prefix %r" % (name, prefix) ) name = name[len(prefix) + 1:] name = name.replace(os.path.sep, '/') return name def _get_archive_name(self, path, parentdir, rootdir): # type: (str, str, str) -> str path = os.path.join(parentdir, path) name = self._clean_zip_name(path, rootdir) return self.name + '/' + name # TODO: Investigate if this should be kept in InstallRequirement # Seems to be used only when VCS + downloads def archive(self, build_dir): # type: (str) -> None assert self.source_dir create_archive = True archive_name = '%s-%s.zip' % (self.name, self.metadata["version"]) archive_path = os.path.join(build_dir, archive_name) if os.path.exists(archive_path): response = ask_path_exists( 'The file %s exists. (i)gnore, (w)ipe, (b)ackup, (a)bort ' % display_path(archive_path), ('i', 'w', 'b', 'a')) if response == 'i': create_archive = False elif response == 'w': logger.warning('Deleting %s', display_path(archive_path)) os.remove(archive_path) elif response == 'b': dest_file = backup_dir(archive_path) logger.warning( 'Backing up %s to %s', display_path(archive_path), display_path(dest_file), ) shutil.move(archive_path, dest_file) elif response == 'a': sys.exit(-1) if create_archive: zip = zipfile.ZipFile( archive_path, 'w', zipfile.ZIP_DEFLATED, allowZip64=True ) dir = os.path.normcase(os.path.abspath(self.setup_py_dir)) for dirpath, dirnames, filenames in os.walk(dir): if 'pip-egg-info' in dirnames: dirnames.remove('pip-egg-info') for dirname in dirnames: dir_arcname = self._get_archive_name(dirname, parentdir=dirpath, rootdir=dir) zipdir = zipfile.ZipInfo(dir_arcname + '/') zipdir.external_attr = 0x1ED << 16 # 0o755 zip.writestr(zipdir, '') for filename in filenames: if filename == PIP_DELETE_MARKER_FILENAME: continue file_arcname = self._get_archive_name(filename, parentdir=dirpath, rootdir=dir) filename = os.path.join(dirpath, filename) zip.write(filename, file_arcname) zip.close() logger.info('Saved %s', display_path(archive_path)) def install( self, install_options, # type: List[str] global_options=None, # type: Optional[Sequence[str]] root=None, # type: Optional[str] home=None, # type: Optional[str] prefix=None, # type: Optional[str] warn_script_location=True, # type: bool use_user_site=False, # type: bool pycompile=True # type: bool ): # type: (...) -> None global_options = global_options if global_options is not None else [] if self.editable: self.install_editable( install_options, global_options, prefix=prefix, ) return if self.is_wheel: version = wheel.wheel_version(self.source_dir) wheel.check_compatibility(version, self.name) self.move_wheel_files( self.source_dir, root=root, prefix=prefix, home=home, warn_script_location=warn_script_location, use_user_site=use_user_site, pycompile=pycompile, ) self.install_succeeded = True return # Extend the list of global and install options passed on to # the setup.py call with the ones from the requirements file. # Options specified in requirements file override those # specified on the command line, since the last option given # to setup.py is the one that is used. 
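# A minimal illustration of the precedence rule described above, with
# made-up option values (not pip's actual data): the per-requirement
# options are appended after the command-line ones, and because
# distutils lets the last occurrence of an option win, the
# requirements-file options take effect.
cli_global_options = ['--quiet']
requirements_file_options = ['--verbose']
merged = list(cli_global_options) + requirements_file_options
# setup.py is then invoked roughly as: setup.py --quiet --verbose install ...
print(merged)  # ['--quiet', '--verbose'] -- '--verbose', being last, wins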
global_options = list(global_options) + \ self.options.get('global_options', []) install_options = list(install_options) + \ self.options.get('install_options', []) if self.isolated: # https://github.com/python/mypy/issues/1174 global_options = global_options + ["--no-user-cfg"] # type: ignore with TempDirectory(kind="record") as temp_dir: record_filename = os.path.join(temp_dir.path, 'install-record.txt') install_args = self.get_install_args( global_options, record_filename, root, prefix, pycompile, ) msg = 'Running setup.py install for %s' % (self.name,) with open_spinner(msg) as spinner: with indent_log(): with self.build_env: call_subprocess( install_args + install_options, cwd=self.setup_py_dir, spinner=spinner, ) if not os.path.exists(record_filename): logger.debug('Record file %s not found', record_filename) return self.install_succeeded = True def prepend_root(path): # type: (str) -> str if root is None or not os.path.isabs(path): return path else: return change_root(root, path) with open(record_filename) as f: for line in f: directory = os.path.dirname(line) if directory.endswith('.egg-info'): egg_info_dir = prepend_root(directory) break else: logger.warning( 'Could not find .egg-info directory in install record' ' for %s', self, ) # FIXME: put the record somewhere # FIXME: should this be an error? return new_lines = [] with open(record_filename) as f: for line in f: filename = line.strip() if os.path.isdir(filename): filename += os.path.sep new_lines.append( os.path.relpath(prepend_root(filename), egg_info_dir) ) new_lines.sort() ensure_dir(egg_info_dir) inst_files_path = os.path.join(egg_info_dir, 'installed-files.txt') with open(inst_files_path, 'w') as f: f.write('\n'.join(new_lines) + '\n') def get_install_args( self, global_options, # type: Sequence[str] record_filename, # type: str root, # type: Optional[str] prefix, # type: Optional[str] pycompile # type: bool ): # type: (...) -> List[str] install_args = make_setuptools_shim_args(self.setup_py_path, unbuffered_output=True) install_args += list(global_options) + \ ['install', '--record', record_filename] install_args += ['--single-version-externally-managed'] if root is not None: install_args += ['--root', root] if prefix is not None: install_args += ['--prefix', prefix] if pycompile: install_args += ["--compile"] else: install_args += ["--no-compile"] if running_under_virtualenv(): py_ver_str = 'python' + sysconfig.get_python_version() install_args += ['--install-headers', os.path.join(sys.prefix, 'include', 'site', py_ver_str, self.name)] return install_args ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/req_set.py0000644000175000017500000001726400000000000027621 0ustar00vincentvincent00000000000000from __future__ import absolute_import import logging from collections import OrderedDict from pip._internal.exceptions import InstallationError from pip._internal.utils.logging import indent_log from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.wheel import Wheel if MYPY_CHECK_RUNNING: from typing import Dict, Iterable, List, Optional, Tuple from pip._internal.req.req_install import InstallRequirement logger = logging.getLogger(__name__) class RequirementSet(object): def __init__(self, require_hashes=False, check_supported_wheels=True): # type: (bool, bool) -> None """Create a RequirementSet. 
""" self.requirements = OrderedDict() # type: Dict[str, InstallRequirement] # noqa: E501 self.require_hashes = require_hashes self.check_supported_wheels = check_supported_wheels # Mapping of alias: real_name self.requirement_aliases = {} # type: Dict[str, str] self.unnamed_requirements = [] # type: List[InstallRequirement] self.successfully_downloaded = [] # type: List[InstallRequirement] self.reqs_to_cleanup = [] # type: List[InstallRequirement] def __str__(self): # type: () -> str reqs = [req for req in self.requirements.values() if not req.comes_from] reqs.sort(key=lambda req: req.name.lower()) return ' '.join([str(req.req) for req in reqs]) def __repr__(self): # type: () -> str reqs = [req for req in self.requirements.values()] reqs.sort(key=lambda req: req.name.lower()) reqs_str = ', '.join([str(req.req) for req in reqs]) return ('<%s object; %d requirement(s): %s>' % (self.__class__.__name__, len(reqs), reqs_str)) def add_requirement( self, install_req, # type: InstallRequirement parent_req_name=None, # type: Optional[str] extras_requested=None # type: Optional[Iterable[str]] ): # type: (...) -> Tuple[List[InstallRequirement], Optional[InstallRequirement]] # noqa: E501 """Add install_req as a requirement to install. :param parent_req_name: The name of the requirement that needed this added. The name is used because when multiple unnamed requirements resolve to the same name, we could otherwise end up with dependency links that point outside the Requirements set. parent_req must already be added. Note that None implies that this is a user supplied requirement, vs an inferred one. :param extras_requested: an iterable of extras used to evaluate the environment markers. :return: Additional requirements to scan. That is either [] if the requirement is not applicable, or [install_req] if the requirement is applicable and has just been added. """ name = install_req.name # If the markers do not match, ignore this requirement. if not install_req.match_markers(extras_requested): logger.info( "Ignoring %s: markers '%s' don't match your environment", name, install_req.markers, ) return [], None # If the wheel is not supported, raise an error. # Should check this after filtering out based on environment markers to # allow specifying different wheels based on the environment/OS, in a # single requirements file. if install_req.link and install_req.link.is_wheel: wheel = Wheel(install_req.link.filename) if self.check_supported_wheels and not wheel.supported(): raise InstallationError( "%s is not a supported wheel on this platform." % wheel.filename ) # This next bit is really a sanity check. assert install_req.is_direct == (parent_req_name is None), ( "a direct req shouldn't have a parent and also, " "a non direct req should have a parent" ) # Unnamed requirements are scanned again and the requirement won't be # added as a dependency until after scanning. 
if not name: # url or path requirement w/o an egg fragment self.unnamed_requirements.append(install_req) return [install_req], None try: existing_req = self.get_requirement(name) except KeyError: existing_req = None has_conflicting_requirement = ( parent_req_name is None and existing_req and not existing_req.constraint and existing_req.extras == install_req.extras and existing_req.req.specifier != install_req.req.specifier ) if has_conflicting_requirement: raise InstallationError( "Double requirement given: %s (already in %s, name=%r)" % (install_req, existing_req, name) ) # When no existing requirement exists, add the requirement as a # dependency and it will be scanned again after. if not existing_req: self.requirements[name] = install_req # FIXME: what about other normalizations? E.g., _ vs. -? if name.lower() != name: self.requirement_aliases[name.lower()] = name # We'd want to rescan this requirement later return [install_req], install_req # Assume there's no need to scan, and that we've already # encountered this for scanning. if install_req.constraint or not existing_req.constraint: return [], existing_req does_not_satisfy_constraint = ( install_req.link and not ( existing_req.link and install_req.link.path == existing_req.link.path ) ) if does_not_satisfy_constraint: self.reqs_to_cleanup.append(install_req) raise InstallationError( "Could not satisfy constraints for '%s': " "installation from path or url cannot be " "constrained to a version" % name, ) # If we're now installing a constraint, mark the existing # object for real installation. existing_req.constraint = False existing_req.extras = tuple(sorted( set(existing_req.extras) | set(install_req.extras) )) logger.debug( "Setting %s extras to: %s", existing_req, existing_req.extras, ) # Return the existing requirement for addition to the parent and # scanning again.
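# The extras merge performed above, in isolation, with assumed extras
# values: the union is sorted so the result is deterministic.
existing_extras = ('security',)
new_extras = ('socks', 'security')
merged_extras = tuple(sorted(set(existing_extras) | set(new_extras)))
print(merged_extras)  # ('security', 'socks')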
return [existing_req], existing_req def has_requirement(self, project_name): # type: (str) -> bool name = project_name.lower() if (name in self.requirements and not self.requirements[name].constraint or name in self.requirement_aliases and not self.requirements[self.requirement_aliases[name]].constraint): return True return False def get_requirement(self, project_name): # type: (str) -> InstallRequirement for name in project_name, project_name.lower(): if name in self.requirements: return self.requirements[name] if name in self.requirement_aliases: return self.requirements[self.requirement_aliases[name]] raise KeyError("No project with the name %r" % project_name) def cleanup_files(self): # type: () -> None """Clean up files, remove builds.""" logger.debug('Cleaning up...') with indent_log(): for req in self.reqs_to_cleanup: req.remove_temporary_source() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/req_tracker.py0000644000175000017500000000607100000000000030453 0ustar00vincentvincent00000000000000from __future__ import absolute_import import contextlib import errno import hashlib import logging import os from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from types import TracebackType from typing import Iterator, Optional, Set, Type from pip._internal.req.req_install import InstallRequirement from pip._internal.models.link import Link logger = logging.getLogger(__name__) class RequirementTracker(object): def __init__(self): # type: () -> None self._root = os.environ.get('PIP_REQ_TRACKER') if self._root is None: self._temp_dir = TempDirectory(delete=False, kind='req-tracker') self._temp_dir.create() self._root = os.environ['PIP_REQ_TRACKER'] = self._temp_dir.path logger.debug('Created requirements tracker %r', self._root) else: self._temp_dir = None logger.debug('Re-using requirements tracker %r', self._root) self._entries = set() # type: Set[InstallRequirement] def __enter__(self): # type: () -> RequirementTracker return self def __exit__( self, exc_type, # type: Optional[Type[BaseException]] exc_val, # type: Optional[BaseException] exc_tb # type: Optional[TracebackType] ): # type: (...) -> None self.cleanup() def _entry_path(self, link): # type: (Link) -> str hashed = hashlib.sha224(link.url_without_fragment.encode()).hexdigest() return os.path.join(self._root, hashed) def add(self, req): # type: (InstallRequirement) -> None link = req.link info = str(req) entry_path = self._entry_path(link) try: with open(entry_path) as fp: # Error, there's already a build in progress.
raise LookupError('%s is already being built: %s' % (link, fp.read())) except IOError as e: if e.errno != errno.ENOENT: raise assert req not in self._entries with open(entry_path, 'w') as fp: fp.write(info) self._entries.add(req) logger.debug('Added %s to build tracker %r', req, self._root) def remove(self, req): # type: (InstallRequirement) -> None link = req.link self._entries.remove(req) os.unlink(self._entry_path(link)) logger.debug('Removed %s from build tracker %r', req, self._root) def cleanup(self): # type: () -> None for req in set(self._entries): self.remove(req) remove = self._temp_dir is not None if remove: self._temp_dir.cleanup() logger.debug('%s build tracker %r', 'Removed' if remove else 'Cleaned', self._root) @contextlib.contextmanager def track(self, req): # type: (InstallRequirement) -> Iterator[None] self.add(req) yield self.remove(req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/req/req_uninstall.py0000644000175000017500000005510100000000000031027 0ustar00vincentvincent00000000000000from __future__ import absolute_import import csv import functools import logging import os import sys import sysconfig from pip._vendor import pkg_resources from pip._internal.exceptions import UninstallationError from pip._internal.locations import bin_py, bin_user from pip._internal.utils.compat import WINDOWS, cache_from_source, uses_pycache from pip._internal.utils.logging import indent_log from pip._internal.utils.misc import ( FakeFile, ask, dist_in_usersite, dist_is_local, egg_link_path, is_local, normalize_path, renames, rmtree, ) from pip._internal.utils.temp_dir import AdjacentTempDirectory, TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import ( Any, Callable, Dict, Iterable, Iterator, List, Optional, Set, Tuple, ) from pip._vendor.pkg_resources import Distribution logger = logging.getLogger(__name__) def _script_names(dist, script_name, is_gui): # type: (Distribution, str, bool) -> List[str] """Create the fully qualified name of the files created by {console,gui}_scripts for the given ``dist``. Returns the list of file names """ if dist_in_usersite(dist): bin_dir = bin_user else: bin_dir = bin_py exe_name = os.path.join(bin_dir, script_name) paths_to_remove = [exe_name] if WINDOWS: paths_to_remove.append(exe_name + '.exe') paths_to_remove.append(exe_name + '.exe.manifest') if is_gui: paths_to_remove.append(exe_name + '-script.pyw') else: paths_to_remove.append(exe_name + '-script.py') return paths_to_remove def _unique(fn): # type: (Callable) -> Callable[..., Iterator[Any]] @functools.wraps(fn) def unique(*args, **kw): # type: (Any, Any) -> Iterator[Any] seen = set() # type: Set[Any] for item in fn(*args, **kw): if item not in seen: seen.add(item) yield item return unique @_unique def uninstallation_paths(dist): # type: (Distribution) -> Iterator[str] """ Yield all the uninstallation paths for dist based on RECORD-without-.py[co] Yield paths to all the files in RECORD. For each .py file in RECORD, add the .pyc and .pyo in the same directory. UninstallPathSet.add() takes care of the __pycache__ .py[co]. 
""" r = csv.reader(FakeFile(dist.get_metadata_lines('RECORD'))) for row in r: path = os.path.join(dist.location, row[0]) yield path if path.endswith('.py'): dn, fn = os.path.split(path) base = fn[:-3] path = os.path.join(dn, base + '.pyc') yield path path = os.path.join(dn, base + '.pyo') yield path def compact(paths): # type: (Iterable[str]) -> Set[str] """Compact a path set to contain the minimal number of paths necessary to contain all paths in the set. If /a/path/ and /a/path/to/a/file.txt are both in the set, leave only the shorter path.""" sep = os.path.sep short_paths = set() # type: Set[str] for path in sorted(paths, key=len): should_skip = any( path.startswith(shortpath.rstrip("*")) and path[len(shortpath.rstrip("*").rstrip(sep))] == sep for shortpath in short_paths ) if not should_skip: short_paths.add(path) return short_paths def compress_for_rename(paths): # type: (Iterable[str]) -> Set[str] """Returns a set containing the paths that need to be renamed. This set may include directories when the original sequence of paths included every file on disk. """ case_map = dict((os.path.normcase(p), p) for p in paths) remaining = set(case_map) unchecked = sorted(set(os.path.split(p)[0] for p in case_map.values()), key=len) wildcards = set() # type: Set[str] def norm_join(*a): # type: (str) -> str return os.path.normcase(os.path.join(*a)) for root in unchecked: if any(os.path.normcase(root).startswith(w) for w in wildcards): # This directory has already been handled. continue all_files = set() # type: Set[str] all_subdirs = set() # type: Set[str] for dirname, subdirs, files in os.walk(root): all_subdirs.update(norm_join(root, dirname, d) for d in subdirs) all_files.update(norm_join(root, dirname, f) for f in files) # If all the files we found are in our remaining set of files to # remove, then remove them from the latter set and add a wildcard # for the directory. if not (all_files - remaining): remaining.difference_update(all_files) wildcards.add(root + os.sep) return set(map(case_map.__getitem__, remaining)) | wildcards def compress_for_output_listing(paths): # type: (Iterable[str]) -> Tuple[Set[str], Set[str]] """Returns a tuple of 2 sets of which paths to display to user The first set contains paths that would be deleted. Files of a package are not added and the top-level directory of the package has a '*' added at the end - to signify that all it's contents are removed. The second set contains files that would have been skipped in the above folders. """ will_remove = set(paths) will_skip = set() # Determine folders and files folders = set() files = set() for path in will_remove: if path.endswith(".pyc"): continue if path.endswith("__init__.py") or ".dist-info" in path: folders.add(os.path.dirname(path)) files.add(path) # probably this one https://github.com/python/mypy/issues/390 _normcased_files = set(map(os.path.normcase, files)) # type: ignore folders = compact(folders) # This walks the tree using os.walk to not miss extra folders # that might get added. for folder in folders: for dirpath, _, dirfiles in os.walk(folder): for fname in dirfiles: if fname.endswith(".pyc"): continue file_ = os.path.join(dirpath, fname) if (os.path.isfile(file_) and os.path.normcase(file_) not in _normcased_files): # We are skipping this file. Add it to the set. 
will_skip.add(file_) will_remove = files | { os.path.join(folder, "*") for folder in folders } return will_remove, will_skip class StashedUninstallPathSet(object): """A set of file rename operations to stash files while tentatively uninstalling them.""" def __init__(self): # type: () -> None # Mapping from source file root to [Adjacent]TempDirectory # for files under that directory. self._save_dirs = {} # type: Dict[str, TempDirectory] # (old path, new path) tuples for each move that may need # to be undone. self._moves = [] # type: List[Tuple[str, str]] def _get_directory_stash(self, path): # type: (str) -> str """Stashes a directory. Directories are stashed adjacent to their original location if possible, or else moved/copied into the user's temp dir.""" try: save_dir = AdjacentTempDirectory(path) # type: TempDirectory save_dir.create() except OSError: save_dir = TempDirectory(kind="uninstall") save_dir.create() self._save_dirs[os.path.normcase(path)] = save_dir return save_dir.path def _get_file_stash(self, path): # type: (str) -> str """Stashes a file. If no root has been provided, one will be created for the directory in the user's temp directory.""" path = os.path.normcase(path) head, old_head = os.path.dirname(path), None save_dir = None while head != old_head: try: save_dir = self._save_dirs[head] break except KeyError: pass head, old_head = os.path.dirname(head), head else: # Did not find any suitable root head = os.path.dirname(path) save_dir = TempDirectory(kind='uninstall') save_dir.create() self._save_dirs[head] = save_dir relpath = os.path.relpath(path, head) if relpath and relpath != os.path.curdir: return os.path.join(save_dir.path, relpath) return save_dir.path def stash(self, path): # type: (str) -> str """Stashes the directory or file and returns its new location. """ if os.path.isdir(path): new_path = self._get_directory_stash(path) else: new_path = self._get_file_stash(path) self._moves.append((path, new_path)) if os.path.isdir(path) and os.path.isdir(new_path): # If we're moving a directory, we need to # remove the destination first or else it will be # moved to inside the existing directory. # We just created new_path ourselves, so it will # be removable. os.rmdir(new_path) renames(path, new_path) return new_path def commit(self): # type: () -> None """Commits the uninstall by removing stashed files.""" for _, save_dir in self._save_dirs.items(): save_dir.cleanup() self._moves = [] self._save_dirs = {} def rollback(self): # type: () -> None """Undoes the uninstall by moving stashed files back.""" for p in self._moves: logging.info("Moving to %s\n from %s", *p) for new_path, path in self._moves: try: logger.debug('Replacing %s from %s', new_path, path) if os.path.isfile(new_path): os.unlink(new_path) elif os.path.isdir(new_path): rmtree(new_path) renames(path, new_path) except OSError as ex: logger.error("Failed to restore %s", new_path) logger.debug("Exception: %s", ex) self.commit() @property def can_rollback(self): # type: () -> bool return bool(self._moves) class UninstallPathSet(object): """A set of file paths to be removed in the uninstallation of a requirement.""" def __init__(self, dist): # type: (Distribution) -> None self.paths = set() # type: Set[str] self._refuse = set() # type: Set[str] self.pth = {} # type: Dict[str, UninstallPthEntries] self.dist = dist self._moved_paths = StashedUninstallPathSet() def _permitted(self, path): # type: (str) -> bool """ Return True if the given path is one we are permitted to remove/modify, False otherwise. 
""" return is_local(path) def add(self, path): # type: (str) -> None head, tail = os.path.split(path) # we normalize the head to resolve parent directory symlinks, but not # the tail, since we only want to uninstall symlinks, not their targets path = os.path.join(normalize_path(head), os.path.normcase(tail)) if not os.path.exists(path): return if self._permitted(path): self.paths.add(path) else: self._refuse.add(path) # __pycache__ files can show up after 'installed-files.txt' is created, # due to imports if os.path.splitext(path)[1] == '.py' and uses_pycache: self.add(cache_from_source(path)) def add_pth(self, pth_file, entry): # type: (str, str) -> None pth_file = normalize_path(pth_file) if self._permitted(pth_file): if pth_file not in self.pth: self.pth[pth_file] = UninstallPthEntries(pth_file) self.pth[pth_file].add(entry) else: self._refuse.add(pth_file) def remove(self, auto_confirm=False, verbose=False): # type: (bool, bool) -> None """Remove paths in ``self.paths`` with confirmation (unless ``auto_confirm`` is True).""" if not self.paths: logger.info( "Can't uninstall '%s'. No files were found to uninstall.", self.dist.project_name, ) return dist_name_version = ( self.dist.project_name + "-" + self.dist.version ) logger.info('Uninstalling %s:', dist_name_version) with indent_log(): if auto_confirm or self._allowed_to_proceed(verbose): moved = self._moved_paths for_rename = compress_for_rename(self.paths) for path in sorted(compact(for_rename)): moved.stash(path) logger.debug('Removing file or directory %s', path) for pth in self.pth.values(): pth.remove() logger.info('Successfully uninstalled %s', dist_name_version) def _allowed_to_proceed(self, verbose): # type: (bool) -> bool """Display which files would be deleted and prompt for confirmation """ def _display(msg, paths): # type: (str, Iterable[str]) -> None if not paths: return logger.info(msg) with indent_log(): for path in sorted(compact(paths)): logger.info(path) if not verbose: will_remove, will_skip = compress_for_output_listing(self.paths) else: # In verbose mode, display all the files that are going to be # deleted. will_remove = set(self.paths) will_skip = set() _display('Would remove:', will_remove) _display('Would not remove (might be manually added):', will_skip) _display('Would not remove (outside of prefix):', self._refuse) if verbose: _display('Will actually move:', compress_for_rename(self.paths)) return ask('Proceed (y/n)? 
', ('y', 'n')) == 'y' def rollback(self): # type: () -> None """Rollback the changes previously made by remove().""" if not self._moved_paths.can_rollback: logger.error( "Can't roll back %s; was not uninstalled", self.dist.project_name, ) return logger.info('Rolling back uninstall of %s', self.dist.project_name) self._moved_paths.rollback() for pth in self.pth.values(): pth.rollback() def commit(self): # type: () -> None """Remove temporary save dir: rollback will no longer be possible.""" self._moved_paths.commit() @classmethod def from_dist(cls, dist): # type: (Distribution) -> UninstallPathSet dist_path = normalize_path(dist.location) if not dist_is_local(dist): logger.info( "Not uninstalling %s at %s, outside environment %s", dist.key, dist_path, sys.prefix, ) return cls(dist) if dist_path in {p for p in {sysconfig.get_path("stdlib"), sysconfig.get_path("platstdlib")} if p}: logger.info( "Not uninstalling %s at %s, as it is in the standard library.", dist.key, dist_path, ) return cls(dist) paths_to_remove = cls(dist) develop_egg_link = egg_link_path(dist) develop_egg_link_egg_info = '{}.egg-info'.format( pkg_resources.to_filename(dist.project_name)) egg_info_exists = dist.egg_info and os.path.exists(dist.egg_info) # Special case for distutils installed package distutils_egg_info = getattr(dist._provider, 'path', None) # The order of the uninstall cases matters: in the case of 2 installs of the # same package, pip needs to uninstall the currently detected version if (egg_info_exists and dist.egg_info.endswith('.egg-info') and not dist.egg_info.endswith(develop_egg_link_egg_info)): # if dist.egg_info.endswith(develop_egg_link_egg_info), we # are in fact in the develop_egg_link case paths_to_remove.add(dist.egg_info) if dist.has_metadata('installed-files.txt'): for installed_file in dist.get_metadata( 'installed-files.txt').splitlines(): path = os.path.normpath( os.path.join(dist.egg_info, installed_file) ) paths_to_remove.add(path) # FIXME: need a test for this elif block # occurs with --single-version-externally-managed/--record outside # of pip elif dist.has_metadata('top_level.txt'): if dist.has_metadata('namespace_packages.txt'): namespaces = dist.get_metadata('namespace_packages.txt') else: namespaces = [] for top_level_pkg in [ p for p in dist.get_metadata('top_level.txt').splitlines() if p and p not in namespaces]: path = os.path.join(dist.location, top_level_pkg) paths_to_remove.add(path) paths_to_remove.add(path + '.py') paths_to_remove.add(path + '.pyc') paths_to_remove.add(path + '.pyo') elif distutils_egg_info: raise UninstallationError( "Cannot uninstall {!r}. It is a distutils installed project " "and thus we cannot accurately determine which files belong " "to it which would lead to only a partial uninstall.".format( dist.project_name, ) ) elif dist.location.endswith('.egg'): # package installed by easy_install # We cannot match on dist.egg_name because it can slightly vary # i.e.
setuptools-0.6c11-py2.6.egg vs setuptools-0.6rc11-py2.6.egg paths_to_remove.add(dist.location) easy_install_egg = os.path.split(dist.location)[1] easy_install_pth = os.path.join(os.path.dirname(dist.location), 'easy-install.pth') paths_to_remove.add_pth(easy_install_pth, './' + easy_install_egg) elif egg_info_exists and dist.egg_info.endswith('.dist-info'): for path in uninstallation_paths(dist): paths_to_remove.add(path) elif develop_egg_link: # develop egg with open(develop_egg_link, 'r') as fh: link_pointer = os.path.normcase(fh.readline().strip()) assert (link_pointer == dist.location), ( 'Egg-link %s does not match installed location of %s ' '(at %s)' % (link_pointer, dist.project_name, dist.location) ) paths_to_remove.add(develop_egg_link) easy_install_pth = os.path.join(os.path.dirname(develop_egg_link), 'easy-install.pth') paths_to_remove.add_pth(easy_install_pth, dist.location) else: logger.debug( 'Not sure how to uninstall: %s - Check: %s', dist, dist.location, ) # find distutils scripts= scripts if dist.has_metadata('scripts') and dist.metadata_isdir('scripts'): for script in dist.metadata_listdir('scripts'): if dist_in_usersite(dist): bin_dir = bin_user else: bin_dir = bin_py paths_to_remove.add(os.path.join(bin_dir, script)) if WINDOWS: paths_to_remove.add(os.path.join(bin_dir, script) + '.bat') # find console_scripts _scripts_to_remove = [] console_scripts = dist.get_entry_map(group='console_scripts') for name in console_scripts.keys(): _scripts_to_remove.extend(_script_names(dist, name, False)) # find gui_scripts gui_scripts = dist.get_entry_map(group='gui_scripts') for name in gui_scripts.keys(): _scripts_to_remove.extend(_script_names(dist, name, True)) for s in _scripts_to_remove: paths_to_remove.add(s) return paths_to_remove class UninstallPthEntries(object): def __init__(self, pth_file): # type: (str) -> None if not os.path.isfile(pth_file): raise UninstallationError( "Cannot remove entries from nonexistent file %s" % pth_file ) self.file = pth_file self.entries = set() # type: Set[str] self._saved_lines = None # type: Optional[List[bytes]] def add(self, entry): # type: (str) -> None entry = os.path.normcase(entry) # On Windows, os.path.normcase converts the entry to use # backslashes. This is correct for entries that describe absolute # paths outside of site-packages, but all the others use forward # slashes. 
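# An illustration of the normalisation rule described above, runnable
# on any OS by using ntpath explicitly; the pth entry is an assumed
# example value.
import ntpath

entry = ntpath.normcase('./Demo/Pkg')  # -> '.\\demo\\pkg'
if not ntpath.splitdrive(entry)[0]:    # no drive letter: a relative entry
    entry = entry.replace('\\', '/')
print(entry)  # './demo/pkg' -- relative entries keep forward slashes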
if WINDOWS and not os.path.splitdrive(entry)[0]: entry = entry.replace('\\', '/') self.entries.add(entry) def remove(self): # type: () -> None logger.debug('Removing pth entries from %s:', self.file) with open(self.file, 'rb') as fh: # windows uses '\r\n' with py3k, but uses '\n' with py2.x lines = fh.readlines() self._saved_lines = lines if any(b'\r\n' in line for line in lines): endline = '\r\n' else: endline = '\n' # handle missing trailing newline if lines and not lines[-1].endswith(endline.encode("utf-8")): lines[-1] = lines[-1] + endline.encode("utf-8") for entry in self.entries: try: logger.debug('Removing entry: %s', entry) lines.remove((entry + endline).encode("utf-8")) except ValueError: pass with open(self.file, 'wb') as fh: fh.writelines(lines) def rollback(self): # type: () -> bool if self._saved_lines is None: logger.error( 'Cannot roll back changes to %s, none were made', self.file ) return False logger.debug('Rolling %s back to previous state', self.file) with open(self.file, 'wb') as fh: fh.writelines(self._saved_lines) return True ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1604735099.949994 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/0000755000175000017500000000000000000000000026144 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/__init__.py0000644000175000017500000000000000000000000030243 0ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/appdirs.py0000644000175000017500000002226600000000000030170 0ustar00vincentvincent00000000000000""" This code was taken from https://github.com/ActiveState/appdirs and modified to suit our purposes. """ from __future__ import absolute_import import os import sys from pip._vendor.six import PY2, text_type from pip._internal.utils.compat import WINDOWS, expanduser from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import List def user_cache_dir(appname): # type: (str) -> str r""" Return full path to the user-specific cache dir for this application. "appname" is the name of application. Typical user cache directories are: macOS: ~/Library/Caches/ Unix: ~/.cache/ (XDG default) Windows: C:\Users\\AppData\Local\\Cache On Windows the only suggestion in the MSDN docs is that local settings go in the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming app data dir (the default returned by `user_data_dir`). Apps typically put cache data somewhere *under* the given dir here. Some examples: ...\Mozilla\Firefox\Profiles\\Cache ...\Acme\SuperApp\Cache\1.0 OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value. """ if WINDOWS: # Get the base path path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA")) # When using Python 2, return paths as bytes on Windows like we do on # other operating systems. See helper function docs for more details. 
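# The Unix fallback used by user_cache_dir below, shown standalone;
# "pip" as the appname is just for illustration.
import os
cache_home = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
print(os.path.join(cache_home, 'pip'))  # e.g. /home/user/.cache/pip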
if PY2 and isinstance(path, text_type): path = _win_path_to_bytes(path) # Add our app name and Cache directory to it path = os.path.join(path, appname, "Cache") elif sys.platform == "darwin": # Get the base path path = expanduser("~/Library/Caches") # Add our app name to it path = os.path.join(path, appname) else: # Get the base path path = os.getenv("XDG_CACHE_HOME", expanduser("~/.cache")) # Add our app name to it path = os.path.join(path, appname) return path def user_data_dir(appname, roaming=False): # type: (str, bool) -> str r""" Return full path to the user-specific data dir for this application. "appname" is the name of application. If None, just the system directory is returned. "roaming" (boolean, default False) can be set True to use the Windows roaming appdata directory. That means that for users on a Windows network setup for roaming profiles, this user data will be sync'd on login. See for a discussion of issues. Typical user data directories are: macOS: ~/Library/Application Support/ if it exists, else ~/.config/ Unix: ~/.local/share/ # or in $XDG_DATA_HOME, if defined Win XP (not roaming): C:\Documents and Settings\\ ... ...Application Data\ Win XP (roaming): C:\Documents and Settings\\Local ... ...Settings\Application Data\ Win 7 (not roaming): C:\\Users\\AppData\Local\ Win 7 (roaming): C:\\Users\\AppData\Roaming\ For Unix, we follow the XDG spec and support $XDG_DATA_HOME. That means, by default "~/.local/share/". """ if WINDOWS: const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" path = os.path.join(os.path.normpath(_get_win_folder(const)), appname) elif sys.platform == "darwin": path = os.path.join( expanduser('~/Library/Application Support/'), appname, ) if os.path.isdir(os.path.join( expanduser('~/Library/Application Support/'), appname, ) ) else os.path.join( expanduser('~/.config/'), appname, ) else: path = os.path.join( os.getenv('XDG_DATA_HOME', expanduser("~/.local/share")), appname, ) return path def user_config_dir(appname, roaming=True): # type: (str, bool) -> str """Return full path to the user-specific config dir for this application. "appname" is the name of application. If None, just the system directory is returned. "roaming" (boolean, default True) can be set False to not use the Windows roaming appdata directory. That means that for users on a Windows network setup for roaming profiles, this user data will be sync'd on login. See for a discussion of issues. Typical user data directories are: macOS: same as user_data_dir Unix: ~/.config/ Win *: same as user_data_dir For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME. That means, by default "~/.config/". """ if WINDOWS: path = user_data_dir(appname, roaming=roaming) elif sys.platform == "darwin": path = user_data_dir(appname) else: path = os.getenv('XDG_CONFIG_HOME', expanduser("~/.config")) path = os.path.join(path, appname) return path # for the discussion regarding site_config_dirs locations # see def site_config_dirs(appname): # type: (str) -> List[str] r"""Return a list of potential user-shared config dirs for this application. "appname" is the name of application. Typical user config directories are: macOS: /Library/Application Support// Unix: /etc or $XDG_CONFIG_DIRS[i]// for each value in $XDG_CONFIG_DIRS Win XP: C:\Documents and Settings\All Users\Application ... ...Data\\ Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) 
Win 7: Hidden, but writeable on Win 7: C:\ProgramData\\ """ if WINDOWS: path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA")) pathlist = [os.path.join(path, appname)] elif sys.platform == 'darwin': pathlist = [os.path.join('/Library/Application Support', appname)] else: # try looking in $XDG_CONFIG_DIRS xdg_config_dirs = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg') if xdg_config_dirs: pathlist = [ os.path.join(expanduser(x), appname) for x in xdg_config_dirs.split(os.pathsep) ] else: pathlist = [] # always look in /etc directly as well pathlist.append('/etc') return pathlist # -- Windows support functions -- def _get_win_folder_from_registry(csidl_name): # type: (str) -> str """ This is a fallback technique at best. I'm not sure if using the registry for this guarantees us the correct answer for all CSIDL_* names. """ import _winreg shell_folder_name = { "CSIDL_APPDATA": "AppData", "CSIDL_COMMON_APPDATA": "Common AppData", "CSIDL_LOCAL_APPDATA": "Local AppData", }[csidl_name] key = _winreg.OpenKey( _winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" ) directory, _type = _winreg.QueryValueEx(key, shell_folder_name) return directory def _get_win_folder_with_ctypes(csidl_name): # type: (str) -> str csidl_const = { "CSIDL_APPDATA": 26, "CSIDL_COMMON_APPDATA": 35, "CSIDL_LOCAL_APPDATA": 28, }[csidl_name] buf = ctypes.create_unicode_buffer(1024) ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) # Downgrade to short path name if have highbit chars. See # . has_high_char = False for c in buf: if ord(c) > 255: has_high_char = True break if has_high_char: buf2 = ctypes.create_unicode_buffer(1024) if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): buf = buf2 return buf.value if WINDOWS: try: import ctypes _get_win_folder = _get_win_folder_with_ctypes except ImportError: _get_win_folder = _get_win_folder_from_registry def _win_path_to_bytes(path): """Encode Windows paths to bytes. Only used on Python 2. Motivation is to be consistent with other operating systems where paths are also returned as bytes. This avoids problems mixing bytes and Unicode elsewhere in the codebase. For more details and discussion see . If encoding using ASCII and MBCS fails, return the original Unicode path. """ for encoding in ('ASCII', 'MBCS'): try: return path.encode(encoding) except (UnicodeEncodeError, LookupError): pass return path ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/compat.py0000644000175000017500000002257400000000000030013 0ustar00vincentvincent00000000000000"""Stuff that differs in different Python versions and platform distributions.""" from __future__ import absolute_import, division import codecs import locale import logging import os import shutil import sys from pip._vendor.six import text_type from pip._vendor.urllib3.util import IS_PYOPENSSL from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Optional, Text, Tuple, Union try: import _ssl # noqa except ImportError: ssl = None else: # This additional assignment was needed to prevent a mypy error. 
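# The $XDG_CONFIG_DIRS handling of site_config_dirs above, as a
# standalone sketch; the appname "pip" and the '/etc/xdg' default
# mirror the code.
import os
xdg_config_dirs = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
pathlist = [os.path.join(os.path.expanduser(x), 'pip')
            for x in xdg_config_dirs.split(os.pathsep)]
pathlist.append('/etc')
print(pathlist)  # e.g. ['/etc/xdg/pip', '/etc']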
ssl = _ssl try: import ipaddress except ImportError: try: from pip._vendor import ipaddress # type: ignore except ImportError: import ipaddr as ipaddress # type: ignore ipaddress.ip_address = ipaddress.IPAddress # type: ignore ipaddress.ip_network = ipaddress.IPNetwork # type: ignore __all__ = [ "ipaddress", "uses_pycache", "console_to_str", "native_str", "get_path_uid", "stdlib_pkgs", "WINDOWS", "samefile", "get_terminal_size", "get_extension_suffixes", ] logger = logging.getLogger(__name__) HAS_TLS = (ssl is not None) or IS_PYOPENSSL if sys.version_info >= (3, 4): uses_pycache = True from importlib.util import cache_from_source else: import imp try: cache_from_source = imp.cache_from_source # type: ignore except AttributeError: # does not use __pycache__ cache_from_source = None uses_pycache = cache_from_source is not None if sys.version_info >= (3, 5): backslashreplace_decode = "backslashreplace" else: # In version 3.4 and older, backslashreplace exists # but does not support use for decoding. # We implement our own replace handler for this # situation, so that we can consistently use # backslash replacement for all versions. def backslashreplace_decode_fn(err): raw_bytes = (err.object[i] for i in range(err.start, err.end)) if sys.version_info[0] == 2: # Python 2 gave us characters - convert to numeric bytes raw_bytes = (ord(b) for b in raw_bytes) return u"".join(u"\\x%x" % c for c in raw_bytes), err.end codecs.register_error( "backslashreplace_decode", backslashreplace_decode_fn, ) backslashreplace_decode = "backslashreplace_decode" def str_to_display(data, desc=None): # type: (Union[bytes, Text], Optional[str]) -> Text """ For display or logging purposes, convert a bytes object (or text) to text (e.g. unicode in Python 2) safe for output. :param desc: An optional phrase describing the input data, for use in the log message if a warning is logged. Defaults to "Bytes object". This function should never error out and so can take a best effort approach. It is okay to be lossy if needed since the return value is just for display. We assume the data is in the locale preferred encoding. If it won't decode properly, we warn the user but decode as best we can. We also ensure that the output can be safely written to standard output without encoding errors. """ if isinstance(data, text_type): return data # Otherwise, data is a bytes object (str in Python 2). # First, get the encoding we assume. This is the preferred # encoding for the locale, unless that is not found, or # it is ASCII, in which case assume UTF-8 encoding = locale.getpreferredencoding() if (not encoding) or codecs.lookup(encoding).name == "ascii": encoding = "utf-8" # Now try to decode the data - if we fail, warn the user and # decode with replacement. try: decoded_data = data.decode(encoding) except UnicodeDecodeError: if desc is None: desc = 'Bytes object' msg_format = '{} does not appear to be encoded as %s'.format(desc) logger.warning(msg_format, encoding) decoded_data = data.decode(encoding, errors=backslashreplace_decode) # Make sure we can print the output, by encoding it to the output # encoding with replacement of unencodable characters, and then # decoding again. # We use stderr's encoding because it's less likely to be # redirected and if we don't find an encoding we skip this # step (on the assumption that output is wrapped by something # that won't fail). 
# The double getattr is to deal with the possibility that we're # being called in a situation where sys.__stderr__ doesn't exist, # or doesn't have an encoding attribute. Neither of these cases # should occur in normal pip use, but there's no harm in checking # in case people use pip in (unsupported) unusual situations. output_encoding = getattr(getattr(sys, "__stderr__", None), "encoding", None) if output_encoding: output_encoded = decoded_data.encode( output_encoding, errors="backslashreplace" ) decoded_data = output_encoded.decode(output_encoding) return decoded_data def console_to_str(data): # type: (bytes) -> Text """Return a string, safe for output, of subprocess output. """ return str_to_display(data, desc='Subprocess output') if sys.version_info >= (3,): def native_str(s, replace=False): # type: (str, bool) -> str if isinstance(s, bytes): return s.decode('utf-8', 'replace' if replace else 'strict') return s else: def native_str(s, replace=False): # type: (str, bool) -> str # Replace is ignored -- unicode to UTF-8 can't fail if isinstance(s, text_type): return s.encode('utf-8') return s def get_path_uid(path): # type: (str) -> int """ Return path's uid. Does not follow symlinks: https://github.com/pypa/pip/pull/935#discussion_r5307003 Placed this function in compat due to differences on AIX and Jython, that should eventually go away. :raises OSError: When path is a symlink or can't be read. """ if hasattr(os, 'O_NOFOLLOW'): fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW) file_uid = os.fstat(fd).st_uid os.close(fd) else: # AIX and Jython # WARNING: time of check vulnerability, but best we can do w/o NOFOLLOW if not os.path.islink(path): # older versions of Jython don't have `os.fstat` file_uid = os.stat(path).st_uid else: # raise OSError for parity with os.O_NOFOLLOW above raise OSError( "%s is a symlink; Will not return uid for symlinks" % path ) return file_uid if sys.version_info >= (3, 4): from importlib.machinery import EXTENSION_SUFFIXES def get_extension_suffixes(): return EXTENSION_SUFFIXES else: from imp import get_suffixes def get_extension_suffixes(): return [suffix[0] for suffix in get_suffixes()] def expanduser(path): # type: (str) -> str """ Expand ~ and ~user constructions. Includes a workaround for https://bugs.python.org/issue14768 """ expanded = os.path.expanduser(path) if path.startswith('~/') and expanded.startswith('//'): expanded = expanded[1:] return expanded # packages in the stdlib that may have installation metadata, but should not be # considered 'installed'. this theoretically could be determined based on # dist.location (py27:`sysconfig.get_paths()['stdlib']`, # py26:sysconfig.get_config_vars('LIBDEST')), but fear platform variation may # make this ineffective, so hard-coding stdlib_pkgs = {"python", "wsgiref", "argparse"} # windows detection, covers cpython and ironpython WINDOWS = (sys.platform.startswith("win") or (sys.platform == 'cli' and os.name == 'nt')) def samefile(file1, file2): # type: (str, str) -> bool """Provide an alternative for os.path.samefile on Windows/Python2""" if hasattr(os.path, 'samefile'): return os.path.samefile(file1, file2) else: path1 = os.path.normcase(os.path.abspath(file1)) path2 = os.path.normcase(os.path.abspath(file2)) return path1 == path2 if hasattr(shutil, 'get_terminal_size'): def get_terminal_size(): # type: () -> Tuple[int, int] """ Returns a tuple (x, y) representing the width(x) and the height(y) in characters of the terminal window. 
""" return tuple(shutil.get_terminal_size()) # type: ignore else: def get_terminal_size(): # type: () -> Tuple[int, int] """ Returns a tuple (x, y) representing the width(x) and the height(y) in characters of the terminal window. """ def ioctl_GWINSZ(fd): try: import fcntl import termios import struct cr = struct.unpack_from( 'hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '12345678') ) except Exception: return None if cr == (0, 0): return None return cr cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2) if not cr: try: fd = os.open(os.ctermid(), os.O_RDONLY) cr = ioctl_GWINSZ(fd) os.close(fd) except Exception: pass if not cr: cr = (os.environ.get('LINES', 25), os.environ.get('COLUMNS', 80)) return int(cr[1]), int(cr[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/deprecation.py0000644000175000017500000000621100000000000031013 0ustar00vincentvincent00000000000000""" A module that implements tooling to enable easy warnings about deprecations. """ from __future__ import absolute_import import logging import warnings from pip._vendor.packaging.version import parse from pip import __version__ as current_version from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Any, Optional DEPRECATION_MSG_PREFIX = "DEPRECATION: " class PipDeprecationWarning(Warning): pass _original_showwarning = None # type: Any # Warnings <-> Logging Integration def _showwarning(message, category, filename, lineno, file=None, line=None): if file is not None: if _original_showwarning is not None: _original_showwarning( message, category, filename, lineno, file, line, ) elif issubclass(category, PipDeprecationWarning): # We use a specially named logger which will handle all of the # deprecation messages for pip. logger = logging.getLogger("pip._internal.deprecations") logger.warning(message) else: _original_showwarning( message, category, filename, lineno, file, line, ) def install_warning_logger(): # type: () -> None # Enable our Deprecation Warnings warnings.simplefilter("default", PipDeprecationWarning, append=True) global _original_showwarning if _original_showwarning is None: _original_showwarning = warnings.showwarning warnings.showwarning = _showwarning def deprecated(reason, replacement, gone_in, issue=None): # type: (str, Optional[str], Optional[str], Optional[int]) -> None """Helper to deprecate existing functionality. reason: Textual reason shown to the user about why this functionality has been deprecated. replacement: Textual suggestion shown to the user about what alternative functionality they can use. gone_in: The version of pip does this functionality should get removed in. Raises errors if pip's current version is greater than or equal to this. issue: Issue number on the tracker that would serve as a useful place for users to find related discussion and provide feedback. Always pass replacement, gone_in and issue as keyword arguments for clarity at the call site. """ # Construct a nice message. # This is eagerly formatted as we want it to get logged as if someone # typed this entire message out. sentences = [ (reason, DEPRECATION_MSG_PREFIX + "{}"), (gone_in, "pip {} will remove support for this functionality."), (replacement, "A possible replacement is {}."), (issue, ( "You can find discussion regarding this at " "https://github.com/pypa/pip/issues/{}." 
)), ] message = " ".join( template.format(val) for val, template in sentences if val is not None ) # Raise as an error if it has to be removed. if gone_in is not None and parse(current_version) >= parse(gone_in): raise PipDeprecationWarning(message) warnings.warn(message, category=PipDeprecationWarning, stacklevel=2) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/encoding.py0000644000175000017500000000230200000000000030301 0ustar00vincentvincent00000000000000import codecs import locale import re import sys from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import List, Tuple, Text BOMS = [ (codecs.BOM_UTF8, 'utf-8'), (codecs.BOM_UTF16, 'utf-16'), (codecs.BOM_UTF16_BE, 'utf-16-be'), (codecs.BOM_UTF16_LE, 'utf-16-le'), (codecs.BOM_UTF32, 'utf-32'), (codecs.BOM_UTF32_BE, 'utf-32-be'), (codecs.BOM_UTF32_LE, 'utf-32-le'), ] # type: List[Tuple[bytes, Text]] ENCODING_RE = re.compile(br'coding[:=]\s*([-\w.]+)') def auto_decode(data): # type: (bytes) -> Text """Check a bytes string for a BOM to correctly detect the encoding Fallback to locale.getpreferredencoding(False) like open() on Python3""" for bom, encoding in BOMS: if data.startswith(bom): return data[len(bom):].decode(encoding) # Lets check the first two lines as in PEP263 for line in data.split(b'\n')[:2]: if line[0:1] == b'#' and ENCODING_RE.search(line): encoding = ENCODING_RE.search(line).groups()[0].decode('ascii') return data.decode(encoding) return data.decode( locale.getpreferredencoding(False) or sys.getdefaultencoding(), ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/filesystem.py0000644000175000017500000000173600000000000030711 0ustar00vincentvincent00000000000000import os import os.path from pip._internal.utils.compat import get_path_uid def check_path_owner(path): # type: (str) -> bool # If we don't have a way to check the effective uid of this process, then # we'll just assume that we own the directory. if not hasattr(os, "geteuid"): return True previous = None while path != previous: if os.path.lexists(path): # Check if path is writable by current user. if os.geteuid() == 0: # Special handling for root user in order to handle properly # cases where users use sudo without -H flag. try: path_uid = get_path_uid(path) except OSError: return False return path_uid == 0 else: return os.access(path, os.W_OK) else: previous, path = path, os.path.dirname(path) return False # assume we don't own the path ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/glibc.py0000644000175000017500000001030700000000000027577 0ustar00vincentvincent00000000000000from __future__ import absolute_import import os import re import warnings from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import Optional, Tuple def glibc_version_string(): # type: () -> Optional[str] "Returns glibc version string, or None if not using glibc." return glibc_version_string_confstr() or glibc_version_string_ctypes() def glibc_version_string_confstr(): # type: () -> Optional[str] "Primary implementation of glibc_version_string using os.confstr." # os.confstr is quite a bit faster than ctypes.DLL. 
It's also less likely # to be broken or missing. This strategy is used in the standard library # platform module: # https://github.com/python/cpython/blob/fcf1d003bf4f0100c9d0921ff3d70e1127ca1b71/Lib/platform.py#L175-L183 try: # os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17": _, version = os.confstr("CS_GNU_LIBC_VERSION").split() except (AttributeError, OSError, ValueError): # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)... return None return version def glibc_version_string_ctypes(): # type: () -> Optional[str] "Fallback implementation of glibc_version_string using ctypes." try: import ctypes except ImportError: return None # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen # manpage says, "If filename is NULL, then the returned handle is for the # main program". This way we can let the linker do the work to figure out # which libc our process is actually using. process_namespace = ctypes.CDLL(None) try: gnu_get_libc_version = process_namespace.gnu_get_libc_version except AttributeError: # Symbol doesn't exist -> therefore, we are not linked to # glibc. return None # Call gnu_get_libc_version, which returns a string like "2.5" gnu_get_libc_version.restype = ctypes.c_char_p version_str = gnu_get_libc_version() # py2 / py3 compatibility: if not isinstance(version_str, str): version_str = version_str.decode("ascii") return version_str # Separated out from have_compatible_glibc for easier unit testing def check_glibc_version(version_str, required_major, minimum_minor): # type: (str, int, int) -> bool # Parse string and check against requested version. # # We use a regexp instead of str.split because we want to discard any # random junk that might come after the minor version -- this might happen # in patched/forked versions of glibc (e.g. Linaro's version of glibc # uses version strings like "2.20-2014.11"). See gh-3588. m = re.match(r"(?P[0-9]+)\.(?P[0-9]+)", version_str) if not m: warnings.warn("Expected glibc version with 2 components major.minor," " got: %s" % version_str, RuntimeWarning) return False return (int(m.group("major")) == required_major and int(m.group("minor")) >= minimum_minor) def have_compatible_glibc(required_major, minimum_minor): # type: (int, int) -> bool version_str = glibc_version_string() if version_str is None: return False return check_glibc_version(version_str, required_major, minimum_minor) # platform.libc_ver regularly returns completely nonsensical glibc # versions. E.g. on my computer, platform says: # # ~$ python2.7 -c 'import platform; print(platform.libc_ver())' # ('glibc', '2.7') # ~$ python3.5 -c 'import platform; print(platform.libc_ver())' # ('glibc', '2.9') # # But the truth is: # # ~$ ldd --version # ldd (Debian GLIBC 2.22-11) 2.22 # # This is unfortunate, because it means that the linehaul data on libc # versions that was generated by pip 8.1.2 and earlier is useless and # misleading. Solution: instead of using platform, use our code that actually # works. def libc_ver(): # type: () -> Tuple[str, str] """Try to determine the glibc version Returns a tuple of strings (lib, version) which default to empty strings in case the lookup fails. 
""" glibc_version = glibc_version_string() if glibc_version is None: return ("", "") else: return ("glibc", glibc_version) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/hashes.py0000644000175000017500000000750000000000000027773 0ustar00vincentvincent00000000000000from __future__ import absolute_import import hashlib from pip._vendor.six import iteritems, iterkeys, itervalues from pip._internal.exceptions import ( HashMismatch, HashMissing, InstallationError, ) from pip._internal.utils.misc import read_chunks from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import ( Dict, List, BinaryIO, NoReturn, Iterator ) from pip._vendor.six import PY3 if PY3: from hashlib import _Hash else: from hashlib import _hash as _Hash # The recommended hash algo of the moment. Change this whenever the state of # the art changes; it won't hurt backward compatibility. FAVORITE_HASH = 'sha256' # Names of hashlib algorithms allowed by the --hash option and ``pip hash`` # Currently, those are the ones at least as collision-resistant as sha256. STRONG_HASHES = ['sha256', 'sha384', 'sha512'] class Hashes(object): """A wrapper that builds multiple hashes at once and checks them against known-good values """ def __init__(self, hashes=None): # type: (Dict[str, List[str]]) -> None """ :param hashes: A dict of algorithm names pointing to lists of allowed hex digests """ self._allowed = {} if hashes is None else hashes @property def digest_count(self): # type: () -> int return sum(len(digests) for digests in self._allowed.values()) def is_hash_allowed( self, hash_name, # type: str hex_digest, # type: str ): """Return whether the given hex digest is allowed.""" return hex_digest in self._allowed.get(hash_name, []) def check_against_chunks(self, chunks): # type: (Iterator[bytes]) -> None """Check good hashes against ones built from iterable of chunks of data. Raise HashMismatch if none match. """ gots = {} for hash_name in iterkeys(self._allowed): try: gots[hash_name] = hashlib.new(hash_name) except (ValueError, TypeError): raise InstallationError('Unknown hash name: %s' % hash_name) for chunk in chunks: for hash in itervalues(gots): hash.update(chunk) for hash_name, got in iteritems(gots): if got.hexdigest() in self._allowed[hash_name]: return self._raise(gots) def _raise(self, gots): # type: (Dict[str, _Hash]) -> NoReturn raise HashMismatch(self._allowed, gots) def check_against_file(self, file): # type: (BinaryIO) -> None """Check good hashes against a file-like object Raise HashMismatch if none match. """ return self.check_against_chunks(read_chunks(file)) def check_against_path(self, path): # type: (str) -> None with open(path, 'rb') as file: return self.check_against_file(file) def __nonzero__(self): # type: () -> bool """Return whether I know any known-good hashes.""" return bool(self._allowed) def __bool__(self): # type: () -> bool return self.__nonzero__() class MissingHashes(Hashes): """A workalike for Hashes used when we're missing a hash for a requirement It computes the actual hash of the requirement and raises a HashMissing exception showing it to the user. """ def __init__(self): # type: () -> None """Don't offer the ``hashes`` kwarg.""" # Pass our favorite hash in to generate a "gotten hash". With the # empty list, it will never match, so an error will always raise. 
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/logging.py ----

from __future__ import absolute_import

import contextlib
import errno
import logging
import logging.config  # needed by setup_logging()'s dictConfig call below
import logging.handlers
import os
import sys
from logging import Filter

from pip._vendor.six import PY2

from pip._internal.utils.compat import WINDOWS
from pip._internal.utils.deprecation import DEPRECATION_MSG_PREFIX
from pip._internal.utils.misc import ensure_dir, subprocess_logger

try:
    import threading
except ImportError:
    import dummy_threading as threading  # type: ignore

try:
    # Use "import as" and set colorama in the else clause to avoid mypy
    # errors and get the following correct revealed type for colorama:
    # `Union[_importlib_modulespec.ModuleType, None]`
    # Otherwise, we get an error like the following in the except block:
    # > Incompatible types in assignment (expression has type "None",
    #   variable has type Module)
    # TODO: eliminate the need to use "import as" once mypy addresses some
    # of its issues with conditional imports. Here is an umbrella issue:
    # https://github.com/python/mypy/issues/1297
    from pip._vendor import colorama as _colorama
# Lots of different errors can come from this, including SystemError and
# ImportError.
except Exception:
    colorama = None
else:
    # Import Fore explicitly rather than accessing below as colorama.Fore
    # to avoid the following error running mypy:
    # > Module has no attribute "Fore"
    # TODO: eliminate the need to import Fore once mypy addresses some of its
    # issues with conditional imports. This particular case could be an
    # instance of the following issue (but also see the umbrella issue above):
    # https://github.com/python/mypy/issues/3500
    from pip._vendor.colorama import Fore
    colorama = _colorama


_log_state = threading.local()
_log_state.indentation = 0


class BrokenStdoutLoggingError(Exception):
    """
    Raised if BrokenPipeError occurs for the stdout stream while logging.
    """
    pass


# BrokenPipeError does not exist in Python 2 and, in addition, manifests
# differently in Windows and non-Windows.
if WINDOWS:
    # In Windows, a broken pipe can show up as EINVAL rather than EPIPE:
    # https://bugs.python.org/issue19612
    # https://bugs.python.org/issue30418
    if PY2:
        def _is_broken_pipe_error(exc_class, exc):
            """See the docstring for non-Windows Python 3 below."""
            return (exc_class is IOError and
                    exc.errno in (errno.EINVAL, errno.EPIPE))
    else:
        # In Windows, a broken pipe IOError became OSError in Python 3.
        def _is_broken_pipe_error(exc_class, exc):
            """See the docstring for non-Windows Python 3 below."""
            return ((exc_class is BrokenPipeError) or  # noqa: F821
                    (exc_class is OSError and
                     exc.errno in (errno.EINVAL, errno.EPIPE)))
elif PY2:
    def _is_broken_pipe_error(exc_class, exc):
        """See the docstring for non-Windows Python 3 below."""
        return (exc_class is IOError and exc.errno == errno.EPIPE)
else:
    # Then we are in the non-Windows Python 3 case.
    def _is_broken_pipe_error(exc_class, exc):
        """
        Return whether an exception is a broken pipe error.

        Args:
          exc_class: an exception class.
          exc: an exception instance.
""" return (exc_class is BrokenPipeError) # noqa: F821 @contextlib.contextmanager def indent_log(num=2): """ A context manager which will cause the log output to be indented for any log messages emitted inside it. """ _log_state.indentation += num try: yield finally: _log_state.indentation -= num def get_indentation(): return getattr(_log_state, 'indentation', 0) class IndentingFormatter(logging.Formatter): def __init__(self, *args, **kwargs): """ A logging.Formatter that obeys the indent_log() context manager. :param add_timestamp: A bool indicating output lines should be prefixed with their record's timestamp. """ self.add_timestamp = kwargs.pop("add_timestamp", False) super(IndentingFormatter, self).__init__(*args, **kwargs) def get_message_start(self, formatted, levelno): """ Return the start of the formatted log message (not counting the prefix to add to each line). """ if levelno < logging.WARNING: return '' if formatted.startswith(DEPRECATION_MSG_PREFIX): # Then the message already has a prefix. We don't want it to # look like "WARNING: DEPRECATION: ...." return '' if levelno < logging.ERROR: return 'WARNING: ' return 'ERROR: ' def format(self, record): """ Calls the standard formatter, but will indent all of the log message lines by our current indentation level. """ formatted = super(IndentingFormatter, self).format(record) message_start = self.get_message_start(formatted, record.levelno) formatted = message_start + formatted prefix = '' if self.add_timestamp: # TODO: Use Formatter.default_time_format after dropping PY2. t = self.formatTime(record, "%Y-%m-%dT%H:%M:%S") prefix = '%s,%03d ' % (t, record.msecs) prefix += " " * get_indentation() formatted = "".join([ prefix + line for line in formatted.splitlines(True) ]) return formatted def _color_wrap(*colors): def wrapped(inp): return "".join(list(colors) + [inp, colorama.Style.RESET_ALL]) return wrapped class ColorizedStreamHandler(logging.StreamHandler): # Don't build up a list of colors if we don't have colorama if colorama: COLORS = [ # This needs to be in order from highest logging level to lowest. (logging.ERROR, _color_wrap(Fore.RED)), (logging.WARNING, _color_wrap(Fore.YELLOW)), ] else: COLORS = [] def __init__(self, stream=None, no_color=None): logging.StreamHandler.__init__(self, stream) self._no_color = no_color if WINDOWS and colorama: self.stream = colorama.AnsiToWin32(self.stream) def _using_stdout(self): """ Return whether the handler is using sys.stdout. """ if WINDOWS and colorama: # Then self.stream is an AnsiToWin32 object. return self.stream.wrapped is sys.stdout return self.stream is sys.stdout def should_color(self): # Don't colorize things if we do not have colorama or if told not to if not colorama or self._no_color: return False real_stream = ( self.stream if not isinstance(self.stream, colorama.AnsiToWin32) else self.stream.wrapped ) # If the stream is a tty we should color it if hasattr(real_stream, "isatty") and real_stream.isatty(): return True # If we have an ANSI term we should color it if os.environ.get("TERM") == "ANSI": return True # If anything else we should not color it return False def format(self, record): msg = logging.StreamHandler.format(self, record) if self.should_color(): for level, color in self.COLORS: if record.levelno >= level: msg = color(msg) break return msg # The logging module says handleError() can be customized. 
def handleError(self, record): exc_class, exc = sys.exc_info()[:2] # If a broken pipe occurred while calling write() or flush() on the # stdout stream in logging's Handler.emit(), then raise our special # exception so we can handle it in main() instead of logging the # broken pipe error and continuing. if (exc_class and self._using_stdout() and _is_broken_pipe_error(exc_class, exc)): raise BrokenStdoutLoggingError() return super(ColorizedStreamHandler, self).handleError(record) class BetterRotatingFileHandler(logging.handlers.RotatingFileHandler): def _open(self): ensure_dir(os.path.dirname(self.baseFilename)) return logging.handlers.RotatingFileHandler._open(self) class MaxLevelFilter(Filter): def __init__(self, level): self.level = level def filter(self, record): return record.levelno < self.level class ExcludeLoggerFilter(Filter): """ A logging Filter that excludes records from a logger (or its children). """ def filter(self, record): # The base Filter class allows only records from a logger (or its # children). return not super(ExcludeLoggerFilter, self).filter(record) def setup_logging(verbosity, no_color, user_log_file): """Configures and sets up all of the logging Returns the requested logging level, as its integer value. """ # Determine the level to be logging at. if verbosity >= 1: level = "DEBUG" elif verbosity == -1: level = "WARNING" elif verbosity == -2: level = "ERROR" elif verbosity <= -3: level = "CRITICAL" else: level = "INFO" level_number = getattr(logging, level) # The "root" logger should match the "console" level *unless* we also need # to log to a user log file. include_user_log = user_log_file is not None if include_user_log: additional_log_file = user_log_file root_level = "DEBUG" else: additional_log_file = "/dev/null" root_level = level # Disable any logging besides WARNING unless we have DEBUG level logging # enabled for vendored libraries. vendored_log_level = "WARNING" if level in ["INFO", "ERROR"] else "DEBUG" # Shorthands for clarity log_streams = { "stdout": "ext://sys.stdout", "stderr": "ext://sys.stderr", } handler_classes = { "stream": "pip._internal.utils.logging.ColorizedStreamHandler", "file": "pip._internal.utils.logging.BetterRotatingFileHandler", } handlers = ["console", "console_errors", "console_subprocess"] + ( ["user_log"] if include_user_log else [] ) logging.config.dictConfig({ "version": 1, "disable_existing_loggers": False, "filters": { "exclude_warnings": { "()": "pip._internal.utils.logging.MaxLevelFilter", "level": logging.WARNING, }, "restrict_to_subprocess": { "()": "logging.Filter", "name": subprocess_logger.name, }, "exclude_subprocess": { "()": "pip._internal.utils.logging.ExcludeLoggerFilter", "name": subprocess_logger.name, }, }, "formatters": { "indent": { "()": IndentingFormatter, "format": "%(message)s", }, "indent_with_timestamp": { "()": IndentingFormatter, "format": "%(message)s", "add_timestamp": True, }, }, "handlers": { "console": { "level": level, "class": handler_classes["stream"], "no_color": no_color, "stream": log_streams["stdout"], "filters": ["exclude_subprocess", "exclude_warnings"], "formatter": "indent", }, "console_errors": { "level": "WARNING", "class": handler_classes["stream"], "no_color": no_color, "stream": log_streams["stderr"], "filters": ["exclude_subprocess"], "formatter": "indent", }, # A handler responsible for logging to the console messages # from the "subprocessor" logger. 
"console_subprocess": { "level": level, "class": handler_classes["stream"], "no_color": no_color, "stream": log_streams["stderr"], "filters": ["restrict_to_subprocess"], "formatter": "indent", }, "user_log": { "level": "DEBUG", "class": handler_classes["file"], "filename": additional_log_file, "delay": True, "formatter": "indent_with_timestamp", }, }, "root": { "level": root_level, "handlers": handlers, }, "loggers": { "pip._vendor": { "level": vendored_log_level } }, }) return level_number ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/marker_files.py0000644000175000017500000000112300000000000031156 0ustar00vincentvincent00000000000000import os.path DELETE_MARKER_MESSAGE = '''\ This file is placed here by pip to indicate the source was put here by pip. Once this package is successfully installed this source code will be deleted (unless you remove this file). ''' PIP_DELETE_MARKER_FILENAME = 'pip-delete-this-directory.txt' def write_delete_marker_file(directory): # type: (str) -> None """ Write the pip delete marker file into this directory. """ filepath = os.path.join(directory, PIP_DELETE_MARKER_FILENAME) with open(filepath, 'w') as marker_fp: marker_fp.write(DELETE_MARKER_MESSAGE) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430190.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/misc.py0000644000175000017500000011360700000000000027461 0ustar00vincentvincent00000000000000from __future__ import absolute_import import contextlib import errno import getpass import io # we have a submodule named 'logging' which would shadow this if we used the # regular name: import logging as std_logging import os import posixpath import re import shutil import stat import subprocess import sys import tarfile import zipfile from collections import deque from pip._vendor import pkg_resources # NOTE: retrying is not annotated in typeshed as on 2017-07-17, which is # why we ignore the type on this import. from pip._vendor.retrying import retry # type: ignore from pip._vendor.six import PY2, text_type from pip._vendor.six.moves import input, shlex_quote from pip._vendor.six.moves.urllib import parse as urllib_parse from pip._vendor.six.moves.urllib import request as urllib_request from pip._vendor.six.moves.urllib.parse import unquote as urllib_unquote from pip import __version__ from pip._internal.exceptions import CommandError, InstallationError from pip._internal.locations import site_packages, user_site from pip._internal.utils.compat import ( WINDOWS, console_to_str, expanduser, stdlib_pkgs, str_to_display, ) from pip._internal.utils.marker_files import write_delete_marker_file from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.virtualenv import ( running_under_virtualenv, virtualenv_no_global, ) if PY2: from io import BytesIO as StringIO else: from io import StringIO if MYPY_CHECK_RUNNING: from typing import ( Any, AnyStr, Container, Iterable, List, Mapping, Match, Optional, Text, Union, ) from pip._vendor.pkg_resources import Distribution from pip._internal.models.link import Link from pip._internal.utils.ui import SpinnerInterface try: from typing import cast, Tuple VersionInfo = Tuple[int, int, int] except ImportError: # typing's cast() isn't supported in code comments, so we need to # define a dummy, no-op version. 
def cast(typ, val): return val VersionInfo = None __all__ = ['rmtree', 'display_path', 'backup_dir', 'ask', 'splitext', 'format_size', 'is_installable_dir', 'is_svn_page', 'file_contents', 'split_leading_dir', 'has_leading_dir', 'normalize_path', 'renames', 'get_prog', 'unzip_file', 'untar_file', 'unpack_file', 'call_subprocess', 'captured_stdout', 'ensure_dir', 'ARCHIVE_EXTENSIONS', 'SUPPORTED_EXTENSIONS', 'WHEEL_EXTENSION', 'get_installed_version', 'remove_auth_from_url'] logger = std_logging.getLogger(__name__) subprocess_logger = std_logging.getLogger('pip.subprocessor') LOG_DIVIDER = '----------------------------------------' WHEEL_EXTENSION = '.whl' BZ2_EXTENSIONS = ('.tar.bz2', '.tbz') XZ_EXTENSIONS = ('.tar.xz', '.txz', '.tlz', '.tar.lz', '.tar.lzma') ZIP_EXTENSIONS = ('.zip', WHEEL_EXTENSION) TAR_EXTENSIONS = ('.tar.gz', '.tgz', '.tar') ARCHIVE_EXTENSIONS = ( ZIP_EXTENSIONS + BZ2_EXTENSIONS + TAR_EXTENSIONS + XZ_EXTENSIONS) SUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS try: import bz2 # noqa SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS except ImportError: logger.debug('bz2 module is not available') try: # Only for Python 3.3+ import lzma # noqa SUPPORTED_EXTENSIONS += XZ_EXTENSIONS except ImportError: logger.debug('lzma module is not available') def get_pip_version(): # type: () -> str pip_pkg_dir = os.path.join(os.path.dirname(__file__), "..", "..") pip_pkg_dir = os.path.abspath(pip_pkg_dir) return ( 'pip {} from {} (python {})'.format( __version__, pip_pkg_dir, sys.version[:3], ) ) def normalize_version_info(py_version_info): # type: (Tuple[int, ...]) -> Tuple[int, int, int] """ Convert a tuple of ints representing a Python version to one of length three. :param py_version_info: a tuple of ints representing a Python version, or None to specify no version. The tuple can have any length. :return: a tuple of length three if `py_version_info` is non-None. Otherwise, return `py_version_info` unchanged (i.e. None). """ if len(py_version_info) < 3: py_version_info += (3 - len(py_version_info)) * (0,) elif len(py_version_info) > 3: py_version_info = py_version_info[:3] return cast(VersionInfo, py_version_info) def ensure_dir(path): # type: (AnyStr) -> None """os.path.makedirs without EEXIST.""" try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise def get_prog(): # type: () -> str try: prog = os.path.basename(sys.argv[0]) if prog in ('__main__.py', '-c'): return "%s -m pip" % sys.executable else: return prog except (AttributeError, TypeError, IndexError): pass return 'pip' # Retry every half second for up to 3 seconds @retry(stop_max_delay=3000, wait_fixed=500) def rmtree(dir, ignore_errors=False): # type: (str, bool) -> None shutil.rmtree(dir, ignore_errors=ignore_errors, onerror=rmtree_errorhandler) def rmtree_errorhandler(func, path, exc_info): """On Windows, the files in .svn are read-only, so when rmtree() tries to remove them, an exception is thrown. We catch that here, remove the read-only attribute, and hopefully continue without problems.""" # if file type currently read only if os.stat(path).st_mode & stat.S_IREAD: # convert to read/write os.chmod(path, stat.S_IWRITE) # use the original function to repeat the operation func(path) return else: raise def path_to_display(path): # type: (Optional[Union[str, Text]]) -> Optional[Text] """ Convert a bytes (or text) path to text (unicode in Python 2) for display and logging purposes. This function should never error out. 
Also, this function is mainly needed for Python 2 since in Python 3 str paths are already text. """ if path is None: return None if isinstance(path, text_type): return path # Otherwise, path is a bytes object (str in Python 2). try: display_path = path.decode(sys.getfilesystemencoding(), 'strict') except UnicodeDecodeError: # Include the full bytes to make troubleshooting easier, even though # it may not be very human readable. if PY2: # Convert the bytes to a readable str representation using # repr(), and then convert the str to unicode. # Also, we add the prefix "b" to the repr() return value both # to make the Python 2 output look like the Python 3 output, and # to signal to the user that this is a bytes representation. display_path = str_to_display('b{!r}'.format(path)) else: # Silence the "F821 undefined name 'ascii'" flake8 error since # in Python 3 ascii() is a built-in. display_path = ascii(path) # noqa: F821 return display_path def display_path(path): # type: (Union[str, Text]) -> str """Gives the display value for a given path, making it relative to cwd if possible.""" path = os.path.normcase(os.path.abspath(path)) if sys.version_info[0] == 2: path = path.decode(sys.getfilesystemencoding(), 'replace') path = path.encode(sys.getdefaultencoding(), 'replace') if path.startswith(os.getcwd() + os.path.sep): path = '.' + path[len(os.getcwd()):] return path def backup_dir(dir, ext='.bak'): # type: (str, str) -> str """Figure out the name of a directory to back up the given dir to (adding .bak, .bak2, etc)""" n = 1 extension = ext while os.path.exists(dir + extension): n += 1 extension = ext + str(n) return dir + extension def ask_path_exists(message, options): # type: (str, Iterable[str]) -> str for action in os.environ.get('PIP_EXISTS_ACTION', '').split(): if action in options: return action return ask(message, options) def _check_no_input(message): # type: (str) -> None """Raise an error if no input is allowed.""" if os.environ.get('PIP_NO_INPUT'): raise Exception( 'No input was expected ($PIP_NO_INPUT set); question: %s' % message ) def ask(message, options): # type: (str, Iterable[str]) -> str """Ask the message interactively, with the given possible responses""" while 1: _check_no_input(message) response = input(message) response = response.strip().lower() if response not in options: print( 'Your response (%r) was not one of the expected responses: ' '%s' % (response, ', '.join(options)) ) else: return response def ask_input(message): # type: (str) -> str """Ask for input interactively.""" _check_no_input(message) return input(message) def ask_password(message): # type: (str) -> str """Ask for a password interactively.""" _check_no_input(message) return getpass.getpass(message) def format_size(bytes): # type: (float) -> str if bytes > 1000 * 1000: return '%.1fMB' % (bytes / 1000.0 / 1000) elif bytes > 10 * 1000: return '%ikB' % (bytes / 1000) elif bytes > 1000: return '%.1fkB' % (bytes / 1000.0) else: return '%ibytes' % bytes def is_installable_dir(path): # type: (str) -> bool """Is path is a directory containing setup.py or pyproject.toml? 
""" if not os.path.isdir(path): return False setup_py = os.path.join(path, 'setup.py') if os.path.isfile(setup_py): return True pyproject_toml = os.path.join(path, 'pyproject.toml') if os.path.isfile(pyproject_toml): return True return False def is_svn_page(html): # type: (Union[str, Text]) -> Optional[Match[Union[str, Text]]] """ Returns true if the page appears to be the index page of an svn repository """ return (re.search(r'[^<]*Revision \d+:', html) and re.search(r'Powered by (?:<a[^>]*?>)?Subversion', html, re.I)) def file_contents(filename): # type: (str) -> Text with open(filename, 'rb') as fp: return fp.read().decode('utf-8') def read_chunks(file, size=io.DEFAULT_BUFFER_SIZE): """Yield pieces of data from a file-like object until EOF.""" while True: chunk = file.read(size) if not chunk: break yield chunk def split_leading_dir(path): # type: (Union[str, Text]) -> List[Union[str, Text]] path = path.lstrip('/').lstrip('\\') if '/' in path and (('\\' in path and path.find('/') < path.find('\\')) or '\\' not in path): return path.split('/', 1) elif '\\' in path: return path.split('\\', 1) else: return [path, ''] def has_leading_dir(paths): # type: (Iterable[Union[str, Text]]) -> bool """Returns true if all the paths have the same leading path name (i.e., everything is in one subdirectory in an archive)""" common_prefix = None for path in paths: prefix, rest = split_leading_dir(path) if not prefix: return False elif common_prefix is None: common_prefix = prefix elif prefix != common_prefix: return False return True def normalize_path(path, resolve_symlinks=True): # type: (str, bool) -> str """ Convert a path to its canonical, case-normalized, absolute version. """ path = expanduser(path) if resolve_symlinks: path = os.path.realpath(path) else: path = os.path.abspath(path) return os.path.normcase(path) def splitext(path): # type: (str) -> Tuple[str, str] """Like os.path.splitext, but take off .tar too""" base, ext = posixpath.splitext(path) if base.lower().endswith('.tar'): ext = base[-4:] + ext base = base[:-4] return base, ext def renames(old, new): # type: (str, str) -> None """Like os.renames(), but handles renaming across devices.""" # Implementation borrowed from os.renames(). head, tail = os.path.split(new) if head and tail and not os.path.exists(head): os.makedirs(head) shutil.move(old, new) head, tail = os.path.split(old) if head and tail: try: os.removedirs(head) except OSError: pass def is_local(path): # type: (str) -> bool """ Return True if path is within sys.prefix, if we're running in a virtualenv. If we're not in a virtualenv, all paths are considered "local." """ if not running_under_virtualenv(): return True return normalize_path(path).startswith(normalize_path(sys.prefix)) def dist_is_local(dist): # type: (Distribution) -> bool """ Return True if given Distribution object is installed locally (i.e. within current virtualenv). Always True if we're not in a virtualenv. """ return is_local(dist_location(dist)) def dist_in_usersite(dist): # type: (Distribution) -> bool """ Return True if given Distribution is installed in user site. """ norm_path = normalize_path(dist_location(dist)) return norm_path.startswith(normalize_path(user_site)) def dist_in_site_packages(dist): # type: (Distribution) -> bool """ Return True if given Distribution is installed in sysconfig.get_python_lib(). 
""" return normalize_path( dist_location(dist) ).startswith(normalize_path(site_packages)) def dist_is_editable(dist): # type: (Distribution) -> bool """ Return True if given Distribution is an editable install. """ for path_item in sys.path: egg_link = os.path.join(path_item, dist.project_name + '.egg-link') if os.path.isfile(egg_link): return True return False def get_installed_distributions( local_only=True, # type: bool skip=stdlib_pkgs, # type: Container[str] include_editables=True, # type: bool editables_only=False, # type: bool user_only=False, # type: bool paths=None # type: Optional[List[str]] ): # type: (...) -> List[Distribution] """ Return a list of installed Distribution objects. If ``local_only`` is True (default), only return installations local to the current virtualenv, if in a virtualenv. ``skip`` argument is an iterable of lower-case project names to ignore; defaults to stdlib_pkgs If ``include_editables`` is False, don't report editables. If ``editables_only`` is True , only report editables. If ``user_only`` is True , only report installations in the user site directory. If ``paths`` is set, only report the distributions present at the specified list of locations. """ if paths: working_set = pkg_resources.WorkingSet(paths) else: working_set = pkg_resources.working_set if local_only: local_test = dist_is_local else: def local_test(d): return True if include_editables: def editable_test(d): return True else: def editable_test(d): return not dist_is_editable(d) if editables_only: def editables_only_test(d): return dist_is_editable(d) else: def editables_only_test(d): return True if user_only: user_test = dist_in_usersite else: def user_test(d): return True # because of pkg_resources vendoring, mypy cannot find stub in typeshed return [d for d in working_set # type: ignore if local_test(d) and d.key not in skip and editable_test(d) and editables_only_test(d) and user_test(d) ] def egg_link_path(dist): # type: (Distribution) -> Optional[str] """ Return the path for the .egg-link file if it exists, otherwise, None. There's 3 scenarios: 1) not in a virtualenv try to find in site.USER_SITE, then site_packages 2) in a no-global virtualenv try to find in site_packages 3) in a yes-global virtualenv try to find in site_packages, then site.USER_SITE (don't look in global location) For #1 and #3, there could be odd cases, where there's an egg-link in 2 locations. This method will just return the first one found. """ sites = [] if running_under_virtualenv(): if virtualenv_no_global(): sites.append(site_packages) else: sites.append(site_packages) if user_site: sites.append(user_site) else: if user_site: sites.append(user_site) sites.append(site_packages) for site in sites: egglink = os.path.join(site, dist.project_name) + '.egg-link' if os.path.isfile(egglink): return egglink return None def dist_location(dist): # type: (Distribution) -> str """ Get the site-packages location of this distribution. Generally this is dist.location, except in the case of develop-installed packages, where dist.location is the source code location, and we want to know where the egg-link file is. """ egg_link = egg_link_path(dist) if egg_link: return egg_link return dist.location def current_umask(): """Get the current umask which involves having to set it temporarily.""" mask = os.umask(0) os.umask(mask) return mask def unzip_file(filename, location, flatten=True): # type: (str, str, bool) -> None """ Unzip the file (with path `filename`) to the destination `location`. 
All files are written based on system defaults and umask (i.e. permissions are not preserved), except that regular file members with any execute permissions (user, group, or world) have "chmod +x" applied after being written. Note that for windows, any execute changes using os.chmod are no-ops per the python docs. """ ensure_dir(location) zipfp = open(filename, 'rb') try: zip = zipfile.ZipFile(zipfp, allowZip64=True) leading = has_leading_dir(zip.namelist()) and flatten for info in zip.infolist(): name = info.filename fn = name if leading: fn = split_leading_dir(name)[1] fn = os.path.join(location, fn) dir = os.path.dirname(fn) if fn.endswith('/') or fn.endswith('\\'): # A directory ensure_dir(fn) else: ensure_dir(dir) # Don't use read() to avoid allocating an arbitrarily large # chunk of memory for the file's content fp = zip.open(name) try: with open(fn, 'wb') as destfp: shutil.copyfileobj(fp, destfp) finally: fp.close() mode = info.external_attr >> 16 # if mode and regular file and any execute permissions for # user/group/world? if mode and stat.S_ISREG(mode) and mode & 0o111: # make dest file have execute for user/group/world # (chmod +x) no-op on windows per python docs os.chmod(fn, (0o777 - current_umask() | 0o111)) finally: zipfp.close() def untar_file(filename, location): # type: (str, str) -> None """ Untar the file (with path `filename`) to the destination `location`. All files are written based on system defaults and umask (i.e. permissions are not preserved), except that regular file members with any execute permissions (user, group, or world) have "chmod +x" applied after being written. Note that for windows, any execute changes using os.chmod are no-ops per the python docs. """ ensure_dir(location) if filename.lower().endswith('.gz') or filename.lower().endswith('.tgz'): mode = 'r:gz' elif filename.lower().endswith(BZ2_EXTENSIONS): mode = 'r:bz2' elif filename.lower().endswith(XZ_EXTENSIONS): mode = 'r:xz' elif filename.lower().endswith('.tar'): mode = 'r' else: logger.warning( 'Cannot determine compression type for file %s', filename, ) mode = 'r:*' tar = tarfile.open(filename, mode) try: leading = has_leading_dir([ member.name for member in tar.getmembers() ]) for member in tar.getmembers(): fn = member.name if leading: # https://github.com/python/mypy/issues/1174 fn = split_leading_dir(fn)[1] # type: ignore path = os.path.join(location, fn) if member.isdir(): ensure_dir(path) elif member.issym(): try: # https://github.com/python/typeshed/issues/2673 tar._extract_member(member, path) # type: ignore except Exception as exc: # Some corrupt tar files seem to produce this # (specifically bad symlinks) logger.warning( 'In the tar file %s the member %s is invalid: %s', filename, member.name, exc, ) continue else: try: fp = tar.extractfile(member) except (KeyError, AttributeError) as exc: # Some corrupt tar files seem to produce this # (specifically bad symlinks) logger.warning( 'In the tar file %s the member %s is invalid: %s', filename, member.name, exc, ) continue ensure_dir(os.path.dirname(path)) with open(path, 'wb') as destfp: shutil.copyfileobj(fp, destfp) fp.close() # Update the timestamp (useful for cython compiled files) # https://github.com/python/typeshed/issues/2673 tar.utime(member, path) # type: ignore # member have any execute permissions for user/group/world? 
if member.mode & 0o111: # make dest file have execute for user/group/world # no-op on windows per python docs os.chmod(path, (0o777 - current_umask() | 0o111)) finally: tar.close() def unpack_file( filename, # type: str location, # type: str content_type, # type: Optional[str] link # type: Optional[Link] ): # type: (...) -> None filename = os.path.realpath(filename) if (content_type == 'application/zip' or filename.lower().endswith(ZIP_EXTENSIONS) or zipfile.is_zipfile(filename)): unzip_file( filename, location, flatten=not filename.endswith('.whl') ) elif (content_type == 'application/x-gzip' or tarfile.is_tarfile(filename) or filename.lower().endswith( TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS)): untar_file(filename, location) elif (content_type and content_type.startswith('text/html') and is_svn_page(file_contents(filename))): # We don't really care about this from pip._internal.vcs.subversion import Subversion url = 'svn+' + link.url Subversion().unpack(location, url=url) else: # FIXME: handle? # FIXME: magic signatures? logger.critical( 'Cannot unpack file %s (downloaded from %s, content-type: %s); ' 'cannot detect archive format', filename, location, content_type, ) raise InstallationError( 'Cannot determine archive format of %s' % location ) def format_command_args(args): # type: (List[str]) -> str """ Format command arguments for display. """ return ' '.join(shlex_quote(arg) for arg in args) def make_subprocess_output_error( cmd_args, # type: List[str] cwd, # type: Optional[str] lines, # type: List[Text] exit_status, # type: int ): # type: (...) -> Text """ Create and return the error message to use to log a subprocess error with command output. :param lines: A list of lines, each ending with a newline. """ command = format_command_args(cmd_args) # Convert `command` and `cwd` to text (unicode in Python 2) so we can use # them as arguments in the unicode format string below. This avoids # "UnicodeDecodeError: 'ascii' codec can't decode byte ..." in Python 2 # if either contains a non-ascii character. command_display = str_to_display(command, desc='command bytes') cwd_display = path_to_display(cwd) # We know the joined output value ends in a newline. output = ''.join(lines) msg = ( # Use a unicode string to avoid "UnicodeEncodeError: 'ascii' # codec can't encode character ..." in Python 2 when a format # argument (e.g. `output`) has a non-ascii character. u'Command errored out with exit status {exit_status}:\n' ' command: {command_display}\n' ' cwd: {cwd_display}\n' 'Complete output ({line_count} lines):\n{output}{divider}' ).format( exit_status=exit_status, command_display=command_display, cwd_display=cwd_display, line_count=len(lines), output=output, divider=LOG_DIVIDER, ) return msg def call_subprocess( cmd, # type: List[str] show_stdout=False, # type: bool cwd=None, # type: Optional[str] on_returncode='raise', # type: str extra_ok_returncodes=None, # type: Optional[Iterable[int]] command_desc=None, # type: Optional[str] extra_environ=None, # type: Optional[Mapping[str, Any]] unset_environ=None, # type: Optional[Iterable[str]] spinner=None # type: Optional[SpinnerInterface] ): # type: (...) -> Text """ Args: show_stdout: if true, use INFO to log the subprocess's stderr and stdout streams. Otherwise, use DEBUG. Defaults to False. extra_ok_returncodes: an iterable of integer return codes that are acceptable, in addition to 0. Defaults to None, which means []. unset_environ: an iterable of environment variable names to unset prior to calling subprocess.Popen(). 
""" if extra_ok_returncodes is None: extra_ok_returncodes = [] if unset_environ is None: unset_environ = [] # Most places in pip use show_stdout=False. What this means is-- # # - We connect the child's output (combined stderr and stdout) to a # single pipe, which we read. # - We log this output to stderr at DEBUG level as it is received. # - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't # requested), then we show a spinner so the user can still see the # subprocess is in progress. # - If the subprocess exits with an error, we log the output to stderr # at ERROR level if it hasn't already been displayed to the console # (e.g. if --verbose logging wasn't enabled). This way we don't log # the output to the console twice. # # If show_stdout=True, then the above is still done, but with DEBUG # replaced by INFO. if show_stdout: # Then log the subprocess output at INFO level. log_subprocess = subprocess_logger.info used_level = std_logging.INFO else: # Then log the subprocess output using DEBUG. This also ensures # it will be logged to the log file (aka user_log), if enabled. log_subprocess = subprocess_logger.debug used_level = std_logging.DEBUG # Whether the subprocess will be visible in the console. showing_subprocess = subprocess_logger.getEffectiveLevel() <= used_level # Only use the spinner if we're not showing the subprocess output # and we have a spinner. use_spinner = not showing_subprocess and spinner is not None if command_desc is None: command_desc = format_command_args(cmd) log_subprocess("Running command %s", command_desc) env = os.environ.copy() if extra_environ: env.update(extra_environ) for name in unset_environ: env.pop(name, None) try: proc = subprocess.Popen( cmd, stderr=subprocess.STDOUT, stdin=subprocess.PIPE, stdout=subprocess.PIPE, cwd=cwd, env=env, ) proc.stdin.close() except Exception as exc: subprocess_logger.critical( "Error %s while executing command %s", exc, command_desc, ) raise all_output = [] while True: # The "line" value is a unicode string in Python 2. line = console_to_str(proc.stdout.readline()) if not line: break line = line.rstrip() all_output.append(line + '\n') # Show the line immediately. log_subprocess(line) # Update the spinner. if use_spinner: spinner.spin() try: proc.wait() finally: if proc.stdout: proc.stdout.close() proc_had_error = ( proc.returncode and proc.returncode not in extra_ok_returncodes ) if use_spinner: if proc_had_error: spinner.finish("error") else: spinner.finish("done") if proc_had_error: if on_returncode == 'raise': if not showing_subprocess: # Then the subprocess streams haven't been logged to the # console yet. msg = make_subprocess_output_error( cmd_args=cmd, cwd=cwd, lines=all_output, exit_status=proc.returncode, ) subprocess_logger.error(msg) exc_msg = ( 'Command errored out with exit status {}: {} ' 'Check the logs for full command output.' 
).format(proc.returncode, command_desc) raise InstallationError(exc_msg) elif on_returncode == 'warn': subprocess_logger.warning( 'Command "%s" had error code %s in %s', command_desc, proc.returncode, cwd, ) elif on_returncode == 'ignore': pass else: raise ValueError('Invalid value: on_returncode=%s' % repr(on_returncode)) return ''.join(all_output) def _make_build_dir(build_dir): os.makedirs(build_dir) write_delete_marker_file(build_dir) class FakeFile(object): """Wrap a list of lines in an object with readline() to make ConfigParser happy.""" def __init__(self, lines): self._gen = (l for l in lines) def readline(self): try: try: return next(self._gen) except NameError: return self._gen.next() except StopIteration: return '' def __iter__(self): return self._gen class StreamWrapper(StringIO): @classmethod def from_stream(cls, orig_stream): cls.orig_stream = orig_stream return cls() # compileall.compile_dir() needs stdout.encoding to print to stdout @property def encoding(self): return self.orig_stream.encoding @contextlib.contextmanager def captured_output(stream_name): """Return a context manager used by captured_stdout/stdin/stderr that temporarily replaces the sys stream *stream_name* with a StringIO. Taken from Lib/support/__init__.py in the CPython repo. """ orig_stdout = getattr(sys, stream_name) setattr(sys, stream_name, StreamWrapper.from_stream(orig_stdout)) try: yield getattr(sys, stream_name) finally: setattr(sys, stream_name, orig_stdout) def captured_stdout(): """Capture the output of sys.stdout: with captured_stdout() as stdout: print('hello') self.assertEqual(stdout.getvalue(), 'hello\n') Taken from Lib/support/__init__.py in the CPython repo. """ return captured_output('stdout') def captured_stderr(): """ See captured_stdout(). """ return captured_output('stderr') class cached_property(object): """A property that is only computed once per instance and then replaces itself with an ordinary attribute. Deleting the attribute resets the property. Source: https://github.com/bottlepy/bottle/blob/0.11.5/bottle.py#L175 """ def __init__(self, func): self.__doc__ = getattr(func, '__doc__') self.func = func def __get__(self, obj, cls): if obj is None: # We're being accessed from the class itself, not from an object return self value = obj.__dict__[self.func.__name__] = self.func(obj) return value def get_installed_version(dist_name, working_set=None): """Get the installed version of dist_name avoiding pkg_resources cache""" # Create a requirement that we'll look for inside of setuptools. req = pkg_resources.Requirement.parse(dist_name) if working_set is None: # We want to avoid having this cached, so we need to construct a new # working set each time. working_set = pkg_resources.WorkingSet() # Get the installed distribution from our working set dist = working_set.find(req) # Check to see if we got an installed distribution or not, if we did # we want to return it's version. return dist.version if dist else None def consume(iterator): """Consume an iterable at C speed.""" deque(iterator, maxlen=0) # Simulates an enum def enum(*sequential, **named): enums = dict(zip(sequential, range(len(sequential))), **named) reverse = {value: key for key, value in enums.items()} enums['reverse_mapping'] = reverse return type('Enum', (), enums) def path_to_url(path): # type: (Union[str, Text]) -> str """ Convert a path to a file: URL. The path will be made absolute and have quoted path parts. 
""" path = os.path.normpath(os.path.abspath(path)) url = urllib_parse.urljoin('file:', urllib_request.pathname2url(path)) return url def split_auth_from_netloc(netloc): """ Parse out and remove the auth information from a netloc. Returns: (netloc, (username, password)). """ if '@' not in netloc: return netloc, (None, None) # Split from the right because that's how urllib.parse.urlsplit() # behaves if more than one @ is present (which can be checked using # the password attribute of urlsplit()'s return value). auth, netloc = netloc.rsplit('@', 1) if ':' in auth: # Split from the left because that's how urllib.parse.urlsplit() # behaves if more than one : is present (which again can be checked # using the password attribute of the return value) user_pass = auth.split(':', 1) else: user_pass = auth, None user_pass = tuple( None if x is None else urllib_unquote(x) for x in user_pass ) return netloc, user_pass def redact_netloc(netloc): # type: (str) -> str """ Replace the password in a netloc with "****", if it exists. For example, "user:pass@example.com" returns "user:****@example.com". """ netloc, (user, password) = split_auth_from_netloc(netloc) if user is None: return netloc password = '' if password is None else ':****' return '{user}{password}@{netloc}'.format(user=urllib_parse.quote(user), password=password, netloc=netloc) def _transform_url(url, transform_netloc): """Transform and replace netloc in a url. transform_netloc is a function taking the netloc and returning a tuple. The first element of this tuple is the new netloc. The entire tuple is returned. Returns a tuple containing the transformed url as item 0 and the original tuple returned by transform_netloc as item 1. """ purl = urllib_parse.urlsplit(url) netloc_tuple = transform_netloc(purl.netloc) # stripped url url_pieces = ( purl.scheme, netloc_tuple[0], purl.path, purl.query, purl.fragment ) surl = urllib_parse.urlunsplit(url_pieces) return surl, netloc_tuple def _get_netloc(netloc): return split_auth_from_netloc(netloc) def _redact_netloc(netloc): return (redact_netloc(netloc),) def split_auth_netloc_from_url(url): # type: (str) -> Tuple[str, str, Tuple[str, str]] """ Parse a url into separate netloc, auth, and url with no auth. Returns: (url_without_auth, netloc, (username, password)) """ url_without_auth, (netloc, auth) = _transform_url(url, _get_netloc) return url_without_auth, netloc, auth def remove_auth_from_url(url): # type: (str) -> str """Return a copy of url with 'username:password@' removed.""" # username/pass params are passed to subversion through flags # and are not recognized in the url. return _transform_url(url, _get_netloc)[0] def redact_password_from_url(url): # type: (str) -> str """Replace the password in a given url with ****.""" return _transform_url(url, _redact_netloc)[0] def protect_pip_from_modification_on_windows(modifying_pip): """Protection of pip.exe from modification on Windows On Windows, any operation modifying pip should be run as: python -m pip ... 
""" pip_names = [ "pip.exe", "pip{}.exe".format(sys.version_info[0]), "pip{}.{}.exe".format(*sys.version_info[:2]) ] # See https://github.com/pypa/pip/issues/1299 for more discussion should_show_use_python_msg = ( modifying_pip and WINDOWS and os.path.basename(sys.argv[0]) in pip_names ) if should_show_use_python_msg: new_command = [ sys.executable, "-m", "pip" ] + sys.argv[1:] raise CommandError( 'To modify pip, please run the following command:\n{}' .format(" ".join(new_command)) ) �������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/models.py������������������������0000644�0001750�0001750�00000002021�00000000000�027774� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������"""Utilities for defining models """ import operator class KeyBasedCompareMixin(object): """Provides comparison capabilities that is based on a key """ def __init__(self, key, defining_class): self._compare_key = key self._defining_class = defining_class def __hash__(self): return hash(self._compare_key) def __lt__(self, other): return self._compare(other, operator.__lt__) def __le__(self, other): return self._compare(other, operator.__le__) def __gt__(self, other): return self._compare(other, operator.__gt__) def __ge__(self, other): return self._compare(other, operator.__ge__) def __eq__(self, other): return self._compare(other, operator.__eq__) def __ne__(self, other): return self._compare(other, operator.__ne__) def _compare(self, other, method): if not isinstance(other, self._defining_class): return NotImplemented return method(self._compare_key, other._compare_key) 
���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/outdated.py����������������������0000644�0001750�0001750�00000014224�00000000000�030332� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import import datetime import json import logging import os.path import sys from pip._vendor import lockfile, pkg_resources from pip._vendor.packaging import version as packaging_version from pip._internal.cli.cmdoptions import make_search_scope from pip._internal.index import PackageFinder from pip._internal.models.selection_prefs import SelectionPreferences from pip._internal.utils.compat import WINDOWS from pip._internal.utils.filesystem import check_path_owner from pip._internal.utils.misc import ensure_dir, get_installed_version from pip._internal.utils.packaging import get_installer from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: import optparse from typing import Any, Dict from pip._internal.download import PipSession SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ" logger = logging.getLogger(__name__) class SelfCheckState(object): def __init__(self, cache_dir): # type: (str) -> None self.state = {} # type: Dict[str, Any] self.statefile_path = None # Try to load the existing state if cache_dir: self.statefile_path = os.path.join(cache_dir, "selfcheck.json") try: with open(self.statefile_path) as statefile: self.state = json.load(statefile)[sys.prefix] except (IOError, ValueError, KeyError): # Explicitly suppressing exceptions, since we don't want to # error out if the cache file is invalid. 
pass def save(self, pypi_version, current_time): # type: (str, datetime.datetime) -> None # If we do not have a path to cache in, don't bother saving. if not self.statefile_path: return # Check to make sure that we own the directory if not check_path_owner(os.path.dirname(self.statefile_path)): return # Now that we've ensured the directory is owned by this user, we'll go # ahead and make sure that all our directories are created. ensure_dir(os.path.dirname(self.statefile_path)) # Attempt to write out our version check file with lockfile.LockFile(self.statefile_path): if os.path.exists(self.statefile_path): with open(self.statefile_path) as statefile: state = json.load(statefile) else: state = {} state[sys.prefix] = { "last_check": current_time.strftime(SELFCHECK_DATE_FMT), "pypi_version": pypi_version, } with open(self.statefile_path, "w") as statefile: json.dump(state, statefile, sort_keys=True, separators=(",", ":")) def was_installed_by_pip(pkg): # type: (str) -> bool """Checks whether pkg was installed by pip This is used not to display the upgrade message when pip is in fact installed by system package manager, such as dnf on Fedora. """ try: dist = pkg_resources.get_distribution(pkg) return "pip" == get_installer(dist) except pkg_resources.DistributionNotFound: return False def pip_version_check(session, options): # type: (PipSession, optparse.Values) -> None """Check for an update for pip. Limit the frequency of checks to once per week. State is stored either in the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix of the pip script path. """ installed_version = get_installed_version("pip") if not installed_version: return pip_version = packaging_version.parse(installed_version) pypi_version = None try: state = SelfCheckState(cache_dir=options.cache_dir) current_time = datetime.datetime.utcnow() # Determine if we need to refresh the state if "last_check" in state.state and "pypi_version" in state.state: last_check = datetime.datetime.strptime( state.state["last_check"], SELFCHECK_DATE_FMT ) if (current_time - last_check).total_seconds() < 7 * 24 * 60 * 60: pypi_version = state.state["pypi_version"] # Refresh the version if we need to or just see if we need to warn if pypi_version is None: # Lets use PackageFinder to see what the latest pip version is search_scope = make_search_scope(options, suppress_no_index=True) # Pass allow_yanked=False so we don't suggest upgrading to a # yanked version. selection_prefs = SelectionPreferences( allow_yanked=False, allow_all_prereleases=False, # Explicitly set to False ) finder = PackageFinder.create( search_scope=search_scope, selection_prefs=selection_prefs, trusted_hosts=options.trusted_hosts, session=session, ) candidate = finder.find_candidates("pip").get_best() if candidate is None: return pypi_version = str(candidate.version) # save that we've performed a check state.save(pypi_version, current_time) remote_version = packaging_version.parse(pypi_version) local_version_is_older = ( pip_version < remote_version and pip_version.base_version != remote_version.base_version and was_installed_by_pip('pip') ) # Determine if our pypi_version is older if not local_version_is_older: return # Advise "python -m pip" on Windows to avoid issues # with overwriting pip.exe. 
        if WINDOWS:
            pip_cmd = "python -m pip"
        else:
            pip_cmd = "pip"
        logger.warning(
            "You are using pip version %s, however version %s is "
            "available.\nYou should consider upgrading via the "
            "'%s install --upgrade pip' command.",
            pip_version, pypi_version, pip_cmd
        )
    except Exception:
        logger.debug(
            "There was an error checking the latest version of pip",
            exc_info=True,
        )


# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/packaging.py ----

from __future__ import absolute_import

import logging
from email.parser import FeedParser

from pip._vendor import pkg_resources
from pip._vendor.packaging import specifiers, version

from pip._internal.exceptions import NoneMetadataError
from pip._internal.utils.misc import display_path
from pip._internal.utils.typing import MYPY_CHECK_RUNNING

if MYPY_CHECK_RUNNING:
    from typing import Optional, Tuple
    from email.message import Message
    from pip._vendor.pkg_resources import Distribution


logger = logging.getLogger(__name__)


def check_requires_python(requires_python, version_info):
    # type: (Optional[str], Tuple[int, ...]) -> bool
    """
    Check if the given Python version matches a "Requires-Python" specifier.

    :param version_info: A 3-tuple of ints representing a Python
        major-minor-micro version to check (e.g. `sys.version_info[:3]`).

    :return: `True` if the given Python version satisfies the requirement.
        Otherwise, return `False`.

    :raises InvalidSpecifier: If `requires_python` has an invalid format.
    """
    if requires_python is None:
        # The package provides no information
        return True
    requires_python_specifier = specifiers.SpecifierSet(requires_python)

    python_version = version.parse('.'.join(map(str, version_info)))
    return python_version in requires_python_specifier
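
# Illustrative sketch (not part of pip): how a "Requires-Python" specifier
# is evaluated against an interpreter version tuple by the function above.
# The specifiers and version tuples are made-up examples.
#
#     >>> check_requires_python('>=3.5', (3, 7, 4))
#     True
#     >>> check_requires_python('>=3.8', (3, 7, 4))
#     False
#     >>> check_requires_python(None, (3, 7, 4))  # no metadata -> accept
#     True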
""" if requires_python is None: # The package provides no information return True requires_python_specifier = specifiers.SpecifierSet(requires_python) python_version = version.parse('.'.join(map(str, version_info))) return python_version in requires_python_specifier def get_metadata(dist): # type: (Distribution) -> Message """ :raises NoneMetadataError: if the distribution reports `has_metadata()` True but `get_metadata()` returns None. """ metadata_name = 'METADATA' if (isinstance(dist, pkg_resources.DistInfoDistribution) and dist.has_metadata(metadata_name)): metadata = dist.get_metadata(metadata_name) elif dist.has_metadata('PKG-INFO'): metadata_name = 'PKG-INFO' metadata = dist.get_metadata(metadata_name) else: logger.warning("No metadata found in %s", display_path(dist.location)) metadata = '' if metadata is None: raise NoneMetadataError(dist, metadata_name) feed_parser = FeedParser() # The following line errors out if with a "NoneType" TypeError if # passed metadata=None. feed_parser.feed(metadata) return feed_parser.close() def get_requires_python(dist): # type: (pkg_resources.Distribution) -> Optional[str] """ Return the "Requires-Python" metadata for a distribution, or None if not present. """ pkg_info_dict = get_metadata(dist) requires_python = pkg_info_dict.get('Requires-Python') if requires_python is not None: # Convert to a str to satisfy the type checker, since requires_python # can be a Header object. requires_python = str(requires_python) return requires_python def get_installer(dist): # type: (Distribution) -> str if dist.has_metadata('INSTALLER'): for line in dist.get_metadata_lines('INSTALLER'): if line.strip(): return line.strip() return '' �������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/setuptools_build.py��������������0000644�0001750�0001750�00000002327�00000000000�032122� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������import sys from pip._internal.utils.typing import MYPY_CHECK_RUNNING if MYPY_CHECK_RUNNING: from typing import List # Shim to wrap setup.py invocation with setuptools # # We set sys.argv[0] to the path to the underlying setup.py file so # setuptools / distutils don't take the path to the 
setup.py to be "-c" when # invoking via the shim. This avoids e.g. the following manifest_maker # warning: "warning: manifest_maker: standard file '-c' not found". _SETUPTOOLS_SHIM = ( "import sys, setuptools, tokenize; sys.argv[0] = {0!r}; __file__={0!r};" "f=getattr(tokenize, 'open', open)(__file__);" "code=f.read().replace('\\r\\n', '\\n');" "f.close();" "exec(compile(code, __file__, 'exec'))" ) def make_setuptools_shim_args(setup_py_path, unbuffered_output=False): # type: (str, bool) -> List[str] """ Get setuptools command arguments with shim wrapped setup file invocation. :param setup_py_path: The path to setup.py to be wrapped. :param unbuffered_output: If True, adds the unbuffered switch to the argument list. """ args = [sys.executable] if unbuffered_output: args.append('-u') args.extend(['-c', _SETUPTOOLS_SHIM.format(setup_py_path)]) return args ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/temp_dir.py����������������������0000644�0001750�0001750�00000012333�00000000000�030323� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import import errno import itertools import logging import os.path import tempfile from pip._internal.utils.misc import rmtree logger = logging.getLogger(__name__) class TempDirectory(object): """Helper class that owns and cleans up a temporary directory. This class can be used as a context manager or as an OO representation of a temporary directory. Attributes: path Location to the created temporary directory or None delete Whether the directory should be deleted when exiting (when used as a contextmanager) Methods: create() Creates a temporary directory and stores its path in the path attribute. 
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/temp_dir.py -----

from __future__ import absolute_import

import errno
import itertools
import logging
import os.path
import tempfile

from pip._internal.utils.misc import rmtree

logger = logging.getLogger(__name__)


class TempDirectory(object):
    """Helper class that owns and cleans up a temporary directory.

    This class can be used as a context manager or as an OO representation of
    a temporary directory.

    Attributes:
        path
            Location to the created temporary directory or None
        delete
            Whether the directory should be deleted when exiting
            (when used as a contextmanager)

    Methods:
        create()
            Creates a temporary directory and stores its path in the path
            attribute.
        cleanup()
            Deletes the temporary directory and sets path attribute to None

    When used as a context manager, a temporary directory is created on
    entering the context and, if the delete attribute is True, on exiting the
    context the created directory is deleted.
    """

    def __init__(self, path=None, delete=None, kind="temp"):
        super(TempDirectory, self).__init__()

        if path is None and delete is None:
            # If we were not given an explicit directory, and we were not
            # given an explicit delete option, then we'll default to deleting.
            delete = True

        self.path = path
        self.delete = delete
        self.kind = kind

    def __repr__(self):
        return "<{} {!r}>".format(self.__class__.__name__, self.path)

    def __enter__(self):
        self.create()
        return self

    def __exit__(self, exc, value, tb):
        if self.delete:
            self.cleanup()

    def create(self):
        """Create a temporary directory and store its path in self.path
        """
        if self.path is not None:
            logger.debug(
                "Skipped creation of temporary directory: {}".format(self.path)
            )
            return
        # We realpath here because some systems have their default tmpdir
        # symlinked to another directory.  This tends to confuse build
        # scripts, so we canonicalize the path by traversing potential
        # symlinks here.
        self.path = os.path.realpath(
            tempfile.mkdtemp(prefix="pip-{}-".format(self.kind))
        )
        logger.debug("Created temporary directory: {}".format(self.path))

    def cleanup(self):
        """Remove the temporary directory created and reset state
        """
        if self.path is not None and os.path.exists(self.path):
            rmtree(self.path)
        self.path = None


class AdjacentTempDirectory(TempDirectory):
    """Helper class that creates a temporary directory adjacent to a real one.

    Attributes:
        original
            The original directory to create a temp directory for.
        path
            After calling create() or entering, contains the full path to the
            temporary directory.
        delete
            Whether the directory should be deleted when exiting
            (when used as a contextmanager)
    """
    # The characters that may be used to name the temp directory
    # We always prepend a ~ and then rotate through these until
    # a usable name is found.
    # pkg_resources raises a different error for .dist-info folder
    # with leading '-' and invalid metadata
    LEADING_CHARS = "-~.=%0123456789"

    def __init__(self, original, delete=None):
        super(AdjacentTempDirectory, self).__init__(delete=delete)
        self.original = original.rstrip('/\\')

    @classmethod
    def _generate_names(cls, name):
        """Generates a series of temporary names.

        The algorithm replaces the leading characters in the name
        with ones that are valid filesystem characters, but are not
        valid package names (for both Python and pip definitions of
        package).
        """
        for i in range(1, len(name)):
            for candidate in itertools.combinations_with_replacement(
                    cls.LEADING_CHARS, i - 1):
                new_name = '~' + ''.join(candidate) + name[i:]
                if new_name != name:
                    yield new_name

        # If we make it this far, we will have to make a longer name
        for i in range(len(cls.LEADING_CHARS)):
            for candidate in itertools.combinations_with_replacement(
                    cls.LEADING_CHARS, i):
                new_name = '~' + ''.join(candidate) + name
                if new_name != name:
                    yield new_name

    def create(self):
        root, name = os.path.split(self.original)
        for candidate in self._generate_names(name):
            path = os.path.join(root, candidate)
            try:
                os.mkdir(path)
            except OSError as ex:
                # Continue if the name exists already
                if ex.errno != errno.EEXIST:
                    raise
            else:
                self.path = os.path.realpath(path)
                break

        if not self.path:
            # Final fallback on the default behavior.
            self.path = os.path.realpath(
                tempfile.mkdtemp(prefix="pip-{}-".format(self.kind))
            )
        logger.debug("Created temporary directory: {}".format(self.path))
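
# Illustrative sketch (editorial, not part of pip): for a directory named
# "foo", the generator above keeps the tail of the name while swapping in
# leading characters that can never form a valid package name:

def _demo_generate_names():
    first = list(itertools.islice(
        AdjacentTempDirectory._generate_names('foo'), 3))
    assert first == ['~oo', '~-o', '~~o']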
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/typing.py -----

"""For neatly implementing static typing in pip.

`mypy` - the static type analysis tool we use - uses the `typing` module,
which provides core functionality fundamental to mypy's functioning.

Generally, `typing` would be imported at runtime and used in that fashion -
it acts as a no-op at runtime and does not have any run-time overhead by
design.

As it turns out, `typing` is not vendorable - it uses separate sources for
Python 2/Python 3. Thus, this codebase can not expect it to be present.

To work around this, mypy allows the typing import to be behind a False-y
optional to prevent it from running at runtime and type-comments can be used
to remove the need for the types to be accessible directly during runtime.

This module provides the False-y guard in a nicely named fashion so that a
curious maintainer can reach here to read this.

In pip, all static-typing related imports should be guarded as follows:

    from pip._internal.utils.typing import MYPY_CHECK_RUNNING

    if MYPY_CHECK_RUNNING:
        from typing import ...

Ref: https://github.com/python/mypy/issues/3216
"""

MYPY_CHECK_RUNNING = False
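
# Illustrative sketch (editorial, not part of pip): a module following the
# convention documented above keeps `typing` off the runtime import path
# while mypy still sees the annotations via type comments:

if MYPY_CHECK_RUNNING:
    from typing import List, Optional  # noqa: F401


def _demo_first_or_none(items):
    # type: (List[str]) -> Optional[str]
    # List/Optional exist only for the type checker; the guard above is
    # False at runtime, so nothing is imported.
    return items[0] if items else None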
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/utils/ui.py -----

from __future__ import absolute_import, division

import contextlib
import itertools
import logging
import sys
import time
from signal import SIGINT, default_int_handler, signal

from pip._vendor import six
from pip._vendor.progress import HIDE_CURSOR, SHOW_CURSOR
from pip._vendor.progress.bar import Bar, FillingCirclesBar, IncrementalBar
from pip._vendor.progress.spinner import Spinner

from pip._internal.utils.compat import WINDOWS
from pip._internal.utils.logging import get_indentation
from pip._internal.utils.misc import format_size
from pip._internal.utils.typing import MYPY_CHECK_RUNNING

if MYPY_CHECK_RUNNING:
    from typing import Any, Iterator, IO

try:
    from pip._vendor import colorama
# Lots of different errors can come from this, including SystemError and
# ImportError.
except Exception:
    colorama = None

logger = logging.getLogger(__name__)


def _select_progress_class(preferred, fallback):
    encoding = getattr(preferred.file, "encoding", None)

    # If we don't know what encoding this file is in, then we'll just assume
    # that it doesn't support unicode and use the ASCII bar.
    if not encoding:
        return fallback

    # Collect all of the possible characters we want to use with the preferred
    # bar.
    characters = [
        getattr(preferred, "empty_fill", six.text_type()),
        getattr(preferred, "fill", six.text_type()),
    ]
    characters += list(getattr(preferred, "phases", []))

    # Try to decode the characters we're using for the bar using the encoding
    # of the given file, if this works then we'll assume that we can use the
    # fancier bar and if not we'll fall back to the plaintext bar.
    try:
        six.text_type().join(characters).encode(encoding)
    except UnicodeEncodeError:
        return fallback
    else:
        return preferred


_BaseBar = _select_progress_class(IncrementalBar, Bar)  # type: Any


class InterruptibleMixin(object):
    """
    Helper to ensure that self.finish() gets called on keyboard interrupt.

    This allows downloads to be interrupted without leaving temporary state
    (like hidden cursors) behind.

    This class is similar to the progress library's existing SigIntMixin
    helper, but as of version 1.2, that helper has the following problems:

    1. It calls sys.exit().
    2. It discards the existing SIGINT handler completely.
    3. It leaves its own handler in place even after an uninterrupted finish,
       which will have unexpected delayed effects if the user triggers an
       unrelated keyboard interrupt some time after a progress-displaying
       download has already completed, for example.
    """

    def __init__(self, *args, **kwargs):
        """
        Save the original SIGINT handler for later.
        """
        super(InterruptibleMixin, self).__init__(*args, **kwargs)

        self.original_handler = signal(SIGINT, self.handle_sigint)

        # If signal() returns None, the previous handler was not installed
        # from Python, and we cannot restore it. This probably should not
        # happen, but if it does, we must restore something sensible instead,
        # at least. The least bad option should be Python's default SIGINT
        # handler, which just raises KeyboardInterrupt.
        if self.original_handler is None:
            self.original_handler = default_int_handler

    def finish(self):
        """
        Restore the original SIGINT handler after finishing.

        This should happen regardless of whether the progress display finishes
        normally, or gets interrupted.
        """
        super(InterruptibleMixin, self).finish()
        signal(SIGINT, self.original_handler)

    def handle_sigint(self, signum, frame):
        """
        Call self.finish() before delegating to the original SIGINT handler.

        This handler should only be in place while the progress display is
        active.
        """
        self.finish()
        self.original_handler(signum, frame)


class SilentBar(Bar):

    def update(self):
        pass


class BlueEmojiBar(IncrementalBar):

    suffix = "%(percent)d%%"
    bar_prefix = " "
    bar_suffix = " "
    phases = (u"\U0001F539", u"\U0001F537", u"\U0001F535")  # type: Any


class DownloadProgressMixin(object):

    def __init__(self, *args, **kwargs):
        super(DownloadProgressMixin, self).__init__(*args, **kwargs)
        self.message = (" " * (get_indentation() + 2)) + self.message

    @property
    def downloaded(self):
        return format_size(self.index)

    @property
    def download_speed(self):
        # Avoid zero division errors...
        if self.avg == 0.0:
            return "..."
        return format_size(1 / self.avg) + "/s"

    @property
    def pretty_eta(self):
        if self.eta:
            return "eta %s" % self.eta_td
        return ""

    def iter(self, it, n=1):
        for x in it:
            yield x
            self.next(n)
        self.finish()


class WindowsMixin(object):

    def __init__(self, *args, **kwargs):
        # The Windows terminal does not support the hide/show cursor ANSI
        # codes even with colorama. So we'll ensure that hide_cursor is
        # False on Windows.
        # This call needs to go before the super() call, so that hide_cursor
        # is set in time. The base progress bar class writes the "hide cursor"
        # code to the terminal in its init, so if we don't set this soon
        # enough, we get a "hide" with no corresponding "show"...
        if WINDOWS and self.hide_cursor:
            self.hide_cursor = False

        super(WindowsMixin, self).__init__(*args, **kwargs)

        # Check if we are running on Windows and we have the colorama module,
        # if we do then wrap our file with it.
        if WINDOWS and colorama:
            self.file = colorama.AnsiToWin32(self.file)
            # The progress code expects to be able to call self.file.isatty()
            # but the colorama.AnsiToWin32() object doesn't have that, so
            # we'll add it.
            self.file.isatty = lambda: self.file.wrapped.isatty()
            # The progress code expects to be able to call self.file.flush()
            # but the colorama.AnsiToWin32() object doesn't have that, so
            # we'll add it.
            self.file.flush = lambda: self.file.wrapped.flush()


class BaseDownloadProgressBar(WindowsMixin, InterruptibleMixin,
                              DownloadProgressMixin):

    file = sys.stdout
    message = "%(percent)d%%"
    suffix = "%(downloaded)s %(download_speed)s %(pretty_eta)s"


# NOTE: The "type: ignore" comments on the following classes are there to
#       work around https://github.com/python/typing/issues/241


class DefaultDownloadProgressBar(BaseDownloadProgressBar,
                                 _BaseBar):
    pass


class DownloadSilentBar(BaseDownloadProgressBar, SilentBar):  # type: ignore
    pass


class DownloadBar(BaseDownloadProgressBar,  # type: ignore
                  Bar):
    pass


class DownloadFillingCirclesBar(BaseDownloadProgressBar,  # type: ignore
                                FillingCirclesBar):
    pass


class DownloadBlueEmojiProgressBar(BaseDownloadProgressBar,  # type: ignore
                                   BlueEmojiBar):
    pass


class DownloadProgressSpinner(WindowsMixin, InterruptibleMixin,
                              DownloadProgressMixin, Spinner):

    file = sys.stdout
    suffix = "%(downloaded)s %(download_speed)s"

    def next_phase(self):
        if not hasattr(self, "_phaser"):
            self._phaser = itertools.cycle(self.phases)
        return next(self._phaser)

    def update(self):
        message = self.message % self
        phase = self.next_phase()
        suffix = self.suffix % self
        line = ''.join([
            message,
            " " if message else "",
            phase,
            " " if suffix else "",
            suffix,
        ])

        self.writeln(line)


BAR_TYPES = {
    "off": (DownloadSilentBar, DownloadSilentBar),
    "on": (DefaultDownloadProgressBar, DownloadProgressSpinner),
    "ascii": (DownloadBar, DownloadProgressSpinner),
    "pretty": (DownloadFillingCirclesBar, DownloadProgressSpinner),
    "emoji": (DownloadBlueEmojiProgressBar, DownloadProgressSpinner)
}


def DownloadProgressProvider(progress_bar, max=None):
    if max is None or max == 0:
        return BAR_TYPES[progress_bar][1]().iter
    else:
        return BAR_TYPES[progress_bar][0](max=max).iter


################################################################
# Generic "something is happening" spinners
#
# We don't even try using progress.spinner.Spinner here because it's actually
# simpler to reimplement from scratch than to coerce their code into doing
# what we need.
################################################################


@contextlib.contextmanager
def hidden_cursor(file):
    # type: (IO) -> Iterator[None]
    # The Windows terminal does not support the hide/show cursor ANSI codes,
    # even via colorama. So don't even try.
    if WINDOWS:
        yield
    # We don't want to clutter the output with control characters if we're
    # writing to a file, or if the user is running with --quiet.
    # See https://github.com/pypa/pip/issues/3418
    elif not file.isatty() or logger.getEffectiveLevel() > logging.INFO:
        yield
    else:
        file.write(HIDE_CURSOR)
        try:
            yield
        finally:
            file.write(SHOW_CURSOR)


class RateLimiter(object):
    def __init__(self, min_update_interval_seconds):
        # type: (float) -> None
        self._min_update_interval_seconds = min_update_interval_seconds
        self._last_update = 0  # type: float

    def ready(self):
        # type: () -> bool
        now = time.time()
        delta = now - self._last_update
        return delta >= self._min_update_interval_seconds

    def reset(self):
        # type: () -> None
        self._last_update = time.time()


class SpinnerInterface(object):
    def spin(self):
        # type: () -> None
        raise NotImplementedError()

    def finish(self, final_status):
        # type: (str) -> None
        raise NotImplementedError()


class InteractiveSpinner(SpinnerInterface):
    def __init__(self, message, file=None, spin_chars="-\\|/",
                 # Empirically, 8 updates/second looks nice
                 min_update_interval_seconds=0.125):
        self._message = message
        if file is None:
            file = sys.stdout
        self._file = file
        self._rate_limiter = RateLimiter(min_update_interval_seconds)
        self._finished = False

        self._spin_cycle = itertools.cycle(spin_chars)

        self._file.write(" " * get_indentation() + self._message + " ... ")
        self._width = 0

    def _write(self, status):
        assert not self._finished
        # Erase what we wrote before by backspacing to the beginning, writing
        # spaces to overwrite the old text, and then backspacing again
        backup = "\b" * self._width
        self._file.write(backup + " " * self._width + backup)
        # Now we have a blank slate to add our status
        self._file.write(status)
        self._width = len(status)
        self._file.flush()
        self._rate_limiter.reset()

    def spin(self):
        # type: () -> None
        if self._finished:
            return
        if not self._rate_limiter.ready():
            return
        self._write(next(self._spin_cycle))

    def finish(self, final_status):
        # type: (str) -> None
        if self._finished:
            return
        self._write(final_status)
        self._file.write("\n")
        self._file.flush()
        self._finished = True


# Used for dumb terminals, non-interactive installs (no tty), etc.
# We still print updates occasionally (once every 60 seconds by default) to
# act as a keep-alive for systems like Travis-CI that take lack-of-output as
# an indication that a task has frozen.
class NonInteractiveSpinner(SpinnerInterface):
    def __init__(self, message, min_update_interval_seconds=60):
        # type: (str, float) -> None
        self._message = message
        self._finished = False
        self._rate_limiter = RateLimiter(min_update_interval_seconds)
        self._update("started")

    def _update(self, status):
        assert not self._finished
        self._rate_limiter.reset()
        logger.info("%s: %s", self._message, status)

    def spin(self):
        # type: () -> None
        if self._finished:
            return
        if not self._rate_limiter.ready():
            return
        self._update("still running...")

    def finish(self, final_status):
        # type: (str) -> None
        if self._finished:
            return
        self._update("finished with status '%s'" % (final_status,))
        self._finished = True


@contextlib.contextmanager
def open_spinner(message):
    # type: (str) -> Iterator[SpinnerInterface]
    # Interactive spinner goes directly to sys.stdout rather than being routed
    # through the logging system, but it acts like it has level INFO,
    # i.e. it's only displayed if we're at level INFO or better.
    # Non-interactive spinner goes through the logging system, so it is always
    # in sync with logging configuration.
    if sys.stdout.isatty() and logger.getEffectiveLevel() <= logging.INFO:
        spinner = InteractiveSpinner(message)  # type: SpinnerInterface
    else:
        spinner = NonInteractiveSpinner(message)
    try:
        with hidden_cursor(sys.stdout):
            yield spinner
    except KeyboardInterrupt:
        spinner.finish("canceled")
        raise
    except Exception:
        spinner.finish("error")
        raise
    else:
        spinner.finish("done")
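
# Illustrative sketch (editorial, not part of pip; "Building wheel" is a
# hypothetical message): typical use of open_spinner() as a context manager.
# The generator above guarantees finish("done"/"error"/"canceled") runs on
# every exit path:

def _demo_open_spinner():
    with open_spinner("Building wheel") as spinner:
        for _ in range(3):
            spinner.spin()  # no-op unless the rate limiter is ready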
""" # this mirrors the logic in virtualenv.py for locating the # no-global-site-packages.txt file site_mod_dir = os.path.dirname(os.path.abspath(site.__file__)) no_global_file = os.path.join(site_mod_dir, 'no-global-site-packages.txt') if running_under_virtualenv() and os.path.isfile(no_global_file): return True else: return False �������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000034�00000000000�011452� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������28 mtime=1604735099.9533274 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/�����������������������������������0000755�0001750�0001750�00000000000�00000000000�025577� 5����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/__init__.py������������������������0000644�0001750�0001750�00000001125�00000000000�027707� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# Expose a 
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/__init__.py -----

# Expose a limited set of classes and functions so callers outside of
# the vcs package don't need to import deeper than `pip._internal.vcs`.
# (The test directory and imports protected by MYPY_CHECK_RUNNING may
# still need to import from a vcs sub-package.)
from pip._internal.vcs.versioncontrol import (  # noqa: F401
    RemoteNotFoundError, make_vcs_requirement_url, vcs,
)

# Import all vcs modules to register each VCS in the VcsSupport object.
import pip._internal.vcs.bazaar
import pip._internal.vcs.git
import pip._internal.vcs.mercurial
import pip._internal.vcs.subversion  # noqa: F401
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/bazaar.py -----

from __future__ import absolute_import

import logging
import os

from pip._vendor.six.moves.urllib import parse as urllib_parse

from pip._internal.utils.misc import display_path, path_to_url, rmtree
from pip._internal.vcs.versioncontrol import VersionControl, vcs

logger = logging.getLogger(__name__)


class Bazaar(VersionControl):
    name = 'bzr'
    dirname = '.bzr'
    repo_name = 'branch'
    schemes = (
        'bzr', 'bzr+http', 'bzr+https', 'bzr+ssh', 'bzr+sftp', 'bzr+ftp',
        'bzr+lp',
    )

    def __init__(self, *args, **kwargs):
        super(Bazaar, self).__init__(*args, **kwargs)
        # This is only needed for python <2.7.5
        # Register lp but do not expose as a scheme to support bzr+lp.
        if getattr(urllib_parse, 'uses_fragment', None):
            urllib_parse.uses_fragment.extend(['lp'])

    @staticmethod
    def get_base_rev_args(rev):
        return ['-r', rev]

    def export(self, location, url):
        """
        Export the Bazaar repository at the url to the destination location
        """
        # Remove the location to make sure Bazaar can export it correctly
        if os.path.exists(location):
            rmtree(location)

        url, rev_options = self.get_url_rev_options(url)
        self.run_command(
            ['export', location, url] + rev_options.to_args(),
            show_stdout=False,
        )

    def fetch_new(self, dest, url, rev_options):
        rev_display = rev_options.to_display()
        logger.info(
            'Checking out %s%s to %s',
            url,
            rev_display,
            display_path(dest),
        )
        cmd_args = ['branch', '-q'] + rev_options.to_args() + [url, dest]
        self.run_command(cmd_args)

    def switch(self, dest, url, rev_options):
        self.run_command(['switch', url], cwd=dest)

    def update(self, dest, url, rev_options):
        cmd_args = ['pull', '-q'] + rev_options.to_args()
        self.run_command(cmd_args, cwd=dest)

    @classmethod
    def get_url_rev_and_auth(cls, url):
        # hotfix: the superclass strips 'bzr+' from 'bzr+ssh://' for URL
        # parsing, so re-add it afterwards.
        url, rev, user_pass = super(Bazaar, cls).get_url_rev_and_auth(url)
        if url.startswith('ssh://'):
            url = 'bzr+' + url
        return url, rev, user_pass

    @classmethod
    def get_remote_url(cls, location):
        urls = cls.run_command(['info'], show_stdout=False, cwd=location)
        for line in urls.splitlines():
            line = line.strip()
            for x in ('checkout of branch: ',
                      'parent branch: '):
                if line.startswith(x):
                    repo = line.split(x)[1]
                    if cls._is_local_repository(repo):
                        return path_to_url(repo)
                    return repo
        return None

    @classmethod
    def get_revision(cls, location):
        revision = cls.run_command(
            ['revno'], show_stdout=False, cwd=location,
        )
        return revision.splitlines()[-1]

    @classmethod
    def is_commit_id_equal(cls, dest, name):
        """Always assume the versions don't match"""
        return False


vcs.register(Bazaar)
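
# Illustrative sketch (editorial, not part of pip; the URL is hypothetical):
# the ssh:// special case above restores the VCS marker the superclass
# strips for parsing, so a pinned requirement keeps its 'bzr+ssh://' scheme:

def _demo_bzr_url_roundtrip():
    url, rev, _user_pass = Bazaar.get_url_rev_and_auth(
        'bzr+ssh://example.org/repo@42')
    assert url.startswith('bzr+ssh://')
    assert rev == '42'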
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/git.py -----

from __future__ import absolute_import

import logging
import os.path
import re

from pip._vendor.packaging.version import parse as parse_version
from pip._vendor.six.moves.urllib import parse as urllib_parse
from pip._vendor.six.moves.urllib import request as urllib_request

from pip._internal.exceptions import BadCommand
from pip._internal.utils.compat import samefile
from pip._internal.utils.misc import display_path, redact_password_from_url
from pip._internal.utils.temp_dir import TempDirectory
from pip._internal.vcs.versioncontrol import (
    RemoteNotFoundError, VersionControl, vcs,
)

urlsplit = urllib_parse.urlsplit
urlunsplit = urllib_parse.urlunsplit

logger = logging.getLogger(__name__)

HASH_REGEX = re.compile('^[a-fA-F0-9]{40}$')


def looks_like_hash(sha):
    return bool(HASH_REGEX.match(sha))


class Git(VersionControl):
    name = 'git'
    dirname = '.git'
    repo_name = 'clone'
    schemes = (
        'git', 'git+http', 'git+https', 'git+ssh', 'git+git', 'git+file',
    )
    # Prevent the user's environment variables from interfering with pip:
    # https://github.com/pypa/pip/issues/1130
    unset_environ = ('GIT_DIR', 'GIT_WORK_TREE')
    default_arg_rev = 'HEAD'

    @staticmethod
    def get_base_rev_args(rev):
        return [rev]

    def get_git_version(self):
        VERSION_PFX = 'git version '
        version = self.run_command(['version'], show_stdout=False)
        if version.startswith(VERSION_PFX):
            version = version[len(VERSION_PFX):].split()[0]
        else:
            version = ''
        # get first 3 positions of the git version because
        # on windows it is x.y.z.windows.t, and this parses as
        # LegacyVersion, which is always smaller than a Version.
        version = '.'.join(version.split('.')[:3])
        return parse_version(version)

    @classmethod
    def get_current_branch(cls, location):
        """
        Return the current branch, or None if HEAD isn't at a branch
        (e.g. detached HEAD).
        """
        # git-symbolic-ref exits with empty stdout if "HEAD" is a detached
        # HEAD rather than a symbolic ref.  In addition, the -q causes the
        # command to exit with status code 1 instead of 128 in this case
        # and to suppress the message to stderr.
        args = ['symbolic-ref', '-q', 'HEAD']
        output = cls.run_command(
            args, extra_ok_returncodes=(1, ), show_stdout=False, cwd=location,
        )
        ref = output.strip()

        if ref.startswith('refs/heads/'):
            return ref[len('refs/heads/'):]

        return None

    def export(self, location, url):
        """Export the Git repository at the url to the destination location"""
        if not location.endswith('/'):
            location = location + '/'

        with TempDirectory(kind="export") as temp_dir:
            self.unpack(temp_dir.path, url=url)
            self.run_command(
                ['checkout-index', '-a', '-f', '--prefix', location],
                show_stdout=False, cwd=temp_dir.path
            )

    @classmethod
    def get_revision_sha(cls, dest, rev):
        """
        Return (sha_or_none, is_branch), where sha_or_none is a commit hash
        if the revision names a remote branch or tag, otherwise None.

        Args:
          dest: the repository directory.
          rev: the revision name.
        """
        # Pass rev to pre-filter the list.
        output = cls.run_command(['show-ref', rev], cwd=dest,
                                 show_stdout=False, on_returncode='ignore')
        refs = {}
        for line in output.strip().splitlines():
            try:
                sha, ref = line.split()
            except ValueError:
                # Include the offending line to simplify troubleshooting if
                # this error ever occurs.
                raise ValueError('unexpected show-ref line: {!r}'.format(line))

            refs[ref] = sha

        branch_ref = 'refs/remotes/origin/{}'.format(rev)
        tag_ref = 'refs/tags/{}'.format(rev)

        sha = refs.get(branch_ref)
        if sha is not None:
            return (sha, True)

        sha = refs.get(tag_ref)

        return (sha, False)

    @classmethod
    def resolve_revision(cls, dest, url, rev_options):
        """
        Resolve a revision to a new RevOptions object with the SHA1 of the
        branch, tag, or ref if found.

        Args:
          rev_options: a RevOptions object.
        """
        rev = rev_options.arg_rev
        sha, is_branch = cls.get_revision_sha(dest, rev)

        if sha is not None:
            rev_options = rev_options.make_new(sha)
            rev_options.branch_name = rev if is_branch else None

            return rev_options

        # Do not show a warning for the common case of something that has
        # the form of a Git commit hash.
        if not looks_like_hash(rev):
            logger.warning(
                "Did not find branch or tag '%s', assuming revision or ref.",
                rev,
            )

        if not rev.startswith('refs/'):
            return rev_options

        # If it looks like a ref, we have to fetch it explicitly.
        cls.run_command(
            ['fetch', '-q', url] + rev_options.to_args(),
            cwd=dest,
        )
        # Change the revision to the SHA of the ref we fetched
        sha = cls.get_revision(dest, rev='FETCH_HEAD')
        rev_options = rev_options.make_new(sha)

        return rev_options

    @classmethod
    def is_commit_id_equal(cls, dest, name):
        """
        Return whether the current commit hash equals the given name.

        Args:
          dest: the repository directory.
          name: a string name.
        """
        if not name:
            # Then avoid an unnecessary subprocess call.
            return False

        return cls.get_revision(dest) == name

    def fetch_new(self, dest, url, rev_options):
        rev_display = rev_options.to_display()
        logger.info(
            'Cloning %s%s to %s', redact_password_from_url(url),
            rev_display, display_path(dest),
        )
        self.run_command(['clone', '-q', url, dest])

        if rev_options.rev:
            # Then a specific revision was requested.
            rev_options = self.resolve_revision(dest, url, rev_options)
            branch_name = getattr(rev_options, 'branch_name', None)
            if branch_name is None:
                # Only do a checkout if the current commit id doesn't match
                # the requested revision.
                if not self.is_commit_id_equal(dest, rev_options.rev):
                    cmd_args = ['checkout', '-q'] + rev_options.to_args()
                    self.run_command(cmd_args, cwd=dest)
            elif self.get_current_branch(dest) != branch_name:
                # Then a specific branch was requested, and that branch
                # is not yet checked out.
                track_branch = 'origin/{}'.format(branch_name)
                cmd_args = [
                    'checkout', '-b', branch_name, '--track', track_branch,
                ]
                self.run_command(cmd_args, cwd=dest)

        #: repo may contain submodules
        self.update_submodules(dest)

    def switch(self, dest, url, rev_options):
        self.run_command(['config', 'remote.origin.url', url], cwd=dest)
        cmd_args = ['checkout', '-q'] + rev_options.to_args()
        self.run_command(cmd_args, cwd=dest)

        self.update_submodules(dest)

    def update(self, dest, url, rev_options):
        # First fetch changes from the default remote
        if self.get_git_version() >= parse_version('1.9.0'):
            # fetch tags in addition to everything else
            self.run_command(['fetch', '-q', '--tags'], cwd=dest)
        else:
            self.run_command(['fetch', '-q'], cwd=dest)
        # Then reset to wanted revision (maybe even origin/master)
        rev_options = self.resolve_revision(dest, url, rev_options)
        cmd_args = ['reset', '--hard', '-q'] + rev_options.to_args()
        self.run_command(cmd_args, cwd=dest)
        #: update submodules
        self.update_submodules(dest)

    @classmethod
    def get_remote_url(cls, location):
        """
        Return URL of the first remote encountered.

        Raises RemoteNotFoundError if the repository does not have a remote
        url configured.
        """
        # We need to pass 1 for extra_ok_returncodes since the command
        # exits with return code 1 if there are no matching lines.
        stdout = cls.run_command(
            ['config', '--get-regexp', r'remote\..*\.url'],
            extra_ok_returncodes=(1, ), show_stdout=False, cwd=location,
        )
        remotes = stdout.splitlines()
        try:
            found_remote = remotes[0]
        except IndexError:
            raise RemoteNotFoundError

        for remote in remotes:
            if remote.startswith('remote.origin.url '):
                found_remote = remote
                break
        url = found_remote.split(' ')[1]
        return url.strip()

    @classmethod
    def get_revision(cls, location, rev=None):
        if rev is None:
            rev = 'HEAD'
        current_rev = cls.run_command(
            ['rev-parse', rev], show_stdout=False, cwd=location,
        )
        return current_rev.strip()

    @classmethod
    def get_subdirectory(cls, location):
        # find the repo root
        git_dir = cls.run_command(['rev-parse', '--git-dir'],
                                  show_stdout=False, cwd=location).strip()
        if not os.path.isabs(git_dir):
            git_dir = os.path.join(location, git_dir)
        root_dir = os.path.join(git_dir, '..')
        # find setup.py
        orig_location = location
        while not os.path.exists(os.path.join(location, 'setup.py')):
            last_location = location
            location = os.path.dirname(location)
            if location == last_location:
                # We've traversed up to the root of the filesystem without
                # finding setup.py
                logger.warning(
                    "Could not find setup.py for directory %s (tried all "
                    "parent directories)",
                    orig_location,
                )
                return None
        # relative path of setup.py to repo root
        if samefile(root_dir, location):
            return None
        return os.path.relpath(location, root_dir)

    @classmethod
    def get_url_rev_and_auth(cls, url):
        """
        Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.
        That's required because although they use SSH they sometimes don't
        work with a ssh:// scheme (e.g. GitHub).  But we need a scheme for
        parsing.  Hence we remove it again afterwards and return it as a stub.
        """
        # Works around an apparent Git bug
        # (see https://article.gmane.org/gmane.comp.version-control.git/146500)
        scheme, netloc, path, query, fragment = urlsplit(url)
        if scheme.endswith('file'):
            initial_slashes = path[:-len(path.lstrip('/'))]
            newpath = (
                initial_slashes +
                urllib_request.url2pathname(path)
                .replace('\\', '/').lstrip('/')
            )
            url = urlunsplit((scheme, netloc, newpath, query, fragment))
            after_plus = scheme.find('+') + 1
            url = scheme[:after_plus] + urlunsplit(
                (scheme[after_plus:], netloc, newpath, query, fragment),
            )

        if '://' not in url:
            assert 'file:' not in url
            url = url.replace('git+', 'git+ssh://')
            url, rev, user_pass = super(Git, cls).get_url_rev_and_auth(url)
            url = url.replace('ssh://', '')
        else:
            url, rev, user_pass = super(Git, cls).get_url_rev_and_auth(url)

        return url, rev, user_pass

    @classmethod
    def update_submodules(cls, location):
        if not os.path.exists(os.path.join(location, '.gitmodules')):
            return
        cls.run_command(
            ['submodule', 'update', '--init', '--recursive', '-q'],
            cwd=location,
        )

    @classmethod
    def controls_location(cls, location):
        if super(Git, cls).controls_location(location):
            return True
        try:
            r = cls.run_command(['rev-parse'],
                                cwd=location,
                                show_stdout=False,
                                on_returncode='ignore')
            return not r
        except BadCommand:
            logger.debug("could not determine if %s is under git control "
                         "because git is not available", location)
            return False


vcs.register(Git)
# ----- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/mercurial.py -----

from __future__ import absolute_import

import logging
import os

from pip._vendor.six.moves import configparser

from pip._internal.utils.misc import display_path, path_to_url
from pip._internal.utils.temp_dir import TempDirectory
from pip._internal.vcs.versioncontrol import VersionControl, vcs

logger = logging.getLogger(__name__)


class Mercurial(VersionControl):
    name = 'hg'
    dirname = '.hg'
    repo_name = 'clone'
    schemes = ('hg', 'hg+http', 'hg+https', 'hg+ssh', 'hg+static-http')

    @staticmethod
    def get_base_rev_args(rev):
        return [rev]

    def export(self, location, url):
        """Export the Hg repository at the url to the destination location"""
        with TempDirectory(kind="export") as temp_dir:
            self.unpack(temp_dir.path, url=url)

            self.run_command(
                ['archive', location], show_stdout=False, cwd=temp_dir.path
            )

    def fetch_new(self, dest, url, rev_options):
        rev_display = rev_options.to_display()
        logger.info(
            'Cloning hg %s%s to %s',
            url,
            rev_display,
            display_path(dest),
        )
        self.run_command(['clone', '--noupdate', '-q', url, dest])
        cmd_args = ['update', '-q'] + rev_options.to_args()
        self.run_command(cmd_args, cwd=dest)

    def switch(self, dest, url, rev_options):
        repo_config = os.path.join(dest, self.dirname, 'hgrc')
        config = configparser.RawConfigParser()
        try:
            config.read(repo_config)
            config.set('paths', 'default', url)
            with open(repo_config, 'w') as config_file:
                config.write(config_file)
        except (OSError, configparser.NoSectionError) as exc:
            logger.warning(
                'Could not switch Mercurial repository to %s: %s', url, exc,
            )
        else:
            cmd_args = ['update', '-q'] + rev_options.to_args()
            self.run_command(cmd_args, cwd=dest)

    def update(self, dest, url, rev_options):
        self.run_command(['pull', '-q'], cwd=dest)
        cmd_args = ['update', '-q'] + rev_options.to_args()
        self.run_command(cmd_args, cwd=dest)

    @classmethod
    def get_remote_url(cls, location):
        url = cls.run_command(
            ['showconfig', 'paths.default'],
            show_stdout=False, cwd=location).strip()
        if cls._is_local_repository(url):
            url = path_to_url(url)
        return url.strip()

    @classmethod
    def get_revision(cls, location):
        """
        Return the repository-local changeset revision number, as an integer.
        """
        current_revision = cls.run_command(
            ['parents', '--template={rev}'],
            show_stdout=False, cwd=location).strip()
        return current_revision

    @classmethod
    def get_requirement_revision(cls, location):
        """
        Return the changeset identification hash, as a 40-character
        hexadecimal string
        """
        current_rev_hash = cls.run_command(
            ['parents', '--template={node}'],
            show_stdout=False, cwd=location).strip()
        return current_rev_hash

    @classmethod
    def is_commit_id_equal(cls, dest, name):
        """Always assume the versions don't match"""
        return False


vcs.register(Mercurial)
    @classmethod
    def get_revision(cls, location):
        """
        Return the maximum revision for all files under a given location
        """
        # Note: taken from setuptools.command.egg_info
        revision = 0

        for base, dirs, files in os.walk(location):
            if cls.dirname not in dirs:
                dirs[:] = []
                continue    # no sense walking uncontrolled subdirs
            dirs.remove(cls.dirname)
            entries_fn = os.path.join(base, cls.dirname, 'entries')
            if not os.path.exists(entries_fn):
                # FIXME: should we warn?
                continue

            dirurl, localrev = cls._get_svn_url_rev(base)

            if base == location:
                base = dirurl + '/'   # save the root url
            elif not dirurl or not dirurl.startswith(base):
                dirs[:] = []
                continue    # not part of the same svn tree, skip it
            revision = max(revision, localrev)
        return revision

    @classmethod
    def get_netloc_and_auth(cls, netloc, scheme):
        """
        This override allows the auth information to be passed to svn via the
        --username and --password options instead of via the URL.
        """
        if scheme == 'ssh':
            # The --username and --password options can't be used for
            # svn+ssh URLs, so keep the auth information in the URL.
            return super(Subversion, cls).get_netloc_and_auth(netloc, scheme)

        return split_auth_from_netloc(netloc)

    @classmethod
    def get_url_rev_and_auth(cls, url):
        # hotfix: after the svn+ prefix is removed from svn+ssh://, re-add it
        url, rev, user_pass = super(Subversion, cls).get_url_rev_and_auth(url)
        if url.startswith('ssh://'):
            url = 'svn+' + url
        return url, rev, user_pass

    @staticmethod
    def make_rev_args(username, password):
        extra_args = []
        if username:
            extra_args += ['--username', username]
        if password:
            extra_args += ['--password', password]
        return extra_args

    @classmethod
    def get_remote_url(cls, location):
        # In cases where the source is in a subdirectory, not alongside
        # setup.py, we have to look up in the location until we find a real
        # setup.py
        orig_location = location
        while not os.path.exists(os.path.join(location, 'setup.py')):
            last_location = location
            location = os.path.dirname(location)
            if location == last_location:
                # We've traversed up to the root of the filesystem without
                # finding setup.py
                logger.warning(
                    "Could not find setup.py for directory %s (tried all "
                    "parent directories)",
                    orig_location,
                )
                return None

        return cls._get_svn_url_rev(location)[0]
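    # --- Editor's note (illustrative, not part of pip) -----------------------
    # Worked example of make_rev_args() above, with hypothetical credentials:
    #   make_rev_args('alice', 's3cret')
    #       -> ['--username', 'alice', '--password', 's3cret']
    #   make_rev_args('alice', None) -> ['--username', 'alice']
    # These become RevOptions.extra_args (via get_url_rev_options() in
    # versioncontrol.py) and are appended to every svn checkout/switch/update
    # invocation built above.
    # --------------------------------------------------------------------------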
    @classmethod
    def _get_svn_url_rev(cls, location):
        from pip._internal.exceptions import InstallationError

        entries_path = os.path.join(location, cls.dirname, 'entries')
        if os.path.exists(entries_path):
            with open(entries_path) as f:
                data = f.read()
        else:  # subversion >= 1.7 does not have the 'entries' file
            data = ''

        if (data.startswith('8') or
                data.startswith('9') or
                data.startswith('10')):
            data = list(map(str.splitlines, data.split('\n\x0c\n')))
            del data[0][0]  # get rid of the '8'
            url = data[0][3]
            revs = [int(d[9]) for d in data if len(d) > 9 and d[9]] + [0]
        elif data.startswith('<?xml'):
            match = _svn_xml_url_re.search(data)
            if not match:
                raise ValueError('Badly formatted data: %r' % data)
            url = match.group(1)    # get repository URL
            revs = [int(m.group(1)) for m in _svn_rev_re.finditer(data)] + [0]
        else:
            try:
                # subversion >= 1.7
                # Note that using get_remote_call_options is not necessary
                # here because `svn info` is being run against a local
                # directory. We don't need to worry about making sure
                # interactive mode is being used to prompt for passwords,
                # because passwords are only potentially needed for remote
                # server requests.
                xml = cls.run_command(
                    ['info', '--xml', location],
                    show_stdout=False,
                )
                url = _svn_info_xml_url_re.search(xml).group(1)
                revs = [
                    int(m.group(1)) for m in _svn_info_xml_rev_re.finditer(xml)
                ]
            except InstallationError:
                url, revs = None, []

        if revs:
            rev = max(revs)
        else:
            rev = 0

        return url, rev

    @classmethod
    def is_commit_id_equal(cls, dest, name):
        """Always assume the versions don't match"""
        return False

    def __init__(self, use_interactive=None):
        # type: (bool) -> None
        if use_interactive is None:
            use_interactive = sys.stdin.isatty()
        self.use_interactive = use_interactive

        # This member is used to cache the fetched version of the current
        # ``svn`` client.
        # Special value definitions:
        #   None: Not evaluated yet.
        #   Empty tuple: Could not parse version.
        self._vcs_version = None  # type: Optional[Tuple[int, ...]]

        super(Subversion, self).__init__()

    def call_vcs_version(self):
        # type: () -> Tuple[int, ...]
        """Query the version of the currently installed Subversion client.

        :return: A tuple containing the parts of the version information or
            ``()`` if the version returned from ``svn`` could not be parsed.
        :raises: BadCommand: If ``svn`` is not installed.
        """
        # Example versions:
        #   svn, version 1.10.3 (r1842928)
        #      compiled Feb 25 2019, 14:20:39 on x86_64-apple-darwin17.0.0
        #   svn, version 1.7.14 (r1542130)
        #      compiled Mar 28 2018, 08:49:13 on x86_64-pc-linux-gnu
        version_prefix = 'svn, version '
        version = self.run_command(['--version'], show_stdout=False)
        if not version.startswith(version_prefix):
            return ()

        version = version[len(version_prefix):].split()[0]
        version_list = version.split('.')
        try:
            parsed_version = tuple(map(int, version_list))
        except ValueError:
            return ()

        return parsed_version

    def get_vcs_version(self):
        # type: () -> Tuple[int, ...]
        """Return the version of the currently installed Subversion client.

        If the version of the Subversion client has already been queried,
        a cached value will be used.

        :return: A tuple containing the parts of the version information or
            ``()`` if the version returned from ``svn`` could not be parsed.
        :raises: BadCommand: If ``svn`` is not installed.
        """
        if self._vcs_version is not None:
            # Use cached version, if available.
            # If parsing the version failed previously (empty tuple),
            # do not attempt to parse it again.
            return self._vcs_version

        vcs_version = self.call_vcs_version()
        self._vcs_version = vcs_version
        return vcs_version
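    # --- Editor's note (illustrative, not part of pip) -----------------------
    # Worked example of call_vcs_version() parsing, assuming `svn --version`
    # begins with the line 'svn, version 1.10.3 (r1842928)':
    #   strip the 'svn, version ' prefix -> '1.10.3 (r1842928) ...'
    #   take the first whitespace token  -> '1.10.3'
    #   split on '.' and map to int      -> (1, 10, 3)
    # Output that does not fit this shape (e.g. a vendor-patched banner)
    # yields () instead, which get_vcs_version() caches as "could not parse".
    # --------------------------------------------------------------------------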
    def get_remote_call_options(self):
        # type: () -> List[str]
        """Return options to be used on calls to Subversion that contact
        the server.

        These options are applicable for the following ``svn`` subcommands
        used in this class.

            - checkout
            - export
            - switch
            - update

        :return: A list of command line arguments to pass to ``svn``.
        """
        if not self.use_interactive:
            # --non-interactive switch is available since Subversion 0.14.4.
            # Subversion < 1.8 runs in interactive mode by default.
            return ['--non-interactive']

        svn_version = self.get_vcs_version()
        # By default, Subversion >= 1.8 runs in non-interactive mode if
        # stdin is not a TTY. Since that is how pip invokes SVN, in
        # call_subprocess(), pip must pass --force-interactive to ensure
        # the user can be prompted for a password, if required.
        # SVN added the --force-interactive option in SVN 1.8. Since
        # e.g. RHEL/CentOS 7, which is supported until 2024, ships with
        # SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip
        # can't safely add the option if the SVN version is < 1.8 (or unknown).
        if svn_version >= (1, 8):
            return ['--force-interactive']

        return []

    def export(self, location, url):
        """Export the svn repository at the url to the destination location"""
        url, rev_options = self.get_url_rev_options(url)

        logger.info('Exporting svn repository %s to %s', url, location)
        with indent_log():
            if os.path.exists(location):
                # Subversion doesn't like to check out over an existing
                # directory --force fixes this, but was only added in svn 1.5
                rmtree(location)
            cmd_args = (['export'] + self.get_remote_call_options() +
                        rev_options.to_args() + [url, location])
            self.run_command(cmd_args, show_stdout=False)

    def fetch_new(self, dest, url, rev_options):
        # type: (str, str, RevOptions) -> None
        rev_display = rev_options.to_display()
        logger.info(
            'Checking out %s%s to %s',
            url,
            rev_display,
            display_path(dest),
        )
        cmd_args = (['checkout', '-q'] + self.get_remote_call_options() +
                    rev_options.to_args() + [url, dest])
        self.run_command(cmd_args)

    def switch(self, dest, url, rev_options):
        # type: (str, str, RevOptions) -> None
        cmd_args = (['switch'] + self.get_remote_call_options() +
                    rev_options.to_args() + [url, dest])
        self.run_command(cmd_args)

    def update(self, dest, url, rev_options):
        # type: (str, str, RevOptions) -> None
        cmd_args = (['update'] + self.get_remote_call_options() +
                    rev_options.to_args() + [dest])
        self.run_command(cmd_args)


vcs.register(Subversion)
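# --- Editor's illustrative sketch (not part of pip's source) ----------------
# A standalone restatement of the interactive-mode decision implemented in
# Subversion.get_remote_call_options() above; `svn_version` is whatever tuple
# call_vcs_version() produced, e.g. (1, 7, 14) or () when unparsable.
def _demo_remote_call_options(use_interactive, svn_version):
    if not use_interactive:
        # Safe on every Subversion release pip supports.
        return ['--non-interactive']
    if svn_version >= (1, 8):
        # Only svn 1.8+ understands --force-interactive.
        return ['--force-interactive']
    # Interactive on svn < 1.8 (or unknown version): add nothing.
    return []
# -----------------------------------------------------------------------------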
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/vcs/versioncontrol.py

"""Handles all VCS (version control) support"""

from __future__ import absolute_import

import errno
import logging
import os
import shutil
import sys

from pip._vendor import pkg_resources
from pip._vendor.six.moves.urllib import parse as urllib_parse

from pip._internal.exceptions import BadCommand
from pip._internal.utils.misc import (
    ask_path_exists, backup_dir, call_subprocess, display_path, rmtree,
)
from pip._internal.utils.typing import MYPY_CHECK_RUNNING

if MYPY_CHECK_RUNNING:
    from typing import (
        Any, Dict, Iterable, List, Mapping, Optional, Text, Tuple, Type
    )
    from pip._internal.utils.ui import SpinnerInterface

    AuthInfo = Tuple[Optional[str], Optional[str]]

__all__ = ['vcs']


logger = logging.getLogger(__name__)


def make_vcs_requirement_url(repo_url, rev, project_name, subdir=None):
    """
    Return the URL for a VCS requirement.

    Args:
      repo_url: the remote VCS url, with any needed VCS prefix (e.g. "git+").
      project_name: the (unescaped) project name.
    """
    egg_project_name = pkg_resources.to_filename(project_name)
    req = '{}@{}#egg={}'.format(repo_url, rev, egg_project_name)
    if subdir:
        req += '&subdirectory={}'.format(subdir)

    return req


class RemoteNotFoundError(Exception):
    pass


class RevOptions(object):

    """
    Encapsulates a VCS-specific revision to install, along with any VCS
    install options.

    Instances of this class should be treated as if immutable.
    """

    def __init__(
        self,
        vc_class,  # type: Type[VersionControl]
        rev=None,  # type: Optional[str]
        extra_args=None,  # type: Optional[List[str]]
    ):
        # type: (...) -> None
        """
        Args:
          vc_class: a VersionControl subclass.
          rev: the name of the revision to install.
          extra_args: a list of extra options.
        """
        if extra_args is None:
            extra_args = []

        self.extra_args = extra_args
        self.rev = rev
        self.vc_class = vc_class

    def __repr__(self):
        return '<RevOptions {}: rev={!r}>'.format(self.vc_class.name, self.rev)

    @property
    def arg_rev(self):
        # type: () -> Optional[str]
        if self.rev is None:
            return self.vc_class.default_arg_rev

        return self.rev

    def to_args(self):
        # type: () -> List[str]
        """
        Return the VCS-specific command arguments.
        """
        args = []  # type: List[str]
        rev = self.arg_rev
        if rev is not None:
            args += self.vc_class.get_base_rev_args(rev)
        args += self.extra_args

        return args

    def to_display(self):
        # type: () -> str
        if not self.rev:
            return ''

        return ' (to revision {})'.format(self.rev)

    def make_new(self, rev):
        # type: (str) -> RevOptions
        """
        Make a copy of the current instance, but with a new rev.

        Args:
          rev: the name of the revision for the new object.
        """
        return self.vc_class.make_rev_options(rev, extra_args=self.extra_args)
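# --- Editor's note (illustrative, not part of pip) ---------------------------
# What make_vcs_requirement_url() above produces for a hypothetical project:
#   make_vcs_requirement_url(
#       'git+https://example.com/repo.git', 'a1b2c3', 'my-project',
#       subdir='pkg')
#   -> 'git+https://example.com/repo.git@a1b2c3#egg=my_project&subdirectory=pkg'
# (pkg_resources.to_filename() is what turns 'my-project' into 'my_project'.)
# -----------------------------------------------------------------------------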
""" for vcs_backend in self._registry.values(): if vcs_backend.controls_location(location): logger.debug('Determine that %s uses VCS: %s', location, vcs_backend.name) return vcs_backend return None def get_backend(self, name): # type: (str) -> Optional[VersionControl] """ Return a VersionControl object or None. """ name = name.lower() return self._registry.get(name) vcs = VcsSupport() class VersionControl(object): name = '' dirname = '' repo_name = '' # List of supported schemes for this Version Control schemes = () # type: Tuple[str, ...] # Iterable of environment variable names to pass to call_subprocess(). unset_environ = () # type: Tuple[str, ...] default_arg_rev = None # type: Optional[str] @classmethod def should_add_vcs_url_prefix(cls, remote_url): """ Return whether the vcs prefix (e.g. "git+") should be added to a repository's remote url when used in a requirement. """ return not remote_url.lower().startswith('{}:'.format(cls.name)) @classmethod def get_subdirectory(cls, repo_dir): """ Return the path to setup.py, relative to the repo root. """ return None @classmethod def get_requirement_revision(cls, repo_dir): """ Return the revision string that should be used in a requirement. """ return cls.get_revision(repo_dir) @classmethod def get_src_requirement(cls, repo_dir, project_name): """ Return the requirement string to use to redownload the files currently at the given repository directory. Args: project_name: the (unescaped) project name. The return value has a form similar to the following: {repository_url}@{revision}#egg={project_name} """ repo_url = cls.get_remote_url(repo_dir) if repo_url is None: return None if cls.should_add_vcs_url_prefix(repo_url): repo_url = '{}+{}'.format(cls.name, repo_url) revision = cls.get_requirement_revision(repo_dir) subdir = cls.get_subdirectory(repo_dir) req = make_vcs_requirement_url(repo_url, revision, project_name, subdir=subdir) return req @staticmethod def get_base_rev_args(rev): """ Return the base revision arguments for a vcs command. Args: rev: the name of a revision to install. Cannot be None. """ raise NotImplementedError @classmethod def make_rev_options(cls, rev=None, extra_args=None): # type: (Optional[str], Optional[List[str]]) -> RevOptions """ Return a RevOptions object. Args: rev: the name of a revision to install. extra_args: a list of extra options. """ return RevOptions(cls, rev, extra_args=extra_args) @classmethod def _is_local_repository(cls, repo): # type: (str) -> bool """ posix absolute paths start with os.path.sep, win32 ones start with drive (like c:\\folder) """ drive, tail = os.path.splitdrive(repo) return repo.startswith(os.path.sep) or bool(drive) def export(self, location, url): """ Export the repository at the url to the destination location i.e. only download the files, without vcs informations :param url: the repository URL starting with a vcs prefix. """ raise NotImplementedError @classmethod def get_netloc_and_auth(cls, netloc, scheme): """ Parse the repository URL's netloc, and return the new netloc to use along with auth information. Args: netloc: the original repository URL netloc. scheme: the repository URL's scheme without the vcs prefix. This is mainly for the Subversion class to override, so that auth information can be provided via the --username and --password options instead of through the URL. For other subclasses like Git without such an option, auth information must stay in the URL. Returns: (netloc, (username, password)). 
""" return netloc, (None, None) @classmethod def get_url_rev_and_auth(cls, url): # type: (str) -> Tuple[str, Optional[str], AuthInfo] """ Parse the repository URL to use, and return the URL, revision, and auth info to use. Returns: (url, rev, (username, password)). """ scheme, netloc, path, query, frag = urllib_parse.urlsplit(url) if '+' not in scheme: raise ValueError( "Sorry, {!r} is a malformed VCS url. " "The format is <vcs>+<protocol>://<url>, " "e.g. svn+http://myrepo/svn/MyApp#egg=MyApp".format(url) ) # Remove the vcs prefix. scheme = scheme.split('+', 1)[1] netloc, user_pass = cls.get_netloc_and_auth(netloc, scheme) rev = None if '@' in path: path, rev = path.rsplit('@', 1) url = urllib_parse.urlunsplit((scheme, netloc, path, query, '')) return url, rev, user_pass @staticmethod def make_rev_args(username, password): """ Return the RevOptions "extra arguments" to use in obtain(). """ return [] def get_url_rev_options(self, url): # type: (str) -> Tuple[str, RevOptions] """ Return the URL and RevOptions object to use in obtain() and in some cases export(), as a tuple (url, rev_options). """ url, rev, user_pass = self.get_url_rev_and_auth(url) username, password = user_pass extra_args = self.make_rev_args(username, password) rev_options = self.make_rev_options(rev, extra_args=extra_args) return url, rev_options @staticmethod def normalize_url(url): # type: (str) -> str """ Normalize a URL for comparison by unquoting it and removing any trailing slash. """ return urllib_parse.unquote(url).rstrip('/') @classmethod def compare_urls(cls, url1, url2): # type: (str, str) -> bool """ Compare two repo URLs for identity, ignoring incidental differences. """ return (cls.normalize_url(url1) == cls.normalize_url(url2)) def fetch_new(self, dest, url, rev_options): """ Fetch a revision from a repository, in the case that this is the first fetch from the repository. Args: dest: the directory to fetch the repository to. rev_options: a RevOptions object. """ raise NotImplementedError def switch(self, dest, url, rev_options): """ Switch the repo at ``dest`` to point to ``URL``. Args: rev_options: a RevOptions object. """ raise NotImplementedError def update(self, dest, url, rev_options): """ Update an already-existing repo to the given ``rev_options``. Args: rev_options: a RevOptions object. """ raise NotImplementedError @classmethod def is_commit_id_equal(cls, dest, name): """ Return whether the id of the current commit equals the given name. Args: dest: the repository directory. name: a string name. """ raise NotImplementedError def obtain(self, dest, url): # type: (str, str) -> None """ Install or update in editable mode the package represented by this VersionControl object. :param dest: the repository directory in which to install or update. :param url: the repository URL starting with a vcs prefix. 
""" url, rev_options = self.get_url_rev_options(url) if not os.path.exists(dest): self.fetch_new(dest, url, rev_options) return rev_display = rev_options.to_display() if self.is_repository_directory(dest): existing_url = self.get_remote_url(dest) if self.compare_urls(existing_url, url): logger.debug( '%s in %s exists, and has correct URL (%s)', self.repo_name.title(), display_path(dest), url, ) if not self.is_commit_id_equal(dest, rev_options.rev): logger.info( 'Updating %s %s%s', display_path(dest), self.repo_name, rev_display, ) self.update(dest, url, rev_options) else: logger.info('Skipping because already up-to-date.') return logger.warning( '%s %s in %s exists with URL %s', self.name, self.repo_name, display_path(dest), existing_url, ) prompt = ('(s)witch, (i)gnore, (w)ipe, (b)ackup ', ('s', 'i', 'w', 'b')) else: logger.warning( 'Directory %s already exists, and is not a %s %s.', dest, self.name, self.repo_name, ) # https://github.com/python/mypy/issues/1174 prompt = ('(i)gnore, (w)ipe, (b)ackup ', # type: ignore ('i', 'w', 'b')) logger.warning( 'The plan is to install the %s repository %s', self.name, url, ) response = ask_path_exists('What to do? %s' % prompt[0], prompt[1]) if response == 'a': sys.exit(-1) if response == 'w': logger.warning('Deleting %s', display_path(dest)) rmtree(dest) self.fetch_new(dest, url, rev_options) return if response == 'b': dest_dir = backup_dir(dest) logger.warning( 'Backing up %s to %s', display_path(dest), dest_dir, ) shutil.move(dest, dest_dir) self.fetch_new(dest, url, rev_options) return # Do nothing if the response is "i". if response == 's': logger.info( 'Switching %s %s to %s%s', self.repo_name, display_path(dest), url, rev_display, ) self.switch(dest, url, rev_options) def unpack(self, location, url): # type: (str, str) -> None """ Clean up current location and download the url repository (and vcs infos) into location :param url: the repository URL starting with a vcs prefix. """ if os.path.exists(location): rmtree(location) self.obtain(location, url=url) @classmethod def get_remote_url(cls, location): """ Return the url used at location Raises RemoteNotFoundError if the repository does not have a remote url configured. """ raise NotImplementedError @classmethod def get_revision(cls, location): """ Return the current commit id of the files at the given location. """ raise NotImplementedError @classmethod def run_command( cls, cmd, # type: List[str] show_stdout=True, # type: bool cwd=None, # type: Optional[str] on_returncode='raise', # type: str extra_ok_returncodes=None, # type: Optional[Iterable[int]] command_desc=None, # type: Optional[str] extra_environ=None, # type: Optional[Mapping[str, Any]] spinner=None # type: Optional[SpinnerInterface] ): # type: (...) -> Text """ Run a VCS subcommand This is simply a wrapper around call_subprocess that adds the VCS command name, and checks that the VCS is available """ cmd = [cls.name] + cmd try: return call_subprocess(cmd, show_stdout, cwd, on_returncode=on_returncode, extra_ok_returncodes=extra_ok_returncodes, command_desc=command_desc, extra_environ=extra_environ, unset_environ=cls.unset_environ, spinner=spinner) except OSError as e: # errno.ENOENT = no such file or directory # In other words, the VCS executable isn't available if e.errno == errno.ENOENT: raise BadCommand( 'Cannot find command %r - do you have ' '%r installed and in your ' 'PATH?' 
    @classmethod
    def run_command(
        cls,
        cmd,  # type: List[str]
        show_stdout=True,  # type: bool
        cwd=None,  # type: Optional[str]
        on_returncode='raise',  # type: str
        extra_ok_returncodes=None,  # type: Optional[Iterable[int]]
        command_desc=None,  # type: Optional[str]
        extra_environ=None,  # type: Optional[Mapping[str, Any]]
        spinner=None  # type: Optional[SpinnerInterface]
    ):
        # type: (...) -> Text
        """
        Run a VCS subcommand
        This is simply a wrapper around call_subprocess that adds the VCS
        command name, and checks that the VCS is available
        """
        cmd = [cls.name] + cmd
        try:
            return call_subprocess(cmd, show_stdout, cwd,
                                   on_returncode=on_returncode,
                                   extra_ok_returncodes=extra_ok_returncodes,
                                   command_desc=command_desc,
                                   extra_environ=extra_environ,
                                   unset_environ=cls.unset_environ,
                                   spinner=spinner)
        except OSError as e:
            # errno.ENOENT = no such file or directory
            # In other words, the VCS executable isn't available
            if e.errno == errno.ENOENT:
                raise BadCommand(
                    'Cannot find command %r - do you have '
                    '%r installed and in your PATH?' % (cls.name, cls.name))
            else:
                raise  # re-raise exception if a different error occurred

    @classmethod
    def is_repository_directory(cls, path):
        # type: (str) -> bool
        """
        Return whether a directory path is a repository directory.
        """
        logger.debug('Checking in %s for %s (%s)...',
                     path, cls.dirname, cls.name)
        return os.path.exists(os.path.join(path, cls.dirname))

    @classmethod
    def controls_location(cls, location):
        # type: (str) -> bool
        """
        Check if a location is controlled by the vcs.
        It is meant to be overridden to implement smarter detection
        mechanisms for specific vcs.

        This can do more than is_repository_directory() alone. For example,
        the Git override checks that Git is actually available.
        """
        return cls.is_repository_directory(location)


# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_internal/wheel.py

"""
Support for installing and building the "wheel" binary package format.
"""
""" from __future__ import absolute_import import collections import compileall import csv import hashlib import logging import os.path import re import shutil import stat import sys import warnings from base64 import urlsafe_b64encode from email.parser import Parser from pip._vendor import pkg_resources from pip._vendor.distlib.scripts import ScriptMaker from pip._vendor.packaging.utils import canonicalize_name from pip._vendor.six import StringIO from pip._internal import pep425tags from pip._internal.download import unpack_url from pip._internal.exceptions import ( InstallationError, InvalidWheelFilename, UnsupportedWheel, ) from pip._internal.locations import distutils_scheme from pip._internal.models.link import Link from pip._internal.utils.logging import indent_log from pip._internal.utils.marker_files import PIP_DELETE_MARKER_FILENAME from pip._internal.utils.misc import ( LOG_DIVIDER, call_subprocess, captured_stdout, ensure_dir, format_command_args, path_to_url, read_chunks, ) from pip._internal.utils.setuptools_build import make_setuptools_shim_args from pip._internal.utils.temp_dir import TempDirectory from pip._internal.utils.typing import MYPY_CHECK_RUNNING from pip._internal.utils.ui import open_spinner if MYPY_CHECK_RUNNING: from typing import ( Dict, List, Optional, Sequence, Mapping, Tuple, IO, Text, Any, Iterable ) from pip._vendor.packaging.requirements import Requirement from pip._internal.req.req_install import InstallRequirement from pip._internal.download import PipSession from pip._internal.index import FormatControl, PackageFinder from pip._internal.operations.prepare import ( RequirementPreparer ) from pip._internal.cache import WheelCache from pip._internal.pep425tags import Pep425Tag InstalledCSVRow = Tuple[str, ...] VERSION_COMPATIBLE = (1, 0) logger = logging.getLogger(__name__) def normpath(src, p): return os.path.relpath(src, p).replace(os.path.sep, '/') def hash_file(path, blocksize=1 << 20): # type: (str, int) -> Tuple[Any, int] """Return (hash, length) for path using hashlib.sha256()""" h = hashlib.sha256() length = 0 with open(path, 'rb') as f: for block in read_chunks(f, size=blocksize): length += len(block) h.update(block) return (h, length) # type: ignore def rehash(path, blocksize=1 << 20): # type: (str, int) -> Tuple[str, str] """Return (encoded_digest, length) for path using hashlib.sha256()""" h, length = hash_file(path, blocksize) digest = 'sha256=' + urlsafe_b64encode( h.digest() ).decode('latin1').rstrip('=') # unicode/str python2 issues return (digest, str(length)) # type: ignore def open_for_csv(name, mode): # type: (str, Text) -> IO if sys.version_info[0] < 3: nl = {} # type: Dict[str, Any] bin = 'b' else: nl = {'newline': ''} # type: Dict[str, Any] bin = '' return open(name, mode + bin, **nl) def replace_python_tag(wheelname, new_tag): # type: (str, str) -> str """Replace the Python tag in a wheel file name with a new value. """ parts = wheelname.split('-') parts[-3] = new_tag return '-'.join(parts) def fix_script(path): # type: (str) -> Optional[bool] """Replace #!python with #!/path/to/python Return True if file was changed.""" # XXX RECORD hashes will need to be updated if os.path.isfile(path): with open(path, 'rb') as script: firstline = script.readline() if not firstline.startswith(b'#!python'): return False exename = sys.executable.encode(sys.getfilesystemencoding()) firstline = b'#!' 
def fix_script(path):
    # type: (str) -> Optional[bool]
    """Replace #!python with #!/path/to/python
    Return True if file was changed."""
    # XXX RECORD hashes will need to be updated
    if os.path.isfile(path):
        with open(path, 'rb') as script:
            firstline = script.readline()
            if not firstline.startswith(b'#!python'):
                return False
            exename = sys.executable.encode(sys.getfilesystemencoding())
            firstline = b'#!' + exename + os.linesep.encode("ascii")
            rest = script.read()
        with open(path, 'wb') as script:
            script.write(firstline)
            script.write(rest)
        return True
    return None


dist_info_re = re.compile(r"""^(?P<namever>(?P<name>.+?)(-(?P<ver>.+?))?)
                                \.dist-info$""", re.VERBOSE)


def root_is_purelib(name, wheeldir):
    # type: (str, str) -> bool
    """
    Return True if the extracted wheel in wheeldir should go into purelib.
    """
    name_folded = name.replace("-", "_")
    for item in os.listdir(wheeldir):
        match = dist_info_re.match(item)
        if match and match.group('name') == name_folded:
            with open(os.path.join(wheeldir, item, 'WHEEL')) as wheel:
                for line in wheel:
                    line = line.lower().rstrip()
                    if line == "root-is-purelib: true":
                        return True
    return False


def get_entrypoints(filename):
    # type: (str) -> Tuple[Dict[str, str], Dict[str, str]]
    if not os.path.exists(filename):
        return {}, {}

    # This is done because you can pass a string to entry_points wrappers
    # which means that they may or may not be valid INI files. The attempt
    # here is to strip leading and trailing whitespace in order to make them
    # valid INI files.
    with open(filename) as fp:
        data = StringIO()
        for line in fp:
            data.write(line.strip())
            data.write("\n")
        data.seek(0)

    # get the entry points and then the script names
    entry_points = pkg_resources.EntryPoint.parse_map(data)
    console = entry_points.get('console_scripts', {})
    gui = entry_points.get('gui_scripts', {})

    def _split_ep(s):
        """get the string representation of EntryPoint,
        remove space and split on '='
        """
        return str(s).replace(" ", "").split("=")

    # convert the EntryPoint objects into strings with module:function
    console = dict(_split_ep(v) for v in console.values())
    gui = dict(_split_ep(v) for v in gui.values())
    return console, gui
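# --- Editor's note (illustrative, not part of pip) ---------------------------
# For a hypothetical entry_points.txt containing:
#     [console_scripts]
#     mytool = mypkg.cli:main
#     [gui_scripts]
#     mytool-gui = mypkg.gui:main
# get_entrypoints() above returns:
#     ({'mytool': 'mypkg.cli:main'}, {'mytool-gui': 'mypkg.gui:main'})
# -----------------------------------------------------------------------------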
def message_about_scripts_not_on_PATH(scripts):
    # type: (Sequence[str]) -> Optional[str]
    """Determine if any scripts are not on PATH and format a warning.
    Returns a warning message if one or more scripts are not on PATH,
    otherwise None.
    """
    if not scripts:
        return None

    # Group scripts by the path they were installed in
    grouped_by_dir = collections.defaultdict(set)  # type: Dict[str, set]
    for destfile in scripts:
        parent_dir = os.path.dirname(destfile)
        script_name = os.path.basename(destfile)
        grouped_by_dir[parent_dir].add(script_name)

    # We don't want to warn for directories that are on PATH.
    not_warn_dirs = [
        os.path.normcase(i).rstrip(os.sep) for i in
        os.environ.get("PATH", "").split(os.pathsep)
    ]
    # If an executable sits with sys.executable, we don't warn for it.
    # This covers the case of venv invocations without activating the venv.
    not_warn_dirs.append(os.path.normcase(os.path.dirname(sys.executable)))
    warn_for = {
        parent_dir: scripts for parent_dir, scripts in grouped_by_dir.items()
        if os.path.normcase(parent_dir) not in not_warn_dirs
    }
    if not warn_for:
        return None

    # Format a message
    msg_lines = []
    for parent_dir, scripts in warn_for.items():
        sorted_scripts = sorted(scripts)  # type: List[str]
        if len(sorted_scripts) == 1:
            start_text = "script {} is".format(sorted_scripts[0])
        else:
            start_text = "scripts {} are".format(
                ", ".join(sorted_scripts[:-1]) + " and " + sorted_scripts[-1]
            )
        msg_lines.append(
            "The {} installed in '{}' which is not on PATH."
            .format(start_text, parent_dir)
        )

    last_line_fmt = (
        "Consider adding {} to PATH or, if you prefer "
        "to suppress this warning, use --no-warn-script-location."
    )
    if len(msg_lines) == 1:
        msg_lines.append(last_line_fmt.format("this directory"))
    else:
        msg_lines.append(last_line_fmt.format("these directories"))

    # Returns the formatted multiline message
    return "\n".join(msg_lines)


def sorted_outrows(outrows):
    # type: (Iterable[InstalledCSVRow]) -> List[InstalledCSVRow]
    """
    Return the given rows of a RECORD file in sorted order.

    Each row is a 3-tuple (path, hash, size) and corresponds to a record of
    a RECORD file (see PEP 376 and PEP 427 for details).  For the rows
    passed to this function, the size can be an integer as an int or string,
    or the empty string.
    """
    # Normally, there should only be one row per path, in which case the
    # second and third elements don't come into play when sorting.
    # However, in cases in the wild where a path might happen to occur twice,
    # we don't want the sort operation to trigger an error (but still want
    # determinism).  Since the third element can be an int or string, we
    # coerce each element to a string to avoid a TypeError in this case.
    # For additional background, see--
    # https://github.com/pypa/pip/issues/5868
    return sorted(outrows, key=lambda row: tuple(str(x) for x in row))


def get_csv_rows_for_installed(
    old_csv_rows,  # type: Iterable[List[str]]
    installed,  # type: Dict[str, str]
    changed,  # type: set
    generated,  # type: List[str]
    lib_dir,  # type: str
):
    # type: (...) -> List[InstalledCSVRow]
    """
    :param installed: A map from archive RECORD path to installation RECORD
        path.
    """
    installed_rows = []  # type: List[InstalledCSVRow]
    for row in old_csv_rows:
        if len(row) > 3:
            logger.warning(
                'RECORD line has more than three elements: {}'.format(row)
            )
        # Make a copy because we are mutating the row.
        row = list(row)
        old_path = row[0]
        new_path = installed.pop(old_path, old_path)
        row[0] = new_path
        if new_path in changed:
            digest, length = rehash(new_path)
            row[1] = digest
            row[2] = length
        installed_rows.append(tuple(row))
    for f in generated:
        digest, length = rehash(f)
        installed_rows.append((normpath(f, lib_dir), digest, str(length)))
    for f in installed:
        installed_rows.append((installed[f], '', ''))
    return installed_rows
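# --- Editor's illustrative sketch (not part of pip's source) ----------------
# Why sorted_outrows() stringifies every cell: RECORD sizes may arrive as
# ints, numeric strings, or empty strings, and Python 3 refuses to compare
# int with str.  The rows below are hypothetical.
def _demo_sorted_outrows():
    rows = [
        ('b.py', 'sha256=yyy', 10),    # size as int
        ('a.py', 'sha256=xxx', '20'),  # size as str
        ('a.py', '', ''),              # duplicate path, empty fields
    ]
    # Coercion to str keeps the sort total and deterministic.
    return sorted_outrows(rows)
# -----------------------------------------------------------------------------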
def move_wheel_files(
    name,  # type: str
    req,  # type: Requirement
    wheeldir,  # type: str
    user=False,  # type: bool
    home=None,  # type: Optional[str]
    root=None,  # type: Optional[str]
    pycompile=True,  # type: bool
    scheme=None,  # type: Optional[Mapping[str, str]]
    isolated=False,  # type: bool
    prefix=None,  # type: Optional[str]
    warn_script_location=True  # type: bool
):
    # type: (...) -> None
    """Install a wheel"""
    # TODO: Investigate and break this up.
    # TODO: Look into moving this into a dedicated class for representing an
    #       installation.

    if not scheme:
        scheme = distutils_scheme(
            name, user=user, home=home, root=root, isolated=isolated,
            prefix=prefix,
        )

    if root_is_purelib(name, wheeldir):
        lib_dir = scheme['purelib']
    else:
        lib_dir = scheme['platlib']

    info_dir = []  # type: List[str]
    data_dirs = []
    source = wheeldir.rstrip(os.path.sep) + os.path.sep

    # Record details of the files moved
    #   installed = files copied from the wheel to the destination
    #   changed = files changed while installing (scripts #! line typically)
    #   generated = files newly generated during the install (script wrappers)
    installed = {}  # type: Dict[str, str]
    changed = set()
    generated = []  # type: List[str]

    # Compile all of the pyc files that we're going to be installing
    if pycompile:
        with captured_stdout() as stdout:
            with warnings.catch_warnings():
                warnings.filterwarnings('ignore')
                compileall.compile_dir(source, force=True, quiet=True)
        logger.debug(stdout.getvalue())

    def record_installed(srcfile, destfile, modified=False):
        """Map archive RECORD paths to installation RECORD paths."""
        oldpath = normpath(srcfile, wheeldir)
        newpath = normpath(destfile, lib_dir)
        installed[oldpath] = newpath
        if modified:
            changed.add(destfile)

    def clobber(source, dest, is_base, fixer=None, filter=None):
        ensure_dir(dest)  # common for the 'include' path

        for dir, subdirs, files in os.walk(source):
            basedir = dir[len(source):].lstrip(os.path.sep)
            destdir = os.path.join(dest, basedir)
            if is_base and basedir.split(os.path.sep, 1)[0].endswith('.data'):
                continue
            for s in subdirs:
                destsubdir = os.path.join(dest, basedir, s)
                if is_base and basedir == '' and destsubdir.endswith('.data'):
                    data_dirs.append(s)
                    continue
                elif (is_base and
                        s.endswith('.dist-info') and
                        canonicalize_name(s).startswith(
                            canonicalize_name(req.name))):
                    assert not info_dir, ('Multiple .dist-info directories: ' +
                                          destsubdir + ', ' +
                                          ', '.join(info_dir))
                    info_dir.append(destsubdir)
            for f in files:
                # Skip unwanted files
                if filter and filter(f):
                    continue
                srcfile = os.path.join(dir, f)
                destfile = os.path.join(dest, basedir, f)
                # directory creation is lazy and after the file filtering
                # above to ensure we don't install empty dirs; empty dirs
                # can't be uninstalled.
                ensure_dir(destdir)

                # copyfile (called below) truncates the destination if it
                # exists and then writes the new contents. This is fine in
                # most cases, but can cause a segfault if pip has loaded a
                # shared object (e.g. from pyopenssl through its vendored
                # urllib3). Since the shared object is mmap'd, an attempt to
                # call a symbol in it will then cause a segfault. Unlinking
                # the file allows writing of new contents while allowing the
                # process to continue to use the old copy.
                if os.path.exists(destfile):
                    os.unlink(destfile)

                # We use copyfile (not move, copy, or copy2) to be extra sure
                # that we are not moving directories over (copyfile fails for
                # directories) as well as to ensure that we are not copying
                # over any metadata because we want more control over what
                # metadata we actually copy over.
                shutil.copyfile(srcfile, destfile)

                # Copy over the metadata for the file, currently this only
                # includes the atime and mtime.
                st = os.stat(srcfile)
                if hasattr(os, "utime"):
                    os.utime(destfile, (st.st_atime, st.st_mtime))
                # If our file is executable, then make our destination file
                # executable.
                if os.access(srcfile, os.X_OK):
                    st = os.stat(srcfile)
                    permissions = (
                        st.st_mode | stat.S_IXUSR | stat.S_IXGRP |
                        stat.S_IXOTH
                    )
                    os.chmod(destfile, permissions)

                changed = False
                if fixer:
                    changed = fixer(destfile)
                record_installed(srcfile, destfile, changed)

    clobber(source, lib_dir, True)

    assert info_dir, "%s .dist-info directory not found" % req

    # Get the defined entry points
    ep_file = os.path.join(info_dir[0], 'entry_points.txt')
    console, gui = get_entrypoints(ep_file)

    def is_entrypoint_wrapper(name):
        # EP, EP.exe and EP-script.py are scripts generated for
        # entry point EP by setuptools
        if name.lower().endswith('.exe'):
            matchname = name[:-4]
        elif name.lower().endswith('-script.py'):
            matchname = name[:-10]
        elif name.lower().endswith(".pya"):
            matchname = name[:-4]
        else:
            matchname = name
        # Ignore setuptools-generated scripts
        return (matchname in console or matchname in gui)

    for datadir in data_dirs:
        fixer = None
        filter = None
        for subdir in os.listdir(os.path.join(wheeldir, datadir)):
            fixer = None
            if subdir == 'scripts':
                fixer = fix_script
                filter = is_entrypoint_wrapper
            source = os.path.join(wheeldir, datadir, subdir)
            dest = scheme[subdir]
            clobber(source, dest, False, fixer=fixer, filter=filter)

    maker = ScriptMaker(None, scheme['scripts'])

    # Ensure old scripts are overwritten.
    # See https://github.com/pypa/pip/issues/1800
    maker.clobber = True

    # Ensure we don't generate any variants for scripts because this is almost
    # never what somebody wants.
    # See https://bitbucket.org/pypa/distlib/issue/35/
    maker.variants = {''}

    # This is required because otherwise distlib creates scripts that are not
    # executable.
    # See https://bitbucket.org/pypa/distlib/issue/32/
    maker.set_mode = True

    # Simplify the script and fix the fact that the default script swallows
    # every single stack trace.
    # See https://bitbucket.org/pypa/distlib/issue/34/
    # See https://bitbucket.org/pypa/distlib/issue/33/
    def _get_script_text(entry):
        if entry.suffix is None:
            raise InstallationError(
                "Invalid script entry point: %s for req: %s - A callable "
                "suffix is required. Cf https://packaging.python.org/en/"
                "latest/distributing.html#console-scripts for more "
                "information." % (entry, req)
            )
        return maker.script_template % {
            "module": entry.prefix,
            "import_name": entry.suffix.split(".")[0],
            "func": entry.suffix,
        }

    # ignore type, because mypy disallows assigning to a method,
    # see https://github.com/python/mypy/issues/2427
    maker._get_script_text = _get_script_text  # type: ignore
    maker.script_template = r"""# -*- coding: utf-8 -*-
import re
import sys

from %(module)s import %(import_name)s

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(%(func)s())
"""
    # Special case pip and setuptools to generate versioned wrappers
    #
    # The issue is that some projects (specifically, pip and setuptools) use
    # code in setup.py to create "versioned" entry points - pip2.7 on Python
    # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
    # the wheel metadata at build time, and so if the wheel is installed with
    # a *different* version of Python the entry points will be wrong. The
    # correct fix for this is to enhance the metadata to be able to describe
    # such versioned entry points, but that won't happen till Metadata 2.0 is
    # available.
    # In the meantime, projects using versioned entry points will either have
    # incorrect versioned entry points, or they will not be able to distribute
    # "universal" wheels (i.e., they will need a wheel per Python version).
    #
    # Because setuptools and pip are bundled with _ensurepip and virtualenv,
    # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we
    # override the versioned entry points in the wheel and generate the
    # correct ones. This code is purely a short-term measure until Metadata
    # 2.0 is available.
    #
    # To add the level of hack in this section of code, in order to support
    # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment
    # variable which will control which version scripts get installed.
    #
    # ENSUREPIP_OPTIONS=altinstall
    #   - Only pipX.Y and easy_install-X.Y will be generated and installed
    # ENSUREPIP_OPTIONS=install
    #   - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note
    #     that this option is technically if ENSUREPIP_OPTIONS is set and is
    #     not altinstall
    # DEFAULT
    #   - The default behavior is to install pip, pipX, pipX.Y, easy_install
    #     and easy_install-X.Y.
    pip_script = console.pop('pip', None)
    if pip_script:
        if "ENSUREPIP_OPTIONS" not in os.environ:
            spec = 'pip = ' + pip_script
            generated.extend(maker.make(spec))

        if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall":
            spec = 'pip%s = %s' % (sys.version[:1], pip_script)
            generated.extend(maker.make(spec))

        spec = 'pip%s = %s' % (sys.version[:3], pip_script)
        generated.extend(maker.make(spec))
        # Delete any other versioned pip entry points
        pip_ep = [k for k in console if re.match(r'pip(\d(\.\d)?)?$', k)]
        for k in pip_ep:
            del console[k]
    easy_install_script = console.pop('easy_install', None)
    if easy_install_script:
        if "ENSUREPIP_OPTIONS" not in os.environ:
            spec = 'easy_install = ' + easy_install_script
            generated.extend(maker.make(spec))

        spec = 'easy_install-%s = %s' % (sys.version[:3],
                                         easy_install_script)
        generated.extend(maker.make(spec))
        # Delete any other versioned easy_install entry points
        easy_install_ep = [
            k for k in console if re.match(r'easy_install(-\d\.\d)?$', k)
        ]
        for k in easy_install_ep:
            del console[k]

    # Generate the console and GUI entry points specified in the wheel
    if len(console) > 0:
        generated_console_scripts = maker.make_multiple(
            ['%s = %s' % kv for kv in console.items()]
        )
        generated.extend(generated_console_scripts)

        if warn_script_location:
            msg = message_about_scripts_not_on_PATH(generated_console_scripts)
            if msg is not None:
                logger.warning(msg)

    if len(gui) > 0:
        generated.extend(
            maker.make_multiple(
                ['%s = %s' % kv for kv in gui.items()],
                {'gui': True}
            )
        )

    # Record pip as the installer
    installer = os.path.join(info_dir[0], 'INSTALLER')
    temp_installer = os.path.join(info_dir[0], 'INSTALLER.pip')
    with open(temp_installer, 'wb') as installer_file:
        installer_file.write(b'pip\n')
    shutil.move(temp_installer, installer)
    generated.append(installer)

    # Record details of all files installed
    record = os.path.join(info_dir[0], 'RECORD')
    temp_record = os.path.join(info_dir[0], 'RECORD.pip')
    with open_for_csv(record, 'r') as record_in:
        with open_for_csv(temp_record, 'w+') as record_out:
            reader = csv.reader(record_in)
            outrows = get_csv_rows_for_installed(
                reader, installed=installed, changed=changed,
                generated=generated, lib_dir=lib_dir,
            )
            writer = csv.writer(record_out)
            # Sort to simplify testing.
            for row in sorted_outrows(outrows):
                writer.writerow(row)
    shutil.move(temp_record, record)
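# --- Editor's note (illustrative, not part of pip) ---------------------------
# Concretely, on a hypothetical CPython 3.7 (sys.version[:1] == '3',
# sys.version[:3] == '3.7') the pip special-casing above regenerates:
#   default:                     pip, pip3, pip3.7
#   ENSUREPIP_OPTIONS=install:        pip3, pip3.7
#   ENSUREPIP_OPTIONS=altinstall:           pip3.7
# overriding whatever versioned names were baked into the wheel's metadata.
# -----------------------------------------------------------------------------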
""" try: dist = [d for d in pkg_resources.find_on_path(None, source_dir)][0] wheel_data = dist.get_metadata('WHEEL') wheel_data = Parser().parsestr(wheel_data) version = wheel_data['Wheel-Version'].strip() version = tuple(map(int, version.split('.'))) return version except Exception: return None def check_compatibility(version, name): # type: (Optional[Tuple[int, ...]], str) -> None """ Raises errors or warns if called with an incompatible Wheel-Version. Pip should refuse to install a Wheel-Version that's a major series ahead of what it's compatible with (e.g 2.0 > 1.1); and warn when installing a version only minor version ahead (e.g 1.2 > 1.1). version: a 2-tuple representing a Wheel-Version (Major, Minor) name: name of wheel or package to raise exception about :raises UnsupportedWheel: when an incompatible Wheel-Version is given """ if not version: raise UnsupportedWheel( "%s is in an unsupported or invalid wheel" % name ) if version[0] > VERSION_COMPATIBLE[0]: raise UnsupportedWheel( "%s's Wheel-Version (%s) is not compatible with this version " "of pip" % (name, '.'.join(map(str, version))) ) elif version > VERSION_COMPATIBLE: logger.warning( 'Installing from a newer Wheel-Version (%s)', '.'.join(map(str, version)), ) def format_tag(file_tag): # type: (Tuple[str, ...]) -> str """ Format three tags in the form "<python_tag>-<abi_tag>-<platform_tag>". :param file_tag: A 3-tuple of tags (python_tag, abi_tag, platform_tag). """ return '-'.join(file_tag) class Wheel(object): """A wheel file""" # TODO: Maybe move the class into the models sub-package # TODO: Maybe move the install code into this class wheel_file_re = re.compile( r"""^(?P<namever>(?P<name>.+?)-(?P<ver>.*?)) ((-(?P<build>\d[^-]*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?) \.whl|\.dist-info)$""", re.VERBOSE ) def __init__(self, filename): # type: (str) -> None """ :raises InvalidWheelFilename: when the filename is invalid for a wheel """ wheel_info = self.wheel_file_re.match(filename) if not wheel_info: raise InvalidWheelFilename( "%s is not a valid wheel filename." % filename ) self.filename = filename self.name = wheel_info.group('name').replace('_', '-') # we'll assume "_" means "-" due to wheel naming scheme # (https://github.com/pypa/pip/issues/1150) self.version = wheel_info.group('ver').replace('_', '-') self.build_tag = wheel_info.group('build') self.pyversions = wheel_info.group('pyver').split('.') self.abis = wheel_info.group('abi').split('.') self.plats = wheel_info.group('plat').split('.') # All the tag combinations from this file self.file_tags = { (x, y, z) for x in self.pyversions for y in self.abis for z in self.plats } def get_formatted_file_tags(self): # type: () -> List[str] """ Return the wheel's tags as a sorted list of strings. """ return sorted(format_tag(tag) for tag in self.file_tags) def support_index_min(self, tags=None): # type: (Optional[List[Pep425Tag]]) -> Optional[int] """ Return the lowest index that one of the wheel's file_tag combinations achieves in the supported_tags list e.g. if there are 8 supported tags, and one of the file tags is first in the list, then return 0. Returns None is the wheel is not supported. 
""" if tags is None: # for mock tags = pep425tags.get_supported() indexes = [tags.index(c) for c in self.file_tags if c in tags] return min(indexes) if indexes else None def supported(self, tags=None): # type: (Optional[List[Pep425Tag]]) -> bool """Is this wheel supported on this system?""" if tags is None: # for mock tags = pep425tags.get_supported() return bool(set(tags).intersection(self.file_tags)) def _contains_egg_info( s, _egg_info_re=re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)): """Determine whether the string looks like an egg_info. :param s: The string to parse. E.g. foo-2.1 """ return bool(_egg_info_re.search(s)) def should_use_ephemeral_cache( req, # type: InstallRequirement format_control, # type: FormatControl autobuilding, # type: bool cache_available # type: bool ): # type: (...) -> Optional[bool] """ Return whether to build an InstallRequirement object using the ephemeral cache. :param cache_available: whether a cache directory is available for the autobuilding=True case. :return: True or False to build the requirement with ephem_cache=True or False, respectively; or None not to build the requirement. """ if req.constraint: return None if req.is_wheel: if not autobuilding: logger.info( 'Skipping %s, due to already being wheel.', req.name, ) return None if not autobuilding: return False if req.editable or not req.source_dir: return None if "binary" not in format_control.get_allowed_formats( canonicalize_name(req.name)): logger.info( "Skipping bdist_wheel for %s, due to binaries " "being disabled for it.", req.name, ) return None if req.link and not req.link.is_artifact: # VCS checkout. Build wheel just for this run. return True link = req.link base, ext = link.splitext() if cache_available and _contains_egg_info(base): return False # Otherwise, build the wheel just for this run using the ephemeral # cache since we are either in the case of e.g. a local directory, or # no cache directory is available to use. return True def format_command_result( command_args, # type: List[str] command_output, # type: str ): # type: (...) -> str """ Format command information for logging. """ command_desc = format_command_args(command_args) text = 'Command arguments: {}\n'.format(command_desc) if not command_output: text += 'Command output: None' elif logger.getEffectiveLevel() > logging.DEBUG: text += 'Command output: [use --verbose to show]' else: if not command_output.endswith('\n'): command_output += '\n' text += 'Command output:\n{}{}'.format(command_output, LOG_DIVIDER) return text def get_legacy_build_wheel_path( names, # type: List[str] temp_dir, # type: str req, # type: InstallRequirement command_args, # type: List[str] command_output, # type: str ): # type: (...) -> Optional[str] """ Return the path to the wheel in the temporary build directory. """ # Sort for determinism. 
def should_use_ephemeral_cache(
    req,  # type: InstallRequirement
    format_control,  # type: FormatControl
    autobuilding,  # type: bool
    cache_available  # type: bool
):
    # type: (...) -> Optional[bool]
    """
    Return whether to build an InstallRequirement object using the
    ephemeral cache.

    :param cache_available: whether a cache directory is available for the
        autobuilding=True case.

    :return: True or False to build the requirement with ephem_cache=True
        or False, respectively; or None not to build the requirement.
    """
    if req.constraint:
        return None
    if req.is_wheel:
        if not autobuilding:
            logger.info(
                'Skipping %s, due to already being wheel.', req.name,
            )
        return None
    if not autobuilding:
        return False

    if req.editable or not req.source_dir:
        return None

    if "binary" not in format_control.get_allowed_formats(
            canonicalize_name(req.name)):
        logger.info(
            "Skipping bdist_wheel for %s, due to binaries "
            "being disabled for it.", req.name,
        )
        return None

    if req.link and not req.link.is_artifact:
        # VCS checkout. Build wheel just for this run.
        return True

    link = req.link
    base, ext = link.splitext()
    if cache_available and _contains_egg_info(base):
        return False

    # Otherwise, build the wheel just for this run using the ephemeral
    # cache since we are either in the case of e.g. a local directory, or
    # no cache directory is available to use.
    return True


def format_command_result(
    command_args,  # type: List[str]
    command_output,  # type: str
):
    # type: (...) -> str
    """
    Format command information for logging.
    """
    command_desc = format_command_args(command_args)
    text = 'Command arguments: {}\n'.format(command_desc)

    if not command_output:
        text += 'Command output: None'
    elif logger.getEffectiveLevel() > logging.DEBUG:
        text += 'Command output: [use --verbose to show]'
    else:
        if not command_output.endswith('\n'):
            command_output += '\n'
        text += 'Command output:\n{}{}'.format(command_output, LOG_DIVIDER)

    return text


def get_legacy_build_wheel_path(
    names,  # type: List[str]
    temp_dir,  # type: str
    req,  # type: InstallRequirement
    command_args,  # type: List[str]
    command_output,  # type: str
):
    # type: (...) -> Optional[str]
    """
    Return the path to the wheel in the temporary build directory.
    """
    # Sort for determinism.
    names = sorted(names)
    if not names:
        msg = (
            'Legacy build of wheel for {!r} created no files.\n'
        ).format(req.name)
        msg += format_command_result(command_args, command_output)
        logger.warning(msg)
        return None

    if len(names) > 1:
        msg = (
            'Legacy build of wheel for {!r} created more than one file.\n'
            'Filenames (choosing first): {}\n'
        ).format(req.name, names)
        msg += format_command_result(command_args, command_output)
        logger.warning(msg)

    return os.path.join(temp_dir, names[0])


class WheelBuilder(object):
    """Build wheels from a RequirementSet."""

    def __init__(
        self,
        finder,  # type: PackageFinder
        preparer,  # type: RequirementPreparer
        wheel_cache,  # type: WheelCache
        build_options=None,  # type: Optional[List[str]]
        global_options=None,  # type: Optional[List[str]]
        no_clean=False  # type: bool
    ):
        # type: (...) -> None
        self.finder = finder
        self.preparer = preparer
        self.wheel_cache = wheel_cache

        self._wheel_dir = preparer.wheel_download_dir

        self.build_options = build_options or []
        self.global_options = global_options or []
        self.no_clean = no_clean

    def _build_one(self, req, output_dir, python_tag=None):
        """Build one wheel.

        :return: The filename of the built wheel, or None if the build
            failed.
        """
        # Install build deps into temporary directory (PEP 518)
        with req.build_env:
            return self._build_one_inside_env(req, output_dir,
                                              python_tag=python_tag)

    def _build_one_inside_env(self, req, output_dir, python_tag=None):
        with TempDirectory(kind="wheel") as temp_dir:
            if req.use_pep517:
                builder = self._build_one_pep517
            else:
                builder = self._build_one_legacy
            wheel_path = builder(req, temp_dir.path, python_tag=python_tag)
            if wheel_path is not None:
                wheel_name = os.path.basename(wheel_path)
                dest_path = os.path.join(output_dir, wheel_name)
                try:
                    wheel_hash, length = hash_file(wheel_path)
                    shutil.move(wheel_path, dest_path)
                    logger.info('Created wheel for %s: '
                                'filename=%s size=%d sha256=%s',
                                req.name, wheel_name, length,
                                wheel_hash.hexdigest())
                    logger.info('Stored in directory: %s', output_dir)
                    return dest_path
                except Exception:
                    pass
            # Ignore return, we can't do anything else useful.
            self._clean_one(req)
            return None

    def _base_setup_args(self, req):
        # NOTE: Eventually, we'd want to also -S to the flags here, when we're
        # isolating. Currently, it breaks Python in virtualenvs, because it
        # relies on site.py to find parts of the standard library outside the
        # virtualenv.
        base_cmd = make_setuptools_shim_args(req.setup_py_path,
                                             unbuffered_output=True)
        return base_cmd + list(self.global_options)
    def _build_one_pep517(self, req, tempd, python_tag=None):
        """Build one InstallRequirement using the PEP 517 build process.

        Returns path to wheel if successfully built. Otherwise, returns None.
        """
        assert req.metadata_directory is not None
        if self.build_options:
            # PEP 517 does not support --build-options
            logger.error('Cannot build wheel for %s using PEP 517 when '
                         '--build-options is present' % (req.name,))
            return None
        try:
            req.spin_message = 'Building wheel for %s (PEP 517)' % (req.name,)
            logger.debug('Destination directory: %s', tempd)
            wheel_name = req.pep517_backend.build_wheel(
                tempd,
                metadata_directory=req.metadata_directory
            )
            if python_tag:
                # General PEP 517 backends don't necessarily support
                # a "--python-tag" option, so we rename the wheel
                # file directly.
                new_name = replace_python_tag(wheel_name, python_tag)
                os.rename(
                    os.path.join(tempd, wheel_name),
                    os.path.join(tempd, new_name)
                )
                # Reassign to simplify the return at the end of function
                wheel_name = new_name
        except Exception:
            logger.error('Failed building wheel for %s', req.name)
            return None
        return os.path.join(tempd, wheel_name)

    def _build_one_legacy(self, req, tempd, python_tag=None):
        """Build one InstallRequirement using the "legacy" build process.

        Returns path to wheel if successfully built. Otherwise, returns None.
        """
        base_args = self._base_setup_args(req)

        spin_message = 'Building wheel for %s (setup.py)' % (req.name,)
        with open_spinner(spin_message) as spinner:
            logger.debug('Destination directory: %s', tempd)
            wheel_args = base_args + ['bdist_wheel', '-d', tempd] \
                + self.build_options

            if python_tag is not None:
                wheel_args += ["--python-tag", python_tag]

            try:
                output = call_subprocess(wheel_args, cwd=req.setup_py_dir,
                                         spinner=spinner)
            except Exception:
                spinner.finish("error")
                logger.error('Failed building wheel for %s', req.name)
                return None

            names = os.listdir(tempd)
            wheel_path = get_legacy_build_wheel_path(
                names=names,
                temp_dir=tempd,
                req=req,
                command_args=wheel_args,
                command_output=output,
            )
            return wheel_path

    def _clean_one(self, req):
        base_args = self._base_setup_args(req)

        logger.info('Running setup.py clean for %s', req.name)
        clean_args = base_args + ['clean', '--all']
        try:
            call_subprocess(clean_args, cwd=req.source_dir)
            return True
        except Exception:
            logger.error('Failed cleaning build dir for %s', req.name)
            return False
    def build(
        self,
        requirements,  # type: Iterable[InstallRequirement]
        session,  # type: PipSession
        autobuilding=False  # type: bool
    ):
        # type: (...) -> List[InstallRequirement]
        """Build wheels.

        :param requirements: The requirements for which to build wheels.
        :param autobuilding: If True, build wheels into the wheel cache and
            replace each sdist we built from with its newly built wheel, in
            preparation for installation.
        :return: The list of requirements that failed to build.
        """
        buildset = []
        format_control = self.finder.format_control
        # Whether a cache directory is available for autobuilding=True.
        cache_available = bool(self._wheel_dir or self.wheel_cache.cache_dir)

        for req in requirements:
            ephem_cache = should_use_ephemeral_cache(
                req, format_control=format_control,
                autobuilding=autobuilding,
                cache_available=cache_available,
            )
            if ephem_cache is None:
                continue

            buildset.append((req, ephem_cache))

        if not buildset:
            return []

        # Is any wheel build not using the ephemeral cache?
        if any(not ephem_cache for _, ephem_cache in buildset):
            have_directory_for_build = self._wheel_dir or (
                autobuilding and self.wheel_cache.cache_dir
            )
            assert have_directory_for_build

        # TODO by @pradyunsg
        # Should break up this method into 2 separate methods.

        # Build the wheels.
        logger.info(
            'Building wheels for collected packages: %s',
            ', '.join([req.name for (req, _) in buildset]),
        )
        _cache = self.wheel_cache  # shorter name
        with indent_log():
            build_success, build_failure = [], []
            for req, ephem in buildset:
                python_tag = None
                if autobuilding:
                    python_tag = pep425tags.implementation_tag
                    if ephem:
                        output_dir = _cache.get_ephem_path_for_link(req.link)
                    else:
                        output_dir = _cache.get_path_for_link(req.link)
                    try:
                        ensure_dir(output_dir)
                    except OSError as e:
                        logger.warning("Building wheel for %s failed: %s",
                                       req.name, e)
                        build_failure.append(req)
                        continue
                else:
                    output_dir = self._wheel_dir
                wheel_file = self._build_one(
                    req, output_dir,
                    python_tag=python_tag,
                )
                if wheel_file:
                    build_success.append(req)
                    if autobuilding:
                        # XXX: This is mildly duplicative with prepare_files,
                        # but not close enough to pull out to a single common
                        # method.
                        # The code below assumes temporary source dirs -
                        # prevent it doing bad things.
                        if req.source_dir and not os.path.exists(os.path.join(
                                req.source_dir, PIP_DELETE_MARKER_FILENAME)):
                            raise AssertionError(
                                "bad source dir - missing marker")
                        # Delete the source we built the wheel from
                        req.remove_temporary_source()
                        # set the build directory again - name is known from
                        # the work prepare_files did.
                        req.source_dir = req.build_location(
                            self.preparer.build_dir
                        )
                        # Update the link for this.
                        req.link = Link(path_to_url(wheel_file))
                        assert req.link.is_wheel
                        # extract the wheel into the dir
                        unpack_url(
                            req.link, req.source_dir, None, False,
                            session=session,
                        )
                else:
                    build_failure.append(req)

        # notify success/failure
        if build_success:
            logger.info(
                'Successfully built %s',
                ' '.join([req.name for req in build_success]),
            )
        if build_failure:
            logger.info(
                'Failed to build %s',
                ' '.join([req.name for req in build_failure]),
            )
        # Return a list of requirements that failed to build
        return build_failure
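# A hedged summary of the should_use_ephemeral_cache() contract implied by
# build() above:
#
#     None  -> the requirement is not built at all (skipped);
#     False -> the wheel goes to a persistent location (self._wheel_dir, or
#              the wheel cache keyed by the requirement's link);
#     True  -> the wheel goes to an ephemeral cache directory scoped to the
#              current pip run.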
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/__init__.py ----

"""
pip._vendor is for vendoring dependencies of pip to prevent needing pip to
depend on something external.

Files inside of pip._vendor should be considered immutable and should only be
updated to versions from upstream.
"""
from __future__ import absolute_import

import glob
import os.path
import sys

# Downstream redistributors which have debundled our dependencies should also
# patch this value to be true. This will trigger the additional patching
# to cause things like "six" to be available as pip._vendor.six.
DEBUNDLED = False

# By default, look in this directory for a bunch of .whl files which we will
# add to the beginning of sys.path before attempting to import anything. This
# is done to support downstream re-distributors like Debian and Fedora who
# wish to create their own Wheels for our dependencies to aid in debundling.
WHEEL_DIR = os.path.abspath(os.path.dirname(__file__))


# Define a small helper function to alias our vendored modules to the real
# ones if the vendored ones do not exist. The idea of this was taken from
# https://github.com/kennethreitz/requests/pull/2567.
def vendored(modulename):
    vendored_name = "{0}.{1}".format(__name__, modulename)

    try:
        __import__(modulename, globals(), locals(), level=0)
    except ImportError:
        # We can just silently allow import failures to pass here. If we
        # got to this point it means that ``import pip._vendor.whatever``
        # failed and so did ``import whatever``. Since we're importing this
        # upfront in an attempt to alias imports, not erroring here will
        # just mean we get a regular import error whenever pip *actually*
        # tries to import one of these modules to use it, which actually
        # gives us a better error message than we would have otherwise
        # gotten.
        pass
    else:
        sys.modules[vendored_name] = sys.modules[modulename]
        base, head = vendored_name.rsplit(".", 1)
        setattr(sys.modules[base], head, sys.modules[modulename])
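# A hedged illustration of the aliasing effect (assumes a debundled
# environment in which a system-wide "six" is importable):
#
#     >>> vendored("six")
#     >>> import sys
#     >>> sys.modules["pip._vendor.six"] is sys.modules["six"]
#     True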
# If we're operating in a debundled setup, then we want to go ahead and
# trigger the aliasing of our vendored libraries as well as looking for
# wheels to add to our sys.path. This will typically cause all of this code
# to be a no-op; however, downstream redistributors can enable it in a
# consistent way across all platforms.
if DEBUNDLED:
    # Actually look inside of WHEEL_DIR to find .whl files and add them to
    # the front of our sys.path.
    sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path

    # Actually alias all of our vendored dependencies.
    vendored("cachecontrol")
    vendored("colorama")
    vendored("distlib")
    vendored("distro")
    vendored("html5lib")
    vendored("lockfile")
    vendored("six")
    vendored("six.moves")
    vendored("six.moves.urllib")
    vendored("six.moves.urllib.parse")
    vendored("packaging")
    vendored("packaging.version")
    vendored("packaging.specifiers")
    vendored("pep517")
    vendored("pkg_resources")
    vendored("progress")
    vendored("pytoml")
    vendored("retrying")
    vendored("requests")
    vendored("requests.exceptions")
    vendored("requests.packages")
    vendored("requests.packages.urllib3")
    vendored("requests.packages.urllib3._collections")
    vendored("requests.packages.urllib3.connection")
    vendored("requests.packages.urllib3.connectionpool")
    vendored("requests.packages.urllib3.contrib")
    vendored("requests.packages.urllib3.contrib.ntlmpool")
    vendored("requests.packages.urllib3.contrib.pyopenssl")
    vendored("requests.packages.urllib3.exceptions")
    vendored("requests.packages.urllib3.fields")
    vendored("requests.packages.urllib3.filepost")
    vendored("requests.packages.urllib3.packages")
    vendored("requests.packages.urllib3.packages.ordered_dict")
    vendored("requests.packages.urllib3.packages.six")
    vendored("requests.packages.urllib3.packages.ssl_match_hostname")
    vendored("requests.packages.urllib3.packages.ssl_match_hostname."
             "_implementation")
    vendored("requests.packages.urllib3.poolmanager")
    vendored("requests.packages.urllib3.request")
    vendored("requests.packages.urllib3.response")
    vendored("requests.packages.urllib3.util")
    vendored("requests.packages.urllib3.util.connection")
    vendored("requests.packages.urllib3.util.request")
    vendored("requests.packages.urllib3.util.response")
    vendored("requests.packages.urllib3.util.retry")
    vendored("requests.packages.urllib3.util.ssl_")
    vendored("requests.packages.urllib3.util.timeout")
    vendored("requests.packages.urllib3.util.url")
    vendored("urllib3")
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/appdirs.py ----

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2005-2010 ActiveState Software Inc.
# Copyright (c) 2013 Eddy Petrișor

"""Utilities for determining application-specific dirs.

See <http://github.com/ActiveState/appdirs> for details and usage.
"""
# Dev Notes:
# - MSDN on where to store app data files:
#   http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120
# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html

__version_info__ = (1, 4, 3)
__version__ = '.'.join(map(str, __version_info__))


import sys
import os

PY3 = sys.version_info[0] == 3

if PY3:
    unicode = str

if sys.platform.startswith('java'):
    import platform
    os_name = platform.java_ver()[3][0]
    if os_name.startswith('Windows'):  # "Windows XP", "Windows 7", etc.
        system = 'win32'
    elif os_name.startswith('Mac'):  # "Mac OS X", etc.
        system = 'darwin'
    else:  # "Linux", "SunOS", "FreeBSD", etc.
        # Setting this to "linux2" is not ideal, but only Windows or Mac
        # are actually checked for and the rest of the module expects
        # *sys.platform* style strings.
        system = 'linux2'
else:
    system = sys.platform


def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
    r"""Return full path to the user-specific data dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "roaming" (boolean, default False) can be set True to use the Windows
            roaming appdata directory. That means that for users on a Windows
            network setup for roaming profiles, this user data will be
            sync'd on login. See
            <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
            for a discussion of issues.

    Typical user data directories are:
        Mac OS X:               ~/Library/Application Support/<AppName>
        Unix:                   ~/.local/share/<AppName>    # or in $XDG_DATA_HOME, if defined
        Win XP (not roaming):   C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
        Win XP (roaming):       C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
        Win 7  (not roaming):   C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
        Win 7  (roaming):       C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>

    For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
    That means, by default "~/.local/share/<AppName>".
    """
    if system == "win32":
        if appauthor is None:
            appauthor = appname
        const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA"
        path = os.path.normpath(_get_win_folder(const))
        if appname:
            if appauthor is not False:
                path = os.path.join(path, appauthor, appname)
            else:
                path = os.path.join(path, appname)
    elif system == 'darwin':
        path = os.path.expanduser('~/Library/Application Support/')
        if appname:
            path = os.path.join(path, appname)
    else:
        path = os.getenv('XDG_DATA_HOME',
                         os.path.expanduser("~/.local/share"))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path
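# Illustrative only -- the result depends on platform and environment. On a
# Linux box with XDG defaults one would expect roughly:
#
#     >>> user_data_dir("MyApp", "MyCompany")
#     '/home/<user>/.local/share/MyApp'
#     >>> user_data_dir("MyApp", "MyCompany", version="1.0")
#     '/home/<user>/.local/share/MyApp/1.0'
#
# (appauthor is only consulted on Windows, as the docstring notes.)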
def site_data_dir(appname=None, appauthor=None, version=None, multipath=False):
    r"""Return full path to the user-shared data dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "multipath" is an optional parameter only applicable to *nix
            which indicates that the entire list of data dirs should be
            returned. By default, the first item from XDG_DATA_DIRS is
            returned, or '/usr/local/share/<AppName>' if XDG_DATA_DIRS is
            not set.

    Typical site data directories are:
        Mac OS X:   /Library/Application Support/<AppName>
        Unix:       /usr/local/share/<AppName> or /usr/share/<AppName>
        Win XP:     C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName>
        Vista:      (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
        Win 7:      C:\ProgramData\<AppAuthor>\<AppName>   # Hidden, but writeable on Win 7.

    For Unix, this is using the $XDG_DATA_DIRS[0] default.

    WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
    """
    if system == "win32":
        if appauthor is None:
            appauthor = appname
        path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
        if appname:
            if appauthor is not False:
                path = os.path.join(path, appauthor, appname)
            else:
                path = os.path.join(path, appname)
    elif system == 'darwin':
        path = os.path.expanduser('/Library/Application Support')
        if appname:
            path = os.path.join(path, appname)
    else:
        # XDG default for $XDG_DATA_DIRS
        # only first, if multipath is False
        path = os.getenv('XDG_DATA_DIRS',
                         os.pathsep.join(['/usr/local/share', '/usr/share']))
        pathlist = [os.path.expanduser(x.rstrip(os.sep))
                    for x in path.split(os.pathsep)]
        if appname:
            if version:
                appname = os.path.join(appname, version)
            pathlist = [os.sep.join([x, appname]) for x in pathlist]

        if multipath:
            path = os.pathsep.join(pathlist)
        else:
            path = pathlist[0]
        return path

    if appname and version:
        path = os.path.join(path, version)
    return path


def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
    r"""Return full path to the user-specific config dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "roaming" (boolean, default False) can be set True to use the Windows
            roaming appdata directory. That means that for users on a Windows
            network setup for roaming profiles, this user data will be
            sync'd on login. See
            <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
            for a discussion of issues.

    Typical user config directories are:
        Mac OS X:   same as user_data_dir
        Unix:       ~/.config/<AppName>     # or in $XDG_CONFIG_HOME, if defined
        Win *:      same as user_data_dir

    For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
    That means, by default "~/.config/<AppName>".
    """
    if system in ["win32", "darwin"]:
        path = user_data_dir(appname, appauthor, None, roaming)
    else:
        path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path
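# Illustrative only, again assuming Linux with no XDG variables set:
#
#     >>> site_data_dir("MyApp")
#     '/usr/local/share/MyApp'
#     >>> site_data_dir("MyApp", multipath=True)
#     '/usr/local/share/MyApp:/usr/share/MyApp'
#     >>> user_config_dir("MyApp")
#     '/home/<user>/.config/MyApp'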
def site_config_dir(appname=None, appauthor=None, version=None, multipath=False):
    r"""Return full path to the user-shared config dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "multipath" is an optional parameter only applicable to *nix
            which indicates that the entire list of config dirs should be
            returned. By default, the first item from XDG_CONFIG_DIRS is
            returned, or '/etc/xdg/<AppName>' if XDG_CONFIG_DIRS is not set.

    Typical site config directories are:
        Mac OS X:   same as site_data_dir
        Unix:       /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in
                    $XDG_CONFIG_DIRS
        Win *:      same as site_data_dir
        Vista:      (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)

    For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False

    WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
    """
    if system in ["win32", "darwin"]:
        path = site_data_dir(appname, appauthor)
        if appname and version:
            path = os.path.join(path, version)
    else:
        # XDG default for $XDG_CONFIG_DIRS
        # only first, if multipath is False
        path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
        pathlist = [os.path.expanduser(x.rstrip(os.sep))
                    for x in path.split(os.pathsep)]
        if appname:
            if version:
                appname = os.path.join(appname, version)
            pathlist = [os.sep.join([x, appname]) for x in pathlist]

        if multipath:
            path = os.pathsep.join(pathlist)
        else:
            path = pathlist[0]
    return path


def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True):
    r"""Return full path to the user-specific cache dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "opinion" (boolean) can be False to disable the appending of
            "Cache" to the base app data dir for Windows. See
            discussion below.

    Typical user cache directories are:
        Mac OS X:   ~/Library/Caches/<AppName>
        Unix:       ~/.cache/<AppName> (XDG default)
        Win XP:     C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache
        Vista:      C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache

    On Windows the only suggestion in the MSDN docs is that local settings go
    in the `CSIDL_LOCAL_APPDATA` directory. This is identical to the
    non-roaming app data dir (the default returned by `user_data_dir` above).
    Apps typically put cache data somewhere *under* the given dir here. Some
    examples:
        ...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
        ...\Acme\SuperApp\Cache\1.0

    OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
    This can be disabled with the `opinion=False` option.
    """
    if system == "win32":
        if appauthor is None:
            appauthor = appname
        path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
        if appname:
            if appauthor is not False:
                path = os.path.join(path, appauthor, appname)
            else:
                path = os.path.join(path, appname)
            if opinion:
                path = os.path.join(path, "Cache")
    elif system == 'darwin':
        path = os.path.expanduser('~/Library/Caches')
        if appname:
            path = os.path.join(path, appname)
    else:
        path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path


def user_state_dir(appname=None, appauthor=None, version=None, roaming=False):
    r"""Return full path to the user-specific state dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "roaming" (boolean, default False) can be set True to use the Windows
            roaming appdata directory. That means that for users on a Windows
            network setup for roaming profiles, this user data will be
            sync'd on login. See
            <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
            for a discussion of issues.

    Typical user state directories are:
        Mac OS X:   same as user_data_dir
        Unix:       ~/.local/state/<AppName>   # or in $XDG_STATE_HOME, if defined
        Win *:      same as user_data_dir

    For Unix, we follow this Debian proposal
    <https://wiki.debian.org/XDGBaseDirectorySpecification#state> to extend
    the XDG spec and support $XDG_STATE_HOME. That means, by default
    "~/.local/state/<AppName>".
    """
    if system in ["win32", "darwin"]:
        path = user_data_dir(appname, appauthor, None, roaming)
    else:
        path = os.getenv('XDG_STATE_HOME',
                         os.path.expanduser("~/.local/state"))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path


def user_log_dir(appname=None, appauthor=None, version=None, opinion=True):
    r"""Return full path to the user-specific log dir for this application.

        "appname" is the name of the application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You
            may pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "opinion" (boolean) can be False to disable the appending of
            "Logs" to the base app data dir for Windows, and "log" to the
            base cache dir for Unix. See discussion below.

    Typical user log directories are:
        Mac OS X:   ~/Library/Logs/<AppName>
        Unix:       ~/.cache/<AppName>/log  # or under $XDG_CACHE_HOME if defined
        Win XP:     C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs
        Vista:      C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs

    On Windows the only suggestion in the MSDN docs is that local settings
    go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in
    examples of what some windows apps use for a logs dir.)

    OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA`
    value for Windows and appends "log" to the user cache dir for Unix.
    This can be disabled with the `opinion=False` option.
    """
    if system == "darwin":
        path = os.path.join(
            os.path.expanduser('~/Library/Logs'),
            appname)
    elif system == "win32":
        path = user_data_dir(appname, appauthor, version)
        version = False
        if opinion:
            path = os.path.join(path, "Logs")
    else:
        path = user_cache_dir(appname, appauthor, version)
        version = False
        if opinion:
            path = os.path.join(path, "log")
    if appname and version:
        path = os.path.join(path, version)
    return path
class AppDirs(object):
    """Convenience wrapper for getting application dirs."""

    def __init__(self, appname=None, appauthor=None, version=None,
                 roaming=False, multipath=False):
        self.appname = appname
        self.appauthor = appauthor
        self.version = version
        self.roaming = roaming
        self.multipath = multipath

    @property
    def user_data_dir(self):
        return user_data_dir(self.appname, self.appauthor,
                             version=self.version, roaming=self.roaming)

    @property
    def site_data_dir(self):
        return site_data_dir(self.appname, self.appauthor,
                             version=self.version, multipath=self.multipath)

    @property
    def user_config_dir(self):
        return user_config_dir(self.appname, self.appauthor,
                               version=self.version, roaming=self.roaming)

    @property
    def site_config_dir(self):
        return site_config_dir(self.appname, self.appauthor,
                               version=self.version, multipath=self.multipath)

    @property
    def user_cache_dir(self):
        return user_cache_dir(self.appname, self.appauthor,
                              version=self.version)

    @property
    def user_state_dir(self):
        return user_state_dir(self.appname, self.appauthor,
                              version=self.version)

    @property
    def user_log_dir(self):
        return user_log_dir(self.appname, self.appauthor,
                            version=self.version)


# ---- internal support stuff

def _get_win_folder_from_registry(csidl_name):
    """This is a fallback technique at best. I'm not sure if using the
    registry for this guarantees us the correct answer for all CSIDL_*
    names.
    """
    if PY3:
        import winreg as _winreg
    else:
        import _winreg

    shell_folder_name = {
        "CSIDL_APPDATA": "AppData",
        "CSIDL_COMMON_APPDATA": "Common AppData",
        "CSIDL_LOCAL_APPDATA": "Local AppData",
    }[csidl_name]

    key = _winreg.OpenKey(
        _winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
    )
    dir, type = _winreg.QueryValueEx(key, shell_folder_name)
    return dir


def _get_win_folder_with_pywin32(csidl_name):
    from win32com.shell import shellcon, shell
    dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
    # Try to make this a unicode path because SHGetFolderPath does
    # not return unicode strings when there is unicode data in the
    # path.
    try:
        dir = unicode(dir)

        # Downgrade to short path name if have highbit chars. See
        # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
        has_high_char = False
        for c in dir:
            if ord(c) > 255:
                has_high_char = True
                break
        if has_high_char:
            try:
                import win32api
                dir = win32api.GetShortPathName(dir)
            except ImportError:
                pass
    except UnicodeError:
        pass
    return dir
def _get_win_folder_with_ctypes(csidl_name):
    import ctypes

    csidl_const = {
        "CSIDL_APPDATA": 26,
        "CSIDL_COMMON_APPDATA": 35,
        "CSIDL_LOCAL_APPDATA": 28,
    }[csidl_name]

    buf = ctypes.create_unicode_buffer(1024)
    ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)

    # Downgrade to short path name if have highbit chars. See
    # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
    has_high_char = False
    for c in buf:
        if ord(c) > 255:
            has_high_char = True
            break
    if has_high_char:
        buf2 = ctypes.create_unicode_buffer(1024)
        if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
            buf = buf2

    return buf.value


def _get_win_folder_with_jna(csidl_name):
    import array
    from com.sun import jna
    from com.sun.jna.platform import win32

    buf_size = win32.WinDef.MAX_PATH * 2
    buf = array.zeros('c', buf_size)
    shell = win32.Shell32.INSTANCE
    shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name),
                          None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf)
    dir = jna.Native.toString(buf.tostring()).rstrip("\0")

    # Downgrade to short path name if have highbit chars. See
    # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
    has_high_char = False
    for c in dir:
        if ord(c) > 255:
            has_high_char = True
            break
    if has_high_char:
        buf = array.zeros('c', buf_size)
        kernel = win32.Kernel32.INSTANCE
        if kernel.GetShortPathName(dir, buf, buf_size):
            dir = jna.Native.toString(buf.tostring()).rstrip("\0")

    return dir


if system == "win32":
    try:
        from ctypes import windll
        _get_win_folder = _get_win_folder_with_ctypes
    except ImportError:
        try:
            import com.sun.jna
            _get_win_folder = _get_win_folder_with_jna
        except ImportError:
            _get_win_folder = _get_win_folder_from_registry


# ---- self test code

if __name__ == "__main__":
    appname = "MyApp"
    appauthor = "MyCompany"

    props = ("user_data_dir",
             "user_config_dir",
             "user_cache_dir",
             "user_state_dir",
             "user_log_dir",
             "site_data_dir",
             "site_config_dir")

    print("-- app dirs %s --" % __version__)

    print("-- app dirs (with optional 'version')")
    dirs = AppDirs(appname, appauthor, version="1.0")
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))

    print("\n-- app dirs (without optional 'version')")
    dirs = AppDirs(appname, appauthor)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))

    print("\n-- app dirs (without optional 'appauthor')")
    dirs = AppDirs(appname)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))

    print("\n-- app dirs (with disabled 'appauthor')")
    dirs = AppDirs(appname, appauthor=False)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))
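# Illustrative only (Linux, XDG defaults), complementing the self-test above:
#
#     >>> user_cache_dir("MyApp")
#     '/home/<user>/.cache/MyApp'
#     >>> user_log_dir("MyApp")
#     '/home/<user>/.cache/MyApp/log'
#     >>> user_log_dir("MyApp", opinion=False)
#     '/home/<user>/.cache/MyApp'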
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/__init__.py ----

"""CacheControl import Interface.

Make it easy to import from cachecontrol without long namespaces.
"""
__author__ = "Eric Larson"
__email__ = "eric@ionrock.org"
__version__ = "0.12.5"

from .wrapper import CacheControl
from .adapter import CacheControlAdapter
from .controller import CacheController


# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/_cmd.py ----

import logging

from pip._vendor import requests

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.cache import DictCache
from pip._vendor.cachecontrol.controller import logger

from argparse import ArgumentParser


def setup_logging():
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    logger.addHandler(handler)


def get_session():
    adapter = CacheControlAdapter(
        DictCache(), cache_etags=True, serializer=None, heuristic=None
    )
    sess = requests.Session()
    sess.mount("http://", adapter)
    sess.mount("https://", adapter)

    sess.cache_controller = adapter.controller
    return sess


def get_args():
    parser = ArgumentParser()
    parser.add_argument("url", help="The URL to try and cache")
    return parser.parse_args()


def main(args=None):
    args = get_args()
    sess = get_session()

    # Make a request to get a response
    resp = sess.get(args.url)

    # Turn on logging
    setup_logging()

    # try setting the cache
    sess.cache_controller.cache_response(resp.request, resp.raw)

    # Now try to get it
    if sess.cache_controller.cached_request(resp.request):
        print("Cached!")
    else:
        print("Not cached :(")


if __name__ == "__main__":
    main()
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/adapter.py ----

import types
import functools
import zlib

from pip._vendor.requests.adapters import HTTPAdapter

from .controller import CacheController
from .cache import DictCache
from .filewrapper import CallbackFileWrapper


class CacheControlAdapter(HTTPAdapter):
    invalidating_methods = {"PUT", "DELETE"}

    def __init__(
        self,
        cache=None,
        cache_etags=True,
        controller_class=None,
        serializer=None,
        heuristic=None,
        cacheable_methods=None,
        *args,
        **kw
    ):
        super(CacheControlAdapter, self).__init__(*args, **kw)
        self.cache = cache or DictCache()
        self.heuristic = heuristic
        self.cacheable_methods = cacheable_methods or ("GET",)

        controller_factory = controller_class or CacheController
        self.controller = controller_factory(
            self.cache, cache_etags=cache_etags, serializer=serializer
        )

    def send(self, request, cacheable_methods=None, **kw):
        """
        Send a request. Use the request information to see if it
        exists in the cache and cache the response if we need to and can.
        """
        cacheable = cacheable_methods or self.cacheable_methods
        if request.method in cacheable:
            try:
                cached_response = self.controller.cached_request(request)
            except zlib.error:
                cached_response = None
            if cached_response:
                return self.build_response(request, cached_response,
                                           from_cache=True)

            # check for etags and add headers if appropriate
            request.headers.update(self.controller.conditional_headers(request))

        resp = super(CacheControlAdapter, self).send(request, **kw)

        return resp
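    # Flow sketch for the two methods around this point (a summary of the
    # code, not an addition to it): send() consults
    # controller.cached_request() first; on a miss it attaches conditional
    # headers (If-None-Match / If-Modified-Since, when stored) so that
    # build_response() below can turn a 304 from the server back into the
    # cached response.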
    def build_response(
        self, request, response, from_cache=False, cacheable_methods=None
    ):
        """
        Build a response by making a request or using the cache.

        This will end up calling send and returning a potentially
        cached response.
        """
        cacheable = cacheable_methods or self.cacheable_methods
        if not from_cache and request.method in cacheable:
            # Check for any heuristics that might update headers
            # before trying to cache.
            if self.heuristic:
                response = self.heuristic.apply(response)

            # apply any expiration heuristics
            if response.status == 304:
                # We must have sent an ETag request. This could mean
                # that we've been expired already or that we simply
                # have an etag. In either case, we want to try and
                # update the cache if that is the case.
                cached_response = self.controller.update_cached_response(
                    request, response
                )

                if cached_response is not response:
                    from_cache = True

                # We are done with the server response, read a
                # possible response body (compliant servers will
                # not return one, but we cannot be 100% sure) and
                # release the connection back to the pool.
                response.read(decode_content=False)
                response.release_conn()

                response = cached_response

            # We always cache the 301 responses
            elif response.status == 301:
                self.controller.cache_response(request, response)
            else:
                # Wrap the response file with a wrapper that will cache the
                # response when the stream has been consumed.
                response._fp = CallbackFileWrapper(
                    response._fp,
                    functools.partial(
                        self.controller.cache_response, request, response
                    ),
                )
                if response.chunked:
                    super_update_chunk_length = response._update_chunk_length

                    def _update_chunk_length(self):
                        super_update_chunk_length()
                        if self.chunk_left == 0:
                            self._fp._close()

                    response._update_chunk_length = types.MethodType(
                        _update_chunk_length, response
                    )

        resp = super(CacheControlAdapter, self).build_response(request, response)

        # See if we should invalidate the cache.
        if request.method in self.invalidating_methods and resp.ok:
            cache_url = self.controller.cache_url(request.url)
            self.cache.delete(cache_url)

        # Give the request a from_cache attr to let people use it
        resp.from_cache = from_cache

        return resp

    def close(self):
        self.cache.close()
        super(CacheControlAdapter, self).close()


# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/cache.py ----

"""
The cache object API for implementing caches.

The default is a thread safe in-memory dictionary.
"""
from threading import Lock


class BaseCache(object):

    def get(self, key):
        raise NotImplementedError()

    def set(self, key, value):
        raise NotImplementedError()

    def delete(self, key):
        raise NotImplementedError()

    def close(self):
        pass


class DictCache(BaseCache):

    def __init__(self, init_dict=None):
        self.lock = Lock()
        self.data = init_dict or {}

    def get(self, key):
        return self.data.get(key, None)

    def set(self, key, value):
        with self.lock:
            self.data.update({key: value})

    def delete(self, key):
        with self.lock:
            if key in self.data:
                self.data.pop(key)
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/caches/__init__.py ----

from .file_cache import FileCache  # noqa
from .redis_cache import RedisCache  # noqa
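# Hedged usage sketches for the cache backends re-exported above. A DictCache
# round-trip, grounded directly in cache.py:
#
#     >>> from pip._vendor.cachecontrol.cache import DictCache
#     >>> cache = DictCache()
#     >>> cache.set("key", b"value")
#     >>> cache.get("key")
#     b'value'
#
# Wiring a persistent FileCache into a session (".web_cache" is an arbitrary
# example directory; FileCache also requires the vendored lockfile package):
#
#     >>> from pip._vendor import requests
#     >>> from pip._vendor.cachecontrol import CacheControl
#     >>> from pip._vendor.cachecontrol.caches import FileCache
#     >>> sess = CacheControl(requests.Session(), cache=FileCache(".web_cache"))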
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py ----

import hashlib
import os
from textwrap import dedent

from ..cache import BaseCache
from ..controller import CacheController

try:
    FileNotFoundError
except NameError:
    # py2.X
    FileNotFoundError = (IOError, OSError)


def _secure_open_write(filename, fmode):
    # We only want to write to this file, so open it in write only mode
    flags = os.O_WRONLY

    # os.O_CREAT | os.O_EXCL will fail if the file already exists, so we will
    # only open *new* files.
    # We specify this because we want to ensure that the mode we pass is the
    # mode of the file.
    flags |= os.O_CREAT | os.O_EXCL

    # Do not follow symlinks to prevent someone from making a symlink that
    # we follow and insecurely open a cache file.
    if hasattr(os, "O_NOFOLLOW"):
        flags |= os.O_NOFOLLOW

    # On Windows we'll mark this file as binary
    if hasattr(os, "O_BINARY"):
        flags |= os.O_BINARY

    # Before we open our file, we want to delete any existing file that is
    # there
    try:
        os.remove(filename)
    except (IOError, OSError):
        # The file must not exist already, so we can just skip ahead to
        # opening
        pass

    # Open our file, the use of os.O_CREAT | os.O_EXCL will ensure that if a
    # race condition happens between the os.remove and this line, that an
    # error will be raised. Because we utilize a lockfile this should only
    # happen if someone is attempting to attack us.
    fd = os.open(filename, flags, fmode)
    try:
        return os.fdopen(fd, "wb")

    except:
        # An error occurred wrapping our FD in a file object
        os.close(fd)
        raise


class FileCache(BaseCache):

    def __init__(
        self,
        directory,
        forever=False,
        filemode=0o0600,
        dirmode=0o0700,
        use_dir_lock=None,
        lock_class=None,
    ):

        if use_dir_lock is not None and lock_class is not None:
            raise ValueError("Cannot use use_dir_lock and lock_class together")

        try:
            from pip._vendor.lockfile import LockFile
            from pip._vendor.lockfile.mkdirlockfile import MkdirLockFile
        except ImportError:
            notice = dedent(
                """
            NOTE: In order to use the FileCache you must have
            lockfile installed. You can install it via pip:
              pip install lockfile
            """
            )
            raise ImportError(notice)

        else:
            if use_dir_lock:
                lock_class = MkdirLockFile

            elif lock_class is None:
                lock_class = LockFile

        self.directory = directory
        self.forever = forever
        self.filemode = filemode
        self.dirmode = dirmode
        self.lock_class = lock_class

    @staticmethod
    def encode(x):
        return hashlib.sha224(x.encode()).hexdigest()

    def _fn(self, name):
        # NOTE: This method should not change as some may depend on it.
        # See: https://github.com/ionrock/cachecontrol/issues/63
        #
        # The sha224 hex digest of the key is sharded into five
        # single-character directories followed by the full digest, e.g.
        # <directory>/0/f/2/a/7/0f2a7... (digits illustrative).
        hashed = self.encode(name)
        parts = list(hashed[:5]) + [hashed]
        return os.path.join(self.directory, *parts)

    def get(self, key):
        name = self._fn(key)
        try:
            with open(name, "rb") as fh:
                return fh.read()

        except FileNotFoundError:
            return None

    def set(self, key, value):
        name = self._fn(key)

        # Make sure the directory exists
        try:
            os.makedirs(os.path.dirname(name), self.dirmode)
        except (IOError, OSError):
            pass

        with self.lock_class(name) as lock:
            # Write our actual file
            with _secure_open_write(lock.path, self.filemode) as fh:
                fh.write(value)

    def delete(self, key):
        name = self._fn(key)
        if not self.forever:
            try:
                os.remove(name)
            except FileNotFoundError:
                pass


def url_to_file_path(url, filecache):
    """Return the file cache path based on the URL.

    This does not ensure the file exists!
    """
    key = CacheController.cache_url(url)
    return filecache._fn(key)


# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py ----

from __future__ import division

from datetime import datetime

from pip._vendor.cachecontrol.cache import BaseCache


class RedisCache(BaseCache):

    def __init__(self, conn):
        self.conn = conn

    def get(self, key):
        return self.conn.get(key)

    def set(self, key, value, expires=None):
        if not expires:
            self.conn.set(key, value)
        else:
            expires = expires - datetime.utcnow()
            self.conn.setex(key, int(expires.total_seconds()), value)

    def delete(self, key):
        self.conn.delete(key)
Use with caution!""" for key in self.conn.keys(): self.conn.delete(key) def close(self): """Redis uses connection pooling, no need to close the connection.""" pass ������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/compat.py�������������������0000644�0001750�0001750�00000001267�00000000000�030774� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������try: from urllib.parse import urljoin except ImportError: from urlparse import urljoin try: import cPickle as pickle except ImportError: import pickle # Handle the case where the requests module has been patched to not have # urllib3 bundled as part of its source. 

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/compat.py

try:
    from urllib.parse import urljoin
except ImportError:
    from urlparse import urljoin


try:
    import cPickle as pickle
except ImportError:
    import pickle


# Handle the case where the requests module has been patched to not have
# urllib3 bundled as part of its source.
try:
    from pip._vendor.requests.packages.urllib3.response import HTTPResponse
except ImportError:
    from pip._vendor.urllib3.response import HTTPResponse

try:
    from pip._vendor.requests.packages.urllib3.util import is_fp_closed
except ImportError:
    from pip._vendor.urllib3.util import is_fp_closed

# Replicate some six behaviour
try:
    text_type = unicode
except NameError:
    text_type = str
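An illustrative sketch (not in the original file) of how downstream modules consume these fallbacks so the rest of the package stays version-agnostic:

from pip._vendor.cachecontrol.compat import pickle, text_type, urljoin

print(text_type is str)                      # True on Python 3
print(urljoin("http://example.com/a", "b"))  # http://example.com/b
blob = pickle.dumps({"k": text_type("v")})   # cPickle on Python 2 if present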
uri = %s" % uri) scheme = scheme.lower() authority = authority.lower() if not path: path = "/" # Could do syntax based normalization of the URI before # computing the digest. See Section 6.2.2 of Std 66. request_uri = query and "?".join([path, query]) or path defrag_uri = scheme + "://" + authority + request_uri return defrag_uri @classmethod def cache_url(cls, uri): return cls._urlnorm(uri) def parse_cache_control(self, headers): known_directives = { # https://tools.ietf.org/html/rfc7234#section-5.2 "max-age": (int, True), "max-stale": (int, False), "min-fresh": (int, True), "no-cache": (None, False), "no-store": (None, False), "no-transform": (None, False), "only-if-cached": (None, False), "must-revalidate": (None, False), "public": (None, False), "private": (None, False), "proxy-revalidate": (None, False), "s-maxage": (int, True), } cc_headers = headers.get("cache-control", headers.get("Cache-Control", "")) retval = {} for cc_directive in cc_headers.split(","): if not cc_directive.strip(): continue parts = cc_directive.split("=", 1) directive = parts[0].strip() try: typ, required = known_directives[directive] except KeyError: logger.debug("Ignoring unknown cache-control directive: %s", directive) continue if not typ or not required: retval[directive] = None if typ: try: retval[directive] = typ(parts[1].strip()) except IndexError: if required: logger.debug( "Missing value for cache-control " "directive: %s", directive, ) except ValueError: logger.debug( "Invalid value for cache-control directive " "%s, must be %s", directive, typ.__name__, ) return retval def cached_request(self, request): """ Return a cached response if it exists in the cache, otherwise return False. """ cache_url = self.cache_url(request.url) logger.debug('Looking up "%s" in the cache', cache_url) cc = self.parse_cache_control(request.headers) # Bail out if the request insists on fresh data if "no-cache" in cc: logger.debug('Request header has "no-cache", cache bypassed') return False if "max-age" in cc and cc["max-age"] == 0: logger.debug('Request header has "max_age" as 0, cache bypassed') return False # Request allows serving from the cache, let's see if we find something cache_data = self.cache.get(cache_url) if cache_data is None: logger.debug("No cache entry available") return False # Check whether it can be deserialized resp = self.serializer.loads(request, cache_data) if not resp: logger.warning("Cache entry deserialization failed, entry ignored") return False # If we have a cached 301, return it immediately. We don't # need to test our response for other headers b/c it is # intrinsically "cacheable" as it is Permanent. # See: # https://tools.ietf.org/html/rfc7231#section-6.4.2 # # Client can try to refresh the value by repeating the request # with cache busting headers as usual (ie no-cache). if resp.status == 301: msg = ( 'Returning cached "301 Moved Permanently" response ' "(ignoring date and etag information)" ) logger.debug(msg) return resp headers = CaseInsensitiveDict(resp.headers) if not headers or "date" not in headers: if "etag" not in headers: # Without date or etag, the cached response can never be used # and should be deleted. 
logger.debug("Purging cached response: no date or etag") self.cache.delete(cache_url) logger.debug("Ignoring cached response: no date") return False now = time.time() date = calendar.timegm(parsedate_tz(headers["date"])) current_age = max(0, now - date) logger.debug("Current age based on date: %i", current_age) # TODO: There is an assumption that the result will be a # urllib3 response object. This may not be best since we # could probably avoid instantiating or constructing the # response until we know we need it. resp_cc = self.parse_cache_control(headers) # determine freshness freshness_lifetime = 0 # Check the max-age pragma in the cache control header if "max-age" in resp_cc: freshness_lifetime = resp_cc["max-age"] logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime) # If there isn't a max-age, check for an expires header elif "expires" in headers: expires = parsedate_tz(headers["expires"]) if expires is not None: expire_time = calendar.timegm(expires) - date freshness_lifetime = max(0, expire_time) logger.debug("Freshness lifetime from expires: %i", freshness_lifetime) # Determine if we are setting freshness limit in the # request. Note, this overrides what was in the response. if "max-age" in cc: freshness_lifetime = cc["max-age"] logger.debug( "Freshness lifetime from request max-age: %i", freshness_lifetime ) if "min-fresh" in cc: min_fresh = cc["min-fresh"] # adjust our current age by our min fresh current_age += min_fresh logger.debug("Adjusted current age from min-fresh: %i", current_age) # Return entry if it is fresh enough if freshness_lifetime > current_age: logger.debug('The response is "fresh", returning cached response') logger.debug("%i > %i", freshness_lifetime, current_age) return resp # we're not fresh. If we don't have an Etag, clear it out if "etag" not in headers: logger.debug('The cached response is "stale" with no etag, purging') self.cache.delete(cache_url) # return the original handler return False def conditional_headers(self, request): cache_url = self.cache_url(request.url) resp = self.serializer.loads(request, self.cache.get(cache_url)) new_headers = {} if resp: headers = CaseInsensitiveDict(resp.headers) if "etag" in headers: new_headers["If-None-Match"] = headers["ETag"] if "last-modified" in headers: new_headers["If-Modified-Since"] = headers["Last-Modified"] return new_headers def cache_response(self, request, response, body=None, status_codes=None): """ Algorithm for caching requests. This assumes a requests Response object. """ # From httplib2: Don't cache 206's since we aren't going to # handle byte range requests cacheable_status_codes = status_codes or self.cacheable_status_codes if response.status not in cacheable_status_codes: logger.debug( "Status code %s not in %s", response.status, cacheable_status_codes ) return response_headers = CaseInsensitiveDict(response.headers) # If we've been given a body, our response has a Content-Length, that # Content-Length is valid then we can check to see if the body we've # been given matches the expected size, and if it doesn't we'll just # skip trying to cache it. 
    def cache_response(self, request, response, body=None, status_codes=None):
        """
        Algorithm for caching requests.

        This assumes a requests Response object.
        """
        # From httplib2: Don't cache 206's since we aren't going to
        #                handle byte range requests
        cacheable_status_codes = status_codes or self.cacheable_status_codes
        if response.status not in cacheable_status_codes:
            logger.debug(
                "Status code %s not in %s", response.status, cacheable_status_codes
            )
            return

        response_headers = CaseInsensitiveDict(response.headers)

        # If we've been given a body, our response has a Content-Length, and
        # that Content-Length is valid, then we can check to see if the body
        # we've been given matches the expected size, and if it doesn't we'll
        # just skip trying to cache it.
        if (
            body is not None
            and "content-length" in response_headers
            and response_headers["content-length"].isdigit()
            and int(response_headers["content-length"]) != len(body)
        ):
            return

        cc_req = self.parse_cache_control(request.headers)
        cc = self.parse_cache_control(response_headers)

        cache_url = self.cache_url(request.url)
        logger.debug('Updating cache with response from "%s"', cache_url)

        # Delete it from the cache if we happen to have it stored there
        no_store = False
        if "no-store" in cc:
            no_store = True
            logger.debug('Response header has "no-store"')
        if "no-store" in cc_req:
            no_store = True
            logger.debug('Request header has "no-store"')
        if no_store and self.cache.get(cache_url):
            logger.debug('Purging existing cache entry to honor "no-store"')
            self.cache.delete(cache_url)
        if no_store:
            return

        # If we've been given an etag, then keep the response
        if self.cache_etags and "etag" in response_headers:
            logger.debug("Caching due to etag")
            self.cache.set(
                cache_url, self.serializer.dumps(request, response, body=body)
            )

        # Add to the cache any 301s. We do this before looking at the
        # Date headers.
        elif response.status == 301:
            logger.debug("Caching permanent redirect")
            self.cache.set(cache_url, self.serializer.dumps(request, response))

        # Add to the cache if the response headers demand it. If there
        # is no date header then we can't do anything about expiring
        # the cache.
        elif "date" in response_headers:
            # cache when there is a max-age > 0
            if "max-age" in cc and cc["max-age"] > 0:
                logger.debug("Caching b/c date exists and max-age > 0")
                self.cache.set(
                    cache_url, self.serializer.dumps(request, response, body=body)
                )

            # If the request can expire, it means we should cache it
            # in the meantime.
            elif "expires" in response_headers:
                if response_headers["expires"]:
                    logger.debug("Caching b/c of expires header")
                    self.cache.set(
                        cache_url, self.serializer.dumps(request, response, body=body)
                    )
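    # Illustrative walk-through (not in the original source) of the
    # branches above: "no-store" on either the request or the response
    # purges any existing entry and skips writing a new one; otherwise a
    # response with an ETag is stored when cache_etags is enabled, a 301
    # is stored unconditionally, and a response with a Date header plus
    # "Cache-Control: max-age=600" (or an Expires header) is stored too.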
excluded_headers = ["content-length"] cached_response.headers.update( dict( (k, v) for k, v in response.headers.items() if k.lower() not in excluded_headers ) ) # we want a 200 b/c we have content via the cache cached_response.status = 200 # update our cache self.cache.set(cache_url, self.serializer.dumps(request, cached_response)) return cached_response ������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/filewrapper.py��������������0000644�0001750�0001750�00000004745�00000000000�032035� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from io import BytesIO class CallbackFileWrapper(object): """ Small wrapper around a fp object which will tee everything read into a buffer, and when that file is closed it will execute a callback with the contents of that buffer. All attributes are proxied to the underlying file object. This class uses members with a double underscore (__) leading prefix so as not to accidentally shadow an attribute. """ def __init__(self, fp, callback): self.__buf = BytesIO() self.__fp = fp self.__callback = callback def __getattr__(self, name): # The vaguaries of garbage collection means that self.__fp is # not always set. By using __getattribute__ and the private # name[0] allows looking up the attribute value and raising an # AttributeError when it doesn't exist. This stop thigns from # infinitely recursing calls to getattr in the case where # self.__fp hasn't been set. # # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers fp = self.__getattribute__("_CallbackFileWrapper__fp") return getattr(fp, name) def __is_fp_closed(self): try: return self.__fp.fp is None except AttributeError: pass try: return self.__fp.closed except AttributeError: pass # We just don't cache it then. # TODO: Add some logging here... 

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/filewrapper.py

from io import BytesIO


class CallbackFileWrapper(object):
    """
    Small wrapper around a fp object which will tee everything read into a
    buffer, and when that file is closed it will execute a callback with
    the contents of that buffer.

    All attributes are proxied to the underlying file object.

    This class uses members with a double underscore (__) leading prefix so as
    not to accidentally shadow an attribute.
    """

    def __init__(self, fp, callback):
        self.__buf = BytesIO()
        self.__fp = fp
        self.__callback = callback

    def __getattr__(self, name):
        # The vagaries of garbage collection mean that self.__fp is
        # not always set. Using __getattribute__ and the private
        # name[0] allows looking up the attribute value and raising an
        # AttributeError when it doesn't exist. This stops things from
        # infinitely recursing calls to getattr in the case where
        # self.__fp hasn't been set.
        #
        # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers
        fp = self.__getattribute__("_CallbackFileWrapper__fp")
        return getattr(fp, name)

    def __is_fp_closed(self):
        try:
            return self.__fp.fp is None
        except AttributeError:
            pass

        try:
            return self.__fp.closed
        except AttributeError:
            pass

        # We just don't cache it then.
        # TODO: Add some logging here...
        return False

    def _close(self):
        if self.__callback:
            self.__callback(self.__buf.getvalue())

        # We assign this to None here, because otherwise we can get into
        # really tricky problems where the CPython interpreter dead locks
        # because the callback is holding a reference to something which
        # has a __del__ method. Setting this to None breaks the cycle
        # and allows the garbage collector to do its thing normally.
        self.__callback = None

    def read(self, amt=None):
        data = self.__fp.read(amt)
        self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data

    def _safe_read(self, amt):
        data = self.__fp._safe_read(amt)
        if amt == 2 and data == b"\r\n":
            # urllib executes this read to toss the CRLF at the end
            # of the chunk.
            return data

        self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data
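A minimal sketch (the `_Stream` stand-in is illustrative): like httplib responses, it exposes an `fp` attribute that becomes None at EOF, which is what `__is_fp_closed()` keys on to fire the callback.

import io

from pip._vendor.cachecontrol.filewrapper import CallbackFileWrapper


class _Stream(object):
    # Stand-in for an http client response: .fp goes to None at EOF.
    def __init__(self, data):
        self.fp = io.BytesIO(data)

    def read(self, amt=None):
        data = self.fp.read(amt) if self.fp else b""
        if not data:
            self.fp = None
        return data


wrapped = CallbackFileWrapper(_Stream(b"body"), lambda buf: print("teed:", buf))
print(wrapped.read())   # b'body' -- also copied into the internal buffer
print(wrapped.read())   # b'' -- stream exhausted, callback fires with b'body'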
""" return {} def apply(self, response): updated_headers = self.update_headers(response) if updated_headers: response.headers.update(updated_headers) warning_header_value = self.warning(response) if warning_header_value is not None: response.headers.update({"Warning": warning_header_value}) return response class OneDayCache(BaseHeuristic): """ Cache the response by providing an expires 1 day in the future. """ def update_headers(self, response): headers = {} if "expires" not in response.headers: date = parsedate(response.headers["date"]) expires = expire_after(timedelta(days=1), date=datetime(*date[:6])) headers["expires"] = datetime_to_header(expires) headers["cache-control"] = "public" return headers class ExpiresAfter(BaseHeuristic): """ Cache **all** requests for a defined time period. """ def __init__(self, **kw): self.delta = timedelta(**kw) def update_headers(self, response): expires = expire_after(self.delta) return {"expires": datetime_to_header(expires), "cache-control": "public"} def warning(self, response): tmpl = "110 - Automatically cached for %s. Response might be stale" return tmpl % self.delta class LastModified(BaseHeuristic): """ If there is no Expires header already, fall back on Last-Modified using the heuristic from http://tools.ietf.org/html/rfc7234#section-4.2.2 to calculate a reasonable value. Firefox also does something like this per https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397 Unlike mozilla we limit this to 24-hr. """ cacheable_by_default_statuses = { 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501 } def update_headers(self, resp): headers = resp.headers if "expires" in headers: return {} if "cache-control" in headers and headers["cache-control"] != "public": return {} if resp.status not in self.cacheable_by_default_statuses: return {} if "date" not in headers or "last-modified" not in headers: return {} date = calendar.timegm(parsedate_tz(headers["date"])) last_modified = parsedate(headers["last-modified"]) if date is None or last_modified is None: return {} now = time.time() current_age = max(0, now - date) delta = date - calendar.timegm(last_modified) freshness_lifetime = max(0, min(delta / 10, 24 * 3600)) if freshness_lifetime <= current_age: return {} expires = date + freshness_lifetime return {"expires": time.strftime(TIME_FMT, time.gmtime(expires))} def warning(self, resp): return None ��������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/serialize.py

import base64
import io
import json
import zlib

from pip._vendor import msgpack
from pip._vendor.requests.structures import CaseInsensitiveDict

from .compat import HTTPResponse, pickle, text_type


def _b64_decode_bytes(b):
    return base64.b64decode(b.encode("ascii"))


def _b64_decode_str(s):
    return _b64_decode_bytes(s).decode("utf8")


class Serializer(object):

    def dumps(self, request, response, body=None):
        response_headers = CaseInsensitiveDict(response.headers)

        if body is None:
            body = response.read(decode_content=False)

            # NOTE: 99% sure this is dead code. I'm only leaving it
            #       here b/c I don't have a test yet to prove
            #       it. Basically, before using
            #       `cachecontrol.filewrapper.CallbackFileWrapper`,
            #       this made an effort to reset the file handle. The
            #       `CallbackFileWrapper` short circuits this code by
            #       setting the body as the content is consumed, the
            #       result being a `body` argument is *always* passed
            #       into cache_response, and in turn,
            #       `Serializer.dump`.
            response._fp = io.BytesIO(body)

        # NOTE: This is all a bit weird, but it's really important that on
        #       Python 2.x these objects are unicode and not str, even when
        #       they contain only ascii. The problem here is that msgpack
        #       understands the difference between unicode and bytes and we
        #       have it set to differentiate between them, however Python 2
        #       doesn't know the difference. Forcing these to unicode will be
        #       enough to have msgpack know the difference.
        data = {
            u"response": {
                u"body": body,
                u"headers": dict(
                    (text_type(k), text_type(v)) for k, v in response.headers.items()
                ),
                u"status": response.status,
                u"version": response.version,
                u"reason": text_type(response.reason),
                u"strict": response.strict,
                u"decode_content": response.decode_content,
            }
        }

        # Construct our vary headers
        data[u"vary"] = {}
        if u"vary" in response_headers:
            varied_headers = response_headers[u"vary"].split(",")
            for header in varied_headers:
                header = text_type(header).strip()
                header_value = request.headers.get(header, None)
                if header_value is not None:
                    header_value = text_type(header_value)
                data[u"vary"][header] = header_value

        return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)])
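    # Illustrative note (not in the original source): a serialized entry is
    # therefore the bytes
    #
    #     b"cc=4,<msgpack payload>"
    #
    # i.e. a version tag, a comma, then the msgpack-encoded dict above;
    # loads() below splits on the first comma to pick a _loads_vN method.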
if ver[:3] != b"cc=": data = ver + data ver = b"cc=0" # Get the version number out of the cc=N ver = ver.split(b"=", 1)[-1].decode("ascii") # Dispatch to the actual load method for the given version try: return getattr(self, "_loads_v{}".format(ver))(request, data) except AttributeError: # This is a version we don't have a loads function for, so we'll # just treat it as a miss and return None return def prepare_response(self, request, cached): """Verify our vary headers match and construct a real urllib3 HTTPResponse object. """ # Special case the '*' Vary value as it means we cannot actually # determine if the cached response is suitable for this request. if "*" in cached.get("vary", {}): return # Ensure that the Vary headers for the cached response match our # request for header, value in cached.get("vary", {}).items(): if request.headers.get(header, None) != value: return body_raw = cached["response"].pop("body") headers = CaseInsensitiveDict(data=cached["response"]["headers"]) if headers.get("transfer-encoding", "") == "chunked": headers.pop("transfer-encoding") cached["response"]["headers"] = headers try: body = io.BytesIO(body_raw) except TypeError: # This can happen if cachecontrol serialized to v1 format (pickle) # using Python 2. A Python 2 str(byte string) will be unpickled as # a Python 3 str (unicode string), which will cause the above to # fail with: # # TypeError: 'str' does not support the buffer interface body = io.BytesIO(body_raw.encode("utf8")) return HTTPResponse(body=body, preload_content=False, **cached["response"]) def _loads_v0(self, request, data): # The original legacy cache data. This doesn't contain enough # information to construct everything we need, so we'll treat this as # a miss. return def _loads_v1(self, request, data): try: cached = pickle.loads(data) except ValueError: return return self.prepare_response(request, cached) def _loads_v2(self, request, data): try: cached = json.loads(zlib.decompress(data).decode("utf8")) except (ValueError, zlib.error): return # We need to decode the items that we've base64 encoded cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"]) cached["response"]["headers"] = dict( (_b64_decode_str(k), _b64_decode_str(v)) for k, v in cached["response"]["headers"].items() ) cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"]) cached["vary"] = dict( (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v) for k, v in cached["vary"].items() ) return self.prepare_response(request, cached) def _loads_v3(self, request, data): # Due to Python 2 encoding issues, it's impossible to know for sure # exactly how to load v3 entries, thus we'll treat these as a miss so # that they get rewritten out as v4 entries. 

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/wrapper.py

from .adapter import CacheControlAdapter
from .cache import DictCache


def CacheControl(
    sess,
    cache=None,
    cache_etags=True,
    serializer=None,
    heuristic=None,
    controller_class=None,
    adapter_class=None,
    cacheable_methods=None,
):

    cache = cache or DictCache()
    adapter_class = adapter_class or CacheControlAdapter
    adapter = adapter_class(
        cache,
        cache_etags=cache_etags,
        serializer=serializer,
        heuristic=heuristic,
        controller_class=controller_class,
        cacheable_methods=cacheable_methods,
    )
    sess.mount("http://", adapter)
    sess.mount("https://", adapter)

    return sess
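A minimal end-to-end sketch (assumes the standalone `requests` package; the URL is illustrative):

import requests

from pip._vendor.cachecontrol import CacheControl
from pip._vendor.cachecontrol.caches.file_cache import FileCache

sess = CacheControl(requests.Session(), cache=FileCache(".web_cache"))
resp = sess.get("http://example.com/")   # first hit goes to the network
resp = sess.get("http://example.com/")   # may now be served from the cache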

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/certifi/__init__.py

from .core import where

__version__ = "2019.06.16"

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/certifi/__main__.py

from pip._vendor.certifi import where

print(where())
""" import os def where(): f = os.path.dirname(__file__) return os.path.join(f, 'cacert.pem') ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000034�00000000000�011452� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������28 mtime=1604735099.9733274 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/���������������������������������0000755�0001750�0001750�00000000000�00000000000�026077� 5����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/__init__.py����������������������0000644�0001750�0001750�00000003027�00000000000�030212� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������######################## BEGIN LICENSE BLOCK ######################## # This library is free 

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/__init__.py

######################## BEGIN LICENSE BLOCK ########################
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################


from .compat import PY2, PY3
from .universaldetector import UniversalDetector
from .version import __version__, VERSION


def detect(byte_str):
    """
    Detect the encoding of the given byte string.

    :param byte_str: The byte sequence to examine.
    :type byte_str:  ``bytes`` or ``bytearray``
    """
    if not isinstance(byte_str, bytearray):
        if not isinstance(byte_str, bytes):
            raise TypeError('Expected object of type bytes or bytearray, got: '
                            '{0}'.format(type(byte_str)))
        else:
            byte_str = bytearray(byte_str)
    detector = UniversalDetector()
    detector.feed(byte_str)
    return detector.close()


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/big5freq.py

######################## BEGIN LICENSE BLOCK ######################## # The Original Code is Mozilla Communicator client code.
# # The Initial Developer of the Original Code is # Netscape Communications Corporation. # Portions created by the Initial Developer are Copyright (C) 1998 # the Initial Developer. All Rights Reserved. # # Contributor(s): # Mark Pilgrim - port to Python # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # Big5 frequency table # by Taiwan's Mandarin Promotion Council # <http://www.edu.tw:81/mandr/> # # 128 --> 0.42261 # 256 --> 0.57851 # 512 --> 0.74851 # 1024 --> 0.89384 # 2048 --> 0.97583 # # Ideal Distribution Ratio = 0.74851/(1-0.74851) =2.98 # Random Distribution Ration = 512/(5401-512)=0.105 # # Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR BIG5_TYPICAL_DISTRIBUTION_RATIO = 0.75 #Char to FreqOrder table BIG5_TABLE_SIZE = 5376 BIG5_CHAR_TO_FREQ_ORDER = ( 1,1801,1506, 255,1431, 198, 9, 82, 6,5008, 177, 202,3681,1256,2821, 110, # 16 3814, 33,3274, 261, 76, 44,2114, 16,2946,2187,1176, 659,3971, 26,3451,2653, # 32 1198,3972,3350,4202, 410,2215, 302, 590, 361,1964, 8, 204, 58,4510,5009,1932, # 48 63,5010,5011, 317,1614, 75, 222, 159,4203,2417,1480,5012,3555,3091, 224,2822, # 64 3682, 3, 10,3973,1471, 29,2787,1135,2866,1940, 873, 130,3275,1123, 312,5013, # 80 4511,2052, 507, 252, 682,5014, 142,1915, 124, 206,2947, 34,3556,3204, 64, 604, # 96 5015,2501,1977,1978, 155,1991, 645, 641,1606,5016,3452, 337, 72, 406,5017, 80, # 112 630, 238,3205,1509, 263, 939,1092,2654, 756,1440,1094,3453, 449, 69,2987, 591, # 128 179,2096, 471, 115,2035,1844, 60, 50,2988, 134, 806,1869, 734,2036,3454, 180, # 144 995,1607, 156, 537,2907, 688,5018, 319,1305, 779,2145, 514,2379, 298,4512, 359, # 160 2502, 90,2716,1338, 663, 11, 906,1099,2553, 20,2441, 182, 532,1716,5019, 732, # 176 1376,4204,1311,1420,3206, 25,2317,1056, 113, 399, 382,1950, 242,3455,2474, 529, # 192 3276, 475,1447,3683,5020, 117, 21, 656, 810,1297,2300,2334,3557,5021, 126,4205, # 208 706, 456, 150, 613,4513, 71,1118,2037,4206, 145,3092, 85, 835, 486,2115,1246, # 224 1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,5022,2128,2359, 347,3815, 221, # 240 3558,3135,5023,1956,1153,4207, 83, 296,1199,3093, 192, 624, 93,5024, 822,1898, # 256 2823,3136, 795,2065, 991,1554,1542,1592, 27, 43,2867, 859, 139,1456, 860,4514, # 272 437, 712,3974, 164,2397,3137, 695, 211,3037,2097, 195,3975,1608,3559,3560,3684, # 288 3976, 234, 811,2989,2098,3977,2233,1441,3561,1615,2380, 668,2077,1638, 305, 228, # 304 1664,4515, 467, 415,5025, 262,2099,1593, 239, 108, 300, 200,1033, 512,1247,2078, # 320 5026,5027,2176,3207,3685,2682, 593, 845,1062,3277, 88,1723,2038,3978,1951, 212, # 336 266, 152, 149, 468,1899,4208,4516, 77, 187,5028,3038, 37, 5,2990,5029,3979, # 352 5030,5031, 39,2524,4517,2908,3208,2079, 55, 148, 74,4518, 545, 483,1474,1029, # 368 1665, 217,1870,1531,3138,1104,2655,4209, 24, 172,3562, 
900,3980,3563,3564,4519, # 384 32,1408,2824,1312, 329, 487,2360,2251,2717, 784,2683, 4,3039,3351,1427,1789, # 400 188, 109, 499,5032,3686,1717,1790, 888,1217,3040,4520,5033,3565,5034,3352,1520, # 416 3687,3981, 196,1034, 775,5035,5036, 929,1816, 249, 439, 38,5037,1063,5038, 794, # 432 3982,1435,2301, 46, 178,3278,2066,5039,2381,5040, 214,1709,4521, 804, 35, 707, # 448 324,3688,1601,2554, 140, 459,4210,5041,5042,1365, 839, 272, 978,2262,2580,3456, # 464 2129,1363,3689,1423, 697, 100,3094, 48, 70,1231, 495,3139,2196,5043,1294,5044, # 480 2080, 462, 586,1042,3279, 853, 256, 988, 185,2382,3457,1698, 434,1084,5045,3458, # 496 314,2625,2788,4522,2335,2336, 569,2285, 637,1817,2525, 757,1162,1879,1616,3459, # 512 287,1577,2116, 768,4523,1671,2868,3566,2526,1321,3816, 909,2418,5046,4211, 933, # 528 3817,4212,2053,2361,1222,4524, 765,2419,1322, 786,4525,5047,1920,1462,1677,2909, # 544 1699,5048,4526,1424,2442,3140,3690,2600,3353,1775,1941,3460,3983,4213, 309,1369, # 560 1130,2825, 364,2234,1653,1299,3984,3567,3985,3986,2656, 525,1085,3041, 902,2001, # 576 1475, 964,4527, 421,1845,1415,1057,2286, 940,1364,3141, 376,4528,4529,1381, 7, # 592 2527, 983,2383, 336,1710,2684,1846, 321,3461, 559,1131,3042,2752,1809,1132,1313, # 608 265,1481,1858,5049, 352,1203,2826,3280, 167,1089, 420,2827, 776, 792,1724,3568, # 624 4214,2443,3281,5050,4215,5051, 446, 229, 333,2753, 901,3818,1200,1557,4530,2657, # 640 1921, 395,2754,2685,3819,4216,1836, 125, 916,3209,2626,4531,5052,5053,3820,5054, # 656 5055,5056,4532,3142,3691,1133,2555,1757,3462,1510,2318,1409,3569,5057,2146, 438, # 672 2601,2910,2384,3354,1068, 958,3043, 461, 311,2869,2686,4217,1916,3210,4218,1979, # 688 383, 750,2755,2627,4219, 274, 539, 385,1278,1442,5058,1154,1965, 384, 561, 210, # 704 98,1295,2556,3570,5059,1711,2420,1482,3463,3987,2911,1257, 129,5060,3821, 642, # 720 523,2789,2790,2658,5061, 141,2235,1333, 68, 176, 441, 876, 907,4220, 603,2602, # 736 710, 171,3464, 404, 549, 18,3143,2398,1410,3692,1666,5062,3571,4533,2912,4534, # 752 5063,2991, 368,5064, 146, 366, 99, 871,3693,1543, 748, 807,1586,1185, 22,2263, # 768 379,3822,3211,5065,3212, 505,1942,2628,1992,1382,2319,5066, 380,2362, 218, 702, # 784 1818,1248,3465,3044,3572,3355,3282,5067,2992,3694, 930,3283,3823,5068, 59,5069, # 800 585, 601,4221, 497,3466,1112,1314,4535,1802,5070,1223,1472,2177,5071, 749,1837, # 816 690,1900,3824,1773,3988,1476, 429,1043,1791,2236,2117, 917,4222, 447,1086,1629, # 832 5072, 556,5073,5074,2021,1654, 844,1090, 105, 550, 966,1758,2828,1008,1783, 686, # 848 1095,5075,2287, 793,1602,5076,3573,2603,4536,4223,2948,2302,4537,3825, 980,2503, # 864 544, 353, 527,4538, 908,2687,2913,5077, 381,2629,1943,1348,5078,1341,1252, 560, # 880 3095,5079,3467,2870,5080,2054, 973, 886,2081, 143,4539,5081,5082, 157,3989, 496, # 896 4224, 57, 840, 540,2039,4540,4541,3468,2118,1445, 970,2264,1748,1966,2082,4225, # 912 3144,1234,1776,3284,2829,3695, 773,1206,2130,1066,2040,1326,3990,1738,1725,4226, # 928 279,3145, 51,1544,2604, 423,1578,2131,2067, 173,4542,1880,5083,5084,1583, 264, # 944 610,3696,4543,2444, 280, 154,5085,5086,5087,1739, 338,1282,3096, 693,2871,1411, # 960 1074,3826,2445,5088,4544,5089,5090,1240, 952,2399,5091,2914,1538,2688, 685,1483, # 976 4227,2475,1436, 953,4228,2055,4545, 671,2400, 79,4229,2446,3285, 608, 567,2689, # 992 3469,4230,4231,1691, 393,1261,1792,2401,5092,4546,5093,5094,5095,5096,1383,1672, # 1008 3827,3213,1464, 522,1119, 661,1150, 216, 675,4547,3991,1432,3574, 609,4548,2690, # 1024 2402,5097,5098,5099,4232,3045, 0,5100,2476, 315, 231,2447, 
301,3356,4549,2385, # 1040 5101, 233,4233,3697,1819,4550,4551,5102, 96,1777,1315,2083,5103, 257,5104,1810, # 1056 3698,2718,1139,1820,4234,2022,1124,2164,2791,1778,2659,5105,3097, 363,1655,3214, # 1072 5106,2993,5107,5108,5109,3992,1567,3993, 718, 103,3215, 849,1443, 341,3357,2949, # 1088 1484,5110,1712, 127, 67, 339,4235,2403, 679,1412, 821,5111,5112, 834, 738, 351, # 1104 2994,2147, 846, 235,1497,1881, 418,1993,3828,2719, 186,1100,2148,2756,3575,1545, # 1120 1355,2950,2872,1377, 583,3994,4236,2581,2995,5113,1298,3699,1078,2557,3700,2363, # 1136 78,3829,3830, 267,1289,2100,2002,1594,4237, 348, 369,1274,2197,2178,1838,4552, # 1152 1821,2830,3701,2757,2288,2003,4553,2951,2758, 144,3358, 882,4554,3995,2759,3470, # 1168 4555,2915,5114,4238,1726, 320,5115,3996,3046, 788,2996,5116,2831,1774,1327,2873, # 1184 3997,2832,5117,1306,4556,2004,1700,3831,3576,2364,2660, 787,2023, 506, 824,3702, # 1200 534, 323,4557,1044,3359,2024,1901, 946,3471,5118,1779,1500,1678,5119,1882,4558, # 1216 165, 243,4559,3703,2528, 123, 683,4239, 764,4560, 36,3998,1793, 589,2916, 816, # 1232 626,1667,3047,2237,1639,1555,1622,3832,3999,5120,4000,2874,1370,1228,1933, 891, # 1248 2084,2917, 304,4240,5121, 292,2997,2720,3577, 691,2101,4241,1115,4561, 118, 662, # 1264 5122, 611,1156, 854,2386,1316,2875, 2, 386, 515,2918,5123,5124,3286, 868,2238, # 1280 1486, 855,2661, 785,2216,3048,5125,1040,3216,3578,5126,3146, 448,5127,1525,5128, # 1296 2165,4562,5129,3833,5130,4242,2833,3579,3147, 503, 818,4001,3148,1568, 814, 676, # 1312 1444, 306,1749,5131,3834,1416,1030, 197,1428, 805,2834,1501,4563,5132,5133,5134, # 1328 1994,5135,4564,5136,5137,2198, 13,2792,3704,2998,3149,1229,1917,5138,3835,2132, # 1344 5139,4243,4565,2404,3580,5140,2217,1511,1727,1120,5141,5142, 646,3836,2448, 307, # 1360 5143,5144,1595,3217,5145,5146,5147,3705,1113,1356,4002,1465,2529,2530,5148, 519, # 1376 5149, 128,2133, 92,2289,1980,5150,4003,1512, 342,3150,2199,5151,2793,2218,1981, # 1392 3360,4244, 290,1656,1317, 789, 827,2365,5152,3837,4566, 562, 581,4004,5153, 401, # 1408 4567,2252, 94,4568,5154,1399,2794,5155,1463,2025,4569,3218,1944,5156, 828,1105, # 1424 4245,1262,1394,5157,4246, 605,4570,5158,1784,2876,5159,2835, 819,2102, 578,2200, # 1440 2952,5160,1502, 436,3287,4247,3288,2836,4005,2919,3472,3473,5161,2721,2320,5162, # 1456 5163,2337,2068, 23,4571, 193, 826,3838,2103, 699,1630,4248,3098, 390,1794,1064, # 1472 3581,5164,1579,3099,3100,1400,5165,4249,1839,1640,2877,5166,4572,4573, 137,4250, # 1488 598,3101,1967, 780, 104, 974,2953,5167, 278, 899, 253, 402, 572, 504, 493,1339, # 1504 5168,4006,1275,4574,2582,2558,5169,3706,3049,3102,2253, 565,1334,2722, 863, 41, # 1520 5170,5171,4575,5172,1657,2338, 19, 463,2760,4251, 606,5173,2999,3289,1087,2085, # 1536 1323,2662,3000,5174,1631,1623,1750,4252,2691,5175,2878, 791,2723,2663,2339, 232, # 1552 2421,5176,3001,1498,5177,2664,2630, 755,1366,3707,3290,3151,2026,1609, 119,1918, # 1568 3474, 862,1026,4253,5178,4007,3839,4576,4008,4577,2265,1952,2477,5179,1125, 817, # 1584 4254,4255,4009,1513,1766,2041,1487,4256,3050,3291,2837,3840,3152,5180,5181,1507, # 1600 5182,2692, 733, 40,1632,1106,2879, 345,4257, 841,2531, 230,4578,3002,1847,3292, # 1616 3475,5183,1263, 986,3476,5184, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562, # 1632 4010,4011,2954, 967,2761,2665,1349, 592,2134,1692,3361,3003,1995,4258,1679,4012, # 1648 1902,2188,5185, 739,3708,2724,1296,1290,5186,4259,2201,2202,1922,1563,2605,2559, # 1664 1871,2762,3004,5187, 435,5188, 343,1108, 596, 17,1751,4579,2239,3477,3709,5189, # 1680 4580, 294,3582,2955,1693, 
477, 979, 281,2042,3583, 643,2043,3710,2631,2795,2266, # 1696 1031,2340,2135,2303,3584,4581, 367,1249,2560,5190,3585,5191,4582,1283,3362,2005, # 1712 240,1762,3363,4583,4584, 836,1069,3153, 474,5192,2149,2532, 268,3586,5193,3219, # 1728 1521,1284,5194,1658,1546,4260,5195,3587,3588,5196,4261,3364,2693,1685,4262, 961, # 1744 1673,2632, 190,2006,2203,3841,4585,4586,5197, 570,2504,3711,1490,5198,4587,2633, # 1760 3293,1957,4588, 584,1514, 396,1045,1945,5199,4589,1968,2449,5200,5201,4590,4013, # 1776 619,5202,3154,3294, 215,2007,2796,2561,3220,4591,3221,4592, 763,4263,3842,4593, # 1792 5203,5204,1958,1767,2956,3365,3712,1174, 452,1477,4594,3366,3155,5205,2838,1253, # 1808 2387,2189,1091,2290,4264, 492,5206, 638,1169,1825,2136,1752,4014, 648, 926,1021, # 1824 1324,4595, 520,4596, 997, 847,1007, 892,4597,3843,2267,1872,3713,2405,1785,4598, # 1840 1953,2957,3103,3222,1728,4265,2044,3714,4599,2008,1701,3156,1551, 30,2268,4266, # 1856 5207,2027,4600,3589,5208, 501,5209,4267, 594,3478,2166,1822,3590,3479,3591,3223, # 1872 829,2839,4268,5210,1680,3157,1225,4269,5211,3295,4601,4270,3158,2341,5212,4602, # 1888 4271,5213,4015,4016,5214,1848,2388,2606,3367,5215,4603, 374,4017, 652,4272,4273, # 1904 375,1140, 798,5216,5217,5218,2366,4604,2269, 546,1659, 138,3051,2450,4605,5219, # 1920 2254, 612,1849, 910, 796,3844,1740,1371, 825,3845,3846,5220,2920,2562,5221, 692, # 1936 444,3052,2634, 801,4606,4274,5222,1491, 244,1053,3053,4275,4276, 340,5223,4018, # 1952 1041,3005, 293,1168, 87,1357,5224,1539, 959,5225,2240, 721, 694,4277,3847, 219, # 1968 1478, 644,1417,3368,2666,1413,1401,1335,1389,4019,5226,5227,3006,2367,3159,1826, # 1984 730,1515, 184,2840, 66,4607,5228,1660,2958, 246,3369, 378,1457, 226,3480, 975, # 2000 4020,2959,1264,3592, 674, 696,5229, 163,5230,1141,2422,2167, 713,3593,3370,4608, # 2016 4021,5231,5232,1186, 15,5233,1079,1070,5234,1522,3224,3594, 276,1050,2725, 758, # 2032 1126, 653,2960,3296,5235,2342, 889,3595,4022,3104,3007, 903,1250,4609,4023,3481, # 2048 3596,1342,1681,1718, 766,3297, 286, 89,2961,3715,5236,1713,5237,2607,3371,3008, # 2064 5238,2962,2219,3225,2880,5239,4610,2505,2533, 181, 387,1075,4024, 731,2190,3372, # 2080 5240,3298, 310, 313,3482,2304, 770,4278, 54,3054, 189,4611,3105,3848,4025,5241, # 2096 1230,1617,1850, 355,3597,4279,4612,3373, 111,4280,3716,1350,3160,3483,3055,4281, # 2112 2150,3299,3598,5242,2797,4026,4027,3009, 722,2009,5243,1071, 247,1207,2343,2478, # 2128 1378,4613,2010, 864,1437,1214,4614, 373,3849,1142,2220, 667,4615, 442,2763,2563, # 2144 3850,4028,1969,4282,3300,1840, 837, 170,1107, 934,1336,1883,5244,5245,2119,4283, # 2160 2841, 743,1569,5246,4616,4284, 582,2389,1418,3484,5247,1803,5248, 357,1395,1729, # 2176 3717,3301,2423,1564,2241,5249,3106,3851,1633,4617,1114,2086,4285,1532,5250, 482, # 2192 2451,4618,5251,5252,1492, 833,1466,5253,2726,3599,1641,2842,5254,1526,1272,3718, # 2208 4286,1686,1795, 416,2564,1903,1954,1804,5255,3852,2798,3853,1159,2321,5256,2881, # 2224 4619,1610,1584,3056,2424,2764, 443,3302,1163,3161,5257,5258,4029,5259,4287,2506, # 2240 3057,4620,4030,3162,2104,1647,3600,2011,1873,4288,5260,4289, 431,3485,5261, 250, # 2256 97, 81,4290,5262,1648,1851,1558, 160, 848,5263, 866, 740,1694,5264,2204,2843, # 2272 3226,4291,4621,3719,1687, 950,2479, 426, 469,3227,3720,3721,4031,5265,5266,1188, # 2288 424,1996, 861,3601,4292,3854,2205,2694, 168,1235,3602,4293,5267,2087,1674,4622, # 2304 3374,3303, 220,2565,1009,5268,3855, 670,3010, 332,1208, 717,5269,5270,3603,2452, # 2320 4032,3375,5271, 
513,5272,1209,2882,3376,3163,4623,1080,5273,5274,5275,5276,2534, # 2336 3722,3604, 815,1587,4033,4034,5277,3605,3486,3856,1254,4624,1328,3058,1390,4035, # 2352 1741,4036,3857,4037,5278, 236,3858,2453,3304,5279,5280,3723,3859,1273,3860,4625, # 2368 5281, 308,5282,4626, 245,4627,1852,2480,1307,2583, 430, 715,2137,2454,5283, 270, # 2384 199,2883,4038,5284,3606,2727,1753, 761,1754, 725,1661,1841,4628,3487,3724,5285, # 2400 5286, 587, 14,3305, 227,2608, 326, 480,2270, 943,2765,3607, 291, 650,1884,5287, # 2416 1702,1226, 102,1547, 62,3488, 904,4629,3489,1164,4294,5288,5289,1224,1548,2766, # 2432 391, 498,1493,5290,1386,1419,5291,2056,1177,4630, 813, 880,1081,2368, 566,1145, # 2448 4631,2291,1001,1035,2566,2609,2242, 394,1286,5292,5293,2069,5294, 86,1494,1730, # 2464 4039, 491,1588, 745, 897,2963, 843,3377,4040,2767,2884,3306,1768, 998,2221,2070, # 2480 397,1827,1195,1970,3725,3011,3378, 284,5295,3861,2507,2138,2120,1904,5296,4041, # 2496 2151,4042,4295,1036,3490,1905, 114,2567,4296, 209,1527,5297,5298,2964,2844,2635, # 2512 2390,2728,3164, 812,2568,5299,3307,5300,1559, 737,1885,3726,1210, 885, 28,2695, # 2528 3608,3862,5301,4297,1004,1780,4632,5302, 346,1982,2222,2696,4633,3863,1742, 797, # 2544 1642,4043,1934,1072,1384,2152, 896,4044,3308,3727,3228,2885,3609,5303,2569,1959, # 2560 4634,2455,1786,5304,5305,5306,4045,4298,1005,1308,3728,4299,2729,4635,4636,1528, # 2576 2610, 161,1178,4300,1983, 987,4637,1101,4301, 631,4046,1157,3229,2425,1343,1241, # 2592 1016,2243,2570, 372, 877,2344,2508,1160, 555,1935, 911,4047,5307, 466,1170, 169, # 2608 1051,2921,2697,3729,2481,3012,1182,2012,2571,1251,2636,5308, 992,2345,3491,1540, # 2624 2730,1201,2071,2406,1997,2482,5309,4638, 528,1923,2191,1503,1874,1570,2369,3379, # 2640 3309,5310, 557,1073,5311,1828,3492,2088,2271,3165,3059,3107, 767,3108,2799,4639, # 2656 1006,4302,4640,2346,1267,2179,3730,3230, 778,4048,3231,2731,1597,2667,5312,4641, # 2672 5313,3493,5314,5315,5316,3310,2698,1433,3311, 131, 95,1504,4049, 723,4303,3166, # 2688 1842,3610,2768,2192,4050,2028,2105,3731,5317,3013,4051,1218,5318,3380,3232,4052, # 2704 4304,2584, 248,1634,3864, 912,5319,2845,3732,3060,3865, 654, 53,5320,3014,5321, # 2720 1688,4642, 777,3494,1032,4053,1425,5322, 191, 820,2121,2846, 971,4643, 931,3233, # 2736 135, 664, 783,3866,1998, 772,2922,1936,4054,3867,4644,2923,3234, 282,2732, 640, # 2752 1372,3495,1127, 922, 325,3381,5323,5324, 711,2045,5325,5326,4055,2223,2800,1937, # 2768 4056,3382,2224,2255,3868,2305,5327,4645,3869,1258,3312,4057,3235,2139,2965,4058, # 2784 4059,5328,2225, 258,3236,4646, 101,1227,5329,3313,1755,5330,1391,3314,5331,2924, # 2800 2057, 893,5332,5333,5334,1402,4305,2347,5335,5336,3237,3611,5337,5338, 878,1325, # 2816 1781,2801,4647, 259,1385,2585, 744,1183,2272,4648,5339,4060,2509,5340, 684,1024, # 2832 4306,5341, 472,3612,3496,1165,3315,4061,4062, 322,2153, 881, 455,1695,1152,1340, # 2848 660, 554,2154,4649,1058,4650,4307, 830,1065,3383,4063,4651,1924,5342,1703,1919, # 2864 5343, 932,2273, 122,5344,4652, 947, 677,5345,3870,2637, 297,1906,1925,2274,4653, # 2880 2322,3316,5346,5347,4308,5348,4309, 84,4310, 112, 989,5349, 547,1059,4064, 701, # 2896 3613,1019,5350,4311,5351,3497, 942, 639, 457,2306,2456, 993,2966, 407, 851, 494, # 2912 4654,3384, 927,5352,1237,5353,2426,3385, 573,4312, 680, 921,2925,1279,1875, 285, # 2928 790,1448,1984, 719,2168,5354,5355,4655,4065,4066,1649,5356,1541, 563,5357,1077, # 2944 5358,3386,3061,3498, 511,3015,4067,4068,3733,4069,1268,2572,3387,3238,4656,4657, # 2960 5359, 
535,1048,1276,1189,2926,2029,3167,1438,1373,2847,2967,1134,2013,5360,4313, # 2976 1238,2586,3109,1259,5361, 700,5362,2968,3168,3734,4314,5363,4315,1146,1876,1907, # 2992 4658,2611,4070, 781,2427, 132,1589, 203, 147, 273,2802,2407, 898,1787,2155,4071, # 3008 4072,5364,3871,2803,5365,5366,4659,4660,5367,3239,5368,1635,3872, 965,5369,1805, # 3024 2699,1516,3614,1121,1082,1329,3317,4073,1449,3873, 65,1128,2848,2927,2769,1590, # 3040 3874,5370,5371, 12,2668, 45, 976,2587,3169,4661, 517,2535,1013,1037,3240,5372, # 3056 3875,2849,5373,3876,5374,3499,5375,2612, 614,1999,2323,3877,3110,2733,2638,5376, # 3072 2588,4316, 599,1269,5377,1811,3735,5378,2700,3111, 759,1060, 489,1806,3388,3318, # 3088 1358,5379,5380,2391,1387,1215,2639,2256, 490,5381,5382,4317,1759,2392,2348,5383, # 3104 4662,3878,1908,4074,2640,1807,3241,4663,3500,3319,2770,2349, 874,5384,5385,3501, # 3120 3736,1859, 91,2928,3737,3062,3879,4664,5386,3170,4075,2669,5387,3502,1202,1403, # 3136 3880,2969,2536,1517,2510,4665,3503,2511,5388,4666,5389,2701,1886,1495,1731,4076, # 3152 2370,4667,5390,2030,5391,5392,4077,2702,1216, 237,2589,4318,2324,4078,3881,4668, # 3168 4669,2703,3615,3504, 445,4670,5393,5394,5395,5396,2771, 61,4079,3738,1823,4080, # 3184 5397, 687,2046, 935, 925, 405,2670, 703,1096,1860,2734,4671,4081,1877,1367,2704, # 3200 3389, 918,2106,1782,2483, 334,3320,1611,1093,4672, 564,3171,3505,3739,3390, 945, # 3216 2641,2058,4673,5398,1926, 872,4319,5399,3506,2705,3112, 349,4320,3740,4082,4674, # 3232 3882,4321,3741,2156,4083,4675,4676,4322,4677,2408,2047, 782,4084, 400, 251,4323, # 3248 1624,5400,5401, 277,3742, 299,1265, 476,1191,3883,2122,4324,4325,1109, 205,5402, # 3264 2590,1000,2157,3616,1861,5403,5404,5405,4678,5406,4679,2573, 107,2484,2158,4085, # 3280 3507,3172,5407,1533, 541,1301, 158, 753,4326,2886,3617,5408,1696, 370,1088,4327, # 3296 4680,3618, 579, 327, 440, 162,2244, 269,1938,1374,3508, 968,3063, 56,1396,3113, # 3312 2107,3321,3391,5409,1927,2159,4681,3016,5410,3619,5411,5412,3743,4682,2485,5413, # 3328 2804,5414,1650,4683,5415,2613,5416,5417,4086,2671,3392,1149,3393,4087,3884,4088, # 3344 5418,1076, 49,5419, 951,3242,3322,3323, 450,2850, 920,5420,1812,2805,2371,4328, # 3360 1909,1138,2372,3885,3509,5421,3243,4684,1910,1147,1518,2428,4685,3886,5422,4686, # 3376 2393,2614, 260,1796,3244,5423,5424,3887,3324, 708,5425,3620,1704,5426,3621,1351, # 3392 1618,3394,3017,1887, 944,4329,3395,4330,3064,3396,4331,5427,3744, 422, 413,1714, # 3408 3325, 500,2059,2350,4332,2486,5428,1344,1911, 954,5429,1668,5430,5431,4089,2409, # 3424 4333,3622,3888,4334,5432,2307,1318,2512,3114, 133,3115,2887,4687, 629, 31,2851, # 3440 2706,3889,4688, 850, 949,4689,4090,2970,1732,2089,4335,1496,1853,5433,4091, 620, # 3456 3245, 981,1242,3745,3397,1619,3746,1643,3326,2140,2457,1971,1719,3510,2169,5434, # 3472 3246,5435,5436,3398,1829,5437,1277,4690,1565,2048,5438,1636,3623,3116,5439, 869, # 3488 2852, 655,3890,3891,3117,4092,3018,3892,1310,3624,4691,5440,5441,5442,1733, 558, # 3504 4692,3747, 335,1549,3065,1756,4336,3748,1946,3511,1830,1291,1192, 470,2735,2108, # 3520 2806, 913,1054,4093,5443,1027,5444,3066,4094,4693, 982,2672,3399,3173,3512,3247, # 3536 3248,1947,2807,5445, 571,4694,5446,1831,5447,3625,2591,1523,2429,5448,2090, 984, # 3552 4695,3749,1960,5449,3750, 852, 923,2808,3513,3751, 969,1519, 999,2049,2325,1705, # 3568 5450,3118, 615,1662, 151, 597,4095,2410,2326,1049, 275,4696,3752,4337, 568,3753, # 3584 3626,2487,4338,3754,5451,2430,2275, 409,3249,5452,1566,2888,3514,1002, 769,2853, # 3600 194,2091,3174,3755,2226,3327,4339, 
628,1505,5453,5454,1763,2180,3019,4096, 521, # 3616 1161,2592,1788,2206,2411,4697,4097,1625,4340,4341, 412, 42,3119, 464,5455,2642, # 3632 4698,3400,1760,1571,2889,3515,2537,1219,2207,3893,2643,2141,2373,4699,4700,3328, # 3648 1651,3401,3627,5456,5457,3628,2488,3516,5458,3756,5459,5460,2276,2092, 460,5461, # 3664 4701,5462,3020, 962, 588,3629, 289,3250,2644,1116, 52,5463,3067,1797,5464,5465, # 3680 5466,1467,5467,1598,1143,3757,4342,1985,1734,1067,4702,1280,3402, 465,4703,1572, # 3696 510,5468,1928,2245,1813,1644,3630,5469,4704,3758,5470,5471,2673,1573,1534,5472, # 3712 5473, 536,1808,1761,3517,3894,3175,2645,5474,5475,5476,4705,3518,2929,1912,2809, # 3728 5477,3329,1122, 377,3251,5478, 360,5479,5480,4343,1529, 551,5481,2060,3759,1769, # 3744 2431,5482,2930,4344,3330,3120,2327,2109,2031,4706,1404, 136,1468,1479, 672,1171, # 3760 3252,2308, 271,3176,5483,2772,5484,2050, 678,2736, 865,1948,4707,5485,2014,4098, # 3776 2971,5486,2737,2227,1397,3068,3760,4708,4709,1735,2931,3403,3631,5487,3895, 509, # 3792 2854,2458,2890,3896,5488,5489,3177,3178,4710,4345,2538,4711,2309,1166,1010, 552, # 3808 681,1888,5490,5491,2972,2973,4099,1287,1596,1862,3179, 358, 453, 736, 175, 478, # 3824 1117, 905,1167,1097,5492,1854,1530,5493,1706,5494,2181,3519,2292,3761,3520,3632, # 3840 4346,2093,4347,5495,3404,1193,2489,4348,1458,2193,2208,1863,1889,1421,3331,2932, # 3856 3069,2182,3521, 595,2123,5496,4100,5497,5498,4349,1707,2646, 223,3762,1359, 751, # 3872 3121, 183,3522,5499,2810,3021, 419,2374, 633, 704,3897,2394, 241,5500,5501,5502, # 3888 838,3022,3763,2277,2773,2459,3898,1939,2051,4101,1309,3122,2246,1181,5503,1136, # 3904 2209,3899,2375,1446,4350,2310,4712,5504,5505,4351,1055,2615, 484,3764,5506,4102, # 3920 625,4352,2278,3405,1499,4353,4103,5507,4104,4354,3253,2279,2280,3523,5508,5509, # 3936 2774, 808,2616,3765,3406,4105,4355,3123,2539, 526,3407,3900,4356, 955,5510,1620, # 3952 4357,2647,2432,5511,1429,3766,1669,1832, 994, 928,5512,3633,1260,5513,5514,5515, # 3968 1949,2293, 741,2933,1626,4358,2738,2460, 867,1184, 362,3408,1392,5516,5517,4106, # 3984 4359,1770,1736,3254,2934,4713,4714,1929,2707,1459,1158,5518,3070,3409,2891,1292, # 4000 1930,2513,2855,3767,1986,1187,2072,2015,2617,4360,5519,2574,2514,2170,3768,2490, # 4016 3332,5520,3769,4715,5521,5522, 666,1003,3023,1022,3634,4361,5523,4716,1814,2257, # 4032 574,3901,1603, 295,1535, 705,3902,4362, 283, 858, 417,5524,5525,3255,4717,4718, # 4048 3071,1220,1890,1046,2281,2461,4107,1393,1599, 689,2575, 388,4363,5526,2491, 802, # 4064 5527,2811,3903,2061,1405,2258,5528,4719,3904,2110,1052,1345,3256,1585,5529, 809, # 4080 5530,5531,5532, 575,2739,3524, 956,1552,1469,1144,2328,5533,2329,1560,2462,3635, # 4096 3257,4108, 616,2210,4364,3180,2183,2294,5534,1833,5535,3525,4720,5536,1319,3770, # 4112 3771,1211,3636,1023,3258,1293,2812,5537,5538,5539,3905, 607,2311,3906, 762,2892, # 4128 1439,4365,1360,4721,1485,3072,5540,4722,1038,4366,1450,2062,2648,4367,1379,4723, # 4144 2593,5541,5542,4368,1352,1414,2330,2935,1172,5543,5544,3907,3908,4724,1798,1451, # 4160 5545,5546,5547,5548,2936,4109,4110,2492,2351, 411,4111,4112,3637,3333,3124,4725, # 4176 1561,2674,1452,4113,1375,5549,5550, 47,2974, 316,5551,1406,1591,2937,3181,5552, # 4192 1025,2142,3125,3182, 354,2740, 884,2228,4369,2412, 508,3772, 726,3638, 996,2433, # 4208 3639, 729,5553, 392,2194,1453,4114,4726,3773,5554,5555,2463,3640,2618,1675,2813, # 4224 919,2352,2975,2353,1270,4727,4115, 73,5556,5557, 647,5558,3259,2856,2259,1550, # 4240 1346,3024,5559,1332, 883,3526,5560,5561,5562,5563,3334,2775,5564,1212, 831,1347, 
# 4256 4370,4728,2331,3909,1864,3073, 720,3910,4729,4730,3911,5565,4371,5566,5567,4731, # 4272 5568,5569,1799,4732,3774,2619,4733,3641,1645,2376,4734,5570,2938, 669,2211,2675, # 4288 2434,5571,2893,5572,5573,1028,3260,5574,4372,2413,5575,2260,1353,5576,5577,4735, # 4304 3183, 518,5578,4116,5579,4373,1961,5580,2143,4374,5581,5582,3025,2354,2355,3912, # 4320 516,1834,1454,4117,2708,4375,4736,2229,2620,1972,1129,3642,5583,2776,5584,2976, # 4336 1422, 577,1470,3026,1524,3410,5585,5586, 432,4376,3074,3527,5587,2594,1455,2515, # 4352 2230,1973,1175,5588,1020,2741,4118,3528,4737,5589,2742,5590,1743,1361,3075,3529, # 4368 2649,4119,4377,4738,2295, 895, 924,4378,2171, 331,2247,3076, 166,1627,3077,1098, # 4384 5591,1232,2894,2231,3411,4739, 657, 403,1196,2377, 542,3775,3412,1600,4379,3530, # 4400 5592,4740,2777,3261, 576, 530,1362,4741,4742,2540,2676,3776,4120,5593, 842,3913, # 4416 5594,2814,2032,1014,4121, 213,2709,3413, 665, 621,4380,5595,3777,2939,2435,5596, # 4432 2436,3335,3643,3414,4743,4381,2541,4382,4744,3644,1682,4383,3531,1380,5597, 724, # 4448 2282, 600,1670,5598,1337,1233,4745,3126,2248,5599,1621,4746,5600, 651,4384,5601, # 4464 1612,4385,2621,5602,2857,5603,2743,2312,3078,5604, 716,2464,3079, 174,1255,2710, # 4480 4122,3645, 548,1320,1398, 728,4123,1574,5605,1891,1197,3080,4124,5606,3081,3082, # 4496 3778,3646,3779, 747,5607, 635,4386,4747,5608,5609,5610,4387,5611,5612,4748,5613, # 4512 3415,4749,2437, 451,5614,3780,2542,2073,4388,2744,4389,4125,5615,1764,4750,5616, # 4528 4390, 350,4751,2283,2395,2493,5617,4391,4126,2249,1434,4127, 488,4752, 458,4392, # 4544 4128,3781, 771,1330,2396,3914,2576,3184,2160,2414,1553,2677,3185,4393,5618,2494, # 4560 2895,2622,1720,2711,4394,3416,4753,5619,2543,4395,5620,3262,4396,2778,5621,2016, # 4576 2745,5622,1155,1017,3782,3915,5623,3336,2313, 201,1865,4397,1430,5624,4129,5625, # 4592 5626,5627,5628,5629,4398,1604,5630, 414,1866, 371,2595,4754,4755,3532,2017,3127, # 4608 4756,1708, 960,4399, 887, 389,2172,1536,1663,1721,5631,2232,4130,2356,2940,1580, # 4624 5632,5633,1744,4757,2544,4758,4759,5634,4760,5635,2074,5636,4761,3647,3417,2896, # 4640 4400,5637,4401,2650,3418,2815, 673,2712,2465, 709,3533,4131,3648,4402,5638,1148, # 4656 502, 634,5639,5640,1204,4762,3649,1575,4763,2623,3783,5641,3784,3128, 948,3263, # 4672 121,1745,3916,1110,5642,4403,3083,2516,3027,4132,3785,1151,1771,3917,1488,4133, # 4688 1987,5643,2438,3534,5644,5645,2094,5646,4404,3918,1213,1407,2816, 531,2746,2545, # 4704 3264,1011,1537,4764,2779,4405,3129,1061,5647,3786,3787,1867,2897,5648,2018, 120, # 4720 4406,4407,2063,3650,3265,2314,3919,2678,3419,1955,4765,4134,5649,3535,1047,2713, # 4736 1266,5650,1368,4766,2858, 649,3420,3920,2546,2747,1102,2859,2679,5651,5652,2000, # 4752 5653,1111,3651,2977,5654,2495,3921,3652,2817,1855,3421,3788,5655,5656,3422,2415, # 4768 2898,3337,3266,3653,5657,2577,5658,3654,2818,4135,1460, 856,5659,3655,5660,2899, # 4784 2978,5661,2900,3922,5662,4408, 632,2517, 875,3923,1697,3924,2296,5663,5664,4767, # 4800 3028,1239, 580,4768,4409,5665, 914, 936,2075,1190,4136,1039,2124,5666,5667,5668, # 4816 5669,3423,1473,5670,1354,4410,3925,4769,2173,3084,4137, 915,3338,4411,4412,3339, # 4832 1605,1835,5671,2748, 398,3656,4413,3926,4138, 328,1913,2860,4139,3927,1331,4414, # 4848 3029, 937,4415,5672,3657,4140,4141,3424,2161,4770,3425, 524, 742, 538,3085,1012, # 4864 5673,5674,3928,2466,5675, 658,1103, 225,3929,5676,5677,4771,5678,4772,5679,3267, # 4880 1243,5680,4142, 963,2250,4773,5681,2714,3658,3186,5682,5683,2596,2332,5684,4774, # 4896 5685,5686,5687,3536, 
957,3426,2547,2033,1931,2941,2467, 870,2019,3659,1746,2780, # 4912 2781,2439,2468,5688,3930,5689,3789,3130,3790,3537,3427,3791,5690,1179,3086,5691, # 4928 3187,2378,4416,3792,2548,3188,3131,2749,4143,5692,3428,1556,2549,2297, 977,2901, # 4944 2034,4144,1205,3429,5693,1765,3430,3189,2125,1271, 714,1689,4775,3538,5694,2333, # 4960 3931, 533,4417,3660,2184, 617,5695,2469,3340,3539,2315,5696,5697,3190,5698,5699, # 4976 3932,1988, 618, 427,2651,3540,3431,5700,5701,1244,1690,5702,2819,4418,4776,5703, # 4992 3541,4777,5704,2284,1576, 473,3661,4419,3432, 972,5705,3662,5706,3087,5707,5708, # 5008 4778,4779,5709,3793,4145,4146,5710, 153,4780, 356,5711,1892,2902,4420,2144, 408, # 5024 803,2357,5712,3933,5713,4421,1646,2578,2518,4781,4782,3934,5714,3935,4422,5715, # 5040 2416,3433, 752,5716,5717,1962,3341,2979,5718, 746,3030,2470,4783,4423,3794, 698, # 5056 4784,1893,4424,3663,2550,4785,3664,3936,5719,3191,3434,5720,1824,1302,4147,2715, # 5072 3937,1974,4425,5721,4426,3192, 823,1303,1288,1236,2861,3542,4148,3435, 774,3938, # 5088 5722,1581,4786,1304,2862,3939,4787,5723,2440,2162,1083,3268,4427,4149,4428, 344, # 5104 1173, 288,2316, 454,1683,5724,5725,1461,4788,4150,2597,5726,5727,4789, 985, 894, # 5120 5728,3436,3193,5729,1914,2942,3795,1989,5730,2111,1975,5731,4151,5732,2579,1194, # 5136 425,5733,4790,3194,1245,3796,4429,5734,5735,2863,5736, 636,4791,1856,3940, 760, # 5152 1800,5737,4430,2212,1508,4792,4152,1894,1684,2298,5738,5739,4793,4431,4432,2213, # 5168 479,5740,5741, 832,5742,4153,2496,5743,2980,2497,3797, 990,3132, 627,1815,2652, # 5184 4433,1582,4434,2126,2112,3543,4794,5744, 799,4435,3195,5745,4795,2113,1737,3031, # 5200 1018, 543, 754,4436,3342,1676,4796,4797,4154,4798,1489,5746,3544,5747,2624,2903, # 5216 4155,5748,5749,2981,5750,5751,5752,5753,3196,4799,4800,2185,1722,5754,3269,3270, # 5232 1843,3665,1715, 481, 365,1976,1857,5755,5756,1963,2498,4801,5757,2127,3666,3271, # 5248 433,1895,2064,2076,5758, 602,2750,5759,5760,5761,5762,5763,3032,1628,3437,5764, # 5264 3197,4802,4156,2904,4803,2519,5765,2551,2782,5766,5767,5768,3343,4804,2905,5769, # 5280 4805,5770,2864,4806,4807,1221,2982,4157,2520,5771,5772,5773,1868,1990,5774,5775, # 5296 5776,1896,5777,5778,4808,1897,4158, 318,5779,2095,4159,4437,5780,5781, 485,5782, # 5312 938,3941, 553,2680, 116,5783,3942,3667,5784,3545,2681,2783,3438,3344,2820,5785, # 5328 3668,2943,4160,1747,2944,2983,5786,5787, 207,5788,4809,5789,4810,2521,5790,3033, # 5344 890,3669,3943,5791,1878,3798,3439,5792,2186,2358,3440,1652,5793,5794,5795, 941, # 5360 2299, 208,3546,4161,2020, 330,4438,3944,2906,2499,3799,4439,4811,5796,5797,5798, # 5376 )
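The table that closes above is the tail of a char-order-to-frequency-rank lookup (BIG5_CHAR_TO_FREQ_ORDER in big5freq.py, judging by the imports in chardistribution.py below and the table size of 5376). A minimal sketch of how it is consumed; the order value is an arbitrary example index, not a claim about any specific character.

from pip._vendor.chardet.big5freq import (BIG5_CHAR_TO_FREQ_ORDER,
                                          BIG5_TABLE_SIZE)

order = 165  # hypothetical result of Big5DistributionAnalysis.get_order()
if 0 <= order < BIG5_TABLE_SIZE:
    # feed() counts a character as "frequent" when its rank is below 512
    print(BIG5_CHAR_TO_FREQ_ORDER[order] < 512)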
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/big5prober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import Big5DistributionAnalysis
from .mbcssm import BIG5_SM_MODEL


class Big5Prober(MultiByteCharSetProber):
    def __init__(self):
        super(Big5Prober, self).__init__()
        self.coding_sm = CodingStateMachine(BIG5_SM_MODEL)
        self.distribution_analyzer = Big5DistributionAnalysis()
        self.reset()

    @property
    def charset_name(self):
        return "Big5"

    @property
    def language(self):
        return "Chinese"
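A minimal sketch of how a prober like this is driven, assuming the vendored package is importable as pip._vendor.chardet (a standalone chardet install exposes the same modules); the sample string is an arbitrary Big5-encodable example.

from pip._vendor.chardet.big5prober import Big5Prober
from pip._vendor.chardet.enums import ProbingState

prober = Big5Prober()
# arbitrary CJK text, encoded as Big5
sample = u'\u4f60\u597d\u4e16\u754c'.encode('big5')
state = prober.feed(sample)
if state == ProbingState.FOUND_IT or prober.get_confidence() > 0.5:
    print(prober.charset_name, prober.language, prober.get_confidence())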
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/chardistribution.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .euctwfreq import (EUCTW_CHAR_TO_FREQ_ORDER, EUCTW_TABLE_SIZE,
                        EUCTW_TYPICAL_DISTRIBUTION_RATIO)
from .euckrfreq import (EUCKR_CHAR_TO_FREQ_ORDER, EUCKR_TABLE_SIZE,
                        EUCKR_TYPICAL_DISTRIBUTION_RATIO)
from .gb2312freq import (GB2312_CHAR_TO_FREQ_ORDER, GB2312_TABLE_SIZE,
                         GB2312_TYPICAL_DISTRIBUTION_RATIO)
from .big5freq import (BIG5_CHAR_TO_FREQ_ORDER, BIG5_TABLE_SIZE,
                       BIG5_TYPICAL_DISTRIBUTION_RATIO)
from .jisfreq import (JIS_CHAR_TO_FREQ_ORDER, JIS_TABLE_SIZE,
                      JIS_TYPICAL_DISTRIBUTION_RATIO)


class CharDistributionAnalysis(object):
    ENOUGH_DATA_THRESHOLD = 1024
    SURE_YES = 0.99
    SURE_NO = 0.01
    MINIMUM_DATA_THRESHOLD = 3

    def __init__(self):
        # Mapping table to get frequency order from char order (as returned
        # by get_order())
        self._char_to_freq_order = None
        self._table_size = None  # Size of above table
        # This is a constant value which varies from language to language,
        # used in calculating confidence.  See
        # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html
        # for further detail.
        self.typical_distribution_ratio = None
        self._done = None
        self._total_chars = None
        self._freq_chars = None
        self.reset()

    def reset(self):
        """reset analyser, clear any state"""
        # If this flag is set to True, detection is done and conclusion has
        # been made
        self._done = False
        self._total_chars = 0  # Total characters encountered
        # The number of characters whose frequency order is less than 512
        self._freq_chars = 0

    def feed(self, char, char_len):
        """feed a character with known length"""
        if char_len == 2:
            # we only care about 2-byte characters in our distribution
            # analysis
            order = self.get_order(char)
        else:
            order = -1
        if order >= 0:
            self._total_chars += 1
            # order is valid
            if order < self._table_size:
                if 512 > self._char_to_freq_order[order]:
                    self._freq_chars += 1

    def get_confidence(self):
        """return confidence based on existing data"""
        # if we didn't receive any character in our consideration range,
        # return negative answer
        if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD:
            return self.SURE_NO

        if self._total_chars != self._freq_chars:
            r = (self._freq_chars / ((self._total_chars - self._freq_chars)
                 * self.typical_distribution_ratio))
            if r < self.SURE_YES:
                return r

        # normalize confidence (we don't want to be 100% sure)
        return self.SURE_YES

    def got_enough_data(self):
        # It is not necessary to receive all data to draw a conclusion.
        # For charset detection, a certain amount of data is enough.
        return self._total_chars > self.ENOUGH_DATA_THRESHOLD

    def get_order(self, byte_str):
        # We do not handle characters based on the original encoding string,
        # but convert this encoding string to a number, here called order.
        # This allows multiple encodings of a language to share one frequency
        # table.
        return -1
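# A worked sketch of the confidence formula in get_confidence() above, kept
# as a comment so the module still parses. The counts are made up, and 0.75
# is an assumed example value for typical_distribution_ratio (the real
# constants live in the *freq.py modules):
#
#     freq_chars = 400    # characters whose frequency order is < 512
#     total_chars = 500
#     ratio = 0.75        # hypothetical typical_distribution_ratio
#     r = freq_chars / ((total_chars - freq_chars) * ratio)
#     # r = 400 / (100 * 0.75) = 5.33... > SURE_YES, so get_confidence()
#     # would return the capped value SURE_YES (0.99).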
class EUCTWDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(EUCTWDistributionAnalysis, self).__init__()
        self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER
        self._table_size = EUCTW_TABLE_SIZE
        self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for euc-TW encoding, we are interested in
        #   first  byte range: 0xc4 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here. State machine has done that.
        first_char = byte_str[0]
        if first_char >= 0xC4:
            return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1
        else:
            return -1


class EUCKRDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(EUCKRDistributionAnalysis, self).__init__()
        self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER
        self._table_size = EUCKR_TABLE_SIZE
        self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for euc-KR encoding, we are interested in
        #   first  byte range: 0xb0 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here. State machine has done that.
        first_char = byte_str[0]
        if first_char >= 0xB0:
            return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1
        else:
            return -1


class GB2312DistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(GB2312DistributionAnalysis, self).__init__()
        self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER
        self._table_size = GB2312_TABLE_SIZE
        self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for GB2312 encoding, we are interested in
        #   first  byte range: 0xb0 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here. State machine has done that.
        first_char, second_char = byte_str[0], byte_str[1]
        if (first_char >= 0xB0) and (second_char >= 0xA1):
            return 94 * (first_char - 0xB0) + second_char - 0xA1
        else:
            return -1
class Big5DistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(Big5DistributionAnalysis, self).__init__()
        self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER
        self._table_size = BIG5_TABLE_SIZE
        self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for big5 encoding, we are interested in
        #   first  byte range: 0xa4 -- 0xfe
        #   second byte range: 0x40 -- 0x7e, 0xa1 -- 0xfe
        # no validation needed here. State machine has done that.
        first_char, second_char = byte_str[0], byte_str[1]
        if first_char >= 0xA4:
            if second_char >= 0xA1:
                return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63
            else:
                return 157 * (first_char - 0xA4) + second_char - 0x40
        else:
            return -1


class SJISDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(SJISDistributionAnalysis, self).__init__()
        self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER
        self._table_size = JIS_TABLE_SIZE
        self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for sjis encoding, we are interested in
        #   first  byte range: 0x81 -- 0x9f, 0xe0 -- 0xfe
        #   second byte range: 0x40 -- 0x7e, 0x81 -- 0xfe
        # no validation needed here. State machine has done that.
        first_char, second_char = byte_str[0], byte_str[1]
        if (first_char >= 0x81) and (first_char <= 0x9F):
            order = 188 * (first_char - 0x81)
        elif (first_char >= 0xE0) and (first_char <= 0xEF):
            order = 188 * (first_char - 0xE0 + 31)
        else:
            return -1
        order = order + second_char - 0x40
        if second_char > 0x7F:
            order = -1
        return order


class EUCJPDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(EUCJPDistributionAnalysis, self).__init__()
        self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER
        self._table_size = JIS_TABLE_SIZE
        self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for euc-JP encoding, we are interested in
        #   first  byte range: 0xa0 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here. State machine has done that.
        char = byte_str[0]
        if char >= 0xA0:
            return 94 * (char - 0xA1) + byte_str[1] - 0xA1
        else:
            return -1
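A worked sketch of Big5DistributionAnalysis.get_order() on an arbitrary two-byte sequence (0xA5 0x48 is a hypothetical lead/trail pair, not a claim about a specific character):

first_char, second_char = 0xA5, 0x48
# first byte is in 0xa4--0xfe; second byte falls in the low 0x40--0x7e range
order = 157 * (first_char - 0xA4) + second_char - 0x40
print(order)  # 157 * 1 + 8 = 165

# feed() would then count this character as "frequent" if
# BIG5_CHAR_TO_FREQ_ORDER[order] < 512.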
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/charsetgroupprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .enums import ProbingState
from .charsetprober import CharSetProber


class CharSetGroupProber(CharSetProber):
    def __init__(self, lang_filter=None):
        super(CharSetGroupProber, self).__init__(lang_filter=lang_filter)
        self._active_num = 0
        self.probers = []
        self._best_guess_prober = None

    def reset(self):
        super(CharSetGroupProber, self).reset()
        self._active_num = 0
        for prober in self.probers:
            if prober:
                prober.reset()
                prober.active = True
                self._active_num += 1
        self._best_guess_prober = None
    @property
    def charset_name(self):
        if not self._best_guess_prober:
            self.get_confidence()
            if not self._best_guess_prober:
                return None
        return self._best_guess_prober.charset_name

    @property
    def language(self):
        if not self._best_guess_prober:
            self.get_confidence()
            if not self._best_guess_prober:
                return None
        return self._best_guess_prober.language

    def feed(self, byte_str):
        for prober in self.probers:
            if not prober:
                continue
            if not prober.active:
                continue
            state = prober.feed(byte_str)
            if not state:
                continue
            if state == ProbingState.FOUND_IT:
                self._best_guess_prober = prober
                return self.state
            elif state == ProbingState.NOT_ME:
                prober.active = False
                self._active_num -= 1
                if self._active_num <= 0:
                    self._state = ProbingState.NOT_ME
                    return self.state
        return self.state

    def get_confidence(self):
        state = self.state
        if state == ProbingState.FOUND_IT:
            return 0.99
        elif state == ProbingState.NOT_ME:
            return 0.01
        best_conf = 0.0
        self._best_guess_prober = None
        for prober in self.probers:
            if not prober:
                continue
            if not prober.active:
                self.logger.debug('%s not active', prober.charset_name)
                continue
            conf = prober.get_confidence()
            self.logger.debug('%s %s confidence = %s',
                              prober.charset_name, prober.language, conf)
            if best_conf < conf:
                best_conf = conf
                self._best_guess_prober = prober
        if not self._best_guess_prober:
            return 0.0
        return best_conf
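A minimal sketch of composing a group from the two concrete probers that appear in this package dump; in chardet itself, MBCSGroupProber (mbcssm's companion module mbcsgroupprober.py) builds the full multi-byte group this way. The sample bytes are an arbitrary Big5-encodable string, and the printed confidence will be low for such a short input.

from pip._vendor.chardet.charsetgroupprober import CharSetGroupProber
from pip._vendor.chardet.big5prober import Big5Prober
from pip._vendor.chardet.cp949prober import CP949Prober

group = CharSetGroupProber()
group.probers = [Big5Prober(), CP949Prober()]
group.reset()  # marks every sub-prober active and recounts _active_num

group.feed(u'\u4f60\u597d'.encode('big5'))
print(group.charset_name, group.get_confidence())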
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/charsetprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#   Shy Shalom - original C code
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

import logging
import re

from .enums import ProbingState


class CharSetProber(object):

    SHORTCUT_THRESHOLD = 0.95

    def __init__(self, lang_filter=None):
        self._state = None
        self.lang_filter = lang_filter
        self.logger = logging.getLogger(__name__)

    def reset(self):
        self._state = ProbingState.DETECTING

    @property
    def charset_name(self):
        return None

    def feed(self, buf):
        pass

    @property
    def state(self):
        return self._state

    def get_confidence(self):
        return 0.0

    @staticmethod
    def filter_high_byte_only(buf):
        buf = re.sub(b'([\x00-\x7F])+', b' ', buf)
        return buf

    @staticmethod
    def filter_international_words(buf):
        """
        We define three types of bytes:
        alphabet: English letters [a-zA-Z]
        international: international characters [\x80-\xFF]
        marker: everything else [^a-zA-Z\x80-\xFF]

        The input buffer can be thought of as a series of words delimited
        by markers. This function filters the buffer down to the words that
        contain at least one international character. All contiguous
        sequences of markers are replaced by a single space ASCII character.

        This filter applies to all scripts which do not use English
        characters.
        """
        filtered = bytearray()

        # This regex filters out only words that have at least one
        # international character. The word may include one marker character
        # at the end.
        words = re.findall(b'[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?',
                           buf)

        for word in words:
            filtered.extend(word[:-1])

            # If the last character in the word is a marker, replace it with
            # a space as markers shouldn't affect our analysis (they are used
            # similarly across all languages and may thus have similar
            # frequencies).
            last_char = word[-1:]
            if not last_char.isalpha() and last_char < b'\x80':
                last_char = b' '
            filtered.extend(last_char)

        return filtered
    @staticmethod
    def filter_with_english_letters(buf):
        """
        Returns a copy of ``buf`` that retains only the sequences of English
        alphabet and high byte characters that are not between <> characters.
        Also retains English alphabet and high byte characters immediately
        before occurrences of >.

        This filter can be applied to all scripts which contain both English
        characters and extended ASCII characters, but is currently only used
        by ``Latin1Prober``.
        """
        filtered = bytearray()
        in_tag = False
        prev = 0

        for curr in range(len(buf)):
            # Slice here to get bytes instead of an int with Python 3
            buf_char = buf[curr:curr + 1]
            # Check if we're coming out of or entering an HTML tag
            if buf_char == b'>':
                in_tag = False
            elif buf_char == b'<':
                in_tag = True

            # If current character is not extended-ASCII and not alphabetic...
            if buf_char < b'\x80' and not buf_char.isalpha():
                # ...and we're not in a tag
                if curr > prev and not in_tag:
                    # Keep everything after last non-extended-ASCII,
                    # non-alphabetic character
                    filtered.extend(buf[prev:curr])
                    # Output a space to delimit stretch we kept
                    filtered.extend(b' ')
                prev = curr + 1

        # If we're not in a tag...
        if not in_tag:
            # Keep everything after last non-extended-ASCII, non-alphabetic
            # character
            filtered.extend(buf[prev:])

        return filtered
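A quick sketch of the two static filters on a hypothetical Latin-1 buffer; the byte string is illustrative and the comments describe the expected behaviour rather than exact outputs.

from pip._vendor.chardet.charsetprober import CharSetProber

buf = b'<p>caf\xe9 menu</p> plain ascii r\xe9sum\xe9'

# keeps only the words containing at least one byte >= 0x80
print(bytes(CharSetProber.filter_international_words(buf)))

# drops runs between < and >, keeps letters/high bytes elsewhere
print(bytes(CharSetProber.filter_with_english_letters(buf)))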
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/cli/

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/cli/__init__.py

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/cli/chardetect.py

#!/usr/bin/env python
"""
Script which takes one or more file paths and reports on their detected
encodings

Example::

    % chardetect somefile someotherfile
    somefile: windows-1252 with confidence 0.5
    someotherfile: ascii with confidence 1.0

If no paths are provided, it takes its input from stdin.

"""

from __future__ import absolute_import, print_function, unicode_literals

import argparse
import sys

from pip._vendor.chardet import __version__
from pip._vendor.chardet.compat import PY2
from pip._vendor.chardet.universaldetector import UniversalDetector


def description_of(lines, name='stdin'):
    """
    Return a string describing the probable encoding of a file or
    list of strings.

    :param lines: The lines to get the encoding of.
    :type lines: Iterable of bytes
    :param name: Name of file or collection of lines
    :type name: str
    """
    u = UniversalDetector()
    for line in lines:
        line = bytearray(line)
        u.feed(line)
        # shortcut out of the loop to save reading further -
        # particularly useful if we read a BOM.
        if u.done:
            break
    u.close()
    result = u.result
    if PY2:
        name = name.decode(sys.getfilesystemencoding(), 'ignore')
    if result['encoding']:
        return '{0}: {1} with confidence {2}'.format(name, result['encoding'],
                                                     result['confidence'])
    else:
        return '{0}: no result'.format(name)
def main(argv=None):
    """
    Handles command line arguments and gets things started.

    :param argv: List of arguments, as if specified on the command-line.
                 If None, ``sys.argv[1:]`` is used instead.
    :type argv: list of str
    """
    # Get command line arguments
    parser = argparse.ArgumentParser(
        description="Takes one or more file paths and reports their detected "
                    "encodings")
    parser.add_argument('input',
                        help='File whose encoding we would like to determine. '
                             '(default: stdin)',
                        type=argparse.FileType('rb'), nargs='*',
                        default=[sys.stdin if PY2 else sys.stdin.buffer])
    parser.add_argument('--version', action='version',
                        version='%(prog)s {0}'.format(__version__))
    args = parser.parse_args(argv)

    for f in args.input:
        if f.isatty():
            print("You are running chardetect interactively. Press " +
                  "CTRL-D twice at the start of a blank line to signal the " +
                  "end of your input. If you want help, run chardetect " +
                  "--help\n", file=sys.stderr)
        print(description_of(f, f.name))


if __name__ == '__main__':
    main()
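A small usage sketch for description_of() outside the CLI; 'sample.txt' is a hypothetical file path, and the printed result depends on its contents.

from pip._vendor.chardet.cli.chardetect import description_of

with open('sample.txt', 'rb') as f:
    print(description_of(f, name=f.name))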
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/codingstatemachine.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

import logging

from .enums import MachineState


class CodingStateMachine(object):
    """
    A state machine to verify a byte sequence for a particular encoding. For
    each byte the detector receives, it will feed that byte to every active
    state machine available, one byte at a time. The state machine changes
    its state based on its previous state and the byte it receives.

    There are 3 states in a state machine that are of interest to an
    auto-detector:

    START state: This is the state to start with, or a legal byte sequence
                 (i.e. a valid code point) for a character has just been
                 identified.

    ME state: This indicates that the state machine identified a byte
              sequence that is specific to the charset it is designed for
              and that there is no other possible encoding which can contain
              this byte sequence. This will lead to an immediate positive
              answer for the detector.

    ERROR state: This indicates the state machine identified an illegal byte
                 sequence for that encoding. This will lead to an immediate
                 negative answer for this encoding. Detector will exclude
                 this encoding from consideration from here on.
    """

    def __init__(self, sm):
        self._model = sm
        self._curr_byte_pos = 0
        self._curr_char_len = 0
        self._curr_state = None
        self.logger = logging.getLogger(__name__)
        self.reset()

    def reset(self):
        self._curr_state = MachineState.START

    def next_state(self, c):
        # for each byte we get its class
        # if it is first byte, we also get byte length
        byte_class = self._model['class_table'][c]
        if self._curr_state == MachineState.START:
            self._curr_byte_pos = 0
            self._curr_char_len = self._model['char_len_table'][byte_class]
        # from byte's class and state_table, we get its next state
        curr_state = (self._curr_state * self._model['class_factor']
                      + byte_class)
        self._curr_state = self._model['state_table'][curr_state]
        self._curr_byte_pos += 1
        return self._curr_state

    def get_current_charlen(self):
        return self._curr_char_len

    def get_coding_state_machine(self):
        return self._model['name']

    @property
    def language(self):
        return self._model['language']
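A short sketch stepping the Big5 state machine over a two-byte sequence; 0xA4 0x40 is an arbitrary legal-looking Big5 lead/trail pair used only for illustration.

from pip._vendor.chardet.codingstatemachine import CodingStateMachine
from pip._vendor.chardet.enums import MachineState
from pip._vendor.chardet.mbcssm import BIG5_SM_MODEL

sm = CodingStateMachine(BIG5_SM_MODEL)
for byte in (0xA4, 0x40):
    state = sm.next_state(byte)
    if state == MachineState.ERROR:
        print('illegal Big5 sequence')
        break
else:
    # char length is taken from the model's char_len_table for the lead byte
    print('legal so far; current char length =', sm.get_current_charlen())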
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/compat.py

######################## BEGIN LICENSE BLOCK ########################
# Contributor(s):
#   Dan Blanchard
#   Ian Cordasco
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

import sys


if sys.version_info < (3, 0):
    PY2 = True
    PY3 = False
    base_str = (str, unicode)
    text_type = unicode
else:
    PY2 = False
    PY3 = True
    base_str = (bytes, str)
    text_type = str
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/compat.py

######################## BEGIN LICENSE BLOCK ########################
# Contributor(s):
#   Dan Blanchard
#   Ian Cordasco
#
# (LGPL 2.1 terms identical to the license block in
# codingstatemachine.py above)
######################### END LICENSE BLOCK #########################

import sys


if sys.version_info < (3, 0):
    PY2 = True
    PY3 = False
    base_str = (str, unicode)
    text_type = unicode
else:
    PY2 = False
    PY3 = True
    base_str = (bytes, str)
    text_type = str
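A small illustrative helper (mine, not code from this archive; ensure_text is a hypothetical name) showing how these aliases keep callers agnostic to the Python 2/3 split:

from chardet.compat import text_type

def ensure_text(value, encoding="utf-8"):
    # decode bytes to the interpreter's canonical text type; pass text through
    if isinstance(value, text_type):
        return value
    return value.decode(encoding)

assert ensure_text(b"abc") == ensure_text(u"abc")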
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/cp949prober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# (LGPL 2.1 terms identical to the license block in
# codingstatemachine.py above)
######################### END LICENSE BLOCK #########################

from .chardistribution import EUCKRDistributionAnalysis
from .codingstatemachine import CodingStateMachine
from .mbcharsetprober import MultiByteCharSetProber
from .mbcssm import CP949_SM_MODEL


class CP949Prober(MultiByteCharSetProber):
    def __init__(self):
        super(CP949Prober, self).__init__()
        self.coding_sm = CodingStateMachine(CP949_SM_MODEL)
        # NOTE: CP949 is a superset of EUC-KR, so the distribution should be
        #       the same.
        self.distribution_analyzer = EUCKRDistributionAnalysis()
        self.reset()

    @property
    def charset_name(self):
        return "CP949"

    @property
    def language(self):
        return "Korean"
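A usage sketch (mine, under the assumption that feed() and get_confidence() behave as inherited from MultiByteCharSetProber, which is not shown in this archive):

from chardet.cp949prober import CP949Prober

prober = CP949Prober()
prober.feed(u"안녕하세요".encode("cp949"))  # CP949/EUC-KR-encoded Korean text
print(prober.charset_name, prober.language, prober.get_confidence())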
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/enums.py

"""
All of the Enums that are used throughout the chardet package.

:author: Dan Blanchard (dan.blanchard@gmail.com)
"""


class InputState(object):
    """
    This enum represents the different states a universal detector can be in.
    """
    PURE_ASCII = 0
    ESC_ASCII = 1
    HIGH_BYTE = 2


class LanguageFilter(object):
    """
    This enum represents the different language filters we can apply to a
    ``UniversalDetector``.
    """
    CHINESE_SIMPLIFIED = 0x01
    CHINESE_TRADITIONAL = 0x02
    JAPANESE = 0x04
    KOREAN = 0x08
    NON_CJK = 0x10
    ALL = 0x1F
    CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL
    CJK = CHINESE | JAPANESE | KOREAN


class ProbingState(object):
    """
    This enum represents the different states a prober can be in.
    """
    DETECTING = 0
    FOUND_IT = 1
    NOT_ME = 2


class MachineState(object):
    """
    This enum represents the different states a state machine can be in.
    """
    START = 0
    ERROR = 1
    ITS_ME = 2


class SequenceLikelihood(object):
    """
    This enum represents the likelihood of a character following the
    previous one.
    """
    NEGATIVE = 0
    UNLIKELY = 1
    LIKELY = 2
    POSITIVE = 3

    @classmethod
    def get_num_categories(cls):
        """:returns: The number of likelihood categories in the enum."""
        return 4


class CharacterCategory(object):
    """
    This enum represents the different categories language models for
    ``SingleByteCharsetProber`` put characters into.

    Anything less than CONTROL is considered a letter.
    """
    UNDEFINED = 255
    LINE_BREAK = 254
    SYMBOL = 253
    DIGIT = 252
    CONTROL = 251

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/escprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# (LGPL 2.1 terms identical to the license block in
# codingstatemachine.py above)
######################### END LICENSE BLOCK #########################

from .charsetprober import CharSetProber
from .codingstatemachine import CodingStateMachine
from .enums import LanguageFilter, ProbingState, MachineState
from .escsm import (HZ_SM_MODEL, ISO2022CN_SM_MODEL, ISO2022JP_SM_MODEL,
                    ISO2022KR_SM_MODEL)


class EscCharSetProber(CharSetProber):
    """
    This CharSetProber uses a "code scheme" approach for detecting encodings,
    whereby easily recognizable escape or shift sequences are relied on to
    identify these encodings.
    """

    def __init__(self, lang_filter=None):
        super(EscCharSetProber, self).__init__(lang_filter=lang_filter)
        self.coding_sm = []
        if self.lang_filter & LanguageFilter.CHINESE_SIMPLIFIED:
            self.coding_sm.append(CodingStateMachine(HZ_SM_MODEL))
            self.coding_sm.append(CodingStateMachine(ISO2022CN_SM_MODEL))
        if self.lang_filter & LanguageFilter.JAPANESE:
            self.coding_sm.append(CodingStateMachine(ISO2022JP_SM_MODEL))
        if self.lang_filter & LanguageFilter.KOREAN:
            self.coding_sm.append(CodingStateMachine(ISO2022KR_SM_MODEL))
        self.active_sm_count = None
        self._detected_charset = None
        self._detected_language = None
        self._state = None
        self.reset()

    def reset(self):
        super(EscCharSetProber, self).reset()
        for coding_sm in self.coding_sm:
            if not coding_sm:
                continue
            coding_sm.active = True
            coding_sm.reset()
        self.active_sm_count = len(self.coding_sm)
        self._detected_charset = None
        self._detected_language = None

    @property
    def charset_name(self):
        return self._detected_charset

    @property
    def language(self):
        return self._detected_language

    def get_confidence(self):
        if self._detected_charset:
            return 0.99
        else:
            return 0.00

    def feed(self, byte_str):
        for c in byte_str:
            for coding_sm in self.coding_sm:
                if not coding_sm or not coding_sm.active:
                    continue
                coding_state = coding_sm.next_state(c)
                if coding_state == MachineState.ERROR:
                    coding_sm.active = False
                    self.active_sm_count -= 1
                    if self.active_sm_count <= 0:
                        self._state = ProbingState.NOT_ME
                        return self.state
                elif coding_state == MachineState.ITS_ME:
                    self._state = ProbingState.FOUND_IT
                    self._detected_charset = coding_sm.get_coding_state_machine()
                    self._detected_language = coding_sm.language
                    return self.state

        return self.state
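A sketch (mine, not vendored code; assumes Python 3) of the prober above in action. Each state machine that hits ERROR is deactivated; the first one to hit ITS_ME decides the answer and the prober short-circuits to FOUND_IT:

from chardet.enums import LanguageFilter, ProbingState
from chardet.escprober import EscCharSetProber

prober = EscCharSetProber(lang_filter=LanguageFilter.ALL)
# ESC $ B designates JIS X 0208; of the four machines, only ISO-2022-JP
# accepts this sequence, so the others drop out along the way
state = prober.feed(b"\x1b$B")
assert state == ProbingState.FOUND_IT
print(prober.charset_name, prober.language)  # ISO-2022-JP Japanese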
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/escsm.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### from .enums import MachineState HZ_CLS = ( 1,0,0,0,0,0,0,0, # 00 - 07 0,0,0,0,0,0,0,0, # 08 - 0f 0,0,0,0,0,0,0,0, # 10 - 17 0,0,0,1,0,0,0,0, # 18 - 1f 0,0,0,0,0,0,0,0, # 20 - 27 0,0,0,0,0,0,0,0, # 28 - 2f 0,0,0,0,0,0,0,0, # 30 - 37 0,0,0,0,0,0,0,0, # 38 - 3f 0,0,0,0,0,0,0,0, # 40 - 47 0,0,0,0,0,0,0,0, # 48 - 4f 0,0,0,0,0,0,0,0, # 50 - 57 0,0,0,0,0,0,0,0, # 58 - 5f 0,0,0,0,0,0,0,0, # 60 - 67 0,0,0,0,0,0,0,0, # 68 - 6f 0,0,0,0,0,0,0,0, # 70 - 77 0,0,0,4,0,5,2,0, # 78 - 7f 1,1,1,1,1,1,1,1, # 80 - 87 1,1,1,1,1,1,1,1, # 88 - 8f 1,1,1,1,1,1,1,1, # 90 - 97 1,1,1,1,1,1,1,1, # 98 - 9f 1,1,1,1,1,1,1,1, # a0 - a7 1,1,1,1,1,1,1,1, # a8 - af 1,1,1,1,1,1,1,1, # b0 - b7 1,1,1,1,1,1,1,1, # b8 - bf 1,1,1,1,1,1,1,1, # c0 - c7 1,1,1,1,1,1,1,1, # c8 - cf 1,1,1,1,1,1,1,1, # d0 - d7 1,1,1,1,1,1,1,1, # d8 - df 1,1,1,1,1,1,1,1, # e0 - e7 1,1,1,1,1,1,1,1, # e8 - ef 1,1,1,1,1,1,1,1, # f0 - f7 1,1,1,1,1,1,1,1, # f8 - ff ) HZ_ST = ( MachineState.START,MachineState.ERROR, 3,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,# 00-07 MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 08-0f MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START, 4,MachineState.ERROR,# 10-17 5,MachineState.ERROR, 6,MachineState.ERROR, 5, 5, 4,MachineState.ERROR,# 18-1f 4,MachineState.ERROR, 4, 4, 4,MachineState.ERROR, 4,MachineState.ERROR,# 20-27 4,MachineState.ITS_ME,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 28-2f ) HZ_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) HZ_SM_MODEL = {'class_table': HZ_CLS, 'class_factor': 6, 'state_table': HZ_ST, 'char_len_table': HZ_CHAR_LEN_TABLE, 'name': "HZ-GB-2312", 'language': 'Chinese'} ISO2022CN_CLS = ( 2,0,0,0,0,0,0,0, # 00 - 07 0,0,0,0,0,0,0,0, # 08 - 0f 0,0,0,0,0,0,0,0, # 10 - 17 0,0,0,1,0,0,0,0, # 18 - 1f 0,0,0,0,0,0,0,0, # 20 - 27 0,3,0,0,0,0,0,0, # 28 - 2f 0,0,0,0,0,0,0,0, # 30 - 37 0,0,0,0,0,0,0,0, # 38 - 3f 0,0,0,4,0,0,0,0, # 40 - 47 0,0,0,0,0,0,0,0, # 48 - 4f 0,0,0,0,0,0,0,0, # 50 - 57 0,0,0,0,0,0,0,0, # 58 - 5f 0,0,0,0,0,0,0,0, # 60 - 67 0,0,0,0,0,0,0,0, # 68 - 6f 0,0,0,0,0,0,0,0, # 70 - 77 0,0,0,0,0,0,0,0, # 78 - 7f 2,2,2,2,2,2,2,2, # 80 - 87 2,2,2,2,2,2,2,2, # 88 - 8f 2,2,2,2,2,2,2,2, # 90 - 97 2,2,2,2,2,2,2,2, # 98 - 9f 2,2,2,2,2,2,2,2, # a0 - a7 2,2,2,2,2,2,2,2, # a8 - af 2,2,2,2,2,2,2,2, # b0 - b7 2,2,2,2,2,2,2,2, # b8 - bf 2,2,2,2,2,2,2,2, # c0 - c7 2,2,2,2,2,2,2,2, # c8 - cf 2,2,2,2,2,2,2,2, # d0 - d7 2,2,2,2,2,2,2,2, # d8 - df 2,2,2,2,2,2,2,2, # e0 - e7 2,2,2,2,2,2,2,2, # e8 - ef 2,2,2,2,2,2,2,2, # f0 - f7 2,2,2,2,2,2,2,2, # f8 - ff ) ISO2022CN_ST = ( MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 00-07 MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 08-0f MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 10-17 
MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,# 18-1f MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 20-27 5, 6,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 28-2f MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 30-37 MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,# 38-3f ) ISO2022CN_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0) ISO2022CN_SM_MODEL = {'class_table': ISO2022CN_CLS, 'class_factor': 9, 'state_table': ISO2022CN_ST, 'char_len_table': ISO2022CN_CHAR_LEN_TABLE, 'name': "ISO-2022-CN", 'language': 'Chinese'} ISO2022JP_CLS = ( 2,0,0,0,0,0,0,0, # 00 - 07 0,0,0,0,0,0,2,2, # 08 - 0f 0,0,0,0,0,0,0,0, # 10 - 17 0,0,0,1,0,0,0,0, # 18 - 1f 0,0,0,0,7,0,0,0, # 20 - 27 3,0,0,0,0,0,0,0, # 28 - 2f 0,0,0,0,0,0,0,0, # 30 - 37 0,0,0,0,0,0,0,0, # 38 - 3f 6,0,4,0,8,0,0,0, # 40 - 47 0,9,5,0,0,0,0,0, # 48 - 4f 0,0,0,0,0,0,0,0, # 50 - 57 0,0,0,0,0,0,0,0, # 58 - 5f 0,0,0,0,0,0,0,0, # 60 - 67 0,0,0,0,0,0,0,0, # 68 - 6f 0,0,0,0,0,0,0,0, # 70 - 77 0,0,0,0,0,0,0,0, # 78 - 7f 2,2,2,2,2,2,2,2, # 80 - 87 2,2,2,2,2,2,2,2, # 88 - 8f 2,2,2,2,2,2,2,2, # 90 - 97 2,2,2,2,2,2,2,2, # 98 - 9f 2,2,2,2,2,2,2,2, # a0 - a7 2,2,2,2,2,2,2,2, # a8 - af 2,2,2,2,2,2,2,2, # b0 - b7 2,2,2,2,2,2,2,2, # b8 - bf 2,2,2,2,2,2,2,2, # c0 - c7 2,2,2,2,2,2,2,2, # c8 - cf 2,2,2,2,2,2,2,2, # d0 - d7 2,2,2,2,2,2,2,2, # d8 - df 2,2,2,2,2,2,2,2, # e0 - e7 2,2,2,2,2,2,2,2, # e8 - ef 2,2,2,2,2,2,2,2, # f0 - f7 2,2,2,2,2,2,2,2, # f8 - ff ) ISO2022JP_ST = ( MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 00-07 MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 08-0f MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 10-17 MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,# 18-1f MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,MachineState.ERROR,# 20-27 MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 6,MachineState.ITS_ME,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,# 28-2f MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,# 30-37 MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 38-3f MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,MachineState.START,# 40-47 ) ISO2022JP_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0) ISO2022JP_SM_MODEL = {'class_table': ISO2022JP_CLS, 'class_factor': 10, 'state_table': ISO2022JP_ST, 'char_len_table': ISO2022JP_CHAR_LEN_TABLE, 'name': "ISO-2022-JP", 'language': 'Japanese'} 
ISO2022KR_CLS = (
2,0,0,0,0,0,0,0,  # 00 - 07
0,0,0,0,0,0,0,0,  # 08 - 0f
0,0,0,0,0,0,0,0,  # 10 - 17
0,0,0,1,0,0,0,0,  # 18 - 1f
0,0,0,0,3,0,0,0,  # 20 - 27
0,4,0,0,0,0,0,0,  # 28 - 2f
0,0,0,0,0,0,0,0,  # 30 - 37
0,0,0,0,0,0,0,0,  # 38 - 3f
0,0,0,5,0,0,0,0,  # 40 - 47
0,0,0,0,0,0,0,0,  # 48 - 4f
0,0,0,0,0,0,0,0,  # 50 - 57
0,0,0,0,0,0,0,0,  # 58 - 5f
0,0,0,0,0,0,0,0,  # 60 - 67
0,0,0,0,0,0,0,0,  # 68 - 6f
0,0,0,0,0,0,0,0,  # 70 - 77
0,0,0,0,0,0,0,0,  # 78 - 7f
2,2,2,2,2,2,2,2,  # 80 - 87
2,2,2,2,2,2,2,2,  # 88 - 8f
2,2,2,2,2,2,2,2,  # 90 - 97
2,2,2,2,2,2,2,2,  # 98 - 9f
2,2,2,2,2,2,2,2,  # a0 - a7
2,2,2,2,2,2,2,2,  # a8 - af
2,2,2,2,2,2,2,2,  # b0 - b7
2,2,2,2,2,2,2,2,  # b8 - bf
2,2,2,2,2,2,2,2,  # c0 - c7
2,2,2,2,2,2,2,2,  # c8 - cf
2,2,2,2,2,2,2,2,  # d0 - d7
2,2,2,2,2,2,2,2,  # d8 - df
2,2,2,2,2,2,2,2,  # e0 - e7
2,2,2,2,2,2,2,2,  # e8 - ef
2,2,2,2,2,2,2,2,  # f0 - f7
2,2,2,2,2,2,2,2,  # f8 - ff
)

ISO2022KR_ST = (
MachineState.START,     3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,# 00-07
MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 08-0f
MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,     4,MachineState.ERROR,MachineState.ERROR,# 10-17
MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,     5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 18-1f
MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 20-27
)

ISO2022KR_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0)

ISO2022KR_SM_MODEL = {'class_table': ISO2022KR_CLS,
                      'class_factor': 6,
                      'state_table': ISO2022KR_ST,
                      'char_len_table': ISO2022KR_CHAR_LEN_TABLE,
                      'name': "ISO-2022-KR",
                      'language': 'Korean'}
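The flattened tables above are consumed by CodingStateMachine through simple index arithmetic. A short sketch (mine, not vendored code; assumes Python 3) that walks the ISO-2022-JP designation sequence directly from the model, mirroring next_state():

from chardet.enums import MachineState
from chardet.escsm import ISO2022JP_SM_MODEL

model = ISO2022JP_SM_MODEL
state = MachineState.START
for byte in b"\x1b$B":  # ESC $ B designates JIS X 0208
    byte_class = model['class_table'][byte]
    # the state table is flattened: row = current state, column = byte class
    state = model['state_table'][state * model['class_factor'] + byte_class]
assert state == MachineState.ITS_ME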
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/eucjpprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# (LGPL 2.1 terms identical to the license block in
# codingstatemachine.py above)
######################### END LICENSE BLOCK #########################

from .enums import ProbingState, MachineState
from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import EUCJPDistributionAnalysis
from .jpcntx import EUCJPContextAnalysis
from .mbcssm import EUCJP_SM_MODEL


class EUCJPProber(MultiByteCharSetProber):
    def __init__(self):
        super(EUCJPProber, self).__init__()
        self.coding_sm = CodingStateMachine(EUCJP_SM_MODEL)
        self.distribution_analyzer = EUCJPDistributionAnalysis()
        self.context_analyzer = EUCJPContextAnalysis()
        self.reset()

    def reset(self):
        super(EUCJPProber, self).reset()
        self.context_analyzer.reset()

    @property
    def charset_name(self):
        return "EUC-JP"

    @property
    def language(self):
        return "Japanese"

    def feed(self, byte_str):
        for i in range(len(byte_str)):
            # PY3K: byte_str is a byte array, so byte_str[i] is an int, not a byte
            coding_state = self.coding_sm.next_state(byte_str[i])
            if coding_state == MachineState.ERROR:
                self.logger.debug('%s %s prober hit error at byte %s',
                                  self.charset_name, self.language, i)
                self._state = ProbingState.NOT_ME
                break
            elif coding_state == MachineState.ITS_ME:
                self._state = ProbingState.FOUND_IT
                break
            elif coding_state == MachineState.START:
                char_len = self.coding_sm.get_current_charlen()
                if i == 0:
                    self._last_char[1] = byte_str[0]
                    self.context_analyzer.feed(self._last_char, char_len)
                    self.distribution_analyzer.feed(self._last_char, char_len)
                else:
                    self.context_analyzer.feed(byte_str[i - 1:i + 1],
                                               char_len)
                    self.distribution_analyzer.feed(byte_str[i - 1:i + 1],
                                                    char_len)

        self._last_char[0] = byte_str[-1]

        if self.state == ProbingState.DETECTING:
            if (self.context_analyzer.got_enough_data() and
                    (self.get_confidence() > self.SHORTCUT_THRESHOLD)):
                self._state = ProbingState.FOUND_IT

        return self.state

    def get_confidence(self):
        context_conf = self.context_analyzer.get_confidence()
        distrib_conf = self.distribution_analyzer.get_confidence()
        return max(context_conf, distrib_conf)
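A sketch (mine, not vendored code) of the two-channel design above: the state machine validates byte structure while the distribution and context analyzers score character frequencies. It assumes the base-class state property and Python 3:

from chardet.enums import ProbingState
from chardet.eucjpprober import EUCJPProber

prober = EUCJPProber()
prober.feed(u"こんにちは、世界。".encode("euc_jp"))
# on a short valid sample the prober usually stays in DETECTING, but the
# distribution confidence already reflects how "typical" the characters are
print(prober.state == ProbingState.DETECTING, prober.get_confidence())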
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/euckrfreq.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# (LGPL 2.1 terms identical to the license block in
# codingstatemachine.py above)
######################### END LICENSE BLOCK #########################

# Sampling from about 20M of text material, including literature and
# computer technology.
#
# 128  --> 0.79
# 256  --> 0.92
# 512  --> 0.986
# 1024 --> 0.99944
# 2048 --> 0.99999
#
# Ideal Distribution Ratio = 0.98653 / (1-0.98653) = 73.24
# Random Distribution Ratio = 512 / (2350-512) = 0.279.
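Before the ranking table itself, a minimal sketch (my illustration, not the vendored chardistribution.py code; the names and the exact formula are assumptions) of how a frequency order like the one below typically feeds a confidence estimate:

FREQ_CUTOFF = 512  # ranks below this count as "common" (see the stats above)

def rough_confidence(freq_chars, total_chars, typical_ratio=6.0):
    # freq_chars: observed characters whose frequency order is below
    # FREQ_CUTOFF; the common/uncommon ratio is rescaled by the ratio
    # measured on a large Korean corpus
    if total_chars <= freq_chars:
        return 0.99
    r = freq_chars / ((total_chars - freq_chars) * typical_ratio)
    return min(r, 0.99)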
# # Typical Distribution Ratio EUCKR_TYPICAL_DISTRIBUTION_RATIO = 6.0 EUCKR_TABLE_SIZE = 2352 # Char to FreqOrder table , EUCKR_CHAR_TO_FREQ_ORDER = ( 13, 130, 120,1396, 481,1719,1720, 328, 609, 212,1721, 707, 400, 299,1722, 87, 1397,1723, 104, 536,1117,1203,1724,1267, 685,1268, 508,1725,1726,1727,1728,1398, 1399,1729,1730,1731, 141, 621, 326,1057, 368,1732, 267, 488, 20,1733,1269,1734, 945,1400,1735, 47, 904,1270,1736,1737, 773, 248,1738, 409, 313, 786, 429,1739, 116, 987, 813,1401, 683, 75,1204, 145,1740,1741,1742,1743, 16, 847, 667, 622, 708,1744,1745,1746, 966, 787, 304, 129,1747, 60, 820, 123, 676,1748,1749,1750, 1751, 617,1752, 626,1753,1754,1755,1756, 653,1757,1758,1759,1760,1761,1762, 856, 344,1763,1764,1765,1766, 89, 401, 418, 806, 905, 848,1767,1768,1769, 946,1205, 709,1770,1118,1771, 241,1772,1773,1774,1271,1775, 569,1776, 999,1777,1778,1779, 1780, 337, 751,1058, 28, 628, 254,1781, 177, 906, 270, 349, 891,1079,1782, 19, 1783, 379,1784, 315,1785, 629, 754,1402, 559,1786, 636, 203,1206,1787, 710, 567, 1788, 935, 814,1789,1790,1207, 766, 528,1791,1792,1208,1793,1794,1795,1796,1797, 1403,1798,1799, 533,1059,1404,1405,1156,1406, 936, 884,1080,1800, 351,1801,1802, 1803,1804,1805, 801,1806,1807,1808,1119,1809,1157, 714, 474,1407,1810, 298, 899, 885,1811,1120, 802,1158,1812, 892,1813,1814,1408, 659,1815,1816,1121,1817,1818, 1819,1820,1821,1822, 319,1823, 594, 545,1824, 815, 937,1209,1825,1826, 573,1409, 1022,1827,1210,1828,1829,1830,1831,1832,1833, 556, 722, 807,1122,1060,1834, 697, 1835, 900, 557, 715,1836,1410, 540,1411, 752,1159, 294, 597,1211, 976, 803, 770, 1412,1837,1838, 39, 794,1413, 358,1839, 371, 925,1840, 453, 661, 788, 531, 723, 544,1023,1081, 869, 91,1841, 392, 430, 790, 602,1414, 677,1082, 457,1415,1416, 1842,1843, 475, 327,1024,1417, 795, 121,1844, 733, 403,1418,1845,1846,1847, 300, 119, 711,1212, 627,1848,1272, 207,1849,1850, 796,1213, 382,1851, 519,1852,1083, 893,1853,1854,1855, 367, 809, 487, 671,1856, 663,1857,1858, 956, 471, 306, 857, 1859,1860,1160,1084,1861,1862,1863,1864,1865,1061,1866,1867,1868,1869,1870,1871, 282, 96, 574,1872, 502,1085,1873,1214,1874, 907,1875,1876, 827, 977,1419,1420, 1421, 268,1877,1422,1878,1879,1880, 308,1881, 2, 537,1882,1883,1215,1884,1885, 127, 791,1886,1273,1423,1887, 34, 336, 404, 643,1888, 571, 654, 894, 840,1889, 0, 886,1274, 122, 575, 260, 908, 938,1890,1275, 410, 316,1891,1892, 100,1893, 1894,1123, 48,1161,1124,1025,1895, 633, 901,1276,1896,1897, 115, 816,1898, 317, 1899, 694,1900, 909, 734,1424, 572, 866,1425, 691, 85, 524,1010, 543, 394, 841, 1901,1902,1903,1026,1904,1905,1906,1907,1908,1909, 30, 451, 651, 988, 310,1910, 1911,1426, 810,1216, 93,1912,1913,1277,1217,1914, 858, 759, 45, 58, 181, 610, 269,1915,1916, 131,1062, 551, 443,1000, 821,1427, 957, 895,1086,1917,1918, 375, 1919, 359,1920, 687,1921, 822,1922, 293,1923,1924, 40, 662, 118, 692, 29, 939, 887, 640, 482, 174,1925, 69,1162, 728,1428, 910,1926,1278,1218,1279, 386, 870, 217, 854,1163, 823,1927,1928,1929,1930, 834,1931, 78,1932, 859,1933,1063,1934, 1935,1936,1937, 438,1164, 208, 595,1938,1939,1940,1941,1219,1125,1942, 280, 888, 1429,1430,1220,1431,1943,1944,1945,1946,1947,1280, 150, 510,1432,1948,1949,1950, 1951,1952,1953,1954,1011,1087,1955,1433,1043,1956, 881,1957, 614, 958,1064,1065, 1221,1958, 638,1001, 860, 967, 896,1434, 989, 492, 553,1281,1165,1959,1282,1002, 1283,1222,1960,1961,1962,1963, 36, 383, 228, 753, 247, 454,1964, 876, 678,1965, 1966,1284, 126, 464, 490, 835, 136, 672, 529, 940,1088,1435, 473,1967,1968, 467, 50, 390, 227, 587, 279, 378, 598, 792, 968, 
240, 151, 160, 849, 882,1126,1285, 639,1044, 133, 140, 288, 360, 811, 563,1027, 561, 142, 523,1969,1970,1971, 7, 103, 296, 439, 407, 506, 634, 990,1972,1973,1974,1975, 645,1976,1977,1978,1979, 1980,1981, 236,1982,1436,1983,1984,1089, 192, 828, 618, 518,1166, 333,1127,1985, 818,1223,1986,1987,1988,1989,1990,1991,1992,1993, 342,1128,1286, 746, 842,1994, 1995, 560, 223,1287, 98, 8, 189, 650, 978,1288,1996,1437,1997, 17, 345, 250, 423, 277, 234, 512, 226, 97, 289, 42, 167,1998, 201,1999,2000, 843, 836, 824, 532, 338, 783,1090, 182, 576, 436,1438,1439, 527, 500,2001, 947, 889,2002,2003, 2004,2005, 262, 600, 314, 447,2006, 547,2007, 693, 738,1129,2008, 71,1440, 745, 619, 688,2009, 829,2010,2011, 147,2012, 33, 948,2013,2014, 74, 224,2015, 61, 191, 918, 399, 637,2016,1028,1130, 257, 902,2017,2018,2019,2020,2021,2022,2023, 2024,2025,2026, 837,2027,2028,2029,2030, 179, 874, 591, 52, 724, 246,2031,2032, 2033,2034,1167, 969,2035,1289, 630, 605, 911,1091,1168,2036,2037,2038,1441, 912, 2039, 623,2040,2041, 253,1169,1290,2042,1442, 146, 620, 611, 577, 433,2043,1224, 719,1170, 959, 440, 437, 534, 84, 388, 480,1131, 159, 220, 198, 679,2044,1012, 819,1066,1443, 113,1225, 194, 318,1003,1029,2045,2046,2047,2048,1067,2049,2050, 2051,2052,2053, 59, 913, 112,2054, 632,2055, 455, 144, 739,1291,2056, 273, 681, 499,2057, 448,2058,2059, 760,2060,2061, 970, 384, 169, 245,1132,2062,2063, 414, 1444,2064,2065, 41, 235,2066, 157, 252, 877, 568, 919, 789, 580,2067, 725,2068, 2069,1292,2070,2071,1445,2072,1446,2073,2074, 55, 588, 66,1447, 271,1092,2075, 1226,2076, 960,1013, 372,2077,2078,2079,2080,2081,1293,2082,2083,2084,2085, 850, 2086,2087,2088,2089,2090, 186,2091,1068, 180,2092,2093,2094, 109,1227, 522, 606, 2095, 867,1448,1093, 991,1171, 926, 353,1133,2096, 581,2097,2098,2099,1294,1449, 1450,2100, 596,1172,1014,1228,2101,1451,1295,1173,1229,2102,2103,1296,1134,1452, 949,1135,2104,2105,1094,1453,1454,1455,2106,1095,2107,2108,2109,2110,2111,2112, 2113,2114,2115,2116,2117, 804,2118,2119,1230,1231, 805,1456, 405,1136,2120,2121, 2122,2123,2124, 720, 701,1297, 992,1457, 927,1004,2125,2126,2127,2128,2129,2130, 22, 417,2131, 303,2132, 385,2133, 971, 520, 513,2134,1174, 73,1096, 231, 274, 962,1458, 673,2135,1459,2136, 152,1137,2137,2138,2139,2140,1005,1138,1460,1139, 2141,2142,2143,2144, 11, 374, 844,2145, 154,1232, 46,1461,2146, 838, 830, 721, 1233, 106,2147, 90, 428, 462, 578, 566,1175, 352,2148,2149, 538,1234, 124,1298, 2150,1462, 761, 565,2151, 686,2152, 649,2153, 72, 173,2154, 460, 415,2155,1463, 2156,1235, 305,2157,2158,2159,2160,2161,2162, 579,2163,2164,2165,2166,2167, 747, 2168,2169,2170,2171,1464, 669,2172,2173,2174,2175,2176,1465,2177, 23, 530, 285, 2178, 335, 729,2179, 397,2180,2181,2182,1030,2183,2184, 698,2185,2186, 325,2187, 2188, 369,2189, 799,1097,1015, 348,2190,1069, 680,2191, 851,1466,2192,2193, 10, 2194, 613, 424,2195, 979, 108, 449, 589, 27, 172, 81,1031, 80, 774, 281, 350, 1032, 525, 301, 582,1176,2196, 674,1045,2197,2198,1467, 730, 762,2199,2200,2201, 2202,1468,2203, 993,2204,2205, 266,1070, 963,1140,2206,2207,2208, 664,1098, 972, 2209,2210,2211,1177,1469,1470, 871,2212,2213,2214,2215,2216,1471,2217,2218,2219, 2220,2221,2222,2223,2224,2225,2226,2227,1472,1236,2228,2229,2230,2231,2232,2233, 2234,2235,1299,2236,2237, 200,2238, 477, 373,2239,2240, 731, 825, 777,2241,2242, 2243, 521, 486, 548,2244,2245,2246,1473,1300, 53, 549, 137, 875, 76, 158,2247, 1301,1474, 469, 396,1016, 278, 712,2248, 321, 442, 503, 767, 744, 941,1237,1178, 1475,2249, 82, 178,1141,1179, 973,2250,1302,2251, 297,2252,2253, 570,2254,2255, 
2256, 18, 450, 206,2257, 290, 292,1142,2258, 511, 162, 99, 346, 164, 735,2259, 1476,1477, 4, 554, 343, 798,1099,2260,1100,2261, 43, 171,1303, 139, 215,2262, 2263, 717, 775,2264,1033, 322, 216,2265, 831,2266, 149,2267,1304,2268,2269, 702, 1238, 135, 845, 347, 309,2270, 484,2271, 878, 655, 238,1006,1478,2272, 67,2273, 295,2274,2275, 461,2276, 478, 942, 412,2277,1034,2278,2279,2280, 265,2281, 541, 2282,2283,2284,2285,2286, 70, 852,1071,2287,2288,2289,2290, 21, 56, 509, 117, 432,2291,2292, 331, 980, 552,1101, 148, 284, 105, 393,1180,1239, 755,2293, 187, 2294,1046,1479,2295, 340,2296, 63,1047, 230,2297,2298,1305, 763,1306, 101, 800, 808, 494,2299,2300,2301, 903,2302, 37,1072, 14, 5,2303, 79, 675,2304, 312, 2305,2306,2307,2308,2309,1480, 6,1307,2310,2311,2312, 1, 470, 35, 24, 229, 2313, 695, 210, 86, 778, 15, 784, 592, 779, 32, 77, 855, 964,2314, 259,2315, 501, 380,2316,2317, 83, 981, 153, 689,1308,1481,1482,1483,2318,2319, 716,1484, 2320,2321,2322,2323,2324,2325,1485,2326,2327, 128, 57, 68, 261,1048, 211, 170, 1240, 31,2328, 51, 435, 742,2329,2330,2331, 635,2332, 264, 456,2333,2334,2335, 425,2336,1486, 143, 507, 263, 943,2337, 363, 920,1487, 256,1488,1102, 243, 601, 1489,2338,2339,2340,2341,2342,2343,2344, 861,2345,2346,2347,2348,2349,2350, 395, 2351,1490,1491, 62, 535, 166, 225,2352,2353, 668, 419,1241, 138, 604, 928,2354, 1181,2355,1492,1493,2356,2357,2358,1143,2359, 696,2360, 387, 307,1309, 682, 476, 2361,2362, 332, 12, 222, 156,2363, 232,2364, 641, 276, 656, 517,1494,1495,1035, 416, 736,1496,2365,1017, 586,2366,2367,2368,1497,2369, 242,2370,2371,2372,1498, 2373, 965, 713,2374,2375,2376,2377, 740, 982,1499, 944,1500,1007,2378,2379,1310, 1501,2380,2381,2382, 785, 329,2383,2384,1502,2385,2386,2387, 932,2388,1503,2389, 2390,2391,2392,1242,2393,2394,2395,2396,2397, 994, 950,2398,2399,2400,2401,1504, 1311,2402,2403,2404,2405,1049, 749,2406,2407, 853, 718,1144,1312,2408,1182,1505, 2409,2410, 255, 516, 479, 564, 550, 214,1506,1507,1313, 413, 239, 444, 339,1145, 1036,1508,1509,1314,1037,1510,1315,2411,1511,2412,2413,2414, 176, 703, 497, 624, 593, 921, 302,2415, 341, 165,1103,1512,2416,1513,2417,2418,2419, 376,2420, 700, 2421,2422,2423, 258, 768,1316,2424,1183,2425, 995, 608,2426,2427,2428,2429, 221, 2430,2431,2432,2433,2434,2435,2436,2437, 195, 323, 726, 188, 897, 983,1317, 377, 644,1050, 879,2438, 452,2439,2440,2441,2442,2443,2444, 914,2445,2446,2447,2448, 915, 489,2449,1514,1184,2450,2451, 515, 64, 427, 495,2452, 583,2453, 483, 485, 1038, 562, 213,1515, 748, 666,2454,2455,2456,2457, 334,2458, 780, 996,1008, 705, 1243,2459,2460,2461,2462,2463, 114,2464, 493,1146, 366, 163,1516, 961,1104,2465, 291,2466,1318,1105,2467,1517, 365,2468, 355, 951,1244,2469,1319,2470, 631,2471, 2472, 218,1320, 364, 320, 756,1518,1519,1321,1520,1322,2473,2474,2475,2476, 997, 2477,2478,2479,2480, 665,1185,2481, 916,1521,2482,2483,2484, 584, 684,2485,2486, 797,2487,1051,1186,2488,2489,2490,1522,2491,2492, 370,2493,1039,1187, 65,2494, 434, 205, 463,1188,2495, 125, 812, 391, 402, 826, 699, 286, 398, 155, 781, 771, 585,2496, 590, 505,1073,2497, 599, 244, 219, 917,1018, 952, 646,1523,2498,1323, 2499,2500, 49, 984, 354, 741,2501, 625,2502,1324,2503,1019, 190, 357, 757, 491, 95, 782, 868,2504,2505,2506,2507,2508,2509, 134,1524,1074, 422,1525, 898,2510, 161,2511,2512,2513,2514, 769,2515,1526,2516,2517, 411,1325,2518, 472,1527,2519, 2520,2521,2522,2523,2524, 985,2525,2526,2527,2528,2529,2530, 764,2531,1245,2532, 2533, 25, 204, 311,2534, 496,2535,1052,2536,2537,2538,2539,2540,2541,2542, 199, 704, 504, 468, 758, 657,1528, 196, 44, 
839,1246, 272, 750,2543, 765, 862,2544, 2545,1326,2546, 132, 615, 933,2547, 732,2548,2549,2550,1189,1529,2551, 283,1247, 1053, 607, 929,2552,2553,2554, 930, 183, 872, 616,1040,1147,2555,1148,1020, 441, 249,1075,2556,2557,2558, 466, 743,2559,2560,2561, 92, 514, 426, 420, 526,2562, 2563,2564,2565,2566,2567,2568, 185,2569,2570,2571,2572, 776,1530, 658,2573, 362, 2574, 361, 922,1076, 793,2575,2576,2577,2578,2579,2580,1531, 251,2581,2582,2583, 2584,1532, 54, 612, 237,1327,2585,2586, 275, 408, 647, 111,2587,1533,1106, 465, 3, 458, 9, 38,2588, 107, 110, 890, 209, 26, 737, 498,2589,1534,2590, 431, 202, 88,1535, 356, 287,1107, 660,1149,2591, 381,1536, 986,1150, 445,1248,1151, 974,2592,2593, 846,2594, 446, 953, 184,1249,1250, 727,2595, 923, 193, 883,2596, 2597,2598, 102, 324, 539, 817,2599, 421,1041,2600, 832,2601, 94, 175, 197, 406, 2602, 459,2603,2604,2605,2606,2607, 330, 555,2608,2609,2610, 706,1108, 389,2611, 2612,2613,2614, 233,2615, 833, 558, 931, 954,1251,2616,2617,1537, 546,2618,2619, 1009,2620,2621,2622,1538, 690,1328,2623, 955,2624,1539,2625,2626, 772,2627,2628, 2629,2630,2631, 924, 648, 863, 603,2632,2633, 934,1540, 864, 865,2634, 642,1042, 670,1190,2635,2636,2637,2638, 168,2639, 652, 873, 542,1054,1541,2640,2641,2642, # 512, 256 ) ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/euckrprober.py�������������������0000644�0001750�0001750�00000003324�00000000000�030776� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������######################## BEGIN LICENSE BLOCK ######################## # The Original Code is mozilla.org code. # # The Initial Developer of the Original Code is # Netscape Communications Corporation. # Portions created by the Initial Developer are Copyright (C) 1998 # the Initial Developer. All Rights Reserved. 
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import EUCKRDistributionAnalysis
from .mbcssm import EUCKR_SM_MODEL


class EUCKRProber(MultiByteCharSetProber):
    def __init__(self):
        super(EUCKRProber, self).__init__()
        self.coding_sm = CodingStateMachine(EUCKR_SM_MODEL)
        self.distribution_analyzer = EUCKRDistributionAnalysis()
        self.reset()

    @property
    def charset_name(self):
        return "EUC-KR"

    @property
    def language(self):
        return "Korean"

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/euctwfreq.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
# # Contributor(s): # Mark Pilgrim - port to Python # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # EUCTW frequency table # Converted from big5 work # by Taiwan's Mandarin Promotion Council # <http:#www.edu.tw:81/mandr/> # 128 --> 0.42261 # 256 --> 0.57851 # 512 --> 0.74851 # 1024 --> 0.89384 # 2048 --> 0.97583 # # Idea Distribution Ratio = 0.74851/(1-0.74851) =2.98 # Random Distribution Ration = 512/(5401-512)=0.105 # # Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR EUCTW_TYPICAL_DISTRIBUTION_RATIO = 0.75 # Char to FreqOrder table , EUCTW_TABLE_SIZE = 5376 EUCTW_CHAR_TO_FREQ_ORDER = ( 1,1800,1506, 255,1431, 198, 9, 82, 6,7310, 177, 202,3615,1256,2808, 110, # 2742 3735, 33,3241, 261, 76, 44,2113, 16,2931,2184,1176, 659,3868, 26,3404,2643, # 2758 1198,3869,3313,4060, 410,2211, 302, 590, 361,1963, 8, 204, 58,4296,7311,1931, # 2774 63,7312,7313, 317,1614, 75, 222, 159,4061,2412,1480,7314,3500,3068, 224,2809, # 2790 3616, 3, 10,3870,1471, 29,2774,1135,2852,1939, 873, 130,3242,1123, 312,7315, # 2806 4297,2051, 507, 252, 682,7316, 142,1914, 124, 206,2932, 34,3501,3173, 64, 604, # 2822 7317,2494,1976,1977, 155,1990, 645, 641,1606,7318,3405, 337, 72, 406,7319, 80, # 2838 630, 238,3174,1509, 263, 939,1092,2644, 756,1440,1094,3406, 449, 69,2969, 591, # 2854 179,2095, 471, 115,2034,1843, 60, 50,2970, 134, 806,1868, 734,2035,3407, 180, # 2870 995,1607, 156, 537,2893, 688,7320, 319,1305, 779,2144, 514,2374, 298,4298, 359, # 2886 2495, 90,2707,1338, 663, 11, 906,1099,2545, 20,2436, 182, 532,1716,7321, 732, # 2902 1376,4062,1311,1420,3175, 25,2312,1056, 113, 399, 382,1949, 242,3408,2467, 529, # 2918 3243, 475,1447,3617,7322, 117, 21, 656, 810,1297,2295,2329,3502,7323, 126,4063, # 2934 706, 456, 150, 613,4299, 71,1118,2036,4064, 145,3069, 85, 835, 486,2114,1246, # 2950 1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,7324,2127,2354, 347,3736, 221, # 2966 3503,3110,7325,1955,1153,4065, 83, 296,1199,3070, 192, 624, 93,7326, 822,1897, # 2982 2810,3111, 795,2064, 991,1554,1542,1592, 27, 43,2853, 859, 139,1456, 860,4300, # 2998 437, 712,3871, 164,2392,3112, 695, 211,3017,2096, 195,3872,1608,3504,3505,3618, # 3014 3873, 234, 811,2971,2097,3874,2229,1441,3506,1615,2375, 668,2076,1638, 305, 228, # 3030 1664,4301, 467, 415,7327, 262,2098,1593, 239, 108, 300, 200,1033, 512,1247,2077, # 3046 7328,7329,2173,3176,3619,2673, 593, 845,1062,3244, 88,1723,2037,3875,1950, 212, # 3062 266, 152, 149, 468,1898,4066,4302, 77, 187,7330,3018, 37, 5,2972,7331,3876, # 3078 7332,7333, 39,2517,4303,2894,3177,2078, 55, 148, 74,4304, 545, 483,1474,1029, # 3094 1665, 217,1869,1531,3113,1104,2645,4067, 24, 172,3507, 900,3877,3508,3509,4305, # 3110 32,1408,2811,1312, 329, 487,2355,2247,2708, 784,2674, 4,3019,3314,1427,1788, # 3126 188, 109, 499,7334,3620,1717,1789, 
888,1217,3020,4306,7335,3510,7336,3315,1520, # 3142 3621,3878, 196,1034, 775,7337,7338, 929,1815, 249, 439, 38,7339,1063,7340, 794, # 3158 3879,1435,2296, 46, 178,3245,2065,7341,2376,7342, 214,1709,4307, 804, 35, 707, # 3174 324,3622,1601,2546, 140, 459,4068,7343,7344,1365, 839, 272, 978,2257,2572,3409, # 3190 2128,1363,3623,1423, 697, 100,3071, 48, 70,1231, 495,3114,2193,7345,1294,7346, # 3206 2079, 462, 586,1042,3246, 853, 256, 988, 185,2377,3410,1698, 434,1084,7347,3411, # 3222 314,2615,2775,4308,2330,2331, 569,2280, 637,1816,2518, 757,1162,1878,1616,3412, # 3238 287,1577,2115, 768,4309,1671,2854,3511,2519,1321,3737, 909,2413,7348,4069, 933, # 3254 3738,7349,2052,2356,1222,4310, 765,2414,1322, 786,4311,7350,1919,1462,1677,2895, # 3270 1699,7351,4312,1424,2437,3115,3624,2590,3316,1774,1940,3413,3880,4070, 309,1369, # 3286 1130,2812, 364,2230,1653,1299,3881,3512,3882,3883,2646, 525,1085,3021, 902,2000, # 3302 1475, 964,4313, 421,1844,1415,1057,2281, 940,1364,3116, 376,4314,4315,1381, 7, # 3318 2520, 983,2378, 336,1710,2675,1845, 321,3414, 559,1131,3022,2742,1808,1132,1313, # 3334 265,1481,1857,7352, 352,1203,2813,3247, 167,1089, 420,2814, 776, 792,1724,3513, # 3350 4071,2438,3248,7353,4072,7354, 446, 229, 333,2743, 901,3739,1200,1557,4316,2647, # 3366 1920, 395,2744,2676,3740,4073,1835, 125, 916,3178,2616,4317,7355,7356,3741,7357, # 3382 7358,7359,4318,3117,3625,1133,2547,1757,3415,1510,2313,1409,3514,7360,2145, 438, # 3398 2591,2896,2379,3317,1068, 958,3023, 461, 311,2855,2677,4074,1915,3179,4075,1978, # 3414 383, 750,2745,2617,4076, 274, 539, 385,1278,1442,7361,1154,1964, 384, 561, 210, # 3430 98,1295,2548,3515,7362,1711,2415,1482,3416,3884,2897,1257, 129,7363,3742, 642, # 3446 523,2776,2777,2648,7364, 141,2231,1333, 68, 176, 441, 876, 907,4077, 603,2592, # 3462 710, 171,3417, 404, 549, 18,3118,2393,1410,3626,1666,7365,3516,4319,2898,4320, # 3478 7366,2973, 368,7367, 146, 366, 99, 871,3627,1543, 748, 807,1586,1185, 22,2258, # 3494 379,3743,3180,7368,3181, 505,1941,2618,1991,1382,2314,7369, 380,2357, 218, 702, # 3510 1817,1248,3418,3024,3517,3318,3249,7370,2974,3628, 930,3250,3744,7371, 59,7372, # 3526 585, 601,4078, 497,3419,1112,1314,4321,1801,7373,1223,1472,2174,7374, 749,1836, # 3542 690,1899,3745,1772,3885,1476, 429,1043,1790,2232,2116, 917,4079, 447,1086,1629, # 3558 7375, 556,7376,7377,2020,1654, 844,1090, 105, 550, 966,1758,2815,1008,1782, 686, # 3574 1095,7378,2282, 793,1602,7379,3518,2593,4322,4080,2933,2297,4323,3746, 980,2496, # 3590 544, 353, 527,4324, 908,2678,2899,7380, 381,2619,1942,1348,7381,1341,1252, 560, # 3606 3072,7382,3420,2856,7383,2053, 973, 886,2080, 143,4325,7384,7385, 157,3886, 496, # 3622 4081, 57, 840, 540,2038,4326,4327,3421,2117,1445, 970,2259,1748,1965,2081,4082, # 3638 3119,1234,1775,3251,2816,3629, 773,1206,2129,1066,2039,1326,3887,1738,1725,4083, # 3654 279,3120, 51,1544,2594, 423,1578,2130,2066, 173,4328,1879,7386,7387,1583, 264, # 3670 610,3630,4329,2439, 280, 154,7388,7389,7390,1739, 338,1282,3073, 693,2857,1411, # 3686 1074,3747,2440,7391,4330,7392,7393,1240, 952,2394,7394,2900,1538,2679, 685,1483, # 3702 4084,2468,1436, 953,4085,2054,4331, 671,2395, 79,4086,2441,3252, 608, 567,2680, # 3718 3422,4087,4088,1691, 393,1261,1791,2396,7395,4332,7396,7397,7398,7399,1383,1672, # 3734 3748,3182,1464, 522,1119, 661,1150, 216, 675,4333,3888,1432,3519, 609,4334,2681, # 3750 2397,7400,7401,7402,4089,3025, 0,7403,2469, 315, 231,2442, 301,3319,4335,2380, # 3766 7404, 233,4090,3631,1818,4336,4337,7405, 96,1776,1315,2082,7406, 257,7407,1809, # 3782 
3632,2709,1139,1819,4091,2021,1124,2163,2778,1777,2649,7408,3074, 363,1655,3183, # 3798 7409,2975,7410,7411,7412,3889,1567,3890, 718, 103,3184, 849,1443, 341,3320,2934, # 3814 1484,7413,1712, 127, 67, 339,4092,2398, 679,1412, 821,7414,7415, 834, 738, 351, # 3830 2976,2146, 846, 235,1497,1880, 418,1992,3749,2710, 186,1100,2147,2746,3520,1545, # 3846 1355,2935,2858,1377, 583,3891,4093,2573,2977,7416,1298,3633,1078,2549,3634,2358, # 3862 78,3750,3751, 267,1289,2099,2001,1594,4094, 348, 369,1274,2194,2175,1837,4338, # 3878 1820,2817,3635,2747,2283,2002,4339,2936,2748, 144,3321, 882,4340,3892,2749,3423, # 3894 4341,2901,7417,4095,1726, 320,7418,3893,3026, 788,2978,7419,2818,1773,1327,2859, # 3910 3894,2819,7420,1306,4342,2003,1700,3752,3521,2359,2650, 787,2022, 506, 824,3636, # 3926 534, 323,4343,1044,3322,2023,1900, 946,3424,7421,1778,1500,1678,7422,1881,4344, # 3942 165, 243,4345,3637,2521, 123, 683,4096, 764,4346, 36,3895,1792, 589,2902, 816, # 3958 626,1667,3027,2233,1639,1555,1622,3753,3896,7423,3897,2860,1370,1228,1932, 891, # 3974 2083,2903, 304,4097,7424, 292,2979,2711,3522, 691,2100,4098,1115,4347, 118, 662, # 3990 7425, 611,1156, 854,2381,1316,2861, 2, 386, 515,2904,7426,7427,3253, 868,2234, # 4006 1486, 855,2651, 785,2212,3028,7428,1040,3185,3523,7429,3121, 448,7430,1525,7431, # 4022 2164,4348,7432,3754,7433,4099,2820,3524,3122, 503, 818,3898,3123,1568, 814, 676, # 4038 1444, 306,1749,7434,3755,1416,1030, 197,1428, 805,2821,1501,4349,7435,7436,7437, # 4054 1993,7438,4350,7439,7440,2195, 13,2779,3638,2980,3124,1229,1916,7441,3756,2131, # 4070 7442,4100,4351,2399,3525,7443,2213,1511,1727,1120,7444,7445, 646,3757,2443, 307, # 4086 7446,7447,1595,3186,7448,7449,7450,3639,1113,1356,3899,1465,2522,2523,7451, 519, # 4102 7452, 128,2132, 92,2284,1979,7453,3900,1512, 342,3125,2196,7454,2780,2214,1980, # 4118 3323,7455, 290,1656,1317, 789, 827,2360,7456,3758,4352, 562, 581,3901,7457, 401, # 4134 4353,2248, 94,4354,1399,2781,7458,1463,2024,4355,3187,1943,7459, 828,1105,4101, # 4150 1262,1394,7460,4102, 605,4356,7461,1783,2862,7462,2822, 819,2101, 578,2197,2937, # 4166 7463,1502, 436,3254,4103,3255,2823,3902,2905,3425,3426,7464,2712,2315,7465,7466, # 4182 2332,2067, 23,4357, 193, 826,3759,2102, 699,1630,4104,3075, 390,1793,1064,3526, # 4198 7467,1579,3076,3077,1400,7468,4105,1838,1640,2863,7469,4358,4359, 137,4106, 598, # 4214 3078,1966, 780, 104, 974,2938,7470, 278, 899, 253, 402, 572, 504, 493,1339,7471, # 4230 3903,1275,4360,2574,2550,7472,3640,3029,3079,2249, 565,1334,2713, 863, 41,7473, # 4246 7474,4361,7475,1657,2333, 19, 463,2750,4107, 606,7476,2981,3256,1087,2084,1323, # 4262 2652,2982,7477,1631,1623,1750,4108,2682,7478,2864, 791,2714,2653,2334, 232,2416, # 4278 7479,2983,1498,7480,2654,2620, 755,1366,3641,3257,3126,2025,1609, 119,1917,3427, # 4294 862,1026,4109,7481,3904,3760,4362,3905,4363,2260,1951,2470,7482,1125, 817,4110, # 4310 4111,3906,1513,1766,2040,1487,4112,3030,3258,2824,3761,3127,7483,7484,1507,7485, # 4326 2683, 733, 40,1632,1106,2865, 345,4113, 841,2524, 230,4364,2984,1846,3259,3428, # 4342 7486,1263, 986,3429,7487, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562,3907, # 4358 3908,2939, 967,2751,2655,1349, 592,2133,1692,3324,2985,1994,4114,1679,3909,1901, # 4374 2185,7488, 739,3642,2715,1296,1290,7489,4115,2198,2199,1921,1563,2595,2551,1870, # 4390 2752,2986,7490, 435,7491, 343,1108, 596, 17,1751,4365,2235,3430,3643,7492,4366, # 4406 294,3527,2940,1693, 477, 979, 281,2041,3528, 643,2042,3644,2621,2782,2261,1031, # 4422 2335,2134,2298,3529,4367, 
367,1249,2552,7493,3530,7494,4368,1283,3325,2004, 240, # 4438 1762,3326,4369,4370, 836,1069,3128, 474,7495,2148,2525, 268,3531,7496,3188,1521, # 4454 1284,7497,1658,1546,4116,7498,3532,3533,7499,4117,3327,2684,1685,4118, 961,1673, # 4470 2622, 190,2005,2200,3762,4371,4372,7500, 570,2497,3645,1490,7501,4373,2623,3260, # 4486 1956,4374, 584,1514, 396,1045,1944,7502,4375,1967,2444,7503,7504,4376,3910, 619, # 4502 7505,3129,3261, 215,2006,2783,2553,3189,4377,3190,4378, 763,4119,3763,4379,7506, # 4518 7507,1957,1767,2941,3328,3646,1174, 452,1477,4380,3329,3130,7508,2825,1253,2382, # 4534 2186,1091,2285,4120, 492,7509, 638,1169,1824,2135,1752,3911, 648, 926,1021,1324, # 4550 4381, 520,4382, 997, 847,1007, 892,4383,3764,2262,1871,3647,7510,2400,1784,4384, # 4566 1952,2942,3080,3191,1728,4121,2043,3648,4385,2007,1701,3131,1551, 30,2263,4122, # 4582 7511,2026,4386,3534,7512, 501,7513,4123, 594,3431,2165,1821,3535,3432,3536,3192, # 4598 829,2826,4124,7514,1680,3132,1225,4125,7515,3262,4387,4126,3133,2336,7516,4388, # 4614 4127,7517,3912,3913,7518,1847,2383,2596,3330,7519,4389, 374,3914, 652,4128,4129, # 4630 375,1140, 798,7520,7521,7522,2361,4390,2264, 546,1659, 138,3031,2445,4391,7523, # 4646 2250, 612,1848, 910, 796,3765,1740,1371, 825,3766,3767,7524,2906,2554,7525, 692, # 4662 444,3032,2624, 801,4392,4130,7526,1491, 244,1053,3033,4131,4132, 340,7527,3915, # 4678 1041,2987, 293,1168, 87,1357,7528,1539, 959,7529,2236, 721, 694,4133,3768, 219, # 4694 1478, 644,1417,3331,2656,1413,1401,1335,1389,3916,7530,7531,2988,2362,3134,1825, # 4710 730,1515, 184,2827, 66,4393,7532,1660,2943, 246,3332, 378,1457, 226,3433, 975, # 4726 3917,2944,1264,3537, 674, 696,7533, 163,7534,1141,2417,2166, 713,3538,3333,4394, # 4742 3918,7535,7536,1186, 15,7537,1079,1070,7538,1522,3193,3539, 276,1050,2716, 758, # 4758 1126, 653,2945,3263,7539,2337, 889,3540,3919,3081,2989, 903,1250,4395,3920,3434, # 4774 3541,1342,1681,1718, 766,3264, 286, 89,2946,3649,7540,1713,7541,2597,3334,2990, # 4790 7542,2947,2215,3194,2866,7543,4396,2498,2526, 181, 387,1075,3921, 731,2187,3335, # 4806 7544,3265, 310, 313,3435,2299, 770,4134, 54,3034, 189,4397,3082,3769,3922,7545, # 4822 1230,1617,1849, 355,3542,4135,4398,3336, 111,4136,3650,1350,3135,3436,3035,4137, # 4838 2149,3266,3543,7546,2784,3923,3924,2991, 722,2008,7547,1071, 247,1207,2338,2471, # 4854 1378,4399,2009, 864,1437,1214,4400, 373,3770,1142,2216, 667,4401, 442,2753,2555, # 4870 3771,3925,1968,4138,3267,1839, 837, 170,1107, 934,1336,1882,7548,7549,2118,4139, # 4886 2828, 743,1569,7550,4402,4140, 582,2384,1418,3437,7551,1802,7552, 357,1395,1729, # 4902 3651,3268,2418,1564,2237,7553,3083,3772,1633,4403,1114,2085,4141,1532,7554, 482, # 4918 2446,4404,7555,7556,1492, 833,1466,7557,2717,3544,1641,2829,7558,1526,1272,3652, # 4934 4142,1686,1794, 416,2556,1902,1953,1803,7559,3773,2785,3774,1159,2316,7560,2867, # 4950 4405,1610,1584,3036,2419,2754, 443,3269,1163,3136,7561,7562,3926,7563,4143,2499, # 4966 3037,4406,3927,3137,2103,1647,3545,2010,1872,4144,7564,4145, 431,3438,7565, 250, # 4982 97, 81,4146,7566,1648,1850,1558, 160, 848,7567, 866, 740,1694,7568,2201,2830, # 4998 3195,4147,4407,3653,1687, 950,2472, 426, 469,3196,3654,3655,3928,7569,7570,1188, # 5014 424,1995, 861,3546,4148,3775,2202,2685, 168,1235,3547,4149,7571,2086,1674,4408, # 5030 3337,3270, 220,2557,1009,7572,3776, 670,2992, 332,1208, 717,7573,7574,3548,2447, # 5046 3929,3338,7575, 513,7576,1209,2868,3339,3138,4409,1080,7577,7578,7579,7580,2527, # 5062 3656,3549, 
815,1587,3930,3931,7581,3550,3439,3777,1254,4410,1328,3038,1390,3932, # 5078 1741,3933,3778,3934,7582, 236,3779,2448,3271,7583,7584,3657,3780,1273,3781,4411, # 5094 7585, 308,7586,4412, 245,4413,1851,2473,1307,2575, 430, 715,2136,2449,7587, 270, # 5110 199,2869,3935,7588,3551,2718,1753, 761,1754, 725,1661,1840,4414,3440,3658,7589, # 5126 7590, 587, 14,3272, 227,2598, 326, 480,2265, 943,2755,3552, 291, 650,1883,7591, # 5142 1702,1226, 102,1547, 62,3441, 904,4415,3442,1164,4150,7592,7593,1224,1548,2756, # 5158 391, 498,1493,7594,1386,1419,7595,2055,1177,4416, 813, 880,1081,2363, 566,1145, # 5174 4417,2286,1001,1035,2558,2599,2238, 394,1286,7596,7597,2068,7598, 86,1494,1730, # 5190 3936, 491,1588, 745, 897,2948, 843,3340,3937,2757,2870,3273,1768, 998,2217,2069, # 5206 397,1826,1195,1969,3659,2993,3341, 284,7599,3782,2500,2137,2119,1903,7600,3938, # 5222 2150,3939,4151,1036,3443,1904, 114,2559,4152, 209,1527,7601,7602,2949,2831,2625, # 5238 2385,2719,3139, 812,2560,7603,3274,7604,1559, 737,1884,3660,1210, 885, 28,2686, # 5254 3553,3783,7605,4153,1004,1779,4418,7606, 346,1981,2218,2687,4419,3784,1742, 797, # 5270 1642,3940,1933,1072,1384,2151, 896,3941,3275,3661,3197,2871,3554,7607,2561,1958, # 5286 4420,2450,1785,7608,7609,7610,3942,4154,1005,1308,3662,4155,2720,4421,4422,1528, # 5302 2600, 161,1178,4156,1982, 987,4423,1101,4157, 631,3943,1157,3198,2420,1343,1241, # 5318 1016,2239,2562, 372, 877,2339,2501,1160, 555,1934, 911,3944,7611, 466,1170, 169, # 5334 1051,2907,2688,3663,2474,2994,1182,2011,2563,1251,2626,7612, 992,2340,3444,1540, # 5350 2721,1201,2070,2401,1996,2475,7613,4424, 528,1922,2188,1503,1873,1570,2364,3342, # 5366 3276,7614, 557,1073,7615,1827,3445,2087,2266,3140,3039,3084, 767,3085,2786,4425, # 5382 1006,4158,4426,2341,1267,2176,3664,3199, 778,3945,3200,2722,1597,2657,7616,4427, # 5398 7617,3446,7618,7619,7620,3277,2689,1433,3278, 131, 95,1504,3946, 723,4159,3141, # 5414 1841,3555,2758,2189,3947,2027,2104,3665,7621,2995,3948,1218,7622,3343,3201,3949, # 5430 4160,2576, 248,1634,3785, 912,7623,2832,3666,3040,3786, 654, 53,7624,2996,7625, # 5446 1688,4428, 777,3447,1032,3950,1425,7626, 191, 820,2120,2833, 971,4429, 931,3202, # 5462 135, 664, 783,3787,1997, 772,2908,1935,3951,3788,4430,2909,3203, 282,2723, 640, # 5478 1372,3448,1127, 922, 325,3344,7627,7628, 711,2044,7629,7630,3952,2219,2787,1936, # 5494 3953,3345,2220,2251,3789,2300,7631,4431,3790,1258,3279,3954,3204,2138,2950,3955, # 5510 3956,7632,2221, 258,3205,4432, 101,1227,7633,3280,1755,7634,1391,3281,7635,2910, # 5526 2056, 893,7636,7637,7638,1402,4161,2342,7639,7640,3206,3556,7641,7642, 878,1325, # 5542 1780,2788,4433, 259,1385,2577, 744,1183,2267,4434,7643,3957,2502,7644, 684,1024, # 5558 4162,7645, 472,3557,3449,1165,3282,3958,3959, 322,2152, 881, 455,1695,1152,1340, # 5574 660, 554,2153,4435,1058,4436,4163, 830,1065,3346,3960,4437,1923,7646,1703,1918, # 5590 7647, 932,2268, 122,7648,4438, 947, 677,7649,3791,2627, 297,1905,1924,2269,4439, # 5606 2317,3283,7650,7651,4164,7652,4165, 84,4166, 112, 989,7653, 547,1059,3961, 701, # 5622 3558,1019,7654,4167,7655,3450, 942, 639, 457,2301,2451, 993,2951, 407, 851, 494, # 5638 4440,3347, 927,7656,1237,7657,2421,3348, 573,4168, 680, 921,2911,1279,1874, 285, # 5654 790,1448,1983, 719,2167,7658,7659,4441,3962,3963,1649,7660,1541, 563,7661,1077, # 5670 7662,3349,3041,3451, 511,2997,3964,3965,3667,3966,1268,2564,3350,3207,4442,4443, # 5686 7663, 535,1048,1276,1189,2912,2028,3142,1438,1373,2834,2952,1134,2012,7664,4169, # 5702 1238,2578,3086,1259,7665, 
700,7666,2953,3143,3668,4170,7667,4171,1146,1875,1906, # 5718 4444,2601,3967, 781,2422, 132,1589, 203, 147, 273,2789,2402, 898,1786,2154,3968, # 5734 3969,7668,3792,2790,7669,7670,4445,4446,7671,3208,7672,1635,3793, 965,7673,1804, # 5750 2690,1516,3559,1121,1082,1329,3284,3970,1449,3794, 65,1128,2835,2913,2759,1590, # 5766 3795,7674,7675, 12,2658, 45, 976,2579,3144,4447, 517,2528,1013,1037,3209,7676, # 5782 3796,2836,7677,3797,7678,3452,7679,2602, 614,1998,2318,3798,3087,2724,2628,7680, # 5798 2580,4172, 599,1269,7681,1810,3669,7682,2691,3088, 759,1060, 489,1805,3351,3285, # 5814 1358,7683,7684,2386,1387,1215,2629,2252, 490,7685,7686,4173,1759,2387,2343,7687, # 5830 4448,3799,1907,3971,2630,1806,3210,4449,3453,3286,2760,2344, 874,7688,7689,3454, # 5846 3670,1858, 91,2914,3671,3042,3800,4450,7690,3145,3972,2659,7691,3455,1202,1403, # 5862 3801,2954,2529,1517,2503,4451,3456,2504,7692,4452,7693,2692,1885,1495,1731,3973, # 5878 2365,4453,7694,2029,7695,7696,3974,2693,1216, 237,2581,4174,2319,3975,3802,4454, # 5894 4455,2694,3560,3457, 445,4456,7697,7698,7699,7700,2761, 61,3976,3672,1822,3977, # 5910 7701, 687,2045, 935, 925, 405,2660, 703,1096,1859,2725,4457,3978,1876,1367,2695, # 5926 3352, 918,2105,1781,2476, 334,3287,1611,1093,4458, 564,3146,3458,3673,3353, 945, # 5942 2631,2057,4459,7702,1925, 872,4175,7703,3459,2696,3089, 349,4176,3674,3979,4460, # 5958 3803,4177,3675,2155,3980,4461,4462,4178,4463,2403,2046, 782,3981, 400, 251,4179, # 5974 1624,7704,7705, 277,3676, 299,1265, 476,1191,3804,2121,4180,4181,1109, 205,7706, # 5990 2582,1000,2156,3561,1860,7707,7708,7709,4464,7710,4465,2565, 107,2477,2157,3982, # 6006 3460,3147,7711,1533, 541,1301, 158, 753,4182,2872,3562,7712,1696, 370,1088,4183, # 6022 4466,3563, 579, 327, 440, 162,2240, 269,1937,1374,3461, 968,3043, 56,1396,3090, # 6038 2106,3288,3354,7713,1926,2158,4467,2998,7714,3564,7715,7716,3677,4468,2478,7717, # 6054 2791,7718,1650,4469,7719,2603,7720,7721,3983,2661,3355,1149,3356,3984,3805,3985, # 6070 7722,1076, 49,7723, 951,3211,3289,3290, 450,2837, 920,7724,1811,2792,2366,4184, # 6086 1908,1138,2367,3806,3462,7725,3212,4470,1909,1147,1518,2423,4471,3807,7726,4472, # 6102 2388,2604, 260,1795,3213,7727,7728,3808,3291, 708,7729,3565,1704,7730,3566,1351, # 6118 1618,3357,2999,1886, 944,4185,3358,4186,3044,3359,4187,7731,3678, 422, 413,1714, # 6134 3292, 500,2058,2345,4188,2479,7732,1344,1910, 954,7733,1668,7734,7735,3986,2404, # 6150 4189,3567,3809,4190,7736,2302,1318,2505,3091, 133,3092,2873,4473, 629, 31,2838, # 6166 2697,3810,4474, 850, 949,4475,3987,2955,1732,2088,4191,1496,1852,7737,3988, 620, # 6182 3214, 981,1242,3679,3360,1619,3680,1643,3293,2139,2452,1970,1719,3463,2168,7738, # 6198 3215,7739,7740,3361,1828,7741,1277,4476,1565,2047,7742,1636,3568,3093,7743, 869, # 6214 2839, 655,3811,3812,3094,3989,3000,3813,1310,3569,4477,7744,7745,7746,1733, 558, # 6230 4478,3681, 335,1549,3045,1756,4192,3682,1945,3464,1829,1291,1192, 470,2726,2107, # 6246 2793, 913,1054,3990,7747,1027,7748,3046,3991,4479, 982,2662,3362,3148,3465,3216, # 6262 3217,1946,2794,7749, 571,4480,7750,1830,7751,3570,2583,1523,2424,7752,2089, 984, # 6278 4481,3683,1959,7753,3684, 852, 923,2795,3466,3685, 969,1519, 999,2048,2320,1705, # 6294 7754,3095, 615,1662, 151, 597,3992,2405,2321,1049, 275,4482,3686,4193, 568,3687, # 6310 3571,2480,4194,3688,7755,2425,2270, 409,3218,7756,1566,2874,3467,1002, 769,2840, # 6326 194,2090,3149,3689,2222,3294,4195, 628,1505,7757,7758,1763,2177,3001,3993, 521, # 6342 1161,2584,1787,2203,2406,4483,3994,1625,4196,4197, 412, 42,3096, 
464,7759,2632, # 6358 4484,3363,1760,1571,2875,3468,2530,1219,2204,3814,2633,2140,2368,4485,4486,3295, # 6374 1651,3364,3572,7760,7761,3573,2481,3469,7762,3690,7763,7764,2271,2091, 460,7765, # 6390 4487,7766,3002, 962, 588,3574, 289,3219,2634,1116, 52,7767,3047,1796,7768,7769, # 6406 7770,1467,7771,1598,1143,3691,4198,1984,1734,1067,4488,1280,3365, 465,4489,1572, # 6422 510,7772,1927,2241,1812,1644,3575,7773,4490,3692,7774,7775,2663,1573,1534,7776, # 6438 7777,4199, 536,1807,1761,3470,3815,3150,2635,7778,7779,7780,4491,3471,2915,1911, # 6454 2796,7781,3296,1122, 377,3220,7782, 360,7783,7784,4200,1529, 551,7785,2059,3693, # 6470 1769,2426,7786,2916,4201,3297,3097,2322,2108,2030,4492,1404, 136,1468,1479, 672, # 6486 1171,3221,2303, 271,3151,7787,2762,7788,2049, 678,2727, 865,1947,4493,7789,2013, # 6502 3995,2956,7790,2728,2223,1397,3048,3694,4494,4495,1735,2917,3366,3576,7791,3816, # 6518 509,2841,2453,2876,3817,7792,7793,3152,3153,4496,4202,2531,4497,2304,1166,1010, # 6534 552, 681,1887,7794,7795,2957,2958,3996,1287,1596,1861,3154, 358, 453, 736, 175, # 6550 478,1117, 905,1167,1097,7796,1853,1530,7797,1706,7798,2178,3472,2287,3695,3473, # 6566 3577,4203,2092,4204,7799,3367,1193,2482,4205,1458,2190,2205,1862,1888,1421,3298, # 6582 2918,3049,2179,3474, 595,2122,7800,3997,7801,7802,4206,1707,2636, 223,3696,1359, # 6598 751,3098, 183,3475,7803,2797,3003, 419,2369, 633, 704,3818,2389, 241,7804,7805, # 6614 7806, 838,3004,3697,2272,2763,2454,3819,1938,2050,3998,1309,3099,2242,1181,7807, # 6630 1136,2206,3820,2370,1446,4207,2305,4498,7808,7809,4208,1055,2605, 484,3698,7810, # 6646 3999, 625,4209,2273,3368,1499,4210,4000,7811,4001,4211,3222,2274,2275,3476,7812, # 6662 7813,2764, 808,2606,3699,3369,4002,4212,3100,2532, 526,3370,3821,4213, 955,7814, # 6678 1620,4214,2637,2427,7815,1429,3700,1669,1831, 994, 928,7816,3578,1260,7817,7818, # 6694 7819,1948,2288, 741,2919,1626,4215,2729,2455, 867,1184, 362,3371,1392,7820,7821, # 6710 4003,4216,1770,1736,3223,2920,4499,4500,1928,2698,1459,1158,7822,3050,3372,2877, # 6726 1292,1929,2506,2842,3701,1985,1187,2071,2014,2607,4217,7823,2566,2507,2169,3702, # 6742 2483,3299,7824,3703,4501,7825,7826, 666,1003,3005,1022,3579,4218,7827,4502,1813, # 6758 2253, 574,3822,1603, 295,1535, 705,3823,4219, 283, 858, 417,7828,7829,3224,4503, # 6774 4504,3051,1220,1889,1046,2276,2456,4004,1393,1599, 689,2567, 388,4220,7830,2484, # 6790 802,7831,2798,3824,2060,1405,2254,7832,4505,3825,2109,1052,1345,3225,1585,7833, # 6806 809,7834,7835,7836, 575,2730,3477, 956,1552,1469,1144,2323,7837,2324,1560,2457, # 6822 3580,3226,4005, 616,2207,3155,2180,2289,7838,1832,7839,3478,4506,7840,1319,3704, # 6838 3705,1211,3581,1023,3227,1293,2799,7841,7842,7843,3826, 607,2306,3827, 762,2878, # 6854 1439,4221,1360,7844,1485,3052,7845,4507,1038,4222,1450,2061,2638,4223,1379,4508, # 6870 2585,7846,7847,4224,1352,1414,2325,2921,1172,7848,7849,3828,3829,7850,1797,1451, # 6886 7851,7852,7853,7854,2922,4006,4007,2485,2346, 411,4008,4009,3582,3300,3101,4509, # 6902 1561,2664,1452,4010,1375,7855,7856, 47,2959, 316,7857,1406,1591,2923,3156,7858, # 6918 1025,2141,3102,3157, 354,2731, 884,2224,4225,2407, 508,3706, 726,3583, 996,2428, # 6934 3584, 729,7859, 392,2191,1453,4011,4510,3707,7860,7861,2458,3585,2608,1675,2800, # 6950 919,2347,2960,2348,1270,4511,4012, 73,7862,7863, 647,7864,3228,2843,2255,1550, # 6966 1346,3006,7865,1332, 883,3479,7866,7867,7868,7869,3301,2765,7870,1212, 831,1347, # 6982 4226,4512,2326,3830,1863,3053, 720,3831,4513,4514,3832,7871,4227,7872,7873,4515, # 6998 
7874,7875,1798,4516,3708,2609,4517,3586,1645,2371,7876,7877,2924, 669,2208,2665, # 7014 2429,7878,2879,7879,7880,1028,3229,7881,4228,2408,7882,2256,1353,7883,7884,4518, # 7030 3158, 518,7885,4013,7886,4229,1960,7887,2142,4230,7888,7889,3007,2349,2350,3833, # 7046 516,1833,1454,4014,2699,4231,4519,2225,2610,1971,1129,3587,7890,2766,7891,2961, # 7062 1422, 577,1470,3008,1524,3373,7892,7893, 432,4232,3054,3480,7894,2586,1455,2508, # 7078 2226,1972,1175,7895,1020,2732,4015,3481,4520,7896,2733,7897,1743,1361,3055,3482, # 7094 2639,4016,4233,4521,2290, 895, 924,4234,2170, 331,2243,3056, 166,1627,3057,1098, # 7110 7898,1232,2880,2227,3374,4522, 657, 403,1196,2372, 542,3709,3375,1600,4235,3483, # 7126 7899,4523,2767,3230, 576, 530,1362,7900,4524,2533,2666,3710,4017,7901, 842,3834, # 7142 7902,2801,2031,1014,4018, 213,2700,3376, 665, 621,4236,7903,3711,2925,2430,7904, # 7158 2431,3302,3588,3377,7905,4237,2534,4238,4525,3589,1682,4239,3484,1380,7906, 724, # 7174 2277, 600,1670,7907,1337,1233,4526,3103,2244,7908,1621,4527,7909, 651,4240,7910, # 7190 1612,4241,2611,7911,2844,7912,2734,2307,3058,7913, 716,2459,3059, 174,1255,2701, # 7206 4019,3590, 548,1320,1398, 728,4020,1574,7914,1890,1197,3060,4021,7915,3061,3062, # 7222 3712,3591,3713, 747,7916, 635,4242,4528,7917,7918,7919,4243,7920,7921,4529,7922, # 7238 3378,4530,2432, 451,7923,3714,2535,2072,4244,2735,4245,4022,7924,1764,4531,7925, # 7254 4246, 350,7926,2278,2390,2486,7927,4247,4023,2245,1434,4024, 488,4532, 458,4248, # 7270 4025,3715, 771,1330,2391,3835,2568,3159,2159,2409,1553,2667,3160,4249,7928,2487, # 7286 2881,2612,1720,2702,4250,3379,4533,7929,2536,4251,7930,3231,4252,2768,7931,2015, # 7302 2736,7932,1155,1017,3716,3836,7933,3303,2308, 201,1864,4253,1430,7934,4026,7935, # 7318 7936,7937,7938,7939,4254,1604,7940, 414,1865, 371,2587,4534,4535,3485,2016,3104, # 7334 4536,1708, 960,4255, 887, 389,2171,1536,1663,1721,7941,2228,4027,2351,2926,1580, # 7350 7942,7943,7944,1744,7945,2537,4537,4538,7946,4539,7947,2073,7948,7949,3592,3380, # 7366 2882,4256,7950,4257,2640,3381,2802, 673,2703,2460, 709,3486,4028,3593,4258,7951, # 7382 1148, 502, 634,7952,7953,1204,4540,3594,1575,4541,2613,3717,7954,3718,3105, 948, # 7398 3232, 121,1745,3837,1110,7955,4259,3063,2509,3009,4029,3719,1151,1771,3838,1488, # 7414 4030,1986,7956,2433,3487,7957,7958,2093,7959,4260,3839,1213,1407,2803, 531,2737, # 7430 2538,3233,1011,1537,7960,2769,4261,3106,1061,7961,3720,3721,1866,2883,7962,2017, # 7446 120,4262,4263,2062,3595,3234,2309,3840,2668,3382,1954,4542,7963,7964,3488,1047, # 7462 2704,1266,7965,1368,4543,2845, 649,3383,3841,2539,2738,1102,2846,2669,7966,7967, # 7478 1999,7968,1111,3596,2962,7969,2488,3842,3597,2804,1854,3384,3722,7970,7971,3385, # 7494 2410,2884,3304,3235,3598,7972,2569,7973,3599,2805,4031,1460, 856,7974,3600,7975, # 7510 2885,2963,7976,2886,3843,7977,4264, 632,2510, 875,3844,1697,3845,2291,7978,7979, # 7526 4544,3010,1239, 580,4545,4265,7980, 914, 936,2074,1190,4032,1039,2123,7981,7982, # 7542 7983,3386,1473,7984,1354,4266,3846,7985,2172,3064,4033, 915,3305,4267,4268,3306, # 7558 1605,1834,7986,2739, 398,3601,4269,3847,4034, 328,1912,2847,4035,3848,1331,4270, # 7574 3011, 937,4271,7987,3602,4036,4037,3387,2160,4546,3388, 524, 742, 538,3065,1012, # 7590 7988,7989,3849,2461,7990, 658,1103, 225,3850,7991,7992,4547,7993,4548,7994,3236, # 7606 1243,7995,4038, 963,2246,4549,7996,2705,3603,3161,7997,7998,2588,2327,7999,4550, # 7622 8000,8001,8002,3489,3307, 957,3389,2540,2032,1930,2927,2462, 870,2018,3604,1746, # 7638 
2770,2771,2434,2463,8003,3851,8004,3723,3107,3724,3490,3390,3725,8005,1179,3066, # 7654 8006,3162,2373,4272,3726,2541,3163,3108,2740,4039,8007,3391,1556,2542,2292, 977, # 7670 2887,2033,4040,1205,3392,8008,1765,3393,3164,2124,1271,1689, 714,4551,3491,8009, # 7686 2328,3852, 533,4273,3605,2181, 617,8010,2464,3308,3492,2310,8011,8012,3165,8013, # 7702 8014,3853,1987, 618, 427,2641,3493,3394,8015,8016,1244,1690,8017,2806,4274,4552, # 7718 8018,3494,8019,8020,2279,1576, 473,3606,4275,3395, 972,8021,3607,8022,3067,8023, # 7734 8024,4553,4554,8025,3727,4041,4042,8026, 153,4555, 356,8027,1891,2888,4276,2143, # 7750 408, 803,2352,8028,3854,8029,4277,1646,2570,2511,4556,4557,3855,8030,3856,4278, # 7766 8031,2411,3396, 752,8032,8033,1961,2964,8034, 746,3012,2465,8035,4279,3728, 698, # 7782 4558,1892,4280,3608,2543,4559,3609,3857,8036,3166,3397,8037,1823,1302,4043,2706, # 7798 3858,1973,4281,8038,4282,3167, 823,1303,1288,1236,2848,3495,4044,3398, 774,3859, # 7814 8039,1581,4560,1304,2849,3860,4561,8040,2435,2161,1083,3237,4283,4045,4284, 344, # 7830 1173, 288,2311, 454,1683,8041,8042,1461,4562,4046,2589,8043,8044,4563, 985, 894, # 7846 8045,3399,3168,8046,1913,2928,3729,1988,8047,2110,1974,8048,4047,8049,2571,1194, # 7862 425,8050,4564,3169,1245,3730,4285,8051,8052,2850,8053, 636,4565,1855,3861, 760, # 7878 1799,8054,4286,2209,1508,4566,4048,1893,1684,2293,8055,8056,8057,4287,4288,2210, # 7894 479,8058,8059, 832,8060,4049,2489,8061,2965,2490,3731, 990,3109, 627,1814,2642, # 7910 4289,1582,4290,2125,2111,3496,4567,8062, 799,4291,3170,8063,4568,2112,1737,3013, # 7926 1018, 543, 754,4292,3309,1676,4569,4570,4050,8064,1489,8065,3497,8066,2614,2889, # 7942 4051,8067,8068,2966,8069,8070,8071,8072,3171,4571,4572,2182,1722,8073,3238,3239, # 7958 1842,3610,1715, 481, 365,1975,1856,8074,8075,1962,2491,4573,8076,2126,3611,3240, # 7974 433,1894,2063,2075,8077, 602,2741,8078,8079,8080,8081,8082,3014,1628,3400,8083, # 7990 3172,4574,4052,2890,4575,2512,8084,2544,2772,8085,8086,8087,3310,4576,2891,8088, # 8006 4577,8089,2851,4578,4579,1221,2967,4053,2513,8090,8091,8092,1867,1989,8093,8094, # 8022 8095,1895,8096,8097,4580,1896,4054, 318,8098,2094,4055,4293,8099,8100, 485,8101, # 8038 938,3862, 553,2670, 116,8102,3863,3612,8103,3498,2671,2773,3401,3311,2807,8104, # 8054 3613,2929,4056,1747,2930,2968,8105,8106, 207,8107,8108,2672,4581,2514,8109,3015, # 8070 890,3614,3864,8110,1877,3732,3402,8111,2183,2353,3403,1652,8112,8113,8114, 941, # 8086 2294, 208,3499,4057,2019, 330,4294,3865,2892,2492,3733,4295,8115,8116,8117,8118, # 8102
)
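
# Illustrative sketch added for this annotated listing (not part of the
# upstream module). It shows, in simplified form, how a distribution analyser
# is expected to combine the frequency-order table above with
# EUCTW_TYPICAL_DISTRIBUTION_RATIO: characters whose frequency order falls in
# the first 512 slots count as "frequent" (per the "512 --> 0.74851" figure in
# the header), and the frequent-to-rare ratio is normalised by the typical
# ratio to give a confidence. The real logic lives in chardistribution.py;
# the function name and thresholds below are local to this sketch.
def _sketch_confidence(freq_orders):
    """freq_orders: sequence of FreqOrder values already looked up in the table."""
    freq_chars = sum(1 for order in freq_orders if order < 512)
    total_chars = len(freq_orders)
    if total_chars <= 0 or freq_chars <= 0:
        return 0.01  # not enough data (SURE_NO in the real code)
    if total_chars == freq_chars:
        return 0.99  # SURE_YES
    r = freq_chars / ((total_chars - freq_chars)
                      * EUCTW_TYPICAL_DISTRIBUTION_RATIO)
    return min(r, 0.99)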
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/euctwprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import EUCTWDistributionAnalysis
from .mbcssm import EUCTW_SM_MODEL


class EUCTWProber(MultiByteCharSetProber):
    def __init__(self):
        super(EUCTWProber, self).__init__()
        self.coding_sm = CodingStateMachine(EUCTW_SM_MODEL)
        self.distribution_analyzer = EUCTWDistributionAnalysis()
        self.reset()

    @property
    def charset_name(self):
        return "EUC-TW"

    @property
    def language(self):
        return "Taiwan"
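
# Minimal usage sketch added for this annotated listing (not part of the
# upstream module). It exercises only the public prober API inherited from
# MultiByteCharSetProber: feed(), get_confidence(), and the charset_name
# property. The sample bytes are a placeholder, not a real EUC-TW corpus.
if __name__ == "__main__":
    prober = EUCTWProber()
    prober.feed(b"\xc4\xe3\xbc\xd6 placeholder bytes")
    print(prober.charset_name)      # -> "EUC-TW"
    print(prober.get_confidence())  # -> a value in [0.0, 0.99]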
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/gb2312freq.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

# GB2312 most frequently used character table
#
# Char to FreqOrder table, from hz6763
#
# 512  --> 0.79 -- 0.79
# 1024 --> 0.92 -- 0.13
# 2048 --> 0.98 -- 0.06
# 6768 --> 1.00 -- 0.02
#
# Ideal Distribution Ratio = 0.79135/(1-0.79135) = 3.79
# Random Distribution Ratio = 512 / (3755 - 512) = 0.157
#
# The Typical Distribution Ratio is about 25% of the Ideal one, but still much
# higher than the Random one. (See the short arithmetic check after the table.)

GB2312_TYPICAL_DISTRIBUTION_RATIO = 0.9

GB2312_TABLE_SIZE = 3760

GB2312_CHAR_TO_FREQ_ORDER = (
1671, 749,1443,2364,3924,3807,2330,3921,1704,3463,2691,1511,1515, 572,3191,2205, 2361, 224,2558, 479,1711, 963,3162, 440,4060,1905,2966,2947,3580,2647,3961,3842, 2204, 869,4207, 970,2678,5626,2944,2956,1479,4048, 514,3595, 588,1346,2820,3409, 249,4088,1746,1873,2047,1774, 581,1813, 358,1174,3590,1014,1561,4844,2245, 670, 1636,3112, 889,1286, 953, 556,2327,3060,1290,3141, 613, 185,3477,1367, 850,3820, 1715,2428,2642,2303,2732,3041,2562,2648,3566,3946,1349, 388,3098,2091,1360,3585, 152,1687,1539, 738,1559, 59,1232,2925,2267,1388,1249,1741,1679,2960, 151,1566, 1125,1352,4271, 924,4296, 385,3166,4459, 310,1245,2850, 70,3285,2729,3534,3575, 2398,3298,3466,1960,2265, 217,3647, 864,1909,2084,4401,2773,1010,3269,5152, 853, 3051,3121,1244,4251,1895, 364,1499,1540,2313,1180,3655,2268, 562, 715,2417,3061, 544, 336,3768,2380,1752,4075, 950, 280,2425,4382, 183,2759,3272, 333,4297,2155, 1688,2356,1444,1039,4540, 736,1177,3349,2443,2368,2144,2225, 565, 196,1482,3406, 927,1335,4147, 692, 878,1311,1653,3911,3622,1378,4200,1840,2969,3149,2126,1816, 2534,1546,2393,2760, 737,2494, 13, 447, 245,2747, 38,2765,2129,2589,1079, 606, 360, 471,3755,2890, 404, 848, 699,1785,1236, 370,2221,1023,3746,2074,2026,2023, 2388,1581,2119, 812,1141,3091,2536,1519, 804,2053, 406,1596,1090, 784, 548,4414, 1806,2264,2936,1100, 343,4114,5096, 622,3358, 743,3668,1510,1626,5020,3567,2513, 3195,4115,5627,2489,2991, 24,2065,2697,1087,2719, 48,1634, 315, 68, 985,2052, 198,2239,1347,1107,1439, 597,2366,2172, 871,3307, 919,2487,2790,1867, 236,2570, 1413,3794, 906,3365,3381,1701,1982,1818,1524,2924,1205, 616,2586,2072,2004, 575, 253,3099, 32,1365,1182, 197,1714,2454,1201, 554,3388,3224,2748, 756,2587, 250, 2567,1507,1517,3529,1922,2761,2337,3416,1961,1677,2452,2238,3153, 615, 911,1506, 1474,2495,1265,1906,2749,3756,3280,2161, 898,2714,1759,3450,2243,2444, 563, 26, 3286,2266,3769,3344,2707,3677, 611,1402, 531,1028,2871,4548,1375, 261,2948, 835, 1190,4134, 353, 840,2684,1900,3082,1435,2109,1207,1674, 329,1872,2781,4055,2686, 2104, 608,3318,2423,2957,2768,1108,3739,3512,3271,3985,2203,1771,3520,1418,2054, 1681,1153, 225,1627,2929, 162,2050,2511,3687,1954, 124,1859,2431,1684,3032,2894, 585,4805,3969,2869,2704,2088,2032,2095,3656,2635,4362,2209, 256, 518,2042,2105, 3777,3657, 643,2298,1148,1779, 190, 989,3544, 414, 11,2135,2063,2979,1471, 403, 3678, 126, 770,1563, 671,2499,3216,2877, 600,1179, 307,2805,4937,1268,1297,2694, 252,4032,1448,1494,1331,1394, 127,2256, 222,1647,1035,1481,3056,1915,1048, 873, 3651, 210, 33,1608,2516, 200,1520, 415, 102, 0,3389,1287, 817, 91,3299,2940, 836,1814, 549,2197,1396,1669,2987,3582,2297,2848,4528,1070, 687, 20,1819, 121, 1552,1364,1461,1968,2617,3540,2824,2083, 177, 948,4938,2291, 110,4549,2066, 648, 3359,1755,2110,2114,4642,4845,1693,3937,3308,1257,1869,2123,
208,1804,3159,2992, 2531,2549,3361,2418,1350,2347,2800,2568,1291,2036,2680, 72, 842,1990, 212,1233, 1154,1586, 75,2027,3410,4900,1823,1337,2710,2676, 728,2810,1522,3026,4995, 157, 755,1050,4022, 710, 785,1936,2194,2085,1406,2777,2400, 150,1250,4049,1206, 807, 1910, 534, 529,3309,1721,1660, 274, 39,2827, 661,2670,1578, 925,3248,3815,1094, 4278,4901,4252, 41,1150,3747,2572,2227,4501,3658,4902,3813,3357,3617,2884,2258, 887, 538,4187,3199,1294,2439,3042,2329,2343,2497,1255, 107, 543,1527, 521,3478, 3568, 194,5062, 15, 961,3870,1241,1192,2664, 66,5215,3260,2111,1295,1127,2152, 3805,4135, 901,1164,1976, 398,1278, 530,1460, 748, 904,1054,1966,1426, 53,2909, 509, 523,2279,1534, 536,1019, 239,1685, 460,2353, 673,1065,2401,3600,4298,2272, 1272,2363, 284,1753,3679,4064,1695, 81, 815,2677,2757,2731,1386, 859, 500,4221, 2190,2566, 757,1006,2519,2068,1166,1455, 337,2654,3203,1863,1682,1914,3025,1252, 1409,1366, 847, 714,2834,2038,3209, 964,2970,1901, 885,2553,1078,1756,3049, 301, 1572,3326, 688,2130,1996,2429,1805,1648,2930,3421,2750,3652,3088, 262,1158,1254, 389,1641,1812, 526,1719, 923,2073,1073,1902, 468, 489,4625,1140, 857,2375,3070, 3319,2863, 380, 116,1328,2693,1161,2244, 273,1212,1884,2769,3011,1775,1142, 461, 3066,1200,2147,2212, 790, 702,2695,4222,1601,1058, 434,2338,5153,3640, 67,2360, 4099,2502, 618,3472,1329, 416,1132, 830,2782,1807,2653,3211,3510,1662, 192,2124, 296,3979,1739,1611,3684, 23, 118, 324, 446,1239,1225, 293,2520,3814,3795,2535, 3116, 17,1074, 467,2692,2201, 387,2922, 45,1326,3055,1645,3659,2817, 958, 243, 1903,2320,1339,2825,1784,3289, 356, 576, 865,2315,2381,3377,3916,1088,3122,1713, 1655, 935, 628,4689,1034,1327, 441, 800, 720, 894,1979,2183,1528,5289,2702,1071, 4046,3572,2399,1571,3281, 79, 761,1103, 327, 134, 758,1899,1371,1615, 879, 442, 215,2605,2579, 173,2048,2485,1057,2975,3317,1097,2253,3801,4263,1403,1650,2946, 814,4968,3487,1548,2644,1567,1285, 2, 295,2636, 97, 946,3576, 832, 141,4257, 3273, 760,3821,3521,3156,2607, 949,1024,1733,1516,1803,1920,2125,2283,2665,3180, 1501,2064,3560,2171,1592, 803,3518,1416, 732,3897,4258,1363,1362,2458, 119,1427, 602,1525,2608,1605,1639,3175, 694,3064, 10, 465, 76,2000,4846,4208, 444,3781, 1619,3353,2206,1273,3796, 740,2483, 320,1723,2377,3660,2619,1359,1137,1762,1724, 2345,2842,1850,1862, 912, 821,1866, 612,2625,1735,2573,3369,1093, 844, 89, 937, 930,1424,3564,2413,2972,1004,3046,3019,2011, 711,3171,1452,4178, 428, 801,1943, 432, 445,2811, 206,4136,1472, 730, 349, 73, 397,2802,2547, 998,1637,1167, 789, 396,3217, 154,1218, 716,1120,1780,2819,4826,1931,3334,3762,2139,1215,2627, 552, 3664,3628,3232,1405,2383,3111,1356,2652,3577,3320,3101,1703, 640,1045,1370,1246, 4996, 371,1575,2436,1621,2210, 984,4033,1734,2638, 16,4529, 663,2755,3255,1451, 3917,2257,1253,1955,2234,1263,2951, 214,1229, 617, 485, 359,1831,1969, 473,2310, 750,2058, 165, 80,2864,2419, 361,4344,2416,2479,1134, 796,3726,1266,2943, 860, 2715, 938, 390,2734,1313,1384, 248, 202, 877,1064,2854, 522,3907, 279,1602, 297, 2357, 395,3740, 137,2075, 944,4089,2584,1267,3802, 62,1533,2285, 178, 176, 780, 2440, 201,3707, 590, 478,1560,4354,2117,1075, 30, 74,4643,4004,1635,1441,2745, 776,2596, 238,1077,1692,1912,2844, 605, 499,1742,3947, 241,3053, 980,1749, 936, 2640,4511,2582, 515,1543,2162,5322,2892,2993, 890,2148,1924, 665,1827,3581,1032, 968,3163, 339,1044,1896, 270, 583,1791,1720,4367,1194,3488,3669, 43,2523,1657, 163,2167, 290,1209,1622,3378, 550, 634,2508,2510, 695,2634,2384,2512,1476,1414, 220,1469,2341,2138,2852,3183,2900,4939,2865,3502,1211,3680, 854,3227,1299,2976, 3172, 
186,2998,1459, 443,1067,3251,1495, 321,1932,3054, 909, 753,1410,1828, 436, 2441,1119,1587,3164,2186,1258, 227, 231,1425,1890,3200,3942, 247, 959, 725,5254, 2741, 577,2158,2079, 929, 120, 174, 838,2813, 591,1115, 417,2024, 40,3240,1536, 1037, 291,4151,2354, 632,1298,2406,2500,3535,1825,1846,3451, 205,1171, 345,4238, 18,1163, 811, 685,2208,1217, 425,1312,1508,1175,4308,2552,1033, 587,1381,3059, 2984,3482, 340,1316,4023,3972, 792,3176, 519, 777,4690, 918, 933,4130,2981,3741, 90,3360,2911,2200,5184,4550, 609,3079,2030, 272,3379,2736, 363,3881,1130,1447, 286, 779, 357,1169,3350,3137,1630,1220,2687,2391, 747,1277,3688,2618,2682,2601, 1156,3196,5290,4034,3102,1689,3596,3128, 874, 219,2783, 798, 508,1843,2461, 269, 1658,1776,1392,1913,2983,3287,2866,2159,2372, 829,4076, 46,4253,2873,1889,1894, 915,1834,1631,2181,2318, 298, 664,2818,3555,2735, 954,3228,3117, 527,3511,2173, 681,2712,3033,2247,2346,3467,1652, 155,2164,3382, 113,1994, 450, 899, 494, 994, 1237,2958,1875,2336,1926,3727, 545,1577,1550, 633,3473, 204,1305,3072,2410,1956, 2471, 707,2134, 841,2195,2196,2663,3843,1026,4940, 990,3252,4997, 368,1092, 437, 3212,3258,1933,1829, 675,2977,2893, 412, 943,3723,4644,3294,3283,2230,2373,5154, 2389,2241,2661,2323,1404,2524, 593, 787, 677,3008,1275,2059, 438,2709,2609,2240, 2269,2246,1446, 36,1568,1373,3892,1574,2301,1456,3962, 693,2276,5216,2035,1143, 2720,1919,1797,1811,2763,4137,2597,1830,1699,1488,1198,2090, 424,1694, 312,3634, 3390,4179,3335,2252,1214, 561,1059,3243,2295,2561, 975,5155,2321,2751,3772, 472, 1537,3282,3398,1047,2077,2348,2878,1323,3340,3076, 690,2906, 51, 369, 170,3541, 1060,2187,2688,3670,2541,1083,1683, 928,3918, 459, 109,4427, 599,3744,4286, 143, 2101,2730,2490, 82,1588,3036,2121, 281,1860, 477,4035,1238,2812,3020,2716,3312, 1530,2188,2055,1317, 843, 636,1808,1173,3495, 649, 181,1002, 147,3641,1159,2414, 3750,2289,2795, 813,3123,2610,1136,4368, 5,3391,4541,2174, 420, 429,1728, 754, 1228,2115,2219, 347,2223,2733, 735,1518,3003,2355,3134,1764,3948,3329,1888,2424, 1001,1234,1972,3321,3363,1672,1021,1450,1584, 226, 765, 655,2526,3404,3244,2302, 3665, 731, 594,2184, 319,1576, 621, 658,2656,4299,2099,3864,1279,2071,2598,2739, 795,3086,3699,3908,1707,2352,2402,1382,3136,2475,1465,4847,3496,3865,1085,3004, 2591,1084, 213,2287,1963,3565,2250, 822, 793,4574,3187,1772,1789,3050, 595,1484, 1959,2770,1080,2650, 456, 422,2996, 940,3322,4328,4345,3092,2742, 965,2784, 739, 4124, 952,1358,2498,2949,2565, 332,2698,2378, 660,2260,2473,4194,3856,2919, 535, 1260,2651,1208,1428,1300,1949,1303,2942, 433,2455,2450,1251,1946, 614,1269, 641, 1306,1810,2737,3078,2912, 564,2365,1419,1415,1497,4460,2367,2185,1379,3005,1307, 3218,2175,1897,3063, 682,1157,4040,4005,1712,1160,1941,1399, 394, 402,2952,1573, 1151,2986,2404, 862, 299,2033,1489,3006, 346, 171,2886,3401,1726,2932, 168,2533, 47,2507,1030,3735,1145,3370,1395,1318,1579,3609,4560,2857,4116,1457,2529,1965, 504,1036,2690,2988,2405, 745,5871, 849,2397,2056,3081, 863,2359,3857,2096, 99, 1397,1769,2300,4428,1643,3455,1978,1757,3718,1440, 35,4879,3742,1296,4228,2280, 160,5063,1599,2013, 166, 520,3479,1646,3345,3012, 490,1937,1545,1264,2182,2505, 1096,1188,1369,1436,2421,1667,2792,2460,1270,2122, 727,3167,2143, 806,1706,1012, 1800,3037, 960,2218,1882, 805, 139,2456,1139,1521, 851,1052,3093,3089, 342,2039, 744,5097,1468,1502,1585,2087, 223, 939, 326,2140,2577, 892,2481,1623,4077, 982, 3708, 135,2131, 87,2503,3114,2326,1106, 876,1616, 547,2997,2831,2093,3441,4530, 4314, 9,3256,4229,4148, 659,1462,1986,1710,2046,2913,2231,4090,4880,5255,3392, 
3274,1368,3689,4645,1477, 705,3384,3635,1068,1529,2941,1458,3782,1509, 100,1656, 2548, 718,2339, 408,1590,2780,3548,1838,4117,3719,1345,3530, 717,3442,2778,3220, 2898,1892,4590,3614,3371,2043,1998,1224,3483, 891, 635, 584,2559,3355, 733,1766, 1729,1172,3789,1891,2307, 781,2982,2271,1957,1580,5773,2633,2005,4195,3097,1535, 3213,1189,1934,5693,3262, 586,3118,1324,1598, 517,1564,2217,1868,1893,4445,3728, 2703,3139,1526,1787,1992,3882,2875,1549,1199,1056,2224,1904,2711,5098,4287, 338, 1993,3129,3489,2689,1809,2815,1997, 957,1855,3898,2550,3275,3057,1105,1319, 627, 1505,1911,1883,3526, 698,3629,3456,1833,1431, 746, 77,1261,2017,2296,1977,1885, 125,1334,1600, 525,1798,1109,2222,1470,1945, 559,2236,1186,3443,2476,1929,1411, 2411,3135,1777,3372,2621,1841,1613,3229, 668,1430,1839,2643,2916, 195,1989,2671, 2358,1387, 629,3205,2293,5256,4439, 123,1310, 888,1879,4300,3021,3605,1003,1162, 3192,2910,2010, 140,2395,2859, 55,1082,2012,2901, 662, 419,2081,1438, 680,2774, 4654,3912,1620,1731,1625,5035,4065,2328, 512,1344, 802,5443,2163,2311,2537, 524, 3399, 98,1155,2103,1918,2606,3925,2816,1393,2465,1504,3773,2177,3963,1478,4346, 180,1113,4655,3461,2028,1698, 833,2696,1235,1322,1594,4408,3623,3013,3225,2040, 3022, 541,2881, 607,3632,2029,1665,1219, 639,1385,1686,1099,2803,3231,1938,3188, 2858, 427, 676,2772,1168,2025, 454,3253,2486,3556, 230,1950, 580, 791,1991,1280, 1086,1974,2034, 630, 257,3338,2788,4903,1017, 86,4790, 966,2789,1995,1696,1131, 259,3095,4188,1308, 179,1463,5257, 289,4107,1248, 42,3413,1725,2288, 896,1947, 774,4474,4254, 604,3430,4264, 392,2514,2588, 452, 237,1408,3018, 988,4531,1970, 3034,3310, 540,2370,1562,1288,2990, 502,4765,1147, 4,1853,2708, 207, 294,2814, 4078,2902,2509, 684, 34,3105,3532,2551, 644, 709,2801,2344, 573,1727,3573,3557, 2021,1081,3100,4315,2100,3681, 199,2263,1837,2385, 146,3484,1195,2776,3949, 997, 1939,3973,1008,1091,1202,1962,1847,1149,4209,5444,1076, 493, 117,5400,2521, 972, 1490,2934,1796,4542,2374,1512,2933,2657, 413,2888,1135,2762,2314,2156,1355,2369, 766,2007,2527,2170,3124,2491,2593,2632,4757,2437, 234,3125,3591,1898,1750,1376, 1942,3468,3138, 570,2127,2145,3276,4131, 962, 132,1445,4196, 19, 941,3624,3480, 3366,1973,1374,4461,3431,2629, 283,2415,2275, 808,2887,3620,2112,2563,1353,3610, 955,1089,3103,1053, 96, 88,4097, 823,3808,1583, 399, 292,4091,3313, 421,1128, 642,4006, 903,2539,1877,2082, 596, 29,4066,1790, 722,2157, 130, 995,1569, 769, 1485, 464, 513,2213, 288,1923,1101,2453,4316, 133, 486,2445, 50, 625, 487,2207, 57, 423, 481,2962, 159,3729,1558, 491, 303, 482, 501, 240,2837, 112,3648,2392, 1783, 362, 8,3433,3422, 610,2793,3277,1390,1284,1654, 21,3823, 734, 367, 623, 193, 287, 374,1009,1483, 816, 476, 313,2255,2340,1262,2150,2899,1146,2581, 782, 2116,1659,2018,1880, 255,3586,3314,1110,2867,2137,2564, 986,2767,5185,2006, 650, 158, 926, 762, 881,3157,2717,2362,3587, 306,3690,3245,1542,3077,2427,1691,2478, 2118,2985,3490,2438, 539,2305, 983, 129,1754, 355,4201,2386, 827,2923, 104,1773, 2838,2771, 411,2905,3919, 376, 767, 122,1114, 828,2422,1817,3506, 266,3460,1007, 1609,4998, 945,2612,4429,2274, 726,1247,1964,2914,2199,2070,4002,4108, 657,3323, 1422, 579, 455,2764,4737,1222,2895,1670, 824,1223,1487,2525, 558, 861,3080, 598, 2659,2515,1967, 752,2583,2376,2214,4180, 977, 704,2464,4999,2622,4109,1210,2961, 819,1541, 142,2284, 44, 418, 457,1126,3730,4347,4626,1644,1876,3671,1864, 302, 1063,5694, 624, 723,1984,3745,1314,1676,2488,1610,1449,3558,3569,2166,2098, 409, 1011,2325,3704,2306, 818,1732,1383,1824,1844,3757, 999,2705,3497,1216,1423,2683, 
2426,2954,2501,2726,2229,1475,2554,5064,1971,1794,1666,2014,1343, 783, 724, 191, 2434,1354,2220,5065,1763,2752,2472,4152, 131, 175,2885,3434, 92,1466,4920,2616, 3871,3872,3866, 128,1551,1632, 669,1854,3682,4691,4125,1230, 188,2973,3290,1302, 1213, 560,3266, 917, 763,3909,3249,1760, 868,1958, 764,1782,2097, 145,2277,3774, 4462, 64,1491,3062, 971,2132,3606,2442, 221,1226,1617, 218, 323,1185,3207,3147, 571, 619,1473,1005,1744,2281, 449,1887,2396,3685, 275, 375,3816,1743,3844,3731, 845,1983,2350,4210,1377, 773, 967,3499,3052,3743,2725,4007,1697,1022,3943,1464, 3264,2855,2722,1952,1029,2839,2467, 84,4383,2215, 820,1391,2015,2448,3672, 377, 1948,2168, 797,2545,3536,2578,2645, 94,2874,1678, 405,1259,3071, 771, 546,1315, 470,1243,3083, 895,2468, 981, 969,2037, 846,4181, 653,1276,2928, 14,2594, 557, 3007,2474, 156, 902,1338,1740,2574, 537,2518, 973,2282,2216,2433,1928, 138,2903, 1293,2631,1612, 646,3457, 839,2935, 111, 496,2191,2847, 589,3186, 149,3994,2060, 4031,2641,4067,3145,1870, 37,3597,2136,1025,2051,3009,3383,3549,1121,1016,3261, 1301, 251,2446,2599,2153, 872,3246, 637, 334,3705, 831, 884, 921,3065,3140,4092, 2198,1944, 246,2964, 108,2045,1152,1921,2308,1031, 203,3173,4170,1907,3890, 810, 1401,2003,1690, 506, 647,1242,2828,1761,1649,3208,2249,1589,3709,2931,5156,1708, 498, 666,2613, 834,3817,1231, 184,2851,1124, 883,3197,2261,3710,1765,1553,2658, 1178,2639,2351, 93,1193, 942,2538,2141,4402, 235,1821, 870,1591,2192,1709,1871, 3341,1618,4126,2595,2334, 603, 651, 69, 701, 268,2662,3411,2555,1380,1606, 503, 448, 254,2371,2646, 574,1187,2309,1770, 322,2235,1292,1801, 305, 566,1133, 229, 2067,2057, 706, 167, 483,2002,2672,3295,1820,3561,3067, 316, 378,2746,3452,1112, 136,1981, 507,1651,2917,1117, 285,4591, 182,2580,3522,1304, 335,3303,1835,2504, 1795,1792,2248, 674,1018,2106,2449,1857,2292,2845, 976,3047,1781,2600,2727,1389, 1281, 52,3152, 153, 265,3950, 672,3485,3951,4463, 430,1183, 365, 278,2169, 27, 1407,1336,2304, 209,1340,1730,2202,1852,2403,2883, 979,1737,1062, 631,2829,2542, 3876,2592, 825,2086,2226,3048,3625, 352,1417,3724, 542, 991, 431,1351,3938,1861, 2294, 826,1361,2927,3142,3503,1738, 463,2462,2723, 582,1916,1595,2808, 400,3845, 3891,2868,3621,2254, 58,2492,1123, 910,2160,2614,1372,1603,1196,1072,3385,1700, 3267,1980, 696, 480,2430, 920, 799,1570,2920,1951,2041,4047,2540,1321,4223,2469, 3562,2228,1271,2602, 401,2833,3351,2575,5157, 907,2312,1256, 410, 263,3507,1582, 996, 678,1849,2316,1480, 908,3545,2237, 703,2322, 667,1826,2849,1531,2604,2999, 2407,3146,2151,2630,1786,3711, 469,3542, 497,3899,2409, 858, 837,4446,3393,1274, 786, 620,1845,2001,3311, 484, 308,3367,1204,1815,3691,2332,1532,2557,1842,2020, 2724,1927,2333,4440, 567, 22,1673,2728,4475,1987,1858,1144,1597, 101,1832,3601, 12, 974,3783,4391, 951,1412, 1,3720, 453,4608,4041, 528,1041,1027,3230,2628, 1129, 875,1051,3291,1203,2262,1069,2860,2799,2149,2615,3278, 144,1758,3040, 31, 475,1680, 366,2685,3184, 311,1642,4008,2466,5036,1593,1493,2809, 216,1420,1668, 233, 304,2128,3284, 232,1429,1768,1040,2008,3407,2740,2967,2543, 242,2133, 778, 1565,2022,2620, 505,2189,2756,1098,2273, 372,1614, 708, 553,2846,2094,2278, 169, 3626,2835,4161, 228,2674,3165, 809,1454,1309, 466,1705,1095, 900,3423, 880,2667, 3751,5258,2317,3109,2571,4317,2766,1503,1342, 866,4447,1118, 63,2076, 314,1881, 1348,1061, 172, 978,3515,1747, 532, 511,3970, 6, 601, 905,2699,3300,1751, 276, 1467,3725,2668, 65,4239,2544,2779,2556,1604, 578,2451,1802, 992,2331,2624,1320, 3446, 713,1513,1013, 103,2786,2447,1661, 886,1702, 916, 654,3574,2031,1556, 751, 
2178,2821,2179,1498,1538,2176, 271, 914,2251,2080,1325, 638,1953,2937,3877,2432, 2754, 95,3265,1716, 260,1227,4083, 775, 106,1357,3254, 426,1607, 555,2480, 772, 1985, 244,2546, 474, 495,1046,2611,1851,2061, 71,2089,1675,2590, 742,3758,2843, 3222,1433, 267,2180,2576,2826,2233,2092,3913,2435, 956,1745,3075, 856,2113,1116, 451, 3,1988,2896,1398, 993,2463,1878,2049,1341,2718,2721,2870,2108, 712,2904, 4363,2753,2324, 277,2872,2349,2649, 384, 987, 435, 691,3000, 922, 164,3939, 652, 1500,1184,4153,2482,3373,2165,4848,2335,3775,3508,3154,2806,2830,1554,2102,1664, 2530,1434,2408, 893,1547,2623,3447,2832,2242,2532,3169,2856,3223,2078, 49,3770, 3469, 462, 318, 656,2259,3250,3069, 679,1629,2758, 344,1138,1104,3120,1836,1283, 3115,2154,1437,4448, 934, 759,1999, 794,2862,1038, 533,2560,1722,2342, 855,2626, 1197,1663,4476,3127, 85,4240,2528, 25,1111,1181,3673, 407,3470,4561,2679,2713, 768,1925,2841,3986,1544,1165, 932, 373,1240,2146,1930,2673, 721,4766, 354,4333, 391,2963, 187, 61,3364,1442,1102, 330,1940,1767, 341,3809,4118, 393,2496,2062, 2211, 105, 331, 300, 439, 913,1332, 626, 379,3304,1557, 328, 689,3952, 309,1555, 931, 317,2517,3027, 325, 569, 686,2107,3084, 60,1042,1333,2794, 264,3177,4014, 1628, 258,3712, 7,4464,1176,1043,1778, 683, 114,1975, 78,1492, 383,1886, 510, 386, 645,5291,2891,2069,3305,4138,3867,2939,2603,2493,1935,1066,1848,3588,1015, 1282,1289,4609, 697,1453,3044,2666,3611,1856,2412, 54, 719,1330, 568,3778,2459, 1748, 788, 492, 551,1191,1000, 488,3394,3763, 282,1799, 348,2016,1523,3155,2390, 1049, 382,2019,1788,1170, 729,2968,3523, 897,3926,2785,2938,3292, 350,2319,3238, 1718,1717,2655,3453,3143,4465, 161,2889,2980,2009,1421, 56,1908,1640,2387,2232, 1917,1874,2477,4921, 148, 83,3438, 592,4245,2882,1822,1055, 741, 115,1496,1624, 381,1638,4592,1020, 516,3214, 458, 947,4575,1432, 211,1514,2926,1865,2142, 189, 852,1221,1400,1486, 882,2299,4036, 351, 28,1122, 700,6479,6480,6481,6482,6483, #last 512
)
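
# Quick arithmetic check of the ratios quoted in the header comment above;
# an illustrative addition to this listing, not part of the upstream module.
IDEAL_DR_CHECK = 0.79135 / (1 - 0.79135)  # ~= 3.79, the "Ideal Distribution Ratio"
RANDOM_DR_CHECK = 512 / (3755 - 512)      # ~= 0.157, the "Random Distribution Ratio"
assert abs(IDEAL_DR_CHECK - 3.79) < 0.01
assert abs(RANDOM_DR_CHECK - 0.157) < 0.001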
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/gb2312prober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import GB2312DistributionAnalysis
from .mbcssm import GB2312_SM_MODEL


class GB2312Prober(MultiByteCharSetProber):
    def __init__(self):
        super(GB2312Prober, self).__init__()
        self.coding_sm = CodingStateMachine(GB2312_SM_MODEL)
        self.distribution_analyzer = GB2312DistributionAnalysis()
        self.reset()

    @property
    def charset_name(self):
        return "GB2312"

    @property
    def language(self):
        return "Chinese"
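
# Illustrative note for this listing (not upstream code): in practice this
# prober is rarely driven directly; it is registered inside MBCSGroupProber
# and consulted through the package's top-level API. A hedged usage sketch,
# assuming the vendored package layout of this tree; the input bytes are a
# placeholder:
#
#     from pip._vendor import chardet
#
#     result = chardet.detect(gb2312_bytes)
#     # e.g. {'encoding': 'GB2312', 'confidence': 0.99, 'language': 'Chinese'}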
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/hebrewprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Shy Shalom
# Portions created by the Initial Developer are Copyright (C) 2005
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .charsetprober import CharSetProber
from .enums import ProbingState

# This prober doesn't actually recognize a language or a charset.
# It is a helper prober for the use of the Hebrew model probers.

### General ideas of the Hebrew charset recognition ###
#
# Four main charsets exist in Hebrew:
# "ISO-8859-8" - Visual Hebrew
# "windows-1255" - Logical Hebrew
# "ISO-8859-8-I" - Logical Hebrew
# "x-mac-hebrew" - ?? Logical Hebrew ??
#
# Both "ISO" charsets use a completely identical set of code points, whereas
# "windows-1255" and "x-mac-hebrew" are two different proper supersets of
# these code points. windows-1255 defines additional characters in the range
# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific
# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6.
# x-mac-hebrew defines similar additional code points but with a different
# mapping.
#
# As far as an average Hebrew text with no diacritics is concerned, all four
# charsets are identical with respect to code points. Meaning that for the
# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters
# (including final letters).
#
# The dominant difference between these charsets is their directionality.
# "Visual" directionality means that the text is ordered as if the renderer is
# not aware of a BIDI rendering algorithm. The renderer sees the text and
# draws it from left to right. The text itself when ordered naturally is read
# backwards. A buffer of Visual Hebrew generally looks like so:
# "[last word of first line spelled backwards] [whole line ordered backwards
# and spelled backwards] [first word of first line spelled backwards]
# [end of line] [last word of second line] ... etc' "
# Adding punctuation marks, numbers and English text to visual text is
# naturally also "visual" and from left to right.
#
# "Logical" directionality means the text is ordered "naturally" according to
# the order it is read.
# It is the responsibility of the renderer to display the text from right to
# left. A BIDI algorithm is used to place general punctuation marks, numbers
# and English text in the text.
#
# Texts in x-mac-hebrew are almost impossible to find on the Internet. From
# what little evidence I could find, it seems that its general
# directionality is Logical.
#
# To sum up all of the above, the Hebrew probing mechanism knows about two
# charsets:
# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are
#    backwards while line order is natural. For charset recognition purposes
#    the line order is unimportant (In fact, for this implementation, even
#    word order is unimportant).
# Logical Hebrew - "windows-1255" - normal, naturally ordered text.
#
# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be
#    specifically identified.
# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew
#    that contains special punctuation marks or diacritics is displayed with
#    some unconverted characters showing as question marks. This problem
#    might be corrected using another model prober for x-mac-hebrew. Given
#    that x-mac-hebrew texts are so rare, writing another model prober isn't
#    worth the effort and performance hit.
#
#### The Prober ####
#
# The prober is divided between two SBCharSetProbers and a HebrewProber,
# all of which are managed, created, fed data, inquired and deleted by the
# SBCSGroupProber. The two SBCharSetProbers identify that the text is in
# fact some kind of Hebrew, Logical or Visual. The final decision about
# which one it is is made by the HebrewProber by combining final-letter
# scores with the scores of the two SBCharSetProbers to produce a final
# answer.
#
# The SBCSGroupProber is responsible for stripping the original text of HTML
# tags, English characters, numbers, low-ASCII punctuation characters,
# spaces and new lines. It reduces any sequence of such characters to a
# single space. The buffer fed to each prober in the SBCS group prober is
# pure text in high-ASCII.
# The two SBCharSetProbers (model probers) share the same language model:
# Win1255Model.
# The first SBCharSetProber uses the model normally as any other
# SBCharSetProber does, to recognize windows-1255, upon which this model was
# built. The second SBCharSetProber is told to make the pair-of-letter
# lookup in the language model backwards. This in practice exactly simulates
# a visual Hebrew model using the windows-1255 logical Hebrew model.
#
# The HebrewProber does not use any language model. All it does is look for
# final-letter evidence suggesting the text is either logical Hebrew or
# visual Hebrew. Disjointed from the model probers, the results of the
# HebrewProber alone are meaningless. HebrewProber always returns 0.00 as
# confidence since it never identifies a charset by itself. Instead, the
# pointer to the HebrewProber is passed to the model probers as a helper
# "Name Prober".
# When the Group prober receives a positive identification from any prober,
# it asks for the name of the charset identified. If the prober queried is a
# Hebrew model prober, the model prober forwards the call to the
# HebrewProber to make the final decision. In the HebrewProber, the
# decision is made according to the final-letter scores maintained and both
# model probers' scores. The answer is returned in the form of the name of
# the charset identified, either "windows-1255" or "ISO-8859-8".
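
# --- Illustrative wiring sketch (not part of the original file) ---
# A minimal sketch of how SBCSGroupProber wires the helper prober described
# above to the two model probers. The boolean flag marks the reversed
# (visual) pair lookup; the names mirror the real chardet package, but this
# is a simplified assumption, not the vendored sbcsgroupprober.py itself.
#
#     hebrew_prober = HebrewProber()
#     logical_hebrew = SingleByteCharSetProber(Win1255HebrewModel,
#                                              False, hebrew_prober)
#     visual_hebrew = SingleByteCharSetProber(Win1255HebrewModel,
#                                             True, hebrew_prober)
#     hebrew_prober.set_model_probers(logical_hebrew, visual_hebrew)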
class HebrewProber(CharSetProber):
    # windows-1255 / ISO-8859-8 code points of interest
    FINAL_KAF = 0xea
    NORMAL_KAF = 0xeb
    FINAL_MEM = 0xed
    NORMAL_MEM = 0xee
    FINAL_NUN = 0xef
    NORMAL_NUN = 0xf0
    FINAL_PE = 0xf3
    NORMAL_PE = 0xf4
    FINAL_TSADI = 0xf5
    NORMAL_TSADI = 0xf6

    # Minimum Visual vs Logical final letter score difference.
    # If the difference is below this, don't rely solely on the final letter
    # score distance.
    MIN_FINAL_CHAR_DISTANCE = 5

    # Minimum Visual vs Logical model score difference.
    # If the difference is below this, don't rely at all on the model score
    # distance.
    MIN_MODEL_DISTANCE = 0.01

    VISUAL_HEBREW_NAME = "ISO-8859-8"
    LOGICAL_HEBREW_NAME = "windows-1255"

    def __init__(self):
        super(HebrewProber, self).__init__()
        self._final_char_logical_score = None
        self._final_char_visual_score = None
        self._prev = None
        self._before_prev = None
        self._logical_prober = None
        self._visual_prober = None
        self.reset()

    def reset(self):
        self._final_char_logical_score = 0
        self._final_char_visual_score = 0
        # The two last characters seen in the previous buffer,
        # mPrev and mBeforePrev, are initialized to space in order to
        # simulate a word delimiter at the beginning of the data.
        self._prev = ' '
        self._before_prev = ' '

    # These probers are owned by the group prober.
    def set_model_probers(self, logicalProber, visualProber):
        self._logical_prober = logicalProber
        self._visual_prober = visualProber

    def is_final(self, c):
        return c in [self.FINAL_KAF, self.FINAL_MEM, self.FINAL_NUN,
                     self.FINAL_PE, self.FINAL_TSADI]

    def is_non_final(self, c):
        # The normal Tsadi is not a good Non-Final letter due to words like
        # 'lechotet' (to chat) containing an apostrophe after the tsadi. This
        # apostrophe is converted to a space in FilterWithoutEnglishLetters,
        # causing the Non-Final tsadi to appear at the end of a word even
        # though this is not the case in the original text.
        # The letters Pe and Kaf rarely display a related behavior of not
        # being a good Non-Final letter. Words like 'Pop', 'Winamp' and
        # 'Mubarak', for example, legally end with a Non-Final Pe or Kaf.
        # However, the benefit of these letters as Non-Final letters
        # outweighs the damage since such words are quite rare.
        return c in [self.NORMAL_KAF, self.NORMAL_MEM,
                     self.NORMAL_NUN, self.NORMAL_PE]
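
    # --- Illustrative note (not part of the original file) ---
    # A hedged worked example of the two predicates above, assuming
    # windows-1255 byte values: "shalom" is encoded 0xF9 0xEC 0xE5 0xED, so
    # its last byte is a final Mem and is_final(0xED) is True. Conversely,
    # is_non_final(0xEE) (the normal Mem) is True, so seeing a normal Mem at
    # the end of a word counts as evidence of Visual (reversed) ordering.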
    def feed(self, byte_str):
        # Final letter analysis for logical-visual decision.
        # Look for evidence that the received buffer is either logical
        # Hebrew or visual Hebrew.
        # The following cases are checked:
        # 1) A word longer than 1 letter, ending with a final letter. This
        #    is an indication that the text is laid out "naturally" since
        #    the final letter really appears at the end. +1 for logical
        #    score.
        # 2) A word longer than 1 letter, ending with a Non-Final letter. In
        #    normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi
        #    should not end with the Non-Final form of that letter.
        #    Exceptions to this rule are mentioned above in is_non_final().
        #    This is an indication that the text is laid out backwards.
        #    +1 for visual score.
        # 3) A word longer than 1 letter, starting with a final letter.
        #    Final letters should not appear at the beginning of a word.
        #    This is an indication that the text is laid out backwards.
        #    +1 for visual score.
        #
        # The visual score and logical score are accumulated throughout the
        # text and are finally checked against each other in charset_name.
        # No checking for final letters in the middle of words is done since
        # that case is not an indication for either Logical or Visual text.
        #
        # We automatically filter out all 7-bit characters (replace them
        # with spaces) so the word boundary detection works properly. [MAP]

        if self.state == ProbingState.NOT_ME:
            # Both model probers say it's not them. No reason to continue.
            return ProbingState.NOT_ME

        byte_str = self.filter_high_byte_only(byte_str)

        for cur in byte_str:
            if cur == ' ':
                # We stand on a space - a word just ended
                if self._before_prev != ' ':
                    # next-to-last char was not a space so self._prev is not
                    # a 1 letter word
                    if self.is_final(self._prev):
                        # case (1) [-2:not space][-1:final letter][cur:space]
                        self._final_char_logical_score += 1
                    elif self.is_non_final(self._prev):
                        # case (2) [-2:not space][-1:Non-Final letter]
                        #  [cur:space]
                        self._final_char_visual_score += 1
            else:
                # Not standing on a space
                if ((self._before_prev == ' ') and
                        (self.is_final(self._prev)) and (cur != ' ')):
                    # case (3) [-2:space][-1:final letter][cur:not space]
                    self._final_char_visual_score += 1
            self._before_prev = self._prev
            self._prev = cur

        # Forever detecting, till the end or until both model probers return
        # ProbingState.NOT_ME (handled above)
        return ProbingState.DETECTING

    @property
    def charset_name(self):
        # Make the decision: is it Logical or Visual?
        # If the final letter score distance is dominant enough, rely on it.
        finalsub = self._final_char_logical_score - \
            self._final_char_visual_score
        if finalsub >= self.MIN_FINAL_CHAR_DISTANCE:
            return self.LOGICAL_HEBREW_NAME
        if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE:
            return self.VISUAL_HEBREW_NAME

        # It's not dominant enough, try to rely on the model scores instead.
        modelsub = (self._logical_prober.get_confidence() -
                    self._visual_prober.get_confidence())
        if modelsub > self.MIN_MODEL_DISTANCE:
            return self.LOGICAL_HEBREW_NAME
        if modelsub < -self.MIN_MODEL_DISTANCE:
            return self.VISUAL_HEBREW_NAME

        # Still no good, back to final letter distance, maybe it'll save the
        # day.
        if finalsub < 0.0:
            return self.VISUAL_HEBREW_NAME

        # (finalsub > 0 - Logical) or (don't know what to do) default to
        # Logical.
        return self.LOGICAL_HEBREW_NAME

    @property
    def language(self):
        return 'Hebrew'

    @property
    def state(self):
        # Remain active as long as any of the model probers are active.
        if (self._logical_prober.state == ProbingState.NOT_ME) and \
                (self._visual_prober.state == ProbingState.NOT_ME):
            return ProbingState.NOT_ME
        return ProbingState.DETECTING
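
# --- Illustrative walk-through (not part of the original file) ---
# A hedged example of the decision cascade in charset_name, with made-up
# scores: final-letter scores logical=3, visual=1 give finalsub=2, which is
# below MIN_FINAL_CHAR_DISTANCE (5), so the model scores are consulted;
# model confidences of 0.35 vs 0.32 give modelsub=0.03 > MIN_MODEL_DISTANCE
# (0.01), so "windows-1255" (Logical Hebrew) is returned.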
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/jisfreq.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# (LGPL 2.1 notice identical to the one in gb2312prober.py above.)
######################### END LICENSE BLOCK #########################

# Sampled from about 20M of text material, including literature and
# computer technology.
#
# Japanese frequency table, applied to both S-JIS and EUC-JP.
# The characters are sorted in order of decreasing frequency.
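
# --- Illustrative sketch (not part of the vendored file) ---
# A hedged sketch of how chardistribution.py consumes a frequency table
# like JIS_CHAR_TO_FREQ_ORDER below; the variable names here are
# illustrative, not the vendored implementation:
#
#     order = JIS_CHAR_TO_FREQ_ORDER[char_index]  # frequency rank
#     total_chars += 1
#     if order < 512:            # one of the 512 most frequent characters
#         freq_chars += 1
#     # confidence compares frequent vs rare characters against the
#     # typical distribution ratio quoted below
#     confidence = freq_chars / ((total_chars - freq_chars)
#                                * JIS_TYPICAL_DISTRIBUTION_RATIO)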
# 128 --> 0.77094 # 256 --> 0.85710 # 512 --> 0.92635 # 1024 --> 0.97130 # 2048 --> 0.99431 # # Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58 # Random Distribution Ration = 512 / (2965+62+83+86-512) = 0.191 # # Typical Distribution Ratio, 25% of IDR JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0 # Char to FreqOrder table , JIS_TABLE_SIZE = 4368 JIS_CHAR_TO_FREQ_ORDER = ( 40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16 3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32 1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48 2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64 2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80 5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96 1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112 5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128 5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144 5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160 5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176 5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192 5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208 1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224 1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240 1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256 2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272 3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288 3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304 4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320 12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336 1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352 109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368 5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384 271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 80, # 400 32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416 43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432 280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448 54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464 5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480 5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496 5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512 4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528 5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544 5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560 5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576 5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592 
5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608 5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624 5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640 5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656 5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672 3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688 5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704 5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720 5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736 5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752 5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768 5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784 5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800 5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816 5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832 5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848 5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864 5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880 5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896 5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912 5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928 5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944 5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960 5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976 5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992 5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008 5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024 5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040 5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056 5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072 5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088 5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104 5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120 5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136 5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152 5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168 5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184 5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200 5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216 5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232 
5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248 5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264 5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280 5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296 6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312 6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328 6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344 6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360 6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376 6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392 6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408 6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424 4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440 854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456 665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472 1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488 1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504 896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520 3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536 3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552 804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568 3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584 3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600 586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616 2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632 277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648 3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664 1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 1680 380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696 1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712 850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728 2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744 2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760 2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776 2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792 1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808 1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824 1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840 1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856 2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 1872 1866,1725,1167,1640, 753, 398,2661,1053, 
246, 348,4318, 137,1024,3440,1600,2077, # 1888 2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904 1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920 1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936 1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952 1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968 1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984 1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000 606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016 684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032 1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048 2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064 2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080 2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096 3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112 3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128 884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144 3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160 1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176 861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192 2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208 1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224 576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240 3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256 4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272 2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288 1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304 2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 706, # 2320 1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336 385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352 178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368 1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384 2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400 2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416 2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432 3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448 1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464 2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480 359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496 837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, # 2512 855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528 
1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544 2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560 633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576 1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592 1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608 353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624 1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640 1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656 1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672 764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688 2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704 278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720 2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736 3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752 2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768 1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784 6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800 1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816 2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832 1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848 470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864 72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880 3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896 3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912 1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928 1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944 1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 760, # 2960 1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976 123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992 913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008 2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024 900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040 3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056 2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072 423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088 1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104 2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120 220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136 1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, # 3152 745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168 4337,2490, 319, 510, 119, 457,3612, 
274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184 2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200 1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216 666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232 1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248 2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264 376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280 6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296 1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312 1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328 2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344 3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360 914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376 3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392 1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408 674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424 1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440 199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456 3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472 370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488 2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504 414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520 4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536 2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552 1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568 1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584 1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 889,6169,2310,1275,1410, 973, # 3600 166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616 1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632 3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648 1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664 3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680 264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696 543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712 983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728 2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744 1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760 867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776 1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 904,3618,3537, # 3792 894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808 1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 
3824 530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840 839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856 480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872 1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888 1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904 2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920 4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936 227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952 1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968 328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984 1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000 3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016 1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032 2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048 2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064 1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080 1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096 2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112 455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128 2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144 1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160 1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176 1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192 1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208 3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224 2514,1267,2412,2610, 177,2703,3542, 774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240 2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256 575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272 3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288 3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304 1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320 2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336 1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352 2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512 ) �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� 
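
# --- Illustrative sketch (not part of the vendored file) ---
# How the distribution-ratio figures quoted above the table fit together:
# the 512 most frequent characters cover 92.635% of the sampled text, and
# the ratio actually applied is a quarter of the ideal one.
if __name__ == "__main__":
    coverage = 0.92635
    ideal = coverage / (1 - coverage)
    print(round(ideal, 2))         # 12.58, the "Ideal Distribution Ratio"
    print(round(ideal * 0.25, 2))  # 3.14, i.e. 25% of IDR, used as 3.0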
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/jpcntx.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # This is hiragana 2-char sequence table, the number in each cell represents its frequency category jp2CharContext = ( (0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1), (2,4,0,4,0,3,0,4,0,3,4,4,4,2,4,3,3,4,3,2,3,3,4,2,3,3,3,2,4,1,4,3,3,1,5,4,3,4,3,4,3,5,3,0,3,5,4,2,0,3,1,0,3,3,0,3,3,0,1,1,0,4,3,0,3,3,0,4,0,2,0,3,5,5,5,5,4,0,4,1,0,3,4), (0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2), (0,4,0,5,0,5,0,4,0,4,5,4,4,3,5,3,5,1,5,3,4,3,4,4,3,4,3,3,4,3,5,4,4,3,5,5,3,5,5,5,3,5,5,3,4,5,5,3,1,3,2,0,3,4,0,4,2,0,4,2,1,5,3,2,3,5,0,4,0,2,0,5,4,4,5,4,5,0,4,0,0,4,4), (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), (0,3,0,4,0,3,0,3,0,4,5,4,3,3,3,3,4,3,5,4,4,3,5,4,4,3,4,3,4,4,4,4,5,3,4,4,3,4,5,5,4,5,5,1,4,5,4,3,0,3,3,1,3,3,0,4,4,0,3,3,1,5,3,3,3,5,0,4,0,3,0,4,4,3,4,3,3,0,4,1,1,3,4), (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), (0,4,0,3,0,3,0,4,0,3,4,4,3,2,2,1,2,1,3,1,3,3,3,3,3,4,3,1,3,3,5,3,3,0,4,3,0,5,4,3,3,5,4,4,3,4,4,5,0,1,2,0,1,2,0,2,2,0,1,0,0,5,2,2,1,4,0,3,0,1,0,4,4,3,5,4,3,0,2,1,0,4,3), (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), (0,3,0,5,0,4,0,2,1,4,4,2,4,1,4,2,4,2,4,3,3,3,4,3,3,3,3,1,4,2,3,3,3,1,4,4,1,1,1,4,3,3,2,0,2,4,3,2,0,3,3,0,3,1,1,0,0,0,3,3,0,4,2,2,3,4,0,4,0,3,0,4,4,5,3,4,4,0,3,0,0,1,4), (1,4,0,4,0,4,0,4,0,3,5,4,4,3,4,3,5,4,3,3,4,3,5,4,4,4,4,3,4,2,4,3,3,1,5,4,3,2,4,5,4,5,5,4,4,5,4,4,0,3,2,2,3,3,0,4,3,1,3,2,1,4,3,3,4,5,0,3,0,2,0,4,5,5,4,5,4,0,4,0,0,5,4), (0,5,0,5,0,4,0,3,0,4,4,3,4,3,3,3,4,0,4,4,4,3,4,3,4,3,3,1,4,2,4,3,4,0,5,4,1,4,5,4,4,5,3,2,4,3,4,3,2,4,1,3,3,3,2,3,2,0,4,3,3,4,3,3,3,4,0,4,0,3,0,4,5,4,4,4,3,0,4,1,0,1,3), (0,3,1,4,0,3,0,2,0,3,4,4,3,1,4,2,3,3,4,3,4,3,4,3,4,4,3,2,3,1,5,4,4,1,4,4,3,5,4,4,3,5,5,4,3,4,4,3,1,2,3,1,2,2,0,3,2,0,3,1,0,5,3,3,3,4,3,3,3,3,4,4,4,4,5,4,2,0,3,3,2,4,3), (0,2,0,3,0,1,0,1,0,0,3,2,0,0,2,0,1,0,2,1,3,3,3,1,2,3,1,0,1,0,4,2,1,1,3,3,0,4,3,3,1,4,3,3,0,3,3,2,0,0,0,0,1,0,0,2,0,0,0,0,0,4,1,0,2,3,2,2,2,1,3,3,3,4,4,3,2,0,3,1,0,3,3), (0,4,0,4,0,3,0,3,0,4,4,4,3,3,3,3,3,3,4,3,4,2,4,3,4,3,3,2,4,3,4,5,4,1,4,5,3,5,4,5,3,5,4,0,3,5,5,3,1,3,3,2,2,3,0,3,4,1,3,3,2,4,3,3,3,4,0,4,0,3,0,4,5,4,4,5,3,0,4,1,0,3,4), (0,2,0,3,0,3,0,0,0,2,2,2,1,0,1,0,0,0,3,0,3,0,3,0,1,3,1,0,3,1,3,3,3,1,3,3,3,0,1,3,1,3,4,0,0,3,1,1,0,3,2,0,0,0,0,1,3,0,1,0,0,3,3,2,0,3,0,0,0,0,0,3,4,3,4,3,3,0,3,0,0,2,3), (2,3,0,3,0,2,0,1,0,3,3,4,3,1,3,1,1,1,3,1,4,3,4,3,3,3,0,0,3,1,5,4,3,1,4,3,2,5,5,4,4,4,4,3,3,4,4,4,0,2,1,1,3,2,0,1,2,0,0,1,0,4,1,3,3,3,0,3,0,1,0,4,4,4,5,5,3,0,2,0,0,4,4), (0,2,0,1,0,3,1,3,0,2,3,3,3,0,3,1,0,0,3,0,3,2,3,1,3,2,1,1,0,0,4,2,1,0,2,3,1,4,3,2,0,4,4,3,1,3,1,3,0,1,0,0,1,0,0,0,1,0,0,0,0,4,1,1,1,2,0,3,0,0,0,3,4,2,4,3,2,0,1,0,0,3,3), 
(0,1,0,4,0,5,0,4,0,2,4,4,2,3,3,2,3,3,5,3,3,3,4,3,4,2,3,0,4,3,3,3,4,1,4,3,2,1,5,5,3,4,5,1,3,5,4,2,0,3,3,0,1,3,0,4,2,0,1,3,1,4,3,3,3,3,0,3,0,1,0,3,4,4,4,5,5,0,3,0,1,4,5), (0,2,0,3,0,3,0,0,0,2,3,1,3,0,4,0,1,1,3,0,3,4,3,2,3,1,0,3,3,2,3,1,3,0,2,3,0,2,1,4,1,2,2,0,0,3,3,0,0,2,0,0,0,1,0,0,0,0,2,2,0,3,2,1,3,3,0,2,0,2,0,0,3,3,1,2,4,0,3,0,2,2,3), (2,4,0,5,0,4,0,4,0,2,4,4,4,3,4,3,3,3,1,2,4,3,4,3,4,4,5,0,3,3,3,3,2,0,4,3,1,4,3,4,1,4,4,3,3,4,4,3,1,2,3,0,4,2,0,4,1,0,3,3,0,4,3,3,3,4,0,4,0,2,0,3,5,3,4,5,2,0,3,0,0,4,5), (0,3,0,4,0,1,0,1,0,1,3,2,2,1,3,0,3,0,2,0,2,0,3,0,2,0,0,0,1,0,1,1,0,0,3,1,0,0,0,4,0,3,1,0,2,1,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,4,2,2,3,1,0,3,0,0,0,1,4,4,4,3,0,0,4,0,0,1,4), (1,4,1,5,0,3,0,3,0,4,5,4,4,3,5,3,3,4,4,3,4,1,3,3,3,3,2,1,4,1,5,4,3,1,4,4,3,5,4,4,3,5,4,3,3,4,4,4,0,3,3,1,2,3,0,3,1,0,3,3,0,5,4,4,4,4,4,4,3,3,5,4,4,3,3,5,4,0,3,2,0,4,4), (0,2,0,3,0,1,0,0,0,1,3,3,3,2,4,1,3,0,3,1,3,0,2,2,1,1,0,0,2,0,4,3,1,0,4,3,0,4,4,4,1,4,3,1,1,3,3,1,0,2,0,0,1,3,0,0,0,0,2,0,0,4,3,2,4,3,5,4,3,3,3,4,3,3,4,3,3,0,2,1,0,3,3), (0,2,0,4,0,3,0,2,0,2,5,5,3,4,4,4,4,1,4,3,3,0,4,3,4,3,1,3,3,2,4,3,0,3,4,3,0,3,4,4,2,4,4,0,4,5,3,3,2,2,1,1,1,2,0,1,5,0,3,3,2,4,3,3,3,4,0,3,0,2,0,4,4,3,5,5,0,0,3,0,2,3,3), (0,3,0,4,0,3,0,1,0,3,4,3,3,1,3,3,3,0,3,1,3,0,4,3,3,1,1,0,3,0,3,3,0,0,4,4,0,1,5,4,3,3,5,0,3,3,4,3,0,2,0,1,1,1,0,1,3,0,1,2,1,3,3,2,3,3,0,3,0,1,0,1,3,3,4,4,1,0,1,2,2,1,3), (0,1,0,4,0,4,0,3,0,1,3,3,3,2,3,1,1,0,3,0,3,3,4,3,2,4,2,0,1,0,4,3,2,0,4,3,0,5,3,3,2,4,4,4,3,3,3,4,0,1,3,0,0,1,0,0,1,0,0,0,0,4,2,3,3,3,0,3,0,0,0,4,4,4,5,3,2,0,3,3,0,3,5), (0,2,0,3,0,0,0,3,0,1,3,0,2,0,0,0,1,0,3,1,1,3,3,0,0,3,0,0,3,0,2,3,1,0,3,1,0,3,3,2,0,4,2,2,0,2,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0,2,1,2,0,1,0,1,0,0,0,1,3,1,2,0,0,0,1,0,0,1,4), (0,3,0,3,0,5,0,1,0,2,4,3,1,3,3,2,1,1,5,2,1,0,5,1,2,0,0,0,3,3,2,2,3,2,4,3,0,0,3,3,1,3,3,0,2,5,3,4,0,3,3,0,1,2,0,2,2,0,3,2,0,2,2,3,3,3,0,2,0,1,0,3,4,4,2,5,4,0,3,0,0,3,5), (0,3,0,3,0,3,0,1,0,3,3,3,3,0,3,0,2,0,2,1,1,0,2,0,1,0,0,0,2,1,0,0,1,0,3,2,0,0,3,3,1,2,3,1,0,3,3,0,0,1,0,0,0,0,0,2,0,0,0,0,0,2,3,1,2,3,0,3,0,1,0,3,2,1,0,4,3,0,1,1,0,3,3), (0,4,0,5,0,3,0,3,0,4,5,5,4,3,5,3,4,3,5,3,3,2,5,3,4,4,4,3,4,3,4,5,5,3,4,4,3,4,4,5,4,4,4,3,4,5,5,4,2,3,4,2,3,4,0,3,3,1,4,3,2,4,3,3,5,5,0,3,0,3,0,5,5,5,5,4,4,0,4,0,1,4,4), (0,4,0,4,0,3,0,3,0,3,5,4,4,2,3,2,5,1,3,2,5,1,4,2,3,2,3,3,4,3,3,3,3,2,5,4,1,3,3,5,3,4,4,0,4,4,3,1,1,3,1,0,2,3,0,2,3,0,3,0,0,4,3,1,3,4,0,3,0,2,0,4,4,4,3,4,5,0,4,0,0,3,4), (0,3,0,3,0,3,1,2,0,3,4,4,3,3,3,0,2,2,4,3,3,1,3,3,3,1,1,0,3,1,4,3,2,3,4,4,2,4,4,4,3,4,4,3,2,4,4,3,1,3,3,1,3,3,0,4,1,0,2,2,1,4,3,2,3,3,5,4,3,3,5,4,4,3,3,0,4,0,3,2,2,4,4), (0,2,0,1,0,0,0,0,0,1,2,1,3,0,0,0,0,0,2,0,1,2,1,0,0,1,0,0,0,0,3,0,0,1,0,1,1,3,1,0,0,0,1,1,0,1,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,1,1,2,2,0,3,4,0,0,0,1,1,0,0,1,0,0,0,0,0,1,1), (0,1,0,0,0,1,0,0,0,0,4,0,4,1,4,0,3,0,4,0,3,0,4,0,3,0,3,0,4,1,5,1,4,0,0,3,0,5,0,5,2,0,1,0,0,0,2,1,4,0,1,3,0,0,3,0,0,3,1,1,4,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0), (1,4,0,5,0,3,0,2,0,3,5,4,4,3,4,3,5,3,4,3,3,0,4,3,3,3,3,3,3,2,4,4,3,1,3,4,4,5,4,4,3,4,4,1,3,5,4,3,3,3,1,2,2,3,3,1,3,1,3,3,3,5,3,3,4,5,0,3,0,3,0,3,4,3,4,4,3,0,3,0,2,4,3), (0,1,0,4,0,0,0,0,0,1,4,0,4,1,4,2,4,0,3,0,1,0,1,0,0,0,0,0,2,0,3,1,1,1,0,3,0,0,0,1,2,1,0,0,1,1,1,1,0,1,0,0,0,1,0,0,3,0,0,0,0,3,2,0,2,2,0,1,0,0,0,2,3,2,3,3,0,0,0,0,2,1,0), (0,5,1,5,0,3,0,3,0,5,4,4,5,1,5,3,3,0,4,3,4,3,5,3,4,3,3,2,4,3,4,3,3,0,3,3,1,4,4,3,4,4,4,3,4,5,5,3,2,3,1,1,3,3,1,3,1,1,3,3,2,4,5,3,3,5,0,4,0,3,0,4,4,3,5,3,3,0,3,4,0,4,3), (0,5,0,5,0,3,0,2,0,4,4,3,5,2,4,3,3,3,4,4,4,3,5,3,5,3,3,1,4,0,4,3,3,0,3,3,0,4,4,4,4,5,4,3,3,5,5,3,2,3,1,2,3,2,0,1,0,0,3,2,2,4,4,3,1,5,0,4,0,3,0,4,3,1,3,2,1,0,3,3,0,3,3), 
(0,4,0,5,0,5,0,4,0,4,5,5,5,3,4,3,3,2,5,4,4,3,5,3,5,3,4,0,4,3,4,4,3,2,4,4,3,4,5,4,4,5,5,0,3,5,5,4,1,3,3,2,3,3,1,3,1,0,4,3,1,4,4,3,4,5,0,4,0,2,0,4,3,4,4,3,3,0,4,0,0,5,5), (0,4,0,4,0,5,0,1,1,3,3,4,4,3,4,1,3,0,5,1,3,0,3,1,3,1,1,0,3,0,3,3,4,0,4,3,0,4,4,4,3,4,4,0,3,5,4,1,0,3,0,0,2,3,0,3,1,0,3,1,0,3,2,1,3,5,0,3,0,1,0,3,2,3,3,4,4,0,2,2,0,4,4), (2,4,0,5,0,4,0,3,0,4,5,5,4,3,5,3,5,3,5,3,5,2,5,3,4,3,3,4,3,4,5,3,2,1,5,4,3,2,3,4,5,3,4,1,2,5,4,3,0,3,3,0,3,2,0,2,3,0,4,1,0,3,4,3,3,5,0,3,0,1,0,4,5,5,5,4,3,0,4,2,0,3,5), (0,5,0,4,0,4,0,2,0,5,4,3,4,3,4,3,3,3,4,3,4,2,5,3,5,3,4,1,4,3,4,4,4,0,3,5,0,4,4,4,4,5,3,1,3,4,5,3,3,3,3,3,3,3,0,2,2,0,3,3,2,4,3,3,3,5,3,4,1,3,3,5,3,2,0,0,0,0,4,3,1,3,3), (0,1,0,3,0,3,0,1,0,1,3,3,3,2,3,3,3,0,3,0,0,0,3,1,3,0,0,0,2,2,2,3,0,0,3,2,0,1,2,4,1,3,3,0,0,3,3,3,0,1,0,0,2,1,0,0,3,0,3,1,0,3,0,0,1,3,0,2,0,1,0,3,3,1,3,3,0,0,1,1,0,3,3), (0,2,0,3,0,2,1,4,0,2,2,3,1,1,3,1,1,0,2,0,3,1,2,3,1,3,0,0,1,0,4,3,2,3,3,3,1,4,2,3,3,3,3,1,0,3,1,4,0,1,1,0,1,2,0,1,1,0,1,1,0,3,1,3,2,2,0,1,0,0,0,2,3,3,3,1,0,0,0,0,0,2,3), (0,5,0,4,0,5,0,2,0,4,5,5,3,3,4,3,3,1,5,4,4,2,4,4,4,3,4,2,4,3,5,5,4,3,3,4,3,3,5,5,4,5,5,1,3,4,5,3,1,4,3,1,3,3,0,3,3,1,4,3,1,4,5,3,3,5,0,4,0,3,0,5,3,3,1,4,3,0,4,0,1,5,3), (0,5,0,5,0,4,0,2,0,4,4,3,4,3,3,3,3,3,5,4,4,4,4,4,4,5,3,3,5,2,4,4,4,3,4,4,3,3,4,4,5,5,3,3,4,3,4,3,3,4,3,3,3,3,1,2,2,1,4,3,3,5,4,4,3,4,0,4,0,3,0,4,4,4,4,4,1,0,4,2,0,2,4), (0,4,0,4,0,3,0,1,0,3,5,2,3,0,3,0,2,1,4,2,3,3,4,1,4,3,3,2,4,1,3,3,3,0,3,3,0,0,3,3,3,5,3,3,3,3,3,2,0,2,0,0,2,0,0,2,0,0,1,0,0,3,1,2,2,3,0,3,0,2,0,4,4,3,3,4,1,0,3,0,0,2,4), (0,0,0,4,0,0,0,0,0,0,1,0,1,0,2,0,0,0,0,0,1,0,2,0,1,0,0,0,0,0,3,1,3,0,3,2,0,0,0,1,0,3,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,0,2,0,0,0,0,0,0,2), (0,2,1,3,0,2,0,2,0,3,3,3,3,1,3,1,3,3,3,3,3,3,4,2,2,1,2,1,4,0,4,3,1,3,3,3,2,4,3,5,4,3,3,3,3,3,3,3,0,1,3,0,2,0,0,1,0,0,1,0,0,4,2,0,2,3,0,3,3,0,3,3,4,2,3,1,4,0,1,2,0,2,3), (0,3,0,3,0,1,0,3,0,2,3,3,3,0,3,1,2,0,3,3,2,3,3,2,3,2,3,1,3,0,4,3,2,0,3,3,1,4,3,3,2,3,4,3,1,3,3,1,1,0,1,1,0,1,0,1,0,1,0,0,0,4,1,1,0,3,0,3,1,0,2,3,3,3,3,3,1,0,0,2,0,3,3), (0,0,0,0,0,0,0,0,0,0,3,0,2,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,3,0,3,1,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,2,0,2,3,0,0,0,0,0,0,0,0,3), (0,2,0,3,1,3,0,3,0,2,3,3,3,1,3,1,3,1,3,1,3,3,3,1,3,0,2,3,1,1,4,3,3,2,3,3,1,2,2,4,1,3,3,0,1,4,2,3,0,1,3,0,3,0,0,1,3,0,2,0,0,3,3,2,1,3,0,3,0,2,0,3,4,4,4,3,1,0,3,0,0,3,3), (0,2,0,1,0,2,0,0,0,1,3,2,2,1,3,0,1,1,3,0,3,2,3,1,2,0,2,0,1,1,3,3,3,0,3,3,1,1,2,3,2,3,3,1,2,3,2,0,0,1,0,0,0,0,0,0,3,0,1,0,0,2,1,2,1,3,0,3,0,0,0,3,4,4,4,3,2,0,2,0,0,2,4), (0,0,0,1,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,2,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,3,1,0,0,0,0,0,0,0,3), (0,3,0,3,0,2,0,3,0,3,3,3,2,3,2,2,2,0,3,1,3,3,3,2,3,3,0,0,3,0,3,2,2,0,2,3,1,4,3,4,3,3,2,3,1,5,4,4,0,3,1,2,1,3,0,3,1,1,2,0,2,3,1,3,1,3,0,3,0,1,0,3,3,4,4,2,1,0,2,1,0,2,4), (0,1,0,3,0,1,0,2,0,1,4,2,5,1,4,0,2,0,2,1,3,1,4,0,2,1,0,0,2,1,4,1,1,0,3,3,0,5,1,3,2,3,3,1,0,3,2,3,0,1,0,0,0,0,0,0,1,0,0,0,0,4,0,1,0,3,0,2,0,1,0,3,3,3,4,3,3,0,0,0,0,2,3), (0,0,0,1,0,0,0,0,0,0,2,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,0,1,0,0,0,0,0,3), (0,1,0,3,0,4,0,3,0,2,4,3,1,0,3,2,2,1,3,1,2,2,3,1,1,1,2,1,3,0,1,2,0,1,3,2,1,3,0,5,5,1,0,0,1,3,2,1,0,3,0,0,1,0,0,0,0,0,3,4,0,1,1,1,3,2,0,2,0,1,0,2,3,3,1,2,3,0,1,0,1,0,4), (0,0,0,1,0,3,0,3,0,2,2,1,0,0,4,0,3,0,3,1,3,0,3,0,3,0,1,0,3,0,3,1,3,0,3,3,0,0,1,2,1,1,1,0,1,2,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,2,2,1,2,0,0,2,0,0,0,0,2,3,3,3,3,0,0,0,0,1,4), 
(0,0,0,3,0,3,0,0,0,0,3,1,1,0,3,0,1,0,2,0,1,0,0,0,0,0,0,0,1,0,3,0,2,0,2,3,0,0,2,2,3,1,2,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,2,0,0,0,0,2,3), (2,4,0,5,0,5,0,4,0,3,4,3,3,3,4,3,3,3,4,3,4,4,5,4,5,5,5,2,3,0,5,5,4,1,5,4,3,1,5,4,3,4,4,3,3,4,3,3,0,3,2,0,2,3,0,3,0,0,3,3,0,5,3,2,3,3,0,3,0,3,0,3,4,5,4,5,3,0,4,3,0,3,4), (0,3,0,3,0,3,0,3,0,3,3,4,3,2,3,2,3,0,4,3,3,3,3,3,3,3,3,0,3,2,4,3,3,1,3,4,3,4,4,4,3,4,4,3,2,4,4,1,0,2,0,0,1,1,0,2,0,0,3,1,0,5,3,2,1,3,0,3,0,1,2,4,3,2,4,3,3,0,3,2,0,4,4), (0,3,0,3,0,1,0,0,0,1,4,3,3,2,3,1,3,1,4,2,3,2,4,2,3,4,3,0,2,2,3,3,3,0,3,3,3,0,3,4,1,3,3,0,3,4,3,3,0,1,1,0,1,0,0,0,4,0,3,0,0,3,1,2,1,3,0,4,0,1,0,4,3,3,4,3,3,0,2,0,0,3,3), (0,3,0,4,0,1,0,3,0,3,4,3,3,0,3,3,3,1,3,1,3,3,4,3,3,3,0,0,3,1,5,3,3,1,3,3,2,5,4,3,3,4,5,3,2,5,3,4,0,1,0,0,0,0,0,2,0,0,1,1,0,4,2,2,1,3,0,3,0,2,0,4,4,3,5,3,2,0,1,1,0,3,4), (0,5,0,4,0,5,0,2,0,4,4,3,3,2,3,3,3,1,4,3,4,1,5,3,4,3,4,0,4,2,4,3,4,1,5,4,0,4,4,4,4,5,4,1,3,5,4,2,1,4,1,1,3,2,0,3,1,0,3,2,1,4,3,3,3,4,0,4,0,3,0,4,4,4,3,3,3,0,4,2,0,3,4), (1,4,0,4,0,3,0,1,0,3,3,3,1,1,3,3,2,2,3,3,1,0,3,2,2,1,2,0,3,1,2,1,2,0,3,2,0,2,2,3,3,4,3,0,3,3,1,2,0,1,1,3,1,2,0,0,3,0,1,1,0,3,2,2,3,3,0,3,0,0,0,2,3,3,4,3,3,0,1,0,0,1,4), (0,4,0,4,0,4,0,0,0,3,4,4,3,1,4,2,3,2,3,3,3,1,4,3,4,0,3,0,4,2,3,3,2,2,5,4,2,1,3,4,3,4,3,1,3,3,4,2,0,2,1,0,3,3,0,0,2,0,3,1,0,4,4,3,4,3,0,4,0,1,0,2,4,4,4,4,4,0,3,2,0,3,3), (0,0,0,1,0,4,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,3,2,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2), (0,2,0,3,0,4,0,4,0,1,3,3,3,0,4,0,2,1,2,1,1,1,2,0,3,1,1,0,1,0,3,1,0,0,3,3,2,0,1,1,0,0,0,0,0,1,0,2,0,2,2,0,3,1,0,0,1,0,1,1,0,1,2,0,3,0,0,0,0,1,0,0,3,3,4,3,1,0,1,0,3,0,2), (0,0,0,3,0,5,0,0,0,0,1,0,2,0,3,1,0,1,3,0,0,0,2,0,0,0,1,0,0,0,1,1,0,0,4,0,0,0,2,3,0,1,4,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,1,0,0,0,0,0,0,0,2,0,0,3,0,0,0,0,0,3), (0,2,0,5,0,5,0,1,0,2,4,3,3,2,5,1,3,2,3,3,3,0,4,1,2,0,3,0,4,0,2,2,1,1,5,3,0,0,1,4,2,3,2,0,3,3,3,2,0,2,4,1,1,2,0,1,1,0,3,1,0,1,3,1,2,3,0,2,0,0,0,1,3,5,4,4,4,0,3,0,0,1,3), (0,4,0,5,0,4,0,4,0,4,5,4,3,3,4,3,3,3,4,3,4,4,5,3,4,5,4,2,4,2,3,4,3,1,4,4,1,3,5,4,4,5,5,4,4,5,5,5,2,3,3,1,4,3,1,3,3,0,3,3,1,4,3,4,4,4,0,3,0,4,0,3,3,4,4,5,0,0,4,3,0,4,5), (0,4,0,4,0,3,0,3,0,3,4,4,4,3,3,2,4,3,4,3,4,3,5,3,4,3,2,1,4,2,4,4,3,1,3,4,2,4,5,5,3,4,5,4,1,5,4,3,0,3,2,2,3,2,1,3,1,0,3,3,3,5,3,3,3,5,4,4,2,3,3,4,3,3,3,2,1,0,3,2,1,4,3), (0,4,0,5,0,4,0,3,0,3,5,5,3,2,4,3,4,0,5,4,4,1,4,4,4,3,3,3,4,3,5,5,2,3,3,4,1,2,5,5,3,5,5,2,3,5,5,4,0,3,2,0,3,3,1,1,5,1,4,1,0,4,3,2,3,5,0,4,0,3,0,5,4,3,4,3,0,0,4,1,0,4,4), (1,3,0,4,0,2,0,2,0,2,5,5,3,3,3,3,3,0,4,2,3,4,4,4,3,4,0,0,3,4,5,4,3,3,3,3,2,5,5,4,5,5,5,4,3,5,5,5,1,3,1,0,1,0,0,3,2,0,4,2,0,5,2,3,2,4,1,3,0,3,0,4,5,4,5,4,3,0,4,2,0,5,4), (0,3,0,4,0,5,0,3,0,3,4,4,3,2,3,2,3,3,3,3,3,2,4,3,3,2,2,0,3,3,3,3,3,1,3,3,3,0,4,4,3,4,4,1,1,4,4,2,0,3,1,0,1,1,0,4,1,0,2,3,1,3,3,1,3,4,0,3,0,1,0,3,1,3,0,0,1,0,2,0,0,4,4), (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), (0,3,0,3,0,2,0,3,0,1,5,4,3,3,3,1,4,2,1,2,3,4,4,2,4,4,5,0,3,1,4,3,4,0,4,3,3,3,2,3,2,5,3,4,3,2,2,3,0,0,3,0,2,1,0,1,2,0,0,0,0,2,1,1,3,1,0,2,0,4,0,3,4,4,4,5,2,0,2,0,0,1,3), (0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,1,1,0,0,0,4,2,1,1,0,1,0,3,2,0,0,3,1,1,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,0,2,0,0,0,1,4,0,4,2,1,0,0,0,0,0,1), (0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,3,1,0,0,0,2,0,2,1,0,0,1,2,1,0,1,1,0,0,3,0,0,0,0,0,0,0,0,0,0,0,1,3,1,0,0,0,0,0,1,0,0,2,1,0,0,0,0,0,0,0,0,2), 
(0,4,0,4,0,4,0,3,0,4,4,3,4,2,4,3,2,0,4,4,4,3,5,3,5,3,3,2,4,2,4,3,4,3,1,4,0,2,3,4,4,4,3,3,3,4,4,4,3,4,1,3,4,3,2,1,2,1,3,3,3,4,4,3,3,5,0,4,0,3,0,4,3,3,3,2,1,0,3,0,0,3,3),
(0,4,0,3,0,3,0,3,0,3,5,5,3,3,3,3,4,3,4,3,3,3,4,4,4,3,3,3,3,4,3,5,3,3,1,3,2,4,5,5,5,5,4,3,4,5,5,3,2,2,3,3,3,3,2,3,3,1,2,3,2,4,3,3,3,4,0,4,0,2,0,4,3,2,2,1,2,0,3,0,0,4,1),
)

class JapaneseContextAnalysis(object):
    NUM_OF_CATEGORY = 6
    DONT_KNOW = -1
    ENOUGH_REL_THRESHOLD = 100
    MAX_REL_THRESHOLD = 1000
    MINIMUM_DATA_THRESHOLD = 4

    def __init__(self):
        self._total_rel = None
        self._rel_sample = None
        self._need_to_skip_char_num = None
        self._last_char_order = None
        self._done = None
        self.reset()

    def reset(self):
        self._total_rel = 0  # total sequences received
        # category counters; each integer counts sequences in its category
        self._rel_sample = [0] * self.NUM_OF_CATEGORY
        # if the last byte in the current buffer is not the last byte of a
        # character, we need to know how many bytes to skip in the next
        # buffer
        self._need_to_skip_char_num = 0
        self._last_char_order = -1  # The order of the previous char
        # If this flag is set to True, detection is done and a conclusion
        # has been made
        self._done = False

    def feed(self, byte_str, num_bytes):
        if self._done:
            return

        # The buffer we got is byte oriented, and a character may span more
        # than one buffer. In case the last one or two bytes of the previous
        # buffer were not complete, we record how many bytes are needed to
        # complete that character and skip them here. We could record those
        # bytes as well and analyse the character once it is complete, but
        # one character will not make much difference, so simply skipping it
        # simplifies our logic and improves performance.
        i = self._need_to_skip_char_num
        while i < num_bytes:
            order, char_len = self.get_order(byte_str[i:i + 2])
            i += char_len
            if i > num_bytes:
                self._need_to_skip_char_num = i - num_bytes
                self._last_char_order = -1
            else:
                if (order != -1) and (self._last_char_order != -1):
                    self._total_rel += 1
                    if self._total_rel > self.MAX_REL_THRESHOLD:
                        self._done = True
                        break
                    self._rel_sample[
                        jp2CharContext[self._last_char_order][order]] += 1
                self._last_char_order = order

    def got_enough_data(self):
        return self._total_rel > self.ENOUGH_REL_THRESHOLD

    def get_confidence(self):
        # This is just one way to calculate confidence. It works well for
        # me.
        if self._total_rel > self.MINIMUM_DATA_THRESHOLD:
            return (self._total_rel - self._rel_sample[0]) / self._total_rel
        else:
            return self.DONT_KNOW

    def get_order(self, byte_str):
        return -1, 1
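
# --- Illustrative sketch (not part of the vendored file) ---
# A hedged demonstration of the confidence arithmetic above, with made-up
# counters: the pairs counted in category 0 are discounted from the total.
if __name__ == "__main__":
    jca = JapaneseContextAnalysis()
    jca._total_rel = 100     # pretend 100 two-char sequences were observed
    jca._rel_sample[0] = 20  # of which 20 fell into frequency category 0
    print(jca.get_confidence())  # (100 - 20) / 100 = 0.8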
class SJISContextAnalysis(JapaneseContextAnalysis):
    def __init__(self):
        super(SJISContextAnalysis, self).__init__()
        self._charset_name = "SHIFT_JIS"

    @property
    def charset_name(self):
        return self._charset_name

    def get_order(self, byte_str):
        if not byte_str:
            return -1, 1
        # find out the current char's byte length
        first_char = byte_str[0]
        if (0x81 <= first_char <= 0x9F) or (0xE0 <= first_char <= 0xFC):
            char_len = 2
            if (first_char == 0x87) or (0xFA <= first_char <= 0xFC):
                self._charset_name = "CP932"
        else:
            char_len = 1

        # return its order if it is hiragana
        if len(byte_str) > 1:
            second_char = byte_str[1]
            if (first_char == 202) and (0x9F <= second_char <= 0xF1):
                return second_char - 0x9F, char_len

        return -1, char_len


class EUCJPContextAnalysis(JapaneseContextAnalysis):
    def get_order(self, byte_str):
        if not byte_str:
            return -1, 1
        # find out the current char's byte length
        first_char = byte_str[0]
        if (first_char == 0x8E) or (0xA1 <= first_char <= 0xFE):
            char_len = 2
        elif first_char == 0x8F:
            char_len = 3
        else:
            char_len = 1

        # return its order if it is hiragana
        if len(byte_str) > 1:
            second_char = byte_str[1]
            if (first_char == 0xA4) and (0xA1 <= second_char <= 0xF3):
                return second_char - 0xA1, char_len

        return -1, char_len

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langbulgarianmodel.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998 # the Initial Developer. All Rights Reserved. # # Contributor(s): # Mark Pilgrim - port to Python # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # 255: Control characters that usually does not exist in any text # 254: Carriage/Return # 253: symbol (punctuation) that does not belong to word # 252: 0 - 9 # Character Mapping Table: # this table is modified base on win1251BulgarianCharToOrderMap, so # only number <64 is sure valid Latin5_BulgarianCharToOrderMap = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 77, 90, 99,100, 72,109,107,101, 79,185, 81,102, 76, 94, 82, # 40 110,186,108, 91, 74,119, 84, 96,111,187,115,253,253,253,253,253, # 50 253, 65, 69, 70, 66, 63, 68,112,103, 92,194,104, 95, 86, 87, 71, # 60 116,195, 85, 93, 97,113,196,197,198,199,200,253,253,253,253,253, # 70 194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209, # 80 210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225, # 90 81,226,227,228,229,230,105,231,232,233,234,235,236, 45,237,238, # a0 31, 32, 35, 43, 37, 44, 55, 47, 40, 59, 33, 46, 38, 36, 41, 30, # b0 39, 28, 34, 51, 48, 49, 53, 50, 54, 57, 61,239, 67,240, 60, 56, # c0 1, 18, 9, 20, 11, 3, 23, 15, 2, 26, 12, 10, 14, 6, 4, 13, # d0 7, 8, 5, 19, 29, 25, 22, 21, 27, 24, 17, 75, 52,241, 42, 16, # e0 62,242,243,244, 58,245, 98,246,247,248,249,250,251, 91,252,253, # f0 ) win1251BulgarianCharToOrderMap = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 77, 90, 99,100, 72,109,107,101, 79,185, 81,102, 76, 94, 82, # 40 110,186,108, 91, 74,119, 84, 96,111,187,115,253,253,253,253,253, # 50 253, 65, 69, 70, 66, 63, 68,112,103, 92,194,104, 95, 86, 87, 71, # 60 116,195, 85, 93, 97,113,196,197,198,199,200,253,253,253,253,253, # 70 206,207,208,209,210,211,212,213,120,214,215,216,217,218,219,220, # 80 221, 78, 64, 83,121, 98,117,105,222,223,224,225,226,227,228,229, # 90 88,230,231,232,233,122, 89,106,234,235,236,237,238, 45,239,240, # a0 73, 80,118,114,241,242,243,244,245, 62, 58,246,247,248,249,250, # b0 31, 32, 35, 43, 37, 44, 55, 47, 40, 59, 33, 46, 38, 36, 41, 30, # c0 39, 28, 34, 51, 48, 49, 53, 50, 54, 57, 61,251, 67,252, 60, 56, # d0 1, 18, 9, 20, 11, 3, 23, 15, 2, 26, 12, 10, 14, 6, 4, 13, # e0 7, 8, 5, 19, 29, 25, 22, 21, 27, 24, 17, 75, 52,253, 42, 16, # f0 ) # Model Table: # total 
sequences: 100% # first 512 sequences: 96.9392% # first 1024 sequences:3.0618% # rest sequences: 0.2992% # negative sequences: 0.0020% BulgarianLangModel = ( 0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,3,3,3,3,3, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,2,2,3,2,2,1,2,2, 3,1,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,3,3,3,3,3,3,3,0,3,0,1, 0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,2,3,3,3,3,3,3,3,3,0,3,1,0, 0,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 3,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,2,3,3,3,3,3,3,3,3,0,3,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,2,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,1,3,2,3,3,3,3,3,3,3,3,0,3,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,2,3,2,2,1,3,3,3,3,2,2,2,1,1,2,0,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,2,3,2,2,3,3,1,1,2,3,3,2,3,3,3,3,2,1,2,0,2,0,3,0,0, 0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,1,3,3,3,3,3,2,3,2,3,3,3,3,3,2,3,3,1,3,0,3,0,2,0,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,3,1,3,3,2,3,3,3,1,3,3,2,3,2,2,2,0,0,2,0,2,0,2,0,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,3,3,0,3,3,3,2,2,3,3,3,1,2,2,3,2,1,1,2,0,2,0,0,0,0, 1,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,2,3,3,1,2,3,2,2,2,3,3,3,3,3,2,2,3,1,2,0,2,1,2,0,0, 0,0,0,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,1,3,3,3,3,3,2,3,3,3,2,3,3,2,3,2,2,2,3,1,2,0,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,3,3,3,3,1,1,1,2,2,1,3,1,3,2,2,3,0,0,1,0,1,0,1,0,0, 0,0,0,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,2,2,3,2,2,3,1,2,1,1,1,2,3,1,3,1,2,2,0,1,1,1,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,1,3,2,2,3,3,1,2,3,1,1,3,3,3,3,1,2,2,1,1,1,0,2,0,2,0,1, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,2,2,3,3,3,2,2,1,1,2,0,2,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,0,1,2,1,3,3,2,3,3,3,3,3,2,3,2,1,0,3,1,2,1,2,1,2,3,2,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,1,1,2,3,3,3,3,3,3,3,3,3,3,3,3,0,0,3,1,3,3,2,3,3,2,2,2,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,3,3,3,0,3,3,3,3,3,2,1,1,2,1,3,3,0,3,1,1,1,1,3,2,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,2,2,2,3,3,3,3,3,3,3,3,3,3,3,1,1,3,1,3,3,2,3,2,2,2,3,0,2,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,2,3,3,2,2,3,2,1,1,1,1,1,3,1,3,1,1,0,0,0,1,0,0,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,2,3,2,0,3,2,0,3,0,2,0,0,2,1,3,1,0,0,1,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,2,1,1,1,1,2,1,1,2,1,1,1,2,2,1,2,1,1,1,0,1,1,0,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,2,1,3,1,1,2,1,3,2,1,1,0,1,2,3,2,1,1,1,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,3,3,3,2,2,1,0,1,0,0,1,0,0,0,2,1,0,3,0,0,1,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,2,3,2,3,3,1,3,2,1,1,1,2,1,1,2,1,3,0,1,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
3,1,1,2,2,3,3,2,3,2,2,2,3,1,2,2,1,1,2,1,1,2,2,0,1,1,0,1,0,2,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,2,1,3,1,0,2,2,1,3,2,1,0,0,2,0,2,0,1,0,0,0,0,0,0,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,3,1,2,0,2,3,1,2,3,2,0,1,3,1,2,1,1,1,0,0,1,0,0,2,2,2,3, 2,2,2,2,1,2,1,1,2,2,1,1,2,0,1,1,1,0,0,1,1,0,0,1,1,0,0,0,1,1,0,1, 3,3,3,3,3,2,1,2,2,1,2,0,2,0,1,0,1,2,1,2,1,1,0,0,0,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, 3,3,2,3,3,1,1,3,1,0,3,2,1,0,0,0,1,2,0,2,0,1,0,0,0,1,0,1,2,1,2,2, 1,1,1,1,1,1,1,2,2,2,1,1,1,1,1,1,1,0,1,2,1,1,1,0,0,0,0,0,1,1,0,0, 3,1,0,1,0,2,3,2,2,2,3,2,2,2,2,2,1,0,2,1,2,1,1,1,0,1,2,1,2,2,2,1, 1,1,2,2,2,2,1,2,1,1,0,1,2,1,2,2,2,1,1,1,0,1,1,1,1,2,0,1,0,0,0,0, 2,3,2,3,3,0,0,2,1,0,2,1,0,0,0,0,2,3,0,2,0,0,0,0,0,1,0,0,2,0,1,2, 2,1,2,1,2,2,1,1,1,2,1,1,1,0,1,2,2,1,1,1,1,1,0,1,1,1,0,0,1,2,0,0, 3,3,2,2,3,0,2,3,1,1,2,0,0,0,1,0,0,2,0,2,0,0,0,1,0,1,0,1,2,0,2,2, 1,1,1,1,2,1,0,1,2,2,2,1,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,1,0,0, 2,3,2,3,3,0,0,3,0,1,1,0,1,0,0,0,2,2,1,2,0,0,0,0,0,0,0,0,2,0,1,2, 2,2,1,1,1,1,1,2,2,2,1,0,2,0,1,0,1,0,0,1,0,1,0,0,1,0,0,0,0,1,0,0, 3,3,3,3,2,2,2,2,2,0,2,1,1,1,1,2,1,2,1,1,0,2,0,1,0,1,0,0,2,0,1,2, 1,1,1,1,1,1,1,2,2,1,1,0,2,0,1,0,2,0,0,1,1,1,0,0,2,0,0,0,1,1,0,0, 2,3,3,3,3,1,0,0,0,0,0,0,0,0,0,0,2,0,0,1,1,0,0,0,0,0,0,1,2,0,1,2, 2,2,2,1,1,2,1,1,2,2,2,1,2,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,1,1,0,0, 2,3,3,3,3,0,2,2,0,2,1,0,0,0,1,1,1,2,0,2,0,0,0,3,0,0,0,0,2,0,2,2, 1,1,1,2,1,2,1,1,2,2,2,1,2,0,1,1,1,0,1,1,1,1,0,2,1,0,0,0,1,1,0,0, 2,3,3,3,3,0,2,1,0,0,2,0,0,0,0,0,1,2,0,2,0,0,0,0,0,0,0,0,2,0,1,2, 1,1,1,2,1,1,1,1,2,2,2,0,1,0,1,1,1,0,0,1,1,1,0,0,1,0,0,0,0,1,0,0, 3,3,2,2,3,0,1,0,1,0,0,0,0,0,0,0,1,1,0,3,0,0,0,0,0,0,0,0,1,0,2,2, 1,1,1,1,1,2,1,1,2,2,1,2,2,1,0,1,1,1,1,1,0,1,0,0,1,0,0,0,1,1,0,0, 3,1,0,1,0,2,2,2,2,3,2,1,1,1,2,3,0,0,1,0,2,1,1,0,1,1,1,1,2,1,1,1, 1,2,2,1,2,1,2,2,1,1,0,1,2,1,2,2,1,1,1,0,0,1,1,1,2,1,0,1,0,0,0,0, 2,1,0,1,0,3,1,2,2,2,2,1,2,2,1,1,1,0,2,1,2,2,1,1,2,1,1,0,2,1,1,1, 1,2,2,2,2,2,2,2,1,2,0,1,1,0,2,1,1,1,1,1,0,0,1,1,1,1,0,1,0,0,0,0, 2,1,1,1,1,2,2,2,2,1,2,2,2,1,2,2,1,1,2,1,2,3,2,2,1,1,1,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,2,2,3,2,0,1,2,0,1,2,1,1,0,1,0,1,2,1,2,0,0,0,1,1,0,0,0,1,0,0,2, 1,1,0,0,1,1,0,1,1,1,1,0,2,0,1,1,1,0,0,1,1,0,0,0,0,1,0,0,0,1,0,0, 2,0,0,0,0,1,2,2,2,2,2,2,2,1,2,1,1,1,1,1,1,1,0,1,1,1,1,1,2,1,1,1, 1,2,2,2,2,1,1,2,1,2,1,1,1,0,2,1,2,1,1,1,0,2,1,1,1,1,0,1,0,0,0,0, 3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0, 1,1,0,1,0,1,1,1,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,2,2,3,2,0,0,0,0,1,0,0,0,0,0,0,1,1,0,2,0,0,0,0,0,0,0,0,1,0,1,2, 1,1,1,1,1,1,0,0,2,2,2,2,2,0,1,1,0,1,1,1,1,1,0,0,1,0,0,0,1,1,0,1, 2,3,1,2,1,0,1,1,0,2,2,2,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,0,1,2, 1,1,1,1,2,1,1,1,1,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,0,0, 2,2,2,2,2,0,0,2,0,0,2,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,2,0,2,2, 1,1,1,1,1,0,0,1,2,1,1,0,1,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0, 1,2,2,2,2,0,0,2,0,1,1,0,0,0,1,0,0,2,0,2,0,0,0,0,0,0,0,0,0,0,1,1, 0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, 1,2,2,3,2,0,0,1,0,0,1,0,0,0,0,0,0,1,0,2,0,0,0,1,0,0,0,0,0,0,0,2, 1,1,0,0,1,0,0,0,1,1,0,0,1,0,1,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0, 2,1,2,2,2,1,2,1,2,2,1,1,2,1,1,1,0,1,1,1,1,2,0,1,0,1,1,1,1,0,1,1, 1,1,2,1,1,1,1,1,1,0,0,1,2,1,1,1,1,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0, 1,0,0,1,3,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
2,2,2,2,1,0,0,1,0,2,0,0,0,0,0,1,1,1,0,1,0,0,0,0,0,0,0,0,2,0,0,1,
0,2,0,1,0,0,1,1,2,0,1,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,2,2,0,1,1,0,2,1,0,1,1,1,0,0,1,0,2,0,1,0,0,0,0,0,0,0,0,0,1,
0,1,0,0,1,0,0,0,1,1,0,0,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,2,2,0,0,1,0,0,0,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,1,
0,1,0,1,1,1,0,0,1,1,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
2,0,1,0,0,1,2,1,1,1,1,1,1,2,2,1,0,0,1,0,1,0,0,0,0,1,1,1,1,0,0,0,
1,1,2,1,1,1,1,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,1,2,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,1,
0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,
0,1,1,0,1,1,1,0,0,1,0,0,1,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,
1,0,1,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,1,0,2,0,0,2,0,1,0,0,1,0,0,1,
1,1,0,0,1,1,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,
1,1,1,1,1,1,1,2,0,0,0,0,0,0,2,1,0,1,1,0,0,1,1,1,0,1,0,0,0,0,0,0,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,0,1,1,1,1,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
)

Latin5BulgarianModel = {
    'char_to_order_map': Latin5_BulgarianCharToOrderMap,
    'precedence_matrix': BulgarianLangModel,
    'typical_positive_ratio': 0.969392,
    'keep_english_letter': False,
    'charset_name': "ISO-8859-5",
    'language': 'Bulgarian',
}

Win1251BulgarianModel = {
    'char_to_order_map': win1251BulgarianCharToOrderMap,
    'precedence_matrix': BulgarianLangModel,
    'typical_positive_ratio': 0.969392,
    'keep_english_letter': False,
    'charset_name': "windows-1251",
    'language': 'Bulgarian',
}
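
# --- Editor's illustration (a hypothetical helper, not chardet's actual
# prober code): how 'char_to_order_map' and 'precedence_matrix' combine.
# Each byte maps to a frequency order; every pair of consecutive orders
# below 64 indexes the flattened 64x64 matrix, whose entries 0-3 grade that
# bigram from "negative" to "positive" for the language.
def score_bytes(model, data):
    SAMPLE_SIZE = 64                  # orders >= 64 carry no sequence information
    order_map = model['char_to_order_map']
    matrix = model['precedence_matrix']
    counts = [0, 0, 0, 0]             # negative, unlikely, likely, positive
    last_order = 255
    for byte in data:
        order = order_map[byte]
        if order < SAMPLE_SIZE and last_order < SAMPLE_SIZE:
            counts[matrix[last_order * SAMPLE_SIZE + order]] += 1
        last_order = order
    total = sum(counts)
    # fraction of "positive" bigrams, comparable to 'typical_positive_ratio'
    return counts[3] / total if total else 0.0

# e.g. score_bytes(Win1251BulgarianModel, "текст".encode("windows-1251"))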
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langcyrillicmodel.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # KOI8-R language model # Character Mapping Table: KOI8R_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, # 80 207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, # 90 223,224,225, 68,226,227,228,229,230,231,232,233,234,235,236,237, # a0 238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253, # b0 27, 3, 21, 28, 13, 2, 39, 19, 26, 4, 23, 11, 8, 12, 5, 1, # c0 15, 16, 9, 7, 6, 14, 24, 10, 17, 18, 20, 25, 30, 29, 22, 54, # d0 59, 37, 44, 58, 41, 48, 53, 46, 55, 42, 60, 36, 49, 38, 31, 34, # e0 35, 43, 45, 32, 40, 52, 56, 33, 61, 62, 51, 57, 47, 63, 50, 70, # f0 ) win1251_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, 207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, 223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, 239,240,241,242,243,244,245,246, 68,247,248,249,250,251,252,253, 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27, 16, ) latin5_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, 207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, 223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, 9, 7, 6, 14, 
39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27, 16, 239, 68,240,241,242,243,244,245,246,247,248,249,250,251,252,255, ) macCyrillic_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, 191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, 207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, 223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, 239,240,241,242,243,244,245,246,247,248,249,250,251,252, 68, 16, 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27,255, ) IBM855_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 191,192,193,194, 68,195,196,197,198,199,200,201,202,203,204,205, 206,207,208,209,210,211,212,213,214,215,216,217, 27, 59, 54, 70, 3, 37, 21, 44, 28, 58, 13, 41, 2, 48, 39, 53, 19, 46,218,219, 220,221,222,223,224, 26, 55, 4, 42,225,226,227,228, 23, 60,229, 230,231,232,233,234,235, 11, 36,236,237,238,239,240,241,242,243, 8, 49, 12, 38, 5, 31, 1, 34, 15,244,245,246,247, 35, 16,248, 43, 9, 45, 7, 32, 6, 40, 14, 52, 24, 56, 10, 33, 17, 61,249, 250, 18, 62, 20, 51, 25, 57, 30, 47, 29, 63, 22, 50,251,252,255, ) IBM866_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, 191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, 207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, 223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27, 16, 239, 68,240,241,242,243,244,245,246,247,248,249,250,251,252,255, ) # Model Table: # total sequences: 100% # first 512 sequences: 97.6601% # first 1024 sequences: 2.3389% # rest sequences: 0.1237% # negative 
sequences: 0.0009% RussianLangModel = ( 0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,1,3,3,3,3,1,3,3,3,2,3,2,3,3, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,2,2,2,2,2,0,0,2, 3,3,3,2,3,3,3,3,3,3,3,3,3,3,2,3,3,0,0,3,3,3,3,3,3,3,3,3,2,3,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,2,2,3,3,3,3,3,3,3,3,3,2,3,3,0,0,3,3,3,3,3,3,3,3,2,3,3,1,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,2,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,0,0,3,3,3,3,3,3,3,3,3,3,3,2,1, 0,0,0,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,0,0,3,3,3,3,3,3,3,3,3,3,3,2,1, 0,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,2,2,2,3,1,3,3,1,3,3,3,3,2,2,3,0,2,2,2,3,3,2,1,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,3,3,3,3,3,2,2,3,2,3,3,3,2,1,2,2,0,1,2,2,2,2,2,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,3,0,2,2,3,3,2,1,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,3,3,1,2,3,2,2,3,2,3,3,3,3,2,2,3,0,3,2,2,3,1,1,1,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,2,3,3,3,3,2,2,2,0,3,3,3,2,2,2,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,2,3,2,3,3,3,3,3,3,2,3,2,2,0,1,3,2,1,2,2,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,2,1,1,3,0,1,1,1,1,2,1,1,0,2,2,2,1,2,0,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,3,3,2,2,2,2,1,3,2,3,2,3,2,1,2,2,0,1,1,2,1,2,1,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,3,2,2,3,2,3,3,3,2,2,2,2,0,2,2,2,2,3,1,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 3,2,3,2,2,3,3,3,3,3,3,3,3,3,1,3,2,0,0,3,3,3,3,2,3,3,3,3,2,3,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,3,3,3,3,2,2,3,3,0,2,1,0,3,2,3,2,3,0,0,1,2,0,0,1,0,1,2,1,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,3,0,2,3,3,3,3,2,3,3,3,3,1,2,2,0,0,2,3,2,2,2,3,2,3,2,2,3,0,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,2,3,0,2,3,2,3,0,1,2,3,3,2,0,2,3,0,0,2,3,2,2,0,1,3,1,3,2,2,1,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,3,0,2,3,3,3,3,3,3,3,3,2,1,3,2,0,0,2,2,3,3,3,2,3,3,0,2,2,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,2,3,3,2,2,2,3,3,0,0,1,1,1,1,1,2,0,0,1,1,1,1,0,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,2,3,3,3,3,3,3,3,0,3,2,3,3,2,3,2,0,2,1,0,1,1,0,1,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,3,3,3,2,2,2,2,3,1,3,2,3,1,1,2,1,0,2,2,2,2,1,3,1,0, 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 2,2,3,3,3,3,3,1,2,2,1,3,1,0,3,0,0,3,0,0,0,1,1,0,1,2,1,0,0,0,0,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,2,2,1,1,3,3,3,2,2,1,2,2,3,1,1,2,0,0,2,2,1,3,0,0,2,1,1,2,1,1,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,2,3,3,3,3,1,2,2,2,1,2,1,3,3,1,1,2,1,2,1,2,2,0,2,0,0,1,1,0,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,3,3,3,3,2,1,3,2,2,3,2,0,3,2,0,3,0,1,0,1,1,0,0,1,1,1,1,0,1,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,2,3,3,3,2,2,2,3,3,1,2,1,2,1,0,1,0,1,1,0,1,0,0,2,1,1,1,0,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, 
3,1,1,2,1,2,3,3,2,2,1,2,2,3,0,2,1,0,0,2,2,3,2,1,2,2,2,2,2,3,1,0, 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,1,1,0,1,1,2,2,1,1,3,0,0,1,3,1,1,1,0,0,0,1,0,1,1,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,1,3,3,3,2,0,0,0,2,1,0,1,0,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,1,0,0,2,3,2,2,2,1,2,2,2,1,2,1,0,0,1,1,1,0,2,0,1,1,1,0,0,1,1, 1,0,0,0,0,0,1,2,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0, 2,3,3,3,3,0,0,0,0,1,0,0,0,0,3,0,1,2,1,0,0,0,0,0,0,0,1,1,0,0,1,1, 1,0,1,0,1,2,0,0,1,1,2,1,0,1,1,1,1,0,1,1,1,1,0,1,0,0,1,0,0,1,1,0, 2,2,3,2,2,2,3,1,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,0,1,0,1,1,1,0,2,1, 1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,0, 3,3,3,2,2,2,2,3,2,2,1,1,2,2,2,2,1,1,3,1,2,1,2,0,0,1,1,0,1,0,2,1, 1,1,1,1,1,2,1,0,1,1,1,1,0,1,0,0,1,1,0,0,1,0,1,0,0,1,0,0,0,1,1,0, 2,0,0,1,0,3,2,2,2,2,1,2,1,2,1,2,0,0,0,2,1,2,2,1,1,2,2,0,1,1,0,2, 1,1,1,1,1,0,1,1,1,2,1,1,1,2,1,0,1,2,1,1,1,1,0,1,1,1,0,0,1,0,0,1, 1,3,2,2,2,1,1,1,2,3,0,0,0,0,2,0,2,2,1,0,0,0,0,0,0,1,0,0,0,0,1,1, 1,0,1,1,0,1,0,1,1,0,1,1,0,2,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0, 2,3,2,3,2,1,2,2,2,2,1,0,0,0,2,0,0,1,1,0,0,0,0,0,0,0,1,1,0,0,2,1, 1,1,2,1,0,2,0,0,1,0,1,0,0,1,0,0,1,1,0,1,1,0,0,0,0,0,1,0,0,0,0,0, 3,0,0,1,0,2,2,2,3,2,2,2,2,2,2,2,0,0,0,2,1,2,1,1,1,2,2,0,0,0,1,2, 1,1,1,1,1,0,1,2,1,1,1,1,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,1,0,0,1, 2,3,2,3,3,2,0,1,1,1,0,0,1,0,2,0,1,1,3,1,0,0,0,0,0,0,0,1,0,0,2,1, 1,1,1,1,1,1,1,0,1,0,1,1,1,1,0,1,1,1,0,0,1,1,0,1,0,0,0,0,0,0,1,0, 2,3,3,3,3,1,2,2,2,2,0,1,1,0,2,1,1,1,2,1,0,1,1,0,0,1,0,1,0,0,2,0, 0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,3,3,2,0,0,1,1,2,2,1,0,0,2,0,1,1,3,0,0,1,0,0,0,0,0,1,0,1,2,1, 1,1,2,0,1,1,1,0,1,0,1,1,0,1,0,1,1,1,1,0,1,0,0,0,0,0,0,1,0,1,1,0, 1,3,2,3,2,1,0,0,2,2,2,0,1,0,2,0,1,1,1,0,1,0,0,0,3,0,1,1,0,0,2,1, 1,1,1,0,1,1,0,0,0,0,1,1,0,1,0,0,2,1,1,0,1,0,0,0,1,0,1,0,0,1,1,0, 3,1,2,1,1,2,2,2,2,2,2,1,2,2,1,1,0,0,0,2,2,2,0,0,0,1,2,1,0,1,0,1, 2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,2,1,1,1,0,1,0,1,1,0,1,1,1,0,0,1, 3,0,0,0,0,2,0,1,1,1,1,1,1,1,0,1,0,0,0,1,1,1,0,1,0,1,1,0,0,1,0,1, 1,1,0,0,1,0,0,0,1,0,1,1,0,0,1,0,1,0,1,0,0,0,0,1,0,0,0,1,0,0,0,1, 1,3,3,2,2,0,0,0,2,2,0,0,0,1,2,0,1,1,2,0,0,0,0,0,0,0,0,1,0,0,2,1, 0,1,1,0,0,1,1,0,0,0,1,1,0,1,1,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0, 2,3,2,3,2,0,0,0,0,1,1,0,0,0,2,0,2,0,2,0,0,0,0,0,1,0,0,1,0,0,1,1, 1,1,2,0,1,2,1,0,1,1,2,1,1,1,1,1,2,1,1,0,1,0,0,1,1,1,1,1,0,1,1,0, 1,3,2,2,2,1,0,0,2,2,1,0,1,2,2,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,1,1, 0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, 1,0,0,1,0,2,3,1,2,2,2,2,2,2,1,1,0,0,0,1,0,1,0,2,1,1,1,0,0,0,0,1, 1,1,0,1,1,0,1,1,1,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0, 2,0,2,0,0,1,0,3,2,1,2,1,2,2,0,1,0,0,0,2,1,0,0,2,1,1,1,1,0,2,0,2, 2,1,1,1,1,1,1,1,1,1,1,1,1,2,1,0,1,1,1,1,0,0,0,1,1,1,1,0,1,0,0,1, 1,2,2,2,2,1,0,0,1,0,0,0,0,0,2,0,1,1,1,1,0,0,0,0,1,0,1,2,0,0,2,0, 1,0,1,1,1,2,1,0,1,0,1,1,0,0,1,0,1,1,1,0,1,0,0,0,1,0,0,1,0,1,1,0, 2,1,2,2,2,0,3,0,1,1,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 0,0,0,1,1,1,0,0,1,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0, 1,2,2,3,2,2,0,0,1,1,2,0,1,2,1,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1, 0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,1,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0, 2,2,1,1,2,1,2,2,2,2,2,1,2,2,0,1,0,0,0,1,2,2,2,1,2,1,1,1,1,1,2,1, 1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,0,1,1,1,0,0,0,0,1,1,1,0,1,1,0,0,1, 1,2,2,2,2,0,1,0,2,2,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0, 0,0,1,0,0,1,0,0,0,0,1,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, 
0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,2,2,0,0,0,2,2,2,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,
0,1,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,2,2,0,0,0,0,1,0,0,1,1,2,0,0,0,0,1,0,1,0,0,1,0,0,2,0,0,0,1,
0,0,1,0,0,1,0,0,0,1,1,0,0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,2,1,1,2,0,2,1,1,1,1,0,2,2,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,1,
0,0,1,0,1,1,0,0,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,
1,0,2,1,2,0,0,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,
0,0,1,0,1,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,
1,0,0,0,0,2,0,1,2,1,0,1,1,1,0,1,0,0,0,1,0,1,0,0,1,0,1,0,0,0,0,1,
0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,
2,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
1,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
1,1,1,0,1,0,1,0,0,1,1,1,1,0,0,0,1,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
1,1,0,1,1,0,1,0,1,0,0,0,0,1,1,0,1,1,0,0,0,0,0,1,0,1,1,0,1,0,0,0,
0,1,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,
)

Koi8rModel = {
    'char_to_order_map': KOI8R_char_to_order_map,
    'precedence_matrix': RussianLangModel,
    'typical_positive_ratio': 0.976601,
    'keep_english_letter': False,
    'charset_name': "KOI8-R",
    'language': 'Russian',
}

Win1251CyrillicModel = {
    'char_to_order_map': win1251_char_to_order_map,
    'precedence_matrix': RussianLangModel,
    'typical_positive_ratio': 0.976601,
    'keep_english_letter': False,
    'charset_name': "windows-1251",
    'language': 'Russian',
}

Latin5CyrillicModel = {
    'char_to_order_map': latin5_char_to_order_map,
    'precedence_matrix': RussianLangModel,
    'typical_positive_ratio': 0.976601,
    'keep_english_letter': False,
    'charset_name': "ISO-8859-5",
    'language': 'Russian',
}

MacCyrillicModel = {
    'char_to_order_map': macCyrillic_char_to_order_map,
    'precedence_matrix': RussianLangModel,
    'typical_positive_ratio': 0.976601,
    'keep_english_letter': False,
    'charset_name': "MacCyrillic",
    'language': 'Russian',
}

Ibm866Model = {
    'char_to_order_map': IBM866_char_to_order_map,
    'precedence_matrix': RussianLangModel,
    'typical_positive_ratio': 0.976601,
    'keep_english_letter': False,
    'charset_name': "IBM866",
    'language': 'Russian',
}

Ibm855Model = {
    'char_to_order_map': IBM855_char_to_order_map,
    'precedence_matrix': RussianLangModel,
    'typical_positive_ratio': 0.976601,
    'keep_english_letter': False,
    'charset_name': "IBM855",
    'language': 'Russian',
}
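
# --- Editor's note (an illustration, not part of the upstream module): all
# six Russian models above share one precedence_matrix; only the byte-to-order
# maps differ. The maps are constructed so that the same Cyrillic letter gets
# the same order in every encoding, which is what lets one bigram matrix serve
# them all. For the lowercase sample below, the two order sequences should
# come out identical:
word = "привет"
koi8_orders = [KOI8R_char_to_order_map[b] for b in word.encode("koi8-r")]
cp1251_orders = [win1251_char_to_order_map[b] for b in word.encode("windows-1251")]
assert koi8_orders == cp1251_orders  # e.g. both start with order 15 for 'п'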
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langgreekmodel.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # 255: Control characters that usually does not exist in any text # 254: Carriage/Return # 253: symbol (punctuation) that does not belong to word # 252: 0 - 9 # Character Mapping Table: Latin7_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 82,100,104, 94, 98,101,116,102,111,187,117, 92, 88,113, 85, # 40 79,118,105, 83, 67,114,119, 95, 99,109,188,253,253,253,253,253, # 50 253, 72, 70, 80, 81, 60, 96, 93, 89, 68,120, 97, 77, 86, 69, 55, # 60 78,115, 65, 66, 58, 76,106,103, 87,107,112,253,253,253,253,253, # 70 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 80 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 90 253,233, 90,253,253,253,253,253,253,253,253,253,253, 74,253,253, # a0 253,253,253,253,247,248, 61, 36, 46, 71, 73,253, 54,253,108,123, # b0 110, 31, 51, 43, 41, 34, 91, 40, 52, 47, 44, 53, 38, 49, 59, 39, # c0 35, 48,250, 37, 33, 45, 56, 50, 84, 57,120,121, 17, 18, 22, 15, # d0 124, 1, 29, 20, 21, 3, 32, 13, 25, 5, 11, 16, 10, 6, 30, 4, # e0 9, 8, 14, 7, 2, 12, 28, 23, 42, 24, 64, 75, 19, 26, 27,253, # f0 ) win1253_char_to_order_map = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 82,100,104, 94, 98,101,116,102,111,187,117, 92, 88,113, 85, # 40 79,118,105, 83, 67,114,119, 95, 99,109,188,253,253,253,253,253, # 50 253, 72, 70, 80, 81, 60, 96, 93, 89, 68,120, 97, 77, 86, 69, 55, # 60 78,115, 65, 66, 58, 76,106,103, 87,107,112,253,253,253,253,253, # 70 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 80 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 90 253,233, 61,253,253,253,253,253,253,253,253,253,253, 74,253,253, # a0 253,253,253,253,247,253,253, 36, 46, 71, 73,253, 54,253,108,123, # b0 110, 31, 51, 43, 41, 34, 91, 40, 52, 47, 44, 53, 38, 49, 59, 39, # c0 35, 48,250, 37, 33, 45, 56, 50, 84, 57,120,121, 17, 18, 22, 15, # d0 124, 1, 29, 20, 21, 3, 32, 13, 25, 5, 11, 16, 10, 6, 30, 4, # e0 9, 8, 14, 7, 2, 12, 28, 23, 42, 24, 64, 75, 19, 26, 27,253, # f0 ) # Model Table: # total sequences: 100% # first 512 sequences: 98.2851% # first 1024 sequences:1.7001% # rest sequences: 0.0359% # negative sequences: 0.0148% GreekLangModel = ( 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,2,2,3,3,3,3,3,3,3,3,1,3,3,3,0,2,2,3,3,0,3,0,3,2,0,3,3,3,0, 3,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,0,3,3,0,3,2,3,3,0,3,2,3,3,3,0,0,3,0,3,0,3,3,2,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, 0,2,3,2,2,3,3,3,3,3,3,3,3,0,3,3,3,3,0,2,3,3,0,3,3,3,3,2,3,3,3,0, 2,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,0,2,1,3,3,3,3,2,3,3,2,3,3,2,0, 
0,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,0,3,3,3,3,3,3,0,3,3,0,3,3,3,3,3,3,3,3,3,3,0,3,2,3,3,0, 2,0,1,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,2,3,0,0,0,0,3,3,0,3,1,3,3,3,0,3,3,0,3,3,3,3,0,0,0,0, 2,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,0,3,0,3,3,3,3,3,0,3,2,2,2,3,0,2,3,3,3,3,3,2,3,3,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,3,2,2,2,3,3,3,3,0,3,1,3,3,3,3,2,3,3,3,3,3,3,3,2,2,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,2,0,3,0,0,0,3,3,2,3,3,3,3,3,0,0,3,2,3,0,2,3,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,0,3,3,3,3,0,0,3,3,0,2,3,0,3,0,3,3,3,0,0,3,0,3,0,2,2,3,3,0,0, 0,0,1,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,2,0,3,2,3,3,3,3,0,3,3,3,3,3,0,3,3,2,3,2,3,3,2,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,2,3,2,3,3,3,3,3,3,0,2,3,2,3,2,2,2,3,2,3,3,2,3,0,2,2,2,3,0, 2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,0,0,0,3,3,3,2,3,3,0,0,3,0,3,0,0,0,3,2,0,3,0,3,0,0,2,0,2,0, 0,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,0,3,3,3,3,3,3,0,3,3,0,3,0,0,0,3,3,0,3,3,3,0,0,1,2,3,0, 3,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,2,0,0,3,2,2,3,3,0,3,3,3,3,3,2,1,3,0,3,2,3,3,2,1,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,3,0,2,3,3,3,3,3,3,0,0,3,0,3,0,0,0,3,3,0,3,2,3,0,0,3,3,3,0, 3,0,0,0,2,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,0,3,3,3,3,3,3,0,0,3,0,3,0,0,0,3,2,0,3,2,3,0,0,3,2,3,0, 2,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,1,2,2,3,3,3,3,3,3,0,2,3,0,3,0,0,0,3,3,0,3,0,2,0,0,2,3,1,0, 2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,0,3,3,3,3,0,3,0,3,3,2,3,0,3,3,3,3,3,3,0,3,3,3,0,2,3,0,0,3,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,0,3,3,3,0,0,3,0,0,0,3,3,0,3,0,2,3,3,0,0,3,0,3,0,3,3,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,0,0,0,3,3,3,3,3,3,0,0,3,0,2,0,0,0,3,3,0,3,0,3,0,0,2,0,2,0, 0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,3,0,3,0,2,0,3,2,0,3,2,3,2,3,0,0,3,2,3,2,3,3,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,0,0,2,3,3,3,3,3,0,0,0,3,0,2,1,0,0,3,2,2,2,0,3,0,0,2,2,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,0,3,3,3,2,0,3,0,3,0,3,3,0,2,1,2,3,3,0,0,3,0,3,0,3,3,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,3,3,3,0,3,3,3,3,3,3,0,2,3,0,3,0,0,0,2,1,0,2,2,3,0,0,2,2,2,0, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,3,0,0,2,3,3,3,2,3,0,0,1,3,0,2,0,0,0,0,3,0,1,0,2,0,0,1,1,1,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,3,1,0,3,0,0,0,3,2,0,3,2,3,3,3,0,0,3,0,3,2,2,2,1,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,0,3,3,3,0,0,3,0,0,0,0,2,0,2,3,3,2,2,2,2,3,0,2,0,2,2,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,3,3,3,2,0,0,0,0,0,0,2,3,0,2,0,2,3,2,0,0,3,0,3,0,3,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,3,2,3,3,2,2,3,0,2,0,3,0,0,0,2,0,0,0,0,1,2,0,2,0,2,0, 
0,2,0,2,0,2,2,0,0,1,0,2,2,2,0,2,2,2,0,2,2,2,0,0,2,0,0,1,0,0,0,0, 0,2,0,3,3,2,0,0,0,0,0,0,1,3,0,2,0,2,2,2,0,0,2,0,3,0,0,2,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,0,2,3,2,0,2,2,0,2,0,2,2,0,2,0,2,2,2,0,0,0,0,0,0,2,3,0,0,0,2, 0,1,2,0,0,0,0,2,2,0,0,0,2,1,0,2,2,0,0,0,0,0,0,1,0,2,0,0,0,0,0,0, 0,0,2,1,0,2,3,2,2,3,2,3,2,0,0,3,3,3,0,0,3,2,0,0,0,1,1,0,2,0,2,2, 0,2,0,2,0,2,2,0,0,2,0,2,2,2,0,2,2,2,2,0,0,2,0,0,0,2,0,1,0,0,0,0, 0,3,0,3,3,2,2,0,3,0,0,0,2,2,0,2,2,2,1,2,0,0,1,2,2,0,0,3,0,0,0,2, 0,1,2,0,0,0,1,2,0,0,0,0,0,0,0,2,2,0,1,0,0,2,0,0,0,2,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,3,3,2,2,0,0,0,2,0,2,3,3,0,2,0,0,0,0,0,0,2,2,2,0,2,2,0,2,0,2, 0,2,2,0,0,2,2,2,2,1,0,0,2,2,0,2,0,0,2,0,0,0,0,0,0,2,0,0,0,0,0,0, 0,2,0,3,2,3,0,0,0,3,0,0,2,2,0,2,0,2,2,2,0,0,2,0,0,0,0,0,0,0,0,2, 0,0,2,2,0,0,2,2,2,0,0,0,0,0,0,2,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,2,0,0,3,2,0,2,2,2,2,2,0,0,0,2,0,0,0,0,2,0,1,0,0,2,0,1,0,0,0, 0,2,2,2,0,2,2,0,1,2,0,2,2,2,0,2,2,2,2,1,2,2,0,0,2,0,0,0,0,0,0,0, 0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, 0,2,0,2,0,2,2,0,0,0,0,1,2,1,0,0,2,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,3,2,3,0,0,2,0,0,0,2,2,0,2,0,0,0,1,0,0,2,0,2,0,2,2,0,0,0,0, 0,0,2,0,0,0,0,2,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0, 0,2,2,3,2,2,0,0,0,0,0,0,1,3,0,2,0,2,2,0,0,0,1,0,2,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,0,2,0,3,2,0,2,0,0,0,0,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 0,0,2,0,0,0,0,1,1,0,0,2,1,2,0,2,2,0,1,0,0,1,0,0,0,2,0,0,0,0,0,0, 0,3,0,2,2,2,0,0,2,0,0,0,2,0,0,0,2,3,0,2,0,0,0,0,0,0,2,2,0,0,0,2, 0,1,2,0,0,0,1,2,2,1,0,0,0,2,0,0,2,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,1,2,0,2,2,0,2,0,0,2,0,0,0,0,1,2,1,0,2,1,0,0,0,0,0,0,0,0,0,0, 0,0,2,0,0,0,3,1,2,2,0,2,0,0,0,0,2,0,0,0,2,0,0,3,0,0,0,0,2,2,2,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,1,0,2,0,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,0,0,0,0,2, 0,2,2,0,0,2,2,2,2,2,0,1,2,0,0,0,2,2,0,1,0,2,0,0,2,2,0,0,0,0,0,0, 0,0,0,0,1,0,0,0,0,0,0,0,3,0,0,2,0,0,0,0,0,0,0,0,2,0,2,0,0,0,0,2, 0,1,2,0,0,0,0,2,2,1,0,1,0,1,0,2,2,2,1,0,0,0,0,0,0,1,0,0,0,0,0,0, 0,2,0,1,2,0,0,0,0,0,0,0,0,0,0,2,0,0,2,2,0,0,0,0,1,0,0,0,0,0,0,2, 0,2,2,0,0,0,0,2,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,2,0,0,0, 0,2,2,2,2,0,0,0,3,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,2,0,0,0,0,0,0,1, 0,0,2,0,0,0,0,1,2,0,0,0,0,0,0,2,2,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0, 0,2,0,2,2,2,0,0,2,0,0,0,0,0,0,0,2,2,2,0,0,0,2,0,0,0,0,0,0,0,0,2, 0,0,1,0,0,0,0,2,1,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0, 0,3,0,2,0,0,0,0,0,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,2,0,0,0,0,2, 0,0,2,0,0,0,0,2,2,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,2,0,2,2,1,0,0,0,0,0,0,2,0,0,2,0,2,2,2,0,0,0,0,0,0,2,0,0,0,0,2, 0,0,2,0,0,2,0,2,2,0,0,0,0,2,0,2,0,0,0,0,0,2,0,0,0,2,0,0,0,0,0,0, 0,0,3,0,0,0,2,2,0,2,2,0,0,0,0,0,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,2,0,0,0,0,0, 0,2,2,2,2,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1, 0,0,0,0,0,0,0,2,1,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,2,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, 0,2,0,0,0,2,0,0,0,0,0,1,0,0,0,0,2,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,2,0,0,0,
0,2,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,2,0,2,0,0,0,
0,0,0,0,0,0,0,0,2,1,0,0,0,0,0,0,2,0,0,0,1,2,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
)

Latin7GreekModel = {
    'char_to_order_map': Latin7_char_to_order_map,
    'precedence_matrix': GreekLangModel,
    'typical_positive_ratio': 0.982851,
    'keep_english_letter': False,
    'charset_name': "ISO-8859-7",
    'language': 'Greek',
}

Win1253GreekModel = {
    'char_to_order_map': win1253_char_to_order_map,
    'precedence_matrix': GreekLangModel,
    'typical_positive_ratio': 0.982851,
    'keep_english_letter': False,
    'charset_name': "windows-1253",
    'language': 'Greek',
}

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langhebrewmodel.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Simon Montagu
# Portions created by the Initial Developer are Copyright (C) 2005
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#   Shy Shalom - original C code
#   Shoshannah Forbes - original C code (?)
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
# # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # 255: Control characters that usually does not exist in any text # 254: Carriage/Return # 253: symbol (punctuation) that does not belong to word # 252: 0 - 9 # Windows-1255 language model # Character Mapping Table: WIN1255_CHAR_TO_ORDER_MAP = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 69, 91, 79, 80, 92, 89, 97, 90, 68,111,112, 82, 73, 95, 85, # 40 78,121, 86, 71, 67,102,107, 84,114,103,115,253,253,253,253,253, # 50 253, 50, 74, 60, 61, 42, 76, 70, 64, 53,105, 93, 56, 65, 54, 49, # 60 66,110, 51, 43, 44, 63, 81, 77, 98, 75,108,253,253,253,253,253, # 70 124,202,203,204,205, 40, 58,206,207,208,209,210,211,212,213,214, 215, 83, 52, 47, 46, 72, 32, 94,216,113,217,109,218,219,220,221, 34,116,222,118,100,223,224,117,119,104,125,225,226, 87, 99,227, 106,122,123,228, 55,229,230,101,231,232,120,233, 48, 39, 57,234, 30, 59, 41, 88, 33, 37, 36, 31, 29, 35,235, 62, 28,236,126,237, 238, 38, 45,239,240,241,242,243,127,244,245,246,247,248,249,250, 9, 8, 20, 16, 3, 2, 24, 14, 22, 1, 25, 15, 4, 11, 6, 23, 12, 19, 13, 26, 18, 27, 21, 17, 7, 10, 5,251,252,128, 96,253, ) # Model Table: # total sequences: 100% # first 512 sequences: 98.4004% # first 1024 sequences: 1.5981% # rest sequences: 0.087% # negative sequences: 0.0015% HEBREW_LANG_MODEL = ( 0,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,2,3,2,1,2,0,1,0,0, 3,0,3,1,0,0,1,3,2,0,1,1,2,0,2,2,2,1,1,1,1,2,1,1,1,2,0,0,2,2,0,1, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2, 1,2,1,2,1,2,0,0,2,0,0,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2, 1,2,1,3,1,1,0,0,2,0,0,0,1,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,0,1,2,2,1,3, 1,2,1,1,2,2,0,0,2,2,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,1,0,1,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,2,2,2,2,3,2, 1,2,1,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,2,3,2,2,3,2,2,2,1,2,2,2,2, 1,2,1,1,2,2,0,1,2,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,0,2,2,2,2,2, 0,2,0,2,2,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,0,2,2,2, 0,2,1,2,2,2,0,0,2,1,0,0,0,0,1,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,2,1,2,3,2,2,2, 1,2,1,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,0, 3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,1,0,2,0,2, 0,2,1,2,2,2,0,0,1,2,0,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,2,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,2,3,2,2,3,2,1,2,1,1,1, 0,1,1,1,1,1,3,0,1,0,0,0,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,0,0,1,0,0,1,0,0,0,0, 
0,0,1,0,0,0,0,0,2,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2, 0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,2,3,3,3,2,1,2,3,3,2,3,3,3,3,2,3,2,1,2,0,2,1,2, 0,2,0,2,2,2,0,0,1,2,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0, 3,3,3,3,3,3,3,3,3,2,3,3,3,1,2,2,3,3,2,3,2,3,2,2,3,1,2,2,0,2,2,2, 0,2,1,2,2,2,0,0,1,2,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,1,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,2,2,3,3,3,3,1,3,2,2,2, 0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,2,3,2,2,2,1,2,2,0,2,2,2,2, 0,2,0,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,1,3,2,3,3,2,3,3,2,2,1,2,2,2,2,2,2, 0,2,1,2,1,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,2,3,2,3,3,2,3,3,3,3,2,3,2,3,3,3,3,3,2,2,2,2,2,2,2,1, 0,2,0,1,2,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,2,1,2,3,3,3,3,3,3,3,2,3,2,3,2,1,2,3,0,2,1,2,2, 0,2,1,1,2,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,2,0, 3,3,3,3,3,3,3,3,3,2,3,3,3,3,2,1,3,1,2,2,2,1,2,3,3,1,2,1,2,2,2,2, 0,1,1,1,1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,0,2,3,3,3,1,3,3,3,1,2,2,2,2,1,1,2,2,2,2,2,2, 0,2,0,1,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,2,3,3,3,2,2,3,3,3,2,1,2,3,2,3,2,2,2,2,1,2,1,1,1,2,2, 0,2,1,1,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,1,0,0,0,0,0, 1,0,1,0,0,0,0,0,2,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,2,3,3,2,3,1,2,2,2,2,3,2,3,1,1,2,2,1,2,2,1,1,0,2,2,2,2, 0,1,0,1,2,2,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, 3,0,0,1,1,0,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,0, 0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,1,0,1,0,1,1,0,1,1,0,0,0,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, 3,2,2,1,2,2,2,2,2,2,2,1,2,2,1,2,2,1,1,1,1,1,1,1,1,2,1,1,0,3,3,3, 0,3,0,2,2,2,2,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 2,2,2,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,2,2,1,2,2,2,1,1,1,2,0,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,2,2,2,2,2,2,2,0,2,2,0,0,0,0,0,0, 0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,2,1,0,2,1,0, 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, 0,3,1,1,2,2,2,2,2,1,2,2,2,1,1,2,2,2,2,2,2,2,1,2,2,1,0,1,1,1,1,0, 0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,2,1,1,1,1,2,1,1,2,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0, 0,0,2,0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,0,0,1,1,0,0,0,0,0,0,1,0,0, 2,1,1,2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,2,1,2,1,2,1,1,1,1,0,0,0,0, 0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,2,1,2,2,2,2,2,2,2,2,2,2,1,2,1,2,1,1,2,1,1,1,2,1,2,1,2,0,1,0,1, 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,3,1,2,2,2,1,2,2,2,2,2,2,2,2,1,2,1,1,1,1,1,1,2,1,2,1,1,0,1,0,1, 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,1,2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,2, 
0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,1,1,1,1,1,1,1,0,1,1,0,1,0,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,2,0,1,1,1,0,1,0,0,0,1,1,0,1,1,0,0,0,0,0,1,1,0,0, 0,1,1,1,2,1,2,2,2,0,2,0,2,0,1,1,2,1,1,1,1,2,1,0,1,1,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,1,0,0,0,0,0,1,0,1,2,2,0,1,0,0,1,1,2,2,1,2,0,2,0,0,0,1,2,0,1, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,2,0,2,1,2,0,2,0,0,1,1,1,1,1,1,0,1,0,0,0,1,0,0,1, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,1,0,0,0,0,0,1,0,2,1,1,0,1,0,0,1,1,1,2,2,0,0,1,0,0,0,1,0,0,1, 1,1,2,1,0,1,1,1,0,1,0,1,1,1,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,2,2,1, 0,2,0,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,1,0,0,1,0,1,1,1,1,0,0,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,1,1,1,1,1,1,1,1,2,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,1,1,0,1,0,0,0,1,1,0,1, 2,0,1,0,1,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,1,1,1,0,1,0,0,1,1,2,1,1,2,0,1,0,0,0,1,1,0,1, 1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,0,0,2,1,1,2,0,2,0,0,0,1,1,0,1, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,2,1,1,0,1,0,0,2,2,1,2,1,1,0,1,0,0,0,1,1,0,1, 2,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,2,2,0,0,0,0,0,1,1,0,1,0,0,1,0,0,0,0,1,0,1, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,2,2,0,0,0,0,2,1,1,1,0,2,1,1,0,0,0,2,1,0,1, 1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,1,1,0,2,1,1,0,1,0,0,0,1,1,0,1, 2,2,1,1,1,0,1,1,0,1,1,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,2,1,1,0,1,0,0,1,1,0,1,2,1,0,2,0,0,0,1,1,0,1, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0, 0,1,0,0,2,0,2,1,1,0,1,0,1,0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,1,1,1,0,1,0,0,1,0,0,0,1,0,0,1, 1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,0,0,0,0,0,0,1,0,1,1,0,0,1,0,0,2,1,1,1,1,1,0,1,0,0,0,0,1,0,1, 0,1,1,1,2,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,1,2,1,0,0,0,0,0,1,1,1,1,1,0,1,0,0,0,1,1,0,0, ) Win1255HebrewModel = { 'char_to_order_map': WIN1255_CHAR_TO_ORDER_MAP, 'precedence_matrix': HEBREW_LANG_MODEL, 'typical_positive_ratio': 0.984004, 'keep_english_letter': False, 'charset_name': "windows-1255", 'language': 'Hebrew', } 
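
# --- Editor's illustration (a hypothetical helper, not part of the upstream
# module): every char-to-order map in these language modules reserves the
# same sentinel orders, per the legend repeated in the file headers:
# 255 = control character, 254 = carriage return / line feed, 253 = symbol
# (punctuation), 252 = digit. Anything smaller is a letter's frequency rank,
# where lower means more frequent.
def classify_byte(order_map, byte):
    order = order_map[byte]
    if order == 255:
        return "control"
    if order == 254:
        return "line break"
    if order == 253:
        return "symbol"
    if order == 252:
        return "digit"
    return "letter (frequency order %d)" % order

# e.g. classify_byte(WIN1255_CHAR_TO_ORDER_MAP, ord("5")) -> "digit"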
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langhungarianmodel.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # 255: Control characters that usually does not exist in any text # 254: Carriage/Return # 253: symbol (punctuation) that does not belong to word # 252: 0 - 9 # Character Mapping Table: Latin2_HungarianCharToOrderMap = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47, 46, 71, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253, 253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8, 23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253, 159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174, 175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190, 191,192,193,194,195,196,197, 75,198,199,200,201,202,203,204,205, 79,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220, 221, 51, 81,222, 78,223,224,225,226, 44,227,228,229, 61,230,231, 232,233,234, 58,235, 66, 59,236,237,238, 60, 69, 63,239,240,241, 82, 14, 74,242, 70, 80,243, 72,244, 15, 83, 77, 84, 30, 76, 85, 245,246,247, 25, 73, 42, 24,248,249,250, 31, 56, 29,251,252,253, ) win1250HungarianCharToOrderMap = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47, 46, 72, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253, 253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8, 23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253, 161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176, 177,178,179,180, 78,181, 69,182,183,184,185,186,187,188,189,190, 191,192,193,194,195,196,197, 76,198,199,200,201,202,203,204,205, 81,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220, 221, 51, 83,222, 80,223,224,225,226, 44,227,228,229, 61,230,231, 232,233,234, 58,235, 66, 59,236,237,238, 60, 70, 63,239,240,241, 84, 14, 75,242, 71, 82,243, 73,244, 15, 85, 79, 86, 30, 77, 87, 245,246,247, 25, 74, 42, 24,248,249,250, 31, 56, 29,251,252,253, ) # Model Table: # total sequences: 100% # first 512 sequences: 94.7368% # first 1024 sequences:5.2623% # rest sequences: 0.8894% # negative sequences: 0.0009% HungarianLangModel = ( 0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3, 3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,2,3,3,1,1,2,2,2,2,2,1,2, 3,2,2,3,3,3,3,3,2,3,3,3,3,3,3,1,2,3,3,3,3,2,3,3,1,1,3,3,0,1,1,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0, 3,2,1,3,3,3,3,3,2,3,3,3,3,3,1,1,2,3,3,3,3,3,3,3,1,1,3,2,0,1,1,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,1,1,2,3,3,3,1,3,3,3,3,3,1,3,3,2,2,0,3,2,3, 0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,3,3,2,3,3,2,2,3,2,3,2,0,3,2,2, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0, 3,3,3,3,3,3,2,3,3,3,3,3,2,3,3,3,1,2,3,2,2,3,1,2,3,3,2,2,0,3,3,3, 
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,3,2,3,3,3,3,2,3,3,3,3,0,2,3,2, 0,0,0,1,1,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,1,1,1,3,3,2,1,3,2,2,3,2,1,3,2,2,1,0,3,3,1, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,2,2,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,3,2,2,3,1,1,3,2,0,1,1,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,2,1,3,3,3,3,3,2,2,1,3,3,3,0,1,1,2, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,2,0,3,2,3, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,1,0, 3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,1,3,2,2,2,3,1,1,3,3,1,1,0,3,3,2, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,2,3,3,3,3,3,1,2,3,2,2,0,2,2,2, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,2,2,2,3,1,3,3,2,2,1,3,3,3,1,1,3,1,2,3,2,3,2,2,2,1,0,2,2,2, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0, 3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,2,2,3,2,1,0,3,2,0,1,1,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,1,0,3,3,3,3,0,2,3,0,0,2,1,0,1,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,2,2,3,3,2,2,2,2,3,3,0,1,2,3,2,3,2,2,3,2,1,2,0,2,2,2, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0, 3,3,3,3,3,3,1,2,3,3,3,2,1,2,3,3,2,2,2,3,2,3,3,1,3,3,1,1,0,2,3,2, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,1,2,2,2,2,3,3,3,1,1,1,3,3,1,1,3,1,1,3,2,1,2,3,1,1,0,2,2,2, 0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,2,1,2,1,1,3,3,1,1,1,1,3,3,1,1,2,2,1,2,1,1,2,2,1,1,0,2,2,1, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,1,1,2,1,1,3,3,1,0,1,1,3,3,2,0,1,1,2,3,1,0,2,2,1,0,0,1,3,2, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,2,1,3,3,3,3,3,1,2,3,2,3,3,2,1,1,3,2,3,2,1,2,2,0,1,2,1,0,0,1,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,3,3,2,2,2,2,3,1,2,2,1,1,3,3,0,3,2,1,2,3,2,1,3,3,1,1,0,2,1,3, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,3,3,2,2,2,3,2,3,3,3,2,1,1,3,3,1,1,1,2,2,3,2,3,2,2,2,1,0,2,2,1, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 1,0,0,3,3,3,3,3,0,0,3,3,2,3,0,0,0,2,3,3,1,0,1,2,0,0,1,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,2,3,3,3,3,3,1,2,3,3,2,2,1,1,0,3,3,2,2,1,2,2,1,0,2,2,0,1,1,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,2,2,1,3,1,2,3,3,2,2,1,1,2,2,1,1,1,1,3,2,1,1,1,1,2,1,0,1,2,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0, 2,3,3,1,1,1,1,1,3,3,3,0,1,1,3,3,1,1,1,1,1,2,2,0,3,1,1,2,0,2,1,1, 0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, 3,1,0,1,2,1,2,2,0,1,2,3,1,2,0,0,0,2,1,1,1,1,1,2,0,0,1,1,0,0,0,0, 1,2,1,2,2,2,1,2,1,2,0,2,0,2,2,1,1,2,1,1,2,1,1,1,0,1,0,0,0,1,1,0, 1,1,1,2,3,2,3,3,0,1,2,2,3,1,0,1,0,2,1,2,2,0,1,1,0,0,1,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,0,3,3,2,2,1,0,0,3,2,3,2,0,0,0,1,1,3,0,0,1,1,0,0,2,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,1,2,2,3,3,1,0,1,3,2,3,1,1,1,0,1,1,1,1,1,3,1,0,0,2,2,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,1,1,2,2,2,1,0,1,2,3,3,2,0,0,0,2,1,1,1,2,1,1,1,0,1,1,1,0,0,0, 
1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,2,1,1,1,1,1,1,0,1,1,1,0,0,1,1, 3,2,2,1,0,0,1,1,2,2,0,3,0,1,2,1,1,0,0,1,1,1,0,1,1,1,1,0,2,1,1,1, 2,2,1,1,1,2,1,2,1,1,1,1,1,1,1,2,1,1,1,2,3,1,1,1,1,1,1,1,1,1,0,1, 2,3,3,0,1,0,0,0,3,3,1,0,0,1,2,2,1,0,0,0,0,2,0,0,1,1,1,0,2,1,1,1, 2,1,1,1,1,1,1,2,1,1,0,1,1,0,1,1,1,0,1,2,1,1,0,1,1,1,1,1,1,1,0,1, 2,3,3,0,1,0,0,0,2,2,0,0,0,0,1,2,2,0,0,0,0,1,0,0,1,1,0,0,2,0,1,0, 2,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1, 3,2,2,0,1,0,1,0,2,3,2,0,0,1,2,2,1,0,0,1,1,1,0,0,2,1,0,1,2,2,1,1, 2,1,1,1,1,1,1,2,1,1,1,1,1,1,0,2,1,0,1,1,0,1,1,1,0,1,1,2,1,1,0,1, 2,2,2,0,0,1,0,0,2,2,1,1,0,0,2,1,1,0,0,0,1,2,0,0,2,1,0,0,2,1,1,1, 2,1,1,1,1,2,1,2,1,1,1,2,2,1,1,2,1,1,1,2,1,1,1,1,1,1,1,1,1,1,0,1, 1,2,3,0,0,0,1,0,3,2,1,0,0,1,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,2,1, 1,1,0,0,0,1,0,1,1,1,1,1,2,0,0,1,0,0,0,2,0,0,1,1,1,1,1,1,1,1,0,1, 3,0,0,2,1,2,2,1,0,0,2,1,2,2,0,0,0,2,1,1,1,0,1,1,0,0,1,1,2,0,0,0, 1,2,1,2,2,1,1,2,1,2,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,0,0,1, 1,3,2,0,0,0,1,0,2,2,2,0,0,0,2,2,1,0,0,0,0,3,1,1,1,1,0,0,2,1,1,1, 2,1,0,1,1,1,0,1,1,1,1,1,1,1,0,2,1,0,0,1,0,1,1,0,1,1,1,1,1,1,0,1, 2,3,2,0,0,0,1,0,2,2,0,0,0,0,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,1,0, 2,1,1,1,1,2,1,2,1,2,0,1,1,1,0,2,1,1,1,2,1,1,1,1,0,1,1,1,1,1,0,1, 3,1,1,2,2,2,3,2,1,1,2,2,1,1,0,1,0,2,2,1,1,1,1,1,0,0,1,1,0,1,1,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,2,2,0,0,0,0,0,2,2,0,0,0,0,2,2,1,0,0,0,1,1,0,0,1,2,0,0,2,1,1,1, 2,2,1,1,1,2,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,1,1,0,1,2,1,1,1,0,1, 1,0,0,1,2,3,2,1,0,0,2,0,1,1,0,0,0,1,1,1,1,0,1,1,0,0,1,0,0,0,0,0, 1,2,1,2,1,2,1,1,1,2,0,2,1,1,1,0,1,2,0,0,1,1,1,0,0,0,0,0,0,0,0,0, 2,3,2,0,0,0,0,0,1,1,2,1,0,0,1,1,1,0,0,0,0,2,0,0,1,1,0,0,2,1,1,1, 2,1,1,1,1,1,1,2,1,0,1,1,1,1,0,2,1,1,1,1,1,1,0,1,0,1,1,1,1,1,0,1, 1,2,2,0,1,1,1,0,2,2,2,0,0,0,3,2,1,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0, 1,1,0,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,0,0,1,1,1,0,1,0,1, 2,1,0,2,1,1,2,2,1,1,2,1,1,1,0,0,0,1,1,0,1,1,1,1,0,0,1,1,1,0,0,0, 1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,1,0, 1,2,3,0,0,0,1,0,2,2,0,0,0,0,2,2,0,0,0,0,0,1,0,0,1,0,0,0,2,0,1,0, 2,1,1,1,1,1,0,2,0,0,0,1,2,1,1,1,1,0,1,2,0,1,0,1,0,1,1,1,0,1,0,1, 2,2,2,0,0,0,1,0,2,1,2,0,0,0,1,1,2,0,0,0,0,1,0,0,1,1,0,0,2,1,0,1, 2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1, 1,2,2,0,0,0,1,0,2,2,2,0,0,0,1,1,0,0,0,0,0,1,1,0,2,0,0,1,1,1,0,1, 1,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,0,0,1,1,0,1,0,1,1,1,1,1,0,0,0,1, 1,0,0,1,0,1,2,1,0,0,1,1,1,2,0,0,0,1,1,0,1,0,1,1,0,0,1,0,0,0,0,0, 0,2,1,2,1,1,1,1,1,2,0,2,0,1,1,0,1,2,1,0,1,1,1,0,0,0,0,0,0,1,0,0, 2,1,1,0,1,2,0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,1,0,0,0,2,1,0,1, 2,2,1,1,1,1,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,0,1,0,1,1,1,1,1,0,1, 1,2,2,0,0,0,0,0,1,1,0,0,0,0,2,1,0,0,0,0,0,2,0,0,2,2,0,0,2,0,0,1, 2,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,0,0,0,1,1,1,1,0,0,1,1,1,1,0,0,1, 1,1,2,0,0,3,1,0,2,1,1,1,0,0,1,1,1,0,0,0,1,1,0,0,0,1,0,0,1,0,1,0, 1,2,1,0,1,1,1,2,1,1,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,1,0,0,0,1,0,0, 2,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,2,0,0,0, 2,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,1,0,1, 2,1,1,1,2,1,1,1,0,1,1,2,1,0,0,0,0,1,1,1,1,0,1,0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,1,0,1,1,1,1,1,0,0,1,1,2,1,0,0,0,1,1,0,0,0,1,1,0,0,1,0,1,0,0,0, 1,2,1,1,1,1,1,1,1,1,0,1,0,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0, 2,0,0,0,1,1,1,1,0,0,1,1,0,0,0,0,0,1,1,1,2,0,0,1,0,0,1,0,1,0,0,0, 0,1,1,1,1,1,1,1,1,2,0,1,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0, 1,0,0,1,1,1,1,1,0,0,2,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0, 
0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,
0,1,1,1,1,1,0,0,1,1,0,1,0,1,0,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
0,0,0,1,0,0,0,0,0,0,1,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,1,1,1,0,1,0,0,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
2,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0,0,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,1,1,1,1,1,1,0,1,1,0,1,0,1,0,0,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
)

Latin2HungarianModel = {
    'char_to_order_map': Latin2_HungarianCharToOrderMap,
    'precedence_matrix': HungarianLangModel,
    'typical_positive_ratio': 0.947368,
    'keep_english_letter': True,
    'charset_name': "ISO-8859-2",
    'language': 'Hungarian',
}

Win1250HungarianModel = {
    'char_to_order_map': win1250HungarianCharToOrderMap,
    'precedence_matrix': HungarianLangModel,
    'typical_positive_ratio': 0.947368,
    'keep_english_letter': True,
    'charset_name': "windows-1250",
    'language': 'Hungarian',
}

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langthaimodel.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
# # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # 255: Control characters that usually does not exist in any text # 254: Carriage/Return # 253: symbol (punctuation) that does not belong to word # 252: 0 - 9 # The following result for thai was collected from a limited sample (1M). # Character Mapping Table: TIS620CharToOrderMap = ( 255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 253,182,106,107,100,183,184,185,101, 94,186,187,108,109,110,111, # 40 188,189,190, 89, 95,112,113,191,192,193,194,253,253,253,253,253, # 50 253, 64, 72, 73,114, 74,115,116,102, 81,201,117, 90,103, 78, 82, # 60 96,202, 91, 79, 84,104,105, 97, 98, 92,203,253,253,253,253,253, # 70 209,210,211,212,213, 88,214,215,216,217,218,219,220,118,221,222, 223,224, 99, 85, 83,225,226,227,228,229,230,231,232,233,234,235, 236, 5, 30,237, 24,238, 75, 8, 26, 52, 34, 51,119, 47, 58, 57, 49, 53, 55, 43, 20, 19, 44, 14, 48, 3, 17, 25, 39, 62, 31, 54, 45, 9, 16, 2, 61, 15,239, 12, 42, 46, 18, 21, 76, 4, 66, 63, 22, 10, 1, 36, 23, 13, 40, 27, 32, 35, 86,240,241,242,243,244, 11, 28, 41, 29, 33,245, 50, 37, 6, 7, 67, 77, 38, 93,246,247, 68, 56, 59, 65, 69, 60, 70, 80, 71, 87,248,249,250,251,252,253, ) # Model Table: # total sequences: 100% # first 512 sequences: 92.6386% # first 1024 sequences:7.3177% # rest sequences: 1.0230% # negative sequences: 0.0436% ThaiLangModel = ( 0,1,3,3,3,3,0,0,3,3,0,3,3,0,3,3,3,3,3,3,3,3,0,0,3,3,3,0,3,3,3,3, 0,3,3,0,0,0,1,3,0,3,3,2,3,3,0,1,2,3,3,3,3,0,2,0,2,0,0,3,2,1,2,2, 3,0,3,3,2,3,0,0,3,3,0,3,3,0,3,3,3,3,3,3,3,3,3,0,3,2,3,0,2,2,2,3, 0,2,3,0,0,0,0,1,0,1,2,3,1,1,3,2,2,0,1,1,0,0,1,0,0,0,0,0,0,0,1,1, 3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,3,3,2,3,2,3,3,2,2,2, 3,1,2,3,0,3,3,2,2,1,2,3,3,1,2,0,1,3,0,1,0,0,1,0,0,0,0,0,0,0,1,1, 3,3,2,2,3,3,3,3,1,2,3,3,3,3,3,2,2,2,2,3,3,2,2,3,3,2,2,3,2,3,2,2, 3,3,1,2,3,1,2,2,3,3,1,0,2,1,0,0,3,1,2,1,0,0,1,0,0,0,0,0,0,1,0,1, 3,3,3,3,3,3,2,2,3,3,3,3,2,3,2,2,3,3,2,2,3,2,2,2,2,1,1,3,1,2,1,1, 3,2,1,0,2,1,0,1,0,1,1,0,1,1,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0, 3,3,3,2,3,2,3,3,2,2,3,2,3,3,2,3,1,1,2,3,2,2,2,3,2,2,2,2,2,1,2,1, 2,2,1,1,3,3,2,1,0,1,2,2,0,1,3,0,0,0,1,1,0,0,0,0,0,2,3,0,0,2,1,1, 3,3,2,3,3,2,0,0,3,3,0,3,3,0,2,2,3,1,2,2,1,1,1,0,2,2,2,0,2,2,1,1, 0,2,1,0,2,0,0,2,0,1,0,0,1,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,0, 3,3,2,3,3,2,0,0,3,3,0,2,3,0,2,1,2,2,2,2,1,2,0,0,2,2,2,0,2,2,1,1, 0,2,1,0,2,0,0,2,0,1,1,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0, 3,3,2,3,2,3,2,0,2,2,1,3,2,1,3,2,1,2,3,2,2,3,0,2,3,2,2,1,2,2,2,2, 1,2,2,0,0,0,0,2,0,1,2,0,1,1,1,0,1,0,3,1,1,0,0,0,0,0,0,0,0,0,1,0, 3,3,2,3,3,2,3,2,2,2,3,2,2,3,2,2,1,2,3,2,2,3,1,3,2,2,2,3,2,2,2,3, 3,2,1,3,0,1,1,1,0,2,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,2,0,0, 1,0,0,3,0,3,3,3,3,3,0,0,3,0,2,2,3,3,3,3,3,0,0,0,1,1,3,0,0,0,0,2, 0,0,1,0,0,0,0,0,0,0,2,3,0,0,0,3,0,2,0,0,0,0,0,3,0,0,0,0,0,0,0,0, 
2,0,3,3,3,3,0,0,2,3,0,0,3,0,3,3,2,3,3,3,3,3,0,0,3,3,3,0,0,0,3,3, 0,0,3,0,0,0,0,2,0,0,2,1,1,3,0,0,1,0,0,2,3,0,1,0,0,0,0,0,0,0,1,0, 3,3,3,3,2,3,3,3,3,3,3,3,1,2,1,3,3,2,2,1,2,2,2,3,1,1,2,0,2,1,2,1, 2,2,1,0,0,0,1,1,0,1,0,1,1,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0, 3,0,2,1,2,3,3,3,0,2,0,2,2,0,2,1,3,2,2,1,2,1,0,0,2,2,1,0,2,1,2,2, 0,1,1,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,2,1,3,3,1,1,3,0,2,3,1,1,3,2,1,1,2,0,2,2,3,2,1,1,1,1,1,2, 3,0,0,1,3,1,2,1,2,0,3,0,0,0,1,0,3,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0, 3,3,1,1,3,2,3,3,3,1,3,2,1,3,2,1,3,2,2,2,2,1,3,3,1,2,1,3,1,2,3,0, 2,1,1,3,2,2,2,1,2,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2, 3,3,2,3,2,3,3,2,3,2,3,2,3,3,2,1,0,3,2,2,2,1,2,2,2,1,2,2,1,2,1,1, 2,2,2,3,0,1,3,1,1,1,1,0,1,1,0,2,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,2,3,2,2,1,1,3,2,3,2,3,2,0,3,2,2,1,2,0,2,2,2,1,2,2,2,2,1, 3,2,1,2,2,1,0,2,0,1,0,0,1,1,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,1, 3,3,3,3,3,2,3,1,2,3,3,2,2,3,0,1,1,2,0,3,3,2,2,3,0,1,1,3,0,0,0,0, 3,1,0,3,3,0,2,0,2,1,0,0,3,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,2,3,2,3,3,0,1,3,1,1,2,1,2,1,1,3,1,1,0,2,3,1,1,1,1,1,1,1,1, 3,1,1,2,2,2,2,1,1,1,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 3,2,2,1,1,2,1,3,3,2,3,2,2,3,2,2,3,1,2,2,1,2,0,3,2,1,2,2,2,2,2,1, 3,2,1,2,2,2,1,1,1,1,0,0,1,1,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,1,3,3,0,2,1,0,3,2,0,0,3,1,0,1,1,0,1,0,0,0,0,0,1, 1,0,0,1,0,3,2,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,2,2,2,3,0,0,1,3,0,3,2,0,3,2,2,3,3,3,3,3,1,0,2,2,2,0,2,2,1,2, 0,2,3,0,0,0,0,1,0,1,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 3,0,2,3,1,3,3,2,3,3,0,3,3,0,3,2,2,3,2,3,3,3,0,0,2,2,3,0,1,1,1,3, 0,0,3,0,0,0,2,2,0,1,3,0,1,2,2,2,3,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1, 3,2,3,3,2,0,3,3,2,2,3,1,3,2,1,3,2,0,1,2,2,0,2,3,2,1,0,3,0,0,0,0, 3,0,0,2,3,1,3,0,0,3,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,1,3,2,2,2,1,2,0,1,3,1,1,3,1,3,0,0,2,1,1,1,1,2,1,1,1,0,2,1,0,1, 1,2,0,0,0,3,1,1,0,0,0,0,1,0,1,0,0,1,0,1,0,0,0,0,0,3,1,0,0,0,1,0, 3,3,3,3,2,2,2,2,2,1,3,1,1,1,2,0,1,1,2,1,2,1,3,2,0,0,3,1,1,1,1,1, 3,1,0,2,3,0,0,0,3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,2,3,0,3,3,0,2,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,0,0,0,0,0, 0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,2,3,1,3,0,0,1,2,0,0,2,0,3,3,2,3,3,3,2,3,0,0,2,2,2,0,0,0,2,2, 0,0,1,0,0,0,0,3,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 0,0,0,3,0,2,0,0,0,0,0,0,0,0,0,0,1,2,3,1,3,3,0,0,1,0,3,0,0,0,0,0, 0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,1,2,3,1,2,3,1,0,3,0,2,2,1,0,2,1,1,2,0,1,0,0,1,1,1,1,0,1,0,0, 1,0,0,0,0,1,1,0,3,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,2,1,0,1,1,1,3,1,2,2,2,2,2,2,1,1,1,1,0,3,1,0,1,3,1,1,1,1, 1,1,0,2,0,1,3,1,1,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,1, 3,0,2,2,1,3,3,2,3,3,0,1,1,0,2,2,1,2,1,3,3,1,0,0,3,2,0,0,0,0,2,1, 0,1,0,0,0,0,1,2,0,1,1,3,1,1,2,2,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, 0,0,3,0,0,1,0,0,0,3,0,0,3,0,3,1,0,1,1,1,3,2,0,0,0,3,0,0,0,0,2,0, 0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0, 3,3,1,3,2,1,3,3,1,2,2,0,1,2,1,0,1,2,0,0,0,0,0,3,0,0,0,3,0,0,0,0, 3,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,1,2,0,3,3,3,2,2,0,1,1,0,1,3,0,0,0,2,2,0,0,0,0,3,1,0,1,0,0,0, 0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,2,3,1,2,0,0,2,1,0,3,1,0,1,2,0,1,1,1,1,3,0,0,3,1,1,0,2,2,1,1, 0,2,0,0,0,0,0,1,0,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,0,3,1,2,0,0,2,2,0,1,2,0,1,0,1,3,1,2,1,0,0,0,2,0,3,0,0,0,1,0, 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
3,0,1,1,2,2,0,0,0,2,0,2,1,0,1,1,0,1,1,1,2,1,0,0,1,1,1,0,2,1,1,1, 0,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1, 0,0,0,2,0,1,3,1,1,1,1,0,0,0,0,3,2,0,1,0,0,0,1,2,0,0,0,1,0,0,0,0, 0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,3,3,3,3,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,0,2,3,2,2,0,0,0,1,0,0,0,0,2,3,2,1,2,2,3,0,0,0,2,3,1,0,0,0,1,1, 0,0,1,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0, 3,3,2,2,0,1,0,0,0,0,2,0,2,0,1,0,0,0,1,1,0,0,0,2,1,0,1,0,1,1,0,0, 0,1,0,2,0,0,1,0,3,0,1,0,0,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,1,0,0,1,0,0,0,0,0,1,1,2,0,0,0,0,1,0,0,1,3,1,0,0,0,0,1,1,0,0, 0,1,0,0,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0, 3,3,1,1,1,1,2,3,0,0,2,1,1,1,1,1,0,2,1,1,0,0,0,2,1,0,1,2,1,1,0,1, 2,1,0,3,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,3,1,0,0,0,0,0,0,0,3,0,0,0,3,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1, 0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,2,0,0,0,0,0,0,1,2,1,0,1,1,0,2,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,2,0,0,0,1,3,0,1,0,0,0,2,0,0,0,0,0,0,0,1,2,0,0,0,0,0, 3,3,0,0,1,1,2,0,0,1,2,1,0,1,1,1,0,1,1,0,0,2,1,1,0,1,0,0,1,1,1,0, 0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,2,2,1,0,0,0,0,1,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0, 2,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,3,0,0,1,1,0,0,0,2,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 1,1,0,1,2,0,1,2,0,0,1,1,0,2,0,1,0,0,1,0,0,0,0,1,0,0,0,2,0,0,0,0, 1,0,0,1,0,1,1,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,1,0,0,0,0,0,0,0,1,1,0,1,1,0,2,1,3,0,0,0,0,1,1,0,0,0,0,0,0,0,3, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,1,0,1,0,0,2,0,0,2,0,0,1,1,2,0,0,1,1,0,0,0,1,0,0,0,1,1,0,0,0, 1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, 1,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1,1,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,0,0,0,2,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,3,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,0,0, 1,0,0,0,0,0,0,0,0,1,0,0,0,0,2,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,1,1,0,0,2,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, ) TIS620ThaiModel = { 'char_to_order_map': TIS620CharToOrderMap, 'precedence_matrix': ThaiLangModel, 'typical_positive_ratio': 0.926386, 'keep_english_letter': False, 
    'charset_name': "TIS-620",
    'language': 'Thai',
}

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/langturkishmodel.py

# -*- coding: utf-8 -*-
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#   Özgür Baskın - Turkish Language Model
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA # 02110-1301 USA ######################### END LICENSE BLOCK ######################### # 255: Control characters that usually does not exist in any text # 254: Carriage/Return # 253: symbol (punctuation) that does not belong to word # 252: 0 - 9 # Character Mapping Table: Latin5_TurkishCharToOrderMap = ( 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, 255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, 255, 23, 37, 47, 39, 29, 52, 36, 45, 53, 60, 16, 49, 20, 46, 42, 48, 69, 44, 35, 31, 51, 38, 62, 65, 43, 56,255,255,255,255,255, 255, 1, 21, 28, 12, 2, 18, 27, 25, 3, 24, 10, 5, 13, 4, 15, 26, 64, 7, 8, 9, 14, 32, 57, 58, 11, 22,255,255,255,255,255, 180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165, 164,163,162,161,160,159,101,158,157,156,155,154,153,152,151,106, 150,149,148,147,146,145,144,100,143,142,141,140,139,138,137,136, 94, 80, 93,135,105,134,133, 63,132,131,130,129,128,127,126,125, 124,104, 73, 99, 79, 85,123, 54,122, 98, 92,121,120, 91,103,119, 68,118,117, 97,116,115, 50, 90,114,113,112,111, 55, 41, 40, 86, 89, 70, 59, 78, 71, 82, 88, 33, 77, 66, 84, 83,110, 75, 61, 96, 30, 67,109, 74, 87,102, 34, 95, 81,108, 76, 72, 17, 6, 19,107, ) TurkishLangModel = ( 3,2,3,3,3,1,3,3,3,3,3,3,3,3,2,1,1,3,3,1,3,3,0,3,3,3,3,3,0,3,1,3, 3,2,1,0,0,1,1,0,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0,0,2,2,0,0,1,0,0,1, 3,2,2,3,3,0,3,3,3,3,3,3,3,2,3,1,0,3,3,1,3,3,0,3,3,3,3,3,0,3,0,3, 3,1,1,0,1,0,1,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,2,2,0,0,0,1,0,1, 3,3,2,3,3,0,3,3,3,3,3,3,3,2,3,1,1,3,3,0,3,3,1,2,3,3,3,3,0,3,0,3, 3,1,1,0,0,0,1,0,0,0,0,1,1,0,1,2,1,0,0,0,1,0,0,0,0,2,0,0,0,0,0,1, 3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,1,3,3,2,0,3,2,1,2,2,1,3,3,0,0,0,2, 2,2,0,1,0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,1,0,0,1, 3,3,3,2,3,3,1,2,3,3,3,3,3,3,3,1,3,2,1,0,3,2,0,1,2,3,3,2,1,0,0,2, 2,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,0,0,0, 1,0,1,3,3,1,3,3,3,3,3,3,3,1,2,0,0,2,3,0,2,3,0,0,2,2,2,3,0,3,0,1, 2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,0,3,2,0,2,3,2,3,3,1,0,0,2, 3,2,0,0,1,0,0,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,0,2,0,0,1, 3,3,3,2,3,3,2,3,3,3,3,2,3,3,3,0,3,3,0,0,2,1,0,0,2,3,2,2,0,0,0,2, 2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,0,1,0,2,0,0,1, 3,3,3,2,3,3,3,3,3,3,3,2,3,3,3,0,3,2,0,1,3,2,1,1,3,2,3,2,1,0,0,2, 2,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0, 3,3,3,2,3,3,3,3,3,3,3,2,3,3,3,0,3,2,2,0,2,3,0,0,2,2,2,2,0,0,0,2, 3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,0,1,0,0,0, 3,3,3,3,3,3,3,2,2,2,2,3,2,3,3,0,3,3,1,1,2,2,0,0,2,2,3,2,0,0,1,3, 0,3,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1, 3,3,3,2,3,3,3,2,1,2,2,3,2,3,3,0,3,2,0,0,1,1,0,1,1,2,1,2,0,0,0,1, 0,3,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,0,0,0, 3,3,3,2,3,3,2,3,2,2,2,3,3,3,3,1,3,1,1,0,3,2,1,1,3,3,2,3,1,0,0,1, 1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,0,0,1, 3,2,2,3,3,0,3,3,3,3,3,3,3,2,2,1,0,3,3,1,3,3,0,1,3,3,2,3,0,3,0,3, 2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0, 2,2,2,3,3,0,3,3,3,3,3,3,3,3,3,0,0,3,2,0,3,3,0,3,2,3,3,3,0,3,1,3, 2,0,0,0,0,0,0,0,0,0,0,1,0,1,2,0,1,0,0,0,0,0,0,0,2,2,0,0,1,0,0,1, 
3,3,3,1,2,3,3,1,0,0,1,0,0,3,3,2,3,0,0,2,0,0,2,0,2,0,0,0,2,0,2,0, 0,3,1,0,1,0,0,0,2,2,1,0,1,1,2,1,2,2,2,0,2,1,1,0,0,0,2,0,0,0,0,0, 1,2,1,3,3,0,3,3,3,3,3,2,3,0,0,0,0,2,3,0,2,3,1,0,2,3,1,3,0,3,0,2, 3,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,1,3,3,2,2,3,2,2,0,1,2,3,0,1,2,1,0,1,0,0,0,1,0,2,2,0,0,0,1, 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0, 3,3,3,1,3,3,1,1,3,3,1,1,3,3,1,0,2,1,2,0,2,1,0,0,1,1,2,1,0,0,0,2, 2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,1,0,2,1,3,0,0,2,0,0,3,3,0,3,0,0,1,0,1,2,0,0,1,1,2,2,0,1,0, 0,1,2,1,1,0,1,0,1,1,1,1,1,0,1,1,1,2,2,1,2,0,1,0,0,0,0,0,0,1,0,0, 3,3,3,2,3,2,3,3,0,2,2,2,3,3,3,0,3,0,0,0,2,2,0,1,2,1,1,1,0,0,0,1, 0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0, 3,3,3,3,3,3,2,1,2,2,3,3,3,3,2,0,2,0,0,0,2,2,0,0,2,1,3,3,0,0,1,1, 1,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0, 1,1,2,3,3,0,3,3,3,3,3,3,2,2,0,2,0,2,3,2,3,2,2,2,2,2,2,2,1,3,2,3, 2,0,2,1,2,2,2,2,1,1,2,2,1,2,2,1,2,0,0,2,1,1,0,2,1,0,0,1,0,0,0,1, 2,3,3,1,1,1,0,1,1,1,2,3,2,1,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0, 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,2,2,2,3,2,3,2,2,1,3,3,3,0,2,1,2,0,2,1,0,0,1,1,1,1,1,0,0,1, 2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,2,0,1,0,0,0, 3,3,3,2,3,3,3,3,3,2,3,1,2,3,3,1,2,0,0,0,0,0,0,0,3,2,1,1,0,0,0,0, 2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0, 3,3,3,2,2,3,3,2,1,1,1,1,1,3,3,0,3,1,0,0,1,1,0,0,3,1,2,1,0,0,0,0, 0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0, 3,3,3,2,2,3,2,2,2,3,2,1,1,3,3,0,3,0,0,0,0,1,0,0,3,1,1,2,0,0,0,1, 1,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, 1,1,1,3,3,0,3,3,3,3,3,2,2,2,1,2,0,2,1,2,2,1,1,0,1,2,2,2,2,2,2,2, 0,0,2,1,2,1,2,1,0,1,1,3,1,2,1,1,2,0,0,2,0,1,0,1,0,1,0,0,0,1,0,1, 3,3,3,1,3,3,3,0,1,1,0,2,2,3,1,0,3,0,0,0,1,0,0,0,1,0,0,1,0,1,0,0, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,2,0,0,2,2,1,0,0,1,0,0,3,3,1,3,0,0,1,1,0,2,0,3,0,0,0,2,0,1,1, 0,1,2,0,1,2,2,0,2,2,2,2,1,0,2,1,1,0,2,0,2,1,2,0,0,0,0,0,0,0,0,0, 3,3,3,1,3,2,3,2,0,2,2,2,1,3,2,0,2,1,2,0,1,2,0,0,1,0,2,2,0,0,0,2, 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,0,0,0, 3,3,3,0,3,3,1,1,2,3,1,0,3,2,3,0,3,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0, 1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,3,3,0,3,3,2,3,3,2,2,0,0,0,0,1,2,0,1,3,0,0,0,3,1,1,0,3,0,2, 2,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,1,2,2,1,0,3,1,1,1,1,3,3,2,3,0,0,1,0,1,2,0,2,2,0,2,2,0,2,1, 0,2,2,1,1,1,1,0,2,1,1,0,1,1,1,1,2,1,2,1,2,0,1,0,1,0,0,0,0,0,0,0, 3,3,3,0,1,1,3,0,0,1,1,0,0,2,2,0,3,0,0,1,1,0,1,0,0,0,0,0,2,0,0,0, 0,3,1,0,1,0,1,0,2,0,0,1,0,1,0,1,1,1,2,1,1,0,2,0,0,0,0,0,0,0,0,0, 3,3,3,0,2,0,2,0,1,1,1,0,0,3,3,0,2,0,0,1,0,0,2,1,1,0,1,0,1,0,1,0, 0,2,0,1,2,0,2,0,2,1,1,0,1,0,2,1,1,0,2,1,1,0,1,0,0,0,1,1,0,0,0,0, 3,2,3,0,1,0,0,0,0,0,0,0,0,1,2,0,1,0,0,1,0,0,1,0,0,0,0,0,2,0,0,0, 0,0,1,1,0,0,1,0,1,0,0,1,0,0,0,2,1,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0, 3,3,3,0,0,2,3,0,0,1,0,1,0,2,3,2,3,0,0,1,3,0,2,1,0,0,0,0,2,0,1,0, 0,2,1,0,0,1,1,0,2,1,0,0,1,0,0,1,1,0,1,1,2,0,1,0,0,0,0,1,0,0,0,0, 3,2,2,0,0,1,1,0,0,0,0,0,0,3,1,1,1,0,0,0,0,0,1,0,0,0,0,0,2,0,1,0, 0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0, 0,0,0,3,3,0,2,3,2,2,1,2,2,1,1,2,0,1,3,2,2,2,0,0,2,2,0,0,0,1,2,1, 3,0,2,1,1,0,1,1,1,0,1,2,2,2,1,1,2,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0, 0,1,1,2,3,0,3,3,3,2,2,2,2,1,0,1,0,1,0,1,2,2,0,0,2,2,1,3,1,1,2,1, 0,0,1,1,2,0,1,1,0,0,1,2,0,2,1,1,2,0,0,1,0,0,0,1,0,1,0,1,0,0,0,0, 
3,3,2,0,0,3,1,0,0,0,0,0,0,3,2,1,2,0,0,1,0,0,2,0,0,0,0,0,2,0,1,0,
0,2,1,1,0,0,1,0,1,2,0,0,1,1,0,0,2,1,1,1,1,0,2,0,0,0,0,0,0,0,0,0,
3,3,2,0,0,1,0,0,0,0,1,0,0,3,3,2,2,0,0,1,0,0,2,0,1,0,0,0,2,0,1,0,
0,0,1,1,0,0,2,0,2,1,0,0,1,1,2,1,2,0,2,1,2,1,1,1,0,0,1,1,0,0,0,0,
3,3,2,0,0,2,2,0,0,0,1,1,0,2,2,1,3,1,0,1,0,1,2,0,0,0,0,0,1,0,1,0,
0,1,1,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,2,0,0,0,1,0,0,1,0,0,2,3,1,2,0,0,1,0,0,2,0,0,0,1,0,2,0,2,0,
0,1,1,2,2,1,2,0,2,1,1,0,0,1,1,0,1,1,1,1,2,1,1,0,0,0,0,0,0,0,0,0,
3,3,3,0,2,1,2,1,0,0,1,1,0,3,3,1,2,0,0,1,0,0,2,0,2,0,1,1,2,0,0,0,
0,0,1,1,1,1,2,0,1,1,0,1,1,1,1,0,0,0,1,1,1,0,1,0,0,0,1,0,0,0,0,0,
3,3,3,0,2,2,3,2,0,0,1,0,0,2,3,1,0,0,0,0,0,0,2,0,2,0,0,0,2,0,0,0,
0,1,1,0,0,0,1,0,0,1,0,1,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,
3,2,3,0,0,0,0,0,0,0,1,0,0,2,2,2,2,0,0,1,0,0,2,0,0,0,0,0,2,0,1,0,
0,0,2,1,1,0,1,0,2,1,1,0,0,1,1,2,1,0,2,0,2,0,1,0,0,0,2,0,0,0,0,0,
0,0,0,2,2,0,2,1,1,1,1,2,2,0,0,1,0,1,0,0,1,3,0,0,0,0,1,0,0,2,1,0,
0,0,1,0,1,0,0,0,0,0,2,1,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,
2,0,0,2,3,0,2,3,1,2,2,0,2,0,0,2,0,2,1,1,1,2,1,0,0,1,2,1,1,2,1,0,
1,0,2,0,1,0,1,1,0,0,2,2,1,2,1,1,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,0,2,1,2,0,0,0,1,0,0,3,2,0,1,0,0,1,0,0,2,0,0,0,1,2,1,0,1,0,
0,0,0,0,1,0,1,0,0,1,0,0,0,0,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,
0,0,0,2,2,0,2,2,1,1,0,1,1,1,1,1,0,0,1,2,1,1,1,0,1,0,0,0,1,1,1,1,
0,0,2,1,0,1,1,1,0,1,1,2,1,2,1,1,2,0,1,1,2,1,0,2,0,0,0,0,0,0,0,0,
3,2,2,0,0,2,0,0,0,0,0,0,0,2,2,0,2,0,0,1,0,0,2,0,0,0,0,0,2,0,0,0,
0,2,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,
0,0,0,3,2,0,2,2,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,
2,0,1,0,1,0,1,1,0,0,1,2,0,1,0,1,1,0,0,1,0,1,0,2,0,0,0,0,0,0,0,0,
2,2,2,0,1,1,0,0,0,1,0,0,0,1,2,0,1,0,0,1,0,0,1,0,0,0,0,1,2,0,1,0,
0,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,2,1,0,1,1,1,0,0,0,0,1,2,0,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
1,1,2,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,1,
0,0,1,2,2,0,2,1,2,1,1,2,2,0,0,0,0,1,0,0,1,1,0,0,2,0,0,0,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,
2,2,2,0,0,0,1,0,0,0,0,0,0,2,2,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,
0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,1,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,0,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,1,0,0,0,0,0,0,0,0,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
)

Latin5TurkishModel = {
    'char_to_order_map': Latin5_TurkishCharToOrderMap,
    'precedence_matrix': TurkishLangModel,
    'typical_positive_ratio': 0.970290,
    'keep_english_letter': True,
    'charset_name': "ISO-8859-9",
    'language': 'Turkish',
}
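# ---------------------------------------------------------------------
# Illustrative sketch (editor's addition, not chardet code): decoding
# the reserved values used throughout these char-to-order maps, e.g.
# Latin5_TurkishCharToOrderMap above.  Per the comments in these
# modules, 255 marks control characters, 254 carriage return / line
# feed, 253 symbols and punctuation, and 252 digits; smaller values are
# letter frequency ranks.  The helper name describe_order is
# hypothetical.
def describe_order(order):
    special = {255: 'control', 254: 'CR/LF', 253: 'symbol', 252: 'digit'}
    return special.get(order, 'letter with frequency rank %d' % order)

# Example: byte 0x41 ('A') has order 23 in the Latin-5 Turkish map, so
# describe_order(Latin5_TurkishCharToOrderMap[0x41]) returns
# 'letter with frequency rank 23'.
# ---------------------------------------------------------------------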
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/latin1prober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#   Shy Shalom - original C code
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .charsetprober import CharSetProber
from .enums import ProbingState

FREQ_CAT_NUM = 4

UDF = 0  # undefined
OTH = 1  # other
ASC = 2  # ascii capital letter
ASS = 3  # ascii small letter
ACV = 4  # accent capital vowel
ACO = 5  # accent capital other
ASV = 6  # accent small vowel
ASO = 7  # accent small other
CLASS_NUM = 8  # total classes

Latin1_CharToClass = (
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 00 - 07
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 08 - 0F
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 10 - 17
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 18 - 1F
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 20 - 27
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 28 - 2F
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 30 - 37
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 38 - 3F
    OTH, ASC, ASC, ASC, ASC, ASC, ASC, ASC,   # 40 - 47
    ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC,   # 48 - 4F
    ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC,   # 50 - 57
    ASC, ASC, ASC, OTH, OTH, OTH, OTH, OTH,   # 58 - 5F
    OTH, ASS, ASS, ASS, ASS, ASS, ASS, ASS,   # 60 - 67
    ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS,   # 68 - 6F
    ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS,   # 70 - 77
    ASS, ASS, ASS, OTH, OTH, OTH, OTH, OTH,   # 78 - 7F
    OTH, UDF, OTH, ASO, OTH, OTH, OTH, OTH,   # 80 - 87
    OTH, OTH, ACO, OTH, ACO, UDF, ACO, UDF,   # 88 - 8F
    UDF, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # 90 - 97
    OTH, OTH, ASO, OTH, ASO, UDF, ASO, ACO,   # 98 - 9F
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # A0 - A7
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # A8 - AF
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # B0 - B7
    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,   # B8 - BF
    ACV, ACV, ACV, ACV, ACV, ACV, ACO, ACO,   # C0 - C7
    ACV, ACV, ACV, ACV, ACV, ACV, ACV, ACV,   # C8 - CF
    ACO, ACO, ACV, ACV, ACV, ACV, ACV, OTH,   # D0 - D7
    ACV, ACV, ACV, ACV, ACV, ACO, ACO, ACO,   # D8 - DF
    ASV, ASV, ASV, ASV, ASV, ASV, ASO, ASO,   # E0 - E7
    ASV, ASV, ASV, ASV, ASV, ASV, ASV, ASV,   # E8 - EF
    ASO, ASO, ASV, ASV, ASV, ASV, ASV, OTH,   # F0 - F7
    ASV, ASV, ASV, ASV, ASV, ASO, ASO, ASO,   # F8 - FF
)

# 0 : illegal
# 1 : very unlikely
# 2 : normal
# 3 : very likely
Latin1ClassModel = (
# UDF OTH ASC ASS ACV ACO ASV ASO
    0,  0,  0,  0,  0,  0,  0,  0,  # UDF
    0,  3,  3,  3,  3,  3,  3,  3,  # OTH
    0,  3,  3,  3,  3,  3,  3,  3,  # ASC
    0,  3,  3,  3,  1,  1,  3,  3,  # ASS
    0,  3,  3,  3,  1,  2,  1,  2,  # ACV
    0,  3,  3,  3,  3,  3,  3,  3,  # ACO
    0,  3,  1,  3,  1,  1,  1,  3,  # ASV
    0,  3,  1,  3,  1,  1,  3,  3,  # ASO
)


class Latin1Prober(CharSetProber):
    def __init__(self):
        super(Latin1Prober, self).__init__()
        self._last_char_class = None
        self._freq_counter = None
        self.reset()

    def reset(self):
        self._last_char_class = OTH
        self._freq_counter = [0] * FREQ_CAT_NUM
        CharSetProber.reset(self)

    @property
    def charset_name(self):
        return "ISO-8859-1"

    @property
    def language(self):
        return ""

    def feed(self, byte_str):
        byte_str = self.filter_with_english_letters(byte_str)
        for c in byte_str:
            char_class = Latin1_CharToClass[c]
            freq = Latin1ClassModel[(self._last_char_class * CLASS_NUM)
                                    + char_class]
            if freq == 0:
                self._state = ProbingState.NOT_ME
                break
            self._freq_counter[freq] += 1
            self._last_char_class = char_class

        return self.state

    def get_confidence(self):
        if self.state == ProbingState.NOT_ME:
            return 0.01

        total = sum(self._freq_counter)
        if total < 0.01:
            confidence = 0.0
        else:
            confidence = ((self._freq_counter[3] - self._freq_counter[1] * 20.0)
                          / total)
        if confidence < 0.0:
            confidence = 0.0
        # lower the confidence of latin1 so that other more accurate
        # detector can take priority.
        confidence = confidence * 0.73
        return confidence

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/mbcharsetprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#   Shy Shalom - original C code
#   Proofpoint, Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .charsetprober import CharSetProber
from .enums import ProbingState, MachineState


class MultiByteCharSetProber(CharSetProber):
    """
    MultiByteCharSetProber
    """

    def __init__(self, lang_filter=None):
        super(MultiByteCharSetProber, self).__init__(lang_filter=lang_filter)
        self.distribution_analyzer = None
        self.coding_sm = None
        self._last_char = [0, 0]

    def reset(self):
        super(MultiByteCharSetProber, self).reset()
        if self.coding_sm:
            self.coding_sm.reset()
        if self.distribution_analyzer:
            self.distribution_analyzer.reset()
        self._last_char = [0, 0]

    @property
    def charset_name(self):
        raise NotImplementedError

    @property
    def language(self):
        raise NotImplementedError

    def feed(self, byte_str):
        for i in range(len(byte_str)):
            coding_state = self.coding_sm.next_state(byte_str[i])
            if coding_state == MachineState.ERROR:
                self.logger.debug('%s %s prober hit error at byte %s',
                                  self.charset_name, self.language, i)
                self._state = ProbingState.NOT_ME
                break
            elif coding_state == MachineState.ITS_ME:
                self._state = ProbingState.FOUND_IT
                break
            elif coding_state == MachineState.START:
                char_len = self.coding_sm.get_current_charlen()
                if i == 0:
                    self._last_char[1] = byte_str[0]
                    self.distribution_analyzer.feed(self._last_char, char_len)
                else:
                    self.distribution_analyzer.feed(byte_str[i - 1:i + 1],
                                                    char_len)

        self._last_char[0] = byte_str[-1]

        if self.state == ProbingState.DETECTING:
            if (self.distribution_analyzer.got_enough_data() and
                    (self.get_confidence() > self.SHORTCUT_THRESHOLD)):
                self._state = ProbingState.FOUND_IT

        return self.state

    def get_confidence(self):
        return self.distribution_analyzer.get_confidence()
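# ---------------------------------------------------------------------
# Illustrative sketch (editor's addition, not part of this module): a
# concrete subclass only has to plug a coding state machine and a
# distribution analyzer into the feed() loop above.  The shape mirrors
# chardet's real SJIS prober, but DemoSJISLikeProber is a hypothetical
# example and omits the extra context analysis the real prober performs.
from .codingstatemachine import CodingStateMachine
from .chardistribution import SJISDistributionAnalysis
from .mbcssm import SJIS_SM_MODEL


class DemoSJISLikeProber(MultiByteCharSetProber):
    def __init__(self):
        super(DemoSJISLikeProber, self).__init__()
        self.coding_sm = CodingStateMachine(SJIS_SM_MODEL)       # byte-level validity
        self.distribution_analyzer = SJISDistributionAnalysis()  # frequency statistics
        self.reset()

    @property
    def charset_name(self):
        return "SHIFT_JIS"

    @property
    def language(self):
        return "Japanese"
# ---------------------------------------------------------------------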
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/mbcsgroupprober.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#   Shy Shalom - original C code
#   Proofpoint, Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .charsetgroupprober import CharSetGroupProber
from .utf8prober import UTF8Prober
from .sjisprober import SJISProber
from .eucjpprober import EUCJPProber
from .gb2312prober import GB2312Prober
from .euckrprober import EUCKRProber
from .cp949prober import CP949Prober
from .big5prober import Big5Prober
from .euctwprober import EUCTWProber


class MBCSGroupProber(CharSetGroupProber):
    def __init__(self, lang_filter=None):
        super(MBCSGroupProber, self).__init__(lang_filter=lang_filter)
        self.probers = [
            UTF8Prober(),
            SJISProber(),
            EUCJPProber(),
            GB2312Prober(),
            EUCKRProber(),
            CP949Prober(),
            Big5Prober(),
            EUCTWProber()
        ]
        self.reset()
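# ---------------------------------------------------------------------
# Illustrative usage sketch (editor's addition, not part of chardet):
# UniversalDetector normally drives MBCSGroupProber, but the class can
# be exercised directly.  The helper name _demo_detect is hypothetical.
def _demo_detect(byte_str):
    """Feed raw bytes to a fresh group prober and return its best guess
    as a (charset name, confidence) pair."""
    prober = MBCSGroupProber()
    prober.feed(byte_str)
    return prober.charset_name, prober.get_confidence()

# e.g. _demo_detect(u'漢字テスト'.encode('euc_jp')) would typically
# report ('EUC-JP', <confidence>), given enough sample bytes.
# ---------------------------------------------------------------------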
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/mbcssm.py

######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .enums import MachineState

# BIG5

BIG5_CLS = (
    1,1,1,1,1,1,1,1,  # 00 - 07    #allow 0x00 as legal value
    1,1,1,1,1,1,0,0,  # 08 - 0f
    1,1,1,1,1,1,1,1,  # 10 - 17
    1,1,1,0,1,1,1,1,  # 18 - 1f
    1,1,1,1,1,1,1,1,  # 20 - 27
    1,1,1,1,1,1,1,1,  # 28 - 2f
    1,1,1,1,1,1,1,1,  # 30 - 37
    1,1,1,1,1,1,1,1,  # 38 - 3f
    2,2,2,2,2,2,2,2,  # 40 - 47
    2,2,2,2,2,2,2,2,  # 48 - 4f
    2,2,2,2,2,2,2,2,  # 50 - 57
    2,2,2,2,2,2,2,2,  # 58 - 5f
    2,2,2,2,2,2,2,2,  # 60 - 67
    2,2,2,2,2,2,2,2,  # 68 - 6f
    2,2,2,2,2,2,2,2,  # 70 - 77
    2,2,2,2,2,2,2,1,  # 78 - 7f
    4,4,4,4,4,4,4,4,  # 80 - 87
    4,4,4,4,4,4,4,4,  # 88 - 8f
    4,4,4,4,4,4,4,4,  # 90 - 97
    4,4,4,4,4,4,4,4,  # 98 - 9f
    4,3,3,3,3,3,3,3,  # a0 - a7
    3,3,3,3,3,3,3,3,  # a8 - af
    3,3,3,3,3,3,3,3,  # b0 - b7
    3,3,3,3,3,3,3,3,  # b8 - bf
    3,3,3,3,3,3,3,3,  # c0 - c7
    3,3,3,3,3,3,3,3,  # c8 - cf
    3,3,3,3,3,3,3,3,  # d0 - d7
    3,3,3,3,3,3,3,3,  # d8 - df
    3,3,3,3,3,3,3,3,  # e0 - e7
    3,3,3,3,3,3,3,3,  # e8 - ef
    3,3,3,3,3,3,3,3,  # f0 - f7
    3,3,3,3,3,3,3,0   # f8 - ff
)

BIG5_ST = (
    MachineState.ERROR,MachineState.START,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,#08-0f
    MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START#10-17
)

BIG5_CHAR_LEN_TABLE = (0, 1, 1, 2, 0)

BIG5_SM_MODEL = {'class_table': BIG5_CLS,
                 'class_factor': 5,
                 'state_table': BIG5_ST,
                 'char_len_table': BIG5_CHAR_LEN_TABLE,
                 'name': 'Big5'}

# CP949

CP949_CLS = (
    1,1,1,1,1,1,1,1, 1,1,1,1,1,1,0,0,  # 00 - 0f
    1,1,1,1,1,1,1,1, 1,1,1,0,1,1,1,1,  # 10 - 1f
    1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,  # 20 - 2f
    1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,  # 30 - 3f
    1,4,4,4,4,4,4,4, 4,4,4,4,4,4,4,4,  # 40 - 4f
    4,4,5,5,5,5,5,5, 5,5,5,1,1,1,1,1,  # 50 - 5f
    1,5,5,5,5,5,5,5, 5,5,5,5,5,5,5,5,  # 60 - 6f
    5,5,5,5,5,5,5,5, 5,5,5,1,1,1,1,1,  # 70 - 7f
    0,6,6,6,6,6,6,6, 6,6,6,6,6,6,6,6,  # 80 - 8f
    6,6,6,6,6,6,6,6,
    6,6,6,6,6,6,6,6, 6,6,6,6,6,6,6,6,  # 90 - 9f
    6,7,7,7,7,7,7,7, 7,7,7,7,7,8,8,8,  # a0 - af
    7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7,  # b0 - bf
    7,7,7,7,7,7,9,2, 2,3,2,2,2,2,2,2,  # c0 - cf
    2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2,  # d0 - df
    2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2,  # e0 - ef
    2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,0,  # f0 - ff
)

CP949_ST = (
#cls=    0      1      2      3      4      5      6      7      8      9   # previous state =
    MachineState.ERROR,MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START, 4, 5,MachineState.ERROR, 6,  # MachineState.START
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,  # MachineState.ERROR
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,  # MachineState.ITS_ME
    MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,  # 3
    MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,  # 4
    MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,  # 5
    MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,  # 6
)

CP949_CHAR_LEN_TABLE = (0, 1, 2, 0, 1, 1, 2, 2, 0, 2)

CP949_SM_MODEL = {'class_table': CP949_CLS,
                  'class_factor': 10,
                  'state_table': CP949_ST,
                  'char_len_table': CP949_CHAR_LEN_TABLE,
                  'name': 'CP949'}

# EUC-JP

EUCJP_CLS = (
    4,4,4,4,4,4,4,4,  # 00 - 07
    4,4,4,4,4,4,5,5,  # 08 - 0f
    4,4,4,4,4,4,4,4,  # 10 - 17
    4,4,4,5,4,4,4,4,  # 18 - 1f
    4,4,4,4,4,4,4,4,  # 20 - 27
    4,4,4,4,4,4,4,4,  # 28 - 2f
    4,4,4,4,4,4,4,4,  # 30 - 37
    4,4,4,4,4,4,4,4,  # 38 - 3f
    4,4,4,4,4,4,4,4,  # 40 - 47
    4,4,4,4,4,4,4,4,  # 48 - 4f
    4,4,4,4,4,4,4,4,  # 50 - 57
    4,4,4,4,4,4,4,4,  # 58 - 5f
    4,4,4,4,4,4,4,4,  # 60 - 67
    4,4,4,4,4,4,4,4,  # 68 - 6f
    4,4,4,4,4,4,4,4,  # 70 - 77
    4,4,4,4,4,4,4,4,  # 78 - 7f
    5,5,5,5,5,5,5,5,  # 80 - 87
    5,5,5,5,5,5,1,3,  # 88 - 8f
    5,5,5,5,5,5,5,5,  # 90 - 97
    5,5,5,5,5,5,5,5,  # 98 - 9f
    5,2,2,2,2,2,2,2,  # a0 - a7
    2,2,2,2,2,2,2,2,  # a8 - af
    2,2,2,2,2,2,2,2,  # b0 - b7
    2,2,2,2,2,2,2,2,  # b8 - bf
    2,2,2,2,2,2,2,2,  # c0 - c7
    2,2,2,2,2,2,2,2,  # c8 - cf
    2,2,2,2,2,2,2,2,  # d0 - d7
    2,2,2,2,2,2,2,2,  # d8 - df
    0,0,0,0,0,0,0,0,  # e0 - e7
    0,0,0,0,0,0,0,0,  # e8 - ef
    0,0,0,0,0,0,0,0,  # f0 - f7
    0,0,0,0,0,0,0,5   # f8 - ff
)

EUCJP_ST = (
     3, 4, 3, 5,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.START,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#10-17
    MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 3,MachineState.ERROR,#18-1f
     3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START #20-27
)
EUCJP_CHAR_LEN_TABLE = (2, 2, 2, 3, 1, 0)

EUCJP_SM_MODEL = {'class_table': EUCJP_CLS,
                  'class_factor': 6,
                  'state_table': EUCJP_ST,
                  'char_len_table': EUCJP_CHAR_LEN_TABLE,
                  'name': 'EUC-JP'}

# EUC-KR

EUCKR_CLS = (
    1,1,1,1,1,1,1,1,  # 00 - 07
    1,1,1,1,1,1,0,0,  # 08 - 0f
    1,1,1,1,1,1,1,1,  # 10 - 17
    1,1,1,0,1,1,1,1,  # 18 - 1f
    1,1,1,1,1,1,1,1,  # 20 - 27
    1,1,1,1,1,1,1,1,  # 28 - 2f
    1,1,1,1,1,1,1,1,  # 30 - 37
    1,1,1,1,1,1,1,1,  # 38 - 3f
    1,1,1,1,1,1,1,1,  # 40 - 47
    1,1,1,1,1,1,1,1,  # 48 - 4f
    1,1,1,1,1,1,1,1,  # 50 - 57
    1,1,1,1,1,1,1,1,  # 58 - 5f
    1,1,1,1,1,1,1,1,  # 60 - 67
    1,1,1,1,1,1,1,1,  # 68 - 6f
    1,1,1,1,1,1,1,1,  # 70 - 77
    1,1,1,1,1,1,1,1,  # 78 - 7f
    0,0,0,0,0,0,0,0,  # 80 - 87
    0,0,0,0,0,0,0,0,  # 88 - 8f
    0,0,0,0,0,0,0,0,  # 90 - 97
    0,0,0,0,0,0,0,0,  # 98 - 9f
    0,2,2,2,2,2,2,2,  # a0 - a7
    2,2,2,2,2,3,3,3,  # a8 - af
    2,2,2,2,2,2,2,2,  # b0 - b7
    2,2,2,2,2,2,2,2,  # b8 - bf
    2,2,2,2,2,2,2,2,  # c0 - c7
    2,3,2,2,2,2,2,2,  # c8 - cf
    2,2,2,2,2,2,2,2,  # d0 - d7
    2,2,2,2,2,2,2,2,  # d8 - df
    2,2,2,2,2,2,2,2,  # e0 - e7
    2,2,2,2,2,2,2,2,  # e8 - ef
    2,2,2,2,2,2,2,2,  # f0 - f7
    2,2,2,2,2,2,2,0   # f8 - ff
)

EUCKR_ST = (
    MachineState.ERROR,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START #08-0f
)

EUCKR_CHAR_LEN_TABLE = (0, 1, 2, 0)

EUCKR_SM_MODEL = {'class_table': EUCKR_CLS,
                  'class_factor': 4,
                  'state_table': EUCKR_ST,
                  'char_len_table': EUCKR_CHAR_LEN_TABLE,
                  'name': 'EUC-KR'}

# EUC-TW

EUCTW_CLS = (
    2,2,2,2,2,2,2,2,  # 00 - 07
    2,2,2,2,2,2,0,0,  # 08 - 0f
    2,2,2,2,2,2,2,2,  # 10 - 17
    2,2,2,0,2,2,2,2,  # 18 - 1f
    2,2,2,2,2,2,2,2,  # 20 - 27
    2,2,2,2,2,2,2,2,  # 28 - 2f
    2,2,2,2,2,2,2,2,  # 30 - 37
    2,2,2,2,2,2,2,2,  # 38 - 3f
    2,2,2,2,2,2,2,2,  # 40 - 47
    2,2,2,2,2,2,2,2,  # 48 - 4f
    2,2,2,2,2,2,2,2,  # 50 - 57
    2,2,2,2,2,2,2,2,  # 58 - 5f
    2,2,2,2,2,2,2,2,  # 60 - 67
    2,2,2,2,2,2,2,2,  # 68 - 6f
    2,2,2,2,2,2,2,2,  # 70 - 77
    2,2,2,2,2,2,2,2,  # 78 - 7f
    0,0,0,0,0,0,0,0,  # 80 - 87
    0,0,0,0,0,0,6,0,  # 88 - 8f
    0,0,0,0,0,0,0,0,  # 90 - 97
    0,0,0,0,0,0,0,0,  # 98 - 9f
    0,3,4,4,4,4,4,4,  # a0 - a7
    5,5,1,1,1,1,1,1,  # a8 - af
    1,1,1,1,1,1,1,1,  # b0 - b7
    1,1,1,1,1,1,1,1,  # b8 - bf
    1,1,3,1,3,3,3,3,  # c0 - c7
    3,3,3,3,3,3,3,3,  # c8 - cf
    3,3,3,3,3,3,3,3,  # d0 - d7
    3,3,3,3,3,3,3,3,  # d8 - df
    3,3,3,3,3,3,3,3,  # e0 - e7
    3,3,3,3,3,3,3,3,  # e8 - ef
    3,3,3,3,3,3,3,3,  # f0 - f7
    3,3,3,3,3,3,3,0   # f8 - ff
)

EUCTW_ST = (
    MachineState.ERROR,MachineState.ERROR,MachineState.START, 3, 3, 3, 4,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,MachineState.ERROR,#10-17
    MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f
     5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.START,MachineState.START,#20-27
    MachineState.START,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START #28-2f
)

EUCTW_CHAR_LEN_TABLE = (0, 0, 1, 2, 2, 2, 3)

EUCTW_SM_MODEL = {'class_table': EUCTW_CLS,
                  'class_factor': 7,
                  'state_table': EUCTW_ST,
                  'char_len_table': EUCTW_CHAR_LEN_TABLE,
                  'name': 'x-euc-tw'}

# GB2312

GB2312_CLS = (
    1,1,1,1,1,1,1,1,  # 00 - 07
    1,1,1,1,1,1,0,0,  # 08 - 0f
    1,1,1,1,1,1,1,1,  # 10 - 17
    1,1,1,0,1,1,1,1,  # 18 - 1f
    1,1,1,1,1,1,1,1,  # 20 - 27
    1,1,1,1,1,1,1,1,  # 28 - 2f
    3,3,3,3,3,3,3,3,  # 30 - 37
    3,3,1,1,1,1,1,1,  # 38 - 3f
    2,2,2,2,2,2,2,2,  # 40 - 47
    2,2,2,2,2,2,2,2,  # 48 - 4f
    2,2,2,2,2,2,2,2,  # 50 - 57
    2,2,2,2,2,2,2,2,  # 58 - 5f
    2,2,2,2,2,2,2,2,  # 60 - 67
    2,2,2,2,2,2,2,2,  # 68 - 6f
    2,2,2,2,2,2,2,2,  # 70 - 77
    2,2,2,2,2,2,2,4,  # 78 - 7f
    5,6,6,6,6,6,6,6,  # 80 - 87
    6,6,6,6,6,6,6,6,  # 88 - 8f
    6,6,6,6,6,6,6,6,  # 90 - 97
    6,6,6,6,6,6,6,6,  # 98 - 9f
    6,6,6,6,6,6,6,6,  # a0 - a7
    6,6,6,6,6,6,6,6,  # a8 - af
    6,6,6,6,6,6,6,6,  # b0 - b7
    6,6,6,6,6,6,6,6,  # b8 - bf
    6,6,6,6,6,6,6,6,  # c0 - c7
    6,6,6,6,6,6,6,6,  # c8 - cf
    6,6,6,6,6,6,6,6,  # d0 - d7
    6,6,6,6,6,6,6,6,  # d8 - df
    6,6,6,6,6,6,6,6,  # e0 - e7
    6,6,6,6,6,6,6,6,  # e8 - ef
    6,6,6,6,6,6,6,6,  # f0 - f7
    6,6,6,6,6,6,6,0   # f8 - ff
)

GB2312_ST = (
    MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, 3,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,#10-17
     4,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f
    MachineState.ERROR,MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,#20-27
    MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START #28-2f
)

# To be accurate, the length of class 6 can be either 2 or 4.
# But it is not necessary to discriminate between the two since
# it is used for frequency analysis only, and we are validating
# each code range there as well. So it is safe to set it to be
# 2 here.
GB2312_CHAR_LEN_TABLE = (0, 1, 1, 1, 1, 1, 2)

GB2312_SM_MODEL = {'class_table': GB2312_CLS,
                   'class_factor': 7,
                   'state_table': GB2312_ST,
                   'char_len_table': GB2312_CHAR_LEN_TABLE,
                   'name': 'GB2312'}

# Shift_JIS

SJIS_CLS = (
    1,1,1,1,1,1,1,1,  # 00 - 07
    1,1,1,1,1,1,0,0,  # 08 - 0f
    1,1,1,1,1,1,1,1,  # 10 - 17
    1,1,1,0,1,1,1,1,  # 18 - 1f
    1,1,1,1,1,1,1,1,  # 20 - 27
    1,1,1,1,1,1,1,1,  # 28 - 2f
    1,1,1,1,1,1,1,1,  # 30 - 37
    1,1,1,1,1,1,1,1,  # 38 - 3f
    2,2,2,2,2,2,2,2,  # 40 - 47
    2,2,2,2,2,2,2,2,  # 48 - 4f
    2,2,2,2,2,2,2,2,  # 50 - 57
    2,2,2,2,2,2,2,2,  # 58 - 5f
    2,2,2,2,2,2,2,2,  # 60 - 67
    2,2,2,2,2,2,2,2,  # 68 - 6f
    2,2,2,2,2,2,2,2,  # 70 - 77
    2,2,2,2,2,2,2,1,  # 78 - 7f
    3,3,3,3,3,2,2,3,  # 80 - 87
    3,3,3,3,3,3,3,3,  # 88 - 8f
    3,3,3,3,3,3,3,3,  # 90 - 97
    3,3,3,3,3,3,3,3,  # 98 - 9f
    # 0xa0 is illegal in Shift_JIS encoding, but some pages do contain
    # such bytes. We need to be more forgiving of errors.
    2,2,2,2,2,2,2,2,  # a0 - a7
    2,2,2,2,2,2,2,2,  # a8 - af
    2,2,2,2,2,2,2,2,  # b0 - b7
    2,2,2,2,2,2,2,2,  # b8 - bf
    2,2,2,2,2,2,2,2,  # c0 - c7
    2,2,2,2,2,2,2,2,  # c8 - cf
    2,2,2,2,2,2,2,2,  # d0 - d7
    2,2,2,2,2,2,2,2,  # d8 - df
    3,3,3,3,3,3,3,3,  # e0 - e7
    3,3,3,3,3,4,4,4,  # e8 - ef
    3,3,3,3,3,3,3,3,  # f0 - f7
    3,3,3,3,3,0,0,0   # f8 - ff
)

SJIS_ST = (
    MachineState.ERROR,MachineState.START,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START #10-17
)

SJIS_CHAR_LEN_TABLE = (0, 1, 1, 2, 0, 0)

SJIS_SM_MODEL = {'class_table': SJIS_CLS,
                 'class_factor': 6,
                 'state_table': SJIS_ST,
                 'char_len_table': SJIS_CHAR_LEN_TABLE,
                 'name': 'Shift_JIS'}

# UCS2-BE

UCS2BE_CLS = (
    0,0,0,0,0,0,0,0,  # 00 - 07
    0,0,1,0,0,2,0,0,  # 08 - 0f
    0,0,0,0,0,0,0,0,  # 10 - 17
    0,0,0,3,0,0,0,0,  # 18 - 1f
    0,0,0,0,0,0,0,0,  # 20 - 27
    0,3,3,3,3,3,0,0,  # 28 - 2f
    0,0,0,0,0,0,0,0,  # 30 - 37
    0,0,0,0,0,0,0,0,  # 38 - 3f
    0,0,0,0,0,0,0,0,  # 40 - 47
    0,0,0,0,0,0,0,0,  # 48 - 4f
    0,0,0,0,0,0,0,0,  # 50 - 57
    0,0,0,0,0,0,0,0,  # 58 - 5f
    0,0,0,0,0,0,0,0,  # 60 - 67
    0,0,0,0,0,0,0,0,  # 68 - 6f
    0,0,0,0,0,0,0,0,  # 70 - 77
    0,0,0,0,0,0,0,0,  # 78 - 7f
    0,0,0,0,0,0,0,0,  # 80 - 87
    0,0,0,0,0,0,0,0,  # 88 - 8f
    0,0,0,0,0,0,0,0,  # 90 - 97
    0,0,0,0,0,0,0,0,  # 98 - 9f
    0,0,0,0,0,0,0,0,  # a0 - a7
    0,0,0,0,0,0,0,0,  # a8 - af
    0,0,0,0,0,0,0,0,  # b0 - b7
    0,0,0,0,0,0,0,0,  # b8 - bf
    0,0,0,0,0,0,0,0,  # c0 - c7
    0,0,0,0,0,0,0,0,  # c8 - cf
    0,0,0,0,0,0,0,0,  # d0 - d7
    0,0,0,0,0,0,0,0,  # d8 - df
    0,0,0,0,0,0,0,0,  # e0 - e7
    0,0,0,0,0,0,0,0,  # e8 - ef
    0,0,0,0,0,0,0,0,  # f0 - f7
    0,0,0,0,0,0,4,5   # f8 - ff
)

UCS2BE_ST = (
     5, 7, 7,MachineState.ERROR, 4, 3,MachineState.ERROR,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f
    MachineState.ITS_ME,MachineState.ITS_ME, 6, 6, 6, 6,MachineState.ERROR,MachineState.ERROR,#10-17
     6, 6, 6, 6, 6,MachineState.ITS_ME, 6, 6,#18-1f
     6, 6, 6, 6, 5, 7, 7,MachineState.ERROR,#20-27
     5, 8, 6, 6,MachineState.ERROR, 6, 6, 6,#28-2f
     6, 6, 6, 6,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START #30-37
)

UCS2BE_CHAR_LEN_TABLE = (2, 2, 2, 0, 2, 2)

UCS2BE_SM_MODEL = {'class_table': UCS2BE_CLS,
                   'class_factor': 6,
                   'state_table': UCS2BE_ST,
                   'char_len_table': UCS2BE_CHAR_LEN_TABLE,
                   'name': 'UTF-16BE'}

# UCS2-LE

UCS2LE_CLS = (
    0,0,0,0,0,0,0,0,  # 00 - 07
    0,0,1,0,0,2,0,0,  # 08 - 0f
    0,0,0,0,0,0,0,0,  # 10 - 17
    0,0,0,3,0,0,0,0,  # 18 - 1f
    0,0,0,0,0,0,0,0,  # 20 - 27
    0,3,3,3,3,3,0,0,  # 28 - 2f
    0,0,0,0,0,0,0,0,  # 30 - 37
    0,0,0,0,0,0,0,0,  # 38 - 3f
    0,0,0,0,0,0,0,0,  # 40 - 47
    0,0,0,0,0,0,0,0,  # 48 - 4f
    0,0,0,0,0,0,0,0,  # 50 - 57
    0,0,0,0,0,0,0,0,  # 58 - 5f
    0,0,0,0,0,0,0,0,  # 60 - 67
    0,0,0,0,0,0,0,0,  # 68 - 6f
    0,0,0,0,0,0,0,0,  # 70 - 77
    0,0,0,0,0,0,0,0,  # 78 - 7f
    0,0,0,0,0,0,0,0,  # 80 - 87
    0,0,0,0,0,0,0,0,  # 88 - 8f
    0,0,0,0,0,0,0,0,  # 90 - 97
    0,0,0,0,0,0,0,0,  # 98 - 9f
    0,0,0,0,0,0,0,0,  # a0 - a7
    0,0,0,0,0,0,0,0,  # a8 - af
    0,0,0,0,0,0,0,0,  # b0 - b7
    0,0,0,0,0,0,0,0,  # b8 - bf
    0,0,0,0,0,0,0,0,  # c0 - c7
    0,0,0,0,0,0,0,0,  # c8 - cf
    0,0,0,0,0,0,0,0,  # d0 - d7
    0,0,0,0,0,0,0,0,  # d8 - df
    0,0,0,0,0,0,0,0,  # e0 - e7
    0,0,0,0,0,0,0,0,  # e8 - ef
    0,0,0,0,0,0,0,0,  # f0 - f7
    0,0,0,0,0,0,4,5   # f8 - ff
)
UCS2LE_ST = (
     6, 6, 7, 6, 4, 3,MachineState.ERROR,MachineState.ERROR,#00-07
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f
    MachineState.ITS_ME,MachineState.ITS_ME, 5, 5, 5,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,#10-17
     5, 5, 5,MachineState.ERROR, 5,MachineState.ERROR, 6, 6,#18-1f
     7, 6, 8, 8, 5, 5, 5,MachineState.ERROR,#20-27
     5, 5, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5, 5,#28-2f
     5, 5, 5,MachineState.ERROR, 5,MachineState.ERROR,MachineState.START,MachineState.START #30-37
)

UCS2LE_CHAR_LEN_TABLE = (2, 2, 2, 2, 2, 2)

UCS2LE_SM_MODEL = {'class_table': UCS2LE_CLS,
                   'class_factor': 6,
                   'state_table': UCS2LE_ST,
                   'char_len_table': UCS2LE_CHAR_LEN_TABLE,
                   'name': 'UTF-16LE'}

# UTF-8

UTF8_CLS = (
    1,1,1,1,1,1,1,1,  # 00 - 07    #allow 0x00 as a legal value
    1,1,1,1,1,1,0,0,  # 08 - 0f
    1,1,1,1,1,1,1,1,  # 10 - 17
    1,1,1,0,1,1,1,1,  # 18 - 1f
    1,1,1,1,1,1,1,1,  # 20 - 27
    1,1,1,1,1,1,1,1,  # 28 - 2f
    1,1,1,1,1,1,1,1,  # 30 - 37
    1,1,1,1,1,1,1,1,  # 38 - 3f
    1,1,1,1,1,1,1,1,  # 40 - 47
    1,1,1,1,1,1,1,1,  # 48 - 4f
    1,1,1,1,1,1,1,1,  # 50 - 57
    1,1,1,1,1,1,1,1,  # 58 - 5f
    1,1,1,1,1,1,1,1,  # 60 - 67
    1,1,1,1,1,1,1,1,  # 68 - 6f
    1,1,1,1,1,1,1,1,  # 70 - 77
    1,1,1,1,1,1,1,1,  # 78 - 7f
    2,2,2,2,3,3,3,3,  # 80 - 87
    4,4,4,4,4,4,4,4,  # 88 - 8f
    4,4,4,4,4,4,4,4,  # 90 - 97
    4,4,4,4,4,4,4,4,  # 98 - 9f
    5,5,5,5,5,5,5,5,  # a0 - a7
    5,5,5,5,5,5,5,5,  # a8 - af
    5,5,5,5,5,5,5,5,  # b0 - b7
    5,5,5,5,5,5,5,5,  # b8 - bf
    0,0,6,6,6,6,6,6,  # c0 - c7
    6,6,6,6,6,6,6,6,  # c8 - cf
    6,6,6,6,6,6,6,6,  # d0 - d7
    6,6,6,6,6,6,6,6,  # d8 - df
    7,8,8,8,8,8,8,8,  # e0 - e7
    8,8,8,8,8,9,8,8,  # e8 - ef
    10,11,11,11,11,11,11,11,  # f0 - f7
    12,13,13,13,14,15,0,0     # f8 - ff
)

UTF8_ST = (
    MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 12, 10,#00-07
     9, 11, 8, 7, 6, 5, 4, 3,#08-0f
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#10-17
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#20-27
    MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#28-2f
    MachineState.ERROR,MachineState.ERROR, 5, 5, 5, 5,MachineState.ERROR,MachineState.ERROR,#30-37
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#38-3f
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5, 5, 5,MachineState.ERROR,MachineState.ERROR,#40-47
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#48-4f
    MachineState.ERROR,MachineState.ERROR, 7, 7, 7, 7,MachineState.ERROR,MachineState.ERROR,#50-57
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#58-5f
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 7, 7,MachineState.ERROR,MachineState.ERROR,#60-67
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#68-6f
    MachineState.ERROR,MachineState.ERROR, 9, 9, 9, 9,MachineState.ERROR,MachineState.ERROR,#70-77
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#78-7f
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 9,MachineState.ERROR,MachineState.ERROR,#80-87
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#88-8f
    MachineState.ERROR,MachineState.ERROR, 12, 12, 12, 12,MachineState.ERROR,MachineState.ERROR,#90-97
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#98-9f
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 12,MachineState.ERROR,MachineState.ERROR,#a0-a7
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#a8-af
    MachineState.ERROR,MachineState.ERROR, 12, 12, 12,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#b0-b7
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#b8-bf
    MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,#c0-c7
    MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR #c8-cf
)

UTF8_CHAR_LEN_TABLE = (0, 1, 0, 0, 0, 0, 2, 3, 3, 3, 4, 4, 5, 5, 6, 6)

UTF8_SM_MODEL = {'class_table': UTF8_CLS,
                 'class_factor': 16,
                 'state_table': UTF8_ST,
                 'char_len_table': UTF8_CHAR_LEN_TABLE,
                 'name': 'UTF-8'}
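
# Sketch of how these *_SM_MODEL dicts are consumed (the actual consumer is
# CodingStateMachine in codingstatemachine.py within this package): for each
# input byte b, the machine classifies the byte and indexes the flat state
# table, roughly
#
#     byte_class = model['class_table'][b]
#     curr_state = model['state_table'][curr_state * model['class_factor']
#                                       + byte_class]
#
# MachineState.ITS_ME means the stream can only be this encoding, and
# MachineState.ERROR rules the encoding out. Illustrative demo below (run as
# ``python -m pip._vendor.chardet.mbcssm``; assumes pip's vendored tree):
if __name__ == '__main__':
    from .codingstatemachine import CodingStateMachine
    sm = CodingStateMachine(UTF8_SM_MODEL)
    for b in 'é'.encode('utf-8'):  # a valid two-byte UTF-8 sequence
        state = sm.next_state(b)
    assert state == MachineState.START  # back to START: sequence completed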
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/sbcharsetprober.py
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
# The Initial Developer of the Original Code is Netscape Communications
# Corporation; portions Copyright (C) 2001 the Initial Developer.
# Contributor(s): Mark Pilgrim - port to Python
#                 Shy Shalom - original C code
# Licensed under the GNU Lesser General Public License, version 2.1 or
# (at your option) any later version; see the identical block above.
######################### END LICENSE BLOCK #########################

from .charsetprober import CharSetProber
from .enums import CharacterCategory, ProbingState, SequenceLikelihood


class SingleByteCharSetProber(CharSetProber):
    SAMPLE_SIZE = 64
    SB_ENOUGH_REL_THRESHOLD = 1024  # 0.25 * SAMPLE_SIZE^2
    POSITIVE_SHORTCUT_THRESHOLD = 0.95
    NEGATIVE_SHORTCUT_THRESHOLD = 0.05

    def __init__(self, model, reversed=False, name_prober=None):
        super(SingleByteCharSetProber, self).__init__()
        self._model = model
        # TRUE if we need to reverse every pair in the model lookup
        self._reversed = reversed
        # Optional auxiliary prober for name decision
        self._name_prober = name_prober
        self._last_order = None
        self._seq_counters = None
        self._total_seqs = None
        self._total_char = None
        self._freq_char = None
        self.reset()

    def reset(self):
        super(SingleByteCharSetProber, self).reset()
        # char order of last character
        self._last_order = 255
        self._seq_counters = [0] * SequenceLikelihood.get_num_categories()
        self._total_seqs = 0
        self._total_char = 0
        # characters that fall in our sampling range
        self._freq_char = 0

    @property
    def charset_name(self):
        if self._name_prober:
            return self._name_prober.charset_name
        else:
            return self._model['charset_name']

    @property
    def language(self):
        if self._name_prober:
            return self._name_prober.language
        else:
            return self._model.get('language')

    def feed(self, byte_str):
        if not self._model['keep_english_letter']:
            byte_str = self.filter_international_words(byte_str)
        if not byte_str:
            return self.state
        char_to_order_map = self._model['char_to_order_map']
        for i, c in enumerate(byte_str):
            # XXX: Order is in range 1-64, so one would think we want 0-63
            #      here, but that leads to 27 more test failures than before.
            order = char_to_order_map[c]
            # XXX: This was SYMBOL_CAT_ORDER before, with a value of 250, but
            #      CharacterCategory.SYMBOL is actually 253, so we use CONTROL
            #      to make it closer to the original intent. The only
            #      difference is whether or not we count digits and control
            #      characters for _total_char purposes.
            if order < CharacterCategory.CONTROL:
                self._total_char += 1
            if order < self.SAMPLE_SIZE:
                self._freq_char += 1
                if self._last_order < self.SAMPLE_SIZE:
                    self._total_seqs += 1
                    if not self._reversed:
                        i = (self._last_order * self.SAMPLE_SIZE) + order
                        model = self._model['precedence_matrix'][i]
                    else:  # reverse the order of the letters in the lookup
                        i = (order * self.SAMPLE_SIZE) + self._last_order
                        model = self._model['precedence_matrix'][i]
                    self._seq_counters[model] += 1
            self._last_order = order

        charset_name = self._model['charset_name']
        if self.state == ProbingState.DETECTING:
            if self._total_seqs > self.SB_ENOUGH_REL_THRESHOLD:
                confidence = self.get_confidence()
                if confidence > self.POSITIVE_SHORTCUT_THRESHOLD:
                    self.logger.debug('%s confidence = %s, we have a winner',
                                      charset_name, confidence)
                    self._state = ProbingState.FOUND_IT
                elif confidence < self.NEGATIVE_SHORTCUT_THRESHOLD:
                    self.logger.debug('%s confidence = %s, below negative '
                                      'shortcut threshold %s', charset_name,
                                      confidence,
                                      self.NEGATIVE_SHORTCUT_THRESHOLD)
                    self._state = ProbingState.NOT_ME

        return self.state

    def get_confidence(self):
        r = 0.01
        if self._total_seqs > 0:
            r = ((1.0 * self._seq_counters[SequenceLikelihood.POSITIVE]) /
                 self._total_seqs / self._model['typical_positive_ratio'])
            r = r * self._freq_char / self._total_char
            if r >= 1.0:
                r = 0.99
        return r
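
# Worked example of get_confidence() (illustrative numbers only): with a
# model whose typical_positive_ratio is 0.97, 600 positive bigrams out of
# 800 scored sequences, and 900 of 1000 characters inside the frequent-letter
# sample,
#
#     r = (600 / 800 / 0.97) * (900 / 1000) ≈ 0.696
#
# i.e. the observed positive-bigram rate is normalized by the rate the model
# expects for real text in that encoding, then damped by how many characters
# the model could score at all.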
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/sbcsgroupprober.py
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
# The Initial Developer of the Original Code is Netscape Communications
# Corporation; portions Copyright (C) 2001 the Initial Developer.
# Contributor(s): Mark Pilgrim - port to Python
#                 Shy Shalom - original C code
# Licensed under the GNU Lesser General Public License, version 2.1 or
# (at your option) any later version; see the identical block above.
######################### END LICENSE BLOCK #########################

from .charsetgroupprober import CharSetGroupProber
from .sbcharsetprober import SingleByteCharSetProber
from .langcyrillicmodel import (Win1251CyrillicModel, Koi8rModel,
                                Latin5CyrillicModel, MacCyrillicModel,
                                Ibm866Model, Ibm855Model)
from .langgreekmodel import Latin7GreekModel, Win1253GreekModel
from .langbulgarianmodel import Latin5BulgarianModel, Win1251BulgarianModel
# from .langhungarianmodel import Latin2HungarianModel, Win1250HungarianModel
from .langthaimodel import TIS620ThaiModel
from .langhebrewmodel import Win1255HebrewModel
from .hebrewprober import HebrewProber
from .langturkishmodel import Latin5TurkishModel


class SBCSGroupProber(CharSetGroupProber):
    def __init__(self):
        super(SBCSGroupProber, self).__init__()
        self.probers = [
            SingleByteCharSetProber(Win1251CyrillicModel),
            SingleByteCharSetProber(Koi8rModel),
            SingleByteCharSetProber(Latin5CyrillicModel),
            SingleByteCharSetProber(MacCyrillicModel),
            SingleByteCharSetProber(Ibm866Model),
            SingleByteCharSetProber(Ibm855Model),
            SingleByteCharSetProber(Latin7GreekModel),
            SingleByteCharSetProber(Win1253GreekModel),
            SingleByteCharSetProber(Latin5BulgarianModel),
            SingleByteCharSetProber(Win1251BulgarianModel),
            # TODO: Restore Hungarian encodings (iso-8859-2 and windows-1250)
            #       after we retrain model.
            # SingleByteCharSetProber(Latin2HungarianModel),
            # SingleByteCharSetProber(Win1250HungarianModel),
            SingleByteCharSetProber(TIS620ThaiModel),
            SingleByteCharSetProber(Latin5TurkishModel),
        ]
        hebrew_prober = HebrewProber()
        logical_hebrew_prober = SingleByteCharSetProber(Win1255HebrewModel,
                                                        False, hebrew_prober)
        visual_hebrew_prober = SingleByteCharSetProber(Win1255HebrewModel,
                                                       True, hebrew_prober)
        hebrew_prober.set_model_probers(logical_hebrew_prober,
                                        visual_hebrew_prober)
        self.probers.extend([hebrew_prober, logical_hebrew_prober,
                             visual_hebrew_prober])
        self.reset()
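
# Usage sketch (illustrative, not part of the vendored sources; run e.g. via
# ``python -m pip._vendor.chardet.sbcsgroupprober``). The sample text and
# encoding are arbitrary; note the two Hebrew probers above share one
# HebrewProber, which decides between logical and visual byte order.
if __name__ == '__main__':
    prober = SBCSGroupProber()
    prober.feed('Привет, мир! Это пример текста.'.encode('koi8_r'))
    print(prober.charset_name, prober.get_confidence())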
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/sjisprober.py
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
# The Initial Developer of the Original Code is Netscape Communications
# Corporation; portions Copyright (C) 1998 the Initial Developer.
# Contributor(s): Mark Pilgrim - port to Python
# Licensed under the GNU Lesser General Public License, version 2.1 or
# (at your option) any later version; see the identical block above.
######################### END LICENSE BLOCK #########################

from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import SJISDistributionAnalysis
from .jpcntx import SJISContextAnalysis
from .mbcssm import SJIS_SM_MODEL
from .enums import ProbingState, MachineState


class SJISProber(MultiByteCharSetProber):
    def __init__(self):
        super(SJISProber, self).__init__()
        self.coding_sm = CodingStateMachine(SJIS_SM_MODEL)
        self.distribution_analyzer = SJISDistributionAnalysis()
        self.context_analyzer = SJISContextAnalysis()
        self.reset()

    def reset(self):
        super(SJISProber, self).reset()
        self.context_analyzer.reset()

    @property
    def charset_name(self):
        return self.context_analyzer.charset_name

    @property
    def language(self):
        return "Japanese"

    def feed(self, byte_str):
        for i in range(len(byte_str)):
            coding_state = self.coding_sm.next_state(byte_str[i])
            if coding_state == MachineState.ERROR:
                self.logger.debug('%s %s prober hit error at byte %s',
                                  self.charset_name, self.language, i)
                self._state = ProbingState.NOT_ME
                break
            elif coding_state == MachineState.ITS_ME:
                self._state = ProbingState.FOUND_IT
                break
            elif coding_state == MachineState.START:
                char_len = self.coding_sm.get_current_charlen()
                if i == 0:
                    self._last_char[1] = byte_str[0]
                    self.context_analyzer.feed(self._last_char[2 - char_len:],
                                               char_len)
                    self.distribution_analyzer.feed(self._last_char, char_len)
                else:
                    self.context_analyzer.feed(byte_str[i + 1 - char_len:
                                                        i + 3 - char_len],
                                               char_len)
                    self.distribution_analyzer.feed(byte_str[i - 1:i + 1],
                                                    char_len)

        self._last_char[0] = byte_str[-1]

        if self.state == ProbingState.DETECTING:
            if (self.context_analyzer.got_enough_data() and
                    (self.get_confidence() > self.SHORTCUT_THRESHOLD)):
                self._state = ProbingState.FOUND_IT

        return self.state

    def get_confidence(self):
        context_conf = self.context_analyzer.get_confidence()
        distrib_conf = self.distribution_analyzer.get_confidence()
        return max(context_conf, distrib_conf)
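
# Note on the slicing in feed() above (a sketch of the index arithmetic, not
# upstream documentation): when a character completes at index i > 0,
# char_len is how many bytes it spanned, and
#
#     byte_str[i + 1 - char_len : i + 3 - char_len]   # context window
#     byte_str[i - 1 : i + 1]                         # distribution window
#
# for the common two-byte case (char_len == 2) both reduce to
# byte_str[i-1:i+1], i.e. exactly the two bytes of the character just
# finished; the analyzers then score that pair.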
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/universaldetector.py
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
# The Initial Developer of the Original Code is Netscape Communications
# Corporation; portions Copyright (C) 2001 the Initial Developer.
# Contributor(s): Mark Pilgrim - port to Python
#                 Shy Shalom - original C code
# Licensed under the GNU Lesser General Public License, version 2.1 or
# (at your option) any later version; see the identical block above.
######################### END LICENSE BLOCK #########################
"""
Module containing the UniversalDetector detector class, which is the primary
class a user of ``chardet`` should use.

:author: Mark Pilgrim (initial port to Python)
:author: Shy Shalom (original C code)
:author: Dan Blanchard (major refactoring for 3.0)
:author: Ian Cordasco
"""

import codecs
import logging
import re

from .charsetgroupprober import CharSetGroupProber
from .enums import InputState, LanguageFilter, ProbingState
from .escprober import EscCharSetProber
from .latin1prober import Latin1Prober
from .mbcsgroupprober import MBCSGroupProber
from .sbcsgroupprober import SBCSGroupProber


class UniversalDetector(object):
    """
    The ``UniversalDetector`` class underlies the ``chardet.detect`` function
    and coordinates all of the different charset probers.

    To get a ``dict`` containing an encoding and its confidence, you can
    simply run:
    .. code::

            u = UniversalDetector()
            u.feed(some_bytes)
            u.close()
            detected = u.result
    """

    MINIMUM_THRESHOLD = 0.20
    HIGH_BYTE_DETECTOR = re.compile(b'[\x80-\xFF]')
    ESC_DETECTOR = re.compile(b'(\033|~{)')
    WIN_BYTE_DETECTOR = re.compile(b'[\x80-\x9F]')
    ISO_WIN_MAP = {'iso-8859-1': 'Windows-1252',
                   'iso-8859-2': 'Windows-1250',
                   'iso-8859-5': 'Windows-1251',
                   'iso-8859-6': 'Windows-1256',
                   'iso-8859-7': 'Windows-1253',
                   'iso-8859-8': 'Windows-1255',
                   'iso-8859-9': 'Windows-1254',
                   'iso-8859-13': 'Windows-1257'}

    def __init__(self, lang_filter=LanguageFilter.ALL):
        self._esc_charset_prober = None
        self._charset_probers = []
        self.result = None
        self.done = None
        self._got_data = None
        self._input_state = None
        self._last_char = None
        self.lang_filter = lang_filter
        self.logger = logging.getLogger(__name__)
        self._has_win_bytes = None
        self.reset()

    def reset(self):
        """
        Reset the UniversalDetector and all of its probers back to their
        initial states.  This is called by ``__init__``, so you only need to
        call this directly in between analyses of different documents.
        """
        self.result = {'encoding': None, 'confidence': 0.0, 'language': None}
        self.done = False
        self._got_data = False
        self._has_win_bytes = False
        self._input_state = InputState.PURE_ASCII
        self._last_char = b''
        if self._esc_charset_prober:
            self._esc_charset_prober.reset()
        for prober in self._charset_probers:
            prober.reset()

    def feed(self, byte_str):
        """
        Takes a chunk of a document and feeds it through all of the relevant
        charset probers.

        After calling ``feed``, you can check the value of the ``done``
        attribute to see if you need to continue feeding the
        ``UniversalDetector`` more data, or if it has made a prediction
        (in the ``result`` attribute).

        .. note::
           You should always call ``close`` when you're done feeding in your
           document if ``done`` is not already ``True``.
""" if self.done: return if not len(byte_str): return if not isinstance(byte_str, bytearray): byte_str = bytearray(byte_str) # First check for known BOMs, since these are guaranteed to be correct if not self._got_data: # If the data starts with BOM, we know it is UTF if byte_str.startswith(codecs.BOM_UTF8): # EF BB BF UTF-8 with BOM self.result = {'encoding': "UTF-8-SIG", 'confidence': 1.0, 'language': ''} elif byte_str.startswith((codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)): # FF FE 00 00 UTF-32, little-endian BOM # 00 00 FE FF UTF-32, big-endian BOM self.result = {'encoding': "UTF-32", 'confidence': 1.0, 'language': ''} elif byte_str.startswith(b'\xFE\xFF\x00\x00'): # FE FF 00 00 UCS-4, unusual octet order BOM (3412) self.result = {'encoding': "X-ISO-10646-UCS-4-3412", 'confidence': 1.0, 'language': ''} elif byte_str.startswith(b'\x00\x00\xFF\xFE'): # 00 00 FF FE UCS-4, unusual octet order BOM (2143) self.result = {'encoding': "X-ISO-10646-UCS-4-2143", 'confidence': 1.0, 'language': ''} elif byte_str.startswith((codecs.BOM_LE, codecs.BOM_BE)): # FF FE UTF-16, little endian BOM # FE FF UTF-16, big endian BOM self.result = {'encoding': "UTF-16", 'confidence': 1.0, 'language': ''} self._got_data = True if self.result['encoding'] is not None: self.done = True return # If none of those matched and we've only see ASCII so far, check # for high bytes and escape sequences if self._input_state == InputState.PURE_ASCII: if self.HIGH_BYTE_DETECTOR.search(byte_str): self._input_state = InputState.HIGH_BYTE elif self._input_state == InputState.PURE_ASCII and \ self.ESC_DETECTOR.search(self._last_char + byte_str): self._input_state = InputState.ESC_ASCII self._last_char = byte_str[-1:] # If we've seen escape sequences, use the EscCharSetProber, which # uses a simple state machine to check for known escape sequences in # HZ and ISO-2022 encodings, since those are the only encodings that # use such sequences. if self._input_state == InputState.ESC_ASCII: if not self._esc_charset_prober: self._esc_charset_prober = EscCharSetProber(self.lang_filter) if self._esc_charset_prober.feed(byte_str) == ProbingState.FOUND_IT: self.result = {'encoding': self._esc_charset_prober.charset_name, 'confidence': self._esc_charset_prober.get_confidence(), 'language': self._esc_charset_prober.language} self.done = True # If we've seen high bytes (i.e., those with values greater than 127), # we need to do more complicated checks using all our multi-byte and # single-byte probers that are left. The single-byte probers # use character bigram distributions to determine the encoding, whereas # the multi-byte probers use a combination of character unigram and # bigram distributions. elif self._input_state == InputState.HIGH_BYTE: if not self._charset_probers: self._charset_probers = [MBCSGroupProber(self.lang_filter)] # If we're checking non-CJK encodings, use single-byte prober if self.lang_filter & LanguageFilter.NON_CJK: self._charset_probers.append(SBCSGroupProber()) self._charset_probers.append(Latin1Prober()) for prober in self._charset_probers: if prober.feed(byte_str) == ProbingState.FOUND_IT: self.result = {'encoding': prober.charset_name, 'confidence': prober.get_confidence(), 'language': prober.language} self.done = True break if self.WIN_BYTE_DETECTOR.search(byte_str): self._has_win_bytes = True def close(self): """ Stop analyzing the current document and come up with a final prediction. :returns: The ``result`` attribute, a ``dict`` with the keys `encoding`, `confidence`, and `language`. 
""" # Don't bother with checks if we're already done if self.done: return self.result self.done = True if not self._got_data: self.logger.debug('no data received!') # Default to ASCII if it is all we've seen so far elif self._input_state == InputState.PURE_ASCII: self.result = {'encoding': 'ascii', 'confidence': 1.0, 'language': ''} # If we have seen non-ASCII, return the best that met MINIMUM_THRESHOLD elif self._input_state == InputState.HIGH_BYTE: prober_confidence = None max_prober_confidence = 0.0 max_prober = None for prober in self._charset_probers: if not prober: continue prober_confidence = prober.get_confidence() if prober_confidence > max_prober_confidence: max_prober_confidence = prober_confidence max_prober = prober if max_prober and (max_prober_confidence > self.MINIMUM_THRESHOLD): charset_name = max_prober.charset_name lower_charset_name = max_prober.charset_name.lower() confidence = max_prober.get_confidence() # Use Windows encoding name instead of ISO-8859 if we saw any # extra Windows-specific bytes if lower_charset_name.startswith('iso-8859'): if self._has_win_bytes: charset_name = self.ISO_WIN_MAP.get(lower_charset_name, charset_name) self.result = {'encoding': charset_name, 'confidence': confidence, 'language': max_prober.language} # Log all prober confidences if none met MINIMUM_THRESHOLD if self.logger.getEffectiveLevel() == logging.DEBUG: if self.result['encoding'] is None: self.logger.debug('no probers hit minimum threshold') for group_prober in self._charset_probers: if not group_prober: continue if isinstance(group_prober, CharSetGroupProber): for prober in group_prober.probers: self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, prober.get_confidence()) else: self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, prober.get_confidence()) return self.result ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/utf8prober.py��������������������0000644�0001750�0001750�00000005316�00000000000�030556� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/utf8prober.py
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
# The Initial Developer of the Original Code is Netscape Communications
# Corporation; portions Copyright (C) 1998 the Initial Developer.
# Contributor(s): Mark Pilgrim - port to Python
# Licensed under the GNU Lesser General Public License, version 2.1 or
# (at your option) any later version; see the identical block above.
######################### END LICENSE BLOCK #########################

from .charsetprober import CharSetProber
from .enums import ProbingState, MachineState
from .codingstatemachine import CodingStateMachine
from .mbcssm import UTF8_SM_MODEL


class UTF8Prober(CharSetProber):
    ONE_CHAR_PROB = 0.5

    def __init__(self):
        super(UTF8Prober, self).__init__()
        self.coding_sm = CodingStateMachine(UTF8_SM_MODEL)
        self._num_mb_chars = None
        self.reset()

    def reset(self):
        super(UTF8Prober, self).reset()
        self.coding_sm.reset()
        self._num_mb_chars = 0

    @property
    def charset_name(self):
        return "utf-8"

    @property
    def language(self):
        return ""

    def feed(self, byte_str):
        for c in byte_str:
            coding_state = self.coding_sm.next_state(c)
            if coding_state == MachineState.ERROR:
                self._state = ProbingState.NOT_ME
                break
            elif coding_state == MachineState.ITS_ME:
                self._state = ProbingState.FOUND_IT
                break
            elif coding_state == MachineState.START:
                if self.coding_sm.get_current_charlen() >= 2:
                    self._num_mb_chars += 1

        if self.state == ProbingState.DETECTING:
            if self.get_confidence() > self.SHORTCUT_THRESHOLD:
                self._state = ProbingState.FOUND_IT

        return self.state

    def get_confidence(self):
        unlike = 0.99
        if self._num_mb_chars < 6:
            unlike *= self.ONE_CHAR_PROB ** self._num_mb_chars
            return 1.0 - unlike
        else:
            return unlike
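
# Confidence math in get_confidence(), worked out (a sketch of the formula
# above, not upstream documentation): with N complete multi-byte characters
# seen and N < 6, confidence = 1 - 0.99 * 0.5**N, so
#
#     N = 0 -> 0.01,   N = 1 -> 0.505,   N = 3 -> ~0.876,
#
# and from 6 multi-byte characters onward it is pinned at 0.99: each valid
# multi-byte sequence halves the probability that the data is *not* UTF-8.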
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/chardet/version.py
"""
This module exists only to simplify retrieving the version number of chardet
from within setup.py and from chardet subpackages.

:author: Dan Blanchard (dan.blanchard@gmail.com)
"""

__version__ = "3.0.4"
VERSION = __version__.split('.')
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/__init__.py
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
from .initialise import init, deinit, reinit, colorama_text
from .ansi import Fore, Back, Style, Cursor
from .ansitowin32 import AnsiToWin32

__version__ = '0.4.1'

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/ansi.py
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
'''
This module generates ANSI character codes for printing colors to terminals.
See: http://en.wikipedia.org/wiki/ANSI_escape_code
'''

CSI = '\033['
OSC = '\033]'
BEL = '\007'


def code_to_chars(code):
    return CSI + str(code) + 'm'

def set_title(title):
    return OSC + '2;' + title + BEL

def clear_screen(mode=2):
    return CSI + str(mode) + 'J'

def clear_line(mode=2):
    return CSI + str(mode) + 'K'


class AnsiCodes(object):
    def __init__(self):
        # the subclasses declare class attributes which are numbers.
        # Upon instantiation we define instance attributes, which are the same
        # as the class attributes but wrapped with the ANSI escape sequence
        for name in dir(self):
            if not name.startswith('_'):
                value = getattr(self, name)
                setattr(self, name, code_to_chars(value))


class AnsiCursor(object):
    def UP(self, n=1):
        return CSI + str(n) + 'A'
    def DOWN(self, n=1):
        return CSI + str(n) + 'B'
    def FORWARD(self, n=1):
        return CSI + str(n) + 'C'
    def BACK(self, n=1):
        return CSI + str(n) + 'D'
    def POS(self, x=1, y=1):
        return CSI + str(y) + ';' + str(x) + 'H'


class AnsiFore(AnsiCodes):
    BLACK   = 30
    RED     = 31
    GREEN   = 32
    YELLOW  = 33
    BLUE    = 34
    MAGENTA = 35
    CYAN    = 36
    WHITE   = 37
    RESET   = 39

    # These are fairly well supported, but not part of the standard.
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/ansi.py

# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
'''
This module generates ANSI character codes for printing colors to terminals.
See: http://en.wikipedia.org/wiki/ANSI_escape_code
'''

CSI = '\033['
OSC = '\033]'
BEL = '\007'


def code_to_chars(code):
    return CSI + str(code) + 'm'

def set_title(title):
    return OSC + '2;' + title + BEL

def clear_screen(mode=2):
    return CSI + str(mode) + 'J'

def clear_line(mode=2):
    return CSI + str(mode) + 'K'


class AnsiCodes(object):
    def __init__(self):
        # the subclasses declare class attributes which are numbers.
        # Upon instantiation we define instance attributes, which are the same
        # as the class attributes but wrapped with the ANSI escape sequence
        for name in dir(self):
            if not name.startswith('_'):
                value = getattr(self, name)
                setattr(self, name, code_to_chars(value))


class AnsiCursor(object):
    def UP(self, n=1):
        return CSI + str(n) + 'A'

    def DOWN(self, n=1):
        return CSI + str(n) + 'B'

    def FORWARD(self, n=1):
        return CSI + str(n) + 'C'

    def BACK(self, n=1):
        return CSI + str(n) + 'D'

    def POS(self, x=1, y=1):
        return CSI + str(y) + ';' + str(x) + 'H'


class AnsiFore(AnsiCodes):
    BLACK = 30
    RED = 31
    GREEN = 32
    YELLOW = 33
    BLUE = 34
    MAGENTA = 35
    CYAN = 36
    WHITE = 37
    RESET = 39

    # These are fairly well supported, but not part of the standard.
    LIGHTBLACK_EX = 90
    LIGHTRED_EX = 91
    LIGHTGREEN_EX = 92
    LIGHTYELLOW_EX = 93
    LIGHTBLUE_EX = 94
    LIGHTMAGENTA_EX = 95
    LIGHTCYAN_EX = 96
    LIGHTWHITE_EX = 97


class AnsiBack(AnsiCodes):
    BLACK = 40
    RED = 41
    GREEN = 42
    YELLOW = 43
    BLUE = 44
    MAGENTA = 45
    CYAN = 46
    WHITE = 47
    RESET = 49

    # These are fairly well supported, but not part of the standard.
    LIGHTBLACK_EX = 100
    LIGHTRED_EX = 101
    LIGHTGREEN_EX = 102
    LIGHTYELLOW_EX = 103
    LIGHTBLUE_EX = 104
    LIGHTMAGENTA_EX = 105
    LIGHTCYAN_EX = 106
    LIGHTWHITE_EX = 107


class AnsiStyle(AnsiCodes):
    BRIGHT = 1
    DIM = 2
    NORMAL = 22
    RESET_ALL = 0


Fore = AnsiFore()
Back = AnsiBack()
Style = AnsiStyle()
Cursor = AnsiCursor()
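The tables above are plain escape strings once instantiated; a short illustration (not part of the archive) of what the module produces:

# Illustrative only: AnsiCodes just prefixes each numeric code with CSI.
from colorama.ansi import Fore, Cursor, code_to_chars

assert Fore.GREEN == '\033[32m' == code_to_chars(32)
assert Cursor.UP(2) == '\033[2A'   # CSI + count + 'A' moves the cursor up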
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/ansitowin32.py

# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
import re
import sys
import os

from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style
from .winterm import WinTerm, WinColor, WinStyle
from .win32 import windll, winapi_test


winterm = None
if windll is not None:
    winterm = WinTerm()


class StreamWrapper(object):
    '''
    Wraps a stream (such as stdout), acting as a transparent proxy for all
    attribute access apart from method 'write()', which is delegated to our
    Converter instance.
    '''
    def __init__(self, wrapped, converter):
        # double-underscore everything to prevent clashes with names of
        # attributes on the wrapped stream object.
        self.__wrapped = wrapped
        self.__convertor = converter

    def __getattr__(self, name):
        return getattr(self.__wrapped, name)

    def __enter__(self, *args, **kwargs):
        # special method lookup bypasses __getattr__/__getattribute__, see
        # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit
        # thus, contextlib magic methods are not proxied via __getattr__
        return self.__wrapped.__enter__(*args, **kwargs)

    def __exit__(self, *args, **kwargs):
        return self.__wrapped.__exit__(*args, **kwargs)

    def write(self, text):
        self.__convertor.write(text)

    def isatty(self):
        stream = self.__wrapped
        if 'PYCHARM_HOSTED' in os.environ:
            if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__):
                return True
        try:
            stream_isatty = stream.isatty
        except AttributeError:
            return False
        else:
            return stream_isatty()

    @property
    def closed(self):
        stream = self.__wrapped
        try:
            return stream.closed
        except AttributeError:
            return True


class AnsiToWin32(object):
    '''
    Implements a 'write()' method which, on Windows, will strip ANSI character
    sequences from the text, and if outputting to a tty, will convert them into
    win32 function calls.
    '''
    ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?')   # Control Sequence Introducer
    ANSI_OSC_RE = re.compile('\001?\033\\]((?:.|;)*?)(\x07)\002?')        # Operating System Command

    def __init__(self, wrapped, convert=None, strip=None, autoreset=False):
        # The wrapped stream (normally sys.stdout or sys.stderr)
        self.wrapped = wrapped

        # should we reset colors to defaults after every .write()
        self.autoreset = autoreset

        # create the proxy wrapping our output stream
        self.stream = StreamWrapper(wrapped, self)

        on_windows = os.name == 'nt'

        # We test if the WinAPI works, because even if we are on Windows
        # we may be using a terminal that doesn't support the WinAPI
        # (e.g. Cygwin Terminal). In this case it's up to the terminal
        # to support the ANSI codes.
        conversion_supported = on_windows and winapi_test()

        # should we strip ANSI sequences from our output?
        if strip is None:
            strip = conversion_supported or (not self.stream.closed and not self.stream.isatty())
        self.strip = strip

        # should we convert ANSI sequences into win32 calls?
        if convert is None:
            convert = conversion_supported and not self.stream.closed and self.stream.isatty()
        self.convert = convert

        # dict of ansi codes to win32 functions and parameters
        self.win32_calls = self.get_win32_calls()

        # are we wrapping stderr?
        self.on_stderr = self.wrapped is sys.stderr

    def should_wrap(self):
        '''
        True if this class is actually needed. If false, then the output
        stream will not be affected, nor will win32 calls be issued, so
        wrapping stdout is not actually required. This will generally be
        False on non-Windows platforms, unless optional functionality like
        autoreset has been requested using kwargs to init()
        '''
        return self.convert or self.strip or self.autoreset

    def get_win32_calls(self):
        if self.convert and winterm:
            return {
                AnsiStyle.RESET_ALL: (winterm.reset_all, ),
                AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT),
                AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL),
                AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL),
                AnsiFore.BLACK: (winterm.fore, WinColor.BLACK),
                AnsiFore.RED: (winterm.fore, WinColor.RED),
                AnsiFore.GREEN: (winterm.fore, WinColor.GREEN),
                AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW),
                AnsiFore.BLUE: (winterm.fore, WinColor.BLUE),
                AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA),
                AnsiFore.CYAN: (winterm.fore, WinColor.CYAN),
                AnsiFore.WHITE: (winterm.fore, WinColor.GREY),
                AnsiFore.RESET: (winterm.fore, ),
                AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True),
                AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True),
                AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True),
                AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True),
                AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True),
                AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True),
                AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True),
                AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True),
                AnsiBack.BLACK: (winterm.back, WinColor.BLACK),
                AnsiBack.RED: (winterm.back, WinColor.RED),
                AnsiBack.GREEN: (winterm.back, WinColor.GREEN),
                AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW),
                AnsiBack.BLUE: (winterm.back, WinColor.BLUE),
                AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA),
                AnsiBack.CYAN: (winterm.back, WinColor.CYAN),
                AnsiBack.WHITE: (winterm.back, WinColor.GREY),
                AnsiBack.RESET: (winterm.back, ),
                AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True),
                AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True),
                AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True),
                AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True),
                AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True),
                AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True),
                AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True),
                AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True),
            }
        return dict()

    def write(self, text):
        if self.strip or self.convert:
            self.write_and_convert(text)
        else:
            self.wrapped.write(text)
            self.wrapped.flush()
        if self.autoreset:
            self.reset_all()

    def reset_all(self):
        if self.convert:
            self.call_win32('m', (0,))
        elif not self.strip and not self.stream.closed:
            self.wrapped.write(Style.RESET_ALL)

    def write_and_convert(self, text):
        '''
        Write the given text to our wrapped stream, stripping any ANSI
        sequences from the text, and optionally converting them into win32
        calls.
        '''
        cursor = 0
        text = self.convert_osc(text)
        for match in self.ANSI_CSI_RE.finditer(text):
            start, end = match.span()
            self.write_plain_text(text, cursor, start)
            self.convert_ansi(*match.groups())
            cursor = end
        self.write_plain_text(text, cursor, len(text))

    def write_plain_text(self, text, start, end):
        if start < end:
            self.wrapped.write(text[start:end])
            self.wrapped.flush()

    def convert_ansi(self, paramstring, command):
        if self.convert:
            params = self.extract_params(command, paramstring)
            self.call_win32(command, params)

    def extract_params(self, command, paramstring):
        if command in 'Hf':
            params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';'))
            while len(params) < 2:
                # defaults:
                params = params + (1,)
        else:
            params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0)
            if len(params) == 0:
                # defaults:
                if command in 'JKm':
                    params = (0,)
                elif command in 'ABCD':
                    params = (1,)
        return params

    def call_win32(self, command, params):
        if command == 'm':
            for param in params:
                if param in self.win32_calls:
                    func_args = self.win32_calls[param]
                    func = func_args[0]
                    args = func_args[1:]
                    kwargs = dict(on_stderr=self.on_stderr)
                    func(*args, **kwargs)
        elif command in 'J':
            winterm.erase_screen(params[0], on_stderr=self.on_stderr)
        elif command in 'K':
            winterm.erase_line(params[0], on_stderr=self.on_stderr)
        elif command in 'Hf':     # cursor position - absolute
            winterm.set_cursor_position(params, on_stderr=self.on_stderr)
        elif command in 'ABCD':   # cursor position - relative
            n = params[0]
            # A - up, B - down, C - forward, D - back
            x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command]
            winterm.cursor_adjust(x, y, on_stderr=self.on_stderr)

    def convert_osc(self, text):
        for match in self.ANSI_OSC_RE.finditer(text):
            start, end = match.span()
            text = text[:start] + text[end:]
            paramstring, command = match.groups()
            if command in '\x07':   # \x07 = BEL
                params = paramstring.split(";")
                # 0 - change title and icon (we will only change title)
                # 1 - change icon (we don't support this)
                # 2 - change title
                if params[0] in '02':
                    winterm.set_title(params[1])
        return text
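A sketch of how the class above is used (initialise.py below does exactly this); on a non-Windows tty should_wrap() is typically False and the original stream is kept:

# Sketch, not part of the archive: wrap a stream and write ANSI text through it.
import sys
from colorama.ansitowin32 import AnsiToWin32

wrapper = AnsiToWin32(sys.stdout)
stream = wrapper.stream if wrapper.should_wrap() else sys.stdout
stream.write('\033[31mred\033[0m\n')   # converted, stripped, or passed through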
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/initialise.py

# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
import atexit
import contextlib
import sys

from .ansitowin32 import AnsiToWin32


orig_stdout = None
orig_stderr = None

wrapped_stdout = None
wrapped_stderr = None

atexit_done = False


def reset_all():
    if AnsiToWin32 is not None:    # Issue #74: objects might become None at exit
        AnsiToWin32(orig_stdout).reset_all()


def init(autoreset=False, convert=None, strip=None, wrap=True):

    if not wrap and any([autoreset, convert, strip]):
        raise ValueError('wrap=False conflicts with any other arg=True')

    global wrapped_stdout, wrapped_stderr
    global orig_stdout, orig_stderr

    orig_stdout = sys.stdout
    orig_stderr = sys.stderr

    if sys.stdout is None:
        wrapped_stdout = None
    else:
        sys.stdout = wrapped_stdout = \
            wrap_stream(orig_stdout, convert, strip, autoreset, wrap)
    if sys.stderr is None:
        wrapped_stderr = None
    else:
        sys.stderr = wrapped_stderr = \
            wrap_stream(orig_stderr, convert, strip, autoreset, wrap)

    global atexit_done
    if not atexit_done:
        atexit.register(reset_all)
        atexit_done = True


def deinit():
    if orig_stdout is not None:
        sys.stdout = orig_stdout
    if orig_stderr is not None:
        sys.stderr = orig_stderr


@contextlib.contextmanager
def colorama_text(*args, **kwargs):
    init(*args, **kwargs)
    try:
        yield
    finally:
        deinit()


def reinit():
    if wrapped_stdout is not None:
        sys.stdout = wrapped_stdout
    if wrapped_stderr is not None:
        sys.stderr = wrapped_stderr


def wrap_stream(stream, convert, strip, autoreset, wrap):
    if wrap:
        wrapper = AnsiToWin32(stream,
                              convert=convert, strip=strip, autoreset=autoreset)
        if wrapper.should_wrap():
            stream = wrapper.stream
    return stream
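The colorama_text() context manager above is equivalent to calling init() and guaranteeing deinit(); a minimal sketch:

# Sketch, not part of the archive: scoped init/deinit via the context manager.
from colorama import colorama_text, Fore

with colorama_text(autoreset=True):
    print(Fore.CYAN + 'wrapped')   # streams restored on exit, even on error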
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/win32.py

# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.

# from winbase.h
STDOUT = -11
STDERR = -12

try:
    import ctypes
    from ctypes import LibraryLoader
    windll = LibraryLoader(ctypes.WinDLL)
    from ctypes import wintypes
except (AttributeError, ImportError):
    windll = None
    SetConsoleTextAttribute = lambda *_: None
    winapi_test = lambda *_: None
else:
    from ctypes import byref, Structure, c_char, POINTER

    COORD = wintypes._COORD

    class CONSOLE_SCREEN_BUFFER_INFO(Structure):
        """struct in wincon.h."""
        _fields_ = [
            ("dwSize", COORD),
            ("dwCursorPosition", COORD),
            ("wAttributes", wintypes.WORD),
            ("srWindow", wintypes.SMALL_RECT),
            ("dwMaximumWindowSize", COORD),
        ]

        def __str__(self):
            return '(%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d)' % (
                self.dwSize.Y, self.dwSize.X,
                self.dwCursorPosition.Y, self.dwCursorPosition.X,
                self.wAttributes,
                self.srWindow.Top, self.srWindow.Left,
                self.srWindow.Bottom, self.srWindow.Right,
                self.dwMaximumWindowSize.Y, self.dwMaximumWindowSize.X
            )

    _GetStdHandle = windll.kernel32.GetStdHandle
    _GetStdHandle.argtypes = [
        wintypes.DWORD,
    ]
    _GetStdHandle.restype = wintypes.HANDLE

    _GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo
    _GetConsoleScreenBufferInfo.argtypes = [
        wintypes.HANDLE,
        POINTER(CONSOLE_SCREEN_BUFFER_INFO),
    ]
    _GetConsoleScreenBufferInfo.restype = wintypes.BOOL

    _SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute
    _SetConsoleTextAttribute.argtypes = [
        wintypes.HANDLE,
        wintypes.WORD,
    ]
    _SetConsoleTextAttribute.restype = wintypes.BOOL

    _SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition
    _SetConsoleCursorPosition.argtypes = [
        wintypes.HANDLE,
        COORD,
    ]
    _SetConsoleCursorPosition.restype = wintypes.BOOL

    _FillConsoleOutputCharacterA = windll.kernel32.FillConsoleOutputCharacterA
    _FillConsoleOutputCharacterA.argtypes = [
        wintypes.HANDLE,
        c_char,
        wintypes.DWORD,
        COORD,
        POINTER(wintypes.DWORD),
    ]
    _FillConsoleOutputCharacterA.restype = wintypes.BOOL

    _FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute
    _FillConsoleOutputAttribute.argtypes = [
        wintypes.HANDLE,
        wintypes.WORD,
        wintypes.DWORD,
        COORD,
        POINTER(wintypes.DWORD),
    ]
    _FillConsoleOutputAttribute.restype = wintypes.BOOL

    _SetConsoleTitleW = windll.kernel32.SetConsoleTitleW
    _SetConsoleTitleW.argtypes = [
        wintypes.LPCWSTR
    ]
    _SetConsoleTitleW.restype = wintypes.BOOL

    def _winapi_test(handle):
        csbi = CONSOLE_SCREEN_BUFFER_INFO()
        success = _GetConsoleScreenBufferInfo(
            handle, byref(csbi))
        return bool(success)

    def winapi_test():
        return any(_winapi_test(h) for h in
                   (_GetStdHandle(STDOUT), _GetStdHandle(STDERR)))

    def GetConsoleScreenBufferInfo(stream_id=STDOUT):
        handle = _GetStdHandle(stream_id)
        csbi = CONSOLE_SCREEN_BUFFER_INFO()
        success = _GetConsoleScreenBufferInfo(
            handle, byref(csbi))
        return csbi

    def SetConsoleTextAttribute(stream_id, attrs):
        handle = _GetStdHandle(stream_id)
        return _SetConsoleTextAttribute(handle, attrs)

    def SetConsoleCursorPosition(stream_id, position, adjust=True):
        position = COORD(*position)
        # If the position is out of range, do nothing.
        if position.Y <= 0 or position.X <= 0:
            return
        # Adjust for Windows' SetConsoleCursorPosition:
        #    1. being 0-based, while ANSI is 1-based.
        #    2. expecting (x,y), while ANSI uses (y,x).
        adjusted_position = COORD(position.Y - 1, position.X - 1)
        if adjust:
            # Adjust for viewport's scroll position
            sr = GetConsoleScreenBufferInfo(STDOUT).srWindow
            adjusted_position.Y += sr.Top
            adjusted_position.X += sr.Left
        # Resume normal processing
        handle = _GetStdHandle(stream_id)
        return _SetConsoleCursorPosition(handle, adjusted_position)

    def FillConsoleOutputCharacter(stream_id, char, length, start):
        handle = _GetStdHandle(stream_id)
        char = c_char(char.encode())
        length = wintypes.DWORD(length)
        num_written = wintypes.DWORD(0)
        # Note that this is hard-coded for ANSI (vs wide) bytes.
        success = _FillConsoleOutputCharacterA(
            handle, char, length, start, byref(num_written))
        return num_written.value

    def FillConsoleOutputAttribute(stream_id, attr, length, start):
        ''' FillConsoleOutputAttribute( hConsole, csbi.wAttributes, dwConSize, coordScreen, &cCharsWritten )'''
        handle = _GetStdHandle(stream_id)
        attribute = wintypes.WORD(attr)
        length = wintypes.DWORD(length)
        num_written = wintypes.DWORD(0)
        # Note that this is hard-coded for ANSI (vs wide) bytes.
        return _FillConsoleOutputAttribute(
            handle, attribute, length, start, byref(num_written))

    def SetConsoleTitle(title):
        return _SetConsoleTitleW(title)
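A hedged, Windows-only sketch of the ctypes bindings above (elsewhere windll is None and winapi_test() returns a falsy stub, so the guard skips everything):

# Sketch, not part of the archive; only meaningful in a real Windows console.
from colorama import win32

if win32.windll is not None and win32.winapi_test():
    info = win32.GetConsoleScreenBufferInfo(win32.STDOUT)
    print('buffer size:', info.dwSize.X, 'x', info.dwSize.Y)
    win32.SetConsoleTitle('demo')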
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/colorama/winterm.py

# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
from . import win32


# from wincon.h
class WinColor(object):
    BLACK = 0
    BLUE = 1
    GREEN = 2
    CYAN = 3
    RED = 4
    MAGENTA = 5
    YELLOW = 6
    GREY = 7

# from wincon.h
class WinStyle(object):
    NORMAL = 0x00              # dim text, dim background
    BRIGHT = 0x08              # bright text, dim background
    BRIGHT_BACKGROUND = 0x80   # dim text, bright background

class WinTerm(object):

    def __init__(self):
        self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes
        self.set_attrs(self._default)
        self._default_fore = self._fore
        self._default_back = self._back
        self._default_style = self._style
        # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style.
        # So that LIGHT_EX colors and BRIGHT style do not clobber each other,
        # we track them separately, since LIGHT_EX is overwritten by Fore/Back
        # and BRIGHT is overwritten by Style codes.
        self._light = 0

    def get_attrs(self):
        return self._fore + self._back * 16 + (self._style | self._light)

    def set_attrs(self, value):
        self._fore = value & 7
        self._back = (value >> 4) & 7
        self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND)

    def reset_all(self, on_stderr=None):
        self.set_attrs(self._default)
        self.set_console(attrs=self._default)
        self._light = 0

    def fore(self, fore=None, light=False, on_stderr=False):
        if fore is None:
            fore = self._default_fore
        self._fore = fore
        # Emulate LIGHT_EX with BRIGHT Style
        if light:
            self._light |= WinStyle.BRIGHT
        else:
            self._light &= ~WinStyle.BRIGHT
        self.set_console(on_stderr=on_stderr)

    def back(self, back=None, light=False, on_stderr=False):
        if back is None:
            back = self._default_back
        self._back = back
        # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style
        if light:
            self._light |= WinStyle.BRIGHT_BACKGROUND
        else:
            self._light &= ~WinStyle.BRIGHT_BACKGROUND
        self.set_console(on_stderr=on_stderr)

    def style(self, style=None, on_stderr=False):
        if style is None:
            style = self._default_style
        self._style = style
        self.set_console(on_stderr=on_stderr)

    def set_console(self, attrs=None, on_stderr=False):
        if attrs is None:
            attrs = self.get_attrs()
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        win32.SetConsoleTextAttribute(handle, attrs)

    def get_position(self, handle):
        position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition
        # Because Windows coordinates are 0-based,
        # and win32.SetConsoleCursorPosition expects 1-based.
        position.X += 1
        position.Y += 1
        return position

    def set_cursor_position(self, position=None, on_stderr=False):
        if position is None:
            # I'm not currently tracking the position, so there is no default.
            # position = self.get_position()
            return
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        win32.SetConsoleCursorPosition(handle, position)

    def cursor_adjust(self, x, y, on_stderr=False):
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        position = self.get_position(handle)
        adjusted_position = (position.Y + y, position.X + x)
        win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False)

    def erase_screen(self, mode=0, on_stderr=False):
        # 0 should clear from the cursor to the end of the screen.
        # 1 should clear from the cursor to the beginning of the screen.
        # 2 should clear the entire screen, and move cursor to (1,1)
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        csbi = win32.GetConsoleScreenBufferInfo(handle)
        # get the number of character cells in the current buffer
        cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y
        # get number of character cells before current cursor position
        cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X
        if mode == 0:
            from_coord = csbi.dwCursorPosition
            cells_to_erase = cells_in_screen - cells_before_cursor
        elif mode == 1:
            from_coord = win32.COORD(0, 0)
            cells_to_erase = cells_before_cursor
        elif mode == 2:
            from_coord = win32.COORD(0, 0)
            cells_to_erase = cells_in_screen
        else:
            # invalid mode
            return
        # fill the entire screen with blanks
        win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord)
        # now set the buffer's attributes accordingly
        win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord)
        if mode == 2:
            # put the cursor where needed
            win32.SetConsoleCursorPosition(handle, (1, 1))

    def erase_line(self, mode=0, on_stderr=False):
        # 0 should clear from the cursor to the end of the line.
        # 1 should clear from the cursor to the beginning of the line.
        # 2 should clear the entire line.
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        csbi = win32.GetConsoleScreenBufferInfo(handle)
        if mode == 0:
            from_coord = csbi.dwCursorPosition
            cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X
        elif mode == 1:
            from_coord = win32.COORD(0, csbi.dwCursorPosition.Y)
            cells_to_erase = csbi.dwCursorPosition.X
        elif mode == 2:
            from_coord = win32.COORD(0, csbi.dwCursorPosition.Y)
            cells_to_erase = csbi.dwSize.X
        else:
            # invalid mode
            return
        # fill the line with blanks
        win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord)
        # now set the buffer's attributes accordingly
        win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord)

    def set_title(self, title):
        win32.SetConsoleTitle(title)
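A hedged sketch of WinTerm: it mirrors the ANSI color model with win32 calls, and constructing it requires a real Windows console (the constructor reads the console's default attributes):

# Sketch, not part of the archive; Windows console only, raises elsewhere.
from colorama.winterm import WinTerm, WinColor

term = WinTerm()
term.fore(WinColor.GREEN, light=True)   # emulate LIGHTGREEN_EX via BRIGHT
term.reset_all()                        # restore the saved default attributes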
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/__init__.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2019 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
import logging

__version__ = '0.2.9.post0'


class DistlibException(Exception):
    pass


try:
    from logging import NullHandler
except ImportError:  # pragma: no cover
    class NullHandler(logging.Handler):
        def handle(self, record):
            pass

        def emit(self, record):
            pass

        def createLock(self):
            self.lock = None

logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
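A sketch of the NullHandler pattern above: distlib emits log records but stays silent until the embedding application configures logging itself (import path per this vendored tree):

# Sketch, not part of the archive: opting in to distlib's diagnostics.
import logging

logging.basicConfig(level=logging.DEBUG)   # configure the root logger
logging.getLogger('pip._vendor.distlib').debug('now visible')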
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/__init__.py

"""Modules copied from Python 3 standard libraries, for internal use only.

Individual classes and functions are found in d2._backport.misc.  Intended
usage is to always import things missing from 3.1 from that module: the
built-in/stdlib objects will be used if found.
"""
# """Backports for individual classes and functions.""" import os import sys __all__ = ['cache_from_source', 'callable', 'fsencode'] try: from imp import cache_from_source except ImportError: def cache_from_source(py_file, debug=__debug__): ext = debug and 'c' or 'o' return py_file + ext try: callable = callable except NameError: from collections import Callable def callable(obj): return isinstance(obj, Callable) try: fsencode = os.fsencode except AttributeError: def fsencode(filename): if isinstance(filename, bytes): return filename elif isinstance(filename, str): return filename.encode(sys.getfilesystemencoding()) else: raise TypeError("expect bytes or str, not %s" % type(filename).__name__) �����������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/shutil.py��������������0000644�0001750�0001750�00000062057�00000000000�031757� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # # Copyright (C) 2012 The Python Software Foundation. # See LICENSE.txt and CONTRIBUTORS.txt. # """Utility functions for copying and archiving files and directory trees. XXX The functions here don't copy the resource fork or other metadata on Mac. """ import os import sys import stat from os.path import abspath import fnmatch import collections import errno from . import tarfile try: import bz2 _BZ2_SUPPORTED = True except ImportError: _BZ2_SUPPORTED = False try: from pwd import getpwnam except ImportError: getpwnam = None try: from grp import getgrnam except ImportError: getgrnam = None __all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2", "copytree", "move", "rmtree", "Error", "SpecialFileError", "ExecError", "make_archive", "get_archive_formats", "register_archive_format", "unregister_archive_format", "get_unpack_formats", "register_unpack_format", "unregister_unpack_format", "unpack_archive", "ignore_patterns"] class Error(EnvironmentError): pass class SpecialFileError(EnvironmentError): """Raised when trying to do a kind of operation (e.g. copying) which is not supported on a special file (e.g. 
a named pipe)""" class ExecError(EnvironmentError): """Raised when a command could not be executed""" class ReadError(EnvironmentError): """Raised when an archive cannot be read""" class RegistryError(Exception): """Raised when a registry operation with the archiving and unpacking registries fails""" try: WindowsError except NameError: WindowsError = None def copyfileobj(fsrc, fdst, length=16*1024): """copy data from file-like object fsrc to file-like object fdst""" while 1: buf = fsrc.read(length) if not buf: break fdst.write(buf) def _samefile(src, dst): # Macintosh, Unix. if hasattr(os.path, 'samefile'): try: return os.path.samefile(src, dst) except OSError: return False # All other platforms: check for same pathname. return (os.path.normcase(os.path.abspath(src)) == os.path.normcase(os.path.abspath(dst))) def copyfile(src, dst): """Copy data from src to dst""" if _samefile(src, dst): raise Error("`%s` and `%s` are the same file" % (src, dst)) for fn in [src, dst]: try: st = os.stat(fn) except OSError: # File most likely does not exist pass else: # XXX What about other special files? (sockets, devices...) if stat.S_ISFIFO(st.st_mode): raise SpecialFileError("`%s` is a named pipe" % fn) with open(src, 'rb') as fsrc: with open(dst, 'wb') as fdst: copyfileobj(fsrc, fdst) def copymode(src, dst): """Copy mode bits from src to dst""" if hasattr(os, 'chmod'): st = os.stat(src) mode = stat.S_IMODE(st.st_mode) os.chmod(dst, mode) def copystat(src, dst): """Copy all stat info (mode bits, atime, mtime, flags) from src to dst""" st = os.stat(src) mode = stat.S_IMODE(st.st_mode) if hasattr(os, 'utime'): os.utime(dst, (st.st_atime, st.st_mtime)) if hasattr(os, 'chmod'): os.chmod(dst, mode) if hasattr(os, 'chflags') and hasattr(st, 'st_flags'): try: os.chflags(dst, st.st_flags) except OSError as why: if (not hasattr(errno, 'EOPNOTSUPP') or why.errno != errno.EOPNOTSUPP): raise def copy(src, dst): """Copy data and mode bits ("cp src dst"). The destination may be a directory. """ if os.path.isdir(dst): dst = os.path.join(dst, os.path.basename(src)) copyfile(src, dst) copymode(src, dst) def copy2(src, dst): """Copy data and all stat info ("cp -p src dst"). The destination may be a directory. """ if os.path.isdir(dst): dst = os.path.join(dst, os.path.basename(src)) copyfile(src, dst) copystat(src, dst) def ignore_patterns(*patterns): """Function that can be used as copytree() ignore parameter. Patterns is a sequence of glob-style patterns that are used to exclude files""" def _ignore_patterns(path, names): ignored_names = [] for pattern in patterns: ignored_names.extend(fnmatch.filter(names, pattern)) return set(ignored_names) return _ignore_patterns def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2, ignore_dangling_symlinks=False): """Recursively copy a directory tree. The destination directory must not already exist. If exception(s) occur, an Error is raised with a list of reasons. If the optional symlinks flag is true, symbolic links in the source tree result in symbolic links in the destination tree; if it is false, the contents of the files pointed to by symbolic links are copied. If the file pointed by the symlink doesn't exist, an exception will be added in the list of errors raised in an Error exception at the end of the copy process. You can set the optional ignore_dangling_symlinks flag to true if you want to silence this exception. Notice that this has no effect on platforms that don't support os.symlink. The optional ignore argument is a callable. 
If given, it is called with the `src` parameter, which is the directory being visited by copytree(), and `names` which is the list of `src` contents, as returned by os.listdir(): callable(src, names) -> ignored_names Since copytree() is called recursively, the callable will be called once for each directory that is copied. It returns a list of names relative to the `src` directory that should not be copied. The optional copy_function argument is a callable that will be used to copy each file. It will be called with the source path and the destination path as arguments. By default, copy2() is used, but any function that supports the same signature (like copy()) can be used. """ names = os.listdir(src) if ignore is not None: ignored_names = ignore(src, names) else: ignored_names = set() os.makedirs(dst) errors = [] for name in names: if name in ignored_names: continue srcname = os.path.join(src, name) dstname = os.path.join(dst, name) try: if os.path.islink(srcname): linkto = os.readlink(srcname) if symlinks: os.symlink(linkto, dstname) else: # ignore dangling symlink if the flag is on if not os.path.exists(linkto) and ignore_dangling_symlinks: continue # otherwise let the copy occurs. copy2 will raise an error copy_function(srcname, dstname) elif os.path.isdir(srcname): copytree(srcname, dstname, symlinks, ignore, copy_function) else: # Will raise a SpecialFileError for unsupported file types copy_function(srcname, dstname) # catch the Error from the recursive copytree so that we can # continue with other files except Error as err: errors.extend(err.args[0]) except EnvironmentError as why: errors.append((srcname, dstname, str(why))) try: copystat(src, dst) except OSError as why: if WindowsError is not None and isinstance(why, WindowsError): # Copying file access times may fail on Windows pass else: errors.extend((src, dst, str(why))) if errors: raise Error(errors) def rmtree(path, ignore_errors=False, onerror=None): """Recursively delete a directory tree. If ignore_errors is set, errors are ignored; otherwise, if onerror is set, it is called to handle the error with arguments (func, path, exc_info) where func is os.listdir, os.remove, or os.rmdir; path is the argument to that function that caused it to fail; and exc_info is a tuple returned by sys.exc_info(). If ignore_errors is false and onerror is None, an exception is raised. """ if ignore_errors: def onerror(*args): pass elif onerror is None: def onerror(*args): raise try: if os.path.islink(path): # symlinks to directories are forbidden, see bug #1669 raise OSError("Cannot call rmtree on a symbolic link") except OSError: onerror(os.path.islink, path, sys.exc_info()) # can't continue even if onerror hook returns return names = [] try: names = os.listdir(path) except os.error: onerror(os.listdir, path, sys.exc_info()) for name in names: fullname = os.path.join(path, name) try: mode = os.lstat(fullname).st_mode except os.error: mode = 0 if stat.S_ISDIR(mode): rmtree(fullname, ignore_errors, onerror) else: try: os.remove(fullname) except os.error: onerror(os.remove, fullname, sys.exc_info()) try: os.rmdir(path) except os.error: onerror(os.rmdir, path, sys.exc_info()) def _basename(path): # A basename() variant which first strips the trailing slash, if present. # Thus we always get the last component of the path, even for directories. return os.path.basename(path.rstrip(os.path.sep)) def move(src, dst): """Recursively move a file or directory to another location. This is similar to the Unix "mv" command. 
If the destination is a directory or a symlink to a directory, the source is moved inside the directory. The destination path must not already exist. If the destination already exists but is not a directory, it may be overwritten depending on os.rename() semantics. If the destination is on our current filesystem, then rename() is used. Otherwise, src is copied to the destination and then removed. A lot more could be done here... A look at a mv.c shows a lot of the issues this implementation glosses over. """ real_dst = dst if os.path.isdir(dst): if _samefile(src, dst): # We might be on a case insensitive filesystem, # perform the rename anyway. os.rename(src, dst) return real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error("Destination path '%s' already exists" % real_dst) try: os.rename(src, real_dst) except OSError: if os.path.isdir(src): if _destinsrc(src, dst): raise Error("Cannot move a directory '%s' into itself '%s'." % (src, dst)) copytree(src, real_dst, symlinks=True) rmtree(src) else: copy2(src, real_dst) os.unlink(src) def _destinsrc(src, dst): src = abspath(src) dst = abspath(dst) if not src.endswith(os.path.sep): src += os.path.sep if not dst.endswith(os.path.sep): dst += os.path.sep return dst.startswith(src) def _get_gid(name): """Returns a gid, given a group name.""" if getgrnam is None or name is None: return None try: result = getgrnam(name) except KeyError: result = None if result is not None: return result[2] return None def _get_uid(name): """Returns an uid, given a user name.""" if getpwnam is None or name is None: return None try: result = getpwnam(name) except KeyError: result = None if result is not None: return result[2] return None def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0, owner=None, group=None, logger=None): """Create a (possibly compressed) tar file from all the files under 'base_dir'. 'compress' must be "gzip" (the default), "bzip2", or None. 'owner' and 'group' can be used to define an owner and a group for the archive that is being built. If not provided, the current owner and group will be used. The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. 
""" tar_compression = {'gzip': 'gz', None: ''} compress_ext = {'gzip': '.gz'} if _BZ2_SUPPORTED: tar_compression['bzip2'] = 'bz2' compress_ext['bzip2'] = '.bz2' # flags for compression program, each element of list will be an argument if compress is not None and compress not in compress_ext: raise ValueError("bad value for 'compress', or compression format not " "supported : {0}".format(compress)) archive_name = base_name + '.tar' + compress_ext.get(compress, '') archive_dir = os.path.dirname(archive_name) if not os.path.exists(archive_dir): if logger is not None: logger.info("creating %s", archive_dir) if not dry_run: os.makedirs(archive_dir) # creating the tarball if logger is not None: logger.info('Creating tar archive') uid = _get_uid(owner) gid = _get_gid(group) def _set_uid_gid(tarinfo): if gid is not None: tarinfo.gid = gid tarinfo.gname = group if uid is not None: tarinfo.uid = uid tarinfo.uname = owner return tarinfo if not dry_run: tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress]) try: tar.add(base_dir, filter=_set_uid_gid) finally: tar.close() return archive_name def _call_external_zip(base_dir, zip_filename, verbose=False, dry_run=False): # XXX see if we want to keep an external call here if verbose: zipoptions = "-r" else: zipoptions = "-rq" from distutils.errors import DistutilsExecError from distutils.spawn import spawn try: spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run) except DistutilsExecError: # XXX really should distinguish between "couldn't find # external 'zip' command" and "zip failed". raise ExecError("unable to create zip file '%s': " "could neither import the 'zipfile' module nor " "find a standalone zip utility") % zip_filename def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip file. """ zip_filename = base_name + ".zip" archive_dir = os.path.dirname(base_name) if not os.path.exists(archive_dir): if logger is not None: logger.info("creating %s", archive_dir) if not dry_run: os.makedirs(archive_dir) # If zipfile module is not available, try spawning an external 'zip' # command. try: import zipfile except ImportError: zipfile = None if zipfile is None: _call_external_zip(base_dir, zip_filename, verbose, dry_run) else: if logger is not None: logger.info("creating '%s' and adding '%s' to it", zip_filename, base_dir) if not dry_run: zip = zipfile.ZipFile(zip_filename, "w", compression=zipfile.ZIP_DEFLATED) for dirpath, dirnames, filenames in os.walk(base_dir): for name in filenames: path = os.path.normpath(os.path.join(dirpath, name)) if os.path.isfile(path): zip.write(path, path) if logger is not None: logger.info("adding '%s'", path) zip.close() return zip_filename _ARCHIVE_FORMATS = { 'gztar': (_make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"), 'bztar': (_make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"), 'tar': (_make_tarball, [('compress', None)], "uncompressed tar file"), 'zip': (_make_zipfile, [], "ZIP file"), } if _BZ2_SUPPORTED: _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file") def get_archive_formats(): """Returns a list of supported formats for archiving and unarchiving. 
Each element of the returned sequence is a tuple (name, description) """ formats = [(name, registry[2]) for name, registry in _ARCHIVE_FORMATS.items()] formats.sort() return formats def register_archive_format(name, function, extra_args=None, description=''): """Registers an archive format. name is the name of the format. function is the callable that will be used to create archives. If provided, extra_args is a sequence of (name, value) tuples that will be passed as arguments to the callable. description can be provided to describe the format, and will be returned by the get_archive_formats() function. """ if extra_args is None: extra_args = [] if not isinstance(function, collections.Callable): raise TypeError('The %s object is not callable' % function) if not isinstance(extra_args, (tuple, list)): raise TypeError('extra_args needs to be a sequence') for element in extra_args: if not isinstance(element, (tuple, list)) or len(element) !=2: raise TypeError('extra_args elements are : (arg_name, value)') _ARCHIVE_FORMATS[name] = (function, extra_args, description) def unregister_archive_format(name): del _ARCHIVE_FORMATS[name] def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0, dry_run=0, owner=None, group=None, logger=None): """Create an archive file (eg. zip or tar). 'base_name' is the name of the file to create, minus any format-specific extension; 'format' is the archive format: one of "zip", "tar", "bztar" or "gztar". 'root_dir' is a directory that will be the root directory of the archive; ie. we typically chdir into 'root_dir' before creating the archive. 'base_dir' is the directory where we start archiving from; ie. 'base_dir' will be the common prefix of all files and directories in the archive. 'root_dir' and 'base_dir' both default to the current directory. Returns the name of the archive file. 'owner' and 'group' are used when creating a tar archive. By default, uses the current owner and group. """ save_cwd = os.getcwd() if root_dir is not None: if logger is not None: logger.debug("changing into '%s'", root_dir) base_name = os.path.abspath(base_name) if not dry_run: os.chdir(root_dir) if base_dir is None: base_dir = os.curdir kwargs = {'dry_run': dry_run, 'logger': logger} try: format_info = _ARCHIVE_FORMATS[format] except KeyError: raise ValueError("unknown archive format '%s'" % format) func = format_info[0] for arg, val in format_info[1]: kwargs[arg] = val if format != 'zip': kwargs['owner'] = owner kwargs['group'] = group try: filename = func(base_name, base_dir, **kwargs) finally: if root_dir is not None: if logger is not None: logger.debug("changing back to '%s'", save_cwd) os.chdir(save_cwd) return filename def get_unpack_formats(): """Returns a list of supported formats for unpacking. 
Each element of the returned sequence is a tuple (name, extensions, description) """ formats = [(name, info[0], info[3]) for name, info in _UNPACK_FORMATS.items()] formats.sort() return formats def _check_unpack_options(extensions, function, extra_args): """Checks what gets registered as an unpacker.""" # first make sure no other unpacker is registered for this extension existing_extensions = {} for name, info in _UNPACK_FORMATS.items(): for ext in info[0]: existing_extensions[ext] = name for extension in extensions: if extension in existing_extensions: msg = '%s is already registered for "%s"' raise RegistryError(msg % (extension, existing_extensions[extension])) if not isinstance(function, collections.Callable): raise TypeError('The registered function must be a callable') def register_unpack_format(name, extensions, function, extra_args=None, description=''): """Registers an unpack format. `name` is the name of the format. `extensions` is a list of extensions corresponding to the format. `function` is the callable that will be used to unpack archives. The callable will receive archives to unpack. If it's unable to handle an archive, it needs to raise a ReadError exception. If provided, `extra_args` is a sequence of (name, value) tuples that will be passed as arguments to the callable. description can be provided to describe the format, and will be returned by the get_unpack_formats() function. """ if extra_args is None: extra_args = [] _check_unpack_options(extensions, function, extra_args) _UNPACK_FORMATS[name] = extensions, function, extra_args, description def unregister_unpack_format(name): """Removes the pack format from the registry.""" del _UNPACK_FORMATS[name] def _ensure_directory(path): """Ensure that the parent directory of `path` exists""" dirname = os.path.dirname(path) if not os.path.isdir(dirname): os.makedirs(dirname) def _unpack_zipfile(filename, extract_dir): """Unpack zip `filename` to `extract_dir` """ try: import zipfile except ImportError: raise ReadError('zlib not supported, cannot unpack this archive.') if not zipfile.is_zipfile(filename): raise ReadError("%s is not a zip file" % filename) zip = zipfile.ZipFile(filename) try: for info in zip.infolist(): name = info.filename # don't extract absolute paths or ones with .. in them if name.startswith('/') or '..' in name: continue target = os.path.join(extract_dir, *name.split('/')) if not target: continue _ensure_directory(target) if not name.endswith('/'): # file data = zip.read(info.filename) f = open(target, 'wb') try: f.write(data) finally: f.close() del data finally: zip.close() def _unpack_tarfile(filename, extract_dir): """Unpack tar/tar.gz/tar.bz2 `filename` to `extract_dir` """ try: tarobj = tarfile.open(filename) except tarfile.TarError: raise ReadError( "%s is not a compressed or uncompressed tar file" % filename) try: tarobj.extractall(extract_dir) finally: tarobj.close() _UNPACK_FORMATS = { 'gztar': (['.tar.gz', '.tgz'], _unpack_tarfile, [], "gzip'ed tar-file"), 'tar': (['.tar'], _unpack_tarfile, [], "uncompressed tar file"), 'zip': (['.zip'], _unpack_zipfile, [], "ZIP file") } if _BZ2_SUPPORTED: _UNPACK_FORMATS['bztar'] = (['.bz2'], _unpack_tarfile, [], "bzip2'ed tar-file") def _find_unpack_format(filename): for name, info in _UNPACK_FORMATS.items(): for extension in info[0]: if filename.endswith(extension): return name return None def unpack_archive(filename, extract_dir=None, format=None): """Unpack an archive. `filename` is the name of the archive. 
`extract_dir` is the name of the target directory, where the archive is unpacked. If not provided, the current working directory is used. `format` is the archive format: one of "zip", "tar", "gztar", or any other registered format. If not provided, unpack_archive will use the filename extension and see if an unpacker was registered for that extension. If no unpacker is registered for the extension, a ReadError is raised; an unrecognized `format` name raises a ValueError. """ if extract_dir is None: extract_dir = os.getcwd() if format is not None: try: format_info = _UNPACK_FORMATS[format] except KeyError: raise ValueError("Unknown unpack format '{0}'".format(format)) func = format_info[1] func(filename, extract_dir, **dict(format_info[2])) else: # we need to look at the registered unpackers' supported extensions format = _find_unpack_format(filename) if format is None: raise ReadError("Unknown archive format '{0}'".format(filename)) func = _UNPACK_FORMATS[format][1] kwargs = dict(_UNPACK_FORMATS[format][2]) func(filename, extract_dir, **kwargs)
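# A minimal usage sketch of the archive round-trip API above. Illustrative
# only, not part of distlib: the "demo_" name is hypothetical, and it assumes
# `src_dir` exists, `work_dir` is writable, and zlib (for 'gztar') is available.
def demo_archive_roundtrip(src_dir, work_dir):
    """Pack `src_dir` into a gzip'ed tar under `work_dir`, then unpack it."""
    archive = make_archive(os.path.join(work_dir, 'backup'), 'gztar',
                           root_dir=src_dir)
    target = os.path.join(work_dir, 'restored')
    unpack_archive(archive, target)  # format inferred from the '.tar.gz' suffix
    return archive, target
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/sysconfig.py
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.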
# """Access to Python's configuration information.""" import codecs import os import re import sys from os.path import pardir, realpath try: import configparser except ImportError: import ConfigParser as configparser __all__ = [ 'get_config_h_filename', 'get_config_var', 'get_config_vars', 'get_makefile_filename', 'get_path', 'get_path_names', 'get_paths', 'get_platform', 'get_python_version', 'get_scheme_names', 'parse_config_h', ] def _safe_realpath(path): try: return realpath(path) except OSError: return path if sys.executable: _PROJECT_BASE = os.path.dirname(_safe_realpath(sys.executable)) else: # sys.executable can be empty if argv[0] has been changed and Python is # unable to retrieve the real program name _PROJECT_BASE = _safe_realpath(os.getcwd()) if os.name == "nt" and "pcbuild" in _PROJECT_BASE[-8:].lower(): _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir)) # PC/VS7.1 if os.name == "nt" and "\\pc\\v" in _PROJECT_BASE[-10:].lower(): _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir)) # PC/AMD64 if os.name == "nt" and "\\pcbuild\\amd64" in _PROJECT_BASE[-14:].lower(): _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir)) def is_python_build(): for fn in ("Setup.dist", "Setup.local"): if os.path.isfile(os.path.join(_PROJECT_BASE, "Modules", fn)): return True return False _PYTHON_BUILD = is_python_build() _cfg_read = False def _ensure_cfg_read(): global _cfg_read if not _cfg_read: from ..resources import finder backport_package = __name__.rsplit('.', 1)[0] _finder = finder(backport_package) _cfgfile = _finder.find('sysconfig.cfg') assert _cfgfile, 'sysconfig.cfg exists' with _cfgfile.as_stream() as s: _SCHEMES.readfp(s) if _PYTHON_BUILD: for scheme in ('posix_prefix', 'posix_home'): _SCHEMES.set(scheme, 'include', '{srcdir}/Include') _SCHEMES.set(scheme, 'platinclude', '{projectbase}/.') _cfg_read = True _SCHEMES = configparser.RawConfigParser() _VAR_REPL = re.compile(r'\{([^{]*?)\}') def _expand_globals(config): _ensure_cfg_read() if config.has_section('globals'): globals = config.items('globals') else: globals = tuple() sections = config.sections() for section in sections: if section == 'globals': continue for option, value in globals: if config.has_option(section, option): continue config.set(section, option, value) config.remove_section('globals') # now expanding local variables defined in the cfg file # for section in config.sections(): variables = dict(config.items(section)) def _replacer(matchobj): name = matchobj.group(1) if name in variables: return variables[name] return matchobj.group(0) for option, value in config.items(section): config.set(section, option, _VAR_REPL.sub(_replacer, value)) #_expand_globals(_SCHEMES) # FIXME don't rely on sys.version here, its format is an implementation detail # of CPython, use sys.version_info or sys.hexversion _PY_VERSION = sys.version.split()[0] _PY_VERSION_SHORT = sys.version[:3] _PY_VERSION_SHORT_NO_DOT = _PY_VERSION[0] + _PY_VERSION[2] _PREFIX = os.path.normpath(sys.prefix) _EXEC_PREFIX = os.path.normpath(sys.exec_prefix) _CONFIG_VARS = None _USER_BASE = None def _subst_vars(path, local_vars): """In the string `path`, replace tokens like {some.thing} with the corresponding value from the map `local_vars`. If there is no corresponding value, leave the token unchanged. 
""" def _replacer(matchobj): name = matchobj.group(1) if name in local_vars: return local_vars[name] elif name in os.environ: return os.environ[name] return matchobj.group(0) return _VAR_REPL.sub(_replacer, path) def _extend_dict(target_dict, other_dict): target_keys = target_dict.keys() for key, value in other_dict.items(): if key in target_keys: continue target_dict[key] = value def _expand_vars(scheme, vars): res = {} if vars is None: vars = {} _extend_dict(vars, get_config_vars()) for key, value in _SCHEMES.items(scheme): if os.name in ('posix', 'nt'): value = os.path.expanduser(value) res[key] = os.path.normpath(_subst_vars(value, vars)) return res def format_value(value, vars): def _replacer(matchobj): name = matchobj.group(1) if name in vars: return vars[name] return matchobj.group(0) return _VAR_REPL.sub(_replacer, value) def _get_default_scheme(): if os.name == 'posix': # the default scheme for posix is posix_prefix return 'posix_prefix' return os.name def _getuserbase(): env_base = os.environ.get("PYTHONUSERBASE", None) def joinuser(*args): return os.path.expanduser(os.path.join(*args)) # what about 'os2emx', 'riscos' ? if os.name == "nt": base = os.environ.get("APPDATA") or "~" if env_base: return env_base else: return joinuser(base, "Python") if sys.platform == "darwin": framework = get_config_var("PYTHONFRAMEWORK") if framework: if env_base: return env_base else: return joinuser("~", "Library", framework, "%d.%d" % sys.version_info[:2]) if env_base: return env_base else: return joinuser("~", ".local") def _parse_makefile(filename, vars=None): """Parse a Makefile-style file. A dictionary containing name/value pairs is returned. If an optional dictionary is passed in as the second argument, it is used instead of a new dictionary. """ # Regexes needed for parsing Makefile (and similar syntaxes, # like old-style Setup files). _variable_rx = re.compile(r"([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") _findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") _findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") if vars is None: vars = {} done = {} notdone = {} with codecs.open(filename, encoding='utf-8', errors="surrogateescape") as f: lines = f.readlines() for line in lines: if line.startswith('#') or line.strip() == '': continue m = _variable_rx.match(line) if m: n, v = m.group(1, 2) v = v.strip() # `$$' is a literal `$' in make tmpv = v.replace('$$', '') if "$" in tmpv: notdone[n] = v else: try: v = int(v) except ValueError: # insert literal `$' done[n] = v.replace('$$', '$') else: done[n] = v # do variable interpolation here variables = list(notdone.keys()) # Variables with a 'PY_' prefix in the makefile. These need to # be made available without that prefix through sysconfig. # Special care is needed to ensure that variable expansion works, even # if the expansion uses the name without a prefix. 
renamed_variables = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS') while len(variables) > 0: for name in tuple(variables): value = notdone[name] m = _findvar1_rx.search(value) or _findvar2_rx.search(value) if m is not None: n = m.group(1) found = True if n in done: item = str(done[n]) elif n in notdone: # get it on a subsequent round found = False elif n in os.environ: # do it like make: fall back to environment item = os.environ[n] elif n in renamed_variables: if (name.startswith('PY_') and name[3:] in renamed_variables): item = "" elif 'PY_' + n in notdone: found = False else: item = str(done['PY_' + n]) else: done[n] = item = "" if found: after = value[m.end():] value = value[:m.start()] + item + after if "$" in after: notdone[name] = value else: try: value = int(value) except ValueError: done[name] = value.strip() else: done[name] = value variables.remove(name) if (name.startswith('PY_') and name[3:] in renamed_variables): name = name[3:] if name not in done: done[name] = value else: # bogus variable reference (e.g. "prefix=$/opt/python"); # just drop it since we can't deal done[name] = value variables.remove(name) # strip spurious spaces for k, v in done.items(): if isinstance(v, str): done[k] = v.strip() # save the results in the global dictionary vars.update(done) return vars def get_makefile_filename(): """Return the path of the Makefile.""" if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") if hasattr(sys, 'abiflags'): config_dir_name = 'config-%s%s' % (_PY_VERSION_SHORT, sys.abiflags) else: config_dir_name = 'config' return os.path.join(get_path('stdlib'), config_dir_name, 'Makefile') def _init_posix(vars): """Initialize the module as appropriate for POSIX systems.""" # load the installed Makefile: makefile = get_makefile_filename() try: _parse_makefile(makefile, vars) except IOError as e: msg = "invalid Python installation: unable to open %s" % makefile if hasattr(e, "strerror"): msg = msg + " (%s)" % e.strerror raise IOError(msg) # load the installed pyconfig.h: config_h = get_config_h_filename() try: with open(config_h) as f: parse_config_h(f, vars) except IOError as e: msg = "invalid Python installation: unable to open %s" % config_h if hasattr(e, "strerror"): msg = msg + " (%s)" % e.strerror raise IOError(msg) # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. if _PYTHON_BUILD: vars['LDSHARED'] = vars['BLDSHARED'] def _init_non_posix(vars): """Initialize the module as appropriate for NT""" # set basic install directories vars['LIBDEST'] = get_path('stdlib') vars['BINLIBDEST'] = get_path('platstdlib') vars['INCLUDEPY'] = get_path('include') vars['SO'] = '.pyd' vars['EXE'] = '.exe' vars['VERSION'] = _PY_VERSION_SHORT_NO_DOT vars['BINDIR'] = os.path.dirname(_safe_realpath(sys.executable)) # # public APIs # def parse_config_h(fp, vars=None): """Parse a config.h-style file. A dictionary containing name/value pairs is returned. If an optional dictionary is passed in as the second argument, it is used instead of a new dictionary. 
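For example (illustrative), the line "#define HAVE_STRERROR 1" produces vars['HAVE_STRERROR'] == 1, and "/* #undef HAVE_FOO */" produces vars['HAVE_FOO'] == 0.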
""" if vars is None: vars = {} define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") while True: line = fp.readline() if not line: break m = define_rx.match(line) if m: n, v = m.group(1, 2) try: v = int(v) except ValueError: pass vars[n] = v else: m = undef_rx.match(line) if m: vars[m.group(1)] = 0 return vars def get_config_h_filename(): """Return the path of pyconfig.h.""" if _PYTHON_BUILD: if os.name == "nt": inc_dir = os.path.join(_PROJECT_BASE, "PC") else: inc_dir = _PROJECT_BASE else: inc_dir = get_path('platinclude') return os.path.join(inc_dir, 'pyconfig.h') def get_scheme_names(): """Return a tuple containing the schemes names.""" return tuple(sorted(_SCHEMES.sections())) def get_path_names(): """Return a tuple containing the paths names.""" # xxx see if we want a static list return _SCHEMES.options('posix_prefix') def get_paths(scheme=_get_default_scheme(), vars=None, expand=True): """Return a mapping containing an install scheme. ``scheme`` is the install scheme name. If not provided, it will return the default scheme for the current platform. """ _ensure_cfg_read() if expand: return _expand_vars(scheme, vars) else: return dict(_SCHEMES.items(scheme)) def get_path(name, scheme=_get_default_scheme(), vars=None, expand=True): """Return a path corresponding to the scheme. ``scheme`` is the install scheme name. """ return get_paths(scheme, vars, expand)[name] def get_config_vars(*args): """With no arguments, return a dictionary of all configuration variables relevant for the current platform. On Unix, this means every variable defined in Python's installed Makefile; On Windows and Mac OS it's a much smaller set. With arguments, return a list of values that result from looking up each argument in the configuration variable dictionary. """ global _CONFIG_VARS if _CONFIG_VARS is None: _CONFIG_VARS = {} # Normalized versions of prefix and exec_prefix are handy to have; # in fact, these are the standard versions used most places in the # distutils2 module. _CONFIG_VARS['prefix'] = _PREFIX _CONFIG_VARS['exec_prefix'] = _EXEC_PREFIX _CONFIG_VARS['py_version'] = _PY_VERSION _CONFIG_VARS['py_version_short'] = _PY_VERSION_SHORT _CONFIG_VARS['py_version_nodot'] = _PY_VERSION[0] + _PY_VERSION[2] _CONFIG_VARS['base'] = _PREFIX _CONFIG_VARS['platbase'] = _EXEC_PREFIX _CONFIG_VARS['projectbase'] = _PROJECT_BASE try: _CONFIG_VARS['abiflags'] = sys.abiflags except AttributeError: # sys.abiflags may not be defined on all platforms. _CONFIG_VARS['abiflags'] = '' if os.name in ('nt', 'os2'): _init_non_posix(_CONFIG_VARS) if os.name == 'posix': _init_posix(_CONFIG_VARS) # Setting 'userbase' is done below the call to the # init function to enable using 'get_config_var' in # the init-function. if sys.version >= '2.6': _CONFIG_VARS['userbase'] = _getuserbase() if 'srcdir' not in _CONFIG_VARS: _CONFIG_VARS['srcdir'] = _PROJECT_BASE else: _CONFIG_VARS['srcdir'] = _safe_realpath(_CONFIG_VARS['srcdir']) # Convert srcdir into an absolute path if it appears necessary. # Normally it is relative to the build directory. However, during # testing, for example, we might be running a non-installed python # from a different directory. if _PYTHON_BUILD and os.name == "posix": base = _PROJECT_BASE try: cwd = os.getcwd() except OSError: cwd = None if (not os.path.isabs(_CONFIG_VARS['srcdir']) and base != cwd): # srcdir is relative and we are not in the same directory # as the executable. 
Assume executable is in the build # directory and make srcdir absolute. srcdir = os.path.join(base, _CONFIG_VARS['srcdir']) _CONFIG_VARS['srcdir'] = os.path.normpath(srcdir) if sys.platform == 'darwin': kernel_version = os.uname()[2] # Kernel version (8.4.3) major_version = int(kernel_version.split('.')[0]) if major_version < 8: # On Mac OS X before 10.4, check if -arch and -isysroot # are in CFLAGS or LDFLAGS and remove them if they are. # This is needed when building extensions on a 10.3 system # using a universal build of python. for key in ('LDFLAGS', 'BASECFLAGS', # a number of derived variables. These need to be # patched up as well. 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): flags = _CONFIG_VARS[key] flags = re.sub(r'-arch\s+\w+\s', ' ', flags) flags = re.sub('-isysroot [^ \t]*', ' ', flags) _CONFIG_VARS[key] = flags else: # Allow the user to override the architecture flags using # an environment variable. # NOTE: This name was introduced by Apple in OSX 10.5 and # is used by several scripting languages distributed with # that OS release. if 'ARCHFLAGS' in os.environ: arch = os.environ['ARCHFLAGS'] for key in ('LDFLAGS', 'BASECFLAGS', # a number of derived variables. These need to be # patched up as well. 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): flags = _CONFIG_VARS[key] flags = re.sub(r'-arch\s+\w+\s', ' ', flags) flags = flags + ' ' + arch _CONFIG_VARS[key] = flags # If we're on OSX 10.5 or later and the user tries to # compiles an extension using an SDK that is not present # on the current machine it is better to not use an SDK # than to fail. # # The major usecase for this is users using a Python.org # binary installer on OSX 10.6: that installer uses # the 10.4u SDK, but that SDK is not installed by default # when you install Xcode. # CFLAGS = _CONFIG_VARS.get('CFLAGS', '') m = re.search(r'-isysroot\s+(\S+)', CFLAGS) if m is not None: sdk = m.group(1) if not os.path.exists(sdk): for key in ('LDFLAGS', 'BASECFLAGS', # a number of derived variables. These need to be # patched up as well. 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): flags = _CONFIG_VARS[key] flags = re.sub(r'-isysroot\s+\S+(\s|$)', ' ', flags) _CONFIG_VARS[key] = flags if args: vals = [] for name in args: vals.append(_CONFIG_VARS.get(name)) return vals else: return _CONFIG_VARS def get_config_var(name): """Return the value of a single variable using the dictionary returned by 'get_config_vars()'. Equivalent to get_config_vars().get(name) """ return get_config_vars().get(name) def get_platform(): """Return a string that identifies the current platform. This is used mainly to distinguish platform-specific build directories and platform-specific built distributions. Typically includes the OS name and version and the architecture (as supplied by 'os.uname()'), although the exact information included depends on the OS; eg. for IRIX the architecture isn't particularly important (IRIX only runs on SGI hardware), but for Linux the kernel version isn't particularly important. Examples of returned values: linux-i586 linux-alpha (?) solaris-2.6-sun4u irix-5.3 irix64-6.2 Windows will return one of: win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc) win-ia64 (64bit Windows on Itanium) win32 (all others - specifically, sys.platform is returned) For other non-POSIX platforms, currently just returns 'sys.platform'. """ if os.name == 'nt': # sniff sys.version for architecture. 
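# On a 64-bit Windows CPython, sys.version typically ends with a tag such
# as "[MSC v.1916 64 bit (AMD64)]"; the text between " bit (" and the
# closing ")" is then "amd64", and the function returns 'win-amd64'.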
prefix = " bit (" i = sys.version.find(prefix) if i == -1: return sys.platform j = sys.version.find(")", i) look = sys.version[i+len(prefix):j].lower() if look == 'amd64': return 'win-amd64' if look == 'itanium': return 'win-ia64' return sys.platform if os.name != "posix" or not hasattr(os, 'uname'): # XXX what about the architecture? NT is Intel or Alpha, # Mac OS is M68k or PPC, etc. return sys.platform # Try to distinguish various flavours of Unix osname, host, release, version, machine = os.uname() # Convert the OS name to lowercase, remove '/' characters # (to accommodate BSD/OS), and translate spaces (for "Power Macintosh") osname = osname.lower().replace('/', '') machine = machine.replace(' ', '_') machine = machine.replace('/', '-') if osname[:5] == "linux": # At least on Linux/Intel, 'machine' is the processor -- # i386, etc. # XXX what about Alpha, SPARC, etc? return "%s-%s" % (osname, machine) elif osname[:5] == "sunos": if release[0] >= "5": # SunOS 5 == Solaris 2 osname = "solaris" release = "%d.%s" % (int(release[0]) - 3, release[2:]) # fall through to standard osname-release-machine representation elif osname[:4] == "irix": # could be "irix64"! return "%s-%s" % (osname, release) elif osname[:3] == "aix": return "%s-%s.%s" % (osname, version, release) elif osname[:6] == "cygwin": osname = "cygwin" rel_re = re.compile(r'[\d.]+') m = rel_re.match(release) if m: release = m.group() elif osname[:6] == "darwin": # # For our purposes, we'll assume that the system version from # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set # to. This makes the compatibility story a bit more sane because the # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. cfgvars = get_config_vars() macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if True: # Always calculate the release of the running machine, # needed to determine if we can build fat binaries or not. macrelease = macver # Get the system version. Reading this plist is a documented # way to get the system version (see the documentation for # the Gestalt Manager) try: f = open('/System/Library/CoreServices/SystemVersion.plist') except IOError: # We're on a plain darwin box, fall back to the default # behaviour. pass else: try: m = re.search(r'<key>ProductUserVisibleVersion</key>\s*' r'<string>(.*?)</string>', f.read()) finally: f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour if not macver: macver = macrelease if macver: release = macver osname = "macosx" if ((macrelease + '.') >= '10.4.' and '-arch' in get_config_vars().get('CFLAGS', '').strip()): # The universal build will build fat binaries, but not on # systems before 10.4 # # Try to detect 4-way universal builds, those have machine-type # 'universal' instead of 'fat'. 
machine = 'fat' cflags = get_config_vars().get('CFLAGS') archs = re.findall(r'-arch\s+(\S+)', cflags) archs = tuple(sorted(set(archs))) if len(archs) == 1: machine = archs[0] elif archs == ('i386', 'ppc'): machine = 'fat' elif archs == ('i386', 'x86_64'): machine = 'intel' elif archs == ('i386', 'ppc', 'x86_64'): machine = 'fat3' elif archs == ('ppc64', 'x86_64'): machine = 'fat64' elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'): machine = 'universal' else: raise ValueError( "Don't know machine value for archs=%r" % (archs,)) elif machine == 'i386': # On OSX the machine type returned by uname is always the # 32-bit variant, even if the executable architecture is # the 64-bit variant if sys.maxsize >= 2**32: machine = 'x86_64' elif machine in ('PowerPC', 'Power_Macintosh'): # Pick a sane name for the PPC architecture. # See 'i386' case if sys.maxsize >= 2**32: machine = 'ppc64' else: machine = 'ppc' return "%s-%s-%s" % (osname, release, machine) def get_python_version(): return _PY_VERSION_SHORT def _print_dict(title, data): for index, (key, value) in enumerate(sorted(data.items())): if index == 0: print('%s: ' % (title)) print('\t%s = "%s"' % (key, value)) def _main(): """Display all the information sysconfig contains.""" print('Platform: "%s"' % get_platform()) print('Python version: "%s"' % get_python_version()) print('Current installation scheme: "%s"' % _get_default_scheme()) print() _print_dict('Paths', get_paths()) print() _print_dict('Variables', get_config_vars()) if __name__ == '__main__': _main()
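# A small usage sketch (illustrative, not part of this module): query the
# current install scheme through the public API above. It assumes the scheme
# defines the standard 'purelib' and 'platlib' keys from sysconfig.cfg.
def demo_show_layout():
    """Print where pure-Python and platform-specific files are installed."""
    paths = get_paths()  # default scheme for the running platform
    print('purelib :', paths['purelib'])
    print('platlib :', paths['platlib'])
    print('include :', get_path('include'))
    print('CC      :', get_config_var('CC'))  # may be None, e.g. on Windows
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py
#-------------------------------------------------------------------
# tarfile.py
#-------------------------------------------------------------------
# Copyright (C) 2002 Lars Gustaebel <lars@gustaebel.de>
# All rights reserved.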
# # Permission is hereby granted, free of charge, to any person # obtaining a copy of this software and associated documentation # files (the "Software"), to deal in the Software without # restriction, including without limitation the rights to use, # copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following # conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES # OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR # OTHER DEALINGS IN THE SOFTWARE. # from __future__ import print_function """Read from and write to tar format archives. """ __version__ = "$Revision$" version = "0.9.0" __author__ = "Lars Gust\u00e4bel (lars@gustaebel.de)" __date__ = "$Date: 2011-02-25 17:42:01 +0200 (Fri, 25 Feb 2011) $" __cvsid__ = "$Id: tarfile.py 88586 2011-02-25 15:42:01Z marc-andre.lemburg $" __credits__ = "Gustavo Niemeyer, Niels Gust\u00e4bel, Richard Townsend." #--------- # Imports #--------- import sys import os import stat import errno import time import struct import copy import re try: import grp, pwd except ImportError: grp = pwd = None # os.symlink on Windows prior to 6.0 raises NotImplementedError symlink_exception = (AttributeError, NotImplementedError) try: # WindowsError (1314) will be raised if the caller does not hold the # SeCreateSymbolicLinkPrivilege privilege symlink_exception += (WindowsError,) except NameError: pass # from tarfile import * __all__ = ["TarFile", "TarInfo", "is_tarfile", "TarError"] if sys.version_info[0] < 3: import __builtin__ as builtins else: import builtins _open = builtins.open # Since 'open' is TarFile.open #--------------------------------------------------------- # tar constants #--------------------------------------------------------- NUL = b"\0" # the null character BLOCKSIZE = 512 # length of processing blocks RECORDSIZE = BLOCKSIZE * 20 # length of records GNU_MAGIC = b"ustar \0" # magic gnu tar string POSIX_MAGIC = b"ustar\x0000" # magic posix tar string LENGTH_NAME = 100 # maximum length of a filename LENGTH_LINK = 100 # maximum length of a linkname LENGTH_PREFIX = 155 # maximum length of the prefix field REGTYPE = b"0" # regular file AREGTYPE = b"\0" # regular file LNKTYPE = b"1" # link (inside tarfile) SYMTYPE = b"2" # symbolic link CHRTYPE = b"3" # character special device BLKTYPE = b"4" # block special device DIRTYPE = b"5" # directory FIFOTYPE = b"6" # fifo special device CONTTYPE = b"7" # contiguous file GNUTYPE_LONGNAME = b"L" # GNU tar longname GNUTYPE_LONGLINK = b"K" # GNU tar longlink GNUTYPE_SPARSE = b"S" # GNU tar sparse file XHDTYPE = b"x" # POSIX.1-2001 extended header XGLTYPE = b"g" # POSIX.1-2001 global header SOLARIS_XHDTYPE = b"X" # Solaris extended header USTAR_FORMAT = 0 # POSIX.1-1988 (ustar) format GNU_FORMAT = 1 # GNU tar format PAX_FORMAT = 2 # POSIX.1-2001 (pax) format DEFAULT_FORMAT = GNU_FORMAT #--------------------------------------------------------- # tarfile constants #--------------------------------------------------------- 
# File types that tarfile supports: SUPPORTED_TYPES = (REGTYPE, AREGTYPE, LNKTYPE, SYMTYPE, DIRTYPE, FIFOTYPE, CONTTYPE, CHRTYPE, BLKTYPE, GNUTYPE_LONGNAME, GNUTYPE_LONGLINK, GNUTYPE_SPARSE) # File types that will be treated as a regular file. REGULAR_TYPES = (REGTYPE, AREGTYPE, CONTTYPE, GNUTYPE_SPARSE) # File types that are part of the GNU tar format. GNU_TYPES = (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK, GNUTYPE_SPARSE) # Fields from a pax header that override a TarInfo attribute. PAX_FIELDS = ("path", "linkpath", "size", "mtime", "uid", "gid", "uname", "gname") # Fields from a pax header that are affected by hdrcharset. PAX_NAME_FIELDS = set(("path", "linkpath", "uname", "gname")) # Fields in a pax header that are numbers, all other fields # are treated as strings. PAX_NUMBER_FIELDS = { "atime": float, "ctime": float, "mtime": float, "uid": int, "gid": int, "size": int } #--------------------------------------------------------- # Bits used in the mode field, values in octal. #--------------------------------------------------------- S_IFLNK = 0o120000 # symbolic link S_IFREG = 0o100000 # regular file S_IFBLK = 0o060000 # block device S_IFDIR = 0o040000 # directory S_IFCHR = 0o020000 # character device S_IFIFO = 0o010000 # fifo TSUID = 0o4000 # set UID on execution TSGID = 0o2000 # set GID on execution TSVTX = 0o1000 # reserved TUREAD = 0o400 # read by owner TUWRITE = 0o200 # write by owner TUEXEC = 0o100 # execute/search by owner TGREAD = 0o040 # read by group TGWRITE = 0o020 # write by group TGEXEC = 0o010 # execute/search by group TOREAD = 0o004 # read by other TOWRITE = 0o002 # write by other TOEXEC = 0o001 # execute/search by other #--------------------------------------------------------- # initialization #--------------------------------------------------------- if os.name in ("nt", "ce"): ENCODING = "utf-8" else: ENCODING = sys.getfilesystemencoding() #--------------------------------------------------------- # Some useful functions #--------------------------------------------------------- def stn(s, length, encoding, errors): """Convert a string to a null-terminated bytes object. """ s = s.encode(encoding, errors) return s[:length] + (length - len(s)) * NUL def nts(s, encoding, errors): """Convert a null-terminated bytes object to a string. """ p = s.find(b"\0") if p != -1: s = s[:p] return s.decode(encoding, errors) def nti(s): """Convert a number field to a python number. """ # There are two possible encodings for a number field, see # itn() below. if s[0] != chr(0o200): try: n = int(nts(s, "ascii", "strict") or "0", 8) except ValueError: raise InvalidHeaderError("invalid header") else: n = 0 for i in range(len(s) - 1): n <<= 8 n += ord(s[i + 1]) return n def itn(n, digits=8, format=DEFAULT_FORMAT): """Convert a python number to a number field. """ # POSIX 1003.1-1988 requires numbers to be encoded as a string of # octal digits followed by a null-byte, this allows values up to # (8**(digits-1))-1. GNU tar allows storing numbers greater than # that if necessary. A leading 0o200 byte indicates this particular # encoding, the following digits-1 bytes are a big-endian # representation. This allows values up to (256**(digits-1))-1. if 0 <= n < 8 ** (digits - 1): s = ("%0*o" % (digits - 1, n)).encode("ascii") + NUL else: if format != GNU_FORMAT or n >= 256 ** (digits - 1): raise ValueError("overflow in number field") if n < 0: # XXX We mimic GNU tar's behaviour with negative numbers, # this could raise OverflowError. 
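# (Illustrative note: with digits=8, plain octal covers 0 .. 8**7-1 = 2097151;
# anything outside that range is stored as a 0o200 marker byte followed by
# digits-1 big-endian bytes, e.g. 2097152 -> 80 00 00 00 00 20 00 00.)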
n = struct.unpack("L", struct.pack("l", n))[0] s = bytearray() for i in range(digits - 1): s.insert(0, n & 0o377) n >>= 8 s.insert(0, 0o200) return s def calc_chksums(buf): """Calculate the checksum for a member's header by summing up all characters except for the chksum field which is treated as if it was filled with spaces. According to the GNU tar sources, some tars (Sun and NeXT) calculate chksum with signed char, which will be different if there are chars in the buffer with the high bit set. So we calculate two checksums, unsigned and signed. """ unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512])) signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512])) return unsigned_chksum, signed_chksum def copyfileobj(src, dst, length=None): """Copy length bytes from fileobj src to fileobj dst. If length is None, copy the entire content. """ if length == 0: return if length is None: while True: buf = src.read(16*1024) if not buf: break dst.write(buf) return BUFSIZE = 16 * 1024 blocks, remainder = divmod(length, BUFSIZE) for b in range(blocks): buf = src.read(BUFSIZE) if len(buf) < BUFSIZE: raise IOError("end of file reached") dst.write(buf) if remainder != 0: buf = src.read(remainder) if len(buf) < remainder: raise IOError("end of file reached") dst.write(buf) return filemode_table = ( ((S_IFLNK, "l"), (S_IFREG, "-"), (S_IFBLK, "b"), (S_IFDIR, "d"), (S_IFCHR, "c"), (S_IFIFO, "p")), ((TUREAD, "r"),), ((TUWRITE, "w"),), ((TUEXEC|TSUID, "s"), (TSUID, "S"), (TUEXEC, "x")), ((TGREAD, "r"),), ((TGWRITE, "w"),), ((TGEXEC|TSGID, "s"), (TSGID, "S"), (TGEXEC, "x")), ((TOREAD, "r"),), ((TOWRITE, "w"),), ((TOEXEC|TSVTX, "t"), (TSVTX, "T"), (TOEXEC, "x")) ) def filemode(mode): """Convert a file's mode to a string of the form -rwxrwxrwx. Used by TarFile.list() """ perm = [] for table in filemode_table: for bit, char in table: if mode & bit == bit: perm.append(char) break else: perm.append("-") return "".join(perm) class TarError(Exception): """Base exception.""" pass class ExtractError(TarError): """General exception for extract errors.""" pass class ReadError(TarError): """Exception for unreadable tar archives.""" pass class CompressionError(TarError): """Exception for unavailable compression methods.""" pass class StreamError(TarError): """Exception for unsupported operations on stream-like TarFiles.""" pass class HeaderError(TarError): """Base exception for header errors.""" pass class EmptyHeaderError(HeaderError): """Exception for empty headers.""" pass class TruncatedHeaderError(HeaderError): """Exception for truncated headers.""" pass class EOFHeaderError(HeaderError): """Exception for end of file headers.""" pass class InvalidHeaderError(HeaderError): """Exception for invalid headers.""" pass class SubsequentHeaderError(HeaderError): """Exception for missing and invalid extended headers.""" pass #--------------------------- # internal stream interface #--------------------------- class _LowLevelFile(object): """Low-level file object. Supports reading and writing. It is used instead of a regular file object for streaming access. 
""" def __init__(self, name, mode): mode = { "r": os.O_RDONLY, "w": os.O_WRONLY | os.O_CREAT | os.O_TRUNC, }[mode] if hasattr(os, "O_BINARY"): mode |= os.O_BINARY self.fd = os.open(name, mode, 0o666) def close(self): os.close(self.fd) def read(self, size): return os.read(self.fd, size) def write(self, s): os.write(self.fd, s) class _Stream(object): """Class that serves as an adapter between TarFile and a stream-like object. The stream-like object only needs to have a read() or write() method and is accessed blockwise. Use of gzip or bzip2 compression is possible. A stream-like object could be for example: sys.stdin, sys.stdout, a socket, a tape device etc. _Stream is intended to be used only internally. """ def __init__(self, name, mode, comptype, fileobj, bufsize): """Construct a _Stream object. """ self._extfileobj = True if fileobj is None: fileobj = _LowLevelFile(name, mode) self._extfileobj = False if comptype == '*': # Enable transparent compression detection for the # stream interface fileobj = _StreamProxy(fileobj) comptype = fileobj.getcomptype() self.name = name or "" self.mode = mode self.comptype = comptype self.fileobj = fileobj self.bufsize = bufsize self.buf = b"" self.pos = 0 self.closed = False try: if comptype == "gz": try: import zlib except ImportError: raise CompressionError("zlib module is not available") self.zlib = zlib self.crc = zlib.crc32(b"") if mode == "r": self._init_read_gz() else: self._init_write_gz() if comptype == "bz2": try: import bz2 except ImportError: raise CompressionError("bz2 module is not available") if mode == "r": self.dbuf = b"" self.cmp = bz2.BZ2Decompressor() else: self.cmp = bz2.BZ2Compressor() except: if not self._extfileobj: self.fileobj.close() self.closed = True raise def __del__(self): if hasattr(self, "closed") and not self.closed: self.close() def _init_write_gz(self): """Initialize for writing with gzip compression. """ self.cmp = self.zlib.compressobj(9, self.zlib.DEFLATED, -self.zlib.MAX_WBITS, self.zlib.DEF_MEM_LEVEL, 0) timestamp = struct.pack("<L", int(time.time())) self.__write(b"\037\213\010\010" + timestamp + b"\002\377") if self.name.endswith(".gz"): self.name = self.name[:-3] # RFC1952 says we must use ISO-8859-1 for the FNAME field. self.__write(self.name.encode("iso-8859-1", "replace") + NUL) def write(self, s): """Write string s to the stream. """ if self.comptype == "gz": self.crc = self.zlib.crc32(s, self.crc) self.pos += len(s) if self.comptype != "tar": s = self.cmp.compress(s) self.__write(s) def __write(self, s): """Write string s to the stream if a whole new block is ready to be written. """ self.buf += s while len(self.buf) > self.bufsize: self.fileobj.write(self.buf[:self.bufsize]) self.buf = self.buf[self.bufsize:] def close(self): """Close the _Stream object. No operation should be done on it afterwards. """ if self.closed: return if self.mode == "w" and self.comptype != "tar": self.buf += self.cmp.flush() if self.mode == "w" and self.buf: self.fileobj.write(self.buf) self.buf = b"" if self.comptype == "gz": # The native zlib crc is an unsigned 32-bit integer, but # the Python wrapper implicitly casts that to a signed C # long. So, on a 32-bit box self.crc may "look negative", # while the same crc on a 64-bit box may "look positive". # To avoid irksome warnings from the `struct` module, force # it to look positive on all boxes. 
self.fileobj.write(struct.pack("<L", self.crc & 0xffffffff)) self.fileobj.write(struct.pack("<L", self.pos & 0xffffFFFF)) if not self._extfileobj: self.fileobj.close() self.closed = True def _init_read_gz(self): """Initialize for reading a gzip compressed fileobj. """ self.cmp = self.zlib.decompressobj(-self.zlib.MAX_WBITS) self.dbuf = b"" # taken from gzip.GzipFile with some alterations if self.__read(2) != b"\037\213": raise ReadError("not a gzip file") if self.__read(1) != b"\010": raise CompressionError("unsupported compression method") flag = ord(self.__read(1)) self.__read(6) if flag & 4: xlen = ord(self.__read(1)) + 256 * ord(self.__read(1)) self.read(xlen) if flag & 8: while True: s = self.__read(1) if not s or s == NUL: break if flag & 16: while True: s = self.__read(1) if not s or s == NUL: break if flag & 2: self.__read(2) def tell(self): """Return the stream's file pointer position. """ return self.pos def seek(self, pos=0): """Set the stream's file pointer to pos. Negative seeking is forbidden. """ if pos - self.pos >= 0: blocks, remainder = divmod(pos - self.pos, self.bufsize) for i in range(blocks): self.read(self.bufsize) self.read(remainder) else: raise StreamError("seeking backwards is not allowed") return self.pos def read(self, size=None): """Return the next size number of bytes from the stream. If size is not defined, return all bytes of the stream up to EOF. """ if size is None: t = [] while True: buf = self._read(self.bufsize) if not buf: break t.append(buf) buf = "".join(t) else: buf = self._read(size) self.pos += len(buf) return buf def _read(self, size): """Return size bytes from the stream. """ if self.comptype == "tar": return self.__read(size) c = len(self.dbuf) while c < size: buf = self.__read(self.bufsize) if not buf: break try: buf = self.cmp.decompress(buf) except IOError: raise ReadError("invalid compressed data") self.dbuf += buf c += len(buf) buf = self.dbuf[:size] self.dbuf = self.dbuf[size:] return buf def __read(self, size): """Return size bytes from stream. If internal buffer is empty, read another block from the stream. """ c = len(self.buf) while c < size: buf = self.fileobj.read(self.bufsize) if not buf: break self.buf += buf c += len(buf) buf = self.buf[:size] self.buf = self.buf[size:] return buf # class _Stream class _StreamProxy(object): """Small proxy class that enables transparent compression detection for the Stream interface (mode 'r|*'). """ def __init__(self, fileobj): self.fileobj = fileobj self.buf = self.fileobj.read(BLOCKSIZE) def read(self, size): self.read = self.fileobj.read return self.buf def getcomptype(self): if self.buf.startswith(b"\037\213\010"): return "gz" if self.buf.startswith(b"BZh91"): return "bz2" return "tar" def close(self): self.fileobj.close() # class StreamProxy class _BZ2Proxy(object): """Small proxy class that enables external file object support for "r:bz2" and "w:bz2" modes. This is actually a workaround for a limitation in bz2 module's BZ2File class which (unlike gzip.GzipFile) has no support for a file object argument. 
""" blocksize = 16 * 1024 def __init__(self, fileobj, mode): self.fileobj = fileobj self.mode = mode self.name = getattr(self.fileobj, "name", None) self.init() def init(self): import bz2 self.pos = 0 if self.mode == "r": self.bz2obj = bz2.BZ2Decompressor() self.fileobj.seek(0) self.buf = b"" else: self.bz2obj = bz2.BZ2Compressor() def read(self, size): x = len(self.buf) while x < size: raw = self.fileobj.read(self.blocksize) if not raw: break data = self.bz2obj.decompress(raw) self.buf += data x += len(data) buf = self.buf[:size] self.buf = self.buf[size:] self.pos += len(buf) return buf def seek(self, pos): if pos < self.pos: self.init() self.read(pos - self.pos) def tell(self): return self.pos def write(self, data): self.pos += len(data) raw = self.bz2obj.compress(data) self.fileobj.write(raw) def close(self): if self.mode == "w": raw = self.bz2obj.flush() self.fileobj.write(raw) # class _BZ2Proxy #------------------------ # Extraction file object #------------------------ class _FileInFile(object): """A thin wrapper around an existing file object that provides a part of its data as an individual file object. """ def __init__(self, fileobj, offset, size, blockinfo=None): self.fileobj = fileobj self.offset = offset self.size = size self.position = 0 if blockinfo is None: blockinfo = [(0, size)] # Construct a map with data and zero blocks. self.map_index = 0 self.map = [] lastpos = 0 realpos = self.offset for offset, size in blockinfo: if offset > lastpos: self.map.append((False, lastpos, offset, None)) self.map.append((True, offset, offset + size, realpos)) realpos += size lastpos = offset + size if lastpos < self.size: self.map.append((False, lastpos, self.size, None)) def seekable(self): if not hasattr(self.fileobj, "seekable"): # XXX gzip.GzipFile and bz2.BZ2File return True return self.fileobj.seekable() def tell(self): """Return the current file position. """ return self.position def seek(self, position): """Seek to a position in the file. """ self.position = position def read(self, size=None): """Read data from the file. """ if size is None: size = self.size - self.position else: size = min(size, self.size - self.position) buf = b"" while size > 0: while True: data, start, stop, offset = self.map[self.map_index] if start <= self.position < stop: break else: self.map_index += 1 if self.map_index == len(self.map): self.map_index = 0 length = min(size, stop - self.position) if data: self.fileobj.seek(offset + (self.position - start)) buf += self.fileobj.read(length) else: buf += NUL * length size -= length self.position += length return buf #class _FileInFile class ExFileObject(object): """File-like object for reading an archive member. Is returned by TarFile.extractfile(). """ blocksize = 1024 def __init__(self, tarfile, tarinfo): self.fileobj = _FileInFile(tarfile.fileobj, tarinfo.offset_data, tarinfo.size, tarinfo.sparse) self.name = tarinfo.name self.mode = "r" self.closed = False self.size = tarinfo.size self.position = 0 self.buffer = b"" def readable(self): return True def writable(self): return False def seekable(self): return self.fileobj.seekable() def read(self, size=None): """Read at most size bytes from the file. If size is not present or None, read all data until EOF is reached. 
""" if self.closed: raise ValueError("I/O operation on closed file") buf = b"" if self.buffer: if size is None: buf = self.buffer self.buffer = b"" else: buf = self.buffer[:size] self.buffer = self.buffer[size:] if size is None: buf += self.fileobj.read() else: buf += self.fileobj.read(size - len(buf)) self.position += len(buf) return buf # XXX TextIOWrapper uses the read1() method. read1 = read def readline(self, size=-1): """Read one entire line from the file. If size is present and non-negative, return a string with at most that size, which may be an incomplete line. """ if self.closed: raise ValueError("I/O operation on closed file") pos = self.buffer.find(b"\n") + 1 if pos == 0: # no newline found. while True: buf = self.fileobj.read(self.blocksize) self.buffer += buf if not buf or b"\n" in buf: pos = self.buffer.find(b"\n") + 1 if pos == 0: # no newline found. pos = len(self.buffer) break if size != -1: pos = min(size, pos) buf = self.buffer[:pos] self.buffer = self.buffer[pos:] self.position += len(buf) return buf def readlines(self): """Return a list with all remaining lines. """ result = [] while True: line = self.readline() if not line: break result.append(line) return result def tell(self): """Return the current file position. """ if self.closed: raise ValueError("I/O operation on closed file") return self.position def seek(self, pos, whence=os.SEEK_SET): """Seek to a position in the file. """ if self.closed: raise ValueError("I/O operation on closed file") if whence == os.SEEK_SET: self.position = min(max(pos, 0), self.size) elif whence == os.SEEK_CUR: if pos < 0: self.position = max(self.position + pos, 0) else: self.position = min(self.position + pos, self.size) elif whence == os.SEEK_END: self.position = max(min(self.size + pos, self.size), 0) else: raise ValueError("Invalid argument") self.buffer = b"" self.fileobj.seek(self.position) def close(self): """Close the file object. """ self.closed = True def __iter__(self): """Get an iterator over the file's lines. """ while True: line = self.readline() if not line: break yield line #class ExFileObject #------------------ # Exported Classes #------------------ class TarInfo(object): """Informational class which holds the details about an archive member given by a tar header block. TarInfo objects are returned by TarFile.getmember(), TarFile.getmembers() and TarFile.gettarinfo() and are usually created internally. """ __slots__ = ("name", "mode", "uid", "gid", "size", "mtime", "chksum", "type", "linkname", "uname", "gname", "devmajor", "devminor", "offset", "offset_data", "pax_headers", "sparse", "tarfile", "_sparse_structs", "_link_target") def __init__(self, name=""): """Construct a TarInfo object. name is the optional name of the member. """ self.name = name # member name self.mode = 0o644 # file permissions self.uid = 0 # user id self.gid = 0 # group id self.size = 0 # file size self.mtime = 0 # modification time self.chksum = 0 # header checksum self.type = REGTYPE # member type self.linkname = "" # link name self.uname = "" # user name self.gname = "" # group name self.devmajor = 0 # device major number self.devminor = 0 # device minor number self.offset = 0 # the tar header starts here self.offset_data = 0 # the file's data starts here self.sparse = None # sparse member information self.pax_headers = {} # pax header information # In pax headers the "name" and "linkname" field are called # "path" and "linkpath". 
def _getpath(self): return self.name def _setpath(self, name): self.name = name path = property(_getpath, _setpath) def _getlinkpath(self): return self.linkname def _setlinkpath(self, linkname): self.linkname = linkname linkpath = property(_getlinkpath, _setlinkpath) def __repr__(self): return "<%s %r at %#x>" % (self.__class__.__name__,self.name,id(self)) def get_info(self): """Return the TarInfo's attributes as a dictionary. """ info = { "name": self.name, "mode": self.mode & 0o7777, "uid": self.uid, "gid": self.gid, "size": self.size, "mtime": self.mtime, "chksum": self.chksum, "type": self.type, "linkname": self.linkname, "uname": self.uname, "gname": self.gname, "devmajor": self.devmajor, "devminor": self.devminor } if info["type"] == DIRTYPE and not info["name"].endswith("/"): info["name"] += "/" return info def tobuf(self, format=DEFAULT_FORMAT, encoding=ENCODING, errors="surrogateescape"): """Return a tar header as a string of 512 byte blocks. """ info = self.get_info() if format == USTAR_FORMAT: return self.create_ustar_header(info, encoding, errors) elif format == GNU_FORMAT: return self.create_gnu_header(info, encoding, errors) elif format == PAX_FORMAT: return self.create_pax_header(info, encoding) else: raise ValueError("invalid format") def create_ustar_header(self, info, encoding, errors): """Return the object as a ustar header block. """ info["magic"] = POSIX_MAGIC if len(info["linkname"]) > LENGTH_LINK: raise ValueError("linkname is too long") if len(info["name"]) > LENGTH_NAME: info["prefix"], info["name"] = self._posix_split_name(info["name"]) return self._create_header(info, USTAR_FORMAT, encoding, errors) def create_gnu_header(self, info, encoding, errors): """Return the object as a GNU header block sequence. """ info["magic"] = GNU_MAGIC buf = b"" if len(info["linkname"]) > LENGTH_LINK: buf += self._create_gnu_long_header(info["linkname"], GNUTYPE_LONGLINK, encoding, errors) if len(info["name"]) > LENGTH_NAME: buf += self._create_gnu_long_header(info["name"], GNUTYPE_LONGNAME, encoding, errors) return buf + self._create_header(info, GNU_FORMAT, encoding, errors) def create_pax_header(self, info, encoding): """Return the object as a ustar header block. If it cannot be represented this way, prepend a pax extended header sequence with supplement information. """ info["magic"] = POSIX_MAGIC pax_headers = self.pax_headers.copy() # Test string fields for values that exceed the field length or cannot # be represented in ASCII encoding. for name, hname, length in ( ("name", "path", LENGTH_NAME), ("linkname", "linkpath", LENGTH_LINK), ("uname", "uname", 32), ("gname", "gname", 32)): if hname in pax_headers: # The pax header has priority. continue # Try to encode the string as ASCII. try: info[name].encode("ascii", "strict") except UnicodeEncodeError: pax_headers[hname] = info[name] continue if len(info[name]) > length: pax_headers[hname] = info[name] # Test number fields for values that exceed the field limit or values # that like to be stored as float. for name, digits in (("uid", 8), ("gid", 8), ("size", 12), ("mtime", 12)): if name in pax_headers: # The pax header has priority. Avoid overflow. info[name] = 0 continue val = info[name] if not 0 <= val < 8 ** (digits - 1) or isinstance(val, float): pax_headers[name] = str(val) info[name] = 0 # Create a pax extended header if necessary. 
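# (The pax records are emitted in front of the regular ustar header so
# that readers apply them to the member that follows.)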
if pax_headers: buf = self._create_pax_generic_header(pax_headers, XHDTYPE, encoding) else: buf = b"" return buf + self._create_header(info, USTAR_FORMAT, "ascii", "replace") @classmethod def create_pax_global_header(cls, pax_headers): """Return the object as a pax global header block sequence. """ return cls._create_pax_generic_header(pax_headers, XGLTYPE, "utf8") def _posix_split_name(self, name): """Split a name longer than 100 chars into a prefix and a name part. """ prefix = name[:LENGTH_PREFIX + 1] while prefix and prefix[-1] != "/": prefix = prefix[:-1] name = name[len(prefix):] prefix = prefix[:-1] if not prefix or len(name) > LENGTH_NAME: raise ValueError("name is too long") return prefix, name @staticmethod def _create_header(info, format, encoding, errors): """Return a header block. info is a dictionary with file information, format must be one of the *_FORMAT constants. """ parts = [ stn(info.get("name", ""), 100, encoding, errors), itn(info.get("mode", 0) & 0o7777, 8, format), itn(info.get("uid", 0), 8, format), itn(info.get("gid", 0), 8, format), itn(info.get("size", 0), 12, format), itn(info.get("mtime", 0), 12, format), b" ", # checksum field info.get("type", REGTYPE), stn(info.get("linkname", ""), 100, encoding, errors), info.get("magic", POSIX_MAGIC), stn(info.get("uname", ""), 32, encoding, errors), stn(info.get("gname", ""), 32, encoding, errors), itn(info.get("devmajor", 0), 8, format), itn(info.get("devminor", 0), 8, format), stn(info.get("prefix", ""), 155, encoding, errors) ] buf = struct.pack("%ds" % BLOCKSIZE, b"".join(parts)) chksum = calc_chksums(buf[-BLOCKSIZE:])[0] buf = buf[:-364] + ("%06o\0" % chksum).encode("ascii") + buf[-357:] return buf @staticmethod def _create_payload(payload): """Return the string payload filled with zero bytes up to the next 512 byte border. """ blocks, remainder = divmod(len(payload), BLOCKSIZE) if remainder > 0: payload += (BLOCKSIZE - remainder) * NUL return payload @classmethod def _create_gnu_long_header(cls, name, type, encoding, errors): """Return a GNUTYPE_LONGNAME or GNUTYPE_LONGLINK sequence for name. """ name = name.encode(encoding, errors) + NUL info = {} info["name"] = "././@LongLink" info["type"] = type info["size"] = len(name) info["magic"] = GNU_MAGIC # create extended header + name blocks. return cls._create_header(info, USTAR_FORMAT, encoding, errors) + \ cls._create_payload(name) @classmethod def _create_pax_generic_header(cls, pax_headers, type, encoding): """Return a POSIX.1-2008 extended or global header sequence that contains a list of keyword, value pairs. The values must be strings. """ # Check if one of the fields contains surrogate characters and thereby # forces hdrcharset=BINARY, see _proc_pax() for more information. binary = False for keyword, value in pax_headers.items(): try: value.encode("utf8", "strict") except UnicodeEncodeError: binary = True break records = b"" if binary: # Put the hdrcharset field at the beginning of the header. records += b"21 hdrcharset=BINARY\n" for keyword, value in pax_headers.items(): keyword = keyword.encode("utf8") if binary: # Try to restore the original byte representation of `value'. # Needless to say, that the encoding must match the string. 
value = value.encode(encoding, "surrogateescape") else: value = value.encode("utf8") l = len(keyword) + len(value) + 3 # ' ' + '=' + '\n' n = p = 0 while True: n = l + len(str(p)) if n == p: break p = n records += bytes(str(p), "ascii") + b" " + keyword + b"=" + value + b"\n" # We use a hardcoded "././@PaxHeader" name like star does # instead of the one that POSIX recommends. info = {} info["name"] = "././@PaxHeader" info["type"] = type info["size"] = len(records) info["magic"] = POSIX_MAGIC # Create pax header + record blocks. return cls._create_header(info, USTAR_FORMAT, "ascii", "replace") + \ cls._create_payload(records) @classmethod def frombuf(cls, buf, encoding, errors): """Construct a TarInfo object from a 512 byte bytes object. """ if len(buf) == 0: raise EmptyHeaderError("empty header") if len(buf) != BLOCKSIZE: raise TruncatedHeaderError("truncated header") if buf.count(NUL) == BLOCKSIZE: raise EOFHeaderError("end of file header") chksum = nti(buf[148:156]) if chksum not in calc_chksums(buf): raise InvalidHeaderError("bad checksum") obj = cls() obj.name = nts(buf[0:100], encoding, errors) obj.mode = nti(buf[100:108]) obj.uid = nti(buf[108:116]) obj.gid = nti(buf[116:124]) obj.size = nti(buf[124:136]) obj.mtime = nti(buf[136:148]) obj.chksum = chksum obj.type = buf[156:157] obj.linkname = nts(buf[157:257], encoding, errors) obj.uname = nts(buf[265:297], encoding, errors) obj.gname = nts(buf[297:329], encoding, errors) obj.devmajor = nti(buf[329:337]) obj.devminor = nti(buf[337:345]) prefix = nts(buf[345:500], encoding, errors) # Old V7 tar format represents a directory as a regular # file with a trailing slash. if obj.type == AREGTYPE and obj.name.endswith("/"): obj.type = DIRTYPE # The old GNU sparse format occupies some of the unused # space in the buffer for up to 4 sparse structures. # Save the them for later processing in _proc_sparse(). if obj.type == GNUTYPE_SPARSE: pos = 386 structs = [] for i in range(4): try: offset = nti(buf[pos:pos + 12]) numbytes = nti(buf[pos + 12:pos + 24]) except ValueError: break structs.append((offset, numbytes)) pos += 24 isextended = bool(buf[482]) origsize = nti(buf[483:495]) obj._sparse_structs = (structs, isextended, origsize) # Remove redundant slashes from directories. if obj.isdir(): obj.name = obj.name.rstrip("/") # Reconstruct a ustar longname. if prefix and obj.type not in GNU_TYPES: obj.name = prefix + "/" + obj.name return obj @classmethod def fromtarfile(cls, tarfile): """Return the next TarInfo object from TarFile object tarfile. """ buf = tarfile.fileobj.read(BLOCKSIZE) obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors) obj.offset = tarfile.fileobj.tell() - BLOCKSIZE return obj._proc_member(tarfile) #-------------------------------------------------------------------------- # The following are methods that are called depending on the type of a # member. The entry point is _proc_member() which can be overridden in a # subclass to add custom _proc_*() methods. A _proc_*() method MUST # implement the following # operations: # 1. Set self.offset_data to the position where the data blocks begin, # if there is data that follows. # 2. Set tarfile.offset to the position where the next member's header will # begin. # 3. Return self or another valid TarInfo object. def _proc_member(self, tarfile): """Choose the right processing method depending on the type and call it. 
""" if self.type in (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK): return self._proc_gnulong(tarfile) elif self.type == GNUTYPE_SPARSE: return self._proc_sparse(tarfile) elif self.type in (XHDTYPE, XGLTYPE, SOLARIS_XHDTYPE): return self._proc_pax(tarfile) else: return self._proc_builtin(tarfile) def _proc_builtin(self, tarfile): """Process a builtin type or an unknown type which will be treated as a regular file. """ self.offset_data = tarfile.fileobj.tell() offset = self.offset_data if self.isreg() or self.type not in SUPPORTED_TYPES: # Skip the following data blocks. offset += self._block(self.size) tarfile.offset = offset # Patch the TarInfo object with saved global # header information. self._apply_pax_info(tarfile.pax_headers, tarfile.encoding, tarfile.errors) return self def _proc_gnulong(self, tarfile): """Process the blocks that hold a GNU longname or longlink member. """ buf = tarfile.fileobj.read(self._block(self.size)) # Fetch the next header and process it. try: next = self.fromtarfile(tarfile) except HeaderError: raise SubsequentHeaderError("missing or bad subsequent header") # Patch the TarInfo object from the next header with # the longname information. next.offset = self.offset if self.type == GNUTYPE_LONGNAME: next.name = nts(buf, tarfile.encoding, tarfile.errors) elif self.type == GNUTYPE_LONGLINK: next.linkname = nts(buf, tarfile.encoding, tarfile.errors) return next def _proc_sparse(self, tarfile): """Process a GNU sparse header plus extra headers. """ # We already collected some sparse structures in frombuf(). structs, isextended, origsize = self._sparse_structs del self._sparse_structs # Collect sparse structures from extended header blocks. while isextended: buf = tarfile.fileobj.read(BLOCKSIZE) pos = 0 for i in range(21): try: offset = nti(buf[pos:pos + 12]) numbytes = nti(buf[pos + 12:pos + 24]) except ValueError: break if offset and numbytes: structs.append((offset, numbytes)) pos += 24 isextended = bool(buf[504]) self.sparse = structs self.offset_data = tarfile.fileobj.tell() tarfile.offset = self.offset_data + self._block(self.size) self.size = origsize return self def _proc_pax(self, tarfile): """Process an extended or global header as described in POSIX.1-2008. """ # Read the header information. buf = tarfile.fileobj.read(self._block(self.size)) # A pax header stores supplemental information for either # the following file (extended) or all following files # (global). if self.type == XGLTYPE: pax_headers = tarfile.pax_headers else: pax_headers = tarfile.pax_headers.copy() # Check if the pax header contains a hdrcharset field. This tells us # the encoding of the path, linkpath, uname and gname fields. Normally, # these fields are UTF-8 encoded but since POSIX.1-2008 tar # implementations are allowed to store them as raw binary strings if # the translation to UTF-8 fails. match = re.search(br"\d+ hdrcharset=([^\n]+)\n", buf) if match is not None: pax_headers["hdrcharset"] = match.group(1).decode("utf8") # For the time being, we don't care about anything other than "BINARY". # The only other value that is currently allowed by the standard is # "ISO-IR 10646 2000 UTF-8" in other words UTF-8. hdrcharset = pax_headers.get("hdrcharset") if hdrcharset == "BINARY": encoding = tarfile.encoding else: encoding = "utf8" # Parse pax header information. A record looks like that: # "%d %s=%s\n" % (length, keyword, value). length is the size # of the complete record including the length field itself and # the newline. keyword and value are both UTF-8 encoded strings. 
regex = re.compile(br"(\d+) ([^=]+)=") pos = 0 while True: match = regex.match(buf, pos) if not match: break length, keyword = match.groups() length = int(length) value = buf[match.end(2) + 1:match.start(1) + length - 1] # Normally, we could just use "utf8" as the encoding and "strict" # as the error handler, but we better not take the risk. For # example, GNU tar <= 1.23 is known to store filenames it cannot # translate to UTF-8 as raw strings (unfortunately without a # hdrcharset=BINARY header). # We first try the strict standard encoding, and if that fails we # fall back on the user's encoding and error handler. keyword = self._decode_pax_field(keyword, "utf8", "utf8", tarfile.errors) if keyword in PAX_NAME_FIELDS: value = self._decode_pax_field(value, encoding, tarfile.encoding, tarfile.errors) else: value = self._decode_pax_field(value, "utf8", "utf8", tarfile.errors) pax_headers[keyword] = value pos += length # Fetch the next header. try: next = self.fromtarfile(tarfile) except HeaderError: raise SubsequentHeaderError("missing or bad subsequent header") # Process GNU sparse information. if "GNU.sparse.map" in pax_headers: # GNU extended sparse format version 0.1. self._proc_gnusparse_01(next, pax_headers) elif "GNU.sparse.size" in pax_headers: # GNU extended sparse format version 0.0. self._proc_gnusparse_00(next, pax_headers, buf) elif pax_headers.get("GNU.sparse.major") == "1" and pax_headers.get("GNU.sparse.minor") == "0": # GNU extended sparse format version 1.0. self._proc_gnusparse_10(next, pax_headers, tarfile) if self.type in (XHDTYPE, SOLARIS_XHDTYPE): # Patch the TarInfo object with the extended header info. next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors) next.offset = self.offset if "size" in pax_headers: # If the extended header replaces the size field, # we need to recalculate the offset where the next # header starts. offset = next.offset_data if next.isreg() or next.type not in SUPPORTED_TYPES: offset += next._block(next.size) tarfile.offset = offset return next def _proc_gnusparse_00(self, next, pax_headers, buf): """Process a GNU tar extended sparse header, version 0.0. """ offsets = [] for match in re.finditer(br"\d+ GNU.sparse.offset=(\d+)\n", buf): offsets.append(int(match.group(1))) numbytes = [] for match in re.finditer(br"\d+ GNU.sparse.numbytes=(\d+)\n", buf): numbytes.append(int(match.group(1))) next.sparse = list(zip(offsets, numbytes)) def _proc_gnusparse_01(self, next, pax_headers): """Process a GNU tar extended sparse header, version 0.1. """ sparse = [int(x) for x in pax_headers["GNU.sparse.map"].split(",")] next.sparse = list(zip(sparse[::2], sparse[1::2])) def _proc_gnusparse_10(self, next, pax_headers, tarfile): """Process a GNU tar extended sparse header, version 1.0. """ fields = None sparse = [] buf = tarfile.fileobj.read(BLOCKSIZE) fields, buf = buf.split(b"\n", 1) fields = int(fields) while len(sparse) < fields * 2: if b"\n" not in buf: buf += tarfile.fileobj.read(BLOCKSIZE) number, buf = buf.split(b"\n", 1) sparse.append(int(number)) next.offset_data = tarfile.fileobj.tell() next.sparse = list(zip(sparse[::2], sparse[1::2])) def _apply_pax_info(self, pax_headers, encoding, errors): """Replace fields with supplemental information from a previous pax extended or global header. 
""" for keyword, value in pax_headers.items(): if keyword == "GNU.sparse.name": setattr(self, "path", value) elif keyword == "GNU.sparse.size": setattr(self, "size", int(value)) elif keyword == "GNU.sparse.realsize": setattr(self, "size", int(value)) elif keyword in PAX_FIELDS: if keyword in PAX_NUMBER_FIELDS: try: value = PAX_NUMBER_FIELDS[keyword](value) except ValueError: value = 0 if keyword == "path": value = value.rstrip("/") setattr(self, keyword, value) self.pax_headers = pax_headers.copy() def _decode_pax_field(self, value, encoding, fallback_encoding, fallback_errors): """Decode a single field from a pax record. """ try: return value.decode(encoding, "strict") except UnicodeDecodeError: return value.decode(fallback_encoding, fallback_errors) def _block(self, count): """Round up a byte count by BLOCKSIZE and return it, e.g. _block(834) => 1024. """ blocks, remainder = divmod(count, BLOCKSIZE) if remainder: blocks += 1 return blocks * BLOCKSIZE def isreg(self): return self.type in REGULAR_TYPES def isfile(self): return self.isreg() def isdir(self): return self.type == DIRTYPE def issym(self): return self.type == SYMTYPE def islnk(self): return self.type == LNKTYPE def ischr(self): return self.type == CHRTYPE def isblk(self): return self.type == BLKTYPE def isfifo(self): return self.type == FIFOTYPE def issparse(self): return self.sparse is not None def isdev(self): return self.type in (CHRTYPE, BLKTYPE, FIFOTYPE) # class TarInfo class TarFile(object): """The TarFile Class provides an interface to tar archives. """ debug = 0 # May be set from 0 (no msgs) to 3 (all msgs) dereference = False # If true, add content of linked file to the # tar file, else the link. ignore_zeros = False # If true, skips empty or invalid blocks and # continues processing. errorlevel = 1 # If 0, fatal errors only appear in debug # messages (if debug >= 0). If > 0, errors # are passed to the caller as exceptions. format = DEFAULT_FORMAT # The format to use when creating an archive. encoding = ENCODING # Encoding for 8-bit character strings. errors = None # Error handler for unicode conversion. tarinfo = TarInfo # The default TarInfo class to use. fileobject = ExFileObject # The default ExFileObject class to use. def __init__(self, name=None, mode="r", fileobj=None, format=None, tarinfo=None, dereference=None, ignore_zeros=None, encoding=None, errors="surrogateescape", pax_headers=None, debug=None, errorlevel=None): """Open an (uncompressed) tar archive `name'. `mode' is either 'r' to read from an existing archive, 'a' to append data to an existing file or 'w' to create a new file overwriting an existing one. `mode' defaults to 'r'. If `fileobj' is given, it is used for reading or writing data. If it can be determined, `mode' is overridden by `fileobj's mode. `fileobj' is not closed, when TarFile is closed. """ if len(mode) > 1 or mode not in "raw": raise ValueError("mode must be 'r', 'a' or 'w'") self.mode = mode self._mode = {"r": "rb", "a": "r+b", "w": "wb"}[mode] if not fileobj: if self.mode == "a" and not os.path.exists(name): # Create nonexistent files in append mode. self.mode = "w" self._mode = "wb" fileobj = bltn_open(name, self._mode) self._extfileobj = False else: if name is None and hasattr(fileobj, "name"): name = fileobj.name if hasattr(fileobj, "mode"): self._mode = fileobj.mode self._extfileobj = True self.name = os.path.abspath(name) if name else None self.fileobj = fileobj # Init attributes. 
if format is not None: self.format = format if tarinfo is not None: self.tarinfo = tarinfo if dereference is not None: self.dereference = dereference if ignore_zeros is not None: self.ignore_zeros = ignore_zeros if encoding is not None: self.encoding = encoding self.errors = errors if pax_headers is not None and self.format == PAX_FORMAT: self.pax_headers = pax_headers else: self.pax_headers = {} if debug is not None: self.debug = debug if errorlevel is not None: self.errorlevel = errorlevel # Init datastructures. self.closed = False self.members = [] # list of members as TarInfo objects self._loaded = False # flag if all members have been read self.offset = self.fileobj.tell() # current position in the archive file self.inodes = {} # dictionary caching the inodes of # archive members already added try: if self.mode == "r": self.firstmember = None self.firstmember = self.next() if self.mode == "a": # Move to the end of the archive, # before the first empty block. while True: self.fileobj.seek(self.offset) try: tarinfo = self.tarinfo.fromtarfile(self) self.members.append(tarinfo) except EOFHeaderError: self.fileobj.seek(self.offset) break except HeaderError as e: raise ReadError(str(e)) if self.mode in "aw": self._loaded = True if self.pax_headers: buf = self.tarinfo.create_pax_global_header(self.pax_headers.copy()) self.fileobj.write(buf) self.offset += len(buf) except: if not self._extfileobj: self.fileobj.close() self.closed = True raise #-------------------------------------------------------------------------- # Below are the classmethods which act as alternate constructors to the # TarFile class. The open() method is the only one that is needed for # public use; it is the "super"-constructor and is able to select an # adequate "sub"-constructor for a particular compression using the mapping # from OPEN_METH. # # This concept allows one to subclass TarFile without losing the comfort of # the super-constructor. A sub-constructor is registered and made available # by adding it to the mapping in OPEN_METH. @classmethod def open(cls, name=None, mode="r", fileobj=None, bufsize=RECORDSIZE, **kwargs): """Open a tar archive for reading, writing or appending. Return an appropriate TarFile class. mode: 'r' or 'r:*' open for reading with transparent compression 'r:' open for reading exclusively uncompressed 'r:gz' open for reading with gzip compression 'r:bz2' open for reading with bzip2 compression 'a' or 'a:' open for appending, creating the file if necessary 'w' or 'w:' open for writing without compression 'w:gz' open for writing with gzip compression 'w:bz2' open for writing with bzip2 compression 'r|*' open a stream of tar blocks with transparent compression 'r|' open an uncompressed stream of tar blocks for reading 'r|gz' open a gzip compressed stream of tar blocks 'r|bz2' open a bzip2 compressed stream of tar blocks 'w|' open an uncompressed stream for writing 'w|gz' open a gzip compressed stream for writing 'w|bz2' open a bzip2 compressed stream for writing """ if not name and not fileobj: raise ValueError("nothing to open") if mode in ("r", "r:*"): # Find out which *open() is appropriate for opening the file. 
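            # Detection works by trial and error: each registered *open()
            # (taropen, gzopen, bz2open) is tried in turn; a ReadError or
            # CompressionError means "wrong compression", so the saved file
            # position is restored and the next opener is tried.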
for comptype in cls.OPEN_METH: func = getattr(cls, cls.OPEN_METH[comptype]) if fileobj is not None: saved_pos = fileobj.tell() try: return func(name, "r", fileobj, **kwargs) except (ReadError, CompressionError) as e: if fileobj is not None: fileobj.seek(saved_pos) continue raise ReadError("file could not be opened successfully") elif ":" in mode: filemode, comptype = mode.split(":", 1) filemode = filemode or "r" comptype = comptype or "tar" # Select the *open() function according to # given compression. if comptype in cls.OPEN_METH: func = getattr(cls, cls.OPEN_METH[comptype]) else: raise CompressionError("unknown compression type %r" % comptype) return func(name, filemode, fileobj, **kwargs) elif "|" in mode: filemode, comptype = mode.split("|", 1) filemode = filemode or "r" comptype = comptype or "tar" if filemode not in "rw": raise ValueError("mode must be 'r' or 'w'") stream = _Stream(name, filemode, comptype, fileobj, bufsize) try: t = cls(name, filemode, stream, **kwargs) except: stream.close() raise t._extfileobj = False return t elif mode in "aw": return cls.taropen(name, mode, fileobj, **kwargs) raise ValueError("undiscernible mode") @classmethod def taropen(cls, name, mode="r", fileobj=None, **kwargs): """Open uncompressed tar archive name for reading or writing. """ if len(mode) > 1 or mode not in "raw": raise ValueError("mode must be 'r', 'a' or 'w'") return cls(name, mode, fileobj, **kwargs) @classmethod def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs): """Open gzip compressed tar archive name for reading or writing. Appending is not allowed. """ if len(mode) > 1 or mode not in "rw": raise ValueError("mode must be 'r' or 'w'") try: import gzip gzip.GzipFile except (ImportError, AttributeError): raise CompressionError("gzip module is not available") extfileobj = fileobj is not None try: fileobj = gzip.GzipFile(name, mode + "b", compresslevel, fileobj) t = cls.taropen(name, mode, fileobj, **kwargs) except IOError: if not extfileobj and fileobj is not None: fileobj.close() if fileobj is None: raise raise ReadError("not a gzip file") except: if not extfileobj and fileobj is not None: fileobj.close() raise t._extfileobj = extfileobj return t @classmethod def bz2open(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs): """Open bzip2 compressed tar archive name for reading or writing. Appending is not allowed. """ if len(mode) > 1 or mode not in "rw": raise ValueError("mode must be 'r' or 'w'.") try: import bz2 except ImportError: raise CompressionError("bz2 module is not available") if fileobj is not None: fileobj = _BZ2Proxy(fileobj, mode) else: fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) try: t = cls.taropen(name, mode, fileobj, **kwargs) except (IOError, EOFError): fileobj.close() raise ReadError("not a bzip2 file") t._extfileobj = False return t # All *open() methods are registered here. OPEN_METH = { "tar": "taropen", # uncompressed tar "gz": "gzopen", # gzip compressed tar "bz2": "bz2open" # bzip2 compressed tar } #-------------------------------------------------------------------------- # The public methods which TarFile provides: def close(self): """Close the TarFile. In write-mode, two finishing zero blocks are appended to the archive. 
""" if self.closed: return if self.mode in "aw": self.fileobj.write(NUL * (BLOCKSIZE * 2)) self.offset += (BLOCKSIZE * 2) # fill up the end with zero-blocks # (like option -b20 for tar does) blocks, remainder = divmod(self.offset, RECORDSIZE) if remainder > 0: self.fileobj.write(NUL * (RECORDSIZE - remainder)) if not self._extfileobj: self.fileobj.close() self.closed = True def getmember(self, name): """Return a TarInfo object for member `name'. If `name' can not be found in the archive, KeyError is raised. If a member occurs more than once in the archive, its last occurrence is assumed to be the most up-to-date version. """ tarinfo = self._getmember(name) if tarinfo is None: raise KeyError("filename %r not found" % name) return tarinfo def getmembers(self): """Return the members of the archive as a list of TarInfo objects. The list has the same order as the members in the archive. """ self._check() if not self._loaded: # if we want to obtain a list of self._load() # all members, we first have to # scan the whole archive. return self.members def getnames(self): """Return the members of the archive as a list of their names. It has the same order as the list returned by getmembers(). """ return [tarinfo.name for tarinfo in self.getmembers()] def gettarinfo(self, name=None, arcname=None, fileobj=None): """Create a TarInfo object for either the file `name' or the file object `fileobj' (using os.fstat on its file descriptor). You can modify some of the TarInfo's attributes before you add it using addfile(). If given, `arcname' specifies an alternative name for the file in the archive. """ self._check("aw") # When fileobj is given, replace name by # fileobj's real name. if fileobj is not None: name = fileobj.name # Building the name of the member in the archive. # Backward slashes are converted to forward slashes, # Absolute paths are turned to relative paths. if arcname is None: arcname = name drv, arcname = os.path.splitdrive(arcname) arcname = arcname.replace(os.sep, "/") arcname = arcname.lstrip("/") # Now, fill the TarInfo object with # information specific for the file. tarinfo = self.tarinfo() tarinfo.tarfile = self # Use os.stat or os.lstat, depending on platform # and if symlinks shall be resolved. if fileobj is None: if hasattr(os, "lstat") and not self.dereference: statres = os.lstat(name) else: statres = os.stat(name) else: statres = os.fstat(fileobj.fileno()) linkname = "" stmd = statres.st_mode if stat.S_ISREG(stmd): inode = (statres.st_ino, statres.st_dev) if not self.dereference and statres.st_nlink > 1 and \ inode in self.inodes and arcname != self.inodes[inode]: # Is it a hardlink to an already # archived file? type = LNKTYPE linkname = self.inodes[inode] else: # The inode is added only if its valid. # For win32 it is always 0. type = REGTYPE if inode[0]: self.inodes[inode] = arcname elif stat.S_ISDIR(stmd): type = DIRTYPE elif stat.S_ISFIFO(stmd): type = FIFOTYPE elif stat.S_ISLNK(stmd): type = SYMTYPE linkname = os.readlink(name) elif stat.S_ISCHR(stmd): type = CHRTYPE elif stat.S_ISBLK(stmd): type = BLKTYPE else: return None # Fill the TarInfo object with all # information we can get. 
tarinfo.name = arcname tarinfo.mode = stmd tarinfo.uid = statres.st_uid tarinfo.gid = statres.st_gid if type == REGTYPE: tarinfo.size = statres.st_size else: tarinfo.size = 0 tarinfo.mtime = statres.st_mtime tarinfo.type = type tarinfo.linkname = linkname if pwd: try: tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0] except KeyError: pass if grp: try: tarinfo.gname = grp.getgrgid(tarinfo.gid)[0] except KeyError: pass if type in (CHRTYPE, BLKTYPE): if hasattr(os, "major") and hasattr(os, "minor"): tarinfo.devmajor = os.major(statres.st_rdev) tarinfo.devminor = os.minor(statres.st_rdev) return tarinfo def list(self, verbose=True): """Print a table of contents to sys.stdout. If `verbose' is False, only the names of the members are printed. If it is True, an `ls -l'-like output is produced. """ self._check() for tarinfo in self: if verbose: print(filemode(tarinfo.mode), end=' ') print("%s/%s" % (tarinfo.uname or tarinfo.uid, tarinfo.gname or tarinfo.gid), end=' ') if tarinfo.ischr() or tarinfo.isblk(): print("%10s" % ("%d,%d" \ % (tarinfo.devmajor, tarinfo.devminor)), end=' ') else: print("%10d" % tarinfo.size, end=' ') print("%d-%02d-%02d %02d:%02d:%02d" \ % time.localtime(tarinfo.mtime)[:6], end=' ') print(tarinfo.name + ("/" if tarinfo.isdir() else ""), end=' ') if verbose: if tarinfo.issym(): print("->", tarinfo.linkname, end=' ') if tarinfo.islnk(): print("link to", tarinfo.linkname, end=' ') print() def add(self, name, arcname=None, recursive=True, exclude=None, filter=None): """Add the file `name' to the archive. `name' may be any type of file (directory, fifo, symbolic link, etc.). If given, `arcname' specifies an alternative name for the file in the archive. Directories are added recursively by default. This can be avoided by setting `recursive' to False. `exclude' is a function that should return True for each filename to be excluded. `filter' is a function that expects a TarInfo object argument and returns the changed TarInfo object, if it returns None the TarInfo object will be excluded from the archive. """ self._check("aw") if arcname is None: arcname = name # Exclude pathnames. if exclude is not None: import warnings warnings.warn("use the filter argument instead", DeprecationWarning, 2) if exclude(name): self._dbg(2, "tarfile: Excluded %r" % name) return # Skip if somebody tries to archive the archive... if self.name is not None and os.path.abspath(name) == self.name: self._dbg(2, "tarfile: Skipped %r" % name) return self._dbg(1, name) # Create a TarInfo object from the file. tarinfo = self.gettarinfo(name, arcname) if tarinfo is None: self._dbg(1, "tarfile: Unsupported type %r" % name) return # Change or exclude the TarInfo object. if filter is not None: tarinfo = filter(tarinfo) if tarinfo is None: self._dbg(2, "tarfile: Excluded %r" % name) return # Append the tar header and data to the archive. if tarinfo.isreg(): f = bltn_open(name, "rb") self.addfile(tarinfo, f) f.close() elif tarinfo.isdir(): self.addfile(tarinfo) if recursive: for f in os.listdir(name): self.add(os.path.join(name, f), os.path.join(arcname, f), recursive, exclude, filter=filter) else: self.addfile(tarinfo) def addfile(self, tarinfo, fileobj=None): """Add the TarInfo object `tarinfo' to the archive. If `fileobj' is given, tarinfo.size bytes are read from it and added to the archive. You can create TarInfo objects using gettarinfo(). On Windows platforms, `fileobj' should always be opened with mode 'rb' to avoid irritation about the file size. 
""" self._check("aw") tarinfo = copy.copy(tarinfo) buf = tarinfo.tobuf(self.format, self.encoding, self.errors) self.fileobj.write(buf) self.offset += len(buf) # If there's data to follow, append it. if fileobj is not None: copyfileobj(fileobj, self.fileobj, tarinfo.size) blocks, remainder = divmod(tarinfo.size, BLOCKSIZE) if remainder > 0: self.fileobj.write(NUL * (BLOCKSIZE - remainder)) blocks += 1 self.offset += blocks * BLOCKSIZE self.members.append(tarinfo) def extractall(self, path=".", members=None): """Extract all members from the archive to the current working directory and set owner, modification time and permissions on directories afterwards. `path' specifies a different directory to extract to. `members' is optional and must be a subset of the list returned by getmembers(). """ directories = [] if members is None: members = self for tarinfo in members: if tarinfo.isdir(): # Extract directories with a safe mode. directories.append(tarinfo) tarinfo = copy.copy(tarinfo) tarinfo.mode = 0o700 # Do not set_attrs directories, as we will do that further down self.extract(tarinfo, path, set_attrs=not tarinfo.isdir()) # Reverse sort directories. directories.sort(key=lambda a: a.name) directories.reverse() # Set correct owner, mtime and filemode on directories. for tarinfo in directories: dirpath = os.path.join(path, tarinfo.name) try: self.chown(tarinfo, dirpath) self.utime(tarinfo, dirpath) self.chmod(tarinfo, dirpath) except ExtractError as e: if self.errorlevel > 1: raise else: self._dbg(1, "tarfile: %s" % e) def extract(self, member, path="", set_attrs=True): """Extract a member from the archive to the current working directory, using its full name. Its file information is extracted as accurately as possible. `member' may be a filename or a TarInfo object. You can specify a different directory using `path'. File attributes (owner, mtime, mode) are set unless `set_attrs' is False. """ self._check("r") if isinstance(member, str): tarinfo = self.getmember(member) else: tarinfo = member # Prepare the link target for makelink(). if tarinfo.islnk(): tarinfo._link_target = os.path.join(path, tarinfo.linkname) try: self._extract_member(tarinfo, os.path.join(path, tarinfo.name), set_attrs=set_attrs) except EnvironmentError as e: if self.errorlevel > 0: raise else: if e.filename is None: self._dbg(1, "tarfile: %s" % e.strerror) else: self._dbg(1, "tarfile: %s %r" % (e.strerror, e.filename)) except ExtractError as e: if self.errorlevel > 1: raise else: self._dbg(1, "tarfile: %s" % e) def extractfile(self, member): """Extract a member from the archive as a file object. `member' may be a filename or a TarInfo object. If `member' is a regular file, a file-like object is returned. If `member' is a link, a file-like object is constructed from the link's target. If `member' is none of the above, None is returned. The file-like object is read-only and provides the following methods: read(), readline(), readlines(), seek() and tell() """ self._check("r") if isinstance(member, str): tarinfo = self.getmember(member) else: tarinfo = member if tarinfo.isreg(): return self.fileobject(self, tarinfo) elif tarinfo.type not in SUPPORTED_TYPES: # If a member's type is unknown, it is treated as a # regular file. return self.fileobject(self, tarinfo) elif tarinfo.islnk() or tarinfo.issym(): if isinstance(self.fileobj, _Stream): # A small but ugly workaround for the case that someone tries # to extract a (sym)link as a file-object from a non-seekable # stream of tar blocks. 
raise StreamError("cannot extract (sym)link as file object") else: # A (sym)link's file object is its target's file object. return self.extractfile(self._find_link_target(tarinfo)) else: # If there's no data associated with the member (directory, chrdev, # blkdev, etc.), return None instead of a file object. return None def _extract_member(self, tarinfo, targetpath, set_attrs=True): """Extract the TarInfo object tarinfo to a physical file called targetpath. """ # Fetch the TarInfo object for the given name # and build the destination pathname, replacing # forward slashes to platform specific separators. targetpath = targetpath.rstrip("/") targetpath = targetpath.replace("/", os.sep) # Create all upper directories. upperdirs = os.path.dirname(targetpath) if upperdirs and not os.path.exists(upperdirs): # Create directories that are not part of the archive with # default permissions. os.makedirs(upperdirs) if tarinfo.islnk() or tarinfo.issym(): self._dbg(1, "%s -> %s" % (tarinfo.name, tarinfo.linkname)) else: self._dbg(1, tarinfo.name) if tarinfo.isreg(): self.makefile(tarinfo, targetpath) elif tarinfo.isdir(): self.makedir(tarinfo, targetpath) elif tarinfo.isfifo(): self.makefifo(tarinfo, targetpath) elif tarinfo.ischr() or tarinfo.isblk(): self.makedev(tarinfo, targetpath) elif tarinfo.islnk() or tarinfo.issym(): self.makelink(tarinfo, targetpath) elif tarinfo.type not in SUPPORTED_TYPES: self.makeunknown(tarinfo, targetpath) else: self.makefile(tarinfo, targetpath) if set_attrs: self.chown(tarinfo, targetpath) if not tarinfo.issym(): self.chmod(tarinfo, targetpath) self.utime(tarinfo, targetpath) #-------------------------------------------------------------------------- # Below are the different file methods. They are called via # _extract_member() when extract() is called. They can be replaced in a # subclass to implement other functionality. def makedir(self, tarinfo, targetpath): """Make a directory called targetpath. """ try: # Use a safe mode for the directory, the real mode is set # later in _extract_member(). os.mkdir(targetpath, 0o700) except EnvironmentError as e: if e.errno != errno.EEXIST: raise def makefile(self, tarinfo, targetpath): """Make a file called targetpath. """ source = self.fileobj source.seek(tarinfo.offset_data) target = bltn_open(targetpath, "wb") if tarinfo.sparse is not None: for offset, size in tarinfo.sparse: target.seek(offset) copyfileobj(source, target, size) else: copyfileobj(source, target, tarinfo.size) target.seek(tarinfo.size) target.truncate() target.close() def makeunknown(self, tarinfo, targetpath): """Make a file from a TarInfo object with an unknown type at targetpath. """ self.makefile(tarinfo, targetpath) self._dbg(1, "tarfile: Unknown file type %r, " \ "extracted as regular file." % tarinfo.type) def makefifo(self, tarinfo, targetpath): """Make a fifo called targetpath. """ if hasattr(os, "mkfifo"): os.mkfifo(targetpath) else: raise ExtractError("fifo not supported by system") def makedev(self, tarinfo, targetpath): """Make a character or block device called targetpath. """ if not hasattr(os, "mknod") or not hasattr(os, "makedev"): raise ExtractError("special devices not supported by system") mode = tarinfo.mode if tarinfo.isblk(): mode |= stat.S_IFBLK else: mode |= stat.S_IFCHR os.mknod(targetpath, mode, os.makedev(tarinfo.devmajor, tarinfo.devminor)) def makelink(self, tarinfo, targetpath): """Make a (symbolic) link called targetpath. 
If it cannot be created (platform limitation), we try to make a copy of the referenced file instead of a link. """ try: # For systems that support symbolic and hard links. if tarinfo.issym(): os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) except symlink_exception: if tarinfo.issym(): linkpath = os.path.join(os.path.dirname(tarinfo.name), tarinfo.linkname) else: linkpath = tarinfo.linkname else: try: self._extract_member(self._find_link_target(tarinfo), targetpath) except KeyError: raise ExtractError("unable to resolve link inside archive") def chown(self, tarinfo, targetpath): """Set owner of targetpath according to tarinfo. """ if pwd and hasattr(os, "geteuid") and os.geteuid() == 0: # We have to be root to do so. try: g = grp.getgrnam(tarinfo.gname)[2] except KeyError: g = tarinfo.gid try: u = pwd.getpwnam(tarinfo.uname)[2] except KeyError: u = tarinfo.uid try: if tarinfo.issym() and hasattr(os, "lchown"): os.lchown(targetpath, u, g) else: if sys.platform != "os2emx": os.chown(targetpath, u, g) except EnvironmentError as e: raise ExtractError("could not change owner") def chmod(self, tarinfo, targetpath): """Set file permissions of targetpath according to tarinfo. """ if hasattr(os, 'chmod'): try: os.chmod(targetpath, tarinfo.mode) except EnvironmentError as e: raise ExtractError("could not change mode") def utime(self, tarinfo, targetpath): """Set modification time of targetpath according to tarinfo. """ if not hasattr(os, 'utime'): return try: os.utime(targetpath, (tarinfo.mtime, tarinfo.mtime)) except EnvironmentError as e: raise ExtractError("could not change modification time") #-------------------------------------------------------------------------- def next(self): """Return the next member of the archive as a TarInfo object, when TarFile is opened for reading. Return None if there is no more available. """ self._check("ra") if self.firstmember is not None: m = self.firstmember self.firstmember = None return m # Read the next block. self.fileobj.seek(self.offset) tarinfo = None while True: try: tarinfo = self.tarinfo.fromtarfile(self) except EOFHeaderError as e: if self.ignore_zeros: self._dbg(2, "0x%X: %s" % (self.offset, e)) self.offset += BLOCKSIZE continue except InvalidHeaderError as e: if self.ignore_zeros: self._dbg(2, "0x%X: %s" % (self.offset, e)) self.offset += BLOCKSIZE continue elif self.offset == 0: raise ReadError(str(e)) except EmptyHeaderError: if self.offset == 0: raise ReadError("empty file") except TruncatedHeaderError as e: if self.offset == 0: raise ReadError(str(e)) except SubsequentHeaderError as e: raise ReadError(str(e)) break if tarinfo is not None: self.members.append(tarinfo) else: self._loaded = True return tarinfo #-------------------------------------------------------------------------- # Little helper methods: def _getmember(self, name, tarinfo=None, normalize=False): """Find an archive member by name from bottom to top. If tarinfo is given, it is used as the starting point. """ # Ensure that all members have been loaded. members = self.getmembers() # Limit the member search list up to tarinfo. 
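        # (A hard link can only point at a member that was archived before
        # it, so members at or after `tarinfo' are irrelevant to the search.)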
if tarinfo is not None: members = members[:members.index(tarinfo)] if normalize: name = os.path.normpath(name) for member in reversed(members): if normalize: member_name = os.path.normpath(member.name) else: member_name = member.name if name == member_name: return member def _load(self): """Read through the entire archive file and look for readable members. """ while True: tarinfo = self.next() if tarinfo is None: break self._loaded = True def _check(self, mode=None): """Check if TarFile is still open, and if the operation's mode corresponds to TarFile's mode. """ if self.closed: raise IOError("%s is closed" % self.__class__.__name__) if mode is not None and self.mode not in mode: raise IOError("bad operation for mode %r" % self.mode) def _find_link_target(self, tarinfo): """Find the target member of a symlink or hardlink member in the archive. """ if tarinfo.issym(): # Always search the entire archive. linkname = os.path.dirname(tarinfo.name) + "/" + tarinfo.linkname limit = None else: # Search the archive before the link, because a hard link is # just a reference to an already archived file. linkname = tarinfo.linkname limit = tarinfo member = self._getmember(linkname, tarinfo=limit, normalize=True) if member is None: raise KeyError("linkname %r not found" % linkname) return member def __iter__(self): """Provide an iterator object. """ if self._loaded: return iter(self.members) else: return TarIter(self) def _dbg(self, level, msg): """Write debugging output to sys.stderr. """ if level <= self.debug: print(msg, file=sys.stderr) def __enter__(self): self._check() return self def __exit__(self, type, value, traceback): if type is None: self.close() else: # An exception occurred. We must not call close() because # it would try to write end-of-archive blocks and padding. if not self._extfileobj: self.fileobj.close() self.closed = True # class TarFile class TarIter(object): """Iterator Class. for tarinfo in TarFile(...): suite... """ def __init__(self, tarfile): """Construct a TarIter object. """ self.tarfile = tarfile self.index = 0 def __iter__(self): """Return iterator object. """ return self def __next__(self): """Return the next item using TarFile's next() method. When all members have been read, set TarFile as _loaded. """ # Fix for SF #1100429: Under rare circumstances it can # happen that getmembers() is called during iteration, # which will cause TarIter to stop prematurely. if not self.tarfile._loaded: tarinfo = self.tarfile.next() if not tarinfo: self.tarfile._loaded = True raise StopIteration else: try: tarinfo = self.tarfile.members[self.index] except IndexError: raise StopIteration self.index += 1 return tarinfo next = __next__ # for Python 2.x #-------------------- # exported functions #-------------------- def is_tarfile(name): """Return True if name points to a tar archive that we are able to handle, else return False. 
""" try: t = open(name) t.close() return True except TarError: return False bltn_open = open open = TarFile.open ��������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/compat.py������������������������0000644�0001750�0001750�00000120674�00000000000�027766� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # # Copyright (C) 2013-2017 Vinay Sajip. # Licensed to the Python Software Foundation under a contributor agreement. # See LICENSE.txt and CONTRIBUTORS.txt. 
# from __future__ import absolute_import import os import re import sys try: import ssl except ImportError: # pragma: no cover ssl = None if sys.version_info[0] < 3: # pragma: no cover from StringIO import StringIO string_types = basestring, text_type = unicode from types import FileType as file_type import __builtin__ as builtins import ConfigParser as configparser from ._backport import shutil from urlparse import urlparse, urlunparse, urljoin, urlsplit, urlunsplit from urllib import (urlretrieve, quote as _quote, unquote, url2pathname, pathname2url, ContentTooShortError, splittype) def quote(s): if isinstance(s, unicode): s = s.encode('utf-8') return _quote(s) import urllib2 from urllib2 import (Request, urlopen, URLError, HTTPError, HTTPBasicAuthHandler, HTTPPasswordMgr, HTTPHandler, HTTPRedirectHandler, build_opener) if ssl: from urllib2 import HTTPSHandler import httplib import xmlrpclib import Queue as queue from HTMLParser import HTMLParser import htmlentitydefs raw_input = raw_input from itertools import ifilter as filter from itertools import ifilterfalse as filterfalse _userprog = None def splituser(host): """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'.""" global _userprog if _userprog is None: import re _userprog = re.compile('^(.*)@(.*)$') match = _userprog.match(host) if match: return match.group(1, 2) return None, host else: # pragma: no cover from io import StringIO string_types = str, text_type = str from io import TextIOWrapper as file_type import builtins import configparser import shutil from urllib.parse import (urlparse, urlunparse, urljoin, splituser, quote, unquote, urlsplit, urlunsplit, splittype) from urllib.request import (urlopen, urlretrieve, Request, url2pathname, pathname2url, HTTPBasicAuthHandler, HTTPPasswordMgr, HTTPHandler, HTTPRedirectHandler, build_opener) if ssl: from urllib.request import HTTPSHandler from urllib.error import HTTPError, URLError, ContentTooShortError import http.client as httplib import urllib.request as urllib2 import xmlrpc.client as xmlrpclib import queue from html.parser import HTMLParser import html.entities as htmlentitydefs raw_input = input from itertools import filterfalse filter = filter try: from ssl import match_hostname, CertificateError except ImportError: # pragma: no cover class CertificateError(ValueError): pass def _dnsname_match(dn, hostname, max_wildcards=1): """Matching according to RFC 6125, section 6.4.3 http://tools.ietf.org/html/rfc6125#section-6.4.3 """ pats = [] if not dn: return False parts = dn.split('.') leftmost, remainder = parts[0], parts[1:] wildcards = leftmost.count('*') if wildcards > max_wildcards: # Issue #17980: avoid denials of service by refusing more # than one wildcard per fragment. A survey of established # policy among SSL implementations showed it to be a # reasonable choice. raise CertificateError( "too many wildcards in certificate DNS name: " + repr(dn)) # speed up common case w/o wildcards if not wildcards: return dn.lower() == hostname.lower() # RFC 6125, section 6.4.3, subitem 1. # The client SHOULD NOT attempt to match a presented identifier in which # the wildcard character comprises a label other than the left-most label. if leftmost == '*': # When '*' is a fragment by itself, it matches a non-empty dotless # fragment. pats.append('[^.]+') elif leftmost.startswith('xn--') or hostname.startswith('xn--'): # RFC 6125, section 6.4.3, subitem 3. 
# The client SHOULD NOT attempt to match a presented identifier # where the wildcard character is embedded within an A-label or # U-label of an internationalized domain name. pats.append(re.escape(leftmost)) else: # Otherwise, '*' matches any dotless string, e.g. www* pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) # add the remaining fragments, ignore any wildcards for frag in remainder: pats.append(re.escape(frag)) pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) return pat.match(hostname) def match_hostname(cert, hostname): """Verify that *cert* (in decoded format as returned by SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 rules are followed, but IP addresses are not accepted for *hostname*. CertificateError is raised on failure. On success, the function returns nothing. """ if not cert: raise ValueError("empty or no certificate, match_hostname needs a " "SSL socket or SSL context with either " "CERT_OPTIONAL or CERT_REQUIRED") dnsnames = [] san = cert.get('subjectAltName', ()) for key, value in san: if key == 'DNS': if _dnsname_match(value, hostname): return dnsnames.append(value) if not dnsnames: # The subject is only checked when there is no dNSName entry # in subjectAltName for sub in cert.get('subject', ()): for key, value in sub: # XXX according to RFC 2818, the most specific Common Name # must be used. if key == 'commonName': if _dnsname_match(value, hostname): return dnsnames.append(value) if len(dnsnames) > 1: raise CertificateError("hostname %r " "doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames)))) elif len(dnsnames) == 1: raise CertificateError("hostname %r " "doesn't match %r" % (hostname, dnsnames[0])) else: raise CertificateError("no appropriate commonName or " "subjectAltName fields were found") try: from types import SimpleNamespace as Container except ImportError: # pragma: no cover class Container(object): """ A generic container for when multiple values need to be returned """ def __init__(self, **kwargs): self.__dict__.update(kwargs) try: from shutil import which except ImportError: # pragma: no cover # Implementation from Python 3.3 def which(cmd, mode=os.F_OK | os.X_OK, path=None): """Given a command, mode, and a PATH string, return the path which conforms to the given mode on the PATH, or None if there is no such file. `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result of os.environ.get("PATH"), or can be overridden with a custom search path. """ # Check that a given file can be accessed with the correct mode. # Additionally check that `file` is not a directory, as on Windows # directories pass the os.access check. def _access_check(fn, mode): return (os.path.exists(fn) and os.access(fn, mode) and not os.path.isdir(fn)) # If we're given a path with a directory part, look it up directly rather # than referring to PATH directories. This includes checking relative to the # current directory, e.g. ./script if os.path.dirname(cmd): if _access_check(cmd, mode): return cmd return None if path is None: path = os.environ.get("PATH", os.defpath) if not path: return None path = path.split(os.pathsep) if sys.platform == "win32": # The current directory takes precedence on Windows. if not os.curdir in path: path.insert(0, os.curdir) # PATHEXT is necessary to check on Windows. pathext = os.environ.get("PATHEXT", "").split(os.pathsep) # See if the given file matches any of the expected path extensions. # This will allow us to short circuit when given "python.exe". 
# If it does match, only test that one, otherwise we have to try # others. if any(cmd.lower().endswith(ext.lower()) for ext in pathext): files = [cmd] else: files = [cmd + ext for ext in pathext] else: # On other platforms you don't have things like PATHEXT to tell you # what file suffixes are executable, so just pass on cmd as-is. files = [cmd] seen = set() for dir in path: normdir = os.path.normcase(dir) if not normdir in seen: seen.add(normdir) for thefile in files: name = os.path.join(dir, thefile) if _access_check(name, mode): return name return None # ZipFile is a context manager in 2.7, but not in 2.6 from zipfile import ZipFile as BaseZipFile if hasattr(BaseZipFile, '__enter__'): # pragma: no cover ZipFile = BaseZipFile else: # pragma: no cover from zipfile import ZipExtFile as BaseZipExtFile class ZipExtFile(BaseZipExtFile): def __init__(self, base): self.__dict__.update(base.__dict__) def __enter__(self): return self def __exit__(self, *exc_info): self.close() # return None, so if an exception occurred, it will propagate class ZipFile(BaseZipFile): def __enter__(self): return self def __exit__(self, *exc_info): self.close() # return None, so if an exception occurred, it will propagate def open(self, *args, **kwargs): base = BaseZipFile.open(self, *args, **kwargs) return ZipExtFile(base) try: from platform import python_implementation except ImportError: # pragma: no cover def python_implementation(): """Return a string identifying the Python implementation.""" if 'PyPy' in sys.version: return 'PyPy' if os.name == 'java': return 'Jython' if sys.version.startswith('IronPython'): return 'IronPython' return 'CPython' try: import sysconfig except ImportError: # pragma: no cover from ._backport import sysconfig try: callable = callable except NameError: # pragma: no cover from collections import Callable def callable(obj): return isinstance(obj, Callable) try: fsencode = os.fsencode fsdecode = os.fsdecode except AttributeError: # pragma: no cover # Issue #99: on some systems (e.g. containerised), # sys.getfilesystemencoding() returns None, and we need a real value, # so fall back to utf-8. From the CPython 2.7 docs relating to Unix and # sys.getfilesystemencoding(): the return value is "the user’s preference # according to the result of nl_langinfo(CODESET), or None if the # nl_langinfo(CODESET) failed." _fsencoding = sys.getfilesystemencoding() or 'utf-8' if _fsencoding == 'mbcs': _fserrors = 'strict' else: _fserrors = 'surrogateescape' def fsencode(filename): if isinstance(filename, bytes): return filename elif isinstance(filename, text_type): return filename.encode(_fsencoding, _fserrors) else: raise TypeError("expect bytes or str, not %s" % type(filename).__name__) def fsdecode(filename): if isinstance(filename, text_type): return filename elif isinstance(filename, bytes): return filename.decode(_fsencoding, _fserrors) else: raise TypeError("expect bytes or str, not %s" % type(filename).__name__) try: from tokenize import detect_encoding except ImportError: # pragma: no cover from codecs import BOM_UTF8, lookup import re cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)") def _get_normal_name(orig_enc): """Imitates get_normal_name in tokenizer.c.""" # Only care about the first 12 characters. 
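        # For example, "UTF_8" normalizes to "utf-8" and "Latin-1" to
        # "iso-8859-1"; anything unrecognized is returned unchanged.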
enc = orig_enc[:12].lower().replace("_", "-") if enc == "utf-8" or enc.startswith("utf-8-"): return "utf-8" if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \ enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")): return "iso-8859-1" return orig_enc def detect_encoding(readline): """ The detect_encoding() function is used to detect the encoding that should be used to decode a Python source file. It requires one argument, readline, in the same way as the tokenize() generator. It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (left as bytes) it has read in. It detects the encoding from the presence of a utf-8 bom or an encoding cookie as specified in pep-0263. If both a bom and a cookie are present, but disagree, a SyntaxError will be raised. If the encoding cookie is an invalid charset, raise a SyntaxError. Note that if a utf-8 bom is found, 'utf-8-sig' is returned. If no encoding is specified, then the default of 'utf-8' will be returned. """ try: filename = readline.__self__.name except AttributeError: filename = None bom_found = False encoding = None default = 'utf-8' def read_or_stop(): try: return readline() except StopIteration: return b'' def find_cookie(line): try: # Decode as UTF-8. Either the line is an encoding declaration, # in which case it should be pure ASCII, or it must be UTF-8 # per default encoding. line_string = line.decode('utf-8') except UnicodeDecodeError: msg = "invalid or missing encoding declaration" if filename is not None: msg = '{} for {!r}'.format(msg, filename) raise SyntaxError(msg) matches = cookie_re.findall(line_string) if not matches: return None encoding = _get_normal_name(matches[0]) try: codec = lookup(encoding) except LookupError: # This behaviour mimics the Python interpreter if filename is None: msg = "unknown encoding: " + encoding else: msg = "unknown encoding for {!r}: {}".format(filename, encoding) raise SyntaxError(msg) if bom_found: if codec.name != 'utf-8': # This behaviour mimics the Python interpreter if filename is None: msg = 'encoding problem: utf-8' else: msg = 'encoding problem for {!r}: utf-8'.format(filename) raise SyntaxError(msg) encoding += '-sig' return encoding first = read_or_stop() if first.startswith(BOM_UTF8): bom_found = True first = first[3:] default = 'utf-8-sig' if not first: return default, [] encoding = find_cookie(first) if encoding: return encoding, [first] second = read_or_stop() if not second: return default, [first] encoding = find_cookie(second) if encoding: return encoding, [first, second] return default, [first, second] # For converting & <-> & etc. 
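# For example, escape("<a>") == "&lt;a&gt;" and
# unescape("&lt;a&gt;") == "<a>".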
try:
    from html import escape
except ImportError:
    from cgi import escape

if sys.version_info[:2] < (3, 4):
    unescape = HTMLParser().unescape
else:
    from html import unescape

try:
    from collections import ChainMap
except ImportError: # pragma: no cover
    from collections import MutableMapping

    try:
        from reprlib import recursive_repr as _recursive_repr
    except ImportError:
        def _recursive_repr(fillvalue='...'):
            '''
            Decorator to make a repr function return fillvalue for a recursive
            call
            '''

            def decorating_function(user_function):
                repr_running = set()

                def wrapper(self):
                    key = id(self), get_ident()
                    if key in repr_running:
                        return fillvalue
                    repr_running.add(key)
                    try:
                        result = user_function(self)
                    finally:
                        repr_running.discard(key)
                    return result

                # Can't use functools.wraps() here because of bootstrap issues
                wrapper.__module__ = getattr(user_function, '__module__')
                wrapper.__doc__ = getattr(user_function, '__doc__')
                wrapper.__name__ = getattr(user_function, '__name__')
                wrapper.__annotations__ = getattr(user_function, '__annotations__', {})
                return wrapper

            return decorating_function

    class ChainMap(MutableMapping):
        '''
        A ChainMap groups multiple dicts (or other mappings) together
        to create a single, updateable view.

        The underlying mappings are stored in a list. That list is public and
        can be accessed or updated using the *maps* attribute. There is no
        other state.

        Lookups search the underlying mappings successively until a key is
        found. In contrast, writes, updates, and deletions only operate on
        the first mapping.
        '''

        def __init__(self, *maps):
            '''Initialize a ChainMap by setting *maps* to the given mappings.
            If no mappings are provided, a single empty dictionary is used.
            '''
            self.maps = list(maps) or [{}]          # always at least one map

        def __missing__(self, key):
            raise KeyError(key)

        def __getitem__(self, key):
            for mapping in self.maps:
                try:
                    return mapping[key]             # can't use 'key in mapping' with defaultdict
                except KeyError:
                    pass
            return self.__missing__(key)            # support subclasses that define __missing__

        def get(self, key, default=None):
            return self[key] if key in self else default

        def __len__(self):
            return len(set().union(*self.maps))     # reuses stored hash values if possible

        def __iter__(self):
            return iter(set().union(*self.maps))

        def __contains__(self, key):
            return any(key in m for m in self.maps)

        def __bool__(self):
            return any(self.maps)

        @_recursive_repr()
        def __repr__(self):
            return '{0.__class__.__name__}({1})'.format(
                self, ', '.join(map(repr, self.maps)))

        @classmethod
        def fromkeys(cls, iterable, *args):
            'Create a ChainMap with a single dict created from the iterable.'
            return cls(dict.fromkeys(iterable, *args))

        def copy(self):
            'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]'
            return self.__class__(self.maps[0].copy(), *self.maps[1:])

        __copy__ = copy

        def new_child(self):                        # like Django's Context.push()
            'New ChainMap with a new dict followed by all previous maps.'
            return self.__class__({}, *self.maps)

        @property
        def parents(self):                          # like Django's Context.pop()
            'New ChainMap from maps[1:].'
            return self.__class__(*self.maps[1:])

        def __setitem__(self, key, value):
            self.maps[0][key] = value

        def __delitem__(self, key):
            try:
                del self.maps[0][key]
            except KeyError:
                raise KeyError('Key not found in the first mapping: {!r}'.format(key))

        def popitem(self):
            'Remove and return an item pair from maps[0]. Raise KeyError if maps[0] is empty.'
try: return self.maps[0].popitem() except KeyError: raise KeyError('No keys found in the first mapping.') def pop(self, key, *args): 'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].' try: return self.maps[0].pop(key, *args) except KeyError: raise KeyError('Key not found in the first mapping: {!r}'.format(key)) def clear(self): 'Clear maps[0], leaving maps[1:] intact.' self.maps[0].clear() try: from importlib.util import cache_from_source # Python >= 3.4 except ImportError: # pragma: no cover try: from imp import cache_from_source except ImportError: # pragma: no cover def cache_from_source(path, debug_override=None): assert path.endswith('.py') if debug_override is None: debug_override = __debug__ if debug_override: suffix = 'c' else: suffix = 'o' return path + suffix try: from collections import OrderedDict except ImportError: # pragma: no cover ## {{{ http://code.activestate.com/recipes/576693/ (r9) # Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy. # Passes Python2.7's test suite and incorporates all the latest updates. try: from thread import get_ident as _get_ident except ImportError: from dummy_thread import get_ident as _get_ident try: from _abcoll import KeysView, ValuesView, ItemsView except ImportError: pass class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. # Big-O running times for all methods are the same as for regular dictionaries. # The internal self.__map dictionary maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): '''Initialize an ordered dictionary. Signature is the same as for regular dictionaries, but keyword arguments are not recommended because their insertion order is arbitrary. ''' if len(args) > 1: raise TypeError('expected at most 1 arguments, got %d' % len(args)) try: self.__root except AttributeError: self.__root = root = [] # sentinel node root[:] = [root, root, None] self.__map = {} self.__update(*args, **kwds) def __setitem__(self, key, value, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' # Setting a new item creates a new link which goes at the end of the linked # list, and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[0] last[1] = root[0] = self.__map[key] = [last, root, key] dict_setitem(self, key, value) def __delitem__(self, key, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' # Deleting an existing item uses self.__map to find the link which is # then removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) link_prev, link_next, key = self.__map.pop(key) link_prev[1] = link_next link_next[0] = link_prev def __iter__(self): 'od.__iter__() <==> iter(od)' root = self.__root curr = root[1] while curr is not root: yield curr[2] curr = curr[1] def __reversed__(self): 'od.__reversed__() <==> reversed(od)' root = self.__root curr = root[0] while curr is not root: yield curr[2] curr = curr[0] def clear(self): 'od.clear() -> None. Remove all items from od.' 
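            # Emptying each [PREV, NEXT, KEY] link below breaks the reference
            # cycles of the doubly linked list so the nodes can be reclaimed;
            # the AttributeError guard covers a clear() that runs before
            # __init__() has created the map and root sentinel.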
try: for node in self.__map.itervalues(): del node[:] root = self.__root root[:] = [root, root, None] self.__map.clear() except AttributeError: pass dict.clear(self) def popitem(self, last=True): '''od.popitem() -> (k, v), return and remove a (key, value) pair. Pairs are returned in LIFO order if last is true or FIFO order if false. ''' if not self: raise KeyError('dictionary is empty') root = self.__root if last: link = root[0] link_prev = link[0] link_prev[1] = root root[0] = link_prev else: link = root[1] link_next = link[1] root[1] = link_next link_next[0] = root key = link[2] del self.__map[key] value = dict.pop(self, key) return key, value # -- the following methods do not depend on the internal structure -- def keys(self): 'od.keys() -> list of keys in od' return list(self) def values(self): 'od.values() -> list of values in od' return [self[key] for key in self] def items(self): 'od.items() -> list of (key, value) pairs in od' return [(key, self[key]) for key in self] def iterkeys(self): 'od.iterkeys() -> an iterator over the keys in od' return iter(self) def itervalues(self): 'od.itervalues -> an iterator over the values in od' for k in self: yield self[k] def iteritems(self): 'od.iteritems -> an iterator over the (key, value) items in od' for k in self: yield (k, self[k]) def update(*args, **kwds): '''od.update(E, **F) -> None. Update od from dict/iterable E and F. If E is a dict instance, does: for k in E: od[k] = E[k] If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] Or if E is an iterable of items, does: for k, v in E: od[k] = v In either case, this is followed by: for k, v in F.items(): od[k] = v ''' if len(args) > 2: raise TypeError('update() takes at most 2 positional ' 'arguments (%d given)' % (len(args),)) elif not args: raise TypeError('update() takes at least 1 argument (0 given)') self = args[0] # Make progressively weaker assumptions about "other" other = () if len(args) == 2: other = args[1] if isinstance(other, dict): for key in other: self[key] = other[key] elif hasattr(other, 'keys'): for key in other.keys(): self[key] = other[key] else: for key, value in other: self[key] = value for key, value in kwds.items(): self[key] = value __update = update # let subclasses override update without breaking __init__ __marker = object() def pop(self, key, default=__marker): '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised. ''' if key in self: result = self[key] del self[key] return result if default is self.__marker: raise KeyError(key) return default def setdefault(self, key, default=None): 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' if key in self: return self[key] self[key] = default return default def __repr__(self, _repr_running=None): 'od.__repr__() <==> repr(od)' if not _repr_running: _repr_running = {} call_key = id(self), _get_ident() if call_key in _repr_running: return '...' 
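# --- Illustrative aside, not part of the vendored distlib source ---
# A short sketch of the guarantees the OrderedDict backport above provides
# on pre-2.7 interpreters: keys come back in insertion order, and
# popitem() can pop from either end of the doubly linked list.
#
#     >>> od = OrderedDict([('b', 1), ('a', 2)])
#     >>> od.keys()                # insertion order, not hash order
#     ['b', 'a']
#     >>> od.popitem(last=False)   # FIFO pop from the front
#     ('b', 1)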
_repr_running[call_key] = 1 try: if not self: return '%s()' % (self.__class__.__name__,) return '%s(%r)' % (self.__class__.__name__, self.items()) finally: del _repr_running[call_key] def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] inst_dict = vars(self).copy() for k in vars(OrderedDict()): inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) def copy(self): 'od.copy() -> a shallow copy of od' return self.__class__(self) @classmethod def fromkeys(cls, iterable, value=None): '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and values equal to v (which defaults to None). ''' d = cls() for key in iterable: d[key] = value return d def __eq__(self, other): '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive while comparison to a regular mapping is order-insensitive. ''' if isinstance(other, OrderedDict): return len(self)==len(other) and self.items() == other.items() return dict.__eq__(self, other) def __ne__(self, other): return not self == other # -- the following methods are only used in Python 2.7 -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" return KeysView(self) def viewvalues(self): "od.viewvalues() -> an object providing a view on od's values" return ValuesView(self) def viewitems(self): "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) try: from logging.config import BaseConfigurator, valid_ident except ImportError: # pragma: no cover IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I) def valid_ident(s): m = IDENTIFIER.match(s) if not m: raise ValueError('Not a valid Python identifier: %r' % s) return True # The ConvertingXXX classes are wrappers around standard Python containers, # and they serve to convert any suitable values in the container. The # conversion converts base dicts, lists and tuples to their wrapped # equivalents, whereas strings which match a conversion format are converted # appropriately. # # Each wrapper should have a configurator attribute holding the actual # configurator to use for conversion. 
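# --- Illustrative aside, not part of the vendored distlib source ---
# A minimal sketch of the wrapper contract described above: the
# configurator (see BaseConfigurator below) wraps its config in a
# ConvertingDict, and values are converted lazily on access, so an
# 'ext://' string resolves to the named object the first time it is
# looked up (and the converted result is cached back into the container).
#
#     >>> import sys
#     >>> cfg = BaseConfigurator({'stream': 'ext://sys.stderr'})
#     >>> cfg.config['stream'] is sys.stderr
#     True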
class ConvertingDict(dict): """A converting dictionary wrapper.""" def __getitem__(self, key): value = dict.__getitem__(self, key) result = self.configurator.convert(value) #If the converted value is different, save for next time if value is not result: self[key] = result if type(result) in (ConvertingDict, ConvertingList, ConvertingTuple): result.parent = self result.key = key return result def get(self, key, default=None): value = dict.get(self, key, default) result = self.configurator.convert(value) #If the converted value is different, save for next time if value is not result: self[key] = result if type(result) in (ConvertingDict, ConvertingList, ConvertingTuple): result.parent = self result.key = key return result def pop(self, key, default=None): value = dict.pop(self, key, default) result = self.configurator.convert(value) if value is not result: if type(result) in (ConvertingDict, ConvertingList, ConvertingTuple): result.parent = self result.key = key return result class ConvertingList(list): """A converting list wrapper.""" def __getitem__(self, key): value = list.__getitem__(self, key) result = self.configurator.convert(value) #If the converted value is different, save for next time if value is not result: self[key] = result if type(result) in (ConvertingDict, ConvertingList, ConvertingTuple): result.parent = self result.key = key return result def pop(self, idx=-1): value = list.pop(self, idx) result = self.configurator.convert(value) if value is not result: if type(result) in (ConvertingDict, ConvertingList, ConvertingTuple): result.parent = self return result class ConvertingTuple(tuple): """A converting tuple wrapper.""" def __getitem__(self, key): value = tuple.__getitem__(self, key) result = self.configurator.convert(value) if value is not result: if type(result) in (ConvertingDict, ConvertingList, ConvertingTuple): result.parent = self result.key = key return result class BaseConfigurator(object): """ The configurator base class which defines some useful defaults. """ CONVERT_PATTERN = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$') WORD_PATTERN = re.compile(r'^\s*(\w+)\s*') DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*') INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*') DIGIT_PATTERN = re.compile(r'^\d+$') value_converters = { 'ext' : 'ext_convert', 'cfg' : 'cfg_convert', } # We might want to use a different one, e.g. importlib importer = staticmethod(__import__) def __init__(self, config): self.config = ConvertingDict(config) self.config.configurator = self def resolve(self, s): """ Resolve strings to objects using standard import and attribute syntax. """ name = s.split('.') used = name.pop(0) try: found = self.importer(used) for frag in name: used += '.' 
+ frag try: found = getattr(found, frag) except AttributeError: self.importer(used) found = getattr(found, frag) return found except ImportError: e, tb = sys.exc_info()[1:] v = ValueError('Cannot resolve %r: %s' % (s, e)) v.__cause__, v.__traceback__ = e, tb raise v def ext_convert(self, value): """Default converter for the ext:// protocol.""" return self.resolve(value) def cfg_convert(self, value): """Default converter for the cfg:// protocol.""" rest = value m = self.WORD_PATTERN.match(rest) if m is None: raise ValueError("Unable to convert %r" % value) else: rest = rest[m.end():] d = self.config[m.groups()[0]] #print d, rest while rest: m = self.DOT_PATTERN.match(rest) if m: d = d[m.groups()[0]] else: m = self.INDEX_PATTERN.match(rest) if m: idx = m.groups()[0] if not self.DIGIT_PATTERN.match(idx): d = d[idx] else: try: n = int(idx) # try as number first (most likely) d = d[n] except TypeError: d = d[idx] if m: rest = rest[m.end():] else: raise ValueError('Unable to convert ' '%r at %r' % (value, rest)) #rest should be empty return d def convert(self, value): """ Convert values to an appropriate type. dicts, lists and tuples are replaced by their converting alternatives. Strings are checked to see if they have a conversion format and are converted if they do. """ if not isinstance(value, ConvertingDict) and isinstance(value, dict): value = ConvertingDict(value) value.configurator = self elif not isinstance(value, ConvertingList) and isinstance(value, list): value = ConvertingList(value) value.configurator = self elif not isinstance(value, ConvertingTuple) and\ isinstance(value, tuple): value = ConvertingTuple(value) value.configurator = self elif isinstance(value, string_types): m = self.CONVERT_PATTERN.match(value) if m: d = m.groupdict() prefix = d['prefix'] converter = self.value_converters.get(prefix, None) if converter: suffix = d['suffix'] converter = getattr(self, converter) value = converter(suffix) return value def configure_custom(self, config): """Configure an object with a user-supplied factory.""" c = config.pop('()') if not callable(c): c = self.resolve(c) props = config.pop('.', None) # Check for valid identifiers kwargs = dict([(k, config[k]) for k in config if valid_ident(k)]) result = c(**kwargs) if props: for name, value in props.items(): setattr(result, name, value) return result def as_tuple(self, value): """Utility function which converts lists to tuples.""" if isinstance(value, list): value = tuple(value) return value
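# --- Illustrative aside, not part of the vendored distlib source ---
# A companion sketch for the cfg:// protocol implemented by cfg_convert()
# above: the suffix is a dotted (or indexed) path that is evaluated
# against the configuration itself.
#
#     >>> cfg = BaseConfigurator({'root': {'level': 'DEBUG'}})
#     >>> cfg.convert('cfg://root.level')
#     'DEBUG'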
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/database.py

# -*- coding: utf-8 -*- # # Copyright (C) 2012-2017 The Python Software Foundation. # See LICENSE.txt and CONTRIBUTORS.txt. # """PEP 376 implementation.""" from __future__ import unicode_literals import base64 import codecs import contextlib import hashlib import logging import os import posixpath import sys import zipimport from . import DistlibException, resources from .compat import StringIO from .version import get_scheme, UnsupportedVersionError from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME) from .util import (parse_requirement, cached_property, parse_name_and_version, read_exports, write_exports, CSVReader, CSVWriter) __all__ = ['Distribution', 'BaseInstalledDistribution', 'InstalledDistribution', 'EggInfoDistribution', 'DistributionPath'] logger = logging.getLogger(__name__) EXPORTS_FILENAME = 'pydist-exports.json' COMMANDS_FILENAME = 'pydist-commands.json' DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', 'RESOURCES', EXPORTS_FILENAME, 'SHARED') DISTINFO_EXT = '.dist-info' class _Cache(object): """ A simple cache mapping names and .dist-info paths to distributions """ def __init__(self): """ Initialise an instance. There is normally one for each DistributionPath. """ self.name = {} self.path = {} self.generated = False def clear(self): """ Clear the cache, setting it to its initial state. """ self.name.clear() self.path.clear() self.generated = False def add(self, dist): """ Add a distribution to the cache. :param dist: The distribution to add. """ if dist.path not in self.path: self.path[dist.path] = dist self.name.setdefault(dist.key, []).append(dist) class DistributionPath(object): """ Represents a set of distributions installed on a path (typically sys.path). """ def __init__(self, path=None, include_egg=False): """ Create an instance from a path, optionally including legacy (distutils/ setuptools/distribute) distributions. :param path: The path to use, as a list of directories. If not specified, sys.path is used. :param include_egg: If True, this instance will look for and return legacy distributions as well as those based on PEP 376.
""" if path is None: path = sys.path self.path = path self._include_dist = True self._include_egg = include_egg self._cache = _Cache() self._cache_egg = _Cache() self._cache_enabled = True self._scheme = get_scheme('default') def _get_cache_enabled(self): return self._cache_enabled def _set_cache_enabled(self, value): self._cache_enabled = value cache_enabled = property(_get_cache_enabled, _set_cache_enabled) def clear_cache(self): """ Clears the internal cache. """ self._cache.clear() self._cache_egg.clear() def _yield_distributions(self): """ Yield .dist-info and/or .egg(-info) distributions. """ # We need to check if we've seen some resources already, because on # some Linux systems (e.g. some Debian/Ubuntu variants) there are # symlinks which alias other files in the environment. seen = set() for path in self.path: finder = resources.finder_for_path(path) if finder is None: continue r = finder.find('') if not r or not r.is_container: continue rset = sorted(r.resources) for entry in rset: r = finder.find(entry) if not r or r.path in seen: continue if self._include_dist and entry.endswith(DISTINFO_EXT): possible_filenames = [METADATA_FILENAME, WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME] for metadata_filename in possible_filenames: metadata_path = posixpath.join(entry, metadata_filename) pydist = finder.find(metadata_path) if pydist: break else: continue with contextlib.closing(pydist.as_stream()) as stream: metadata = Metadata(fileobj=stream, scheme='legacy') logger.debug('Found %s', r.path) seen.add(r.path) yield new_dist_class(r.path, metadata=metadata, env=self) elif self._include_egg and entry.endswith(('.egg-info', '.egg')): logger.debug('Found %s', r.path) seen.add(r.path) yield old_dist_class(r.path, self) def _generate_cache(self): """ Scan the path for distributions and populate the cache with those that are found. """ gen_dist = not self._cache.generated gen_egg = self._include_egg and not self._cache_egg.generated if gen_dist or gen_egg: for dist in self._yield_distributions(): if isinstance(dist, InstalledDistribution): self._cache.add(dist) else: self._cache_egg.add(dist) if gen_dist: self._cache.generated = True if gen_egg: self._cache_egg.generated = True @classmethod def distinfo_dirname(cls, name, version): """ The *name* and *version* parameters are converted into their filename-escaped form, i.e. any ``'-'`` characters are replaced with ``'_'`` other than the one in ``'dist-info'`` and the one separating the name from the version number. :parameter name: is converted to a standard distribution name by replacing any runs of non- alphanumeric characters with a single ``'-'``. :type name: string :parameter version: is converted to a standard version string. Spaces become dots, and all other non-alphanumeric characters (except dots) become dashes, with runs of multiple dashes condensed to a single dash. :type version: string :returns: directory name :rtype: string""" name = name.replace('-', '_') return '-'.join([name, version]) + DISTINFO_EXT def get_distributions(self): """ Provides an iterator that looks for distributions and returns :class:`InstalledDistribution` or :class:`EggInfoDistribution` instances for each one of them. 
:rtype: iterator of :class:`InstalledDistribution` and :class:`EggInfoDistribution` instances """ if not self._cache_enabled: for dist in self._yield_distributions(): yield dist else: self._generate_cache() for dist in self._cache.path.values(): yield dist if self._include_egg: for dist in self._cache_egg.path.values(): yield dist def get_distribution(self, name): """ Looks for a named distribution on the path. This function only returns the first result found, as no more than one value is expected. If nothing is found, ``None`` is returned. :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` or ``None`` """ result = None name = name.lower() if not self._cache_enabled: for dist in self._yield_distributions(): if dist.key == name: result = dist break else: self._generate_cache() if name in self._cache.name: result = self._cache.name[name][0] elif self._include_egg and name in self._cache_egg.name: result = self._cache_egg.name[name][0] return result def provides_distribution(self, name, version=None): """ Iterates over all distributions to find which distributions provide *name*. If a *version* is provided, it will be used to filter the results. This function only returns the first result found, since no more than one values are expected. If the directory is not found, returns ``None``. :parameter version: a version specifier that indicates the version required, conforming to the format in ``PEP-345`` :type name: string :type version: string """ matcher = None if version is not None: try: matcher = self._scheme.matcher('%s (%s)' % (name, version)) except ValueError: raise DistlibException('invalid name or version: %r, %r' % (name, version)) for dist in self.get_distributions(): # We hit a problem on Travis where enum34 was installed and doesn't # have a provides attribute ... if not hasattr(dist, 'provides'): logger.debug('No "provides": %s', dist) else: provided = dist.provides for p in provided: p_name, p_ver = parse_name_and_version(p) if matcher is None: if p_name == name: yield dist break else: if p_name == name and matcher.match(p_ver): yield dist break def get_file_path(self, name, relative_path): """ Return the path to a resource file. """ dist = self.get_distribution(name) if dist is None: raise LookupError('no distribution named %r found' % name) return dist.get_resource_path(relative_path) def get_exported_entries(self, category, name=None): """ Return all of the exported entries in a particular category. :param category: The category to search for entries. :param name: If specified, only entries with that name are returned. """ for dist in self.get_distributions(): r = dist.exports if category in r: d = r[category] if name is not None: if name in d: yield d[name] else: for v in d.values(): yield v class Distribution(object): """ A base class for distributions, whether installed or from indexes. Either way, it must have some metadata, so that's all that's needed for construction. """ build_time_dependency = False """ Set to True if it's known to be only a build-time dependency (i.e. not needed after installation). """ requested = False """A boolean that indicates whether the ``REQUESTED`` metadata file is present (in other words, whether the package was installed by user request or it was installed as a dependency).""" def __init__(self, metadata): """ Initialise an instance. :param metadata: The instance of :class:`Metadata` describing this distribution. 
""" self.metadata = metadata self.name = metadata.name self.key = self.name.lower() # for case-insensitive comparisons self.version = metadata.version self.locator = None self.digest = None self.extras = None # additional features requested self.context = None # environment marker overrides self.download_urls = set() self.digests = {} @property def source_url(self): """ The source archive download URL for this distribution. """ return self.metadata.source_url download_url = source_url # Backward compatibility @property def name_and_version(self): """ A utility property which displays the name and version in parentheses. """ return '%s (%s)' % (self.name, self.version) @property def provides(self): """ A set of distribution names and versions provided by this distribution. :return: A set of "name (version)" strings. """ plist = self.metadata.provides s = '%s (%s)' % (self.name, self.version) if s not in plist: plist.append(s) return plist def _get_requirements(self, req_attr): md = self.metadata logger.debug('Getting requirements from metadata %r', md.todict()) reqts = getattr(md, req_attr) return set(md.get_requirements(reqts, extras=self.extras, env=self.context)) @property def run_requires(self): return self._get_requirements('run_requires') @property def meta_requires(self): return self._get_requirements('meta_requires') @property def build_requires(self): return self._get_requirements('build_requires') @property def test_requires(self): return self._get_requirements('test_requires') @property def dev_requires(self): return self._get_requirements('dev_requires') def matches_requirement(self, req): """ Say if this instance matches (fulfills) a requirement. :param req: The requirement to match. :rtype req: str :return: True if it matches, else False. """ # Requirement may contain extras - parse to lose those # from what's passed to the matcher r = parse_requirement(req) scheme = get_scheme(self.metadata.scheme) try: matcher = scheme.matcher(r.requirement) except UnsupportedVersionError: # XXX compat-mode if cannot read the version logger.warning('could not read version %r - using name only', req) name = req.split()[0] matcher = scheme.matcher(name) name = matcher.key # case-insensitive result = False for p in self.provides: p_name, p_ver = parse_name_and_version(p) if p_name != name: continue try: result = matcher.match(p_ver) break except UnsupportedVersionError: pass return result def __repr__(self): """ Return a textual representation of this instance, """ if self.source_url: suffix = ' [%s]' % self.source_url else: suffix = '' return '<Distribution %s (%s)%s>' % (self.name, self.version, suffix) def __eq__(self, other): """ See if this distribution is the same as another. :param other: The distribution to compare with. To be equal to one another. distributions must have the same type, name, version and source_url. :return: True if it is the same, else False. """ if type(other) is not type(self): result = False else: result = (self.name == other.name and self.version == other.version and self.source_url == other.source_url) return result def __hash__(self): """ Compute hash in a way which matches the equality test. """ return hash(self.name) + hash(self.version) + hash(self.source_url) class BaseInstalledDistribution(Distribution): """ This is the base class for installed distributions (whether PEP 376 or legacy). """ hasher = None def __init__(self, metadata, path, env=None): """ Initialise an instance. 
:param metadata: An instance of :class:`Metadata` which describes the distribution. This will normally have been initialised from a metadata file in the ``path``. :param path: The path of the ``.dist-info`` or ``.egg-info`` directory for the distribution. :param env: This is normally the :class:`DistributionPath` instance where this distribution was found. """ super(BaseInstalledDistribution, self).__init__(metadata) self.path = path self.dist_path = env def get_hash(self, data, hasher=None): """ Get the hash of some data, using a particular hash algorithm, if specified. :param data: The data to be hashed. :type data: bytes :param hasher: The name of a hash implementation, supported by hashlib, or ``None``. Examples of valid values are ``'sha1'``, ``'sha224'``, ``'sha384'``, ``'sha256'``, ``'md5'`` and ``'sha512'``. If no hasher is specified, the ``hasher`` attribute of the :class:`InstalledDistribution` instance is used. If the hasher is determined to be ``None``, MD5 is used as the hashing algorithm. :returns: The hash of the data. If a hasher was explicitly specified, the returned hash will be prefixed with the specified hasher followed by '='. :rtype: str """ if hasher is None: hasher = self.hasher if hasher is None: hasher = hashlib.md5 prefix = '' else: hasher = getattr(hashlib, hasher) prefix = '%s=' % self.hasher digest = hasher(data).digest() digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii') return '%s%s' % (prefix, digest) class InstalledDistribution(BaseInstalledDistribution): """ Created with the *path* of the ``.dist-info`` directory provided to the constructor. It reads the metadata contained in ``pydist.json`` when it is instantiated, or uses a passed-in Metadata instance (useful for when dry-run mode is being used). """ hasher = 'sha256' def __init__(self, path, metadata=None, env=None): self.modules = [] self.finder = finder = resources.finder_for_path(path) if finder is None: raise ValueError('finder unavailable for %s' % path) if env and env._cache_enabled and path in env._cache.path: metadata = env._cache.path[path].metadata elif metadata is None: r = finder.find(METADATA_FILENAME) # Temporary - for Wheel 0.23 support if r is None: r = finder.find(WHEEL_METADATA_FILENAME) # Temporary - for legacy support if r is None: r = finder.find('METADATA') if r is None: raise ValueError('no %s found in %s' % (METADATA_FILENAME, path)) with contextlib.closing(r.as_stream()) as stream: metadata = Metadata(fileobj=stream, scheme='legacy') super(InstalledDistribution, self).__init__(metadata, path, env) if env and env._cache_enabled: env._cache.add(self) r = finder.find('REQUESTED') self.requested = r is not None p = os.path.join(path, 'top_level.txt') if os.path.exists(p): with open(p, 'rb') as f: data = f.read() self.modules = data.splitlines() def __repr__(self): return '<InstalledDistribution %r %s at %r>' % ( self.name, self.version, self.path) def __str__(self): return "%s %s" % (self.name, self.version) def _get_records(self): """ Get the list of installed files for the distribution. :return: A list of tuples of path, hash and size. Note that hash and size might be ``None`` for some entries. The path is exactly as stored in the file (which is as in PEP 376).
""" results = [] r = self.get_distinfo_resource('RECORD') with contextlib.closing(r.as_stream()) as stream: with CSVReader(stream=stream) as record_reader: # Base location is parent dir of .dist-info dir #base_location = os.path.dirname(self.path) #base_location = os.path.abspath(base_location) for row in record_reader: missing = [None for i in range(len(row), 3)] path, checksum, size = row + missing #if not os.path.isabs(path): # path = path.replace('/', os.sep) # path = os.path.join(base_location, path) results.append((path, checksum, size)) return results @cached_property def exports(self): """ Return the information exported by this distribution. :return: A dictionary of exports, mapping an export category to a dict of :class:`ExportEntry` instances describing the individual export entries, and keyed by name. """ result = {} r = self.get_distinfo_resource(EXPORTS_FILENAME) if r: result = self.read_exports() return result def read_exports(self): """ Read exports data from a file in .ini format. :return: A dictionary of exports, mapping an export category to a list of :class:`ExportEntry` instances describing the individual export entries. """ result = {} r = self.get_distinfo_resource(EXPORTS_FILENAME) if r: with contextlib.closing(r.as_stream()) as stream: result = read_exports(stream) return result def write_exports(self, exports): """ Write a dictionary of exports to a file in .ini format. :param exports: A dictionary of exports, mapping an export category to a list of :class:`ExportEntry` instances describing the individual export entries. """ rf = self.get_distinfo_file(EXPORTS_FILENAME) with open(rf, 'w') as f: write_exports(exports, f) def get_resource_path(self, relative_path): """ NOTE: This API may change in the future. Return the absolute path to a resource file with the given relative path. :param relative_path: The path, relative to .dist-info, of the resource of interest. :return: The absolute path where the resource is to be found. """ r = self.get_distinfo_resource('RESOURCES') with contextlib.closing(r.as_stream()) as stream: with CSVReader(stream=stream) as resources_reader: for relative, destination in resources_reader: if relative == relative_path: return destination raise KeyError('no resource file with relative path %r ' 'is installed' % relative_path) def list_installed_files(self): """ Iterates over the ``RECORD`` entries and returns a tuple ``(path, hash, size)`` for each line. :returns: iterator of (path, hash, size) """ for result in self._get_records(): yield result def write_installed_files(self, paths, prefix, dry_run=False): """ Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any existing ``RECORD`` file is silently overwritten. prefix is used to determine when to write absolute paths. 
""" prefix = os.path.join(prefix, '') base = os.path.dirname(self.path) base_under_prefix = base.startswith(prefix) base = os.path.join(base, '') record_path = self.get_distinfo_file('RECORD') logger.info('creating %s', record_path) if dry_run: return None with CSVWriter(record_path) as writer: for path in paths: if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): # do not put size and hash, as in PEP-376 hash_value = size = '' else: size = '%d' % os.path.getsize(path) with open(path, 'rb') as fp: hash_value = self.get_hash(fp.read()) if path.startswith(base) or (base_under_prefix and path.startswith(prefix)): path = os.path.relpath(path, base) writer.writerow((path, hash_value, size)) # add the RECORD file itself if record_path.startswith(base): record_path = os.path.relpath(record_path, base) writer.writerow((record_path, '', '')) return record_path def check_installed_files(self): """ Checks that the hashes and sizes of the files in ``RECORD`` are matched by the files themselves. Returns a (possibly empty) list of mismatches. Each entry in the mismatch list will be a tuple consisting of the path, 'exists', 'size' or 'hash' according to what didn't match (existence is checked first, then size, then hash), the expected value and the actual value. """ mismatches = [] base = os.path.dirname(self.path) record_path = self.get_distinfo_file('RECORD') for path, hash_value, size in self.list_installed_files(): if not os.path.isabs(path): path = os.path.join(base, path) if path == record_path: continue if not os.path.exists(path): mismatches.append((path, 'exists', True, False)) elif os.path.isfile(path): actual_size = str(os.path.getsize(path)) if size and actual_size != size: mismatches.append((path, 'size', size, actual_size)) elif hash_value: if '=' in hash_value: hasher = hash_value.split('=', 1)[0] else: hasher = None with open(path, 'rb') as f: actual_hash = self.get_hash(f.read(), hasher) if actual_hash != hash_value: mismatches.append((path, 'hash', hash_value, actual_hash)) return mismatches @cached_property def shared_locations(self): """ A dictionary of shared locations whose keys are in the set 'prefix', 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. The corresponding value is the absolute path of that category for this distribution, and takes into account any paths selected by the user at installation time (e.g. via command-line arguments). In the case of the 'namespace' key, this would be a list of absolute paths for the roots of namespace packages in this distribution. The first time this property is accessed, the relevant information is read from the SHARED file in the .dist-info directory. """ result = {} shared_path = os.path.join(self.path, 'SHARED') if os.path.isfile(shared_path): with codecs.open(shared_path, 'r', encoding='utf-8') as f: lines = f.read().splitlines() for line in lines: key, value = line.split('=', 1) if key == 'namespace': result.setdefault(key, []).append(value) else: result[key] = value return result def write_shared_locations(self, paths, dry_run=False): """ Write shared location information to the SHARED file in .dist-info. :param paths: A dictionary as described in the documentation for :meth:`shared_locations`. :param dry_run: If True, the action is logged but no file is actually written. :return: The path of the file written to. 
""" shared_path = os.path.join(self.path, 'SHARED') logger.info('creating %s', shared_path) if dry_run: return None lines = [] for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): path = paths[key] if os.path.isdir(paths[key]): lines.append('%s=%s' % (key, path)) for ns in paths.get('namespace', ()): lines.append('namespace=%s' % ns) with codecs.open(shared_path, 'w', encoding='utf-8') as f: f.write('\n'.join(lines)) return shared_path def get_distinfo_resource(self, path): if path not in DIST_FILES: raise DistlibException('invalid path for a dist-info file: ' '%r at %r' % (path, self.path)) finder = resources.finder_for_path(self.path) if finder is None: raise DistlibException('Unable to get a finder for %s' % self.path) return finder.find(path) def get_distinfo_file(self, path): """ Returns a path located under the ``.dist-info`` directory. Returns a string representing the path. :parameter path: a ``'/'``-separated path relative to the ``.dist-info`` directory or an absolute path; If *path* is an absolute path and doesn't start with the ``.dist-info`` directory path, a :class:`DistlibException` is raised :type path: str :rtype: str """ # Check if it is an absolute path # XXX use relpath, add tests if path.find(os.sep) >= 0: # it's an absolute path? distinfo_dirname, path = path.split(os.sep)[-2:] if distinfo_dirname != self.path.split(os.sep)[-1]: raise DistlibException( 'dist-info file %r does not belong to the %r %s ' 'distribution' % (path, self.name, self.version)) # The file must be relative if path not in DIST_FILES: raise DistlibException('invalid path for a dist-info file: ' '%r at %r' % (path, self.path)) return os.path.join(self.path, path) def list_distinfo_files(self): """ Iterates over the ``RECORD`` entries and returns paths for each line if the path is pointing to a file located in the ``.dist-info`` directory or one of its subdirectories. :returns: iterator of paths """ base = os.path.dirname(self.path) for path, checksum, size in self._get_records(): # XXX add separator or use real relpath algo if not os.path.isabs(path): path = os.path.join(base, path) if path.startswith(self.path): yield path def __eq__(self, other): return (isinstance(other, InstalledDistribution) and self.path == other.path) # See http://docs.python.org/reference/datamodel#object.__hash__ __hash__ = object.__hash__ class EggInfoDistribution(BaseInstalledDistribution): """Created with the *path* of the ``.egg-info`` directory or file provided to the constructor. It reads the metadata contained in the file itself, or if the given path happens to be a directory, the metadata is read from the file ``PKG-INFO`` under that directory.""" requested = True # as we have no way of knowing, assume it was shared_locations = {} def __init__(self, path, env=None): def set_name_and_version(s, n, v): s.name = n s.key = n.lower() # for case-insensitive comparisons s.version = v self.path = path self.dist_path = env if env and env._cache_enabled and path in env._cache_egg.path: metadata = env._cache_egg.path[path].metadata set_name_and_version(self, metadata.name, metadata.version) else: metadata = self._get_metadata(path) # Need to be set before caching set_name_and_version(self, metadata.name, metadata.version) if env and env._cache_enabled: env._cache_egg.add(self) super(EggInfoDistribution, self).__init__(metadata, path, env) def _get_metadata(self, path): requires = None def parse_requires_data(data): """Create a list of dependencies from a requires.txt file. 
*data*: the contents of a setuptools-produced requires.txt file. """ reqs = [] lines = data.splitlines() for line in lines: line = line.strip() if line.startswith('['): logger.warning('Unexpected line: quitting requirement scan: %r', line) break r = parse_requirement(line) if not r: logger.warning('Not recognised as a requirement: %r', line) continue if r.extras: logger.warning('extra requirements in requires.txt are ' 'not supported') if not r.constraints: reqs.append(r.name) else: cons = ', '.join('%s%s' % c for c in r.constraints) reqs.append('%s (%s)' % (r.name, cons)) return reqs def parse_requires_path(req_path): """Create a list of dependencies from a requires.txt file. *req_path*: the path to a setuptools-produced requires.txt file. """ reqs = [] try: with codecs.open(req_path, 'r', 'utf-8') as fp: reqs = parse_requires_data(fp.read()) except IOError: pass return reqs tl_path = tl_data = None if path.endswith('.egg'): if os.path.isdir(path): p = os.path.join(path, 'EGG-INFO') meta_path = os.path.join(p, 'PKG-INFO') metadata = Metadata(path=meta_path, scheme='legacy') req_path = os.path.join(p, 'requires.txt') tl_path = os.path.join(p, 'top_level.txt') requires = parse_requires_path(req_path) else: # FIXME handle the case where zipfile is not available zipf = zipimport.zipimporter(path) fileobj = StringIO( zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8')) metadata = Metadata(fileobj=fileobj, scheme='legacy') try: data = zipf.get_data('EGG-INFO/requires.txt') tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8') requires = parse_requires_data(data.decode('utf-8')) except IOError: requires = None elif path.endswith('.egg-info'): if os.path.isdir(path): req_path = os.path.join(path, 'requires.txt') requires = parse_requires_path(req_path) path = os.path.join(path, 'PKG-INFO') tl_path = os.path.join(path, 'top_level.txt') metadata = Metadata(path=path, scheme='legacy') else: raise DistlibException('path must end with .egg-info or .egg, ' 'got %r' % path) if requires: metadata.add_requirements(requires) # look for top-level modules in top_level.txt, if present if tl_data is None: if tl_path is not None and os.path.exists(tl_path): with open(tl_path, 'rb') as f: tl_data = f.read().decode('utf-8') if not tl_data: tl_data = [] else: tl_data = tl_data.splitlines() self.modules = tl_data return metadata def __repr__(self): return '<EggInfoDistribution %r %s at %r>' % ( self.name, self.version, self.path) def __str__(self): return "%s %s" % (self.name, self.version) def check_installed_files(self): """ Checks that the hashes and sizes of the files in ``RECORD`` are matched by the files themselves. Returns a (possibly empty) list of mismatches. Each entry in the mismatch list will be a tuple consisting of the path, 'exists', 'size' or 'hash' according to what didn't match (existence is checked first, then size, then hash), the expected value and the actual value. """ mismatches = [] record_path = os.path.join(self.path, 'installed-files.txt') if os.path.exists(record_path): for path, _, _ in self.list_installed_files(): if path == record_path: continue if not os.path.exists(path): mismatches.append((path, 'exists', True, False)) return mismatches def list_installed_files(self): """ Iterates over the ``installed-files.txt`` entries and returns a tuple ``(path, hash, size)`` for each line. 
:returns: a list of (path, hash, size) """ def _md5(path): f = open(path, 'rb') try: content = f.read() finally: f.close() return hashlib.md5(content).hexdigest() def _size(path): return os.stat(path).st_size record_path = os.path.join(self.path, 'installed-files.txt') result = [] if os.path.exists(record_path): with codecs.open(record_path, 'r', encoding='utf-8') as f: for line in f: line = line.strip() p = os.path.normpath(os.path.join(self.path, line)) # "./" is present as a marker between installed files # and installation metadata files if not os.path.exists(p): logger.warning('Non-existent file: %s', p) if p.endswith(('.pyc', '.pyo')): continue #otherwise fall through and fail if not os.path.isdir(p): result.append((p, _md5(p), _size(p))) result.append((record_path, None, None)) return result def list_distinfo_files(self, absolute=False): """ Iterates over the ``installed-files.txt`` entries and returns paths for each line if the path is pointing to a file located in the ``.egg-info`` directory or one of its subdirectories. :parameter absolute: If *absolute* is ``True``, each returned path is transformed into a local absolute path. Otherwise the raw value from ``installed-files.txt`` is returned. :type absolute: boolean :returns: iterator of paths """ record_path = os.path.join(self.path, 'installed-files.txt') if os.path.exists(record_path): skip = True with codecs.open(record_path, 'r', encoding='utf-8') as f: for line in f: line = line.strip() if line == './': skip = False continue if not skip: p = os.path.normpath(os.path.join(self.path, line)) if p.startswith(self.path): if absolute: yield p else: yield line def __eq__(self, other): return (isinstance(other, EggInfoDistribution) and self.path == other.path) # See http://docs.python.org/reference/datamodel#object.__hash__ __hash__ = object.__hash__ new_dist_class = InstalledDistribution old_dist_class = EggInfoDistribution class DependencyGraph(object): """ Represents a dependency graph between distributions. The dependency relationships are stored in an ``adjacency_list`` that maps distributions to a list of ``(other, label)`` tuples where ``other`` is a distribution and the edge is labeled with ``label`` (i.e. the version specifier, if such was provided). Also, for more efficient traversal, for every distribution ``x``, a list of predecessors is kept in ``reverse_list[x]``. An edge from distribution ``a`` to distribution ``b`` means that ``a`` depends on ``b``. If any missing dependencies are found, they are stored in ``missing``, which is a dictionary that maps distributions to a list of requirements that were not provided by any other distributions. """ def __init__(self): self.adjacency_list = {} self.reverse_list = {} self.missing = {} def add_distribution(self, distribution): """Add the *distribution* to the graph. :type distribution: :class:`distutils2.database.InstalledDistribution` or :class:`distutils2.database.EggInfoDistribution` """ self.adjacency_list[distribution] = [] self.reverse_list[distribution] = [] #self.missing[distribution] = [] def add_edge(self, x, y, label=None): """Add an edge from distribution *x* to distribution *y* with the given *label*. 
:type x: :class:`distutils2.database.InstalledDistribution` or :class:`distutils2.database.EggInfoDistribution` :type y: :class:`distutils2.database.InstalledDistribution` or :class:`distutils2.database.EggInfoDistribution` :type label: ``str`` or ``None`` """ self.adjacency_list[x].append((y, label)) # multiple edges are allowed, so be careful if x not in self.reverse_list[y]: self.reverse_list[y].append(x) def add_missing(self, distribution, requirement): """ Add a missing *requirement* for the given *distribution*. :type distribution: :class:`distutils2.database.InstalledDistribution` or :class:`distutils2.database.EggInfoDistribution` :type requirement: ``str`` """ logger.debug('%s missing %r', distribution, requirement) self.missing.setdefault(distribution, []).append(requirement) def _repr_dist(self, dist): return '%s %s' % (dist.name, dist.version) def repr_node(self, dist, level=1): """Prints only a subgraph""" output = [self._repr_dist(dist)] for other, label in self.adjacency_list[dist]: dist = self._repr_dist(other) if label is not None: dist = '%s [%s]' % (dist, label) output.append(' ' * level + str(dist)) suboutput = self.repr_node(other, level + 1) subs = suboutput.split('\n') output.extend(subs[1:]) return '\n'.join(output) def to_dot(self, f, skip_disconnected=True): """Writes a DOT output for the graph to the provided file *f*. If *skip_disconnected* is set to ``True``, then all distributions that are not dependent on any other distribution are skipped. :type f: has to support ``file``-like operations :type skip_disconnected: ``bool`` """ disconnected = [] f.write("digraph dependencies {\n") for dist, adjs in self.adjacency_list.items(): if len(adjs) == 0 and not skip_disconnected: disconnected.append(dist) for other, label in adjs: if not label is None: f.write('"%s" -> "%s" [label="%s"]\n' % (dist.name, other.name, label)) else: f.write('"%s" -> "%s"\n' % (dist.name, other.name)) if not skip_disconnected and len(disconnected) > 0: f.write('subgraph disconnected {\n') f.write('label = "Disconnected"\n') f.write('bgcolor = red\n') for dist in disconnected: f.write('"%s"' % dist.name) f.write('\n') f.write('}\n') f.write('}\n') def topological_sort(self): """ Perform a topological sort of the graph. :return: A tuple, the first element of which is a topologically sorted list of distributions, and the second element of which is a list of distributions that cannot be sorted because they have circular dependencies and so form a cycle. """ result = [] # Make a shallow copy of the adjacency list alist = {} for k, v in self.adjacency_list.items(): alist[k] = v[:] while True: # See what we can remove in this run to_remove = [] for k, v in list(alist.items())[:]: if not v: to_remove.append(k) del alist[k] if not to_remove: # What's left in alist (if anything) is a cycle. break # Remove from the adjacency list of others for k, v in alist.items(): alist[k] = [(d, r) for d, r in v if d not in to_remove] logger.debug('Moving to result: %s', ['%s (%s)' % (d.name, d.version) for d in to_remove]) result.extend(to_remove) return result, list(alist.keys()) def __repr__(self): """Representation of the graph""" output = [] for dist, adjs in self.adjacency_list.items(): output.append(self.repr_node(dist)) return '\n'.join(output) def make_graph(dists, scheme='default'): """Makes a dependency graph from the given distributions. 
:parameter dists: a list of distributions :type dists: list of :class:`distutils2.database.InstalledDistribution` and :class:`distutils2.database.EggInfoDistribution` instances :rtype: a :class:`DependencyGraph` instance """ scheme = get_scheme(scheme) graph = DependencyGraph() provided = {} # maps names to lists of (version, dist) tuples # first, build the graph and find out what's provided for dist in dists: graph.add_distribution(dist) for p in dist.provides: name, version = parse_name_and_version(p) logger.debug('Add to provided: %s, %s, %s', name, version, dist) provided.setdefault(name, []).append((version, dist)) # now make the edges for dist in dists: requires = (dist.run_requires | dist.meta_requires | dist.build_requires | dist.dev_requires) for req in requires: try: matcher = scheme.matcher(req) except UnsupportedVersionError: # XXX compat-mode if cannot read the version logger.warning('could not read version %r - using name only', req) name = req.split()[0] matcher = scheme.matcher(name) name = matcher.key # case-insensitive matched = False if name in provided: for version, provider in provided[name]: try: match = matcher.match(version) except UnsupportedVersionError: match = False if match: graph.add_edge(dist, provider, req) matched = True break if not matched: graph.add_missing(dist, req) return graph def get_dependent_dists(dists, dist): """Recursively generate a list of distributions from *dists* that are dependent on *dist*. :param dists: a list of distributions :param dist: a distribution, member of *dists* for which we are interested """ if dist not in dists: raise DistlibException('given distribution %r is not a member ' 'of the list' % dist.name) graph = make_graph(dists) dep = [dist] # dependent distributions todo = graph.reverse_list[dist] # list of nodes we should inspect while todo: d = todo.pop() dep.append(d) for succ in graph.reverse_list[d]: if succ not in dep: todo.append(succ) dep.pop(0) # remove dist from dep, was there to prevent infinite loops return dep def get_required_dists(dists, dist): """Recursively generate a list of distributions from *dists* that are required by *dist*. :param dists: a list of distributions :param dist: a distribution, member of *dists* for which we are interested """ if dist not in dists: raise DistlibException('given distribution %r is not a member ' 'of the list' % dist.name) graph = make_graph(dists) req = [] # required distributions todo = graph.adjacency_list[dist] # list of nodes we should inspect while todo: d = todo.pop()[0] req.append(d) for pred in graph.adjacency_list[d]: if pred not in req: todo.append(pred) return req def make_dist(name, version, **kwargs): """ A convenience method for making a dist given just a name and version. 
""" summary = kwargs.pop('summary', 'Placeholder for summary') md = Metadata(**kwargs) md.name = name md.version = version md.summary = summary or 'Placeholder for summary' return Distribution(md) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/index.py�������������������������0000644�0001750�0001750�00000051112�00000000000�027600� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # # Copyright (C) 2013 Vinay Sajip. # Licensed to the Python Software Foundation under a contributor agreement. # See LICENSE.txt and CONTRIBUTORS.txt. # import hashlib import logging import os import shutil import subprocess import tempfile try: from threading import Thread except ImportError: from dummy_threading import Thread from . import DistlibException from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr, urlparse, build_opener, string_types) from .util import cached_property, zip_dir, ServerProxy logger = logging.getLogger(__name__) DEFAULT_INDEX = 'https://pypi.org/pypi' DEFAULT_REALM = 'pypi' class PackageIndex(object): """ This class represents a package index compatible with PyPI, the Python Package Index. """ boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$' def __init__(self, url=None): """ Initialise an instance. :param url: The URL of the index. If not specified, the URL for PyPI is used. 
""" self.url = url or DEFAULT_INDEX self.read_configuration() scheme, netloc, path, params, query, frag = urlparse(self.url) if params or query or frag or scheme not in ('http', 'https'): raise DistlibException('invalid repository: %s' % self.url) self.password_handler = None self.ssl_verifier = None self.gpg = None self.gpg_home = None with open(os.devnull, 'w') as sink: # Use gpg by default rather than gpg2, as gpg2 insists on # prompting for passwords for s in ('gpg', 'gpg2'): try: rc = subprocess.check_call([s, '--version'], stdout=sink, stderr=sink) if rc == 0: self.gpg = s break except OSError: pass def _get_pypirc_command(self): """ Get the distutils command for interacting with PyPI configurations. :return: the command. """ from distutils.core import Distribution from distutils.config import PyPIRCCommand d = Distribution() return PyPIRCCommand(d) def read_configuration(self): """ Read the PyPI access configuration as supported by distutils, getting PyPI to do the actual work. This populates ``username``, ``password``, ``realm`` and ``url`` attributes from the configuration. """ # get distutils to do the work c = self._get_pypirc_command() c.repository = self.url cfg = c._read_pypirc() self.username = cfg.get('username') self.password = cfg.get('password') self.realm = cfg.get('realm', 'pypi') self.url = cfg.get('repository', self.url) def save_configuration(self): """ Save the PyPI access configuration. You must have set ``username`` and ``password`` attributes before calling this method. Again, distutils is used to do the actual work. """ self.check_credentials() # get distutils to do the work c = self._get_pypirc_command() c._store_pypirc(self.username, self.password) def check_credentials(self): """ Check that ``username`` and ``password`` have been set, and raise an exception if not. """ if self.username is None or self.password is None: raise DistlibException('username and password must be set') pm = HTTPPasswordMgr() _, netloc, _, _, _, _ = urlparse(self.url) pm.add_password(self.realm, netloc, self.username, self.password) self.password_handler = HTTPBasicAuthHandler(pm) def register(self, metadata): """ Register a distribution on PyPI, using the provided metadata. :param metadata: A :class:`Metadata` instance defining at least a name and version number for the distribution to be registered. :return: The HTTP response received from PyPI upon submission of the request. """ self.check_credentials() metadata.validate() d = metadata.todict() d[':action'] = 'verify' request = self.encode_request(d.items(), []) response = self.send_request(request) d[':action'] = 'submit' request = self.encode_request(d.items(), []) return self.send_request(request) def _reader(self, name, stream, outbuf): """ Thread runner for reading lines of from a subprocess into a buffer. :param name: The logical name of the stream (used for logging only). :param stream: The stream to read from. This will typically a pipe connected to the output stream of a subprocess. :param outbuf: The list to append the read lines to. """ while True: s = stream.readline() if not s: break s = s.decode('utf-8').rstrip() outbuf.append(s) logger.debug('%s: %s' % (name, s)) stream.close() def get_sign_command(self, filename, signer, sign_password, keystore=None): """ Return a suitable command for signing a file. :param filename: The pathname to the file to be signed. :param signer: The identifier of the signer of the file. :param sign_password: The passphrase for the signer's private key used for signing. 
:param keystore: The path to a directory which contains the keys used in verification. If not specified, the instance's ``gpg_home`` attribute is used instead. :return: The signing command as a list suitable to be passed to :class:`subprocess.Popen`. """ cmd = [self.gpg, '--status-fd', '2', '--no-tty'] if keystore is None: keystore = self.gpg_home if keystore: cmd.extend(['--homedir', keystore]) if sign_password is not None: cmd.extend(['--batch', '--passphrase-fd', '0']) td = tempfile.mkdtemp() sf = os.path.join(td, os.path.basename(filename) + '.asc') cmd.extend(['--detach-sign', '--armor', '--local-user', signer, '--output', sf, filename]) logger.debug('invoking: %s', ' '.join(cmd)) return cmd, sf def run_command(self, cmd, input_data=None): """ Run a command in a child process , passing it any input data specified. :param cmd: The command to run. :param input_data: If specified, this must be a byte string containing data to be sent to the child process. :return: A tuple consisting of the subprocess' exit code, a list of lines read from the subprocess' ``stdout``, and a list of lines read from the subprocess' ``stderr``. """ kwargs = { 'stdout': subprocess.PIPE, 'stderr': subprocess.PIPE, } if input_data is not None: kwargs['stdin'] = subprocess.PIPE stdout = [] stderr = [] p = subprocess.Popen(cmd, **kwargs) # We don't use communicate() here because we may need to # get clever with interacting with the command t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout)) t1.start() t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr)) t2.start() if input_data is not None: p.stdin.write(input_data) p.stdin.close() p.wait() t1.join() t2.join() return p.returncode, stdout, stderr def sign_file(self, filename, signer, sign_password, keystore=None): """ Sign a file. :param filename: The pathname to the file to be signed. :param signer: The identifier of the signer of the file. :param sign_password: The passphrase for the signer's private key used for signing. :param keystore: The path to a directory which contains the keys used in signing. If not specified, the instance's ``gpg_home`` attribute is used instead. :return: The absolute pathname of the file where the signature is stored. """ cmd, sig_file = self.get_sign_command(filename, signer, sign_password, keystore) rc, stdout, stderr = self.run_command(cmd, sign_password.encode('utf-8')) if rc != 0: raise DistlibException('sign command failed with error ' 'code %s' % rc) return sig_file def upload_file(self, metadata, filename, signer=None, sign_password=None, filetype='sdist', pyversion='source', keystore=None): """ Upload a release file to the index. :param metadata: A :class:`Metadata` instance defining at least a name and version number for the file to be uploaded. :param filename: The pathname of the file to be uploaded. :param signer: The identifier of the signer of the file. :param sign_password: The passphrase for the signer's private key used for signing. :param filetype: The type of the file being uploaded. This is the distutils command which produced that file, e.g. ``sdist`` or ``bdist_wheel``. :param pyversion: The version of Python which the release relates to. For code compatible with any Python, this would be ``source``, otherwise it would be e.g. ``3.2``. :param keystore: The path to a directory which contains the keys used in signing. If not specified, the instance's ``gpg_home`` attribute is used instead. :return: The HTTP response received from PyPI upon submission of the request. 
""" self.check_credentials() if not os.path.exists(filename): raise DistlibException('not found: %s' % filename) metadata.validate() d = metadata.todict() sig_file = None if signer: if not self.gpg: logger.warning('no signing program available - not signed') else: sig_file = self.sign_file(filename, signer, sign_password, keystore) with open(filename, 'rb') as f: file_data = f.read() md5_digest = hashlib.md5(file_data).hexdigest() sha256_digest = hashlib.sha256(file_data).hexdigest() d.update({ ':action': 'file_upload', 'protocol_version': '1', 'filetype': filetype, 'pyversion': pyversion, 'md5_digest': md5_digest, 'sha256_digest': sha256_digest, }) files = [('content', os.path.basename(filename), file_data)] if sig_file: with open(sig_file, 'rb') as f: sig_data = f.read() files.append(('gpg_signature', os.path.basename(sig_file), sig_data)) shutil.rmtree(os.path.dirname(sig_file)) request = self.encode_request(d.items(), files) return self.send_request(request) def upload_documentation(self, metadata, doc_dir): """ Upload documentation to the index. :param metadata: A :class:`Metadata` instance defining at least a name and version number for the documentation to be uploaded. :param doc_dir: The pathname of the directory which contains the documentation. This should be the directory that contains the ``index.html`` for the documentation. :return: The HTTP response received from PyPI upon submission of the request. """ self.check_credentials() if not os.path.isdir(doc_dir): raise DistlibException('not a directory: %r' % doc_dir) fn = os.path.join(doc_dir, 'index.html') if not os.path.exists(fn): raise DistlibException('not found: %r' % fn) metadata.validate() name, version = metadata.name, metadata.version zip_data = zip_dir(doc_dir).getvalue() fields = [(':action', 'doc_upload'), ('name', name), ('version', version)] files = [('content', name, zip_data)] request = self.encode_request(fields, files) return self.send_request(request) def get_verify_command(self, signature_filename, data_filename, keystore=None): """ Return a suitable command for verifying a file. :param signature_filename: The pathname to the file containing the signature. :param data_filename: The pathname to the file containing the signed data. :param keystore: The path to a directory which contains the keys used in verification. If not specified, the instance's ``gpg_home`` attribute is used instead. :return: The verifying command as a list suitable to be passed to :class:`subprocess.Popen`. """ cmd = [self.gpg, '--status-fd', '2', '--no-tty'] if keystore is None: keystore = self.gpg_home if keystore: cmd.extend(['--homedir', keystore]) cmd.extend(['--verify', signature_filename, data_filename]) logger.debug('invoking: %s', ' '.join(cmd)) return cmd def verify_signature(self, signature_filename, data_filename, keystore=None): """ Verify a signature for a file. :param signature_filename: The pathname to the file containing the signature. :param data_filename: The pathname to the file containing the signed data. :param keystore: The path to a directory which contains the keys used in verification. If not specified, the instance's ``gpg_home`` attribute is used instead. :return: True if the signature was verified, else False. 
""" if not self.gpg: raise DistlibException('verification unavailable because gpg ' 'unavailable') cmd = self.get_verify_command(signature_filename, data_filename, keystore) rc, stdout, stderr = self.run_command(cmd) if rc not in (0, 1): raise DistlibException('verify command failed with error ' 'code %s' % rc) return rc == 0 def download_file(self, url, destfile, digest=None, reporthook=None): """ This is a convenience method for downloading a file from an URL. Normally, this will be a file from the index, though currently no check is made for this (i.e. a file can be downloaded from anywhere). The method is just like the :func:`urlretrieve` function in the standard library, except that it allows digest computation to be done during download and checking that the downloaded data matched any expected value. :param url: The URL of the file to be downloaded (assumed to be available via an HTTP GET request). :param destfile: The pathname where the downloaded file is to be saved. :param digest: If specified, this must be a (hasher, value) tuple, where hasher is the algorithm used (e.g. ``'md5'``) and ``value`` is the expected value. :param reporthook: The same as for :func:`urlretrieve` in the standard library. """ if digest is None: digester = None logger.debug('No digest specified') else: if isinstance(digest, (list, tuple)): hasher, digest = digest else: hasher = 'md5' digester = getattr(hashlib, hasher)() logger.debug('Digest specified: %s' % digest) # The following code is equivalent to urlretrieve. # We need to do it this way so that we can compute the # digest of the file as we go. with open(destfile, 'wb') as dfp: # addinfourl is not a context manager on 2.x # so we have to use try/finally sfp = self.send_request(Request(url)) try: headers = sfp.info() blocksize = 8192 size = -1 read = 0 blocknum = 0 if "content-length" in headers: size = int(headers["Content-Length"]) if reporthook: reporthook(blocknum, blocksize, size) while True: block = sfp.read(blocksize) if not block: break read += len(block) dfp.write(block) if digester: digester.update(block) blocknum += 1 if reporthook: reporthook(blocknum, blocksize, size) finally: sfp.close() # check that we got the whole file, if we can if size >= 0 and read < size: raise DistlibException( 'retrieval incomplete: got only %d out of %d bytes' % (read, size)) # if we have a digest, it must match. if digester: actual = digester.hexdigest() if digest != actual: raise DistlibException('%s digest mismatch for %s: expected ' '%s, got %s' % (hasher, destfile, digest, actual)) logger.debug('Digest verified: %s', digest) def send_request(self, req): """ Send a standard library :class:`Request` to PyPI and return its response. :param req: The request to send. :return: The HTTP response from PyPI (a standard library HTTPResponse). """ handlers = [] if self.password_handler: handlers.append(self.password_handler) if self.ssl_verifier: handlers.append(self.ssl_verifier) opener = build_opener(*handlers) return opener.open(req) def encode_request(self, fields, files): """ Encode fields and files for posting to an HTTP server. :param fields: The fields to send as a list of (fieldname, value) tuples. :param files: The files to send as a list of (fieldname, filename, file_bytes) tuple. 
""" # Adapted from packaging, which in turn was adapted from # http://code.activestate.com/recipes/146306 parts = [] boundary = self.boundary for k, values in fields: if not isinstance(values, (list, tuple)): values = [values] for v in values: parts.extend(( b'--' + boundary, ('Content-Disposition: form-data; name="%s"' % k).encode('utf-8'), b'', v.encode('utf-8'))) for key, filename, value in files: parts.extend(( b'--' + boundary, ('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename)).encode('utf-8'), b'', value)) parts.extend((b'--' + boundary + b'--', b'')) body = b'\r\n'.join(parts) ct = b'multipart/form-data; boundary=' + boundary headers = { 'Content-type': ct, 'Content-length': str(len(body)) } return Request(self.url, body, headers) def search(self, terms, operator=None): if isinstance(terms, string_types): terms = {'name': terms} rpc_proxy = ServerProxy(self.url, timeout=3.0) try: return rpc_proxy.search(terms, operator or 'and') finally: rpc_proxy('close')() ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/locators.py����������������������0000644�0001750�0001750�00000145137�00000000000�030332� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # # Copyright (C) 2012-2015 Vinay Sajip. # Licensed to the Python Software Foundation under a contributor agreement. # See LICENSE.txt and CONTRIBUTORS.txt. # import gzip from io import BytesIO import json import logging import os import posixpath import re try: import threading except ImportError: # pragma: no cover import dummy_threading as threading import zlib from . 
import DistlibException from .compat import (urljoin, urlparse, urlunparse, url2pathname, pathname2url, queue, quote, unescape, string_types, build_opener, HTTPRedirectHandler as BaseRedirectHandler, text_type, Request, HTTPError, URLError) from .database import Distribution, DistributionPath, make_dist from .metadata import Metadata, MetadataInvalidError from .util import (cached_property, parse_credentials, ensure_slash, split_filename, get_project_data, parse_requirement, parse_name_and_version, ServerProxy, normalize_name) from .version import get_scheme, UnsupportedVersionError from .wheel import Wheel, is_compatible logger = logging.getLogger(__name__) HASHER_HASH = re.compile(r'^(\w+)=([a-f0-9]+)') CHARSET = re.compile(r';\s*charset\s*=\s*(.*)\s*$', re.I) HTML_CONTENT_TYPE = re.compile('text/html|application/x(ht)?ml') DEFAULT_INDEX = 'https://pypi.org/pypi' def get_all_distribution_names(url=None): """ Return all distribution names known by an index. :param url: The URL of the index. :return: A list of all known distribution names. """ if url is None: url = DEFAULT_INDEX client = ServerProxy(url, timeout=3.0) try: return client.list_packages() finally: client('close')() class RedirectHandler(BaseRedirectHandler): """ A class to work around a bug in some Python 3.2.x releases. """ # There's a bug in the base version for some 3.2.x # (e.g. 3.2.2 on Ubuntu Oneiric). If a Location header # returns e.g. /abc, it bails because it says the scheme '' # is bogus, when actually it should use the request's # URL for the scheme. See Python issue #13696. def http_error_302(self, req, fp, code, msg, headers): # Some servers (incorrectly) return multiple Location headers # (so probably same goes for URI). Use first header. newurl = None for key in ('location', 'uri'): if key in headers: newurl = headers[key] break if newurl is None: # pragma: no cover return urlparts = urlparse(newurl) if urlparts.scheme == '': newurl = urljoin(req.get_full_url(), newurl) if hasattr(headers, 'replace_header'): headers.replace_header(key, newurl) else: headers[key] = newurl return BaseRedirectHandler.http_error_302(self, req, fp, code, msg, headers) http_error_301 = http_error_303 = http_error_307 = http_error_302 class Locator(object): """ A base class for locators - things that locate distributions. """ source_extensions = ('.tar.gz', '.tar.bz2', '.tar', '.zip', '.tgz', '.tbz') binary_extensions = ('.egg', '.exe', '.whl') excluded_extensions = ('.pdf',) # A list of tags indicating which wheels you want to match. The default # value of None matches against the tags compatible with the running # Python. If you want to match other values, set wheel_tags on a locator # instance to a list of tuples (pyver, abi, arch) which you want to match. wheel_tags = None downloadable_extensions = source_extensions + ('.whl',) def __init__(self, scheme='default'): """ Initialise an instance. :param scheme: Because locators look for most recent versions, they need to know the version scheme to use. This specifies the current PEP-recommended scheme - use ``'legacy'`` if you need to support existing distributions on PyPI. """ self._cache = {} self.scheme = scheme # Because of bugs in some of the handlers on some of the platforms, # we use our own opener rather than just using urlopen. self.opener = build_opener(RedirectHandler()) # If get_project() is called from locate(), the matcher instance # is set from the requirement passed to locate(). See issue #18 for # why this can be useful to know. 
self.matcher = None self.errors = queue.Queue() def get_errors(self): """ Return any errors which have occurred. """ result = [] while not self.errors.empty(): # pragma: no cover try: e = self.errors.get(False) result.append(e) except self.errors.Empty: continue self.errors.task_done() return result def clear_errors(self): """ Clear any errors which may have been logged. """ # Just get the errors and throw them away self.get_errors() def clear_cache(self): self._cache.clear() def _get_scheme(self): return self._scheme def _set_scheme(self, value): self._scheme = value scheme = property(_get_scheme, _set_scheme) def _get_project(self, name): """ For a given project, get a dictionary mapping available versions to Distribution instances. This should be implemented in subclasses. If called from a locate() request, self.matcher will be set to a matcher for the requirement to satisfy, otherwise it will be None. """ raise NotImplementedError('Please implement in the subclass') def get_distribution_names(self): """ Return all the distribution names known to this locator. """ raise NotImplementedError('Please implement in the subclass') def get_project(self, name): """ For a given project, get a dictionary mapping available versions to Distribution instances. This calls _get_project to do all the work, and just implements a caching layer on top. """ if self._cache is None: # pragma: no cover result = self._get_project(name) elif name in self._cache: result = self._cache[name] else: self.clear_errors() result = self._get_project(name) self._cache[name] = result return result def score_url(self, url): """ Give an url a score which can be used to choose preferred URLs for a given project release. """ t = urlparse(url) basename = posixpath.basename(t.path) compatible = True is_wheel = basename.endswith('.whl') is_downloadable = basename.endswith(self.downloadable_extensions) if is_wheel: compatible = is_compatible(Wheel(basename), self.wheel_tags) return (t.scheme == 'https', 'pypi.org' in t.netloc, is_downloadable, is_wheel, compatible, basename) def prefer_url(self, url1, url2): """ Choose one of two URLs where both are candidates for distribution archives for the same version of a distribution (for example, .tar.gz vs. zip). The current implementation favours https:// URLs over http://, archives from PyPI over those from other locations, wheel compatibility (if a wheel) and then the archive name. """ result = url2 if url1: s1 = self.score_url(url1) s2 = self.score_url(url2) if s1 > s2: result = url1 if result != url2: logger.debug('Not replacing %r with %r', url1, url2) else: logger.debug('Replacing %r with %r', url1, url2) return result def split_filename(self, filename, project_name): """ Attempt to split a filename in project name, version and Python version. """ return split_filename(filename, project_name) def convert_url_to_download_info(self, url, project_name): """ See if a URL is a candidate for a download URL for a project (the URL has typically been scraped from an HTML page). If it is, a dictionary is returned with keys "name", "version", "filename" and "url"; otherwise, None is returned. 
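
        For instance (illustrative values), a URL such as
        ``https://pypi.org/.../foo-1.0-py3-none-any.whl#sha256=abcd...``
        might yield::

            {'name': 'foo', 'version': '1.0',
             'filename': 'foo-1.0-py3-none-any.whl',
             'url': 'https://pypi.org/.../foo-1.0-py3-none-any.whl',
             'python-version': '3', 'sha256_digest': 'abcd...'}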
""" def same_project(name1, name2): return normalize_name(name1) == normalize_name(name2) result = None scheme, netloc, path, params, query, frag = urlparse(url) if frag.lower().startswith('egg='): # pragma: no cover logger.debug('%s: version hint in fragment: %r', project_name, frag) m = HASHER_HASH.match(frag) if m: algo, digest = m.groups() else: algo, digest = None, None origpath = path if path and path[-1] == '/': # pragma: no cover path = path[:-1] if path.endswith('.whl'): try: wheel = Wheel(path) if not is_compatible(wheel, self.wheel_tags): logger.debug('Wheel not compatible: %s', path) else: if project_name is None: include = True else: include = same_project(wheel.name, project_name) if include: result = { 'name': wheel.name, 'version': wheel.version, 'filename': wheel.filename, 'url': urlunparse((scheme, netloc, origpath, params, query, '')), 'python-version': ', '.join( ['.'.join(list(v[2:])) for v in wheel.pyver]), } except Exception as e: # pragma: no cover logger.warning('invalid path for wheel: %s', path) elif not path.endswith(self.downloadable_extensions): # pragma: no cover logger.debug('Not downloadable: %s', path) else: # downloadable extension path = filename = posixpath.basename(path) for ext in self.downloadable_extensions: if path.endswith(ext): path = path[:-len(ext)] t = self.split_filename(path, project_name) if not t: # pragma: no cover logger.debug('No match for project/version: %s', path) else: name, version, pyver = t if not project_name or same_project(project_name, name): result = { 'name': name, 'version': version, 'filename': filename, 'url': urlunparse((scheme, netloc, origpath, params, query, '')), #'packagetype': 'sdist', } if pyver: # pragma: no cover result['python-version'] = pyver break if result and algo: result['%s_digest' % algo] = digest return result def _get_digest(self, info): """ Get a digest from a dictionary by looking at keys of the form 'algo_digest'. Returns a 2-tuple (algo, digest) if found, else None. Currently looks only for SHA256, then MD5. """ result = None for algo in ('sha256', 'md5'): key = '%s_digest' % algo if key in info: result = (algo, info[key]) break return result def _update_version_data(self, result, info): """ Update a result dictionary (the final result from _get_project) with a dictionary for a specific version, which typically holds information gleaned from a filename or URL for an archive for the distribution. """ name = info.pop('name') version = info.pop('version') if version in result: dist = result[version] md = dist.metadata else: dist = make_dist(name, version, scheme=self.scheme) md = dist.metadata dist.digest = digest = self._get_digest(info) url = info['url'] result['digests'][url] = digest if md.source_url != info['url']: md.source_url = self.prefer_url(md.source_url, url) result['urls'].setdefault(version, set()).add(url) dist.locator = self result[version] = dist def locate(self, requirement, prereleases=False): """ Find the most recent distribution which matches the given requirement. :param requirement: A requirement of the form 'foo (1.0)' or perhaps 'foo (>= 1.0, < 2.0, != 1.3)' :param prereleases: If ``True``, allow pre-release versions to be located. Otherwise, pre-release versions are not returned. :return: A :class:`Distribution` instance, or ``None`` if no such distribution could be located. 
""" result = None r = parse_requirement(requirement) if r is None: # pragma: no cover raise DistlibException('Not a valid requirement: %r' % requirement) scheme = get_scheme(self.scheme) self.matcher = matcher = scheme.matcher(r.requirement) logger.debug('matcher: %s (%s)', matcher, type(matcher).__name__) versions = self.get_project(r.name) if len(versions) > 2: # urls and digests keys are present # sometimes, versions are invalid slist = [] vcls = matcher.version_class for k in versions: if k in ('urls', 'digests'): continue try: if not matcher.match(k): logger.debug('%s did not match %r', matcher, k) else: if prereleases or not vcls(k).is_prerelease: slist.append(k) else: logger.debug('skipping pre-release ' 'version %s of %s', k, matcher.name) except Exception: # pragma: no cover logger.warning('error matching %s with %r', matcher, k) pass # slist.append(k) if len(slist) > 1: slist = sorted(slist, key=scheme.key) if slist: logger.debug('sorted list: %s', slist) version = slist[-1] result = versions[version] if result: if r.extras: result.extras = r.extras result.download_urls = versions.get('urls', {}).get(version, set()) d = {} sd = versions.get('digests', {}) for url in result.download_urls: if url in sd: # pragma: no cover d[url] = sd[url] result.digests = d self.matcher = None return result class PyPIRPCLocator(Locator): """ This locator uses XML-RPC to locate distributions. It therefore cannot be used with simple mirrors (that only mirror file content). """ def __init__(self, url, **kwargs): """ Initialise an instance. :param url: The URL to use for XML-RPC. :param kwargs: Passed to the superclass constructor. """ super(PyPIRPCLocator, self).__init__(**kwargs) self.base_url = url self.client = ServerProxy(url, timeout=3.0) def get_distribution_names(self): """ Return all the distribution names known to this locator. """ return set(self.client.list_packages()) def _get_project(self, name): result = {'urls': {}, 'digests': {}} versions = self.client.package_releases(name, True) for v in versions: urls = self.client.release_urls(name, v) data = self.client.release_data(name, v) metadata = Metadata(scheme=self.scheme) metadata.name = data['name'] metadata.version = data['version'] metadata.license = data.get('license') metadata.keywords = data.get('keywords', []) metadata.summary = data.get('summary') dist = Distribution(metadata) if urls: info = urls[0] metadata.source_url = info['url'] dist.digest = self._get_digest(info) dist.locator = self result[v] = dist for info in urls: url = info['url'] digest = self._get_digest(info) result['urls'].setdefault(v, set()).add(url) result['digests'][url] = digest return result class PyPIJSONLocator(Locator): """ This locator uses PyPI's JSON interface. It's very limited in functionality and probably not worth using. """ def __init__(self, url, **kwargs): super(PyPIJSONLocator, self).__init__(**kwargs) self.base_url = ensure_slash(url) def get_distribution_names(self): """ Return all the distribution names known to this locator. 
""" raise NotImplementedError('Not available from this locator') def _get_project(self, name): result = {'urls': {}, 'digests': {}} url = urljoin(self.base_url, '%s/json' % quote(name)) try: resp = self.opener.open(url) data = resp.read().decode() # for now d = json.loads(data) md = Metadata(scheme=self.scheme) data = d['info'] md.name = data['name'] md.version = data['version'] md.license = data.get('license') md.keywords = data.get('keywords', []) md.summary = data.get('summary') dist = Distribution(md) dist.locator = self urls = d['urls'] result[md.version] = dist for info in d['urls']: url = info['url'] dist.download_urls.add(url) dist.digests[url] = self._get_digest(info) result['urls'].setdefault(md.version, set()).add(url) result['digests'][url] = self._get_digest(info) # Now get other releases for version, infos in d['releases'].items(): if version == md.version: continue # already done omd = Metadata(scheme=self.scheme) omd.name = md.name omd.version = version odist = Distribution(omd) odist.locator = self result[version] = odist for info in infos: url = info['url'] odist.download_urls.add(url) odist.digests[url] = self._get_digest(info) result['urls'].setdefault(version, set()).add(url) result['digests'][url] = self._get_digest(info) # for info in urls: # md.source_url = info['url'] # dist.digest = self._get_digest(info) # dist.locator = self # for info in urls: # url = info['url'] # result['urls'].setdefault(md.version, set()).add(url) # result['digests'][url] = self._get_digest(info) except Exception as e: self.errors.put(text_type(e)) logger.exception('JSON fetch failed: %s', e) return result class Page(object): """ This class represents a scraped HTML page. """ # The following slightly hairy-looking regex just looks for the contents of # an anchor link, which has an attribute "href" either immediately preceded # or immediately followed by a "rel" attribute. The attribute values can be # declared with double quotes, single quotes or no quotes - which leads to # the length of the expression. _href = re.compile(""" (rel\\s*=\\s*(?:"(?P<rel1>[^"]*)"|'(?P<rel2>[^']*)'|(?P<rel3>[^>\\s\n]*))\\s+)? href\\s*=\\s*(?:"(?P<url1>[^"]*)"|'(?P<url2>[^']*)'|(?P<url3>[^>\\s\n]*)) (\\s+rel\\s*=\\s*(?:"(?P<rel4>[^"]*)"|'(?P<rel5>[^']*)'|(?P<rel6>[^>\\s\n]*)))? """, re.I | re.S | re.X) _base = re.compile(r"""<base\s+href\s*=\s*['"]?([^'">]+)""", re.I | re.S) def __init__(self, data, url): """ Initialise an instance with the Unicode page contents and the URL they came from. """ self.data = data self.base_url = self.url = url m = self._base.search(self.data) if m: self.base_url = m.group(1) _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I) @cached_property def links(self): """ Return the URLs of all the links on a page together with information about their "rel" attribute, for determining which ones to treat as downloads and which ones to queue for further scraping. """ def clean(url): "Tidy up an URL." 
            scheme, netloc, path, params, query, frag = urlparse(url)
            return urlunparse((scheme, netloc, quote(path),
                               params, query, frag))

        result = set()
        for match in self._href.finditer(self.data):
            d = match.groupdict('')
            rel = (d['rel1'] or d['rel2'] or d['rel3'] or
                   d['rel4'] or d['rel5'] or d['rel6'])
            url = d['url1'] or d['url2'] or d['url3']
            url = urljoin(self.base_url, url)
            url = unescape(url)
            url = self._clean_re.sub(lambda m: '%%%2x' % ord(m.group(0)), url)
            result.add((url, rel))
        # We sort the result, hoping to bring the most recent versions
        # to the front
        result = sorted(result, key=lambda t: t[0], reverse=True)
        return result


class SimpleScrapingLocator(Locator):
    """
    A locator which scrapes HTML pages to locate downloads for a distribution.
    This runs multiple threads to do the I/O; performance is at least as good
    as pip's PackageFinder, which works in an analogous fashion.
    """

    # These are used to deal with various Content-Encoding schemes.
    decoders = {
        'deflate': zlib.decompress,
        # decompress the response bytes we were given
        'gzip': lambda b: gzip.GzipFile(fileobj=BytesIO(b)).read(),
        'none': lambda b: b,
    }

    def __init__(self, url, timeout=None, num_workers=10, **kwargs):
        """
        Initialise an instance.

        :param url: The root URL to use for scraping.
        :param timeout: The timeout, in seconds, to be applied to requests.
                        This defaults to ``None`` (no timeout specified).
        :param num_workers: The number of worker threads you want to do I/O.
                            This defaults to 10.
        :param kwargs: Passed to the superclass.
        """
        super(SimpleScrapingLocator, self).__init__(**kwargs)
        self.base_url = ensure_slash(url)
        self.timeout = timeout
        self._page_cache = {}
        self._seen = set()
        self._to_fetch = queue.Queue()
        self._bad_hosts = set()
        self.skip_externals = False
        self.num_workers = num_workers
        self._lock = threading.RLock()
        # See issue #45: we need to be resilient when the locator is used
        # in a thread, e.g. with concurrent.futures. We can't use self._lock
        # as it is for coordinating our internal threads - the ones created
        # in _prepare_threads.
        self._gplock = threading.RLock()
        self.platform_check = False  # See issue #112

    def _prepare_threads(self):
        """
        Threads are created only when get_project is called, and terminate
        before it returns. They are there primarily to parallelise I/O
        (i.e. fetching web pages).
        """
        self._threads = []
        for i in range(self.num_workers):
            t = threading.Thread(target=self._fetch)
            t.setDaemon(True)
            t.start()
            self._threads.append(t)

    def _wait_threads(self):
        """
        Tell all the threads to terminate (by sending a sentinel value) and
        wait for them to do so.
        """
        # Note that you need two loops, since you can't say which
        # thread will get each sentinel
        for t in self._threads:
            self._to_fetch.put(None)    # sentinel
        for t in self._threads:
            t.join()
        self._threads = []

    def _get_project(self, name):
        result = {'urls': {}, 'digests': {}}
        with self._gplock:
            self.result = result
            self.project_name = name
            url = urljoin(self.base_url, '%s/' % quote(name))
            self._seen.clear()
            self._page_cache.clear()
            self._prepare_threads()
            try:
                logger.debug('Queueing %s', url)
                self._to_fetch.put(url)
                self._to_fetch.join()
            finally:
                self._wait_threads()
            del self.result
        return result

    platform_dependent = re.compile(r'\b(linux_(i\d86|x86_64|arm\w+)|'
                                    r'win(32|_amd64)|macosx_?\d+)\b', re.I)

    def _is_platform_dependent(self, url):
        """
        Does an URL refer to a platform-specific download?
        """
        return self.platform_dependent.search(url)

    def _process_download(self, url):
        """
        See if an URL is a suitable download for a project.
If it is, register information in the result dictionary (for _get_project) about the specific version it's for. Note that the return value isn't actually used other than as a boolean value. """ if self.platform_check and self._is_platform_dependent(url): info = None else: info = self.convert_url_to_download_info(url, self.project_name) logger.debug('process_download: %s -> %s', url, info) if info: with self._lock: # needed because self.result is shared self._update_version_data(self.result, info) return info def _should_queue(self, link, referrer, rel): """ Determine whether a link URL from a referring page and with a particular "rel" attribute should be queued for scraping. """ scheme, netloc, path, _, _, _ = urlparse(link) if path.endswith(self.source_extensions + self.binary_extensions + self.excluded_extensions): result = False elif self.skip_externals and not link.startswith(self.base_url): result = False elif not referrer.startswith(self.base_url): result = False elif rel not in ('homepage', 'download'): result = False elif scheme not in ('http', 'https', 'ftp'): result = False elif self._is_platform_dependent(link): result = False else: host = netloc.split(':', 1)[0] if host.lower() == 'localhost': result = False else: result = True logger.debug('should_queue: %s (%s) from %s -> %s', link, rel, referrer, result) return result def _fetch(self): """ Get a URL to fetch from the work queue, get the HTML page, examine its links for download candidates and candidates for further scraping. This is a handy method to run in a thread. """ while True: url = self._to_fetch.get() try: if url: page = self.get_page(url) if page is None: # e.g. after an error continue for link, rel in page.links: if link not in self._seen: try: self._seen.add(link) if (not self._process_download(link) and self._should_queue(link, url, rel)): logger.debug('Queueing %s from %s', link, url) self._to_fetch.put(link) except MetadataInvalidError: # e.g. invalid versions pass except Exception as e: # pragma: no cover self.errors.put(text_type(e)) finally: # always do this, to avoid hangs :-) self._to_fetch.task_done() if not url: #logger.debug('Sentinel seen, quitting.') break def get_page(self, url): """ Get the HTML for an URL, possibly from an in-memory cache. XXX TODO Note: this cache is never actually cleared. It's assumed that the data won't get stale over the lifetime of a locator instance (not necessarily true for the default_locator). 
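
        Illustrative use (assumes a scrapable index page is reachable)::

            locator = SimpleScrapingLocator('https://pypi.org/simple/')
            page = locator.get_page('https://pypi.org/simple/pip/')
            if page is not None:
                for url, rel in page.links:
                    ...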
""" # http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api scheme, netloc, path, _, _, _ = urlparse(url) if scheme == 'file' and os.path.isdir(url2pathname(path)): url = urljoin(ensure_slash(url), 'index.html') if url in self._page_cache: result = self._page_cache[url] logger.debug('Returning %s from cache: %s', url, result) else: host = netloc.split(':', 1)[0] result = None if host in self._bad_hosts: logger.debug('Skipping %s due to bad host %s', url, host) else: req = Request(url, headers={'Accept-encoding': 'identity'}) try: logger.debug('Fetching %s', url) resp = self.opener.open(req, timeout=self.timeout) logger.debug('Fetched %s', url) headers = resp.info() content_type = headers.get('Content-Type', '') if HTML_CONTENT_TYPE.match(content_type): final_url = resp.geturl() data = resp.read() encoding = headers.get('Content-Encoding') if encoding: decoder = self.decoders[encoding] # fail if not found data = decoder(data) encoding = 'utf-8' m = CHARSET.search(content_type) if m: encoding = m.group(1) try: data = data.decode(encoding) except UnicodeError: # pragma: no cover data = data.decode('latin-1') # fallback result = Page(data, final_url) self._page_cache[final_url] = result except HTTPError as e: if e.code != 404: logger.exception('Fetch failed: %s: %s', url, e) except URLError as e: # pragma: no cover logger.exception('Fetch failed: %s: %s', url, e) with self._lock: self._bad_hosts.add(host) except Exception as e: # pragma: no cover logger.exception('Fetch failed: %s: %s', url, e) finally: self._page_cache[url] = result # even if None (failure) return result _distname_re = re.compile('<a href=[^>]*>([^<]+)<') def get_distribution_names(self): """ Return all the distribution names known to this locator. """ result = set() page = self.get_page(self.base_url) if not page: raise DistlibException('Unable to get %s' % self.base_url) for match in self._distname_re.finditer(page.data): result.add(match.group(1)) return result class DirectoryLocator(Locator): """ This class locates distributions in a directory tree. """ def __init__(self, path, **kwargs): """ Initialise an instance. :param path: The root of the directory tree to search. :param kwargs: Passed to the superclass constructor, except for: * recursive - if True (the default), subdirectories are recursed into. If False, only the top-level directory is searched, """ self.recursive = kwargs.pop('recursive', True) super(DirectoryLocator, self).__init__(**kwargs) path = os.path.abspath(path) if not os.path.isdir(path): # pragma: no cover raise DistlibException('Not a directory: %r' % path) self.base_dir = path def should_include(self, filename, parent): """ Should a filename be considered as a candidate for a distribution archive? As well as the filename, the directory which contains it is provided, though not used by the current implementation. """ return filename.endswith(self.downloadable_extensions) def _get_project(self, name): result = {'urls': {}, 'digests': {}} for root, dirs, files in os.walk(self.base_dir): for fn in files: if self.should_include(fn, root): fn = os.path.join(root, fn) url = urlunparse(('file', '', pathname2url(os.path.abspath(fn)), '', '', '')) info = self.convert_url_to_download_info(url, name) if info: self._update_version_data(result, info) if not self.recursive: break return result def get_distribution_names(self): """ Return all the distribution names known to this locator. 
""" result = set() for root, dirs, files in os.walk(self.base_dir): for fn in files: if self.should_include(fn, root): fn = os.path.join(root, fn) url = urlunparse(('file', '', pathname2url(os.path.abspath(fn)), '', '', '')) info = self.convert_url_to_download_info(url, None) if info: result.add(info['name']) if not self.recursive: break return result class JSONLocator(Locator): """ This locator uses special extended metadata (not available on PyPI) and is the basis of performant dependency resolution in distlib. Other locators require archive downloads before dependencies can be determined! As you might imagine, that can be slow. """ def get_distribution_names(self): """ Return all the distribution names known to this locator. """ raise NotImplementedError('Not available from this locator') def _get_project(self, name): result = {'urls': {}, 'digests': {}} data = get_project_data(name) if data: for info in data.get('files', []): if info['ptype'] != 'sdist' or info['pyversion'] != 'source': continue # We don't store summary in project metadata as it makes # the data bigger for no benefit during dependency # resolution dist = make_dist(data['name'], info['version'], summary=data.get('summary', 'Placeholder for summary'), scheme=self.scheme) md = dist.metadata md.source_url = info['url'] # TODO SHA256 digest if 'digest' in info and info['digest']: dist.digest = ('md5', info['digest']) md.dependencies = info.get('requirements', {}) dist.exports = info.get('exports', {}) result[dist.version] = dist result['urls'].setdefault(dist.version, set()).add(info['url']) return result class DistPathLocator(Locator): """ This locator finds installed distributions in a path. It can be useful for adding to an :class:`AggregatingLocator`. """ def __init__(self, distpath, **kwargs): """ Initialise an instance. :param distpath: A :class:`DistributionPath` instance to search. """ super(DistPathLocator, self).__init__(**kwargs) assert isinstance(distpath, DistributionPath) self.distpath = distpath def _get_project(self, name): dist = self.distpath.get_distribution(name) if dist is None: result = {'urls': {}, 'digests': {}} else: result = { dist.version: dist, 'urls': {dist.version: set([dist.source_url])}, 'digests': {dist.version: set([None])} } return result class AggregatingLocator(Locator): """ This class allows you to chain and/or merge a list of locators. """ def __init__(self, *locators, **kwargs): """ Initialise an instance. :param locators: The list of locators to search. :param kwargs: Passed to the superclass constructor, except for: * merge - if False (the default), the first successful search from any of the locators is returned. If True, the results from all locators are merged (this can be slow). 
""" self.merge = kwargs.pop('merge', False) self.locators = locators super(AggregatingLocator, self).__init__(**kwargs) def clear_cache(self): super(AggregatingLocator, self).clear_cache() for locator in self.locators: locator.clear_cache() def _set_scheme(self, value): self._scheme = value for locator in self.locators: locator.scheme = value scheme = property(Locator.scheme.fget, _set_scheme) def _get_project(self, name): result = {} for locator in self.locators: d = locator.get_project(name) if d: if self.merge: files = result.get('urls', {}) digests = result.get('digests', {}) # next line could overwrite result['urls'], result['digests'] result.update(d) df = result.get('urls') if files and df: for k, v in files.items(): if k in df: df[k] |= v else: df[k] = v dd = result.get('digests') if digests and dd: dd.update(digests) else: # See issue #18. If any dists are found and we're looking # for specific constraints, we only return something if # a match is found. For example, if a DirectoryLocator # returns just foo (1.0) while we're looking for # foo (>= 2.0), we'll pretend there was nothing there so # that subsequent locators can be queried. Otherwise we # would just return foo (1.0) which would then lead to a # failure to find foo (>= 2.0), because other locators # weren't searched. Note that this only matters when # merge=False. if self.matcher is None: found = True else: found = False for k in d: if self.matcher.match(k): found = True break if found: result = d break return result def get_distribution_names(self): """ Return all the distribution names known to this locator. """ result = set() for locator in self.locators: try: result |= locator.get_distribution_names() except NotImplementedError: pass return result # We use a legacy scheme simply because most of the dists on PyPI use legacy # versions which don't conform to PEP 426 / PEP 440. default_locator = AggregatingLocator( JSONLocator(), SimpleScrapingLocator('https://pypi.org/simple/', timeout=3.0), scheme='legacy') locate = default_locator.locate NAME_VERSION_RE = re.compile(r'(?P<name>[\w-]+)\s*' r'\(\s*(==\s*)?(?P<ver>[^)]+)\)$') class DependencyFinder(object): """ Locate dependencies for distributions. """ def __init__(self, locator=None): """ Initialise an instance, using the specified locator to locate distributions. """ self.locator = locator or default_locator self.scheme = get_scheme(self.locator.scheme) def add_distribution(self, dist): """ Add a distribution to the finder. This will update internal information about who provides what. :param dist: The distribution to add. """ logger.debug('adding distribution %s', dist) name = dist.key self.dists_by_name[name] = dist self.dists[(name, dist.version)] = dist for p in dist.provides: name, version = parse_name_and_version(p) logger.debug('Add to provided: %s, %s, %s', name, version, dist) self.provided.setdefault(name, set()).add((version, dist)) def remove_distribution(self, dist): """ Remove a distribution from the finder. This will update internal information about who provides what. :param dist: The distribution to remove. """ logger.debug('removing distribution %s', dist) name = dist.key del self.dists_by_name[name] del self.dists[(name, dist.version)] for p in dist.provides: name, version = parse_name_and_version(p) logger.debug('Remove from provided: %s, %s, %s', name, version, dist) s = self.provided[name] s.remove((version, dist)) if not s: del self.provided[name] def get_matcher(self, reqt): """ Get a version matcher for a requirement. 
:param reqt: The requirement :type reqt: str :return: A version matcher (an instance of :class:`distlib.version.Matcher`). """ try: matcher = self.scheme.matcher(reqt) except UnsupportedVersionError: # pragma: no cover # XXX compat-mode if cannot read the version name = reqt.split()[0] matcher = self.scheme.matcher(name) return matcher def find_providers(self, reqt): """ Find the distributions which can fulfill a requirement. :param reqt: The requirement. :type reqt: str :return: A set of distribution which can fulfill the requirement. """ matcher = self.get_matcher(reqt) name = matcher.key # case-insensitive result = set() provided = self.provided if name in provided: for version, provider in provided[name]: try: match = matcher.match(version) except UnsupportedVersionError: match = False if match: result.add(provider) break return result def try_to_replace(self, provider, other, problems): """ Attempt to replace one provider with another. This is typically used when resolving dependencies from multiple sources, e.g. A requires (B >= 1.0) while C requires (B >= 1.1). For successful replacement, ``provider`` must meet all the requirements which ``other`` fulfills. :param provider: The provider we are trying to replace with. :param other: The provider we're trying to replace. :param problems: If False is returned, this will contain what problems prevented replacement. This is currently a tuple of the literal string 'cantreplace', ``provider``, ``other`` and the set of requirements that ``provider`` couldn't fulfill. :return: True if we can replace ``other`` with ``provider``, else False. """ rlist = self.reqts[other] unmatched = set() for s in rlist: matcher = self.get_matcher(s) if not matcher.match(provider.version): unmatched.add(s) if unmatched: # can't replace other with provider problems.add(('cantreplace', provider, other, frozenset(unmatched))) result = False else: # can replace other with provider self.remove_distribution(other) del self.reqts[other] for s in rlist: self.reqts.setdefault(provider, set()).add(s) self.add_distribution(provider) result = True return result def find(self, requirement, meta_extras=None, prereleases=False): """ Find a distribution and all distributions it depends on. :param requirement: The requirement specifying the distribution to find, or a Distribution instance. :param meta_extras: A list of meta extras such as :test:, :build: and so on. :param prereleases: If ``True``, allow pre-release versions to be returned - otherwise, don't return prereleases unless they're all that's available. Return a set of :class:`Distribution` instances and a set of problems. The distributions returned should be such that they have the :attr:`required` attribute set to ``True`` if they were from the ``requirement`` passed to ``find()``, and they have the :attr:`build_time_dependency` attribute set to ``True`` unless they are post-installation dependencies of the ``requirement``. The problems should be a tuple consisting of the string ``'unsatisfied'`` and the requirement which couldn't be satisfied by any distribution known to the locator. 
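
        Illustrative sketch (network access is assumed; the requirement is an
        example)::

            finder = DependencyFinder()
            dists, problems = finder.find('requests (>= 2.0)')
            for dist in dists:
                print(dist.name_and_version, dist.build_time_dependency)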
""" self.provided = {} self.dists = {} self.dists_by_name = {} self.reqts = {} meta_extras = set(meta_extras or []) if ':*:' in meta_extras: meta_extras.remove(':*:') # :meta: and :run: are implicitly included meta_extras |= set([':test:', ':build:', ':dev:']) if isinstance(requirement, Distribution): dist = odist = requirement logger.debug('passed %s as requirement', odist) else: dist = odist = self.locator.locate(requirement, prereleases=prereleases) if dist is None: raise DistlibException('Unable to locate %r' % requirement) logger.debug('located %s', odist) dist.requested = True problems = set() todo = set([dist]) install_dists = set([odist]) while todo: dist = todo.pop() name = dist.key # case-insensitive if name not in self.dists_by_name: self.add_distribution(dist) else: #import pdb; pdb.set_trace() other = self.dists_by_name[name] if other != dist: self.try_to_replace(dist, other, problems) ireqts = dist.run_requires | dist.meta_requires sreqts = dist.build_requires ereqts = set() if meta_extras and dist in install_dists: for key in ('test', 'build', 'dev'): e = ':%s:' % key if e in meta_extras: ereqts |= getattr(dist, '%s_requires' % key) all_reqts = ireqts | sreqts | ereqts for r in all_reqts: providers = self.find_providers(r) if not providers: logger.debug('No providers found for %r', r) provider = self.locator.locate(r, prereleases=prereleases) # If no provider is found and we didn't consider # prereleases, consider them now. if provider is None and not prereleases: provider = self.locator.locate(r, prereleases=True) if provider is None: logger.debug('Cannot satisfy %r', r) problems.add(('unsatisfied', r)) else: n, v = provider.key, provider.version if (n, v) not in self.dists: todo.add(provider) providers.add(provider) if r in ireqts and dist in install_dists: install_dists.add(provider) logger.debug('Adding %s to install_dists', provider.name_and_version) for p in providers: name = p.key if name not in self.dists_by_name: self.reqts.setdefault(p, set()).add(r) else: other = self.dists_by_name[name] if other != p: # see if other can be replaced by p self.try_to_replace(p, other, problems) dists = set(self.dists.values()) for dist in dists: dist.build_time_dependency = dist not in install_dists if dist.build_time_dependency: logger.debug('%s is a build-time dependency only.', dist.name_and_version) logger.debug('find done for %s', odist) return dists, problems ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/manifest.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2013 Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""
Class representing the list of files in a distribution.

Equivalent to distutils.filelist, but fixes some problems.
"""
import fnmatch
import logging
import os
import re
import sys

from . import DistlibException
from .compat import fsdecode
from .util import convert_path


__all__ = ['Manifest']

logger = logging.getLogger(__name__)

# a \ followed by some spaces + EOL
_COLLAPSE_PATTERN = re.compile('\\\\w*\n', re.M)
_COMMENTED_LINE = re.compile('#.*?(?=\n)|\n(?=$)', re.M | re.S)

#
# Due to the different results returned by fnmatch.translate, we need
# to do slightly different processing for Python 2.7 and 3.2 ... this needed
# to be brought in for Python 3.6 onwards.
#
_PYTHON_VERSION = sys.version_info[:2]


class Manifest(object):
    """
    A list of files built by exploring the filesystem and filtered by applying
    various patterns to what we find there.
    """

    def __init__(self, base=None):
        """
        Initialise an instance.

        :param base: The base directory to explore under.
        """
        self.base = os.path.abspath(os.path.normpath(base or os.getcwd()))
        self.prefix = self.base + os.sep
        self.allfiles = None
        self.files = set()

    #
    # Public API
    #

    def findall(self):
        """Find all files under the base and set ``allfiles`` to the absolute
        pathnames of files found.
        """
        from stat import S_ISREG, S_ISDIR, S_ISLNK

        self.allfiles = allfiles = []
        root = self.base
        stack = [root]
        pop = stack.pop
        push = stack.append

        while stack:
            root = pop()
            names = os.listdir(root)

            for name in names:
                fullname = os.path.join(root, name)

                # Avoid excess stat calls -- just one will do, thank you!
                stat = os.stat(fullname)
                mode = stat.st_mode
                if S_ISREG(mode):
                    allfiles.append(fsdecode(fullname))
                elif S_ISDIR(mode) and not S_ISLNK(mode):
                    push(fullname)

    def add(self, item):
        """
        Add a file to the manifest.

        :param item: The pathname to add. This can be relative to the base.
        """
        if not item.startswith(self.prefix):
            item = os.path.join(self.base, item)
        self.files.add(os.path.normpath(item))

    def add_many(self, items):
        """
        Add a list of files to the manifest.

        :param items: The pathnames to add. These can be relative to the base.
        """
        for item in items:
            self.add(item)

    def sorted(self, wantdirs=False):
        """
        Return sorted files in directory order
        """
        def add_dir(dirs, d):
            dirs.add(d)
            logger.debug('add_dir added %s', d)
            if d != self.base:
                parent, _ = os.path.split(d)
                assert parent not in ('', '/')
                add_dir(dirs, parent)

        result = set(self.files)    # make a copy!
if wantdirs: dirs = set() for f in result: add_dir(dirs, os.path.dirname(f)) result |= dirs return [os.path.join(*path_tuple) for path_tuple in sorted(os.path.split(path) for path in result)] def clear(self): """Clear all collected files.""" self.files = set() self.allfiles = [] def process_directive(self, directive): """ Process a directive which either adds some files from ``allfiles`` to ``files``, or removes some files from ``files``. :param directive: The directive to process. This should be in a format compatible with distutils ``MANIFEST.in`` files: http://docs.python.org/distutils/sourcedist.html#commands """ # Parse the line: split it up, make sure the right number of words # is there, and return the relevant words. 'action' is always # defined: it's the first word of the line. Which of the other # three are defined depends on the action; it'll be either # patterns, (dir and patterns), or (dirpattern). action, patterns, thedir, dirpattern = self._parse_directive(directive) # OK, now we know that the action is valid and we have the # right number of words on the line for that action -- so we # can proceed with minimal error-checking. if action == 'include': for pattern in patterns: if not self._include_pattern(pattern, anchor=True): logger.warning('no files found matching %r', pattern) elif action == 'exclude': for pattern in patterns: found = self._exclude_pattern(pattern, anchor=True) #if not found: # logger.warning('no previously-included files ' # 'found matching %r', pattern) elif action == 'global-include': for pattern in patterns: if not self._include_pattern(pattern, anchor=False): logger.warning('no files found matching %r ' 'anywhere in distribution', pattern) elif action == 'global-exclude': for pattern in patterns: found = self._exclude_pattern(pattern, anchor=False) #if not found: # logger.warning('no previously-included files ' # 'matching %r found anywhere in ' # 'distribution', pattern) elif action == 'recursive-include': for pattern in patterns: if not self._include_pattern(pattern, prefix=thedir): logger.warning('no files found matching %r ' 'under directory %r', pattern, thedir) elif action == 'recursive-exclude': for pattern in patterns: found = self._exclude_pattern(pattern, prefix=thedir) #if not found: # logger.warning('no previously-included files ' # 'matching %r found under directory %r', # pattern, thedir) elif action == 'graft': if not self._include_pattern(None, prefix=dirpattern): logger.warning('no directories found matching %r', dirpattern) elif action == 'prune': if not self._exclude_pattern(None, prefix=dirpattern): logger.warning('no previously-included directories found ' 'matching %r', dirpattern) else: # pragma: no cover # This should never happen, as it should be caught in # _parse_template_line raise DistlibException( 'invalid action %r' % action) # # Private API # def _parse_directive(self, directive): """ Validate a directive. :param directive: The directive to validate. :return: A tuple of action, patterns, thedir, dir_patterns """ words = directive.split() if len(words) == 1 and words[0] not in ('include', 'exclude', 'global-include', 'global-exclude', 'recursive-include', 'recursive-exclude', 'graft', 'prune'): # no action given, let's use the default 'include' words.insert(0, 'include') action = words[0] patterns = thedir = dir_pattern = None if action in ('include', 'exclude', 'global-include', 'global-exclude'): if len(words) < 2: raise DistlibException( '%r expects <pattern1> <pattern2> ...' 
% action) patterns = [convert_path(word) for word in words[1:]] elif action in ('recursive-include', 'recursive-exclude'): if len(words) < 3: raise DistlibException( '%r expects <dir> <pattern1> <pattern2> ...' % action) thedir = convert_path(words[1]) patterns = [convert_path(word) for word in words[2:]] elif action in ('graft', 'prune'): if len(words) != 2: raise DistlibException( '%r expects a single <dir_pattern>' % action) dir_pattern = convert_path(words[1]) else: raise DistlibException('unknown action %r' % action) return action, patterns, thedir, dir_pattern def _include_pattern(self, pattern, anchor=True, prefix=None, is_regex=False): """Select strings (presumably filenames) from 'self.files' that match 'pattern', a Unix-style wildcard (glob) pattern. Patterns are not quite the same as implemented by the 'fnmatch' module: '*' and '?' match non-special characters, where "special" is platform-dependent: slash on Unix; colon, slash, and backslash on DOS/Windows; and colon on Mac OS. If 'anchor' is true (the default), then the pattern match is more stringent: "*.py" will match "foo.py" but not "foo/bar.py". If 'anchor' is false, both of these will match. If 'prefix' is supplied, then only filenames starting with 'prefix' (itself a pattern) and ending with 'pattern', with anything in between them, will match. 'anchor' is ignored in this case. If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and 'pattern' is assumed to be either a string containing a regex or a regex object -- no translation is done, the regex is just compiled and used as-is. Selected strings will be added to self.files. Return True if files are found. """ # XXX docstring lying about what the special chars are? found = False pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex) # delayed loading of allfiles list if self.allfiles is None: self.findall() for name in self.allfiles: if pattern_re.search(name): self.files.add(name) found = True return found def _exclude_pattern(self, pattern, anchor=True, prefix=None, is_regex=False): """Remove strings (presumably filenames) from 'files' that match 'pattern'. Other parameters are the same as for 'include_pattern()', above. The list 'self.files' is modified in place. Return True if files are found. This API is public to allow e.g. exclusion of SCM subdirs, e.g. when packaging source distributions """ found = False pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex) for f in list(self.files): if pattern_re.search(f): self.files.remove(f) found = True return found def _translate_pattern(self, pattern, anchor=True, prefix=None, is_regex=False): """Translate a shell-like wildcard pattern to a compiled regular expression. Return the compiled regex. If 'is_regex' true, then 'pattern' is directly compiled to a regex (if it's a string) or just returned as-is (assumes it's a regex object). 
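
        An illustrative example (this is a private helper; the exact regex
        text varies by Python version)::

            m = Manifest('/path/to/base')
            regex = m._translate_pattern('*.py', anchor=True)
            # matches '/path/to/base/foo.py' but not
            # '/path/to/base/pkg/foo.py'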
""" if is_regex: if isinstance(pattern, str): return re.compile(pattern) else: return pattern if _PYTHON_VERSION > (3, 2): # ditch start and end characters start, _, end = self._glob_to_re('_').partition('_') if pattern: pattern_re = self._glob_to_re(pattern) if _PYTHON_VERSION > (3, 2): assert pattern_re.startswith(start) and pattern_re.endswith(end) else: pattern_re = '' base = re.escape(os.path.join(self.base, '')) if prefix is not None: # ditch end of pattern character if _PYTHON_VERSION <= (3, 2): empty_pattern = self._glob_to_re('') prefix_re = self._glob_to_re(prefix)[:-len(empty_pattern)] else: prefix_re = self._glob_to_re(prefix) assert prefix_re.startswith(start) and prefix_re.endswith(end) prefix_re = prefix_re[len(start): len(prefix_re) - len(end)] sep = os.sep if os.sep == '\\': sep = r'\\' if _PYTHON_VERSION <= (3, 2): pattern_re = '^' + base + sep.join((prefix_re, '.*' + pattern_re)) else: pattern_re = pattern_re[len(start): len(pattern_re) - len(end)] pattern_re = r'%s%s%s%s.*%s%s' % (start, base, prefix_re, sep, pattern_re, end) else: # no prefix -- respect anchor flag if anchor: if _PYTHON_VERSION <= (3, 2): pattern_re = '^' + base + pattern_re else: pattern_re = r'%s%s%s' % (start, base, pattern_re[len(start):]) return re.compile(pattern_re) def _glob_to_re(self, pattern): """Translate a shell-like glob pattern to a regular expression. Return a string containing the regex. Differs from 'fnmatch.translate()' in that '*' does not match "special characters" (which are platform-specific). """ pattern_re = fnmatch.translate(pattern) # '?' and '*' in the glob pattern become '.' and '.*' in the RE, which # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix, # and by extension they shouldn't match such "special characters" under # any OS. So change all non-escaped dots in the RE to match any # character except the special characters (currently: just os.sep). 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/markers.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2017 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""
Parser for the environment markers micro-language defined in PEP 508.
"""

# Note: In PEP 345, the micro-language was Python compatible, so the ast
# module could be used to parse it. However, PEP 508 introduced operators such
# as ~= and === which aren't in Python, necessitating a different approach.

import os
import sys
import platform
import re

from .compat import python_implementation, urlparse, string_types
from .util import in_venv, parse_marker

__all__ = ['interpret']


def _is_literal(o):
    if not isinstance(o, string_types) or not o:
        return False
    return o[0] in '\'"'


class Evaluator(object):
    """
    This class is used to evaluate marker expressions.
    """

    operations = {
        '==': lambda x, y: x == y,
        '===': lambda x, y: x == y,
        '~=': lambda x, y: x == y or x > y,
        '!=': lambda x, y: x != y,
        '<': lambda x, y: x < y,
        '<=': lambda x, y: x == y or x < y,
        '>': lambda x, y: x > y,
        '>=': lambda x, y: x == y or x > y,
        'and': lambda x, y: x and y,
        'or': lambda x, y: x or y,
        'in': lambda x, y: x in y,
        'not in': lambda x, y: x not in y,
    }

    def evaluate(self, expr, context):
        """
        Evaluate a marker expression returned by the :func:`parse_requirement`
        function in the specified context.
        """
        if isinstance(expr, string_types):
            if expr[0] in '\'"':
                result = expr[1:-1]
            else:
                if expr not in context:
                    raise SyntaxError('unknown variable: %s' % expr)
                result = context[expr]
        else:
            assert isinstance(expr, dict)
            op = expr['op']
            if op not in self.operations:
                raise NotImplementedError('op not implemented: %s' % op)
            elhs = expr['lhs']
            erhs = expr['rhs']
            if _is_literal(expr['lhs']) and _is_literal(expr['rhs']):
                raise SyntaxError('invalid comparison: %s %s %s' % (elhs, op, erhs))

            lhs = self.evaluate(elhs, context)
            rhs = self.evaluate(erhs, context)
            result = self.operations[op](lhs, rhs)
        return result
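
# Illustrative sketch (not part of distlib): feeding Evaluator.evaluate()
# directly.  The dict below mirrors the {'op', 'lhs', 'rhs'} shape produced
# by parse_marker() for a simple comparison (an assumption for this example);
# quoted strings are literals, bare names are looked up in the context.
def _example_evaluate():  # pragma: no cover
    expr = {'op': '==', 'lhs': 'os_name', 'rhs': '"posix"'}
    return Evaluator().evaluate(expr, {'os_name': 'posix'})   # -> True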
""" if isinstance(expr, string_types): if expr[0] in '\'"': result = expr[1:-1] else: if expr not in context: raise SyntaxError('unknown variable: %s' % expr) result = context[expr] else: assert isinstance(expr, dict) op = expr['op'] if op not in self.operations: raise NotImplementedError('op not implemented: %s' % op) elhs = expr['lhs'] erhs = expr['rhs'] if _is_literal(expr['lhs']) and _is_literal(expr['rhs']): raise SyntaxError('invalid comparison: %s %s %s' % (elhs, op, erhs)) lhs = self.evaluate(elhs, context) rhs = self.evaluate(erhs, context) result = self.operations[op](lhs, rhs) return result def default_context(): def format_full_version(info): version = '%s.%s.%s' % (info.major, info.minor, info.micro) kind = info.releaselevel if kind != 'final': version += kind[0] + str(info.serial) return version if hasattr(sys, 'implementation'): implementation_version = format_full_version(sys.implementation.version) implementation_name = sys.implementation.name else: implementation_version = '0' implementation_name = '' result = { 'implementation_name': implementation_name, 'implementation_version': implementation_version, 'os_name': os.name, 'platform_machine': platform.machine(), 'platform_python_implementation': platform.python_implementation(), 'platform_release': platform.release(), 'platform_system': platform.system(), 'platform_version': platform.version(), 'platform_in_venv': str(in_venv()), 'python_full_version': platform.python_version(), 'python_version': platform.python_version()[:3], 'sys_platform': sys.platform, } return result DEFAULT_CONTEXT = default_context() del default_context evaluator = Evaluator() def interpret(marker, execution_context=None): """ Interpret a marker and return a result depending on environment. :param marker: The marker to interpret. :type marker: str :param execution_context: The context used for name lookup. 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/metadata.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2012 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""Implementation of the Metadata for Python packages PEPs.

Supports all metadata formats (1.0, 1.1, 1.2, and 2.0 experimental).
"""
from __future__ import unicode_literals

import codecs
from email import message_from_file
import json
import logging
import re


from . import DistlibException, __version__
from .compat import StringIO, string_types, text_type
from .markers import interpret
from .util import extract_by_key, get_extras
from .version import get_scheme, PEP440_VERSION_RE

logger = logging.getLogger(__name__)


class MetadataMissingError(DistlibException):
    """A required metadata is missing"""


class MetadataConflictError(DistlibException):
    """Attempt to read or write metadata fields that are conflictual."""


class MetadataUnrecognizedVersionError(DistlibException):
    """Unknown metadata version number."""


class MetadataInvalidError(DistlibException):
    """A metadata value is invalid"""


# public API of this module
__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION']


# Encoding used for the PKG-INFO files
PKG_INFO_ENCODING = 'utf-8'

# preferred version. Hopefully will be changed
# to 1.2 once PEP 345 is supported everywhere
PKG_INFO_PREFERRED_VERSION = '1.1'
_LINE_PREFIX_1_2 = re.compile('\n       \\|')
_LINE_PREFIX_PRE_1_2 = re.compile('\n        ')

_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'License')

_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Supported-Platform', 'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'License', 'Classifier', 'Download-URL', 'Obsoletes',
               'Provides', 'Requires')

_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier',
                'Download-URL')

_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Supported-Platform', 'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'Maintainer', 'Maintainer-email', 'License',
               'Classifier', 'Download-URL', 'Obsoletes-Dist',
               'Project-URL', 'Provides-Dist', 'Requires-Dist',
               'Requires-Python', 'Requires-External')

_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python',
                'Obsoletes-Dist', 'Requires-External', 'Maintainer',
                'Maintainer-email', 'Project-URL')

_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Supported-Platform', 'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'Maintainer', 'Maintainer-email', 'License',
               'Classifier', 'Download-URL', 'Obsoletes-Dist',
               'Project-URL', 'Provides-Dist', 'Requires-Dist',
               'Requires-Python', 'Requires-External', 'Private-Version',
               'Obsoleted-By', 'Setup-Requires-Dist', 'Extension',
               'Provides-Extra')

_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By',
                'Setup-Requires-Dist', 'Extension')

# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in
# the metadata. Include them in the tuple literal below to allow them
# (for now).
_566_FIELDS = _426_FIELDS + ('Description-Content-Type',
                             'Requires', 'Provides')

_566_MARKERS = ('Description-Content-Type',)

_ALL_FIELDS = set()
_ALL_FIELDS.update(_241_FIELDS)
_ALL_FIELDS.update(_314_FIELDS)
_ALL_FIELDS.update(_345_FIELDS)
_ALL_FIELDS.update(_426_FIELDS)
_ALL_FIELDS.update(_566_FIELDS)

EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''')


def _version2fieldlist(version):
    if version == '1.0':
        return _241_FIELDS
    elif version == '1.1':
        return _314_FIELDS
    elif version == '1.2':
        return _345_FIELDS
    elif version in ('1.3', '2.1'):
        return _345_FIELDS + _566_FIELDS
    elif version == '2.0':
        return _426_FIELDS
    raise MetadataUnrecognizedVersionError(version)
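
# Illustrative sketch (not part of distlib): _version2fieldlist() maps a
# metadata version to the set of fields that are legal for it.
def _example_version2fieldlist():  # pragma: no cover
    assert 'Requires-Dist' in _version2fieldlist('1.2')
    assert 'Requires-Dist' not in _version2fieldlist('1.0')
    return _version2fieldlist('1.1')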

def _best_version(fields):
    """Detect the best version depending on the fields used."""
    def _has_marker(keys, markers):
        for marker in markers:
            if marker in keys:
                return True
        return False

    keys = []
    for key, value in fields.items():
        if value in ([], 'UNKNOWN', None):
            continue
        keys.append(key)

    possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.0', '2.1']

    # first let's try to see if a field is not part of one of the versions
    for key in keys:
        if key not in _241_FIELDS and '1.0' in possible_versions:
            possible_versions.remove('1.0')
            logger.debug('Removed 1.0 due to %s', key)
        if key not in _314_FIELDS and '1.1' in possible_versions:
            possible_versions.remove('1.1')
            logger.debug('Removed 1.1 due to %s', key)
        if key not in _345_FIELDS and '1.2' in possible_versions:
            possible_versions.remove('1.2')
            logger.debug('Removed 1.2 due to %s', key)
        if key not in _566_FIELDS and '1.3' in possible_versions:
            possible_versions.remove('1.3')
            logger.debug('Removed 1.3 due to %s', key)
        if key not in _566_FIELDS and '2.1' in possible_versions:
            if key != 'Description':  # In 2.1, description allowed after headers
                possible_versions.remove('2.1')
                logger.debug('Removed 2.1 due to %s', key)
        if key not in _426_FIELDS and '2.0' in possible_versions:
            possible_versions.remove('2.0')
            logger.debug('Removed 2.0 due to %s', key)

    # possible_versions contains qualified versions
    if len(possible_versions) == 1:
        return possible_versions[0]   # found !
    elif len(possible_versions) == 0:
        logger.debug('Out of options - unknown metadata set: %s', fields)
        raise MetadataConflictError('Unknown metadata set')

    # let's see if one unique marker is found
    is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS)
    is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS)
    is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS)
    is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS)
    if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_0) > 1:
        raise MetadataConflictError('You used incompatible 1.1/1.2/2.0/2.1 fields')

    # we have the choice, 1.0, or 1.2, or 2.0
    #   - 1.0 has a broken Summary field but works with all tools
    #   - 1.1 is to avoid
    #   - 1.2 fixes Summary but has little adoption
    #   - 2.0 adds more features and is very new
    if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_0:
        # we couldn't find any specific marker
        if PKG_INFO_PREFERRED_VERSION in possible_versions:
            return PKG_INFO_PREFERRED_VERSION
    if is_1_1:
        return '1.1'
    if is_1_2:
        return '1.2'
    if is_2_1:
        return '2.1'

    return '2.0'


_ATTR2FIELD = {
    'metadata_version': 'Metadata-Version',
    'name': 'Name',
    'version': 'Version',
    'platform': 'Platform',
    'supported_platform': 'Supported-Platform',
    'summary': 'Summary',
    'description': 'Description',
    'keywords': 'Keywords',
    'home_page': 'Home-page',
    'author': 'Author',
    'author_email': 'Author-email',
    'maintainer': 'Maintainer',
    'maintainer_email': 'Maintainer-email',
    'license': 'License',
    'classifier': 'Classifier',
    'download_url': 'Download-URL',
    'obsoletes_dist': 'Obsoletes-Dist',
    'provides_dist': 'Provides-Dist',
    'requires_dist': 'Requires-Dist',
    'setup_requires_dist': 'Setup-Requires-Dist',
    'requires_python': 'Requires-Python',
    'requires_external': 'Requires-External',
    'requires': 'Requires',
    'provides': 'Provides',
    'obsoletes': 'Obsoletes',
    'project_url': 'Project-URL',
    'private_version': 'Private-Version',
    'obsoleted_by': 'Obsoleted-By',
    'extension': 'Extension',
    'provides_extra': 'Provides-Extra',
}

_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist')
_VERSIONS_FIELDS = ('Requires-Python',)
_VERSION_FIELDS = ('Version',)
_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes',
               'Requires', 'Provides', 'Obsoletes-Dist',
               'Provides-Dist', 'Requires-Dist', 'Requires-External',
               'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist',
               'Provides-Extra', 'Extension')
_LISTTUPLEFIELDS = ('Project-URL',)

_ELEMENTSFIELD = ('Keywords',)

_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description')

_MISSING = object()

_FILESAFE = re.compile('[^A-Za-z0-9.]+')


def _get_name_and_version(name, version, for_filename=False):
    """Return the distribution name with version.

    If for_filename is true, return a filename-escaped form."""
    if for_filename:
        # For both name and version any runs of non-alphanumeric or '.'
        # characters are replaced with a single '-'.  Additionally any
        # spaces in the version string become '.'
        name = _FILESAFE.sub('-', name)
        version = _FILESAFE.sub('-', version.replace(' ', '.'))
    return '%s-%s' % (name, version)
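
# Illustrative sketch (not part of distlib): _best_version() infers the
# lowest metadata version that accommodates the fields actually present,
# and _get_name_and_version() escapes name/version for use in file names.
def _example_version_helpers():  # pragma: no cover
    fields = {'Name': 'demo', 'Version': '0.1', 'Summary': 'A demo'}
    assert _best_version(fields) == PKG_INFO_PREFERRED_VERSION   # '1.1'
    assert _get_name_and_version('my dist', '1.0 beta', True) == 'my-dist-1.0.beta'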

class LegacyMetadata(object):
    """The legacy metadata of a release.

    Supports versions 1.0, 1.1 and 1.2 (auto-detected). You can
    instantiate the class with one of these arguments (or none):
    - *path*, the path to a metadata file
    - *fileobj* give a file-like object with metadata as content
    - *mapping* is a dict-like object
    - *scheme* is a version scheme name
    """
    # TODO document the mapping API and UNKNOWN default key

    def __init__(self, path=None, fileobj=None, mapping=None,
                 scheme='default'):
        if [path, fileobj, mapping].count(None) < 2:
            raise TypeError('path, fileobj and mapping are exclusive')
        self._fields = {}
        self.requires_files = []
        self._dependencies = None
        self.scheme = scheme
        if path is not None:
            self.read(path)
        elif fileobj is not None:
            self.read_file(fileobj)
        elif mapping is not None:
            self.update(mapping)
            self.set_metadata_version()

    def set_metadata_version(self):
        self._fields['Metadata-Version'] = _best_version(self._fields)

    def _write_field(self, fileobj, name, value):
        fileobj.write('%s: %s\n' % (name, value))

    def __getitem__(self, name):
        return self.get(name)

    def __setitem__(self, name, value):
        return self.set(name, value)

    def __delitem__(self, name):
        field_name = self._convert_name(name)
        try:
            del self._fields[field_name]
        except KeyError:
            raise KeyError(name)

    def __contains__(self, name):
        return (name in self._fields or
                self._convert_name(name) in self._fields)

    def _convert_name(self, name):
        if name in _ALL_FIELDS:
            return name
        name = name.replace('-', '_').lower()
        return _ATTR2FIELD.get(name, name)

    def _default_value(self, name):
        if name in _LISTFIELDS or name in _ELEMENTSFIELD:
            return []
        return 'UNKNOWN'

    def _remove_line_prefix(self, value):
        if self.metadata_version in ('1.0', '1.1'):
            return _LINE_PREFIX_PRE_1_2.sub('\n', value)
        else:
            return _LINE_PREFIX_1_2.sub('\n', value)

    def __getattr__(self, name):
        if name in _ATTR2FIELD:
            return self[name]
        raise AttributeError(name)

    #
    # Public API
    #

#    dependencies = property(_get_dependencies, _set_dependencies)
    def get_fullname(self, filesafe=False):
        """Return the distribution name with version.

        If filesafe is true, return a filename-escaped form."""
        return _get_name_and_version(self['Name'], self['Version'], filesafe)

    def is_field(self, name):
        """return True if name is a valid metadata key"""
        name = self._convert_name(name)
        return name in _ALL_FIELDS

    def is_multi_field(self, name):
        name = self._convert_name(name)
        return name in _LISTFIELDS

    def read(self, filepath):
        """Read the metadata values from a file path."""
        fp = codecs.open(filepath, 'r', encoding='utf-8')
        try:
            self.read_file(fp)
        finally:
            fp.close()

    def read_file(self, fileob):
        """Read the metadata values from a file object."""
        msg = message_from_file(fileob)
        self._fields['Metadata-Version'] = msg['metadata-version']

        # When reading, get all the fields we can
        for field in _ALL_FIELDS:
            if field not in msg:
                continue
            if field in _LISTFIELDS:
                # we can have multiple lines
                values = msg.get_all(field)
                if field in _LISTTUPLEFIELDS and values is not None:
                    values = [tuple(value.split(',')) for value in values]
                self.set(field, values)
            else:
                # single line
                value = msg[field]
                if value is not None and value != 'UNKNOWN':
                    self.set(field, value)
        # logger.debug('Attempting to set metadata for %s', self)
        # self.set_metadata_version()

    def write(self, filepath, skip_unknown=False):
        """Write the metadata fields to filepath."""
        fp = codecs.open(filepath, 'w', encoding='utf-8')
        try:
            self.write_file(fp, skip_unknown)
        finally:
            fp.close()

    def write_file(self, fileobject, skip_unknown=False):
        """Write the PKG-INFO format data to a file object."""
        self.set_metadata_version()

        for field in _version2fieldlist(self['Metadata-Version']):
            values = self.get(field)
            if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']):
                continue
            if field in _ELEMENTSFIELD:
                self._write_field(fileobject, field, ','.join(values))
                continue
            if field not in _LISTFIELDS:
                if field == 'Description':
                    if self.metadata_version in ('1.0', '1.1'):
                        values = values.replace('\n', '\n        ')
                    else:
                        values = values.replace('\n', '\n       |')
                values = [values]

            if field in _LISTTUPLEFIELDS:
                values = [','.join(value) for value in values]

            for value in values:
                self._write_field(fileobject, field, value)
""" def _set(key, value): if key in _ATTR2FIELD and value: self.set(self._convert_name(key), value) if not other: # other is None or empty container pass elif hasattr(other, 'keys'): for k in other.keys(): _set(k, other[k]) else: for k, v in other: _set(k, v) if kwargs: for k, v in kwargs.items(): _set(k, v) def set(self, name, value): """Control then set a metadata field.""" name = self._convert_name(name) if ((name in _ELEMENTSFIELD or name == 'Platform') and not isinstance(value, (list, tuple))): if isinstance(value, string_types): value = [v.strip() for v in value.split(',')] else: value = [] elif (name in _LISTFIELDS and not isinstance(value, (list, tuple))): if isinstance(value, string_types): value = [value] else: value = [] if logger.isEnabledFor(logging.WARNING): project_name = self['Name'] scheme = get_scheme(self.scheme) if name in _PREDICATE_FIELDS and value is not None: for v in value: # check that the values are valid if not scheme.is_valid_matcher(v.split(';')[0]): logger.warning( "'%s': '%s' is not valid (field '%s')", project_name, v, name) # FIXME this rejects UNKNOWN, is that right? elif name in _VERSIONS_FIELDS and value is not None: if not scheme.is_valid_constraint_list(value): logger.warning("'%s': '%s' is not a valid version (field '%s')", project_name, value, name) elif name in _VERSION_FIELDS and value is not None: if not scheme.is_valid_version(value): logger.warning("'%s': '%s' is not a valid version (field '%s')", project_name, value, name) if name in _UNICODEFIELDS: if name == 'Description': value = self._remove_line_prefix(value) self._fields[name] = value def get(self, name, default=_MISSING): """Get a metadata field.""" name = self._convert_name(name) if name not in self._fields: if default is _MISSING: default = self._default_value(name) return default if name in _UNICODEFIELDS: value = self._fields[name] return value elif name in _LISTFIELDS: value = self._fields[name] if value is None: return [] res = [] for val in value: if name not in _LISTTUPLEFIELDS: res.append(val) else: # That's for Project-URL res.append((val[0], val[1])) return res elif name in _ELEMENTSFIELD: value = self._fields[name] if isinstance(value, string_types): return value.split(',') return self._fields[name] def check(self, strict=False): """Check if the metadata is compliant. If strict is True then raise if no Name or Version are provided""" self.set_metadata_version() # XXX should check the versions (if the file was loaded) missing, warnings = [], [] for attr in ('Name', 'Version'): # required by PEP 345 if attr not in self: missing.append(attr) if strict and missing != []: msg = 'missing required metadata: %s' % ', '.join(missing) raise MetadataMissingError(msg) for attr in ('Home-page', 'Author'): if attr not in self: missing.append(attr) # checking metadata 1.2 (XXX needs to check 1.1, 1.0) if self['Metadata-Version'] != '1.2': return missing, warnings scheme = get_scheme(self.scheme) def are_valid_constraints(value): for v in value: if not scheme.is_valid_matcher(v.split(';')[0]): return False return True for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints), (_VERSIONS_FIELDS, scheme.is_valid_constraint_list), (_VERSION_FIELDS, scheme.is_valid_version)): for field in fields: value = self.get(field, None) if value is not None and not controller(value): warnings.append("Wrong value for '%s': %s" % (field, value)) return missing, warnings def todict(self, skip_missing=False): """Return fields as a dict. 
    def todict(self, skip_missing=False):
        """Return fields as a dict.

        Field names will be converted to use the underscore-lowercase style
        instead of hyphen-mixed case (i.e. home_page instead of Home-page).
        """
        self.set_metadata_version()

        mapping_1_0 = (
            ('metadata_version', 'Metadata-Version'),
            ('name', 'Name'),
            ('version', 'Version'),
            ('summary', 'Summary'),
            ('home_page', 'Home-page'),
            ('author', 'Author'),
            ('author_email', 'Author-email'),
            ('license', 'License'),
            ('description', 'Description'),
            ('keywords', 'Keywords'),
            ('platform', 'Platform'),
            ('classifiers', 'Classifier'),
            ('download_url', 'Download-URL'),
        )

        data = {}
        for key, field_name in mapping_1_0:
            if not skip_missing or field_name in self._fields:
                data[key] = self[field_name]

        if self['Metadata-Version'] == '1.2':
            mapping_1_2 = (
                ('requires_dist', 'Requires-Dist'),
                ('requires_python', 'Requires-Python'),
                ('requires_external', 'Requires-External'),
                ('provides_dist', 'Provides-Dist'),
                ('obsoletes_dist', 'Obsoletes-Dist'),
                ('project_url', 'Project-URL'),
                ('maintainer', 'Maintainer'),
                ('maintainer_email', 'Maintainer-email'),
            )
            for key, field_name in mapping_1_2:
                if not skip_missing or field_name in self._fields:
                    if key != 'project_url':
                        data[key] = self[field_name]
                    else:
                        data[key] = [','.join(u) for u in self[field_name]]

        elif self['Metadata-Version'] == '1.1':
            mapping_1_1 = (
                ('provides', 'Provides'),
                ('requires', 'Requires'),
                ('obsoletes', 'Obsoletes'),
            )
            for key, field_name in mapping_1_1:
                if not skip_missing or field_name in self._fields:
                    data[key] = self[field_name]

        return data

    def add_requirements(self, requirements):
        if self['Metadata-Version'] == '1.1':
            # we can't have 1.1 metadata *and* Setuptools requires
            for field in ('Obsoletes', 'Requires', 'Provides'):
                if field in self:
                    del self[field]
        self['Requires-Dist'] += requirements

    # Mapping API
    # TODO could add iter* variants

    def keys(self):
        return list(_version2fieldlist(self['Metadata-Version']))

    def __iter__(self):
        for key in self.keys():
            yield key

    def values(self):
        return [self[key] for key in self.keys()]

    def items(self):
        return [(key, self[key]) for key in self.keys()]

    def __repr__(self):
        return '<%s %s %s>' % (self.__class__.__name__, self.name,
                               self.version)


METADATA_FILENAME = 'pydist.json'
WHEEL_METADATA_FILENAME = 'metadata.json'
LEGACY_METADATA_FILENAME = 'METADATA'
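
# Illustrative sketch (not part of distlib): building a LegacyMetadata from
# a mapping and rendering it as PKG-INFO text.  StringIO comes from the
# .compat import at the top of this module.
def _example_legacy_metadata():  # pragma: no cover
    md = LegacyMetadata(mapping={'name': 'demo', 'version': '0.1',
                                 'summary': 'A demo distribution'})
    buf = StringIO()
    md.write_file(buf, skip_unknown=True)
    return buf.getvalue()    # 'Metadata-Version: 1.1\nName: demo\n...'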
""" METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$') NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I) VERSION_MATCHER = PEP440_VERSION_RE SUMMARY_MATCHER = re.compile('.{1,2047}') METADATA_VERSION = '2.0' GENERATOR = 'distlib (%s)' % __version__ MANDATORY_KEYS = { 'name': (), 'version': (), 'summary': ('legacy',), } INDEX_KEYS = ('name version license summary description author ' 'author_email keywords platform home_page classifiers ' 'download_url') DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires ' 'dev_requires provides meta_requires obsoleted_by ' 'supports_environments') SYNTAX_VALIDATORS = { 'metadata_version': (METADATA_VERSION_MATCHER, ()), 'name': (NAME_MATCHER, ('legacy',)), 'version': (VERSION_MATCHER, ('legacy',)), 'summary': (SUMMARY_MATCHER, ('legacy',)), } __slots__ = ('_legacy', '_data', 'scheme') def __init__(self, path=None, fileobj=None, mapping=None, scheme='default'): if [path, fileobj, mapping].count(None) < 2: raise TypeError('path, fileobj and mapping are exclusive') self._legacy = None self._data = None self.scheme = scheme #import pdb; pdb.set_trace() if mapping is not None: try: self._validate_mapping(mapping, scheme) self._data = mapping except MetadataUnrecognizedVersionError: self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme) self.validate() else: data = None if path: with open(path, 'rb') as f: data = f.read() elif fileobj: data = fileobj.read() if data is None: # Initialised with no args - to be added self._data = { 'metadata_version': self.METADATA_VERSION, 'generator': self.GENERATOR, } else: if not isinstance(data, text_type): data = data.decode('utf-8') try: self._data = json.loads(data) self._validate_mapping(self._data, scheme) except ValueError: # Note: MetadataUnrecognizedVersionError does not # inherit from ValueError (it's a DistlibException, # which should not inherit from ValueError). 
    def __init__(self, path=None, fileobj=None, mapping=None,
                 scheme='default'):
        if [path, fileobj, mapping].count(None) < 2:
            raise TypeError('path, fileobj and mapping are exclusive')
        self._legacy = None
        self._data = None
        self.scheme = scheme
        #import pdb; pdb.set_trace()
        if mapping is not None:
            try:
                self._validate_mapping(mapping, scheme)
                self._data = mapping
            except MetadataUnrecognizedVersionError:
                self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme)
                self.validate()
        else:
            data = None
            if path:
                with open(path, 'rb') as f:
                    data = f.read()
            elif fileobj:
                data = fileobj.read()
            if data is None:
                # Initialised with no args - to be added
                self._data = {
                    'metadata_version': self.METADATA_VERSION,
                    'generator': self.GENERATOR,
                }
            else:
                if not isinstance(data, text_type):
                    data = data.decode('utf-8')
                try:
                    self._data = json.loads(data)
                    self._validate_mapping(self._data, scheme)
                except ValueError:
                    # Note: MetadataUnrecognizedVersionError does not
                    # inherit from ValueError (it's a DistlibException,
                    # which should not inherit from ValueError).
                    # The ValueError comes from the json.load - if that
                    # succeeds and we get a validation error, we want
                    # that to propagate
                    self._legacy = LegacyMetadata(fileobj=StringIO(data),
                                                  scheme=scheme)
                    self.validate()

    common_keys = set(('name', 'version', 'license', 'keywords', 'summary'))

    none_list = (None, list)
    none_dict = (None, dict)

    mapped_keys = {
        'run_requires': ('Requires-Dist', list),
        'build_requires': ('Setup-Requires-Dist', list),
        'dev_requires': none_list,
        'test_requires': none_list,
        'meta_requires': none_list,
        'extras': ('Provides-Extra', list),
        'modules': none_list,
        'namespaces': none_list,
        'exports': none_dict,
        'commands': none_dict,
        'classifiers': ('Classifier', list),
        'source_url': ('Download-URL', None),
        'metadata_version': ('Metadata-Version', None),
    }

    del none_list, none_dict

    def __getattribute__(self, key):
        common = object.__getattribute__(self, 'common_keys')
        mapped = object.__getattribute__(self, 'mapped_keys')
        if key in mapped:
            lk, maker = mapped[key]
            if self._legacy:
                if lk is None:
                    result = None if maker is None else maker()
                else:
                    result = self._legacy.get(lk)
            else:
                value = None if maker is None else maker()
                if key not in ('commands', 'exports', 'modules', 'namespaces',
                               'classifiers'):
                    result = self._data.get(key, value)
                else:
                    # special cases for PEP 459
                    sentinel = object()
                    result = sentinel
                    d = self._data.get('extensions')
                    if d:
                        if key == 'commands':
                            result = d.get('python.commands', value)
                        elif key == 'classifiers':
                            d = d.get('python.details')
                            if d:
                                result = d.get(key, value)
                        else:
                            d = d.get('python.exports')
                            if not d:
                                d = self._data.get('python.exports')
                            if d:
                                result = d.get(key, value)
                    if result is sentinel:
                        result = value
        elif key not in common:
            result = object.__getattribute__(self, key)
        elif self._legacy:
            result = self._legacy.get(key)
        else:
            result = self._data.get(key)
        return result

    def _validate_value(self, key, value, scheme=None):
        if key in self.SYNTAX_VALIDATORS:
            pattern, exclusions = self.SYNTAX_VALIDATORS[key]
            if (scheme or self.scheme) not in exclusions:
                m = pattern.match(value)
                if not m:
                    raise MetadataInvalidError("'%s' is an invalid value for "
                                               "the '%s' property" % (value, key))

    def __setattr__(self, key, value):
        self._validate_value(key, value)
        common = object.__getattribute__(self, 'common_keys')
        mapped = object.__getattribute__(self, 'mapped_keys')
        if key in mapped:
            lk, _ = mapped[key]
            if self._legacy:
                if lk is None:
                    raise NotImplementedError
                self._legacy[lk] = value
            elif key not in ('commands', 'exports', 'modules', 'namespaces',
                             'classifiers'):
                self._data[key] = value
            else:
                # special cases for PEP 459
                d = self._data.setdefault('extensions', {})
                if key == 'commands':
                    d['python.commands'] = value
                elif key == 'classifiers':
                    d = d.setdefault('python.details', {})
                    d[key] = value
                else:
                    d = d.setdefault('python.exports', {})
                    d[key] = value
        elif key not in common:
            object.__setattr__(self, key, value)
        else:
            if key == 'keywords':
                if isinstance(value, string_types):
                    value = value.strip()
                    if value:
                        value = value.split()
                    else:
                        value = []
            if self._legacy:
                self._legacy[key] = value
            else:
                self._data[key] = value

    @property
    def name_and_version(self):
        return _get_name_and_version(self.name, self.version, True)

    @property
    def provides(self):
        if self._legacy:
            result = self._legacy['Provides-Dist']
        else:
            result = self._data.setdefault('provides', [])
        s = '%s (%s)' % (self.name, self.version)
        if s not in result:
            result.append(s)
        return result

    @provides.setter
    def provides(self, value):
        if self._legacy:
            self._legacy['Provides-Dist'] = value
        else:
            self._data['provides'] = value
    def get_requirements(self, reqts, extras=None, env=None):
        """
        Base method to get dependencies, given a set of extras
        to satisfy and an optional environment context.

        :param reqts: A list of sometimes-wanted dependencies,
                      perhaps dependent on extras and environment.
        :param extras: A list of optional components being requested.
        :param env: An optional environment for marker evaluation.
        """
        if self._legacy:
            result = reqts
        else:
            result = []
            extras = get_extras(extras or [], self.extras)
            for d in reqts:
                if 'extra' not in d and 'environment' not in d:
                    # unconditional
                    include = True
                else:
                    if 'extra' not in d:
                        # Not extra-dependent - only environment-dependent
                        include = True
                    else:
                        include = d.get('extra') in extras
                    if include:
                        # Not excluded because of extras, check environment
                        marker = d.get('environment')
                        if marker:
                            include = interpret(marker, env)
                if include:
                    result.extend(d['requires'])
            for key in ('build', 'dev', 'test'):
                e = ':%s:' % key
                if e in extras:
                    extras.remove(e)
                    # A recursive call, but it should terminate since 'test'
                    # has been removed from the extras
                    reqts = self._data.get('%s_requires' % key, [])
                    result.extend(self.get_requirements(reqts, extras=extras,
                                                        env=env))
        return result

    @property
    def dictionary(self):
        if self._legacy:
            return self._from_legacy()
        return self._data

    @property
    def dependencies(self):
        if self._legacy:
            raise NotImplementedError
        else:
            return extract_by_key(self._data, self.DEPENDENCY_KEYS)

    @dependencies.setter
    def dependencies(self, value):
        if self._legacy:
            raise NotImplementedError
        else:
            self._data.update(value)

    def _validate_mapping(self, mapping, scheme):
        if mapping.get('metadata_version') != self.METADATA_VERSION:
            raise MetadataUnrecognizedVersionError()
        missing = []
        for key, exclusions in self.MANDATORY_KEYS.items():
            if key not in mapping:
                if scheme not in exclusions:
                    missing.append(key)
        if missing:
            msg = 'Missing metadata items: %s' % ', '.join(missing)
            raise MetadataMissingError(msg)
        for k, v in mapping.items():
            self._validate_value(k, v, scheme)

    def validate(self):
        if self._legacy:
            missing, warnings = self._legacy.check(True)
            if missing or warnings:
                logger.warning('Metadata: missing: %s, warnings: %s',
                               missing, warnings)
        else:
            self._validate_mapping(self._data, self.scheme)

    def todict(self):
        if self._legacy:
            return self._legacy.todict(True)
        else:
            result = extract_by_key(self._data, self.INDEX_KEYS)
            return result

    def _from_legacy(self):
        assert self._legacy and not self._data
        result = {
            'metadata_version': self.METADATA_VERSION,
            'generator': self.GENERATOR,
        }
        lmd = self._legacy.todict(True)     # skip missing ones
        for k in ('name', 'version', 'license', 'summary', 'description',
                  'classifier'):
            if k in lmd:
                if k == 'classifier':
                    nk = 'classifiers'
                else:
                    nk = k
                result[nk] = lmd[k]
        kw = lmd.get('Keywords', [])
        if kw == ['']:
            kw = []
        result['keywords'] = kw
        keys = (('requires_dist', 'run_requires'),
                ('setup_requires_dist', 'build_requires'))
        for ok, nk in keys:
            if ok in lmd and lmd[ok]:
                result[nk] = [{'requires': lmd[ok]}]
        result['provides'] = self.provides
        author = {}
        maintainer = {}
        return result

    LEGACY_MAPPING = {
        'name': 'Name',
        'version': 'Version',
        'license': 'License',
        'summary': 'Summary',
        'description': 'Description',
        'classifiers': 'Classifier',
    }
    def _to_legacy(self):
        def process_entries(entries):
            reqts = set()
            for e in entries:
                extra = e.get('extra')
                env = e.get('environment')
                rlist = e['requires']
                for r in rlist:
                    if not env and not extra:
                        reqts.add(r)
                    else:
                        marker = ''
                        if extra:
                            marker = 'extra == "%s"' % extra
                        if env:
                            if marker:
                                marker = '(%s) and %s' % (env, marker)
                            else:
                                marker = env
                        reqts.add(';'.join((r, marker)))
            return reqts

        assert self._data and not self._legacy
        result = LegacyMetadata()
        nmd = self._data
        for nk, ok in self.LEGACY_MAPPING.items():
            if nk in nmd:
                result[ok] = nmd[nk]
        r1 = process_entries(self.run_requires + self.meta_requires)
        r2 = process_entries(self.build_requires + self.dev_requires)
        if self.extras:
            result['Provides-Extra'] = sorted(self.extras)
        result['Requires-Dist'] = sorted(r1)
        result['Setup-Requires-Dist'] = sorted(r2)
        # TODO: other fields such as contacts
        return result

    def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True):
        if [path, fileobj].count(None) != 1:
            raise ValueError('Exactly one of path and fileobj is needed')
        self.validate()
        if legacy:
            if self._legacy:
                legacy_md = self._legacy
            else:
                legacy_md = self._to_legacy()
            if path:
                legacy_md.write(path, skip_unknown=skip_unknown)
            else:
                legacy_md.write_file(fileobj, skip_unknown=skip_unknown)
        else:
            if self._legacy:
                d = self._from_legacy()
            else:
                d = self._data
            if fileobj:
                json.dump(d, fileobj, ensure_ascii=True, indent=2,
                          sort_keys=True)
            else:
                with codecs.open(path, 'w', 'utf-8') as f:
                    json.dump(d, f, ensure_ascii=True, indent=2,
                              sort_keys=True)

    def add_requirements(self, requirements):
        if self._legacy:
            self._legacy.add_requirements(requirements)
        else:
            run_requires = self._data.setdefault('run_requires', [])
            always = None
            for entry in run_requires:
                if 'environment' not in entry and 'extra' not in entry:
                    always = entry
                    break
            if always is None:
                always = { 'requires': requirements }
                run_requires.insert(0, always)
            else:
                rset = set(always['requires']) | set(requirements)
                always['requires'] = sorted(rset)

    def __repr__(self):
        name = self.name or '(no name)'
        version = self.version or 'no version'
        return '<%s %s %s (%s)>' % (self.__class__.__name__,
                                    self.metadata_version, name, version)
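
# Illustrative sketch (not part of distlib): the Metadata wrapper accepts a
# 2.0 (JSON-style) mapping directly and exposes it attribute-wise.
def _example_metadata():  # pragma: no cover
    md = Metadata(mapping={'metadata_version': '2.0', 'name': 'demo',
                           'version': '0.1', 'summary': 'A demo distribution'})
    md.add_requirements(['requests (>= 2.0)'])
    assert md.name == 'demo'
    return md.todict()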
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/resources.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2017 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from __future__ import unicode_literals

import bisect
import io
import logging
import os
import pkgutil
import shutil
import sys
import types
import zipimport

from . import DistlibException
from .util import cached_property, get_cache_base, path_to_cache_dir, Cache

logger = logging.getLogger(__name__)


cache = None    # created when needed


class ResourceCache(Cache):
    def __init__(self, base=None):
        if base is None:
            # Use native string to avoid issues on 2.x: see Python #20140.
            base = os.path.join(get_cache_base(), str('resource-cache'))
        super(ResourceCache, self).__init__(base)

    def is_stale(self, resource, path):
        """
        Is the cache stale for the given resource?

        :param resource: The :class:`Resource` being cached.
        :param path: The path of the resource in the cache.
        :return: True if the cache is stale.
        """
        # Cache invalidation is a hard problem :-)
        return True

    def get(self, resource):
        """
        Get a resource into the cache.

        :param resource: A :class:`Resource` instance.
        :return: The pathname of the resource in the cache.
        """
        prefix, path = resource.finder.get_cache_info(resource)
        if prefix is None:
            result = path
        else:
            result = os.path.join(self.base, self.prefix_to_dir(prefix), path)
            dirname = os.path.dirname(result)
            if not os.path.isdir(dirname):
                os.makedirs(dirname)
            if not os.path.exists(result):
                stale = True
            else:
                stale = self.is_stale(resource, path)
            if stale:
                # write the bytes of the resource to the cache location
                with open(result, 'wb') as f:
                    f.write(resource.bytes)
        return result


class ResourceBase(object):
    def __init__(self, finder, name):
        self.finder = finder
        self.name = name


class Resource(ResourceBase):
    """
    A class representing an in-package resource, such as a data file. This is
    not normally instantiated by user code, but rather by a
    :class:`ResourceFinder` which manages the resource.
    """
    is_container = False        # Backwards compatibility

    def as_stream(self):
        """
        Get the resource as a stream.

        This is not a property to make it obvious that it returns a new stream
        each time.
        """
        return self.finder.get_stream(self)

    @cached_property
    def file_path(self):
        global cache
        if cache is None:
            cache = ResourceCache()
        return cache.get(self)

    @cached_property
    def bytes(self):
        return self.finder.get_bytes(self)

    @cached_property
    def size(self):
        return self.finder.get_size(self)


class ResourceContainer(ResourceBase):
    is_container = True     # Backwards compatibility

    @cached_property
    def resources(self):
        return self.finder.get_resources(self)
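
# Illustrative sketch (not part of distlib): how Resource properties are
# typically consumed.  finder() is defined further down in this module; the
# package and resource names here are hypothetical.
def _example_resource():  # pragma: no cover
    res = finder('mypackage').find('data/config.json')
    if res is not None and not res.is_container:
        payload = res.bytes                 # cached after first access
        with res.as_stream() as stream:     # a fresh stream on every call
            head = stream.read(16)
        return payload, head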
""" if sys.platform.startswith('java'): skipped_extensions = ('.pyc', '.pyo', '.class') else: skipped_extensions = ('.pyc', '.pyo') def __init__(self, module): self.module = module self.loader = getattr(module, '__loader__', None) self.base = os.path.dirname(getattr(module, '__file__', '')) def _adjust_path(self, path): return os.path.realpath(path) def _make_path(self, resource_name): # Issue #50: need to preserve type of path on Python 2.x # like os.path._get_sep if isinstance(resource_name, bytes): # should only happen on 2.x sep = b'/' else: sep = '/' parts = resource_name.split(sep) parts.insert(0, self.base) result = os.path.join(*parts) return self._adjust_path(result) def _find(self, path): return os.path.exists(path) def get_cache_info(self, resource): return None, resource.path def find(self, resource_name): path = self._make_path(resource_name) if not self._find(path): result = None else: if self._is_directory(path): result = ResourceContainer(self, resource_name) else: result = Resource(self, resource_name) result.path = path return result def get_stream(self, resource): return open(resource.path, 'rb') def get_bytes(self, resource): with open(resource.path, 'rb') as f: return f.read() def get_size(self, resource): return os.path.getsize(resource.path) def get_resources(self, resource): def allowed(f): return (f != '__pycache__' and not f.endswith(self.skipped_extensions)) return set([f for f in os.listdir(resource.path) if allowed(f)]) def is_container(self, resource): return self._is_directory(resource.path) _is_directory = staticmethod(os.path.isdir) def iterator(self, resource_name): resource = self.find(resource_name) if resource is not None: todo = [resource] while todo: resource = todo.pop(0) yield resource if resource.is_container: rname = resource.name for name in resource.resources: if not rname: new_name = name else: new_name = '/'.join([rname, name]) child = self.find(new_name) if child.is_container: todo.append(child) else: yield child class ZipResourceFinder(ResourceFinder): """ Resource finder for resources in .zip files. 
""" def __init__(self, module): super(ZipResourceFinder, self).__init__(module) archive = self.loader.archive self.prefix_len = 1 + len(archive) # PyPy doesn't have a _files attr on zipimporter, and you can't set one if hasattr(self.loader, '_files'): self._files = self.loader._files else: self._files = zipimport._zip_directory_cache[archive] self.index = sorted(self._files) def _adjust_path(self, path): return path def _find(self, path): path = path[self.prefix_len:] if path in self._files: result = True else: if path and path[-1] != os.sep: path = path + os.sep i = bisect.bisect(self.index, path) try: result = self.index[i].startswith(path) except IndexError: result = False if not result: logger.debug('_find failed: %r %r', path, self.loader.prefix) else: logger.debug('_find worked: %r %r', path, self.loader.prefix) return result def get_cache_info(self, resource): prefix = self.loader.archive path = resource.path[1 + len(prefix):] return prefix, path def get_bytes(self, resource): return self.loader.get_data(resource.path) def get_stream(self, resource): return io.BytesIO(self.get_bytes(resource)) def get_size(self, resource): path = resource.path[self.prefix_len:] return self._files[path][3] def get_resources(self, resource): path = resource.path[self.prefix_len:] if path and path[-1] != os.sep: path += os.sep plen = len(path) result = set() i = bisect.bisect(self.index, path) while i < len(self.index): if not self.index[i].startswith(path): break s = self.index[i][plen:] result.add(s.split(os.sep, 1)[0]) # only immediate children i += 1 return result def _is_directory(self, path): path = path[self.prefix_len:] if path and path[-1] != os.sep: path += os.sep i = bisect.bisect(self.index, path) try: result = self.index[i].startswith(path) except IndexError: result = False return result _finder_registry = { type(None): ResourceFinder, zipimport.zipimporter: ZipResourceFinder } try: # In Python 3.6, _frozen_importlib -> _frozen_importlib_external try: import _frozen_importlib_external as _fi except ImportError: import _frozen_importlib as _fi _finder_registry[_fi.SourceFileLoader] = ResourceFinder _finder_registry[_fi.FileFinder] = ResourceFinder del _fi except (ImportError, AttributeError): pass def register_finder(loader, finder_maker): _finder_registry[type(loader)] = finder_maker _finder_cache = {} def finder(package): """ Return a resource finder for a package. :param package: The name of the package. :return: A :class:`ResourceFinder` instance for the package. """ if package in _finder_cache: result = _finder_cache[package] else: if package not in sys.modules: __import__(package) module = sys.modules[package] path = getattr(module, '__path__', None) if path is None: raise DistlibException('You cannot get a finder for a module, ' 'only for a package') loader = getattr(module, '__loader__', None) finder_maker = _finder_registry.get(type(loader)) if finder_maker is None: raise DistlibException('Unable to locate finder for %r' % package) result = finder_maker(module) _finder_cache[package] = result return result _dummy_module = types.ModuleType(str('__dummy__')) def finder_for_path(path): """ Return a resource finder for a path, which should represent a container. :param path: The path. :return: A :class:`ResourceFinder` instance for the path. 
""" result = None # calls any path hooks, gets importer into cache pkgutil.get_importer(path) loader = sys.path_importer_cache.get(path) finder = _finder_registry.get(type(loader)) if finder: module = _dummy_module module.__file__ = os.path.join(path, '') module.__loader__ = loader result = finder(module) return result ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/scripts.py�����������������������0000644�0001750�0001750�00000037642�00000000000�030174� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # # Copyright (C) 2013-2015 Vinay Sajip. # Licensed to the Python Software Foundation under a contributor agreement. # See LICENSE.txt and CONTRIBUTORS.txt. # from io import BytesIO import logging import os import re import struct import sys from .compat import sysconfig, detect_encoding, ZipFile from .resources import finder from .util import (FileOperator, get_export_entry, convert_path, get_executable, in_venv) logger = logging.getLogger(__name__) _DEFAULT_MANIFEST = ''' <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="1.0.0.0" processorArchitecture="X86" name="%s" type="win32"/> <!-- Identify the application security requirements. 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/scripts.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2015 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from io import BytesIO
import logging
import os
import re
import struct
import sys

from .compat import sysconfig, detect_encoding, ZipFile
from .resources import finder
from .util import (FileOperator, get_export_entry, convert_path,
                   get_executable, in_venv)

logger = logging.getLogger(__name__)

_DEFAULT_MANIFEST = '''
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
 manifestVersion="1.0">
 <assemblyIdentity version="1.0.0.0"
 processorArchitecture="X86"
 name="%s"
 type="win32"/>

 <!-- Identify the application security requirements. -->
 <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
  <security>
   <requestedPrivileges>
    <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
   </requestedPrivileges>
  </security>
 </trustInfo>
</assembly>'''.strip()

# check if Python is called on the first line with this expression
FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$')

SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*-
import re
import sys
from %(module)s import %(import_name)s
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(%(func)s())
'''


def _enquote_executable(executable):
    if ' ' in executable:
        # make sure we quote only the executable in case of env
        # for example /usr/bin/env "/dir with spaces/bin/jython"
        # instead of "/usr/bin/env /dir with spaces/bin/jython"
        # otherwise the whole command line would be quoted
        if executable.startswith('/usr/bin/env '):
            env, _executable = executable.split(' ', 1)
            if ' ' in _executable and not _executable.startswith('"'):
                executable = '%s "%s"' % (env, _executable)
        else:
            if not executable.startswith('"'):
                executable = '"%s"' % executable
    return executable
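
# Illustrative sketch (not part of distlib): what _enquote_executable() does
# to interpreter paths that contain spaces.
def _example_enquote():  # pragma: no cover
    assert (_enquote_executable('/usr/bin/env /dir with spaces/bin/jython') ==
            '/usr/bin/env "/dir with spaces/bin/jython"')
    assert _enquote_executable('/usr/bin/python') == '/usr/bin/python'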

class ScriptMaker(object):
    """
    A class to copy or create scripts from source scripts or callable
    specifications.
    """
    script_template = SCRIPT_TEMPLATE

    executable = None  # for shebangs

    def __init__(self, source_dir, target_dir, add_launchers=True,
                 dry_run=False, fileop=None):
        self.source_dir = source_dir
        self.target_dir = target_dir
        self.add_launchers = add_launchers
        self.force = False
        self.clobber = False
        # It only makes sense to set mode bits on POSIX.
        self.set_mode = (os.name == 'posix') or (os.name == 'java' and
                                                 os._name == 'posix')
        self.variants = set(('', 'X.Y'))
        self._fileop = fileop or FileOperator(dry_run)

        self._is_nt = os.name == 'nt' or (
            os.name == 'java' and os._name == 'nt')

    def _get_alternate_executable(self, executable, options):
        if options.get('gui', False) and self._is_nt:  # pragma: no cover
            dn, fn = os.path.split(executable)
            fn = fn.replace('python', 'pythonw')
            executable = os.path.join(dn, fn)
        return executable

    if sys.platform.startswith('java'):  # pragma: no cover
        def _is_shell(self, executable):
            """
            Determine if the specified executable is a script
            (contains a #! line)
            """
            try:
                with open(executable) as fp:
                    return fp.read(2) == '#!'
            except (OSError, IOError):
                logger.warning('Failed to open %s', executable)
                return False

        def _fix_jython_executable(self, executable):
            if self._is_shell(executable):
                # Workaround for Jython is not needed on Linux systems.
                import java

                if java.lang.System.getProperty('os.name') == 'Linux':
                    return executable
            elif executable.lower().endswith('jython.exe'):
                # Use wrapper exe for Jython on Windows
                return executable
            return '/usr/bin/env %s' % executable

    def _build_shebang(self, executable, post_interp):
        """
        Build a shebang line. In the simple case (on Windows, or a shebang
        line which is not too long or contains spaces) use a simple
        formulation for the shebang. Otherwise, use /bin/sh as the executable,
        with a contrived shebang which allows the script to run either under
        Python or sh, using suitable quoting. Thanks to Harald Nordgren for
        his input.

        See also: http://www.in-ulm.de/~mascheck/various/shebang/#length
                  https://hg.mozilla.org/mozilla-central/file/tip/mach
        """
        if os.name != 'posix':
            simple_shebang = True
        else:
            # Add 3 for '#!' prefix and newline suffix.
            shebang_length = len(executable) + len(post_interp) + 3
            if sys.platform == 'darwin':
                max_shebang_length = 512
            else:
                max_shebang_length = 127
            simple_shebang = ((b' ' not in executable) and
                              (shebang_length <= max_shebang_length))

        if simple_shebang:
            result = b'#!' + executable + post_interp + b'\n'
        else:
            result = b'#!/bin/sh\n'
            result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n'
            result += b"' '''"
        return result
if encoding != 'utf-8': try: shebang.decode(encoding) except UnicodeDecodeError: # pragma: no cover raise ValueError( 'The shebang (%r) is not decodable ' 'from the script encoding (%r)' % (shebang, encoding)) return shebang def _get_script_text(self, entry): return self.script_template % dict(module=entry.prefix, import_name=entry.suffix.split('.')[0], func=entry.suffix) manifest = _DEFAULT_MANIFEST def get_manifest(self, exename): base = os.path.basename(exename) return self.manifest % base def _write_script(self, names, shebang, script_bytes, filenames, ext): use_launcher = self.add_launchers and self._is_nt linesep = os.linesep.encode('utf-8') if not shebang.endswith(linesep): shebang += linesep if not use_launcher: script_bytes = shebang + script_bytes else: # pragma: no cover if ext == 'py': launcher = self._get_launcher('t') else: launcher = self._get_launcher('w') stream = BytesIO() with ZipFile(stream, 'w') as zf: zf.writestr('__main__.py', script_bytes) zip_data = stream.getvalue() script_bytes = launcher + shebang + zip_data for name in names: outname = os.path.join(self.target_dir, name) if use_launcher: # pragma: no cover n, e = os.path.splitext(outname) if e.startswith('.py'): outname = n outname = '%s.exe' % outname try: self._fileop.write_binary_file(outname, script_bytes) except Exception: # Failed writing an executable - it might be in use. logger.warning('Failed to write executable - trying to ' 'use .deleteme logic') dfname = '%s.deleteme' % outname if os.path.exists(dfname): os.remove(dfname) # Not allowed to fail here os.rename(outname, dfname) # nor here self._fileop.write_binary_file(outname, script_bytes) logger.debug('Able to replace executable using ' '.deleteme logic') try: os.remove(dfname) except Exception: pass # still in use - ignore error else: if self._is_nt and not outname.endswith('.' + ext): # pragma: no cover outname = '%s.%s' % (outname, ext) if os.path.exists(outname) and not self.clobber: logger.warning('Skipping existing file %s', outname) continue self._fileop.write_binary_file(outname, script_bytes) if self.set_mode: self._fileop.set_executable_mode([outname]) filenames.append(outname) def _make_script(self, entry, filenames, options=None): post_interp = b'' if options: args = options.get('interpreter_args', []) if args: args = ' %s' % ' '.join(args) post_interp = args.encode('utf-8') shebang = self._get_shebang('utf-8', post_interp, options=options) script = self._get_script_text(entry).encode('utf-8') name = entry.name scriptnames = set() if '' in self.variants: scriptnames.add(name) if 'X' in self.variants: scriptnames.add('%s%s' % (name, sys.version[0])) if 'X.Y' in self.variants: scriptnames.add('%s-%s' % (name, sys.version[:3])) if options and options.get('gui', False): ext = 'pyw' else: ext = 'py' self._write_script(scriptnames, shebang, script, filenames, ext) def _copy_script(self, script, filenames): adjust = False script = os.path.join(self.source_dir, convert_path(script)) outname = os.path.join(self.target_dir, os.path.basename(script)) if not self.force and not self._fileop.newer(script, outname): logger.debug('not copying %s (up-to-date)', script) return # Always open the file, but ignore failures in dry-run mode -- # that way, we'll get accurate feedback if we can read the # script. 
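# Editor's sketch (hypothetical helper): how the `variants` set consulted in
# _make_script above fans one entry-point name out into several script
# names. The example values assume a CPython 3.7 interpreter.
import sys

def _demo_script_names(name, variants=('', 'X', 'X.Y')):
    names = set()
    if '' in variants:
        names.add(name)                                # e.g. 'pip'
    if 'X' in variants:
        names.add('%s%s' % (name, sys.version[0]))     # e.g. 'pip3'
    if 'X.Y' in variants:
        names.add('%s-%s' % (name, sys.version[:3]))   # e.g. 'pip-3.7'
    return names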
        try:
            f = open(script, 'rb')
        except IOError:  # pragma: no cover
            if not self.dry_run:
                raise
            f = None
        else:
            first_line = f.readline()
            if not first_line:  # pragma: no cover
                logger.warning('%s: %s is an empty file (skipping)',
                               self.get_command_name(), script)
                return
            match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n'))
            if match:
                adjust = True
                post_interp = match.group(1) or b''
        if not adjust:
            if f:
                f.close()
            self._fileop.copy_file(script, outname)
            if self.set_mode:
                self._fileop.set_executable_mode([outname])
            filenames.append(outname)
        else:
            logger.info('copying and adjusting %s -> %s', script,
                        self.target_dir)
            if not self._fileop.dry_run:
                encoding, lines = detect_encoding(f.readline)
                f.seek(0)
                shebang = self._get_shebang(encoding, post_interp)
                if b'pythonw' in first_line:  # pragma: no cover
                    ext = 'pyw'
                else:
                    ext = 'py'
                n = os.path.basename(outname)
                self._write_script([n], shebang, f.read(), filenames, ext)
            if f:
                f.close()

    @property
    def dry_run(self):
        return self._fileop.dry_run

    @dry_run.setter
    def dry_run(self, value):
        self._fileop.dry_run = value

    if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'):  # pragma: no cover
        # Executable launcher support.
        # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/

        def _get_launcher(self, kind):
            if struct.calcsize('P') == 8:   # 64-bit
                bits = '64'
            else:
                bits = '32'
            name = '%s%s.exe' % (kind, bits)
            # Issue 31: don't hardcode an absolute package name, but
            # determine it relative to the current package
            distlib_package = __name__.rsplit('.', 1)[0]
            result = finder(distlib_package).find(name).bytes
            return result

    # Public API follows

    def make(self, specification, options=None):
        """
        Make a script.

        :param specification: The specification, which is either a valid export
                              entry specification (to make a script from a
                              callable) or a filename (to make a script by
                              copying from a source location).
        :param options: A dictionary of options controlling script generation.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        entry = get_export_entry(specification)
        if entry is None:
            self._copy_script(specification, filenames)
        else:
            self._make_script(entry, filenames, options=options)
        return filenames

    def make_multiple(self, specifications, options=None):
        """
        Take a list of specifications and make scripts from them.

        :param specifications: A list of specifications.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        for specification in specifications:
            filenames.extend(self.make(specification, options))
        return filenames


# ---------------------------------------------------------------------------
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/util.py
# ---------------------------------------------------------------------------

#
# Copyright (C) 2012-2017 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
import codecs
from collections import deque
import contextlib
import csv
from glob import iglob as std_iglob
import io
import json
import logging
import os
import py_compile
import re
import socket
try:
    import ssl
except ImportError:  # pragma: no cover
    ssl = None
import subprocess
import sys
import tarfile
import tempfile
import textwrap
try:
    import threading
except ImportError:  # pragma: no cover
    import dummy_threading as threading
import time

from . import DistlibException
from .compat import (string_types, text_type, shutil, raw_input, StringIO,
                     cache_from_source, urlopen, urljoin, httplib, xmlrpclib,
                     splittype, HTTPHandler, BaseConfigurator, valid_ident,
                     Container, configparser, URLError, ZipFile, fsdecode,
                     unquote, urlparse)

logger = logging.getLogger(__name__)

#
# Requirement parsing code as per PEP 508
#
IDENTIFIER = re.compile(r'^([\w\.-]+)\s*')
VERSION_IDENTIFIER = re.compile(r'^([\w\.*+-]+)\s*')
COMPARE_OP = re.compile(r'^(<=?|>=?|={2,3}|[~!]=)\s*')
MARKER_OP = re.compile(r'^((<=?)|(>=?)|={2,3}|[~!]=|in|not\s+in)\s*')
OR = re.compile(r'^or\b\s*')
AND = re.compile(r'^and\b\s*')
NON_SPACE = re.compile(r'(\S+)\s*')
STRING_CHUNK = re.compile(r'([\s\w\.{}()*+#:;,/?!~`@$%^&=|<>\[\]-]+)')


def parse_marker(marker_string):
    """
    Parse a marker string and return a dictionary containing a marker
    expression. The dictionary will contain keys "op", "lhs" and "rhs" for
    non-terminals in the expression grammar, or strings. A string contained
    in quotes is to be interpreted as a literal string, and a string not
    contained in quotes is a variable (such as os_name).
""" def marker_var(remaining): # either identifier, or literal string m = IDENTIFIER.match(remaining) if m: result = m.groups()[0] remaining = remaining[m.end():] elif not remaining: raise SyntaxError('unexpected end of input') else: q = remaining[0] if q not in '\'"': raise SyntaxError('invalid expression: %s' % remaining) oq = '\'"'.replace(q, '') remaining = remaining[1:] parts = [q] while remaining: # either a string chunk, or oq, or q to terminate if remaining[0] == q: break elif remaining[0] == oq: parts.append(oq) remaining = remaining[1:] else: m = STRING_CHUNK.match(remaining) if not m: raise SyntaxError('error in string literal: %s' % remaining) parts.append(m.groups()[0]) remaining = remaining[m.end():] else: s = ''.join(parts) raise SyntaxError('unterminated string: %s' % s) parts.append(q) result = ''.join(parts) remaining = remaining[1:].lstrip() # skip past closing quote return result, remaining def marker_expr(remaining): if remaining and remaining[0] == '(': result, remaining = marker(remaining[1:].lstrip()) if remaining[0] != ')': raise SyntaxError('unterminated parenthesis: %s' % remaining) remaining = remaining[1:].lstrip() else: lhs, remaining = marker_var(remaining) while remaining: m = MARKER_OP.match(remaining) if not m: break op = m.groups()[0] remaining = remaining[m.end():] rhs, remaining = marker_var(remaining) lhs = {'op': op, 'lhs': lhs, 'rhs': rhs} result = lhs return result, remaining def marker_and(remaining): lhs, remaining = marker_expr(remaining) while remaining: m = AND.match(remaining) if not m: break remaining = remaining[m.end():] rhs, remaining = marker_expr(remaining) lhs = {'op': 'and', 'lhs': lhs, 'rhs': rhs} return lhs, remaining def marker(remaining): lhs, remaining = marker_and(remaining) while remaining: m = OR.match(remaining) if not m: break remaining = remaining[m.end():] rhs, remaining = marker_and(remaining) lhs = {'op': 'or', 'lhs': lhs, 'rhs': rhs} return lhs, remaining return marker(marker_string) def parse_requirement(req): """ Parse a requirement passed in as a string. Return a Container whose attributes contain the various parts of the requirement. """ remaining = req.strip() if not remaining or remaining.startswith('#'): return None m = IDENTIFIER.match(remaining) if not m: raise SyntaxError('name expected: %s' % remaining) distname = m.groups()[0] remaining = remaining[m.end():] extras = mark_expr = versions = uri = None if remaining and remaining[0] == '[': i = remaining.find(']', 1) if i < 0: raise SyntaxError('unterminated extra: %s' % remaining) s = remaining[1:i] remaining = remaining[i + 1:].lstrip() extras = [] while s: m = IDENTIFIER.match(s) if not m: raise SyntaxError('malformed extra: %s' % s) extras.append(m.groups()[0]) s = s[m.end():] if not s: break if s[0] != ',': raise SyntaxError('comma expected in extras: %s' % s) s = s[1:].lstrip() if not extras: extras = None if remaining: if remaining[0] == '@': # it's a URI remaining = remaining[1:].lstrip() m = NON_SPACE.match(remaining) if not m: raise SyntaxError('invalid URI: %s' % remaining) uri = m.groups()[0] t = urlparse(uri) # there are issues with Python and URL parsing, so this test # is a bit crude. See bpo-20271, bpo-23505. Python doesn't # always parse invalid URLs correctly - it should raise # exceptions for malformed URLs if not (t.scheme and t.netloc): raise SyntaxError('Invalid URL: %s' % uri) remaining = remaining[m.end():].lstrip() else: def get_versions(ver_remaining): """ Return a list of operator, version tuples if any are specified, else None. 
""" m = COMPARE_OP.match(ver_remaining) versions = None if m: versions = [] while True: op = m.groups()[0] ver_remaining = ver_remaining[m.end():] m = VERSION_IDENTIFIER.match(ver_remaining) if not m: raise SyntaxError('invalid version: %s' % ver_remaining) v = m.groups()[0] versions.append((op, v)) ver_remaining = ver_remaining[m.end():] if not ver_remaining or ver_remaining[0] != ',': break ver_remaining = ver_remaining[1:].lstrip() m = COMPARE_OP.match(ver_remaining) if not m: raise SyntaxError('invalid constraint: %s' % ver_remaining) if not versions: versions = None return versions, ver_remaining if remaining[0] != '(': versions, remaining = get_versions(remaining) else: i = remaining.find(')', 1) if i < 0: raise SyntaxError('unterminated parenthesis: %s' % remaining) s = remaining[1:i] remaining = remaining[i + 1:].lstrip() # As a special diversion from PEP 508, allow a version number # a.b.c in parentheses as a synonym for ~= a.b.c (because this # is allowed in earlier PEPs) if COMPARE_OP.match(s): versions, _ = get_versions(s) else: m = VERSION_IDENTIFIER.match(s) if not m: raise SyntaxError('invalid constraint: %s' % s) v = m.groups()[0] s = s[m.end():].lstrip() if s: raise SyntaxError('invalid constraint: %s' % s) versions = [('~=', v)] if remaining: if remaining[0] != ';': raise SyntaxError('invalid requirement: %s' % remaining) remaining = remaining[1:].lstrip() mark_expr, remaining = parse_marker(remaining) if remaining and remaining[0] != '#': raise SyntaxError('unexpected trailing data: %s' % remaining) if not versions: rs = distname else: rs = '%s %s' % (distname, ', '.join(['%s %s' % con for con in versions])) return Container(name=distname, extras=extras, constraints=versions, marker=mark_expr, url=uri, requirement=rs) def get_resources_dests(resources_root, rules): """Find destinations for resources files""" def get_rel_path(root, path): # normalizes and returns a lstripped-/-separated path root = root.replace(os.path.sep, '/') path = path.replace(os.path.sep, '/') assert path.startswith(root) return path[len(root):].lstrip('/') destinations = {} for base, suffix, dest in rules: prefix = os.path.join(resources_root, base) for abs_base in iglob(prefix): abs_glob = os.path.join(abs_base, suffix) for abs_path in iglob(abs_glob): resource_file = get_rel_path(resources_root, abs_path) if dest is None: # remove the entry if it was here destinations.pop(resource_file, None) else: rel_path = get_rel_path(abs_base, abs_path) rel_dest = dest.replace(os.path.sep, '/').rstrip('/') destinations[resource_file] = rel_dest + '/' + rel_path return destinations def in_venv(): if hasattr(sys, 'real_prefix'): # virtualenv venvs result = True else: # PEP 405 venvs result = sys.prefix != getattr(sys, 'base_prefix', sys.prefix) return result def get_executable(): # The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as # changes to the stub launcher mean that sys.executable always points # to the stub on OS X # if sys.platform == 'darwin' and ('__PYVENV_LAUNCHER__' # in os.environ): # result = os.environ['__PYVENV_LAUNCHER__'] # else: # result = sys.executable # return result result = os.path.normcase(sys.executable) if not isinstance(result, text_type): result = fsdecode(result) return result def proceed(prompt, allowed_chars, error_prompt=None, default=None): p = prompt while True: s = raw_input(p) p = prompt if not s and default: s = default if s: c = s[0].lower() if c in allowed_chars: break if error_prompt: p = '%c: %s\n%s' % (c, error_prompt, prompt) return c def 
extract_by_key(d, keys): if isinstance(keys, string_types): keys = keys.split() result = {} for key in keys: if key in d: result[key] = d[key] return result def read_exports(stream): if sys.version_info[0] >= 3: # needs to be a text stream stream = codecs.getreader('utf-8')(stream) # Try to load as JSON, falling back on legacy format data = stream.read() stream = StringIO(data) try: jdata = json.load(stream) result = jdata['extensions']['python.exports']['exports'] for group, entries in result.items(): for k, v in entries.items(): s = '%s = %s' % (k, v) entry = get_export_entry(s) assert entry is not None entries[k] = entry return result except Exception: stream.seek(0, 0) def read_stream(cp, stream): if hasattr(cp, 'read_file'): cp.read_file(stream) else: cp.readfp(stream) cp = configparser.ConfigParser() try: read_stream(cp, stream) except configparser.MissingSectionHeaderError: stream.close() data = textwrap.dedent(data) stream = StringIO(data) read_stream(cp, stream) result = {} for key in cp.sections(): result[key] = entries = {} for name, value in cp.items(key): s = '%s = %s' % (name, value) entry = get_export_entry(s) assert entry is not None #entry.dist = self entries[name] = entry return result def write_exports(exports, stream): if sys.version_info[0] >= 3: # needs to be a text stream stream = codecs.getwriter('utf-8')(stream) cp = configparser.ConfigParser() for k, v in exports.items(): # TODO check k, v for valid values cp.add_section(k) for entry in v.values(): if entry.suffix is None: s = entry.prefix else: s = '%s:%s' % (entry.prefix, entry.suffix) if entry.flags: s = '%s [%s]' % (s, ', '.join(entry.flags)) cp.set(k, entry.name, s) cp.write(stream) @contextlib.contextmanager def tempdir(): td = tempfile.mkdtemp() try: yield td finally: shutil.rmtree(td) @contextlib.contextmanager def chdir(d): cwd = os.getcwd() try: os.chdir(d) yield finally: os.chdir(cwd) @contextlib.contextmanager def socket_timeout(seconds=15): cto = socket.getdefaulttimeout() try: socket.setdefaulttimeout(seconds) yield finally: socket.setdefaulttimeout(cto) class cached_property(object): def __init__(self, func): self.func = func #for attr in ('__name__', '__module__', '__doc__'): # setattr(self, attr, getattr(func, attr, None)) def __get__(self, obj, cls=None): if obj is None: return self value = self.func(obj) object.__setattr__(obj, self.func.__name__, value) #obj.__dict__[self.func.__name__] = value = self.func(obj) return value def convert_path(pathname): """Return 'pathname' as a name that will work on the native filesystem. The path is split on '/' and put back together again using the current directory separator. Needed because filenames in the setup script are always supplied in Unix style, and have to be converted to the local convention before we can actually use them in the filesystem. Raises ValueError on non-Unix-ish systems if 'pathname' either starts or ends with a slash. 
""" if os.sep == '/': return pathname if not pathname: return pathname if pathname[0] == '/': raise ValueError("path '%s' cannot be absolute" % pathname) if pathname[-1] == '/': raise ValueError("path '%s' cannot end with '/'" % pathname) paths = pathname.split('/') while os.curdir in paths: paths.remove(os.curdir) if not paths: return os.curdir return os.path.join(*paths) class FileOperator(object): def __init__(self, dry_run=False): self.dry_run = dry_run self.ensured = set() self._init_record() def _init_record(self): self.record = False self.files_written = set() self.dirs_created = set() def record_as_written(self, path): if self.record: self.files_written.add(path) def newer(self, source, target): """Tell if the target is newer than the source. Returns true if 'source' exists and is more recently modified than 'target', or if 'source' exists and 'target' doesn't. Returns false if both exist and 'target' is the same age or younger than 'source'. Raise PackagingFileError if 'source' does not exist. Note that this test is not very accurate: files created in the same second will have the same "age". """ if not os.path.exists(source): raise DistlibException("file '%r' does not exist" % os.path.abspath(source)) if not os.path.exists(target): return True return os.stat(source).st_mtime > os.stat(target).st_mtime def copy_file(self, infile, outfile, check=True): """Copy a file respecting dry-run and force flags. """ self.ensure_dir(os.path.dirname(outfile)) logger.info('Copying %s to %s', infile, outfile) if not self.dry_run: msg = None if check: if os.path.islink(outfile): msg = '%s is a symlink' % outfile elif os.path.exists(outfile) and not os.path.isfile(outfile): msg = '%s is a non-regular file' % outfile if msg: raise ValueError(msg + ' which would be overwritten') shutil.copyfile(infile, outfile) self.record_as_written(outfile) def copy_stream(self, instream, outfile, encoding=None): assert not os.path.isdir(outfile) self.ensure_dir(os.path.dirname(outfile)) logger.info('Copying stream %s to %s', instream, outfile) if not self.dry_run: if encoding is None: outstream = open(outfile, 'wb') else: outstream = codecs.open(outfile, 'w', encoding=encoding) try: shutil.copyfileobj(instream, outstream) finally: outstream.close() self.record_as_written(outfile) def write_binary_file(self, path, data): self.ensure_dir(os.path.dirname(path)) if not self.dry_run: if os.path.exists(path): os.remove(path) with open(path, 'wb') as f: f.write(data) self.record_as_written(path) def write_text_file(self, path, data, encoding): self.write_binary_file(path, data.encode(encoding)) def set_mode(self, bits, mask, files): if os.name == 'posix' or (os.name == 'java' and os._name == 'posix'): # Set the executable bits (owner, group, and world) on # all the files specified. 
for f in files: if self.dry_run: logger.info("changing mode of %s", f) else: mode = (os.stat(f).st_mode | bits) & mask logger.info("changing mode of %s to %o", f, mode) os.chmod(f, mode) set_executable_mode = lambda s, f: s.set_mode(0o555, 0o7777, f) def ensure_dir(self, path): path = os.path.abspath(path) if path not in self.ensured and not os.path.exists(path): self.ensured.add(path) d, f = os.path.split(path) self.ensure_dir(d) logger.info('Creating %s' % path) if not self.dry_run: os.mkdir(path) if self.record: self.dirs_created.add(path) def byte_compile(self, path, optimize=False, force=False, prefix=None, hashed_invalidation=False): dpath = cache_from_source(path, not optimize) logger.info('Byte-compiling %s to %s', path, dpath) if not self.dry_run: if force or self.newer(path, dpath): if not prefix: diagpath = None else: assert path.startswith(prefix) diagpath = path[len(prefix):] compile_kwargs = {} if hashed_invalidation and hasattr(py_compile, 'PycInvalidationMode'): compile_kwargs['invalidation_mode'] = py_compile.PycInvalidationMode.CHECKED_HASH py_compile.compile(path, dpath, diagpath, True, **compile_kwargs) # raise error self.record_as_written(dpath) return dpath def ensure_removed(self, path): if os.path.exists(path): if os.path.isdir(path) and not os.path.islink(path): logger.debug('Removing directory tree at %s', path) if not self.dry_run: shutil.rmtree(path) if self.record: if path in self.dirs_created: self.dirs_created.remove(path) else: if os.path.islink(path): s = 'link' else: s = 'file' logger.debug('Removing %s %s', s, path) if not self.dry_run: os.remove(path) if self.record: if path in self.files_written: self.files_written.remove(path) def is_writable(self, path): result = False while not result: if os.path.exists(path): result = os.access(path, os.W_OK) break parent = os.path.dirname(path) if parent == path: break path = parent return result def commit(self): """ Commit recorded changes, turn off recording, return changes. 
""" assert self.record result = self.files_written, self.dirs_created self._init_record() return result def rollback(self): if not self.dry_run: for f in list(self.files_written): if os.path.exists(f): os.remove(f) # dirs should all be empty now, except perhaps for # __pycache__ subdirs # reverse so that subdirs appear before their parents dirs = sorted(self.dirs_created, reverse=True) for d in dirs: flist = os.listdir(d) if flist: assert flist == ['__pycache__'] sd = os.path.join(d, flist[0]) os.rmdir(sd) os.rmdir(d) # should fail if non-empty self._init_record() def resolve(module_name, dotted_path): if module_name in sys.modules: mod = sys.modules[module_name] else: mod = __import__(module_name) if dotted_path is None: result = mod else: parts = dotted_path.split('.') result = getattr(mod, parts.pop(0)) for p in parts: result = getattr(result, p) return result class ExportEntry(object): def __init__(self, name, prefix, suffix, flags): self.name = name self.prefix = prefix self.suffix = suffix self.flags = flags @cached_property def value(self): return resolve(self.prefix, self.suffix) def __repr__(self): # pragma: no cover return '<ExportEntry %s = %s:%s %s>' % (self.name, self.prefix, self.suffix, self.flags) def __eq__(self, other): if not isinstance(other, ExportEntry): result = False else: result = (self.name == other.name and self.prefix == other.prefix and self.suffix == other.suffix and self.flags == other.flags) return result __hash__ = object.__hash__ ENTRY_RE = re.compile(r'''(?P<name>(\w|[-.+])+) \s*=\s*(?P<callable>(\w+)([:\.]\w+)*) \s*(\[\s*(?P<flags>\w+(=\w+)?(,\s*\w+(=\w+)?)*)\s*\])? ''', re.VERBOSE) def get_export_entry(specification): m = ENTRY_RE.search(specification) if not m: result = None if '[' in specification or ']' in specification: raise DistlibException("Invalid specification " "'%s'" % specification) else: d = m.groupdict() name = d['name'] path = d['callable'] colons = path.count(':') if colons == 0: prefix, suffix = path, None else: if colons != 1: raise DistlibException("Invalid specification " "'%s'" % specification) prefix, suffix = path.split(':') flags = d['flags'] if flags is None: if '[' in specification or ']' in specification: raise DistlibException("Invalid specification " "'%s'" % specification) flags = [] else: flags = [f.strip() for f in flags.split(',')] result = ExportEntry(name, prefix, suffix, flags) return result def get_cache_base(suffix=None): """ Return the default base location for distlib caches. If the directory does not exist, it is created. Use the suffix provided for the base directory, and default to '.distlib' if it isn't provided. On Windows, if LOCALAPPDATA is defined in the environment, then it is assumed to be a directory, and will be the parent directory of the result. On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home directory - using os.expanduser('~') - will be the parent directory of the result. The result is just the directory '.distlib' in the parent directory as determined above, or with the name specified with ``suffix``. 
""" if suffix is None: suffix = '.distlib' if os.name == 'nt' and 'LOCALAPPDATA' in os.environ: result = os.path.expandvars('$localappdata') else: # Assume posix, or old Windows result = os.path.expanduser('~') # we use 'isdir' instead of 'exists', because we want to # fail if there's a file with that name if os.path.isdir(result): usable = os.access(result, os.W_OK) if not usable: logger.warning('Directory exists but is not writable: %s', result) else: try: os.makedirs(result) usable = True except OSError: logger.warning('Unable to create %s', result, exc_info=True) usable = False if not usable: result = tempfile.mkdtemp() logger.warning('Default location unusable, using %s', result) return os.path.join(result, suffix) def path_to_cache_dir(path): """ Convert an absolute path to a directory name for use in a cache. The algorithm used is: #. On Windows, any ``':'`` in the drive is replaced with ``'---'``. #. Any occurrence of ``os.sep`` is replaced with ``'--'``. #. ``'.cache'`` is appended. """ d, p = os.path.splitdrive(os.path.abspath(path)) if d: d = d.replace(':', '---') p = p.replace(os.sep, '--') return d + p + '.cache' def ensure_slash(s): if not s.endswith('/'): return s + '/' return s def parse_credentials(netloc): username = password = None if '@' in netloc: prefix, netloc = netloc.rsplit('@', 1) if ':' not in prefix: username = prefix else: username, password = prefix.split(':', 1) if username: username = unquote(username) if password: password = unquote(password) return username, password, netloc def get_process_umask(): result = os.umask(0o22) os.umask(result) return result def is_string_sequence(seq): result = True i = None for i, s in enumerate(seq): if not isinstance(s, string_types): result = False break assert i is not None return result PROJECT_NAME_AND_VERSION = re.compile('([a-z0-9_]+([.-][a-z_][a-z0-9_]*)*)-' '([a-z0-9_.+-]+)', re.I) PYTHON_VERSION = re.compile(r'-py(\d\.?\d?)') def split_filename(filename, project_name=None): """ Extract name, version, python version from a filename (no extension) Return name, version, pyver or None """ result = None pyver = None filename = unquote(filename).replace(' ', '-') m = PYTHON_VERSION.search(filename) if m: pyver = m.group(1) filename = filename[:m.start()] if project_name and len(filename) > len(project_name) + 1: m = re.match(re.escape(project_name) + r'\b', filename) if m: n = m.end() result = filename[:n], filename[n + 1:], pyver if result is None: m = PROJECT_NAME_AND_VERSION.match(filename) if m: result = m.group(1), m.group(3), pyver return result # Allow spaces in name because of legacy dists like "Twisted Core" NAME_VERSION_RE = re.compile(r'(?P<name>[\w .-]+)\s*' r'\(\s*(?P<ver>[^\s)]+)\)$') def parse_name_and_version(p): """ A utility method used to get name and version from a string. From e.g. a Provides-Dist value. :param p: A value in a form 'foo (1.0)' :return: The name and version as a tuple. 
""" m = NAME_VERSION_RE.match(p) if not m: raise DistlibException('Ill-formed name/version string: \'%s\'' % p) d = m.groupdict() return d['name'].strip().lower(), d['ver'] def get_extras(requested, available): result = set() requested = set(requested or []) available = set(available or []) if '*' in requested: requested.remove('*') result |= available for r in requested: if r == '-': result.add(r) elif r.startswith('-'): unwanted = r[1:] if unwanted not in available: logger.warning('undeclared extra: %s' % unwanted) if unwanted in result: result.remove(unwanted) else: if r not in available: logger.warning('undeclared extra: %s' % r) result.add(r) return result # # Extended metadata functionality # def _get_external_data(url): result = {} try: # urlopen might fail if it runs into redirections, # because of Python issue #13696. Fixed in locators # using a custom redirect handler. resp = urlopen(url) headers = resp.info() ct = headers.get('Content-Type') if not ct.startswith('application/json'): logger.debug('Unexpected response for JSON request: %s', ct) else: reader = codecs.getreader('utf-8')(resp) #data = reader.read().decode('utf-8') #result = json.loads(data) result = json.load(reader) except Exception as e: logger.exception('Failed to get external data for %s: %s', url, e) return result _external_data_base_url = 'https://www.red-dove.com/pypi/projects/' def get_project_data(name): url = '%s/%s/project.json' % (name[0].upper(), name) url = urljoin(_external_data_base_url, url) result = _get_external_data(url) return result def get_package_data(name, version): url = '%s/%s/package-%s.json' % (name[0].upper(), name, version) url = urljoin(_external_data_base_url, url) return _get_external_data(url) class Cache(object): """ A class implementing a cache for resources that need to live in the file system e.g. shared libraries. This class was moved from resources to here because it could be used by other modules, e.g. the wheel module. """ def __init__(self, base): """ Initialise an instance. :param base: The base directory where the cache should be located. """ # we use 'isdir' instead of 'exists', because we want to # fail if there's a file with that name if not os.path.isdir(base): # pragma: no cover os.makedirs(base) if (os.stat(base).st_mode & 0o77) != 0: logger.warning('Directory \'%s\' is not private', base) self.base = os.path.abspath(os.path.normpath(base)) def prefix_to_dir(self, prefix): """ Converts a resource prefix to a directory name in the cache. """ return path_to_cache_dir(prefix) def clear(self): """ Clear the cache. """ not_removed = [] for fn in os.listdir(self.base): fn = os.path.join(self.base, fn) try: if os.path.islink(fn) or os.path.isfile(fn): os.remove(fn) elif os.path.isdir(fn): shutil.rmtree(fn) except Exception: not_removed.append(fn) return not_removed class EventMixin(object): """ A very simple publish/subscribe system. """ def __init__(self): self._subscribers = {} def add(self, event, subscriber, append=True): """ Add a subscriber for an event. :param event: The name of an event. :param subscriber: The subscriber to be added (and called when the event is published). :param append: Whether to append or prepend the subscriber to an existing subscriber list for the event. """ subs = self._subscribers if event not in subs: subs[event] = deque([subscriber]) else: sq = subs[event] if append: sq.append(subscriber) else: sq.appendleft(subscriber) def remove(self, event, subscriber): """ Remove a subscriber for an event. :param event: The name of an event. 
:param subscriber: The subscriber to be removed. """ subs = self._subscribers if event not in subs: raise ValueError('No subscribers: %r' % event) subs[event].remove(subscriber) def get_subscribers(self, event): """ Return an iterator for the subscribers for an event. :param event: The event to return subscribers for. """ return iter(self._subscribers.get(event, ())) def publish(self, event, *args, **kwargs): """ Publish a event and return a list of values returned by its subscribers. :param event: The event to publish. :param args: The positional arguments to pass to the event's subscribers. :param kwargs: The keyword arguments to pass to the event's subscribers. """ result = [] for subscriber in self.get_subscribers(event): try: value = subscriber(event, *args, **kwargs) except Exception: logger.exception('Exception during event publication') value = None result.append(value) logger.debug('publish %s: args = %s, kwargs = %s, result = %s', event, args, kwargs, result) return result # # Simple sequencing # class Sequencer(object): def __init__(self): self._preds = {} self._succs = {} self._nodes = set() # nodes with no preds/succs def add_node(self, node): self._nodes.add(node) def remove_node(self, node, edges=False): if node in self._nodes: self._nodes.remove(node) if edges: for p in set(self._preds.get(node, ())): self.remove(p, node) for s in set(self._succs.get(node, ())): self.remove(node, s) # Remove empties for k, v in list(self._preds.items()): if not v: del self._preds[k] for k, v in list(self._succs.items()): if not v: del self._succs[k] def add(self, pred, succ): assert pred != succ self._preds.setdefault(succ, set()).add(pred) self._succs.setdefault(pred, set()).add(succ) def remove(self, pred, succ): assert pred != succ try: preds = self._preds[succ] succs = self._succs[pred] except KeyError: # pragma: no cover raise ValueError('%r not a successor of anything' % succ) try: preds.remove(pred) succs.remove(succ) except KeyError: # pragma: no cover raise ValueError('%r not a successor of %r' % (succ, pred)) def is_step(self, step): return (step in self._preds or step in self._succs or step in self._nodes) def get_steps(self, final): if not self.is_step(final): raise ValueError('Unknown: %r' % final) result = [] todo = [] seen = set() todo.append(final) while todo: step = todo.pop(0) if step in seen: # if a step was already seen, # move it to the end (so it will appear earlier # when reversed on return) ... 
but not for the # final step, as that would be confusing for # users if step != final: result.remove(step) result.append(step) else: seen.add(step) result.append(step) preds = self._preds.get(step, ()) todo.extend(preds) return reversed(result) @property def strong_connections(self): #http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm index_counter = [0] stack = [] lowlinks = {} index = {} result = [] graph = self._succs def strongconnect(node): # set the depth index for this node to the smallest unused index index[node] = index_counter[0] lowlinks[node] = index_counter[0] index_counter[0] += 1 stack.append(node) # Consider successors try: successors = graph[node] except Exception: successors = [] for successor in successors: if successor not in lowlinks: # Successor has not yet been visited strongconnect(successor) lowlinks[node] = min(lowlinks[node],lowlinks[successor]) elif successor in stack: # the successor is in the stack and hence in the current # strongly connected component (SCC) lowlinks[node] = min(lowlinks[node],index[successor]) # If `node` is a root node, pop the stack and generate an SCC if lowlinks[node] == index[node]: connected_component = [] while True: successor = stack.pop() connected_component.append(successor) if successor == node: break component = tuple(connected_component) # storing the result result.append(component) for node in graph: if node not in lowlinks: strongconnect(node) return result @property def dot(self): result = ['digraph G {'] for succ in self._preds: preds = self._preds[succ] for pred in preds: result.append(' %s -> %s;' % (pred, succ)) for node in self._nodes: result.append(' %s;' % node) result.append('}') return '\n'.join(result) # # Unarchiving functionality for zip, tar, tgz, tbz, whl # ARCHIVE_EXTENSIONS = ('.tar.gz', '.tar.bz2', '.tar', '.zip', '.tgz', '.tbz', '.whl') def unarchive(archive_filename, dest_dir, format=None, check=True): def check_path(path): if not isinstance(path, text_type): path = path.decode('utf-8') p = os.path.abspath(os.path.join(dest_dir, path)) if not p.startswith(dest_dir) or p[plen] != os.sep: raise ValueError('path outside destination: %r' % p) dest_dir = os.path.abspath(dest_dir) plen = len(dest_dir) archive = None if format is None: if archive_filename.endswith(('.zip', '.whl')): format = 'zip' elif archive_filename.endswith(('.tar.gz', '.tgz')): format = 'tgz' mode = 'r:gz' elif archive_filename.endswith(('.tar.bz2', '.tbz')): format = 'tbz' mode = 'r:bz2' elif archive_filename.endswith('.tar'): format = 'tar' mode = 'r' else: # pragma: no cover raise ValueError('Unknown format for %r' % archive_filename) try: if format == 'zip': archive = ZipFile(archive_filename, 'r') if check: names = archive.namelist() for name in names: check_path(name) else: archive = tarfile.open(archive_filename, mode) if check: names = archive.getnames() for name in names: check_path(name) if format != 'zip' and sys.version_info[0] < 3: # See Python issue 17153. If the dest path contains Unicode, # tarfile extraction fails on Python 2.x if a member path name # contains non-ASCII characters - it leads to an implicit # bytes -> unicode conversion using ASCII to decode. 
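# Editor's sketch (standalone, hypothetical names): the traversal guard that
# check_path above applies to every archive member before extraction, so a
# member name escaping dest_dir via '..' is rejected.
import os

def _demo_is_safe_member(dest_dir, member_name):
    dest_dir = os.path.abspath(dest_dir)
    p = os.path.abspath(os.path.join(dest_dir, member_name))
    return p.startswith(dest_dir) and p[len(dest_dir)] == os.sep

# _demo_is_safe_member('/tmp/out', 'pkg/mod.py')     -> True
# _demo_is_safe_member('/tmp/out', '../etc/passwd')  -> False (rejected)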
for tarinfo in archive.getmembers(): if not isinstance(tarinfo.name, text_type): tarinfo.name = tarinfo.name.decode('utf-8') archive.extractall(dest_dir) finally: if archive: archive.close() def zip_dir(directory): """zip a directory tree into a BytesIO object""" result = io.BytesIO() dlen = len(directory) with ZipFile(result, "w") as zf: for root, dirs, files in os.walk(directory): for name in files: full = os.path.join(root, name) rel = root[dlen:] dest = os.path.join(rel, name) zf.write(full, dest) return result # # Simple progress bar # UNITS = ('', 'K', 'M', 'G','T','P') class Progress(object): unknown = 'UNKNOWN' def __init__(self, minval=0, maxval=100): assert maxval is None or maxval >= minval self.min = self.cur = minval self.max = maxval self.started = None self.elapsed = 0 self.done = False def update(self, curval): assert self.min <= curval assert self.max is None or curval <= self.max self.cur = curval now = time.time() if self.started is None: self.started = now else: self.elapsed = now - self.started def increment(self, incr): assert incr >= 0 self.update(self.cur + incr) def start(self): self.update(self.min) return self def stop(self): if self.max is not None: self.update(self.max) self.done = True @property def maximum(self): return self.unknown if self.max is None else self.max @property def percentage(self): if self.done: result = '100 %' elif self.max is None: result = ' ?? %' else: v = 100.0 * (self.cur - self.min) / (self.max - self.min) result = '%3d %%' % v return result def format_duration(self, duration): if (duration <= 0) and self.max is None or self.cur == self.min: result = '??:??:??' #elif duration < 1: # result = '--:--:--' else: result = time.strftime('%H:%M:%S', time.gmtime(duration)) return result @property def ETA(self): if self.done: prefix = 'Done' t = self.elapsed #import pdb; pdb.set_trace() else: prefix = 'ETA ' if self.max is None: t = -1 elif self.elapsed == 0 or (self.cur == self.min): t = 0 else: #import pdb; pdb.set_trace() t = float(self.max - self.min) t /= self.cur - self.min t = (t - 1) * self.elapsed return '%s: %s' % (prefix, self.format_duration(t)) @property def speed(self): if self.elapsed == 0: result = 0.0 else: result = (self.cur - self.min) / self.elapsed for unit in UNITS: if result < 1000: break result /= 1000.0 return '%d %sB/s' % (result, unit) # # Glob functionality # RICH_GLOB = re.compile(r'\{([^}]*)\}') _CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]') _CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$') def iglob(path_glob): """Extended globbing function that supports ** and {opt1,opt2,opt3}.""" if _CHECK_RECURSIVE_GLOB.search(path_glob): msg = """invalid glob %r: recursive glob "**" must be used alone""" raise ValueError(msg % path_glob) if _CHECK_MISMATCH_SET.search(path_glob): msg = """invalid glob %r: mismatching set marker '{' or '}'""" raise ValueError(msg % path_glob) return _iglob(path_glob) def _iglob(path_glob): rich_path_glob = RICH_GLOB.split(path_glob, 1) if len(rich_path_glob) > 1: assert len(rich_path_glob) == 3, rich_path_glob prefix, set, suffix = rich_path_glob for item in set.split(','): for path in _iglob(''.join((prefix, item, suffix))): yield path else: if '**' not in path_glob: for item in std_iglob(path_glob): yield item else: prefix, radical = path_glob.split('**', 1) if prefix == '': prefix = '.' 
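# Editor's usage sketch for iglob above ('src' is a placeholder tree): the
# extended syntax combines a {a,b} choice set with the recursive '**'.
def _demo_iglob_usage():
    return list(iglob('src/**/*.{py,pyw}'))

# Note that '**' must stand alone between separators; a pattern such as
# 'src/a**.py' makes iglob raise ValueError.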
if radical == '': radical = '*' else: # we support both radical = radical.lstrip('/') radical = radical.lstrip('\\') for path, dir, files in os.walk(prefix): path = os.path.normpath(path) for fn in _iglob(os.path.join(path, radical)): yield fn if ssl: from .compat import (HTTPSHandler as BaseHTTPSHandler, match_hostname, CertificateError) # # HTTPSConnection which verifies certificates/matches domains # class HTTPSConnection(httplib.HTTPSConnection): ca_certs = None # set this to the path to the certs file (.pem) check_domain = True # only used if ca_certs is not None # noinspection PyPropertyAccess def connect(self): sock = socket.create_connection((self.host, self.port), self.timeout) if getattr(self, '_tunnel_host', False): self.sock = sock self._tunnel() if not hasattr(ssl, 'SSLContext'): # For 2.x if self.ca_certs: cert_reqs = ssl.CERT_REQUIRED else: cert_reqs = ssl.CERT_NONE self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, cert_reqs=cert_reqs, ssl_version=ssl.PROTOCOL_SSLv23, ca_certs=self.ca_certs) else: # pragma: no cover context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) context.options |= ssl.OP_NO_SSLv2 if self.cert_file: context.load_cert_chain(self.cert_file, self.key_file) kwargs = {} if self.ca_certs: context.verify_mode = ssl.CERT_REQUIRED context.load_verify_locations(cafile=self.ca_certs) if getattr(ssl, 'HAS_SNI', False): kwargs['server_hostname'] = self.host self.sock = context.wrap_socket(sock, **kwargs) if self.ca_certs and self.check_domain: try: match_hostname(self.sock.getpeercert(), self.host) logger.debug('Host verified: %s', self.host) except CertificateError: # pragma: no cover self.sock.shutdown(socket.SHUT_RDWR) self.sock.close() raise class HTTPSHandler(BaseHTTPSHandler): def __init__(self, ca_certs, check_domain=True): BaseHTTPSHandler.__init__(self) self.ca_certs = ca_certs self.check_domain = check_domain def _conn_maker(self, *args, **kwargs): """ This is called to create a connection instance. Normally you'd pass a connection class to do_open, but it doesn't actually check for a class, and just expects a callable. As long as we behave just as a constructor would have, we should be OK. If it ever changes so that we *must* pass a class, we'll create an UnsafeHTTPSConnection class which just sets check_domain to False in the class definition, and choose which one to pass to do_open. """ result = HTTPSConnection(*args, **kwargs) if self.ca_certs: result.ca_certs = self.ca_certs result.check_domain = self.check_domain return result def https_open(self, req): try: return self.do_open(self._conn_maker, req) except URLError as e: if 'certificate verify failed' in str(e.reason): raise CertificateError('Unable to verify server certificate ' 'for %s' % req.host) else: raise # # To prevent against mixing HTTP traffic with HTTPS (examples: A Man-In-The- # Middle proxy using HTTP listens on port 443, or an index mistakenly serves # HTML containing a http://xyz link when it should be https://xyz), # you can use the following handler class, which does not allow HTTP traffic. # # It works by inheriting from HTTPHandler - so build_opener won't add a # handler for HTTP itself. 
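# Editor's usage sketch (not distlib API; the bundle path is a placeholder):
# the verifying handler above plugs into a standard urllib opener.
def _demo_secure_opener(ca_bundle='/path/to/certs.pem'):
    try:
        from urllib.request import build_opener   # Python 3
    except ImportError:  # pragma: no cover
        from urllib2 import build_opener          # Python 2
    # Server certificates are checked against ca_bundle, and hostnames are
    # matched, because HTTPSHandler passes both settings to HTTPSConnection.
    return build_opener(HTTPSHandler(ca_certs=ca_bundle))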
# class HTTPSOnlyHandler(HTTPSHandler, HTTPHandler): def http_open(self, req): raise URLError('Unexpected HTTP request on what should be a secure ' 'connection: %s' % req) # # XML-RPC with timeouts # _ver_info = sys.version_info[:2] if _ver_info == (2, 6): class HTTP(httplib.HTTP): def __init__(self, host='', port=None, **kwargs): if port == 0: # 0 means use port 0, not the default port port = None self._setup(self._connection_class(host, port, **kwargs)) if ssl: class HTTPS(httplib.HTTPS): def __init__(self, host='', port=None, **kwargs): if port == 0: # 0 means use port 0, not the default port port = None self._setup(self._connection_class(host, port, **kwargs)) class Transport(xmlrpclib.Transport): def __init__(self, timeout, use_datetime=0): self.timeout = timeout xmlrpclib.Transport.__init__(self, use_datetime) def make_connection(self, host): h, eh, x509 = self.get_host_info(host) if _ver_info == (2, 6): result = HTTP(h, timeout=self.timeout) else: if not self._connection or host != self._connection[0]: self._extra_headers = eh self._connection = host, httplib.HTTPConnection(h) result = self._connection[1] return result if ssl: class SafeTransport(xmlrpclib.SafeTransport): def __init__(self, timeout, use_datetime=0): self.timeout = timeout xmlrpclib.SafeTransport.__init__(self, use_datetime) def make_connection(self, host): h, eh, kwargs = self.get_host_info(host) if not kwargs: kwargs = {} kwargs['timeout'] = self.timeout if _ver_info == (2, 6): result = HTTPS(host, None, **kwargs) else: if not self._connection or host != self._connection[0]: self._extra_headers = eh self._connection = host, httplib.HTTPSConnection(h, None, **kwargs) result = self._connection[1] return result class ServerProxy(xmlrpclib.ServerProxy): def __init__(self, uri, **kwargs): self.timeout = timeout = kwargs.pop('timeout', None) # The above classes only come into play if a timeout # is specified if timeout is not None: scheme, _ = splittype(uri) use_datetime = kwargs.get('use_datetime', 0) if scheme == 'https': tcls = SafeTransport else: tcls = Transport kwargs['transport'] = t = tcls(timeout, use_datetime=use_datetime) self.transport = t xmlrpclib.ServerProxy.__init__(self, uri, **kwargs) # # CSV functionality. This is provided because on 2.x, the csv module can't # handle Unicode. However, we need to deal with Unicode in e.g. RECORD files. # def _csv_open(fn, mode, **kwargs): if sys.version_info[0] < 3: mode += 'b' else: kwargs['newline'] = '' # Python 3 determines encoding from locale. 
Force 'utf-8' # file encoding to match other forced utf-8 encoding kwargs['encoding'] = 'utf-8' return open(fn, mode, **kwargs) class CSVBase(object): defaults = { 'delimiter': str(','), # The strs are used because we need native 'quotechar': str('"'), # str in the csv API (2.x won't take 'lineterminator': str('\n') # Unicode) } def __enter__(self): return self def __exit__(self, *exc_info): self.stream.close() class CSVReader(CSVBase): def __init__(self, **kwargs): if 'stream' in kwargs: stream = kwargs['stream'] if sys.version_info[0] >= 3: # needs to be a text stream stream = codecs.getreader('utf-8')(stream) self.stream = stream else: self.stream = _csv_open(kwargs['path'], 'r') self.reader = csv.reader(self.stream, **self.defaults) def __iter__(self): return self def next(self): result = next(self.reader) if sys.version_info[0] < 3: for i, item in enumerate(result): if not isinstance(item, text_type): result[i] = item.decode('utf-8') return result __next__ = next class CSVWriter(CSVBase): def __init__(self, fn, **kwargs): self.stream = _csv_open(fn, 'w') self.writer = csv.writer(self.stream, **self.defaults) def writerow(self, row): if sys.version_info[0] < 3: r = [] for item in row: if isinstance(item, text_type): item = item.encode('utf-8') r.append(item) row = r self.writer.writerow(row) # # Configurator functionality # class Configurator(BaseConfigurator): value_converters = dict(BaseConfigurator.value_converters) value_converters['inc'] = 'inc_convert' def __init__(self, config, base=None): super(Configurator, self).__init__(config) self.base = base or os.getcwd() def configure_custom(self, config): def convert(o): if isinstance(o, (list, tuple)): result = type(o)([convert(i) for i in o]) elif isinstance(o, dict): if '()' in o: result = self.configure_custom(o) else: result = {} for k in o: result[k] = convert(o[k]) else: result = self.convert(o) return result c = config.pop('()') if not callable(c): c = self.resolve(c) props = config.pop('.', None) # Check for valid identifiers args = config.pop('[]', ()) if args: args = tuple([convert(o) for o in args]) items = [(k, convert(config[k])) for k in config if valid_ident(k)] kwargs = dict(items) result = c(*args, **kwargs) if props: for n, v in props.items(): setattr(result, n, convert(v)) return result def __getitem__(self, key): result = self.config[key] if isinstance(result, dict) and '()' in result: self.config[key] = result = self.configure_custom(result) return result def inc_convert(self, value): """Default converter for the inc:// protocol.""" if not os.path.isabs(value): value = os.path.join(self.base, value) with codecs.open(value, 'r', encoding='utf-8') as f: result = json.load(f) return result class SubprocessMixin(object): """ Mixin for running subprocesses and capturing their output """ def __init__(self, verbose=False, progress=None): self.verbose = verbose self.progress = progress def reader(self, stream, context): """ Read lines from a subprocess' output stream and either pass to a progress callable (if specified) or write progress information to sys.stderr. 
""" progress = self.progress verbose = self.verbose while True: s = stream.readline() if not s: break if progress is not None: progress(s, context) else: if not verbose: sys.stderr.write('.') else: sys.stderr.write(s.decode('utf-8')) sys.stderr.flush() stream.close() def run_command(self, cmd, **kwargs): p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs) t1 = threading.Thread(target=self.reader, args=(p.stdout, 'stdout')) t1.start() t2 = threading.Thread(target=self.reader, args=(p.stderr, 'stderr')) t2.start() p.wait() t1.join() t2.join() if self.progress is not None: self.progress('done.', 'main') elif self.verbose: sys.stderr.write('done.\n') return p def normalize_name(name): """Normalize a python package name a la PEP 503""" # https://www.python.org/dev/peps/pep-0503/#normalized-names return re.sub('[-_.]+', '-', name).lower() ������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/version.py�����������������������0000644�0001750�0001750�00000055537�00000000000�030175� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # # Copyright (C) 2012-2017 The Python Software Foundation. # See LICENSE.txt and CONTRIBUTORS.txt. # """ Implementation of a flexible versioning scheme providing support for PEP-440, setuptools-compatible and semantic versioning. 
""" import logging import re from .compat import string_types from .util import parse_requirement __all__ = ['NormalizedVersion', 'NormalizedMatcher', 'LegacyVersion', 'LegacyMatcher', 'SemanticVersion', 'SemanticMatcher', 'UnsupportedVersionError', 'get_scheme'] logger = logging.getLogger(__name__) class UnsupportedVersionError(ValueError): """This is an unsupported version.""" pass class Version(object): def __init__(self, s): self._string = s = s.strip() self._parts = parts = self.parse(s) assert isinstance(parts, tuple) assert len(parts) > 0 def parse(self, s): raise NotImplementedError('please implement in a subclass') def _check_compatible(self, other): if type(self) != type(other): raise TypeError('cannot compare %r and %r' % (self, other)) def __eq__(self, other): self._check_compatible(other) return self._parts == other._parts def __ne__(self, other): return not self.__eq__(other) def __lt__(self, other): self._check_compatible(other) return self._parts < other._parts def __gt__(self, other): return not (self.__lt__(other) or self.__eq__(other)) def __le__(self, other): return self.__lt__(other) or self.__eq__(other) def __ge__(self, other): return self.__gt__(other) or self.__eq__(other) # See http://docs.python.org/reference/datamodel#object.__hash__ def __hash__(self): return hash(self._parts) def __repr__(self): return "%s('%s')" % (self.__class__.__name__, self._string) def __str__(self): return self._string @property def is_prerelease(self): raise NotImplementedError('Please implement in subclasses.') class Matcher(object): version_class = None # value is either a callable or the name of a method _operators = { '<': lambda v, c, p: v < c, '>': lambda v, c, p: v > c, '<=': lambda v, c, p: v == c or v < c, '>=': lambda v, c, p: v == c or v > c, '==': lambda v, c, p: v == c, '===': lambda v, c, p: v == c, # by default, compatible => >=. '~=': lambda v, c, p: v == c or v > c, '!=': lambda v, c, p: v != c, } # this is a method only to support alternative implementations # via overriding def parse_requirement(self, s): return parse_requirement(s) def __init__(self, s): if self.version_class is None: raise ValueError('Please specify a version class') self._string = s = s.strip() r = self.parse_requirement(s) if not r: raise ValueError('Not valid: %r' % s) self.name = r.name self.key = self.name.lower() # for case-insensitive comparisons clist = [] if r.constraints: # import pdb; pdb.set_trace() for op, s in r.constraints: if s.endswith('.*'): if op not in ('==', '!='): raise ValueError('\'.*\' not allowed for ' '%r constraints' % op) # Could be a partial version (e.g. for '2.*') which # won't parse as a version, so keep it as a string vn, prefix = s[:-2], True # Just to check that vn is a valid version self.version_class(vn) else: # Should parse as a version, so we can create an # instance for the comparison vn, prefix = self.version_class(s), False clist.append((op, vn, prefix)) self._parts = tuple(clist) def match(self, version): """ Check if the provided version matches the constraints. :param version: The version to match against this instance. :type version: String or :class:`Version` instance. 
""" if isinstance(version, string_types): version = self.version_class(version) for operator, constraint, prefix in self._parts: f = self._operators.get(operator) if isinstance(f, string_types): f = getattr(self, f) if not f: msg = ('%r not implemented ' 'for %s' % (operator, self.__class__.__name__)) raise NotImplementedError(msg) if not f(version, constraint, prefix): return False return True @property def exact_version(self): result = None if len(self._parts) == 1 and self._parts[0][0] in ('==', '==='): result = self._parts[0][1] return result def _check_compatible(self, other): if type(self) != type(other) or self.name != other.name: raise TypeError('cannot compare %s and %s' % (self, other)) def __eq__(self, other): self._check_compatible(other) return self.key == other.key and self._parts == other._parts def __ne__(self, other): return not self.__eq__(other) # See http://docs.python.org/reference/datamodel#object.__hash__ def __hash__(self): return hash(self.key) + hash(self._parts) def __repr__(self): return "%s(%r)" % (self.__class__.__name__, self._string) def __str__(self): return self._string PEP440_VERSION_RE = re.compile(r'^v?(\d+!)?(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' r'(\.(post)(\d+))?(\.(dev)(\d+))?' r'(\+([a-zA-Z\d]+(\.[a-zA-Z\d]+)?))?$') def _pep_440_key(s): s = s.strip() m = PEP440_VERSION_RE.match(s) if not m: raise UnsupportedVersionError('Not a valid version: %s' % s) groups = m.groups() nums = tuple(int(v) for v in groups[1].split('.')) while len(nums) > 1 and nums[-1] == 0: nums = nums[:-1] if not groups[0]: epoch = 0 else: epoch = int(groups[0]) pre = groups[4:6] post = groups[7:9] dev = groups[10:12] local = groups[13] if pre == (None, None): pre = () else: pre = pre[0], int(pre[1]) if post == (None, None): post = () else: post = post[0], int(post[1]) if dev == (None, None): dev = () else: dev = dev[0], int(dev[1]) if local is None: local = () else: parts = [] for part in local.split('.'): # to ensure that numeric compares as > lexicographic, avoid # comparing them directly, but encode a tuple which ensures # correct sorting if part.isdigit(): part = (1, int(part)) else: part = (0, part) parts.append(part) local = tuple(parts) if not pre: # either before pre-release, or final release and after if not post and dev: # before pre-release pre = ('a', -1) # to sort before a0 else: pre = ('z',) # to sort after all pre-releases # now look at the state of post and dev. if not post: post = ('_',) # sort before 'a' if not dev: dev = ('final',) #print('%s -> %s' % (s, m.groups())) return epoch, nums, pre, post, dev, local _normalized_key = _pep_440_key class NormalizedVersion(Version): """A rational version. Good: 1.2 # equivalent to "1.2.0" 1.2.0 1.2a1 1.2.3a2 1.2.3b1 1.2.3c1 1.2.3.4 TODO: fill this out Bad: 1 # minimum two numbers 1.2a # release level must have a release serial 1.2.3b """ def parse(self, s): result = _normalized_key(s) # _normalized_key loses trailing zeroes in the release # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0 # However, PEP 440 prefix matching needs it: for example, # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0). 
class NormalizedVersion(Version):
    """A rational version.

    Good:
        1.2         # equivalent to "1.2.0"
        1.2.0
        1.2a1
        1.2.3a2
        1.2.3b1
        1.2.3c1
        1.2.3.4
        TODO: fill this out

    Bad:
        1           # minimum two numbers
        1.2a        # release level must have a release serial
        1.2.3b
    """
    def parse(self, s):
        result = _normalized_key(s)
        # _normalized_key loses trailing zeroes in the release
        # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0
        # However, PEP 440 prefix matching needs it: for example,
        # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0).
        m = PEP440_VERSION_RE.match(s)      # must succeed
        groups = m.groups()
        self._release_clause = tuple(int(v) for v in groups[1].split('.'))
        return result

    PREREL_TAGS = set(['a', 'b', 'c', 'rc', 'dev'])

    @property
    def is_prerelease(self):
        return any(t[0] in self.PREREL_TAGS for t in self._parts if t)


def _match_prefix(x, y):
    x = str(x)
    y = str(y)
    if x == y:
        return True
    if not x.startswith(y):
        return False
    n = len(y)
    return x[n] == '.'


class NormalizedMatcher(Matcher):
    version_class = NormalizedVersion

    # value is either a callable or the name of a method
    _operators = {
        '~=': '_match_compatible',
        '<': '_match_lt',
        '>': '_match_gt',
        '<=': '_match_le',
        '>=': '_match_ge',
        '==': '_match_eq',
        '===': '_match_arbitrary',
        '!=': '_match_ne',
    }

    def _adjust_local(self, version, constraint, prefix):
        if prefix:
            strip_local = '+' not in constraint and version._parts[-1]
        else:
            # both constraint and version are
            # NormalizedVersion instances.
            # If constraint does not have a local component,
            # ensure the version doesn't, either.
            strip_local = not constraint._parts[-1] and version._parts[-1]
        if strip_local:
            s = version._string.split('+', 1)[0]
            version = self.version_class(s)
        return version, constraint

    def _match_lt(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if version >= constraint:
            return False
        release_clause = constraint._release_clause
        pfx = '.'.join([str(i) for i in release_clause])
        return not _match_prefix(version, pfx)

    def _match_gt(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if version <= constraint:
            return False
        release_clause = constraint._release_clause
        pfx = '.'.join([str(i) for i in release_clause])
        return not _match_prefix(version, pfx)

    def _match_le(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        return version <= constraint

    def _match_ge(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        return version >= constraint

    def _match_eq(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if not prefix:
            result = (version == constraint)
        else:
            result = _match_prefix(version, constraint)
        return result

    def _match_arbitrary(self, version, constraint, prefix):
        return str(version) == str(constraint)

    def _match_ne(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if not prefix:
            result = (version != constraint)
        else:
            result = not _match_prefix(version, constraint)
        return result

    def _match_compatible(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if version == constraint:
            return True
        if version < constraint:
            return False
        release_clause = constraint._release_clause
        if len(release_clause) > 1:
            release_clause = release_clause[:-1]
        pfx = '.'.join([str(i) for i in release_clause])
        return _match_prefix(version, pfx)
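
# An illustrative sketch (not part of the original sources) of how
# NormalizedMatcher evaluates PEP 440 style constraints, including the
# '== X.Y.*' prefix form and the '~=' compatible-release operator.
def _demo_normalized_matcher():
    m = NormalizedMatcher('foo (>= 1.0, < 2.0)')
    assert m.match('1.5') and not m.match('2.0')

    m = NormalizedMatcher('foo (== 1.4.*)')
    assert m.match('1.4.2') and not m.match('1.40')

    m = NormalizedMatcher('foo (~= 1.4.5)')
    assert m.match('1.4.9') and not m.match('1.5.0')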
_REPLACEMENTS = (
    (re.compile('[.+-]$'), ''),                     # remove trailing puncts
    (re.compile(r'^[.](\d)'), r'0.\1'),             # .N -> 0.N at start
    (re.compile('^[.-]'), ''),                      # remove leading puncts
    (re.compile(r'^\((.*)\)$'), r'\1'),             # remove parentheses
    (re.compile(r'^v(ersion)?\s*(\d+)'), r'\2'),    # remove leading v(ersion)
    (re.compile(r'^r(ev)?\s*(\d+)'), r'\2'),        # remove leading r(ev)
    (re.compile('[.]{2,}'), '.'),                   # multiple runs of '.'
    (re.compile(r'\b(alfa|apha)\b'), 'alpha'),      # misspelt alpha
    (re.compile(r'\b(pre-alpha|prealpha)\b'),
     'pre.alpha'),                                  # standardise
    (re.compile(r'\(beta\)$'), 'beta'),             # remove parentheses
)

_SUFFIX_REPLACEMENTS = (
    (re.compile('^[:~._+-]+'), ''),     # remove leading puncts
    (re.compile('[,*")([\\]]'), ''),    # remove unwanted chars
    (re.compile('[~:+_ -]'), '.'),      # replace illegal chars
    (re.compile('[.]{2,}'), '.'),       # multiple runs of '.'
    (re.compile(r'\.$'), ''),           # trailing '.'
)

_NUMERIC_PREFIX = re.compile(r'(\d+(\.\d+)*)')


def _suggest_semantic_version(s):
    """
    Try to suggest a semantic form for a version for which
    _suggest_normalized_version couldn't come up with anything.
    """
    result = s.strip().lower()
    for pat, repl in _REPLACEMENTS:
        result = pat.sub(repl, result)
    if not result:
        result = '0.0.0'

    # Now look for numeric prefix, and separate it out from
    # the rest.
    m = _NUMERIC_PREFIX.match(result)
    if not m:
        prefix = '0.0.0'
        suffix = result
    else:
        prefix = m.groups()[0].split('.')
        prefix = [int(i) for i in prefix]
        while len(prefix) < 3:
            prefix.append(0)
        if len(prefix) == 3:
            suffix = result[m.end():]
        else:
            suffix = '.'.join([str(i) for i in prefix[3:]]) + result[m.end():]
            prefix = prefix[:3]
        prefix = '.'.join([str(i) for i in prefix])
        suffix = suffix.strip()
    if suffix:
        # massage the suffix.
        for pat, repl in _SUFFIX_REPLACEMENTS:
            suffix = pat.sub(repl, suffix)

    if not suffix:
        result = prefix
    else:
        sep = '-' if 'dev' in suffix else '+'
        result = prefix + sep + suffix
    if not is_semver(result):
        result = None
    return result


def _suggest_normalized_version(s):
    """Suggest a normalized version close to the given version string.

    If you have a version string that isn't rational (i.e. NormalizedVersion
    doesn't like it) then you might be able to get an equivalent (or close)
    rational version from this function.

    This does a number of simple normalizations to the given string, based
    on observation of versions currently in use on PyPI. Given a dump of
    those versions during PyCon 2009, 4287 of them:
    - 2312 (53.93%) match NormalizedVersion without change
      with the automatic suggestion
    - 3474 (81.04%) match when using this suggestion method

    @param s {str} An irrational version string.
    @returns A rational version string, or None, if couldn't determine one.
    """
    try:
        _normalized_key(s)
        return s   # already rational
    except UnsupportedVersionError:
        pass

    rs = s.lower()

    # part of this could use maketrans
    for orig, repl in (('-alpha', 'a'), ('-beta', 'b'), ('alpha', 'a'),
                       ('beta', 'b'), ('rc', 'c'), ('-final', ''),
                       ('-pre', 'c'),
                       ('-release', ''), ('.release', ''), ('-stable', ''),
                       ('+', '.'), ('_', '.'), (' ', ''), ('.final', ''),
                       ('final', '')):
        rs = rs.replace(orig, repl)

    # if something ends with dev or pre, we add a 0
    rs = re.sub(r"pre$", r"pre0", rs)
    rs = re.sub(r"dev$", r"dev0", rs)

    # if we have something like "b-2" or "a.2" at the end of the
    # version, that is probably beta, alpha, etc
    # let's remove the dash or dot
    rs = re.sub(r"([abc]|rc)[\-\.](\d+)$", r"\1\2", rs)

    # 1.0-dev-r371 -> 1.0.dev371
    # 0.1-dev-r79 -> 0.1.dev79
    rs = re.sub(r"[\-\.](dev)[\-\.]?r?(\d+)$", r".\1\2", rs)

    # Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1
    rs = re.sub(r"[.~]?([abc])\.?", r"\1", rs)

    # Clean: v0.3, v1.0
    if rs.startswith('v'):
        rs = rs[1:]

    # Clean leading '0's on numbers.
    # TODO: unintended side-effect on, e.g., "2003.05.09"
    # PyPI stats: 77 (~2%) better
    rs = re.sub(r"\b0+(\d+)(?!\d)", r"\1", rs)

    # Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers
    # zero.
    # PyPI stats: 245 (7.56%) better
    rs = re.sub(r"(\d+[abc])$", r"\g<1>0", rs)

    # the 'dev-rNNN' tag is a dev tag
    rs = re.sub(r"\.?(dev-r|dev\.r)\.?(\d+)$", r".dev\2", rs)

    # clean the - when used as a pre delimiter
    rs = re.sub(r"-(a|b|c)(\d+)$", r"\1\2", rs)

    # a terminal "dev" or "devel" can be changed into ".dev0"
    rs = re.sub(r"[\.\-](dev|devel)$", r".dev0", rs)

    # a terminal "dev" can be changed into ".dev0"
    rs = re.sub(r"(?![\.\-])dev$", r".dev0", rs)

    # a terminal "final" or "stable" can be removed
    rs = re.sub(r"(final|stable)$", "", rs)

    # The 'r' and the '-' tags are post release tags
    #   0.4a1.r10       ->  0.4a1.post10
    #   0.9.33-17222    ->  0.9.33.post17222
    #   0.9.33-r17222   ->  0.9.33.post17222
    rs = re.sub(r"\.?(r|-|-r)\.?(\d+)$", r".post\2", rs)

    # Clean 'r' instead of 'dev' usage:
    #   0.9.33+r17222   ->  0.9.33.dev17222
    #   1.0dev123       ->  1.0.dev123
    #   1.0.git123      ->  1.0.dev123
    #   1.0.bzr123      ->  1.0.dev123
    #   0.1a0dev.123    ->  0.1a0.dev123
    # PyPI stats:  ~150 (~4%) better
    rs = re.sub(r"\.?(dev|git|bzr)\.?(\d+)$", r".dev\2", rs)

    # Clean '.pre' (normalized from '-pre' above) instead of 'c' usage:
    #   0.2.pre1        ->  0.2c1
    #   0.2-c1          ->  0.2c1
    #   1.0preview123   ->  1.0c123
    # PyPI stats: ~21 (0.62%) better
    rs = re.sub(r"\.?(pre|preview|-c)(\d+)$", r"c\g<2>", rs)

    # Tcl/Tk uses "px" for their post release markers
    rs = re.sub(r"p(\d+)$", r".post\1", rs)

    try:
        _normalized_key(rs)
    except UnsupportedVersionError:
        rs = None
    return rs
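
# Expected behaviour, sketched for illustration (not part of the original
# sources): irrational version strings seen on PyPI are massaged towards a
# PEP 440 compatible spelling, and None is returned when no suggestion fits.
def _demo_suggest_normalized_version():
    assert _suggest_normalized_version('1.0') == '1.0'   # already rational
    assert _suggest_normalized_version('1.0-alpha') == '1.0a0'
    assert _suggest_normalized_version('1.0-rc2') == '1.0c2'
    assert _suggest_normalized_version('not-a-version') is None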
#
#   Legacy version processing (distribute-compatible)
#

_VERSION_PART = re.compile(r'([a-z]+|\d+|[\.-])', re.I)
_VERSION_REPLACE = {
    'pre': 'c',
    'preview': 'c',
    '-': 'final-',
    'rc': 'c',
    'dev': '@',
    '': None,
    '.': None,
}


def _legacy_key(s):
    def get_parts(s):
        result = []
        for p in _VERSION_PART.split(s.lower()):
            p = _VERSION_REPLACE.get(p, p)
            if p:
                if '0' <= p[:1] <= '9':
                    p = p.zfill(8)
                else:
                    p = '*' + p
                result.append(p)
        result.append('*final')
        return result

    result = []
    for p in get_parts(s):
        if p.startswith('*'):
            if p < '*final':
                while result and result[-1] == '*final-':
                    result.pop()
            while result and result[-1] == '00000000':
                result.pop()
        result.append(p)
    return tuple(result)


class LegacyVersion(Version):
    def parse(self, s):
        return _legacy_key(s)

    @property
    def is_prerelease(self):
        result = False
        for x in self._parts:
            if (isinstance(x, string_types) and x.startswith('*') and
                    x < '*final'):
                result = True
                break
        return result


class LegacyMatcher(Matcher):
    version_class = LegacyVersion

    _operators = dict(Matcher._operators)
    _operators['~='] = '_match_compatible'

    numeric_re = re.compile(r'^(\d+(\.\d+)*)')

    def _match_compatible(self, version, constraint, prefix):
        if version < constraint:
            return False
        m = self.numeric_re.match(str(constraint))
        if not m:
            logger.warning('Cannot compute compatible match for version %s '
                           ' and constraint %s', version, constraint)
            return True
        s = m.groups()[0]
        if '.' in s:
            s = s.rsplit('.', 1)[0]
        return _match_prefix(version, s)
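
# A short illustration (not part of the original sources) of the
# distribute-compatible ordering: alphabetic tags sort before the implicit
# '*final' marker, so "1.0b1" is an older pre-release of "1.0".
def _demo_legacy_ordering():
    assert LegacyVersion('1.0b1') < LegacyVersion('1.0') < LegacyVersion('1.0.1')
    assert LegacyVersion('1.0b1').is_prerelease
    assert not LegacyVersion('1.0').is_prerelease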
#
#   Semantic versioning
#

_SEMVER_RE = re.compile(r'^(\d+)\.(\d+)\.(\d+)'
                        r'(-[a-z0-9]+(\.[a-z0-9-]+)*)?'
                        r'(\+[a-z0-9]+(\.[a-z0-9-]+)*)?$', re.I)


def is_semver(s):
    return _SEMVER_RE.match(s)


def _semantic_key(s):
    def make_tuple(s, absent):
        if s is None:
            result = (absent,)
        else:
            parts = s[1:].split('.')
            # We can't compare ints and strings on Python 3, so fudge it
            # by zero-filling numeric values to simulate a numeric comparison
            result = tuple([p.zfill(8) if p.isdigit() else p for p in parts])
        return result

    m = is_semver(s)
    if not m:
        raise UnsupportedVersionError(s)
    groups = m.groups()
    major, minor, patch = [int(i) for i in groups[:3]]
    # choose the '|' and '*' so that versions sort correctly
    pre, build = make_tuple(groups[3], '|'), make_tuple(groups[5], '*')
    return (major, minor, patch), pre, build


class SemanticVersion(Version):
    def parse(self, s):
        return _semantic_key(s)

    @property
    def is_prerelease(self):
        return self._parts[1][0] != '|'


class SemanticMatcher(Matcher):
    version_class = SemanticVersion


class VersionScheme(object):
    def __init__(self, key, matcher, suggester=None):
        self.key = key
        self.matcher = matcher
        self.suggester = suggester

    def is_valid_version(self, s):
        try:
            self.matcher.version_class(s)
            result = True
        except UnsupportedVersionError:
            result = False
        return result

    def is_valid_matcher(self, s):
        try:
            self.matcher(s)
            result = True
        except UnsupportedVersionError:
            result = False
        return result

    def is_valid_constraint_list(self, s):
        """
        Used for processing some metadata fields
        """
        return self.is_valid_matcher('dummy_name (%s)' % s)

    def suggest(self, s):
        if self.suggester is None:
            result = None
        else:
            result = self.suggester(s)
        return result


_SCHEMES = {
    'normalized': VersionScheme(_normalized_key, NormalizedMatcher,
                                _suggest_normalized_version),
    # suggest() calls self.suggester(s) with a single argument, so the
    # legacy suggester must be a one-argument callable: a legacy string
    # is its own suggestion
    'legacy': VersionScheme(_legacy_key, LegacyMatcher, lambda s: s),
    'semantic': VersionScheme(_semantic_key, SemanticMatcher,
                              _suggest_semantic_version),
}

_SCHEMES['default'] = _SCHEMES['normalized']


def get_scheme(name):
    if name not in _SCHEMES:
        raise ValueError('unknown scheme name: %r' % name)
    return _SCHEMES[name]
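
# A usage sketch for the scheme registry (illustrative only, not part of the
# original sources): callers pick a scheme by name and go through its
# matcher/suggester rather than instantiating the classes directly.
def _demo_get_scheme():
    scheme = get_scheme('semantic')
    assert scheme.is_valid_version('1.2.3-alpha.1')
    assert SemanticVersion('1.2.3-alpha.1').is_prerelease

    default = get_scheme('default')
    assert default.suggest('1.0-alpha') == '1.0a0'
    assert default.matcher('foo (>= 1.0)').match('1.2')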
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distlib/wheel.py

# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2017 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from __future__ import unicode_literals

import base64
import codecs
import datetime
import distutils.util
from email import message_from_file
import hashlib
import imp
import json
import logging
import os
import posixpath
import re
import shutil
import sys
import tempfile
import zipfile

from . import __version__, DistlibException
from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
from .database import InstalledDistribution
from .metadata import Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME
from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
                   cached_property, get_cache_base, read_exports, tempdir)
from .version import NormalizedVersion, UnsupportedVersionError

logger = logging.getLogger(__name__)

cache = None    # created when needed

if hasattr(sys, 'pypy_version_info'):  # pragma: no cover
    IMP_PREFIX = 'pp'
elif sys.platform.startswith('java'):  # pragma: no cover
    IMP_PREFIX = 'jy'
elif sys.platform == 'cli':  # pragma: no cover
    IMP_PREFIX = 'ip'
else:
    IMP_PREFIX = 'cp'

VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
if not VER_SUFFIX:   # pragma: no cover
    VER_SUFFIX = '%s%s' % sys.version_info[:2]
PYVER = 'py' + VER_SUFFIX
IMPVER = IMP_PREFIX + VER_SUFFIX

ARCH = distutils.util.get_platform().replace('-', '_').replace('.', '_')

ABI = sysconfig.get_config_var('SOABI')
if ABI and ABI.startswith('cpython-'):
    ABI = ABI.replace('cpython-', 'cp')
else:
    def _derive_abi():
        parts = ['cp', VER_SUFFIX]
        if sysconfig.get_config_var('Py_DEBUG'):
            parts.append('d')
        if sysconfig.get_config_var('WITH_PYMALLOC'):
            parts.append('m')
        if sysconfig.get_config_var('Py_UNICODE_SIZE') == 4:
            parts.append('u')
        return ''.join(parts)
    ABI = _derive_abi()
    del _derive_abi

FILENAME_RE = re.compile(r'''
(?P<nm>[^-]+)
-(?P<vn>\d+[^-]*)
(-(?P<bn>\d+[^-]*))?
-(?P<py>\w+\d+(\.\w+\d+)*) -(?P<bi>\w+) -(?P<ar>\w+(\.\w+)*) \.whl$ ''', re.IGNORECASE | re.VERBOSE) NAME_VERSION_RE = re.compile(r''' (?P<nm>[^-]+) -(?P<vn>\d+[^-]*) (-(?P<bn>\d+[^-]*))?$ ''', re.IGNORECASE | re.VERBOSE) SHEBANG_RE = re.compile(br'\s*#![^\r\n]*') SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$') SHEBANG_PYTHON = b'#!python' SHEBANG_PYTHONW = b'#!pythonw' if os.sep == '/': to_posix = lambda o: o else: to_posix = lambda o: o.replace(os.sep, '/') class Mounter(object): def __init__(self): self.impure_wheels = {} self.libs = {} def add(self, pathname, extensions): self.impure_wheels[pathname] = extensions self.libs.update(extensions) def remove(self, pathname): extensions = self.impure_wheels.pop(pathname) for k, v in extensions: if k in self.libs: del self.libs[k] def find_module(self, fullname, path=None): if fullname in self.libs: result = self else: result = None return result def load_module(self, fullname): if fullname in sys.modules: result = sys.modules[fullname] else: if fullname not in self.libs: raise ImportError('unable to find extension for %s' % fullname) result = imp.load_dynamic(fullname, self.libs[fullname]) result.__loader__ = self parts = fullname.rsplit('.', 1) if len(parts) > 1: result.__package__ = parts[0] return result _hook = Mounter() class Wheel(object): """ Class to build and install from Wheel files (PEP 427). """ wheel_version = (1, 1) hash_kind = 'sha256' def __init__(self, filename=None, sign=False, verify=False): """ Initialise an instance using a (valid) filename. """ self.sign = sign self.should_verify = verify self.buildver = '' self.pyver = [PYVER] self.abi = ['none'] self.arch = ['any'] self.dirname = os.getcwd() if filename is None: self.name = 'dummy' self.version = '0.1' self._filename = self.filename else: m = NAME_VERSION_RE.match(filename) if m: info = m.groupdict('') self.name = info['nm'] # Reinstate the local version separator self.version = info['vn'].replace('_', '-') self.buildver = info['bn'] self._filename = self.filename else: dirname, filename = os.path.split(filename) m = FILENAME_RE.match(filename) if not m: raise DistlibException('Invalid name or ' 'filename: %r' % filename) if dirname: self.dirname = os.path.abspath(dirname) self._filename = filename info = m.groupdict('') self.name = info['nm'] self.version = info['vn'] self.buildver = info['bn'] self.pyver = info['py'].split('.') self.abi = info['bi'].split('.') self.arch = info['ar'].split('.') @property def filename(self): """ Build and return a filename from the various components. 
""" if self.buildver: buildver = '-' + self.buildver else: buildver = '' pyver = '.'.join(self.pyver) abi = '.'.join(self.abi) arch = '.'.join(self.arch) # replace - with _ as a local version separator version = self.version.replace('-', '_') return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver, pyver, abi, arch) @property def exists(self): path = os.path.join(self.dirname, self.filename) return os.path.isfile(path) @property def tags(self): for pyver in self.pyver: for abi in self.abi: for arch in self.arch: yield pyver, abi, arch @cached_property def metadata(self): pathname = os.path.join(self.dirname, self.filename) name_ver = '%s-%s' % (self.name, self.version) info_dir = '%s.dist-info' % name_ver wrapper = codecs.getreader('utf-8') with ZipFile(pathname, 'r') as zf: wheel_metadata = self.get_wheel_metadata(zf) wv = wheel_metadata['Wheel-Version'].split('.', 1) file_version = tuple([int(i) for i in wv]) if file_version < (1, 1): fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME, 'METADATA'] else: fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME] result = None for fn in fns: try: metadata_filename = posixpath.join(info_dir, fn) with zf.open(metadata_filename) as bf: wf = wrapper(bf) result = Metadata(fileobj=wf) if result: break except KeyError: pass if not result: raise ValueError('Invalid wheel, because metadata is ' 'missing: looked in %s' % ', '.join(fns)) return result def get_wheel_metadata(self, zf): name_ver = '%s-%s' % (self.name, self.version) info_dir = '%s.dist-info' % name_ver metadata_filename = posixpath.join(info_dir, 'WHEEL') with zf.open(metadata_filename) as bf: wf = codecs.getreader('utf-8')(bf) message = message_from_file(wf) return dict(message) @cached_property def info(self): pathname = os.path.join(self.dirname, self.filename) with ZipFile(pathname, 'r') as zf: result = self.get_wheel_metadata(zf) return result def process_shebang(self, data): m = SHEBANG_RE.match(data) if m: end = m.end() shebang, data_after_shebang = data[:end], data[end:] # Preserve any arguments after the interpreter if b'pythonw' in shebang.lower(): shebang_python = SHEBANG_PYTHONW else: shebang_python = SHEBANG_PYTHON m = SHEBANG_DETAIL_RE.match(shebang) if m: args = b' ' + m.groups()[-1] else: args = b'' shebang = shebang_python + args data = shebang + data_after_shebang else: cr = data.find(b'\r') lf = data.find(b'\n') if cr < 0 or cr > lf: term = b'\n' else: if data[cr:cr + 2] == b'\r\n': term = b'\r\n' else: term = b'\r' data = SHEBANG_PYTHON + term + data return data def get_hash(self, data, hash_kind=None): if hash_kind is None: hash_kind = self.hash_kind try: hasher = getattr(hashlib, hash_kind) except AttributeError: raise DistlibException('Unsupported hash algorithm: %r' % hash_kind) result = hasher(data).digest() result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii') return hash_kind, result def write_record(self, records, record_path, base): records = list(records) # make a copy for sorting p = to_posix(os.path.relpath(record_path, base)) records.append((p, '', '')) records.sort() with CSVWriter(record_path) as writer: for row in records: writer.writerow(row) def write_records(self, info, libdir, archive_paths): records = [] distinfo, info_dir = info hasher = getattr(hashlib, self.hash_kind) for ap, p in archive_paths: with open(p, 'rb') as f: data = f.read() digest = '%s=%s' % self.get_hash(data) size = os.path.getsize(p) records.append((ap, digest, size)) p = os.path.join(distinfo, 'RECORD') self.write_record(records, p, libdir) ap = 
to_posix(os.path.join(info_dir, 'RECORD')) archive_paths.append((ap, p)) def build_zip(self, pathname, archive_paths): with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf: for ap, p in archive_paths: logger.debug('Wrote %s to %s in wheel', p, ap) zf.write(p, ap) def build(self, paths, tags=None, wheel_version=None): """ Build a wheel from files in specified paths, and use any specified tags when determining the name of the wheel. """ if tags is None: tags = {} libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0] if libkey == 'platlib': is_pure = 'false' default_pyver = [IMPVER] default_abi = [ABI] default_arch = [ARCH] else: is_pure = 'true' default_pyver = [PYVER] default_abi = ['none'] default_arch = ['any'] self.pyver = tags.get('pyver', default_pyver) self.abi = tags.get('abi', default_abi) self.arch = tags.get('arch', default_arch) libdir = paths[libkey] name_ver = '%s-%s' % (self.name, self.version) data_dir = '%s.data' % name_ver info_dir = '%s.dist-info' % name_ver archive_paths = [] # First, stuff which is not in site-packages for key in ('data', 'headers', 'scripts'): if key not in paths: continue path = paths[key] if os.path.isdir(path): for root, dirs, files in os.walk(path): for fn in files: p = fsdecode(os.path.join(root, fn)) rp = os.path.relpath(p, path) ap = to_posix(os.path.join(data_dir, key, rp)) archive_paths.append((ap, p)) if key == 'scripts' and not p.endswith('.exe'): with open(p, 'rb') as f: data = f.read() data = self.process_shebang(data) with open(p, 'wb') as f: f.write(data) # Now, stuff which is in site-packages, other than the # distinfo stuff. path = libdir distinfo = None for root, dirs, files in os.walk(path): if root == path: # At the top level only, save distinfo for later # and skip it for now for i, dn in enumerate(dirs): dn = fsdecode(dn) if dn.endswith('.dist-info'): distinfo = os.path.join(root, dn) del dirs[i] break assert distinfo, '.dist-info directory expected, not found' for fn in files: # comment out next suite to leave .pyc files in if fsdecode(fn).endswith(('.pyc', '.pyo')): continue p = os.path.join(root, fn) rp = to_posix(os.path.relpath(p, path)) archive_paths.append((rp, p)) # Now distinfo. Assumed to be flat, i.e. os.listdir is enough. files = os.listdir(distinfo) for fn in files: if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'): p = fsdecode(os.path.join(distinfo, fn)) ap = to_posix(os.path.join(info_dir, fn)) archive_paths.append((ap, p)) wheel_metadata = [ 'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version), 'Generator: distlib %s' % __version__, 'Root-Is-Purelib: %s' % is_pure, ] for pyver, abi, arch in self.tags: wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch)) p = os.path.join(distinfo, 'WHEEL') with open(p, 'w') as f: f.write('\n'.join(wheel_metadata)) ap = to_posix(os.path.join(info_dir, 'WHEEL')) archive_paths.append((ap, p)) # Now, at last, RECORD. # Paths in here are archive paths - nothing else makes sense. self.write_records((distinfo, info_dir), libdir, archive_paths) # Now, ready to build the zip file pathname = os.path.join(self.dirname, self.filename) self.build_zip(pathname, archive_paths) return pathname def skip_entry(self, arcname): """ Determine whether an archive entry should be skipped when verifying or installing. """ # The signature file won't be in RECORD, # and we don't currently don't do anything with it # We also skip directories, as they won't be in RECORD # either. 
See: # # https://github.com/pypa/wheel/issues/294 # https://github.com/pypa/wheel/issues/287 # https://github.com/pypa/wheel/pull/289 # return arcname.endswith(('/', '/RECORD.jws')) def install(self, paths, maker, **kwargs): """ Install a wheel to the specified paths. If kwarg ``warner`` is specified, it should be a callable, which will be called with two tuples indicating the wheel version of this software and the wheel version in the file, if there is a discrepancy in the versions. This can be used to issue any warnings to raise any exceptions. If kwarg ``lib_only`` is True, only the purelib/platlib files are installed, and the headers, scripts, data and dist-info metadata are not written. If kwarg ``bytecode_hashed_invalidation`` is True, written bytecode will try to use file-hash based invalidation (PEP-552) on supported interpreter versions (CPython 2.7+). The return value is a :class:`InstalledDistribution` instance unless ``options.lib_only`` is True, in which case the return value is ``None``. """ dry_run = maker.dry_run warner = kwargs.get('warner') lib_only = kwargs.get('lib_only', False) bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False) pathname = os.path.join(self.dirname, self.filename) name_ver = '%s-%s' % (self.name, self.version) data_dir = '%s.data' % name_ver info_dir = '%s.dist-info' % name_ver metadata_name = posixpath.join(info_dir, METADATA_FILENAME) wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') record_name = posixpath.join(info_dir, 'RECORD') wrapper = codecs.getreader('utf-8') with ZipFile(pathname, 'r') as zf: with zf.open(wheel_metadata_name) as bwf: wf = wrapper(bwf) message = message_from_file(wf) wv = message['Wheel-Version'].split('.', 1) file_version = tuple([int(i) for i in wv]) if (file_version != self.wheel_version) and warner: warner(self.wheel_version, file_version) if message['Root-Is-Purelib'] == 'true': libdir = paths['purelib'] else: libdir = paths['platlib'] records = {} with zf.open(record_name) as bf: with CSVReader(stream=bf) as reader: for row in reader: p = row[0] records[p] = row data_pfx = posixpath.join(data_dir, '') info_pfx = posixpath.join(info_dir, '') script_pfx = posixpath.join(data_dir, 'scripts', '') # make a new instance rather than a copy of maker's, # as we mutate it fileop = FileOperator(dry_run=dry_run) fileop.record = True # so we can rollback if needed bc = not sys.dont_write_bytecode # Double negatives. Lovely! 
outfiles = [] # for RECORD writing # for script copying/shebang processing workdir = tempfile.mkdtemp() # set target dir later # we default add_launchers to False, as the # Python Launcher should be used instead maker.source_dir = workdir maker.target_dir = None try: for zinfo in zf.infolist(): arcname = zinfo.filename if isinstance(arcname, text_type): u_arcname = arcname else: u_arcname = arcname.decode('utf-8') if self.skip_entry(u_arcname): continue row = records[u_arcname] if row[2] and str(zinfo.file_size) != row[2]: raise DistlibException('size mismatch for ' '%s' % u_arcname) if row[1]: kind, value = row[1].split('=', 1) with zf.open(arcname) as bf: data = bf.read() _, digest = self.get_hash(data, kind) if digest != value: raise DistlibException('digest mismatch for ' '%s' % arcname) if lib_only and u_arcname.startswith((info_pfx, data_pfx)): logger.debug('lib_only: skipping %s', u_arcname) continue is_script = (u_arcname.startswith(script_pfx) and not u_arcname.endswith('.exe')) if u_arcname.startswith(data_pfx): _, where, rp = u_arcname.split('/', 2) outfile = os.path.join(paths[where], convert_path(rp)) else: # meant for site-packages. if u_arcname in (wheel_metadata_name, record_name): continue outfile = os.path.join(libdir, convert_path(u_arcname)) if not is_script: with zf.open(arcname) as bf: fileop.copy_stream(bf, outfile) outfiles.append(outfile) # Double check the digest of the written file if not dry_run and row[1]: with open(outfile, 'rb') as bf: data = bf.read() _, newdigest = self.get_hash(data, kind) if newdigest != digest: raise DistlibException('digest mismatch ' 'on write for ' '%s' % outfile) if bc and outfile.endswith('.py'): try: pyc = fileop.byte_compile(outfile, hashed_invalidation=bc_hashed_invalidation) outfiles.append(pyc) except Exception: # Don't give up if byte-compilation fails, # but log it and perhaps warn the user logger.warning('Byte-compilation failed', exc_info=True) else: fn = os.path.basename(convert_path(arcname)) workname = os.path.join(workdir, fn) with zf.open(arcname) as bf: fileop.copy_stream(bf, workname) dn, fn = os.path.split(outfile) maker.target_dir = dn filenames = maker.make(fn) fileop.set_executable_mode(filenames) outfiles.extend(filenames) if lib_only: logger.debug('lib_only: returning None') dist = None else: # Generate scripts # Try to get pydist.json so we can see if there are # any commands to generate. If this fails (e.g. because # of a legacy wheel), log a warning but don't give up. 
commands = None file_version = self.info['Wheel-Version'] if file_version == '1.0': # Use legacy info ep = posixpath.join(info_dir, 'entry_points.txt') try: with zf.open(ep) as bwf: epdata = read_exports(bwf) commands = {} for key in ('console', 'gui'): k = '%s_scripts' % key if k in epdata: commands['wrap_%s' % key] = d = {} for v in epdata[k].values(): s = '%s:%s' % (v.prefix, v.suffix) if v.flags: s += ' %s' % v.flags d[v.name] = s except Exception: logger.warning('Unable to read legacy script ' 'metadata, so cannot generate ' 'scripts') else: try: with zf.open(metadata_name) as bwf: wf = wrapper(bwf) commands = json.load(wf).get('extensions') if commands: commands = commands.get('python.commands') except Exception: logger.warning('Unable to read JSON metadata, so ' 'cannot generate scripts') if commands: console_scripts = commands.get('wrap_console', {}) gui_scripts = commands.get('wrap_gui', {}) if console_scripts or gui_scripts: script_dir = paths.get('scripts', '') if not os.path.isdir(script_dir): raise ValueError('Valid script path not ' 'specified') maker.target_dir = script_dir for k, v in console_scripts.items(): script = '%s = %s' % (k, v) filenames = maker.make(script) fileop.set_executable_mode(filenames) if gui_scripts: options = {'gui': True } for k, v in gui_scripts.items(): script = '%s = %s' % (k, v) filenames = maker.make(script, options) fileop.set_executable_mode(filenames) p = os.path.join(libdir, info_dir) dist = InstalledDistribution(p) # Write SHARED paths = dict(paths) # don't change passed in dict del paths['purelib'] del paths['platlib'] paths['lib'] = libdir p = dist.write_shared_locations(paths, dry_run) if p: outfiles.append(p) # Write RECORD dist.write_installed_files(outfiles, paths['prefix'], dry_run) return dist except Exception: # pragma: no cover logger.exception('installation failed.') fileop.rollback() raise finally: shutil.rmtree(workdir) def _get_dylib_cache(self): global cache if cache is None: # Use native string to avoid issues on 2.x: see Python #20140. base = os.path.join(get_cache_base(), str('dylib-cache'), sys.version[:3]) cache = Cache(base) return cache def _get_extensions(self): pathname = os.path.join(self.dirname, self.filename) name_ver = '%s-%s' % (self.name, self.version) info_dir = '%s.dist-info' % name_ver arcname = posixpath.join(info_dir, 'EXTENSIONS') wrapper = codecs.getreader('utf-8') result = [] with ZipFile(pathname, 'r') as zf: try: with zf.open(arcname) as bf: wf = wrapper(bf) extensions = json.load(wf) cache = self._get_dylib_cache() prefix = cache.prefix_to_dir(pathname) cache_base = os.path.join(cache.base, prefix) if not os.path.isdir(cache_base): os.makedirs(cache_base) for name, relpath in extensions.items(): dest = os.path.join(cache_base, convert_path(relpath)) if not os.path.exists(dest): extract = True else: file_time = os.stat(dest).st_mtime file_time = datetime.datetime.fromtimestamp(file_time) info = zf.getinfo(relpath) wheel_time = datetime.datetime(*info.date_time) extract = wheel_time > file_time if extract: zf.extract(relpath, cache_base) result.append((name, dest)) except KeyError: pass return result def is_compatible(self): """ Determine if a wheel is compatible with the running system. """ return is_compatible(self) def is_mountable(self): """ Determine if a wheel is asserted as mountable by its metadata. 
""" return True # for now - metadata details TBD def mount(self, append=False): pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) if not self.is_compatible(): msg = 'Wheel %s not compatible with this Python.' % pathname raise DistlibException(msg) if not self.is_mountable(): msg = 'Wheel %s is marked as not mountable.' % pathname raise DistlibException(msg) if pathname in sys.path: logger.debug('%s already in path', pathname) else: if append: sys.path.append(pathname) else: sys.path.insert(0, pathname) extensions = self._get_extensions() if extensions: if _hook not in sys.meta_path: sys.meta_path.append(_hook) _hook.add(pathname, extensions) def unmount(self): pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) if pathname not in sys.path: logger.debug('%s not in path', pathname) else: sys.path.remove(pathname) if pathname in _hook.impure_wheels: _hook.remove(pathname) if not _hook.impure_wheels: if _hook in sys.meta_path: sys.meta_path.remove(_hook) def verify(self): pathname = os.path.join(self.dirname, self.filename) name_ver = '%s-%s' % (self.name, self.version) data_dir = '%s.data' % name_ver info_dir = '%s.dist-info' % name_ver metadata_name = posixpath.join(info_dir, METADATA_FILENAME) wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') record_name = posixpath.join(info_dir, 'RECORD') wrapper = codecs.getreader('utf-8') with ZipFile(pathname, 'r') as zf: with zf.open(wheel_metadata_name) as bwf: wf = wrapper(bwf) message = message_from_file(wf) wv = message['Wheel-Version'].split('.', 1) file_version = tuple([int(i) for i in wv]) # TODO version verification records = {} with zf.open(record_name) as bf: with CSVReader(stream=bf) as reader: for row in reader: p = row[0] records[p] = row for zinfo in zf.infolist(): arcname = zinfo.filename if isinstance(arcname, text_type): u_arcname = arcname else: u_arcname = arcname.decode('utf-8') # See issue #115: some wheels have .. in their entries, but # in the filename ... e.g. __main__..py ! So the check is # updated to look for .. in the directory portions p = u_arcname.split('/') if '..' in p: raise DistlibException('invalid entry in ' 'wheel: %r' % u_arcname) if self.skip_entry(u_arcname): continue row = records[u_arcname] if row[2] and str(zinfo.file_size) != row[2]: raise DistlibException('size mismatch for ' '%s' % u_arcname) if row[1]: kind, value = row[1].split('=', 1) with zf.open(arcname) as bf: data = bf.read() _, digest = self.get_hash(data, kind) if digest != value: raise DistlibException('digest mismatch for ' '%s' % arcname) def update(self, modifier, dest_dir=None, **kwargs): """ Update the contents of a wheel in a generic way. The modifier should be a callable which expects a dictionary argument: its keys are archive-entry paths, and its values are absolute filesystem paths where the contents the corresponding archive entries can be found. The modifier is free to change the contents of the files pointed to, add new entries and remove entries, before returning. This method will extract the entire contents of the wheel to a temporary location, call the modifier, and then use the passed (and possibly updated) dictionary to write a new wheel. If ``dest_dir`` is specified, the new wheel is written there -- otherwise, the original wheel is overwritten. The modifier should return True if it updated the wheel, else False. This method returns the same value the modifier returns. 
""" def get_version(path_map, info_dir): version = path = None key = '%s/%s' % (info_dir, METADATA_FILENAME) if key not in path_map: key = '%s/PKG-INFO' % info_dir if key in path_map: path = path_map[key] version = Metadata(path=path).version return version, path def update_version(version, path): updated = None try: v = NormalizedVersion(version) i = version.find('-') if i < 0: updated = '%s+1' % version else: parts = [int(s) for s in version[i + 1:].split('.')] parts[-1] += 1 updated = '%s+%s' % (version[:i], '.'.join(str(i) for i in parts)) except UnsupportedVersionError: logger.debug('Cannot update non-compliant (PEP-440) ' 'version %r', version) if updated: md = Metadata(path=path) md.version = updated legacy = not path.endswith(METADATA_FILENAME) md.write(path=path, legacy=legacy) logger.debug('Version updated from %r to %r', version, updated) pathname = os.path.join(self.dirname, self.filename) name_ver = '%s-%s' % (self.name, self.version) info_dir = '%s.dist-info' % name_ver record_name = posixpath.join(info_dir, 'RECORD') with tempdir() as workdir: with ZipFile(pathname, 'r') as zf: path_map = {} for zinfo in zf.infolist(): arcname = zinfo.filename if isinstance(arcname, text_type): u_arcname = arcname else: u_arcname = arcname.decode('utf-8') if u_arcname == record_name: continue if '..' in u_arcname: raise DistlibException('invalid entry in ' 'wheel: %r' % u_arcname) zf.extract(zinfo, workdir) path = os.path.join(workdir, convert_path(u_arcname)) path_map[u_arcname] = path # Remember the version. original_version, _ = get_version(path_map, info_dir) # Files extracted. Call the modifier. modified = modifier(path_map, **kwargs) if modified: # Something changed - need to build a new wheel. current_version, path = get_version(path_map, info_dir) if current_version and (current_version == original_version): # Add or update local version to signify changes. update_version(current_version, path) # Decide where the new wheel goes. if dest_dir is None: fd, newpath = tempfile.mkstemp(suffix='.whl', prefix='wheel-update-', dir=workdir) os.close(fd) else: if not os.path.isdir(dest_dir): raise DistlibException('Not a directory: %r' % dest_dir) newpath = os.path.join(dest_dir, self.filename) archive_paths = list(path_map.items()) distinfo = os.path.join(workdir, info_dir) info = distinfo, info_dir self.write_records(info, workdir, archive_paths) self.build_zip(newpath, archive_paths) if dest_dir is None: shutil.copyfile(newpath, pathname) return modified def compatible_tags(): """ Return (pyver, abi, arch) tuples compatible with this Python. 
""" versions = [VER_SUFFIX] major = VER_SUFFIX[0] for minor in range(sys.version_info[1] - 1, - 1, -1): versions.append(''.join([major, str(minor)])) abis = [] for suffix, _, _ in imp.get_suffixes(): if suffix.startswith('.abi'): abis.append(suffix.split('.', 2)[1]) abis.sort() if ABI != 'none': abis.insert(0, ABI) abis.append('none') result = [] arches = [ARCH] if sys.platform == 'darwin': m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH) if m: name, major, minor, arch = m.groups() minor = int(minor) matches = [arch] if arch in ('i386', 'ppc'): matches.append('fat') if arch in ('i386', 'ppc', 'x86_64'): matches.append('fat3') if arch in ('ppc64', 'x86_64'): matches.append('fat64') if arch in ('i386', 'x86_64'): matches.append('intel') if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'): matches.append('universal') while minor >= 0: for match in matches: s = '%s_%s_%s_%s' % (name, major, minor, match) if s != ARCH: # already there arches.append(s) minor -= 1 # Most specific - our Python version, ABI and arch for abi in abis: for arch in arches: result.append((''.join((IMP_PREFIX, versions[0])), abi, arch)) # where no ABI / arch dependency, but IMP_PREFIX dependency for i, version in enumerate(versions): result.append((''.join((IMP_PREFIX, version)), 'none', 'any')) if i == 0: result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any')) # no IMP_PREFIX, ABI or arch dependency for i, version in enumerate(versions): result.append((''.join(('py', version)), 'none', 'any')) if i == 0: result.append((''.join(('py', version[0])), 'none', 'any')) return set(result) COMPATIBLE_TAGS = compatible_tags() del compatible_tags def is_compatible(wheel, tags=None): if not isinstance(wheel, Wheel): wheel = Wheel(wheel) # assume it's a filename result = False if tags is None: tags = COMPATIBLE_TAGS for ver, abi, arch in tags: if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch: result = True break return result �����������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/distro.py��������������������������������0000644�0001750�0001750�00000124363�00000000000�026354� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# Copyright 2015,2016,2017 Nir Cohen # # Licensed under the 
Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ The ``distro`` package (``distro`` stands for Linux Distribution) provides information about the Linux distribution it runs on, such as a reliable machine-readable distro ID, or version information. It is the recommended replacement for Python's original :py:func:`platform.linux_distribution` function, but it provides much more functionality. An alternative implementation became necessary because Python 3.5 deprecated this function, and Python 3.8 will remove it altogether. Its predecessor function :py:func:`platform.dist` was already deprecated since Python 2.6 and will also be removed in Python 3.8. Still, there are many cases in which access to OS distribution information is needed. See `Python issue 1322 <https://bugs.python.org/issue1322>`_ for more information. """ import os import re import sys import json import shlex import logging import argparse import subprocess _UNIXCONFDIR = os.environ.get('UNIXCONFDIR', '/etc') _OS_RELEASE_BASENAME = 'os-release' #: Translation table for normalizing the "ID" attribute defined in os-release #: files, for use by the :func:`distro.id` method. #: #: * Key: Value as defined in the os-release file, translated to lower case, #: with blanks translated to underscores. #: #: * Value: Normalized value. NORMALIZED_OS_ID = { 'ol': 'oracle', # Oracle Enterprise Linux } #: Translation table for normalizing the "Distributor ID" attribute returned by #: the lsb_release command, for use by the :func:`distro.id` method. #: #: * Key: Value as returned by the lsb_release command, translated to lower #: case, with blanks translated to underscores. #: #: * Value: Normalized value. NORMALIZED_LSB_ID = { 'enterpriseenterprise': 'oracle', # Oracle Enterprise Linux 'redhatenterpriseworkstation': 'rhel', # RHEL 6, 7 Workstation 'redhatenterpriseserver': 'rhel', # RHEL 6, 7 Server } #: Translation table for normalizing the distro ID derived from the file name #: of distro release files, for use by the :func:`distro.id` method. #: #: * Key: Value as derived from the file name of a distro release file, #: translated to lower case, with blanks translated to underscores. #: #: * Value: Normalized value. NORMALIZED_DISTRO_ID = { 'redhat': 'rhel', # RHEL 6.x, 7.x } # Pattern for content of distro release file (reversed) _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile( r'(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)') # Pattern for base file name of distro release file _DISTRO_RELEASE_BASENAME_PATTERN = re.compile( r'(\w+)[-_](release|version)$') # Base file names to be ignored when searching for distro release file _DISTRO_RELEASE_IGNORE_BASENAMES = ( 'debian_version', 'lsb-release', 'oem-release', _OS_RELEASE_BASENAME, 'system-release' ) def linux_distribution(full_distribution_name=True): """ Return information about the current OS distribution as a tuple ``(id_name, version, codename)`` with items as follows: * ``id_name``: If *full_distribution_name* is false, the result of :func:`distro.id`. 
Otherwise, the result of :func:`distro.name`. * ``version``: The result of :func:`distro.version`. * ``codename``: The result of :func:`distro.codename`. The interface of this function is compatible with the original :py:func:`platform.linux_distribution` function, supporting a subset of its parameters. The data it returns may not exactly be the same, because it uses more data sources than the original function, and that may lead to different data if the OS distribution is not consistent across multiple data sources it provides (there are indeed such distributions ...). Another reason for differences is the fact that the :func:`distro.id` method normalizes the distro ID string to a reliable machine-readable value for a number of popular OS distributions. """ return _distro.linux_distribution(full_distribution_name) def id(): """ Return the distro ID of the current distribution, as a machine-readable string. For a number of OS distributions, the returned distro ID value is *reliable*, in the sense that it is documented and that it does not change across releases of the distribution. This package maintains the following reliable distro ID values: ============== ========================================= Distro ID Distribution ============== ========================================= "ubuntu" Ubuntu "debian" Debian "rhel" RedHat Enterprise Linux "centos" CentOS "fedora" Fedora "sles" SUSE Linux Enterprise Server "opensuse" openSUSE "amazon" Amazon Linux "arch" Arch Linux "cloudlinux" CloudLinux OS "exherbo" Exherbo Linux "gentoo" GenToo Linux "ibm_powerkvm" IBM PowerKVM "kvmibm" KVM for IBM z Systems "linuxmint" Linux Mint "mageia" Mageia "mandriva" Mandriva Linux "parallels" Parallels "pidora" Pidora "raspbian" Raspbian "oracle" Oracle Linux (and Oracle Enterprise Linux) "scientific" Scientific Linux "slackware" Slackware "xenserver" XenServer "openbsd" OpenBSD "netbsd" NetBSD "freebsd" FreeBSD ============== ========================================= If you have a need to get distros for reliable IDs added into this set, or if you find that the :func:`distro.id` function returns a different distro ID for one of the listed distros, please create an issue in the `distro issue tracker`_. **Lookup hierarchy and transformations:** First, the ID is obtained from the following sources, in the specified order. The first available and non-empty value is used: * the value of the "ID" attribute of the os-release file, * the value of the "Distributor ID" attribute returned by the lsb_release command, * the first part of the file name of the distro release file, The so determined ID value then passes the following transformations, before it is returned by this method: * it is translated to lower case, * blanks (which should not be there anyway) are translated to underscores, * a normalization of the ID is performed, based upon `normalization tables`_. The purpose of this normalization is to ensure that the ID is as reliable as possible, even across incompatible changes in the OS distributions. A common reason for an incompatible change is the addition of an os-release file, or the addition of the lsb_release command, with ID values that differ from what was previously determined from the distro release file name. """ return _distro.id() def name(pretty=False): """ Return the name of the current OS distribution, as a human-readable string. If *pretty* is false, the name is returned without version or codename. (e.g. "CentOS Linux") If *pretty* is true, the version and codename are appended. (e.g. 
"CentOS Linux 7.1.1503 (Core)") **Lookup hierarchy:** The name is obtained from the following sources, in the specified order. The first available and non-empty value is used: * If *pretty* is false: - the value of the "NAME" attribute of the os-release file, - the value of the "Distributor ID" attribute returned by the lsb_release command, - the value of the "<name>" field of the distro release file. * If *pretty* is true: - the value of the "PRETTY_NAME" attribute of the os-release file, - the value of the "Description" attribute returned by the lsb_release command, - the value of the "<name>" field of the distro release file, appended with the value of the pretty version ("<version_id>" and "<codename>" fields) of the distro release file, if available. """ return _distro.name(pretty) def version(pretty=False, best=False): """ Return the version of the current OS distribution, as a human-readable string. If *pretty* is false, the version is returned without codename (e.g. "7.0"). If *pretty* is true, the codename in parenthesis is appended, if the codename is non-empty (e.g. "7.0 (Maipo)"). Some distributions provide version numbers with different precisions in the different sources of distribution information. Examining the different sources in a fixed priority order does not always yield the most precise version (e.g. for Debian 8.2, or CentOS 7.1). The *best* parameter can be used to control the approach for the returned version: If *best* is false, the first non-empty version number in priority order of the examined sources is returned. If *best* is true, the most precise version number out of all examined sources is returned. **Lookup hierarchy:** In all cases, the version number is obtained from the following sources. If *best* is false, this order represents the priority order: * the value of the "VERSION_ID" attribute of the os-release file, * the value of the "Release" attribute returned by the lsb_release command, * the version number parsed from the "<version_id>" field of the first line of the distro release file, * the version number parsed from the "PRETTY_NAME" attribute of the os-release file, if it follows the format of the distro release files. * the version number parsed from the "Description" attribute returned by the lsb_release command, if it follows the format of the distro release files. """ return _distro.version(pretty, best) def version_parts(best=False): """ Return the version of the current OS distribution as a tuple ``(major, minor, build_number)`` with items as follows: * ``major``: The result of :func:`distro.major_version`. * ``minor``: The result of :func:`distro.minor_version`. * ``build_number``: The result of :func:`distro.build_number`. For a description of the *best* parameter, see the :func:`distro.version` method. """ return _distro.version_parts(best) def major_version(best=False): """ Return the major version of the current OS distribution, as a string, if provided. Otherwise, the empty string is returned. The major version is the first part of the dot-separated version string. For a description of the *best* parameter, see the :func:`distro.version` method. """ return _distro.major_version(best) def minor_version(best=False): """ Return the minor version of the current OS distribution, as a string, if provided. Otherwise, the empty string is returned. The minor version is the second part of the dot-separated version string. For a description of the *best* parameter, see the :func:`distro.version` method. 
""" return _distro.minor_version(best) def build_number(best=False): """ Return the build number of the current OS distribution, as a string, if provided. Otherwise, the empty string is returned. The build number is the third part of the dot-separated version string. For a description of the *best* parameter, see the :func:`distro.version` method. """ return _distro.build_number(best) def like(): """ Return a space-separated list of distro IDs of distributions that are closely related to the current OS distribution in regards to packaging and programming interfaces, for example distributions the current distribution is a derivative from. **Lookup hierarchy:** This information item is only provided by the os-release file. For details, see the description of the "ID_LIKE" attribute in the `os-release man page <http://www.freedesktop.org/software/systemd/man/os-release.html>`_. """ return _distro.like() def codename(): """ Return the codename for the release of the current OS distribution, as a string. If the distribution does not have a codename, an empty string is returned. Note that the returned codename is not always really a codename. For example, openSUSE returns "x86_64". This function does not handle such cases in any special way and just returns the string it finds, if any. **Lookup hierarchy:** * the codename within the "VERSION" attribute of the os-release file, if provided, * the value of the "Codename" attribute returned by the lsb_release command, * the value of the "<codename>" field of the distro release file. """ return _distro.codename() def info(pretty=False, best=False): """ Return certain machine-readable information items about the current OS distribution in a dictionary, as shown in the following example: .. sourcecode:: python { 'id': 'rhel', 'version': '7.0', 'version_parts': { 'major': '7', 'minor': '0', 'build_number': '' }, 'like': 'fedora', 'codename': 'Maipo' } The dictionary structure and keys are always the same, regardless of which information items are available in the underlying data sources. The values for the various keys are as follows: * ``id``: The result of :func:`distro.id`. * ``version``: The result of :func:`distro.version`. * ``version_parts -> major``: The result of :func:`distro.major_version`. * ``version_parts -> minor``: The result of :func:`distro.minor_version`. * ``version_parts -> build_number``: The result of :func:`distro.build_number`. * ``like``: The result of :func:`distro.like`. * ``codename``: The result of :func:`distro.codename`. For a description of the *pretty* and *best* parameters, see the :func:`distro.version` method. """ return _distro.info(pretty, best) def os_release_info(): """ Return a dictionary containing key-value pairs for the information items from the os-release file data source of the current OS distribution. See `os-release file`_ for details about these information items. """ return _distro.os_release_info() def lsb_release_info(): """ Return a dictionary containing key-value pairs for the information items from the lsb_release command data source of the current OS distribution. See `lsb_release command output`_ for details about these information items. """ return _distro.lsb_release_info() def distro_release_info(): """ Return a dictionary containing key-value pairs for the information items from the distro release file data source of the current OS distribution. See `distro release file`_ for details about these information items. 
""" return _distro.distro_release_info() def uname_info(): """ Return a dictionary containing key-value pairs for the information items from the distro release file data source of the current OS distribution. """ return _distro.uname_info() def os_release_attr(attribute): """ Return a single named information item from the os-release file data source of the current OS distribution. Parameters: * ``attribute`` (string): Key of the information item. Returns: * (string): Value of the information item, if the item exists. The empty string, if the item does not exist. See `os-release file`_ for details about these information items. """ return _distro.os_release_attr(attribute) def lsb_release_attr(attribute): """ Return a single named information item from the lsb_release command output data source of the current OS distribution. Parameters: * ``attribute`` (string): Key of the information item. Returns: * (string): Value of the information item, if the item exists. The empty string, if the item does not exist. See `lsb_release command output`_ for details about these information items. """ return _distro.lsb_release_attr(attribute) def distro_release_attr(attribute): """ Return a single named information item from the distro release file data source of the current OS distribution. Parameters: * ``attribute`` (string): Key of the information item. Returns: * (string): Value of the information item, if the item exists. The empty string, if the item does not exist. See `distro release file`_ for details about these information items. """ return _distro.distro_release_attr(attribute) def uname_attr(attribute): """ Return a single named information item from the distro release file data source of the current OS distribution. Parameters: * ``attribute`` (string): Key of the information item. Returns: * (string): Value of the information item, if the item exists. The empty string, if the item does not exist. """ return _distro.uname_attr(attribute) class cached_property(object): """A version of @property which caches the value. On access, it calls the underlying function and sets the value in `__dict__` so future accesses will not re-call the property. """ def __init__(self, f): self._fname = f.__name__ self._f = f def __get__(self, obj, owner): assert obj is not None, 'call {} on an instance'.format(self._fname) ret = obj.__dict__[self._fname] = self._f(obj) return ret class LinuxDistribution(object): """ Provides information about a OS distribution. This package creates a private module-global instance of this class with default initialization arguments, that is used by the `consolidated accessor functions`_ and `single source accessor functions`_. By using default initialization arguments, that module-global instance returns data about the current OS distribution (i.e. the distro this package runs on). Normally, it is not necessary to create additional instances of this class. However, in situations where control is needed over the exact data sources that are used, instances of this class can be created with a specific distro release file, or a specific os-release file, or without invoking the lsb_release command. """ def __init__(self, include_lsb=True, os_release_file='', distro_release_file='', include_uname=True): """ The initialization method of this class gathers information from the available data sources, and stores that in private instance attributes. Subsequent access to the information items uses these private instance attributes, so that the data sources are read only once. 
Parameters: * ``include_lsb`` (bool): Controls whether the `lsb_release command output`_ is included as a data source. If the lsb_release command is not available in the program execution path, the data source for the lsb_release command will be empty. * ``os_release_file`` (string): The path name of the `os-release file`_ that is to be used as a data source. An empty string (the default) will cause the default path name to be used (see `os-release file`_ for details). If the specified or defaulted os-release file does not exist, the data source for the os-release file will be empty. * ``distro_release_file`` (string): The path name of the `distro release file`_ that is to be used as a data source. An empty string (the default) will cause a default search algorithm to be used (see `distro release file`_ for details). If the specified distro release file does not exist, or if no default distro release file can be found, the data source for the distro release file will be empty. * ``include_uname`` (bool): Controls whether uname command output is included as a data source. If the uname command is not available in the program execution path, the data source for the uname command will be empty. Public instance attributes: * ``os_release_file`` (string): The path name of the `os-release file`_ that is actually used as a data source. The empty string if no os-release file is used as a data source. * ``distro_release_file`` (string): The path name of the `distro release file`_ that is actually used as a data source. The empty string if no distro release file is used as a data source. * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter. This controls whether the lsb information will be loaded. * ``include_uname`` (bool): The result of the ``include_uname`` parameter. This controls whether the uname information will be loaded. Raises: * :py:exc:`IOError`: Some I/O issue with an os-release file or distro release file. * :py:exc:`subprocess.CalledProcessError`: The lsb_release command had some issue (other than not being available in the program execution path). * :py:exc:`UnicodeError`: A data source has unexpected characters or uses an unexpected encoding. """ self.os_release_file = os_release_file or \ os.path.join(_UNIXCONFDIR, _OS_RELEASE_BASENAME) self.distro_release_file = distro_release_file or '' # updated later self.include_lsb = include_lsb self.include_uname = include_uname def __repr__(self): """Return repr of all info """ return \ "LinuxDistribution(" \ "os_release_file={self.os_release_file!r}, " \ "distro_release_file={self.distro_release_file!r}, " \ "include_lsb={self.include_lsb!r}, " \ "include_uname={self.include_uname!r}, " \ "_os_release_info={self._os_release_info!r}, " \ "_lsb_release_info={self._lsb_release_info!r}, " \ "_distro_release_info={self._distro_release_info!r}, " \ "_uname_info={self._uname_info!r})".format( self=self) def linux_distribution(self, full_distribution_name=True): """ Return information about the OS distribution that is compatible with Python's :func:`platform.linux_distribution`, supporting a subset of its parameters. For details, see :func:`distro.linux_distribution`. """ return ( self.name() if full_distribution_name else self.id(), self.version(), self.codename() ) def id(self): """Return the distro ID of the OS distribution, as a string. For details, see :func:`distro.id`.
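A short sketch of the normalization applied below (the table contents here are illustrative, not the real NORMALIZED_* tables):

    .. sourcecode:: python

        >>> # lowercase, replace spaces with underscores, then look up
        >>> normalize('Red Hat', {'red_hat': 'rhel'})  # doctest: +SKIP
        'rhel'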
""" def normalize(distro_id, table): distro_id = distro_id.lower().replace(' ', '_') return table.get(distro_id, distro_id) distro_id = self.os_release_attr('id') if distro_id: return normalize(distro_id, NORMALIZED_OS_ID) distro_id = self.lsb_release_attr('distributor_id') if distro_id: return normalize(distro_id, NORMALIZED_LSB_ID) distro_id = self.distro_release_attr('id') if distro_id: return normalize(distro_id, NORMALIZED_DISTRO_ID) distro_id = self.uname_attr('id') if distro_id: return normalize(distro_id, NORMALIZED_DISTRO_ID) return '' def name(self, pretty=False): """ Return the name of the OS distribution, as a string. For details, see :func:`distro.name`. """ name = self.os_release_attr('name') \ or self.lsb_release_attr('distributor_id') \ or self.distro_release_attr('name') \ or self.uname_attr('name') if pretty: name = self.os_release_attr('pretty_name') \ or self.lsb_release_attr('description') if not name: name = self.distro_release_attr('name') \ or self.uname_attr('name') version = self.version(pretty=True) if version: name = name + ' ' + version return name or '' def version(self, pretty=False, best=False): """ Return the version of the OS distribution, as a string. For details, see :func:`distro.version`. """ versions = [ self.os_release_attr('version_id'), self.lsb_release_attr('release'), self.distro_release_attr('version_id'), self._parse_distro_release_content( self.os_release_attr('pretty_name')).get('version_id', ''), self._parse_distro_release_content( self.lsb_release_attr('description')).get('version_id', ''), self.uname_attr('release') ] version = '' if best: # This algorithm uses the last version in priority order that has # the best precision. If the versions are not in conflict, that # does not matter; otherwise, using the last one instead of the # first one might be considered a surprise. for v in versions: if v.count(".") > version.count(".") or version == '': version = v else: for v in versions: if v != '': version = v break if pretty and version and self.codename(): version = u'{0} ({1})'.format(version, self.codename()) return version def version_parts(self, best=False): """ Return the version of the OS distribution, as a tuple of version numbers. For details, see :func:`distro.version_parts`. """ version_str = self.version(best=best) if version_str: version_regex = re.compile(r'(\d+)\.?(\d+)?\.?(\d+)?') matches = version_regex.match(version_str) if matches: major, minor, build_number = matches.groups() return major, minor or '', build_number or '' return '', '', '' def major_version(self, best=False): """ Return the major version number of the current distribution. For details, see :func:`distro.major_version`. """ return self.version_parts(best)[0] def minor_version(self, best=False): """ Return the minor version number of the current distribution. For details, see :func:`distro.minor_version`. """ return self.version_parts(best)[1] def build_number(self, best=False): """ Return the build number of the current distribution. For details, see :func:`distro.build_number`. """ return self.version_parts(best)[2] def like(self): """ Return the IDs of distributions that are like the OS distribution. For details, see :func:`distro.like`. """ return self.os_release_attr('id_like') or '' def codename(self): """ Return the codename of the OS distribution. For details, see :func:`distro.codename`. 
""" try: # Handle os_release specially since distros might purposefully set # this to empty string to have no codename return self._os_release_info['codename'] except KeyError: return self.lsb_release_attr('codename') \ or self.distro_release_attr('codename') \ or '' def info(self, pretty=False, best=False): """ Return certain machine-readable information about the OS distribution. For details, see :func:`distro.info`. """ return dict( id=self.id(), version=self.version(pretty, best), version_parts=dict( major=self.major_version(best), minor=self.minor_version(best), build_number=self.build_number(best) ), like=self.like(), codename=self.codename(), ) def os_release_info(self): """ Return a dictionary containing key-value pairs for the information items from the os-release file data source of the OS distribution. For details, see :func:`distro.os_release_info`. """ return self._os_release_info def lsb_release_info(self): """ Return a dictionary containing key-value pairs for the information items from the lsb_release command data source of the OS distribution. For details, see :func:`distro.lsb_release_info`. """ return self._lsb_release_info def distro_release_info(self): """ Return a dictionary containing key-value pairs for the information items from the distro release file data source of the OS distribution. For details, see :func:`distro.distro_release_info`. """ return self._distro_release_info def uname_info(self): """ Return a dictionary containing key-value pairs for the information items from the uname command data source of the OS distribution. For details, see :func:`distro.uname_info`. """ return self._uname_info def os_release_attr(self, attribute): """ Return a single named information item from the os-release file data source of the OS distribution. For details, see :func:`distro.os_release_attr`. """ return self._os_release_info.get(attribute, '') def lsb_release_attr(self, attribute): """ Return a single named information item from the lsb_release command output data source of the OS distribution. For details, see :func:`distro.lsb_release_attr`. """ return self._lsb_release_info.get(attribute, '') def distro_release_attr(self, attribute): """ Return a single named information item from the distro release file data source of the OS distribution. For details, see :func:`distro.distro_release_attr`. """ return self._distro_release_info.get(attribute, '') def uname_attr(self, attribute): """ Return a single named information item from the uname command output data source of the OS distribution. For details, see :func:`distro.uname_release_attr`. """ return self._uname_info.get(attribute, '') @cached_property def _os_release_info(self): """ Get the information items from the specified os-release file. Returns: A dictionary containing all information items. """ if os.path.isfile(self.os_release_file): with open(self.os_release_file) as release_file: return self._parse_os_release_content(release_file) return {} @staticmethod def _parse_os_release_content(lines): """ Parse the lines of an os-release file. Parameters: * lines: Iterable through the lines in the os-release file. Each line must be a unicode string or a UTF-8 encoded byte string. Returns: A dictionary containing all information items. """ props = {} lexer = shlex.shlex(lines, posix=True) lexer.whitespace_split = True # The shlex module defines its `wordchars` variable using literals, # making it dependent on the encoding of the Python source file. 
# In Python 2.6 and 2.7, the shlex source file is encoded in # 'iso-8859-1', and the `wordchars` variable is defined as a byte # string. This causes a UnicodeDecodeError to be raised when the # parsed content is a unicode object. The following fix resolves that # (... but it should be fixed in shlex...): if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes): lexer.wordchars = lexer.wordchars.decode('iso-8859-1') tokens = list(lexer) for token in tokens: # At this point, all shell-like parsing has been done (i.e. # comments processed, quotes and backslash escape sequences # processed, multi-line values assembled, trailing newlines # stripped, etc.), so the tokens are now either: # * variable assignments: var=value # * commands or their arguments (not allowed in os-release) if '=' in token: k, v = token.split('=', 1) if isinstance(v, bytes): v = v.decode('utf-8') props[k.lower()] = v else: # Ignore any tokens that are not variable assignments pass if 'version_codename' in props: # os-release added a version_codename field. Use that in # preference to anything else. Note that some distros purposefully # do not have code names. They should be setting # version_codename="" props['codename'] = props['version_codename'] elif 'ubuntu_codename' in props: # Same as above but a non-standard field name used on older Ubuntus props['codename'] = props['ubuntu_codename'] elif 'version' in props: # If there is no version_codename, parse it from the version codename = re.search(r'(\(\D+\))|,(\s+)?\D+', props['version']) if codename: codename = codename.group() codename = codename.strip('()') codename = codename.strip(',') codename = codename.strip() # The codename appears within parentheses. props['codename'] = codename return props @cached_property def _lsb_release_info(self): """ Get the information items from the lsb_release command output. Returns: A dictionary containing all information items. """ if not self.include_lsb: return {} with open(os.devnull, 'w') as devnull: try: cmd = ('lsb_release', '-a') stdout = subprocess.check_output(cmd, stderr=devnull) except OSError: # Command not found return {} content = stdout.decode(sys.getfilesystemencoding()).splitlines() return self._parse_lsb_release_content(content) @staticmethod def _parse_lsb_release_content(lines): """ Parse the output of the lsb_release command. Parameters: * lines: Iterable through the lines of the lsb_release output. Each line must be a unicode string or a UTF-8 encoded byte string. Returns: A dictionary containing all information items. """ props = {} for line in lines: kv = line.strip('\n').split(':', 1) if len(kv) != 2: # Ignore lines without colon. continue k, v = kv props.update({k.replace(' ', '_').lower(): v.strip()}) return props @cached_property def _uname_info(self): with open(os.devnull, 'w') as devnull: try: cmd = ('uname', '-rs') stdout = subprocess.check_output(cmd, stderr=devnull) except OSError: return {} content = stdout.decode(sys.getfilesystemencoding()).splitlines() return self._parse_uname_content(content) @staticmethod def _parse_uname_content(lines): props = {} match = re.search(r'^([^\s]+)\s+([\d\.]+)', lines[0].strip()) if match: name, version = match.groups() # This is to prevent the Linux kernel version from # appearing as the 'best' version on otherwise # identifiable distributions.
if name == 'Linux': return {} props['id'] = name.lower() props['name'] = name props['release'] = version return props @cached_property def _distro_release_info(self): """ Get the information items from the specified distro release file. Returns: A dictionary containing all information items. """ if self.distro_release_file: # If it was specified, we use it and parse what we can, even if # its file name or content does not match the expected pattern. distro_info = self._parse_distro_release_file( self.distro_release_file) basename = os.path.basename(self.distro_release_file) # The file name pattern for user-specified distro release files # is somewhat more tolerant (compared to when searching for the # file), because we want to use what was specified as best as # possible. match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) if 'name' in distro_info \ and 'cloudlinux' in distro_info['name'].lower(): distro_info['id'] = 'cloudlinux' elif match: distro_info['id'] = match.group(1) return distro_info else: try: basenames = os.listdir(_UNIXCONFDIR) # We sort for repeatability in cases where there are multiple # distro specific files; e.g. CentOS, Oracle, Enterprise all # containing `redhat-release` on top of their own. basenames.sort() except OSError: # This may occur when /etc is not readable but we can't be # sure about the *-release files. Check common entries of # /etc for information. If they turn out to not be there the # error is handled in `_parse_distro_release_file()`. basenames = ['SuSE-release', 'arch-release', 'base-release', 'centos-release', 'fedora-release', 'gentoo-release', 'mageia-release', 'mandrake-release', 'mandriva-release', 'mandrivalinux-release', 'manjaro-release', 'oracle-release', 'redhat-release', 'sl-release', 'slackware-version'] for basename in basenames: if basename in _DISTRO_RELEASE_IGNORE_BASENAMES: continue match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) if match: filepath = os.path.join(_UNIXCONFDIR, basename) distro_info = self._parse_distro_release_file(filepath) if 'name' in distro_info: # The name is always present if the pattern matches self.distro_release_file = filepath distro_info['id'] = match.group(1) if 'cloudlinux' in distro_info['name'].lower(): distro_info['id'] = 'cloudlinux' return distro_info return {} def _parse_distro_release_file(self, filepath): """ Parse a distro release file. Parameters: * filepath: Path name of the distro release file. Returns: A dictionary containing all information items. """ try: with open(filepath) as fp: # Only parse the first line. For instance, on SLES there # are multiple lines. We don't want them... return self._parse_distro_release_content(fp.readline()) except (OSError, IOError): # Ignore not being able to read a specific, seemingly version # related file. # See https://github.com/nir0s/distro/issues/162 return {} @staticmethod def _parse_distro_release_content(line): """ Parse a line from a distro release file. Parameters: * line: Line from the distro release file. Must be a unicode string or a UTF-8 encoded byte string. Returns: A dictionary containing all information items. 
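For example (a sketch), the CentOS line quoted at the top of this module parses as:

    .. sourcecode:: python

        >>> LinuxDistribution._parse_distro_release_content(
        ...     'CentOS Linux release 7.1.1503 (Core)')  # doctest: +SKIP
        {'name': 'CentOS Linux', 'version_id': '7.1.1503', 'codename': 'Core'}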
""" if isinstance(line, bytes): line = line.decode('utf-8') matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match( line.strip()[::-1]) distro_info = {} if matches: # regexp ensures non-None distro_info['name'] = matches.group(3)[::-1] if matches.group(2): distro_info['version_id'] = matches.group(2)[::-1] if matches.group(1): distro_info['codename'] = matches.group(1)[::-1] elif line: distro_info['name'] = line.strip() return distro_info _distro = LinuxDistribution() def main(): logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.addHandler(logging.StreamHandler(sys.stdout)) parser = argparse.ArgumentParser(description="OS distro info tool") parser.add_argument( '--json', '-j', help="Output in machine readable format", action="store_true") args = parser.parse_args() if args.json: logger.info(json.dumps(info(), indent=4, sort_keys=True)) else: logger.info('Name: %s', name(pretty=True)) distribution_version = version(pretty=True) logger.info('Version: %s', distribution_version) distribution_codename = codename() logger.info('Codename: %s', distribution_codename) if __name__ == '__main__': main() �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000033�00000000000�011451� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������27 mtime=1604735099.989994 �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/��������������������������������0000755�0001750�0001750�00000000000�00000000000�026205� 5����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/__init__.py
""" HTML parsing library based on the `WHATWG HTML specification <https://whatwg.org/html>`_. The parser is designed to be compatible with existing HTML found in the wild and implements well-defined error recovery that is largely compatible with modern desktop web browsers. Example usage:: from pip._vendor import html5lib with open("my_document.html", "rb") as f: tree = html5lib.parse(f) For convenience, this module re-exports the following names: * :func:`~.html5parser.parse` * :func:`~.html5parser.parseFragment` * :class:`~.html5parser.HTMLParser` * :func:`~.treebuilders.getTreeBuilder` * :func:`~.treewalkers.getTreeWalker` * :func:`~.serializer.serialize` """ from __future__ import absolute_import, division, unicode_literals from .html5parser import HTMLParser, parse, parseFragment from .treebuilders import getTreeBuilder from .treewalkers import getTreeWalker from .serializer import serialize __all__ = ["HTMLParser", "parse", "parseFragment", "getTreeBuilder", "getTreeWalker", "serialize"] # this has to be at the top level, see how setup.py parses this #: Distribution version number.
__version__ = "1.0.1" ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_ihatexml.py��������������������0000644�0001750�0001750�00000040501�00000000000�030531� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import, division, unicode_literals import re import warnings from .constants import DataLossWarning baseChar = """ [#x0041-#x005A] | [#x0061-#x007A] | [#x00C0-#x00D6] | [#x00D8-#x00F6] | [#x00F8-#x00FF] | [#x0100-#x0131] | [#x0134-#x013E] | [#x0141-#x0148] | [#x014A-#x017E] | [#x0180-#x01C3] | [#x01CD-#x01F0] | [#x01F4-#x01F5] | [#x01FA-#x0217] | [#x0250-#x02A8] | [#x02BB-#x02C1] | #x0386 | [#x0388-#x038A] | #x038C | [#x038E-#x03A1] | [#x03A3-#x03CE] | [#x03D0-#x03D6] | #x03DA | #x03DC | #x03DE | #x03E0 | [#x03E2-#x03F3] | [#x0401-#x040C] | [#x040E-#x044F] | [#x0451-#x045C] | [#x045E-#x0481] | [#x0490-#x04C4] | [#x04C7-#x04C8] | [#x04CB-#x04CC] | [#x04D0-#x04EB] | [#x04EE-#x04F5] | [#x04F8-#x04F9] | [#x0531-#x0556] | #x0559 | [#x0561-#x0586] | [#x05D0-#x05EA] | [#x05F0-#x05F2] | [#x0621-#x063A] | [#x0641-#x064A] | [#x0671-#x06B7] | [#x06BA-#x06BE] | [#x06C0-#x06CE] | [#x06D0-#x06D3] | #x06D5 | [#x06E5-#x06E6] | [#x0905-#x0939] | #x093D | [#x0958-#x0961] | [#x0985-#x098C] | [#x098F-#x0990] | [#x0993-#x09A8] | [#x09AA-#x09B0] | #x09B2 | [#x09B6-#x09B9] | [#x09DC-#x09DD] | [#x09DF-#x09E1] | [#x09F0-#x09F1] | [#x0A05-#x0A0A] | [#x0A0F-#x0A10] | [#x0A13-#x0A28] | [#x0A2A-#x0A30] | [#x0A32-#x0A33] | [#x0A35-#x0A36] | [#x0A38-#x0A39] | [#x0A59-#x0A5C] | #x0A5E | [#x0A72-#x0A74] | [#x0A85-#x0A8B] | #x0A8D | [#x0A8F-#x0A91] | [#x0A93-#x0AA8] | [#x0AAA-#x0AB0] | [#x0AB2-#x0AB3] | [#x0AB5-#x0AB9] | #x0ABD | #x0AE0 | [#x0B05-#x0B0C] | [#x0B0F-#x0B10] | [#x0B13-#x0B28] | [#x0B2A-#x0B30] | [#x0B32-#x0B33] | [#x0B36-#x0B39] | #x0B3D | [#x0B5C-#x0B5D] | [#x0B5F-#x0B61] | 
[#x0B85-#x0B8A] | [#x0B8E-#x0B90] | [#x0B92-#x0B95] | [#x0B99-#x0B9A] | #x0B9C | [#x0B9E-#x0B9F] | [#x0BA3-#x0BA4] | [#x0BA8-#x0BAA] | [#x0BAE-#x0BB5] | [#x0BB7-#x0BB9] | [#x0C05-#x0C0C] | [#x0C0E-#x0C10] | [#x0C12-#x0C28] | [#x0C2A-#x0C33] | [#x0C35-#x0C39] | [#x0C60-#x0C61] | [#x0C85-#x0C8C] | [#x0C8E-#x0C90] | [#x0C92-#x0CA8] | [#x0CAA-#x0CB3] | [#x0CB5-#x0CB9] | #x0CDE | [#x0CE0-#x0CE1] | [#x0D05-#x0D0C] | [#x0D0E-#x0D10] | [#x0D12-#x0D28] | [#x0D2A-#x0D39] | [#x0D60-#x0D61] | [#x0E01-#x0E2E] | #x0E30 | [#x0E32-#x0E33] | [#x0E40-#x0E45] | [#x0E81-#x0E82] | #x0E84 | [#x0E87-#x0E88] | #x0E8A | #x0E8D | [#x0E94-#x0E97] | [#x0E99-#x0E9F] | [#x0EA1-#x0EA3] | #x0EA5 | #x0EA7 | [#x0EAA-#x0EAB] | [#x0EAD-#x0EAE] | #x0EB0 | [#x0EB2-#x0EB3] | #x0EBD | [#x0EC0-#x0EC4] | [#x0F40-#x0F47] | [#x0F49-#x0F69] | [#x10A0-#x10C5] | [#x10D0-#x10F6] | #x1100 | [#x1102-#x1103] | [#x1105-#x1107] | #x1109 | [#x110B-#x110C] | [#x110E-#x1112] | #x113C | #x113E | #x1140 | #x114C | #x114E | #x1150 | [#x1154-#x1155] | #x1159 | [#x115F-#x1161] | #x1163 | #x1165 | #x1167 | #x1169 | [#x116D-#x116E] | [#x1172-#x1173] | #x1175 | #x119E | #x11A8 | #x11AB | [#x11AE-#x11AF] | [#x11B7-#x11B8] | #x11BA | [#x11BC-#x11C2] | #x11EB | #x11F0 | #x11F9 | [#x1E00-#x1E9B] | [#x1EA0-#x1EF9] | [#x1F00-#x1F15] | [#x1F18-#x1F1D] | [#x1F20-#x1F45] | [#x1F48-#x1F4D] | [#x1F50-#x1F57] | #x1F59 | #x1F5B | #x1F5D | [#x1F5F-#x1F7D] | [#x1F80-#x1FB4] | [#x1FB6-#x1FBC] | #x1FBE | [#x1FC2-#x1FC4] | [#x1FC6-#x1FCC] | [#x1FD0-#x1FD3] | [#x1FD6-#x1FDB] | [#x1FE0-#x1FEC] | [#x1FF2-#x1FF4] | [#x1FF6-#x1FFC] | #x2126 | [#x212A-#x212B] | #x212E | [#x2180-#x2182] | [#x3041-#x3094] | [#x30A1-#x30FA] | [#x3105-#x312C] | [#xAC00-#xD7A3]""" ideographic = """[#x4E00-#x9FA5] | #x3007 | [#x3021-#x3029]""" combiningCharacter = """ [#x0300-#x0345] | [#x0360-#x0361] | [#x0483-#x0486] | [#x0591-#x05A1] | [#x05A3-#x05B9] | [#x05BB-#x05BD] | #x05BF | [#x05C1-#x05C2] | #x05C4 | [#x064B-#x0652] | #x0670 | [#x06D6-#x06DC] | [#x06DD-#x06DF] | [#x06E0-#x06E4] | [#x06E7-#x06E8] | [#x06EA-#x06ED] | [#x0901-#x0903] | #x093C | [#x093E-#x094C] | #x094D | [#x0951-#x0954] | [#x0962-#x0963] | [#x0981-#x0983] | #x09BC | #x09BE | #x09BF | [#x09C0-#x09C4] | [#x09C7-#x09C8] | [#x09CB-#x09CD] | #x09D7 | [#x09E2-#x09E3] | #x0A02 | #x0A3C | #x0A3E | #x0A3F | [#x0A40-#x0A42] | [#x0A47-#x0A48] | [#x0A4B-#x0A4D] | [#x0A70-#x0A71] | [#x0A81-#x0A83] | #x0ABC | [#x0ABE-#x0AC5] | [#x0AC7-#x0AC9] | [#x0ACB-#x0ACD] | [#x0B01-#x0B03] | #x0B3C | [#x0B3E-#x0B43] | [#x0B47-#x0B48] | [#x0B4B-#x0B4D] | [#x0B56-#x0B57] | [#x0B82-#x0B83] | [#x0BBE-#x0BC2] | [#x0BC6-#x0BC8] | [#x0BCA-#x0BCD] | #x0BD7 | [#x0C01-#x0C03] | [#x0C3E-#x0C44] | [#x0C46-#x0C48] | [#x0C4A-#x0C4D] | [#x0C55-#x0C56] | [#x0C82-#x0C83] | [#x0CBE-#x0CC4] | [#x0CC6-#x0CC8] | [#x0CCA-#x0CCD] | [#x0CD5-#x0CD6] | [#x0D02-#x0D03] | [#x0D3E-#x0D43] | [#x0D46-#x0D48] | [#x0D4A-#x0D4D] | #x0D57 | #x0E31 | [#x0E34-#x0E3A] | [#x0E47-#x0E4E] | #x0EB1 | [#x0EB4-#x0EB9] | [#x0EBB-#x0EBC] | [#x0EC8-#x0ECD] | [#x0F18-#x0F19] | #x0F35 | #x0F37 | #x0F39 | #x0F3E | #x0F3F | [#x0F71-#x0F84] | [#x0F86-#x0F8B] | [#x0F90-#x0F95] | #x0F97 | [#x0F99-#x0FAD] | [#x0FB1-#x0FB7] | #x0FB9 | [#x20D0-#x20DC] | #x20E1 | [#x302A-#x302F] | #x3099 | #x309A""" digit = """ [#x0030-#x0039] | [#x0660-#x0669] | [#x06F0-#x06F9] | [#x0966-#x096F] | [#x09E6-#x09EF] | [#x0A66-#x0A6F] | [#x0AE6-#x0AEF] | [#x0B66-#x0B6F] | [#x0BE7-#x0BEF] | [#x0C66-#x0C6F] | [#x0CE6-#x0CEF] | [#x0D66-#x0D6F] | [#x0E50-#x0E59] | [#x0ED0-#x0ED9] | [#x0F20-#x0F29]""" extender = """ #x00B7 | #x02D0 
| #x02D1 | #x0387 | #x0640 | #x0E46 | #x0EC6 | #x3005 | [#x3031-#x3035] | [#x309D-#x309E] | [#x30FC-#x30FE]""" letter = " | ".join([baseChar, ideographic]) # Without the ':' (the XML Name characters, minus the colon) name = " | ".join([letter, digit, ".", "-", "_", combiningCharacter, extender]) nameFirst = " | ".join([letter, "_"]) reChar = re.compile(r"#x([\d|A-F]{4,4})") reCharRange = re.compile(r"\[#x([\d|A-F]{4,4})-#x([\d|A-F]{4,4})\]") def charStringToList(chars): charRanges = [item.strip() for item in chars.split(" | ")] rv = [] for item in charRanges: foundMatch = False for regexp in (reChar, reCharRange): match = regexp.match(item) if match is not None: rv.append([hexToInt(item) for item in match.groups()]) if len(rv[-1]) == 1: rv[-1] = rv[-1] * 2 foundMatch = True break if not foundMatch: assert len(item) == 1 rv.append([ord(item)] * 2) rv = normaliseCharList(rv) return rv def normaliseCharList(charList): charList = sorted(charList) for item in charList: assert item[1] >= item[0] rv = [] i = 0 while i < len(charList): j = 1 rv.append(charList[i]) while i + j < len(charList) and charList[i + j][0] <= rv[-1][1] + 1: rv[-1][1] = charList[i + j][1] j += 1 i += j return rv # We don't really support characters above the BMP :( max_unicode = int("FFFF", 16) def missingRanges(charList): rv = [] if charList[0][0] != 0: rv.append([0, charList[0][0] - 1]) for i, item in enumerate(charList[:-1]): rv.append([item[1] + 1, charList[i + 1][0] - 1]) if charList[-1][1] != max_unicode: rv.append([charList[-1][1] + 1, max_unicode]) return rv def listToRegexpStr(charList): rv = [] for item in charList: if item[0] == item[1]: rv.append(escapeRegexp(chr(item[0]))) else: rv.append(escapeRegexp(chr(item[0])) + "-" + escapeRegexp(chr(item[1]))) return "[%s]" % "".join(rv) def hexToInt(hex_str): return int(hex_str, 16) def escapeRegexp(string): specialCharacters = (".", "^", "$", "*", "+", "?", "{", "}", "[", "]", "|", "(", ")", "-") for char in specialCharacters: string = string.replace(char, "\\" + char) return string # output from the above nonXmlNameBMPRegexp =
re.compile('[\x00-,/:-@\\[-\\^`\\{-\xb6\xb8-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u02cf\u02d2-\u02ff\u0346-\u035f\u0362-\u0385\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482\u0487-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u0590\u05a2\u05ba\u05be\u05c0\u05c3\u05c5-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u063f\u0653-\u065f\u066a-\u066f\u06b8-\u06b9\u06bf\u06cf\u06d4\u06e9\u06ee-\u06ef\u06fa-\u0900\u0904\u093a-\u093b\u094e-\u0950\u0955-\u0957\u0964-\u0965\u0970-\u0980\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09bd\u09c5-\u09c6\u09c9-\u09ca\u09ce-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09f2-\u0a01\u0a03-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a58\u0a5d\u0a5f-\u0a65\u0a75-\u0a80\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0adf\u0ae1-\u0ae5\u0af0-\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3b\u0b44-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b62-\u0b65\u0b70-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bd6\u0bd8-\u0be6\u0bf0-\u0c00\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c3d\u0c45\u0c49\u0c4e-\u0c54\u0c57-\u0c5f\u0c62-\u0c65\u0c70-\u0c81\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbd\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce2-\u0ce5\u0cf0-\u0d01\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d3d\u0d44-\u0d45\u0d49\u0d4e-\u0d56\u0d58-\u0d5f\u0d62-\u0d65\u0d70-\u0e00\u0e2f\u0e3b-\u0e3f\u0e4f\u0e5a-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0f17\u0f1a-\u0f1f\u0f2a-\u0f34\u0f36\u0f38\u0f3a-\u0f3d\u0f48\u0f6a-\u0f70\u0f85\u0f8c-\u0f8f\u0f96\u0f98\u0fae-\u0fb0\u0fb8\u0fba-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u20cf\u20dd-\u20e0\u20e2-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3004\u3006\u3008-\u3020\u3030\u3036-\u3040\u3095-\u3098\u309b-\u309c\u309f-\u30a0\u30fb\u30ff-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]') # noqa nonXmlNameFirstBMPRegexp = 
re.compile('[\x00-@\\[-\\^`\\{-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u0385\u0387\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u0640\u064b-\u0670\u06b8-\u06b9\u06bf\u06cf\u06d4\u06d6-\u06e4\u06e7-\u0904\u093a-\u093c\u093e-\u0957\u0962-\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09db\u09de\u09e2-\u09ef\u09f2-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a58\u0a5d\u0a5f-\u0a71\u0a75-\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abc\u0abe-\u0adf\u0ae1-\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3c\u0b3e-\u0b5b\u0b5e\u0b62-\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c5f\u0c62-\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cdd\u0cdf\u0ce2-\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d5f\u0d62-\u0e00\u0e2f\u0e31\u0e34-\u0e3f\u0e46-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eb1\u0eb4-\u0ebc\u0ebe-\u0ebf\u0ec5-\u0f3f\u0f48\u0f6a-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3006\u3008-\u3020\u302a-\u3040\u3095-\u30a0\u30fb-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]') # noqa # Simpler things nonPubidCharRegexp = re.compile("[^\x20\x0D\x0Aa-zA-Z0-9\\-'()+,./:=?;!*#@$_%]") class InfosetFilter(object): replacementRegexp = re.compile(r"U[\dA-F]{5,5}") def __init__(self, dropXmlnsLocalName=False, dropXmlnsAttrNs=False, preventDoubleDashComments=False, preventDashAtCommentEnd=False, replaceFormFeedCharacters=True, preventSingleQuotePubid=False): self.dropXmlnsLocalName = dropXmlnsLocalName self.dropXmlnsAttrNs = dropXmlnsAttrNs self.preventDoubleDashComments = preventDoubleDashComments self.preventDashAtCommentEnd = preventDashAtCommentEnd self.replaceFormFeedCharacters = replaceFormFeedCharacters self.preventSingleQuotePubid = preventSingleQuotePubid self.replaceCache = {} def coerceAttribute(self, name, namespace=None): if self.dropXmlnsLocalName and name.startswith("xmlns:"): warnings.warn("Attributes cannot begin with xmlns", DataLossWarning) return None elif (self.dropXmlnsAttrNs and namespace == "http://www.w3.org/2000/xmlns/"): warnings.warn("Attributes cannot be in the xml namespace", DataLossWarning) return None else: return self.toXmlName(name) def coerceElement(self, name): return self.toXmlName(name) def coerceComment(self, data): if self.preventDoubleDashComments: while "--" in data: warnings.warn("Comments cannot contain adjacent dashes", DataLossWarning) data = data.replace("--", "- -") if data.endswith("-"): warnings.warn("Comments cannot end in a dash", DataLossWarning) data += " " return data def coerceCharacters(self, data): if 
self.replaceFormFeedCharacters: for _ in range(data.count("\x0C")): warnings.warn("Text cannot contain U+000C", DataLossWarning) data = data.replace("\x0C", " ") # Other non-xml characters return data def coercePubid(self, data): dataOutput = data for char in nonPubidCharRegexp.findall(data): warnings.warn("Coercing non-XML pubid", DataLossWarning) replacement = self.getReplacementCharacter(char) dataOutput = dataOutput.replace(char, replacement) if self.preventSingleQuotePubid and dataOutput.find("'") >= 0: warnings.warn("Pubid cannot contain single quote", DataLossWarning) dataOutput = dataOutput.replace("'", self.getReplacementCharacter("'")) return dataOutput def toXmlName(self, name): nameFirst = name[0] nameRest = name[1:] m = nonXmlNameFirstBMPRegexp.match(nameFirst) if m: warnings.warn("Coercing non-XML name", DataLossWarning) nameFirstOutput = self.getReplacementCharacter(nameFirst) else: nameFirstOutput = nameFirst nameRestOutput = nameRest replaceChars = set(nonXmlNameBMPRegexp.findall(nameRest)) for char in replaceChars: warnings.warn("Coercing non-XML name", DataLossWarning) replacement = self.getReplacementCharacter(char) nameRestOutput = nameRestOutput.replace(char, replacement) return nameFirstOutput + nameRestOutput def getReplacementCharacter(self, char): if char in self.replaceCache: replacement = self.replaceCache[char] else: replacement = self.escapeChar(char) return replacement def fromXmlName(self, name): for item in set(self.replacementRegexp.findall(name)): name = name.replace(item, self.unescapeChar(item)) return name def escapeChar(self, char): replacement = "U%05X" % ord(char) self.replaceCache[char] = replacement return replacement def unescapeChar(self, charcode): return chr(int(charcode[1:], 16))
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_inputstream.py
from __future__ import absolute_import, division, unicode_literals
from pip._vendor.six import text_type, binary_type from pip._vendor.six.moves import http_client, urllib import codecs import re from pip._vendor import webencodings from .constants import EOF, spaceCharacters, asciiLetters, asciiUppercase from .constants import _ReparseException from . import _utils from io import StringIO try: from io import BytesIO except ImportError: BytesIO = StringIO # Non-unicode versions of constants for use in the pre-parser spaceCharactersBytes = frozenset([item.encode("ascii") for item in spaceCharacters]) asciiLettersBytes = frozenset([item.encode("ascii") for item in asciiLetters]) asciiUppercaseBytes = frozenset([item.encode("ascii") for item in asciiUppercase]) spacesAngleBrackets = spaceCharactersBytes | frozenset([b">", b"<"]) invalid_unicode_no_surrogate = "[\u0001-\u0008\u000B\u000E-\u001F\u007F-\u009F\uFDD0-\uFDEF\uFFFE\uFFFF\U0001FFFE\U0001FFFF\U0002FFFE\U0002FFFF\U0003FFFE\U0003FFFF\U0004FFFE\U0004FFFF\U0005FFFE\U0005FFFF\U0006FFFE\U0006FFFF\U0007FFFE\U0007FFFF\U0008FFFE\U0008FFFF\U0009FFFE\U0009FFFF\U000AFFFE\U000AFFFF\U000BFFFE\U000BFFFF\U000CFFFE\U000CFFFF\U000DFFFE\U000DFFFF\U000EFFFE\U000EFFFF\U000FFFFE\U000FFFFF\U0010FFFE\U0010FFFF]" # noqa if _utils.supports_lone_surrogates: # Use one extra step of indirection and create surrogates with # eval. Not using this indirection would introduce an illegal # unicode literal on platforms not supporting such lone # surrogates. assert invalid_unicode_no_surrogate[-1] == "]" and invalid_unicode_no_surrogate.count("]") == 1 invalid_unicode_re = re.compile(invalid_unicode_no_surrogate[:-1] + eval('"\\uD800-\\uDFFF"') + # pylint:disable=eval-used "]") else: invalid_unicode_re = re.compile(invalid_unicode_no_surrogate) non_bmp_invalid_codepoints = set([0x1FFFE, 0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE, 0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE, 0x5FFFF, 0x6FFFE, 0x6FFFF, 0x7FFFE, 0x7FFFF, 0x8FFFE, 0x8FFFF, 0x9FFFE, 0x9FFFF, 0xAFFFE, 0xAFFFF, 0xBFFFE, 0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE, 0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE, 0xFFFFF, 0x10FFFE, 0x10FFFF]) ascii_punctuation_re = re.compile("[\u0009-\u000D\u0020-\u002F\u003A-\u0040\u005C\u005B-\u0060\u007B-\u007E]") # Cache for charsUntil() charsUntilRegEx = {} class BufferedStream(object): """Buffering for streams that do not have buffering of their own The buffer is implemented as a list of chunks on the assumption that joining many strings will be slow since it is O(n**2) """ def __init__(self, stream): self.stream = stream self.buffer = [] self.position = [-1, 0] # chunk number, offset def tell(self): pos = 0 for chunk in self.buffer[:self.position[0]]: pos += len(chunk) pos += self.position[1] return pos def seek(self, pos): assert pos <= self._bufferedBytes() offset = pos i = 0 while len(self.buffer[i]) < offset: offset -= len(self.buffer[i]) i += 1 self.position = [i, offset] def read(self, bytes): if not self.buffer: return self._readStream(bytes) elif (self.position[0] == len(self.buffer) and self.position[1] == len(self.buffer[-1])): return self._readStream(bytes) else: return self._readFromBuffer(bytes) def _bufferedBytes(self): return sum([len(item) for item in self.buffer]) def _readStream(self, bytes): data = self.stream.read(bytes) self.buffer.append(data) self.position[0] += 1 self.position[1] = len(data) return data def _readFromBuffer(self, bytes): remainingBytes = bytes rv = [] bufferIndex = self.position[0] bufferOffset = self.position[1] while bufferIndex < len(self.buffer) and remainingBytes != 0: assert remainingBytes > 0 bufferedData = 
self.buffer[bufferIndex] if remainingBytes <= len(bufferedData) - bufferOffset: bytesToRead = remainingBytes self.position = [bufferIndex, bufferOffset + bytesToRead] else: bytesToRead = len(bufferedData) - bufferOffset self.position = [bufferIndex, len(bufferedData)] bufferIndex += 1 rv.append(bufferedData[bufferOffset:bufferOffset + bytesToRead]) remainingBytes -= bytesToRead bufferOffset = 0 if remainingBytes: rv.append(self._readStream(remainingBytes)) return b"".join(rv) def HTMLInputStream(source, **kwargs): # Work around Python bug #20007: read(0) closes the connection. # http://bugs.python.org/issue20007 if (isinstance(source, http_client.HTTPResponse) or # Also check for addinfourl wrapping HTTPResponse (isinstance(source, urllib.response.addbase) and isinstance(source.fp, http_client.HTTPResponse))): isUnicode = False elif hasattr(source, "read"): isUnicode = isinstance(source.read(0), text_type) else: isUnicode = isinstance(source, text_type) if isUnicode: encodings = [x for x in kwargs if x.endswith("_encoding")] if encodings: raise TypeError("Cannot set an encoding with a unicode input, set %r" % encodings) return HTMLUnicodeInputStream(source, **kwargs) else: return HTMLBinaryInputStream(source, **kwargs) class HTMLUnicodeInputStream(object): """Provides a unicode stream of characters to the HTMLTokenizer. This class takes care of character encoding and removing or replacing incorrect byte-sequences and also provides column and line tracking. """ _defaultChunkSize = 10240 def __init__(self, source): """Initialises the HTMLInputStream. HTMLInputStream(source, [encoding]) -> Normalized stream from source for use by html5lib. source can be either a file-object, local filename or a string. The optional encoding parameter must be a string that indicates the encoding. If specified, that encoding will be used, regardless of any BOM or later declaration (such as in a meta element) """ if not _utils.supports_lone_surrogates: # Such platforms will have already checked for such # surrogate errors, so no need to do this checking. self.reportCharacterErrors = None elif len("\U0010FFFF") == 1: self.reportCharacterErrors = self.characterErrorsUCS4 else: self.reportCharacterErrors = self.characterErrorsUCS2 # List of where new lines occur self.newLines = [0] self.charEncoding = (lookupEncoding("utf-8"), "certain") self.dataStream = self.openStream(source) self.reset() def reset(self): self.chunk = "" self.chunkSize = 0 self.chunkOffset = 0 self.errors = [] # number of (complete) lines in previous chunks self.prevNumLines = 0 # number of columns in the last line of the previous chunk self.prevNumCols = 0 # Deal with CR LF and surrogates split over chunk boundaries self._bufferedCharacter = None def openStream(self, source): """Produces a file object from source. source can be either a file object, local filename or a string. """ # Already a file object if hasattr(source, 'read'): stream = source else: stream = StringIO(source) return stream def _position(self, offset): chunk = self.chunk nLines = chunk.count('\n', 0, offset) positionLine = self.prevNumLines + nLines lastLinePos = chunk.rfind('\n', 0, offset) if lastLinePos == -1: positionColumn = self.prevNumCols + offset else: positionColumn = offset - (lastLinePos + 1) return (positionLine, positionColumn) def position(self): """Returns (line, col) of the current position in the stream.""" line, col = self._position(self.chunkOffset) return (line + 1, col) def char(self): """ Read one character from the stream or queue if available. 
Return EOF when EOF is reached. """ # Read a new chunk from the input stream if necessary if self.chunkOffset >= self.chunkSize: if not self.readChunk(): return EOF chunkOffset = self.chunkOffset char = self.chunk[chunkOffset] self.chunkOffset = chunkOffset + 1 return char def readChunk(self, chunkSize=None): if chunkSize is None: chunkSize = self._defaultChunkSize self.prevNumLines, self.prevNumCols = self._position(self.chunkSize) self.chunk = "" self.chunkSize = 0 self.chunkOffset = 0 data = self.dataStream.read(chunkSize) # Deal with CR LF and surrogates broken across chunks if self._bufferedCharacter: data = self._bufferedCharacter + data self._bufferedCharacter = None elif not data: # We have no more data, bye-bye stream return False if len(data) > 1: lastv = ord(data[-1]) if lastv == 0x0D or 0xD800 <= lastv <= 0xDBFF: self._bufferedCharacter = data[-1] data = data[:-1] if self.reportCharacterErrors: self.reportCharacterErrors(data) # Replace invalid characters data = data.replace("\r\n", "\n") data = data.replace("\r", "\n") self.chunk = data self.chunkSize = len(data) return True def characterErrorsUCS4(self, data): for _ in range(len(invalid_unicode_re.findall(data))): self.errors.append("invalid-codepoint") def characterErrorsUCS2(self, data): # Someone picked the wrong compile option # You lose skip = False for match in invalid_unicode_re.finditer(data): if skip: continue codepoint = ord(match.group()) pos = match.start() # Pretty sure there should be endianness issues here if _utils.isSurrogatePair(data[pos:pos + 2]): # We have a surrogate pair! char_val = _utils.surrogatePairToCodepoint(data[pos:pos + 2]) if char_val in non_bmp_invalid_codepoints: self.errors.append("invalid-codepoint") skip = True elif (codepoint >= 0xD800 and codepoint <= 0xDFFF and pos == len(data) - 1): self.errors.append("invalid-codepoint") else: skip = False self.errors.append("invalid-codepoint") def charsUntil(self, characters, opposite=False): """ Returns a string of characters from the stream up to but not including any character in 'characters' or EOF. 'characters' must be a container that supports the 'in' method and iteration over its characters. """ # Use a cache of regexps to find the required characters try: chars = charsUntilRegEx[(characters, opposite)] except KeyError: if __debug__: for c in characters: assert(ord(c) < 128) regex = "".join(["\\x%02x" % ord(c) for c in characters]) if not opposite: regex = "^%s" % regex chars = charsUntilRegEx[(characters, opposite)] = re.compile("[%s]+" % regex) rv = [] while True: # Find the longest matching prefix m = chars.match(self.chunk, self.chunkOffset) if m is None: # If nothing matched, and it wasn't because we ran out of chunk, # then stop if self.chunkOffset != self.chunkSize: break else: end = m.end() # If not the whole chunk matched, return everything # up to the part that didn't match if end != self.chunkSize: rv.append(self.chunk[self.chunkOffset:end]) self.chunkOffset = end break # If the whole remainder of the chunk matched, # use it all and read the next chunk rv.append(self.chunk[self.chunkOffset:]) if not self.readChunk(): # Reached EOF break r = "".join(rv) return r def unget(self, char): # Only one character is allowed to be ungotten at once - it must # be consumed again before any further call to unget if char is not None: if self.chunkOffset == 0: # unget is called quite rarely, so it's a good idea to do # more work here if it saves a bit of work in the frequently # called char and charsUntil. 
# So, just prepend the ungotten character onto the current # chunk: self.chunk = char + self.chunk self.chunkSize += 1 else: self.chunkOffset -= 1 assert self.chunk[self.chunkOffset] == char class HTMLBinaryInputStream(HTMLUnicodeInputStream): """Provides a unicode stream of characters to the HTMLTokenizer. This class takes care of character encoding and removing or replacing incorrect byte-sequences and also provides column and line tracking. """ def __init__(self, source, override_encoding=None, transport_encoding=None, same_origin_parent_encoding=None, likely_encoding=None, default_encoding="windows-1252", useChardet=True): """Initialises the HTMLInputStream. HTMLInputStream(source, [encoding]) -> Normalized stream from source for use by html5lib. source can be either a file-object, local filename or a string. The optional encoding parameter must be a string that indicates the encoding. If specified, that encoding will be used, regardless of any BOM or later declaration (such as in a meta element) """ # Raw Stream - for unicode objects this will encode to utf-8 and set # self.charEncoding as appropriate self.rawStream = self.openStream(source) HTMLUnicodeInputStream.__init__(self, self.rawStream) # Encoding Information # Number of bytes to use when looking for a meta element with # encoding information self.numBytesMeta = 1024 # Number of bytes to use when detecting the encoding using chardet self.numBytesChardet = 100 # Things from args self.override_encoding = override_encoding self.transport_encoding = transport_encoding self.same_origin_parent_encoding = same_origin_parent_encoding self.likely_encoding = likely_encoding self.default_encoding = default_encoding # Determine encoding self.charEncoding = self.determineEncoding(useChardet) assert self.charEncoding[0] is not None # Call superclass self.reset() def reset(self): self.dataStream = self.charEncoding[0].codec_info.streamreader(self.rawStream, 'replace') HTMLUnicodeInputStream.reset(self) def openStream(self, source): """Produces a file object from source. source can be either a file object, local filename or a string.
""" # Already a file object if hasattr(source, 'read'): stream = source else: stream = BytesIO(source) try: stream.seek(stream.tell()) except: # pylint:disable=bare-except stream = BufferedStream(stream) return stream def determineEncoding(self, chardet=True): # BOMs take precedence over everything # This will also read past the BOM if present charEncoding = self.detectBOM(), "certain" if charEncoding[0] is not None: return charEncoding # If we've been overriden, we've been overriden charEncoding = lookupEncoding(self.override_encoding), "certain" if charEncoding[0] is not None: return charEncoding # Now check the transport layer charEncoding = lookupEncoding(self.transport_encoding), "certain" if charEncoding[0] is not None: return charEncoding # Look for meta elements with encoding information charEncoding = self.detectEncodingMeta(), "tentative" if charEncoding[0] is not None: return charEncoding # Parent document encoding charEncoding = lookupEncoding(self.same_origin_parent_encoding), "tentative" if charEncoding[0] is not None and not charEncoding[0].name.startswith("utf-16"): return charEncoding # "likely" encoding charEncoding = lookupEncoding(self.likely_encoding), "tentative" if charEncoding[0] is not None: return charEncoding # Guess with chardet, if available if chardet: try: from pip._vendor.chardet.universaldetector import UniversalDetector except ImportError: pass else: buffers = [] detector = UniversalDetector() while not detector.done: buffer = self.rawStream.read(self.numBytesChardet) assert isinstance(buffer, bytes) if not buffer: break buffers.append(buffer) detector.feed(buffer) detector.close() encoding = lookupEncoding(detector.result['encoding']) self.rawStream.seek(0) if encoding is not None: return encoding, "tentative" # Try the default encoding charEncoding = lookupEncoding(self.default_encoding), "tentative" if charEncoding[0] is not None: return charEncoding # Fallback to html5lib's default if even that hasn't worked return lookupEncoding("windows-1252"), "tentative" def changeEncoding(self, newEncoding): assert self.charEncoding[1] != "certain" newEncoding = lookupEncoding(newEncoding) if newEncoding is None: return if newEncoding.name in ("utf-16be", "utf-16le"): newEncoding = lookupEncoding("utf-8") assert newEncoding is not None elif newEncoding == self.charEncoding[0]: self.charEncoding = (self.charEncoding[0], "certain") else: self.rawStream.seek(0) self.charEncoding = (newEncoding, "certain") self.reset() raise _ReparseException("Encoding changed from %s to %s" % (self.charEncoding[0], newEncoding)) def detectBOM(self): """Attempts to detect at BOM at the start of the stream. 
    def detectBOM(self):
        """Attempts to detect a BOM at the start of the stream.

        If an encoding can be determined from the BOM return the name of the
        encoding otherwise return None
        """
        bomDict = {
            codecs.BOM_UTF8: 'utf-8',
            codecs.BOM_UTF16_LE: 'utf-16le', codecs.BOM_UTF16_BE: 'utf-16be',
            codecs.BOM_UTF32_LE: 'utf-32le', codecs.BOM_UTF32_BE: 'utf-32be'
        }

        # Go to beginning of file and read in 4 bytes
        string = self.rawStream.read(4)
        assert isinstance(string, bytes)

        # Try detecting the BOM using bytes from the string
        encoding = bomDict.get(string[:3])  # UTF-8
        seek = 3
        if not encoding:
            # Need to detect UTF-32 before UTF-16
            encoding = bomDict.get(string)  # UTF-32
            seek = 4
            if not encoding:
                encoding = bomDict.get(string[:2])  # UTF-16
                seek = 2

        # Set the read position past the BOM if one was found, otherwise
        # set it to the start of the stream
        if encoding:
            self.rawStream.seek(seek)
            return lookupEncoding(encoding)
        else:
            self.rawStream.seek(0)
            return None

    def detectEncodingMeta(self):
        """Report the encoding declared by the meta element
        """
        buffer = self.rawStream.read(self.numBytesMeta)
        assert isinstance(buffer, bytes)
        parser = EncodingParser(buffer)
        self.rawStream.seek(0)
        encoding = parser.getEncoding()

        if encoding is not None and encoding.name in ("utf-16be", "utf-16le"):
            encoding = lookupEncoding("utf-8")

        return encoding


class EncodingBytes(bytes):
    """String-like object with an associated position and various extra methods

    If the position is ever greater than the string length then an exception is
    raised
    """
    def __new__(self, value):
        assert isinstance(value, bytes)
        return bytes.__new__(self, value.lower())

    def __init__(self, value):
        # pylint:disable=unused-argument
        self._position = -1

    def __iter__(self):
        return self

    def __next__(self):
        p = self._position = self._position + 1
        if p >= len(self):
            raise StopIteration
        elif p < 0:
            raise TypeError
        return self[p:p + 1]

    def next(self):
        # Py2 compat
        return self.__next__()

    def previous(self):
        p = self._position
        if p >= len(self):
            raise StopIteration
        elif p < 0:
            raise TypeError
        self._position = p = p - 1
        return self[p:p + 1]

    def setPosition(self, position):
        if self._position >= len(self):
            raise StopIteration
        self._position = position

    def getPosition(self):
        if self._position >= len(self):
            raise StopIteration
        if self._position >= 0:
            return self._position
        else:
            return None

    position = property(getPosition, setPosition)

    def getCurrentByte(self):
        return self[self.position:self.position + 1]

    currentByte = property(getCurrentByte)

    def skip(self, chars=spaceCharactersBytes):
        """Skip past a list of characters"""
        p = self.position  # use property for the error-checking
        while p < len(self):
            c = self[p:p + 1]
            if c not in chars:
                self._position = p
                return c
            p += 1
        self._position = p
        return None

    def skipUntil(self, chars):
        p = self.position
        while p < len(self):
            c = self[p:p + 1]
            if c in chars:
                self._position = p
                return c
            p += 1
        self._position = p
        return None

    def matchBytes(self, bytes):
        """Look for a sequence of bytes at the start of a string.

        If the bytes are found return True and advance the position to the
        byte after the match.  Otherwise return False and leave the position
        alone.
        """
        p = self.position
        data = self[p:p + len(bytes)]
        rv = data.startswith(bytes)
        if rv:
            self.position += len(bytes)
        return rv
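    # ------------------------------------------------------------------
    # Editor's sketch (illustrative, not part of the vendored module):
    # EncodingBytes lower-cases its input in __new__ and is consumed by
    # advancing an internal position, so byte-wise matching ends up being
    # case-insensitive.
    #
    #   >>> eb = EncodingBytes(b"META  charset")
    #   >>> next(eb)                 # step onto the first byte
    #   b'm'
    #   >>> eb.matchBytes(b"meta")   # matches: the value was lower-cased
    #   True
    #   >>> eb.skip()                # skip whitespace, return the next byte
    #   b'c'
    # ------------------------------------------------------------------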
    def jumpTo(self, bytes):
        """Look for the next sequence of bytes matching a given sequence.

        If a match is found advance the position to the last byte of the match
        """
        newPosition = self[self.position:].find(bytes)
        if newPosition > -1:
            # XXX: This is ugly, but I can't see a nicer way to fix this.
            if self._position == -1:
                self._position = 0
            self._position += (newPosition + len(bytes) - 1)
            return True
        else:
            raise StopIteration


class EncodingParser(object):
    """Mini parser for detecting character encoding from meta elements"""

    def __init__(self, data):
        """string - the data to work on for encoding detection"""
        self.data = EncodingBytes(data)
        self.encoding = None

    def getEncoding(self):
        methodDispatch = (
            (b"<!--", self.handleComment),
            (b"<meta", self.handleMeta),
            (b"</", self.handlePossibleEndTag),
            (b"<!", self.handleOther),
            (b"<?", self.handleOther),
            (b"<", self.handlePossibleStartTag))
        for _ in self.data:
            keepParsing = True
            for key, method in methodDispatch:
                if self.data.matchBytes(key):
                    try:
                        keepParsing = method()
                        break
                    except StopIteration:
                        keepParsing = False
                        break
            if not keepParsing:
                break

        return self.encoding

    def handleComment(self):
        """Skip over comments"""
        return self.data.jumpTo(b"-->")

    def handleMeta(self):
        if self.data.currentByte not in spaceCharactersBytes:
            # We have <meta not followed by a space, so just keep going
            return True
        # We have a valid meta element we want to search for attributes
        hasPragma = False
        pendingEncoding = None
        while True:
            # Try to find the next attribute after the current position
            attr = self.getAttribute()
            if attr is None:
                return True
            else:
                if attr[0] == b"http-equiv":
                    hasPragma = attr[1] == b"content-type"
                    if hasPragma and pendingEncoding is not None:
                        self.encoding = pendingEncoding
                        return False
                elif attr[0] == b"charset":
                    tentativeEncoding = attr[1]
                    codec = lookupEncoding(tentativeEncoding)
                    if codec is not None:
                        self.encoding = codec
                        return False
                elif attr[0] == b"content":
                    contentParser = ContentAttrParser(EncodingBytes(attr[1]))
                    tentativeEncoding = contentParser.parse()
                    if tentativeEncoding is not None:
                        codec = lookupEncoding(tentativeEncoding)
                        if codec is not None:
                            if hasPragma:
                                self.encoding = codec
                                return False
                            else:
                                pendingEncoding = codec

    def handlePossibleStartTag(self):
        return self.handlePossibleTag(False)

    def handlePossibleEndTag(self):
        next(self.data)
        return self.handlePossibleTag(True)

    def handlePossibleTag(self, endTag):
        data = self.data
        if data.currentByte not in asciiLettersBytes:
            # If the next byte is not an ascii letter either ignore this
            # fragment (possible start tag case) or treat it according to
            # handleOther
            if endTag:
                data.previous()
                self.handleOther()
            return True

        c = data.skipUntil(spacesAngleBrackets)
        if c == b"<":
            # return to the first step in the overall "two step" algorithm
            # reprocessing the < byte
            data.previous()
        else:
            # Read all attributes
            attr = self.getAttribute()
            while attr is not None:
                attr = self.getAttribute()
        return True

    def handleOther(self):
        return self.data.jumpTo(b">")
    def getAttribute(self):
        """Return a name,value pair for the next attribute in the stream,
        if one is found, or None"""
        data = self.data
        # Step 1 (skip chars)
        c = data.skip(spaceCharactersBytes | frozenset([b"/"]))
        assert c is None or len(c) == 1
        # Step 2
        if c in (b">", None):
            return None
        # Step 3
        attrName = []
        attrValue = []
        # Step 4 attribute name
        while True:
            if c == b"=" and attrName:
                break
            elif c in spaceCharactersBytes:
                # Step 6!
                c = data.skip()
                break
            elif c in (b"/", b">"):
                return b"".join(attrName), b""
            elif c in asciiUppercaseBytes:
                attrName.append(c.lower())
            elif c is None:
                return None
            else:
                attrName.append(c)
            # Step 5
            c = next(data)
        # Step 7
        if c != b"=":
            data.previous()
            return b"".join(attrName), b""
        # Step 8
        next(data)
        # Step 9
        c = data.skip()
        # Step 10
        if c in (b"'", b'"'):
            # 10.1
            quoteChar = c
            while True:
                # 10.2
                c = next(data)
                # 10.3
                if c == quoteChar:
                    next(data)
                    return b"".join(attrName), b"".join(attrValue)
                # 10.4
                elif c in asciiUppercaseBytes:
                    attrValue.append(c.lower())
                # 10.5
                else:
                    attrValue.append(c)
        elif c == b">":
            return b"".join(attrName), b""
        elif c in asciiUppercaseBytes:
            attrValue.append(c.lower())
        elif c is None:
            return None
        else:
            attrValue.append(c)
        # Step 11
        while True:
            c = next(data)
            if c in spacesAngleBrackets:
                return b"".join(attrName), b"".join(attrValue)
            elif c in asciiUppercaseBytes:
                attrValue.append(c.lower())
            elif c is None:
                return None
            else:
                attrValue.append(c)


class ContentAttrParser(object):
    def __init__(self, data):
        assert isinstance(data, bytes)
        self.data = data

    def parse(self):
        try:
            # Check if the attr name is charset
            # otherwise return
            self.data.jumpTo(b"charset")
            self.data.position += 1
            self.data.skip()
            if not self.data.currentByte == b"=":
                # If there is no = sign keep looking for attrs
                return None
            self.data.position += 1
            self.data.skip()
            # Look for an encoding between matching quote marks
            if self.data.currentByte in (b'"', b"'"):
                quoteMark = self.data.currentByte
                self.data.position += 1
                oldPosition = self.data.position
                if self.data.jumpTo(quoteMark):
                    return self.data[oldPosition:self.data.position]
                else:
                    return None
            else:
                # Unquoted value
                oldPosition = self.data.position
                try:
                    self.data.skipUntil(spaceCharactersBytes)
                    return self.data[oldPosition:self.data.position]
                except StopIteration:
                    # Return the whole remaining value
                    return self.data[oldPosition:]
        except StopIteration:
            return None


def lookupEncoding(encoding):
    """Return the python codec name corresponding to an encoding or None if the
    string doesn't correspond to a valid encoding."""
    if isinstance(encoding, binary_type):
        try:
            encoding = encoding.decode("ascii")
        except UnicodeDecodeError:
            return None

    if encoding is not None:
        try:
            return webencodings.lookup(encoding)
        except AttributeError:
            return None
    else:
        return None
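# ----------------------------------------------------------------------
# Editor's sketch (illustrative, not part of the vendored module): the
# meta prescan in EncodingParser, and the content="..." fallback handled
# by ContentAttrParser, can be exercised directly.
#
#   >>> EncodingParser(b'<meta charset="ISO-8859-2">').getEncoding().name
#   'iso-8859-2'
#   >>> EncodingParser(
#   ...     b'<meta http-equiv="Content-Type" '
#   ...     b'content="text/html; charset=utf-8">').getEncoding().name
#   'utf-8'
# ----------------------------------------------------------------------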
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_tokenizer.py

from __future__ import absolute_import, division, unicode_literals

from pip._vendor.six import unichr as chr

from collections import deque

from .constants import spaceCharacters
from .constants import entities
from .constants import asciiLetters, asciiUpper2Lower
from .constants import digits, hexDigits, EOF
from .constants import tokenTypes, tagTokenTypes
from .constants import replacementCharacters

from ._inputstream import HTMLInputStream

from ._trie import Trie

entitiesTrie = Trie(entities)


class HTMLTokenizer(object):
    """ This class takes care of tokenizing HTML.

    * self.currentToken
      Holds the token that is currently being processed.

    * self.state
      Holds a reference to the method to be invoked... XXX

    * self.stream
      Points to HTMLInputStream object.
    """

    def __init__(self, stream, parser=None, **kwargs):
        self.stream = HTMLInputStream(stream, **kwargs)
        self.parser = parser

        # Setup the initial tokenizer state
        self.escapeFlag = False
        self.lastFourChars = []
        self.state = self.dataState
        self.escape = False

        # The current token being created
        self.currentToken = None
        super(HTMLTokenizer, self).__init__()

    def __iter__(self):
        """ This is where the magic happens.

        We do our usual processing through the states and when we have a token
        to return we yield the token which pauses processing until the next
        token is requested.
        """
        self.tokenQueue = deque([])
        # Start processing. When EOF is reached self.state will return False
        # instead of True and the loop will terminate.
        while self.state():
            while self.stream.errors:
                yield {"type": tokenTypes["ParseError"], "data": self.stream.errors.pop(0)}
            while self.tokenQueue:
                yield self.tokenQueue.popleft()
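    # ------------------------------------------------------------------
    # Editor's sketch (illustrative, not part of the vendored module):
    # iterating a tokenizer drives the state machine and yields dicts
    # whose "type" indexes into constants.tokenTypes.
    #
    #   >>> from pip._vendor.html5lib._tokenizer import HTMLTokenizer
    #   >>> from pip._vendor.html5lib.constants import tokenTypes
    #   >>> names = {v: k for k, v in tokenTypes.items()}
    #   >>> [names[t["type"]] for t in HTMLTokenizer("<p>hi")]
    #   ['StartTag', 'Characters']
    # ------------------------------------------------------------------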
    def consumeNumberEntity(self, isHex):
        """This function returns either U+FFFD or the character based on the
        decimal or hexadecimal representation. It also discards ";" if present.
        If not present self.tokenQueue.append({"type": tokenTypes["ParseError"]}) is invoked.
        """
        allowed = digits
        radix = 10
        if isHex:
            allowed = hexDigits
            radix = 16

        charStack = []

        # Consume all the characters that are in range while making sure we
        # don't hit an EOF.
        c = self.stream.char()
        while c in allowed and c is not EOF:
            charStack.append(c)
            c = self.stream.char()

        # Convert the set of characters consumed to an int.
        charAsInt = int("".join(charStack), radix)

        # Certain characters get replaced with others
        if charAsInt in replacementCharacters:
            char = replacementCharacters[charAsInt]
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "illegal-codepoint-for-numeric-entity",
                                    "datavars": {"charAsInt": charAsInt}})
        elif ((0xD800 <= charAsInt <= 0xDFFF) or
              (charAsInt > 0x10FFFF)):
            char = "\uFFFD"
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "illegal-codepoint-for-numeric-entity",
                                    "datavars": {"charAsInt": charAsInt}})
        else:
            # Should speed up this check somehow (e.g. move the set to a constant)
            if ((0x0001 <= charAsInt <= 0x0008) or
                (0x000E <= charAsInt <= 0x001F) or
                (0x007F <= charAsInt <= 0x009F) or
                (0xFDD0 <= charAsInt <= 0xFDEF) or
                charAsInt in frozenset([0x000B, 0xFFFE, 0xFFFF, 0x1FFFE,
                                        0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE,
                                        0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE,
                                        0x5FFFF, 0x6FFFE, 0x6FFFF, 0x7FFFE,
                                        0x7FFFF, 0x8FFFE, 0x8FFFF, 0x9FFFE,
                                        0x9FFFF, 0xAFFFE, 0xAFFFF, 0xBFFFE,
                                        0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE,
                                        0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE,
                                        0xFFFFF, 0x10FFFE, 0x10FFFF])):
                self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                        "data": "illegal-codepoint-for-numeric-entity",
                                        "datavars": {"charAsInt": charAsInt}})
            try:
                # Try/except needed as UCS-2 Python builds' unichr only works
                # within the BMP.
                char = chr(charAsInt)
            except ValueError:
                v = charAsInt - 0x10000
                char = chr(0xD800 | (v >> 10)) + chr(0xDC00 | (v & 0x3FF))

        # Discard the ; if present. Otherwise, put it back on the queue and
        # invoke parseError on parser.
        if c != ";":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "numeric-entity-without-semicolon"})
            self.stream.unget(c)

        return char
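    # ------------------------------------------------------------------
    # Editor's sketch: numeric character references, decimal and hex, as
    # handled by consumeNumberEntity() above.  Illustrative only:
    #
    #   >>> [t["data"] for t in HTMLTokenizer("&#65;&#x42;")]
    #   ['A', 'B']
    # ------------------------------------------------------------------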
    def consumeEntity(self, allowedChar=None, fromAttribute=False):
        # Initialise to the default output for when no entity is matched
        output = "&"

        charStack = [self.stream.char()]
        if (charStack[0] in spaceCharacters or charStack[0] in (EOF, "<", "&") or
                (allowedChar is not None and allowedChar == charStack[0])):
            self.stream.unget(charStack[0])

        elif charStack[0] == "#":
            # Read the next character to see if it's hex or decimal
            hex = False
            charStack.append(self.stream.char())
            if charStack[-1] in ("x", "X"):
                hex = True
                charStack.append(self.stream.char())

            # charStack[-1] should be the first digit
            if (hex and charStack[-1] in hexDigits) \
                    or (not hex and charStack[-1] in digits):
                # At least one digit found, so consume the whole number
                self.stream.unget(charStack[-1])
                output = self.consumeNumberEntity(hex)
            else:
                # No digits found
                self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                        "data": "expected-numeric-entity"})
                self.stream.unget(charStack.pop())
                output = "&" + "".join(charStack)

        else:
            # At this point in the process we might have a named entity.
            # Entities are stored in the global variable "entities".
            #
            # Consume characters and compare these to a substring of the
            # entity names in the list until the substring no longer matches.
            while (charStack[-1] is not EOF):
                if not entitiesTrie.has_keys_with_prefix("".join(charStack)):
                    break
                charStack.append(self.stream.char())

            # At this point we have a string that starts with some characters
            # that may match an entity
            # Try to find the longest entity the string will match to take care
            # of &noti for instance.
            try:
                entityName = entitiesTrie.longest_prefix("".join(charStack[:-1]))
                entityLength = len(entityName)
            except KeyError:
                entityName = None

            if entityName is not None:
                if entityName[-1] != ";":
                    self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                            "data": "named-entity-without-semicolon"})
                if (entityName[-1] != ";" and fromAttribute and
                    (charStack[entityLength] in asciiLetters or
                     charStack[entityLength] in digits or
                     charStack[entityLength] == "=")):
                    self.stream.unget(charStack.pop())
                    output = "&" + "".join(charStack)
                else:
                    output = entities[entityName]
                    self.stream.unget(charStack.pop())
                    output += "".join(charStack[entityLength:])
            else:
                self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                        "data": "expected-named-entity"})
                self.stream.unget(charStack.pop())
                output = "&" + "".join(charStack)

        if fromAttribute:
            self.currentToken["data"][-1][1] += output
        else:
            if output in spaceCharacters:
                tokenType = "SpaceCharacters"
            else:
                tokenType = "Characters"
            self.tokenQueue.append({"type": tokenTypes[tokenType], "data": output})

    def processEntityInAttribute(self, allowedChar):
        """This method replaces the need for "entityInAttributeValueState".
        """
        self.consumeEntity(allowedChar=allowedChar, fromAttribute=True)

    def emitCurrentToken(self):
        """This method is a generic handler for emitting the tags. It also sets
        the state to "data" because that's what's needed after a token has been
        emitted.
        """
        token = self.currentToken
        # Add token to the queue to be yielded
        if (token["type"] in tagTokenTypes):
            token["name"] = token["name"].translate(asciiUpper2Lower)
            if token["type"] == tokenTypes["EndTag"]:
                if token["data"]:
                    self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                            "data": "attributes-in-end-tag"})
                if token["selfClosing"]:
                    self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                            "data": "self-closing-flag-on-end-tag"})
        self.tokenQueue.append(token)
        self.state = self.dataState

    # Below are the various tokenizer states worked out.
    def dataState(self):
        data = self.stream.char()
        if data == "&":
            self.state = self.entityDataState
        elif data == "<":
            self.state = self.tagOpenState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\u0000"})
        elif data is EOF:
            # Tokenization ends.
            return False
        elif data in spaceCharacters:
            # Directly after emitting a token you switch back to the "data
            # state". At that point spaceCharacters are important so they are
            # emitted separately.
            self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data":
                                    data + self.stream.charsUntil(spaceCharacters, True)})
            # No need to update lastFourChars here, since the first space will
            # have already been appended to lastFourChars and will have broken
            # any <!-- or --> sequences
        else:
            chars = self.stream.charsUntil(("&", "<", "\u0000"))
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
                                    data + chars})
        return True

    def entityDataState(self):
        self.consumeEntity()
        self.state = self.dataState
        return True
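    # ------------------------------------------------------------------
    # Editor's sketch: a named reference with its terminating semicolon
    # resolves silently; without one, consumeEntity() above also queues a
    # "named-entity-without-semicolon" parse error.  Illustrative only:
    #
    #   >>> [t["data"] for t in HTMLTokenizer("&amp;!")]
    #   ['&', '!']
    #   >>> [t["data"] for t in HTMLTokenizer("&amp!")]
    #   ['named-entity-without-semicolon', '&', '!']
    # ------------------------------------------------------------------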
    def rcdataState(self):
        data = self.stream.char()
        if data == "&":
            self.state = self.characterReferenceInRcdata
        elif data == "<":
            self.state = self.rcdataLessThanSignState
        elif data == EOF:
            # Tokenization ends.
            return False
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
        elif data in spaceCharacters:
            # Directly after emitting a token you switch back to the "data
            # state". At that point spaceCharacters are important so they are
            # emitted separately.
            self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data":
                                    data + self.stream.charsUntil(spaceCharacters, True)})
            # No need to update lastFourChars here, since the first space will
            # have already been appended to lastFourChars and will have broken
            # any <!-- or --> sequences
        else:
            chars = self.stream.charsUntil(("&", "<", "\u0000"))
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
                                    data + chars})
        return True

    def characterReferenceInRcdata(self):
        self.consumeEntity()
        self.state = self.rcdataState
        return True

    def rawtextState(self):
        data = self.stream.char()
        if data == "<":
            self.state = self.rawtextLessThanSignState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
        elif data == EOF:
            # Tokenization ends.
            return False
        else:
            chars = self.stream.charsUntil(("<", "\u0000"))
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
                                    data + chars})
        return True

    def scriptDataState(self):
        data = self.stream.char()
        if data == "<":
            self.state = self.scriptDataLessThanSignState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
        elif data == EOF:
            # Tokenization ends.
            return False
        else:
            chars = self.stream.charsUntil(("<", "\u0000"))
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
                                    data + chars})
        return True

    def plaintextState(self):
        data = self.stream.char()
        if data == EOF:
            # Tokenization ends.
            return False
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
                                    data + self.stream.charsUntil("\u0000")})
        return True
    def tagOpenState(self):
        data = self.stream.char()
        if data == "!":
            self.state = self.markupDeclarationOpenState
        elif data == "/":
            self.state = self.closeTagOpenState
        elif data in asciiLetters:
            self.currentToken = {"type": tokenTypes["StartTag"],
                                 "name": data, "data": [],
                                 "selfClosing": False,
                                 "selfClosingAcknowledged": False}
            self.state = self.tagNameState
        elif data == ">":
            # XXX In theory it could be something besides a tag name. But
            # do we really care?
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "expected-tag-name-but-got-right-bracket"})
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<>"})
            self.state = self.dataState
        elif data == "?":
            # XXX In theory it could be something besides a tag name. But
            # do we really care?
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "expected-tag-name-but-got-question-mark"})
            self.stream.unget(data)
            self.state = self.bogusCommentState
        else:
            # XXX
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "expected-tag-name"})
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.stream.unget(data)
            self.state = self.dataState
        return True

    def closeTagOpenState(self):
        data = self.stream.char()
        if data in asciiLetters:
            self.currentToken = {"type": tokenTypes["EndTag"], "name": data,
                                 "data": [], "selfClosing": False}
            self.state = self.tagNameState
        elif data == ">":
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "expected-closing-tag-but-got-right-bracket"})
            self.state = self.dataState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "expected-closing-tag-but-got-eof"})
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
            self.state = self.dataState
        else:
            # XXX data can be _'_...
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "expected-closing-tag-but-got-char",
                                    "datavars": {"data": data}})
            self.stream.unget(data)
            self.state = self.bogusCommentState
        return True

    def tagNameState(self):
        data = self.stream.char()
        if data in spaceCharacters:
            self.state = self.beforeAttributeNameState
        elif data == ">":
            self.emitCurrentToken()
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
                                    "eof-in-tag-name"})
            self.state = self.dataState
        elif data == "/":
            self.state = self.selfClosingStartTagState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["name"] += "\uFFFD"
        else:
            self.currentToken["name"] += data
            # (Don't use charsUntil here, because tag names are
            # very short and it's faster to not do anything fancy)
        return True

    def rcdataLessThanSignState(self):
        data = self.stream.char()
        if data == "/":
            self.temporaryBuffer = ""
            self.state = self.rcdataEndTagOpenState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.stream.unget(data)
            self.state = self.rcdataState
        return True

    def rcdataEndTagOpenState(self):
        data = self.stream.char()
        if data in asciiLetters:
            self.temporaryBuffer += data
            self.state = self.rcdataEndTagNameState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
            self.stream.unget(data)
            self.state = self.rcdataState
        return True

    def rcdataEndTagNameState(self):
        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
        data = self.stream.char()
        if data in spaceCharacters and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.beforeAttributeNameState
        elif data == "/" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.selfClosingStartTagState
        elif data == ">" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.emitCurrentToken()
            self.state = self.dataState
        elif data in asciiLetters:
            self.temporaryBuffer += data
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "</" + self.temporaryBuffer})
            self.stream.unget(data)
            self.state = self.rcdataState
        return True
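    # ------------------------------------------------------------------
    # Editor's sketch: end tags flow through closeTagOpenState and
    # tagNameState above; attributes on an end tag are reported as a parse
    # error by emitCurrentToken().  Illustrative only (reuses the `names`
    # mapping from the earlier sketch):
    #
    #   >>> [(names[t["type"]], t.get("name", t.get("data")))
    #   ...  for t in HTMLTokenizer("</p x=1>")]
    #   [('ParseError', 'attributes-in-end-tag'), ('EndTag', 'p')]
    # ------------------------------------------------------------------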
    def rawtextLessThanSignState(self):
        data = self.stream.char()
        if data == "/":
            self.temporaryBuffer = ""
            self.state = self.rawtextEndTagOpenState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.stream.unget(data)
            self.state = self.rawtextState
        return True

    def rawtextEndTagOpenState(self):
        data = self.stream.char()
        if data in asciiLetters:
            self.temporaryBuffer += data
            self.state = self.rawtextEndTagNameState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
            self.stream.unget(data)
            self.state = self.rawtextState
        return True

    def rawtextEndTagNameState(self):
        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
        data = self.stream.char()
        if data in spaceCharacters and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.beforeAttributeNameState
        elif data == "/" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.selfClosingStartTagState
        elif data == ">" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.emitCurrentToken()
            self.state = self.dataState
        elif data in asciiLetters:
            self.temporaryBuffer += data
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "</" + self.temporaryBuffer})
            self.stream.unget(data)
            self.state = self.rawtextState
        return True

    def scriptDataLessThanSignState(self):
        data = self.stream.char()
        if data == "/":
            self.temporaryBuffer = ""
            self.state = self.scriptDataEndTagOpenState
        elif data == "!":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<!"})
            self.state = self.scriptDataEscapeStartState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.stream.unget(data)
            self.state = self.scriptDataState
        return True

    def scriptDataEndTagOpenState(self):
        data = self.stream.char()
        if data in asciiLetters:
            self.temporaryBuffer += data
            self.state = self.scriptDataEndTagNameState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
            self.stream.unget(data)
            self.state = self.scriptDataState
        return True

    def scriptDataEndTagNameState(self):
        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
        data = self.stream.char()
        if data in spaceCharacters and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.beforeAttributeNameState
        elif data == "/" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.selfClosingStartTagState
        elif data == ">" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.emitCurrentToken()
            self.state = self.dataState
        elif data in asciiLetters:
            self.temporaryBuffer += data
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "</" + self.temporaryBuffer})
            self.stream.unget(data)
            self.state = self.scriptDataState
        return True

    def scriptDataEscapeStartState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
            self.state = self.scriptDataEscapeStartDashState
        else:
            self.stream.unget(data)
            self.state = self.scriptDataState
        return True
    def scriptDataEscapeStartDashState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
            self.state = self.scriptDataEscapedDashDashState
        else:
            self.stream.unget(data)
            self.state = self.scriptDataState
        return True

    def scriptDataEscapedState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
            self.state = self.scriptDataEscapedDashState
        elif data == "<":
            self.state = self.scriptDataEscapedLessThanSignState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
        elif data == EOF:
            self.state = self.dataState
        else:
            chars = self.stream.charsUntil(("<", "-", "\u0000"))
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
                                    data + chars})
        return True

    def scriptDataEscapedDashState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
            self.state = self.scriptDataEscapedDashDashState
        elif data == "<":
            self.state = self.scriptDataEscapedLessThanSignState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
            self.state = self.scriptDataEscapedState
        elif data == EOF:
            self.state = self.dataState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
            self.state = self.scriptDataEscapedState
        return True

    def scriptDataEscapedDashDashState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
        elif data == "<":
            self.state = self.scriptDataEscapedLessThanSignState
        elif data == ">":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"})
            self.state = self.scriptDataState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
            self.state = self.scriptDataEscapedState
        elif data == EOF:
            self.state = self.dataState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
            self.state = self.scriptDataEscapedState
        return True

    def scriptDataEscapedLessThanSignState(self):
        data = self.stream.char()
        if data == "/":
            self.temporaryBuffer = ""
            self.state = self.scriptDataEscapedEndTagOpenState
        elif data in asciiLetters:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<" + data})
            self.temporaryBuffer = data
            self.state = self.scriptDataDoubleEscapeStartState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.stream.unget(data)
            self.state = self.scriptDataEscapedState
        return True

    def scriptDataEscapedEndTagOpenState(self):
        data = self.stream.char()
        if data in asciiLetters:
            self.temporaryBuffer = data
            self.state = self.scriptDataEscapedEndTagNameState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
            self.stream.unget(data)
            self.state = self.scriptDataEscapedState
        return True
    def scriptDataEscapedEndTagNameState(self):
        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
        data = self.stream.char()
        if data in spaceCharacters and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.beforeAttributeNameState
        elif data == "/" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.state = self.selfClosingStartTagState
        elif data == ">" and appropriate:
            self.currentToken = {"type": tokenTypes["EndTag"],
                                 "name": self.temporaryBuffer,
                                 "data": [], "selfClosing": False}
            self.emitCurrentToken()
            self.state = self.dataState
        elif data in asciiLetters:
            self.temporaryBuffer += data
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "</" + self.temporaryBuffer})
            self.stream.unget(data)
            self.state = self.scriptDataEscapedState
        return True

    def scriptDataDoubleEscapeStartState(self):
        data = self.stream.char()
        if data in (spaceCharacters | frozenset(("/", ">"))):
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
            if self.temporaryBuffer.lower() == "script":
                self.state = self.scriptDataDoubleEscapedState
            else:
                self.state = self.scriptDataEscapedState
        elif data in asciiLetters:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
            self.temporaryBuffer += data
        else:
            self.stream.unget(data)
            self.state = self.scriptDataEscapedState
        return True

    def scriptDataDoubleEscapedState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
            self.state = self.scriptDataDoubleEscapedDashState
        elif data == "<":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.state = self.scriptDataDoubleEscapedLessThanSignState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
        elif data == EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-script-in-script"})
            self.state = self.dataState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
        return True

    def scriptDataDoubleEscapedDashState(self):
        data = self.stream.char()
        if data == "-":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
            self.state = self.scriptDataDoubleEscapedDashDashState
        elif data == "<":
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
            self.state = self.scriptDataDoubleEscapedLessThanSignState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.tokenQueue.append({"type": tokenTypes["Characters"],
                                    "data": "\uFFFD"})
            self.state = self.scriptDataDoubleEscapedState
        elif data == EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-script-in-script"})
            self.state = self.dataState
        else:
            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
            self.state = self.scriptDataDoubleEscapedState
        return True
"data": "eof-in-script-in-script"}) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) self.state = self.scriptDataDoubleEscapedState return True def scriptDataDoubleEscapedLessThanSignState(self): data = self.stream.char() if data == "/": self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "/"}) self.temporaryBuffer = "" self.state = self.scriptDataDoubleEscapeEndState else: self.stream.unget(data) self.state = self.scriptDataDoubleEscapedState return True def scriptDataDoubleEscapeEndState(self): data = self.stream.char() if data in (spaceCharacters | frozenset(("/", ">"))): self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) if self.temporaryBuffer.lower() == "script": self.state = self.scriptDataEscapedState else: self.state = self.scriptDataDoubleEscapedState elif data in asciiLetters: self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) self.temporaryBuffer += data else: self.stream.unget(data) self.state = self.scriptDataDoubleEscapedState return True def beforeAttributeNameState(self): data = self.stream.char() if data in spaceCharacters: self.stream.charsUntil(spaceCharacters, True) elif data in asciiLetters: self.currentToken["data"].append([data, ""]) self.state = self.attributeNameState elif data == ">": self.emitCurrentToken() elif data == "/": self.state = self.selfClosingStartTagState elif data in ("'", '"', "=", "<"): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-character-in-attribute-name"}) self.currentToken["data"].append([data, ""]) self.state = self.attributeNameState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["data"].append(["\uFFFD", ""]) self.state = self.attributeNameState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "expected-attribute-name-but-got-eof"}) self.state = self.dataState else: self.currentToken["data"].append([data, ""]) self.state = self.attributeNameState return True def attributeNameState(self): data = self.stream.char() leavingThisState = True emitToken = False if data == "=": self.state = self.beforeAttributeValueState elif data in asciiLetters: self.currentToken["data"][-1][0] += data +\ self.stream.charsUntil(asciiLetters, True) leavingThisState = False elif data == ">": # XXX If we emit here the attributes are converted to a dict # without being checked and when the code below runs we error # because data is a dict not a list emitToken = True elif data in spaceCharacters: self.state = self.afterAttributeNameState elif data == "/": self.state = self.selfClosingStartTagState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["data"][-1][0] += "\uFFFD" leavingThisState = False elif data in ("'", '"', "<"): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-character-in-attribute-name"}) self.currentToken["data"][-1][0] += data leavingThisState = False elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-attribute-name"}) self.state = self.dataState else: self.currentToken["data"][-1][0] += data leavingThisState = False if leavingThisState: # Attributes are not dropped at this stage. That happens when the # start tag token is emitted so values can still be safely appended # to attributes, but we do want to report the parse error in time. 
self.currentToken["data"][-1][0] = ( self.currentToken["data"][-1][0].translate(asciiUpper2Lower)) for name, _ in self.currentToken["data"][:-1]: if self.currentToken["data"][-1][0] == name: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "duplicate-attribute"}) break # XXX Fix for above XXX if emitToken: self.emitCurrentToken() return True def afterAttributeNameState(self): data = self.stream.char() if data in spaceCharacters: self.stream.charsUntil(spaceCharacters, True) elif data == "=": self.state = self.beforeAttributeValueState elif data == ">": self.emitCurrentToken() elif data in asciiLetters: self.currentToken["data"].append([data, ""]) self.state = self.attributeNameState elif data == "/": self.state = self.selfClosingStartTagState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["data"].append(["\uFFFD", ""]) self.state = self.attributeNameState elif data in ("'", '"', "<"): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-character-after-attribute-name"}) self.currentToken["data"].append([data, ""]) self.state = self.attributeNameState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "expected-end-of-tag-but-got-eof"}) self.state = self.dataState else: self.currentToken["data"].append([data, ""]) self.state = self.attributeNameState return True def beforeAttributeValueState(self): data = self.stream.char() if data in spaceCharacters: self.stream.charsUntil(spaceCharacters, True) elif data == "\"": self.state = self.attributeValueDoubleQuotedState elif data == "&": self.state = self.attributeValueUnQuotedState self.stream.unget(data) elif data == "'": self.state = self.attributeValueSingleQuotedState elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "expected-attribute-value-but-got-right-bracket"}) self.emitCurrentToken() elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["data"][-1][1] += "\uFFFD" self.state = self.attributeValueUnQuotedState elif data in ("=", "<", "`"): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "equals-in-unquoted-attribute-value"}) self.currentToken["data"][-1][1] += data self.state = self.attributeValueUnQuotedState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "expected-attribute-value-but-got-eof"}) self.state = self.dataState else: self.currentToken["data"][-1][1] += data self.state = self.attributeValueUnQuotedState return True def attributeValueDoubleQuotedState(self): data = self.stream.char() if data == "\"": self.state = self.afterAttributeValueState elif data == "&": self.processEntityInAttribute('"') elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["data"][-1][1] += "\uFFFD" elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-attribute-value-double-quote"}) self.state = self.dataState else: self.currentToken["data"][-1][1] += data +\ self.stream.charsUntil(("\"", "&", "\u0000")) return True def attributeValueSingleQuotedState(self): data = self.stream.char() if data == "'": self.state = self.afterAttributeValueState elif data == "&": self.processEntityInAttribute("'") elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) 
self.currentToken["data"][-1][1] += "\uFFFD" elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-attribute-value-single-quote"}) self.state = self.dataState else: self.currentToken["data"][-1][1] += data +\ self.stream.charsUntil(("'", "&", "\u0000")) return True def attributeValueUnQuotedState(self): data = self.stream.char() if data in spaceCharacters: self.state = self.beforeAttributeNameState elif data == "&": self.processEntityInAttribute(">") elif data == ">": self.emitCurrentToken() elif data in ('"', "'", "=", "<", "`"): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-character-in-unquoted-attribute-value"}) self.currentToken["data"][-1][1] += data elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["data"][-1][1] += "\uFFFD" elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-attribute-value-no-quotes"}) self.state = self.dataState else: self.currentToken["data"][-1][1] += data + self.stream.charsUntil( frozenset(("&", ">", '"', "'", "=", "<", "`", "\u0000")) | spaceCharacters) return True def afterAttributeValueState(self): data = self.stream.char() if data in spaceCharacters: self.state = self.beforeAttributeNameState elif data == ">": self.emitCurrentToken() elif data == "/": self.state = self.selfClosingStartTagState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-EOF-after-attribute-value"}) self.stream.unget(data) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-character-after-attribute-value"}) self.stream.unget(data) self.state = self.beforeAttributeNameState return True def selfClosingStartTagState(self): data = self.stream.char() if data == ">": self.currentToken["selfClosing"] = True self.emitCurrentToken() elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-EOF-after-solidus-in-tag"}) self.stream.unget(data) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-character-after-solidus-in-tag"}) self.stream.unget(data) self.state = self.beforeAttributeNameState return True def bogusCommentState(self): # Make a new comment token and give it as value all the characters # until the first > or EOF (charsUntil checks for EOF automatically) # and emit it. data = self.stream.charsUntil(">") data = data.replace("\u0000", "\uFFFD") self.tokenQueue.append( {"type": tokenTypes["Comment"], "data": data}) # Eat the character directly after the bogus comment which is either a # ">" or an EOF. 
    def bogusCommentState(self):
        # Make a new comment token and give it as value all the characters
        # until the first > or EOF (charsUntil checks for EOF automatically)
        # and emit it.
        data = self.stream.charsUntil(">")
        data = data.replace("\u0000", "\uFFFD")
        self.tokenQueue.append(
            {"type": tokenTypes["Comment"], "data": data})

        # Eat the character directly after the bogus comment which is either a
        # ">" or an EOF.
        self.stream.char()
        self.state = self.dataState
        return True

    def markupDeclarationOpenState(self):
        charStack = [self.stream.char()]
        if charStack[-1] == "-":
            charStack.append(self.stream.char())
            if charStack[-1] == "-":
                self.currentToken = {"type": tokenTypes["Comment"], "data": ""}
                self.state = self.commentStartState
                return True
        elif charStack[-1] in ('d', 'D'):
            matched = True
            for expected in (('o', 'O'), ('c', 'C'), ('t', 'T'),
                             ('y', 'Y'), ('p', 'P'), ('e', 'E')):
                charStack.append(self.stream.char())
                if charStack[-1] not in expected:
                    matched = False
                    break
            if matched:
                self.currentToken = {"type": tokenTypes["Doctype"],
                                     "name": "",
                                     "publicId": None, "systemId": None,
                                     "correct": True}
                self.state = self.doctypeState
                return True
        elif (charStack[-1] == "[" and
              self.parser is not None and
              self.parser.tree.openElements and
              self.parser.tree.openElements[-1].namespace != self.parser.tree.defaultNamespace):
            matched = True
            for expected in ["C", "D", "A", "T", "A", "["]:
                charStack.append(self.stream.char())
                if charStack[-1] != expected:
                    matched = False
                    break
            if matched:
                self.state = self.cdataSectionState
                return True

        self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                "data": "expected-dashes-or-doctype"})

        while charStack:
            self.stream.unget(charStack.pop())
        self.state = self.bogusCommentState
        return True

    def commentStartState(self):
        data = self.stream.char()
        if data == "-":
            self.state = self.commentStartDashState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["data"] += "\uFFFD"
        elif data == ">":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "incorrect-comment"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-comment"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["data"] += data
            self.state = self.commentState
        return True

    def commentStartDashState(self):
        data = self.stream.char()
        if data == "-":
            self.state = self.commentEndState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["data"] += "-\uFFFD"
        elif data == ">":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "incorrect-comment"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-comment"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["data"] += "-" + data
            self.state = self.commentState
        return True

    def commentState(self):
        data = self.stream.char()
        if data == "-":
            self.state = self.commentEndDashState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["data"] += "\uFFFD"
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-comment"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["data"] += data + \
                self.stream.charsUntil(("-", "\u0000"))
        return True
    def commentEndDashState(self):
        data = self.stream.char()
        if data == "-":
            self.state = self.commentEndState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["data"] += "-\uFFFD"
            self.state = self.commentState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-comment-end-dash"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["data"] += "-" + data
            self.state = self.commentState
        return True

    def commentEndState(self):
        data = self.stream.char()
        if data == ">":
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["data"] += "--\uFFFD"
            self.state = self.commentState
        elif data == "!":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "unexpected-bang-after-double-dash-in-comment"})
            self.state = self.commentEndBangState
        elif data == "-":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "unexpected-dash-after-double-dash-in-comment"})
            self.currentToken["data"] += data
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-comment-double-dash"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            # XXX
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "unexpected-char-in-comment"})
            self.currentToken["data"] += "--" + data
            self.state = self.commentState
        return True

    def commentEndBangState(self):
        data = self.stream.char()
        if data == ">":
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        elif data == "-":
            self.currentToken["data"] += "--!"
            self.state = self.commentEndDashState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["data"] += "--!\uFFFD"
            self.state = self.commentState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-comment-end-bang-state"})
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["data"] += "--!" + data
            self.state = self.commentState
        return True
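    # ------------------------------------------------------------------
    # Editor's sketch: a well-formed comment travels through the
    # commentStart/comment/commentEnd states above and emits one token.
    # Illustrative only:
    #
    #   >>> [(names[t["type"]], t["data"]) for t in HTMLTokenizer("<!--hi-->")]
    #   [('Comment', 'hi')]
    # ------------------------------------------------------------------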
    def doctypeState(self):
        data = self.stream.char()
        if data in spaceCharacters:
            self.state = self.beforeDoctypeNameState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "expected-doctype-name-but-got-eof"})
            self.currentToken["correct"] = False
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "need-space-after-doctype"})
            self.stream.unget(data)
            self.state = self.beforeDoctypeNameState
        return True

    def beforeDoctypeNameState(self):
        data = self.stream.char()
        if data in spaceCharacters:
            pass
        elif data == ">":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "expected-doctype-name-but-got-right-bracket"})
            self.currentToken["correct"] = False
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["name"] = "\uFFFD"
            self.state = self.doctypeNameState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "expected-doctype-name-but-got-eof"})
            self.currentToken["correct"] = False
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["name"] = data
            self.state = self.doctypeNameState
        return True

    def doctypeNameState(self):
        data = self.stream.char()
        if data in spaceCharacters:
            self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
            self.state = self.afterDoctypeNameState
        elif data == ">":
            self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        elif data == "\u0000":
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "invalid-codepoint"})
            self.currentToken["name"] += "\uFFFD"
            self.state = self.doctypeNameState
        elif data is EOF:
            self.tokenQueue.append({"type": tokenTypes["ParseError"],
                                    "data": "eof-in-doctype-name"})
            self.currentToken["correct"] = False
            self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
            self.tokenQueue.append(self.currentToken)
            self.state = self.dataState
        else:
            self.currentToken["name"] += data
        return True
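    # ------------------------------------------------------------------
    # Editor's sketch: doctype names are lower-cased in doctypeNameState
    # above, and "correct" stays True for a well-formed doctype.
    # Illustrative only:
    #
    #   >>> d = list(HTMLTokenizer("<!DOCTYPE HTML>"))[0]
    #   >>> names[d["type"]], d["name"], d["correct"]
    #   ('Doctype', 'html', True)
    # ------------------------------------------------------------------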
"data": "expected-space-or-right-bracket-in-doctype", "datavars": {"data": data}}) self.currentToken["correct"] = False self.state = self.bogusDoctypeState return True def afterDoctypePublicKeywordState(self): data = self.stream.char() if data in spaceCharacters: self.state = self.beforeDoctypePublicIdentifierState elif data in ("'", '"'): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.stream.unget(data) self.state = self.beforeDoctypePublicIdentifierState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.stream.unget(data) self.state = self.beforeDoctypePublicIdentifierState return True def beforeDoctypePublicIdentifierState(self): data = self.stream.char() if data in spaceCharacters: pass elif data == "\"": self.currentToken["publicId"] = "" self.state = self.doctypePublicIdentifierDoubleQuotedState elif data == "'": self.currentToken["publicId"] = "" self.state = self.doctypePublicIdentifierSingleQuotedState elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-end-of-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["correct"] = False self.state = self.bogusDoctypeState return True def doctypePublicIdentifierDoubleQuotedState(self): data = self.stream.char() if data == "\"": self.state = self.afterDoctypePublicIdentifierState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["publicId"] += "\uFFFD" elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-end-of-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.currentToken["publicId"] += data return True def doctypePublicIdentifierSingleQuotedState(self): data = self.stream.char() if data == "'": self.state = self.afterDoctypePublicIdentifierState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["publicId"] += "\uFFFD" elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-end-of-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.currentToken["publicId"] += data return True def afterDoctypePublicIdentifierState(self): data = self.stream.char() if data in spaceCharacters: self.state = self.betweenDoctypePublicAndSystemIdentifiersState elif data == ">": 
self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data == '"': self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["systemId"] = "" self.state = self.doctypeSystemIdentifierDoubleQuotedState elif data == "'": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["systemId"] = "" self.state = self.doctypeSystemIdentifierSingleQuotedState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["correct"] = False self.state = self.bogusDoctypeState return True def betweenDoctypePublicAndSystemIdentifiersState(self): data = self.stream.char() if data in spaceCharacters: pass elif data == ">": self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data == '"': self.currentToken["systemId"] = "" self.state = self.doctypeSystemIdentifierDoubleQuotedState elif data == "'": self.currentToken["systemId"] = "" self.state = self.doctypeSystemIdentifierSingleQuotedState elif data == EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["correct"] = False self.state = self.bogusDoctypeState return True def afterDoctypeSystemKeywordState(self): data = self.stream.char() if data in spaceCharacters: self.state = self.beforeDoctypeSystemIdentifierState elif data in ("'", '"'): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.stream.unget(data) self.state = self.beforeDoctypeSystemIdentifierState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.stream.unget(data) self.state = self.beforeDoctypeSystemIdentifierState return True def beforeDoctypeSystemIdentifierState(self): data = self.stream.char() if data in spaceCharacters: pass elif data == "\"": self.currentToken["systemId"] = "" self.state = self.doctypeSystemIdentifierDoubleQuotedState elif data == "'": self.currentToken["systemId"] = "" self.state = self.doctypeSystemIdentifierSingleQuotedState elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.currentToken["correct"] = False self.state = self.bogusDoctypeState return True def doctypeSystemIdentifierDoubleQuotedState(self): data = self.stream.char() if data == "\"": self.state = self.afterDoctypeSystemIdentifierState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": 
"invalid-codepoint"}) self.currentToken["systemId"] += "\uFFFD" elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-end-of-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.currentToken["systemId"] += data return True def doctypeSystemIdentifierSingleQuotedState(self): data = self.stream.char() if data == "'": self.state = self.afterDoctypeSystemIdentifierState elif data == "\u0000": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) self.currentToken["systemId"] += "\uFFFD" elif data == ">": self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-end-of-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.currentToken["systemId"] += data return True def afterDoctypeSystemIdentifierState(self): data = self.stream.char() if data in spaceCharacters: pass elif data == ">": self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "eof-in-doctype"}) self.currentToken["correct"] = False self.tokenQueue.append(self.currentToken) self.state = self.dataState else: self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "unexpected-char-in-doctype"}) self.state = self.bogusDoctypeState return True def bogusDoctypeState(self): data = self.stream.char() if data == ">": self.tokenQueue.append(self.currentToken) self.state = self.dataState elif data is EOF: # XXX EMIT self.stream.unget(data) self.tokenQueue.append(self.currentToken) self.state = self.dataState else: pass return True def cdataSectionState(self): data = [] while True: data.append(self.stream.charsUntil("]")) data.append(self.stream.charsUntil(">")) char = self.stream.char() if char == EOF: break else: assert char == ">" if data[-1][-2:] == "]]": data[-1] = data[-1][:-2] break else: data.append(char) data = "".join(data) # pylint:disable=redefined-variable-type # Deal with null here rather than in the parser nullCount = data.count("\u0000") if nullCount > 0: for _ in range(nullCount): self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": "invalid-codepoint"}) data = data.replace("\u0000", "\uFFFD") if data: self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) self.state = self.dataState return True ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000034�00000000000�011452� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/__init__.py
from __future__ import absolute_import, division, unicode_literals

from .py import Trie as PyTrie

Trie = PyTrie

# pylint:disable=wrong-import-position
try:
    from .datrie import Trie as DATrie
except ImportError:
    pass
else:
    Trie = DATrie
# pylint:enable=wrong-import-position
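# Sketch of the fallback above: Trie stays bound to the pure-Python PyTrie
# unless the optional C-accelerated datrie package imports cleanly, in which
# case the name is rebound. Either implementation exposes the same
# Mapping-plus-prefix API:
from pip._vendor.html5lib._trie import Trie

t = Trie({"amp;": "&", "and;": "\u2227"})
assert t.keys("amp") == {"amp;"}
assert t.has_keys_with_prefix("a")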
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/_base.py
from __future__ import absolute_import, division, unicode_literals

try:
    from collections.abc import Mapping
except ImportError:  # Python 2.7
    from collections import Mapping


class Trie(Mapping):
    """Abstract base class for tries"""

    def keys(self, prefix=None):
        # pylint:disable=arguments-differ
        keys = super(Trie, self).keys()

        if prefix is None:
            return set(keys)

        return {x for x in keys if x.startswith(prefix)}

    def has_keys_with_prefix(self, prefix):
        for key in self.keys():
            if key.startswith(prefix):
                return True
        return False

    def longest_prefix(self, prefix):
        if prefix in self:
            return prefix

        for i in range(1, len(prefix) + 1):
            if prefix[:-i] in self:
                return prefix[:-i]

        raise KeyError(prefix)

    def longest_prefix_item(self, prefix):
        lprefix = self.longest_prefix(prefix)
        return (lprefix, self[lprefix])
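# The ABC above supplies naive defaults on top of the Mapping protocol;
# longest_prefix in particular tries the full string first and then
# progressively shorter slices, raising KeyError if none is a stored key:
from pip._vendor.html5lib._trie.py import Trie

t = Trie({"a": 1, "abc": 2})
assert t.longest_prefix("abcdef") == "abc"
assert t.longest_prefix_item("abcdef") == ("abc", 2)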
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/datrie.py
from __future__ import absolute_import, division, unicode_literals

from datrie import Trie as DATrie
from pip._vendor.six import text_type

from ._base import Trie as ABCTrie


class Trie(ABCTrie):
    def __init__(self, data):
        chars = set()
        for key in data.keys():
            if not isinstance(key, text_type):
                raise TypeError("All keys must be strings")
            for char in key:
                chars.add(char)

        self._data = DATrie("".join(chars))
        for key, value in data.items():
            self._data[key] = value

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        raise NotImplementedError()

    def __getitem__(self, key):
        return self._data[key]

    def keys(self, prefix=None):
        return self._data.keys(prefix)

    def has_keys_with_prefix(self, prefix):
        return self._data.has_keys_with_prefix(prefix)

    def longest_prefix(self, prefix):
        return self._data.longest_prefix(prefix)

    def longest_prefix_item(self, prefix):
        return self._data.longest_prefix_item(prefix)
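# The wrapper above defers all prefix queries to the C-backed datrie.Trie.
# datrie fixes its alphabet up front, hence the pass over every character of
# every key in __init__. Hypothetical usage, only when the third-party
# datrie package is actually installed:
#
#     from pip._vendor.html5lib._trie.datrie import Trie
#     t = Trie({"ab": 1, "ac": 2})
#     assert t.has_keys_with_prefix("a")      # answered in C
#     assert t.longest_prefix("abba") == "ab"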
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_trie/py.py
from __future__ import absolute_import, division, unicode_literals

from pip._vendor.six import text_type

from bisect import bisect_left

from ._base import Trie as ABCTrie


class Trie(ABCTrie):
    def __init__(self, data):
        if not all(isinstance(x, text_type) for x in data.keys()):
            raise TypeError("All keys must be strings")

        self._data = data
        self._keys = sorted(data.keys())
        self._cachestr = ""
        self._cachepoints = (0, len(data))

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        return iter(self._data)

    def __getitem__(self, key):
        return self._data[key]

    def keys(self, prefix=None):
        if prefix is None or prefix == "" or not self._keys:
            return set(self._keys)

        if prefix.startswith(self._cachestr):
            lo, hi = self._cachepoints
            start = i = bisect_left(self._keys, prefix, lo, hi)
        else:
            start = i = bisect_left(self._keys, prefix)

        keys = set()
        if start == len(self._keys):
            return keys

        # Bounds check added: without it, a prefix run that reaches the very
        # last sorted key would index past the end of the list.
        while i < len(self._keys) and self._keys[i].startswith(prefix):
            keys.add(self._keys[i])
            i += 1

        self._cachestr = prefix
        self._cachepoints = (start, i)

        return keys

    def has_keys_with_prefix(self, prefix):
        if prefix in self._data:
            return True

        if prefix.startswith(self._cachestr):
            lo, hi = self._cachepoints
            i = bisect_left(self._keys, prefix, lo, hi)
        else:
            i = bisect_left(self._keys, prefix)

        if i == len(self._keys):
            return False

        return self._keys[i].startswith(prefix)
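# The pure-Python fallback above is just a sorted key list plus binary
# search. keys(prefix) remembers the window it scanned in
# _cachestr/_cachepoints, so a follow-up query with a longer prefix bisects
# only inside that window instead of the whole list:
from pip._vendor.html5lib._trie.py import Trie

t = Trie({"and": 1, "ant": 2, "any": 3, "or": 4})
assert t.keys("an") == {"and", "ant", "any"}   # caches the matching window
assert t.keys("ant") == {"ant"}                # re-bisects within the cache
assert t.has_keys_with_prefix("o")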
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/_utils.py
from __future__ import absolute_import, division, unicode_literals

from types import ModuleType

from pip._vendor.six import text_type

try:
    import xml.etree.cElementTree as default_etree
except ImportError:
    import xml.etree.ElementTree as default_etree


__all__ = ["default_etree", "MethodDispatcher", "isSurrogatePair",
           "surrogatePairToCodepoint", "moduleFactoryFactory",
           "supports_lone_surrogates"]


# Platforms not supporting lone surrogates (\uD800-\uDFFF) should be
# caught by the below test. In general this would be any platform
# using UTF-16 as its encoding of unicode strings, such as
# Jython. This is because UTF-16 itself is based on the use of such
# surrogates, and there is no mechanism to further escape such
# escapes.
try:
    _x = eval('"\\uD800"')  # pylint:disable=eval-used
    if not isinstance(_x, text_type):
        # We need this with u"" because of http://bugs.jython.org/issue2039
        _x = eval('u"\\uD800"')  # pylint:disable=eval-used
        assert isinstance(_x, text_type)
except:  # pylint:disable=bare-except
    supports_lone_surrogates = False
else:
    supports_lone_surrogates = True


class MethodDispatcher(dict):
    """Dict with 2 special properties:

    On initiation, keys that are lists, sets or tuples are converted to
    multiple keys so accessing any one of the items in the original
    list-like object returns the matching value

    md = MethodDispatcher({("foo", "bar"):"baz"})
    md["foo"] == "baz"

    A default value which can be set through the default attribute.
    """

    def __init__(self, items=()):
        # Using _dictEntries instead of directly assigning to self is about
        # twice as fast. Please do careful performance testing before changing
        # anything here.
        _dictEntries = []
        for name, value in items:
            if isinstance(name, (list, tuple, frozenset, set)):
                for item in name:
                    _dictEntries.append((item, value))
            else:
                _dictEntries.append((name, value))
        dict.__init__(self, _dictEntries)
        assert len(self) == len(_dictEntries)
        self.default = None

    def __getitem__(self, key):
        return dict.get(self, key, self.default)


# Some utility functions to deal with weirdness around UCS2 vs UCS4
# python builds

def isSurrogatePair(data):
    return (len(data) == 2 and
            ord(data[0]) >= 0xD800 and ord(data[0]) <= 0xDBFF and
            ord(data[1]) >= 0xDC00 and ord(data[1]) <= 0xDFFF)


def surrogatePairToCodepoint(data):
    char_val = (0x10000 + (ord(data[0]) - 0xD800) * 0x400 +
                (ord(data[1]) - 0xDC00))
    return char_val
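# Worked example for the two surrogate helpers above: on a narrow (UTF-16)
# build, U+1F600 is stored as the pair "\uD83D\uDE00", and
# 0x10000 + (0xD83D - 0xD800) * 0x400 + (0xDE00 - 0xDC00) == 0x1F600:
assert surrogatePairToCodepoint("\ud83d\ude00") == 0x1F600

# MethodDispatcher expands list-like keys at construction time and answers
# unknown keys with .default instead of raising:
md = MethodDispatcher([(("br", "hr"), "void"), ("p", "block")])
md.default = "unknown"
assert md["hr"] == "void"
assert md["img"] == "unknown"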
# Module Factory Factory (no, this isn't Java, I know)
# Here to stop this being duplicated all over the place.

def moduleFactoryFactory(factory):
    moduleCache = {}

    def moduleFactory(baseModule, *args, **kwargs):
        if isinstance(ModuleType.__name__, type("")):
            name = "_%s_factory" % baseModule.__name__
        else:
            name = b"_%s_factory" % baseModule.__name__

        kwargs_tuple = tuple(kwargs.items())

        try:
            return moduleCache[name][args][kwargs_tuple]
        except KeyError:
            mod = ModuleType(name)
            objs = factory(baseModule, *args, **kwargs)
            mod.__dict__.update(objs)
            # Check the actual cache keys rather than the string literals
            # "name"/"args" (as the original did), so that previously cached
            # modules for the same name survive.
            if name not in moduleCache:
                moduleCache[name] = {}
            if args not in moduleCache[name]:
                moduleCache[name][args] = {}
            moduleCache[name][args][kwargs_tuple] = mod
            return mod

    return moduleFactory


def memoize(func):
    cache = {}

    def wrapped(*args, **kwargs):
        key = (tuple(args), tuple(kwargs.items()))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapped
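# Sketch of the memoize helper above: the closure keys its cache on the
# positional and keyword arguments, so a repeated call skips the function
# body entirely (which also means the arguments must be hashable):
@memoize
def double(x):
    print("computing", x)
    return 2 * x

assert double(21) == 42   # prints "computing 21"
assert double(21) == 42   # cache hit: no print this time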
Got end of file instead.", "expected-numeric-entity": "Numeric entity expected but none found.", "named-entity-without-semicolon": "Named entity didn't end with ';'.", "expected-named-entity": "Named entity expected. Got none.", "attributes-in-end-tag": "End tag contains unexpected attributes.", 'self-closing-flag-on-end-tag': "End tag contains unexpected self-closing flag.", "expected-tag-name-but-got-right-bracket": "Expected tag name. Got '>' instead.", "expected-tag-name-but-got-question-mark": "Expected tag name. Got '?' instead. (HTML doesn't " "support processing instructions.)", "expected-tag-name": "Expected tag name. Got something else instead", "expected-closing-tag-but-got-right-bracket": "Expected closing tag. Got '>' instead. Ignoring '</>'.", "expected-closing-tag-but-got-eof": "Expected closing tag. Unexpected end of file.", "expected-closing-tag-but-got-char": "Expected closing tag. Unexpected character '%(data)s' found.", "eof-in-tag-name": "Unexpected end of file in the tag name.", "expected-attribute-name-but-got-eof": "Unexpected end of file. Expected attribute name instead.", "eof-in-attribute-name": "Unexpected end of file in attribute name.", "invalid-character-in-attribute-name": "Invalid character in attribute name", "duplicate-attribute": "Dropped duplicate attribute on tag.", "expected-end-of-tag-name-but-got-eof": "Unexpected end of file. Expected = or end of tag.", "expected-attribute-value-but-got-eof": "Unexpected end of file. Expected attribute value.", "expected-attribute-value-but-got-right-bracket": "Expected attribute value. Got '>' instead.", 'equals-in-unquoted-attribute-value': "Unexpected = in unquoted attribute", 'unexpected-character-in-unquoted-attribute-value': "Unexpected character in unquoted attribute", "invalid-character-after-attribute-name": "Unexpected character after attribute name.", "unexpected-character-after-attribute-value": "Unexpected character after attribute value.", "eof-in-attribute-value-double-quote": "Unexpected end of file in attribute value (\").", "eof-in-attribute-value-single-quote": "Unexpected end of file in attribute value (').", "eof-in-attribute-value-no-quotes": "Unexpected end of file in attribute value.", "unexpected-EOF-after-solidus-in-tag": "Unexpected end of file in tag. Expected >", "unexpected-character-after-solidus-in-tag": "Unexpected character after / in tag. Expected >", "expected-dashes-or-doctype": "Expected '--' or 'DOCTYPE'. Not found.", "unexpected-bang-after-double-dash-in-comment": "Unexpected ! after -- in comment", "unexpected-space-after-double-dash-in-comment": "Unexpected space after -- in comment", "incorrect-comment": "Incorrect comment.", "eof-in-comment": "Unexpected end of file in comment.", "eof-in-comment-end-dash": "Unexpected end of file in comment (-)", "unexpected-dash-after-double-dash-in-comment": "Unexpected '-' after '--' found in comment.", "eof-in-comment-double-dash": "Unexpected end of file in comment (--).", "eof-in-comment-end-space-state": "Unexpected end of file in comment.", "eof-in-comment-end-bang-state": "Unexpected end of file in comment.", "unexpected-char-in-comment": "Unexpected character in comment found.", "need-space-after-doctype": "No space after literal string 'DOCTYPE'.", "expected-doctype-name-but-got-right-bracket": "Unexpected > character. Expected DOCTYPE name.", "expected-doctype-name-but-got-eof": "Unexpected end of file. 
Expected DOCTYPE name.", "eof-in-doctype-name": "Unexpected end of file in DOCTYPE name.", "eof-in-doctype": "Unexpected end of file in DOCTYPE.", "expected-space-or-right-bracket-in-doctype": "Expected space or '>'. Got '%(data)s'", "unexpected-end-of-doctype": "Unexpected end of DOCTYPE.", "unexpected-char-in-doctype": "Unexpected character in DOCTYPE.", "eof-in-innerhtml": "XXX innerHTML EOF", "unexpected-doctype": "Unexpected DOCTYPE. Ignored.", "non-html-root": "html needs to be the first start tag.", "expected-doctype-but-got-eof": "Unexpected End of file. Expected DOCTYPE.", "unknown-doctype": "Erroneous DOCTYPE.", "expected-doctype-but-got-chars": "Unexpected non-space characters. Expected DOCTYPE.", "expected-doctype-but-got-start-tag": "Unexpected start tag (%(name)s). Expected DOCTYPE.", "expected-doctype-but-got-end-tag": "Unexpected end tag (%(name)s). Expected DOCTYPE.", "end-tag-after-implied-root": "Unexpected end tag (%(name)s) after the (implied) root element.", "expected-named-closing-tag-but-got-eof": "Unexpected end of file. Expected end tag (%(name)s).", "two-heads-are-not-better-than-one": "Unexpected start tag head in existing head. Ignored.", "unexpected-end-tag": "Unexpected end tag (%(name)s). Ignored.", "unexpected-start-tag-out-of-my-head": "Unexpected start tag (%(name)s) that can be in head. Moved.", "unexpected-start-tag": "Unexpected start tag (%(name)s).", "missing-end-tag": "Missing end tag (%(name)s).", "missing-end-tags": "Missing end tags (%(name)s).", "unexpected-start-tag-implies-end-tag": "Unexpected start tag (%(startName)s) " "implies end tag (%(endName)s).", "unexpected-start-tag-treated-as": "Unexpected start tag (%(originalName)s). Treated as %(newName)s.", "deprecated-tag": "Unexpected start tag %(name)s. Don't use it!", "unexpected-start-tag-ignored": "Unexpected start tag %(name)s. Ignored.", "expected-one-end-tag-but-got-another": "Unexpected end tag (%(gotName)s). " "Missing end tag (%(expectedName)s).", "end-tag-too-early": "End tag (%(name)s) seen too early. Expected other end tag.", "end-tag-too-early-named": "Unexpected end tag (%(gotName)s). Expected end tag (%(expectedName)s).", "end-tag-too-early-ignored": "End tag (%(name)s) seen too early. Ignored.", "adoption-agency-1.1": "End tag (%(name)s) violates step 1, " "paragraph 1 of the adoption agency algorithm.", "adoption-agency-1.2": "End tag (%(name)s) violates step 1, " "paragraph 2 of the adoption agency algorithm.", "adoption-agency-1.3": "End tag (%(name)s) violates step 1, " "paragraph 3 of the adoption agency algorithm.", "adoption-agency-4.4": "End tag (%(name)s) violates step 4, " "paragraph 4 of the adoption agency algorithm.", "unexpected-end-tag-treated-as": "Unexpected end tag (%(originalName)s). 
Treated as %(newName)s.", "no-end-tag": "This element (%(name)s) has no end tag.", "unexpected-implied-end-tag-in-table": "Unexpected implied end tag (%(name)s) in the table phase.", "unexpected-implied-end-tag-in-table-body": "Unexpected implied end tag (%(name)s) in the table body phase.", "unexpected-char-implies-table-voodoo": "Unexpected non-space characters in " "table context caused voodoo mode.", "unexpected-hidden-input-in-table": "Unexpected input with type hidden in table context.", "unexpected-form-in-table": "Unexpected form in table context.", "unexpected-start-tag-implies-table-voodoo": "Unexpected start tag (%(name)s) in " "table context caused voodoo mode.", "unexpected-end-tag-implies-table-voodoo": "Unexpected end tag (%(name)s) in " "table context caused voodoo mode.", "unexpected-cell-in-table-body": "Unexpected table cell start tag (%(name)s) " "in the table body phase.", "unexpected-cell-end-tag": "Got table cell end tag (%(name)s) " "while required end tags are missing.", "unexpected-end-tag-in-table-body": "Unexpected end tag (%(name)s) in the table body phase. Ignored.", "unexpected-implied-end-tag-in-table-row": "Unexpected implied end tag (%(name)s) in the table row phase.", "unexpected-end-tag-in-table-row": "Unexpected end tag (%(name)s) in the table row phase. Ignored.", "unexpected-select-in-select": "Unexpected select start tag in the select phase " "treated as select end tag.", "unexpected-input-in-select": "Unexpected input start tag in the select phase.", "unexpected-start-tag-in-select": "Unexpected start tag token (%(name)s in the select phase. " "Ignored.", "unexpected-end-tag-in-select": "Unexpected end tag (%(name)s) in the select phase. Ignored.", "unexpected-table-element-start-tag-in-select-in-table": "Unexpected table element start tag (%(name)s) in the select in table phase.", "unexpected-table-element-end-tag-in-select-in-table": "Unexpected table element end tag (%(name)s) in the select in table phase.", "unexpected-char-after-body": "Unexpected non-space characters in the after body phase.", "unexpected-start-tag-after-body": "Unexpected start tag token (%(name)s)" " in the after body phase.", "unexpected-end-tag-after-body": "Unexpected end tag token (%(name)s)" " in the after body phase.", "unexpected-char-in-frameset": "Unexpected characters in the frameset phase. Characters ignored.", "unexpected-start-tag-in-frameset": "Unexpected start tag token (%(name)s)" " in the frameset phase. Ignored.", "unexpected-frameset-in-frameset-innerhtml": "Unexpected end tag token (frameset) " "in the frameset phase (innerHTML).", "unexpected-end-tag-in-frameset": "Unexpected end tag token (%(name)s)" " in the frameset phase. Ignored.", "unexpected-char-after-frameset": "Unexpected non-space characters in the " "after frameset phase. Ignored.", "unexpected-start-tag-after-frameset": "Unexpected start tag (%(name)s)" " in the after frameset phase. Ignored.", "unexpected-end-tag-after-frameset": "Unexpected end tag (%(name)s)" " in the after frameset phase. Ignored.", "unexpected-end-tag-after-body-innerhtml": "Unexpected end tag after body(innerHtml)", "expected-eof-but-got-char": "Unexpected non-space characters. Expected end of file.", "expected-eof-but-got-start-tag": "Unexpected start tag (%(name)s)" ". Expected end of file.", "expected-eof-but-got-end-tag": "Unexpected end tag (%(name)s)" ". Expected end of file.", "eof-in-table": "Unexpected end of file. Expected table content.", "eof-in-select": "Unexpected end of file. 
Expected select content.", "eof-in-frameset": "Unexpected end of file. Expected frameset content.", "eof-in-script-in-script": "Unexpected end of file. Expected script content.", "eof-in-foreign-lands": "Unexpected end of file. Expected foreign content", "non-void-element-with-trailing-solidus": "Trailing solidus not allowed on element %(name)s", "unexpected-html-element-in-foreign-content": "Element %(name)s not allowed in a non-html context", "unexpected-end-tag-before-html": "Unexpected end tag (%(name)s) before html.", "unexpected-inhead-noscript-tag": "Element %(name)s not allowed in a inhead-noscript context", "eof-in-head-noscript": "Unexpected end of file. Expected inhead-noscript content", "char-in-head-noscript": "Unexpected non-space character. Expected inhead-noscript content", "XXX-undefined-error": "Undefined error (this sucks and should be fixed)", } namespaces = { "html": "http://www.w3.org/1999/xhtml", "mathml": "http://www.w3.org/1998/Math/MathML", "svg": "http://www.w3.org/2000/svg", "xlink": "http://www.w3.org/1999/xlink", "xml": "http://www.w3.org/XML/1998/namespace", "xmlns": "http://www.w3.org/2000/xmlns/" } scopingElements = frozenset([ (namespaces["html"], "applet"), (namespaces["html"], "caption"), (namespaces["html"], "html"), (namespaces["html"], "marquee"), (namespaces["html"], "object"), (namespaces["html"], "table"), (namespaces["html"], "td"), (namespaces["html"], "th"), (namespaces["mathml"], "mi"), (namespaces["mathml"], "mo"), (namespaces["mathml"], "mn"), (namespaces["mathml"], "ms"), (namespaces["mathml"], "mtext"), (namespaces["mathml"], "annotation-xml"), (namespaces["svg"], "foreignObject"), (namespaces["svg"], "desc"), (namespaces["svg"], "title"), ]) formattingElements = frozenset([ (namespaces["html"], "a"), (namespaces["html"], "b"), (namespaces["html"], "big"), (namespaces["html"], "code"), (namespaces["html"], "em"), (namespaces["html"], "font"), (namespaces["html"], "i"), (namespaces["html"], "nobr"), (namespaces["html"], "s"), (namespaces["html"], "small"), (namespaces["html"], "strike"), (namespaces["html"], "strong"), (namespaces["html"], "tt"), (namespaces["html"], "u") ]) specialElements = frozenset([ (namespaces["html"], "address"), (namespaces["html"], "applet"), (namespaces["html"], "area"), (namespaces["html"], "article"), (namespaces["html"], "aside"), (namespaces["html"], "base"), (namespaces["html"], "basefont"), (namespaces["html"], "bgsound"), (namespaces["html"], "blockquote"), (namespaces["html"], "body"), (namespaces["html"], "br"), (namespaces["html"], "button"), (namespaces["html"], "caption"), (namespaces["html"], "center"), (namespaces["html"], "col"), (namespaces["html"], "colgroup"), (namespaces["html"], "command"), (namespaces["html"], "dd"), (namespaces["html"], "details"), (namespaces["html"], "dir"), (namespaces["html"], "div"), (namespaces["html"], "dl"), (namespaces["html"], "dt"), (namespaces["html"], "embed"), (namespaces["html"], "fieldset"), (namespaces["html"], "figure"), (namespaces["html"], "footer"), (namespaces["html"], "form"), (namespaces["html"], "frame"), (namespaces["html"], "frameset"), (namespaces["html"], "h1"), (namespaces["html"], "h2"), (namespaces["html"], "h3"), (namespaces["html"], "h4"), (namespaces["html"], "h5"), (namespaces["html"], "h6"), (namespaces["html"], "head"), (namespaces["html"], "header"), (namespaces["html"], "hr"), (namespaces["html"], "html"), (namespaces["html"], "iframe"), # Note that image is commented out in the spec as "this isn't an # element that can end up on the 
stack, so it doesn't matter," (namespaces["html"], "image"), (namespaces["html"], "img"), (namespaces["html"], "input"), (namespaces["html"], "isindex"), (namespaces["html"], "li"), (namespaces["html"], "link"), (namespaces["html"], "listing"), (namespaces["html"], "marquee"), (namespaces["html"], "menu"), (namespaces["html"], "meta"), (namespaces["html"], "nav"), (namespaces["html"], "noembed"), (namespaces["html"], "noframes"), (namespaces["html"], "noscript"), (namespaces["html"], "object"), (namespaces["html"], "ol"), (namespaces["html"], "p"), (namespaces["html"], "param"), (namespaces["html"], "plaintext"), (namespaces["html"], "pre"), (namespaces["html"], "script"), (namespaces["html"], "section"), (namespaces["html"], "select"), (namespaces["html"], "style"), (namespaces["html"], "table"), (namespaces["html"], "tbody"), (namespaces["html"], "td"), (namespaces["html"], "textarea"), (namespaces["html"], "tfoot"), (namespaces["html"], "th"), (namespaces["html"], "thead"), (namespaces["html"], "title"), (namespaces["html"], "tr"), (namespaces["html"], "ul"), (namespaces["html"], "wbr"), (namespaces["html"], "xmp"), (namespaces["svg"], "foreignObject") ]) htmlIntegrationPointElements = frozenset([ (namespaces["mathml"], "annotation-xml"), (namespaces["svg"], "foreignObject"), (namespaces["svg"], "desc"), (namespaces["svg"], "title") ]) mathmlTextIntegrationPointElements = frozenset([ (namespaces["mathml"], "mi"), (namespaces["mathml"], "mo"), (namespaces["mathml"], "mn"), (namespaces["mathml"], "ms"), (namespaces["mathml"], "mtext") ]) adjustSVGAttributes = { "attributename": "attributeName", "attributetype": "attributeType", "basefrequency": "baseFrequency", "baseprofile": "baseProfile", "calcmode": "calcMode", "clippathunits": "clipPathUnits", "contentscripttype": "contentScriptType", "contentstyletype": "contentStyleType", "diffuseconstant": "diffuseConstant", "edgemode": "edgeMode", "externalresourcesrequired": "externalResourcesRequired", "filterres": "filterRes", "filterunits": "filterUnits", "glyphref": "glyphRef", "gradienttransform": "gradientTransform", "gradientunits": "gradientUnits", "kernelmatrix": "kernelMatrix", "kernelunitlength": "kernelUnitLength", "keypoints": "keyPoints", "keysplines": "keySplines", "keytimes": "keyTimes", "lengthadjust": "lengthAdjust", "limitingconeangle": "limitingConeAngle", "markerheight": "markerHeight", "markerunits": "markerUnits", "markerwidth": "markerWidth", "maskcontentunits": "maskContentUnits", "maskunits": "maskUnits", "numoctaves": "numOctaves", "pathlength": "pathLength", "patterncontentunits": "patternContentUnits", "patterntransform": "patternTransform", "patternunits": "patternUnits", "pointsatx": "pointsAtX", "pointsaty": "pointsAtY", "pointsatz": "pointsAtZ", "preservealpha": "preserveAlpha", "preserveaspectratio": "preserveAspectRatio", "primitiveunits": "primitiveUnits", "refx": "refX", "refy": "refY", "repeatcount": "repeatCount", "repeatdur": "repeatDur", "requiredextensions": "requiredExtensions", "requiredfeatures": "requiredFeatures", "specularconstant": "specularConstant", "specularexponent": "specularExponent", "spreadmethod": "spreadMethod", "startoffset": "startOffset", "stddeviation": "stdDeviation", "stitchtiles": "stitchTiles", "surfacescale": "surfaceScale", "systemlanguage": "systemLanguage", "tablevalues": "tableValues", "targetx": "targetX", "targety": "targetY", "textlength": "textLength", "viewbox": "viewBox", "viewtarget": "viewTarget", "xchannelselector": "xChannelSelector", "ychannelselector": 
"yChannelSelector", "zoomandpan": "zoomAndPan" } adjustMathMLAttributes = {"definitionurl": "definitionURL"} adjustForeignAttributes = { "xlink:actuate": ("xlink", "actuate", namespaces["xlink"]), "xlink:arcrole": ("xlink", "arcrole", namespaces["xlink"]), "xlink:href": ("xlink", "href", namespaces["xlink"]), "xlink:role": ("xlink", "role", namespaces["xlink"]), "xlink:show": ("xlink", "show", namespaces["xlink"]), "xlink:title": ("xlink", "title", namespaces["xlink"]), "xlink:type": ("xlink", "type", namespaces["xlink"]), "xml:base": ("xml", "base", namespaces["xml"]), "xml:lang": ("xml", "lang", namespaces["xml"]), "xml:space": ("xml", "space", namespaces["xml"]), "xmlns": (None, "xmlns", namespaces["xmlns"]), "xmlns:xlink": ("xmlns", "xlink", namespaces["xmlns"]) } unadjustForeignAttributes = dict([((ns, local), qname) for qname, (prefix, local, ns) in adjustForeignAttributes.items()]) spaceCharacters = frozenset([ "\t", "\n", "\u000C", " ", "\r" ]) tableInsertModeElements = frozenset([ "table", "tbody", "tfoot", "thead", "tr" ]) asciiLowercase = frozenset(string.ascii_lowercase) asciiUppercase = frozenset(string.ascii_uppercase) asciiLetters = frozenset(string.ascii_letters) digits = frozenset(string.digits) hexDigits = frozenset(string.hexdigits) asciiUpper2Lower = dict([(ord(c), ord(c.lower())) for c in string.ascii_uppercase]) # Heading elements need to be ordered headingElements = ( "h1", "h2", "h3", "h4", "h5", "h6" ) voidElements = frozenset([ "base", "command", "event-source", "link", "meta", "hr", "br", "img", "embed", "param", "area", "col", "input", "source", "track" ]) cdataElements = frozenset(['title', 'textarea']) rcdataElements = frozenset([ 'style', 'script', 'xmp', 'iframe', 'noembed', 'noframes', 'noscript' ]) booleanAttributes = { "": frozenset(["irrelevant", "itemscope"]), "style": frozenset(["scoped"]), "img": frozenset(["ismap"]), "audio": frozenset(["autoplay", "controls"]), "video": frozenset(["autoplay", "controls"]), "script": frozenset(["defer", "async"]), "details": frozenset(["open"]), "datagrid": frozenset(["multiple", "disabled"]), "command": frozenset(["hidden", "disabled", "checked", "default"]), "hr": frozenset(["noshade"]), "menu": frozenset(["autosubmit"]), "fieldset": frozenset(["disabled", "readonly"]), "option": frozenset(["disabled", "readonly", "selected"]), "optgroup": frozenset(["disabled", "readonly"]), "button": frozenset(["disabled", "autofocus"]), "input": frozenset(["disabled", "readonly", "required", "autofocus", "checked", "ismap"]), "select": frozenset(["disabled", "readonly", "autofocus", "multiple"]), "output": frozenset(["disabled", "readonly"]), "iframe": frozenset(["seamless"]), } # entitiesWindows1252 has to be _ordered_ and needs to have an index. It # therefore can't be a frozenset. 
# entitiesWindows1252 has to be _ordered_ and needs to have an index. It
# therefore can't be a frozenset.
entitiesWindows1252 = (
    8364,   # 0x80  0x20AC  EURO SIGN
    65533,  # 0x81          UNDEFINED
    8218,   # 0x82  0x201A  SINGLE LOW-9 QUOTATION MARK
    402,    # 0x83  0x0192  LATIN SMALL LETTER F WITH HOOK
    8222,   # 0x84  0x201E  DOUBLE LOW-9 QUOTATION MARK
    8230,   # 0x85  0x2026  HORIZONTAL ELLIPSIS
    8224,   # 0x86  0x2020  DAGGER
    8225,   # 0x87  0x2021  DOUBLE DAGGER
    710,    # 0x88  0x02C6  MODIFIER LETTER CIRCUMFLEX ACCENT
    8240,   # 0x89  0x2030  PER MILLE SIGN
    352,    # 0x8A  0x0160  LATIN CAPITAL LETTER S WITH CARON
    8249,   # 0x8B  0x2039  SINGLE LEFT-POINTING ANGLE QUOTATION MARK
    338,    # 0x8C  0x0152  LATIN CAPITAL LIGATURE OE
    65533,  # 0x8D          UNDEFINED
    381,    # 0x8E  0x017D  LATIN CAPITAL LETTER Z WITH CARON
    65533,  # 0x8F          UNDEFINED
    65533,  # 0x90          UNDEFINED
    8216,   # 0x91  0x2018  LEFT SINGLE QUOTATION MARK
    8217,   # 0x92  0x2019  RIGHT SINGLE QUOTATION MARK
    8220,   # 0x93  0x201C  LEFT DOUBLE QUOTATION MARK
    8221,   # 0x94  0x201D  RIGHT DOUBLE QUOTATION MARK
    8226,   # 0x95  0x2022  BULLET
    8211,   # 0x96  0x2013  EN DASH
    8212,   # 0x97  0x2014  EM DASH
    732,    # 0x98  0x02DC  SMALL TILDE
    8482,   # 0x99  0x2122  TRADE MARK SIGN
    353,    # 0x9A  0x0161  LATIN SMALL LETTER S WITH CARON
    8250,   # 0x9B  0x203A  SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
    339,    # 0x9C  0x0153  LATIN SMALL LIGATURE OE
    65533,  # 0x9D          UNDEFINED
    382,    # 0x9E  0x017E  LATIN SMALL LETTER Z WITH CARON
    376     # 0x9F  0x0178  LATIN CAPITAL LETTER Y WITH DIAERESIS
)

xmlEntities = frozenset(['lt;', 'gt;', 'amp;', 'apos;', 'quot;'])

entities = {
    "AElig": "\xc6", "AElig;": "\xc6", "AMP": "&", "AMP;": "&", "Aacute": "\xc1", "Aacute;": "\xc1",
    "Abreve;": "\u0102", "Acirc": "\xc2", "Acirc;": "\xc2", "Acy;": "\u0410", "Afr;": "\U0001d504",
    "Agrave": "\xc0", "Agrave;": "\xc0", "Alpha;": "\u0391", "Amacr;": "\u0100", "And;": "\u2a53",
    "Aogon;": "\u0104", "Aopf;": "\U0001d538", "ApplyFunction;": "\u2061", "Aring": "\xc5", "Aring;": "\xc5",
    "Ascr;": "\U0001d49c", "Assign;": "\u2254", "Atilde": "\xc3", "Atilde;": "\xc3", "Auml": "\xc4", "Auml;": "\xc4",
    "Backslash;": "\u2216", "Barv;": "\u2ae7", "Barwed;": "\u2306", "Bcy;": "\u0411", "Because;": "\u2235",
    "Bernoullis;": "\u212c", "Beta;": "\u0392", "Bfr;": "\U0001d505", "Bopf;": "\U0001d539", "Breve;": "\u02d8",
    "Bscr;": "\u212c", "Bumpeq;": "\u224e", "CHcy;": "\u0427", "COPY": "\xa9", "COPY;": "\xa9",
    "Cacute;": "\u0106", "Cap;": "\u22d2", "CapitalDifferentialD;": "\u2145", "Cayleys;": "\u212d",
    "Ccaron;": "\u010c", "Ccedil": "\xc7", "Ccedil;": "\xc7", "Ccirc;": "\u0108", "Cconint;": "\u2230",
    "Cdot;": "\u010a", "Cedilla;": "\xb8", "CenterDot;": "\xb7", "Cfr;": "\u212d", "Chi;": "\u03a7",
    "CircleDot;": "\u2299", "CircleMinus;": "\u2296", "CirclePlus;": "\u2295", "CircleTimes;": "\u2297",
    "ClockwiseContourIntegral;": "\u2232", "CloseCurlyDoubleQuote;": "\u201d", "CloseCurlyQuote;": "\u2019",
    "Colon;": "\u2237", "Colone;": "\u2a74", "Congruent;": "\u2261", "Conint;": "\u222f",
    "ContourIntegral;": "\u222e", "Copf;": "\u2102", "Coproduct;": "\u2210",
    "CounterClockwiseContourIntegral;": "\u2233", "Cross;": "\u2a2f", "Cscr;": "\U0001d49e", "Cup;": "\u22d3",
    "CupCap;": "\u224d", "DD;": "\u2145", "DDotrahd;": "\u2911", "DJcy;": "\u0402", "DScy;": "\u0405",
    "DZcy;": "\u040f", "Dagger;": "\u2021", "Darr;": "\u21a1", "Dashv;": "\u2ae4", "Dcaron;": "\u010e",
    "Dcy;": "\u0414", "Del;": "\u2207", "Delta;": "\u0394", "Dfr;": "\U0001d507", "DiacriticalAcute;": "\xb4",
    "DiacriticalDot;": "\u02d9", "DiacriticalDoubleAcute;": "\u02dd", "DiacriticalGrave;": "`",
    "DiacriticalTilde;": "\u02dc", "Diamond;": "\u22c4", "DifferentialD;": "\u2146", "Dopf;": "\U0001d53b",
    "Dot;": "\xa8", "DotDot;": "\u20dc", "DotEqual;": "\u2250",
"DoubleContourIntegral;": "\u222f", "DoubleDot;": "\xa8", "DoubleDownArrow;": "\u21d3", "DoubleLeftArrow;": "\u21d0", "DoubleLeftRightArrow;": "\u21d4", "DoubleLeftTee;": "\u2ae4", "DoubleLongLeftArrow;": "\u27f8", "DoubleLongLeftRightArrow;": "\u27fa", "DoubleLongRightArrow;": "\u27f9", "DoubleRightArrow;": "\u21d2", "DoubleRightTee;": "\u22a8", "DoubleUpArrow;": "\u21d1", "DoubleUpDownArrow;": "\u21d5", "DoubleVerticalBar;": "\u2225", "DownArrow;": "\u2193", "DownArrowBar;": "\u2913", "DownArrowUpArrow;": "\u21f5", "DownBreve;": "\u0311", "DownLeftRightVector;": "\u2950", "DownLeftTeeVector;": "\u295e", "DownLeftVector;": "\u21bd", "DownLeftVectorBar;": "\u2956", "DownRightTeeVector;": "\u295f", "DownRightVector;": "\u21c1", "DownRightVectorBar;": "\u2957", "DownTee;": "\u22a4", "DownTeeArrow;": "\u21a7", "Downarrow;": "\u21d3", "Dscr;": "\U0001d49f", "Dstrok;": "\u0110", "ENG;": "\u014a", "ETH": "\xd0", "ETH;": "\xd0", "Eacute": "\xc9", "Eacute;": "\xc9", "Ecaron;": "\u011a", "Ecirc": "\xca", "Ecirc;": "\xca", "Ecy;": "\u042d", "Edot;": "\u0116", "Efr;": "\U0001d508", "Egrave": "\xc8", "Egrave;": "\xc8", "Element;": "\u2208", "Emacr;": "\u0112", "EmptySmallSquare;": "\u25fb", "EmptyVerySmallSquare;": "\u25ab", "Eogon;": "\u0118", "Eopf;": "\U0001d53c", "Epsilon;": "\u0395", "Equal;": "\u2a75", "EqualTilde;": "\u2242", "Equilibrium;": "\u21cc", "Escr;": "\u2130", "Esim;": "\u2a73", "Eta;": "\u0397", "Euml": "\xcb", "Euml;": "\xcb", "Exists;": "\u2203", "ExponentialE;": "\u2147", "Fcy;": "\u0424", "Ffr;": "\U0001d509", "FilledSmallSquare;": "\u25fc", "FilledVerySmallSquare;": "\u25aa", "Fopf;": "\U0001d53d", "ForAll;": "\u2200", "Fouriertrf;": "\u2131", "Fscr;": "\u2131", "GJcy;": "\u0403", "GT": ">", "GT;": ">", "Gamma;": "\u0393", "Gammad;": "\u03dc", "Gbreve;": "\u011e", "Gcedil;": "\u0122", "Gcirc;": "\u011c", "Gcy;": "\u0413", "Gdot;": "\u0120", "Gfr;": "\U0001d50a", "Gg;": "\u22d9", "Gopf;": "\U0001d53e", "GreaterEqual;": "\u2265", "GreaterEqualLess;": "\u22db", "GreaterFullEqual;": "\u2267", "GreaterGreater;": "\u2aa2", "GreaterLess;": "\u2277", "GreaterSlantEqual;": "\u2a7e", "GreaterTilde;": "\u2273", "Gscr;": "\U0001d4a2", "Gt;": "\u226b", "HARDcy;": "\u042a", "Hacek;": "\u02c7", "Hat;": "^", "Hcirc;": "\u0124", "Hfr;": "\u210c", "HilbertSpace;": "\u210b", "Hopf;": "\u210d", "HorizontalLine;": "\u2500", "Hscr;": "\u210b", "Hstrok;": "\u0126", "HumpDownHump;": "\u224e", "HumpEqual;": "\u224f", "IEcy;": "\u0415", "IJlig;": "\u0132", "IOcy;": "\u0401", "Iacute": "\xcd", "Iacute;": "\xcd", "Icirc": "\xce", "Icirc;": "\xce", "Icy;": "\u0418", "Idot;": "\u0130", "Ifr;": "\u2111", "Igrave": "\xcc", "Igrave;": "\xcc", "Im;": "\u2111", "Imacr;": "\u012a", "ImaginaryI;": "\u2148", "Implies;": "\u21d2", "Int;": "\u222c", "Integral;": "\u222b", "Intersection;": "\u22c2", "InvisibleComma;": "\u2063", "InvisibleTimes;": "\u2062", "Iogon;": "\u012e", "Iopf;": "\U0001d540", "Iota;": "\u0399", "Iscr;": "\u2110", "Itilde;": "\u0128", "Iukcy;": "\u0406", "Iuml": "\xcf", "Iuml;": "\xcf", "Jcirc;": "\u0134", "Jcy;": "\u0419", "Jfr;": "\U0001d50d", "Jopf;": "\U0001d541", "Jscr;": "\U0001d4a5", "Jsercy;": "\u0408", "Jukcy;": "\u0404", "KHcy;": "\u0425", "KJcy;": "\u040c", "Kappa;": "\u039a", "Kcedil;": "\u0136", "Kcy;": "\u041a", "Kfr;": "\U0001d50e", "Kopf;": "\U0001d542", "Kscr;": "\U0001d4a6", "LJcy;": "\u0409", "LT": "<", "LT;": "<", "Lacute;": "\u0139", "Lambda;": "\u039b", "Lang;": "\u27ea", "Laplacetrf;": "\u2112", "Larr;": "\u219e", "Lcaron;": "\u013d", "Lcedil;": "\u013b", "Lcy;": "\u041b", 
"LeftAngleBracket;": "\u27e8", "LeftArrow;": "\u2190", "LeftArrowBar;": "\u21e4", "LeftArrowRightArrow;": "\u21c6", "LeftCeiling;": "\u2308", "LeftDoubleBracket;": "\u27e6", "LeftDownTeeVector;": "\u2961", "LeftDownVector;": "\u21c3", "LeftDownVectorBar;": "\u2959", "LeftFloor;": "\u230a", "LeftRightArrow;": "\u2194", "LeftRightVector;": "\u294e", "LeftTee;": "\u22a3", "LeftTeeArrow;": "\u21a4", "LeftTeeVector;": "\u295a", "LeftTriangle;": "\u22b2", "LeftTriangleBar;": "\u29cf", "LeftTriangleEqual;": "\u22b4", "LeftUpDownVector;": "\u2951", "LeftUpTeeVector;": "\u2960", "LeftUpVector;": "\u21bf", "LeftUpVectorBar;": "\u2958", "LeftVector;": "\u21bc", "LeftVectorBar;": "\u2952", "Leftarrow;": "\u21d0", "Leftrightarrow;": "\u21d4", "LessEqualGreater;": "\u22da", "LessFullEqual;": "\u2266", "LessGreater;": "\u2276", "LessLess;": "\u2aa1", "LessSlantEqual;": "\u2a7d", "LessTilde;": "\u2272", "Lfr;": "\U0001d50f", "Ll;": "\u22d8", "Lleftarrow;": "\u21da", "Lmidot;": "\u013f", "LongLeftArrow;": "\u27f5", "LongLeftRightArrow;": "\u27f7", "LongRightArrow;": "\u27f6", "Longleftarrow;": "\u27f8", "Longleftrightarrow;": "\u27fa", "Longrightarrow;": "\u27f9", "Lopf;": "\U0001d543", "LowerLeftArrow;": "\u2199", "LowerRightArrow;": "\u2198", "Lscr;": "\u2112", "Lsh;": "\u21b0", "Lstrok;": "\u0141", "Lt;": "\u226a", "Map;": "\u2905", "Mcy;": "\u041c", "MediumSpace;": "\u205f", "Mellintrf;": "\u2133", "Mfr;": "\U0001d510", "MinusPlus;": "\u2213", "Mopf;": "\U0001d544", "Mscr;": "\u2133", "Mu;": "\u039c", "NJcy;": "\u040a", "Nacute;": "\u0143", "Ncaron;": "\u0147", "Ncedil;": "\u0145", "Ncy;": "\u041d", "NegativeMediumSpace;": "\u200b", "NegativeThickSpace;": "\u200b", "NegativeThinSpace;": "\u200b", "NegativeVeryThinSpace;": "\u200b", "NestedGreaterGreater;": "\u226b", "NestedLessLess;": "\u226a", "NewLine;": "\n", "Nfr;": "\U0001d511", "NoBreak;": "\u2060", "NonBreakingSpace;": "\xa0", "Nopf;": "\u2115", "Not;": "\u2aec", "NotCongruent;": "\u2262", "NotCupCap;": "\u226d", "NotDoubleVerticalBar;": "\u2226", "NotElement;": "\u2209", "NotEqual;": "\u2260", "NotEqualTilde;": "\u2242\u0338", "NotExists;": "\u2204", "NotGreater;": "\u226f", "NotGreaterEqual;": "\u2271", "NotGreaterFullEqual;": "\u2267\u0338", "NotGreaterGreater;": "\u226b\u0338", "NotGreaterLess;": "\u2279", "NotGreaterSlantEqual;": "\u2a7e\u0338", "NotGreaterTilde;": "\u2275", "NotHumpDownHump;": "\u224e\u0338", "NotHumpEqual;": "\u224f\u0338", "NotLeftTriangle;": "\u22ea", "NotLeftTriangleBar;": "\u29cf\u0338", "NotLeftTriangleEqual;": "\u22ec", "NotLess;": "\u226e", "NotLessEqual;": "\u2270", "NotLessGreater;": "\u2278", "NotLessLess;": "\u226a\u0338", "NotLessSlantEqual;": "\u2a7d\u0338", "NotLessTilde;": "\u2274", "NotNestedGreaterGreater;": "\u2aa2\u0338", "NotNestedLessLess;": "\u2aa1\u0338", "NotPrecedes;": "\u2280", "NotPrecedesEqual;": "\u2aaf\u0338", "NotPrecedesSlantEqual;": "\u22e0", "NotReverseElement;": "\u220c", "NotRightTriangle;": "\u22eb", "NotRightTriangleBar;": "\u29d0\u0338", "NotRightTriangleEqual;": "\u22ed", "NotSquareSubset;": "\u228f\u0338", "NotSquareSubsetEqual;": "\u22e2", "NotSquareSuperset;": "\u2290\u0338", "NotSquareSupersetEqual;": "\u22e3", "NotSubset;": "\u2282\u20d2", "NotSubsetEqual;": "\u2288", "NotSucceeds;": "\u2281", "NotSucceedsEqual;": "\u2ab0\u0338", "NotSucceedsSlantEqual;": "\u22e1", "NotSucceedsTilde;": "\u227f\u0338", "NotSuperset;": "\u2283\u20d2", "NotSupersetEqual;": "\u2289", "NotTilde;": "\u2241", "NotTildeEqual;": "\u2244", "NotTildeFullEqual;": "\u2247", "NotTildeTilde;": "\u2249", 
"NotVerticalBar;": "\u2224", "Nscr;": "\U0001d4a9", "Ntilde": "\xd1", "Ntilde;": "\xd1", "Nu;": "\u039d", "OElig;": "\u0152", "Oacute": "\xd3", "Oacute;": "\xd3", "Ocirc": "\xd4", "Ocirc;": "\xd4", "Ocy;": "\u041e", "Odblac;": "\u0150", "Ofr;": "\U0001d512", "Ograve": "\xd2", "Ograve;": "\xd2", "Omacr;": "\u014c", "Omega;": "\u03a9", "Omicron;": "\u039f", "Oopf;": "\U0001d546", "OpenCurlyDoubleQuote;": "\u201c", "OpenCurlyQuote;": "\u2018", "Or;": "\u2a54", "Oscr;": "\U0001d4aa", "Oslash": "\xd8", "Oslash;": "\xd8", "Otilde": "\xd5", "Otilde;": "\xd5", "Otimes;": "\u2a37", "Ouml": "\xd6", "Ouml;": "\xd6", "OverBar;": "\u203e", "OverBrace;": "\u23de", "OverBracket;": "\u23b4", "OverParenthesis;": "\u23dc", "PartialD;": "\u2202", "Pcy;": "\u041f", "Pfr;": "\U0001d513", "Phi;": "\u03a6", "Pi;": "\u03a0", "PlusMinus;": "\xb1", "Poincareplane;": "\u210c", "Popf;": "\u2119", "Pr;": "\u2abb", "Precedes;": "\u227a", "PrecedesEqual;": "\u2aaf", "PrecedesSlantEqual;": "\u227c", "PrecedesTilde;": "\u227e", "Prime;": "\u2033", "Product;": "\u220f", "Proportion;": "\u2237", "Proportional;": "\u221d", "Pscr;": "\U0001d4ab", "Psi;": "\u03a8", "QUOT": "\"", "QUOT;": "\"", "Qfr;": "\U0001d514", "Qopf;": "\u211a", "Qscr;": "\U0001d4ac", "RBarr;": "\u2910", "REG": "\xae", "REG;": "\xae", "Racute;": "\u0154", "Rang;": "\u27eb", "Rarr;": "\u21a0", "Rarrtl;": "\u2916", "Rcaron;": "\u0158", "Rcedil;": "\u0156", "Rcy;": "\u0420", "Re;": "\u211c", "ReverseElement;": "\u220b", "ReverseEquilibrium;": "\u21cb", "ReverseUpEquilibrium;": "\u296f", "Rfr;": "\u211c", "Rho;": "\u03a1", "RightAngleBracket;": "\u27e9", "RightArrow;": "\u2192", "RightArrowBar;": "\u21e5", "RightArrowLeftArrow;": "\u21c4", "RightCeiling;": "\u2309", "RightDoubleBracket;": "\u27e7", "RightDownTeeVector;": "\u295d", "RightDownVector;": "\u21c2", "RightDownVectorBar;": "\u2955", "RightFloor;": "\u230b", "RightTee;": "\u22a2", "RightTeeArrow;": "\u21a6", "RightTeeVector;": "\u295b", "RightTriangle;": "\u22b3", "RightTriangleBar;": "\u29d0", "RightTriangleEqual;": "\u22b5", "RightUpDownVector;": "\u294f", "RightUpTeeVector;": "\u295c", "RightUpVector;": "\u21be", "RightUpVectorBar;": "\u2954", "RightVector;": "\u21c0", "RightVectorBar;": "\u2953", "Rightarrow;": "\u21d2", "Ropf;": "\u211d", "RoundImplies;": "\u2970", "Rrightarrow;": "\u21db", "Rscr;": "\u211b", "Rsh;": "\u21b1", "RuleDelayed;": "\u29f4", "SHCHcy;": "\u0429", "SHcy;": "\u0428", "SOFTcy;": "\u042c", "Sacute;": "\u015a", "Sc;": "\u2abc", "Scaron;": "\u0160", "Scedil;": "\u015e", "Scirc;": "\u015c", "Scy;": "\u0421", "Sfr;": "\U0001d516", "ShortDownArrow;": "\u2193", "ShortLeftArrow;": "\u2190", "ShortRightArrow;": "\u2192", "ShortUpArrow;": "\u2191", "Sigma;": "\u03a3", "SmallCircle;": "\u2218", "Sopf;": "\U0001d54a", "Sqrt;": "\u221a", "Square;": "\u25a1", "SquareIntersection;": "\u2293", "SquareSubset;": "\u228f", "SquareSubsetEqual;": "\u2291", "SquareSuperset;": "\u2290", "SquareSupersetEqual;": "\u2292", "SquareUnion;": "\u2294", "Sscr;": "\U0001d4ae", "Star;": "\u22c6", "Sub;": "\u22d0", "Subset;": "\u22d0", "SubsetEqual;": "\u2286", "Succeeds;": "\u227b", "SucceedsEqual;": "\u2ab0", "SucceedsSlantEqual;": "\u227d", "SucceedsTilde;": "\u227f", "SuchThat;": "\u220b", "Sum;": "\u2211", "Sup;": "\u22d1", "Superset;": "\u2283", "SupersetEqual;": "\u2287", "Supset;": "\u22d1", "THORN": "\xde", "THORN;": "\xde", "TRADE;": "\u2122", "TSHcy;": "\u040b", "TScy;": "\u0426", "Tab;": "\t", "Tau;": "\u03a4", "Tcaron;": "\u0164", "Tcedil;": "\u0162", "Tcy;": "\u0422", "Tfr;": "\U0001d517", 
"Therefore;": "\u2234", "Theta;": "\u0398", "ThickSpace;": "\u205f\u200a", "ThinSpace;": "\u2009", "Tilde;": "\u223c", "TildeEqual;": "\u2243", "TildeFullEqual;": "\u2245", "TildeTilde;": "\u2248", "Topf;": "\U0001d54b", "TripleDot;": "\u20db", "Tscr;": "\U0001d4af", "Tstrok;": "\u0166", "Uacute": "\xda", "Uacute;": "\xda", "Uarr;": "\u219f", "Uarrocir;": "\u2949", "Ubrcy;": "\u040e", "Ubreve;": "\u016c", "Ucirc": "\xdb", "Ucirc;": "\xdb", "Ucy;": "\u0423", "Udblac;": "\u0170", "Ufr;": "\U0001d518", "Ugrave": "\xd9", "Ugrave;": "\xd9", "Umacr;": "\u016a", "UnderBar;": "_", "UnderBrace;": "\u23df", "UnderBracket;": "\u23b5", "UnderParenthesis;": "\u23dd", "Union;": "\u22c3", "UnionPlus;": "\u228e", "Uogon;": "\u0172", "Uopf;": "\U0001d54c", "UpArrow;": "\u2191", "UpArrowBar;": "\u2912", "UpArrowDownArrow;": "\u21c5", "UpDownArrow;": "\u2195", "UpEquilibrium;": "\u296e", "UpTee;": "\u22a5", "UpTeeArrow;": "\u21a5", "Uparrow;": "\u21d1", "Updownarrow;": "\u21d5", "UpperLeftArrow;": "\u2196", "UpperRightArrow;": "\u2197", "Upsi;": "\u03d2", "Upsilon;": "\u03a5", "Uring;": "\u016e", "Uscr;": "\U0001d4b0", "Utilde;": "\u0168", "Uuml": "\xdc", "Uuml;": "\xdc", "VDash;": "\u22ab", "Vbar;": "\u2aeb", "Vcy;": "\u0412", "Vdash;": "\u22a9", "Vdashl;": "\u2ae6", "Vee;": "\u22c1", "Verbar;": "\u2016", "Vert;": "\u2016", "VerticalBar;": "\u2223", "VerticalLine;": "|", "VerticalSeparator;": "\u2758", "VerticalTilde;": "\u2240", "VeryThinSpace;": "\u200a", "Vfr;": "\U0001d519", "Vopf;": "\U0001d54d", "Vscr;": "\U0001d4b1", "Vvdash;": "\u22aa", "Wcirc;": "\u0174", "Wedge;": "\u22c0", "Wfr;": "\U0001d51a", "Wopf;": "\U0001d54e", "Wscr;": "\U0001d4b2", "Xfr;": "\U0001d51b", "Xi;": "\u039e", "Xopf;": "\U0001d54f", "Xscr;": "\U0001d4b3", "YAcy;": "\u042f", "YIcy;": "\u0407", "YUcy;": "\u042e", "Yacute": "\xdd", "Yacute;": "\xdd", "Ycirc;": "\u0176", "Ycy;": "\u042b", "Yfr;": "\U0001d51c", "Yopf;": "\U0001d550", "Yscr;": "\U0001d4b4", "Yuml;": "\u0178", "ZHcy;": "\u0416", "Zacute;": "\u0179", "Zcaron;": "\u017d", "Zcy;": "\u0417", "Zdot;": "\u017b", "ZeroWidthSpace;": "\u200b", "Zeta;": "\u0396", "Zfr;": "\u2128", "Zopf;": "\u2124", "Zscr;": "\U0001d4b5", "aacute": "\xe1", "aacute;": "\xe1", "abreve;": "\u0103", "ac;": "\u223e", "acE;": "\u223e\u0333", "acd;": "\u223f", "acirc": "\xe2", "acirc;": "\xe2", "acute": "\xb4", "acute;": "\xb4", "acy;": "\u0430", "aelig": "\xe6", "aelig;": "\xe6", "af;": "\u2061", "afr;": "\U0001d51e", "agrave": "\xe0", "agrave;": "\xe0", "alefsym;": "\u2135", "aleph;": "\u2135", "alpha;": "\u03b1", "amacr;": "\u0101", "amalg;": "\u2a3f", "amp": "&", "amp;": "&", "and;": "\u2227", "andand;": "\u2a55", "andd;": "\u2a5c", "andslope;": "\u2a58", "andv;": "\u2a5a", "ang;": "\u2220", "ange;": "\u29a4", "angle;": "\u2220", "angmsd;": "\u2221", "angmsdaa;": "\u29a8", "angmsdab;": "\u29a9", "angmsdac;": "\u29aa", "angmsdad;": "\u29ab", "angmsdae;": "\u29ac", "angmsdaf;": "\u29ad", "angmsdag;": "\u29ae", "angmsdah;": "\u29af", "angrt;": "\u221f", "angrtvb;": "\u22be", "angrtvbd;": "\u299d", "angsph;": "\u2222", "angst;": "\xc5", "angzarr;": "\u237c", "aogon;": "\u0105", "aopf;": "\U0001d552", "ap;": "\u2248", "apE;": "\u2a70", "apacir;": "\u2a6f", "ape;": "\u224a", "apid;": "\u224b", "apos;": "'", "approx;": "\u2248", "approxeq;": "\u224a", "aring": "\xe5", "aring;": "\xe5", "ascr;": "\U0001d4b6", "ast;": "*", "asymp;": "\u2248", "asympeq;": "\u224d", "atilde": "\xe3", "atilde;": "\xe3", "auml": "\xe4", "auml;": "\xe4", "awconint;": "\u2233", "awint;": "\u2a11", "bNot;": "\u2aed", "backcong;": 
"\u224c", "backepsilon;": "\u03f6", "backprime;": "\u2035", "backsim;": "\u223d", "backsimeq;": "\u22cd", "barvee;": "\u22bd", "barwed;": "\u2305", "barwedge;": "\u2305", "bbrk;": "\u23b5", "bbrktbrk;": "\u23b6", "bcong;": "\u224c", "bcy;": "\u0431", "bdquo;": "\u201e", "becaus;": "\u2235", "because;": "\u2235", "bemptyv;": "\u29b0", "bepsi;": "\u03f6", "bernou;": "\u212c", "beta;": "\u03b2", "beth;": "\u2136", "between;": "\u226c", "bfr;": "\U0001d51f", "bigcap;": "\u22c2", "bigcirc;": "\u25ef", "bigcup;": "\u22c3", "bigodot;": "\u2a00", "bigoplus;": "\u2a01", "bigotimes;": "\u2a02", "bigsqcup;": "\u2a06", "bigstar;": "\u2605", "bigtriangledown;": "\u25bd", "bigtriangleup;": "\u25b3", "biguplus;": "\u2a04", "bigvee;": "\u22c1", "bigwedge;": "\u22c0", "bkarow;": "\u290d", "blacklozenge;": "\u29eb", "blacksquare;": "\u25aa", "blacktriangle;": "\u25b4", "blacktriangledown;": "\u25be", "blacktriangleleft;": "\u25c2", "blacktriangleright;": "\u25b8", "blank;": "\u2423", "blk12;": "\u2592", "blk14;": "\u2591", "blk34;": "\u2593", "block;": "\u2588", "bne;": "=\u20e5", "bnequiv;": "\u2261\u20e5", "bnot;": "\u2310", "bopf;": "\U0001d553", "bot;": "\u22a5", "bottom;": "\u22a5", "bowtie;": "\u22c8", "boxDL;": "\u2557", "boxDR;": "\u2554", "boxDl;": "\u2556", "boxDr;": "\u2553", "boxH;": "\u2550", "boxHD;": "\u2566", "boxHU;": "\u2569", "boxHd;": "\u2564", "boxHu;": "\u2567", "boxUL;": "\u255d", "boxUR;": "\u255a", "boxUl;": "\u255c", "boxUr;": "\u2559", "boxV;": "\u2551", "boxVH;": "\u256c", "boxVL;": "\u2563", "boxVR;": "\u2560", "boxVh;": "\u256b", "boxVl;": "\u2562", "boxVr;": "\u255f", "boxbox;": "\u29c9", "boxdL;": "\u2555", "boxdR;": "\u2552", "boxdl;": "\u2510", "boxdr;": "\u250c", "boxh;": "\u2500", "boxhD;": "\u2565", "boxhU;": "\u2568", "boxhd;": "\u252c", "boxhu;": "\u2534", "boxminus;": "\u229f", "boxplus;": "\u229e", "boxtimes;": "\u22a0", "boxuL;": "\u255b", "boxuR;": "\u2558", "boxul;": "\u2518", "boxur;": "\u2514", "boxv;": "\u2502", "boxvH;": "\u256a", "boxvL;": "\u2561", "boxvR;": "\u255e", "boxvh;": "\u253c", "boxvl;": "\u2524", "boxvr;": "\u251c", "bprime;": "\u2035", "breve;": "\u02d8", "brvbar": "\xa6", "brvbar;": "\xa6", "bscr;": "\U0001d4b7", "bsemi;": "\u204f", "bsim;": "\u223d", "bsime;": "\u22cd", "bsol;": "\\", "bsolb;": "\u29c5", "bsolhsub;": "\u27c8", "bull;": "\u2022", "bullet;": "\u2022", "bump;": "\u224e", "bumpE;": "\u2aae", "bumpe;": "\u224f", "bumpeq;": "\u224f", "cacute;": "\u0107", "cap;": "\u2229", "capand;": "\u2a44", "capbrcup;": "\u2a49", "capcap;": "\u2a4b", "capcup;": "\u2a47", "capdot;": "\u2a40", "caps;": "\u2229\ufe00", "caret;": "\u2041", "caron;": "\u02c7", "ccaps;": "\u2a4d", "ccaron;": "\u010d", "ccedil": "\xe7", "ccedil;": "\xe7", "ccirc;": "\u0109", "ccups;": "\u2a4c", "ccupssm;": "\u2a50", "cdot;": "\u010b", "cedil": "\xb8", "cedil;": "\xb8", "cemptyv;": "\u29b2", "cent": "\xa2", "cent;": "\xa2", "centerdot;": "\xb7", "cfr;": "\U0001d520", "chcy;": "\u0447", "check;": "\u2713", "checkmark;": "\u2713", "chi;": "\u03c7", "cir;": "\u25cb", "cirE;": "\u29c3", "circ;": "\u02c6", "circeq;": "\u2257", "circlearrowleft;": "\u21ba", "circlearrowright;": "\u21bb", "circledR;": "\xae", "circledS;": "\u24c8", "circledast;": "\u229b", "circledcirc;": "\u229a", "circleddash;": "\u229d", "cire;": "\u2257", "cirfnint;": "\u2a10", "cirmid;": "\u2aef", "cirscir;": "\u29c2", "clubs;": "\u2663", "clubsuit;": "\u2663", "colon;": ":", "colone;": "\u2254", "coloneq;": "\u2254", "comma;": ",", "commat;": "@", "comp;": "\u2201", "compfn;": "\u2218", "complement;": 
"\u2201", "complexes;": "\u2102", "cong;": "\u2245", "congdot;": "\u2a6d", "conint;": "\u222e", "copf;": "\U0001d554", "coprod;": "\u2210", "copy": "\xa9", "copy;": "\xa9", "copysr;": "\u2117", "crarr;": "\u21b5", "cross;": "\u2717", "cscr;": "\U0001d4b8", "csub;": "\u2acf", "csube;": "\u2ad1", "csup;": "\u2ad0", "csupe;": "\u2ad2", "ctdot;": "\u22ef", "cudarrl;": "\u2938", "cudarrr;": "\u2935", "cuepr;": "\u22de", "cuesc;": "\u22df", "cularr;": "\u21b6", "cularrp;": "\u293d", "cup;": "\u222a", "cupbrcap;": "\u2a48", "cupcap;": "\u2a46", "cupcup;": "\u2a4a", "cupdot;": "\u228d", "cupor;": "\u2a45", "cups;": "\u222a\ufe00", "curarr;": "\u21b7", "curarrm;": "\u293c", "curlyeqprec;": "\u22de", "curlyeqsucc;": "\u22df", "curlyvee;": "\u22ce", "curlywedge;": "\u22cf", "curren": "\xa4", "curren;": "\xa4", "curvearrowleft;": "\u21b6", "curvearrowright;": "\u21b7", "cuvee;": "\u22ce", "cuwed;": "\u22cf", "cwconint;": "\u2232", "cwint;": "\u2231", "cylcty;": "\u232d", "dArr;": "\u21d3", "dHar;": "\u2965", "dagger;": "\u2020", "daleth;": "\u2138", "darr;": "\u2193", "dash;": "\u2010", "dashv;": "\u22a3", "dbkarow;": "\u290f", "dblac;": "\u02dd", "dcaron;": "\u010f", "dcy;": "\u0434", "dd;": "\u2146", "ddagger;": "\u2021", "ddarr;": "\u21ca", "ddotseq;": "\u2a77", "deg": "\xb0", "deg;": "\xb0", "delta;": "\u03b4", "demptyv;": "\u29b1", "dfisht;": "\u297f", "dfr;": "\U0001d521", "dharl;": "\u21c3", "dharr;": "\u21c2", "diam;": "\u22c4", "diamond;": "\u22c4", "diamondsuit;": "\u2666", "diams;": "\u2666", "die;": "\xa8", "digamma;": "\u03dd", "disin;": "\u22f2", "div;": "\xf7", "divide": "\xf7", "divide;": "\xf7", "divideontimes;": "\u22c7", "divonx;": "\u22c7", "djcy;": "\u0452", "dlcorn;": "\u231e", "dlcrop;": "\u230d", "dollar;": "$", "dopf;": "\U0001d555", "dot;": "\u02d9", "doteq;": "\u2250", "doteqdot;": "\u2251", "dotminus;": "\u2238", "dotplus;": "\u2214", "dotsquare;": "\u22a1", "doublebarwedge;": "\u2306", "downarrow;": "\u2193", "downdownarrows;": "\u21ca", "downharpoonleft;": "\u21c3", "downharpoonright;": "\u21c2", "drbkarow;": "\u2910", "drcorn;": "\u231f", "drcrop;": "\u230c", "dscr;": "\U0001d4b9", "dscy;": "\u0455", "dsol;": "\u29f6", "dstrok;": "\u0111", "dtdot;": "\u22f1", "dtri;": "\u25bf", "dtrif;": "\u25be", "duarr;": "\u21f5", "duhar;": "\u296f", "dwangle;": "\u29a6", "dzcy;": "\u045f", "dzigrarr;": "\u27ff", "eDDot;": "\u2a77", "eDot;": "\u2251", "eacute": "\xe9", "eacute;": "\xe9", "easter;": "\u2a6e", "ecaron;": "\u011b", "ecir;": "\u2256", "ecirc": "\xea", "ecirc;": "\xea", "ecolon;": "\u2255", "ecy;": "\u044d", "edot;": "\u0117", "ee;": "\u2147", "efDot;": "\u2252", "efr;": "\U0001d522", "eg;": "\u2a9a", "egrave": "\xe8", "egrave;": "\xe8", "egs;": "\u2a96", "egsdot;": "\u2a98", "el;": "\u2a99", "elinters;": "\u23e7", "ell;": "\u2113", "els;": "\u2a95", "elsdot;": "\u2a97", "emacr;": "\u0113", "empty;": "\u2205", "emptyset;": "\u2205", "emptyv;": "\u2205", "emsp13;": "\u2004", "emsp14;": "\u2005", "emsp;": "\u2003", "eng;": "\u014b", "ensp;": "\u2002", "eogon;": "\u0119", "eopf;": "\U0001d556", "epar;": "\u22d5", "eparsl;": "\u29e3", "eplus;": "\u2a71", "epsi;": "\u03b5", "epsilon;": "\u03b5", "epsiv;": "\u03f5", "eqcirc;": "\u2256", "eqcolon;": "\u2255", "eqsim;": "\u2242", "eqslantgtr;": "\u2a96", "eqslantless;": "\u2a95", "equals;": "=", "equest;": "\u225f", "equiv;": "\u2261", "equivDD;": "\u2a78", "eqvparsl;": "\u29e5", "erDot;": "\u2253", "erarr;": "\u2971", "escr;": "\u212f", "esdot;": "\u2250", "esim;": "\u2242", "eta;": "\u03b7", "eth": "\xf0", "eth;": "\xf0", 
"euml": "\xeb", "euml;": "\xeb", "euro;": "\u20ac", "excl;": "!", "exist;": "\u2203", "expectation;": "\u2130", "exponentiale;": "\u2147", "fallingdotseq;": "\u2252", "fcy;": "\u0444", "female;": "\u2640", "ffilig;": "\ufb03", "fflig;": "\ufb00", "ffllig;": "\ufb04", "ffr;": "\U0001d523", "filig;": "\ufb01", "fjlig;": "fj", "flat;": "\u266d", "fllig;": "\ufb02", "fltns;": "\u25b1", "fnof;": "\u0192", "fopf;": "\U0001d557", "forall;": "\u2200", "fork;": "\u22d4", "forkv;": "\u2ad9", "fpartint;": "\u2a0d", "frac12": "\xbd", "frac12;": "\xbd", "frac13;": "\u2153", "frac14": "\xbc", "frac14;": "\xbc", "frac15;": "\u2155", "frac16;": "\u2159", "frac18;": "\u215b", "frac23;": "\u2154", "frac25;": "\u2156", "frac34": "\xbe", "frac34;": "\xbe", "frac35;": "\u2157", "frac38;": "\u215c", "frac45;": "\u2158", "frac56;": "\u215a", "frac58;": "\u215d", "frac78;": "\u215e", "frasl;": "\u2044", "frown;": "\u2322", "fscr;": "\U0001d4bb", "gE;": "\u2267", "gEl;": "\u2a8c", "gacute;": "\u01f5", "gamma;": "\u03b3", "gammad;": "\u03dd", "gap;": "\u2a86", "gbreve;": "\u011f", "gcirc;": "\u011d", "gcy;": "\u0433", "gdot;": "\u0121", "ge;": "\u2265", "gel;": "\u22db", "geq;": "\u2265", "geqq;": "\u2267", "geqslant;": "\u2a7e", "ges;": "\u2a7e", "gescc;": "\u2aa9", "gesdot;": "\u2a80", "gesdoto;": "\u2a82", "gesdotol;": "\u2a84", "gesl;": "\u22db\ufe00", "gesles;": "\u2a94", "gfr;": "\U0001d524", "gg;": "\u226b", "ggg;": "\u22d9", "gimel;": "\u2137", "gjcy;": "\u0453", "gl;": "\u2277", "glE;": "\u2a92", "gla;": "\u2aa5", "glj;": "\u2aa4", "gnE;": "\u2269", "gnap;": "\u2a8a", "gnapprox;": "\u2a8a", "gne;": "\u2a88", "gneq;": "\u2a88", "gneqq;": "\u2269", "gnsim;": "\u22e7", "gopf;": "\U0001d558", "grave;": "`", "gscr;": "\u210a", "gsim;": "\u2273", "gsime;": "\u2a8e", "gsiml;": "\u2a90", "gt": ">", "gt;": ">", "gtcc;": "\u2aa7", "gtcir;": "\u2a7a", "gtdot;": "\u22d7", "gtlPar;": "\u2995", "gtquest;": "\u2a7c", "gtrapprox;": "\u2a86", "gtrarr;": "\u2978", "gtrdot;": "\u22d7", "gtreqless;": "\u22db", "gtreqqless;": "\u2a8c", "gtrless;": "\u2277", "gtrsim;": "\u2273", "gvertneqq;": "\u2269\ufe00", "gvnE;": "\u2269\ufe00", "hArr;": "\u21d4", "hairsp;": "\u200a", "half;": "\xbd", "hamilt;": "\u210b", "hardcy;": "\u044a", "harr;": "\u2194", "harrcir;": "\u2948", "harrw;": "\u21ad", "hbar;": "\u210f", "hcirc;": "\u0125", "hearts;": "\u2665", "heartsuit;": "\u2665", "hellip;": "\u2026", "hercon;": "\u22b9", "hfr;": "\U0001d525", "hksearow;": "\u2925", "hkswarow;": "\u2926", "hoarr;": "\u21ff", "homtht;": "\u223b", "hookleftarrow;": "\u21a9", "hookrightarrow;": "\u21aa", "hopf;": "\U0001d559", "horbar;": "\u2015", "hscr;": "\U0001d4bd", "hslash;": "\u210f", "hstrok;": "\u0127", "hybull;": "\u2043", "hyphen;": "\u2010", "iacute": "\xed", "iacute;": "\xed", "ic;": "\u2063", "icirc": "\xee", "icirc;": "\xee", "icy;": "\u0438", "iecy;": "\u0435", "iexcl": "\xa1", "iexcl;": "\xa1", "iff;": "\u21d4", "ifr;": "\U0001d526", "igrave": "\xec", "igrave;": "\xec", "ii;": "\u2148", "iiiint;": "\u2a0c", "iiint;": "\u222d", "iinfin;": "\u29dc", "iiota;": "\u2129", "ijlig;": "\u0133", "imacr;": "\u012b", "image;": "\u2111", "imagline;": "\u2110", "imagpart;": "\u2111", "imath;": "\u0131", "imof;": "\u22b7", "imped;": "\u01b5", "in;": "\u2208", "incare;": "\u2105", "infin;": "\u221e", "infintie;": "\u29dd", "inodot;": "\u0131", "int;": "\u222b", "intcal;": "\u22ba", "integers;": "\u2124", "intercal;": "\u22ba", "intlarhk;": "\u2a17", "intprod;": "\u2a3c", "iocy;": "\u0451", "iogon;": "\u012f", "iopf;": "\U0001d55a", "iota;": "\u03b9", 
"iprod;": "\u2a3c", "iquest": "\xbf", "iquest;": "\xbf", "iscr;": "\U0001d4be", "isin;": "\u2208", "isinE;": "\u22f9", "isindot;": "\u22f5", "isins;": "\u22f4", "isinsv;": "\u22f3", "isinv;": "\u2208", "it;": "\u2062", "itilde;": "\u0129", "iukcy;": "\u0456", "iuml": "\xef", "iuml;": "\xef", "jcirc;": "\u0135", "jcy;": "\u0439", "jfr;": "\U0001d527", "jmath;": "\u0237", "jopf;": "\U0001d55b", "jscr;": "\U0001d4bf", "jsercy;": "\u0458", "jukcy;": "\u0454", "kappa;": "\u03ba", "kappav;": "\u03f0", "kcedil;": "\u0137", "kcy;": "\u043a", "kfr;": "\U0001d528", "kgreen;": "\u0138", "khcy;": "\u0445", "kjcy;": "\u045c", "kopf;": "\U0001d55c", "kscr;": "\U0001d4c0", "lAarr;": "\u21da", "lArr;": "\u21d0", "lAtail;": "\u291b", "lBarr;": "\u290e", "lE;": "\u2266", "lEg;": "\u2a8b", "lHar;": "\u2962", "lacute;": "\u013a", "laemptyv;": "\u29b4", "lagran;": "\u2112", "lambda;": "\u03bb", "lang;": "\u27e8", "langd;": "\u2991", "langle;": "\u27e8", "lap;": "\u2a85", "laquo": "\xab", "laquo;": "\xab", "larr;": "\u2190", "larrb;": "\u21e4", "larrbfs;": "\u291f", "larrfs;": "\u291d", "larrhk;": "\u21a9", "larrlp;": "\u21ab", "larrpl;": "\u2939", "larrsim;": "\u2973", "larrtl;": "\u21a2", "lat;": "\u2aab", "latail;": "\u2919", "late;": "\u2aad", "lates;": "\u2aad\ufe00", "lbarr;": "\u290c", "lbbrk;": "\u2772", "lbrace;": "{", "lbrack;": "[", "lbrke;": "\u298b", "lbrksld;": "\u298f", "lbrkslu;": "\u298d", "lcaron;": "\u013e", "lcedil;": "\u013c", "lceil;": "\u2308", "lcub;": "{", "lcy;": "\u043b", "ldca;": "\u2936", "ldquo;": "\u201c", "ldquor;": "\u201e", "ldrdhar;": "\u2967", "ldrushar;": "\u294b", "ldsh;": "\u21b2", "le;": "\u2264", "leftarrow;": "\u2190", "leftarrowtail;": "\u21a2", "leftharpoondown;": "\u21bd", "leftharpoonup;": "\u21bc", "leftleftarrows;": "\u21c7", "leftrightarrow;": "\u2194", "leftrightarrows;": "\u21c6", "leftrightharpoons;": "\u21cb", "leftrightsquigarrow;": "\u21ad", "leftthreetimes;": "\u22cb", "leg;": "\u22da", "leq;": "\u2264", "leqq;": "\u2266", "leqslant;": "\u2a7d", "les;": "\u2a7d", "lescc;": "\u2aa8", "lesdot;": "\u2a7f", "lesdoto;": "\u2a81", "lesdotor;": "\u2a83", "lesg;": "\u22da\ufe00", "lesges;": "\u2a93", "lessapprox;": "\u2a85", "lessdot;": "\u22d6", "lesseqgtr;": "\u22da", "lesseqqgtr;": "\u2a8b", "lessgtr;": "\u2276", "lesssim;": "\u2272", "lfisht;": "\u297c", "lfloor;": "\u230a", "lfr;": "\U0001d529", "lg;": "\u2276", "lgE;": "\u2a91", "lhard;": "\u21bd", "lharu;": "\u21bc", "lharul;": "\u296a", "lhblk;": "\u2584", "ljcy;": "\u0459", "ll;": "\u226a", "llarr;": "\u21c7", "llcorner;": "\u231e", "llhard;": "\u296b", "lltri;": "\u25fa", "lmidot;": "\u0140", "lmoust;": "\u23b0", "lmoustache;": "\u23b0", "lnE;": "\u2268", "lnap;": "\u2a89", "lnapprox;": "\u2a89", "lne;": "\u2a87", "lneq;": "\u2a87", "lneqq;": "\u2268", "lnsim;": "\u22e6", "loang;": "\u27ec", "loarr;": "\u21fd", "lobrk;": "\u27e6", "longleftarrow;": "\u27f5", "longleftrightarrow;": "\u27f7", "longmapsto;": "\u27fc", "longrightarrow;": "\u27f6", "looparrowleft;": "\u21ab", "looparrowright;": "\u21ac", "lopar;": "\u2985", "lopf;": "\U0001d55d", "loplus;": "\u2a2d", "lotimes;": "\u2a34", "lowast;": "\u2217", "lowbar;": "_", "loz;": "\u25ca", "lozenge;": "\u25ca", "lozf;": "\u29eb", "lpar;": "(", "lparlt;": "\u2993", "lrarr;": "\u21c6", "lrcorner;": "\u231f", "lrhar;": "\u21cb", "lrhard;": "\u296d", "lrm;": "\u200e", "lrtri;": "\u22bf", "lsaquo;": "\u2039", "lscr;": "\U0001d4c1", "lsh;": "\u21b0", "lsim;": "\u2272", "lsime;": "\u2a8d", "lsimg;": "\u2a8f", "lsqb;": "[", "lsquo;": "\u2018", "lsquor;": "\u201a", 
"lstrok;": "\u0142", "lt": "<", "lt;": "<", "ltcc;": "\u2aa6", "ltcir;": "\u2a79", "ltdot;": "\u22d6", "lthree;": "\u22cb", "ltimes;": "\u22c9", "ltlarr;": "\u2976", "ltquest;": "\u2a7b", "ltrPar;": "\u2996", "ltri;": "\u25c3", "ltrie;": "\u22b4", "ltrif;": "\u25c2", "lurdshar;": "\u294a", "luruhar;": "\u2966", "lvertneqq;": "\u2268\ufe00", "lvnE;": "\u2268\ufe00", "mDDot;": "\u223a", "macr": "\xaf", "macr;": "\xaf", "male;": "\u2642", "malt;": "\u2720", "maltese;": "\u2720", "map;": "\u21a6", "mapsto;": "\u21a6", "mapstodown;": "\u21a7", "mapstoleft;": "\u21a4", "mapstoup;": "\u21a5", "marker;": "\u25ae", "mcomma;": "\u2a29", "mcy;": "\u043c", "mdash;": "\u2014", "measuredangle;": "\u2221", "mfr;": "\U0001d52a", "mho;": "\u2127", "micro": "\xb5", "micro;": "\xb5", "mid;": "\u2223", "midast;": "*", "midcir;": "\u2af0", "middot": "\xb7", "middot;": "\xb7", "minus;": "\u2212", "minusb;": "\u229f", "minusd;": "\u2238", "minusdu;": "\u2a2a", "mlcp;": "\u2adb", "mldr;": "\u2026", "mnplus;": "\u2213", "models;": "\u22a7", "mopf;": "\U0001d55e", "mp;": "\u2213", "mscr;": "\U0001d4c2", "mstpos;": "\u223e", "mu;": "\u03bc", "multimap;": "\u22b8", "mumap;": "\u22b8", "nGg;": "\u22d9\u0338", "nGt;": "\u226b\u20d2", "nGtv;": "\u226b\u0338", "nLeftarrow;": "\u21cd", "nLeftrightarrow;": "\u21ce", "nLl;": "\u22d8\u0338", "nLt;": "\u226a\u20d2", "nLtv;": "\u226a\u0338", "nRightarrow;": "\u21cf", "nVDash;": "\u22af", "nVdash;": "\u22ae", "nabla;": "\u2207", "nacute;": "\u0144", "nang;": "\u2220\u20d2", "nap;": "\u2249", "napE;": "\u2a70\u0338", "napid;": "\u224b\u0338", "napos;": "\u0149", "napprox;": "\u2249", "natur;": "\u266e", "natural;": "\u266e", "naturals;": "\u2115", "nbsp": "\xa0", "nbsp;": "\xa0", "nbump;": "\u224e\u0338", "nbumpe;": "\u224f\u0338", "ncap;": "\u2a43", "ncaron;": "\u0148", "ncedil;": "\u0146", "ncong;": "\u2247", "ncongdot;": "\u2a6d\u0338", "ncup;": "\u2a42", "ncy;": "\u043d", "ndash;": "\u2013", "ne;": "\u2260", "neArr;": "\u21d7", "nearhk;": "\u2924", "nearr;": "\u2197", "nearrow;": "\u2197", "nedot;": "\u2250\u0338", "nequiv;": "\u2262", "nesear;": "\u2928", "nesim;": "\u2242\u0338", "nexist;": "\u2204", "nexists;": "\u2204", "nfr;": "\U0001d52b", "ngE;": "\u2267\u0338", "nge;": "\u2271", "ngeq;": "\u2271", "ngeqq;": "\u2267\u0338", "ngeqslant;": "\u2a7e\u0338", "nges;": "\u2a7e\u0338", "ngsim;": "\u2275", "ngt;": "\u226f", "ngtr;": "\u226f", "nhArr;": "\u21ce", "nharr;": "\u21ae", "nhpar;": "\u2af2", "ni;": "\u220b", "nis;": "\u22fc", "nisd;": "\u22fa", "niv;": "\u220b", "njcy;": "\u045a", "nlArr;": "\u21cd", "nlE;": "\u2266\u0338", "nlarr;": "\u219a", "nldr;": "\u2025", "nle;": "\u2270", "nleftarrow;": "\u219a", "nleftrightarrow;": "\u21ae", "nleq;": "\u2270", "nleqq;": "\u2266\u0338", "nleqslant;": "\u2a7d\u0338", "nles;": "\u2a7d\u0338", "nless;": "\u226e", "nlsim;": "\u2274", "nlt;": "\u226e", "nltri;": "\u22ea", "nltrie;": "\u22ec", "nmid;": "\u2224", "nopf;": "\U0001d55f", "not": "\xac", "not;": "\xac", "notin;": "\u2209", "notinE;": "\u22f9\u0338", "notindot;": "\u22f5\u0338", "notinva;": "\u2209", "notinvb;": "\u22f7", "notinvc;": "\u22f6", "notni;": "\u220c", "notniva;": "\u220c", "notnivb;": "\u22fe", "notnivc;": "\u22fd", "npar;": "\u2226", "nparallel;": "\u2226", "nparsl;": "\u2afd\u20e5", "npart;": "\u2202\u0338", "npolint;": "\u2a14", "npr;": "\u2280", "nprcue;": "\u22e0", "npre;": "\u2aaf\u0338", "nprec;": "\u2280", "npreceq;": "\u2aaf\u0338", "nrArr;": "\u21cf", "nrarr;": "\u219b", "nrarrc;": "\u2933\u0338", "nrarrw;": "\u219d\u0338", "nrightarrow;": "\u219b", 
"nrtri;": "\u22eb", "nrtrie;": "\u22ed", "nsc;": "\u2281", "nsccue;": "\u22e1", "nsce;": "\u2ab0\u0338", "nscr;": "\U0001d4c3", "nshortmid;": "\u2224", "nshortparallel;": "\u2226", "nsim;": "\u2241", "nsime;": "\u2244", "nsimeq;": "\u2244", "nsmid;": "\u2224", "nspar;": "\u2226", "nsqsube;": "\u22e2", "nsqsupe;": "\u22e3", "nsub;": "\u2284", "nsubE;": "\u2ac5\u0338", "nsube;": "\u2288", "nsubset;": "\u2282\u20d2", "nsubseteq;": "\u2288", "nsubseteqq;": "\u2ac5\u0338", "nsucc;": "\u2281", "nsucceq;": "\u2ab0\u0338", "nsup;": "\u2285", "nsupE;": "\u2ac6\u0338", "nsupe;": "\u2289", "nsupset;": "\u2283\u20d2", "nsupseteq;": "\u2289", "nsupseteqq;": "\u2ac6\u0338", "ntgl;": "\u2279", "ntilde": "\xf1", "ntilde;": "\xf1", "ntlg;": "\u2278", "ntriangleleft;": "\u22ea", "ntrianglelefteq;": "\u22ec", "ntriangleright;": "\u22eb", "ntrianglerighteq;": "\u22ed", "nu;": "\u03bd", "num;": "#", "numero;": "\u2116", "numsp;": "\u2007", "nvDash;": "\u22ad", "nvHarr;": "\u2904", "nvap;": "\u224d\u20d2", "nvdash;": "\u22ac", "nvge;": "\u2265\u20d2", "nvgt;": ">\u20d2", "nvinfin;": "\u29de", "nvlArr;": "\u2902", "nvle;": "\u2264\u20d2", "nvlt;": "<\u20d2", "nvltrie;": "\u22b4\u20d2", "nvrArr;": "\u2903", "nvrtrie;": "\u22b5\u20d2", "nvsim;": "\u223c\u20d2", "nwArr;": "\u21d6", "nwarhk;": "\u2923", "nwarr;": "\u2196", "nwarrow;": "\u2196", "nwnear;": "\u2927", "oS;": "\u24c8", "oacute": "\xf3", "oacute;": "\xf3", "oast;": "\u229b", "ocir;": "\u229a", "ocirc": "\xf4", "ocirc;": "\xf4", "ocy;": "\u043e", "odash;": "\u229d", "odblac;": "\u0151", "odiv;": "\u2a38", "odot;": "\u2299", "odsold;": "\u29bc", "oelig;": "\u0153", "ofcir;": "\u29bf", "ofr;": "\U0001d52c", "ogon;": "\u02db", "ograve": "\xf2", "ograve;": "\xf2", "ogt;": "\u29c1", "ohbar;": "\u29b5", "ohm;": "\u03a9", "oint;": "\u222e", "olarr;": "\u21ba", "olcir;": "\u29be", "olcross;": "\u29bb", "oline;": "\u203e", "olt;": "\u29c0", "omacr;": "\u014d", "omega;": "\u03c9", "omicron;": "\u03bf", "omid;": "\u29b6", "ominus;": "\u2296", "oopf;": "\U0001d560", "opar;": "\u29b7", "operp;": "\u29b9", "oplus;": "\u2295", "or;": "\u2228", "orarr;": "\u21bb", "ord;": "\u2a5d", "order;": "\u2134", "orderof;": "\u2134", "ordf": "\xaa", "ordf;": "\xaa", "ordm": "\xba", "ordm;": "\xba", "origof;": "\u22b6", "oror;": "\u2a56", "orslope;": "\u2a57", "orv;": "\u2a5b", "oscr;": "\u2134", "oslash": "\xf8", "oslash;": "\xf8", "osol;": "\u2298", "otilde": "\xf5", "otilde;": "\xf5", "otimes;": "\u2297", "otimesas;": "\u2a36", "ouml": "\xf6", "ouml;": "\xf6", "ovbar;": "\u233d", "par;": "\u2225", "para": "\xb6", "para;": "\xb6", "parallel;": "\u2225", "parsim;": "\u2af3", "parsl;": "\u2afd", "part;": "\u2202", "pcy;": "\u043f", "percnt;": "%", "period;": ".", "permil;": "\u2030", "perp;": "\u22a5", "pertenk;": "\u2031", "pfr;": "\U0001d52d", "phi;": "\u03c6", "phiv;": "\u03d5", "phmmat;": "\u2133", "phone;": "\u260e", "pi;": "\u03c0", "pitchfork;": "\u22d4", "piv;": "\u03d6", "planck;": "\u210f", "planckh;": "\u210e", "plankv;": "\u210f", "plus;": "+", "plusacir;": "\u2a23", "plusb;": "\u229e", "pluscir;": "\u2a22", "plusdo;": "\u2214", "plusdu;": "\u2a25", "pluse;": "\u2a72", "plusmn": "\xb1", "plusmn;": "\xb1", "plussim;": "\u2a26", "plustwo;": "\u2a27", "pm;": "\xb1", "pointint;": "\u2a15", "popf;": "\U0001d561", "pound": "\xa3", "pound;": "\xa3", "pr;": "\u227a", "prE;": "\u2ab3", "prap;": "\u2ab7", "prcue;": "\u227c", "pre;": "\u2aaf", "prec;": "\u227a", "precapprox;": "\u2ab7", "preccurlyeq;": "\u227c", "preceq;": "\u2aaf", "precnapprox;": "\u2ab9", "precneqq;": "\u2ab5", 
"precnsim;": "\u22e8", "precsim;": "\u227e", "prime;": "\u2032", "primes;": "\u2119", "prnE;": "\u2ab5", "prnap;": "\u2ab9", "prnsim;": "\u22e8", "prod;": "\u220f", "profalar;": "\u232e", "profline;": "\u2312", "profsurf;": "\u2313", "prop;": "\u221d", "propto;": "\u221d", "prsim;": "\u227e", "prurel;": "\u22b0", "pscr;": "\U0001d4c5", "psi;": "\u03c8", "puncsp;": "\u2008", "qfr;": "\U0001d52e", "qint;": "\u2a0c", "qopf;": "\U0001d562", "qprime;": "\u2057", "qscr;": "\U0001d4c6", "quaternions;": "\u210d", "quatint;": "\u2a16", "quest;": "?", "questeq;": "\u225f", "quot": "\"", "quot;": "\"", "rAarr;": "\u21db", "rArr;": "\u21d2", "rAtail;": "\u291c", "rBarr;": "\u290f", "rHar;": "\u2964", "race;": "\u223d\u0331", "racute;": "\u0155", "radic;": "\u221a", "raemptyv;": "\u29b3", "rang;": "\u27e9", "rangd;": "\u2992", "range;": "\u29a5", "rangle;": "\u27e9", "raquo": "\xbb", "raquo;": "\xbb", "rarr;": "\u2192", "rarrap;": "\u2975", "rarrb;": "\u21e5", "rarrbfs;": "\u2920", "rarrc;": "\u2933", "rarrfs;": "\u291e", "rarrhk;": "\u21aa", "rarrlp;": "\u21ac", "rarrpl;": "\u2945", "rarrsim;": "\u2974", "rarrtl;": "\u21a3", "rarrw;": "\u219d", "ratail;": "\u291a", "ratio;": "\u2236", "rationals;": "\u211a", "rbarr;": "\u290d", "rbbrk;": "\u2773", "rbrace;": "}", "rbrack;": "]", "rbrke;": "\u298c", "rbrksld;": "\u298e", "rbrkslu;": "\u2990", "rcaron;": "\u0159", "rcedil;": "\u0157", "rceil;": "\u2309", "rcub;": "}", "rcy;": "\u0440", "rdca;": "\u2937", "rdldhar;": "\u2969", "rdquo;": "\u201d", "rdquor;": "\u201d", "rdsh;": "\u21b3", "real;": "\u211c", "realine;": "\u211b", "realpart;": "\u211c", "reals;": "\u211d", "rect;": "\u25ad", "reg": "\xae", "reg;": "\xae", "rfisht;": "\u297d", "rfloor;": "\u230b", "rfr;": "\U0001d52f", "rhard;": "\u21c1", "rharu;": "\u21c0", "rharul;": "\u296c", "rho;": "\u03c1", "rhov;": "\u03f1", "rightarrow;": "\u2192", "rightarrowtail;": "\u21a3", "rightharpoondown;": "\u21c1", "rightharpoonup;": "\u21c0", "rightleftarrows;": "\u21c4", "rightleftharpoons;": "\u21cc", "rightrightarrows;": "\u21c9", "rightsquigarrow;": "\u219d", "rightthreetimes;": "\u22cc", "ring;": "\u02da", "risingdotseq;": "\u2253", "rlarr;": "\u21c4", "rlhar;": "\u21cc", "rlm;": "\u200f", "rmoust;": "\u23b1", "rmoustache;": "\u23b1", "rnmid;": "\u2aee", "roang;": "\u27ed", "roarr;": "\u21fe", "robrk;": "\u27e7", "ropar;": "\u2986", "ropf;": "\U0001d563", "roplus;": "\u2a2e", "rotimes;": "\u2a35", "rpar;": ")", "rpargt;": "\u2994", "rppolint;": "\u2a12", "rrarr;": "\u21c9", "rsaquo;": "\u203a", "rscr;": "\U0001d4c7", "rsh;": "\u21b1", "rsqb;": "]", "rsquo;": "\u2019", "rsquor;": "\u2019", "rthree;": "\u22cc", "rtimes;": "\u22ca", "rtri;": "\u25b9", "rtrie;": "\u22b5", "rtrif;": "\u25b8", "rtriltri;": "\u29ce", "ruluhar;": "\u2968", "rx;": "\u211e", "sacute;": "\u015b", "sbquo;": "\u201a", "sc;": "\u227b", "scE;": "\u2ab4", "scap;": "\u2ab8", "scaron;": "\u0161", "sccue;": "\u227d", "sce;": "\u2ab0", "scedil;": "\u015f", "scirc;": "\u015d", "scnE;": "\u2ab6", "scnap;": "\u2aba", "scnsim;": "\u22e9", "scpolint;": "\u2a13", "scsim;": "\u227f", "scy;": "\u0441", "sdot;": "\u22c5", "sdotb;": "\u22a1", "sdote;": "\u2a66", "seArr;": "\u21d8", "searhk;": "\u2925", "searr;": "\u2198", "searrow;": "\u2198", "sect": "\xa7", "sect;": "\xa7", "semi;": ";", "seswar;": "\u2929", "setminus;": "\u2216", "setmn;": "\u2216", "sext;": "\u2736", "sfr;": "\U0001d530", "sfrown;": "\u2322", "sharp;": "\u266f", "shchcy;": "\u0449", "shcy;": "\u0448", "shortmid;": "\u2223", "shortparallel;": "\u2225", "shy": "\xad", "shy;": 
"\xad", "sigma;": "\u03c3", "sigmaf;": "\u03c2", "sigmav;": "\u03c2", "sim;": "\u223c", "simdot;": "\u2a6a", "sime;": "\u2243", "simeq;": "\u2243", "simg;": "\u2a9e", "simgE;": "\u2aa0", "siml;": "\u2a9d", "simlE;": "\u2a9f", "simne;": "\u2246", "simplus;": "\u2a24", "simrarr;": "\u2972", "slarr;": "\u2190", "smallsetminus;": "\u2216", "smashp;": "\u2a33", "smeparsl;": "\u29e4", "smid;": "\u2223", "smile;": "\u2323", "smt;": "\u2aaa", "smte;": "\u2aac", "smtes;": "\u2aac\ufe00", "softcy;": "\u044c", "sol;": "/", "solb;": "\u29c4", "solbar;": "\u233f", "sopf;": "\U0001d564", "spades;": "\u2660", "spadesuit;": "\u2660", "spar;": "\u2225", "sqcap;": "\u2293", "sqcaps;": "\u2293\ufe00", "sqcup;": "\u2294", "sqcups;": "\u2294\ufe00", "sqsub;": "\u228f", "sqsube;": "\u2291", "sqsubset;": "\u228f", "sqsubseteq;": "\u2291", "sqsup;": "\u2290", "sqsupe;": "\u2292", "sqsupset;": "\u2290", "sqsupseteq;": "\u2292", "squ;": "\u25a1", "square;": "\u25a1", "squarf;": "\u25aa", "squf;": "\u25aa", "srarr;": "\u2192", "sscr;": "\U0001d4c8", "ssetmn;": "\u2216", "ssmile;": "\u2323", "sstarf;": "\u22c6", "star;": "\u2606", "starf;": "\u2605", "straightepsilon;": "\u03f5", "straightphi;": "\u03d5", "strns;": "\xaf", "sub;": "\u2282", "subE;": "\u2ac5", "subdot;": "\u2abd", "sube;": "\u2286", "subedot;": "\u2ac3", "submult;": "\u2ac1", "subnE;": "\u2acb", "subne;": "\u228a", "subplus;": "\u2abf", "subrarr;": "\u2979", "subset;": "\u2282", "subseteq;": "\u2286", "subseteqq;": "\u2ac5", "subsetneq;": "\u228a", "subsetneqq;": "\u2acb", "subsim;": "\u2ac7", "subsub;": "\u2ad5", "subsup;": "\u2ad3", "succ;": "\u227b", "succapprox;": "\u2ab8", "succcurlyeq;": "\u227d", "succeq;": "\u2ab0", "succnapprox;": "\u2aba", "succneqq;": "\u2ab6", "succnsim;": "\u22e9", "succsim;": "\u227f", "sum;": "\u2211", "sung;": "\u266a", "sup1": "\xb9", "sup1;": "\xb9", "sup2": "\xb2", "sup2;": "\xb2", "sup3": "\xb3", "sup3;": "\xb3", "sup;": "\u2283", "supE;": "\u2ac6", "supdot;": "\u2abe", "supdsub;": "\u2ad8", "supe;": "\u2287", "supedot;": "\u2ac4", "suphsol;": "\u27c9", "suphsub;": "\u2ad7", "suplarr;": "\u297b", "supmult;": "\u2ac2", "supnE;": "\u2acc", "supne;": "\u228b", "supplus;": "\u2ac0", "supset;": "\u2283", "supseteq;": "\u2287", "supseteqq;": "\u2ac6", "supsetneq;": "\u228b", "supsetneqq;": "\u2acc", "supsim;": "\u2ac8", "supsub;": "\u2ad4", "supsup;": "\u2ad6", "swArr;": "\u21d9", "swarhk;": "\u2926", "swarr;": "\u2199", "swarrow;": "\u2199", "swnwar;": "\u292a", "szlig": "\xdf", "szlig;": "\xdf", "target;": "\u2316", "tau;": "\u03c4", "tbrk;": "\u23b4", "tcaron;": "\u0165", "tcedil;": "\u0163", "tcy;": "\u0442", "tdot;": "\u20db", "telrec;": "\u2315", "tfr;": "\U0001d531", "there4;": "\u2234", "therefore;": "\u2234", "theta;": "\u03b8", "thetasym;": "\u03d1", "thetav;": "\u03d1", "thickapprox;": "\u2248", "thicksim;": "\u223c", "thinsp;": "\u2009", "thkap;": "\u2248", "thksim;": "\u223c", "thorn": "\xfe", "thorn;": "\xfe", "tilde;": "\u02dc", "times": "\xd7", "times;": "\xd7", "timesb;": "\u22a0", "timesbar;": "\u2a31", "timesd;": "\u2a30", "tint;": "\u222d", "toea;": "\u2928", "top;": "\u22a4", "topbot;": "\u2336", "topcir;": "\u2af1", "topf;": "\U0001d565", "topfork;": "\u2ada", "tosa;": "\u2929", "tprime;": "\u2034", "trade;": "\u2122", "triangle;": "\u25b5", "triangledown;": "\u25bf", "triangleleft;": "\u25c3", "trianglelefteq;": "\u22b4", "triangleq;": "\u225c", "triangleright;": "\u25b9", "trianglerighteq;": "\u22b5", "tridot;": "\u25ec", "trie;": "\u225c", "triminus;": "\u2a3a", "triplus;": "\u2a39", "trisb;": 
"\u29cd", "tritime;": "\u2a3b", "trpezium;": "\u23e2", "tscr;": "\U0001d4c9", "tscy;": "\u0446", "tshcy;": "\u045b", "tstrok;": "\u0167", "twixt;": "\u226c", "twoheadleftarrow;": "\u219e", "twoheadrightarrow;": "\u21a0", "uArr;": "\u21d1", "uHar;": "\u2963", "uacute": "\xfa", "uacute;": "\xfa", "uarr;": "\u2191", "ubrcy;": "\u045e", "ubreve;": "\u016d", "ucirc": "\xfb", "ucirc;": "\xfb", "ucy;": "\u0443", "udarr;": "\u21c5", "udblac;": "\u0171", "udhar;": "\u296e", "ufisht;": "\u297e", "ufr;": "\U0001d532", "ugrave": "\xf9", "ugrave;": "\xf9", "uharl;": "\u21bf", "uharr;": "\u21be", "uhblk;": "\u2580", "ulcorn;": "\u231c", "ulcorner;": "\u231c", "ulcrop;": "\u230f", "ultri;": "\u25f8", "umacr;": "\u016b", "uml": "\xa8", "uml;": "\xa8", "uogon;": "\u0173", "uopf;": "\U0001d566", "uparrow;": "\u2191", "updownarrow;": "\u2195", "upharpoonleft;": "\u21bf", "upharpoonright;": "\u21be", "uplus;": "\u228e", "upsi;": "\u03c5", "upsih;": "\u03d2", "upsilon;": "\u03c5", "upuparrows;": "\u21c8", "urcorn;": "\u231d", "urcorner;": "\u231d", "urcrop;": "\u230e", "uring;": "\u016f", "urtri;": "\u25f9", "uscr;": "\U0001d4ca", "utdot;": "\u22f0", "utilde;": "\u0169", "utri;": "\u25b5", "utrif;": "\u25b4", "uuarr;": "\u21c8", "uuml": "\xfc", "uuml;": "\xfc", "uwangle;": "\u29a7", "vArr;": "\u21d5", "vBar;": "\u2ae8", "vBarv;": "\u2ae9", "vDash;": "\u22a8", "vangrt;": "\u299c", "varepsilon;": "\u03f5", "varkappa;": "\u03f0", "varnothing;": "\u2205", "varphi;": "\u03d5", "varpi;": "\u03d6", "varpropto;": "\u221d", "varr;": "\u2195", "varrho;": "\u03f1", "varsigma;": "\u03c2", "varsubsetneq;": "\u228a\ufe00", "varsubsetneqq;": "\u2acb\ufe00", "varsupsetneq;": "\u228b\ufe00", "varsupsetneqq;": "\u2acc\ufe00", "vartheta;": "\u03d1", "vartriangleleft;": "\u22b2", "vartriangleright;": "\u22b3", "vcy;": "\u0432", "vdash;": "\u22a2", "vee;": "\u2228", "veebar;": "\u22bb", "veeeq;": "\u225a", "vellip;": "\u22ee", "verbar;": "|", "vert;": "|", "vfr;": "\U0001d533", "vltri;": "\u22b2", "vnsub;": "\u2282\u20d2", "vnsup;": "\u2283\u20d2", "vopf;": "\U0001d567", "vprop;": "\u221d", "vrtri;": "\u22b3", "vscr;": "\U0001d4cb", "vsubnE;": "\u2acb\ufe00", "vsubne;": "\u228a\ufe00", "vsupnE;": "\u2acc\ufe00", "vsupne;": "\u228b\ufe00", "vzigzag;": "\u299a", "wcirc;": "\u0175", "wedbar;": "\u2a5f", "wedge;": "\u2227", "wedgeq;": "\u2259", "weierp;": "\u2118", "wfr;": "\U0001d534", "wopf;": "\U0001d568", "wp;": "\u2118", "wr;": "\u2240", "wreath;": "\u2240", "wscr;": "\U0001d4cc", "xcap;": "\u22c2", "xcirc;": "\u25ef", "xcup;": "\u22c3", "xdtri;": "\u25bd", "xfr;": "\U0001d535", "xhArr;": "\u27fa", "xharr;": "\u27f7", "xi;": "\u03be", "xlArr;": "\u27f8", "xlarr;": "\u27f5", "xmap;": "\u27fc", "xnis;": "\u22fb", "xodot;": "\u2a00", "xopf;": "\U0001d569", "xoplus;": "\u2a01", "xotime;": "\u2a02", "xrArr;": "\u27f9", "xrarr;": "\u27f6", "xscr;": "\U0001d4cd", "xsqcup;": "\u2a06", "xuplus;": "\u2a04", "xutri;": "\u25b3", "xvee;": "\u22c1", "xwedge;": "\u22c0", "yacute": "\xfd", "yacute;": "\xfd", "yacy;": "\u044f", "ycirc;": "\u0177", "ycy;": "\u044b", "yen": "\xa5", "yen;": "\xa5", "yfr;": "\U0001d536", "yicy;": "\u0457", "yopf;": "\U0001d56a", "yscr;": "\U0001d4ce", "yucy;": "\u044e", "yuml": "\xff", "yuml;": "\xff", "zacute;": "\u017a", "zcaron;": "\u017e", "zcy;": "\u0437", "zdot;": "\u017c", "zeetrf;": "\u2128", "zeta;": "\u03b6", "zfr;": "\U0001d537", "zhcy;": "\u0436", "zigrarr;": "\u21dd", "zopf;": "\U0001d56b", "zscr;": "\U0001d4cf", "zwj;": "\u200d", "zwnj;": "\u200c", } replacementCharacters = { 0x0: "\uFFFD", 0x0d: 
"\u000D", 0x80: "\u20AC", 0x81: "\u0081", 0x82: "\u201A", 0x83: "\u0192", 0x84: "\u201E", 0x85: "\u2026", 0x86: "\u2020", 0x87: "\u2021", 0x88: "\u02C6", 0x89: "\u2030", 0x8A: "\u0160", 0x8B: "\u2039", 0x8C: "\u0152", 0x8D: "\u008D", 0x8E: "\u017D", 0x8F: "\u008F", 0x90: "\u0090", 0x91: "\u2018", 0x92: "\u2019", 0x93: "\u201C", 0x94: "\u201D", 0x95: "\u2022", 0x96: "\u2013", 0x97: "\u2014", 0x98: "\u02DC", 0x99: "\u2122", 0x9A: "\u0161", 0x9B: "\u203A", 0x9C: "\u0153", 0x9D: "\u009D", 0x9E: "\u017E", 0x9F: "\u0178", } tokenTypes = { "Doctype": 0, "Characters": 1, "SpaceCharacters": 2, "StartTag": 3, "EndTag": 4, "EmptyTag": 5, "Comment": 6, "ParseError": 7 } tagTokenTypes = frozenset([tokenTypes["StartTag"], tokenTypes["EndTag"], tokenTypes["EmptyTag"]]) prefixes = dict([(v, k) for k, v in namespaces.items()]) prefixes["http://www.w3.org/1998/Math/MathML"] = "math" class DataLossWarning(UserWarning): """Raised when the current tree is unable to represent the input data""" pass class _ReparseException(Exception): pass ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000033�00000000000�011451� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������27 mtime=1604735099.996661 �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/������������������������0000755�0001750�0001750�00000000000�00000000000�027655� 5����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/__init__.py  (empty file)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/alphabeticalattributes.py

from __future__ import absolute_import, division, unicode_literals

from . import base

from collections import OrderedDict


def _attr_key(attr):
    """Return an appropriate key for an attribute for sorting

    Attributes have a namespace that can be either ``None`` or a string. We
    can't compare the two because they're different types, so we convert
    ``None`` to an empty string first.
""" return (attr[0][0] or ''), attr[0][1] class Filter(base.Filter): """Alphabetizes attributes for elements""" def __iter__(self): for token in base.Filter.__iter__(self): if token["type"] in ("StartTag", "EmptyTag"): attrs = OrderedDict() for name, value in sorted(token["data"].items(), key=_attr_key): attrs[name] = value token["data"] = attrs yield token ���������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/base.py�����������������0000644�0001750�0001750�00000000436�00000000000�031144� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import, division, unicode_literals class Filter(object): def __init__(self, source): self.source = source def __iter__(self): return iter(self.source) def __getattr__(self, name): return getattr(self.source, name) ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py

from __future__ import absolute_import, division, unicode_literals

from . import base


class Filter(base.Filter):
    """Injects ``<meta charset=ENCODING>`` tag into head of document"""
    def __init__(self, source, encoding):
        """Creates a Filter

        :arg source: the source token stream

        :arg encoding: the encoding to set

        """
        base.Filter.__init__(self, source)
        self.encoding = encoding

    def __iter__(self):
        state = "pre_head"
        meta_found = (self.encoding is None)
        pending = []

        for token in base.Filter.__iter__(self):
            type = token["type"]
            if type == "StartTag":
                if token["name"].lower() == "head":
                    state = "in_head"

            elif type == "EmptyTag":
                if token["name"].lower() == "meta":
                    # replace charset with actual encoding
                    has_http_equiv_content_type = False
                    for (namespace, name), value in token["data"].items():
                        if namespace is not None:
                            continue
                        elif name.lower() == 'charset':
                            token["data"][(namespace, name)] = self.encoding
                            meta_found = True
                            break
                        elif name == 'http-equiv' and value.lower() == 'content-type':
                            has_http_equiv_content_type = True
                    else:
                        if has_http_equiv_content_type and (None, "content") in token["data"]:
                            token["data"][(None, "content")] = 'text/html; charset=%s' % self.encoding
                            meta_found = True

                elif token["name"].lower() == "head" and not meta_found:
                    # insert meta into empty head
                    yield {"type": "StartTag", "name": "head",
                           "data": token["data"]}
                    yield {"type": "EmptyTag", "name": "meta",
                           "data": {(None, "charset"): self.encoding}}
                    yield {"type": "EndTag", "name": "head"}
                    meta_found = True
                    continue

            elif type == "EndTag":
                if token["name"].lower() == "head" and pending:
                    # insert meta into head (if necessary) and flush pending queue
                    yield pending.pop(0)
                    if not meta_found:
                        yield {"type": "EmptyTag", "name": "meta",
                               "data": {(None, "charset"): self.encoding}}
                    while pending:
                        yield pending.pop(0)
                    meta_found = True
                    state = "post_head"

            if state == "in_head":
                pending.append(token)
            else:
                yield token
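Illustrative usage sketch (not part of the archived files): running a bare head through the charset injector. The two-token stream is a hand-built assumption for demonstration.

# Minimal sketch: the filter buffers tokens inside <head> and, on seeing the
# head end tag without a charset meta, emits one before flushing the buffer.
from pip._vendor.html5lib.filters import inject_meta_charset

tokens = [{"type": "StartTag", "name": "head", "data": {}},
          {"type": "EndTag", "name": "head"}]

out = list(inject_meta_charset.Filter(iter(tokens), "utf-8"))
# out now contains an extra token between the start and end tags:
#   {"type": "EmptyTag", "name": "meta", "data": {(None, "charset"): "utf-8"}}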
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/lint.py

from __future__ import absolute_import, division, unicode_literals

from pip._vendor.six import text_type

from . import base
from ..constants import namespaces, voidElements

from ..constants import spaceCharacters
spaceCharacters = "".join(spaceCharacters)


class Filter(base.Filter):
    """Lints the token stream for errors

    If it finds any errors, it'll raise an ``AssertionError``.

    """
    def __init__(self, source, require_matching_tags=True):
        """Creates a Filter

        :arg source: the source token stream

        :arg require_matching_tags: whether or not to require matching tags

        """
        super(Filter, self).__init__(source)
        self.require_matching_tags = require_matching_tags

    def __iter__(self):
        open_elements = []
        for token in base.Filter.__iter__(self):
            type = token["type"]
            if type in ("StartTag", "EmptyTag"):
                namespace = token["namespace"]
                name = token["name"]
                assert namespace is None or isinstance(namespace, text_type)
                assert namespace != ""
                assert isinstance(name, text_type)
                assert name != ""
                assert isinstance(token["data"], dict)
                if (not namespace or namespace == namespaces["html"]) and name in voidElements:
                    assert type == "EmptyTag"
                else:
                    assert type == "StartTag"
                if type == "StartTag" and self.require_matching_tags:
                    open_elements.append((namespace, name))
                for (namespace, name), value in token["data"].items():
                    assert namespace is None or isinstance(namespace, text_type)
                    assert namespace != ""
                    assert isinstance(name, text_type)
                    assert name != ""
                    assert isinstance(value, text_type)

            elif type == "EndTag":
                namespace = token["namespace"]
                name = token["name"]
                assert namespace is None or isinstance(namespace, text_type)
                assert namespace != ""
                assert isinstance(name, text_type)
                assert name != ""
                if (not namespace or namespace == namespaces["html"]) and name in voidElements:
                    assert False, "Void element reported as EndTag token: %(tag)s" % {"tag": name}
                elif self.require_matching_tags:
                    start = open_elements.pop()
                    assert start == (namespace, name)

            elif type == "Comment":
                data = token["data"]
                assert isinstance(data, text_type)

            elif type in ("Characters", "SpaceCharacters"):
                data = token["data"]
                assert isinstance(data, text_type)
                assert data != ""
                if type == "SpaceCharacters":
                    assert data.strip(spaceCharacters) == ""

            elif type == "Doctype":
                name = token["name"]
                assert name is None or isinstance(name, text_type)
                assert token["publicId"] is None or isinstance(name, text_type)
                assert token["systemId"] is None or isinstance(name, text_type)

            elif type == "Entity":
                assert isinstance(token["name"], text_type)

            elif type == "SerializerError":
                assert isinstance(token["data"], text_type)

            else:
                assert False, "Unknown token type: %(type)s" % {"type": type}

            yield token
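Illustrative usage sketch (not part of the archived files): the lint filter passes tokens through unchanged but raises ``AssertionError`` on malformed ones. Both sample streams are hand-built assumptions.

# Minimal sketch of the lint filter accepting a valid token and rejecting
# an invalid one.
from pip._vendor.html5lib.filters import lint

list(lint.Filter(iter([{"type": "Characters", "data": "ok"}])))  # passes

try:
    # SpaceCharacters tokens must contain only whitespace, so this raises.
    list(lint.Filter(iter([{"type": "SpaceCharacters", "data": "x"}])))
except AssertionError:
    print("lint caught a malformed token")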
"SerializerError": assert isinstance(token["data"], text_type) else: assert False, "Unknown token type: %(type)s" % {"type": type} yield token ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/optionaltags.py���������0000644�0001750�0001750�00000024534�00000000000�032743� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import, division, unicode_literals from . import base class Filter(base.Filter): """Removes optional tags from the token stream""" def slider(self): previous1 = previous2 = None for token in self.source: if previous1 is not None: yield previous2, previous1, token previous2 = previous1 previous1 = token if previous1 is not None: yield previous2, previous1, None def __iter__(self): for previous, token, next in self.slider(): type = token["type"] if type == "StartTag": if (token["data"] or not self.is_optional_start(token["name"], previous, next)): yield token elif type == "EndTag": if not self.is_optional_end(token["name"], next): yield token else: yield token def is_optional_start(self, tagname, previous, next): type = next and next["type"] or None if tagname in 'html': # An html element's start tag may be omitted if the first thing # inside the html element is not a space character or a comment. return type not in ("Comment", "SpaceCharacters") elif tagname == 'head': # A head element's start tag may be omitted if the first thing # inside the head element is an element. 
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/optionaltags.py =====

from __future__ import absolute_import, division, unicode_literals

from . import base


class Filter(base.Filter):
    """Removes optional tags from the token stream"""
    def slider(self):
        previous1 = previous2 = None
        for token in self.source:
            if previous1 is not None:
                yield previous2, previous1, token
            previous2 = previous1
            previous1 = token
        if previous1 is not None:
            yield previous2, previous1, None

    def __iter__(self):
        for previous, token, next in self.slider():
            type = token["type"]
            if type == "StartTag":
                if (token["data"] or
                        not self.is_optional_start(token["name"], previous, next)):
                    yield token
            elif type == "EndTag":
                if not self.is_optional_end(token["name"], next):
                    yield token
            else:
                yield token

    def is_optional_start(self, tagname, previous, next):
        type = next and next["type"] or None
        if tagname in 'html':
            # An html element's start tag may be omitted if the first thing
            # inside the html element is not a space character or a comment.
            return type not in ("Comment", "SpaceCharacters")
        elif tagname == 'head':
            # A head element's start tag may be omitted if the first thing
            # inside the head element is an element.
            # XXX: we also omit the start tag if the head element is empty
            if type in ("StartTag", "EmptyTag"):
                return True
            elif type == "EndTag":
                return next["name"] == "head"
        elif tagname == 'body':
            # A body element's start tag may be omitted if the first thing
            # inside the body element is not a space character or a comment,
            # except if the first thing inside the body element is a script
            # or style element and the node immediately preceding the body
            # element is a head element whose end tag has been omitted.
            if type in ("Comment", "SpaceCharacters"):
                return False
            elif type == "StartTag":
                # XXX: we do not look at the preceding event, so we never omit
                # the body element's start tag if it's followed by a script or
                # a style element.
                return next["name"] not in ('script', 'style')
            else:
                return True
        elif tagname == 'colgroup':
            # A colgroup element's start tag may be omitted if the first thing
            # inside the colgroup element is a col element, and if the element
            # is not immediately preceded by another colgroup element whose
            # end tag has been omitted.
            if type in ("StartTag", "EmptyTag"):
                # XXX: we do not look at the preceding event, so instead we never
                # omit the colgroup element's end tag when it is immediately
                # followed by another colgroup element. See is_optional_end.
                return next["name"] == "col"
            else:
                return False
        elif tagname == 'tbody':
            # A tbody element's start tag may be omitted if the first thing
            # inside the tbody element is a tr element, and if the element is
            # not immediately preceded by a tbody, thead, or tfoot element
            # whose end tag has been omitted.
            if type == "StartTag":
                # omit the thead and tfoot elements' end tag when they are
                # immediately followed by a tbody element. See is_optional_end.
                if previous and previous['type'] == 'EndTag' and \
                        previous['name'] in ('tbody', 'thead', 'tfoot'):
                    return False
                return next["name"] == 'tr'
            else:
                return False
        return False

    def is_optional_end(self, tagname, next):
        type = next and next["type"] or None
        if tagname in ('html', 'head', 'body'):
            # An html element's end tag may be omitted if the html element
            # is not immediately followed by a space character or a comment.
            return type not in ("Comment", "SpaceCharacters")
        elif tagname in ('li', 'optgroup', 'tr'):
            # A li element's end tag may be omitted if the li element is
            # immediately followed by another li element or if there is
            # no more content in the parent element.
            # An optgroup element's end tag may be omitted if the optgroup
            # element is immediately followed by another optgroup element,
            # or if there is no more content in the parent element.
            # A tr element's end tag may be omitted if the tr element is
            # immediately followed by another tr element, or if there is
            # no more content in the parent element.
            if type == "StartTag":
                return next["name"] == tagname
            else:
                return type == "EndTag" or type is None
        elif tagname in ('dt', 'dd'):
            # A dt element's end tag may be omitted if the dt element is
            # immediately followed by another dt element or a dd element.
            # A dd element's end tag may be omitted if the dd element is
            # immediately followed by another dd element or a dt element,
            # or if there is no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('dt', 'dd')
            elif tagname == 'dd':
                return type == "EndTag" or type is None
            else:
                return False
        elif tagname == 'p':
            # A p element's end tag may be omitted if the p element is
            # immediately followed by an address, article, aside,
            # blockquote, datagrid, dialog, dir, div, dl, fieldset,
            # footer, form, h1, h2, h3, h4, h5, h6, header, hr, menu,
            # nav, ol, p, pre, section, table, or ul element, or if
            # there is no more content in the parent element.
            if type in ("StartTag", "EmptyTag"):
                return next["name"] in ('address', 'article', 'aside',
                                        'blockquote', 'datagrid', 'dialog',
                                        'dir', 'div', 'dl', 'fieldset',
                                        'footer', 'form', 'h1', 'h2', 'h3',
                                        'h4', 'h5', 'h6', 'header', 'hr',
                                        'menu', 'nav', 'ol', 'p', 'pre',
                                        'section', 'table', 'ul')
            else:
                return type == "EndTag" or type is None
        elif tagname == 'option':
            # An option element's end tag may be omitted if the option
            # element is immediately followed by another option element,
            # or if it is immediately followed by an optgroup element,
            # or if there is no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('option', 'optgroup')
            else:
                return type == "EndTag" or type is None
        elif tagname in ('rt', 'rp'):
            # An rt element's end tag may be omitted if the rt element is
            # immediately followed by an rt or rp element, or if there is
            # no more content in the parent element.
            # An rp element's end tag may be omitted if the rp element is
            # immediately followed by an rt or rp element, or if there is
            # no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('rt', 'rp')
            else:
                return type == "EndTag" or type is None
        elif tagname == 'colgroup':
            # A colgroup element's end tag may be omitted if the colgroup
            # element is not immediately followed by a space character or
            # a comment.
            if type in ("Comment", "SpaceCharacters"):
                return False
            elif type == "StartTag":
                # XXX: we also look for an immediately following colgroup
                # element. See is_optional_start.
                return next["name"] != 'colgroup'
            else:
                return True
        elif tagname in ('thead', 'tbody'):
            # A thead element's end tag may be omitted if the thead element
            # is immediately followed by a tbody or tfoot element.
            # A tbody element's end tag may be omitted if the tbody element
            # is immediately followed by a tbody or tfoot element, or if
            # there is no more content in the parent element.
            # A tfoot element's end tag may be omitted if the tfoot element
            # is immediately followed by a tbody element, or if there is no
            # more content in the parent element.
            # XXX: we never omit the end tag when the following element is
            # a tbody. See is_optional_start.
            if type == "StartTag":
                return next["name"] in ['tbody', 'tfoot']
            elif tagname == 'tbody':
                return type == "EndTag" or type is None
            else:
                return False
        elif tagname == 'tfoot':
            # A tfoot element's end tag may be omitted if the tfoot element
            # is immediately followed by a tbody element, or if there is no
            # more content in the parent element.
            # XXX: we never omit the end tag when the following element is
            # a tbody. See is_optional_start.
            if type == "StartTag":
                return next["name"] == 'tbody'
            else:
                return type == "EndTag" or type is None
        elif tagname in ('td', 'th'):
            # A td element's end tag may be omitted if the td element is
            # immediately followed by a td or th element, or if there is
            # no more content in the parent element.
            # A th element's end tag may be omitted if the th element is
            # immediately followed by a td or th element, or if there is
            # no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('td', 'th')
            else:
                return type == "EndTag" or type is None
        return False
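# Usage sketch (an assumption): optionaltags is normally applied at
# serialization time; html5lib's HTMLSerializer(omit_optional_tags=True)
# applies it internally, but it can also be chained by hand against
# upstream html5lib, as below.
#
#     import html5lib
#     from html5lib.filters import optionaltags
#     from html5lib.serializer import HTMLSerializer
#
#     doc = html5lib.parse("<html><head></head><body><p>one<p>two</body></html>")
#     walker = html5lib.getTreeWalker("etree")
#     stream = optionaltags.Filter(walker(doc))
#     # omit_optional_tags=False so the serializer does not redo the
#     # work the filter has already done.
#     print(HTMLSerializer(omit_optional_tags=False).render(stream))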
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/sanitizer.py =====

from __future__ import absolute_import, division, unicode_literals

import re
from xml.sax.saxutils import escape, unescape

from pip._vendor.six.moves import urllib_parse as urlparse

from . import base
from ..constants import namespaces, prefixes

__all__ = ["Filter"]


allowed_elements = frozenset(
    [(namespaces['html'], tag) for tag in (
        'a', 'abbr', 'acronym', 'address', 'area', 'article', 'aside', 'audio',
        'b', 'big', 'blockquote', 'br', 'button', 'canvas', 'caption', 'center',
        'cite', 'code', 'col', 'colgroup', 'command', 'datagrid', 'datalist',
        'dd', 'del', 'details', 'dfn', 'dialog', 'dir', 'div', 'dl', 'dt', 'em',
        'event-source', 'fieldset', 'figcaption', 'figure', 'footer', 'font',
        'form', 'header', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'hr', 'i', 'img',
        'input', 'ins', 'keygen', 'kbd', 'label', 'legend', 'li', 'm', 'map',
        'menu', 'meter', 'multicol', 'nav', 'nextid', 'ol', 'output',
        'optgroup', 'option', 'p', 'pre', 'progress', 'q', 's', 'samp',
        'section', 'select', 'small', 'sound', 'source', 'spacer', 'span',
        'strike', 'strong', 'sub', 'sup', 'table', 'tbody', 'td', 'textarea',
        'time', 'tfoot', 'th', 'thead', 'tr', 'tt', 'u', 'ul', 'var', 'video')] +
    [(namespaces['mathml'], tag) for tag in (
        'maction', 'math', 'merror', 'mfrac', 'mi', 'mmultiscripts', 'mn',
        'mo', 'mover', 'mpadded', 'mphantom', 'mprescripts', 'mroot', 'mrow',
        'mspace', 'msqrt', 'mstyle', 'msub', 'msubsup', 'msup', 'mtable',
        'mtd', 'mtext', 'mtr', 'munder', 'munderover', 'none')] +
    [(namespaces['svg'], tag) for tag in (
        'a', 'animate', 'animateColor', 'animateMotion', 'animateTransform',
        'clipPath', 'circle', 'defs', 'desc', 'ellipse', 'font-face',
        'font-face-name', 'font-face-src', 'g', 'glyph', 'hkern',
        'linearGradient', 'line', 'marker', 'metadata', 'missing-glyph',
        'mpath', 'path', 'polygon', 'polyline', 'radialGradient', 'rect',
        'set', 'stop', 'svg', 'switch', 'text', 'title', 'tspan', 'use')]
)

allowed_attributes = frozenset(
    # HTML attributes
    [(None, attr) for attr in (
        'abbr', 'accept', 'accept-charset', 'accesskey', 'action', 'align',
        'alt', 'autocomplete', 'autofocus', 'axis', 'background', 'balance',
        'bgcolor', 'bgproperties', 'border', 'bordercolor', 'bordercolordark',
        'bordercolorlight', 'bottompadding', 'cellpadding', 'cellspacing',
        'ch', 'challenge', 'char', 'charoff', 'choff', 'charset', 'checked',
        'cite', 'class', 'clear', 'color', 'cols', 'colspan', 'compact',
        'contenteditable', 'controls', 'coords', 'data', 'datafld',
        'datapagesize', 'datasrc', 'datetime', 'default', 'delay', 'dir',
        'disabled', 'draggable', 'dynsrc', 'enctype', 'end', 'face', 'for',
        'form', 'frame', 'galleryimg', 'gutter', 'headers', 'height',
        'hidefocus', 'hidden', 'high', 'href', 'hreflang', 'hspace', 'icon',
        'id', 'inputmode', 'ismap', 'keytype', 'label', 'leftspacing', 'lang',
        'list', 'longdesc', 'loop', 'loopcount', 'loopend', 'loopstart',
        'low', 'lowsrc', 'max', 'maxlength', 'media', 'method', 'min',
        'multiple', 'name', 'nohref', 'noshade', 'nowrap', 'open', 'optimum',
        'pattern', 'ping', 'point-size', 'poster', 'pqg', 'preload', 'prompt',
        'radiogroup', 'readonly', 'rel', 'repeat-max', 'repeat-min',
        'replace', 'required', 'rev', 'rightspacing', 'rows', 'rowspan',
        'rules', 'scope', 'selected', 'shape', 'size', 'span', 'src', 'start',
        'step', 'style', 'summary', 'suppress', 'tabindex', 'target',
        'template', 'title', 'toppadding', 'type', 'unselectable', 'usemap',
        'urn', 'valign', 'value', 'variable', 'volume', 'vspace', 'vrml',
        'width', 'wrap')] +
    [(namespaces['xml'], 'lang')] +
    # MathML attributes
    [(None, attr) for attr in (
        'actiontype', 'align', 'columnalign', 'columnlines', 'columnspacing',
        'columnspan', 'depth', 'display', 'displaystyle', 'equalcolumns',
        'equalrows', 'fence', 'fontstyle', 'fontweight', 'frame', 'height',
        'linethickness', 'lspace', 'mathbackground', 'mathcolor',
        'mathvariant', 'maxsize', 'minsize', 'other', 'rowalign', 'rowlines',
        'rowspacing', 'rowspan', 'rspace', 'scriptlevel', 'selection',
        'separator', 'stretchy', 'width')] +
    [(namespaces['xlink'], attr) for attr in ('href', 'show', 'type')] +
    # SVG attributes
    [(None, attr) for attr in (
        'accent-height', 'accumulate', 'additive', 'alphabetic',
        'arabic-form', 'ascent', 'attributeName', 'attributeType',
        'baseProfile', 'bbox', 'begin', 'by', 'calcMode', 'cap-height',
        'class', 'clip-path', 'color', 'color-rendering', 'content', 'cx',
        'cy', 'd', 'dx', 'dy', 'descent', 'display', 'dur', 'end', 'fill',
        'fill-opacity', 'fill-rule', 'font-family', 'font-size',
        'font-stretch', 'font-style', 'font-variant', 'font-weight', 'from',
        'fx', 'fy', 'g1', 'g2', 'glyph-name', 'gradientUnits', 'hanging',
        'height', 'horiz-adv-x', 'horiz-origin-x', 'id', 'ideographic', 'k',
        'keyPoints', 'keySplines', 'keyTimes', 'lang', 'marker-end',
        'marker-mid', 'marker-start', 'markerHeight', 'markerUnits',
        'markerWidth', 'mathematical', 'max', 'min', 'name', 'offset',
        'opacity', 'orient', 'origin', 'overline-position',
        'overline-thickness', 'panose-1', 'path', 'pathLength', 'points',
        'preserveAspectRatio', 'r', 'refX', 'refY', 'repeatCount',
        'repeatDur', 'requiredExtensions', 'requiredFeatures', 'restart',
        'rotate', 'rx', 'ry', 'slope', 'stemh', 'stemv', 'stop-color',
        'stop-opacity', 'strikethrough-position', 'strikethrough-thickness',
        'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap',
        'stroke-linejoin', 'stroke-miterlimit', 'stroke-opacity',
        'stroke-width', 'systemLanguage', 'target', 'text-anchor', 'to',
        'transform', 'type', 'u1', 'u2', 'underline-position',
        'underline-thickness', 'unicode', 'unicode-range', 'units-per-em',
        'values', 'version', 'viewBox', 'visibility', 'width', 'widths', 'x',
        'x-height', 'x1', 'x2', 'y', 'y1', 'y2', 'zoomAndPan')] +
    [(namespaces['xlink'], attr) for attr in (
        'actuate', 'arcrole', 'href', 'role', 'show', 'title', 'type')] +
    [(namespaces['xml'], attr) for attr in ('base', 'lang', 'space')]
)

attr_val_is_uri = frozenset((
    (None, 'href'), (None, 'src'), (None, 'cite'), (None, 'action'),
    (None, 'longdesc'), (None, 'poster'), (None, 'background'),
    (None, 'datasrc'), (None, 'dynsrc'), (None, 'lowsrc'), (None, 'ping'),
    (namespaces['xlink'], 'href'), (namespaces['xml'], 'base'),
))

svg_attr_val_allows_ref = frozenset((
    (None, 'clip-path'), (None, 'color-profile'), (None, 'cursor'),
    (None, 'fill'), (None, 'filter'), (None, 'marker'), (None, 'marker-start'),
    (None, 'marker-mid'), (None, 'marker-end'), (None, 'mask'), (None, 'stroke'),
))

svg_allow_local_href = frozenset((
    (None, 'altGlyph'), (None, 'animate'), (None, 'animateColor'),
    (None, 'animateMotion'), (None, 'animateTransform'), (None, 'cursor'),
    (None, 'feImage'), (None, 'filter'), (None, 'linearGradient'),
    (None, 'pattern'), (None, 'radialGradient'), (None, 'textpath'),
    (None, 'tref'), (None, 'set'), (None, 'use')
))
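# Usage sketch (an assumption): every entry above is a (namespace, local-name)
# pair, so custom policies are plain set algebra over these frozensets before
# they are handed to the Filter defined below. 'mytag' is a made-up element
# name used purely for illustration; imports target upstream html5lib.
#
#     from html5lib.constants import namespaces
#     from html5lib.filters import sanitizer
#
#     my_elements = sanitizer.allowed_elements | {(namespaces['html'], 'mytag')}
#     # then: sanitizer.Filter(stream, allowed_elements=my_elements)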
allowed_css_properties = frozenset((
    'azimuth', 'background-color', 'border-bottom-color', 'border-collapse',
    'border-color', 'border-left-color', 'border-right-color',
    'border-top-color', 'clear', 'color', 'cursor', 'direction', 'display',
    'elevation', 'float', 'font', 'font-family', 'font-size', 'font-style',
    'font-variant', 'font-weight', 'height', 'letter-spacing', 'line-height',
    'overflow', 'pause', 'pause-after', 'pause-before', 'pitch',
    'pitch-range', 'richness', 'speak', 'speak-header', 'speak-numeral',
    'speak-punctuation', 'speech-rate', 'stress', 'text-align',
    'text-decoration', 'text-indent', 'unicode-bidi', 'vertical-align',
    'voice-family', 'volume', 'white-space', 'width',
))

allowed_css_keywords = frozenset((
    'auto', 'aqua', 'black', 'block', 'blue', 'bold', 'both', 'bottom',
    'brown', 'center', 'collapse', 'dashed', 'dotted', 'fuchsia', 'gray',
    'green', '!important', 'italic', 'left', 'lime', 'maroon', 'medium',
    'none', 'navy', 'normal', 'nowrap', 'olive', 'pointer', 'purple', 'red',
    'right', 'solid', 'silver', 'teal', 'top', 'transparent', 'underline',
    'white', 'yellow',
))

allowed_svg_properties = frozenset((
    'fill', 'fill-opacity', 'fill-rule', 'stroke', 'stroke-width',
    'stroke-linecap', 'stroke-linejoin', 'stroke-opacity',
))

allowed_protocols = frozenset((
    'ed2k', 'ftp', 'http', 'https', 'irc', 'mailto', 'news', 'gopher',
    'nntp', 'telnet', 'webcal', 'xmpp', 'callto', 'feed', 'urn', 'aim',
    'rsync', 'tag', 'ssh', 'sftp', 'rtsp', 'afs', 'data',
))

allowed_content_types = frozenset((
    'image/png', 'image/jpeg', 'image/gif', 'image/webp', 'image/bmp',
    'text/plain',
))


data_content_type = re.compile(r'''
    ^
    # Match a content type <application>/<type>
    (?P<content_type>[-a-zA-Z0-9.]+/[-a-zA-Z0-9.]+)
    # Match any character set and encoding
    (?:(?:;charset=(?:[-a-zA-Z0-9]+)(?:;(?:base64))?)
      |(?:;(?:base64))?(?:;charset=(?:[-a-zA-Z0-9]+))?)
    # Assume the rest is data
    ,.*
    $
    ''',
    re.VERBOSE)


class Filter(base.Filter):
    """Sanitizes token stream of XHTML+MathML+SVG and of inline style attributes"""
    def __init__(self,
                 source,
                 allowed_elements=allowed_elements,
                 allowed_attributes=allowed_attributes,
                 allowed_css_properties=allowed_css_properties,
                 allowed_css_keywords=allowed_css_keywords,
                 allowed_svg_properties=allowed_svg_properties,
                 allowed_protocols=allowed_protocols,
                 allowed_content_types=allowed_content_types,
                 attr_val_is_uri=attr_val_is_uri,
                 svg_attr_val_allows_ref=svg_attr_val_allows_ref,
                 svg_allow_local_href=svg_allow_local_href):
        """Creates a Filter

        :arg allowed_elements: set of elements to allow--everything else will
            be escaped

        :arg allowed_attributes: set of attributes to allow in
            elements--everything else will be stripped

        :arg allowed_css_properties: set of CSS properties to allow--everything
            else will be stripped

        :arg allowed_css_keywords: set of CSS keywords to allow--everything
            else will be stripped

        :arg allowed_svg_properties: set of SVG properties to allow--everything
            else will be removed

        :arg allowed_protocols: set of allowed protocols for URIs

        :arg allowed_content_types: set of allowed content types for ``data`` URIs.

        :arg attr_val_is_uri: set of attributes that have URI values--values
            that have a scheme not listed in ``allowed_protocols`` are removed

        :arg svg_attr_val_allows_ref: set of SVG attributes that can have
            references

        :arg svg_allow_local_href: set of SVG elements that can have local
            hrefs--these are removed
        """
        super(Filter, self).__init__(source)
        self.allowed_elements = allowed_elements
        self.allowed_attributes = allowed_attributes
        self.allowed_css_properties = allowed_css_properties
        self.allowed_css_keywords = allowed_css_keywords
        self.allowed_svg_properties = allowed_svg_properties
        self.allowed_protocols = allowed_protocols
        self.allowed_content_types = allowed_content_types
        self.attr_val_is_uri = attr_val_is_uri
        self.svg_attr_val_allows_ref = svg_attr_val_allows_ref
        self.svg_allow_local_href = svg_allow_local_href

    def __iter__(self):
        for token in base.Filter.__iter__(self):
            token = self.sanitize_token(token)
            if token:
                yield token

    # Sanitize the +html+, escaping all elements not in ALLOWED_ELEMENTS, and
    # stripping out all attributes not in ALLOWED_ATTRIBUTES. Style attributes
    # are parsed, and a restricted set, specified by ALLOWED_CSS_PROPERTIES and
    # ALLOWED_CSS_KEYWORDS, are allowed through. Attributes in ATTR_VAL_IS_URI
    # are scanned, and only URI schemes specified in ALLOWED_PROTOCOLS are
    # allowed.
    #
    #   sanitize_html('<script> do_nasty_stuff() </script>')
    #    => &lt;script> do_nasty_stuff() &lt;/script>
    #   sanitize_html('<a href="javascript: sucker();">Click here for $100</a>')
    #    => <a>Click here for $100</a>
    def sanitize_token(self, token):
        # accommodate filters which use token_type differently
        token_type = token["type"]
        if token_type in ("StartTag", "EndTag", "EmptyTag"):
            name = token["name"]
            namespace = token["namespace"]
            if ((namespace, name) in self.allowed_elements or
                    (namespace is None and
                     (namespaces["html"], name) in self.allowed_elements)):
                return self.allowed_token(token)
            else:
                return self.disallowed_token(token)
        elif token_type == "Comment":
            pass
        else:
            return token

    def allowed_token(self, token):
        if "data" in token:
            attrs = token["data"]
            attr_names = set(attrs.keys())

            # Remove forbidden attributes
            for to_remove in (attr_names - self.allowed_attributes):
                del token["data"][to_remove]
                attr_names.remove(to_remove)

            # Remove attributes with disallowed URL values
            for attr in (attr_names & self.attr_val_is_uri):
                assert attr in attrs
                # I don't have a clue where this regexp comes from or why it matches those
                # characters, nor why we call unescape. I just know it's always been here.
                # Should you be worried by this comment in a sanitizer? Yes. On the other hand, all
                # this will do is remove *more* than it otherwise would.
                val_unescaped = re.sub("[`\x00-\x20\x7f-\xa0\\s]+", '',
                                       unescape(attrs[attr])).lower()
                # remove replacement characters from unescaped characters
                val_unescaped = val_unescaped.replace("\ufffd", "")
                try:
                    uri = urlparse.urlparse(val_unescaped)
                except ValueError:
                    uri = None
                    del attrs[attr]
                if uri and uri.scheme:
                    if uri.scheme not in self.allowed_protocols:
                        del attrs[attr]
                    if uri.scheme == 'data':
                        m = data_content_type.match(uri.path)
                        if not m:
                            del attrs[attr]
                        elif m.group('content_type') not in self.allowed_content_types:
                            del attrs[attr]

            for attr in self.svg_attr_val_allows_ref:
                if attr in attrs:
                    attrs[attr] = re.sub(r'url\s*\(\s*[^#\s][^)]+?\)',
                                         ' ',
                                         unescape(attrs[attr]))
            if (token["name"] in self.svg_allow_local_href and
                    (namespaces['xlink'], 'href') in attrs and
                    re.search(r'^\s*[^#\s].*', attrs[(namespaces['xlink'], 'href')])):
                del attrs[(namespaces['xlink'], 'href')]
            if (None, 'style') in attrs:
                attrs[(None, 'style')] = self.sanitize_css(attrs[(None, 'style')])
            token["data"] = attrs
        return token

    def disallowed_token(self, token):
        token_type = token["type"]
        if token_type == "EndTag":
            token["data"] = "</%s>" % token["name"]
        elif token["data"]:
            assert token_type in ("StartTag", "EmptyTag")
            attrs = []
            for (ns, name), v in token["data"].items():
                attrs.append(' %s="%s"' % (name if ns is None else "%s:%s" % (prefixes[ns], name), escape(v)))
            token["data"] = "<%s%s>" % (token["name"], ''.join(attrs))
        else:
            token["data"] = "<%s>" % token["name"]
        if token.get("selfClosing"):
            token["data"] = token["data"][:-1] + "/>"

        token["type"] = "Characters"

        del token["name"]
        return token

    def sanitize_css(self, style):
        # disallow urls
        style = re.compile(r'url\s*\(\s*[^\s)]+?\s*\)\s*').sub(' ', style)

        # gauntlet
        if not re.match(r"""^([:,;#%.\sa-zA-Z0-9!]|\w-\w|'[\s\w]+'|"[\s\w]+"|\([\d,\s]+\))*$""", style):
            return ''
        if not re.match(r"^\s*([-\w]+\s*:[^:;]*(;\s*|$))*$", style):
            return ''

        clean = []
        for prop, value in re.findall(r"([-\w]+)\s*:\s*([^:;]*)", style):
            if not value:
                continue
            if prop.lower() in self.allowed_css_properties:
                clean.append(prop + ': ' + value + ';')
            elif prop.split('-')[0].lower() in ['background', 'border', 'margin', 'padding']:
                for keyword in value.split():
                    if keyword not in self.allowed_css_keywords and \
                            not re.match(r"^(#[0-9a-fA-F]+|rgb\(\d+%?,\d*%?,?\d*%?\)?|\d{0,2}\.?\d{0,2}(cm|em|ex|in|mm|pc|pt|px|%|,|\))?)$", keyword):  # noqa
                        break
                else:
                    clean.append(prop + ': ' + value + ';')
            elif prop.lower() in self.allowed_svg_properties:
                clean.append(prop + ': ' + value + ';')

        return ' '.join(clean)
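# Usage sketch (an assumption): the sanitizer runs as a tree-walker filter in
# a serialization pipeline; the same chain works with upstream html5lib.
#
#     import html5lib
#     from html5lib.filters import sanitizer
#     from html5lib.serializer import HTMLSerializer
#
#     dirty = '<a href="javascript:alert(1)">Click</a><script>do_nasty_stuff()</script>'
#     doc = html5lib.parseFragment(dirty)
#     walker = html5lib.getTreeWalker("etree")
#     clean = HTMLSerializer().render(sanitizer.Filter(walker(doc)))
#     # Per the doc-comment above: the script element is escaped to text
#     # rather than dropped, and the javascript: href is removed.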
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/filters/whitespace.py =====

from __future__ import absolute_import, division, unicode_literals

import re

from . import base
from ..constants import rcdataElements, spaceCharacters
spaceCharacters = "".join(spaceCharacters)

SPACES_REGEX = re.compile("[%s]+" % spaceCharacters)


class Filter(base.Filter):
    """Collapses whitespace except in pre, textarea, and script elements"""
    spacePreserveElements = frozenset(["pre", "textarea"] + list(rcdataElements))

    def __iter__(self):
        preserve = 0
        for token in base.Filter.__iter__(self):
            type = token["type"]
            if type == "StartTag" \
                    and (preserve or token["name"] in self.spacePreserveElements):
                preserve += 1

            elif type == "EndTag" and preserve:
                preserve -= 1

            elif not preserve and type == "SpaceCharacters" and token["data"]:
                # Test on token["data"] above to not introduce spaces where there were none
                token["data"] = " "

            elif not preserve and type == "Characters":
                token["data"] = collapse_spaces(token["data"])

            yield token


def collapse_spaces(text):
    return SPACES_REGEX.sub(' ', text)
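# Usage sketch (an assumption): runs of whitespace collapse outside the
# space-preserving elements; identical against upstream html5lib.
#
#     import html5lib
#     from html5lib.filters import whitespace
#     from html5lib.serializer import HTMLSerializer
#
#     doc = html5lib.parseFragment("<p>a    b</p><pre>a    b</pre>")
#     walker = html5lib.getTreeWalker("etree")
#     out = HTMLSerializer().render(whitespace.Filter(walker(doc)))
#     # "a    b" collapses to "a b" inside <p> but is preserved inside <pre>.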
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/html5parser.py =====

from __future__ import absolute_import, division, unicode_literals
from pip._vendor.six import with_metaclass, viewkeys

import types
from collections import OrderedDict

from . import _inputstream
from . import _tokenizer

from . import treebuilders
from .treebuilders.base import Marker

from . import _utils
from .constants import (
    spaceCharacters, asciiUpper2Lower,
    specialElements, headingElements, cdataElements, rcdataElements,
    tokenTypes, tagTokenTypes,
    namespaces,
    htmlIntegrationPointElements, mathmlTextIntegrationPointElements,
    adjustForeignAttributes as adjustForeignAttributesMap,
    adjustMathMLAttributes, adjustSVGAttributes,
    E,
    _ReparseException
)


def parse(doc, treebuilder="etree", namespaceHTMLElements=True, **kwargs):
    """Parse an HTML document as a string or file-like object into a tree

    :arg doc: the document to parse as a string or file-like object

    :arg treebuilder: the treebuilder to use when parsing

    :arg namespaceHTMLElements: whether or not to namespace HTML elements

    :returns: parsed tree

    Example:

    >>> from html5lib.html5parser import parse
    >>> parse('<html><body><p>This is a doc</p></body></html>')
    <Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0>

    """
    tb = treebuilders.getTreeBuilder(treebuilder)
    p = HTMLParser(tb, namespaceHTMLElements=namespaceHTMLElements)
    return p.parse(doc, **kwargs)


def parseFragment(doc, container="div", treebuilder="etree", namespaceHTMLElements=True, **kwargs):
    """Parse an HTML fragment as a string or file-like object into a tree

    :arg doc: the fragment to parse as a string or file-like object

    :arg container: the container context to parse the fragment in

    :arg treebuilder: the treebuilder to use when parsing

    :arg namespaceHTMLElements: whether or not to namespace HTML elements

    :returns: parsed tree

    Example:

    >>> from html5lib.html5parser import parseFragment
    >>> parseFragment('<b>this is a fragment</b>')
    <Element u'DOCUMENT_FRAGMENT' at 0x7feac484b090>

    """
    tb = treebuilders.getTreeBuilder(treebuilder)
    p = HTMLParser(tb, namespaceHTMLElements=namespaceHTMLElements)
    return p.parseFragment(doc, container=container, **kwargs)


def method_decorator_metaclass(function):
    class Decorated(type):
        def __new__(meta, classname, bases, classDict):
            for attributeName, attribute in classDict.items():
                if isinstance(attribute, types.FunctionType):
                    attribute = function(attribute)

                classDict[attributeName] = attribute
            return type.__new__(meta, classname, bases, classDict)
    return Decorated


class HTMLParser(object):
    """HTML parser

    Generates a tree structure from a stream of (possibly malformed) HTML.

    """

    def __init__(self, tree=None, strict=False, namespaceHTMLElements=True, debug=False):
        """
        :arg tree: a treebuilder class controlling the type of tree that will be
            returned. Built in treebuilders can be accessed through
            html5lib.treebuilders.getTreeBuilder(treeType)

        :arg strict: raise an exception when a parse error is encountered

        :arg namespaceHTMLElements: whether or not to namespace HTML elements

        :arg debug: whether or not to enable debug mode which logs things

        Example:

        >>> from html5lib.html5parser import HTMLParser
        >>> parser = HTMLParser()                     # generates parser with etree builder
        >>> parser = HTMLParser('lxml', strict=True)  # generates parser with lxml builder which is strict

        """

        # Raise an exception on the first error encountered
        self.strict = strict

        if tree is None:
            tree = treebuilders.getTreeBuilder("etree")
        self.tree = tree(namespaceHTMLElements)
        self.errors = []

        self.phases = dict([(name, cls(self, self.tree)) for name, cls in
                            getPhases(debug).items()])

    def _parse(self, stream, innerHTML=False, container="div", scripting=False, **kwargs):

        self.innerHTMLMode = innerHTML
        self.container = container
        self.scripting = scripting
        self.tokenizer = _tokenizer.HTMLTokenizer(stream, parser=self, **kwargs)
        self.reset()

        try:
            self.mainLoop()
        except _ReparseException:
            self.reset()
            self.mainLoop()

    def reset(self):
        self.tree.reset()
        self.firstStartTag = False
        self.errors = []
        self.log = []  # only used with debug mode
        # "quirks" / "limited quirks" / "no quirks"
        self.compatMode = "no quirks"

        if self.innerHTMLMode:
            self.innerHTML = self.container.lower()

            if self.innerHTML in cdataElements:
                self.tokenizer.state = self.tokenizer.rcdataState
            elif self.innerHTML in rcdataElements:
                self.tokenizer.state = self.tokenizer.rawtextState
            elif self.innerHTML == 'plaintext':
                self.tokenizer.state = self.tokenizer.plaintextState
            else:
                # state already is data state
                # self.tokenizer.state = self.tokenizer.dataState
                pass
            self.phase = self.phases["beforeHtml"]
            self.phase.insertHtmlElement()
            self.resetInsertionMode()
        else:
            self.innerHTML = False  # pylint:disable=redefined-variable-type
            self.phase = self.phases["initial"]

        self.lastPhase = None

        self.beforeRCDataPhase = None

        self.framesetOK = True

    @property
    def documentEncoding(self):
        """Name of the character encoding that was used to decode the input stream, or
        :obj:`None` if that is not determined yet

        """
        if not hasattr(self, 'tokenizer'):
            return None
        return self.tokenizer.stream.charEncoding[0].name
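    # Usage sketch (an assumption): strict mode turns the first parse error
    # into an exception instead of appending it to parser.errors. ParseError
    # is defined later in this module.
    #
    #     from html5lib.html5parser import HTMLParser, ParseError
    #
    #     parser = HTMLParser(strict=True)
    #     try:
    #         parser.parse("<p>no doctype")  # triggers expected-doctype-but-got-start-tag
    #     except ParseError as err:
    #         print("refused:", err)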
    def isHTMLIntegrationPoint(self, element):
        if (element.name == "annotation-xml" and
                element.namespace == namespaces["mathml"]):
            return ("encoding" in element.attributes and
                    element.attributes["encoding"].translate(
                        asciiUpper2Lower) in
                    ("text/html", "application/xhtml+xml"))
        else:
            return (element.namespace, element.name) in htmlIntegrationPointElements

    def isMathMLTextIntegrationPoint(self, element):
        return (element.namespace, element.name) in mathmlTextIntegrationPointElements

    def mainLoop(self):
        CharactersToken = tokenTypes["Characters"]
        SpaceCharactersToken = tokenTypes["SpaceCharacters"]
        StartTagToken = tokenTypes["StartTag"]
        EndTagToken = tokenTypes["EndTag"]
        CommentToken = tokenTypes["Comment"]
        DoctypeToken = tokenTypes["Doctype"]
        ParseErrorToken = tokenTypes["ParseError"]

        for token in self.normalizedTokens():
            prev_token = None
            new_token = token
            while new_token is not None:
                prev_token = new_token
                currentNode = self.tree.openElements[-1] if self.tree.openElements else None
                currentNodeNamespace = currentNode.namespace if currentNode else None
                currentNodeName = currentNode.name if currentNode else None

                type = new_token["type"]

                if type == ParseErrorToken:
                    self.parseError(new_token["data"], new_token.get("datavars", {}))
                    new_token = None
                else:
                    if (len(self.tree.openElements) == 0 or
                        currentNodeNamespace == self.tree.defaultNamespace or
                        (self.isMathMLTextIntegrationPoint(currentNode) and
                         ((type == StartTagToken and
                           token["name"] not in frozenset(["mglyph", "malignmark"])) or
                          type in (CharactersToken, SpaceCharactersToken))) or
                        (currentNodeNamespace == namespaces["mathml"] and
                         currentNodeName == "annotation-xml" and
                         type == StartTagToken and
                         token["name"] == "svg") or
                        (self.isHTMLIntegrationPoint(currentNode) and
                         type in (StartTagToken, CharactersToken, SpaceCharactersToken))):
                        phase = self.phase
                    else:
                        phase = self.phases["inForeignContent"]

                    if type == CharactersToken:
                        new_token = phase.processCharacters(new_token)
                    elif type == SpaceCharactersToken:
                        new_token = phase.processSpaceCharacters(new_token)
                    elif type == StartTagToken:
                        new_token = phase.processStartTag(new_token)
                    elif type == EndTagToken:
                        new_token = phase.processEndTag(new_token)
                    elif type == CommentToken:
                        new_token = phase.processComment(new_token)
                    elif type == DoctypeToken:
                        new_token = phase.processDoctype(new_token)

            if (type == StartTagToken and prev_token["selfClosing"] and
                    not prev_token["selfClosingAcknowledged"]):
                self.parseError("non-void-element-with-trailing-solidus",
                                {"name": prev_token["name"]})

        # When the loop finishes it's EOF
        reprocess = True
        phases = []
        while reprocess:
            phases.append(self.phase)
            reprocess = self.phase.processEOF()
            if reprocess:
                assert self.phase not in phases

    def normalizedTokens(self):
        for token in self.tokenizer:
            yield self.normalizeToken(token)

    def parse(self, stream, *args, **kwargs):
        """Parse an HTML document into a well-formed tree

        :arg stream: a file-like object or string containing the HTML to be parsed

            The optional encoding parameter must be a string that indicates
            the encoding. If specified, that encoding will be used,
            regardless of any BOM or later declaration (such as in a meta
            element).

        :arg scripting: treat noscript elements as if JavaScript was turned on

        :returns: parsed tree

        Example:

        >>> from html5lib.html5parser import HTMLParser
        >>> parser = HTMLParser()
        >>> parser.parse('<html><body><p>This is a doc</p></body></html>')
        <Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0>

        """
        self._parse(stream, False, None, *args, **kwargs)
        return self.tree.getDocument()

    def parseFragment(self, stream, *args, **kwargs):
        """Parse an HTML fragment into a well-formed tree fragment

        :arg container: name of the element we're setting the innerHTML
            property if set to None, default to 'div'

        :arg stream: a file-like object or string containing the HTML to be parsed

            The optional encoding parameter must be a string that indicates
            the encoding. If specified, that encoding will be used,
            regardless of any BOM or later declaration (such as in a meta
            element)

        :arg scripting: treat noscript elements as if JavaScript was turned on

        :returns: parsed tree

        Example:

        >>> from html5lib.html5parser import HTMLParser
        >>> parser = HTMLParser()
        >>> parser.parseFragment('<b>this is a fragment</b>')
        <Element u'DOCUMENT_FRAGMENT' at 0x7feac484b090>

        """
        self._parse(stream, True, *args, **kwargs)
        return self.tree.getFragment()

    def parseError(self, errorcode="XXX-undefined-error", datavars=None):
        # XXX The idea is to make errorcode mandatory.
        if datavars is None:
            datavars = {}
        self.errors.append((self.tokenizer.stream.position(), errorcode, datavars))
        if self.strict:
            raise ParseError(E[errorcode] % datavars)

    def normalizeToken(self, token):
        # HTML5 specific normalizations to the token stream
        if token["type"] == tokenTypes["StartTag"]:
            raw = token["data"]
            token["data"] = OrderedDict(raw)
            if len(raw) > len(token["data"]):
                # we had some duplicated attribute, fix so first wins
                token["data"].update(raw[::-1])

        return token

    def adjustMathMLAttributes(self, token):
        adjust_attributes(token, adjustMathMLAttributes)

    def adjustSVGAttributes(self, token):
        adjust_attributes(token, adjustSVGAttributes)

    def adjustForeignAttributes(self, token):
        adjust_attributes(token, adjustForeignAttributesMap)

    def reparseTokenNormal(self, token):
        # pylint:disable=unused-argument
        self.parser.phase()

    def resetInsertionMode(self):
        # The name of this method is mostly historical. (It's also used in the
        # specification.)
        last = False
        newModes = {
            "select": "inSelect",
            "td": "inCell",
            "th": "inCell",
            "tr": "inRow",
            "tbody": "inTableBody",
            "thead": "inTableBody",
            "tfoot": "inTableBody",
            "caption": "inCaption",
            "colgroup": "inColumnGroup",
            "table": "inTable",
            "head": "inBody",
            "body": "inBody",
            "frameset": "inFrameset",
            "html": "beforeHead"
        }
        for node in self.tree.openElements[::-1]:
            nodeName = node.name
            new_phase = None
            if node == self.tree.openElements[0]:
                assert self.innerHTML
                last = True
                nodeName = self.innerHTML
            # Check for conditions that should only happen in the innerHTML
            # case
            if nodeName in ("select", "colgroup", "head", "html"):
                assert self.innerHTML

            if not last and node.namespace != self.tree.defaultNamespace:
                continue

            if nodeName in newModes:
                new_phase = self.phases[newModes[nodeName]]
                break
            elif last:
                new_phase = self.phases["inBody"]
                break

        self.phase = new_phase

    def parseRCDataRawtext(self, token, contentType):
        # Generic RCDATA/RAWTEXT Parsing algorithm
        assert contentType in ("RAWTEXT", "RCDATA")

        self.tree.insertElement(token)

        if contentType == "RAWTEXT":
            self.tokenizer.state = self.tokenizer.rawtextState
        else:
            self.tokenizer.state = self.tokenizer.rcdataState

        self.originalPhase = self.phase

        self.phase = self.phases["text"]
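# Usage sketch (an assumption): in non-strict mode every parse error recorded
# by parseError above lands on parser.errors as a (position, code, datavars)
# tuple, matching the append in parseError.
#
#     from html5lib.html5parser import HTMLParser
#
#     parser = HTMLParser()
#     parser.parse("<p>no doctype")
#     for position, code, datavars in parser.errors:
#         print(position, code, datavars)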
@_utils.memoize
def getPhases(debug):
    def log(function):
        """Logger that records which phase processes each token"""
        type_names = dict((value, key) for key, value in tokenTypes.items())

        def wrapped(self, *args, **kwargs):
            if function.__name__.startswith("process") and len(args) > 0:
                token = args[0]
                try:
                    info = {"type": type_names[token['type']]}
                except:
                    raise
                if token['type'] in tagTokenTypes:
                    info["name"] = token['name']

                self.parser.log.append((self.parser.tokenizer.state.__name__,
                                        self.parser.phase.__class__.__name__,
                                        self.__class__.__name__,
                                        function.__name__,
                                        info))
                return function(self, *args, **kwargs)
            else:
                return function(self, *args, **kwargs)
        return wrapped

    def getMetaclass(use_metaclass, metaclass_func):
        if use_metaclass:
            return method_decorator_metaclass(metaclass_func)
        else:
            return type

    # pylint:disable=unused-argument
    class Phase(with_metaclass(getMetaclass(debug, log))):
        """Base class for helper object that implements each phase of processing
        """
        def __init__(self, parser, tree):
            self.parser = parser
            self.tree = tree

        def processEOF(self):
            raise NotImplementedError

        def processComment(self, token):
            # For most phases the following is correct. Where it's not it will be
            # overridden.
            self.tree.insertComment(token, self.tree.openElements[-1])

        def processDoctype(self, token):
            self.parser.parseError("unexpected-doctype")

        def processCharacters(self, token):
            self.tree.insertText(token["data"])

        def processSpaceCharacters(self, token):
            self.tree.insertText(token["data"])

        def processStartTag(self, token):
            return self.startTagHandler[token["name"]](token)

        def startTagHtml(self, token):
            if not self.parser.firstStartTag and token["name"] == "html":
                self.parser.parseError("non-html-root")
            # XXX Need a check here to see if the first start tag token emitted is
            # this token... If it's not, invoke self.parser.parseError().
            for attr, value in token["data"].items():
                if attr not in self.tree.openElements[0].attributes:
                    self.tree.openElements[0].attributes[attr] = value
            self.parser.firstStartTag = False

        def processEndTag(self, token):
            return self.endTagHandler[token["name"]](token)

    class InitialPhase(Phase):
        def processSpaceCharacters(self, token):
            pass

        def processComment(self, token):
            self.tree.insertComment(token, self.tree.document)

        def processDoctype(self, token):
            name = token["name"]
            publicId = token["publicId"]
            systemId = token["systemId"]
            correct = token["correct"]

            if (name != "html" or publicId is not None or
                    systemId is not None and systemId != "about:legacy-compat"):
                self.parser.parseError("unknown-doctype")

            if publicId is None:
                publicId = ""

            self.tree.insertDoctype(token)

            if publicId != "":
                publicId = publicId.translate(asciiUpper2Lower)

            if (not correct or token["name"] != "html" or
                    publicId.startswith(
                        ("+//silmaril//dtd html pro v0r11 19970101//",
                         "-//advasoft ltd//dtd html 3.0 aswedit + extensions//",
                         "-//as//dtd html 3.0 aswedit + extensions//",
                         "-//ietf//dtd html 2.0 level 1//",
                         "-//ietf//dtd html 2.0 level 2//",
                         "-//ietf//dtd html 2.0 strict level 1//",
                         "-//ietf//dtd html 2.0 strict level 2//",
                         "-//ietf//dtd html 2.0 strict//",
                         "-//ietf//dtd html 2.0//",
                         "-//ietf//dtd html 2.1e//",
                         "-//ietf//dtd html 3.0//",
                         "-//ietf//dtd html 3.2 final//",
                         "-//ietf//dtd html 3.2//",
                         "-//ietf//dtd html 3//",
                         "-//ietf//dtd html level 0//",
                         "-//ietf//dtd html level 1//",
                         "-//ietf//dtd html level 2//",
                         "-//ietf//dtd html level 3//",
                         "-//ietf//dtd html strict level 0//",
                         "-//ietf//dtd html strict level 1//",
                         "-//ietf//dtd html strict level 2//",
                         "-//ietf//dtd html strict level 3//",
                         "-//ietf//dtd html strict//",
                         "-//ietf//dtd html//",
                         "-//metrius//dtd metrius presentational//",
                         "-//microsoft//dtd internet explorer 2.0 html strict//",
                         "-//microsoft//dtd internet explorer 2.0 html//",
                         "-//microsoft//dtd internet explorer 2.0 tables//",
                         "-//microsoft//dtd internet explorer 3.0 html strict//",
                         "-//microsoft//dtd internet explorer 3.0 html//",
                         "-//microsoft//dtd internet explorer 3.0 tables//",
                         "-//netscape comm. corp.//dtd html//",
                         "-//netscape comm. corp.//dtd strict html//",
                         "-//o'reilly and associates//dtd html 2.0//",
                         "-//o'reilly and associates//dtd html extended 1.0//",
                         "-//o'reilly and associates//dtd html extended relaxed 1.0//",
                         "-//softquad software//dtd hotmetal pro 6.0::19990601::extensions to html 4.0//",
                         "-//softquad//dtd hotmetal pro 4.0::19971010::extensions to html 4.0//",
                         "-//spyglass//dtd html 2.0 extended//",
                         "-//sq//dtd html 2.0 hotmetal + extensions//",
                         "-//sun microsystems corp.//dtd hotjava html//",
                         "-//sun microsystems corp.//dtd hotjava strict html//",
                         "-//w3c//dtd html 3 1995-03-24//",
                         "-//w3c//dtd html 3.2 draft//",
                         "-//w3c//dtd html 3.2 final//",
                         "-//w3c//dtd html 3.2//",
                         "-//w3c//dtd html 3.2s draft//",
                         "-//w3c//dtd html 4.0 frameset//",
                         "-//w3c//dtd html 4.0 transitional//",
                         "-//w3c//dtd html experimental 19960712//",
                         "-//w3c//dtd html experimental 970421//",
                         "-//w3c//dtd w3 html//",
                         "-//w3o//dtd w3 html 3.0//",
                         "-//webtechs//dtd mozilla html 2.0//",
                         "-//webtechs//dtd mozilla html//")) or
                    publicId in ("-//w3o//dtd w3 html strict 3.0//en//",
                                 "-/w3c/dtd html 4.0 transitional/en",
                                 "html") or
                    publicId.startswith(
                        ("-//w3c//dtd html 4.01 frameset//",
                         "-//w3c//dtd html 4.01 transitional//")) and
                    systemId is None or
                    systemId and systemId.lower() == "http://www.ibm.com/data/dtd/v11/ibmxhtml1-transitional.dtd"):
                self.parser.compatMode = "quirks"
            elif (publicId.startswith(
                    ("-//w3c//dtd xhtml 1.0 frameset//",
                     "-//w3c//dtd xhtml 1.0 transitional//")) or
                  publicId.startswith(
                      ("-//w3c//dtd html 4.01 frameset//",
                       "-//w3c//dtd html 4.01 transitional//")) and
                  systemId is not None):
                self.parser.compatMode = "limited quirks"

            self.parser.phase = self.parser.phases["beforeHtml"]

        def anythingElse(self):
            self.parser.compatMode = "quirks"
            self.parser.phase = self.parser.phases["beforeHtml"]

        def processCharacters(self, token):
            self.parser.parseError("expected-doctype-but-got-chars")
            self.anythingElse()
            return token

        def processStartTag(self, token):
            self.parser.parseError("expected-doctype-but-got-start-tag",
                                   {"name": token["name"]})
            self.anythingElse()
            return token

        def processEndTag(self, token):
            self.parser.parseError("expected-doctype-but-got-end-tag",
                                   {"name": token["name"]})
            self.anythingElse()
            return token

        def processEOF(self):
            self.parser.parseError("expected-doctype-but-got-eof")
            self.anythingElse()
            return True

    class BeforeHtmlPhase(Phase):
        # helper methods
        def insertHtmlElement(self):
            self.tree.insertRoot(impliedTagToken("html", "StartTag"))
            self.parser.phase = self.parser.phases["beforeHead"]

        # other
        def processEOF(self):
            self.insertHtmlElement()
            return True

        def processComment(self, token):
            self.tree.insertComment(token, self.tree.document)

        def processSpaceCharacters(self, token):
            pass

        def processCharacters(self, token):
            self.insertHtmlElement()
            return token

        def processStartTag(self, token):
            if token["name"] == "html":
                self.parser.firstStartTag = True
            self.insertHtmlElement()
            return token

        def processEndTag(self, token):
            if token["name"] not in ("head", "body", "html", "br"):
                self.parser.parseError("unexpected-end-tag-before-html",
                                       {"name": token["name"]})
            else:
                self.insertHtmlElement()
                return token
    class BeforeHeadPhase(Phase):
        def __init__(self, parser, tree):
            Phase.__init__(self, parser, tree)

            self.startTagHandler = _utils.MethodDispatcher([
                ("html", self.startTagHtml),
                ("head", self.startTagHead)
            ])
            self.startTagHandler.default = self.startTagOther

            self.endTagHandler = _utils.MethodDispatcher([
                (("head", "body", "html", "br"), self.endTagImplyHead)
            ])
            self.endTagHandler.default = self.endTagOther

        def processEOF(self):
            self.startTagHead(impliedTagToken("head", "StartTag"))
            return True

        def processSpaceCharacters(self, token):
            pass

        def processCharacters(self, token):
            self.startTagHead(impliedTagToken("head", "StartTag"))
            return token

        def startTagHtml(self, token):
            return self.parser.phases["inBody"].processStartTag(token)

        def startTagHead(self, token):
            self.tree.insertElement(token)
            self.tree.headPointer = self.tree.openElements[-1]
            self.parser.phase = self.parser.phases["inHead"]

        def startTagOther(self, token):
            self.startTagHead(impliedTagToken("head", "StartTag"))
            return token

        def endTagImplyHead(self, token):
            self.startTagHead(impliedTagToken("head", "StartTag"))
            return token

        def endTagOther(self, token):
            self.parser.parseError("end-tag-after-implied-root",
                                   {"name": token["name"]})

    class InHeadPhase(Phase):
        def __init__(self, parser, tree):
            Phase.__init__(self, parser, tree)

            self.startTagHandler = _utils.MethodDispatcher([
                ("html", self.startTagHtml),
                ("title", self.startTagTitle),
                (("noframes", "style"), self.startTagNoFramesStyle),
                ("noscript", self.startTagNoscript),
                ("script", self.startTagScript),
                (("base", "basefont", "bgsound", "command", "link"),
                 self.startTagBaseLinkCommand),
                ("meta", self.startTagMeta),
                ("head", self.startTagHead)
            ])
            self.startTagHandler.default = self.startTagOther

            self.endTagHandler = _utils.MethodDispatcher([
                ("head", self.endTagHead),
                (("br", "html", "body"), self.endTagHtmlBodyBr)
            ])
            self.endTagHandler.default = self.endTagOther

        # the real thing
        def processEOF(self):
            self.anythingElse()
            return True

        def processCharacters(self, token):
            self.anythingElse()
            return token

        def startTagHtml(self, token):
            return self.parser.phases["inBody"].processStartTag(token)

        def startTagHead(self, token):
            self.parser.parseError("two-heads-are-not-better-than-one")

        def startTagBaseLinkCommand(self, token):
            self.tree.insertElement(token)
            self.tree.openElements.pop()
            token["selfClosingAcknowledged"] = True

        def startTagMeta(self, token):
            self.tree.insertElement(token)
            self.tree.openElements.pop()
            token["selfClosingAcknowledged"] = True

            attributes = token["data"]
            if self.parser.tokenizer.stream.charEncoding[1] == "tentative":
                if "charset" in attributes:
                    self.parser.tokenizer.stream.changeEncoding(attributes["charset"])
                elif ("content" in attributes and
                      "http-equiv" in attributes and
                      attributes["http-equiv"].lower() == "content-type"):
                    # Encoding it as UTF-8 here is a hack, as really we should pass
                    # the abstract Unicode string, and just use the
                    # ContentAttrParser on that, but using UTF-8 allows all chars
                    # to be encoded and as an ASCII-superset works.
                    data = _inputstream.EncodingBytes(attributes["content"].encode("utf-8"))
                    parser = _inputstream.ContentAttrParser(data)
                    codec = parser.parse()
                    self.parser.tokenizer.stream.changeEncoding(codec)

        def startTagTitle(self, token):
            self.parser.parseRCDataRawtext(token, "RCDATA")

        def startTagNoFramesStyle(self, token):
            # Need to decide whether to implement the scripting-disabled case
            self.parser.parseRCDataRawtext(token, "RAWTEXT")

        def startTagNoscript(self, token):
            if self.parser.scripting:
                self.parser.parseRCDataRawtext(token, "RAWTEXT")
            else:
                self.tree.insertElement(token)
                self.parser.phase = self.parser.phases["inHeadNoscript"]

        def startTagScript(self, token):
            self.tree.insertElement(token)
            self.parser.tokenizer.state = self.parser.tokenizer.scriptDataState
            self.parser.originalPhase = self.parser.phase
            self.parser.phase = self.parser.phases["text"]

        def startTagOther(self, token):
            self.anythingElse()
            return token

        def endTagHead(self, token):
            node = self.parser.tree.openElements.pop()
            assert node.name == "head", "Expected head got %s" % node.name
            self.parser.phase = self.parser.phases["afterHead"]

        def endTagHtmlBodyBr(self, token):
            self.anythingElse()
            return token

        def endTagOther(self, token):
            self.parser.parseError("unexpected-end-tag", {"name": token["name"]})

        def anythingElse(self):
            self.endTagHead(impliedTagToken("head"))
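    # Usage sketch (an assumption): startTagMeta above is what lets a
    # <meta charset> declaration rewind and re-decode a tentatively decoded
    # byte stream; the final choice is exposed via documentEncoding.
    #
    #     from html5lib.html5parser import HTMLParser
    #
    #     parser = HTMLParser()
    #     parser.parse(b'<!DOCTYPE html><meta charset="utf-8"><p>caf\xc3\xa9')
    #     print(parser.documentEncoding)  # expected: "utf-8"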
self.endTagNoscript(impliedTagToken("noscript")) class AfterHeadPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("body", self.startTagBody), ("frameset", self.startTagFrameset), (("base", "basefont", "bgsound", "link", "meta", "noframes", "script", "style", "title"), self.startTagFromHead), ("head", self.startTagHead) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([(("body", "html", "br"), self.endTagHtmlBodyBr)]) self.endTagHandler.default = self.endTagOther def processEOF(self): self.anythingElse() return True def processCharacters(self, token): self.anythingElse() return token def startTagHtml(self, token): return self.parser.phases["inBody"].processStartTag(token) def startTagBody(self, token): self.parser.framesetOK = False self.tree.insertElement(token) self.parser.phase = self.parser.phases["inBody"] def startTagFrameset(self, token): self.tree.insertElement(token) self.parser.phase = self.parser.phases["inFrameset"] def startTagFromHead(self, token): self.parser.parseError("unexpected-start-tag-out-of-my-head", {"name": token["name"]}) self.tree.openElements.append(self.tree.headPointer) self.parser.phases["inHead"].processStartTag(token) for node in self.tree.openElements[::-1]: if node.name == "head": self.tree.openElements.remove(node) break def startTagHead(self, token): self.parser.parseError("unexpected-start-tag", {"name": token["name"]}) def startTagOther(self, token): self.anythingElse() return token def endTagHtmlBodyBr(self, token): self.anythingElse() return token def endTagOther(self, token): self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) def anythingElse(self): self.tree.insertElement(impliedTagToken("body", "StartTag")) self.parser.phase = self.parser.phases["inBody"] self.parser.framesetOK = True class InBodyPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#parsing-main-inbody # the really-really-really-very crazy mode def __init__(self, parser, tree): Phase.__init__(self, parser, tree) # Set this to the default handler self.processSpaceCharacters = self.processSpaceCharactersNonPre self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), (("base", "basefont", "bgsound", "command", "link", "meta", "script", "style", "title"), self.startTagProcessInHead), ("body", self.startTagBody), ("frameset", self.startTagFrameset), (("address", "article", "aside", "blockquote", "center", "details", "dir", "div", "dl", "fieldset", "figcaption", "figure", "footer", "header", "hgroup", "main", "menu", "nav", "ol", "p", "section", "summary", "ul"), self.startTagCloseP), (headingElements, self.startTagHeading), (("pre", "listing"), self.startTagPreListing), ("form", self.startTagForm), (("li", "dd", "dt"), self.startTagListItem), ("plaintext", self.startTagPlaintext), ("a", self.startTagA), (("b", "big", "code", "em", "font", "i", "s", "small", "strike", "strong", "tt", "u"), self.startTagFormatting), ("nobr", self.startTagNobr), ("button", self.startTagButton), (("applet", "marquee", "object"), self.startTagAppletMarqueeObject), ("xmp", self.startTagXmp), ("table", self.startTagTable), (("area", "br", "embed", "img", "keygen", "wbr"), self.startTagVoidFormatting), (("param", "source", "track"), self.startTagParamSource), ("input", self.startTagInput), ("hr", self.startTagHr), ("image", self.startTagImage), ("isindex", self.startTagIsIndex), ("textarea", 
self.startTagTextarea), ("iframe", self.startTagIFrame), ("noscript", self.startTagNoscript), (("noembed", "noframes"), self.startTagRawtext), ("select", self.startTagSelect), (("rp", "rt"), self.startTagRpRt), (("option", "optgroup"), self.startTagOpt), (("math"), self.startTagMath), (("svg"), self.startTagSvg), (("caption", "col", "colgroup", "frame", "head", "tbody", "td", "tfoot", "th", "thead", "tr"), self.startTagMisplaced) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("body", self.endTagBody), ("html", self.endTagHtml), (("address", "article", "aside", "blockquote", "button", "center", "details", "dialog", "dir", "div", "dl", "fieldset", "figcaption", "figure", "footer", "header", "hgroup", "listing", "main", "menu", "nav", "ol", "pre", "section", "summary", "ul"), self.endTagBlock), ("form", self.endTagForm), ("p", self.endTagP), (("dd", "dt", "li"), self.endTagListItem), (headingElements, self.endTagHeading), (("a", "b", "big", "code", "em", "font", "i", "nobr", "s", "small", "strike", "strong", "tt", "u"), self.endTagFormatting), (("applet", "marquee", "object"), self.endTagAppletMarqueeObject), ("br", self.endTagBr), ]) self.endTagHandler.default = self.endTagOther def isMatchingFormattingElement(self, node1, node2): return (node1.name == node2.name and node1.namespace == node2.namespace and node1.attributes == node2.attributes) # helper def addFormattingElement(self, token): self.tree.insertElement(token) element = self.tree.openElements[-1] matchingElements = [] for node in self.tree.activeFormattingElements[::-1]: if node is Marker: break elif self.isMatchingFormattingElement(node, element): matchingElements.append(node) assert len(matchingElements) <= 3 if len(matchingElements) == 3: self.tree.activeFormattingElements.remove(matchingElements[-1]) self.tree.activeFormattingElements.append(element) # the real deal def processEOF(self): allowed_elements = frozenset(("dd", "dt", "li", "p", "tbody", "td", "tfoot", "th", "thead", "tr", "body", "html")) for node in self.tree.openElements[::-1]: if node.name not in allowed_elements: self.parser.parseError("expected-closing-tag-but-got-eof") break # Stop parsing def processSpaceCharactersDropNewline(self, token): # Sometimes (start of <pre>, <listing>, and <textarea> blocks) we # want to drop leading newlines data = token["data"] self.processSpaceCharacters = self.processSpaceCharactersNonPre if (data.startswith("\n") and self.tree.openElements[-1].name in ("pre", "listing", "textarea") and not self.tree.openElements[-1].hasContent()): data = data[1:] if data: self.tree.reconstructActiveFormattingElements() self.tree.insertText(data) def processCharacters(self, token): if token["data"] == "\u0000": # The tokenizer should always emit null on its own return self.tree.reconstructActiveFormattingElements() self.tree.insertText(token["data"]) # This must be bad for performance if (self.parser.framesetOK and any([char not in spaceCharacters for char in token["data"]])): self.parser.framesetOK = False def processSpaceCharactersNonPre(self, token): self.tree.reconstructActiveFormattingElements() self.tree.insertText(token["data"]) def startTagProcessInHead(self, token): return self.parser.phases["inHead"].processStartTag(token) def startTagBody(self, token): self.parser.parseError("unexpected-start-tag", {"name": "body"}) if (len(self.tree.openElements) == 1 or self.tree.openElements[1].name != "body"): assert self.parser.innerHTML else: self.parser.framesetOK = False for attr, value in 
token["data"].items(): if attr not in self.tree.openElements[1].attributes: self.tree.openElements[1].attributes[attr] = value def startTagFrameset(self, token): self.parser.parseError("unexpected-start-tag", {"name": "frameset"}) if (len(self.tree.openElements) == 1 or self.tree.openElements[1].name != "body"): assert self.parser.innerHTML elif not self.parser.framesetOK: pass else: if self.tree.openElements[1].parent: self.tree.openElements[1].parent.removeChild(self.tree.openElements[1]) while self.tree.openElements[-1].name != "html": self.tree.openElements.pop() self.tree.insertElement(token) self.parser.phase = self.parser.phases["inFrameset"] def startTagCloseP(self, token): if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) self.tree.insertElement(token) def startTagPreListing(self, token): if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) self.tree.insertElement(token) self.parser.framesetOK = False self.processSpaceCharacters = self.processSpaceCharactersDropNewline def startTagForm(self, token): if self.tree.formPointer: self.parser.parseError("unexpected-start-tag", {"name": "form"}) else: if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) self.tree.insertElement(token) self.tree.formPointer = self.tree.openElements[-1] def startTagListItem(self, token): self.parser.framesetOK = False stopNamesMap = {"li": ["li"], "dt": ["dt", "dd"], "dd": ["dt", "dd"]} stopNames = stopNamesMap[token["name"]] for node in reversed(self.tree.openElements): if node.name in stopNames: self.parser.phase.processEndTag( impliedTagToken(node.name, "EndTag")) break if (node.nameTuple in specialElements and node.name not in ("address", "div", "p")): break if self.tree.elementInScope("p", variant="button"): self.parser.phase.processEndTag( impliedTagToken("p", "EndTag")) self.tree.insertElement(token) def startTagPlaintext(self, token): if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) self.tree.insertElement(token) self.parser.tokenizer.state = self.parser.tokenizer.plaintextState def startTagHeading(self, token): if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) if self.tree.openElements[-1].name in headingElements: self.parser.parseError("unexpected-start-tag", {"name": token["name"]}) self.tree.openElements.pop() self.tree.insertElement(token) def startTagA(self, token): afeAElement = self.tree.elementInActiveFormattingElements("a") if afeAElement: self.parser.parseError("unexpected-start-tag-implies-end-tag", {"startName": "a", "endName": "a"}) self.endTagFormatting(impliedTagToken("a")) if afeAElement in self.tree.openElements: self.tree.openElements.remove(afeAElement) if afeAElement in self.tree.activeFormattingElements: self.tree.activeFormattingElements.remove(afeAElement) self.tree.reconstructActiveFormattingElements() self.addFormattingElement(token) def startTagFormatting(self, token): self.tree.reconstructActiveFormattingElements() self.addFormattingElement(token) def startTagNobr(self, token): self.tree.reconstructActiveFormattingElements() if self.tree.elementInScope("nobr"): self.parser.parseError("unexpected-start-tag-implies-end-tag", {"startName": "nobr", "endName": "nobr"}) self.processEndTag(impliedTagToken("nobr")) # XXX Need tests that trigger the following self.tree.reconstructActiveFormattingElements() self.addFormattingElement(token) def startTagButton(self, token): if 
self.tree.elementInScope("button"): self.parser.parseError("unexpected-start-tag-implies-end-tag", {"startName": "button", "endName": "button"}) self.processEndTag(impliedTagToken("button")) return token else: self.tree.reconstructActiveFormattingElements() self.tree.insertElement(token) self.parser.framesetOK = False def startTagAppletMarqueeObject(self, token): self.tree.reconstructActiveFormattingElements() self.tree.insertElement(token) self.tree.activeFormattingElements.append(Marker) self.parser.framesetOK = False def startTagXmp(self, token): if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) self.tree.reconstructActiveFormattingElements() self.parser.framesetOK = False self.parser.parseRCDataRawtext(token, "RAWTEXT") def startTagTable(self, token): if self.parser.compatMode != "quirks": if self.tree.elementInScope("p", variant="button"): self.processEndTag(impliedTagToken("p")) self.tree.insertElement(token) self.parser.framesetOK = False self.parser.phase = self.parser.phases["inTable"] def startTagVoidFormatting(self, token): self.tree.reconstructActiveFormattingElements() self.tree.insertElement(token) self.tree.openElements.pop() token["selfClosingAcknowledged"] = True self.parser.framesetOK = False def startTagInput(self, token): framesetOK = self.parser.framesetOK self.startTagVoidFormatting(token) if ("type" in token["data"] and token["data"]["type"].translate(asciiUpper2Lower) == "hidden"): # input type=hidden doesn't change framesetOK self.parser.framesetOK = framesetOK def startTagParamSource(self, token): self.tree.insertElement(token) self.tree.openElements.pop() token["selfClosingAcknowledged"] = True def startTagHr(self, token): if self.tree.elementInScope("p", variant="button"): self.endTagP(impliedTagToken("p")) self.tree.insertElement(token) self.tree.openElements.pop() token["selfClosingAcknowledged"] = True self.parser.framesetOK = False def startTagImage(self, token): # No really... self.parser.parseError("unexpected-start-tag-treated-as", {"originalName": "image", "newName": "img"}) self.processStartTag(impliedTagToken("img", "StartTag", attributes=token["data"], selfClosing=token["selfClosing"])) def startTagIsIndex(self, token): self.parser.parseError("deprecated-tag", {"name": "isindex"}) if self.tree.formPointer: return form_attrs = {} if "action" in token["data"]: form_attrs["action"] = token["data"]["action"] self.processStartTag(impliedTagToken("form", "StartTag", attributes=form_attrs)) self.processStartTag(impliedTagToken("hr", "StartTag")) self.processStartTag(impliedTagToken("label", "StartTag")) # XXX Localization ... if "prompt" in token["data"]: prompt = token["data"]["prompt"] else: prompt = "This is a searchable index. 
Enter search keywords: " self.processCharacters( {"type": tokenTypes["Characters"], "data": prompt}) attributes = token["data"].copy() if "action" in attributes: del attributes["action"] if "prompt" in attributes: del attributes["prompt"] attributes["name"] = "isindex" self.processStartTag(impliedTagToken("input", "StartTag", attributes=attributes, selfClosing=token["selfClosing"])) self.processEndTag(impliedTagToken("label")) self.processStartTag(impliedTagToken("hr", "StartTag")) self.processEndTag(impliedTagToken("form")) def startTagTextarea(self, token): self.tree.insertElement(token) self.parser.tokenizer.state = self.parser.tokenizer.rcdataState self.processSpaceCharacters = self.processSpaceCharactersDropNewline self.parser.framesetOK = False def startTagIFrame(self, token): self.parser.framesetOK = False self.startTagRawtext(token) def startTagNoscript(self, token): if self.parser.scripting: self.startTagRawtext(token) else: self.startTagOther(token) def startTagRawtext(self, token): """iframe, noembed, noframes, noscript (if scripting enabled)""" self.parser.parseRCDataRawtext(token, "RAWTEXT") def startTagOpt(self, token): if self.tree.openElements[-1].name == "option": self.parser.phase.processEndTag(impliedTagToken("option")) self.tree.reconstructActiveFormattingElements() self.parser.tree.insertElement(token) def startTagSelect(self, token): self.tree.reconstructActiveFormattingElements() self.tree.insertElement(token) self.parser.framesetOK = False if self.parser.phase in (self.parser.phases["inTable"], self.parser.phases["inCaption"], self.parser.phases["inColumnGroup"], self.parser.phases["inTableBody"], self.parser.phases["inRow"], self.parser.phases["inCell"]): self.parser.phase = self.parser.phases["inSelectInTable"] else: self.parser.phase = self.parser.phases["inSelect"] def startTagRpRt(self, token): if self.tree.elementInScope("ruby"): self.tree.generateImpliedEndTags() if self.tree.openElements[-1].name != "ruby": self.parser.parseError() self.tree.insertElement(token) def startTagMath(self, token): self.tree.reconstructActiveFormattingElements() self.parser.adjustMathMLAttributes(token) self.parser.adjustForeignAttributes(token) token["namespace"] = namespaces["mathml"] self.tree.insertElement(token) # Need to get the parse error right for the case where the token # has a namespace not equal to the xmlns attribute if token["selfClosing"]: self.tree.openElements.pop() token["selfClosingAcknowledged"] = True def startTagSvg(self, token): self.tree.reconstructActiveFormattingElements() self.parser.adjustSVGAttributes(token) self.parser.adjustForeignAttributes(token) token["namespace"] = namespaces["svg"] self.tree.insertElement(token) # Need to get the parse error right for the case where the token # has a namespace not equal to the xmlns attribute if token["selfClosing"]: self.tree.openElements.pop() token["selfClosingAcknowledged"] = True def startTagMisplaced(self, token): """ Elements that should be children of other elements that have a different insertion mode; here they are ignored "caption", "col", "colgroup", "frame", "frameset", "head", "option", "optgroup", "tbody", "td", "tfoot", "th", "thead", "tr", "noscript" """ self.parser.parseError("unexpected-start-tag-ignored", {"name": token["name"]}) def startTagOther(self, token): self.tree.reconstructActiveFormattingElements() self.tree.insertElement(token) def endTagP(self, token): if not self.tree.elementInScope("p", variant="button"): self.startTagCloseP(impliedTagToken("p", "StartTag"))
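# --- Added illustration (ours): the endTagP branch starting here synthesizes
# an open <p> for a stray </p> with no paragraph in button scope, so the
# close tag still produces an (empty) p element:
def _demo_stray_end_p():
    import html5lib
    # The resulting fragment's <div> contains an empty, synthesized <p>.
    return html5lib.parseFragment("<div></p></div>")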
self.parser.parseError("unexpected-end-tag", {"name": "p"}) self.endTagP(impliedTagToken("p", "EndTag")) else: self.tree.generateImpliedEndTags("p") if self.tree.openElements[-1].name != "p": self.parser.parseError("unexpected-end-tag", {"name": "p"}) node = self.tree.openElements.pop() while node.name != "p": node = self.tree.openElements.pop() def endTagBody(self, token): if not self.tree.elementInScope("body"): self.parser.parseError() return elif self.tree.openElements[-1].name != "body": for node in self.tree.openElements[2:]: if node.name not in frozenset(("dd", "dt", "li", "optgroup", "option", "p", "rp", "rt", "tbody", "td", "tfoot", "th", "thead", "tr", "body", "html")): # Not sure this is the correct name for the parse error self.parser.parseError( "expected-one-end-tag-but-got-another", {"gotName": "body", "expectedName": node.name}) break self.parser.phase = self.parser.phases["afterBody"] def endTagHtml(self, token): # We repeat the test for the body end tag token being ignored here if self.tree.elementInScope("body"): self.endTagBody(impliedTagToken("body")) return token def endTagBlock(self, token): # Put us back in the right whitespace handling mode if token["name"] == "pre": self.processSpaceCharacters = self.processSpaceCharactersNonPre inScope = self.tree.elementInScope(token["name"]) if inScope: self.tree.generateImpliedEndTags() if self.tree.openElements[-1].name != token["name"]: self.parser.parseError("end-tag-too-early", {"name": token["name"]}) if inScope: node = self.tree.openElements.pop() while node.name != token["name"]: node = self.tree.openElements.pop() def endTagForm(self, token): node = self.tree.formPointer self.tree.formPointer = None if node is None or not self.tree.elementInScope(node): self.parser.parseError("unexpected-end-tag", {"name": "form"}) else: self.tree.generateImpliedEndTags() if self.tree.openElements[-1] != node: self.parser.parseError("end-tag-too-early-ignored", {"name": "form"}) self.tree.openElements.remove(node) def endTagListItem(self, token): if token["name"] == "li": variant = "list" else: variant = None if not self.tree.elementInScope(token["name"], variant=variant): self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) else: self.tree.generateImpliedEndTags(exclude=token["name"]) if self.tree.openElements[-1].name != token["name"]: self.parser.parseError( "end-tag-too-early", {"name": token["name"]}) node = self.tree.openElements.pop() while node.name != token["name"]: node = self.tree.openElements.pop() def endTagHeading(self, token): for item in headingElements: if self.tree.elementInScope(item): self.tree.generateImpliedEndTags() break if self.tree.openElements[-1].name != token["name"]: self.parser.parseError("end-tag-too-early", {"name": token["name"]}) for item in headingElements: if self.tree.elementInScope(item): item = self.tree.openElements.pop() while item.name not in headingElements: item = self.tree.openElements.pop() break def endTagFormatting(self, token): """The much-feared adoption agency algorithm""" # http://svn.whatwg.org/webapps/complete.html#adoptionAgency revision 7867 # XXX Better parseError messages appreciated. # Step 1 outerLoopCounter = 0 # Step 2 while outerLoopCounter < 8: # Step 3 outerLoopCounter += 1 # Step 4: # Let the formatting element be the last element in # the list of active formatting elements that: # - is between the end of the list and the last scope # marker in the list, if any, or the start of the list # otherwise, and # - has the same tag name as the token. 
formattingElement = self.tree.elementInActiveFormattingElements( token["name"]) if (not formattingElement or (formattingElement in self.tree.openElements and not self.tree.elementInScope(formattingElement.name))): # If there is no such node, then abort these steps # and instead act as described in the "any other # end tag" entry below. self.endTagOther(token) return # Otherwise, if there is such a node, but that node is # not in the stack of open elements, then this is a # parse error; remove the element from the list, and # abort these steps. elif formattingElement not in self.tree.openElements: self.parser.parseError("adoption-agency-1.2", {"name": token["name"]}) self.tree.activeFormattingElements.remove(formattingElement) return # Otherwise, if there is such a node, and that node is # also in the stack of open elements, but the element # is not in scope, then this is a parse error; ignore # the token, and abort these steps. elif not self.tree.elementInScope(formattingElement.name): self.parser.parseError("adoption-agency-4.4", {"name": token["name"]}) return # Otherwise, there is a formatting element and that # element is in the stack and is in scope. If the # element is not the current node, this is a parse # error. In any case, proceed with the algorithm as # written in the following steps. else: if formattingElement != self.tree.openElements[-1]: self.parser.parseError("adoption-agency-1.3", {"name": token["name"]}) # Step 5: # Let the furthest block be the topmost node in the # stack of open elements that is lower in the stack # than the formatting element, and is an element in # the special category. There might not be one. afeIndex = self.tree.openElements.index(formattingElement) furthestBlock = None for element in self.tree.openElements[afeIndex:]: if element.nameTuple in specialElements: furthestBlock = element break # Step 6: # If there is no furthest block, then the UA must # first pop all the nodes from the bottom of the stack # of open elements, from the current node up to and # including the formatting element, then remove the # formatting element from the list of active # formatting elements, and finally abort these steps. if furthestBlock is None: element = self.tree.openElements.pop() while element != formattingElement: element = self.tree.openElements.pop() self.tree.activeFormattingElements.remove(element) return # Step 7 commonAncestor = self.tree.openElements[afeIndex - 1] # Step 8: # The bookmark is supposed to help us identify where to reinsert # nodes in step 15. We have to ensure that we reinsert nodes after # the node before the active formatting element. 
Note the bookmark # can move in step 9.7 bookmark = self.tree.activeFormattingElements.index(formattingElement) # Step 9 lastNode = node = furthestBlock innerLoopCounter = 0 index = self.tree.openElements.index(node) while innerLoopCounter < 3: innerLoopCounter += 1 # Node is element before node in open elements index -= 1 node = self.tree.openElements[index] if node not in self.tree.activeFormattingElements: self.tree.openElements.remove(node) continue # Step 9.6 if node == formattingElement: break # Step 9.7 if lastNode == furthestBlock: bookmark = self.tree.activeFormattingElements.index(node) + 1 # Step 9.8 clone = node.cloneNode() # Replace node with clone self.tree.activeFormattingElements[ self.tree.activeFormattingElements.index(node)] = clone self.tree.openElements[ self.tree.openElements.index(node)] = clone node = clone # Step 9.9 # Remove lastNode from its parents, if any if lastNode.parent: lastNode.parent.removeChild(lastNode) node.appendChild(lastNode) # Step 9.10 lastNode = node # Step 10 # Foster parent lastNode if commonAncestor is a # table, tbody, tfoot, thead, or tr we need to foster # parent the lastNode if lastNode.parent: lastNode.parent.removeChild(lastNode) if commonAncestor.name in frozenset(("table", "tbody", "tfoot", "thead", "tr")): parent, insertBefore = self.tree.getTableMisnestedNodePosition() parent.insertBefore(lastNode, insertBefore) else: commonAncestor.appendChild(lastNode) # Step 11 clone = formattingElement.cloneNode() # Step 12 furthestBlock.reparentChildren(clone) # Step 13 furthestBlock.appendChild(clone) # Step 14 self.tree.activeFormattingElements.remove(formattingElement) self.tree.activeFormattingElements.insert(bookmark, clone) # Step 15 self.tree.openElements.remove(formattingElement) self.tree.openElements.insert( self.tree.openElements.index(furthestBlock) + 1, clone) def endTagAppletMarqueeObject(self, token): if self.tree.elementInScope(token["name"]): self.tree.generateImpliedEndTags() if self.tree.openElements[-1].name != token["name"]: self.parser.parseError("end-tag-too-early", {"name": token["name"]}) if self.tree.elementInScope(token["name"]): element = self.tree.openElements.pop() while element.name != token["name"]: element = self.tree.openElements.pop() self.tree.clearActiveFormattingElements() def endTagBr(self, token): self.parser.parseError("unexpected-end-tag-treated-as", {"originalName": "br", "newName": "br element"}) self.tree.reconstructActiveFormattingElements() self.tree.insertElement(impliedTagToken("br", "StartTag")) self.tree.openElements.pop() def endTagOther(self, token): for node in self.tree.openElements[::-1]: if node.name == token["name"]: self.tree.generateImpliedEndTags(exclude=token["name"]) if self.tree.openElements[-1].name != token["name"]: self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) while self.tree.openElements.pop() != node: pass break else: if node.nameTuple in specialElements: self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) break class TextPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("script", self.endTagScript)]) self.endTagHandler.default = self.endTagOther def processCharacters(self, token): self.tree.insertText(token["data"]) def processEOF(self): self.parser.parseError("expected-named-closing-tag-but-got-eof", {"name": self.tree.openElements[-1].name}) 
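# --- Added illustration (ours): once startTagScript switched the tokenizer
# into script-data state and the parser into this "text" phase, everything up
# to the matching </script> is plain character data, never tags:
def _demo_script_text_phase():
    import html5lib
    # No <p> element is created; the script's text is the verbatim source.
    return html5lib.parse("<script>if (a < b) { x = '<p>'; }</script>")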
self.tree.openElements.pop() self.parser.phase = self.parser.originalPhase return True def startTagOther(self, token): assert False, "Tried to process start tag %s in RCDATA/RAWTEXT mode" % token['name'] def endTagScript(self, token): node = self.tree.openElements.pop() assert node.name == "script" self.parser.phase = self.parser.originalPhase # The rest of this method is all stuff that only happens if # document.write works def endTagOther(self, token): self.tree.openElements.pop() self.parser.phase = self.parser.originalPhase class InTablePhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-table def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("caption", self.startTagCaption), ("colgroup", self.startTagColgroup), ("col", self.startTagCol), (("tbody", "tfoot", "thead"), self.startTagRowGroup), (("td", "th", "tr"), self.startTagImplyTbody), ("table", self.startTagTable), (("style", "script"), self.startTagStyleScript), ("input", self.startTagInput), ("form", self.startTagForm) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("table", self.endTagTable), (("body", "caption", "col", "colgroup", "html", "tbody", "td", "tfoot", "th", "thead", "tr"), self.endTagIgnore) ]) self.endTagHandler.default = self.endTagOther # helper methods def clearStackToTableContext(self): # "clear the stack back to a table context" while self.tree.openElements[-1].name not in ("table", "html"): # self.parser.parseError("unexpected-implied-end-tag-in-table", # {"name": self.tree.openElements[-1].name}) self.tree.openElements.pop() # When the current node is <html> it's an innerHTML case # processing methods def processEOF(self): if self.tree.openElements[-1].name != "html": self.parser.parseError("eof-in-table") else: assert self.parser.innerHTML # Stop parsing def processSpaceCharacters(self, token): originalPhase = self.parser.phase self.parser.phase = self.parser.phases["inTableText"] self.parser.phase.originalPhase = originalPhase self.parser.phase.processSpaceCharacters(token) def processCharacters(self, token): originalPhase = self.parser.phase self.parser.phase = self.parser.phases["inTableText"] self.parser.phase.originalPhase = originalPhase self.parser.phase.processCharacters(token) def insertText(self, token): # If we get here there must be at least one non-whitespace character # Do the table magic! 
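# --- Added illustration (ours): the insertFromTable flag set just below makes
# the tree builder foster-parent character data that is illegal directly
# inside <table>, so stray text lands before the table element:
def _demo_foster_parented_text():
    import html5lib
    doc = html5lib.parse("<table>oops<tr><td>x</td></tr></table>")
    # Roughly: <body>oops<table><tbody><tr><td>x</td></tr></tbody></table>
    return html5lib.serialize(doc, omit_optional_tags=False)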
self.tree.insertFromTable = True self.parser.phases["inBody"].processCharacters(token) self.tree.insertFromTable = False def startTagCaption(self, token): self.clearStackToTableContext() self.tree.activeFormattingElements.append(Marker) self.tree.insertElement(token) self.parser.phase = self.parser.phases["inCaption"] def startTagColgroup(self, token): self.clearStackToTableContext() self.tree.insertElement(token) self.parser.phase = self.parser.phases["inColumnGroup"] def startTagCol(self, token): self.startTagColgroup(impliedTagToken("colgroup", "StartTag")) return token def startTagRowGroup(self, token): self.clearStackToTableContext() self.tree.insertElement(token) self.parser.phase = self.parser.phases["inTableBody"] def startTagImplyTbody(self, token): self.startTagRowGroup(impliedTagToken("tbody", "StartTag")) return token def startTagTable(self, token): self.parser.parseError("unexpected-start-tag-implies-end-tag", {"startName": "table", "endName": "table"}) self.parser.phase.processEndTag(impliedTagToken("table")) if not self.parser.innerHTML: return token def startTagStyleScript(self, token): return self.parser.phases["inHead"].processStartTag(token) def startTagInput(self, token): if ("type" in token["data"] and token["data"]["type"].translate(asciiUpper2Lower) == "hidden"): self.parser.parseError("unexpected-hidden-input-in-table") self.tree.insertElement(token) # XXX associate with form self.tree.openElements.pop() else: self.startTagOther(token) def startTagForm(self, token): self.parser.parseError("unexpected-form-in-table") if self.tree.formPointer is None: self.tree.insertElement(token) self.tree.formPointer = self.tree.openElements[-1] self.tree.openElements.pop() def startTagOther(self, token): self.parser.parseError("unexpected-start-tag-implies-table-voodoo", {"name": token["name"]}) # Do the table magic! self.tree.insertFromTable = True self.parser.phases["inBody"].processStartTag(token) self.tree.insertFromTable = False def endTagTable(self, token): if self.tree.elementInScope("table", variant="table"): self.tree.generateImpliedEndTags() if self.tree.openElements[-1].name != "table": self.parser.parseError("end-tag-too-early-named", {"gotName": "table", "expectedName": self.tree.openElements[-1].name}) while self.tree.openElements[-1].name != "table": self.tree.openElements.pop() self.tree.openElements.pop() self.parser.resetInsertionMode() else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def endTagIgnore(self, token): self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) def endTagOther(self, token): self.parser.parseError("unexpected-end-tag-implies-table-voodoo", {"name": token["name"]}) # Do the table magic! 
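# --- Added illustration (ours): startTagInput above lets <input type=hidden>
# remain inside the table, while any other input takes the startTagOther
# route and is foster-parented out:
def _demo_hidden_input_in_table():
    import html5lib
    hidden = html5lib.parse('<table><input type="hidden"></table>')
    other = html5lib.parse('<table><input type="text"></table>')
    # In `hidden` the input stays a child of the table; in `other` it is
    # foster-parented to just before the table.
    return hidden, other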
self.tree.insertFromTable = True self.parser.phases["inBody"].processEndTag(token) self.tree.insertFromTable = False class InTableTextPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.originalPhase = None self.characterTokens = [] def flushCharacters(self): data = "".join([item["data"] for item in self.characterTokens]) if any([item not in spaceCharacters for item in data]): token = {"type": tokenTypes["Characters"], "data": data} self.parser.phases["inTable"].insertText(token) elif data: self.tree.insertText(data) self.characterTokens = [] def processComment(self, token): self.flushCharacters() self.parser.phase = self.originalPhase return token def processEOF(self): self.flushCharacters() self.parser.phase = self.originalPhase return True def processCharacters(self, token): if token["data"] == "\u0000": return self.characterTokens.append(token) def processSpaceCharacters(self, token): # pretty sure we should never reach here self.characterTokens.append(token) # assert False def processStartTag(self, token): self.flushCharacters() self.parser.phase = self.originalPhase return token def processEndTag(self, token): self.flushCharacters() self.parser.phase = self.originalPhase return token class InCaptionPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-caption def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), (("caption", "col", "colgroup", "tbody", "td", "tfoot", "th", "thead", "tr"), self.startTagTableElement) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("caption", self.endTagCaption), ("table", self.endTagTable), (("body", "col", "colgroup", "html", "tbody", "td", "tfoot", "th", "thead", "tr"), self.endTagIgnore) ]) self.endTagHandler.default = self.endTagOther def ignoreEndTagCaption(self): return not self.tree.elementInScope("caption", variant="table") def processEOF(self): self.parser.phases["inBody"].processEOF() def processCharacters(self, token): return self.parser.phases["inBody"].processCharacters(token) def startTagTableElement(self, token): self.parser.parseError() # XXX Have to duplicate logic here to find out if the tag is ignored ignoreEndTag = self.ignoreEndTagCaption() self.parser.phase.processEndTag(impliedTagToken("caption")) if not ignoreEndTag: return token def startTagOther(self, token): return self.parser.phases["inBody"].processStartTag(token) def endTagCaption(self, token): if not self.ignoreEndTagCaption(): # AT this code is quite similar to endTagTable in "InTable" self.tree.generateImpliedEndTags() if self.tree.openElements[-1].name != "caption": self.parser.parseError("expected-one-end-tag-but-got-another", {"gotName": "caption", "expectedName": self.tree.openElements[-1].name}) while self.tree.openElements[-1].name != "caption": self.tree.openElements.pop() self.tree.openElements.pop() self.tree.clearActiveFormattingElements() self.parser.phase = self.parser.phases["inTable"] else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def endTagTable(self, token): self.parser.parseError() ignoreEndTag = self.ignoreEndTagCaption() self.parser.phase.processEndTag(impliedTagToken("caption")) if not ignoreEndTag: return token def endTagIgnore(self, token): self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) def endTagOther(self, token): return self.parser.phases["inBody"].processEndTag(token) class 
InColumnGroupPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-column def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("col", self.startTagCol) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("colgroup", self.endTagColgroup), ("col", self.endTagCol) ]) self.endTagHandler.default = self.endTagOther def ignoreEndTagColgroup(self): return self.tree.openElements[-1].name == "html" def processEOF(self): if self.tree.openElements[-1].name == "html": assert self.parser.innerHTML return else: ignoreEndTag = self.ignoreEndTagColgroup() self.endTagColgroup(impliedTagToken("colgroup")) if not ignoreEndTag: return True def processCharacters(self, token): ignoreEndTag = self.ignoreEndTagColgroup() self.endTagColgroup(impliedTagToken("colgroup")) if not ignoreEndTag: return token def startTagCol(self, token): self.tree.insertElement(token) self.tree.openElements.pop() token["selfClosingAcknowledged"] = True def startTagOther(self, token): ignoreEndTag = self.ignoreEndTagColgroup() self.endTagColgroup(impliedTagToken("colgroup")) if not ignoreEndTag: return token def endTagColgroup(self, token): if self.ignoreEndTagColgroup(): # innerHTML case assert self.parser.innerHTML self.parser.parseError() else: self.tree.openElements.pop() self.parser.phase = self.parser.phases["inTable"] def endTagCol(self, token): self.parser.parseError("no-end-tag", {"name": "col"}) def endTagOther(self, token): ignoreEndTag = self.ignoreEndTagColgroup() self.endTagColgroup(impliedTagToken("colgroup")) if not ignoreEndTag: return token class InTableBodyPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-table0 def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("tr", self.startTagTr), (("td", "th"), self.startTagTableCell), (("caption", "col", "colgroup", "tbody", "tfoot", "thead"), self.startTagTableOther) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ (("tbody", "tfoot", "thead"), self.endTagTableRowGroup), ("table", self.endTagTable), (("body", "caption", "col", "colgroup", "html", "td", "th", "tr"), self.endTagIgnore) ]) self.endTagHandler.default = self.endTagOther # helper methods def clearStackToTableBodyContext(self): while self.tree.openElements[-1].name not in ("tbody", "tfoot", "thead", "html"): # self.parser.parseError("unexpected-implied-end-tag-in-table", # {"name": self.tree.openElements[-1].name}) self.tree.openElements.pop() if self.tree.openElements[-1].name == "html": assert self.parser.innerHTML # the rest def processEOF(self): self.parser.phases["inTable"].processEOF() def processSpaceCharacters(self, token): return self.parser.phases["inTable"].processSpaceCharacters(token) def processCharacters(self, token): return self.parser.phases["inTable"].processCharacters(token) def startTagTr(self, token): self.clearStackToTableBodyContext() self.tree.insertElement(token) self.parser.phase = self.parser.phases["inRow"] def startTagTableCell(self, token): self.parser.parseError("unexpected-cell-in-table-body", {"name": token["name"]}) self.startTagTr(impliedTagToken("tr", "StartTag")) return token def startTagTableOther(self, token): # XXX AT Any ideas on how to share this with endTagTable? 
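# --- Added illustration (ours): startTagImplyTbody in the "in table" phase
# plus the row-group handling here give a bare <tr> an implicit tbody wrapper:
def _demo_implied_tbody():
    import html5lib
    doc = html5lib.parse("<table><tr><td>cell</td></tr></table>")
    # -> '...<table><tbody><tr><td>cell</td></tr></tbody></table>...'
    return html5lib.serialize(doc, omit_optional_tags=False)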
if (self.tree.elementInScope("tbody", variant="table") or self.tree.elementInScope("thead", variant="table") or self.tree.elementInScope("tfoot", variant="table")): self.clearStackToTableBodyContext() self.endTagTableRowGroup( impliedTagToken(self.tree.openElements[-1].name)) return token else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def startTagOther(self, token): return self.parser.phases["inTable"].processStartTag(token) def endTagTableRowGroup(self, token): if self.tree.elementInScope(token["name"], variant="table"): self.clearStackToTableBodyContext() self.tree.openElements.pop() self.parser.phase = self.parser.phases["inTable"] else: self.parser.parseError("unexpected-end-tag-in-table-body", {"name": token["name"]}) def endTagTable(self, token): if (self.tree.elementInScope("tbody", variant="table") or self.tree.elementInScope("thead", variant="table") or self.tree.elementInScope("tfoot", variant="table")): self.clearStackToTableBodyContext() self.endTagTableRowGroup( impliedTagToken(self.tree.openElements[-1].name)) return token else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def endTagIgnore(self, token): self.parser.parseError("unexpected-end-tag-in-table-body", {"name": token["name"]}) def endTagOther(self, token): return self.parser.phases["inTable"].processEndTag(token) class InRowPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-row def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), (("td", "th"), self.startTagTableCell), (("caption", "col", "colgroup", "tbody", "tfoot", "thead", "tr"), self.startTagTableOther) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("tr", self.endTagTr), ("table", self.endTagTable), (("tbody", "tfoot", "thead"), self.endTagTableRowGroup), (("body", "caption", "col", "colgroup", "html", "td", "th"), self.endTagIgnore) ]) self.endTagHandler.default = self.endTagOther # helper methods (XXX unify this with other table helper methods) def clearStackToTableRowContext(self): while self.tree.openElements[-1].name not in ("tr", "html"): self.parser.parseError("unexpected-implied-end-tag-in-table-row", {"name": self.tree.openElements[-1].name}) self.tree.openElements.pop() def ignoreEndTagTr(self): return not self.tree.elementInScope("tr", variant="table") # the rest def processEOF(self): self.parser.phases["inTable"].processEOF() def processSpaceCharacters(self, token): return self.parser.phases["inTable"].processSpaceCharacters(token) def processCharacters(self, token): return self.parser.phases["inTable"].processCharacters(token) def startTagTableCell(self, token): self.clearStackToTableRowContext() self.tree.insertElement(token) self.parser.phase = self.parser.phases["inCell"] self.tree.activeFormattingElements.append(Marker) def startTagTableOther(self, token): ignoreEndTag = self.ignoreEndTagTr() self.endTagTr(impliedTagToken("tr")) # XXX how are we sure it's always ignored in the innerHTML case? 
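# --- Added illustration (ours): a second <tr> while a row is open is routed
# through endTagTr here, so rows never nest:
def _demo_implied_row_close():
    import html5lib
    # Both rows become siblings in one tbody; the first cell and row are
    # closed implicitly when the second <tr> arrives.
    return html5lib.parse("<table><tr><td>a<tr><td>b</table>")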
if not ignoreEndTag: return token def startTagOther(self, token): return self.parser.phases["inTable"].processStartTag(token) def endTagTr(self, token): if not self.ignoreEndTagTr(): self.clearStackToTableRowContext() self.tree.openElements.pop() self.parser.phase = self.parser.phases["inTableBody"] else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def endTagTable(self, token): ignoreEndTag = self.ignoreEndTagTr() self.endTagTr(impliedTagToken("tr")) # Reprocess the current tag if the tr end tag was not ignored # XXX how are we sure it's always ignored in the innerHTML case? if not ignoreEndTag: return token def endTagTableRowGroup(self, token): if self.tree.elementInScope(token["name"], variant="table"): self.endTagTr(impliedTagToken("tr")) return token else: self.parser.parseError() def endTagIgnore(self, token): self.parser.parseError("unexpected-end-tag-in-table-row", {"name": token["name"]}) def endTagOther(self, token): return self.parser.phases["inTable"].processEndTag(token) class InCellPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-cell def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), (("caption", "col", "colgroup", "tbody", "td", "tfoot", "th", "thead", "tr"), self.startTagTableOther) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ (("td", "th"), self.endTagTableCell), (("body", "caption", "col", "colgroup", "html"), self.endTagIgnore), (("table", "tbody", "tfoot", "thead", "tr"), self.endTagImply) ]) self.endTagHandler.default = self.endTagOther # helper def closeCell(self): if self.tree.elementInScope("td", variant="table"): self.endTagTableCell(impliedTagToken("td")) elif self.tree.elementInScope("th", variant="table"): self.endTagTableCell(impliedTagToken("th")) # the rest def processEOF(self): self.parser.phases["inBody"].processEOF() def processCharacters(self, token): return self.parser.phases["inBody"].processCharacters(token) def startTagTableOther(self, token): if (self.tree.elementInScope("td", variant="table") or self.tree.elementInScope("th", variant="table")): self.closeCell() return token else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def startTagOther(self, token): return self.parser.phases["inBody"].processStartTag(token) def endTagTableCell(self, token): if self.tree.elementInScope(token["name"], variant="table"): self.tree.generateImpliedEndTags(token["name"]) if self.tree.openElements[-1].name != token["name"]: self.parser.parseError("unexpected-cell-end-tag", {"name": token["name"]}) while True: node = self.tree.openElements.pop() if node.name == token["name"]: break else: self.tree.openElements.pop() self.tree.clearActiveFormattingElements() self.parser.phase = self.parser.phases["inRow"] else: self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) def endTagIgnore(self, token): self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) def endTagImply(self, token): if self.tree.elementInScope(token["name"], variant="table"): self.closeCell() return token else: # sometimes innerHTML case self.parser.parseError() def endTagOther(self, token): return self.parser.phases["inBody"].processEndTag(token) class InSelectPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("option", self.startTagOption), 
("optgroup", self.startTagOptgroup), ("select", self.startTagSelect), (("input", "keygen", "textarea"), self.startTagInput), ("script", self.startTagScript) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("option", self.endTagOption), ("optgroup", self.endTagOptgroup), ("select", self.endTagSelect) ]) self.endTagHandler.default = self.endTagOther # http://www.whatwg.org/specs/web-apps/current-work/#in-select def processEOF(self): if self.tree.openElements[-1].name != "html": self.parser.parseError("eof-in-select") else: assert self.parser.innerHTML def processCharacters(self, token): if token["data"] == "\u0000": return self.tree.insertText(token["data"]) def startTagOption(self, token): # We need to imply </option> if <option> is the current node. if self.tree.openElements[-1].name == "option": self.tree.openElements.pop() self.tree.insertElement(token) def startTagOptgroup(self, token): if self.tree.openElements[-1].name == "option": self.tree.openElements.pop() if self.tree.openElements[-1].name == "optgroup": self.tree.openElements.pop() self.tree.insertElement(token) def startTagSelect(self, token): self.parser.parseError("unexpected-select-in-select") self.endTagSelect(impliedTagToken("select")) def startTagInput(self, token): self.parser.parseError("unexpected-input-in-select") if self.tree.elementInScope("select", variant="select"): self.endTagSelect(impliedTagToken("select")) return token else: assert self.parser.innerHTML def startTagScript(self, token): return self.parser.phases["inHead"].processStartTag(token) def startTagOther(self, token): self.parser.parseError("unexpected-start-tag-in-select", {"name": token["name"]}) def endTagOption(self, token): if self.tree.openElements[-1].name == "option": self.tree.openElements.pop() else: self.parser.parseError("unexpected-end-tag-in-select", {"name": "option"}) def endTagOptgroup(self, token): # </optgroup> implicitly closes <option> if (self.tree.openElements[-1].name == "option" and self.tree.openElements[-2].name == "optgroup"): self.tree.openElements.pop() # It also closes </optgroup> if self.tree.openElements[-1].name == "optgroup": self.tree.openElements.pop() # But nothing else else: self.parser.parseError("unexpected-end-tag-in-select", {"name": "optgroup"}) def endTagSelect(self, token): if self.tree.elementInScope("select", variant="select"): node = self.tree.openElements.pop() while node.name != "select": node = self.tree.openElements.pop() self.parser.resetInsertionMode() else: # innerHTML case assert self.parser.innerHTML self.parser.parseError() def endTagOther(self, token): self.parser.parseError("unexpected-end-tag-in-select", {"name": token["name"]}) class InSelectInTablePhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ (("caption", "table", "tbody", "tfoot", "thead", "tr", "td", "th"), self.startTagTable) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ (("caption", "table", "tbody", "tfoot", "thead", "tr", "td", "th"), self.endTagTable) ]) self.endTagHandler.default = self.endTagOther def processEOF(self): self.parser.phases["inSelect"].processEOF() def processCharacters(self, token): return self.parser.phases["inSelect"].processCharacters(token) def startTagTable(self, token): self.parser.parseError("unexpected-table-element-start-tag-in-select-in-table", {"name": token["name"]}) self.endTagOther(impliedTagToken("select")) return 
token def startTagOther(self, token): return self.parser.phases["inSelect"].processStartTag(token) def endTagTable(self, token): self.parser.parseError("unexpected-table-element-end-tag-in-select-in-table", {"name": token["name"]}) if self.tree.elementInScope(token["name"], variant="table"): self.endTagOther(impliedTagToken("select")) return token def endTagOther(self, token): return self.parser.phases["inSelect"].processEndTag(token) class InForeignContentPhase(Phase): breakoutElements = frozenset(["b", "big", "blockquote", "body", "br", "center", "code", "dd", "div", "dl", "dt", "em", "embed", "h1", "h2", "h3", "h4", "h5", "h6", "head", "hr", "i", "img", "li", "listing", "menu", "meta", "nobr", "ol", "p", "pre", "ruby", "s", "small", "span", "strong", "strike", "sub", "sup", "table", "tt", "u", "ul", "var"]) def __init__(self, parser, tree): Phase.__init__(self, parser, tree) def adjustSVGTagNames(self, token): replacements = {"altglyph": "altGlyph", "altglyphdef": "altGlyphDef", "altglyphitem": "altGlyphItem", "animatecolor": "animateColor", "animatemotion": "animateMotion", "animatetransform": "animateTransform", "clippath": "clipPath", "feblend": "feBlend", "fecolormatrix": "feColorMatrix", "fecomponenttransfer": "feComponentTransfer", "fecomposite": "feComposite", "feconvolvematrix": "feConvolveMatrix", "fediffuselighting": "feDiffuseLighting", "fedisplacementmap": "feDisplacementMap", "fedistantlight": "feDistantLight", "feflood": "feFlood", "fefunca": "feFuncA", "fefuncb": "feFuncB", "fefuncg": "feFuncG", "fefuncr": "feFuncR", "fegaussianblur": "feGaussianBlur", "feimage": "feImage", "femerge": "feMerge", "femergenode": "feMergeNode", "femorphology": "feMorphology", "feoffset": "feOffset", "fepointlight": "fePointLight", "fespecularlighting": "feSpecularLighting", "fespotlight": "feSpotLight", "fetile": "feTile", "feturbulence": "feTurbulence", "foreignobject": "foreignObject", "glyphref": "glyphRef", "lineargradient": "linearGradient", "radialgradient": "radialGradient", "textpath": "textPath"} if token["name"] in replacements: token["name"] = replacements[token["name"]] def processCharacters(self, token): if token["data"] == "\u0000": token["data"] = "\uFFFD" elif (self.parser.framesetOK and any(char not in spaceCharacters for char in token["data"])): self.parser.framesetOK = False Phase.processCharacters(self, token) def processStartTag(self, token): currentNode = self.tree.openElements[-1] if (token["name"] in self.breakoutElements or (token["name"] == "font" and set(token["data"].keys()) & set(["color", "face", "size"]))): self.parser.parseError("unexpected-html-element-in-foreign-content", {"name": token["name"]}) while (self.tree.openElements[-1].namespace != self.tree.defaultNamespace and not self.parser.isHTMLIntegrationPoint(self.tree.openElements[-1]) and not self.parser.isMathMLTextIntegrationPoint(self.tree.openElements[-1])): self.tree.openElements.pop() return token else: if currentNode.namespace == namespaces["mathml"]: self.parser.adjustMathMLAttributes(token) elif currentNode.namespace == namespaces["svg"]: self.adjustSVGTagNames(token) self.parser.adjustSVGAttributes(token) self.parser.adjustForeignAttributes(token) token["namespace"] = currentNode.namespace self.tree.insertElement(token) if token["selfClosing"]: self.tree.openElements.pop() token["selfClosingAcknowledged"] = True def processEndTag(self, token): nodeIndex = len(self.tree.openElements) - 1 node = self.tree.openElements[-1] if node.name.translate(asciiUpper2Lower) != token["name"]: 
self.parser.parseError("unexpected-end-tag", {"name": token["name"]}) while True: if node.name.translate(asciiUpper2Lower) == token["name"]: # XXX this isn't in the spec but it seems necessary if self.parser.phase == self.parser.phases["inTableText"]: self.parser.phase.flushCharacters() self.parser.phase = self.parser.phase.originalPhase while self.tree.openElements.pop() != node: assert self.tree.openElements new_token = None break nodeIndex -= 1 node = self.tree.openElements[nodeIndex] if node.namespace != self.tree.defaultNamespace: continue else: new_token = self.parser.phase.processEndTag(token) break return new_token class AfterBodyPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([("html", self.endTagHtml)]) self.endTagHandler.default = self.endTagOther def processEOF(self): # Stop parsing pass def processComment(self, token): # This is needed because data is to be appended to the <html> element # here and not to whatever is currently open. self.tree.insertComment(token, self.tree.openElements[0]) def processCharacters(self, token): self.parser.parseError("unexpected-char-after-body") self.parser.phase = self.parser.phases["inBody"] return token def startTagHtml(self, token): return self.parser.phases["inBody"].processStartTag(token) def startTagOther(self, token): self.parser.parseError("unexpected-start-tag-after-body", {"name": token["name"]}) self.parser.phase = self.parser.phases["inBody"] return token def endTagHtml(self, name): if self.parser.innerHTML: self.parser.parseError("unexpected-end-tag-after-body-innerhtml") else: self.parser.phase = self.parser.phases["afterAfterBody"] def endTagOther(self, token): self.parser.parseError("unexpected-end-tag-after-body", {"name": token["name"]}) self.parser.phase = self.parser.phases["inBody"] return token class InFramesetPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#in-frameset def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("frameset", self.startTagFrameset), ("frame", self.startTagFrame), ("noframes", self.startTagNoframes) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("frameset", self.endTagFrameset) ]) self.endTagHandler.default = self.endTagOther def processEOF(self): if self.tree.openElements[-1].name != "html": self.parser.parseError("eof-in-frameset") else: assert self.parser.innerHTML def processCharacters(self, token): self.parser.parseError("unexpected-char-in-frameset") def startTagFrameset(self, token): self.tree.insertElement(token) def startTagFrame(self, token): self.tree.insertElement(token) self.tree.openElements.pop() def startTagNoframes(self, token): return self.parser.phases["inBody"].processStartTag(token) def startTagOther(self, token): self.parser.parseError("unexpected-start-tag-in-frameset", {"name": token["name"]}) def endTagFrameset(self, token): if self.tree.openElements[-1].name == "html": # innerHTML case self.parser.parseError("unexpected-frameset-in-frameset-innerhtml") else: self.tree.openElements.pop() if (not self.parser.innerHTML and self.tree.openElements[-1].name != "frameset"): # If we're not in innerHTML mode and the current node is not a # "frameset" element (anymore) then switch. 
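# --- Added illustration (ours): AfterBodyPhase above hands stray characters
# back to the "in body" phase, so text after </body> still ends up in body:
def _demo_text_after_body():
    import html5lib
    # body's text content becomes "xy"; the trailing "y" is reparented.
    return html5lib.parse("<html><body>x</body>y</html>")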
self.parser.phase = self.parser.phases["afterFrameset"] def endTagOther(self, token): self.parser.parseError("unexpected-end-tag-in-frameset", {"name": token["name"]}) class AfterFramesetPhase(Phase): # http://www.whatwg.org/specs/web-apps/current-work/#after3 def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("noframes", self.startTagNoframes) ]) self.startTagHandler.default = self.startTagOther self.endTagHandler = _utils.MethodDispatcher([ ("html", self.endTagHtml) ]) self.endTagHandler.default = self.endTagOther def processEOF(self): # Stop parsing pass def processCharacters(self, token): self.parser.parseError("unexpected-char-after-frameset") def startTagNoframes(self, token): return self.parser.phases["inHead"].processStartTag(token) def startTagOther(self, token): self.parser.parseError("unexpected-start-tag-after-frameset", {"name": token["name"]}) def endTagHtml(self, token): self.parser.phase = self.parser.phases["afterAfterFrameset"] def endTagOther(self, token): self.parser.parseError("unexpected-end-tag-after-frameset", {"name": token["name"]}) class AfterAfterBodyPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml) ]) self.startTagHandler.default = self.startTagOther def processEOF(self): pass def processComment(self, token): self.tree.insertComment(token, self.tree.document) def processSpaceCharacters(self, token): return self.parser.phases["inBody"].processSpaceCharacters(token) def processCharacters(self, token): self.parser.parseError("expected-eof-but-got-char") self.parser.phase = self.parser.phases["inBody"] return token def startTagHtml(self, token): return self.parser.phases["inBody"].processStartTag(token) def startTagOther(self, token): self.parser.parseError("expected-eof-but-got-start-tag", {"name": token["name"]}) self.parser.phase = self.parser.phases["inBody"] return token def processEndTag(self, token): self.parser.parseError("expected-eof-but-got-end-tag", {"name": token["name"]}) self.parser.phase = self.parser.phases["inBody"] return token class AfterAfterFramesetPhase(Phase): def __init__(self, parser, tree): Phase.__init__(self, parser, tree) self.startTagHandler = _utils.MethodDispatcher([ ("html", self.startTagHtml), ("noframes", self.startTagNoFrames) ]) self.startTagHandler.default = self.startTagOther def processEOF(self): pass def processComment(self, token): self.tree.insertComment(token, self.tree.document) def processSpaceCharacters(self, token): return self.parser.phases["inBody"].processSpaceCharacters(token) def processCharacters(self, token): self.parser.parseError("expected-eof-but-got-char") def startTagHtml(self, token): return self.parser.phases["inBody"].processStartTag(token) def startTagNoFrames(self, token): return self.parser.phases["inHead"].processStartTag(token) def startTagOther(self, token): self.parser.parseError("expected-eof-but-got-start-tag", {"name": token["name"]}) def processEndTag(self, token): self.parser.parseError("expected-eof-but-got-end-tag", {"name": token["name"]}) # pylint:enable=unused-argument return { "initial": InitialPhase, "beforeHtml": BeforeHtmlPhase, "beforeHead": BeforeHeadPhase, "inHead": InHeadPhase, "inHeadNoscript": InHeadNoscriptPhase, "afterHead": AfterHeadPhase, "inBody": InBodyPhase, "text": TextPhase, "inTable": InTablePhase, "inTableText": InTableTextPhase, "inCaption": InCaptionPhase, 
"inColumnGroup": InColumnGroupPhase, "inTableBody": InTableBodyPhase, "inRow": InRowPhase, "inCell": InCellPhase, "inSelect": InSelectPhase, "inSelectInTable": InSelectInTablePhase, "inForeignContent": InForeignContentPhase, "afterBody": AfterBodyPhase, "inFrameset": InFramesetPhase, "afterFrameset": AfterFramesetPhase, "afterAfterBody": AfterAfterBodyPhase, "afterAfterFrameset": AfterAfterFramesetPhase, # XXX after after frameset } def adjust_attributes(token, replacements): needs_adjustment = viewkeys(token['data']) & viewkeys(replacements) if needs_adjustment: token['data'] = OrderedDict((replacements.get(k, k), v) for k, v in token['data'].items()) def impliedTagToken(name, type="EndTag", attributes=None, selfClosing=False): if attributes is None: attributes = {} return {"type": tokenTypes[type], "name": name, "data": attributes, "selfClosing": selfClosing} class ParseError(Exception): """Error in parsed document""" pass ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/serializer.py�������������������0000644�0001750�0001750�00000036616�00000000000�030744� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import, division, unicode_literals from pip._vendor.six import text_type import re from codecs import register_error, xmlcharrefreplace_errors from .constants import voidElements, booleanAttributes, spaceCharacters from .constants import rcdataElements, entities, xmlEntities from . 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/serializer.py

from __future__ import absolute_import, division, unicode_literals
from pip._vendor.six import text_type

import re

from codecs import register_error, xmlcharrefreplace_errors

from .constants import voidElements, booleanAttributes, spaceCharacters
from .constants import rcdataElements, entities, xmlEntities
from . import treewalkers, _utils
from xml.sax.saxutils import escape

_quoteAttributeSpecChars = "".join(spaceCharacters) + "\"'=<>`"
_quoteAttributeSpec = re.compile("[" + _quoteAttributeSpecChars + "]")
_quoteAttributeLegacy = re.compile("[" + _quoteAttributeSpecChars +
                                   "\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n"
                                   "\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15"
                                   "\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
                                   "\x20\x2f\x60\xa0\u1680\u180e\u180f\u2000"
                                   "\u2001\u2002\u2003\u2004\u2005\u2006\u2007"
                                   "\u2008\u2009\u200a\u2028\u2029\u202f\u205f"
                                   "\u3000]")

_encode_entity_map = {}
_is_ucs4 = len("\U0010FFFF") == 1
for k, v in list(entities.items()):
    # skip multi-character entities
    if ((_is_ucs4 and len(v) > 1) or
            (not _is_ucs4 and len(v) > 2)):
        continue
    if v != "&":
        if len(v) == 2:
            v = _utils.surrogatePairToCodepoint(v)
        else:
            v = ord(v)
        if v not in _encode_entity_map or k.islower():
            # prefer &lt; over &LT; and similarly for &amp;, &gt;, etc.
            _encode_entity_map[v] = k


def htmlentityreplace_errors(exc):
    if isinstance(exc, (UnicodeEncodeError, UnicodeTranslateError)):
        res = []
        codepoints = []
        skip = False
        for i, c in enumerate(exc.object[exc.start:exc.end]):
            if skip:
                skip = False
                continue
            index = i + exc.start
            if _utils.isSurrogatePair(exc.object[index:min([exc.end, index + 2])]):
                codepoint = _utils.surrogatePairToCodepoint(exc.object[index:index + 2])
                skip = True
            else:
                codepoint = ord(c)
            codepoints.append(codepoint)
        for cp in codepoints:
            e = _encode_entity_map.get(cp)
            if e:
                res.append("&")
                res.append(e)
                if not e.endswith(";"):
                    res.append(";")
            else:
                res.append("&#x%s;" % (hex(cp)[2:]))
        return ("".join(res), exc.end)
    else:
        return xmlcharrefreplace_errors(exc)


register_error("htmlentityreplace", htmlentityreplace_errors)
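# A hedged illustration of the "htmlentityreplace" error handler registered
# above (comment only, since this module body runs at import time):
# characters with no representation in the target encoding should come back
# as named or numeric character references.
#
#     >>> u"caf\xe9".encode("ascii", "htmlentityreplace")
#     b'caf&eacute;'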
def serialize(input, tree="etree", encoding=None, **serializer_opts):
    """Serializes the input token stream using the specified treewalker

    :arg input: the token stream to serialize

    :arg tree: the treewalker to use

    :arg encoding: the encoding to use

    :arg serializer_opts: any options to pass to the
        :py:class:`html5lib.serializer.HTMLSerializer` that gets created

    :returns: the tree serialized as a string

    Example:

    >>> from html5lib.html5parser import parse
    >>> from html5lib.serializer import serialize
    >>> token_stream = parse('<html><body><p>Hi!</p></body></html>')
    >>> serialize(token_stream, omit_optional_tags=False)
    '<html><head></head><body><p>Hi!</p></body></html>'

    """
    # XXX: Should we cache this?
    walker = treewalkers.getTreeWalker(tree)
    s = HTMLSerializer(**serializer_opts)
    return s.render(walker(input), encoding)


class HTMLSerializer(object):

    # attribute quoting options
    quote_attr_values = "legacy"  # be secure by default
    quote_char = '"'
    use_best_quote_char = True

    # tag syntax options
    omit_optional_tags = True
    minimize_boolean_attributes = True
    use_trailing_solidus = False
    space_before_trailing_solidus = True

    # escaping options
    escape_lt_in_attrs = False
    escape_rcdata = False
    resolve_entities = True

    # miscellaneous options
    alphabetical_attributes = False
    inject_meta_charset = True
    strip_whitespace = False
    sanitize = False

    options = ("quote_attr_values", "quote_char", "use_best_quote_char",
               "omit_optional_tags", "minimize_boolean_attributes",
               "use_trailing_solidus", "space_before_trailing_solidus",
               "escape_lt_in_attrs", "escape_rcdata", "resolve_entities",
               "alphabetical_attributes", "inject_meta_charset",
               "strip_whitespace", "sanitize")

    def __init__(self, **kwargs):
        """Initialize HTMLSerializer

        :arg inject_meta_charset: Whether or not to inject the meta charset.
            Defaults to ``True``.

        :arg quote_attr_values: Whether to quote attribute values that don't
            require quoting per legacy browser behavior (``"legacy"``), when
            required by the standard (``"spec"``), or always (``"always"``).
            Defaults to ``"legacy"``.

        :arg quote_char: Use given quote character for attribute quoting.
            Defaults to ``"`` which will use double quotes unless attribute
            value contains a double quote, in which case single quotes are
            used.

        :arg escape_lt_in_attrs: Whether or not to escape ``<`` in attribute
            values.
            Defaults to ``False``.

        :arg escape_rcdata: Whether to escape characters that need to be
            escaped within normal elements within rcdata elements such as
            style.
            Defaults to ``False``.

        :arg resolve_entities: Whether to resolve named character entities
            that appear in the source tree. The XML predefined entities
            &lt; &gt; &amp; &quot; &apos; are unaffected by this setting.
            Defaults to ``True``.

        :arg strip_whitespace: Whether to remove semantically meaningless
            whitespace. (This compresses all whitespace to a single space
            except within ``pre``.)
            Defaults to ``False``.

        :arg minimize_boolean_attributes: Shortens boolean attributes to give
            just the attribute value, for example::

              <input disabled="disabled">

            becomes::

              <input disabled>

            Defaults to ``True``.

        :arg use_trailing_solidus: Includes a close-tag slash at the end of
            the start tag of void elements (empty elements whose end tag is
            forbidden). E.g. ``<hr/>``.
            Defaults to ``False``.

        :arg space_before_trailing_solidus: Places a space immediately before
            the closing slash in a tag using a trailing solidus. E.g.
            ``<hr />``. Requires ``use_trailing_solidus=True``.
            Defaults to ``True``.

        :arg sanitize: Strip all unsafe or unknown constructs from output.
            See :py:class:`html5lib.filters.sanitizer.Filter`.
            Defaults to ``False``.

        :arg omit_optional_tags: Omit start/end tags that are optional.
            Defaults to ``True``.

        :arg alphabetical_attributes: Reorder attributes to be in alphabetical
            order.
            Defaults to ``False``.
""" unexpected_args = frozenset(kwargs) - frozenset(self.options) if len(unexpected_args) > 0: raise TypeError("__init__() got an unexpected keyword argument '%s'" % next(iter(unexpected_args))) if 'quote_char' in kwargs: self.use_best_quote_char = False for attr in self.options: setattr(self, attr, kwargs.get(attr, getattr(self, attr))) self.errors = [] self.strict = False def encode(self, string): assert(isinstance(string, text_type)) if self.encoding: return string.encode(self.encoding, "htmlentityreplace") else: return string def encodeStrict(self, string): assert(isinstance(string, text_type)) if self.encoding: return string.encode(self.encoding, "strict") else: return string def serialize(self, treewalker, encoding=None): # pylint:disable=too-many-nested-blocks self.encoding = encoding in_cdata = False self.errors = [] if encoding and self.inject_meta_charset: from .filters.inject_meta_charset import Filter treewalker = Filter(treewalker, encoding) # Alphabetical attributes is here under the assumption that none of # the later filters add or change order of attributes; it needs to be # before the sanitizer so escaped elements come out correctly if self.alphabetical_attributes: from .filters.alphabeticalattributes import Filter treewalker = Filter(treewalker) # WhitespaceFilter should be used before OptionalTagFilter # for maximum efficiently of this latter filter if self.strip_whitespace: from .filters.whitespace import Filter treewalker = Filter(treewalker) if self.sanitize: from .filters.sanitizer import Filter treewalker = Filter(treewalker) if self.omit_optional_tags: from .filters.optionaltags import Filter treewalker = Filter(treewalker) for token in treewalker: type = token["type"] if type == "Doctype": doctype = "<!DOCTYPE %s" % token["name"] if token["publicId"]: doctype += ' PUBLIC "%s"' % token["publicId"] elif token["systemId"]: doctype += " SYSTEM" if token["systemId"]: if token["systemId"].find('"') >= 0: if token["systemId"].find("'") >= 0: self.serializeError("System identifer contains both single and double quote characters") quote_char = "'" else: quote_char = '"' doctype += " %s%s%s" % (quote_char, token["systemId"], quote_char) doctype += ">" yield self.encodeStrict(doctype) elif type in ("Characters", "SpaceCharacters"): if type == "SpaceCharacters" or in_cdata: if in_cdata and token["data"].find("</") >= 0: self.serializeError("Unexpected </ in CDATA") yield self.encode(token["data"]) else: yield self.encode(escape(token["data"])) elif type in ("StartTag", "EmptyTag"): name = token["name"] yield self.encodeStrict("<%s" % name) if name in rcdataElements and not self.escape_rcdata: in_cdata = True elif in_cdata: self.serializeError("Unexpected child element of a CDATA element") for (_, attr_name), attr_value in token["data"].items(): # TODO: Add namespace support here k = attr_name v = attr_value yield self.encodeStrict(' ') yield self.encodeStrict(k) if not self.minimize_boolean_attributes or \ (k not in booleanAttributes.get(name, tuple()) and k not in booleanAttributes.get("", tuple())): yield self.encodeStrict("=") if self.quote_attr_values == "always" or len(v) == 0: quote_attr = True elif self.quote_attr_values == "spec": quote_attr = _quoteAttributeSpec.search(v) is not None elif self.quote_attr_values == "legacy": quote_attr = _quoteAttributeLegacy.search(v) is not None else: raise ValueError("quote_attr_values must be one of: " "'always', 'spec', or 'legacy'") v = v.replace("&", "&") if self.escape_lt_in_attrs: v = v.replace("<", "<") if quote_attr: 
                            quote_char = self.quote_char
                            if self.use_best_quote_char:
                                if "'" in v and '"' not in v:
                                    quote_char = '"'
                                elif '"' in v and "'" not in v:
                                    quote_char = "'"
                            if quote_char == "'":
                                v = v.replace("'", "&#39;")
                            else:
                                v = v.replace('"', "&quot;")
                            yield self.encodeStrict(quote_char)
                            yield self.encode(v)
                            yield self.encodeStrict(quote_char)
                        else:
                            yield self.encode(v)
                if name in voidElements and self.use_trailing_solidus:
                    if self.space_before_trailing_solidus:
                        yield self.encodeStrict(" /")
                    else:
                        yield self.encodeStrict("/")
                yield self.encode(">")

            elif type == "EndTag":
                name = token["name"]
                if name in rcdataElements:
                    in_cdata = False
                elif in_cdata:
                    self.serializeError("Unexpected child element of a CDATA element")
                yield self.encodeStrict("</%s>" % name)

            elif type == "Comment":
                data = token["data"]
                if data.find("--") >= 0:
                    self.serializeError("Comment contains --")
                yield self.encodeStrict("<!--%s-->" % token["data"])

            elif type == "Entity":
                name = token["name"]
                key = name + ";"
                if key not in entities:
                    self.serializeError("Entity %s not recognized" % name)
                if self.resolve_entities and key not in xmlEntities:
                    data = entities[key]
                else:
                    data = "&%s;" % name
                yield self.encodeStrict(data)

            else:
                self.serializeError(token["data"])

    def render(self, treewalker, encoding=None):
        """Serializes the stream from the treewalker into a string

        :arg treewalker: the treewalker to serialize

        :arg encoding: the string encoding to use

        :returns: the serialized tree

        Example:

        >>> from html5lib import parse, getTreeWalker
        >>> from html5lib.serializer import HTMLSerializer
        >>> token_stream = parse('<html><body>Hi!</body></html>')
        >>> walker = getTreeWalker('etree')
        >>> serializer = HTMLSerializer(omit_optional_tags=False)
        >>> serializer.render(walker(token_stream))
        '<html><head></head><body>Hi!</body></html>'

        """
        if encoding:
            return b"".join(list(self.serialize(treewalker, encoding)))
        else:
            return "".join(list(self.serialize(treewalker)))

    def serializeError(self, data="XXX ERROR MESSAGE NEEDED"):
        # XXX The idea is to make data mandatory.
        self.errors.append(data)
        if self.strict:
            raise SerializeError


class SerializeError(Exception):
    """Error in serialized tree"""
    pass
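# A hedged usage sketch for HTMLSerializer (not part of the vendored
# sources): serializing a parsed tree with non-default options. The input
# markup here is made up for illustration.

from pip._vendor import html5lib
from pip._vendor.html5lib.serializer import HTMLSerializer

dom = html5lib.parse('<p id=x>Hi!</p>')
walker = html5lib.getTreeWalker('etree')
serializer = HTMLSerializer(quote_attr_values='always',
                            omit_optional_tags=False)
html = serializer.render(walker(dom))
# expected: '<html><head></head><body><p id="x">Hi!</p></body></html>'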
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treeadapters/__init__.py

"""Tree adapters let you convert from one tree structure to another

Example:

.. code-block:: python

   from pip._vendor import html5lib
   from pip._vendor.html5lib.treeadapters import genshi

   doc = '<html><body>Hi!</body></html>'

   treebuilder = html5lib.getTreeBuilder('etree')
   parser = html5lib.HTMLParser(tree=treebuilder)
   tree = parser.parse(doc)
   TreeWalker = html5lib.getTreeWalker('etree')

   genshi_tree = genshi.to_genshi(TreeWalker(tree))

"""
from __future__ import absolute_import, division, unicode_literals

from . import sax

__all__ = ["sax"]

try:
    from . import genshi  # noqa
except ImportError:
    pass
else:
    __all__.append("genshi")

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treeadapters/genshi.py

from __future__ import absolute_import, division, unicode_literals

from genshi.core import QName, Attrs
from genshi.core import START, END, TEXT, COMMENT, DOCTYPE


def to_genshi(walker):
    """Convert a tree to a genshi tree

    :arg walker: the treewalker to use to walk the tree to convert it

    :returns: generator of genshi nodes

    """
    text = []
    for token in walker:
        type = token["type"]
        if type in ("Characters", "SpaceCharacters"):
            text.append(token["data"])
        elif text:
            yield TEXT, "".join(text), (None, -1, -1)
            text = []

        if type in ("StartTag", "EmptyTag"):
            if token["namespace"]:
                name = "{%s}%s" % (token["namespace"], token["name"])
            else:
                name = token["name"]
            attrs = Attrs([(QName("{%s}%s" % attr if attr[0] is not None else attr[1]), value)
                           for attr, value in token["data"].items()])
            yield (START, (QName(name), attrs), (None, -1, -1))
            if type == "EmptyTag":
                type = "EndTag"

        if type == "EndTag":
            if token["namespace"]:
                name = "{%s}%s" % (token["namespace"], token["name"])
            else:
                name = token["name"]

            yield END, QName(name), (None, -1, -1)

        elif type == "Comment":
            yield COMMENT, token["data"], (None, -1, -1)

        elif type == "Doctype":
            yield DOCTYPE, (token["name"], token["publicId"],
                            token["systemId"]), (None, -1, -1)

        else:
            pass  # FIXME: What to do?

    if text:
        yield TEXT, "".join(text), (None, -1, -1)
"Doctype": yield DOCTYPE, (token["name"], token["publicId"], token["systemId"]), (None, -1, -1) else: pass # FIXME: What to do? if text: yield TEXT, "".join(text), (None, -1, -1) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treeadapters/sax.py�������������0000644�0001750�0001750�00000003360�00000000000�032037� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import, division, unicode_literals from xml.sax.xmlreader import AttributesNSImpl from ..constants import adjustForeignAttributes, unadjustForeignAttributes prefix_mapping = {} for prefix, localName, namespace in adjustForeignAttributes.values(): if prefix is not None: prefix_mapping[prefix] = namespace def to_sax(walker, handler): """Call SAX-like content handler based on treewalker walker :arg walker: the treewalker to use to walk the tree to convert it :arg handler: SAX handler to use """ handler.startDocument() for prefix, namespace in prefix_mapping.items(): handler.startPrefixMapping(prefix, namespace) for token in walker: type = token["type"] if type == "Doctype": continue elif type in ("StartTag", "EmptyTag"): attrs = AttributesNSImpl(token["data"], unadjustForeignAttributes) handler.startElementNS((token["namespace"], token["name"]), token["name"], attrs) if type == "EmptyTag": handler.endElementNS((token["namespace"], token["name"]), token["name"]) elif type == "EndTag": handler.endElementNS((token["namespace"], token["name"]), token["name"]) elif type in ("Characters", "SpaceCharacters"): handler.characters(token["data"]) elif type == "Comment": pass else: assert False, "Unknown token type" for prefix, namespace in prefix_mapping.items(): handler.endPrefixMapping(prefix) handler.endDocument() 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/__init__.py

"""A collection of modules for building different kinds of trees from HTML
documents.

To create a treebuilder for a new type of tree, you need to implement several
things:
1. A set of classes for various types of elements: Document, Doctype, Comment,
   Element. These must implement the interface of ``base.treebuilders.Node``
   (although comment nodes have a different signature for their constructor,
   see ``treebuilders.etree.Comment``) Textual content may also be implemented
   as another node type, or not, as your tree implementation requires.

2. A treebuilder object (called ``TreeBuilder`` by convention) that inherits
   from ``treebuilders.base.TreeBuilder``. This has 4 required attributes:

   * ``documentClass`` - the class to use for the bottommost node of a document
   * ``elementClass`` - the class to use for HTML Elements
   * ``commentClass`` - the class to use for comments
   * ``doctypeClass`` - the class to use for doctypes

   It also has one required method:

   * ``getDocument`` - Returns the root node of the complete document tree

3. If you wish to run the unit tests, you must also create a ``testSerializer``
   method on your treebuilder which accepts a node and returns a string
   containing Node and its children serialized according to the format used in
   the unittests

"""

from __future__ import absolute_import, division, unicode_literals

from .._utils import default_etree

treeBuilderCache = {}


def getTreeBuilder(treeType, implementation=None, **kwargs):
    """Get a TreeBuilder class for various types of trees with built-in support

    :arg treeType: the name of the tree type required (case-insensitive).
        Supported values are:

        * "dom" - A generic builder for DOM implementations, defaulting to a
          xml.dom.minidom based implementation.
        * "etree" - A generic builder for tree implementations exposing an
          ElementTree-like interface, defaulting to xml.etree.cElementTree if
          available and xml.etree.ElementTree if not.
        * "lxml" - An etree-based builder for lxml.etree, handling limitations
          of lxml's implementation.

    :arg implementation: (Currently applies to the "etree" and "dom" tree
        types). A module implementing the tree type e.g. xml.etree.ElementTree
        or xml.etree.cElementTree.

    :arg kwargs: Any additional options to pass to the TreeBuilder when
        creating it.

    Example:

    >>> from html5lib.treebuilders import getTreeBuilder
    >>> builder = getTreeBuilder('etree')

    """

    treeType = treeType.lower()
    if treeType not in treeBuilderCache:
        if treeType == "dom":
            from . import dom
            # Come up with a sane default (pref. from the stdlib)
            if implementation is None:
                from xml.dom import minidom
                implementation = minidom
            # NEVER cache here, caching is done in the dom submodule
            return dom.getDomModule(implementation, **kwargs).TreeBuilder
        elif treeType == "lxml":
            from . import etree_lxml
            treeBuilderCache[treeType] = etree_lxml.TreeBuilder
        elif treeType == "etree":
            from . import etree
            if implementation is None:
                implementation = default_etree
            # NEVER cache here, caching is done in the etree submodule
            return etree.getETreeModule(implementation, **kwargs).TreeBuilder
        else:
            raise ValueError("""Unrecognised treebuilder "%s" """ % treeType)
    return treeBuilderCache.get(treeType)
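# A hedged usage sketch for getTreeBuilder (not part of the vendored
# sources): selecting an explicit ElementTree implementation instead of the
# default one.

import xml.etree.ElementTree as ET

from pip._vendor.html5lib.treebuilders import getTreeBuilder

TreeBuilder = getTreeBuilder('etree', implementation=ET)
builder = TreeBuilder(namespaceHTMLElements=True)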
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/base.py

from __future__ import absolute_import, division, unicode_literals
from pip._vendor.six import text_type

from ..constants import scopingElements, tableInsertModeElements, namespaces

# The scope markers are inserted when entering object elements,
# marquees, table cells, and table captions, and are used to prevent formatting
# from "leaking" into tables, object elements, and marquees.
Marker = None

listElementsMap = {
    None: (frozenset(scopingElements), False),
    "button": (frozenset(scopingElements | set([(namespaces["html"], "button")])), False),
    "list": (frozenset(scopingElements | set([(namespaces["html"], "ol"),
                                              (namespaces["html"], "ul")])), False),
    "table": (frozenset([(namespaces["html"], "html"),
                         (namespaces["html"], "table")]), False),
    "select": (frozenset([(namespaces["html"], "optgroup"),
                          (namespaces["html"], "option")]), True)
}


class Node(object):
    """Represents an item in the tree"""
    def __init__(self, name):
        """Creates a Node

        :arg name: The tag name associated with the node

        """
        # The tag name associated with the node
        self.name = name
        # The parent of the current node (or None for the document node)
        self.parent = None
        # The value of the current node (applies to text nodes and comments)
        self.value = None
        # A dict holding name -> value pairs for attributes of the node
        self.attributes = {}
        # A list of child nodes of the current node. This must include all
        # elements but not necessarily other node types.
        self.childNodes = []
        # A list of miscellaneous flags that can be set on the node.
        self._flags = []

    def __str__(self):
        attributesStr = " ".join(["%s=\"%s\"" % (name, value)
                                  for name, value in
                                  self.attributes.items()])
        if attributesStr:
            return "<%s %s>" % (self.name, attributesStr)
        else:
            return "<%s>" % (self.name)

    def __repr__(self):
        return "<%s>" % (self.name)

    def appendChild(self, node):
        """Insert node as a child of the current node

        :arg node: the node to insert

        """
        raise NotImplementedError

    def insertText(self, data, insertBefore=None):
        """Insert data as text in the current node, positioned before the
        start of node insertBefore or to the end of the node's text.

        :arg data: the data to insert

        :arg insertBefore: True if you want to insert the text before the node
            and False if you want to insert it after the node

        """
        raise NotImplementedError

    def insertBefore(self, node, refNode):
        """Insert node as a child of the current node, before refNode in the
        list of child nodes. Raises ValueError if refNode is not a child of
        the current node

        :arg node: the node to insert

        :arg refNode: the child node to insert the node before

        """
        raise NotImplementedError

    def removeChild(self, node):
        """Remove node from the children of the current node

        :arg node: the child node to remove

        """
        raise NotImplementedError

    def reparentChildren(self, newParent):
        """Move all the children of the current node to newParent.
        This is needed so that trees that don't store text as nodes move the
        text in the correct way

        :arg newParent: the node to move all this node's children to

        """
        # XXX - should this method be made more general?
        for child in self.childNodes:
            newParent.appendChild(child)
        self.childNodes = []

    def cloneNode(self):
        """Return a shallow copy of the current node i.e.
        a node with the same name and attributes but with no parent or child
        nodes
        """
        raise NotImplementedError

    def hasContent(self):
        """Return true if the node has children or text, false otherwise
        """
        raise NotImplementedError


class ActiveFormattingElements(list):
    def append(self, node):
        equalCount = 0
        if node != Marker:
            for element in self[::-1]:
                if element == Marker:
                    break
                if self.nodesEqual(element, node):
                    equalCount += 1
                if equalCount == 3:
                    self.remove(element)
                    break
        list.append(self, node)

    def nodesEqual(self, node1, node2):
        if not node1.nameTuple == node2.nameTuple:
            return False

        if not node1.attributes == node2.attributes:
            return False

        return True


class TreeBuilder(object):
    """Base treebuilder implementation

    * documentClass - the class to use for the bottommost node of a document
    * elementClass - the class to use for HTML Elements
    * commentClass - the class to use for comments
    * doctypeClass - the class to use for doctypes

    """
    # pylint:disable=not-callable

    # Document class
    documentClass = None

    # The class to use for creating a node
    elementClass = None

    # The class to use for creating comments
    commentClass = None

    # The class to use for creating doctypes
    doctypeClass = None

    # Fragment class
    fragmentClass = None

    def __init__(self, namespaceHTMLElements):
        """Create a TreeBuilder

        :arg namespaceHTMLElements: whether or not to namespace HTML elements

        """
        if namespaceHTMLElements:
            self.defaultNamespace = "http://www.w3.org/1999/xhtml"
        else:
            self.defaultNamespace = None
        self.reset()

    def reset(self):
        self.openElements = []
        self.activeFormattingElements = ActiveFormattingElements()

        # XXX - rename these to headElement, formElement
        self.headPointer = None
        self.formPointer = None

        self.insertFromTable = False

        self.document = self.documentClass()

    def elementInScope(self, target, variant=None):
        # If we pass a node in we match that. if we pass a string
        # match any node with that name
        exactNode = hasattr(target, "nameTuple")
        if not exactNode:
            if isinstance(target, text_type):
                target = (namespaces["html"], target)
            assert isinstance(target, tuple)

        listElements, invert = listElementsMap[variant]

        for node in reversed(self.openElements):
            if exactNode and node == target:
                return True
            elif not exactNode and node.nameTuple == target:
                return True
            elif (invert ^ (node.nameTuple in listElements)):
                return False

        assert False  # We should never reach this point

    def reconstructActiveFormattingElements(self):
        # Within this algorithm the order of steps described in the
        # specification is not quite the same as the order of steps in the
        # code. It should still do the same though.

        # Step 1: stop the algorithm when there's nothing to do.
        if not self.activeFormattingElements:
            return

        # Step 2 and step 3: we start with the last element. So i is -1.
        i = len(self.activeFormattingElements) - 1
        entry = self.activeFormattingElements[i]
        if entry == Marker or entry in self.openElements:
            return

        # Step 6
        while entry != Marker and entry not in self.openElements:
            if i == 0:
                # This will be reset to 0 below
                i = -1
                break
            i -= 1
            # Step 5: let entry be one earlier in the list.
            entry = self.activeFormattingElements[i]

        while True:
            # Step 7
            i += 1

            # Step 8
            entry = self.activeFormattingElements[i]
            clone = entry.cloneNode()  # Mainly to get a new copy of the attributes

            # Step 9
            element = self.insertElement({"type": "StartTag",
                                          "name": clone.name,
                                          "namespace": clone.namespace,
                                          "data": clone.attributes})

            # Step 10
            self.activeFormattingElements[i] = element

            # Step 11
            if element == self.activeFormattingElements[-1]:
                break

    def clearActiveFormattingElements(self):
        entry = self.activeFormattingElements.pop()
        while self.activeFormattingElements and entry != Marker:
            entry = self.activeFormattingElements.pop()

    def elementInActiveFormattingElements(self, name):
        """Check if an element exists between the end of the active
        formatting elements and the last marker. If it does, return it, else
        return false"""

        for item in self.activeFormattingElements[::-1]:
            # Check for Marker first because if it's a Marker it doesn't have a
            # name attribute.
            if item == Marker:
                break
            elif item.name == name:
                return item
        return False

    def insertRoot(self, token):
        element = self.createElement(token)
        self.openElements.append(element)
        self.document.appendChild(element)

    def insertDoctype(self, token):
        name = token["name"]
        publicId = token["publicId"]
        systemId = token["systemId"]

        doctype = self.doctypeClass(name, publicId, systemId)
        self.document.appendChild(doctype)

    def insertComment(self, token, parent=None):
        if parent is None:
            parent = self.openElements[-1]
        parent.appendChild(self.commentClass(token["data"]))

    def createElement(self, token):
        """Create an element but don't insert it anywhere"""
        name = token["name"]
        namespace = token.get("namespace", self.defaultNamespace)
        element = self.elementClass(name, namespace)
        element.attributes = token["data"]
        return element

    def _getInsertFromTable(self):
        return self._insertFromTable

    def _setInsertFromTable(self, value):
        """Switch the function used to insert an element from the
        normal one to the misnested table one and back again"""
        self._insertFromTable = value
        if value:
            self.insertElement = self.insertElementTable
        else:
            self.insertElement = self.insertElementNormal

    insertFromTable = property(_getInsertFromTable, _setInsertFromTable)

    def insertElementNormal(self, token):
        name = token["name"]
        assert isinstance(name, text_type), "Element %s not unicode" % name
        namespace = token.get("namespace", self.defaultNamespace)
        element = self.elementClass(name, namespace)
        element.attributes = token["data"]
        self.openElements[-1].appendChild(element)
        self.openElements.append(element)
        return element

    def insertElementTable(self, token):
        """Create an element and insert it into the tree"""
        element = self.createElement(token)
        if self.openElements[-1].name not in tableInsertModeElements:
            return self.insertElementNormal(token)
        else:
            # We should be in the InTable mode. This means we want to do
            # special magic element rearranging
            parent, insertBefore = self.getTableMisnestedNodePosition()
            if insertBefore is None:
                parent.appendChild(element)
            else:
                parent.insertBefore(element, insertBefore)
            self.openElements.append(element)
        return element

    def insertText(self, data, parent=None):
        """Insert text data."""
        if parent is None:
            parent = self.openElements[-1]

        if (not self.insertFromTable or (self.insertFromTable and
                                         self.openElements[-1].name
                                         not in tableInsertModeElements)):
            parent.insertText(data)
        else:
            # We should be in the InTable mode. This means we want to do
            # special magic element rearranging
            parent, insertBefore = self.getTableMisnestedNodePosition()
            parent.insertText(data, insertBefore)

    def getTableMisnestedNodePosition(self):
        """Get the foster parent element, and sibling to insert before
        (or None) when inserting a misnested table node"""
        # The foster parent element is the one which comes before the most
        # recently opened table element
        # XXX - this is really inelegant
        lastTable = None
        fosterParent = None
        insertBefore = None
        for elm in self.openElements[::-1]:
            if elm.name == "table":
                lastTable = elm
                break
        if lastTable:
            # XXX - we should really check that this parent is actually a
            # node here
            if lastTable.parent:
                fosterParent = lastTable.parent
                insertBefore = lastTable
            else:
                fosterParent = self.openElements[
                    self.openElements.index(lastTable) - 1]
        else:
            fosterParent = self.openElements[0]
        return fosterParent, insertBefore

    def generateImpliedEndTags(self, exclude=None):
        name = self.openElements[-1].name
        # XXX td, th and tr are not actually needed
        if (name in frozenset(("dd", "dt", "li", "option", "optgroup", "p", "rp", "rt")) and
                name != exclude):
            self.openElements.pop()
            # XXX This is not entirely what the specification says. We should
            # investigate it more closely.
            self.generateImpliedEndTags(exclude)

    def getDocument(self):
        """Return the final tree"""
        return self.document

    def getFragment(self):
        """Return the final fragment"""
        # assert self.innerHTML
        fragment = self.fragmentClass()
        self.openElements[0].reparentChildren(fragment)
        return fragment

    def testSerializer(self, node):
        """Serialize the subtree of node in the format required by unit tests

        :arg node: the node from which to start serializing

        """
        raise NotImplementedError
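# A hedged illustration of the "Noah's Ark" clause implemented by
# ActiveFormattingElements.append above (not part of the vendored sources):
# a fourth entry with the same name tuple and attributes evicts the earliest
# matching one, so at most three equal entries survive. FakeNode is a
# hypothetical stand-in for a real tree node.

from pip._vendor.html5lib.treebuilders.base import ActiveFormattingElements


class FakeNode(object):
    nameTuple = ("http://www.w3.org/1999/xhtml", "b")
    attributes = {}


afe = ActiveFormattingElements()
for _ in range(4):
    afe.append(FakeNode())
assert len(afe) == 3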
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/dom.py

from __future__ import absolute_import, division, unicode_literals


try:
    from collections.abc import MutableMapping
except ImportError:  # Python 2.7
    from collections import MutableMapping
from xml.dom import minidom, Node
import weakref

from . import base
from .. import constants
from ..constants import namespaces
from .._utils import moduleFactoryFactory


def getDomBuilder(DomImplementation):
    Dom = DomImplementation

    class AttrList(MutableMapping):
        def __init__(self, element):
            self.element = element

        def __iter__(self):
            return iter(self.element.attributes.keys())

        def __setitem__(self, name, value):
            if isinstance(name, tuple):
                raise NotImplementedError
            else:
                attr = self.element.ownerDocument.createAttribute(name)
                attr.value = value
                self.element.attributes[name] = attr

        def __len__(self):
            return len(self.element.attributes)

        def items(self):
            return list(self.element.attributes.items())

        def values(self):
            return list(self.element.attributes.values())

        def __getitem__(self, name):
            if isinstance(name, tuple):
                raise NotImplementedError
            else:
                return self.element.attributes[name].value

        def __delitem__(self, name):
            if isinstance(name, tuple):
                raise NotImplementedError
            else:
                del self.element.attributes[name]

    class NodeBuilder(base.Node):
        def __init__(self, element):
            base.Node.__init__(self, element.nodeName)
            self.element = element

        namespace = property(lambda self: hasattr(self.element, "namespaceURI") and
                             self.element.namespaceURI or None)

        def appendChild(self, node):
            node.parent = self
            self.element.appendChild(node.element)

        def insertText(self, data, insertBefore=None):
            text = self.element.ownerDocument.createTextNode(data)
            if insertBefore:
                self.element.insertBefore(text, insertBefore.element)
            else:
                self.element.appendChild(text)

        def insertBefore(self, node, refNode):
            self.element.insertBefore(node.element, refNode.element)
            node.parent = self

        def removeChild(self, node):
            if node.element.parentNode == self.element:
                self.element.removeChild(node.element)
            node.parent = None

        def reparentChildren(self, newParent):
            while self.element.hasChildNodes():
                child = self.element.firstChild
                self.element.removeChild(child)
                newParent.element.appendChild(child)
            self.childNodes = []

        def getAttributes(self):
            return AttrList(self.element)

        def setAttributes(self, attributes):
            if attributes:
                for name, value in list(attributes.items()):
                    if isinstance(name, tuple):
                        if name[0] is not None:
                            qualifiedName = (name[0] + ":" + name[1])
                        else:
                            qualifiedName = name[1]
                        self.element.setAttributeNS(name[2], qualifiedName,
                                                    value)
                    else:
                        self.element.setAttribute(
                            name, value)
        attributes = property(getAttributes, setAttributes)

        def cloneNode(self):
            return NodeBuilder(self.element.cloneNode(False))

        def hasContent(self):
            return self.element.hasChildNodes()

        def getNameTuple(self):
            if self.namespace is None:
                return namespaces["html"], self.name
            else:
                return self.namespace, self.name

        nameTuple = property(getNameTuple)

    class TreeBuilder(base.TreeBuilder):  # pylint:disable=unused-variable
        def documentClass(self):
            self.dom = Dom.getDOMImplementation().createDocument(None, None, None)
            return weakref.proxy(self)

        def insertDoctype(self, token):
            name = token["name"]
            publicId = token["publicId"]
            systemId = token["systemId"]

            domimpl = Dom.getDOMImplementation()
            doctype = domimpl.createDocumentType(name, publicId, systemId)
            self.document.appendChild(NodeBuilder(doctype))
            if Dom == minidom:
                doctype.ownerDocument = self.dom

        def elementClass(self, name, namespace=None):
            if namespace is None and self.defaultNamespace is None:
                node = self.dom.createElement(name)
            else:
                node = self.dom.createElementNS(namespace, name)

            return NodeBuilder(node)

        def commentClass(self, data):
            return NodeBuilder(self.dom.createComment(data))

        def fragmentClass(self):
            return NodeBuilder(self.dom.createDocumentFragment())

        def appendChild(self, node):
            self.dom.appendChild(node.element)

        def testSerializer(self, element):
            return testSerializer(element)

        def getDocument(self):
            return self.dom

        def getFragment(self):
            return base.TreeBuilder.getFragment(self).element

        def insertText(self, data, parent=None):
            data = data
            if parent != self:
                base.TreeBuilder.insertText(self, data, parent)
            else:
                # HACK: allow text nodes as children of the document node
                if hasattr(self.dom, '_child_node_types'):
                    # pylint:disable=protected-access
                    if Node.TEXT_NODE not in self.dom._child_node_types:
                        self.dom._child_node_types = list(self.dom._child_node_types)
                        self.dom._child_node_types.append(Node.TEXT_NODE)
                self.dom.appendChild(self.dom.createTextNode(data))

        implementation = DomImplementation
        name = None

    def testSerializer(element):
        element.normalize()
        rv = []

        def serializeElement(element, indent=0):
            if element.nodeType == Node.DOCUMENT_TYPE_NODE:
                if element.name:
                    if element.publicId or element.systemId:
                        publicId = element.publicId or ""
                        systemId = element.systemId or ""
                        rv.append("""|%s<!DOCTYPE %s "%s" "%s">""" %
                                  (' ' * indent, element.name, publicId, systemId))
                    else:
                        rv.append("|%s<!DOCTYPE %s>" % (' ' * indent, element.name))
                else:
                    rv.append("|%s<!DOCTYPE >" % (' ' * indent,))
            elif element.nodeType == Node.DOCUMENT_NODE:
                rv.append("#document")
            elif element.nodeType == Node.DOCUMENT_FRAGMENT_NODE:
                rv.append("#document-fragment")
            elif element.nodeType == Node.COMMENT_NODE:
                rv.append("|%s<!-- %s -->" % (' ' * indent, element.nodeValue))
            elif element.nodeType == Node.TEXT_NODE:
                rv.append("|%s\"%s\"" % (' ' * indent, element.nodeValue))
            else:
                if (hasattr(element, "namespaceURI") and
                        element.namespaceURI is not None):
                    name = "%s %s" % (constants.prefixes[element.namespaceURI],
                                      element.nodeName)
                else:
                    name = element.nodeName
                rv.append("|%s<%s>" % (' ' * indent, name))
                if element.hasAttributes():
                    attributes = []
                    for i in range(len(element.attributes)):
                        attr = element.attributes.item(i)
                        name = attr.nodeName
                        value = attr.value
                        ns = attr.namespaceURI
                        if ns:
                            name = "%s %s" % (constants.prefixes[ns], attr.localName)
                        else:
                            name = attr.nodeName
                        attributes.append((name, value))
                    for name, value in sorted(attributes):
                        rv.append('|%s%s="%s"' % (' ' * (indent + 2), name, value))
            indent += 2
            for child in element.childNodes:
                serializeElement(child, indent)
        serializeElement(element, 0)

        return "\n".join(rv)

    return locals()


# The actual means to get a module!
getDomModule = moduleFactoryFactory(getDomBuilder)
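# A hedged usage sketch for getDomModule (not part of the vendored sources):
# build the DOM-backed TreeBuilder for a specific DOM implementation, mirroring
# what treebuilders.getTreeBuilder("dom") does internally.

from xml.dom import minidom

from pip._vendor.html5lib.treebuilders.dom import getDomModule

TreeBuilder = getDomModule(minidom).TreeBuilder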
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/etree.py

from __future__ import absolute_import, division, unicode_literals
# pylint:disable=protected-access

from pip._vendor.six import text_type

import re

from . import base
from .. import _ihatexml
from .. import constants
from ..constants import namespaces
from .._utils import moduleFactoryFactory

tag_regexp = re.compile("{([^}]*)}(.*)")


def getETreeBuilder(ElementTreeImplementation, fullTree=False):
    ElementTree = ElementTreeImplementation
    ElementTreeCommentType = ElementTree.Comment("asd").tag

    class Element(base.Node):
        def __init__(self, name, namespace=None):
            self._name = name
            self._namespace = namespace
            self._element = ElementTree.Element(self._getETreeTag(name,
                                                                  namespace))
            if namespace is None:
                self.nameTuple = namespaces["html"], self._name
            else:
                self.nameTuple = self._namespace, self._name
            self.parent = None
            self._childNodes = []
            self._flags = []

        def _getETreeTag(self, name, namespace):
            if namespace is None:
                etree_tag = name
            else:
                etree_tag = "{%s}%s" % (namespace, name)
            return etree_tag

        def _setName(self, name):
            self._name = name
            self._element.tag = self._getETreeTag(self._name, self._namespace)

        def _getName(self):
            return self._name

        name = property(_getName, _setName)

        def _setNamespace(self, namespace):
            self._namespace = namespace
            self._element.tag = self._getETreeTag(self._name, self._namespace)

        def _getNamespace(self):
            return self._namespace

        namespace = property(_getNamespace, _setNamespace)

        def _getAttributes(self):
            return self._element.attrib

        def _setAttributes(self, attributes):
            # Delete existing attributes first
            # XXX - there may be a better way to do this...
            for key in list(self._element.attrib.keys()):
                del self._element.attrib[key]
            for key, value in attributes.items():
                if isinstance(key, tuple):
                    name = "{%s}%s" % (key[2], key[1])
                else:
                    name = key
                self._element.set(name, value)

        attributes = property(_getAttributes, _setAttributes)

        def _getChildNodes(self):
            return self._childNodes

        def _setChildNodes(self, value):
            del self._element[:]
            self._childNodes = []
            for element in value:
                self.insertChild(element)

        childNodes = property(_getChildNodes, _setChildNodes)

        def hasContent(self):
            """Return true if the node has children or text"""
            return bool(self._element.text or len(self._element))

        def appendChild(self, node):
            self._childNodes.append(node)
            self._element.append(node._element)
            node.parent = self

        def insertBefore(self, node, refNode):
            index = list(self._element).index(refNode._element)
            self._element.insert(index, node._element)
            node.parent = self

        def removeChild(self, node):
            self._childNodes.remove(node)
            self._element.remove(node._element)
            node.parent = None

        def insertText(self, data, insertBefore=None):
            if not(len(self._element)):
                if not self._element.text:
                    self._element.text = ""
                self._element.text += data
            elif insertBefore is None:
                # Insert the text as the tail of the last child element
                if not self._element[-1].tail:
                    self._element[-1].tail = ""
                self._element[-1].tail += data
            else:
                # Insert the text before the specified node
                children = list(self._element)
                index = children.index(insertBefore._element)
                if index > 0:
                    if not self._element[index - 1].tail:
                        self._element[index - 1].tail = ""
                    self._element[index - 1].tail += data
                else:
                    if not self._element.text:
                        self._element.text = ""
                    self._element.text += data

        def cloneNode(self):
            element = type(self)(self.name, self.namespace)
            for name, value in self.attributes.items():
                element.attributes[name] = value
            return element

        def reparentChildren(self, newParent):
            if newParent.childNodes:
                newParent.childNodes[-1]._element.tail += self._element.text
            else:
                if not newParent._element.text:
                    newParent._element.text = ""
                if self._element.text is not None:
                    newParent._element.text += self._element.text
            self._element.text = ""
            base.Node.reparentChildren(self, newParent)

    class Comment(Element):
        def __init__(self, data):
            # Use the superclass constructor to set all properties on the
            # wrapper element
            self._element = ElementTree.Comment(data)
            self.parent = None
            self._childNodes = []
            self._flags = []

        def _getData(self):
            return self._element.text

        def _setData(self, value):
            self._element.text = value

        data = property(_getData, _setData)

    class DocumentType(Element):
        def __init__(self, name, publicId, systemId):
            Element.__init__(self, "<!DOCTYPE>")
            self._element.text = name
            self.publicId = publicId
            self.systemId = systemId

        def _getPublicId(self):
            return self._element.get("publicId", "")

        def _setPublicId(self, value):
            if value is not None:
                self._element.set("publicId", value)

        publicId = property(_getPublicId, _setPublicId)

        def _getSystemId(self):
            return self._element.get("systemId", "")

        def _setSystemId(self, value):
            if value is not None:
                self._element.set("systemId", value)

        systemId = property(_getSystemId, _setSystemId)

    class Document(Element):
        def __init__(self):
            Element.__init__(self, "DOCUMENT_ROOT")

    class DocumentFragment(Element):
        def __init__(self):
            Element.__init__(self, "DOCUMENT_FRAGMENT")

    def testSerializer(element):
        rv = []

        def serializeElement(element, indent=0):
            if not(hasattr(element, "tag")):
                element = element.getroot()
            if element.tag == "<!DOCTYPE>":
                if element.get("publicId") or element.get("systemId"):
= element.get("publicId") or "" systemId = element.get("systemId") or "" rv.append("""<!DOCTYPE %s "%s" "%s">""" % (element.text, publicId, systemId)) else: rv.append("<!DOCTYPE %s>" % (element.text,)) elif element.tag == "DOCUMENT_ROOT": rv.append("#document") if element.text is not None: rv.append("|%s\"%s\"" % (' ' * (indent + 2), element.text)) if element.tail is not None: raise TypeError("Document node cannot have tail") if hasattr(element, "attrib") and len(element.attrib): raise TypeError("Document node cannot have attributes") elif element.tag == ElementTreeCommentType: rv.append("|%s<!-- %s -->" % (' ' * indent, element.text)) else: assert isinstance(element.tag, text_type), \ "Expected unicode, got %s, %s" % (type(element.tag), element.tag) nsmatch = tag_regexp.match(element.tag) if nsmatch is None: name = element.tag else: ns, name = nsmatch.groups() prefix = constants.prefixes[ns] name = "%s %s" % (prefix, name) rv.append("|%s<%s>" % (' ' * indent, name)) if hasattr(element, "attrib"): attributes = [] for name, value in element.attrib.items(): nsmatch = tag_regexp.match(name) if nsmatch is not None: ns, name = nsmatch.groups() prefix = constants.prefixes[ns] attr_string = "%s %s" % (prefix, name) else: attr_string = name attributes.append((attr_string, value)) for name, value in sorted(attributes): rv.append('|%s%s="%s"' % (' ' * (indent + 2), name, value)) if element.text: rv.append("|%s\"%s\"" % (' ' * (indent + 2), element.text)) indent += 2 for child in element: serializeElement(child, indent) if element.tail: rv.append("|%s\"%s\"" % (' ' * (indent - 2), element.tail)) serializeElement(element, 0) return "\n".join(rv) def tostring(element): # pylint:disable=unused-variable """Serialize an element and its child nodes to a string""" rv = [] filter = _ihatexml.InfosetFilter() def serializeElement(element): if isinstance(element, ElementTree.ElementTree): element = element.getroot() if element.tag == "<!DOCTYPE>": if element.get("publicId") or element.get("systemId"): publicId = element.get("publicId") or "" systemId = element.get("systemId") or "" rv.append("""<!DOCTYPE %s PUBLIC "%s" "%s">""" % (element.text, publicId, systemId)) else: rv.append("<!DOCTYPE %s>" % (element.text,)) elif element.tag == "DOCUMENT_ROOT": if element.text is not None: rv.append(element.text) if element.tail is not None: raise TypeError("Document node cannot have tail") if hasattr(element, "attrib") and len(element.attrib): raise TypeError("Document node cannot have attributes") for child in element: serializeElement(child) elif element.tag == ElementTreeCommentType: rv.append("<!--%s-->" % (element.text,)) else: # This is assumed to be an ordinary element if not element.attrib: rv.append("<%s>" % (filter.fromXmlName(element.tag),)) else: attr = " ".join(["%s=\"%s\"" % ( filter.fromXmlName(name), value) for name, value in element.attrib.items()]) rv.append("<%s %s>" % (element.tag, attr)) if element.text: rv.append(element.text) for child in element: serializeElement(child) rv.append("</%s>" % (element.tag,)) if element.tail: rv.append(element.tail) serializeElement(element) return "".join(rv) class TreeBuilder(base.TreeBuilder): # pylint:disable=unused-variable documentClass = Document doctypeClass = DocumentType elementClass = Element commentClass = Comment fragmentClass = DocumentFragment implementation = ElementTreeImplementation def testSerializer(self, element): return testSerializer(element) def getDocument(self): if fullTree: return self.document._element else: if self.defaultNamespace is not 
None: return self.document._element.find( "{%s}html" % self.defaultNamespace) else: return self.document._element.find("html") def getFragment(self): return base.TreeBuilder.getFragment(self)._element return locals() getETreeModule = moduleFactoryFactory(getETreeBuilder) ������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/etree_lxml.py������0000644�0001750�0001750�00000033452�00000000000�033417� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������"""Module for supporting the lxml.etree library. The idea here is to use as much of the native library as possible, without using fragile hacks like custom element names that break between releases. The downside of this is that we cannot represent all possible trees; specifically the following are known to cause problems: Text or comments as siblings of the root element Docypes with no name When any of these things occur, we emit a DataLossWarning """ from __future__ import absolute_import, division, unicode_literals # pylint:disable=protected-access import warnings import re import sys from . import base from ..constants import DataLossWarning from .. import constants from . import etree as etree_builders from .. 
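
The following usage sketch is editorial and not part of the vendored archive: it shows how the factory above is typically driven through html5lib's public getTreeBuilder/HTMLParser API, assuming the standard-library ElementTree as the implementation (the sample markup is made up).

# Usage sketch: build an ElementTree-backed tree through the factory above.
import xml.etree.ElementTree as ET

from pip._vendor import html5lib
from pip._vendor.html5lib.treebuilders import getTreeBuilder

TreeBuilder = getTreeBuilder("etree", implementation=ET)
parser = html5lib.HTMLParser(tree=TreeBuilder)
document = parser.parse("<p class='greeting'>Hello <b>world</b></p>")
print(ET.tostring(document))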
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treebuilders/etree_lxml.py

"""Module for supporting the lxml.etree library. The idea here is to use as much
of the native library as possible, without using fragile hacks like custom element
names that break between releases.

The downside of this is that we cannot represent all possible trees; specifically
the following are known to cause problems:

Text or comments as siblings of the root element
Doctypes with no name

When any of these things occur, we emit a DataLossWarning
"""

from __future__ import absolute_import, division, unicode_literals
# pylint:disable=protected-access

import warnings
import re
import sys

from . import base
from ..constants import DataLossWarning
from .. import constants
from . import etree as etree_builders
from .. import _ihatexml

import lxml.etree as etree


fullTree = True
tag_regexp = re.compile("{([^}]*)}(.*)")

comment_type = etree.Comment("asd").tag


class DocumentType(object):
    def __init__(self, name, publicId, systemId):
        self.name = name
        self.publicId = publicId
        self.systemId = systemId


class Document(object):
    def __init__(self):
        self._elementTree = None
        self._childNodes = []

    def appendChild(self, element):
        self._elementTree.getroot().addnext(element._element)

    def _getChildNodes(self):
        return self._childNodes

    childNodes = property(_getChildNodes)


def testSerializer(element):
    rv = []
    infosetFilter = _ihatexml.InfosetFilter(preventDoubleDashComments=True)

    def serializeElement(element, indent=0):
        if not hasattr(element, "tag"):
            if hasattr(element, "getroot"):
                # Full tree case
                rv.append("#document")
                if element.docinfo.internalDTD:
                    if not (element.docinfo.public_id or
                            element.docinfo.system_url):
                        dtd_str = "<!DOCTYPE %s>" % element.docinfo.root_name
                    else:
                        dtd_str = """<!DOCTYPE %s "%s" "%s">""" % (
                            element.docinfo.root_name,
                            element.docinfo.public_id,
                            element.docinfo.system_url)
                    rv.append("|%s%s" % (' ' * (indent + 2), dtd_str))
                next_element = element.getroot()
                while next_element.getprevious() is not None:
                    next_element = next_element.getprevious()
                while next_element is not None:
                    serializeElement(next_element, indent + 2)
                    next_element = next_element.getnext()
            elif isinstance(element, str) or isinstance(element, bytes):
                # Text in a fragment
                assert isinstance(element, str) or sys.version_info[0] == 2
                rv.append("|%s\"%s\"" % (' ' * indent, element))
            else:
                # Fragment case
                rv.append("#document-fragment")
                for next_element in element:
                    serializeElement(next_element, indent + 2)
        elif element.tag == comment_type:
            rv.append("|%s<!-- %s -->" % (' ' * indent, element.text))
            if hasattr(element, "tail") and element.tail:
                rv.append("|%s\"%s\"" % (' ' * indent, element.tail))
        else:
            assert isinstance(element, etree._Element)
            nsmatch = etree_builders.tag_regexp.match(element.tag)
            if nsmatch is not None:
                ns = nsmatch.group(1)
                tag = nsmatch.group(2)
                prefix = constants.prefixes[ns]
                rv.append("|%s<%s %s>" % (' ' * indent, prefix,
                                          infosetFilter.fromXmlName(tag)))
            else:
                rv.append("|%s<%s>" % (' ' * indent,
                                       infosetFilter.fromXmlName(element.tag)))

            if hasattr(element, "attrib"):
                attributes = []
                for name, value in element.attrib.items():
                    nsmatch = tag_regexp.match(name)
                    if nsmatch is not None:
                        ns, name = nsmatch.groups()
                        name = infosetFilter.fromXmlName(name)
                        prefix = constants.prefixes[ns]
                        attr_string = "%s %s" % (prefix, name)
                    else:
                        attr_string = infosetFilter.fromXmlName(name)
                    attributes.append((attr_string, value))
                for name, value in sorted(attributes):
                    rv.append('|%s%s="%s"' % (' ' * (indent + 2), name, value))

            if element.text:
                rv.append("|%s\"%s\"" % (' ' * (indent + 2), element.text))
            indent += 2
            for child in element:
                serializeElement(child, indent)
            if hasattr(element, "tail") and element.tail:
                rv.append("|%s\"%s\"" % (' ' * (indent - 2), element.tail))
    serializeElement(element, 0)

    return "\n".join(rv)


def tostring(element):
    """Serialize an element and its child nodes to a string"""
    rv = []

    def serializeElement(element):
        if not hasattr(element, "tag"):
            if element.docinfo.internalDTD:
                if element.docinfo.doctype:
                    dtd_str = element.docinfo.doctype
                else:
                    dtd_str = "<!DOCTYPE %s>" % element.docinfo.root_name
                rv.append(dtd_str)
            serializeElement(element.getroot())

        elif element.tag == comment_type:
            rv.append("<!--%s-->" % (element.text,))

        else:
            # This is assumed to be an ordinary element
            if not element.attrib:
                rv.append("<%s>" % (element.tag,))
            else:
                attr = " ".join(["%s=\"%s\"" % (name, value)
                                 for name, value in element.attrib.items()])
                rv.append("<%s %s>" % (element.tag, attr))
            if element.text:
                rv.append(element.text)

            for child in element:
                serializeElement(child)

            rv.append("</%s>" % (element.tag,))

        if hasattr(element, "tail") and element.tail:
            rv.append(element.tail)

    serializeElement(element)

    return "".join(rv)


class TreeBuilder(base.TreeBuilder):
    documentClass = Document
    doctypeClass = DocumentType
    elementClass = None
    commentClass = None
    fragmentClass = Document
    implementation = etree

    def __init__(self, namespaceHTMLElements, fullTree=False):
        builder = etree_builders.getETreeModule(etree, fullTree=fullTree)
        infosetFilter = self.infosetFilter = _ihatexml.InfosetFilter(preventDoubleDashComments=True)
        self.namespaceHTMLElements = namespaceHTMLElements

        class Attributes(dict):
            def __init__(self, element, value=None):
                if value is None:
                    value = {}
                self._element = element
                dict.__init__(self, value)  # pylint:disable=non-parent-init-called
                for key, value in self.items():
                    if isinstance(key, tuple):
                        name = "{%s}%s" % (key[2], infosetFilter.coerceAttribute(key[1]))
                    else:
                        name = infosetFilter.coerceAttribute(key)
                    self._element._element.attrib[name] = value

            def __setitem__(self, key, value):
                dict.__setitem__(self, key, value)
                if isinstance(key, tuple):
                    name = "{%s}%s" % (key[2], infosetFilter.coerceAttribute(key[1]))
                else:
                    name = infosetFilter.coerceAttribute(key)
                self._element._element.attrib[name] = value

        class Element(builder.Element):
            def __init__(self, name, namespace):
                name = infosetFilter.coerceElement(name)
                builder.Element.__init__(self, name, namespace=namespace)
                self._attributes = Attributes(self)

            def _setName(self, name):
                self._name = infosetFilter.coerceElement(name)
                self._element.tag = self._getETreeTag(
                    self._name, self._namespace)

            def _getName(self):
                return infosetFilter.fromXmlName(self._name)

            name = property(_getName, _setName)

            def _getAttributes(self):
                return self._attributes

            def _setAttributes(self, attributes):
                self._attributes = Attributes(self, attributes)

            attributes = property(_getAttributes, _setAttributes)

            def insertText(self, data, insertBefore=None):
                data = infosetFilter.coerceCharacters(data)
                builder.Element.insertText(self, data, insertBefore)

            def appendChild(self, child):
                builder.Element.appendChild(self, child)

        class Comment(builder.Comment):
            def __init__(self, data):
                data = infosetFilter.coerceComment(data)
                builder.Comment.__init__(self, data)

            def _setData(self, data):
                data = infosetFilter.coerceComment(data)
                self._element.text = data

            def _getData(self):
                return self._element.text

            data = property(_getData, _setData)

        self.elementClass = Element
        self.commentClass = Comment
        # self.fragmentClass = builder.DocumentFragment
        base.TreeBuilder.__init__(self, namespaceHTMLElements)

    def reset(self):
        base.TreeBuilder.reset(self)
        self.insertComment = self.insertCommentInitial
        self.initial_comments = []
        self.doctype = None

    def testSerializer(self, element):
        return testSerializer(element)

    def getDocument(self):
        if fullTree:
            return self.document._elementTree
        else:
            return self.document._elementTree.getroot()

    def getFragment(self):
        fragment = []
        element = self.openElements[0]._element
        if element.text:
            fragment.append(element.text)
        fragment.extend(list(element))
        if element.tail:
            fragment.append(element.tail)
        return fragment

    def insertDoctype(self, token):
        name = token["name"]
        publicId = token["publicId"]
        systemId = token["systemId"]

        if not name:
            warnings.warn("lxml cannot represent empty doctype", DataLossWarning)
            self.doctype = None
        else:
            coercedName = self.infosetFilter.coerceElement(name)
            if coercedName != name:
                warnings.warn("lxml cannot represent non-xml doctype", DataLossWarning)

            doctype = self.doctypeClass(coercedName, publicId, systemId)
            self.doctype = doctype

    def insertCommentInitial(self, data, parent=None):
        assert parent is None or parent is self.document
        assert self.document._elementTree is None
        self.initial_comments.append(data)

    def insertCommentMain(self, data, parent=None):
        if (parent == self.document and
                self.document._elementTree.getroot()[-1].tag == comment_type):
            warnings.warn("lxml cannot represent adjacent comments beyond the root elements", DataLossWarning)
        super(TreeBuilder, self).insertComment(data, parent)

    def insertRoot(self, token):
        # Because of the way libxml2 works, it doesn't seem to be possible
        # to alter information like the doctype after the tree has been
        # parsed. Therefore we need to use the built-in parser to create
        # our initial tree, after which we can add elements like normal
        docStr = ""
        if self.doctype:
            assert self.doctype.name
            docStr += "<!DOCTYPE %s" % self.doctype.name
            if (self.doctype.publicId is not None or
                    self.doctype.systemId is not None):
                docStr += (' PUBLIC "%s" ' %
                           (self.infosetFilter.coercePubid(self.doctype.publicId or "")))
                if self.doctype.systemId:
                    sysid = self.doctype.systemId
                    if sysid.find("'") >= 0 and sysid.find('"') >= 0:
                        warnings.warn("DOCTYPE system cannot contain single and double quotes", DataLossWarning)
                        sysid = sysid.replace("'", 'U00027')
                    if sysid.find("'") >= 0:
                        docStr += '"%s"' % sysid
                    else:
                        docStr += "'%s'" % sysid
                else:
                    docStr += "''"
            docStr += ">"
            if self.doctype.name != token["name"]:
                warnings.warn("lxml cannot represent doctype with a different name to the root element", DataLossWarning)
        docStr += "<THIS_SHOULD_NEVER_APPEAR_PUBLICLY/>"
        root = etree.fromstring(docStr)

        # Append the initial comments:
        for comment_token in self.initial_comments:
            comment = self.commentClass(comment_token["data"])
            root.addprevious(comment._element)

        # Create the root document and add the ElementTree to it
        self.document = self.documentClass()
        self.document._elementTree = root.getroottree()

        # Give the root element the right name
        name = token["name"]
        namespace = token.get("namespace", self.defaultNamespace)
        if namespace is None:
            etree_tag = name
        else:
            etree_tag = "{%s}%s" % (namespace, name)
        root.tag = etree_tag

        # Add the root element to the internal child/open data structures
        root_element = self.elementClass(name, namespace)
        root_element._element = root
        self.document._childNodes.append(root_element)
        self.openElements.append(root_element)

        # Reset to the default insert comment function
        self.insertComment = self.insertCommentMain
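
An editorial usage sketch, not part of the vendored archive: because getDocument returns the native lxml tree, lxml's own serializer can be applied directly to the parse result (the sample markup is made up).

# Usage sketch: parse with the lxml treebuilder and serialize with lxml itself.
import lxml.etree

from pip._vendor import html5lib

doc = html5lib.parse("<!DOCTYPE html><p>Hello</p>", treebuilder="lxml")
print(lxml.etree.tostring(doc))  # a native lxml ElementTree, doctype included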
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/__init__.py

"""A collection of modules for iterating through different kinds of tree,
generating tokens identical to those produced by the tokenizer module.

To create a tree walker for a new type of tree, you need to implement a tree
walker object (called TreeWalker by convention) that implements a 'serialize'
method taking a tree as sole argument and returning an iterator generating
tokens.
"""

from __future__ import absolute_import, division, unicode_literals

from .. import constants
from .._utils import default_etree

__all__ = ["getTreeWalker", "pprint"]

treeWalkerCache = {}


def getTreeWalker(treeType, implementation=None, **kwargs):
    """Get a TreeWalker class for various types of tree with built-in support

    :arg str treeType: the name of the tree type required (case-insensitive).
        Supported values are:

        * "dom": The xml.dom.minidom DOM implementation
        * "etree": A generic walker for tree implementations exposing an
          elementtree-like interface (known to work with ElementTree,
          cElementTree and lxml.etree).
        * "lxml": Optimized walker for lxml.etree
        * "genshi": a Genshi stream

    :arg implementation: A module implementing the tree type e.g.
        xml.etree.ElementTree or cElementTree (Currently applies to the
        "etree" tree type only).

    :arg kwargs: keyword arguments passed to the etree walker--for other
        walkers, this has no effect

    :returns: a TreeWalker class

    """

    treeType = treeType.lower()
    if treeType not in treeWalkerCache:
        if treeType == "dom":
            from . import dom
            treeWalkerCache[treeType] = dom.TreeWalker
        elif treeType == "genshi":
            from . import genshi
            treeWalkerCache[treeType] = genshi.TreeWalker
        elif treeType == "lxml":
            from . import etree_lxml
            treeWalkerCache[treeType] = etree_lxml.TreeWalker
        elif treeType == "etree":
            from . import etree
            if implementation is None:
                implementation = default_etree
            # XXX: NEVER cache here, caching is done in the etree submodule
            return etree.getETreeModule(implementation, **kwargs).TreeWalker
    return treeWalkerCache.get(treeType)


def concatenateCharacterTokens(tokens):
    pendingCharacters = []
    for token in tokens:
        type = token["type"]
        if type in ("Characters", "SpaceCharacters"):
            pendingCharacters.append(token["data"])
        else:
            if pendingCharacters:
                yield {"type": "Characters", "data": "".join(pendingCharacters)}
                pendingCharacters = []
            yield token
    if pendingCharacters:
        yield {"type": "Characters", "data": "".join(pendingCharacters)}


def pprint(walker):
    """Pretty printer for tree walkers

    Takes a TreeWalker instance and pretty prints the output of walking the tree.

    :arg walker: a TreeWalker instance

    """
    output = []
    indent = 0
    for token in concatenateCharacterTokens(walker):
        type = token["type"]
        if type in ("StartTag", "EmptyTag"):
            # tag name
            if token["namespace"] and token["namespace"] != constants.namespaces["html"]:
                if token["namespace"] in constants.prefixes:
                    ns = constants.prefixes[token["namespace"]]
                else:
                    ns = token["namespace"]
                name = "%s %s" % (ns, token["name"])
            else:
                name = token["name"]
            output.append("%s<%s>" % (" " * indent, name))
            indent += 2
            # attributes (sorted for consistent ordering)
            attrs = token["data"]
            for (namespace, localname), value in sorted(attrs.items()):
                if namespace:
                    if namespace in constants.prefixes:
                        ns = constants.prefixes[namespace]
                    else:
                        ns = namespace
                    name = "%s %s" % (ns, localname)
                else:
                    name = localname
                output.append("%s%s=\"%s\"" % (" " * indent, name, value))
            # self-closing
            if type == "EmptyTag":
                indent -= 2

        elif type == "EndTag":
            indent -= 2

        elif type == "Comment":
            output.append("%s<!-- %s -->" % (" " * indent, token["data"]))

        elif type == "Doctype":
            if token["name"]:
                if token["publicId"]:
                    output.append("""%s<!DOCTYPE %s "%s" "%s">""" %
                                  (" " * indent, token["name"],
                                   token["publicId"],
                                   token["systemId"] if token["systemId"] else ""))
                elif token["systemId"]:
                    output.append("""%s<!DOCTYPE %s "" "%s">""" %
                                  (" " * indent, token["name"],
                                   token["systemId"]))
                else:
                    output.append("%s<!DOCTYPE %s>" % (" " * indent,
                                                       token["name"]))
            else:
                output.append("%s<!DOCTYPE >" % (" " * indent,))

        elif type == "Characters":
            output.append("%s\"%s\"" % (" " * indent, token["data"]))

        elif type == "SpaceCharacters":
            assert False, "concatenateCharacterTokens should have got rid of all Space tokens"

        else:
            raise ValueError("Unknown token type, %s" % type)
    return "\n".join(output)
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/base.py

from __future__ import absolute_import, division, unicode_literals

from xml.dom import Node
from ..constants import namespaces, voidElements, spaceCharacters

__all__ = ["DOCUMENT", "DOCTYPE", "TEXT", "ELEMENT", "COMMENT", "ENTITY", "UNKNOWN",
           "TreeWalker", "NonRecursiveTreeWalker"]

DOCUMENT = Node.DOCUMENT_NODE
DOCTYPE = Node.DOCUMENT_TYPE_NODE
TEXT = Node.TEXT_NODE
ELEMENT = Node.ELEMENT_NODE
COMMENT = Node.COMMENT_NODE
ENTITY = Node.ENTITY_NODE
UNKNOWN = "<#UNKNOWN#>"

spaceCharacters = "".join(spaceCharacters)


class TreeWalker(object):
    """Walks a tree yielding tokens

    Tokens are dicts that all have a ``type`` field specifying the type of the
    token.

    """
    def __init__(self, tree):
        """Creates a TreeWalker

        :arg tree: the tree to walk

        """
        self.tree = tree

    def __iter__(self):
        raise NotImplementedError

    def error(self, msg):
        """Generates an error token with the given message

        :arg msg: the error message

        :returns: SerializeError token

        """
        return {"type": "SerializeError", "data": msg}

    def emptyTag(self, namespace, name, attrs, hasChildren=False):
        """Generates an EmptyTag token

        :arg namespace: the namespace of the token--can be ``None``

        :arg name: the name of the element

        :arg attrs: the attributes of the element as a dict

        :arg hasChildren: whether or not to yield a SerializationError because
            this tag shouldn't have children

        :returns: EmptyTag token

        """
        yield {"type": "EmptyTag", "name": name,
               "namespace": namespace,
               "data": attrs}
        if hasChildren:
            yield self.error("Void element has children")

    def startTag(self, namespace, name, attrs):
        """Generates a StartTag token

        :arg namespace: the namespace of the token--can be ``None``

        :arg name: the name of the element

        :arg attrs: the attributes of the element as a dict

        :returns: StartTag token

        """
        return {"type": "StartTag",
                "name": name,
                "namespace": namespace,
                "data": attrs}

    def endTag(self, namespace, name):
        """Generates an EndTag token

        :arg namespace: the namespace of the token--can be ``None``

        :arg name: the name of the element

        :returns: EndTag token

        """
        return {"type": "EndTag",
                "name": name,
                "namespace": namespace}

    def text(self, data):
        """Generates SpaceCharacters and Characters tokens

        Depending on what's in the data, this generates one or more
        ``SpaceCharacters`` and ``Characters`` tokens.

        For example:

            >>> from html5lib.treewalkers.base import TreeWalker
            >>> # Give it an empty tree just so it instantiates
            >>> walker = TreeWalker([])
            >>> list(walker.text(''))
            []
            >>> list(walker.text('  '))
            [{u'data': '  ', u'type': u'SpaceCharacters'}]
            >>> list(walker.text(' abc '))  # doctest: +NORMALIZE_WHITESPACE
            [{u'data': ' ', u'type': u'SpaceCharacters'},
            {u'data': u'abc', u'type': u'Characters'},
            {u'data': u' ', u'type': u'SpaceCharacters'}]

        :arg data: the text data

        :returns: one or more ``SpaceCharacters`` and ``Characters`` tokens

        """
        data = data
        middle = data.lstrip(spaceCharacters)
        left = data[:len(data) - len(middle)]
        if left:
            yield {"type": "SpaceCharacters", "data": left}
        data = middle
        middle = data.rstrip(spaceCharacters)
        right = data[len(middle):]
        if middle:
            yield {"type": "Characters", "data": middle}
        if right:
            yield {"type": "SpaceCharacters", "data": right}

    def comment(self, data):
        """Generates a Comment token

        :arg data: the comment

        :returns: Comment token

        """
        return {"type": "Comment", "data": data}

    def doctype(self, name, publicId=None, systemId=None):
        """Generates a Doctype token

        :arg name:

        :arg publicId:

        :arg systemId:

        :returns: the Doctype token

        """
        return {"type": "Doctype",
                "name": name,
                "publicId": publicId,
                "systemId": systemId}

    def entity(self, name):
        """Generates an Entity token

        :arg name: the entity name

        :returns: an Entity token

        """
        return {"type": "Entity", "name": name}

    def unknown(self, nodeType):
        """Handles unknown node types"""
        return self.error("Unknown node type: " + nodeType)


class NonRecursiveTreeWalker(TreeWalker):
    def getNodeDetails(self, node):
        raise NotImplementedError

    def getFirstChild(self, node):
        raise NotImplementedError

    def getNextSibling(self, node):
        raise NotImplementedError

    def getParentNode(self, node):
        raise NotImplementedError

    def __iter__(self):
        currentNode = self.tree
        while currentNode is not None:
            details = self.getNodeDetails(currentNode)
            type, details = details[0], details[1:]
            hasChildren = False

            if type == DOCTYPE:
                yield self.doctype(*details)

            elif type == TEXT:
                for token in self.text(*details):
                    yield token

            elif type == ELEMENT:
                namespace, name, attributes, hasChildren = details
                if (not namespace or namespace == namespaces["html"]) and name in voidElements:
                    for token in self.emptyTag(namespace, name, attributes,
                                               hasChildren):
                        yield token
                    hasChildren = False
                else:
                    yield self.startTag(namespace, name, attributes)

            elif type == COMMENT:
                yield self.comment(details[0])

            elif type == ENTITY:
                yield self.entity(details[0])

            elif type == DOCUMENT:
                hasChildren = True

            else:
                yield self.unknown(details[0])

            if hasChildren:
                firstChild = self.getFirstChild(currentNode)
            else:
                firstChild = None

            if firstChild is not None:
                currentNode = firstChild
            else:
                while currentNode is not None:
                    details = self.getNodeDetails(currentNode)
                    type, details = details[0], details[1:]
                    if type == ELEMENT:
                        namespace, name, attributes, hasChildren = details
                        if (namespace and namespace != namespaces["html"]) or name not in voidElements:
                            yield self.endTag(namespace, name)
                    if self.tree is currentNode:
                        currentNode = None
                        break
                    nextSibling = self.getNextSibling(currentNode)
                    if nextSibling is not None:
                        currentNode = nextSibling
                        break
                    else:
                        currentNode = self.getParentNode(currentNode)
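
An editorial sketch, not part of the vendored archive: the four abstract methods of NonRecursiveTreeWalker are all that is needed to walk any tree shape. Below, a toy tree of dicts with explicit parent links is walked; every name in the snippet is illustrative.

# Illustrative sketch: a minimal NonRecursiveTreeWalker over a toy dict tree.
from pip._vendor.html5lib.treewalkers import base


def make_node(name, attrs=None, children=()):
    node = {"name": name, "attrs": attrs or {}, "children": [], "parent": None}
    for child in children:
        if isinstance(child, str):
            child = {"text": child, "parent": None}
        child["parent"] = node
        node["children"].append(child)
    return node


class DictTreeWalker(base.NonRecursiveTreeWalker):
    def getNodeDetails(self, node):
        if "text" in node:
            return base.TEXT, node["text"]
        # Attribute keys are (namespace, name) pairs; namespace None here.
        attrs = {(None, k): v for k, v in node["attrs"].items()}
        return base.ELEMENT, None, node["name"], attrs, bool(node["children"])

    def getFirstChild(self, node):
        return node["children"][0]

    def getNextSibling(self, node):
        siblings = node["parent"]["children"]
        index = next(i for i, n in enumerate(siblings) if n is node)
        return siblings[index + 1] if index + 1 < len(siblings) else None

    def getParentNode(self, node):
        return node["parent"]


tree = make_node("div", {"id": "x"}, ["Hello ", make_node("b", children=["world"])])
for token in DictTreeWalker(tree):
    print(token)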
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/dom.py

from __future__ import absolute_import, division, unicode_literals

from xml.dom import Node

from . import base


class TreeWalker(base.NonRecursiveTreeWalker):
    def getNodeDetails(self, node):
        if node.nodeType == Node.DOCUMENT_TYPE_NODE:
            return base.DOCTYPE, node.name, node.publicId, node.systemId

        elif node.nodeType in (Node.TEXT_NODE, Node.CDATA_SECTION_NODE):
            return base.TEXT, node.nodeValue

        elif node.nodeType == Node.ELEMENT_NODE:
            attrs = {}
            for attr in list(node.attributes.keys()):
                attr = node.getAttributeNode(attr)
                if attr.namespaceURI:
                    attrs[(attr.namespaceURI, attr.localName)] = attr.value
                else:
                    attrs[(None, attr.name)] = attr.value
            return (base.ELEMENT, node.namespaceURI, node.nodeName,
                    attrs, node.hasChildNodes())

        elif node.nodeType == Node.COMMENT_NODE:
            return base.COMMENT, node.nodeValue

        elif node.nodeType in (Node.DOCUMENT_NODE, Node.DOCUMENT_FRAGMENT_NODE):
            return (base.DOCUMENT,)

        else:
            return base.UNKNOWN, node.nodeType

    def getFirstChild(self, node):
        return node.firstChild

    def getNextSibling(self, node):
        return node.nextSibling

    def getParentNode(self, node):
        return node.parentNode


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/etree.py

from __future__ import absolute_import, division, unicode_literals

from collections import OrderedDict
import re

from pip._vendor.six import string_types

from . import base
from .._utils import moduleFactoryFactory

tag_regexp = re.compile("{([^}]*)}(.*)")


def getETreeBuilder(ElementTreeImplementation):
    ElementTree = ElementTreeImplementation
    ElementTreeCommentType = ElementTree.Comment("asd").tag

    class TreeWalker(base.NonRecursiveTreeWalker):  # pylint:disable=unused-variable
        """Given the particular ElementTree representation, this implementation,
        to avoid using recursion, returns "nodes" as tuples with the following
        content:

        1. The current element

        2. The index of the element relative to its parent

        3. A stack of ancestor elements

        4. A flag "text", "tail" or None to indicate if the current node is a
           text node; either the text or tail of the current element (1)
        """
        def getNodeDetails(self, node):
            if isinstance(node, tuple):  # It might be the root Element
                elt, _, _, flag = node
                if flag in ("text", "tail"):
                    return base.TEXT, getattr(elt, flag)
                else:
                    node = elt

            if not(hasattr(node, "tag")):
                node = node.getroot()

            if node.tag in ("DOCUMENT_ROOT", "DOCUMENT_FRAGMENT"):
                return (base.DOCUMENT,)

            elif node.tag == "<!DOCTYPE>":
                return (base.DOCTYPE, node.text,
                        node.get("publicId"), node.get("systemId"))

            elif node.tag == ElementTreeCommentType:
                return base.COMMENT, node.text

            else:
                assert isinstance(node.tag, string_types), type(node.tag)
                # This is assumed to be an ordinary element
                match = tag_regexp.match(node.tag)
                if match:
                    namespace, tag = match.groups()
                else:
                    namespace = None
                    tag = node.tag
                attrs = OrderedDict()
                for name, value in list(node.attrib.items()):
                    match = tag_regexp.match(name)
                    if match:
                        attrs[(match.group(1), match.group(2))] = value
                    else:
                        attrs[(None, name)] = value
                return (base.ELEMENT, namespace, tag,
                        attrs, len(node) or node.text)

        def getFirstChild(self, node):
            if isinstance(node, tuple):
                element, key, parents, flag = node
            else:
                element, key, parents, flag = node, None, [], None

            if flag in ("text", "tail"):
                return None
            else:
                if element.text:
                    return element, key, parents, "text"
                elif len(element):
                    parents.append(element)
                    return element[0], 0, parents, None
                else:
                    return None

        def getNextSibling(self, node):
            if isinstance(node, tuple):
                element, key, parents, flag = node
            else:
                return None

            if flag == "text":
                if len(element):
                    parents.append(element)
                    return element[0], 0, parents, None
                else:
                    return None
            else:
                if element.tail and flag != "tail":
                    return element, key, parents, "tail"
                elif key < len(parents[-1]) - 1:
                    return parents[-1][key + 1], key + 1, parents, None
                else:
                    return None

        def getParentNode(self, node):
            if isinstance(node, tuple):
                element, key, parents, flag = node
            else:
                return None

            if flag == "text":
                if not parents:
                    return element
                else:
                    return element, key, parents, None
            else:
                parent = parents.pop()
                if not parents:
                    return parent
                else:
                    assert list(parents[-1]).count(parent) == 1
                    return parent, list(parents[-1]).index(parent), parents, None

    return locals()


getETreeModule = moduleFactoryFactory(getETreeBuilder)
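
An editorial usage sketch, not part of the vendored archive: the generic walker also handles plain xml.etree.ElementTree elements when the implementation module is passed explicitly (the sample markup is made up).

# Usage sketch: walk a bare ElementTree element with the generic etree walker.
import xml.etree.ElementTree as ET

from pip._vendor.html5lib.treewalkers import getTreeWalker

root = ET.fromstring("<div>Hello <b>world</b>!</div>")
TreeWalker = getTreeWalker("etree", implementation=ET)
for token in TreeWalker(root):
    print(token["type"], token.get("name") or token.get("data", ""))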
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/etree_lxml.py

from __future__ import absolute_import, division, unicode_literals

from pip._vendor.six import text_type

from lxml import etree
from ..treebuilders.etree import tag_regexp

from . import base

from .. import _ihatexml


def ensure_str(s):
    if s is None:
        return None
    elif isinstance(s, text_type):
        return s
    else:
        return s.decode("ascii", "strict")


class Root(object):
    def __init__(self, et):
        self.elementtree = et
        self.children = []

        try:
            if et.docinfo.internalDTD:
                self.children.append(Doctype(self,
                                             ensure_str(et.docinfo.root_name),
                                             ensure_str(et.docinfo.public_id),
                                             ensure_str(et.docinfo.system_url)))
        except AttributeError:
            pass

        try:
            node = et.getroot()
        except AttributeError:
            node = et

        while node.getprevious() is not None:
            node = node.getprevious()
        while node is not None:
            self.children.append(node)
            node = node.getnext()

        self.text = None
        self.tail = None

    def __getitem__(self, key):
        return self.children[key]

    def getnext(self):
        return None

    def __len__(self):
        return 1


class Doctype(object):
    def __init__(self, root_node, name, public_id, system_id):
        self.root_node = root_node
        self.name = name
        self.public_id = public_id
        self.system_id = system_id

        self.text = None
        self.tail = None

    def getnext(self):
        return self.root_node.children[1]


class FragmentRoot(Root):
    def __init__(self, children):
        self.children = [FragmentWrapper(self, child) for child in children]
        self.text = self.tail = None

    def getnext(self):
        return None


class FragmentWrapper(object):
    def __init__(self, fragment_root, obj):
        self.root_node = fragment_root
        self.obj = obj
        if hasattr(self.obj, 'text'):
            self.text = ensure_str(self.obj.text)
        else:
            self.text = None
        if hasattr(self.obj, 'tail'):
            self.tail = ensure_str(self.obj.tail)
        else:
            self.tail = None

    def __getattr__(self, name):
        return getattr(self.obj, name)

    def getnext(self):
        siblings = self.root_node.children
        idx = siblings.index(self)
        if idx < len(siblings) - 1:
            return siblings[idx + 1]
        else:
            return None

    def __getitem__(self, key):
        return self.obj[key]

    def __bool__(self):
        return bool(self.obj)

    def getparent(self):
        return None

    def __str__(self):
        return str(self.obj)

    def __unicode__(self):
        return str(self.obj)

    def __len__(self):
        return len(self.obj)


class TreeWalker(base.NonRecursiveTreeWalker):
    def __init__(self, tree):
        # pylint:disable=redefined-variable-type
        if isinstance(tree, list):
            self.fragmentChildren = set(tree)
            tree = FragmentRoot(tree)
        else:
            self.fragmentChildren = set()
            tree = Root(tree)
        base.NonRecursiveTreeWalker.__init__(self, tree)
        self.filter = _ihatexml.InfosetFilter()

    def getNodeDetails(self, node):
        if isinstance(node, tuple):  # Text node
            node, key = node
            assert key in ("text", "tail"), "Text nodes are text or tail, found %s" % key
            return base.TEXT, ensure_str(getattr(node, key))

        elif isinstance(node, Root):
            return (base.DOCUMENT,)

        elif isinstance(node, Doctype):
            return base.DOCTYPE, node.name, node.public_id, node.system_id

        elif isinstance(node, FragmentWrapper) and not hasattr(node, "tag"):
            return base.TEXT, ensure_str(node.obj)

        elif node.tag == etree.Comment:
            return base.COMMENT, ensure_str(node.text)

        elif node.tag == etree.Entity:
            return base.ENTITY, ensure_str(node.text)[1:-1]  # strip &;

        else:
            # This is assumed to be an ordinary element
            match = tag_regexp.match(ensure_str(node.tag))
            if match:
                namespace, tag = match.groups()
            else:
                namespace = None
                tag = ensure_str(node.tag)
            attrs = {}
            for name, value in list(node.attrib.items()):
                name = ensure_str(name)
                value = ensure_str(value)
                match = tag_regexp.match(name)
                if match:
                    attrs[(match.group(1), match.group(2))] = value
                else:
                    attrs[(None, name)] = value
            return (base.ELEMENT, namespace, self.filter.fromXmlName(tag),
                    attrs, len(node) > 0 or node.text)

    def getFirstChild(self, node):
        assert not isinstance(node, tuple), "Text nodes have no children"

        assert len(node) or node.text, "Node has no children"
        if node.text:
            return (node, "text")
        else:
            return node[0]

    def getNextSibling(self, node):
        if isinstance(node, tuple):  # Text node
            node, key = node
            assert key in ("text", "tail"), "Text nodes are text or tail, found %s" % key
            if key == "text":
                # XXX: we cannot use a "bool(node) and node[0] or None" construct here
                # because node[0] might evaluate to False if it has no child element
                if len(node):
                    return node[0]
                else:
                    return None
            else:  # tail
                return node.getnext()

        return (node, "tail") if node.tail else node.getnext()

    def getParentNode(self, node):
        if isinstance(node, tuple):  # Text node
            node, key = node
            assert key in ("text", "tail"), "Text nodes are text or tail, found %s" % key
            if key == "text":
                return node
            # else: fallback to "normal" processing
        elif node in self.fragmentChildren:
            return None

        return node.getparent()


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/html5lib/treewalkers/genshi.py

from __future__ import absolute_import, division, unicode_literals

from genshi.core import QName
from genshi.core import START, END, XML_NAMESPACE, DOCTYPE, TEXT
from genshi.core import START_NS, END_NS, START_CDATA, END_CDATA, PI, COMMENT

from . import base

from ..constants import voidElements, namespaces


class TreeWalker(base.TreeWalker):
    def __iter__(self):
        # Buffer the events so we can pass in the following one
        previous = None
        for event in self.tree:
            if previous is not None:
                for token in self.tokens(previous, event):
                    yield token
            previous = event

        # Don't forget the final event!
        if previous is not None:
            for token in self.tokens(previous, None):
                yield token

    def tokens(self, event, next):
        kind, data, _ = event
        if kind == START:
            tag, attribs = data
            name = tag.localname
            namespace = tag.namespace
            converted_attribs = {}
            for k, v in attribs:
                if isinstance(k, QName):
                    converted_attribs[(k.namespace, k.localname)] = v
                else:
                    converted_attribs[(None, k)] = v

            if namespace == namespaces["html"] and name in voidElements:
                for token in self.emptyTag(namespace, name, converted_attribs,
                                           not next or next[0] != END or
                                           next[1] != tag):
                    yield token
            else:
                yield self.startTag(namespace, name, converted_attribs)

        elif kind == END:
            name = data.localname
            namespace = data.namespace
            if namespace != namespaces["html"] or name not in voidElements:
                yield self.endTag(namespace, name)

        elif kind == COMMENT:
            yield self.comment(data)

        elif kind == TEXT:
            for token in self.text(data):
                yield token

        elif kind == DOCTYPE:
            yield self.doctype(*data)

        elif kind in (XML_NAMESPACE, DOCTYPE, START_NS, END_NS,
                      START_CDATA, END_CDATA, PI):
            pass

        else:
            yield self.unknown(kind)
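
An editorial usage sketch, not part of the vendored archive: walking a Genshi stream. This assumes the third-party genshi package is installed, which pip itself does not vendor.

# Usage sketch: tokenize a Genshi stream with the genshi walker.
from genshi.input import HTML

from pip._vendor.html5lib.treewalkers import getTreeWalker

stream = HTML(u"<p>Hello <b>world</b></p>")
TreeWalker = getTreeWalker("genshi")
for token in TreeWalker(stream):
    print(token)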
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/__init__.py

from .package_data import __version__
from .core import *


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/codec.py

from .core import encode, decode, alabel, ulabel, IDNAError
import codecs
import re

_unicode_dots_re = re.compile(u'[\u002e\u3002\uff0e\uff61]')


class Codec(codecs.Codec):

    def encode(self, data, errors='strict'):
        if errors != 'strict':
            raise IDNAError("Unsupported error handling \"{0}\"".format(errors))

        if not data:
            return "", 0

        return encode(data), len(data)

    def decode(self, data, errors='strict'):
        if errors != 'strict':
            raise IDNAError("Unsupported error handling \"{0}\"".format(errors))

        if not data:
            return u"", 0

        return decode(data), len(data)


class IncrementalEncoder(codecs.BufferedIncrementalEncoder):
    def _buffer_encode(self, data, errors, final):
        if errors != 'strict':
            raise IDNAError("Unsupported error handling \"{0}\"".format(errors))

        if not data:
            return ("", 0)

        labels = _unicode_dots_re.split(data)
        trailing_dot = u''
        if labels:
            if not labels[-1]:
                trailing_dot = '.'
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = '.'

        result = []
        size = 0
        for label in labels:
            result.append(alabel(label))
            if size:
                size += 1
            size += len(label)

        # Join with U+002E
        result = ".".join(result) + trailing_dot
        size += len(trailing_dot)
        return (result, size)


class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
    def _buffer_decode(self, data, errors, final):
        if errors != 'strict':
            raise IDNAError("Unsupported error handling \"{0}\"".format(errors))

        if not data:
            return (u"", 0)

        # IDNA allows decoding to operate on Unicode strings, too.
        # Note: 'unicode' is the Python 2 builtin; this branch predates
        # Python 3 support and is only reachable on Python 2.
        if isinstance(data, unicode):
            labels = _unicode_dots_re.split(data)
        else:
            # Must be ASCII string
            data = str(data)
            unicode(data, "ascii")
            labels = data.split(".")

        trailing_dot = u''
        if labels:
            if not labels[-1]:
                trailing_dot = u'.'
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = u'.'

        result = []
        size = 0
        for label in labels:
            result.append(ulabel(label))
            if size:
                size += 1
            size += len(label)

        result = u".".join(result) + trailing_dot
        size += len(trailing_dot)
        return (result, size)


class StreamWriter(Codec, codecs.StreamWriter):
    pass


class StreamReader(Codec, codecs.StreamReader):
    pass


def getregentry():
    return codecs.CodecInfo(
        name='idna',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
    )


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/compat.py

from .core import *
from .codec import *


def ToASCII(label):
    return encode(label)


def ToUnicode(label):
    return decode(label)


def nameprep(s):
    raise NotImplementedError("IDNA 2008 does not utilise nameprep protocol")
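
An editorial usage sketch, not part of the vendored archive: the package's top-level encode/decode wrap these building blocks; the domain name below is the usual illustrative example.

# Usage sketch: round-trip a non-ASCII domain label through the idna API.
from pip._vendor import idna

print(idna.encode(u"b\u00fccher.example"))    # b'xn--bcher-kva.example'
print(idna.decode(b"xn--bcher-kva.example"))  # buecher.example with u-umlaut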
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/core.py

from . import idnadata
import bisect
import unicodedata
import re
import sys
from .intranges import intranges_contain

_virama_combining_class = 9
_alabel_prefix = b'xn--'
_unicode_dots_re = re.compile(u'[\u002e\u3002\uff0e\uff61]')

if sys.version_info[0] == 3:
    unicode = str
    unichr = chr


class IDNAError(UnicodeError):
    """ Base exception for all IDNA-encoding related problems """
    pass


class IDNABidiError(IDNAError):
    """ Exception when bidirectional requirements are not satisfied """
    pass


class InvalidCodepoint(IDNAError):
    """ Exception when a disallowed or unallocated codepoint is used """
    pass


class InvalidCodepointContext(IDNAError):
    """ Exception when the codepoint is not valid in the context it is used """
    pass


def _combining_class(cp):
    v = unicodedata.combining(unichr(cp))
    if v == 0:
        if not unicodedata.name(unichr(cp)):
            raise ValueError("Unknown character in unicodedata")
    return v


def _is_script(cp, script):
    return intranges_contain(ord(cp), idnadata.scripts[script])


def _punycode(s):
    return s.encode('punycode')


def _unot(s):
    return 'U+{0:04X}'.format(s)


def valid_label_length(label):
    if len(label) > 63:
        return False
    return True


def valid_string_length(label, trailing_dot):
    if len(label) > (254 if trailing_dot else 253):
        return False
    return True


def check_bidi(label, check_ltr=False):
    # Bidi rules should only be applied if string contains RTL characters
    bidi_label = False
    for (idx, cp) in enumerate(label, 1):
        direction = unicodedata.bidirectional(cp)
        if direction == '':
            # String likely comes from a newer version of Unicode
            raise IDNABidiError('Unknown directionality in label {0} at position {1}'.format(repr(label), idx))
        if direction in ['R', 'AL', 'AN']:
            bidi_label = True
    if not bidi_label and not check_ltr:
        return True

    # Bidi rule 1
    direction = unicodedata.bidirectional(label[0])
    if direction in ['R', 'AL']:
        rtl = True
    elif direction == 'L':
        rtl = False
    else:
        raise IDNABidiError('First codepoint in label {0} must be directionality L, R or AL'.format(repr(label)))

    valid_ending = False
    number_type = False
    for (idx, cp) in enumerate(label, 1):
        direction = unicodedata.bidirectional(cp)

        if rtl:
            # Bidi rule 2
            if not direction in ['R', 'AL', 'AN', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']:
                raise IDNABidiError('Invalid direction for codepoint at position {0} in a right-to-left label'.format(idx))
            # Bidi rule 3
            if direction in ['R', 'AL', 'EN', 'AN']:
                valid_ending = True
            elif direction != 'NSM':
                valid_ending = False
            # Bidi rule 4
            if direction in ['AN', 'EN']:
                if not number_type:
                    number_type = direction
                else:
                    if number_type != direction:
                        raise IDNABidiError('Can not mix numeral types in a right-to-left label')
        else:
            # Bidi rule 5
            if not direction in ['L', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']:
                raise IDNABidiError('Invalid direction for codepoint at position {0} in a left-to-right label'.format(idx))
            # Bidi rule 6
            if direction in ['L', 'EN']:
                valid_ending = True
            elif direction != 'NSM':
                valid_ending = False

    if not valid_ending:
        raise IDNABidiError('Label ends with illegal codepoint directionality')

    return True


def check_initial_combiner(label):
    if unicodedata.category(label[0])[0] == 'M':
        raise IDNAError('Label begins with an illegal combining character')
    return True


def check_hyphen_ok(label):
    if label[2:4] == '--':
        raise IDNAError('Label has disallowed hyphens in 3rd and 4th position')
    if label[0] == '-' or label[-1] == '-':
        raise IDNAError('Label must not start or end with a hyphen')
    return True


def check_nfc(label):
    if unicodedata.normalize('NFC', label) != label:
        raise IDNAError('Label must be in Normalization Form C')


def valid_contextj(label, pos):
    cp_value = ord(label[pos])

    if cp_value == 0x200c:
        if pos > 0:
            if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
                return True

        ok = False
        for i in range(pos-1, -1, -1):
            joining_type = idnadata.joining_types.get(ord(label[i]))
            if joining_type == ord('T'):
                continue
            if joining_type in [ord('L'), ord('D')]:
                ok = True
                break
        if not ok:
            return False

        ok = False
        for i in range(pos+1, len(label)):
            joining_type = idnadata.joining_types.get(ord(label[i]))
            if joining_type == ord('T'):
                continue
            if joining_type in [ord('R'), ord('D')]:
                ok = True
                break
        return ok

    if cp_value == 0x200d:
        if pos > 0:
            if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
                return True
        return False

    else:
        return False


def valid_contexto(label, pos, exception=False):
    cp_value = ord(label[pos])

    if cp_value == 0x00b7:
        if 0 < pos < len(label)-1:
            if ord(label[pos - 1]) == 0x006c and ord(label[pos + 1]) == 0x006c:
                return True
        return False

    elif cp_value == 0x0375:
        if pos < len(label)-1 and len(label) > 1:
            return _is_script(label[pos + 1], 'Greek')
        return False

    elif cp_value == 0x05f3 or cp_value == 0x05f4:
        if pos > 0:
            return _is_script(label[pos - 1], 'Hebrew')
        return False

    elif cp_value == 0x30fb:
        for cp in label:
            if cp == u'\u30fb':
                continue
            if _is_script(cp, 'Hiragana') or _is_script(cp, 'Katakana') or _is_script(cp, 'Han'):
                return True
        return False

    elif 0x660 <= cp_value <= 0x669:
        for cp in label:
            if 0x6f0 <= ord(cp) <= 0x06f9:
                return False
        return True

    elif 0x6f0 <= cp_value <= 0x6f9:
        for cp in label:
            if 0x660 <= ord(cp) <= 0x0669:
                return False
        return True


def check_label(label):
    if isinstance(label, (bytes, bytearray)):
        label = label.decode('utf-8')
    if len(label) == 0:
        raise IDNAError('Empty Label')

    check_nfc(label)
    check_hyphen_ok(label)
    check_initial_combiner(label)

    for (pos, cp) in enumerate(label):
        cp_value = ord(cp)
        if intranges_contain(cp_value, idnadata.codepoint_classes['PVALID']):
            continue
        elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTJ']):
            try:
                if not valid_contextj(label, pos):
                    raise InvalidCodepointContext('Joiner {0} not allowed at position {1} in {2}'.format(
                        _unot(cp_value), pos+1, repr(label)))
            except ValueError:
                raise IDNAError('Unknown codepoint adjacent to joiner {0} at position {1} in {2}'.format(
                    _unot(cp_value), pos+1, repr(label)))
        elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTO']):
            if not valid_contexto(label, pos):
                raise InvalidCodepointContext('Codepoint {0} not allowed at position {1} in {2}'.format(
                    _unot(cp_value), pos+1, repr(label)))
        else:
            raise InvalidCodepoint('Codepoint {0} at position {1} of {2} not allowed'.format(
                _unot(cp_value), pos+1, repr(label)))

    check_bidi(label)


def alabel(label):
    try:
        label = label.encode('ascii')
        ulabel(label)
        if not valid_label_length(label):
            raise IDNAError('Label too long')
        return label
    except UnicodeEncodeError:
        pass

    if not label:
        raise IDNAError('No Input')

    label = unicode(label)
    check_label(label)
    label = _punycode(label)
    label = _alabel_prefix + label

    if not valid_label_length(label):
        raise IDNAError('Label too long')

    return label


def ulabel(label):
    if not isinstance(label, (bytes, bytearray)):
        try:
            label = label.encode('ascii')
        except UnicodeEncodeError:
            check_label(label)
            return label

    label = label.lower()
    if label.startswith(_alabel_prefix):
        label = label[len(_alabel_prefix):]
    else:
        check_label(label)
        return label.decode('ascii')

    label = label.decode('punycode')
    check_label(label)
    return label


def uts46_remap(domain, std3_rules=True, transitional=False):
    """Re-map the characters in the string according to UTS46 processing."""
    from .uts46data import uts46data
    output = u""
    try:
        for pos, char in enumerate(domain):
            code_point = ord(char)
            uts46row = uts46data[code_point if code_point < 256 else
                bisect.bisect_left(uts46data, (code_point, "Z")) - 1]
            status = uts46row[1]
            replacement = uts46row[2] if len(uts46row) == 3 else None
            if (status == "V" or
                    (status == "D" and not transitional) or
                    (status == "3" and not std3_rules and replacement is None)):
                output += char
            elif replacement is not None and (status == "M" or
                    (status == "3" and not std3_rules) or
                    (status == "D" and transitional)):
                output += replacement
            elif status != "I":
                raise IndexError()
        return unicodedata.normalize("NFC", output)
    except IndexError:
        raise InvalidCodepoint(
            "Codepoint {0} not allowed at position {1} in {2}".format(
            _unot(code_point), pos + 1, repr(domain)))


def encode(s, strict=False, uts46=False, std3_rules=False, transitional=False):
    if isinstance(s, (bytes, bytearray)):
        s = s.decode("ascii")
    if uts46:
        s = uts46_remap(s, std3_rules, transitional)
    trailing_dot = False
    result = []
    if strict:
        labels = s.split('.')
    else:
        labels = _unicode_dots_re.split(s)
    if not labels or labels == ['']:
        raise IDNAError('Empty domain')
    if labels[-1] == '':
        del labels[-1]
        trailing_dot = True
    for label in labels:
        s = alabel(label)
        if s:
            result.append(s)
        else:
            raise IDNAError('Empty label')
    if trailing_dot:
        result.append(b'')
    s = b'.'.join(result)
    if not valid_string_length(s, trailing_dot):
        raise IDNAError('Domain too long')
    return s


def decode(s, strict=False, uts46=False, std3_rules=False):
    if isinstance(s, (bytes, bytearray)):
        s = s.decode("ascii")
    if uts46:
        s = uts46_remap(s, std3_rules, False)
    trailing_dot = False
    result = []
    if not strict:
        labels = _unicode_dots_re.split(s)
    else:
        labels = s.split(u'.')
    if not labels or labels == ['']:
        raise IDNAError('Empty domain')
    if not labels[-1]:
        del labels[-1]
        trailing_dot = True
    for label in labels:
        s = ulabel(label)
        if s:
            result.append(s)
        else:
            raise IDNAError('Empty label')
    if trailing_dot:
        result.append(u'')
    return u'.'.join(result)
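For orientation before the data tables that follow, a short round-trip sketch (not part of the vendored sources) exercising the module-level encode() and decode() above; the sample domain and its A-label form are the ones the idna project uses in its own documentation, and the import path assumes pip's vendored copy:

from pip._vendor import idna

# U-label -> A-label: each label is punycoded and given the 'xn--' ACE prefix
assert idna.encode(u'ドメイン.テスト') == b'xn--eckwd4c7c.xn--zckzah'

# A-label -> U-label
assert idna.decode(b'xn--eckwd4c7c.xn--zckzah') == u'ドメイン.テスト'

# uts46=True runs uts46_remap() first, so e.g. upper case is folded
# before the per-label checks are applied:
assert idna.encode(u'Python.Example', uts46=True) == b'python.example'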
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/idnadata.py

# This file is automatically generated by tools/idna-data

__version__ = "11.0.0"
scripts = {
    'Greek': (
        0x37000000374, 0x37500000378, 0x37a0000037e, 0x37f00000380,
        0x38400000385, 0x38600000387, 0x3880000038b, 0x38c0000038d,
        0x38e000003a2, 0x3a3000003e2, 0x3f000000400, 0x1d2600001d2b,
        0x1d5d00001d62, 0x1d6600001d6b, 0x1dbf00001dc0, 0x1f0000001f16,
        0x1f1800001f1e, 0x1f2000001f46, 0x1f4800001f4e, 0x1f5000001f58,
        0x1f5900001f5a, 0x1f5b00001f5c, 0x1f5d00001f5e, 0x1f5f00001f7e,
        0x1f8000001fb5, 0x1fb600001fc5, 0x1fc600001fd4, 0x1fd600001fdc,
        0x1fdd00001ff0, 0x1ff200001ff5, 0x1ff600001fff, 0x212600002127,
        0xab650000ab66, 0x101400001018f, 0x101a0000101a1, 0x1d2000001d246,
    ),
    'Han': (
        0x2e8000002e9a, 0x2e9b00002ef4, 0x2f0000002fd6, 0x300500003006,
        0x300700003008, 0x30210000302a, 0x30380000303c, 0x340000004db6,
        0x4e0000009ff0, 0xf9000000fa6e, 0xfa700000fada, 0x200000002a6d7,
        0x2a7000002b735, 0x2b7400002b81e, 0x2b8200002cea2, 0x2ceb00002ebe1,
        0x2f8000002fa1e,
    ),
    'Hebrew': (
        0x591000005c8, 0x5d0000005eb, 0x5ef000005f5, 0xfb1d0000fb37,
        0xfb380000fb3d, 0xfb3e0000fb3f, 0xfb400000fb42, 0xfb430000fb45,
        0xfb460000fb50,
    ),
    'Hiragana': (
        0x304100003097, 0x309d000030a0, 0x1b0010001b11f, 0x1f2000001f201,
    ),
    'Katakana': (
        0x30a1000030fb, 0x30fd00003100, 0x31f000003200, 0x32d0000032ff,
        0x330000003358, 0xff660000ff70, 0xff710000ff9e, 0x1b0000001b001,
    ),
}
joining_types = {
    0x600: 85, 0x601: 85, 0x602: 85, 0x603: 85, 0x604: 85, 0x605: 85,
    0x608: 85, 0x60b: 85, 0x620: 68, 0x621: 85, 0x622: 82, 0x623: 82,
    0x624: 82, 0x625: 82, 0x626: 68, 0x627: 82, 0x628: 68, 0x629: 82,
    0x62a: 68, 0x62b: 68, 0x62c: 68, 0x62d: 68, 0x62e: 68, 0x62f: 82,
    0x630: 82, 0x631: 82, 0x632: 82, 0x633: 68, 0x634: 68, 0x635: 68,
    0x636: 68, 0x637: 68, 0x638: 68, 0x639: 68, 0x63a: 68, 0x63b: 68,
    0x63c: 68, 0x63d: 68, 0x63e: 68, 0x63f: 68, 0x640: 67, 0x641: 68,
    0x642: 68, 0x643: 68, 0x644: 68, 0x645: 68, 0x646: 68, 0x647: 68,
    0x648: 82, 0x649: 68, 0x64a: 68, 0x66e: 68, 0x66f: 68, 0x671: 82,
    0x672: 82, 0x673: 82, 0x674: 85, 0x675: 82, 0x676: 82, 0x677: 82,
    0x678: 68, 0x679: 68, 0x67a: 68, 0x67b: 68, 0x67c: 68, 0x67d: 68,
    0x67e: 68, 0x67f: 68, 0x680: 68, 0x681: 68, 0x682: 68, 0x683: 68,
    0x684: 68,
0x685: 68, 0x686: 68, 0x687: 68, 0x688: 82, 0x689: 82, 0x68a: 82, 0x68b: 82, 0x68c: 82, 0x68d: 82, 0x68e: 82, 0x68f: 82, 0x690: 82, 0x691: 82, 0x692: 82, 0x693: 82, 0x694: 82, 0x695: 82, 0x696: 82, 0x697: 82, 0x698: 82, 0x699: 82, 0x69a: 68, 0x69b: 68, 0x69c: 68, 0x69d: 68, 0x69e: 68, 0x69f: 68, 0x6a0: 68, 0x6a1: 68, 0x6a2: 68, 0x6a3: 68, 0x6a4: 68, 0x6a5: 68, 0x6a6: 68, 0x6a7: 68, 0x6a8: 68, 0x6a9: 68, 0x6aa: 68, 0x6ab: 68, 0x6ac: 68, 0x6ad: 68, 0x6ae: 68, 0x6af: 68, 0x6b0: 68, 0x6b1: 68, 0x6b2: 68, 0x6b3: 68, 0x6b4: 68, 0x6b5: 68, 0x6b6: 68, 0x6b7: 68, 0x6b8: 68, 0x6b9: 68, 0x6ba: 68, 0x6bb: 68, 0x6bc: 68, 0x6bd: 68, 0x6be: 68, 0x6bf: 68, 0x6c0: 82, 0x6c1: 68, 0x6c2: 68, 0x6c3: 82, 0x6c4: 82, 0x6c5: 82, 0x6c6: 82, 0x6c7: 82, 0x6c8: 82, 0x6c9: 82, 0x6ca: 82, 0x6cb: 82, 0x6cc: 68, 0x6cd: 82, 0x6ce: 68, 0x6cf: 82, 0x6d0: 68, 0x6d1: 68, 0x6d2: 82, 0x6d3: 82, 0x6d5: 82, 0x6dd: 85, 0x6ee: 82, 0x6ef: 82, 0x6fa: 68, 0x6fb: 68, 0x6fc: 68, 0x6ff: 68, 0x70f: 84, 0x710: 82, 0x712: 68, 0x713: 68, 0x714: 68, 0x715: 82, 0x716: 82, 0x717: 82, 0x718: 82, 0x719: 82, 0x71a: 68, 0x71b: 68, 0x71c: 68, 0x71d: 68, 0x71e: 82, 0x71f: 68, 0x720: 68, 0x721: 68, 0x722: 68, 0x723: 68, 0x724: 68, 0x725: 68, 0x726: 68, 0x727: 68, 0x728: 82, 0x729: 68, 0x72a: 82, 0x72b: 68, 0x72c: 82, 0x72d: 68, 0x72e: 68, 0x72f: 82, 0x74d: 82, 0x74e: 68, 0x74f: 68, 0x750: 68, 0x751: 68, 0x752: 68, 0x753: 68, 0x754: 68, 0x755: 68, 0x756: 68, 0x757: 68, 0x758: 68, 0x759: 82, 0x75a: 82, 0x75b: 82, 0x75c: 68, 0x75d: 68, 0x75e: 68, 0x75f: 68, 0x760: 68, 0x761: 68, 0x762: 68, 0x763: 68, 0x764: 68, 0x765: 68, 0x766: 68, 0x767: 68, 0x768: 68, 0x769: 68, 0x76a: 68, 0x76b: 82, 0x76c: 82, 0x76d: 68, 0x76e: 68, 0x76f: 68, 0x770: 68, 0x771: 82, 0x772: 68, 0x773: 82, 0x774: 82, 0x775: 68, 0x776: 68, 0x777: 68, 0x778: 82, 0x779: 82, 0x77a: 68, 0x77b: 68, 0x77c: 68, 0x77d: 68, 0x77e: 68, 0x77f: 68, 0x7ca: 68, 0x7cb: 68, 0x7cc: 68, 0x7cd: 68, 0x7ce: 68, 0x7cf: 68, 0x7d0: 68, 0x7d1: 68, 0x7d2: 68, 0x7d3: 68, 0x7d4: 68, 0x7d5: 68, 0x7d6: 68, 0x7d7: 68, 0x7d8: 68, 0x7d9: 68, 0x7da: 68, 0x7db: 68, 0x7dc: 68, 0x7dd: 68, 0x7de: 68, 0x7df: 68, 0x7e0: 68, 0x7e1: 68, 0x7e2: 68, 0x7e3: 68, 0x7e4: 68, 0x7e5: 68, 0x7e6: 68, 0x7e7: 68, 0x7e8: 68, 0x7e9: 68, 0x7ea: 68, 0x7fa: 67, 0x840: 82, 0x841: 68, 0x842: 68, 0x843: 68, 0x844: 68, 0x845: 68, 0x846: 82, 0x847: 82, 0x848: 68, 0x849: 82, 0x84a: 68, 0x84b: 68, 0x84c: 68, 0x84d: 68, 0x84e: 68, 0x84f: 68, 0x850: 68, 0x851: 68, 0x852: 68, 0x853: 68, 0x854: 82, 0x855: 68, 0x856: 85, 0x857: 85, 0x858: 85, 0x860: 68, 0x861: 85, 0x862: 68, 0x863: 68, 0x864: 68, 0x865: 68, 0x866: 85, 0x867: 82, 0x868: 68, 0x869: 82, 0x86a: 82, 0x8a0: 68, 0x8a1: 68, 0x8a2: 68, 0x8a3: 68, 0x8a4: 68, 0x8a5: 68, 0x8a6: 68, 0x8a7: 68, 0x8a8: 68, 0x8a9: 68, 0x8aa: 82, 0x8ab: 82, 0x8ac: 82, 0x8ad: 85, 0x8ae: 82, 0x8af: 68, 0x8b0: 68, 0x8b1: 82, 0x8b2: 82, 0x8b3: 68, 0x8b4: 68, 0x8b6: 68, 0x8b7: 68, 0x8b8: 68, 0x8b9: 82, 0x8ba: 68, 0x8bb: 68, 0x8bc: 68, 0x8bd: 68, 0x8e2: 85, 0x1806: 85, 0x1807: 68, 0x180a: 67, 0x180e: 85, 0x1820: 68, 0x1821: 68, 0x1822: 68, 0x1823: 68, 0x1824: 68, 0x1825: 68, 0x1826: 68, 0x1827: 68, 0x1828: 68, 0x1829: 68, 0x182a: 68, 0x182b: 68, 0x182c: 68, 0x182d: 68, 0x182e: 68, 0x182f: 68, 0x1830: 68, 0x1831: 68, 0x1832: 68, 0x1833: 68, 0x1834: 68, 0x1835: 68, 0x1836: 68, 0x1837: 68, 0x1838: 68, 0x1839: 68, 0x183a: 68, 0x183b: 68, 0x183c: 68, 0x183d: 68, 0x183e: 68, 0x183f: 68, 0x1840: 68, 0x1841: 68, 0x1842: 68, 0x1843: 68, 0x1844: 68, 0x1845: 68, 0x1846: 68, 0x1847: 68, 0x1848: 68, 0x1849: 68, 0x184a: 68, 0x184b: 68, 0x184c: 
68, 0x184d: 68, 0x184e: 68, 0x184f: 68, 0x1850: 68, 0x1851: 68, 0x1852: 68, 0x1853: 68, 0x1854: 68, 0x1855: 68, 0x1856: 68, 0x1857: 68, 0x1858: 68, 0x1859: 68, 0x185a: 68, 0x185b: 68, 0x185c: 68, 0x185d: 68, 0x185e: 68, 0x185f: 68, 0x1860: 68, 0x1861: 68, 0x1862: 68, 0x1863: 68, 0x1864: 68, 0x1865: 68, 0x1866: 68, 0x1867: 68, 0x1868: 68, 0x1869: 68, 0x186a: 68, 0x186b: 68, 0x186c: 68, 0x186d: 68, 0x186e: 68, 0x186f: 68, 0x1870: 68, 0x1871: 68, 0x1872: 68, 0x1873: 68, 0x1874: 68, 0x1875: 68, 0x1876: 68, 0x1877: 68, 0x1878: 68, 0x1880: 85, 0x1881: 85, 0x1882: 85, 0x1883: 85, 0x1884: 85, 0x1885: 84, 0x1886: 84, 0x1887: 68, 0x1888: 68, 0x1889: 68, 0x188a: 68, 0x188b: 68, 0x188c: 68, 0x188d: 68, 0x188e: 68, 0x188f: 68, 0x1890: 68, 0x1891: 68, 0x1892: 68, 0x1893: 68, 0x1894: 68, 0x1895: 68, 0x1896: 68, 0x1897: 68, 0x1898: 68, 0x1899: 68, 0x189a: 68, 0x189b: 68, 0x189c: 68, 0x189d: 68, 0x189e: 68, 0x189f: 68, 0x18a0: 68, 0x18a1: 68, 0x18a2: 68, 0x18a3: 68, 0x18a4: 68, 0x18a5: 68, 0x18a6: 68, 0x18a7: 68, 0x18a8: 68, 0x18aa: 68, 0x200c: 85, 0x200d: 67, 0x202f: 85, 0x2066: 85, 0x2067: 85, 0x2068: 85, 0x2069: 85, 0xa840: 68, 0xa841: 68, 0xa842: 68, 0xa843: 68, 0xa844: 68, 0xa845: 68, 0xa846: 68, 0xa847: 68, 0xa848: 68, 0xa849: 68, 0xa84a: 68, 0xa84b: 68, 0xa84c: 68, 0xa84d: 68, 0xa84e: 68, 0xa84f: 68, 0xa850: 68, 0xa851: 68, 0xa852: 68, 0xa853: 68, 0xa854: 68, 0xa855: 68, 0xa856: 68, 0xa857: 68, 0xa858: 68, 0xa859: 68, 0xa85a: 68, 0xa85b: 68, 0xa85c: 68, 0xa85d: 68, 0xa85e: 68, 0xa85f: 68, 0xa860: 68, 0xa861: 68, 0xa862: 68, 0xa863: 68, 0xa864: 68, 0xa865: 68, 0xa866: 68, 0xa867: 68, 0xa868: 68, 0xa869: 68, 0xa86a: 68, 0xa86b: 68, 0xa86c: 68, 0xa86d: 68, 0xa86e: 68, 0xa86f: 68, 0xa870: 68, 0xa871: 68, 0xa872: 76, 0xa873: 85, 0x10ac0: 68, 0x10ac1: 68, 0x10ac2: 68, 0x10ac3: 68, 0x10ac4: 68, 0x10ac5: 82, 0x10ac6: 85, 0x10ac7: 82, 0x10ac8: 85, 0x10ac9: 82, 0x10aca: 82, 0x10acb: 85, 0x10acc: 85, 0x10acd: 76, 0x10ace: 82, 0x10acf: 82, 0x10ad0: 82, 0x10ad1: 82, 0x10ad2: 82, 0x10ad3: 68, 0x10ad4: 68, 0x10ad5: 68, 0x10ad6: 68, 0x10ad7: 76, 0x10ad8: 68, 0x10ad9: 68, 0x10ada: 68, 0x10adb: 68, 0x10adc: 68, 0x10add: 82, 0x10ade: 68, 0x10adf: 68, 0x10ae0: 68, 0x10ae1: 82, 0x10ae2: 85, 0x10ae3: 85, 0x10ae4: 82, 0x10aeb: 68, 0x10aec: 68, 0x10aed: 68, 0x10aee: 68, 0x10aef: 82, 0x10b80: 68, 0x10b81: 82, 0x10b82: 68, 0x10b83: 82, 0x10b84: 82, 0x10b85: 82, 0x10b86: 68, 0x10b87: 68, 0x10b88: 68, 0x10b89: 82, 0x10b8a: 68, 0x10b8b: 68, 0x10b8c: 82, 0x10b8d: 68, 0x10b8e: 82, 0x10b8f: 82, 0x10b90: 68, 0x10b91: 82, 0x10ba9: 82, 0x10baa: 82, 0x10bab: 82, 0x10bac: 82, 0x10bad: 68, 0x10bae: 68, 0x10baf: 85, 0x10d00: 76, 0x10d01: 68, 0x10d02: 68, 0x10d03: 68, 0x10d04: 68, 0x10d05: 68, 0x10d06: 68, 0x10d07: 68, 0x10d08: 68, 0x10d09: 68, 0x10d0a: 68, 0x10d0b: 68, 0x10d0c: 68, 0x10d0d: 68, 0x10d0e: 68, 0x10d0f: 68, 0x10d10: 68, 0x10d11: 68, 0x10d12: 68, 0x10d13: 68, 0x10d14: 68, 0x10d15: 68, 0x10d16: 68, 0x10d17: 68, 0x10d18: 68, 0x10d19: 68, 0x10d1a: 68, 0x10d1b: 68, 0x10d1c: 68, 0x10d1d: 68, 0x10d1e: 68, 0x10d1f: 68, 0x10d20: 68, 0x10d21: 68, 0x10d22: 82, 0x10d23: 68, 0x10f30: 68, 0x10f31: 68, 0x10f32: 68, 0x10f33: 82, 0x10f34: 68, 0x10f35: 68, 0x10f36: 68, 0x10f37: 68, 0x10f38: 68, 0x10f39: 68, 0x10f3a: 68, 0x10f3b: 68, 0x10f3c: 68, 0x10f3d: 68, 0x10f3e: 68, 0x10f3f: 68, 0x10f40: 68, 0x10f41: 68, 0x10f42: 68, 0x10f43: 68, 0x10f44: 68, 0x10f45: 85, 0x10f51: 68, 0x10f52: 68, 0x10f53: 68, 0x10f54: 82, 0x110bd: 85, 0x110cd: 85, 0x1e900: 68, 0x1e901: 68, 0x1e902: 68, 0x1e903: 68, 0x1e904: 68, 0x1e905: 68, 0x1e906: 68, 0x1e907: 68, 
0x1e908: 68, 0x1e909: 68, 0x1e90a: 68, 0x1e90b: 68, 0x1e90c: 68, 0x1e90d: 68, 0x1e90e: 68, 0x1e90f: 68, 0x1e910: 68, 0x1e911: 68, 0x1e912: 68, 0x1e913: 68, 0x1e914: 68, 0x1e915: 68, 0x1e916: 68, 0x1e917: 68, 0x1e918: 68, 0x1e919: 68, 0x1e91a: 68, 0x1e91b: 68, 0x1e91c: 68, 0x1e91d: 68, 0x1e91e: 68, 0x1e91f: 68, 0x1e920: 68, 0x1e921: 68, 0x1e922: 68, 0x1e923: 68, 0x1e924: 68, 0x1e925: 68, 0x1e926: 68, 0x1e927: 68, 0x1e928: 68, 0x1e929: 68, 0x1e92a: 68, 0x1e92b: 68, 0x1e92c: 68, 0x1e92d: 68, 0x1e92e: 68, 0x1e92f: 68, 0x1e930: 68, 0x1e931: 68, 0x1e932: 68, 0x1e933: 68, 0x1e934: 68, 0x1e935: 68, 0x1e936: 68, 0x1e937: 68, 0x1e938: 68, 0x1e939: 68, 0x1e93a: 68, 0x1e93b: 68, 0x1e93c: 68, 0x1e93d: 68, 0x1e93e: 68, 0x1e93f: 68, 0x1e940: 68, 0x1e941: 68, 0x1e942: 68, 0x1e943: 68, } codepoint_classes = { 'PVALID': ( 0x2d0000002e, 0x300000003a, 0x610000007b, 0xdf000000f7, 0xf800000100, 0x10100000102, 0x10300000104, 0x10500000106, 0x10700000108, 0x1090000010a, 0x10b0000010c, 0x10d0000010e, 0x10f00000110, 0x11100000112, 0x11300000114, 0x11500000116, 0x11700000118, 0x1190000011a, 0x11b0000011c, 0x11d0000011e, 0x11f00000120, 0x12100000122, 0x12300000124, 0x12500000126, 0x12700000128, 0x1290000012a, 0x12b0000012c, 0x12d0000012e, 0x12f00000130, 0x13100000132, 0x13500000136, 0x13700000139, 0x13a0000013b, 0x13c0000013d, 0x13e0000013f, 0x14200000143, 0x14400000145, 0x14600000147, 0x14800000149, 0x14b0000014c, 0x14d0000014e, 0x14f00000150, 0x15100000152, 0x15300000154, 0x15500000156, 0x15700000158, 0x1590000015a, 0x15b0000015c, 0x15d0000015e, 0x15f00000160, 0x16100000162, 0x16300000164, 0x16500000166, 0x16700000168, 0x1690000016a, 0x16b0000016c, 0x16d0000016e, 0x16f00000170, 0x17100000172, 0x17300000174, 0x17500000176, 0x17700000178, 0x17a0000017b, 0x17c0000017d, 0x17e0000017f, 0x18000000181, 0x18300000184, 0x18500000186, 0x18800000189, 0x18c0000018e, 0x19200000193, 0x19500000196, 0x1990000019c, 0x19e0000019f, 0x1a1000001a2, 0x1a3000001a4, 0x1a5000001a6, 0x1a8000001a9, 0x1aa000001ac, 0x1ad000001ae, 0x1b0000001b1, 0x1b4000001b5, 0x1b6000001b7, 0x1b9000001bc, 0x1bd000001c4, 0x1ce000001cf, 0x1d0000001d1, 0x1d2000001d3, 0x1d4000001d5, 0x1d6000001d7, 0x1d8000001d9, 0x1da000001db, 0x1dc000001de, 0x1df000001e0, 0x1e1000001e2, 0x1e3000001e4, 0x1e5000001e6, 0x1e7000001e8, 0x1e9000001ea, 0x1eb000001ec, 0x1ed000001ee, 0x1ef000001f1, 0x1f5000001f6, 0x1f9000001fa, 0x1fb000001fc, 0x1fd000001fe, 0x1ff00000200, 0x20100000202, 0x20300000204, 0x20500000206, 0x20700000208, 0x2090000020a, 0x20b0000020c, 0x20d0000020e, 0x20f00000210, 0x21100000212, 0x21300000214, 0x21500000216, 0x21700000218, 0x2190000021a, 0x21b0000021c, 0x21d0000021e, 0x21f00000220, 0x22100000222, 0x22300000224, 0x22500000226, 0x22700000228, 0x2290000022a, 0x22b0000022c, 0x22d0000022e, 0x22f00000230, 0x23100000232, 0x2330000023a, 0x23c0000023d, 0x23f00000241, 0x24200000243, 0x24700000248, 0x2490000024a, 0x24b0000024c, 0x24d0000024e, 0x24f000002b0, 0x2b9000002c2, 0x2c6000002d2, 0x2ec000002ed, 0x2ee000002ef, 0x30000000340, 0x34200000343, 0x3460000034f, 0x35000000370, 0x37100000372, 0x37300000374, 0x37700000378, 0x37b0000037e, 0x39000000391, 0x3ac000003cf, 0x3d7000003d8, 0x3d9000003da, 0x3db000003dc, 0x3dd000003de, 0x3df000003e0, 0x3e1000003e2, 0x3e3000003e4, 0x3e5000003e6, 0x3e7000003e8, 0x3e9000003ea, 0x3eb000003ec, 0x3ed000003ee, 0x3ef000003f0, 0x3f3000003f4, 0x3f8000003f9, 0x3fb000003fd, 0x43000000460, 0x46100000462, 0x46300000464, 0x46500000466, 0x46700000468, 0x4690000046a, 0x46b0000046c, 0x46d0000046e, 0x46f00000470, 0x47100000472, 0x47300000474, 
0x47500000476, 0x47700000478, 0x4790000047a, 0x47b0000047c, 0x47d0000047e, 0x47f00000480, 0x48100000482, 0x48300000488, 0x48b0000048c, 0x48d0000048e, 0x48f00000490, 0x49100000492, 0x49300000494, 0x49500000496, 0x49700000498, 0x4990000049a, 0x49b0000049c, 0x49d0000049e, 0x49f000004a0, 0x4a1000004a2, 0x4a3000004a4, 0x4a5000004a6, 0x4a7000004a8, 0x4a9000004aa, 0x4ab000004ac, 0x4ad000004ae, 0x4af000004b0, 0x4b1000004b2, 0x4b3000004b4, 0x4b5000004b6, 0x4b7000004b8, 0x4b9000004ba, 0x4bb000004bc, 0x4bd000004be, 0x4bf000004c0, 0x4c2000004c3, 0x4c4000004c5, 0x4c6000004c7, 0x4c8000004c9, 0x4ca000004cb, 0x4cc000004cd, 0x4ce000004d0, 0x4d1000004d2, 0x4d3000004d4, 0x4d5000004d6, 0x4d7000004d8, 0x4d9000004da, 0x4db000004dc, 0x4dd000004de, 0x4df000004e0, 0x4e1000004e2, 0x4e3000004e4, 0x4e5000004e6, 0x4e7000004e8, 0x4e9000004ea, 0x4eb000004ec, 0x4ed000004ee, 0x4ef000004f0, 0x4f1000004f2, 0x4f3000004f4, 0x4f5000004f6, 0x4f7000004f8, 0x4f9000004fa, 0x4fb000004fc, 0x4fd000004fe, 0x4ff00000500, 0x50100000502, 0x50300000504, 0x50500000506, 0x50700000508, 0x5090000050a, 0x50b0000050c, 0x50d0000050e, 0x50f00000510, 0x51100000512, 0x51300000514, 0x51500000516, 0x51700000518, 0x5190000051a, 0x51b0000051c, 0x51d0000051e, 0x51f00000520, 0x52100000522, 0x52300000524, 0x52500000526, 0x52700000528, 0x5290000052a, 0x52b0000052c, 0x52d0000052e, 0x52f00000530, 0x5590000055a, 0x56000000587, 0x58800000589, 0x591000005be, 0x5bf000005c0, 0x5c1000005c3, 0x5c4000005c6, 0x5c7000005c8, 0x5d0000005eb, 0x5ef000005f3, 0x6100000061b, 0x62000000640, 0x64100000660, 0x66e00000675, 0x679000006d4, 0x6d5000006dd, 0x6df000006e9, 0x6ea000006f0, 0x6fa00000700, 0x7100000074b, 0x74d000007b2, 0x7c0000007f6, 0x7fd000007fe, 0x8000000082e, 0x8400000085c, 0x8600000086b, 0x8a0000008b5, 0x8b6000008be, 0x8d3000008e2, 0x8e300000958, 0x96000000964, 0x96600000970, 0x97100000984, 0x9850000098d, 0x98f00000991, 0x993000009a9, 0x9aa000009b1, 0x9b2000009b3, 0x9b6000009ba, 0x9bc000009c5, 0x9c7000009c9, 0x9cb000009cf, 0x9d7000009d8, 0x9e0000009e4, 0x9e6000009f2, 0x9fc000009fd, 0x9fe000009ff, 0xa0100000a04, 0xa0500000a0b, 0xa0f00000a11, 0xa1300000a29, 0xa2a00000a31, 0xa3200000a33, 0xa3500000a36, 0xa3800000a3a, 0xa3c00000a3d, 0xa3e00000a43, 0xa4700000a49, 0xa4b00000a4e, 0xa5100000a52, 0xa5c00000a5d, 0xa6600000a76, 0xa8100000a84, 0xa8500000a8e, 0xa8f00000a92, 0xa9300000aa9, 0xaaa00000ab1, 0xab200000ab4, 0xab500000aba, 0xabc00000ac6, 0xac700000aca, 0xacb00000ace, 0xad000000ad1, 0xae000000ae4, 0xae600000af0, 0xaf900000b00, 0xb0100000b04, 0xb0500000b0d, 0xb0f00000b11, 0xb1300000b29, 0xb2a00000b31, 0xb3200000b34, 0xb3500000b3a, 0xb3c00000b45, 0xb4700000b49, 0xb4b00000b4e, 0xb5600000b58, 0xb5f00000b64, 0xb6600000b70, 0xb7100000b72, 0xb8200000b84, 0xb8500000b8b, 0xb8e00000b91, 0xb9200000b96, 0xb9900000b9b, 0xb9c00000b9d, 0xb9e00000ba0, 0xba300000ba5, 0xba800000bab, 0xbae00000bba, 0xbbe00000bc3, 0xbc600000bc9, 0xbca00000bce, 0xbd000000bd1, 0xbd700000bd8, 0xbe600000bf0, 0xc0000000c0d, 0xc0e00000c11, 0xc1200000c29, 0xc2a00000c3a, 0xc3d00000c45, 0xc4600000c49, 0xc4a00000c4e, 0xc5500000c57, 0xc5800000c5b, 0xc6000000c64, 0xc6600000c70, 0xc8000000c84, 0xc8500000c8d, 0xc8e00000c91, 0xc9200000ca9, 0xcaa00000cb4, 0xcb500000cba, 0xcbc00000cc5, 0xcc600000cc9, 0xcca00000cce, 0xcd500000cd7, 0xcde00000cdf, 0xce000000ce4, 0xce600000cf0, 0xcf100000cf3, 0xd0000000d04, 0xd0500000d0d, 0xd0e00000d11, 0xd1200000d45, 0xd4600000d49, 0xd4a00000d4f, 0xd5400000d58, 0xd5f00000d64, 0xd6600000d70, 0xd7a00000d80, 0xd8200000d84, 0xd8500000d97, 0xd9a00000db2, 0xdb300000dbc, 0xdbd00000dbe, 0xdc000000dc7, 
0xdca00000dcb, 0xdcf00000dd5, 0xdd600000dd7, 0xdd800000de0, 0xde600000df0, 0xdf200000df4, 0xe0100000e33, 0xe3400000e3b, 0xe4000000e4f, 0xe5000000e5a, 0xe8100000e83, 0xe8400000e85, 0xe8700000e89, 0xe8a00000e8b, 0xe8d00000e8e, 0xe9400000e98, 0xe9900000ea0, 0xea100000ea4, 0xea500000ea6, 0xea700000ea8, 0xeaa00000eac, 0xead00000eb3, 0xeb400000eba, 0xebb00000ebe, 0xec000000ec5, 0xec600000ec7, 0xec800000ece, 0xed000000eda, 0xede00000ee0, 0xf0000000f01, 0xf0b00000f0c, 0xf1800000f1a, 0xf2000000f2a, 0xf3500000f36, 0xf3700000f38, 0xf3900000f3a, 0xf3e00000f43, 0xf4400000f48, 0xf4900000f4d, 0xf4e00000f52, 0xf5300000f57, 0xf5800000f5c, 0xf5d00000f69, 0xf6a00000f6d, 0xf7100000f73, 0xf7400000f75, 0xf7a00000f81, 0xf8200000f85, 0xf8600000f93, 0xf9400000f98, 0xf9900000f9d, 0xf9e00000fa2, 0xfa300000fa7, 0xfa800000fac, 0xfad00000fb9, 0xfba00000fbd, 0xfc600000fc7, 0x10000000104a, 0x10500000109e, 0x10d0000010fb, 0x10fd00001100, 0x120000001249, 0x124a0000124e, 0x125000001257, 0x125800001259, 0x125a0000125e, 0x126000001289, 0x128a0000128e, 0x1290000012b1, 0x12b2000012b6, 0x12b8000012bf, 0x12c0000012c1, 0x12c2000012c6, 0x12c8000012d7, 0x12d800001311, 0x131200001316, 0x13180000135b, 0x135d00001360, 0x138000001390, 0x13a0000013f6, 0x14010000166d, 0x166f00001680, 0x16810000169b, 0x16a0000016eb, 0x16f1000016f9, 0x17000000170d, 0x170e00001715, 0x172000001735, 0x174000001754, 0x17600000176d, 0x176e00001771, 0x177200001774, 0x1780000017b4, 0x17b6000017d4, 0x17d7000017d8, 0x17dc000017de, 0x17e0000017ea, 0x18100000181a, 0x182000001879, 0x1880000018ab, 0x18b0000018f6, 0x19000000191f, 0x19200000192c, 0x19300000193c, 0x19460000196e, 0x197000001975, 0x1980000019ac, 0x19b0000019ca, 0x19d0000019da, 0x1a0000001a1c, 0x1a2000001a5f, 0x1a6000001a7d, 0x1a7f00001a8a, 0x1a9000001a9a, 0x1aa700001aa8, 0x1ab000001abe, 0x1b0000001b4c, 0x1b5000001b5a, 0x1b6b00001b74, 0x1b8000001bf4, 0x1c0000001c38, 0x1c4000001c4a, 0x1c4d00001c7e, 0x1cd000001cd3, 0x1cd400001cfa, 0x1d0000001d2c, 0x1d2f00001d30, 0x1d3b00001d3c, 0x1d4e00001d4f, 0x1d6b00001d78, 0x1d7900001d9b, 0x1dc000001dfa, 0x1dfb00001e00, 0x1e0100001e02, 0x1e0300001e04, 0x1e0500001e06, 0x1e0700001e08, 0x1e0900001e0a, 0x1e0b00001e0c, 0x1e0d00001e0e, 0x1e0f00001e10, 0x1e1100001e12, 0x1e1300001e14, 0x1e1500001e16, 0x1e1700001e18, 0x1e1900001e1a, 0x1e1b00001e1c, 0x1e1d00001e1e, 0x1e1f00001e20, 0x1e2100001e22, 0x1e2300001e24, 0x1e2500001e26, 0x1e2700001e28, 0x1e2900001e2a, 0x1e2b00001e2c, 0x1e2d00001e2e, 0x1e2f00001e30, 0x1e3100001e32, 0x1e3300001e34, 0x1e3500001e36, 0x1e3700001e38, 0x1e3900001e3a, 0x1e3b00001e3c, 0x1e3d00001e3e, 0x1e3f00001e40, 0x1e4100001e42, 0x1e4300001e44, 0x1e4500001e46, 0x1e4700001e48, 0x1e4900001e4a, 0x1e4b00001e4c, 0x1e4d00001e4e, 0x1e4f00001e50, 0x1e5100001e52, 0x1e5300001e54, 0x1e5500001e56, 0x1e5700001e58, 0x1e5900001e5a, 0x1e5b00001e5c, 0x1e5d00001e5e, 0x1e5f00001e60, 0x1e6100001e62, 0x1e6300001e64, 0x1e6500001e66, 0x1e6700001e68, 0x1e6900001e6a, 0x1e6b00001e6c, 0x1e6d00001e6e, 0x1e6f00001e70, 0x1e7100001e72, 0x1e7300001e74, 0x1e7500001e76, 0x1e7700001e78, 0x1e7900001e7a, 0x1e7b00001e7c, 0x1e7d00001e7e, 0x1e7f00001e80, 0x1e8100001e82, 0x1e8300001e84, 0x1e8500001e86, 0x1e8700001e88, 0x1e8900001e8a, 0x1e8b00001e8c, 0x1e8d00001e8e, 0x1e8f00001e90, 0x1e9100001e92, 0x1e9300001e94, 0x1e9500001e9a, 0x1e9c00001e9e, 0x1e9f00001ea0, 0x1ea100001ea2, 0x1ea300001ea4, 0x1ea500001ea6, 0x1ea700001ea8, 0x1ea900001eaa, 0x1eab00001eac, 0x1ead00001eae, 0x1eaf00001eb0, 0x1eb100001eb2, 0x1eb300001eb4, 0x1eb500001eb6, 0x1eb700001eb8, 0x1eb900001eba, 0x1ebb00001ebc, 0x1ebd00001ebe, 
0x1ebf00001ec0, 0x1ec100001ec2, 0x1ec300001ec4, 0x1ec500001ec6, 0x1ec700001ec8, 0x1ec900001eca, 0x1ecb00001ecc, 0x1ecd00001ece, 0x1ecf00001ed0, 0x1ed100001ed2, 0x1ed300001ed4, 0x1ed500001ed6, 0x1ed700001ed8, 0x1ed900001eda, 0x1edb00001edc, 0x1edd00001ede, 0x1edf00001ee0, 0x1ee100001ee2, 0x1ee300001ee4, 0x1ee500001ee6, 0x1ee700001ee8, 0x1ee900001eea, 0x1eeb00001eec, 0x1eed00001eee, 0x1eef00001ef0, 0x1ef100001ef2, 0x1ef300001ef4, 0x1ef500001ef6, 0x1ef700001ef8, 0x1ef900001efa, 0x1efb00001efc, 0x1efd00001efe, 0x1eff00001f08, 0x1f1000001f16, 0x1f2000001f28, 0x1f3000001f38, 0x1f4000001f46, 0x1f5000001f58, 0x1f6000001f68, 0x1f7000001f71, 0x1f7200001f73, 0x1f7400001f75, 0x1f7600001f77, 0x1f7800001f79, 0x1f7a00001f7b, 0x1f7c00001f7d, 0x1fb000001fb2, 0x1fb600001fb7, 0x1fc600001fc7, 0x1fd000001fd3, 0x1fd600001fd8, 0x1fe000001fe3, 0x1fe400001fe8, 0x1ff600001ff7, 0x214e0000214f, 0x218400002185, 0x2c3000002c5f, 0x2c6100002c62, 0x2c6500002c67, 0x2c6800002c69, 0x2c6a00002c6b, 0x2c6c00002c6d, 0x2c7100002c72, 0x2c7300002c75, 0x2c7600002c7c, 0x2c8100002c82, 0x2c8300002c84, 0x2c8500002c86, 0x2c8700002c88, 0x2c8900002c8a, 0x2c8b00002c8c, 0x2c8d00002c8e, 0x2c8f00002c90, 0x2c9100002c92, 0x2c9300002c94, 0x2c9500002c96, 0x2c9700002c98, 0x2c9900002c9a, 0x2c9b00002c9c, 0x2c9d00002c9e, 0x2c9f00002ca0, 0x2ca100002ca2, 0x2ca300002ca4, 0x2ca500002ca6, 0x2ca700002ca8, 0x2ca900002caa, 0x2cab00002cac, 0x2cad00002cae, 0x2caf00002cb0, 0x2cb100002cb2, 0x2cb300002cb4, 0x2cb500002cb6, 0x2cb700002cb8, 0x2cb900002cba, 0x2cbb00002cbc, 0x2cbd00002cbe, 0x2cbf00002cc0, 0x2cc100002cc2, 0x2cc300002cc4, 0x2cc500002cc6, 0x2cc700002cc8, 0x2cc900002cca, 0x2ccb00002ccc, 0x2ccd00002cce, 0x2ccf00002cd0, 0x2cd100002cd2, 0x2cd300002cd4, 0x2cd500002cd6, 0x2cd700002cd8, 0x2cd900002cda, 0x2cdb00002cdc, 0x2cdd00002cde, 0x2cdf00002ce0, 0x2ce100002ce2, 0x2ce300002ce5, 0x2cec00002ced, 0x2cee00002cf2, 0x2cf300002cf4, 0x2d0000002d26, 0x2d2700002d28, 0x2d2d00002d2e, 0x2d3000002d68, 0x2d7f00002d97, 0x2da000002da7, 0x2da800002daf, 0x2db000002db7, 0x2db800002dbf, 0x2dc000002dc7, 0x2dc800002dcf, 0x2dd000002dd7, 0x2dd800002ddf, 0x2de000002e00, 0x2e2f00002e30, 0x300500003008, 0x302a0000302e, 0x303c0000303d, 0x304100003097, 0x30990000309b, 0x309d0000309f, 0x30a1000030fb, 0x30fc000030ff, 0x310500003130, 0x31a0000031bb, 0x31f000003200, 0x340000004db6, 0x4e0000009ff0, 0xa0000000a48d, 0xa4d00000a4fe, 0xa5000000a60d, 0xa6100000a62c, 0xa6410000a642, 0xa6430000a644, 0xa6450000a646, 0xa6470000a648, 0xa6490000a64a, 0xa64b0000a64c, 0xa64d0000a64e, 0xa64f0000a650, 0xa6510000a652, 0xa6530000a654, 0xa6550000a656, 0xa6570000a658, 0xa6590000a65a, 0xa65b0000a65c, 0xa65d0000a65e, 0xa65f0000a660, 0xa6610000a662, 0xa6630000a664, 0xa6650000a666, 0xa6670000a668, 0xa6690000a66a, 0xa66b0000a66c, 0xa66d0000a670, 0xa6740000a67e, 0xa67f0000a680, 0xa6810000a682, 0xa6830000a684, 0xa6850000a686, 0xa6870000a688, 0xa6890000a68a, 0xa68b0000a68c, 0xa68d0000a68e, 0xa68f0000a690, 0xa6910000a692, 0xa6930000a694, 0xa6950000a696, 0xa6970000a698, 0xa6990000a69a, 0xa69b0000a69c, 0xa69e0000a6e6, 0xa6f00000a6f2, 0xa7170000a720, 0xa7230000a724, 0xa7250000a726, 0xa7270000a728, 0xa7290000a72a, 0xa72b0000a72c, 0xa72d0000a72e, 0xa72f0000a732, 0xa7330000a734, 0xa7350000a736, 0xa7370000a738, 0xa7390000a73a, 0xa73b0000a73c, 0xa73d0000a73e, 0xa73f0000a740, 0xa7410000a742, 0xa7430000a744, 0xa7450000a746, 0xa7470000a748, 0xa7490000a74a, 0xa74b0000a74c, 0xa74d0000a74e, 0xa74f0000a750, 0xa7510000a752, 0xa7530000a754, 0xa7550000a756, 0xa7570000a758, 0xa7590000a75a, 0xa75b0000a75c, 0xa75d0000a75e, 0xa75f0000a760, 
0xa7610000a762, 0xa7630000a764, 0xa7650000a766, 0xa7670000a768, 0xa7690000a76a, 0xa76b0000a76c, 0xa76d0000a76e, 0xa76f0000a770, 0xa7710000a779, 0xa77a0000a77b, 0xa77c0000a77d, 0xa77f0000a780, 0xa7810000a782, 0xa7830000a784, 0xa7850000a786, 0xa7870000a789, 0xa78c0000a78d, 0xa78e0000a790, 0xa7910000a792, 0xa7930000a796, 0xa7970000a798, 0xa7990000a79a, 0xa79b0000a79c, 0xa79d0000a79e, 0xa79f0000a7a0, 0xa7a10000a7a2, 0xa7a30000a7a4, 0xa7a50000a7a6, 0xa7a70000a7a8, 0xa7a90000a7aa, 0xa7af0000a7b0, 0xa7b50000a7b6, 0xa7b70000a7b8, 0xa7b90000a7ba, 0xa7f70000a7f8, 0xa7fa0000a828, 0xa8400000a874, 0xa8800000a8c6, 0xa8d00000a8da, 0xa8e00000a8f8, 0xa8fb0000a8fc, 0xa8fd0000a92e, 0xa9300000a954, 0xa9800000a9c1, 0xa9cf0000a9da, 0xa9e00000a9ff, 0xaa000000aa37, 0xaa400000aa4e, 0xaa500000aa5a, 0xaa600000aa77, 0xaa7a0000aac3, 0xaadb0000aade, 0xaae00000aaf0, 0xaaf20000aaf7, 0xab010000ab07, 0xab090000ab0f, 0xab110000ab17, 0xab200000ab27, 0xab280000ab2f, 0xab300000ab5b, 0xab600000ab66, 0xabc00000abeb, 0xabec0000abee, 0xabf00000abfa, 0xac000000d7a4, 0xfa0e0000fa10, 0xfa110000fa12, 0xfa130000fa15, 0xfa1f0000fa20, 0xfa210000fa22, 0xfa230000fa25, 0xfa270000fa2a, 0xfb1e0000fb1f, 0xfe200000fe30, 0xfe730000fe74, 0x100000001000c, 0x1000d00010027, 0x100280001003b, 0x1003c0001003e, 0x1003f0001004e, 0x100500001005e, 0x10080000100fb, 0x101fd000101fe, 0x102800001029d, 0x102a0000102d1, 0x102e0000102e1, 0x1030000010320, 0x1032d00010341, 0x103420001034a, 0x103500001037b, 0x103800001039e, 0x103a0000103c4, 0x103c8000103d0, 0x104280001049e, 0x104a0000104aa, 0x104d8000104fc, 0x1050000010528, 0x1053000010564, 0x1060000010737, 0x1074000010756, 0x1076000010768, 0x1080000010806, 0x1080800010809, 0x1080a00010836, 0x1083700010839, 0x1083c0001083d, 0x1083f00010856, 0x1086000010877, 0x108800001089f, 0x108e0000108f3, 0x108f4000108f6, 0x1090000010916, 0x109200001093a, 0x10980000109b8, 0x109be000109c0, 0x10a0000010a04, 0x10a0500010a07, 0x10a0c00010a14, 0x10a1500010a18, 0x10a1900010a36, 0x10a3800010a3b, 0x10a3f00010a40, 0x10a6000010a7d, 0x10a8000010a9d, 0x10ac000010ac8, 0x10ac900010ae7, 0x10b0000010b36, 0x10b4000010b56, 0x10b6000010b73, 0x10b8000010b92, 0x10c0000010c49, 0x10cc000010cf3, 0x10d0000010d28, 0x10d3000010d3a, 0x10f0000010f1d, 0x10f2700010f28, 0x10f3000010f51, 0x1100000011047, 0x1106600011070, 0x1107f000110bb, 0x110d0000110e9, 0x110f0000110fa, 0x1110000011135, 0x1113600011140, 0x1114400011147, 0x1115000011174, 0x1117600011177, 0x11180000111c5, 0x111c9000111cd, 0x111d0000111db, 0x111dc000111dd, 0x1120000011212, 0x1121300011238, 0x1123e0001123f, 0x1128000011287, 0x1128800011289, 0x1128a0001128e, 0x1128f0001129e, 0x1129f000112a9, 0x112b0000112eb, 0x112f0000112fa, 0x1130000011304, 0x113050001130d, 0x1130f00011311, 0x1131300011329, 0x1132a00011331, 0x1133200011334, 0x113350001133a, 0x1133b00011345, 0x1134700011349, 0x1134b0001134e, 0x1135000011351, 0x1135700011358, 0x1135d00011364, 0x113660001136d, 0x1137000011375, 0x114000001144b, 0x114500001145a, 0x1145e0001145f, 0x11480000114c6, 0x114c7000114c8, 0x114d0000114da, 0x11580000115b6, 0x115b8000115c1, 0x115d8000115de, 0x1160000011641, 0x1164400011645, 0x116500001165a, 0x11680000116b8, 0x116c0000116ca, 0x117000001171b, 0x1171d0001172c, 0x117300001173a, 0x118000001183b, 0x118c0000118ea, 0x118ff00011900, 0x11a0000011a3f, 0x11a4700011a48, 0x11a5000011a84, 0x11a8600011a9a, 0x11a9d00011a9e, 0x11ac000011af9, 0x11c0000011c09, 0x11c0a00011c37, 0x11c3800011c41, 0x11c5000011c5a, 0x11c7200011c90, 0x11c9200011ca8, 0x11ca900011cb7, 0x11d0000011d07, 0x11d0800011d0a, 0x11d0b00011d37, 0x11d3a00011d3b, 
0x11d3c00011d3e, 0x11d3f00011d48, 0x11d5000011d5a, 0x11d6000011d66,
        0x11d6700011d69, 0x11d6a00011d8f, 0x11d9000011d92, 0x11d9300011d99,
        0x11da000011daa, 0x11ee000011ef7, 0x120000001239a, 0x1248000012544,
        0x130000001342f, 0x1440000014647, 0x1680000016a39, 0x16a4000016a5f,
        0x16a6000016a6a, 0x16ad000016aee, 0x16af000016af5, 0x16b0000016b37,
        0x16b4000016b44, 0x16b5000016b5a, 0x16b6300016b78, 0x16b7d00016b90,
        0x16e6000016e80, 0x16f0000016f45, 0x16f5000016f7f, 0x16f8f00016fa0,
        0x16fe000016fe2, 0x17000000187f2, 0x1880000018af3, 0x1b0000001b11f,
        0x1b1700001b2fc, 0x1bc000001bc6b, 0x1bc700001bc7d, 0x1bc800001bc89,
        0x1bc900001bc9a, 0x1bc9d0001bc9f, 0x1da000001da37, 0x1da3b0001da6d,
        0x1da750001da76, 0x1da840001da85, 0x1da9b0001daa0, 0x1daa10001dab0,
        0x1e0000001e007, 0x1e0080001e019, 0x1e01b0001e022, 0x1e0230001e025,
        0x1e0260001e02b, 0x1e8000001e8c5, 0x1e8d00001e8d7, 0x1e9220001e94b,
        0x1e9500001e95a, 0x200000002a6d7, 0x2a7000002b735, 0x2b7400002b81e,
        0x2b8200002cea2, 0x2ceb00002ebe1,
    ),
    'CONTEXTJ': (
        0x200c0000200e,
    ),
    'CONTEXTO': (
        0xb7000000b8, 0x37500000376, 0x5f3000005f5, 0x6600000066a,
        0x6f0000006fa, 0x30fb000030fc,
    ),
}

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/intranges.py

"""
Given a list of integers, made up of (hopefully) a small number of long runs
of consecutive integers, compute a representation of the form
((start1, end1), (start2, end2) ...). Then answer the question "was x present
in the original list?" in time O(log(# runs)).
"""

import bisect


def intranges_from_list(list_):
    """Represent a list of integers as a sequence of ranges:
    ((start_0, end_0), (start_1, end_1), ...), such that the original
    integers are exactly those x such that start_i <= x < end_i for some i.

    Ranges are encoded as single integers (start << 32 | end), not as tuples.
    """

    sorted_list = sorted(list_)
    ranges = []
    last_write = -1
    for i in range(len(sorted_list)):
        if i+1 < len(sorted_list):
            if sorted_list[i] == sorted_list[i+1]-1:
                continue
        current_range = sorted_list[last_write+1:i+1]
        ranges.append(_encode_range(current_range[0], current_range[-1] + 1))
        last_write = i

    return tuple(ranges)


def _encode_range(start, end):
    return (start << 32) | end


def _decode_range(r):
    return (r >> 32), (r & ((1 << 32) - 1))


def intranges_contain(int_, ranges):
    """Determine if `int_` falls into one of the ranges in `ranges`."""
    tuple_ = _encode_range(int_, 0)
    pos = bisect.bisect_left(ranges, tuple_)
    # we could be immediately ahead of a tuple (start, end)
    # with start < int_ <= end
    if pos > 0:
        left, right = _decode_range(ranges[pos-1])
        if left <= int_ < right:
            return True
    # or we could be immediately behind a tuple (int_, end)
    if pos < len(ranges):
        left, _ = _decode_range(ranges[pos])
        if left == int_:
            return True
    return False

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/package_data.py

__version__ = '2.8'
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/idna/uts46data.py

# This file is automatically generated by tools/idna-data
# vim: set fileencoding=utf-8 :

"""IDNA Mapping Table from UTS46."""

__version__ = "11.0.0"


def _seg_0():
    return [
    (0x0, '3'), (0x1, '3'), (0x2, '3'), (0x3, '3'), (0x4, '3'), (0x5, '3'),
    (0x6, '3'), (0x7, '3'), (0x8, '3'), (0x9, '3'), (0xA, '3'), (0xB, '3'),
    (0xC, '3'), (0xD, '3'), (0xE, '3'), (0xF, '3'), (0x10, '3'), (0x11, '3'),
    (0x12, '3'), (0x13, '3'), (0x14, '3'), (0x15, '3'), (0x16, '3'),
    (0x17, '3'), (0x18, '3'), (0x19, '3'), (0x1A, '3'), (0x1B, '3'),
    (0x1C, '3'), (0x1D, '3'), (0x1E, '3'), (0x1F, '3'), (0x20, '3'),
    (0x21, '3'), (0x22, '3'), (0x23, '3'), (0x24, '3'), (0x25, '3'),
    (0x26, '3'), (0x27, '3'), (0x28, '3'), (0x29, '3'), (0x2A, '3'),
    (0x2B, '3'), (0x2C, '3'), (0x2D, 'V'), (0x2E, 'V'), (0x2F, '3'),
    (0x30, 'V'), (0x31, 'V'), (0x32, 'V'), (0x33, 'V'), (0x34, 'V'),
    (0x35, 'V'), (0x36, 'V'), (0x37, 'V'), (0x38, 'V'), (0x39, 'V'),
    (0x3A, '3'), (0x3B, '3'), (0x3C, '3'), (0x3D, '3'), (0x3E, '3'),
    (0x3F, '3'), (0x40, '3'),
    (0x41, 'M', u'a'), (0x42, 'M', u'b'), (0x43, 'M', u'c'), (0x44, 'M', u'd'),
    (0x45, 'M', u'e'), (0x46, 'M', u'f'), (0x47, 'M', u'g'), (0x48, 'M', u'h'),
    (0x49, 'M', u'i'), (0x4A, 'M', u'j'), (0x4B, 'M', u'k'), (0x4C, 'M', u'l'),
    (0x4D, 'M', u'm'), (0x4E, 'M', u'n'), (0x4F, 'M', u'o'), (0x50, 'M', u'p'),
    (0x51, 'M', u'q'), (0x52, 'M', u'r'), (0x53, 'M', u's'), (0x54, 'M', u't'),
    (0x55, 'M', u'u'), (0x56, 'M', u'v'), (0x57, 'M', u'w'), (0x58, 'M', u'x'),
    (0x59, 'M', u'y'), (0x5A, 'M', u'z'),
    (0x5B, '3'), (0x5C, '3'), (0x5D, '3'), (0x5E, '3'), (0x5F, '3'),
    (0x60, '3'), (0x61, 'V'), (0x62, 'V'), (0x63, 'V'),
    ]


def _seg_1():
    return [
    (0x64, 'V'), (0x65, 'V'), (0x66, 'V'), (0x67, 'V'), (0x68, 'V'),
    (0x69, 'V'), (0x6A, 'V'), (0x6B, 'V'), (0x6C, 'V'), (0x6D, 'V'),
    (0x6E, 'V'), (0x6F, 'V'), (0x70, 'V'), (0x71, 'V'), (0x72, 'V'),
    (0x73, 'V'), (0x74, 'V'), (0x75, 'V'), (0x76, 'V'), (0x77, 'V'),
    (0x78, 'V'), (0x79, 'V'), (0x7A, 'V'), (0x7B, '3'), (0x7C, '3'),
    (0x7D, '3'), (0x7E, '3'), (0x7F, '3'), (0x80, 'X'), (0x81, 'X'),
    (0x82, 'X'), (0x83, 'X'), (0x84, 'X'), (0x85, 'X'), (0x86, 'X'),
    (0x87, 'X'), (0x88, 'X'), (0x89, 'X'), (0x8A, 'X'), (0x8B, 'X'),
    (0x8C, 'X'),
(0x8D, 'X'), (0x8E, 'X'), (0x8F, 'X'), (0x90, 'X'), (0x91, 'X'), (0x92, 'X'), (0x93, 'X'), (0x94, 'X'), (0x95, 'X'), (0x96, 'X'), (0x97, 'X'), (0x98, 'X'), (0x99, 'X'), (0x9A, 'X'), (0x9B, 'X'), (0x9C, 'X'), (0x9D, 'X'), (0x9E, 'X'), (0x9F, 'X'), (0xA0, '3', u' '), (0xA1, 'V'), (0xA2, 'V'), (0xA3, 'V'), (0xA4, 'V'), (0xA5, 'V'), (0xA6, 'V'), (0xA7, 'V'), (0xA8, '3', u' ̈'), (0xA9, 'V'), (0xAA, 'M', u'a'), (0xAB, 'V'), (0xAC, 'V'), (0xAD, 'I'), (0xAE, 'V'), (0xAF, '3', u' ̄'), (0xB0, 'V'), (0xB1, 'V'), (0xB2, 'M', u'2'), (0xB3, 'M', u'3'), (0xB4, '3', u' ́'), (0xB5, 'M', u'μ'), (0xB6, 'V'), (0xB7, 'V'), (0xB8, '3', u' ̧'), (0xB9, 'M', u'1'), (0xBA, 'M', u'o'), (0xBB, 'V'), (0xBC, 'M', u'1⁄4'), (0xBD, 'M', u'1⁄2'), (0xBE, 'M', u'3⁄4'), (0xBF, 'V'), (0xC0, 'M', u'à'), (0xC1, 'M', u'á'), (0xC2, 'M', u'â'), (0xC3, 'M', u'ã'), (0xC4, 'M', u'ä'), (0xC5, 'M', u'å'), (0xC6, 'M', u'æ'), (0xC7, 'M', u'ç'), ] def _seg_2(): return [ (0xC8, 'M', u'è'), (0xC9, 'M', u'é'), (0xCA, 'M', u'ê'), (0xCB, 'M', u'ë'), (0xCC, 'M', u'ì'), (0xCD, 'M', u'í'), (0xCE, 'M', u'î'), (0xCF, 'M', u'ï'), (0xD0, 'M', u'ð'), (0xD1, 'M', u'ñ'), (0xD2, 'M', u'ò'), (0xD3, 'M', u'ó'), (0xD4, 'M', u'ô'), (0xD5, 'M', u'õ'), (0xD6, 'M', u'ö'), (0xD7, 'V'), (0xD8, 'M', u'ø'), (0xD9, 'M', u'ù'), (0xDA, 'M', u'ú'), (0xDB, 'M', u'û'), (0xDC, 'M', u'ü'), (0xDD, 'M', u'ý'), (0xDE, 'M', u'þ'), (0xDF, 'D', u'ss'), (0xE0, 'V'), (0xE1, 'V'), (0xE2, 'V'), (0xE3, 'V'), (0xE4, 'V'), (0xE5, 'V'), (0xE6, 'V'), (0xE7, 'V'), (0xE8, 'V'), (0xE9, 'V'), (0xEA, 'V'), (0xEB, 'V'), (0xEC, 'V'), (0xED, 'V'), (0xEE, 'V'), (0xEF, 'V'), (0xF0, 'V'), (0xF1, 'V'), (0xF2, 'V'), (0xF3, 'V'), (0xF4, 'V'), (0xF5, 'V'), (0xF6, 'V'), (0xF7, 'V'), (0xF8, 'V'), (0xF9, 'V'), (0xFA, 'V'), (0xFB, 'V'), (0xFC, 'V'), (0xFD, 'V'), (0xFE, 'V'), (0xFF, 'V'), (0x100, 'M', u'ā'), (0x101, 'V'), (0x102, 'M', u'ă'), (0x103, 'V'), (0x104, 'M', u'ą'), (0x105, 'V'), (0x106, 'M', u'ć'), (0x107, 'V'), (0x108, 'M', u'ĉ'), (0x109, 'V'), (0x10A, 'M', u'ċ'), (0x10B, 'V'), (0x10C, 'M', u'č'), (0x10D, 'V'), (0x10E, 'M', u'ď'), (0x10F, 'V'), (0x110, 'M', u'đ'), (0x111, 'V'), (0x112, 'M', u'ē'), (0x113, 'V'), (0x114, 'M', u'ĕ'), (0x115, 'V'), (0x116, 'M', u'ė'), (0x117, 'V'), (0x118, 'M', u'ę'), (0x119, 'V'), (0x11A, 'M', u'ě'), (0x11B, 'V'), (0x11C, 'M', u'ĝ'), (0x11D, 'V'), (0x11E, 'M', u'ğ'), (0x11F, 'V'), (0x120, 'M', u'ġ'), (0x121, 'V'), (0x122, 'M', u'ģ'), (0x123, 'V'), (0x124, 'M', u'ĥ'), (0x125, 'V'), (0x126, 'M', u'ħ'), (0x127, 'V'), (0x128, 'M', u'ĩ'), (0x129, 'V'), (0x12A, 'M', u'ī'), (0x12B, 'V'), ] def _seg_3(): return [ (0x12C, 'M', u'ĭ'), (0x12D, 'V'), (0x12E, 'M', u'į'), (0x12F, 'V'), (0x130, 'M', u'i̇'), (0x131, 'V'), (0x132, 'M', u'ij'), (0x134, 'M', u'ĵ'), (0x135, 'V'), (0x136, 'M', u'ķ'), (0x137, 'V'), (0x139, 'M', u'ĺ'), (0x13A, 'V'), (0x13B, 'M', u'ļ'), (0x13C, 'V'), (0x13D, 'M', u'ľ'), (0x13E, 'V'), (0x13F, 'M', u'l·'), (0x141, 'M', u'ł'), (0x142, 'V'), (0x143, 'M', u'ń'), (0x144, 'V'), (0x145, 'M', u'ņ'), (0x146, 'V'), (0x147, 'M', u'ň'), (0x148, 'V'), (0x149, 'M', u'ʼn'), (0x14A, 'M', u'ŋ'), (0x14B, 'V'), (0x14C, 'M', u'ō'), (0x14D, 'V'), (0x14E, 'M', u'ŏ'), (0x14F, 'V'), (0x150, 'M', u'ő'), (0x151, 'V'), (0x152, 'M', u'œ'), (0x153, 'V'), (0x154, 'M', u'ŕ'), (0x155, 'V'), (0x156, 'M', u'ŗ'), (0x157, 'V'), (0x158, 'M', u'ř'), (0x159, 'V'), (0x15A, 'M', u'ś'), (0x15B, 'V'), (0x15C, 'M', u'ŝ'), (0x15D, 'V'), (0x15E, 'M', u'ş'), (0x15F, 'V'), (0x160, 'M', u'š'), (0x161, 'V'), (0x162, 'M', u'ţ'), (0x163, 'V'), (0x164, 'M', u'ť'), (0x165, 'V'), (0x166, 'M', u'ŧ'), (0x167, 
'V'), (0x168, 'M', u'ũ'), (0x169, 'V'), (0x16A, 'M', u'ū'), (0x16B, 'V'), (0x16C, 'M', u'ŭ'), (0x16D, 'V'), (0x16E, 'M', u'ů'), (0x16F, 'V'), (0x170, 'M', u'ű'), (0x171, 'V'), (0x172, 'M', u'ų'), (0x173, 'V'), (0x174, 'M', u'ŵ'), (0x175, 'V'), (0x176, 'M', u'ŷ'), (0x177, 'V'), (0x178, 'M', u'ÿ'), (0x179, 'M', u'ź'), (0x17A, 'V'), (0x17B, 'M', u'ż'), (0x17C, 'V'), (0x17D, 'M', u'ž'), (0x17E, 'V'), (0x17F, 'M', u's'), (0x180, 'V'), (0x181, 'M', u'ɓ'), (0x182, 'M', u'ƃ'), (0x183, 'V'), (0x184, 'M', u'ƅ'), (0x185, 'V'), (0x186, 'M', u'ɔ'), (0x187, 'M', u'ƈ'), (0x188, 'V'), (0x189, 'M', u'ɖ'), (0x18A, 'M', u'ɗ'), (0x18B, 'M', u'ƌ'), (0x18C, 'V'), (0x18E, 'M', u'ǝ'), (0x18F, 'M', u'ə'), (0x190, 'M', u'ɛ'), (0x191, 'M', u'ƒ'), (0x192, 'V'), (0x193, 'M', u'ɠ'), ] def _seg_4(): return [ (0x194, 'M', u'ɣ'), (0x195, 'V'), (0x196, 'M', u'ɩ'), (0x197, 'M', u'ɨ'), (0x198, 'M', u'ƙ'), (0x199, 'V'), (0x19C, 'M', u'ɯ'), (0x19D, 'M', u'ɲ'), (0x19E, 'V'), (0x19F, 'M', u'ɵ'), (0x1A0, 'M', u'ơ'), (0x1A1, 'V'), (0x1A2, 'M', u'ƣ'), (0x1A3, 'V'), (0x1A4, 'M', u'ƥ'), (0x1A5, 'V'), (0x1A6, 'M', u'ʀ'), (0x1A7, 'M', u'ƨ'), (0x1A8, 'V'), (0x1A9, 'M', u'ʃ'), (0x1AA, 'V'), (0x1AC, 'M', u'ƭ'), (0x1AD, 'V'), (0x1AE, 'M', u'ʈ'), (0x1AF, 'M', u'ư'), (0x1B0, 'V'), (0x1B1, 'M', u'ʊ'), (0x1B2, 'M', u'ʋ'), (0x1B3, 'M', u'ƴ'), (0x1B4, 'V'), (0x1B5, 'M', u'ƶ'), (0x1B6, 'V'), (0x1B7, 'M', u'ʒ'), (0x1B8, 'M', u'ƹ'), (0x1B9, 'V'), (0x1BC, 'M', u'ƽ'), (0x1BD, 'V'), (0x1C4, 'M', u'dž'), (0x1C7, 'M', u'lj'), (0x1CA, 'M', u'nj'), (0x1CD, 'M', u'ǎ'), (0x1CE, 'V'), (0x1CF, 'M', u'ǐ'), (0x1D0, 'V'), (0x1D1, 'M', u'ǒ'), (0x1D2, 'V'), (0x1D3, 'M', u'ǔ'), (0x1D4, 'V'), (0x1D5, 'M', u'ǖ'), (0x1D6, 'V'), (0x1D7, 'M', u'ǘ'), (0x1D8, 'V'), (0x1D9, 'M', u'ǚ'), (0x1DA, 'V'), (0x1DB, 'M', u'ǜ'), (0x1DC, 'V'), (0x1DE, 'M', u'ǟ'), (0x1DF, 'V'), (0x1E0, 'M', u'ǡ'), (0x1E1, 'V'), (0x1E2, 'M', u'ǣ'), (0x1E3, 'V'), (0x1E4, 'M', u'ǥ'), (0x1E5, 'V'), (0x1E6, 'M', u'ǧ'), (0x1E7, 'V'), (0x1E8, 'M', u'ǩ'), (0x1E9, 'V'), (0x1EA, 'M', u'ǫ'), (0x1EB, 'V'), (0x1EC, 'M', u'ǭ'), (0x1ED, 'V'), (0x1EE, 'M', u'ǯ'), (0x1EF, 'V'), (0x1F1, 'M', u'dz'), (0x1F4, 'M', u'ǵ'), (0x1F5, 'V'), (0x1F6, 'M', u'ƕ'), (0x1F7, 'M', u'ƿ'), (0x1F8, 'M', u'ǹ'), (0x1F9, 'V'), (0x1FA, 'M', u'ǻ'), (0x1FB, 'V'), (0x1FC, 'M', u'ǽ'), (0x1FD, 'V'), (0x1FE, 'M', u'ǿ'), (0x1FF, 'V'), (0x200, 'M', u'ȁ'), (0x201, 'V'), (0x202, 'M', u'ȃ'), (0x203, 'V'), (0x204, 'M', u'ȅ'), (0x205, 'V'), (0x206, 'M', u'ȇ'), (0x207, 'V'), (0x208, 'M', u'ȉ'), (0x209, 'V'), (0x20A, 'M', u'ȋ'), (0x20B, 'V'), (0x20C, 'M', u'ȍ'), ] def _seg_5(): return [ (0x20D, 'V'), (0x20E, 'M', u'ȏ'), (0x20F, 'V'), (0x210, 'M', u'ȑ'), (0x211, 'V'), (0x212, 'M', u'ȓ'), (0x213, 'V'), (0x214, 'M', u'ȕ'), (0x215, 'V'), (0x216, 'M', u'ȗ'), (0x217, 'V'), (0x218, 'M', u'ș'), (0x219, 'V'), (0x21A, 'M', u'ț'), (0x21B, 'V'), (0x21C, 'M', u'ȝ'), (0x21D, 'V'), (0x21E, 'M', u'ȟ'), (0x21F, 'V'), (0x220, 'M', u'ƞ'), (0x221, 'V'), (0x222, 'M', u'ȣ'), (0x223, 'V'), (0x224, 'M', u'ȥ'), (0x225, 'V'), (0x226, 'M', u'ȧ'), (0x227, 'V'), (0x228, 'M', u'ȩ'), (0x229, 'V'), (0x22A, 'M', u'ȫ'), (0x22B, 'V'), (0x22C, 'M', u'ȭ'), (0x22D, 'V'), (0x22E, 'M', u'ȯ'), (0x22F, 'V'), (0x230, 'M', u'ȱ'), (0x231, 'V'), (0x232, 'M', u'ȳ'), (0x233, 'V'), (0x23A, 'M', u'ⱥ'), (0x23B, 'M', u'ȼ'), (0x23C, 'V'), (0x23D, 'M', u'ƚ'), (0x23E, 'M', u'ⱦ'), (0x23F, 'V'), (0x241, 'M', u'ɂ'), (0x242, 'V'), (0x243, 'M', u'ƀ'), (0x244, 'M', u'ʉ'), (0x245, 'M', u'ʌ'), (0x246, 'M', u'ɇ'), (0x247, 'V'), (0x248, 'M', u'ɉ'), (0x249, 'V'), (0x24A, 'M', u'ɋ'), (0x24B, 'V'), (0x24C, 'M', u'ɍ'), 
(0x24D, 'V'), (0x24E, 'M', u'ɏ'), (0x24F, 'V'), (0x2B0, 'M', u'h'), (0x2B1, 'M', u'ɦ'), (0x2B2, 'M', u'j'), (0x2B3, 'M', u'r'), (0x2B4, 'M', u'ɹ'), (0x2B5, 'M', u'ɻ'), (0x2B6, 'M', u'ʁ'), (0x2B7, 'M', u'w'), (0x2B8, 'M', u'y'), (0x2B9, 'V'), (0x2D8, '3', u' ̆'), (0x2D9, '3', u' ̇'), (0x2DA, '3', u' ̊'), (0x2DB, '3', u' ̨'), (0x2DC, '3', u' ̃'), (0x2DD, '3', u' ̋'), (0x2DE, 'V'), (0x2E0, 'M', u'ɣ'), (0x2E1, 'M', u'l'), (0x2E2, 'M', u's'), (0x2E3, 'M', u'x'), (0x2E4, 'M', u'ʕ'), (0x2E5, 'V'), (0x340, 'M', u'̀'), (0x341, 'M', u'́'), (0x342, 'V'), (0x343, 'M', u'̓'), (0x344, 'M', u'̈́'), (0x345, 'M', u'ι'), (0x346, 'V'), (0x34F, 'I'), (0x350, 'V'), (0x370, 'M', u'ͱ'), (0x371, 'V'), (0x372, 'M', u'ͳ'), (0x373, 'V'), (0x374, 'M', u'ʹ'), (0x375, 'V'), (0x376, 'M', u'ͷ'), (0x377, 'V'), ] def _seg_6(): return [ (0x378, 'X'), (0x37A, '3', u' ι'), (0x37B, 'V'), (0x37E, '3', u';'), (0x37F, 'M', u'ϳ'), (0x380, 'X'), (0x384, '3', u' ́'), (0x385, '3', u' ̈́'), (0x386, 'M', u'ά'), (0x387, 'M', u'·'), (0x388, 'M', u'έ'), (0x389, 'M', u'ή'), (0x38A, 'M', u'ί'), (0x38B, 'X'), (0x38C, 'M', u'ό'), (0x38D, 'X'), (0x38E, 'M', u'ύ'), (0x38F, 'M', u'ώ'), (0x390, 'V'), (0x391, 'M', u'α'), (0x392, 'M', u'β'), (0x393, 'M', u'γ'), (0x394, 'M', u'δ'), (0x395, 'M', u'ε'), (0x396, 'M', u'ζ'), (0x397, 'M', u'η'), (0x398, 'M', u'θ'), (0x399, 'M', u'ι'), (0x39A, 'M', u'κ'), (0x39B, 'M', u'λ'), (0x39C, 'M', u'μ'), (0x39D, 'M', u'ν'), (0x39E, 'M', u'ξ'), (0x39F, 'M', u'ο'), (0x3A0, 'M', u'π'), (0x3A1, 'M', u'ρ'), (0x3A2, 'X'), (0x3A3, 'M', u'σ'), (0x3A4, 'M', u'τ'), (0x3A5, 'M', u'υ'), (0x3A6, 'M', u'φ'), (0x3A7, 'M', u'χ'), (0x3A8, 'M', u'ψ'), (0x3A9, 'M', u'ω'), (0x3AA, 'M', u'ϊ'), (0x3AB, 'M', u'ϋ'), (0x3AC, 'V'), (0x3C2, 'D', u'σ'), (0x3C3, 'V'), (0x3CF, 'M', u'ϗ'), (0x3D0, 'M', u'β'), (0x3D1, 'M', u'θ'), (0x3D2, 'M', u'υ'), (0x3D3, 'M', u'ύ'), (0x3D4, 'M', u'ϋ'), (0x3D5, 'M', u'φ'), (0x3D6, 'M', u'π'), (0x3D7, 'V'), (0x3D8, 'M', u'ϙ'), (0x3D9, 'V'), (0x3DA, 'M', u'ϛ'), (0x3DB, 'V'), (0x3DC, 'M', u'ϝ'), (0x3DD, 'V'), (0x3DE, 'M', u'ϟ'), (0x3DF, 'V'), (0x3E0, 'M', u'ϡ'), (0x3E1, 'V'), (0x3E2, 'M', u'ϣ'), (0x3E3, 'V'), (0x3E4, 'M', u'ϥ'), (0x3E5, 'V'), (0x3E6, 'M', u'ϧ'), (0x3E7, 'V'), (0x3E8, 'M', u'ϩ'), (0x3E9, 'V'), (0x3EA, 'M', u'ϫ'), (0x3EB, 'V'), (0x3EC, 'M', u'ϭ'), (0x3ED, 'V'), (0x3EE, 'M', u'ϯ'), (0x3EF, 'V'), (0x3F0, 'M', u'κ'), (0x3F1, 'M', u'ρ'), (0x3F2, 'M', u'σ'), (0x3F3, 'V'), (0x3F4, 'M', u'θ'), (0x3F5, 'M', u'ε'), (0x3F6, 'V'), (0x3F7, 'M', u'ϸ'), (0x3F8, 'V'), (0x3F9, 'M', u'σ'), (0x3FA, 'M', u'ϻ'), (0x3FB, 'V'), (0x3FD, 'M', u'ͻ'), (0x3FE, 'M', u'ͼ'), (0x3FF, 'M', u'ͽ'), (0x400, 'M', u'ѐ'), (0x401, 'M', u'ё'), (0x402, 'M', u'ђ'), ] def _seg_7(): return [ (0x403, 'M', u'ѓ'), (0x404, 'M', u'є'), (0x405, 'M', u'ѕ'), (0x406, 'M', u'і'), (0x407, 'M', u'ї'), (0x408, 'M', u'ј'), (0x409, 'M', u'љ'), (0x40A, 'M', u'њ'), (0x40B, 'M', u'ћ'), (0x40C, 'M', u'ќ'), (0x40D, 'M', u'ѝ'), (0x40E, 'M', u'ў'), (0x40F, 'M', u'џ'), (0x410, 'M', u'а'), (0x411, 'M', u'б'), (0x412, 'M', u'в'), (0x413, 'M', u'г'), (0x414, 'M', u'д'), (0x415, 'M', u'е'), (0x416, 'M', u'ж'), (0x417, 'M', u'з'), (0x418, 'M', u'и'), (0x419, 'M', u'й'), (0x41A, 'M', u'к'), (0x41B, 'M', u'л'), (0x41C, 'M', u'м'), (0x41D, 'M', u'н'), (0x41E, 'M', u'о'), (0x41F, 'M', u'п'), (0x420, 'M', u'р'), (0x421, 'M', u'с'), (0x422, 'M', u'т'), (0x423, 'M', u'у'), (0x424, 'M', u'ф'), (0x425, 'M', u'х'), (0x426, 'M', u'ц'), (0x427, 'M', u'ч'), (0x428, 'M', u'ш'), (0x429, 'M', u'щ'), (0x42A, 'M', u'ъ'), (0x42B, 'M', u'ы'), (0x42C, 'M', u'ь'), (0x42D, 'M', u'э'), (0x42E, 
'M', u'ю'), (0x42F, 'M', u'я'), (0x430, 'V'), (0x460, 'M', u'ѡ'), (0x461, 'V'), (0x462, 'M', u'ѣ'), (0x463, 'V'), (0x464, 'M', u'ѥ'), (0x465, 'V'), (0x466, 'M', u'ѧ'), (0x467, 'V'), (0x468, 'M', u'ѩ'), (0x469, 'V'), (0x46A, 'M', u'ѫ'), (0x46B, 'V'), (0x46C, 'M', u'ѭ'), (0x46D, 'V'), (0x46E, 'M', u'ѯ'), (0x46F, 'V'), (0x470, 'M', u'ѱ'), (0x471, 'V'), (0x472, 'M', u'ѳ'), (0x473, 'V'), (0x474, 'M', u'ѵ'), (0x475, 'V'), (0x476, 'M', u'ѷ'), (0x477, 'V'), (0x478, 'M', u'ѹ'), (0x479, 'V'), (0x47A, 'M', u'ѻ'), (0x47B, 'V'), (0x47C, 'M', u'ѽ'), (0x47D, 'V'), (0x47E, 'M', u'ѿ'), (0x47F, 'V'), (0x480, 'M', u'ҁ'), (0x481, 'V'), (0x48A, 'M', u'ҋ'), (0x48B, 'V'), (0x48C, 'M', u'ҍ'), (0x48D, 'V'), (0x48E, 'M', u'ҏ'), (0x48F, 'V'), (0x490, 'M', u'ґ'), (0x491, 'V'), (0x492, 'M', u'ғ'), (0x493, 'V'), (0x494, 'M', u'ҕ'), (0x495, 'V'), (0x496, 'M', u'җ'), (0x497, 'V'), (0x498, 'M', u'ҙ'), (0x499, 'V'), (0x49A, 'M', u'қ'), (0x49B, 'V'), (0x49C, 'M', u'ҝ'), (0x49D, 'V'), ] def _seg_8(): return [ (0x49E, 'M', u'ҟ'), (0x49F, 'V'), (0x4A0, 'M', u'ҡ'), (0x4A1, 'V'), (0x4A2, 'M', u'ң'), (0x4A3, 'V'), (0x4A4, 'M', u'ҥ'), (0x4A5, 'V'), (0x4A6, 'M', u'ҧ'), (0x4A7, 'V'), (0x4A8, 'M', u'ҩ'), (0x4A9, 'V'), (0x4AA, 'M', u'ҫ'), (0x4AB, 'V'), (0x4AC, 'M', u'ҭ'), (0x4AD, 'V'), (0x4AE, 'M', u'ү'), (0x4AF, 'V'), (0x4B0, 'M', u'ұ'), (0x4B1, 'V'), (0x4B2, 'M', u'ҳ'), (0x4B3, 'V'), (0x4B4, 'M', u'ҵ'), (0x4B5, 'V'), (0x4B6, 'M', u'ҷ'), (0x4B7, 'V'), (0x4B8, 'M', u'ҹ'), (0x4B9, 'V'), (0x4BA, 'M', u'һ'), (0x4BB, 'V'), (0x4BC, 'M', u'ҽ'), (0x4BD, 'V'), (0x4BE, 'M', u'ҿ'), (0x4BF, 'V'), (0x4C0, 'X'), (0x4C1, 'M', u'ӂ'), (0x4C2, 'V'), (0x4C3, 'M', u'ӄ'), (0x4C4, 'V'), (0x4C5, 'M', u'ӆ'), (0x4C6, 'V'), (0x4C7, 'M', u'ӈ'), (0x4C8, 'V'), (0x4C9, 'M', u'ӊ'), (0x4CA, 'V'), (0x4CB, 'M', u'ӌ'), (0x4CC, 'V'), (0x4CD, 'M', u'ӎ'), (0x4CE, 'V'), (0x4D0, 'M', u'ӑ'), (0x4D1, 'V'), (0x4D2, 'M', u'ӓ'), (0x4D3, 'V'), (0x4D4, 'M', u'ӕ'), (0x4D5, 'V'), (0x4D6, 'M', u'ӗ'), (0x4D7, 'V'), (0x4D8, 'M', u'ә'), (0x4D9, 'V'), (0x4DA, 'M', u'ӛ'), (0x4DB, 'V'), (0x4DC, 'M', u'ӝ'), (0x4DD, 'V'), (0x4DE, 'M', u'ӟ'), (0x4DF, 'V'), (0x4E0, 'M', u'ӡ'), (0x4E1, 'V'), (0x4E2, 'M', u'ӣ'), (0x4E3, 'V'), (0x4E4, 'M', u'ӥ'), (0x4E5, 'V'), (0x4E6, 'M', u'ӧ'), (0x4E7, 'V'), (0x4E8, 'M', u'ө'), (0x4E9, 'V'), (0x4EA, 'M', u'ӫ'), (0x4EB, 'V'), (0x4EC, 'M', u'ӭ'), (0x4ED, 'V'), (0x4EE, 'M', u'ӯ'), (0x4EF, 'V'), (0x4F0, 'M', u'ӱ'), (0x4F1, 'V'), (0x4F2, 'M', u'ӳ'), (0x4F3, 'V'), (0x4F4, 'M', u'ӵ'), (0x4F5, 'V'), (0x4F6, 'M', u'ӷ'), (0x4F7, 'V'), (0x4F8, 'M', u'ӹ'), (0x4F9, 'V'), (0x4FA, 'M', u'ӻ'), (0x4FB, 'V'), (0x4FC, 'M', u'ӽ'), (0x4FD, 'V'), (0x4FE, 'M', u'ӿ'), (0x4FF, 'V'), (0x500, 'M', u'ԁ'), (0x501, 'V'), (0x502, 'M', u'ԃ'), ] def _seg_9(): return [ (0x503, 'V'), (0x504, 'M', u'ԅ'), (0x505, 'V'), (0x506, 'M', u'ԇ'), (0x507, 'V'), (0x508, 'M', u'ԉ'), (0x509, 'V'), (0x50A, 'M', u'ԋ'), (0x50B, 'V'), (0x50C, 'M', u'ԍ'), (0x50D, 'V'), (0x50E, 'M', u'ԏ'), (0x50F, 'V'), (0x510, 'M', u'ԑ'), (0x511, 'V'), (0x512, 'M', u'ԓ'), (0x513, 'V'), (0x514, 'M', u'ԕ'), (0x515, 'V'), (0x516, 'M', u'ԗ'), (0x517, 'V'), (0x518, 'M', u'ԙ'), (0x519, 'V'), (0x51A, 'M', u'ԛ'), (0x51B, 'V'), (0x51C, 'M', u'ԝ'), (0x51D, 'V'), (0x51E, 'M', u'ԟ'), (0x51F, 'V'), (0x520, 'M', u'ԡ'), (0x521, 'V'), (0x522, 'M', u'ԣ'), (0x523, 'V'), (0x524, 'M', u'ԥ'), (0x525, 'V'), (0x526, 'M', u'ԧ'), (0x527, 'V'), (0x528, 'M', u'ԩ'), (0x529, 'V'), (0x52A, 'M', u'ԫ'), (0x52B, 'V'), (0x52C, 'M', u'ԭ'), (0x52D, 'V'), (0x52E, 'M', u'ԯ'), (0x52F, 'V'), (0x530, 'X'), (0x531, 'M', u'ա'), (0x532, 'M', u'բ'), (0x533, 'M', u'գ'), 
(0x534, 'M', u'դ'), (0x535, 'M', u'ե'), (0x536, 'M', u'զ'), (0x537, 'M', u'է'), (0x538, 'M', u'ը'), (0x539, 'M', u'թ'), (0x53A, 'M', u'ժ'), (0x53B, 'M', u'ի'), (0x53C, 'M', u'լ'), (0x53D, 'M', u'խ'), (0x53E, 'M', u'ծ'), (0x53F, 'M', u'կ'), (0x540, 'M', u'հ'), (0x541, 'M', u'ձ'), (0x542, 'M', u'ղ'), (0x543, 'M', u'ճ'), (0x544, 'M', u'մ'), (0x545, 'M', u'յ'), (0x546, 'M', u'ն'), (0x547, 'M', u'շ'), (0x548, 'M', u'ո'), (0x549, 'M', u'չ'), (0x54A, 'M', u'պ'), (0x54B, 'M', u'ջ'), (0x54C, 'M', u'ռ'), (0x54D, 'M', u'ս'), (0x54E, 'M', u'վ'), (0x54F, 'M', u'տ'), (0x550, 'M', u'ր'), (0x551, 'M', u'ց'), (0x552, 'M', u'ւ'), (0x553, 'M', u'փ'), (0x554, 'M', u'ք'), (0x555, 'M', u'օ'), (0x556, 'M', u'ֆ'), (0x557, 'X'), (0x559, 'V'), (0x587, 'M', u'եւ'), (0x588, 'V'), (0x58B, 'X'), (0x58D, 'V'), (0x590, 'X'), (0x591, 'V'), (0x5C8, 'X'), (0x5D0, 'V'), (0x5EB, 'X'), (0x5EF, 'V'), (0x5F5, 'X'), (0x606, 'V'), (0x61C, 'X'), (0x61E, 'V'), ] def _seg_10(): return [ (0x675, 'M', u'اٴ'), (0x676, 'M', u'وٴ'), (0x677, 'M', u'ۇٴ'), (0x678, 'M', u'يٴ'), (0x679, 'V'), (0x6DD, 'X'), (0x6DE, 'V'), (0x70E, 'X'), (0x710, 'V'), (0x74B, 'X'), (0x74D, 'V'), (0x7B2, 'X'), (0x7C0, 'V'), (0x7FB, 'X'), (0x7FD, 'V'), (0x82E, 'X'), (0x830, 'V'), (0x83F, 'X'), (0x840, 'V'), (0x85C, 'X'), (0x85E, 'V'), (0x85F, 'X'), (0x860, 'V'), (0x86B, 'X'), (0x8A0, 'V'), (0x8B5, 'X'), (0x8B6, 'V'), (0x8BE, 'X'), (0x8D3, 'V'), (0x8E2, 'X'), (0x8E3, 'V'), (0x958, 'M', u'क़'), (0x959, 'M', u'ख़'), (0x95A, 'M', u'ग़'), (0x95B, 'M', u'ज़'), (0x95C, 'M', u'ड़'), (0x95D, 'M', u'ढ़'), (0x95E, 'M', u'फ़'), (0x95F, 'M', u'य़'), (0x960, 'V'), (0x984, 'X'), (0x985, 'V'), (0x98D, 'X'), (0x98F, 'V'), (0x991, 'X'), (0x993, 'V'), (0x9A9, 'X'), (0x9AA, 'V'), (0x9B1, 'X'), (0x9B2, 'V'), (0x9B3, 'X'), (0x9B6, 'V'), (0x9BA, 'X'), (0x9BC, 'V'), (0x9C5, 'X'), (0x9C7, 'V'), (0x9C9, 'X'), (0x9CB, 'V'), (0x9CF, 'X'), (0x9D7, 'V'), (0x9D8, 'X'), (0x9DC, 'M', u'ড়'), (0x9DD, 'M', u'ঢ়'), (0x9DE, 'X'), (0x9DF, 'M', u'য়'), (0x9E0, 'V'), (0x9E4, 'X'), (0x9E6, 'V'), (0x9FF, 'X'), (0xA01, 'V'), (0xA04, 'X'), (0xA05, 'V'), (0xA0B, 'X'), (0xA0F, 'V'), (0xA11, 'X'), (0xA13, 'V'), (0xA29, 'X'), (0xA2A, 'V'), (0xA31, 'X'), (0xA32, 'V'), (0xA33, 'M', u'ਲ਼'), (0xA34, 'X'), (0xA35, 'V'), (0xA36, 'M', u'ਸ਼'), (0xA37, 'X'), (0xA38, 'V'), (0xA3A, 'X'), (0xA3C, 'V'), (0xA3D, 'X'), (0xA3E, 'V'), (0xA43, 'X'), (0xA47, 'V'), (0xA49, 'X'), (0xA4B, 'V'), (0xA4E, 'X'), (0xA51, 'V'), (0xA52, 'X'), (0xA59, 'M', u'ਖ਼'), (0xA5A, 'M', u'ਗ਼'), (0xA5B, 'M', u'ਜ਼'), ] def _seg_11(): return [ (0xA5C, 'V'), (0xA5D, 'X'), (0xA5E, 'M', u'ਫ਼'), (0xA5F, 'X'), (0xA66, 'V'), (0xA77, 'X'), (0xA81, 'V'), (0xA84, 'X'), (0xA85, 'V'), (0xA8E, 'X'), (0xA8F, 'V'), (0xA92, 'X'), (0xA93, 'V'), (0xAA9, 'X'), (0xAAA, 'V'), (0xAB1, 'X'), (0xAB2, 'V'), (0xAB4, 'X'), (0xAB5, 'V'), (0xABA, 'X'), (0xABC, 'V'), (0xAC6, 'X'), (0xAC7, 'V'), (0xACA, 'X'), (0xACB, 'V'), (0xACE, 'X'), (0xAD0, 'V'), (0xAD1, 'X'), (0xAE0, 'V'), (0xAE4, 'X'), (0xAE6, 'V'), (0xAF2, 'X'), (0xAF9, 'V'), (0xB00, 'X'), (0xB01, 'V'), (0xB04, 'X'), (0xB05, 'V'), (0xB0D, 'X'), (0xB0F, 'V'), (0xB11, 'X'), (0xB13, 'V'), (0xB29, 'X'), (0xB2A, 'V'), (0xB31, 'X'), (0xB32, 'V'), (0xB34, 'X'), (0xB35, 'V'), (0xB3A, 'X'), (0xB3C, 'V'), (0xB45, 'X'), (0xB47, 'V'), (0xB49, 'X'), (0xB4B, 'V'), (0xB4E, 'X'), (0xB56, 'V'), (0xB58, 'X'), (0xB5C, 'M', u'ଡ଼'), (0xB5D, 'M', u'ଢ଼'), (0xB5E, 'X'), (0xB5F, 'V'), (0xB64, 'X'), (0xB66, 'V'), (0xB78, 'X'), (0xB82, 'V'), (0xB84, 'X'), (0xB85, 'V'), (0xB8B, 'X'), (0xB8E, 'V'), (0xB91, 'X'), (0xB92, 'V'), (0xB96, 'X'), (0xB99, 'V'), 
(0xB9B, 'X'), (0xB9C, 'V'), (0xB9D, 'X'), (0xB9E, 'V'), (0xBA0, 'X'), (0xBA3, 'V'), (0xBA5, 'X'), (0xBA8, 'V'), (0xBAB, 'X'), (0xBAE, 'V'), (0xBBA, 'X'), (0xBBE, 'V'), (0xBC3, 'X'), (0xBC6, 'V'), (0xBC9, 'X'), (0xBCA, 'V'), (0xBCE, 'X'), (0xBD0, 'V'), (0xBD1, 'X'), (0xBD7, 'V'), (0xBD8, 'X'), (0xBE6, 'V'), (0xBFB, 'X'), (0xC00, 'V'), (0xC0D, 'X'), (0xC0E, 'V'), (0xC11, 'X'), (0xC12, 'V'), ] def _seg_12(): return [ (0xC29, 'X'), (0xC2A, 'V'), (0xC3A, 'X'), (0xC3D, 'V'), (0xC45, 'X'), (0xC46, 'V'), (0xC49, 'X'), (0xC4A, 'V'), (0xC4E, 'X'), (0xC55, 'V'), (0xC57, 'X'), (0xC58, 'V'), (0xC5B, 'X'), (0xC60, 'V'), (0xC64, 'X'), (0xC66, 'V'), (0xC70, 'X'), (0xC78, 'V'), (0xC8D, 'X'), (0xC8E, 'V'), (0xC91, 'X'), (0xC92, 'V'), (0xCA9, 'X'), (0xCAA, 'V'), (0xCB4, 'X'), (0xCB5, 'V'), (0xCBA, 'X'), (0xCBC, 'V'), (0xCC5, 'X'), (0xCC6, 'V'), (0xCC9, 'X'), (0xCCA, 'V'), (0xCCE, 'X'), (0xCD5, 'V'), (0xCD7, 'X'), (0xCDE, 'V'), (0xCDF, 'X'), (0xCE0, 'V'), (0xCE4, 'X'), (0xCE6, 'V'), (0xCF0, 'X'), (0xCF1, 'V'), (0xCF3, 'X'), (0xD00, 'V'), (0xD04, 'X'), (0xD05, 'V'), (0xD0D, 'X'), (0xD0E, 'V'), (0xD11, 'X'), (0xD12, 'V'), (0xD45, 'X'), (0xD46, 'V'), (0xD49, 'X'), (0xD4A, 'V'), (0xD50, 'X'), (0xD54, 'V'), (0xD64, 'X'), (0xD66, 'V'), (0xD80, 'X'), (0xD82, 'V'), (0xD84, 'X'), (0xD85, 'V'), (0xD97, 'X'), (0xD9A, 'V'), (0xDB2, 'X'), (0xDB3, 'V'), (0xDBC, 'X'), (0xDBD, 'V'), (0xDBE, 'X'), (0xDC0, 'V'), (0xDC7, 'X'), (0xDCA, 'V'), (0xDCB, 'X'), (0xDCF, 'V'), (0xDD5, 'X'), (0xDD6, 'V'), (0xDD7, 'X'), (0xDD8, 'V'), (0xDE0, 'X'), (0xDE6, 'V'), (0xDF0, 'X'), (0xDF2, 'V'), (0xDF5, 'X'), (0xE01, 'V'), (0xE33, 'M', u'ํา'), (0xE34, 'V'), (0xE3B, 'X'), (0xE3F, 'V'), (0xE5C, 'X'), (0xE81, 'V'), (0xE83, 'X'), (0xE84, 'V'), (0xE85, 'X'), (0xE87, 'V'), (0xE89, 'X'), (0xE8A, 'V'), (0xE8B, 'X'), (0xE8D, 'V'), (0xE8E, 'X'), (0xE94, 'V'), ] def _seg_13(): return [ (0xE98, 'X'), (0xE99, 'V'), (0xEA0, 'X'), (0xEA1, 'V'), (0xEA4, 'X'), (0xEA5, 'V'), (0xEA6, 'X'), (0xEA7, 'V'), (0xEA8, 'X'), (0xEAA, 'V'), (0xEAC, 'X'), (0xEAD, 'V'), (0xEB3, 'M', u'ໍາ'), (0xEB4, 'V'), (0xEBA, 'X'), (0xEBB, 'V'), (0xEBE, 'X'), (0xEC0, 'V'), (0xEC5, 'X'), (0xEC6, 'V'), (0xEC7, 'X'), (0xEC8, 'V'), (0xECE, 'X'), (0xED0, 'V'), (0xEDA, 'X'), (0xEDC, 'M', u'ຫນ'), (0xEDD, 'M', u'ຫມ'), (0xEDE, 'V'), (0xEE0, 'X'), (0xF00, 'V'), (0xF0C, 'M', u'་'), (0xF0D, 'V'), (0xF43, 'M', u'གྷ'), (0xF44, 'V'), (0xF48, 'X'), (0xF49, 'V'), (0xF4D, 'M', u'ཌྷ'), (0xF4E, 'V'), (0xF52, 'M', u'དྷ'), (0xF53, 'V'), (0xF57, 'M', u'བྷ'), (0xF58, 'V'), (0xF5C, 'M', u'ཛྷ'), (0xF5D, 'V'), (0xF69, 'M', u'ཀྵ'), (0xF6A, 'V'), (0xF6D, 'X'), (0xF71, 'V'), (0xF73, 'M', u'ཱི'), (0xF74, 'V'), (0xF75, 'M', u'ཱུ'), (0xF76, 'M', u'ྲྀ'), (0xF77, 'M', u'ྲཱྀ'), (0xF78, 'M', u'ླྀ'), (0xF79, 'M', u'ླཱྀ'), (0xF7A, 'V'), (0xF81, 'M', u'ཱྀ'), (0xF82, 'V'), (0xF93, 'M', u'ྒྷ'), (0xF94, 'V'), (0xF98, 'X'), (0xF99, 'V'), (0xF9D, 'M', u'ྜྷ'), (0xF9E, 'V'), (0xFA2, 'M', u'ྡྷ'), (0xFA3, 'V'), (0xFA7, 'M', u'ྦྷ'), (0xFA8, 'V'), (0xFAC, 'M', u'ྫྷ'), (0xFAD, 'V'), (0xFB9, 'M', u'ྐྵ'), (0xFBA, 'V'), (0xFBD, 'X'), (0xFBE, 'V'), (0xFCD, 'X'), (0xFCE, 'V'), (0xFDB, 'X'), (0x1000, 'V'), (0x10A0, 'X'), (0x10C7, 'M', u'ⴧ'), (0x10C8, 'X'), (0x10CD, 'M', u'ⴭ'), (0x10CE, 'X'), (0x10D0, 'V'), (0x10FC, 'M', u'ნ'), (0x10FD, 'V'), (0x115F, 'X'), (0x1161, 'V'), (0x1249, 'X'), (0x124A, 'V'), (0x124E, 'X'), (0x1250, 'V'), (0x1257, 'X'), (0x1258, 'V'), (0x1259, 'X'), (0x125A, 'V'), (0x125E, 'X'), (0x1260, 'V'), (0x1289, 'X'), (0x128A, 'V'), ] def _seg_14(): return [ (0x128E, 'X'), (0x1290, 'V'), (0x12B1, 'X'), (0x12B2, 'V'), (0x12B6, 'X'), 
(0x12B8, 'V'), (0x12BF, 'X'), (0x12C0, 'V'), (0x12C1, 'X'), (0x12C2, 'V'), (0x12C6, 'X'), (0x12C8, 'V'), (0x12D7, 'X'), (0x12D8, 'V'), (0x1311, 'X'), (0x1312, 'V'), (0x1316, 'X'), (0x1318, 'V'), (0x135B, 'X'), (0x135D, 'V'), (0x137D, 'X'), (0x1380, 'V'), (0x139A, 'X'), (0x13A0, 'V'), (0x13F6, 'X'), (0x13F8, 'M', u'Ᏸ'), (0x13F9, 'M', u'Ᏹ'), (0x13FA, 'M', u'Ᏺ'), (0x13FB, 'M', u'Ᏻ'), (0x13FC, 'M', u'Ᏼ'), (0x13FD, 'M', u'Ᏽ'), (0x13FE, 'X'), (0x1400, 'V'), (0x1680, 'X'), (0x1681, 'V'), (0x169D, 'X'), (0x16A0, 'V'), (0x16F9, 'X'), (0x1700, 'V'), (0x170D, 'X'), (0x170E, 'V'), (0x1715, 'X'), (0x1720, 'V'), (0x1737, 'X'), (0x1740, 'V'), (0x1754, 'X'), (0x1760, 'V'), (0x176D, 'X'), (0x176E, 'V'), (0x1771, 'X'), (0x1772, 'V'), (0x1774, 'X'), (0x1780, 'V'), (0x17B4, 'X'), (0x17B6, 'V'), (0x17DE, 'X'), (0x17E0, 'V'), (0x17EA, 'X'), (0x17F0, 'V'), (0x17FA, 'X'), (0x1800, 'V'), (0x1806, 'X'), (0x1807, 'V'), (0x180B, 'I'), (0x180E, 'X'), (0x1810, 'V'), (0x181A, 'X'), (0x1820, 'V'), (0x1879, 'X'), (0x1880, 'V'), (0x18AB, 'X'), (0x18B0, 'V'), (0x18F6, 'X'), (0x1900, 'V'), (0x191F, 'X'), (0x1920, 'V'), (0x192C, 'X'), (0x1930, 'V'), (0x193C, 'X'), (0x1940, 'V'), (0x1941, 'X'), (0x1944, 'V'), (0x196E, 'X'), (0x1970, 'V'), (0x1975, 'X'), (0x1980, 'V'), (0x19AC, 'X'), (0x19B0, 'V'), (0x19CA, 'X'), (0x19D0, 'V'), (0x19DB, 'X'), (0x19DE, 'V'), (0x1A1C, 'X'), (0x1A1E, 'V'), (0x1A5F, 'X'), (0x1A60, 'V'), (0x1A7D, 'X'), (0x1A7F, 'V'), (0x1A8A, 'X'), (0x1A90, 'V'), ] def _seg_15(): return [ (0x1A9A, 'X'), (0x1AA0, 'V'), (0x1AAE, 'X'), (0x1AB0, 'V'), (0x1ABF, 'X'), (0x1B00, 'V'), (0x1B4C, 'X'), (0x1B50, 'V'), (0x1B7D, 'X'), (0x1B80, 'V'), (0x1BF4, 'X'), (0x1BFC, 'V'), (0x1C38, 'X'), (0x1C3B, 'V'), (0x1C4A, 'X'), (0x1C4D, 'V'), (0x1C80, 'M', u'в'), (0x1C81, 'M', u'д'), (0x1C82, 'M', u'о'), (0x1C83, 'M', u'с'), (0x1C84, 'M', u'т'), (0x1C86, 'M', u'ъ'), (0x1C87, 'M', u'ѣ'), (0x1C88, 'M', u'ꙋ'), (0x1C89, 'X'), (0x1CC0, 'V'), (0x1CC8, 'X'), (0x1CD0, 'V'), (0x1CFA, 'X'), (0x1D00, 'V'), (0x1D2C, 'M', u'a'), (0x1D2D, 'M', u'æ'), (0x1D2E, 'M', u'b'), (0x1D2F, 'V'), (0x1D30, 'M', u'd'), (0x1D31, 'M', u'e'), (0x1D32, 'M', u'ǝ'), (0x1D33, 'M', u'g'), (0x1D34, 'M', u'h'), (0x1D35, 'M', u'i'), (0x1D36, 'M', u'j'), (0x1D37, 'M', u'k'), (0x1D38, 'M', u'l'), (0x1D39, 'M', u'm'), (0x1D3A, 'M', u'n'), (0x1D3B, 'V'), (0x1D3C, 'M', u'o'), (0x1D3D, 'M', u'ȣ'), (0x1D3E, 'M', u'p'), (0x1D3F, 'M', u'r'), (0x1D40, 'M', u't'), (0x1D41, 'M', u'u'), (0x1D42, 'M', u'w'), (0x1D43, 'M', u'a'), (0x1D44, 'M', u'ɐ'), (0x1D45, 'M', u'ɑ'), (0x1D46, 'M', u'ᴂ'), (0x1D47, 'M', u'b'), (0x1D48, 'M', u'd'), (0x1D49, 'M', u'e'), (0x1D4A, 'M', u'ə'), (0x1D4B, 'M', u'ɛ'), (0x1D4C, 'M', u'ɜ'), (0x1D4D, 'M', u'g'), (0x1D4E, 'V'), (0x1D4F, 'M', u'k'), (0x1D50, 'M', u'm'), (0x1D51, 'M', u'ŋ'), (0x1D52, 'M', u'o'), (0x1D53, 'M', u'ɔ'), (0x1D54, 'M', u'ᴖ'), (0x1D55, 'M', u'ᴗ'), (0x1D56, 'M', u'p'), (0x1D57, 'M', u't'), (0x1D58, 'M', u'u'), (0x1D59, 'M', u'ᴝ'), (0x1D5A, 'M', u'ɯ'), (0x1D5B, 'M', u'v'), (0x1D5C, 'M', u'ᴥ'), (0x1D5D, 'M', u'β'), (0x1D5E, 'M', u'γ'), (0x1D5F, 'M', u'δ'), (0x1D60, 'M', u'φ'), (0x1D61, 'M', u'χ'), (0x1D62, 'M', u'i'), (0x1D63, 'M', u'r'), (0x1D64, 'M', u'u'), (0x1D65, 'M', u'v'), (0x1D66, 'M', u'β'), (0x1D67, 'M', u'γ'), (0x1D68, 'M', u'ρ'), (0x1D69, 'M', u'φ'), (0x1D6A, 'M', u'χ'), (0x1D6B, 'V'), (0x1D78, 'M', u'н'), (0x1D79, 'V'), (0x1D9B, 'M', u'ɒ'), (0x1D9C, 'M', u'c'), (0x1D9D, 'M', u'ɕ'), (0x1D9E, 'M', u'ð'), ] def _seg_16(): return [ (0x1D9F, 'M', u'ɜ'), (0x1DA0, 'M', u'f'), (0x1DA1, 'M', u'ɟ'), (0x1DA2, 'M', u'ɡ'), (0x1DA3, 'M', 
u'ɥ'), (0x1DA4, 'M', u'ɨ'), (0x1DA5, 'M', u'ɩ'), (0x1DA6, 'M', u'ɪ'), (0x1DA7, 'M', u'ᵻ'), (0x1DA8, 'M', u'ʝ'), (0x1DA9, 'M', u'ɭ'), (0x1DAA, 'M', u'ᶅ'), (0x1DAB, 'M', u'ʟ'), (0x1DAC, 'M', u'ɱ'), (0x1DAD, 'M', u'ɰ'), (0x1DAE, 'M', u'ɲ'), (0x1DAF, 'M', u'ɳ'), (0x1DB0, 'M', u'ɴ'), (0x1DB1, 'M', u'ɵ'), (0x1DB2, 'M', u'ɸ'), (0x1DB3, 'M', u'ʂ'), (0x1DB4, 'M', u'ʃ'), (0x1DB5, 'M', u'ƫ'), (0x1DB6, 'M', u'ʉ'), (0x1DB7, 'M', u'ʊ'), (0x1DB8, 'M', u'ᴜ'), (0x1DB9, 'M', u'ʋ'), (0x1DBA, 'M', u'ʌ'), (0x1DBB, 'M', u'z'), (0x1DBC, 'M', u'ʐ'), (0x1DBD, 'M', u'ʑ'), (0x1DBE, 'M', u'ʒ'), (0x1DBF, 'M', u'θ'), (0x1DC0, 'V'), (0x1DFA, 'X'), (0x1DFB, 'V'), (0x1E00, 'M', u'ḁ'), (0x1E01, 'V'), (0x1E02, 'M', u'ḃ'), (0x1E03, 'V'), (0x1E04, 'M', u'ḅ'), (0x1E05, 'V'), (0x1E06, 'M', u'ḇ'), (0x1E07, 'V'), (0x1E08, 'M', u'ḉ'), (0x1E09, 'V'), (0x1E0A, 'M', u'ḋ'), (0x1E0B, 'V'), (0x1E0C, 'M', u'ḍ'), (0x1E0D, 'V'), (0x1E0E, 'M', u'ḏ'), (0x1E0F, 'V'), (0x1E10, 'M', u'ḑ'), (0x1E11, 'V'), (0x1E12, 'M', u'ḓ'), (0x1E13, 'V'), (0x1E14, 'M', u'ḕ'), (0x1E15, 'V'), (0x1E16, 'M', u'ḗ'), (0x1E17, 'V'), (0x1E18, 'M', u'ḙ'), (0x1E19, 'V'), (0x1E1A, 'M', u'ḛ'), (0x1E1B, 'V'), (0x1E1C, 'M', u'ḝ'), (0x1E1D, 'V'), (0x1E1E, 'M', u'ḟ'), (0x1E1F, 'V'), (0x1E20, 'M', u'ḡ'), (0x1E21, 'V'), (0x1E22, 'M', u'ḣ'), (0x1E23, 'V'), (0x1E24, 'M', u'ḥ'), (0x1E25, 'V'), (0x1E26, 'M', u'ḧ'), (0x1E27, 'V'), (0x1E28, 'M', u'ḩ'), (0x1E29, 'V'), (0x1E2A, 'M', u'ḫ'), (0x1E2B, 'V'), (0x1E2C, 'M', u'ḭ'), (0x1E2D, 'V'), (0x1E2E, 'M', u'ḯ'), (0x1E2F, 'V'), (0x1E30, 'M', u'ḱ'), (0x1E31, 'V'), (0x1E32, 'M', u'ḳ'), (0x1E33, 'V'), (0x1E34, 'M', u'ḵ'), (0x1E35, 'V'), (0x1E36, 'M', u'ḷ'), (0x1E37, 'V'), (0x1E38, 'M', u'ḹ'), (0x1E39, 'V'), (0x1E3A, 'M', u'ḻ'), (0x1E3B, 'V'), (0x1E3C, 'M', u'ḽ'), (0x1E3D, 'V'), (0x1E3E, 'M', u'ḿ'), (0x1E3F, 'V'), ] def _seg_17(): return [ (0x1E40, 'M', u'ṁ'), (0x1E41, 'V'), (0x1E42, 'M', u'ṃ'), (0x1E43, 'V'), (0x1E44, 'M', u'ṅ'), (0x1E45, 'V'), (0x1E46, 'M', u'ṇ'), (0x1E47, 'V'), (0x1E48, 'M', u'ṉ'), (0x1E49, 'V'), (0x1E4A, 'M', u'ṋ'), (0x1E4B, 'V'), (0x1E4C, 'M', u'ṍ'), (0x1E4D, 'V'), (0x1E4E, 'M', u'ṏ'), (0x1E4F, 'V'), (0x1E50, 'M', u'ṑ'), (0x1E51, 'V'), (0x1E52, 'M', u'ṓ'), (0x1E53, 'V'), (0x1E54, 'M', u'ṕ'), (0x1E55, 'V'), (0x1E56, 'M', u'ṗ'), (0x1E57, 'V'), (0x1E58, 'M', u'ṙ'), (0x1E59, 'V'), (0x1E5A, 'M', u'ṛ'), (0x1E5B, 'V'), (0x1E5C, 'M', u'ṝ'), (0x1E5D, 'V'), (0x1E5E, 'M', u'ṟ'), (0x1E5F, 'V'), (0x1E60, 'M', u'ṡ'), (0x1E61, 'V'), (0x1E62, 'M', u'ṣ'), (0x1E63, 'V'), (0x1E64, 'M', u'ṥ'), (0x1E65, 'V'), (0x1E66, 'M', u'ṧ'), (0x1E67, 'V'), (0x1E68, 'M', u'ṩ'), (0x1E69, 'V'), (0x1E6A, 'M', u'ṫ'), (0x1E6B, 'V'), (0x1E6C, 'M', u'ṭ'), (0x1E6D, 'V'), (0x1E6E, 'M', u'ṯ'), (0x1E6F, 'V'), (0x1E70, 'M', u'ṱ'), (0x1E71, 'V'), (0x1E72, 'M', u'ṳ'), (0x1E73, 'V'), (0x1E74, 'M', u'ṵ'), (0x1E75, 'V'), (0x1E76, 'M', u'ṷ'), (0x1E77, 'V'), (0x1E78, 'M', u'ṹ'), (0x1E79, 'V'), (0x1E7A, 'M', u'ṻ'), (0x1E7B, 'V'), (0x1E7C, 'M', u'ṽ'), (0x1E7D, 'V'), (0x1E7E, 'M', u'ṿ'), (0x1E7F, 'V'), (0x1E80, 'M', u'ẁ'), (0x1E81, 'V'), (0x1E82, 'M', u'ẃ'), (0x1E83, 'V'), (0x1E84, 'M', u'ẅ'), (0x1E85, 'V'), (0x1E86, 'M', u'ẇ'), (0x1E87, 'V'), (0x1E88, 'M', u'ẉ'), (0x1E89, 'V'), (0x1E8A, 'M', u'ẋ'), (0x1E8B, 'V'), (0x1E8C, 'M', u'ẍ'), (0x1E8D, 'V'), (0x1E8E, 'M', u'ẏ'), (0x1E8F, 'V'), (0x1E90, 'M', u'ẑ'), (0x1E91, 'V'), (0x1E92, 'M', u'ẓ'), (0x1E93, 'V'), (0x1E94, 'M', u'ẕ'), (0x1E95, 'V'), (0x1E9A, 'M', u'aʾ'), (0x1E9B, 'M', u'ṡ'), (0x1E9C, 'V'), (0x1E9E, 'M', u'ss'), (0x1E9F, 'V'), (0x1EA0, 'M', u'ạ'), (0x1EA1, 'V'), (0x1EA2, 'M', u'ả'), (0x1EA3, 'V'), (0x1EA4, 'M', u'ấ'), 
(0x1EA5, 'V'), (0x1EA6, 'M', u'ầ'), (0x1EA7, 'V'), (0x1EA8, 'M', u'ẩ'), ] def _seg_18(): return [ (0x1EA9, 'V'), (0x1EAA, 'M', u'ẫ'), (0x1EAB, 'V'), (0x1EAC, 'M', u'ậ'), (0x1EAD, 'V'), (0x1EAE, 'M', u'ắ'), (0x1EAF, 'V'), (0x1EB0, 'M', u'ằ'), (0x1EB1, 'V'), (0x1EB2, 'M', u'ẳ'), (0x1EB3, 'V'), (0x1EB4, 'M', u'ẵ'), (0x1EB5, 'V'), (0x1EB6, 'M', u'ặ'), (0x1EB7, 'V'), (0x1EB8, 'M', u'ẹ'), (0x1EB9, 'V'), (0x1EBA, 'M', u'ẻ'), (0x1EBB, 'V'), (0x1EBC, 'M', u'ẽ'), (0x1EBD, 'V'), (0x1EBE, 'M', u'ế'), (0x1EBF, 'V'), (0x1EC0, 'M', u'ề'), (0x1EC1, 'V'), (0x1EC2, 'M', u'ể'), (0x1EC3, 'V'), (0x1EC4, 'M', u'ễ'), (0x1EC5, 'V'), (0x1EC6, 'M', u'ệ'), (0x1EC7, 'V'), (0x1EC8, 'M', u'ỉ'), (0x1EC9, 'V'), (0x1ECA, 'M', u'ị'), (0x1ECB, 'V'), (0x1ECC, 'M', u'ọ'), (0x1ECD, 'V'), (0x1ECE, 'M', u'ỏ'), (0x1ECF, 'V'), (0x1ED0, 'M', u'ố'), (0x1ED1, 'V'), (0x1ED2, 'M', u'ồ'), (0x1ED3, 'V'), (0x1ED4, 'M', u'ổ'), (0x1ED5, 'V'), (0x1ED6, 'M', u'ỗ'), (0x1ED7, 'V'), (0x1ED8, 'M', u'ộ'), (0x1ED9, 'V'), (0x1EDA, 'M', u'ớ'), (0x1EDB, 'V'), (0x1EDC, 'M', u'ờ'), (0x1EDD, 'V'), (0x1EDE, 'M', u'ở'), (0x1EDF, 'V'), (0x1EE0, 'M', u'ỡ'), (0x1EE1, 'V'), (0x1EE2, 'M', u'ợ'), (0x1EE3, 'V'), (0x1EE4, 'M', u'ụ'), (0x1EE5, 'V'), (0x1EE6, 'M', u'ủ'), (0x1EE7, 'V'), (0x1EE8, 'M', u'ứ'), (0x1EE9, 'V'), (0x1EEA, 'M', u'ừ'), (0x1EEB, 'V'), (0x1EEC, 'M', u'ử'), (0x1EED, 'V'), (0x1EEE, 'M', u'ữ'), (0x1EEF, 'V'), (0x1EF0, 'M', u'ự'), (0x1EF1, 'V'), (0x1EF2, 'M', u'ỳ'), (0x1EF3, 'V'), (0x1EF4, 'M', u'ỵ'), (0x1EF5, 'V'), (0x1EF6, 'M', u'ỷ'), (0x1EF7, 'V'), (0x1EF8, 'M', u'ỹ'), (0x1EF9, 'V'), (0x1EFA, 'M', u'ỻ'), (0x1EFB, 'V'), (0x1EFC, 'M', u'ỽ'), (0x1EFD, 'V'), (0x1EFE, 'M', u'ỿ'), (0x1EFF, 'V'), (0x1F08, 'M', u'ἀ'), (0x1F09, 'M', u'ἁ'), (0x1F0A, 'M', u'ἂ'), (0x1F0B, 'M', u'ἃ'), (0x1F0C, 'M', u'ἄ'), (0x1F0D, 'M', u'ἅ'), (0x1F0E, 'M', u'ἆ'), (0x1F0F, 'M', u'ἇ'), (0x1F10, 'V'), (0x1F16, 'X'), (0x1F18, 'M', u'ἐ'), (0x1F19, 'M', u'ἑ'), (0x1F1A, 'M', u'ἒ'), ] def _seg_19(): return [ (0x1F1B, 'M', u'ἓ'), (0x1F1C, 'M', u'ἔ'), (0x1F1D, 'M', u'ἕ'), (0x1F1E, 'X'), (0x1F20, 'V'), (0x1F28, 'M', u'ἠ'), (0x1F29, 'M', u'ἡ'), (0x1F2A, 'M', u'ἢ'), (0x1F2B, 'M', u'ἣ'), (0x1F2C, 'M', u'ἤ'), (0x1F2D, 'M', u'ἥ'), (0x1F2E, 'M', u'ἦ'), (0x1F2F, 'M', u'ἧ'), (0x1F30, 'V'), (0x1F38, 'M', u'ἰ'), (0x1F39, 'M', u'ἱ'), (0x1F3A, 'M', u'ἲ'), (0x1F3B, 'M', u'ἳ'), (0x1F3C, 'M', u'ἴ'), (0x1F3D, 'M', u'ἵ'), (0x1F3E, 'M', u'ἶ'), (0x1F3F, 'M', u'ἷ'), (0x1F40, 'V'), (0x1F46, 'X'), (0x1F48, 'M', u'ὀ'), (0x1F49, 'M', u'ὁ'), (0x1F4A, 'M', u'ὂ'), (0x1F4B, 'M', u'ὃ'), (0x1F4C, 'M', u'ὄ'), (0x1F4D, 'M', u'ὅ'), (0x1F4E, 'X'), (0x1F50, 'V'), (0x1F58, 'X'), (0x1F59, 'M', u'ὑ'), (0x1F5A, 'X'), (0x1F5B, 'M', u'ὓ'), (0x1F5C, 'X'), (0x1F5D, 'M', u'ὕ'), (0x1F5E, 'X'), (0x1F5F, 'M', u'ὗ'), (0x1F60, 'V'), (0x1F68, 'M', u'ὠ'), (0x1F69, 'M', u'ὡ'), (0x1F6A, 'M', u'ὢ'), (0x1F6B, 'M', u'ὣ'), (0x1F6C, 'M', u'ὤ'), (0x1F6D, 'M', u'ὥ'), (0x1F6E, 'M', u'ὦ'), (0x1F6F, 'M', u'ὧ'), (0x1F70, 'V'), (0x1F71, 'M', u'ά'), (0x1F72, 'V'), (0x1F73, 'M', u'έ'), (0x1F74, 'V'), (0x1F75, 'M', u'ή'), (0x1F76, 'V'), (0x1F77, 'M', u'ί'), (0x1F78, 'V'), (0x1F79, 'M', u'ό'), (0x1F7A, 'V'), (0x1F7B, 'M', u'ύ'), (0x1F7C, 'V'), (0x1F7D, 'M', u'ώ'), (0x1F7E, 'X'), (0x1F80, 'M', u'ἀι'), (0x1F81, 'M', u'ἁι'), (0x1F82, 'M', u'ἂι'), (0x1F83, 'M', u'ἃι'), (0x1F84, 'M', u'ἄι'), (0x1F85, 'M', u'ἅι'), (0x1F86, 'M', u'ἆι'), (0x1F87, 'M', u'ἇι'), (0x1F88, 'M', u'ἀι'), (0x1F89, 'M', u'ἁι'), (0x1F8A, 'M', u'ἂι'), (0x1F8B, 'M', u'ἃι'), (0x1F8C, 'M', u'ἄι'), (0x1F8D, 'M', u'ἅι'), (0x1F8E, 'M', u'ἆι'), (0x1F8F, 'M', u'ἇι'), (0x1F90, 'M', u'ἠι'), (0x1F91, 
'M', u'ἡι'), (0x1F92, 'M', u'ἢι'), (0x1F93, 'M', u'ἣι'), (0x1F94, 'M', u'ἤι'), (0x1F95, 'M', u'ἥι'), (0x1F96, 'M', u'ἦι'), (0x1F97, 'M', u'ἧι'), (0x1F98, 'M', u'ἠι'), (0x1F99, 'M', u'ἡι'), (0x1F9A, 'M', u'ἢι'), (0x1F9B, 'M', u'ἣι'), (0x1F9C, 'M', u'ἤι'), (0x1F9D, 'M', u'ἥι'), (0x1F9E, 'M', u'ἦι'), (0x1F9F, 'M', u'ἧι'), (0x1FA0, 'M', u'ὠι'), (0x1FA1, 'M', u'ὡι'), (0x1FA2, 'M', u'ὢι'), (0x1FA3, 'M', u'ὣι'), ] def _seg_20(): return [ (0x1FA4, 'M', u'ὤι'), (0x1FA5, 'M', u'ὥι'), (0x1FA6, 'M', u'ὦι'), (0x1FA7, 'M', u'ὧι'), (0x1FA8, 'M', u'ὠι'), (0x1FA9, 'M', u'ὡι'), (0x1FAA, 'M', u'ὢι'), (0x1FAB, 'M', u'ὣι'), (0x1FAC, 'M', u'ὤι'), (0x1FAD, 'M', u'ὥι'), (0x1FAE, 'M', u'ὦι'), (0x1FAF, 'M', u'ὧι'), (0x1FB0, 'V'), (0x1FB2, 'M', u'ὰι'), (0x1FB3, 'M', u'αι'), (0x1FB4, 'M', u'άι'), (0x1FB5, 'X'), (0x1FB6, 'V'), (0x1FB7, 'M', u'ᾶι'), (0x1FB8, 'M', u'ᾰ'), (0x1FB9, 'M', u'ᾱ'), (0x1FBA, 'M', u'ὰ'), (0x1FBB, 'M', u'ά'), (0x1FBC, 'M', u'αι'), (0x1FBD, '3', u' ̓'), (0x1FBE, 'M', u'ι'), (0x1FBF, '3', u' ̓'), (0x1FC0, '3', u' ͂'), (0x1FC1, '3', u' ̈͂'), (0x1FC2, 'M', u'ὴι'), (0x1FC3, 'M', u'ηι'), (0x1FC4, 'M', u'ήι'), (0x1FC5, 'X'), (0x1FC6, 'V'), (0x1FC7, 'M', u'ῆι'), (0x1FC8, 'M', u'ὲ'), (0x1FC9, 'M', u'έ'), (0x1FCA, 'M', u'ὴ'), (0x1FCB, 'M', u'ή'), (0x1FCC, 'M', u'ηι'), (0x1FCD, '3', u' ̓̀'), (0x1FCE, '3', u' ̓́'), (0x1FCF, '3', u' ̓͂'), (0x1FD0, 'V'), (0x1FD3, 'M', u'ΐ'), (0x1FD4, 'X'), (0x1FD6, 'V'), (0x1FD8, 'M', u'ῐ'), (0x1FD9, 'M', u'ῑ'), (0x1FDA, 'M', u'ὶ'), (0x1FDB, 'M', u'ί'), (0x1FDC, 'X'), (0x1FDD, '3', u' ̔̀'), (0x1FDE, '3', u' ̔́'), (0x1FDF, '3', u' ̔͂'), (0x1FE0, 'V'), (0x1FE3, 'M', u'ΰ'), (0x1FE4, 'V'), (0x1FE8, 'M', u'ῠ'), (0x1FE9, 'M', u'ῡ'), (0x1FEA, 'M', u'ὺ'), (0x1FEB, 'M', u'ύ'), (0x1FEC, 'M', u'ῥ'), (0x1FED, '3', u' ̈̀'), (0x1FEE, '3', u' ̈́'), (0x1FEF, '3', u'`'), (0x1FF0, 'X'), (0x1FF2, 'M', u'ὼι'), (0x1FF3, 'M', u'ωι'), (0x1FF4, 'M', u'ώι'), (0x1FF5, 'X'), (0x1FF6, 'V'), (0x1FF7, 'M', u'ῶι'), (0x1FF8, 'M', u'ὸ'), (0x1FF9, 'M', u'ό'), (0x1FFA, 'M', u'ὼ'), (0x1FFB, 'M', u'ώ'), (0x1FFC, 'M', u'ωι'), (0x1FFD, '3', u' ́'), (0x1FFE, '3', u' ̔'), (0x1FFF, 'X'), (0x2000, '3', u' '), (0x200B, 'I'), (0x200C, 'D', u''), (0x200E, 'X'), (0x2010, 'V'), (0x2011, 'M', u'‐'), (0x2012, 'V'), (0x2017, '3', u' ̳'), (0x2018, 'V'), (0x2024, 'X'), (0x2027, 'V'), (0x2028, 'X'), (0x202F, '3', u' '), (0x2030, 'V'), (0x2033, 'M', u'′′'), (0x2034, 'M', u'′′′'), (0x2035, 'V'), (0x2036, 'M', u'‵‵'), (0x2037, 'M', u'‵‵‵'), ] def _seg_21(): return [ (0x2038, 'V'), (0x203C, '3', u'!!'), (0x203D, 'V'), (0x203E, '3', u' ̅'), (0x203F, 'V'), (0x2047, '3', u'??'), (0x2048, '3', u'?!'), (0x2049, '3', u'!?'), (0x204A, 'V'), (0x2057, 'M', u'′′′′'), (0x2058, 'V'), (0x205F, '3', u' '), (0x2060, 'I'), (0x2061, 'X'), (0x2064, 'I'), (0x2065, 'X'), (0x2070, 'M', u'0'), (0x2071, 'M', u'i'), (0x2072, 'X'), (0x2074, 'M', u'4'), (0x2075, 'M', u'5'), (0x2076, 'M', u'6'), (0x2077, 'M', u'7'), (0x2078, 'M', u'8'), (0x2079, 'M', u'9'), (0x207A, '3', u'+'), (0x207B, 'M', u'−'), (0x207C, '3', u'='), (0x207D, '3', u'('), (0x207E, '3', u')'), (0x207F, 'M', u'n'), (0x2080, 'M', u'0'), (0x2081, 'M', u'1'), (0x2082, 'M', u'2'), (0x2083, 'M', u'3'), (0x2084, 'M', u'4'), (0x2085, 'M', u'5'), (0x2086, 'M', u'6'), (0x2087, 'M', u'7'), (0x2088, 'M', u'8'), (0x2089, 'M', u'9'), (0x208A, '3', u'+'), (0x208B, 'M', u'−'), (0x208C, '3', u'='), (0x208D, '3', u'('), (0x208E, '3', u')'), (0x208F, 'X'), (0x2090, 'M', u'a'), (0x2091, 'M', u'e'), (0x2092, 'M', u'o'), (0x2093, 'M', u'x'), (0x2094, 'M', u'ə'), (0x2095, 'M', u'h'), (0x2096, 'M', u'k'), (0x2097, 
'M', u'l'), (0x2098, 'M', u'm'), (0x2099, 'M', u'n'), (0x209A, 'M', u'p'), (0x209B, 'M', u's'), (0x209C, 'M', u't'), (0x209D, 'X'), (0x20A0, 'V'), (0x20A8, 'M', u'rs'), (0x20A9, 'V'), (0x20C0, 'X'), (0x20D0, 'V'), (0x20F1, 'X'), (0x2100, '3', u'a/c'), (0x2101, '3', u'a/s'), (0x2102, 'M', u'c'), (0x2103, 'M', u'°c'), (0x2104, 'V'), (0x2105, '3', u'c/o'), (0x2106, '3', u'c/u'), (0x2107, 'M', u'ɛ'), (0x2108, 'V'), (0x2109, 'M', u'°f'), (0x210A, 'M', u'g'), (0x210B, 'M', u'h'), (0x210F, 'M', u'ħ'), (0x2110, 'M', u'i'), (0x2112, 'M', u'l'), (0x2114, 'V'), (0x2115, 'M', u'n'), (0x2116, 'M', u'no'), (0x2117, 'V'), (0x2119, 'M', u'p'), (0x211A, 'M', u'q'), (0x211B, 'M', u'r'), (0x211E, 'V'), (0x2120, 'M', u'sm'), (0x2121, 'M', u'tel'), (0x2122, 'M', u'tm'), (0x2123, 'V'), (0x2124, 'M', u'z'), (0x2125, 'V'), (0x2126, 'M', u'ω'), (0x2127, 'V'), (0x2128, 'M', u'z'), (0x2129, 'V'), ] def _seg_22(): return [ (0x212A, 'M', u'k'), (0x212B, 'M', u'å'), (0x212C, 'M', u'b'), (0x212D, 'M', u'c'), (0x212E, 'V'), (0x212F, 'M', u'e'), (0x2131, 'M', u'f'), (0x2132, 'X'), (0x2133, 'M', u'm'), (0x2134, 'M', u'o'), (0x2135, 'M', u'א'), (0x2136, 'M', u'ב'), (0x2137, 'M', u'ג'), (0x2138, 'M', u'ד'), (0x2139, 'M', u'i'), (0x213A, 'V'), (0x213B, 'M', u'fax'), (0x213C, 'M', u'π'), (0x213D, 'M', u'γ'), (0x213F, 'M', u'π'), (0x2140, 'M', u'∑'), (0x2141, 'V'), (0x2145, 'M', u'd'), (0x2147, 'M', u'e'), (0x2148, 'M', u'i'), (0x2149, 'M', u'j'), (0x214A, 'V'), (0x2150, 'M', u'1⁄7'), (0x2151, 'M', u'1⁄9'), (0x2152, 'M', u'1⁄10'), (0x2153, 'M', u'1⁄3'), (0x2154, 'M', u'2⁄3'), (0x2155, 'M', u'1⁄5'), (0x2156, 'M', u'2⁄5'), (0x2157, 'M', u'3⁄5'), (0x2158, 'M', u'4⁄5'), (0x2159, 'M', u'1⁄6'), (0x215A, 'M', u'5⁄6'), (0x215B, 'M', u'1⁄8'), (0x215C, 'M', u'3⁄8'), (0x215D, 'M', u'5⁄8'), (0x215E, 'M', u'7⁄8'), (0x215F, 'M', u'1⁄'), (0x2160, 'M', u'i'), (0x2161, 'M', u'ii'), (0x2162, 'M', u'iii'), (0x2163, 'M', u'iv'), (0x2164, 'M', u'v'), (0x2165, 'M', u'vi'), (0x2166, 'M', u'vii'), (0x2167, 'M', u'viii'), (0x2168, 'M', u'ix'), (0x2169, 'M', u'x'), (0x216A, 'M', u'xi'), (0x216B, 'M', u'xii'), (0x216C, 'M', u'l'), (0x216D, 'M', u'c'), (0x216E, 'M', u'd'), (0x216F, 'M', u'm'), (0x2170, 'M', u'i'), (0x2171, 'M', u'ii'), (0x2172, 'M', u'iii'), (0x2173, 'M', u'iv'), (0x2174, 'M', u'v'), (0x2175, 'M', u'vi'), (0x2176, 'M', u'vii'), (0x2177, 'M', u'viii'), (0x2178, 'M', u'ix'), (0x2179, 'M', u'x'), (0x217A, 'M', u'xi'), (0x217B, 'M', u'xii'), (0x217C, 'M', u'l'), (0x217D, 'M', u'c'), (0x217E, 'M', u'd'), (0x217F, 'M', u'm'), (0x2180, 'V'), (0x2183, 'X'), (0x2184, 'V'), (0x2189, 'M', u'0⁄3'), (0x218A, 'V'), (0x218C, 'X'), (0x2190, 'V'), (0x222C, 'M', u'∫∫'), (0x222D, 'M', u'∫∫∫'), (0x222E, 'V'), (0x222F, 'M', u'∮∮'), (0x2230, 'M', u'∮∮∮'), (0x2231, 'V'), (0x2260, '3'), (0x2261, 'V'), (0x226E, '3'), (0x2270, 'V'), (0x2329, 'M', u'〈'), (0x232A, 'M', u'〉'), (0x232B, 'V'), (0x2427, 'X'), (0x2440, 'V'), (0x244B, 'X'), (0x2460, 'M', u'1'), (0x2461, 'M', u'2'), ] def _seg_23(): return [ (0x2462, 'M', u'3'), (0x2463, 'M', u'4'), (0x2464, 'M', u'5'), (0x2465, 'M', u'6'), (0x2466, 'M', u'7'), (0x2467, 'M', u'8'), (0x2468, 'M', u'9'), (0x2469, 'M', u'10'), (0x246A, 'M', u'11'), (0x246B, 'M', u'12'), (0x246C, 'M', u'13'), (0x246D, 'M', u'14'), (0x246E, 'M', u'15'), (0x246F, 'M', u'16'), (0x2470, 'M', u'17'), (0x2471, 'M', u'18'), (0x2472, 'M', u'19'), (0x2473, 'M', u'20'), (0x2474, '3', u'(1)'), (0x2475, '3', u'(2)'), (0x2476, '3', u'(3)'), (0x2477, '3', u'(4)'), (0x2478, '3', u'(5)'), (0x2479, '3', u'(6)'), (0x247A, '3', u'(7)'), (0x247B, '3', u'(8)'), 
(0x247C, '3', u'(9)'), (0x247D, '3', u'(10)'), (0x247E, '3', u'(11)'), (0x247F, '3', u'(12)'), (0x2480, '3', u'(13)'), (0x2481, '3', u'(14)'), (0x2482, '3', u'(15)'), (0x2483, '3', u'(16)'), (0x2484, '3', u'(17)'), (0x2485, '3', u'(18)'), (0x2486, '3', u'(19)'), (0x2487, '3', u'(20)'), (0x2488, 'X'), (0x249C, '3', u'(a)'), (0x249D, '3', u'(b)'), (0x249E, '3', u'(c)'), (0x249F, '3', u'(d)'), (0x24A0, '3', u'(e)'), (0x24A1, '3', u'(f)'), (0x24A2, '3', u'(g)'), (0x24A3, '3', u'(h)'), (0x24A4, '3', u'(i)'), (0x24A5, '3', u'(j)'), (0x24A6, '3', u'(k)'), (0x24A7, '3', u'(l)'), (0x24A8, '3', u'(m)'), (0x24A9, '3', u'(n)'), (0x24AA, '3', u'(o)'), (0x24AB, '3', u'(p)'), (0x24AC, '3', u'(q)'), (0x24AD, '3', u'(r)'), (0x24AE, '3', u'(s)'), (0x24AF, '3', u'(t)'), (0x24B0, '3', u'(u)'), (0x24B1, '3', u'(v)'), (0x24B2, '3', u'(w)'), (0x24B3, '3', u'(x)'), (0x24B4, '3', u'(y)'), (0x24B5, '3', u'(z)'), (0x24B6, 'M', u'a'), (0x24B7, 'M', u'b'), (0x24B8, 'M', u'c'), (0x24B9, 'M', u'd'), (0x24BA, 'M', u'e'), (0x24BB, 'M', u'f'), (0x24BC, 'M', u'g'), (0x24BD, 'M', u'h'), (0x24BE, 'M', u'i'), (0x24BF, 'M', u'j'), (0x24C0, 'M', u'k'), (0x24C1, 'M', u'l'), (0x24C2, 'M', u'm'), (0x24C3, 'M', u'n'), (0x24C4, 'M', u'o'), (0x24C5, 'M', u'p'), (0x24C6, 'M', u'q'), (0x24C7, 'M', u'r'), (0x24C8, 'M', u's'), (0x24C9, 'M', u't'), (0x24CA, 'M', u'u'), (0x24CB, 'M', u'v'), (0x24CC, 'M', u'w'), (0x24CD, 'M', u'x'), (0x24CE, 'M', u'y'), (0x24CF, 'M', u'z'), (0x24D0, 'M', u'a'), (0x24D1, 'M', u'b'), (0x24D2, 'M', u'c'), (0x24D3, 'M', u'd'), (0x24D4, 'M', u'e'), (0x24D5, 'M', u'f'), (0x24D6, 'M', u'g'), (0x24D7, 'M', u'h'), (0x24D8, 'M', u'i'), ] def _seg_24(): return [ (0x24D9, 'M', u'j'), (0x24DA, 'M', u'k'), (0x24DB, 'M', u'l'), (0x24DC, 'M', u'm'), (0x24DD, 'M', u'n'), (0x24DE, 'M', u'o'), (0x24DF, 'M', u'p'), (0x24E0, 'M', u'q'), (0x24E1, 'M', u'r'), (0x24E2, 'M', u's'), (0x24E3, 'M', u't'), (0x24E4, 'M', u'u'), (0x24E5, 'M', u'v'), (0x24E6, 'M', u'w'), (0x24E7, 'M', u'x'), (0x24E8, 'M', u'y'), (0x24E9, 'M', u'z'), (0x24EA, 'M', u'0'), (0x24EB, 'V'), (0x2A0C, 'M', u'∫∫∫∫'), (0x2A0D, 'V'), (0x2A74, '3', u'::='), (0x2A75, '3', u'=='), (0x2A76, '3', u'==='), (0x2A77, 'V'), (0x2ADC, 'M', u'⫝̸'), (0x2ADD, 'V'), (0x2B74, 'X'), (0x2B76, 'V'), (0x2B96, 'X'), (0x2B98, 'V'), (0x2BC9, 'X'), (0x2BCA, 'V'), (0x2BFF, 'X'), (0x2C00, 'M', u'ⰰ'), (0x2C01, 'M', u'ⰱ'), (0x2C02, 'M', u'ⰲ'), (0x2C03, 'M', u'ⰳ'), (0x2C04, 'M', u'ⰴ'), (0x2C05, 'M', u'ⰵ'), (0x2C06, 'M', u'ⰶ'), (0x2C07, 'M', u'ⰷ'), (0x2C08, 'M', u'ⰸ'), (0x2C09, 'M', u'ⰹ'), (0x2C0A, 'M', u'ⰺ'), (0x2C0B, 'M', u'ⰻ'), (0x2C0C, 'M', u'ⰼ'), (0x2C0D, 'M', u'ⰽ'), (0x2C0E, 'M', u'ⰾ'), (0x2C0F, 'M', u'ⰿ'), (0x2C10, 'M', u'ⱀ'), (0x2C11, 'M', u'ⱁ'), (0x2C12, 'M', u'ⱂ'), (0x2C13, 'M', u'ⱃ'), (0x2C14, 'M', u'ⱄ'), (0x2C15, 'M', u'ⱅ'), (0x2C16, 'M', u'ⱆ'), (0x2C17, 'M', u'ⱇ'), (0x2C18, 'M', u'ⱈ'), (0x2C19, 'M', u'ⱉ'), (0x2C1A, 'M', u'ⱊ'), (0x2C1B, 'M', u'ⱋ'), (0x2C1C, 'M', u'ⱌ'), (0x2C1D, 'M', u'ⱍ'), (0x2C1E, 'M', u'ⱎ'), (0x2C1F, 'M', u'ⱏ'), (0x2C20, 'M', u'ⱐ'), (0x2C21, 'M', u'ⱑ'), (0x2C22, 'M', u'ⱒ'), (0x2C23, 'M', u'ⱓ'), (0x2C24, 'M', u'ⱔ'), (0x2C25, 'M', u'ⱕ'), (0x2C26, 'M', u'ⱖ'), (0x2C27, 'M', u'ⱗ'), (0x2C28, 'M', u'ⱘ'), (0x2C29, 'M', u'ⱙ'), (0x2C2A, 'M', u'ⱚ'), (0x2C2B, 'M', u'ⱛ'), (0x2C2C, 'M', u'ⱜ'), (0x2C2D, 'M', u'ⱝ'), (0x2C2E, 'M', u'ⱞ'), (0x2C2F, 'X'), (0x2C30, 'V'), (0x2C5F, 'X'), (0x2C60, 'M', u'ⱡ'), (0x2C61, 'V'), (0x2C62, 'M', u'ɫ'), (0x2C63, 'M', u'ᵽ'), (0x2C64, 'M', u'ɽ'), (0x2C65, 'V'), (0x2C67, 'M', u'ⱨ'), (0x2C68, 'V'), (0x2C69, 'M', u'ⱪ'), (0x2C6A, 'V'), (0x2C6B, 'M', 
u'ⱬ'), (0x2C6C, 'V'), (0x2C6D, 'M', u'ɑ'), (0x2C6E, 'M', u'ɱ'), (0x2C6F, 'M', u'ɐ'), (0x2C70, 'M', u'ɒ'), ] def _seg_25(): return [ (0x2C71, 'V'), (0x2C72, 'M', u'ⱳ'), (0x2C73, 'V'), (0x2C75, 'M', u'ⱶ'), (0x2C76, 'V'), (0x2C7C, 'M', u'j'), (0x2C7D, 'M', u'v'), (0x2C7E, 'M', u'ȿ'), (0x2C7F, 'M', u'ɀ'), (0x2C80, 'M', u'ⲁ'), (0x2C81, 'V'), (0x2C82, 'M', u'ⲃ'), (0x2C83, 'V'), (0x2C84, 'M', u'ⲅ'), (0x2C85, 'V'), (0x2C86, 'M', u'ⲇ'), (0x2C87, 'V'), (0x2C88, 'M', u'ⲉ'), (0x2C89, 'V'), (0x2C8A, 'M', u'ⲋ'), (0x2C8B, 'V'), (0x2C8C, 'M', u'ⲍ'), (0x2C8D, 'V'), (0x2C8E, 'M', u'ⲏ'), (0x2C8F, 'V'), (0x2C90, 'M', u'ⲑ'), (0x2C91, 'V'), (0x2C92, 'M', u'ⲓ'), (0x2C93, 'V'), (0x2C94, 'M', u'ⲕ'), (0x2C95, 'V'), (0x2C96, 'M', u'ⲗ'), (0x2C97, 'V'), (0x2C98, 'M', u'ⲙ'), (0x2C99, 'V'), (0x2C9A, 'M', u'ⲛ'), (0x2C9B, 'V'), (0x2C9C, 'M', u'ⲝ'), (0x2C9D, 'V'), (0x2C9E, 'M', u'ⲟ'), (0x2C9F, 'V'), (0x2CA0, 'M', u'ⲡ'), (0x2CA1, 'V'), (0x2CA2, 'M', u'ⲣ'), (0x2CA3, 'V'), (0x2CA4, 'M', u'ⲥ'), (0x2CA5, 'V'), (0x2CA6, 'M', u'ⲧ'), (0x2CA7, 'V'), (0x2CA8, 'M', u'ⲩ'), (0x2CA9, 'V'), (0x2CAA, 'M', u'ⲫ'), (0x2CAB, 'V'), (0x2CAC, 'M', u'ⲭ'), (0x2CAD, 'V'), (0x2CAE, 'M', u'ⲯ'), (0x2CAF, 'V'), (0x2CB0, 'M', u'ⲱ'), (0x2CB1, 'V'), (0x2CB2, 'M', u'ⲳ'), (0x2CB3, 'V'), (0x2CB4, 'M', u'ⲵ'), (0x2CB5, 'V'), (0x2CB6, 'M', u'ⲷ'), (0x2CB7, 'V'), (0x2CB8, 'M', u'ⲹ'), (0x2CB9, 'V'), (0x2CBA, 'M', u'ⲻ'), (0x2CBB, 'V'), (0x2CBC, 'M', u'ⲽ'), (0x2CBD, 'V'), (0x2CBE, 'M', u'ⲿ'), (0x2CBF, 'V'), (0x2CC0, 'M', u'ⳁ'), (0x2CC1, 'V'), (0x2CC2, 'M', u'ⳃ'), (0x2CC3, 'V'), (0x2CC4, 'M', u'ⳅ'), (0x2CC5, 'V'), (0x2CC6, 'M', u'ⳇ'), (0x2CC7, 'V'), (0x2CC8, 'M', u'ⳉ'), (0x2CC9, 'V'), (0x2CCA, 'M', u'ⳋ'), (0x2CCB, 'V'), (0x2CCC, 'M', u'ⳍ'), (0x2CCD, 'V'), (0x2CCE, 'M', u'ⳏ'), (0x2CCF, 'V'), (0x2CD0, 'M', u'ⳑ'), (0x2CD1, 'V'), (0x2CD2, 'M', u'ⳓ'), (0x2CD3, 'V'), (0x2CD4, 'M', u'ⳕ'), (0x2CD5, 'V'), (0x2CD6, 'M', u'ⳗ'), (0x2CD7, 'V'), (0x2CD8, 'M', u'ⳙ'), (0x2CD9, 'V'), (0x2CDA, 'M', u'ⳛ'), ] def _seg_26(): return [ (0x2CDB, 'V'), (0x2CDC, 'M', u'ⳝ'), (0x2CDD, 'V'), (0x2CDE, 'M', u'ⳟ'), (0x2CDF, 'V'), (0x2CE0, 'M', u'ⳡ'), (0x2CE1, 'V'), (0x2CE2, 'M', u'ⳣ'), (0x2CE3, 'V'), (0x2CEB, 'M', u'ⳬ'), (0x2CEC, 'V'), (0x2CED, 'M', u'ⳮ'), (0x2CEE, 'V'), (0x2CF2, 'M', u'ⳳ'), (0x2CF3, 'V'), (0x2CF4, 'X'), (0x2CF9, 'V'), (0x2D26, 'X'), (0x2D27, 'V'), (0x2D28, 'X'), (0x2D2D, 'V'), (0x2D2E, 'X'), (0x2D30, 'V'), (0x2D68, 'X'), (0x2D6F, 'M', u'ⵡ'), (0x2D70, 'V'), (0x2D71, 'X'), (0x2D7F, 'V'), (0x2D97, 'X'), (0x2DA0, 'V'), (0x2DA7, 'X'), (0x2DA8, 'V'), (0x2DAF, 'X'), (0x2DB0, 'V'), (0x2DB7, 'X'), (0x2DB8, 'V'), (0x2DBF, 'X'), (0x2DC0, 'V'), (0x2DC7, 'X'), (0x2DC8, 'V'), (0x2DCF, 'X'), (0x2DD0, 'V'), (0x2DD7, 'X'), (0x2DD8, 'V'), (0x2DDF, 'X'), (0x2DE0, 'V'), (0x2E4F, 'X'), (0x2E80, 'V'), (0x2E9A, 'X'), (0x2E9B, 'V'), (0x2E9F, 'M', u'母'), (0x2EA0, 'V'), (0x2EF3, 'M', u'龟'), (0x2EF4, 'X'), (0x2F00, 'M', u'一'), (0x2F01, 'M', u'丨'), (0x2F02, 'M', u'丶'), (0x2F03, 'M', u'丿'), (0x2F04, 'M', u'乙'), (0x2F05, 'M', u'亅'), (0x2F06, 'M', u'二'), (0x2F07, 'M', u'亠'), (0x2F08, 'M', u'人'), (0x2F09, 'M', u'儿'), (0x2F0A, 'M', u'入'), (0x2F0B, 'M', u'八'), (0x2F0C, 'M', u'冂'), (0x2F0D, 'M', u'冖'), (0x2F0E, 'M', u'冫'), (0x2F0F, 'M', u'几'), (0x2F10, 'M', u'凵'), (0x2F11, 'M', u'刀'), (0x2F12, 'M', u'力'), (0x2F13, 'M', u'勹'), (0x2F14, 'M', u'匕'), (0x2F15, 'M', u'匚'), (0x2F16, 'M', u'匸'), (0x2F17, 'M', u'十'), (0x2F18, 'M', u'卜'), (0x2F19, 'M', u'卩'), (0x2F1A, 'M', u'厂'), (0x2F1B, 'M', u'厶'), (0x2F1C, 'M', u'又'), (0x2F1D, 'M', u'口'), (0x2F1E, 'M', u'囗'), (0x2F1F, 'M', u'土'), (0x2F20, 'M', u'士'), (0x2F21, 'M', u'夂'), 
(0x2F22, 'M', u'夊'), (0x2F23, 'M', u'夕'), (0x2F24, 'M', u'大'), (0x2F25, 'M', u'女'), (0x2F26, 'M', u'子'), (0x2F27, 'M', u'宀'), (0x2F28, 'M', u'寸'), (0x2F29, 'M', u'小'), (0x2F2A, 'M', u'尢'), (0x2F2B, 'M', u'尸'), (0x2F2C, 'M', u'屮'), (0x2F2D, 'M', u'山'), ] def _seg_27(): return [ (0x2F2E, 'M', u'巛'), (0x2F2F, 'M', u'工'), (0x2F30, 'M', u'己'), (0x2F31, 'M', u'巾'), (0x2F32, 'M', u'干'), (0x2F33, 'M', u'幺'), (0x2F34, 'M', u'广'), (0x2F35, 'M', u'廴'), (0x2F36, 'M', u'廾'), (0x2F37, 'M', u'弋'), (0x2F38, 'M', u'弓'), (0x2F39, 'M', u'彐'), (0x2F3A, 'M', u'彡'), (0x2F3B, 'M', u'彳'), (0x2F3C, 'M', u'心'), (0x2F3D, 'M', u'戈'), (0x2F3E, 'M', u'戶'), (0x2F3F, 'M', u'手'), (0x2F40, 'M', u'支'), (0x2F41, 'M', u'攴'), (0x2F42, 'M', u'文'), (0x2F43, 'M', u'斗'), (0x2F44, 'M', u'斤'), (0x2F45, 'M', u'方'), (0x2F46, 'M', u'无'), (0x2F47, 'M', u'日'), (0x2F48, 'M', u'曰'), (0x2F49, 'M', u'月'), (0x2F4A, 'M', u'木'), (0x2F4B, 'M', u'欠'), (0x2F4C, 'M', u'止'), (0x2F4D, 'M', u'歹'), (0x2F4E, 'M', u'殳'), (0x2F4F, 'M', u'毋'), (0x2F50, 'M', u'比'), (0x2F51, 'M', u'毛'), (0x2F52, 'M', u'氏'), (0x2F53, 'M', u'气'), (0x2F54, 'M', u'水'), (0x2F55, 'M', u'火'), (0x2F56, 'M', u'爪'), (0x2F57, 'M', u'父'), (0x2F58, 'M', u'爻'), (0x2F59, 'M', u'爿'), (0x2F5A, 'M', u'片'), (0x2F5B, 'M', u'牙'), (0x2F5C, 'M', u'牛'), (0x2F5D, 'M', u'犬'), (0x2F5E, 'M', u'玄'), (0x2F5F, 'M', u'玉'), (0x2F60, 'M', u'瓜'), (0x2F61, 'M', u'瓦'), (0x2F62, 'M', u'甘'), (0x2F63, 'M', u'生'), (0x2F64, 'M', u'用'), (0x2F65, 'M', u'田'), (0x2F66, 'M', u'疋'), (0x2F67, 'M', u'疒'), (0x2F68, 'M', u'癶'), (0x2F69, 'M', u'白'), (0x2F6A, 'M', u'皮'), (0x2F6B, 'M', u'皿'), (0x2F6C, 'M', u'目'), (0x2F6D, 'M', u'矛'), (0x2F6E, 'M', u'矢'), (0x2F6F, 'M', u'石'), (0x2F70, 'M', u'示'), (0x2F71, 'M', u'禸'), (0x2F72, 'M', u'禾'), (0x2F73, 'M', u'穴'), (0x2F74, 'M', u'立'), (0x2F75, 'M', u'竹'), (0x2F76, 'M', u'米'), (0x2F77, 'M', u'糸'), (0x2F78, 'M', u'缶'), (0x2F79, 'M', u'网'), (0x2F7A, 'M', u'羊'), (0x2F7B, 'M', u'羽'), (0x2F7C, 'M', u'老'), (0x2F7D, 'M', u'而'), (0x2F7E, 'M', u'耒'), (0x2F7F, 'M', u'耳'), (0x2F80, 'M', u'聿'), (0x2F81, 'M', u'肉'), (0x2F82, 'M', u'臣'), (0x2F83, 'M', u'自'), (0x2F84, 'M', u'至'), (0x2F85, 'M', u'臼'), (0x2F86, 'M', u'舌'), (0x2F87, 'M', u'舛'), (0x2F88, 'M', u'舟'), (0x2F89, 'M', u'艮'), (0x2F8A, 'M', u'色'), (0x2F8B, 'M', u'艸'), (0x2F8C, 'M', u'虍'), (0x2F8D, 'M', u'虫'), (0x2F8E, 'M', u'血'), (0x2F8F, 'M', u'行'), (0x2F90, 'M', u'衣'), (0x2F91, 'M', u'襾'), ] def _seg_28(): return [ (0x2F92, 'M', u'見'), (0x2F93, 'M', u'角'), (0x2F94, 'M', u'言'), (0x2F95, 'M', u'谷'), (0x2F96, 'M', u'豆'), (0x2F97, 'M', u'豕'), (0x2F98, 'M', u'豸'), (0x2F99, 'M', u'貝'), (0x2F9A, 'M', u'赤'), (0x2F9B, 'M', u'走'), (0x2F9C, 'M', u'足'), (0x2F9D, 'M', u'身'), (0x2F9E, 'M', u'車'), (0x2F9F, 'M', u'辛'), (0x2FA0, 'M', u'辰'), (0x2FA1, 'M', u'辵'), (0x2FA2, 'M', u'邑'), (0x2FA3, 'M', u'酉'), (0x2FA4, 'M', u'釆'), (0x2FA5, 'M', u'里'), (0x2FA6, 'M', u'金'), (0x2FA7, 'M', u'長'), (0x2FA8, 'M', u'門'), (0x2FA9, 'M', u'阜'), (0x2FAA, 'M', u'隶'), (0x2FAB, 'M', u'隹'), (0x2FAC, 'M', u'雨'), (0x2FAD, 'M', u'靑'), (0x2FAE, 'M', u'非'), (0x2FAF, 'M', u'面'), (0x2FB0, 'M', u'革'), (0x2FB1, 'M', u'韋'), (0x2FB2, 'M', u'韭'), (0x2FB3, 'M', u'音'), (0x2FB4, 'M', u'頁'), (0x2FB5, 'M', u'風'), (0x2FB6, 'M', u'飛'), (0x2FB7, 'M', u'食'), (0x2FB8, 'M', u'首'), (0x2FB9, 'M', u'香'), (0x2FBA, 'M', u'馬'), (0x2FBB, 'M', u'骨'), (0x2FBC, 'M', u'高'), (0x2FBD, 'M', u'髟'), (0x2FBE, 'M', u'鬥'), (0x2FBF, 'M', u'鬯'), (0x2FC0, 'M', u'鬲'), (0x2FC1, 'M', u'鬼'), (0x2FC2, 'M', u'魚'), (0x2FC3, 'M', u'鳥'), (0x2FC4, 'M', u'鹵'), (0x2FC5, 'M', u'鹿'), (0x2FC6, 'M', u'麥'), (0x2FC7, 'M', u'麻'), (0x2FC8, 'M', 
u'黃'), (0x2FC9, 'M', u'黍'), (0x2FCA, 'M', u'黑'), (0x2FCB, 'M', u'黹'), (0x2FCC, 'M', u'黽'), (0x2FCD, 'M', u'鼎'), (0x2FCE, 'M', u'鼓'), (0x2FCF, 'M', u'鼠'), (0x2FD0, 'M', u'鼻'), (0x2FD1, 'M', u'齊'), (0x2FD2, 'M', u'齒'), (0x2FD3, 'M', u'龍'), (0x2FD4, 'M', u'龜'), (0x2FD5, 'M', u'龠'), (0x2FD6, 'X'), (0x3000, '3', u' '), (0x3001, 'V'), (0x3002, 'M', u'.'), (0x3003, 'V'), (0x3036, 'M', u'〒'), (0x3037, 'V'), (0x3038, 'M', u'十'), (0x3039, 'M', u'卄'), (0x303A, 'M', u'卅'), (0x303B, 'V'), (0x3040, 'X'), (0x3041, 'V'), (0x3097, 'X'), (0x3099, 'V'), (0x309B, '3', u' ゙'), (0x309C, '3', u' ゚'), (0x309D, 'V'), (0x309F, 'M', u'より'), (0x30A0, 'V'), (0x30FF, 'M', u'コト'), (0x3100, 'X'), (0x3105, 'V'), (0x3130, 'X'), (0x3131, 'M', u'ᄀ'), (0x3132, 'M', u'ᄁ'), (0x3133, 'M', u'ᆪ'), (0x3134, 'M', u'ᄂ'), (0x3135, 'M', u'ᆬ'), (0x3136, 'M', u'ᆭ'), (0x3137, 'M', u'ᄃ'), (0x3138, 'M', u'ᄄ'), ] def _seg_29(): return [ (0x3139, 'M', u'ᄅ'), (0x313A, 'M', u'ᆰ'), (0x313B, 'M', u'ᆱ'), (0x313C, 'M', u'ᆲ'), (0x313D, 'M', u'ᆳ'), (0x313E, 'M', u'ᆴ'), (0x313F, 'M', u'ᆵ'), (0x3140, 'M', u'ᄚ'), (0x3141, 'M', u'ᄆ'), (0x3142, 'M', u'ᄇ'), (0x3143, 'M', u'ᄈ'), (0x3144, 'M', u'ᄡ'), (0x3145, 'M', u'ᄉ'), (0x3146, 'M', u'ᄊ'), (0x3147, 'M', u'ᄋ'), (0x3148, 'M', u'ᄌ'), (0x3149, 'M', u'ᄍ'), (0x314A, 'M', u'ᄎ'), (0x314B, 'M', u'ᄏ'), (0x314C, 'M', u'ᄐ'), (0x314D, 'M', u'ᄑ'), (0x314E, 'M', u'ᄒ'), (0x314F, 'M', u'ᅡ'), (0x3150, 'M', u'ᅢ'), (0x3151, 'M', u'ᅣ'), (0x3152, 'M', u'ᅤ'), (0x3153, 'M', u'ᅥ'), (0x3154, 'M', u'ᅦ'), (0x3155, 'M', u'ᅧ'), (0x3156, 'M', u'ᅨ'), (0x3157, 'M', u'ᅩ'), (0x3158, 'M', u'ᅪ'), (0x3159, 'M', u'ᅫ'), (0x315A, 'M', u'ᅬ'), (0x315B, 'M', u'ᅭ'), (0x315C, 'M', u'ᅮ'), (0x315D, 'M', u'ᅯ'), (0x315E, 'M', u'ᅰ'), (0x315F, 'M', u'ᅱ'), (0x3160, 'M', u'ᅲ'), (0x3161, 'M', u'ᅳ'), (0x3162, 'M', u'ᅴ'), (0x3163, 'M', u'ᅵ'), (0x3164, 'X'), (0x3165, 'M', u'ᄔ'), (0x3166, 'M', u'ᄕ'), (0x3167, 'M', u'ᇇ'), (0x3168, 'M', u'ᇈ'), (0x3169, 'M', u'ᇌ'), (0x316A, 'M', u'ᇎ'), (0x316B, 'M', u'ᇓ'), (0x316C, 'M', u'ᇗ'), (0x316D, 'M', u'ᇙ'), (0x316E, 'M', u'ᄜ'), (0x316F, 'M', u'ᇝ'), (0x3170, 'M', u'ᇟ'), (0x3171, 'M', u'ᄝ'), (0x3172, 'M', u'ᄞ'), (0x3173, 'M', u'ᄠ'), (0x3174, 'M', u'ᄢ'), (0x3175, 'M', u'ᄣ'), (0x3176, 'M', u'ᄧ'), (0x3177, 'M', u'ᄩ'), (0x3178, 'M', u'ᄫ'), (0x3179, 'M', u'ᄬ'), (0x317A, 'M', u'ᄭ'), (0x317B, 'M', u'ᄮ'), (0x317C, 'M', u'ᄯ'), (0x317D, 'M', u'ᄲ'), (0x317E, 'M', u'ᄶ'), (0x317F, 'M', u'ᅀ'), (0x3180, 'M', u'ᅇ'), (0x3181, 'M', u'ᅌ'), (0x3182, 'M', u'ᇱ'), (0x3183, 'M', u'ᇲ'), (0x3184, 'M', u'ᅗ'), (0x3185, 'M', u'ᅘ'), (0x3186, 'M', u'ᅙ'), (0x3187, 'M', u'ᆄ'), (0x3188, 'M', u'ᆅ'), (0x3189, 'M', u'ᆈ'), (0x318A, 'M', u'ᆑ'), (0x318B, 'M', u'ᆒ'), (0x318C, 'M', u'ᆔ'), (0x318D, 'M', u'ᆞ'), (0x318E, 'M', u'ᆡ'), (0x318F, 'X'), (0x3190, 'V'), (0x3192, 'M', u'一'), (0x3193, 'M', u'二'), (0x3194, 'M', u'三'), (0x3195, 'M', u'四'), (0x3196, 'M', u'上'), (0x3197, 'M', u'中'), (0x3198, 'M', u'下'), (0x3199, 'M', u'甲'), (0x319A, 'M', u'乙'), (0x319B, 'M', u'丙'), (0x319C, 'M', u'丁'), (0x319D, 'M', u'天'), ] def _seg_30(): return [ (0x319E, 'M', u'地'), (0x319F, 'M', u'人'), (0x31A0, 'V'), (0x31BB, 'X'), (0x31C0, 'V'), (0x31E4, 'X'), (0x31F0, 'V'), (0x3200, '3', u'(ᄀ)'), (0x3201, '3', u'(ᄂ)'), (0x3202, '3', u'(ᄃ)'), (0x3203, '3', u'(ᄅ)'), (0x3204, '3', u'(ᄆ)'), (0x3205, '3', u'(ᄇ)'), (0x3206, '3', u'(ᄉ)'), (0x3207, '3', u'(ᄋ)'), (0x3208, '3', u'(ᄌ)'), (0x3209, '3', u'(ᄎ)'), (0x320A, '3', u'(ᄏ)'), (0x320B, '3', u'(ᄐ)'), (0x320C, '3', u'(ᄑ)'), (0x320D, '3', u'(ᄒ)'), (0x320E, '3', u'(가)'), (0x320F, '3', u'(나)'), (0x3210, '3', u'(다)'), (0x3211, '3', u'(라)'), (0x3212, '3', 
u'(마)'), (0x3213, '3', u'(바)'), (0x3214, '3', u'(사)'), (0x3215, '3', u'(아)'), (0x3216, '3', u'(자)'), (0x3217, '3', u'(차)'), (0x3218, '3', u'(카)'), (0x3219, '3', u'(타)'), (0x321A, '3', u'(파)'), (0x321B, '3', u'(하)'), (0x321C, '3', u'(주)'), (0x321D, '3', u'(오전)'), (0x321E, '3', u'(오후)'), (0x321F, 'X'), (0x3220, '3', u'(一)'), (0x3221, '3', u'(二)'), (0x3222, '3', u'(三)'), (0x3223, '3', u'(四)'), (0x3224, '3', u'(五)'), (0x3225, '3', u'(六)'), (0x3226, '3', u'(七)'), (0x3227, '3', u'(八)'), (0x3228, '3', u'(九)'), (0x3229, '3', u'(十)'), (0x322A, '3', u'(月)'), (0x322B, '3', u'(火)'), (0x322C, '3', u'(水)'), (0x322D, '3', u'(木)'), (0x322E, '3', u'(金)'), (0x322F, '3', u'(土)'), (0x3230, '3', u'(日)'), (0x3231, '3', u'(株)'), (0x3232, '3', u'(有)'), (0x3233, '3', u'(社)'), (0x3234, '3', u'(名)'), (0x3235, '3', u'(特)'), (0x3236, '3', u'(財)'), (0x3237, '3', u'(祝)'), (0x3238, '3', u'(労)'), (0x3239, '3', u'(代)'), (0x323A, '3', u'(呼)'), (0x323B, '3', u'(学)'), (0x323C, '3', u'(監)'), (0x323D, '3', u'(企)'), (0x323E, '3', u'(資)'), (0x323F, '3', u'(協)'), (0x3240, '3', u'(祭)'), (0x3241, '3', u'(休)'), (0x3242, '3', u'(自)'), (0x3243, '3', u'(至)'), (0x3244, 'M', u'問'), (0x3245, 'M', u'幼'), (0x3246, 'M', u'文'), (0x3247, 'M', u'箏'), (0x3248, 'V'), (0x3250, 'M', u'pte'), (0x3251, 'M', u'21'), (0x3252, 'M', u'22'), (0x3253, 'M', u'23'), (0x3254, 'M', u'24'), (0x3255, 'M', u'25'), (0x3256, 'M', u'26'), (0x3257, 'M', u'27'), (0x3258, 'M', u'28'), (0x3259, 'M', u'29'), (0x325A, 'M', u'30'), (0x325B, 'M', u'31'), (0x325C, 'M', u'32'), (0x325D, 'M', u'33'), (0x325E, 'M', u'34'), (0x325F, 'M', u'35'), (0x3260, 'M', u'ᄀ'), (0x3261, 'M', u'ᄂ'), (0x3262, 'M', u'ᄃ'), (0x3263, 'M', u'ᄅ'), ] def _seg_31(): return [ (0x3264, 'M', u'ᄆ'), (0x3265, 'M', u'ᄇ'), (0x3266, 'M', u'ᄉ'), (0x3267, 'M', u'ᄋ'), (0x3268, 'M', u'ᄌ'), (0x3269, 'M', u'ᄎ'), (0x326A, 'M', u'ᄏ'), (0x326B, 'M', u'ᄐ'), (0x326C, 'M', u'ᄑ'), (0x326D, 'M', u'ᄒ'), (0x326E, 'M', u'가'), (0x326F, 'M', u'나'), (0x3270, 'M', u'다'), (0x3271, 'M', u'라'), (0x3272, 'M', u'마'), (0x3273, 'M', u'바'), (0x3274, 'M', u'사'), (0x3275, 'M', u'아'), (0x3276, 'M', u'자'), (0x3277, 'M', u'차'), (0x3278, 'M', u'카'), (0x3279, 'M', u'타'), (0x327A, 'M', u'파'), (0x327B, 'M', u'하'), (0x327C, 'M', u'참고'), (0x327D, 'M', u'주의'), (0x327E, 'M', u'우'), (0x327F, 'V'), (0x3280, 'M', u'一'), (0x3281, 'M', u'二'), (0x3282, 'M', u'三'), (0x3283, 'M', u'四'), (0x3284, 'M', u'五'), (0x3285, 'M', u'六'), (0x3286, 'M', u'七'), (0x3287, 'M', u'八'), (0x3288, 'M', u'九'), (0x3289, 'M', u'十'), (0x328A, 'M', u'月'), (0x328B, 'M', u'火'), (0x328C, 'M', u'水'), (0x328D, 'M', u'木'), (0x328E, 'M', u'金'), (0x328F, 'M', u'土'), (0x3290, 'M', u'日'), (0x3291, 'M', u'株'), (0x3292, 'M', u'有'), (0x3293, 'M', u'社'), (0x3294, 'M', u'名'), (0x3295, 'M', u'特'), (0x3296, 'M', u'財'), (0x3297, 'M', u'祝'), (0x3298, 'M', u'労'), (0x3299, 'M', u'秘'), (0x329A, 'M', u'男'), (0x329B, 'M', u'女'), (0x329C, 'M', u'適'), (0x329D, 'M', u'優'), (0x329E, 'M', u'印'), (0x329F, 'M', u'注'), (0x32A0, 'M', u'項'), (0x32A1, 'M', u'休'), (0x32A2, 'M', u'写'), (0x32A3, 'M', u'正'), (0x32A4, 'M', u'上'), (0x32A5, 'M', u'中'), (0x32A6, 'M', u'下'), (0x32A7, 'M', u'左'), (0x32A8, 'M', u'右'), (0x32A9, 'M', u'医'), (0x32AA, 'M', u'宗'), (0x32AB, 'M', u'学'), (0x32AC, 'M', u'監'), (0x32AD, 'M', u'企'), (0x32AE, 'M', u'資'), (0x32AF, 'M', u'協'), (0x32B0, 'M', u'夜'), (0x32B1, 'M', u'36'), (0x32B2, 'M', u'37'), (0x32B3, 'M', u'38'), (0x32B4, 'M', u'39'), (0x32B5, 'M', u'40'), (0x32B6, 'M', u'41'), (0x32B7, 'M', u'42'), (0x32B8, 'M', u'43'), (0x32B9, 'M', u'44'), (0x32BA, 'M', u'45'), (0x32BB, 'M', u'46'), 
(0x32BC, 'M', u'47'), (0x32BD, 'M', u'48'), (0x32BE, 'M', u'49'), (0x32BF, 'M', u'50'), (0x32C0, 'M', u'1月'), (0x32C1, 'M', u'2月'), (0x32C2, 'M', u'3月'), (0x32C3, 'M', u'4月'), (0x32C4, 'M', u'5月'), (0x32C5, 'M', u'6月'), (0x32C6, 'M', u'7月'), (0x32C7, 'M', u'8月'), ] def _seg_32(): return [ (0x32C8, 'M', u'9月'), (0x32C9, 'M', u'10月'), (0x32CA, 'M', u'11月'), (0x32CB, 'M', u'12月'), (0x32CC, 'M', u'hg'), (0x32CD, 'M', u'erg'), (0x32CE, 'M', u'ev'), (0x32CF, 'M', u'ltd'), (0x32D0, 'M', u'ア'), (0x32D1, 'M', u'イ'), (0x32D2, 'M', u'ウ'), (0x32D3, 'M', u'エ'), (0x32D4, 'M', u'オ'), (0x32D5, 'M', u'カ'), (0x32D6, 'M', u'キ'), (0x32D7, 'M', u'ク'), (0x32D8, 'M', u'ケ'), (0x32D9, 'M', u'コ'), (0x32DA, 'M', u'サ'), (0x32DB, 'M', u'シ'), (0x32DC, 'M', u'ス'), (0x32DD, 'M', u'セ'), (0x32DE, 'M', u'ソ'), (0x32DF, 'M', u'タ'), (0x32E0, 'M', u'チ'), (0x32E1, 'M', u'ツ'), (0x32E2, 'M', u'テ'), (0x32E3, 'M', u'ト'), (0x32E4, 'M', u'ナ'), (0x32E5, 'M', u'ニ'), (0x32E6, 'M', u'ヌ'), (0x32E7, 'M', u'ネ'), (0x32E8, 'M', u'ノ'), (0x32E9, 'M', u'ハ'), (0x32EA, 'M', u'ヒ'), (0x32EB, 'M', u'フ'), (0x32EC, 'M', u'ヘ'), (0x32ED, 'M', u'ホ'), (0x32EE, 'M', u'マ'), (0x32EF, 'M', u'ミ'), (0x32F0, 'M', u'ム'), (0x32F1, 'M', u'メ'), (0x32F2, 'M', u'モ'), (0x32F3, 'M', u'ヤ'), (0x32F4, 'M', u'ユ'), (0x32F5, 'M', u'ヨ'), (0x32F6, 'M', u'ラ'), (0x32F7, 'M', u'リ'), (0x32F8, 'M', u'ル'), (0x32F9, 'M', u'レ'), (0x32FA, 'M', u'ロ'), (0x32FB, 'M', u'ワ'), (0x32FC, 'M', u'ヰ'), (0x32FD, 'M', u'ヱ'), (0x32FE, 'M', u'ヲ'), (0x32FF, 'X'), (0x3300, 'M', u'アパート'), (0x3301, 'M', u'アルファ'), (0x3302, 'M', u'アンペア'), (0x3303, 'M', u'アール'), (0x3304, 'M', u'イニング'), (0x3305, 'M', u'インチ'), (0x3306, 'M', u'ウォン'), (0x3307, 'M', u'エスクード'), (0x3308, 'M', u'エーカー'), (0x3309, 'M', u'オンス'), (0x330A, 'M', u'オーム'), (0x330B, 'M', u'カイリ'), (0x330C, 'M', u'カラット'), (0x330D, 'M', u'カロリー'), (0x330E, 'M', u'ガロン'), (0x330F, 'M', u'ガンマ'), (0x3310, 'M', u'ギガ'), (0x3311, 'M', u'ギニー'), (0x3312, 'M', u'キュリー'), (0x3313, 'M', u'ギルダー'), (0x3314, 'M', u'キロ'), (0x3315, 'M', u'キログラム'), (0x3316, 'M', u'キロメートル'), (0x3317, 'M', u'キロワット'), (0x3318, 'M', u'グラム'), (0x3319, 'M', u'グラムトン'), (0x331A, 'M', u'クルゼイロ'), (0x331B, 'M', u'クローネ'), (0x331C, 'M', u'ケース'), (0x331D, 'M', u'コルナ'), (0x331E, 'M', u'コーポ'), (0x331F, 'M', u'サイクル'), (0x3320, 'M', u'サンチーム'), (0x3321, 'M', u'シリング'), (0x3322, 'M', u'センチ'), (0x3323, 'M', u'セント'), (0x3324, 'M', u'ダース'), (0x3325, 'M', u'デシ'), (0x3326, 'M', u'ドル'), (0x3327, 'M', u'トン'), (0x3328, 'M', u'ナノ'), (0x3329, 'M', u'ノット'), (0x332A, 'M', u'ハイツ'), (0x332B, 'M', u'パーセント'), ] def _seg_33(): return [ (0x332C, 'M', u'パーツ'), (0x332D, 'M', u'バーレル'), (0x332E, 'M', u'ピアストル'), (0x332F, 'M', u'ピクル'), (0x3330, 'M', u'ピコ'), (0x3331, 'M', u'ビル'), (0x3332, 'M', u'ファラッド'), (0x3333, 'M', u'フィート'), (0x3334, 'M', u'ブッシェル'), (0x3335, 'M', u'フラン'), (0x3336, 'M', u'ヘクタール'), (0x3337, 'M', u'ペソ'), (0x3338, 'M', u'ペニヒ'), (0x3339, 'M', u'ヘルツ'), (0x333A, 'M', u'ペンス'), (0x333B, 'M', u'ページ'), (0x333C, 'M', u'ベータ'), (0x333D, 'M', u'ポイント'), (0x333E, 'M', u'ボルト'), (0x333F, 'M', u'ホン'), (0x3340, 'M', u'ポンド'), (0x3341, 'M', u'ホール'), (0x3342, 'M', u'ホーン'), (0x3343, 'M', u'マイクロ'), (0x3344, 'M', u'マイル'), (0x3345, 'M', u'マッハ'), (0x3346, 'M', u'マルク'), (0x3347, 'M', u'マンション'), (0x3348, 'M', u'ミクロン'), (0x3349, 'M', u'ミリ'), (0x334A, 'M', u'ミリバール'), (0x334B, 'M', u'メガ'), (0x334C, 'M', u'メガトン'), (0x334D, 'M', u'メートル'), (0x334E, 'M', u'ヤード'), (0x334F, 'M', u'ヤール'), (0x3350, 'M', u'ユアン'), (0x3351, 'M', u'リットル'), (0x3352, 'M', u'リラ'), (0x3353, 'M', u'ルピー'), (0x3354, 'M', u'ルーブル'), (0x3355, 'M', u'レム'), (0x3356, 'M', u'レントゲン'), (0x3357, 'M', 
u'ワット'), (0x3358, 'M', u'0点'), (0x3359, 'M', u'1点'), (0x335A, 'M', u'2点'), (0x335B, 'M', u'3点'), (0x335C, 'M', u'4点'), (0x335D, 'M', u'5点'), (0x335E, 'M', u'6点'), (0x335F, 'M', u'7点'), (0x3360, 'M', u'8点'), (0x3361, 'M', u'9点'), (0x3362, 'M', u'10点'), (0x3363, 'M', u'11点'), (0x3364, 'M', u'12点'), (0x3365, 'M', u'13点'), (0x3366, 'M', u'14点'), (0x3367, 'M', u'15点'), (0x3368, 'M', u'16点'), (0x3369, 'M', u'17点'), (0x336A, 'M', u'18点'), (0x336B, 'M', u'19点'), (0x336C, 'M', u'20点'), (0x336D, 'M', u'21点'), (0x336E, 'M', u'22点'), (0x336F, 'M', u'23点'), (0x3370, 'M', u'24点'), (0x3371, 'M', u'hpa'), (0x3372, 'M', u'da'), (0x3373, 'M', u'au'), (0x3374, 'M', u'bar'), (0x3375, 'M', u'ov'), (0x3376, 'M', u'pc'), (0x3377, 'M', u'dm'), (0x3378, 'M', u'dm2'), (0x3379, 'M', u'dm3'), (0x337A, 'M', u'iu'), (0x337B, 'M', u'平成'), (0x337C, 'M', u'昭和'), (0x337D, 'M', u'大正'), (0x337E, 'M', u'明治'), (0x337F, 'M', u'株式会社'), (0x3380, 'M', u'pa'), (0x3381, 'M', u'na'), (0x3382, 'M', u'μa'), (0x3383, 'M', u'ma'), (0x3384, 'M', u'ka'), (0x3385, 'M', u'kb'), (0x3386, 'M', u'mb'), (0x3387, 'M', u'gb'), (0x3388, 'M', u'cal'), (0x3389, 'M', u'kcal'), (0x338A, 'M', u'pf'), (0x338B, 'M', u'nf'), (0x338C, 'M', u'μf'), (0x338D, 'M', u'μg'), (0x338E, 'M', u'mg'), (0x338F, 'M', u'kg'), ] def _seg_34(): return [ (0x3390, 'M', u'hz'), (0x3391, 'M', u'khz'), (0x3392, 'M', u'mhz'), (0x3393, 'M', u'ghz'), (0x3394, 'M', u'thz'), (0x3395, 'M', u'μl'), (0x3396, 'M', u'ml'), (0x3397, 'M', u'dl'), (0x3398, 'M', u'kl'), (0x3399, 'M', u'fm'), (0x339A, 'M', u'nm'), (0x339B, 'M', u'μm'), (0x339C, 'M', u'mm'), (0x339D, 'M', u'cm'), (0x339E, 'M', u'km'), (0x339F, 'M', u'mm2'), (0x33A0, 'M', u'cm2'), (0x33A1, 'M', u'm2'), (0x33A2, 'M', u'km2'), (0x33A3, 'M', u'mm3'), (0x33A4, 'M', u'cm3'), (0x33A5, 'M', u'm3'), (0x33A6, 'M', u'km3'), (0x33A7, 'M', u'm∕s'), (0x33A8, 'M', u'm∕s2'), (0x33A9, 'M', u'pa'), (0x33AA, 'M', u'kpa'), (0x33AB, 'M', u'mpa'), (0x33AC, 'M', u'gpa'), (0x33AD, 'M', u'rad'), (0x33AE, 'M', u'rad∕s'), (0x33AF, 'M', u'rad∕s2'), (0x33B0, 'M', u'ps'), (0x33B1, 'M', u'ns'), (0x33B2, 'M', u'μs'), (0x33B3, 'M', u'ms'), (0x33B4, 'M', u'pv'), (0x33B5, 'M', u'nv'), (0x33B6, 'M', u'μv'), (0x33B7, 'M', u'mv'), (0x33B8, 'M', u'kv'), (0x33B9, 'M', u'mv'), (0x33BA, 'M', u'pw'), (0x33BB, 'M', u'nw'), (0x33BC, 'M', u'μw'), (0x33BD, 'M', u'mw'), (0x33BE, 'M', u'kw'), (0x33BF, 'M', u'mw'), (0x33C0, 'M', u'kω'), (0x33C1, 'M', u'mω'), (0x33C2, 'X'), (0x33C3, 'M', u'bq'), (0x33C4, 'M', u'cc'), (0x33C5, 'M', u'cd'), (0x33C6, 'M', u'c∕kg'), (0x33C7, 'X'), (0x33C8, 'M', u'db'), (0x33C9, 'M', u'gy'), (0x33CA, 'M', u'ha'), (0x33CB, 'M', u'hp'), (0x33CC, 'M', u'in'), (0x33CD, 'M', u'kk'), (0x33CE, 'M', u'km'), (0x33CF, 'M', u'kt'), (0x33D0, 'M', u'lm'), (0x33D1, 'M', u'ln'), (0x33D2, 'M', u'log'), (0x33D3, 'M', u'lx'), (0x33D4, 'M', u'mb'), (0x33D5, 'M', u'mil'), (0x33D6, 'M', u'mol'), (0x33D7, 'M', u'ph'), (0x33D8, 'X'), (0x33D9, 'M', u'ppm'), (0x33DA, 'M', u'pr'), (0x33DB, 'M', u'sr'), (0x33DC, 'M', u'sv'), (0x33DD, 'M', u'wb'), (0x33DE, 'M', u'v∕m'), (0x33DF, 'M', u'a∕m'), (0x33E0, 'M', u'1日'), (0x33E1, 'M', u'2日'), (0x33E2, 'M', u'3日'), (0x33E3, 'M', u'4日'), (0x33E4, 'M', u'5日'), (0x33E5, 'M', u'6日'), (0x33E6, 'M', u'7日'), (0x33E7, 'M', u'8日'), (0x33E8, 'M', u'9日'), (0x33E9, 'M', u'10日'), (0x33EA, 'M', u'11日'), (0x33EB, 'M', u'12日'), (0x33EC, 'M', u'13日'), (0x33ED, 'M', u'14日'), (0x33EE, 'M', u'15日'), (0x33EF, 'M', u'16日'), (0x33F0, 'M', u'17日'), (0x33F1, 'M', u'18日'), (0x33F2, 'M', u'19日'), (0x33F3, 'M', u'20日'), ] def _seg_35(): return [ (0x33F4, 'M', 
u'21日'), (0x33F5, 'M', u'22日'), (0x33F6, 'M', u'23日'), (0x33F7, 'M', u'24日'), (0x33F8, 'M', u'25日'), (0x33F9, 'M', u'26日'), (0x33FA, 'M', u'27日'), (0x33FB, 'M', u'28日'), (0x33FC, 'M', u'29日'), (0x33FD, 'M', u'30日'), (0x33FE, 'M', u'31日'), (0x33FF, 'M', u'gal'), (0x3400, 'V'), (0x4DB6, 'X'), (0x4DC0, 'V'), (0x9FF0, 'X'), (0xA000, 'V'), (0xA48D, 'X'), (0xA490, 'V'), (0xA4C7, 'X'), (0xA4D0, 'V'), (0xA62C, 'X'), (0xA640, 'M', u'ꙁ'), (0xA641, 'V'), (0xA642, 'M', u'ꙃ'), (0xA643, 'V'), (0xA644, 'M', u'ꙅ'), (0xA645, 'V'), (0xA646, 'M', u'ꙇ'), (0xA647, 'V'), (0xA648, 'M', u'ꙉ'), (0xA649, 'V'), (0xA64A, 'M', u'ꙋ'), (0xA64B, 'V'), (0xA64C, 'M', u'ꙍ'), (0xA64D, 'V'), (0xA64E, 'M', u'ꙏ'), (0xA64F, 'V'), (0xA650, 'M', u'ꙑ'), (0xA651, 'V'), (0xA652, 'M', u'ꙓ'), (0xA653, 'V'), (0xA654, 'M', u'ꙕ'), (0xA655, 'V'), (0xA656, 'M', u'ꙗ'), (0xA657, 'V'), (0xA658, 'M', u'ꙙ'), (0xA659, 'V'), (0xA65A, 'M', u'ꙛ'), (0xA65B, 'V'), (0xA65C, 'M', u'ꙝ'), (0xA65D, 'V'), (0xA65E, 'M', u'ꙟ'), (0xA65F, 'V'), (0xA660, 'M', u'ꙡ'), (0xA661, 'V'), (0xA662, 'M', u'ꙣ'), (0xA663, 'V'), (0xA664, 'M', u'ꙥ'), (0xA665, 'V'), (0xA666, 'M', u'ꙧ'), (0xA667, 'V'), (0xA668, 'M', u'ꙩ'), (0xA669, 'V'), (0xA66A, 'M', u'ꙫ'), (0xA66B, 'V'), (0xA66C, 'M', u'ꙭ'), (0xA66D, 'V'), (0xA680, 'M', u'ꚁ'), (0xA681, 'V'), (0xA682, 'M', u'ꚃ'), (0xA683, 'V'), (0xA684, 'M', u'ꚅ'), (0xA685, 'V'), (0xA686, 'M', u'ꚇ'), (0xA687, 'V'), (0xA688, 'M', u'ꚉ'), (0xA689, 'V'), (0xA68A, 'M', u'ꚋ'), (0xA68B, 'V'), (0xA68C, 'M', u'ꚍ'), (0xA68D, 'V'), (0xA68E, 'M', u'ꚏ'), (0xA68F, 'V'), (0xA690, 'M', u'ꚑ'), (0xA691, 'V'), (0xA692, 'M', u'ꚓ'), (0xA693, 'V'), (0xA694, 'M', u'ꚕ'), (0xA695, 'V'), (0xA696, 'M', u'ꚗ'), (0xA697, 'V'), (0xA698, 'M', u'ꚙ'), (0xA699, 'V'), (0xA69A, 'M', u'ꚛ'), (0xA69B, 'V'), (0xA69C, 'M', u'ъ'), (0xA69D, 'M', u'ь'), (0xA69E, 'V'), (0xA6F8, 'X'), ] def _seg_36(): return [ (0xA700, 'V'), (0xA722, 'M', u'ꜣ'), (0xA723, 'V'), (0xA724, 'M', u'ꜥ'), (0xA725, 'V'), (0xA726, 'M', u'ꜧ'), (0xA727, 'V'), (0xA728, 'M', u'ꜩ'), (0xA729, 'V'), (0xA72A, 'M', u'ꜫ'), (0xA72B, 'V'), (0xA72C, 'M', u'ꜭ'), (0xA72D, 'V'), (0xA72E, 'M', u'ꜯ'), (0xA72F, 'V'), (0xA732, 'M', u'ꜳ'), (0xA733, 'V'), (0xA734, 'M', u'ꜵ'), (0xA735, 'V'), (0xA736, 'M', u'ꜷ'), (0xA737, 'V'), (0xA738, 'M', u'ꜹ'), (0xA739, 'V'), (0xA73A, 'M', u'ꜻ'), (0xA73B, 'V'), (0xA73C, 'M', u'ꜽ'), (0xA73D, 'V'), (0xA73E, 'M', u'ꜿ'), (0xA73F, 'V'), (0xA740, 'M', u'ꝁ'), (0xA741, 'V'), (0xA742, 'M', u'ꝃ'), (0xA743, 'V'), (0xA744, 'M', u'ꝅ'), (0xA745, 'V'), (0xA746, 'M', u'ꝇ'), (0xA747, 'V'), (0xA748, 'M', u'ꝉ'), (0xA749, 'V'), (0xA74A, 'M', u'ꝋ'), (0xA74B, 'V'), (0xA74C, 'M', u'ꝍ'), (0xA74D, 'V'), (0xA74E, 'M', u'ꝏ'), (0xA74F, 'V'), (0xA750, 'M', u'ꝑ'), (0xA751, 'V'), (0xA752, 'M', u'ꝓ'), (0xA753, 'V'), (0xA754, 'M', u'ꝕ'), (0xA755, 'V'), (0xA756, 'M', u'ꝗ'), (0xA757, 'V'), (0xA758, 'M', u'ꝙ'), (0xA759, 'V'), (0xA75A, 'M', u'ꝛ'), (0xA75B, 'V'), (0xA75C, 'M', u'ꝝ'), (0xA75D, 'V'), (0xA75E, 'M', u'ꝟ'), (0xA75F, 'V'), (0xA760, 'M', u'ꝡ'), (0xA761, 'V'), (0xA762, 'M', u'ꝣ'), (0xA763, 'V'), (0xA764, 'M', u'ꝥ'), (0xA765, 'V'), (0xA766, 'M', u'ꝧ'), (0xA767, 'V'), (0xA768, 'M', u'ꝩ'), (0xA769, 'V'), (0xA76A, 'M', u'ꝫ'), (0xA76B, 'V'), (0xA76C, 'M', u'ꝭ'), (0xA76D, 'V'), (0xA76E, 'M', u'ꝯ'), (0xA76F, 'V'), (0xA770, 'M', u'ꝯ'), (0xA771, 'V'), (0xA779, 'M', u'ꝺ'), (0xA77A, 'V'), (0xA77B, 'M', u'ꝼ'), (0xA77C, 'V'), (0xA77D, 'M', u'ᵹ'), (0xA77E, 'M', u'ꝿ'), (0xA77F, 'V'), (0xA780, 'M', u'ꞁ'), (0xA781, 'V'), (0xA782, 'M', u'ꞃ'), (0xA783, 'V'), (0xA784, 'M', u'ꞅ'), (0xA785, 'V'), (0xA786, 'M', u'ꞇ'), (0xA787, 'V'), (0xA78B, 'M', u'ꞌ'), 
(0xA78C, 'V'), (0xA78D, 'M', u'ɥ'), (0xA78E, 'V'), (0xA790, 'M', u'ꞑ'), (0xA791, 'V'), ] def _seg_37(): return [ (0xA792, 'M', u'ꞓ'), (0xA793, 'V'), (0xA796, 'M', u'ꞗ'), (0xA797, 'V'), (0xA798, 'M', u'ꞙ'), (0xA799, 'V'), (0xA79A, 'M', u'ꞛ'), (0xA79B, 'V'), (0xA79C, 'M', u'ꞝ'), (0xA79D, 'V'), (0xA79E, 'M', u'ꞟ'), (0xA79F, 'V'), (0xA7A0, 'M', u'ꞡ'), (0xA7A1, 'V'), (0xA7A2, 'M', u'ꞣ'), (0xA7A3, 'V'), (0xA7A4, 'M', u'ꞥ'), (0xA7A5, 'V'), (0xA7A6, 'M', u'ꞧ'), (0xA7A7, 'V'), (0xA7A8, 'M', u'ꞩ'), (0xA7A9, 'V'), (0xA7AA, 'M', u'ɦ'), (0xA7AB, 'M', u'ɜ'), (0xA7AC, 'M', u'ɡ'), (0xA7AD, 'M', u'ɬ'), (0xA7AE, 'M', u'ɪ'), (0xA7AF, 'V'), (0xA7B0, 'M', u'ʞ'), (0xA7B1, 'M', u'ʇ'), (0xA7B2, 'M', u'ʝ'), (0xA7B3, 'M', u'ꭓ'), (0xA7B4, 'M', u'ꞵ'), (0xA7B5, 'V'), (0xA7B6, 'M', u'ꞷ'), (0xA7B7, 'V'), (0xA7B8, 'X'), (0xA7B9, 'V'), (0xA7BA, 'X'), (0xA7F7, 'V'), (0xA7F8, 'M', u'ħ'), (0xA7F9, 'M', u'œ'), (0xA7FA, 'V'), (0xA82C, 'X'), (0xA830, 'V'), (0xA83A, 'X'), (0xA840, 'V'), (0xA878, 'X'), (0xA880, 'V'), (0xA8C6, 'X'), (0xA8CE, 'V'), (0xA8DA, 'X'), (0xA8E0, 'V'), (0xA954, 'X'), (0xA95F, 'V'), (0xA97D, 'X'), (0xA980, 'V'), (0xA9CE, 'X'), (0xA9CF, 'V'), (0xA9DA, 'X'), (0xA9DE, 'V'), (0xA9FF, 'X'), (0xAA00, 'V'), (0xAA37, 'X'), (0xAA40, 'V'), (0xAA4E, 'X'), (0xAA50, 'V'), (0xAA5A, 'X'), (0xAA5C, 'V'), (0xAAC3, 'X'), (0xAADB, 'V'), (0xAAF7, 'X'), (0xAB01, 'V'), (0xAB07, 'X'), (0xAB09, 'V'), (0xAB0F, 'X'), (0xAB11, 'V'), (0xAB17, 'X'), (0xAB20, 'V'), (0xAB27, 'X'), (0xAB28, 'V'), (0xAB2F, 'X'), (0xAB30, 'V'), (0xAB5C, 'M', u'ꜧ'), (0xAB5D, 'M', u'ꬷ'), (0xAB5E, 'M', u'ɫ'), (0xAB5F, 'M', u'ꭒ'), (0xAB60, 'V'), (0xAB66, 'X'), (0xAB70, 'M', u'Ꭰ'), (0xAB71, 'M', u'Ꭱ'), (0xAB72, 'M', u'Ꭲ'), (0xAB73, 'M', u'Ꭳ'), (0xAB74, 'M', u'Ꭴ'), (0xAB75, 'M', u'Ꭵ'), (0xAB76, 'M', u'Ꭶ'), (0xAB77, 'M', u'Ꭷ'), (0xAB78, 'M', u'Ꭸ'), (0xAB79, 'M', u'Ꭹ'), (0xAB7A, 'M', u'Ꭺ'), ] def _seg_38(): return [ (0xAB7B, 'M', u'Ꭻ'), (0xAB7C, 'M', u'Ꭼ'), (0xAB7D, 'M', u'Ꭽ'), (0xAB7E, 'M', u'Ꭾ'), (0xAB7F, 'M', u'Ꭿ'), (0xAB80, 'M', u'Ꮀ'), (0xAB81, 'M', u'Ꮁ'), (0xAB82, 'M', u'Ꮂ'), (0xAB83, 'M', u'Ꮃ'), (0xAB84, 'M', u'Ꮄ'), (0xAB85, 'M', u'Ꮅ'), (0xAB86, 'M', u'Ꮆ'), (0xAB87, 'M', u'Ꮇ'), (0xAB88, 'M', u'Ꮈ'), (0xAB89, 'M', u'Ꮉ'), (0xAB8A, 'M', u'Ꮊ'), (0xAB8B, 'M', u'Ꮋ'), (0xAB8C, 'M', u'Ꮌ'), (0xAB8D, 'M', u'Ꮍ'), (0xAB8E, 'M', u'Ꮎ'), (0xAB8F, 'M', u'Ꮏ'), (0xAB90, 'M', u'Ꮐ'), (0xAB91, 'M', u'Ꮑ'), (0xAB92, 'M', u'Ꮒ'), (0xAB93, 'M', u'Ꮓ'), (0xAB94, 'M', u'Ꮔ'), (0xAB95, 'M', u'Ꮕ'), (0xAB96, 'M', u'Ꮖ'), (0xAB97, 'M', u'Ꮗ'), (0xAB98, 'M', u'Ꮘ'), (0xAB99, 'M', u'Ꮙ'), (0xAB9A, 'M', u'Ꮚ'), (0xAB9B, 'M', u'Ꮛ'), (0xAB9C, 'M', u'Ꮜ'), (0xAB9D, 'M', u'Ꮝ'), (0xAB9E, 'M', u'Ꮞ'), (0xAB9F, 'M', u'Ꮟ'), (0xABA0, 'M', u'Ꮠ'), (0xABA1, 'M', u'Ꮡ'), (0xABA2, 'M', u'Ꮢ'), (0xABA3, 'M', u'Ꮣ'), (0xABA4, 'M', u'Ꮤ'), (0xABA5, 'M', u'Ꮥ'), (0xABA6, 'M', u'Ꮦ'), (0xABA7, 'M', u'Ꮧ'), (0xABA8, 'M', u'Ꮨ'), (0xABA9, 'M', u'Ꮩ'), (0xABAA, 'M', u'Ꮪ'), (0xABAB, 'M', u'Ꮫ'), (0xABAC, 'M', u'Ꮬ'), (0xABAD, 'M', u'Ꮭ'), (0xABAE, 'M', u'Ꮮ'), (0xABAF, 'M', u'Ꮯ'), (0xABB0, 'M', u'Ꮰ'), (0xABB1, 'M', u'Ꮱ'), (0xABB2, 'M', u'Ꮲ'), (0xABB3, 'M', u'Ꮳ'), (0xABB4, 'M', u'Ꮴ'), (0xABB5, 'M', u'Ꮵ'), (0xABB6, 'M', u'Ꮶ'), (0xABB7, 'M', u'Ꮷ'), (0xABB8, 'M', u'Ꮸ'), (0xABB9, 'M', u'Ꮹ'), (0xABBA, 'M', u'Ꮺ'), (0xABBB, 'M', u'Ꮻ'), (0xABBC, 'M', u'Ꮼ'), (0xABBD, 'M', u'Ꮽ'), (0xABBE, 'M', u'Ꮾ'), (0xABBF, 'M', u'Ꮿ'), (0xABC0, 'V'), (0xABEE, 'X'), (0xABF0, 'V'), (0xABFA, 'X'), (0xAC00, 'V'), (0xD7A4, 'X'), (0xD7B0, 'V'), (0xD7C7, 'X'), (0xD7CB, 'V'), (0xD7FC, 'X'), (0xF900, 'M', u'豈'), (0xF901, 'M', u'更'), (0xF902, 'M', u'車'), (0xF903, 'M', 
u'賈'), (0xF904, 'M', u'滑'), (0xF905, 'M', u'串'), (0xF906, 'M', u'句'), (0xF907, 'M', u'龜'), (0xF909, 'M', u'契'), (0xF90A, 'M', u'金'), (0xF90B, 'M', u'喇'), (0xF90C, 'M', u'奈'), (0xF90D, 'M', u'懶'), (0xF90E, 'M', u'癩'), (0xF90F, 'M', u'羅'), (0xF910, 'M', u'蘿'), (0xF911, 'M', u'螺'), (0xF912, 'M', u'裸'), (0xF913, 'M', u'邏'), (0xF914, 'M', u'樂'), (0xF915, 'M', u'洛'), ] def _seg_39(): return [ (0xF916, 'M', u'烙'), (0xF917, 'M', u'珞'), (0xF918, 'M', u'落'), (0xF919, 'M', u'酪'), (0xF91A, 'M', u'駱'), (0xF91B, 'M', u'亂'), (0xF91C, 'M', u'卵'), (0xF91D, 'M', u'欄'), (0xF91E, 'M', u'爛'), (0xF91F, 'M', u'蘭'), (0xF920, 'M', u'鸞'), (0xF921, 'M', u'嵐'), (0xF922, 'M', u'濫'), (0xF923, 'M', u'藍'), (0xF924, 'M', u'襤'), (0xF925, 'M', u'拉'), (0xF926, 'M', u'臘'), (0xF927, 'M', u'蠟'), (0xF928, 'M', u'廊'), (0xF929, 'M', u'朗'), (0xF92A, 'M', u'浪'), (0xF92B, 'M', u'狼'), (0xF92C, 'M', u'郎'), (0xF92D, 'M', u'來'), (0xF92E, 'M', u'冷'), (0xF92F, 'M', u'勞'), (0xF930, 'M', u'擄'), (0xF931, 'M', u'櫓'), (0xF932, 'M', u'爐'), (0xF933, 'M', u'盧'), (0xF934, 'M', u'老'), (0xF935, 'M', u'蘆'), (0xF936, 'M', u'虜'), (0xF937, 'M', u'路'), (0xF938, 'M', u'露'), (0xF939, 'M', u'魯'), (0xF93A, 'M', u'鷺'), (0xF93B, 'M', u'碌'), (0xF93C, 'M', u'祿'), (0xF93D, 'M', u'綠'), (0xF93E, 'M', u'菉'), (0xF93F, 'M', u'錄'), (0xF940, 'M', u'鹿'), (0xF941, 'M', u'論'), (0xF942, 'M', u'壟'), (0xF943, 'M', u'弄'), (0xF944, 'M', u'籠'), (0xF945, 'M', u'聾'), (0xF946, 'M', u'牢'), (0xF947, 'M', u'磊'), (0xF948, 'M', u'賂'), (0xF949, 'M', u'雷'), (0xF94A, 'M', u'壘'), (0xF94B, 'M', u'屢'), (0xF94C, 'M', u'樓'), (0xF94D, 'M', u'淚'), (0xF94E, 'M', u'漏'), (0xF94F, 'M', u'累'), (0xF950, 'M', u'縷'), (0xF951, 'M', u'陋'), (0xF952, 'M', u'勒'), (0xF953, 'M', u'肋'), (0xF954, 'M', u'凜'), (0xF955, 'M', u'凌'), (0xF956, 'M', u'稜'), (0xF957, 'M', u'綾'), (0xF958, 'M', u'菱'), (0xF959, 'M', u'陵'), (0xF95A, 'M', u'讀'), (0xF95B, 'M', u'拏'), (0xF95C, 'M', u'樂'), (0xF95D, 'M', u'諾'), (0xF95E, 'M', u'丹'), (0xF95F, 'M', u'寧'), (0xF960, 'M', u'怒'), (0xF961, 'M', u'率'), (0xF962, 'M', u'異'), (0xF963, 'M', u'北'), (0xF964, 'M', u'磻'), (0xF965, 'M', u'便'), (0xF966, 'M', u'復'), (0xF967, 'M', u'不'), (0xF968, 'M', u'泌'), (0xF969, 'M', u'數'), (0xF96A, 'M', u'索'), (0xF96B, 'M', u'參'), (0xF96C, 'M', u'塞'), (0xF96D, 'M', u'省'), (0xF96E, 'M', u'葉'), (0xF96F, 'M', u'說'), (0xF970, 'M', u'殺'), (0xF971, 'M', u'辰'), (0xF972, 'M', u'沈'), (0xF973, 'M', u'拾'), (0xF974, 'M', u'若'), (0xF975, 'M', u'掠'), (0xF976, 'M', u'略'), (0xF977, 'M', u'亮'), (0xF978, 'M', u'兩'), (0xF979, 'M', u'凉'), ] def _seg_40(): return [ (0xF97A, 'M', u'梁'), (0xF97B, 'M', u'糧'), (0xF97C, 'M', u'良'), (0xF97D, 'M', u'諒'), (0xF97E, 'M', u'量'), (0xF97F, 'M', u'勵'), (0xF980, 'M', u'呂'), (0xF981, 'M', u'女'), (0xF982, 'M', u'廬'), (0xF983, 'M', u'旅'), (0xF984, 'M', u'濾'), (0xF985, 'M', u'礪'), (0xF986, 'M', u'閭'), (0xF987, 'M', u'驪'), (0xF988, 'M', u'麗'), (0xF989, 'M', u'黎'), (0xF98A, 'M', u'力'), (0xF98B, 'M', u'曆'), (0xF98C, 'M', u'歷'), (0xF98D, 'M', u'轢'), (0xF98E, 'M', u'年'), (0xF98F, 'M', u'憐'), (0xF990, 'M', u'戀'), (0xF991, 'M', u'撚'), (0xF992, 'M', u'漣'), (0xF993, 'M', u'煉'), (0xF994, 'M', u'璉'), (0xF995, 'M', u'秊'), (0xF996, 'M', u'練'), (0xF997, 'M', u'聯'), (0xF998, 'M', u'輦'), (0xF999, 'M', u'蓮'), (0xF99A, 'M', u'連'), (0xF99B, 'M', u'鍊'), (0xF99C, 'M', u'列'), (0xF99D, 'M', u'劣'), (0xF99E, 'M', u'咽'), (0xF99F, 'M', u'烈'), (0xF9A0, 'M', u'裂'), (0xF9A1, 'M', u'說'), (0xF9A2, 'M', u'廉'), (0xF9A3, 'M', u'念'), (0xF9A4, 'M', u'捻'), (0xF9A5, 'M', u'殮'), (0xF9A6, 'M', u'簾'), (0xF9A7, 'M', u'獵'), (0xF9A8, 'M', u'令'), (0xF9A9, 'M', u'囹'), (0xF9AA, 'M', u'寧'), (0xF9AB, 
'M', u'嶺'), (0xF9AC, 'M', u'怜'), (0xF9AD, 'M', u'玲'), (0xF9AE, 'M', u'瑩'), (0xF9AF, 'M', u'羚'), (0xF9B0, 'M', u'聆'), (0xF9B1, 'M', u'鈴'), (0xF9B2, 'M', u'零'), (0xF9B3, 'M', u'靈'), (0xF9B4, 'M', u'領'), (0xF9B5, 'M', u'例'), (0xF9B6, 'M', u'禮'), (0xF9B7, 'M', u'醴'), (0xF9B8, 'M', u'隸'), (0xF9B9, 'M', u'惡'), (0xF9BA, 'M', u'了'), (0xF9BB, 'M', u'僚'), (0xF9BC, 'M', u'寮'), (0xF9BD, 'M', u'尿'), (0xF9BE, 'M', u'料'), (0xF9BF, 'M', u'樂'), (0xF9C0, 'M', u'燎'), (0xF9C1, 'M', u'療'), (0xF9C2, 'M', u'蓼'), (0xF9C3, 'M', u'遼'), (0xF9C4, 'M', u'龍'), (0xF9C5, 'M', u'暈'), (0xF9C6, 'M', u'阮'), (0xF9C7, 'M', u'劉'), (0xF9C8, 'M', u'杻'), (0xF9C9, 'M', u'柳'), (0xF9CA, 'M', u'流'), (0xF9CB, 'M', u'溜'), (0xF9CC, 'M', u'琉'), (0xF9CD, 'M', u'留'), (0xF9CE, 'M', u'硫'), (0xF9CF, 'M', u'紐'), (0xF9D0, 'M', u'類'), (0xF9D1, 'M', u'六'), (0xF9D2, 'M', u'戮'), (0xF9D3, 'M', u'陸'), (0xF9D4, 'M', u'倫'), (0xF9D5, 'M', u'崙'), (0xF9D6, 'M', u'淪'), (0xF9D7, 'M', u'輪'), (0xF9D8, 'M', u'律'), (0xF9D9, 'M', u'慄'), (0xF9DA, 'M', u'栗'), (0xF9DB, 'M', u'率'), (0xF9DC, 'M', u'隆'), (0xF9DD, 'M', u'利'), ] def _seg_41(): return [ (0xF9DE, 'M', u'吏'), (0xF9DF, 'M', u'履'), (0xF9E0, 'M', u'易'), (0xF9E1, 'M', u'李'), (0xF9E2, 'M', u'梨'), (0xF9E3, 'M', u'泥'), (0xF9E4, 'M', u'理'), (0xF9E5, 'M', u'痢'), (0xF9E6, 'M', u'罹'), (0xF9E7, 'M', u'裏'), (0xF9E8, 'M', u'裡'), (0xF9E9, 'M', u'里'), (0xF9EA, 'M', u'離'), (0xF9EB, 'M', u'匿'), (0xF9EC, 'M', u'溺'), (0xF9ED, 'M', u'吝'), (0xF9EE, 'M', u'燐'), (0xF9EF, 'M', u'璘'), (0xF9F0, 'M', u'藺'), (0xF9F1, 'M', u'隣'), (0xF9F2, 'M', u'鱗'), (0xF9F3, 'M', u'麟'), (0xF9F4, 'M', u'林'), (0xF9F5, 'M', u'淋'), (0xF9F6, 'M', u'臨'), (0xF9F7, 'M', u'立'), (0xF9F8, 'M', u'笠'), (0xF9F9, 'M', u'粒'), (0xF9FA, 'M', u'狀'), (0xF9FB, 'M', u'炙'), (0xF9FC, 'M', u'識'), (0xF9FD, 'M', u'什'), (0xF9FE, 'M', u'茶'), (0xF9FF, 'M', u'刺'), (0xFA00, 'M', u'切'), (0xFA01, 'M', u'度'), (0xFA02, 'M', u'拓'), (0xFA03, 'M', u'糖'), (0xFA04, 'M', u'宅'), (0xFA05, 'M', u'洞'), (0xFA06, 'M', u'暴'), (0xFA07, 'M', u'輻'), (0xFA08, 'M', u'行'), (0xFA09, 'M', u'降'), (0xFA0A, 'M', u'見'), (0xFA0B, 'M', u'廓'), (0xFA0C, 'M', u'兀'), (0xFA0D, 'M', u'嗀'), (0xFA0E, 'V'), (0xFA10, 'M', u'塚'), (0xFA11, 'V'), (0xFA12, 'M', u'晴'), (0xFA13, 'V'), (0xFA15, 'M', u'凞'), (0xFA16, 'M', u'猪'), (0xFA17, 'M', u'益'), (0xFA18, 'M', u'礼'), (0xFA19, 'M', u'神'), (0xFA1A, 'M', u'祥'), (0xFA1B, 'M', u'福'), (0xFA1C, 'M', u'靖'), (0xFA1D, 'M', u'精'), (0xFA1E, 'M', u'羽'), (0xFA1F, 'V'), (0xFA20, 'M', u'蘒'), (0xFA21, 'V'), (0xFA22, 'M', u'諸'), (0xFA23, 'V'), (0xFA25, 'M', u'逸'), (0xFA26, 'M', u'都'), (0xFA27, 'V'), (0xFA2A, 'M', u'飯'), (0xFA2B, 'M', u'飼'), (0xFA2C, 'M', u'館'), (0xFA2D, 'M', u'鶴'), (0xFA2E, 'M', u'郞'), (0xFA2F, 'M', u'隷'), (0xFA30, 'M', u'侮'), (0xFA31, 'M', u'僧'), (0xFA32, 'M', u'免'), (0xFA33, 'M', u'勉'), (0xFA34, 'M', u'勤'), (0xFA35, 'M', u'卑'), (0xFA36, 'M', u'喝'), (0xFA37, 'M', u'嘆'), (0xFA38, 'M', u'器'), (0xFA39, 'M', u'塀'), (0xFA3A, 'M', u'墨'), (0xFA3B, 'M', u'層'), (0xFA3C, 'M', u'屮'), (0xFA3D, 'M', u'悔'), (0xFA3E, 'M', u'慨'), (0xFA3F, 'M', u'憎'), (0xFA40, 'M', u'懲'), (0xFA41, 'M', u'敏'), (0xFA42, 'M', u'既'), (0xFA43, 'M', u'暑'), (0xFA44, 'M', u'梅'), (0xFA45, 'M', u'海'), (0xFA46, 'M', u'渚'), ] def _seg_42(): return [ (0xFA47, 'M', u'漢'), (0xFA48, 'M', u'煮'), (0xFA49, 'M', u'爫'), (0xFA4A, 'M', u'琢'), (0xFA4B, 'M', u'碑'), (0xFA4C, 'M', u'社'), (0xFA4D, 'M', u'祉'), (0xFA4E, 'M', u'祈'), (0xFA4F, 'M', u'祐'), (0xFA50, 'M', u'祖'), (0xFA51, 'M', u'祝'), (0xFA52, 'M', u'禍'), (0xFA53, 'M', u'禎'), (0xFA54, 'M', u'穀'), (0xFA55, 'M', u'突'), (0xFA56, 'M', u'節'), (0xFA57, 'M', u'練'), (0xFA58, 'M', u'縉'), 
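# The CJK Compatibility Ideographs block (0xF900-0xFAD9) maps each
# compatibility codepoint to its canonical unified ideograph, e.g.
# 0xF900 -> u'豈'; 0xFADA onward is disallowed ('X').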
(0xFA59, 'M', u'繁'), (0xFA5A, 'M', u'署'), (0xFA5B, 'M', u'者'), (0xFA5C, 'M', u'臭'), (0xFA5D, 'M', u'艹'), (0xFA5F, 'M', u'著'), (0xFA60, 'M', u'褐'), (0xFA61, 'M', u'視'), (0xFA62, 'M', u'謁'), (0xFA63, 'M', u'謹'), (0xFA64, 'M', u'賓'), (0xFA65, 'M', u'贈'), (0xFA66, 'M', u'辶'), (0xFA67, 'M', u'逸'), (0xFA68, 'M', u'難'), (0xFA69, 'M', u'響'), (0xFA6A, 'M', u'頻'), (0xFA6B, 'M', u'恵'), (0xFA6C, 'M', u'𤋮'), (0xFA6D, 'M', u'舘'), (0xFA6E, 'X'), (0xFA70, 'M', u'並'), (0xFA71, 'M', u'况'), (0xFA72, 'M', u'全'), (0xFA73, 'M', u'侀'), (0xFA74, 'M', u'充'), (0xFA75, 'M', u'冀'), (0xFA76, 'M', u'勇'), (0xFA77, 'M', u'勺'), (0xFA78, 'M', u'喝'), (0xFA79, 'M', u'啕'), (0xFA7A, 'M', u'喙'), (0xFA7B, 'M', u'嗢'), (0xFA7C, 'M', u'塚'), (0xFA7D, 'M', u'墳'), (0xFA7E, 'M', u'奄'), (0xFA7F, 'M', u'奔'), (0xFA80, 'M', u'婢'), (0xFA81, 'M', u'嬨'), (0xFA82, 'M', u'廒'), (0xFA83, 'M', u'廙'), (0xFA84, 'M', u'彩'), (0xFA85, 'M', u'徭'), (0xFA86, 'M', u'惘'), (0xFA87, 'M', u'慎'), (0xFA88, 'M', u'愈'), (0xFA89, 'M', u'憎'), (0xFA8A, 'M', u'慠'), (0xFA8B, 'M', u'懲'), (0xFA8C, 'M', u'戴'), (0xFA8D, 'M', u'揄'), (0xFA8E, 'M', u'搜'), (0xFA8F, 'M', u'摒'), (0xFA90, 'M', u'敖'), (0xFA91, 'M', u'晴'), (0xFA92, 'M', u'朗'), (0xFA93, 'M', u'望'), (0xFA94, 'M', u'杖'), (0xFA95, 'M', u'歹'), (0xFA96, 'M', u'殺'), (0xFA97, 'M', u'流'), (0xFA98, 'M', u'滛'), (0xFA99, 'M', u'滋'), (0xFA9A, 'M', u'漢'), (0xFA9B, 'M', u'瀞'), (0xFA9C, 'M', u'煮'), (0xFA9D, 'M', u'瞧'), (0xFA9E, 'M', u'爵'), (0xFA9F, 'M', u'犯'), (0xFAA0, 'M', u'猪'), (0xFAA1, 'M', u'瑱'), (0xFAA2, 'M', u'甆'), (0xFAA3, 'M', u'画'), (0xFAA4, 'M', u'瘝'), (0xFAA5, 'M', u'瘟'), (0xFAA6, 'M', u'益'), (0xFAA7, 'M', u'盛'), (0xFAA8, 'M', u'直'), (0xFAA9, 'M', u'睊'), (0xFAAA, 'M', u'着'), (0xFAAB, 'M', u'磌'), (0xFAAC, 'M', u'窱'), ] def _seg_43(): return [ (0xFAAD, 'M', u'節'), (0xFAAE, 'M', u'类'), (0xFAAF, 'M', u'絛'), (0xFAB0, 'M', u'練'), (0xFAB1, 'M', u'缾'), (0xFAB2, 'M', u'者'), (0xFAB3, 'M', u'荒'), (0xFAB4, 'M', u'華'), (0xFAB5, 'M', u'蝹'), (0xFAB6, 'M', u'襁'), (0xFAB7, 'M', u'覆'), (0xFAB8, 'M', u'視'), (0xFAB9, 'M', u'調'), (0xFABA, 'M', u'諸'), (0xFABB, 'M', u'請'), (0xFABC, 'M', u'謁'), (0xFABD, 'M', u'諾'), (0xFABE, 'M', u'諭'), (0xFABF, 'M', u'謹'), (0xFAC0, 'M', u'變'), (0xFAC1, 'M', u'贈'), (0xFAC2, 'M', u'輸'), (0xFAC3, 'M', u'遲'), (0xFAC4, 'M', u'醙'), (0xFAC5, 'M', u'鉶'), (0xFAC6, 'M', u'陼'), (0xFAC7, 'M', u'難'), (0xFAC8, 'M', u'靖'), (0xFAC9, 'M', u'韛'), (0xFACA, 'M', u'響'), (0xFACB, 'M', u'頋'), (0xFACC, 'M', u'頻'), (0xFACD, 'M', u'鬒'), (0xFACE, 'M', u'龜'), (0xFACF, 'M', u'𢡊'), (0xFAD0, 'M', u'𢡄'), (0xFAD1, 'M', u'𣏕'), (0xFAD2, 'M', u'㮝'), (0xFAD3, 'M', u'䀘'), (0xFAD4, 'M', u'䀹'), (0xFAD5, 'M', u'𥉉'), (0xFAD6, 'M', u'𥳐'), (0xFAD7, 'M', u'𧻓'), (0xFAD8, 'M', u'齃'), (0xFAD9, 'M', u'龎'), (0xFADA, 'X'), (0xFB00, 'M', u'ff'), (0xFB01, 'M', u'fi'), (0xFB02, 'M', u'fl'), (0xFB03, 'M', u'ffi'), (0xFB04, 'M', u'ffl'), (0xFB05, 'M', u'st'), (0xFB07, 'X'), (0xFB13, 'M', u'մն'), (0xFB14, 'M', u'մե'), (0xFB15, 'M', u'մի'), (0xFB16, 'M', u'վն'), (0xFB17, 'M', u'մխ'), (0xFB18, 'X'), (0xFB1D, 'M', u'יִ'), (0xFB1E, 'V'), (0xFB1F, 'M', u'ײַ'), (0xFB20, 'M', u'ע'), (0xFB21, 'M', u'א'), (0xFB22, 'M', u'ד'), (0xFB23, 'M', u'ה'), (0xFB24, 'M', u'כ'), (0xFB25, 'M', u'ל'), (0xFB26, 'M', u'ם'), (0xFB27, 'M', u'ר'), (0xFB28, 'M', u'ת'), (0xFB29, '3', u'+'), (0xFB2A, 'M', u'שׁ'), (0xFB2B, 'M', u'שׂ'), (0xFB2C, 'M', u'שּׁ'), (0xFB2D, 'M', u'שּׂ'), (0xFB2E, 'M', u'אַ'), (0xFB2F, 'M', u'אָ'), (0xFB30, 'M', u'אּ'), (0xFB31, 'M', u'בּ'), (0xFB32, 'M', u'גּ'), (0xFB33, 'M', u'דּ'), (0xFB34, 'M', u'הּ'), (0xFB35, 'M', u'וּ'), (0xFB36, 'M', u'זּ'), (0xFB37, 'X'), 
(0xFB38, 'M', u'טּ'), (0xFB39, 'M', u'יּ'), (0xFB3A, 'M', u'ךּ'), (0xFB3B, 'M', u'כּ'), (0xFB3C, 'M', u'לּ'), (0xFB3D, 'X'), (0xFB3E, 'M', u'מּ'), (0xFB3F, 'X'), (0xFB40, 'M', u'נּ'), (0xFB41, 'M', u'סּ'), (0xFB42, 'X'), (0xFB43, 'M', u'ףּ'), (0xFB44, 'M', u'פּ'), (0xFB45, 'X'), ] def _seg_44(): return [ (0xFB46, 'M', u'צּ'), (0xFB47, 'M', u'קּ'), (0xFB48, 'M', u'רּ'), (0xFB49, 'M', u'שּ'), (0xFB4A, 'M', u'תּ'), (0xFB4B, 'M', u'וֹ'), (0xFB4C, 'M', u'בֿ'), (0xFB4D, 'M', u'כֿ'), (0xFB4E, 'M', u'פֿ'), (0xFB4F, 'M', u'אל'), (0xFB50, 'M', u'ٱ'), (0xFB52, 'M', u'ٻ'), (0xFB56, 'M', u'پ'), (0xFB5A, 'M', u'ڀ'), (0xFB5E, 'M', u'ٺ'), (0xFB62, 'M', u'ٿ'), (0xFB66, 'M', u'ٹ'), (0xFB6A, 'M', u'ڤ'), (0xFB6E, 'M', u'ڦ'), (0xFB72, 'M', u'ڄ'), (0xFB76, 'M', u'ڃ'), (0xFB7A, 'M', u'چ'), (0xFB7E, 'M', u'ڇ'), (0xFB82, 'M', u'ڍ'), (0xFB84, 'M', u'ڌ'), (0xFB86, 'M', u'ڎ'), (0xFB88, 'M', u'ڈ'), (0xFB8A, 'M', u'ژ'), (0xFB8C, 'M', u'ڑ'), (0xFB8E, 'M', u'ک'), (0xFB92, 'M', u'گ'), (0xFB96, 'M', u'ڳ'), (0xFB9A, 'M', u'ڱ'), (0xFB9E, 'M', u'ں'), (0xFBA0, 'M', u'ڻ'), (0xFBA4, 'M', u'ۀ'), (0xFBA6, 'M', u'ہ'), (0xFBAA, 'M', u'ھ'), (0xFBAE, 'M', u'ے'), (0xFBB0, 'M', u'ۓ'), (0xFBB2, 'V'), (0xFBC2, 'X'), (0xFBD3, 'M', u'ڭ'), (0xFBD7, 'M', u'ۇ'), (0xFBD9, 'M', u'ۆ'), (0xFBDB, 'M', u'ۈ'), (0xFBDD, 'M', u'ۇٴ'), (0xFBDE, 'M', u'ۋ'), (0xFBE0, 'M', u'ۅ'), (0xFBE2, 'M', u'ۉ'), (0xFBE4, 'M', u'ې'), (0xFBE8, 'M', u'ى'), (0xFBEA, 'M', u'ئا'), (0xFBEC, 'M', u'ئە'), (0xFBEE, 'M', u'ئو'), (0xFBF0, 'M', u'ئۇ'), (0xFBF2, 'M', u'ئۆ'), (0xFBF4, 'M', u'ئۈ'), (0xFBF6, 'M', u'ئې'), (0xFBF9, 'M', u'ئى'), (0xFBFC, 'M', u'ی'), (0xFC00, 'M', u'ئج'), (0xFC01, 'M', u'ئح'), (0xFC02, 'M', u'ئم'), (0xFC03, 'M', u'ئى'), (0xFC04, 'M', u'ئي'), (0xFC05, 'M', u'بج'), (0xFC06, 'M', u'بح'), (0xFC07, 'M', u'بخ'), (0xFC08, 'M', u'بم'), (0xFC09, 'M', u'بى'), (0xFC0A, 'M', u'بي'), (0xFC0B, 'M', u'تج'), (0xFC0C, 'M', u'تح'), (0xFC0D, 'M', u'تخ'), (0xFC0E, 'M', u'تم'), (0xFC0F, 'M', u'تى'), (0xFC10, 'M', u'تي'), (0xFC11, 'M', u'ثج'), (0xFC12, 'M', u'ثم'), (0xFC13, 'M', u'ثى'), (0xFC14, 'M', u'ثي'), (0xFC15, 'M', u'جح'), (0xFC16, 'M', u'جم'), (0xFC17, 'M', u'حج'), (0xFC18, 'M', u'حم'), (0xFC19, 'M', u'خج'), (0xFC1A, 'M', u'خح'), (0xFC1B, 'M', u'خم'), (0xFC1C, 'M', u'سج'), (0xFC1D, 'M', u'سح'), (0xFC1E, 'M', u'سخ'), (0xFC1F, 'M', u'سم'), (0xFC20, 'M', u'صح'), (0xFC21, 'M', u'صم'), (0xFC22, 'M', u'ضج'), (0xFC23, 'M', u'ضح'), (0xFC24, 'M', u'ضخ'), (0xFC25, 'M', u'ضم'), (0xFC26, 'M', u'طح'), ] def _seg_45(): return [ (0xFC27, 'M', u'طم'), (0xFC28, 'M', u'ظم'), (0xFC29, 'M', u'عج'), (0xFC2A, 'M', u'عم'), (0xFC2B, 'M', u'غج'), (0xFC2C, 'M', u'غم'), (0xFC2D, 'M', u'فج'), (0xFC2E, 'M', u'فح'), (0xFC2F, 'M', u'فخ'), (0xFC30, 'M', u'فم'), (0xFC31, 'M', u'فى'), (0xFC32, 'M', u'في'), (0xFC33, 'M', u'قح'), (0xFC34, 'M', u'قم'), (0xFC35, 'M', u'قى'), (0xFC36, 'M', u'قي'), (0xFC37, 'M', u'كا'), (0xFC38, 'M', u'كج'), (0xFC39, 'M', u'كح'), (0xFC3A, 'M', u'كخ'), (0xFC3B, 'M', u'كل'), (0xFC3C, 'M', u'كم'), (0xFC3D, 'M', u'كى'), (0xFC3E, 'M', u'كي'), (0xFC3F, 'M', u'لج'), (0xFC40, 'M', u'لح'), (0xFC41, 'M', u'لخ'), (0xFC42, 'M', u'لم'), (0xFC43, 'M', u'لى'), (0xFC44, 'M', u'لي'), (0xFC45, 'M', u'مج'), (0xFC46, 'M', u'مح'), (0xFC47, 'M', u'مخ'), (0xFC48, 'M', u'مم'), (0xFC49, 'M', u'مى'), (0xFC4A, 'M', u'مي'), (0xFC4B, 'M', u'نج'), (0xFC4C, 'M', u'نح'), (0xFC4D, 'M', u'نخ'), (0xFC4E, 'M', u'نم'), (0xFC4F, 'M', u'نى'), (0xFC50, 'M', u'ني'), (0xFC51, 'M', u'هج'), (0xFC52, 'M', u'هم'), (0xFC53, 'M', u'هى'), (0xFC54, 'M', u'هي'), (0xFC55, 'M', u'يج'), (0xFC56, 'M', u'يح'), (0xFC57, 'M', 
u'يخ'), (0xFC58, 'M', u'يم'), (0xFC59, 'M', u'يى'), (0xFC5A, 'M', u'يي'), (0xFC5B, 'M', u'ذٰ'), (0xFC5C, 'M', u'رٰ'), (0xFC5D, 'M', u'ىٰ'), (0xFC5E, '3', u' ٌّ'), (0xFC5F, '3', u' ٍّ'), (0xFC60, '3', u' َّ'), (0xFC61, '3', u' ُّ'), (0xFC62, '3', u' ِّ'), (0xFC63, '3', u' ّٰ'), (0xFC64, 'M', u'ئر'), (0xFC65, 'M', u'ئز'), (0xFC66, 'M', u'ئم'), (0xFC67, 'M', u'ئن'), (0xFC68, 'M', u'ئى'), (0xFC69, 'M', u'ئي'), (0xFC6A, 'M', u'بر'), (0xFC6B, 'M', u'بز'), (0xFC6C, 'M', u'بم'), (0xFC6D, 'M', u'بن'), (0xFC6E, 'M', u'بى'), (0xFC6F, 'M', u'بي'), (0xFC70, 'M', u'تر'), (0xFC71, 'M', u'تز'), (0xFC72, 'M', u'تم'), (0xFC73, 'M', u'تن'), (0xFC74, 'M', u'تى'), (0xFC75, 'M', u'تي'), (0xFC76, 'M', u'ثر'), (0xFC77, 'M', u'ثز'), (0xFC78, 'M', u'ثم'), (0xFC79, 'M', u'ثن'), (0xFC7A, 'M', u'ثى'), (0xFC7B, 'M', u'ثي'), (0xFC7C, 'M', u'فى'), (0xFC7D, 'M', u'في'), (0xFC7E, 'M', u'قى'), (0xFC7F, 'M', u'قي'), (0xFC80, 'M', u'كا'), (0xFC81, 'M', u'كل'), (0xFC82, 'M', u'كم'), (0xFC83, 'M', u'كى'), (0xFC84, 'M', u'كي'), (0xFC85, 'M', u'لم'), (0xFC86, 'M', u'لى'), (0xFC87, 'M', u'لي'), (0xFC88, 'M', u'ما'), (0xFC89, 'M', u'مم'), (0xFC8A, 'M', u'نر'), ] def _seg_46(): return [ (0xFC8B, 'M', u'نز'), (0xFC8C, 'M', u'نم'), (0xFC8D, 'M', u'نن'), (0xFC8E, 'M', u'نى'), (0xFC8F, 'M', u'ني'), (0xFC90, 'M', u'ىٰ'), (0xFC91, 'M', u'ير'), (0xFC92, 'M', u'يز'), (0xFC93, 'M', u'يم'), (0xFC94, 'M', u'ين'), (0xFC95, 'M', u'يى'), (0xFC96, 'M', u'يي'), (0xFC97, 'M', u'ئج'), (0xFC98, 'M', u'ئح'), (0xFC99, 'M', u'ئخ'), (0xFC9A, 'M', u'ئم'), (0xFC9B, 'M', u'ئه'), (0xFC9C, 'M', u'بج'), (0xFC9D, 'M', u'بح'), (0xFC9E, 'M', u'بخ'), (0xFC9F, 'M', u'بم'), (0xFCA0, 'M', u'به'), (0xFCA1, 'M', u'تج'), (0xFCA2, 'M', u'تح'), (0xFCA3, 'M', u'تخ'), (0xFCA4, 'M', u'تم'), (0xFCA5, 'M', u'ته'), (0xFCA6, 'M', u'ثم'), (0xFCA7, 'M', u'جح'), (0xFCA8, 'M', u'جم'), (0xFCA9, 'M', u'حج'), (0xFCAA, 'M', u'حم'), (0xFCAB, 'M', u'خج'), (0xFCAC, 'M', u'خم'), (0xFCAD, 'M', u'سج'), (0xFCAE, 'M', u'سح'), (0xFCAF, 'M', u'سخ'), (0xFCB0, 'M', u'سم'), (0xFCB1, 'M', u'صح'), (0xFCB2, 'M', u'صخ'), (0xFCB3, 'M', u'صم'), (0xFCB4, 'M', u'ضج'), (0xFCB5, 'M', u'ضح'), (0xFCB6, 'M', u'ضخ'), (0xFCB7, 'M', u'ضم'), (0xFCB8, 'M', u'طح'), (0xFCB9, 'M', u'ظم'), (0xFCBA, 'M', u'عج'), (0xFCBB, 'M', u'عم'), (0xFCBC, 'M', u'غج'), (0xFCBD, 'M', u'غم'), (0xFCBE, 'M', u'فج'), (0xFCBF, 'M', u'فح'), (0xFCC0, 'M', u'فخ'), (0xFCC1, 'M', u'فم'), (0xFCC2, 'M', u'قح'), (0xFCC3, 'M', u'قم'), (0xFCC4, 'M', u'كج'), (0xFCC5, 'M', u'كح'), (0xFCC6, 'M', u'كخ'), (0xFCC7, 'M', u'كل'), (0xFCC8, 'M', u'كم'), (0xFCC9, 'M', u'لج'), (0xFCCA, 'M', u'لح'), (0xFCCB, 'M', u'لخ'), (0xFCCC, 'M', u'لم'), (0xFCCD, 'M', u'له'), (0xFCCE, 'M', u'مج'), (0xFCCF, 'M', u'مح'), (0xFCD0, 'M', u'مخ'), (0xFCD1, 'M', u'مم'), (0xFCD2, 'M', u'نج'), (0xFCD3, 'M', u'نح'), (0xFCD4, 'M', u'نخ'), (0xFCD5, 'M', u'نم'), (0xFCD6, 'M', u'نه'), (0xFCD7, 'M', u'هج'), (0xFCD8, 'M', u'هم'), (0xFCD9, 'M', u'هٰ'), (0xFCDA, 'M', u'يج'), (0xFCDB, 'M', u'يح'), (0xFCDC, 'M', u'يخ'), (0xFCDD, 'M', u'يم'), (0xFCDE, 'M', u'يه'), (0xFCDF, 'M', u'ئم'), (0xFCE0, 'M', u'ئه'), (0xFCE1, 'M', u'بم'), (0xFCE2, 'M', u'به'), (0xFCE3, 'M', u'تم'), (0xFCE4, 'M', u'ته'), (0xFCE5, 'M', u'ثم'), (0xFCE6, 'M', u'ثه'), (0xFCE7, 'M', u'سم'), (0xFCE8, 'M', u'سه'), (0xFCE9, 'M', u'شم'), (0xFCEA, 'M', u'شه'), (0xFCEB, 'M', u'كل'), (0xFCEC, 'M', u'كم'), (0xFCED, 'M', u'لم'), (0xFCEE, 'M', u'نم'), ] def _seg_47(): return [ (0xFCEF, 'M', u'نه'), (0xFCF0, 'M', u'يم'), (0xFCF1, 'M', u'يه'), (0xFCF2, 'M', u'ـَّ'), (0xFCF3, 'M', u'ـُّ'), (0xFCF4, 'M', u'ـِّ'), (0xFCF5, 'M', u'طى'), (0xFCF6, 
'M', u'طي'), (0xFCF7, 'M', u'عى'), (0xFCF8, 'M', u'عي'), (0xFCF9, 'M', u'غى'), (0xFCFA, 'M', u'غي'), (0xFCFB, 'M', u'سى'), (0xFCFC, 'M', u'سي'), (0xFCFD, 'M', u'شى'), (0xFCFE, 'M', u'شي'), (0xFCFF, 'M', u'حى'), (0xFD00, 'M', u'حي'), (0xFD01, 'M', u'جى'), (0xFD02, 'M', u'جي'), (0xFD03, 'M', u'خى'), (0xFD04, 'M', u'خي'), (0xFD05, 'M', u'صى'), (0xFD06, 'M', u'صي'), (0xFD07, 'M', u'ضى'), (0xFD08, 'M', u'ضي'), (0xFD09, 'M', u'شج'), (0xFD0A, 'M', u'شح'), (0xFD0B, 'M', u'شخ'), (0xFD0C, 'M', u'شم'), (0xFD0D, 'M', u'شر'), (0xFD0E, 'M', u'سر'), (0xFD0F, 'M', u'صر'), (0xFD10, 'M', u'ضر'), (0xFD11, 'M', u'طى'), (0xFD12, 'M', u'طي'), (0xFD13, 'M', u'عى'), (0xFD14, 'M', u'عي'), (0xFD15, 'M', u'غى'), (0xFD16, 'M', u'غي'), (0xFD17, 'M', u'سى'), (0xFD18, 'M', u'سي'), (0xFD19, 'M', u'شى'), (0xFD1A, 'M', u'شي'), (0xFD1B, 'M', u'حى'), (0xFD1C, 'M', u'حي'), (0xFD1D, 'M', u'جى'), (0xFD1E, 'M', u'جي'), (0xFD1F, 'M', u'خى'), (0xFD20, 'M', u'خي'), (0xFD21, 'M', u'صى'), (0xFD22, 'M', u'صي'), (0xFD23, 'M', u'ضى'), (0xFD24, 'M', u'ضي'), (0xFD25, 'M', u'شج'), (0xFD26, 'M', u'شح'), (0xFD27, 'M', u'شخ'), (0xFD28, 'M', u'شم'), (0xFD29, 'M', u'شر'), (0xFD2A, 'M', u'سر'), (0xFD2B, 'M', u'صر'), (0xFD2C, 'M', u'ضر'), (0xFD2D, 'M', u'شج'), (0xFD2E, 'M', u'شح'), (0xFD2F, 'M', u'شخ'), (0xFD30, 'M', u'شم'), (0xFD31, 'M', u'سه'), (0xFD32, 'M', u'شه'), (0xFD33, 'M', u'طم'), (0xFD34, 'M', u'سج'), (0xFD35, 'M', u'سح'), (0xFD36, 'M', u'سخ'), (0xFD37, 'M', u'شج'), (0xFD38, 'M', u'شح'), (0xFD39, 'M', u'شخ'), (0xFD3A, 'M', u'طم'), (0xFD3B, 'M', u'ظم'), (0xFD3C, 'M', u'اً'), (0xFD3E, 'V'), (0xFD40, 'X'), (0xFD50, 'M', u'تجم'), (0xFD51, 'M', u'تحج'), (0xFD53, 'M', u'تحم'), (0xFD54, 'M', u'تخم'), (0xFD55, 'M', u'تمج'), (0xFD56, 'M', u'تمح'), (0xFD57, 'M', u'تمخ'), (0xFD58, 'M', u'جمح'), (0xFD5A, 'M', u'حمي'), (0xFD5B, 'M', u'حمى'), (0xFD5C, 'M', u'سحج'), (0xFD5D, 'M', u'سجح'), (0xFD5E, 'M', u'سجى'), (0xFD5F, 'M', u'سمح'), (0xFD61, 'M', u'سمج'), (0xFD62, 'M', u'سمم'), (0xFD64, 'M', u'صحح'), (0xFD66, 'M', u'صمم'), (0xFD67, 'M', u'شحم'), (0xFD69, 'M', u'شجي'), ] def _seg_48(): return [ (0xFD6A, 'M', u'شمخ'), (0xFD6C, 'M', u'شمم'), (0xFD6E, 'M', u'ضحى'), (0xFD6F, 'M', u'ضخم'), (0xFD71, 'M', u'طمح'), (0xFD73, 'M', u'طمم'), (0xFD74, 'M', u'طمي'), (0xFD75, 'M', u'عجم'), (0xFD76, 'M', u'عمم'), (0xFD78, 'M', u'عمى'), (0xFD79, 'M', u'غمم'), (0xFD7A, 'M', u'غمي'), (0xFD7B, 'M', u'غمى'), (0xFD7C, 'M', u'فخم'), (0xFD7E, 'M', u'قمح'), (0xFD7F, 'M', u'قمم'), (0xFD80, 'M', u'لحم'), (0xFD81, 'M', u'لحي'), (0xFD82, 'M', u'لحى'), (0xFD83, 'M', u'لجج'), (0xFD85, 'M', u'لخم'), (0xFD87, 'M', u'لمح'), (0xFD89, 'M', u'محج'), (0xFD8A, 'M', u'محم'), (0xFD8B, 'M', u'محي'), (0xFD8C, 'M', u'مجح'), (0xFD8D, 'M', u'مجم'), (0xFD8E, 'M', u'مخج'), (0xFD8F, 'M', u'مخم'), (0xFD90, 'X'), (0xFD92, 'M', u'مجخ'), (0xFD93, 'M', u'همج'), (0xFD94, 'M', u'همم'), (0xFD95, 'M', u'نحم'), (0xFD96, 'M', u'نحى'), (0xFD97, 'M', u'نجم'), (0xFD99, 'M', u'نجى'), (0xFD9A, 'M', u'نمي'), (0xFD9B, 'M', u'نمى'), (0xFD9C, 'M', u'يمم'), (0xFD9E, 'M', u'بخي'), (0xFD9F, 'M', u'تجي'), (0xFDA0, 'M', u'تجى'), (0xFDA1, 'M', u'تخي'), (0xFDA2, 'M', u'تخى'), (0xFDA3, 'M', u'تمي'), (0xFDA4, 'M', u'تمى'), (0xFDA5, 'M', u'جمي'), (0xFDA6, 'M', u'جحى'), (0xFDA7, 'M', u'جمى'), (0xFDA8, 'M', u'سخى'), (0xFDA9, 'M', u'صحي'), (0xFDAA, 'M', u'شحي'), (0xFDAB, 'M', u'ضحي'), (0xFDAC, 'M', u'لجي'), (0xFDAD, 'M', u'لمي'), (0xFDAE, 'M', u'يحي'), (0xFDAF, 'M', u'يجي'), (0xFDB0, 'M', u'يمي'), (0xFDB1, 'M', u'ممي'), (0xFDB2, 'M', u'قمي'), (0xFDB3, 'M', u'نحي'), (0xFDB4, 'M', u'قمح'), (0xFDB5, 'M', u'لحم'), (0xFDB6, 'M', 
u'عمي'), (0xFDB7, 'M', u'كمي'), (0xFDB8, 'M', u'نجح'), (0xFDB9, 'M', u'مخي'), (0xFDBA, 'M', u'لجم'), (0xFDBB, 'M', u'كمم'), (0xFDBC, 'M', u'لجم'), (0xFDBD, 'M', u'نجح'), (0xFDBE, 'M', u'جحي'), (0xFDBF, 'M', u'حجي'), (0xFDC0, 'M', u'مجي'), (0xFDC1, 'M', u'فمي'), (0xFDC2, 'M', u'بحي'), (0xFDC3, 'M', u'كمم'), (0xFDC4, 'M', u'عجم'), (0xFDC5, 'M', u'صمم'), (0xFDC6, 'M', u'سخي'), (0xFDC7, 'M', u'نجي'), (0xFDC8, 'X'), (0xFDF0, 'M', u'صلے'), (0xFDF1, 'M', u'قلے'), (0xFDF2, 'M', u'الله'), (0xFDF3, 'M', u'اكبر'), (0xFDF4, 'M', u'محمد'), (0xFDF5, 'M', u'صلعم'), (0xFDF6, 'M', u'رسول'), (0xFDF7, 'M', u'عليه'), (0xFDF8, 'M', u'وسلم'), (0xFDF9, 'M', u'صلى'), (0xFDFA, '3', u'صلى الله عليه وسلم'), (0xFDFB, '3', u'جل جلاله'), (0xFDFC, 'M', u'ریال'), (0xFDFD, 'V'), (0xFDFE, 'X'), (0xFE00, 'I'), (0xFE10, '3', u','), ] def _seg_49(): return [ (0xFE11, 'M', u'、'), (0xFE12, 'X'), (0xFE13, '3', u':'), (0xFE14, '3', u';'), (0xFE15, '3', u'!'), (0xFE16, '3', u'?'), (0xFE17, 'M', u'〖'), (0xFE18, 'M', u'〗'), (0xFE19, 'X'), (0xFE20, 'V'), (0xFE30, 'X'), (0xFE31, 'M', u'—'), (0xFE32, 'M', u'–'), (0xFE33, '3', u'_'), (0xFE35, '3', u'('), (0xFE36, '3', u')'), (0xFE37, '3', u'{'), (0xFE38, '3', u'}'), (0xFE39, 'M', u'〔'), (0xFE3A, 'M', u'〕'), (0xFE3B, 'M', u'【'), (0xFE3C, 'M', u'】'), (0xFE3D, 'M', u'《'), (0xFE3E, 'M', u'》'), (0xFE3F, 'M', u'〈'), (0xFE40, 'M', u'〉'), (0xFE41, 'M', u'「'), (0xFE42, 'M', u'」'), (0xFE43, 'M', u'『'), (0xFE44, 'M', u'』'), (0xFE45, 'V'), (0xFE47, '3', u'['), (0xFE48, '3', u']'), (0xFE49, '3', u' ̅'), (0xFE4D, '3', u'_'), (0xFE50, '3', u','), (0xFE51, 'M', u'、'), (0xFE52, 'X'), (0xFE54, '3', u';'), (0xFE55, '3', u':'), (0xFE56, '3', u'?'), (0xFE57, '3', u'!'), (0xFE58, 'M', u'—'), (0xFE59, '3', u'('), (0xFE5A, '3', u')'), (0xFE5B, '3', u'{'), (0xFE5C, '3', u'}'), (0xFE5D, 'M', u'〔'), (0xFE5E, 'M', u'〕'), (0xFE5F, '3', u'#'), (0xFE60, '3', u'&'), (0xFE61, '3', u'*'), (0xFE62, '3', u'+'), (0xFE63, 'M', u'-'), (0xFE64, '3', u'<'), (0xFE65, '3', u'>'), (0xFE66, '3', u'='), (0xFE67, 'X'), (0xFE68, '3', u'\\'), (0xFE69, '3', u'$'), (0xFE6A, '3', u'%'), (0xFE6B, '3', u'@'), (0xFE6C, 'X'), (0xFE70, '3', u' ً'), (0xFE71, 'M', u'ـً'), (0xFE72, '3', u' ٌ'), (0xFE73, 'V'), (0xFE74, '3', u' ٍ'), (0xFE75, 'X'), (0xFE76, '3', u' َ'), (0xFE77, 'M', u'ـَ'), (0xFE78, '3', u' ُ'), (0xFE79, 'M', u'ـُ'), (0xFE7A, '3', u' ِ'), (0xFE7B, 'M', u'ـِ'), (0xFE7C, '3', u' ّ'), (0xFE7D, 'M', u'ـّ'), (0xFE7E, '3', u' ْ'), (0xFE7F, 'M', u'ـْ'), (0xFE80, 'M', u'ء'), (0xFE81, 'M', u'آ'), (0xFE83, 'M', u'أ'), (0xFE85, 'M', u'ؤ'), (0xFE87, 'M', u'إ'), (0xFE89, 'M', u'ئ'), (0xFE8D, 'M', u'ا'), (0xFE8F, 'M', u'ب'), (0xFE93, 'M', u'ة'), (0xFE95, 'M', u'ت'), (0xFE99, 'M', u'ث'), (0xFE9D, 'M', u'ج'), (0xFEA1, 'M', u'ح'), (0xFEA5, 'M', u'خ'), (0xFEA9, 'M', u'د'), (0xFEAB, 'M', u'ذ'), (0xFEAD, 'M', u'ر'), (0xFEAF, 'M', u'ز'), (0xFEB1, 'M', u'س'), (0xFEB5, 'M', u'ش'), (0xFEB9, 'M', u'ص'), ] def _seg_50(): return [ (0xFEBD, 'M', u'ض'), (0xFEC1, 'M', u'ط'), (0xFEC5, 'M', u'ظ'), (0xFEC9, 'M', u'ع'), (0xFECD, 'M', u'غ'), (0xFED1, 'M', u'ف'), (0xFED5, 'M', u'ق'), (0xFED9, 'M', u'ك'), (0xFEDD, 'M', u'ل'), (0xFEE1, 'M', u'م'), (0xFEE5, 'M', u'ن'), (0xFEE9, 'M', u'ه'), (0xFEED, 'M', u'و'), (0xFEEF, 'M', u'ى'), (0xFEF1, 'M', u'ي'), (0xFEF5, 'M', u'لآ'), (0xFEF7, 'M', u'لأ'), (0xFEF9, 'M', u'لإ'), (0xFEFB, 'M', u'لا'), (0xFEFD, 'X'), (0xFEFF, 'I'), (0xFF00, 'X'), (0xFF01, '3', u'!'), (0xFF02, '3', u'"'), (0xFF03, '3', u'#'), (0xFF04, '3', u'$'), (0xFF05, '3', u'%'), (0xFF06, '3', u'&'), (0xFF07, '3', u'\''), (0xFF08, '3', u'('), (0xFF09, '3', u')'), 
(0xFF0A, '3', u'*'), (0xFF0B, '3', u'+'), (0xFF0C, '3', u','), (0xFF0D, 'M', u'-'), (0xFF0E, 'M', u'.'), (0xFF0F, '3', u'/'), (0xFF10, 'M', u'0'), (0xFF11, 'M', u'1'), (0xFF12, 'M', u'2'), (0xFF13, 'M', u'3'), (0xFF14, 'M', u'4'), (0xFF15, 'M', u'5'), (0xFF16, 'M', u'6'), (0xFF17, 'M', u'7'), (0xFF18, 'M', u'8'), (0xFF19, 'M', u'9'), (0xFF1A, '3', u':'), (0xFF1B, '3', u';'), (0xFF1C, '3', u'<'), (0xFF1D, '3', u'='), (0xFF1E, '3', u'>'), (0xFF1F, '3', u'?'), (0xFF20, '3', u'@'), (0xFF21, 'M', u'a'), (0xFF22, 'M', u'b'), (0xFF23, 'M', u'c'), (0xFF24, 'M', u'd'), (0xFF25, 'M', u'e'), (0xFF26, 'M', u'f'), (0xFF27, 'M', u'g'), (0xFF28, 'M', u'h'), (0xFF29, 'M', u'i'), (0xFF2A, 'M', u'j'), (0xFF2B, 'M', u'k'), (0xFF2C, 'M', u'l'), (0xFF2D, 'M', u'm'), (0xFF2E, 'M', u'n'), (0xFF2F, 'M', u'o'), (0xFF30, 'M', u'p'), (0xFF31, 'M', u'q'), (0xFF32, 'M', u'r'), (0xFF33, 'M', u's'), (0xFF34, 'M', u't'), (0xFF35, 'M', u'u'), (0xFF36, 'M', u'v'), (0xFF37, 'M', u'w'), (0xFF38, 'M', u'x'), (0xFF39, 'M', u'y'), (0xFF3A, 'M', u'z'), (0xFF3B, '3', u'['), (0xFF3C, '3', u'\\'), (0xFF3D, '3', u']'), (0xFF3E, '3', u'^'), (0xFF3F, '3', u'_'), (0xFF40, '3', u'`'), (0xFF41, 'M', u'a'), (0xFF42, 'M', u'b'), (0xFF43, 'M', u'c'), (0xFF44, 'M', u'd'), (0xFF45, 'M', u'e'), (0xFF46, 'M', u'f'), (0xFF47, 'M', u'g'), (0xFF48, 'M', u'h'), (0xFF49, 'M', u'i'), (0xFF4A, 'M', u'j'), (0xFF4B, 'M', u'k'), (0xFF4C, 'M', u'l'), (0xFF4D, 'M', u'm'), (0xFF4E, 'M', u'n'), ] def _seg_51(): return [ (0xFF4F, 'M', u'o'), (0xFF50, 'M', u'p'), (0xFF51, 'M', u'q'), (0xFF52, 'M', u'r'), (0xFF53, 'M', u's'), (0xFF54, 'M', u't'), (0xFF55, 'M', u'u'), (0xFF56, 'M', u'v'), (0xFF57, 'M', u'w'), (0xFF58, 'M', u'x'), (0xFF59, 'M', u'y'), (0xFF5A, 'M', u'z'), (0xFF5B, '3', u'{'), (0xFF5C, '3', u'|'), (0xFF5D, '3', u'}'), (0xFF5E, '3', u'~'), (0xFF5F, 'M', u'⦅'), (0xFF60, 'M', u'⦆'), (0xFF61, 'M', u'.'), (0xFF62, 'M', u'「'), (0xFF63, 'M', u'」'), (0xFF64, 'M', u'、'), (0xFF65, 'M', u'・'), (0xFF66, 'M', u'ヲ'), (0xFF67, 'M', u'ァ'), (0xFF68, 'M', u'ィ'), (0xFF69, 'M', u'ゥ'), (0xFF6A, 'M', u'ェ'), (0xFF6B, 'M', u'ォ'), (0xFF6C, 'M', u'ャ'), (0xFF6D, 'M', u'ュ'), (0xFF6E, 'M', u'ョ'), (0xFF6F, 'M', u'ッ'), (0xFF70, 'M', u'ー'), (0xFF71, 'M', u'ア'), (0xFF72, 'M', u'イ'), (0xFF73, 'M', u'ウ'), (0xFF74, 'M', u'エ'), (0xFF75, 'M', u'オ'), (0xFF76, 'M', u'カ'), (0xFF77, 'M', u'キ'), (0xFF78, 'M', u'ク'), (0xFF79, 'M', u'ケ'), (0xFF7A, 'M', u'コ'), (0xFF7B, 'M', u'サ'), (0xFF7C, 'M', u'シ'), (0xFF7D, 'M', u'ス'), (0xFF7E, 'M', u'セ'), (0xFF7F, 'M', u'ソ'), (0xFF80, 'M', u'タ'), (0xFF81, 'M', u'チ'), (0xFF82, 'M', u'ツ'), (0xFF83, 'M', u'テ'), (0xFF84, 'M', u'ト'), (0xFF85, 'M', u'ナ'), (0xFF86, 'M', u'ニ'), (0xFF87, 'M', u'ヌ'), (0xFF88, 'M', u'ネ'), (0xFF89, 'M', u'ノ'), (0xFF8A, 'M', u'ハ'), (0xFF8B, 'M', u'ヒ'), (0xFF8C, 'M', u'フ'), (0xFF8D, 'M', u'ヘ'), (0xFF8E, 'M', u'ホ'), (0xFF8F, 'M', u'マ'), (0xFF90, 'M', u'ミ'), (0xFF91, 'M', u'ム'), (0xFF92, 'M', u'メ'), (0xFF93, 'M', u'モ'), (0xFF94, 'M', u'ヤ'), (0xFF95, 'M', u'ユ'), (0xFF96, 'M', u'ヨ'), (0xFF97, 'M', u'ラ'), (0xFF98, 'M', u'リ'), (0xFF99, 'M', u'ル'), (0xFF9A, 'M', u'レ'), (0xFF9B, 'M', u'ロ'), (0xFF9C, 'M', u'ワ'), (0xFF9D, 'M', u'ン'), (0xFF9E, 'M', u'゙'), (0xFF9F, 'M', u'゚'), (0xFFA0, 'X'), (0xFFA1, 'M', u'ᄀ'), (0xFFA2, 'M', u'ᄁ'), (0xFFA3, 'M', u'ᆪ'), (0xFFA4, 'M', u'ᄂ'), (0xFFA5, 'M', u'ᆬ'), (0xFFA6, 'M', u'ᆭ'), (0xFFA7, 'M', u'ᄃ'), (0xFFA8, 'M', u'ᄄ'), (0xFFA9, 'M', u'ᄅ'), (0xFFAA, 'M', u'ᆰ'), (0xFFAB, 'M', u'ᆱ'), (0xFFAC, 'M', u'ᆲ'), (0xFFAD, 'M', u'ᆳ'), (0xFFAE, 'M', u'ᆴ'), (0xFFAF, 'M', u'ᆵ'), (0xFFB0, 'M', u'ᄚ'), (0xFFB1, 'M', u'ᄆ'), 
(0xFFB2, 'M', u'ᄇ'), ] def _seg_52(): return [ (0xFFB3, 'M', u'ᄈ'), (0xFFB4, 'M', u'ᄡ'), (0xFFB5, 'M', u'ᄉ'), (0xFFB6, 'M', u'ᄊ'), (0xFFB7, 'M', u'ᄋ'), (0xFFB8, 'M', u'ᄌ'), (0xFFB9, 'M', u'ᄍ'), (0xFFBA, 'M', u'ᄎ'), (0xFFBB, 'M', u'ᄏ'), (0xFFBC, 'M', u'ᄐ'), (0xFFBD, 'M', u'ᄑ'), (0xFFBE, 'M', u'ᄒ'), (0xFFBF, 'X'), (0xFFC2, 'M', u'ᅡ'), (0xFFC3, 'M', u'ᅢ'), (0xFFC4, 'M', u'ᅣ'), (0xFFC5, 'M', u'ᅤ'), (0xFFC6, 'M', u'ᅥ'), (0xFFC7, 'M', u'ᅦ'), (0xFFC8, 'X'), (0xFFCA, 'M', u'ᅧ'), (0xFFCB, 'M', u'ᅨ'), (0xFFCC, 'M', u'ᅩ'), (0xFFCD, 'M', u'ᅪ'), (0xFFCE, 'M', u'ᅫ'), (0xFFCF, 'M', u'ᅬ'), (0xFFD0, 'X'), (0xFFD2, 'M', u'ᅭ'), (0xFFD3, 'M', u'ᅮ'), (0xFFD4, 'M', u'ᅯ'), (0xFFD5, 'M', u'ᅰ'), (0xFFD6, 'M', u'ᅱ'), (0xFFD7, 'M', u'ᅲ'), (0xFFD8, 'X'), (0xFFDA, 'M', u'ᅳ'), (0xFFDB, 'M', u'ᅴ'), (0xFFDC, 'M', u'ᅵ'), (0xFFDD, 'X'), (0xFFE0, 'M', u'¢'), (0xFFE1, 'M', u'£'), (0xFFE2, 'M', u'¬'), (0xFFE3, '3', u' ̄'), (0xFFE4, 'M', u'¦'), (0xFFE5, 'M', u'¥'), (0xFFE6, 'M', u'₩'), (0xFFE7, 'X'), (0xFFE8, 'M', u'│'), (0xFFE9, 'M', u'←'), (0xFFEA, 'M', u'↑'), (0xFFEB, 'M', u'→'), (0xFFEC, 'M', u'↓'), (0xFFED, 'M', u'■'), (0xFFEE, 'M', u'○'), (0xFFEF, 'X'), (0x10000, 'V'), (0x1000C, 'X'), (0x1000D, 'V'), (0x10027, 'X'), (0x10028, 'V'), (0x1003B, 'X'), (0x1003C, 'V'), (0x1003E, 'X'), (0x1003F, 'V'), (0x1004E, 'X'), (0x10050, 'V'), (0x1005E, 'X'), (0x10080, 'V'), (0x100FB, 'X'), (0x10100, 'V'), (0x10103, 'X'), (0x10107, 'V'), (0x10134, 'X'), (0x10137, 'V'), (0x1018F, 'X'), (0x10190, 'V'), (0x1019C, 'X'), (0x101A0, 'V'), (0x101A1, 'X'), (0x101D0, 'V'), (0x101FE, 'X'), (0x10280, 'V'), (0x1029D, 'X'), (0x102A0, 'V'), (0x102D1, 'X'), (0x102E0, 'V'), (0x102FC, 'X'), (0x10300, 'V'), (0x10324, 'X'), (0x1032D, 'V'), (0x1034B, 'X'), (0x10350, 'V'), (0x1037B, 'X'), (0x10380, 'V'), (0x1039E, 'X'), (0x1039F, 'V'), (0x103C4, 'X'), (0x103C8, 'V'), (0x103D6, 'X'), (0x10400, 'M', u'𐐨'), (0x10401, 'M', u'𐐩'), ] def _seg_53(): return [ (0x10402, 'M', u'𐐪'), (0x10403, 'M', u'𐐫'), (0x10404, 'M', u'𐐬'), (0x10405, 'M', u'𐐭'), (0x10406, 'M', u'𐐮'), (0x10407, 'M', u'𐐯'), (0x10408, 'M', u'𐐰'), (0x10409, 'M', u'𐐱'), (0x1040A, 'M', u'𐐲'), (0x1040B, 'M', u'𐐳'), (0x1040C, 'M', u'𐐴'), (0x1040D, 'M', u'𐐵'), (0x1040E, 'M', u'𐐶'), (0x1040F, 'M', u'𐐷'), (0x10410, 'M', u'𐐸'), (0x10411, 'M', u'𐐹'), (0x10412, 'M', u'𐐺'), (0x10413, 'M', u'𐐻'), (0x10414, 'M', u'𐐼'), (0x10415, 'M', u'𐐽'), (0x10416, 'M', u'𐐾'), (0x10417, 'M', u'𐐿'), (0x10418, 'M', u'𐑀'), (0x10419, 'M', u'𐑁'), (0x1041A, 'M', u'𐑂'), (0x1041B, 'M', u'𐑃'), (0x1041C, 'M', u'𐑄'), (0x1041D, 'M', u'𐑅'), (0x1041E, 'M', u'𐑆'), (0x1041F, 'M', u'𐑇'), (0x10420, 'M', u'𐑈'), (0x10421, 'M', u'𐑉'), (0x10422, 'M', u'𐑊'), (0x10423, 'M', u'𐑋'), (0x10424, 'M', u'𐑌'), (0x10425, 'M', u'𐑍'), (0x10426, 'M', u'𐑎'), (0x10427, 'M', u'𐑏'), (0x10428, 'V'), (0x1049E, 'X'), (0x104A0, 'V'), (0x104AA, 'X'), (0x104B0, 'M', u'𐓘'), (0x104B1, 'M', u'𐓙'), (0x104B2, 'M', u'𐓚'), (0x104B3, 'M', u'𐓛'), (0x104B4, 'M', u'𐓜'), (0x104B5, 'M', u'𐓝'), (0x104B6, 'M', u'𐓞'), (0x104B7, 'M', u'𐓟'), (0x104B8, 'M', u'𐓠'), (0x104B9, 'M', u'𐓡'), (0x104BA, 'M', u'𐓢'), (0x104BB, 'M', u'𐓣'), (0x104BC, 'M', u'𐓤'), (0x104BD, 'M', u'𐓥'), (0x104BE, 'M', u'𐓦'), (0x104BF, 'M', u'𐓧'), (0x104C0, 'M', u'𐓨'), (0x104C1, 'M', u'𐓩'), (0x104C2, 'M', u'𐓪'), (0x104C3, 'M', u'𐓫'), (0x104C4, 'M', u'𐓬'), (0x104C5, 'M', u'𐓭'), (0x104C6, 'M', u'𐓮'), (0x104C7, 'M', u'𐓯'), (0x104C8, 'M', u'𐓰'), (0x104C9, 'M', u'𐓱'), (0x104CA, 'M', u'𐓲'), (0x104CB, 'M', u'𐓳'), (0x104CC, 'M', u'𐓴'), (0x104CD, 'M', u'𐓵'), (0x104CE, 'M', u'𐓶'), (0x104CF, 'M', u'𐓷'), (0x104D0, 'M', u'𐓸'), (0x104D1, 'M', 
u'𐓹'), (0x104D2, 'M', u'𐓺'), (0x104D3, 'M', u'𐓻'), (0x104D4, 'X'), (0x104D8, 'V'), (0x104FC, 'X'), (0x10500, 'V'), (0x10528, 'X'), (0x10530, 'V'), (0x10564, 'X'), (0x1056F, 'V'), (0x10570, 'X'), (0x10600, 'V'), (0x10737, 'X'), (0x10740, 'V'), (0x10756, 'X'), (0x10760, 'V'), (0x10768, 'X'), (0x10800, 'V'), (0x10806, 'X'), (0x10808, 'V'), (0x10809, 'X'), (0x1080A, 'V'), (0x10836, 'X'), (0x10837, 'V'), ] def _seg_54(): return [ (0x10839, 'X'), (0x1083C, 'V'), (0x1083D, 'X'), (0x1083F, 'V'), (0x10856, 'X'), (0x10857, 'V'), (0x1089F, 'X'), (0x108A7, 'V'), (0x108B0, 'X'), (0x108E0, 'V'), (0x108F3, 'X'), (0x108F4, 'V'), (0x108F6, 'X'), (0x108FB, 'V'), (0x1091C, 'X'), (0x1091F, 'V'), (0x1093A, 'X'), (0x1093F, 'V'), (0x10940, 'X'), (0x10980, 'V'), (0x109B8, 'X'), (0x109BC, 'V'), (0x109D0, 'X'), (0x109D2, 'V'), (0x10A04, 'X'), (0x10A05, 'V'), (0x10A07, 'X'), (0x10A0C, 'V'), (0x10A14, 'X'), (0x10A15, 'V'), (0x10A18, 'X'), (0x10A19, 'V'), (0x10A36, 'X'), (0x10A38, 'V'), (0x10A3B, 'X'), (0x10A3F, 'V'), (0x10A49, 'X'), (0x10A50, 'V'), (0x10A59, 'X'), (0x10A60, 'V'), (0x10AA0, 'X'), (0x10AC0, 'V'), (0x10AE7, 'X'), (0x10AEB, 'V'), (0x10AF7, 'X'), (0x10B00, 'V'), (0x10B36, 'X'), (0x10B39, 'V'), (0x10B56, 'X'), (0x10B58, 'V'), (0x10B73, 'X'), (0x10B78, 'V'), (0x10B92, 'X'), (0x10B99, 'V'), (0x10B9D, 'X'), (0x10BA9, 'V'), (0x10BB0, 'X'), (0x10C00, 'V'), (0x10C49, 'X'), (0x10C80, 'M', u'𐳀'), (0x10C81, 'M', u'𐳁'), (0x10C82, 'M', u'𐳂'), (0x10C83, 'M', u'𐳃'), (0x10C84, 'M', u'𐳄'), (0x10C85, 'M', u'𐳅'), (0x10C86, 'M', u'𐳆'), (0x10C87, 'M', u'𐳇'), (0x10C88, 'M', u'𐳈'), (0x10C89, 'M', u'𐳉'), (0x10C8A, 'M', u'𐳊'), (0x10C8B, 'M', u'𐳋'), (0x10C8C, 'M', u'𐳌'), (0x10C8D, 'M', u'𐳍'), (0x10C8E, 'M', u'𐳎'), (0x10C8F, 'M', u'𐳏'), (0x10C90, 'M', u'𐳐'), (0x10C91, 'M', u'𐳑'), (0x10C92, 'M', u'𐳒'), (0x10C93, 'M', u'𐳓'), (0x10C94, 'M', u'𐳔'), (0x10C95, 'M', u'𐳕'), (0x10C96, 'M', u'𐳖'), (0x10C97, 'M', u'𐳗'), (0x10C98, 'M', u'𐳘'), (0x10C99, 'M', u'𐳙'), (0x10C9A, 'M', u'𐳚'), (0x10C9B, 'M', u'𐳛'), (0x10C9C, 'M', u'𐳜'), (0x10C9D, 'M', u'𐳝'), (0x10C9E, 'M', u'𐳞'), (0x10C9F, 'M', u'𐳟'), (0x10CA0, 'M', u'𐳠'), (0x10CA1, 'M', u'𐳡'), (0x10CA2, 'M', u'𐳢'), (0x10CA3, 'M', u'𐳣'), (0x10CA4, 'M', u'𐳤'), (0x10CA5, 'M', u'𐳥'), (0x10CA6, 'M', u'𐳦'), (0x10CA7, 'M', u'𐳧'), (0x10CA8, 'M', u'𐳨'), ] def _seg_55(): return [ (0x10CA9, 'M', u'𐳩'), (0x10CAA, 'M', u'𐳪'), (0x10CAB, 'M', u'𐳫'), (0x10CAC, 'M', u'𐳬'), (0x10CAD, 'M', u'𐳭'), (0x10CAE, 'M', u'𐳮'), (0x10CAF, 'M', u'𐳯'), (0x10CB0, 'M', u'𐳰'), (0x10CB1, 'M', u'𐳱'), (0x10CB2, 'M', u'𐳲'), (0x10CB3, 'X'), (0x10CC0, 'V'), (0x10CF3, 'X'), (0x10CFA, 'V'), (0x10D28, 'X'), (0x10D30, 'V'), (0x10D3A, 'X'), (0x10E60, 'V'), (0x10E7F, 'X'), (0x10F00, 'V'), (0x10F28, 'X'), (0x10F30, 'V'), (0x10F5A, 'X'), (0x11000, 'V'), (0x1104E, 'X'), (0x11052, 'V'), (0x11070, 'X'), (0x1107F, 'V'), (0x110BD, 'X'), (0x110BE, 'V'), (0x110C2, 'X'), (0x110D0, 'V'), (0x110E9, 'X'), (0x110F0, 'V'), (0x110FA, 'X'), (0x11100, 'V'), (0x11135, 'X'), (0x11136, 'V'), (0x11147, 'X'), (0x11150, 'V'), (0x11177, 'X'), (0x11180, 'V'), (0x111CE, 'X'), (0x111D0, 'V'), (0x111E0, 'X'), (0x111E1, 'V'), (0x111F5, 'X'), (0x11200, 'V'), (0x11212, 'X'), (0x11213, 'V'), (0x1123F, 'X'), (0x11280, 'V'), (0x11287, 'X'), (0x11288, 'V'), (0x11289, 'X'), (0x1128A, 'V'), (0x1128E, 'X'), (0x1128F, 'V'), (0x1129E, 'X'), (0x1129F, 'V'), (0x112AA, 'X'), (0x112B0, 'V'), (0x112EB, 'X'), (0x112F0, 'V'), (0x112FA, 'X'), (0x11300, 'V'), (0x11304, 'X'), (0x11305, 'V'), (0x1130D, 'X'), (0x1130F, 'V'), (0x11311, 'X'), (0x11313, 'V'), (0x11329, 'X'), (0x1132A, 'V'), (0x11331, 
'X'), (0x11332, 'V'), (0x11334, 'X'), (0x11335, 'V'), (0x1133A, 'X'), (0x1133B, 'V'), (0x11345, 'X'), (0x11347, 'V'), (0x11349, 'X'), (0x1134B, 'V'), (0x1134E, 'X'), (0x11350, 'V'), (0x11351, 'X'), (0x11357, 'V'), (0x11358, 'X'), (0x1135D, 'V'), (0x11364, 'X'), (0x11366, 'V'), (0x1136D, 'X'), (0x11370, 'V'), (0x11375, 'X'), (0x11400, 'V'), (0x1145A, 'X'), (0x1145B, 'V'), (0x1145C, 'X'), (0x1145D, 'V'), ] def _seg_56(): return [ (0x1145F, 'X'), (0x11480, 'V'), (0x114C8, 'X'), (0x114D0, 'V'), (0x114DA, 'X'), (0x11580, 'V'), (0x115B6, 'X'), (0x115B8, 'V'), (0x115DE, 'X'), (0x11600, 'V'), (0x11645, 'X'), (0x11650, 'V'), (0x1165A, 'X'), (0x11660, 'V'), (0x1166D, 'X'), (0x11680, 'V'), (0x116B8, 'X'), (0x116C0, 'V'), (0x116CA, 'X'), (0x11700, 'V'), (0x1171B, 'X'), (0x1171D, 'V'), (0x1172C, 'X'), (0x11730, 'V'), (0x11740, 'X'), (0x11800, 'V'), (0x1183C, 'X'), (0x118A0, 'M', u'𑣀'), (0x118A1, 'M', u'𑣁'), (0x118A2, 'M', u'𑣂'), (0x118A3, 'M', u'𑣃'), (0x118A4, 'M', u'𑣄'), (0x118A5, 'M', u'𑣅'), (0x118A6, 'M', u'𑣆'), (0x118A7, 'M', u'𑣇'), (0x118A8, 'M', u'𑣈'), (0x118A9, 'M', u'𑣉'), (0x118AA, 'M', u'𑣊'), (0x118AB, 'M', u'𑣋'), (0x118AC, 'M', u'𑣌'), (0x118AD, 'M', u'𑣍'), (0x118AE, 'M', u'𑣎'), (0x118AF, 'M', u'𑣏'), (0x118B0, 'M', u'𑣐'), (0x118B1, 'M', u'𑣑'), (0x118B2, 'M', u'𑣒'), (0x118B3, 'M', u'𑣓'), (0x118B4, 'M', u'𑣔'), (0x118B5, 'M', u'𑣕'), (0x118B6, 'M', u'𑣖'), (0x118B7, 'M', u'𑣗'), (0x118B8, 'M', u'𑣘'), (0x118B9, 'M', u'𑣙'), (0x118BA, 'M', u'𑣚'), (0x118BB, 'M', u'𑣛'), (0x118BC, 'M', u'𑣜'), (0x118BD, 'M', u'𑣝'), (0x118BE, 'M', u'𑣞'), (0x118BF, 'M', u'𑣟'), (0x118C0, 'V'), (0x118F3, 'X'), (0x118FF, 'V'), (0x11900, 'X'), (0x11A00, 'V'), (0x11A48, 'X'), (0x11A50, 'V'), (0x11A84, 'X'), (0x11A86, 'V'), (0x11AA3, 'X'), (0x11AC0, 'V'), (0x11AF9, 'X'), (0x11C00, 'V'), (0x11C09, 'X'), (0x11C0A, 'V'), (0x11C37, 'X'), (0x11C38, 'V'), (0x11C46, 'X'), (0x11C50, 'V'), (0x11C6D, 'X'), (0x11C70, 'V'), (0x11C90, 'X'), (0x11C92, 'V'), (0x11CA8, 'X'), (0x11CA9, 'V'), (0x11CB7, 'X'), (0x11D00, 'V'), (0x11D07, 'X'), (0x11D08, 'V'), (0x11D0A, 'X'), (0x11D0B, 'V'), (0x11D37, 'X'), (0x11D3A, 'V'), (0x11D3B, 'X'), (0x11D3C, 'V'), (0x11D3E, 'X'), (0x11D3F, 'V'), (0x11D48, 'X'), (0x11D50, 'V'), (0x11D5A, 'X'), (0x11D60, 'V'), ] def _seg_57(): return [ (0x11D66, 'X'), (0x11D67, 'V'), (0x11D69, 'X'), (0x11D6A, 'V'), (0x11D8F, 'X'), (0x11D90, 'V'), (0x11D92, 'X'), (0x11D93, 'V'), (0x11D99, 'X'), (0x11DA0, 'V'), (0x11DAA, 'X'), (0x11EE0, 'V'), (0x11EF9, 'X'), (0x12000, 'V'), (0x1239A, 'X'), (0x12400, 'V'), (0x1246F, 'X'), (0x12470, 'V'), (0x12475, 'X'), (0x12480, 'V'), (0x12544, 'X'), (0x13000, 'V'), (0x1342F, 'X'), (0x14400, 'V'), (0x14647, 'X'), (0x16800, 'V'), (0x16A39, 'X'), (0x16A40, 'V'), (0x16A5F, 'X'), (0x16A60, 'V'), (0x16A6A, 'X'), (0x16A6E, 'V'), (0x16A70, 'X'), (0x16AD0, 'V'), (0x16AEE, 'X'), (0x16AF0, 'V'), (0x16AF6, 'X'), (0x16B00, 'V'), (0x16B46, 'X'), (0x16B50, 'V'), (0x16B5A, 'X'), (0x16B5B, 'V'), (0x16B62, 'X'), (0x16B63, 'V'), (0x16B78, 'X'), (0x16B7D, 'V'), (0x16B90, 'X'), (0x16E60, 'V'), (0x16E9B, 'X'), (0x16F00, 'V'), (0x16F45, 'X'), (0x16F50, 'V'), (0x16F7F, 'X'), (0x16F8F, 'V'), (0x16FA0, 'X'), (0x16FE0, 'V'), (0x16FE2, 'X'), (0x17000, 'V'), (0x187F2, 'X'), (0x18800, 'V'), (0x18AF3, 'X'), (0x1B000, 'V'), (0x1B11F, 'X'), (0x1B170, 'V'), (0x1B2FC, 'X'), (0x1BC00, 'V'), (0x1BC6B, 'X'), (0x1BC70, 'V'), (0x1BC7D, 'X'), (0x1BC80, 'V'), (0x1BC89, 'X'), (0x1BC90, 'V'), (0x1BC9A, 'X'), (0x1BC9C, 'V'), (0x1BCA0, 'I'), (0x1BCA4, 'X'), (0x1D000, 'V'), (0x1D0F6, 'X'), (0x1D100, 'V'), (0x1D127, 'X'), (0x1D129, 'V'), 
(0x1D15E, 'M', u'𝅗𝅥'), (0x1D15F, 'M', u'𝅘𝅥'), (0x1D160, 'M', u'𝅘𝅥𝅮'), (0x1D161, 'M', u'𝅘𝅥𝅯'), (0x1D162, 'M', u'𝅘𝅥𝅰'), (0x1D163, 'M', u'𝅘𝅥𝅱'), (0x1D164, 'M', u'𝅘𝅥𝅲'), (0x1D165, 'V'), (0x1D173, 'X'), (0x1D17B, 'V'), (0x1D1BB, 'M', u'𝆹𝅥'), (0x1D1BC, 'M', u'𝆺𝅥'), (0x1D1BD, 'M', u'𝆹𝅥𝅮'), (0x1D1BE, 'M', u'𝆺𝅥𝅮'), (0x1D1BF, 'M', u'𝆹𝅥𝅯'), (0x1D1C0, 'M', u'𝆺𝅥𝅯'), (0x1D1C1, 'V'), (0x1D1E9, 'X'), (0x1D200, 'V'), ] def _seg_58(): return [ (0x1D246, 'X'), (0x1D2E0, 'V'), (0x1D2F4, 'X'), (0x1D300, 'V'), (0x1D357, 'X'), (0x1D360, 'V'), (0x1D379, 'X'), (0x1D400, 'M', u'a'), (0x1D401, 'M', u'b'), (0x1D402, 'M', u'c'), (0x1D403, 'M', u'd'), (0x1D404, 'M', u'e'), (0x1D405, 'M', u'f'), (0x1D406, 'M', u'g'), (0x1D407, 'M', u'h'), (0x1D408, 'M', u'i'), (0x1D409, 'M', u'j'), (0x1D40A, 'M', u'k'), (0x1D40B, 'M', u'l'), (0x1D40C, 'M', u'm'), (0x1D40D, 'M', u'n'), (0x1D40E, 'M', u'o'), (0x1D40F, 'M', u'p'), (0x1D410, 'M', u'q'), (0x1D411, 'M', u'r'), (0x1D412, 'M', u's'), (0x1D413, 'M', u't'), (0x1D414, 'M', u'u'), (0x1D415, 'M', u'v'), (0x1D416, 'M', u'w'), (0x1D417, 'M', u'x'), (0x1D418, 'M', u'y'), (0x1D419, 'M', u'z'), (0x1D41A, 'M', u'a'), (0x1D41B, 'M', u'b'), (0x1D41C, 'M', u'c'), (0x1D41D, 'M', u'd'), (0x1D41E, 'M', u'e'), (0x1D41F, 'M', u'f'), (0x1D420, 'M', u'g'), (0x1D421, 'M', u'h'), (0x1D422, 'M', u'i'), (0x1D423, 'M', u'j'), (0x1D424, 'M', u'k'), (0x1D425, 'M', u'l'), (0x1D426, 'M', u'm'), (0x1D427, 'M', u'n'), (0x1D428, 'M', u'o'), (0x1D429, 'M', u'p'), (0x1D42A, 'M', u'q'), (0x1D42B, 'M', u'r'), (0x1D42C, 'M', u's'), (0x1D42D, 'M', u't'), (0x1D42E, 'M', u'u'), (0x1D42F, 'M', u'v'), (0x1D430, 'M', u'w'), (0x1D431, 'M', u'x'), (0x1D432, 'M', u'y'), (0x1D433, 'M', u'z'), (0x1D434, 'M', u'a'), (0x1D435, 'M', u'b'), (0x1D436, 'M', u'c'), (0x1D437, 'M', u'd'), (0x1D438, 'M', u'e'), (0x1D439, 'M', u'f'), (0x1D43A, 'M', u'g'), (0x1D43B, 'M', u'h'), (0x1D43C, 'M', u'i'), (0x1D43D, 'M', u'j'), (0x1D43E, 'M', u'k'), (0x1D43F, 'M', u'l'), (0x1D440, 'M', u'm'), (0x1D441, 'M', u'n'), (0x1D442, 'M', u'o'), (0x1D443, 'M', u'p'), (0x1D444, 'M', u'q'), (0x1D445, 'M', u'r'), (0x1D446, 'M', u's'), (0x1D447, 'M', u't'), (0x1D448, 'M', u'u'), (0x1D449, 'M', u'v'), (0x1D44A, 'M', u'w'), (0x1D44B, 'M', u'x'), (0x1D44C, 'M', u'y'), (0x1D44D, 'M', u'z'), (0x1D44E, 'M', u'a'), (0x1D44F, 'M', u'b'), (0x1D450, 'M', u'c'), (0x1D451, 'M', u'd'), (0x1D452, 'M', u'e'), (0x1D453, 'M', u'f'), (0x1D454, 'M', u'g'), (0x1D455, 'X'), (0x1D456, 'M', u'i'), (0x1D457, 'M', u'j'), (0x1D458, 'M', u'k'), (0x1D459, 'M', u'l'), (0x1D45A, 'M', u'm'), (0x1D45B, 'M', u'n'), (0x1D45C, 'M', u'o'), ] def _seg_59(): return [ (0x1D45D, 'M', u'p'), (0x1D45E, 'M', u'q'), (0x1D45F, 'M', u'r'), (0x1D460, 'M', u's'), (0x1D461, 'M', u't'), (0x1D462, 'M', u'u'), (0x1D463, 'M', u'v'), (0x1D464, 'M', u'w'), (0x1D465, 'M', u'x'), (0x1D466, 'M', u'y'), (0x1D467, 'M', u'z'), (0x1D468, 'M', u'a'), (0x1D469, 'M', u'b'), (0x1D46A, 'M', u'c'), (0x1D46B, 'M', u'd'), (0x1D46C, 'M', u'e'), (0x1D46D, 'M', u'f'), (0x1D46E, 'M', u'g'), (0x1D46F, 'M', u'h'), (0x1D470, 'M', u'i'), (0x1D471, 'M', u'j'), (0x1D472, 'M', u'k'), (0x1D473, 'M', u'l'), (0x1D474, 'M', u'm'), (0x1D475, 'M', u'n'), (0x1D476, 'M', u'o'), (0x1D477, 'M', u'p'), (0x1D478, 'M', u'q'), (0x1D479, 'M', u'r'), (0x1D47A, 'M', u's'), (0x1D47B, 'M', u't'), (0x1D47C, 'M', u'u'), (0x1D47D, 'M', u'v'), (0x1D47E, 'M', u'w'), (0x1D47F, 'M', u'x'), (0x1D480, 'M', u'y'), (0x1D481, 'M', u'z'), (0x1D482, 'M', u'a'), (0x1D483, 'M', u'b'), (0x1D484, 'M', u'c'), (0x1D485, 'M', u'd'), (0x1D486, 'M', u'e'), (0x1D487, 'M', u'f'), 
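# The Mathematical Alphanumeric Symbols block (0x1D400-0x1D7FF) folds every
# styled letter and digit to its plain ASCII form: bold, italic, script,
# fraktur, double-struck, sans-serif and monospace variants of 'A'/'a' all
# map to u'a'. The isolated 'X' entries (e.g. 0x1D455) are unassigned holes
# left for letters encoded elsewhere, such as U+210E PLANCK CONSTANT.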
(0x1D488, 'M', u'g'), (0x1D489, 'M', u'h'), (0x1D48A, 'M', u'i'), (0x1D48B, 'M', u'j'), (0x1D48C, 'M', u'k'), (0x1D48D, 'M', u'l'), (0x1D48E, 'M', u'm'), (0x1D48F, 'M', u'n'), (0x1D490, 'M', u'o'), (0x1D491, 'M', u'p'), (0x1D492, 'M', u'q'), (0x1D493, 'M', u'r'), (0x1D494, 'M', u's'), (0x1D495, 'M', u't'), (0x1D496, 'M', u'u'), (0x1D497, 'M', u'v'), (0x1D498, 'M', u'w'), (0x1D499, 'M', u'x'), (0x1D49A, 'M', u'y'), (0x1D49B, 'M', u'z'), (0x1D49C, 'M', u'a'), (0x1D49D, 'X'), (0x1D49E, 'M', u'c'), (0x1D49F, 'M', u'd'), (0x1D4A0, 'X'), (0x1D4A2, 'M', u'g'), (0x1D4A3, 'X'), (0x1D4A5, 'M', u'j'), (0x1D4A6, 'M', u'k'), (0x1D4A7, 'X'), (0x1D4A9, 'M', u'n'), (0x1D4AA, 'M', u'o'), (0x1D4AB, 'M', u'p'), (0x1D4AC, 'M', u'q'), (0x1D4AD, 'X'), (0x1D4AE, 'M', u's'), (0x1D4AF, 'M', u't'), (0x1D4B0, 'M', u'u'), (0x1D4B1, 'M', u'v'), (0x1D4B2, 'M', u'w'), (0x1D4B3, 'M', u'x'), (0x1D4B4, 'M', u'y'), (0x1D4B5, 'M', u'z'), (0x1D4B6, 'M', u'a'), (0x1D4B7, 'M', u'b'), (0x1D4B8, 'M', u'c'), (0x1D4B9, 'M', u'd'), (0x1D4BA, 'X'), (0x1D4BB, 'M', u'f'), (0x1D4BC, 'X'), (0x1D4BD, 'M', u'h'), (0x1D4BE, 'M', u'i'), (0x1D4BF, 'M', u'j'), (0x1D4C0, 'M', u'k'), (0x1D4C1, 'M', u'l'), (0x1D4C2, 'M', u'm'), (0x1D4C3, 'M', u'n'), ] def _seg_60(): return [ (0x1D4C4, 'X'), (0x1D4C5, 'M', u'p'), (0x1D4C6, 'M', u'q'), (0x1D4C7, 'M', u'r'), (0x1D4C8, 'M', u's'), (0x1D4C9, 'M', u't'), (0x1D4CA, 'M', u'u'), (0x1D4CB, 'M', u'v'), (0x1D4CC, 'M', u'w'), (0x1D4CD, 'M', u'x'), (0x1D4CE, 'M', u'y'), (0x1D4CF, 'M', u'z'), (0x1D4D0, 'M', u'a'), (0x1D4D1, 'M', u'b'), (0x1D4D2, 'M', u'c'), (0x1D4D3, 'M', u'd'), (0x1D4D4, 'M', u'e'), (0x1D4D5, 'M', u'f'), (0x1D4D6, 'M', u'g'), (0x1D4D7, 'M', u'h'), (0x1D4D8, 'M', u'i'), (0x1D4D9, 'M', u'j'), (0x1D4DA, 'M', u'k'), (0x1D4DB, 'M', u'l'), (0x1D4DC, 'M', u'm'), (0x1D4DD, 'M', u'n'), (0x1D4DE, 'M', u'o'), (0x1D4DF, 'M', u'p'), (0x1D4E0, 'M', u'q'), (0x1D4E1, 'M', u'r'), (0x1D4E2, 'M', u's'), (0x1D4E3, 'M', u't'), (0x1D4E4, 'M', u'u'), (0x1D4E5, 'M', u'v'), (0x1D4E6, 'M', u'w'), (0x1D4E7, 'M', u'x'), (0x1D4E8, 'M', u'y'), (0x1D4E9, 'M', u'z'), (0x1D4EA, 'M', u'a'), (0x1D4EB, 'M', u'b'), (0x1D4EC, 'M', u'c'), (0x1D4ED, 'M', u'd'), (0x1D4EE, 'M', u'e'), (0x1D4EF, 'M', u'f'), (0x1D4F0, 'M', u'g'), (0x1D4F1, 'M', u'h'), (0x1D4F2, 'M', u'i'), (0x1D4F3, 'M', u'j'), (0x1D4F4, 'M', u'k'), (0x1D4F5, 'M', u'l'), (0x1D4F6, 'M', u'm'), (0x1D4F7, 'M', u'n'), (0x1D4F8, 'M', u'o'), (0x1D4F9, 'M', u'p'), (0x1D4FA, 'M', u'q'), (0x1D4FB, 'M', u'r'), (0x1D4FC, 'M', u's'), (0x1D4FD, 'M', u't'), (0x1D4FE, 'M', u'u'), (0x1D4FF, 'M', u'v'), (0x1D500, 'M', u'w'), (0x1D501, 'M', u'x'), (0x1D502, 'M', u'y'), (0x1D503, 'M', u'z'), (0x1D504, 'M', u'a'), (0x1D505, 'M', u'b'), (0x1D506, 'X'), (0x1D507, 'M', u'd'), (0x1D508, 'M', u'e'), (0x1D509, 'M', u'f'), (0x1D50A, 'M', u'g'), (0x1D50B, 'X'), (0x1D50D, 'M', u'j'), (0x1D50E, 'M', u'k'), (0x1D50F, 'M', u'l'), (0x1D510, 'M', u'm'), (0x1D511, 'M', u'n'), (0x1D512, 'M', u'o'), (0x1D513, 'M', u'p'), (0x1D514, 'M', u'q'), (0x1D515, 'X'), (0x1D516, 'M', u's'), (0x1D517, 'M', u't'), (0x1D518, 'M', u'u'), (0x1D519, 'M', u'v'), (0x1D51A, 'M', u'w'), (0x1D51B, 'M', u'x'), (0x1D51C, 'M', u'y'), (0x1D51D, 'X'), (0x1D51E, 'M', u'a'), (0x1D51F, 'M', u'b'), (0x1D520, 'M', u'c'), (0x1D521, 'M', u'd'), (0x1D522, 'M', u'e'), (0x1D523, 'M', u'f'), (0x1D524, 'M', u'g'), (0x1D525, 'M', u'h'), (0x1D526, 'M', u'i'), (0x1D527, 'M', u'j'), (0x1D528, 'M', u'k'), ] def _seg_61(): return [ (0x1D529, 'M', u'l'), (0x1D52A, 'M', u'm'), (0x1D52B, 'M', u'n'), (0x1D52C, 'M', u'o'), (0x1D52D, 'M', u'p'), (0x1D52E, 
'M', u'q'), (0x1D52F, 'M', u'r'), (0x1D530, 'M', u's'), (0x1D531, 'M', u't'), (0x1D532, 'M', u'u'), (0x1D533, 'M', u'v'), (0x1D534, 'M', u'w'), (0x1D535, 'M', u'x'), (0x1D536, 'M', u'y'), (0x1D537, 'M', u'z'), (0x1D538, 'M', u'a'), (0x1D539, 'M', u'b'), (0x1D53A, 'X'), (0x1D53B, 'M', u'd'), (0x1D53C, 'M', u'e'), (0x1D53D, 'M', u'f'), (0x1D53E, 'M', u'g'), (0x1D53F, 'X'), (0x1D540, 'M', u'i'), (0x1D541, 'M', u'j'), (0x1D542, 'M', u'k'), (0x1D543, 'M', u'l'), (0x1D544, 'M', u'm'), (0x1D545, 'X'), (0x1D546, 'M', u'o'), (0x1D547, 'X'), (0x1D54A, 'M', u's'), (0x1D54B, 'M', u't'), (0x1D54C, 'M', u'u'), (0x1D54D, 'M', u'v'), (0x1D54E, 'M', u'w'), (0x1D54F, 'M', u'x'), (0x1D550, 'M', u'y'), (0x1D551, 'X'), (0x1D552, 'M', u'a'), (0x1D553, 'M', u'b'), (0x1D554, 'M', u'c'), (0x1D555, 'M', u'd'), (0x1D556, 'M', u'e'), (0x1D557, 'M', u'f'), (0x1D558, 'M', u'g'), (0x1D559, 'M', u'h'), (0x1D55A, 'M', u'i'), (0x1D55B, 'M', u'j'), (0x1D55C, 'M', u'k'), (0x1D55D, 'M', u'l'), (0x1D55E, 'M', u'm'), (0x1D55F, 'M', u'n'), (0x1D560, 'M', u'o'), (0x1D561, 'M', u'p'), (0x1D562, 'M', u'q'), (0x1D563, 'M', u'r'), (0x1D564, 'M', u's'), (0x1D565, 'M', u't'), (0x1D566, 'M', u'u'), (0x1D567, 'M', u'v'), (0x1D568, 'M', u'w'), (0x1D569, 'M', u'x'), (0x1D56A, 'M', u'y'), (0x1D56B, 'M', u'z'), (0x1D56C, 'M', u'a'), (0x1D56D, 'M', u'b'), (0x1D56E, 'M', u'c'), (0x1D56F, 'M', u'd'), (0x1D570, 'M', u'e'), (0x1D571, 'M', u'f'), (0x1D572, 'M', u'g'), (0x1D573, 'M', u'h'), (0x1D574, 'M', u'i'), (0x1D575, 'M', u'j'), (0x1D576, 'M', u'k'), (0x1D577, 'M', u'l'), (0x1D578, 'M', u'm'), (0x1D579, 'M', u'n'), (0x1D57A, 'M', u'o'), (0x1D57B, 'M', u'p'), (0x1D57C, 'M', u'q'), (0x1D57D, 'M', u'r'), (0x1D57E, 'M', u's'), (0x1D57F, 'M', u't'), (0x1D580, 'M', u'u'), (0x1D581, 'M', u'v'), (0x1D582, 'M', u'w'), (0x1D583, 'M', u'x'), (0x1D584, 'M', u'y'), (0x1D585, 'M', u'z'), (0x1D586, 'M', u'a'), (0x1D587, 'M', u'b'), (0x1D588, 'M', u'c'), (0x1D589, 'M', u'd'), (0x1D58A, 'M', u'e'), (0x1D58B, 'M', u'f'), (0x1D58C, 'M', u'g'), (0x1D58D, 'M', u'h'), (0x1D58E, 'M', u'i'), ] def _seg_62(): return [ (0x1D58F, 'M', u'j'), (0x1D590, 'M', u'k'), (0x1D591, 'M', u'l'), (0x1D592, 'M', u'm'), (0x1D593, 'M', u'n'), (0x1D594, 'M', u'o'), (0x1D595, 'M', u'p'), (0x1D596, 'M', u'q'), (0x1D597, 'M', u'r'), (0x1D598, 'M', u's'), (0x1D599, 'M', u't'), (0x1D59A, 'M', u'u'), (0x1D59B, 'M', u'v'), (0x1D59C, 'M', u'w'), (0x1D59D, 'M', u'x'), (0x1D59E, 'M', u'y'), (0x1D59F, 'M', u'z'), (0x1D5A0, 'M', u'a'), (0x1D5A1, 'M', u'b'), (0x1D5A2, 'M', u'c'), (0x1D5A3, 'M', u'd'), (0x1D5A4, 'M', u'e'), (0x1D5A5, 'M', u'f'), (0x1D5A6, 'M', u'g'), (0x1D5A7, 'M', u'h'), (0x1D5A8, 'M', u'i'), (0x1D5A9, 'M', u'j'), (0x1D5AA, 'M', u'k'), (0x1D5AB, 'M', u'l'), (0x1D5AC, 'M', u'm'), (0x1D5AD, 'M', u'n'), (0x1D5AE, 'M', u'o'), (0x1D5AF, 'M', u'p'), (0x1D5B0, 'M', u'q'), (0x1D5B1, 'M', u'r'), (0x1D5B2, 'M', u's'), (0x1D5B3, 'M', u't'), (0x1D5B4, 'M', u'u'), (0x1D5B5, 'M', u'v'), (0x1D5B6, 'M', u'w'), (0x1D5B7, 'M', u'x'), (0x1D5B8, 'M', u'y'), (0x1D5B9, 'M', u'z'), (0x1D5BA, 'M', u'a'), (0x1D5BB, 'M', u'b'), (0x1D5BC, 'M', u'c'), (0x1D5BD, 'M', u'd'), (0x1D5BE, 'M', u'e'), (0x1D5BF, 'M', u'f'), (0x1D5C0, 'M', u'g'), (0x1D5C1, 'M', u'h'), (0x1D5C2, 'M', u'i'), (0x1D5C3, 'M', u'j'), (0x1D5C4, 'M', u'k'), (0x1D5C5, 'M', u'l'), (0x1D5C6, 'M', u'm'), (0x1D5C7, 'M', u'n'), (0x1D5C8, 'M', u'o'), (0x1D5C9, 'M', u'p'), (0x1D5CA, 'M', u'q'), (0x1D5CB, 'M', u'r'), (0x1D5CC, 'M', u's'), (0x1D5CD, 'M', u't'), (0x1D5CE, 'M', u'u'), (0x1D5CF, 'M', u'v'), (0x1D5D0, 'M', u'w'), (0x1D5D1, 'M', u'x'), 
(0x1D5D2, 'M', u'y'), (0x1D5D3, 'M', u'z'), (0x1D5D4, 'M', u'a'), (0x1D5D5, 'M', u'b'), (0x1D5D6, 'M', u'c'), (0x1D5D7, 'M', u'd'), (0x1D5D8, 'M', u'e'), (0x1D5D9, 'M', u'f'), (0x1D5DA, 'M', u'g'), (0x1D5DB, 'M', u'h'), (0x1D5DC, 'M', u'i'), (0x1D5DD, 'M', u'j'), (0x1D5DE, 'M', u'k'), (0x1D5DF, 'M', u'l'), (0x1D5E0, 'M', u'm'), (0x1D5E1, 'M', u'n'), (0x1D5E2, 'M', u'o'), (0x1D5E3, 'M', u'p'), (0x1D5E4, 'M', u'q'), (0x1D5E5, 'M', u'r'), (0x1D5E6, 'M', u's'), (0x1D5E7, 'M', u't'), (0x1D5E8, 'M', u'u'), (0x1D5E9, 'M', u'v'), (0x1D5EA, 'M', u'w'), (0x1D5EB, 'M', u'x'), (0x1D5EC, 'M', u'y'), (0x1D5ED, 'M', u'z'), (0x1D5EE, 'M', u'a'), (0x1D5EF, 'M', u'b'), (0x1D5F0, 'M', u'c'), (0x1D5F1, 'M', u'd'), (0x1D5F2, 'M', u'e'), ] def _seg_63(): return [ (0x1D5F3, 'M', u'f'), (0x1D5F4, 'M', u'g'), (0x1D5F5, 'M', u'h'), (0x1D5F6, 'M', u'i'), (0x1D5F7, 'M', u'j'), (0x1D5F8, 'M', u'k'), (0x1D5F9, 'M', u'l'), (0x1D5FA, 'M', u'm'), (0x1D5FB, 'M', u'n'), (0x1D5FC, 'M', u'o'), (0x1D5FD, 'M', u'p'), (0x1D5FE, 'M', u'q'), (0x1D5FF, 'M', u'r'), (0x1D600, 'M', u's'), (0x1D601, 'M', u't'), (0x1D602, 'M', u'u'), (0x1D603, 'M', u'v'), (0x1D604, 'M', u'w'), (0x1D605, 'M', u'x'), (0x1D606, 'M', u'y'), (0x1D607, 'M', u'z'), (0x1D608, 'M', u'a'), (0x1D609, 'M', u'b'), (0x1D60A, 'M', u'c'), (0x1D60B, 'M', u'd'), (0x1D60C, 'M', u'e'), (0x1D60D, 'M', u'f'), (0x1D60E, 'M', u'g'), (0x1D60F, 'M', u'h'), (0x1D610, 'M', u'i'), (0x1D611, 'M', u'j'), (0x1D612, 'M', u'k'), (0x1D613, 'M', u'l'), (0x1D614, 'M', u'm'), (0x1D615, 'M', u'n'), (0x1D616, 'M', u'o'), (0x1D617, 'M', u'p'), (0x1D618, 'M', u'q'), (0x1D619, 'M', u'r'), (0x1D61A, 'M', u's'), (0x1D61B, 'M', u't'), (0x1D61C, 'M', u'u'), (0x1D61D, 'M', u'v'), (0x1D61E, 'M', u'w'), (0x1D61F, 'M', u'x'), (0x1D620, 'M', u'y'), (0x1D621, 'M', u'z'), (0x1D622, 'M', u'a'), (0x1D623, 'M', u'b'), (0x1D624, 'M', u'c'), (0x1D625, 'M', u'd'), (0x1D626, 'M', u'e'), (0x1D627, 'M', u'f'), (0x1D628, 'M', u'g'), (0x1D629, 'M', u'h'), (0x1D62A, 'M', u'i'), (0x1D62B, 'M', u'j'), (0x1D62C, 'M', u'k'), (0x1D62D, 'M', u'l'), (0x1D62E, 'M', u'm'), (0x1D62F, 'M', u'n'), (0x1D630, 'M', u'o'), (0x1D631, 'M', u'p'), (0x1D632, 'M', u'q'), (0x1D633, 'M', u'r'), (0x1D634, 'M', u's'), (0x1D635, 'M', u't'), (0x1D636, 'M', u'u'), (0x1D637, 'M', u'v'), (0x1D638, 'M', u'w'), (0x1D639, 'M', u'x'), (0x1D63A, 'M', u'y'), (0x1D63B, 'M', u'z'), (0x1D63C, 'M', u'a'), (0x1D63D, 'M', u'b'), (0x1D63E, 'M', u'c'), (0x1D63F, 'M', u'd'), (0x1D640, 'M', u'e'), (0x1D641, 'M', u'f'), (0x1D642, 'M', u'g'), (0x1D643, 'M', u'h'), (0x1D644, 'M', u'i'), (0x1D645, 'M', u'j'), (0x1D646, 'M', u'k'), (0x1D647, 'M', u'l'), (0x1D648, 'M', u'm'), (0x1D649, 'M', u'n'), (0x1D64A, 'M', u'o'), (0x1D64B, 'M', u'p'), (0x1D64C, 'M', u'q'), (0x1D64D, 'M', u'r'), (0x1D64E, 'M', u's'), (0x1D64F, 'M', u't'), (0x1D650, 'M', u'u'), (0x1D651, 'M', u'v'), (0x1D652, 'M', u'w'), (0x1D653, 'M', u'x'), (0x1D654, 'M', u'y'), (0x1D655, 'M', u'z'), (0x1D656, 'M', u'a'), ] def _seg_64(): return [ (0x1D657, 'M', u'b'), (0x1D658, 'M', u'c'), (0x1D659, 'M', u'd'), (0x1D65A, 'M', u'e'), (0x1D65B, 'M', u'f'), (0x1D65C, 'M', u'g'), (0x1D65D, 'M', u'h'), (0x1D65E, 'M', u'i'), (0x1D65F, 'M', u'j'), (0x1D660, 'M', u'k'), (0x1D661, 'M', u'l'), (0x1D662, 'M', u'm'), (0x1D663, 'M', u'n'), (0x1D664, 'M', u'o'), (0x1D665, 'M', u'p'), (0x1D666, 'M', u'q'), (0x1D667, 'M', u'r'), (0x1D668, 'M', u's'), (0x1D669, 'M', u't'), (0x1D66A, 'M', u'u'), (0x1D66B, 'M', u'v'), (0x1D66C, 'M', u'w'), (0x1D66D, 'M', u'x'), (0x1D66E, 'M', u'y'), (0x1D66F, 'M', u'z'), (0x1D670, 'M', u'a'), 
(0x1D671, 'M', u'b'), (0x1D672, 'M', u'c'), (0x1D673, 'M', u'd'), (0x1D674, 'M', u'e'), (0x1D675, 'M', u'f'), (0x1D676, 'M', u'g'), (0x1D677, 'M', u'h'), (0x1D678, 'M', u'i'), (0x1D679, 'M', u'j'), (0x1D67A, 'M', u'k'), (0x1D67B, 'M', u'l'), (0x1D67C, 'M', u'm'), (0x1D67D, 'M', u'n'), (0x1D67E, 'M', u'o'), (0x1D67F, 'M', u'p'), (0x1D680, 'M', u'q'), (0x1D681, 'M', u'r'), (0x1D682, 'M', u's'), (0x1D683, 'M', u't'), (0x1D684, 'M', u'u'), (0x1D685, 'M', u'v'), (0x1D686, 'M', u'w'), (0x1D687, 'M', u'x'), (0x1D688, 'M', u'y'), (0x1D689, 'M', u'z'), (0x1D68A, 'M', u'a'), (0x1D68B, 'M', u'b'), (0x1D68C, 'M', u'c'), (0x1D68D, 'M', u'd'), (0x1D68E, 'M', u'e'), (0x1D68F, 'M', u'f'), (0x1D690, 'M', u'g'), (0x1D691, 'M', u'h'), (0x1D692, 'M', u'i'), (0x1D693, 'M', u'j'), (0x1D694, 'M', u'k'), (0x1D695, 'M', u'l'), (0x1D696, 'M', u'm'), (0x1D697, 'M', u'n'), (0x1D698, 'M', u'o'), (0x1D699, 'M', u'p'), (0x1D69A, 'M', u'q'), (0x1D69B, 'M', u'r'), (0x1D69C, 'M', u's'), (0x1D69D, 'M', u't'), (0x1D69E, 'M', u'u'), (0x1D69F, 'M', u'v'), (0x1D6A0, 'M', u'w'), (0x1D6A1, 'M', u'x'), (0x1D6A2, 'M', u'y'), (0x1D6A3, 'M', u'z'), (0x1D6A4, 'M', u'ı'), (0x1D6A5, 'M', u'ȷ'), (0x1D6A6, 'X'), (0x1D6A8, 'M', u'α'), (0x1D6A9, 'M', u'β'), (0x1D6AA, 'M', u'γ'), (0x1D6AB, 'M', u'δ'), (0x1D6AC, 'M', u'ε'), (0x1D6AD, 'M', u'ζ'), (0x1D6AE, 'M', u'η'), (0x1D6AF, 'M', u'θ'), (0x1D6B0, 'M', u'ι'), (0x1D6B1, 'M', u'κ'), (0x1D6B2, 'M', u'λ'), (0x1D6B3, 'M', u'μ'), (0x1D6B4, 'M', u'ν'), (0x1D6B5, 'M', u'ξ'), (0x1D6B6, 'M', u'ο'), (0x1D6B7, 'M', u'π'), (0x1D6B8, 'M', u'ρ'), (0x1D6B9, 'M', u'θ'), (0x1D6BA, 'M', u'σ'), (0x1D6BB, 'M', u'τ'), ] def _seg_65(): return [ (0x1D6BC, 'M', u'υ'), (0x1D6BD, 'M', u'φ'), (0x1D6BE, 'M', u'χ'), (0x1D6BF, 'M', u'ψ'), (0x1D6C0, 'M', u'ω'), (0x1D6C1, 'M', u'∇'), (0x1D6C2, 'M', u'α'), (0x1D6C3, 'M', u'β'), (0x1D6C4, 'M', u'γ'), (0x1D6C5, 'M', u'δ'), (0x1D6C6, 'M', u'ε'), (0x1D6C7, 'M', u'ζ'), (0x1D6C8, 'M', u'η'), (0x1D6C9, 'M', u'θ'), (0x1D6CA, 'M', u'ι'), (0x1D6CB, 'M', u'κ'), (0x1D6CC, 'M', u'λ'), (0x1D6CD, 'M', u'μ'), (0x1D6CE, 'M', u'ν'), (0x1D6CF, 'M', u'ξ'), (0x1D6D0, 'M', u'ο'), (0x1D6D1, 'M', u'π'), (0x1D6D2, 'M', u'ρ'), (0x1D6D3, 'M', u'σ'), (0x1D6D5, 'M', u'τ'), (0x1D6D6, 'M', u'υ'), (0x1D6D7, 'M', u'φ'), (0x1D6D8, 'M', u'χ'), (0x1D6D9, 'M', u'ψ'), (0x1D6DA, 'M', u'ω'), (0x1D6DB, 'M', u'∂'), (0x1D6DC, 'M', u'ε'), (0x1D6DD, 'M', u'θ'), (0x1D6DE, 'M', u'κ'), (0x1D6DF, 'M', u'φ'), (0x1D6E0, 'M', u'ρ'), (0x1D6E1, 'M', u'π'), (0x1D6E2, 'M', u'α'), (0x1D6E3, 'M', u'β'), (0x1D6E4, 'M', u'γ'), (0x1D6E5, 'M', u'δ'), (0x1D6E6, 'M', u'ε'), (0x1D6E7, 'M', u'ζ'), (0x1D6E8, 'M', u'η'), (0x1D6E9, 'M', u'θ'), (0x1D6EA, 'M', u'ι'), (0x1D6EB, 'M', u'κ'), (0x1D6EC, 'M', u'λ'), (0x1D6ED, 'M', u'μ'), (0x1D6EE, 'M', u'ν'), (0x1D6EF, 'M', u'ξ'), (0x1D6F0, 'M', u'ο'), (0x1D6F1, 'M', u'π'), (0x1D6F2, 'M', u'ρ'), (0x1D6F3, 'M', u'θ'), (0x1D6F4, 'M', u'σ'), (0x1D6F5, 'M', u'τ'), (0x1D6F6, 'M', u'υ'), (0x1D6F7, 'M', u'φ'), (0x1D6F8, 'M', u'χ'), (0x1D6F9, 'M', u'ψ'), (0x1D6FA, 'M', u'ω'), (0x1D6FB, 'M', u'∇'), (0x1D6FC, 'M', u'α'), (0x1D6FD, 'M', u'β'), (0x1D6FE, 'M', u'γ'), (0x1D6FF, 'M', u'δ'), (0x1D700, 'M', u'ε'), (0x1D701, 'M', u'ζ'), (0x1D702, 'M', u'η'), (0x1D703, 'M', u'θ'), (0x1D704, 'M', u'ι'), (0x1D705, 'M', u'κ'), (0x1D706, 'M', u'λ'), (0x1D707, 'M', u'μ'), (0x1D708, 'M', u'ν'), (0x1D709, 'M', u'ξ'), (0x1D70A, 'M', u'ο'), (0x1D70B, 'M', u'π'), (0x1D70C, 'M', u'ρ'), (0x1D70D, 'M', u'σ'), (0x1D70F, 'M', u'τ'), (0x1D710, 'M', u'υ'), (0x1D711, 'M', u'φ'), (0x1D712, 'M', u'χ'), (0x1D713, 'M', u'ψ'), (0x1D714, 'M', 
u'ω'), (0x1D715, 'M', u'∂'), (0x1D716, 'M', u'ε'), (0x1D717, 'M', u'θ'), (0x1D718, 'M', u'κ'), (0x1D719, 'M', u'φ'), (0x1D71A, 'M', u'ρ'), (0x1D71B, 'M', u'π'), (0x1D71C, 'M', u'α'), (0x1D71D, 'M', u'β'), (0x1D71E, 'M', u'γ'), (0x1D71F, 'M', u'δ'), (0x1D720, 'M', u'ε'), (0x1D721, 'M', u'ζ'), ] def _seg_66(): return [ (0x1D722, 'M', u'η'), (0x1D723, 'M', u'θ'), (0x1D724, 'M', u'ι'), (0x1D725, 'M', u'κ'), (0x1D726, 'M', u'λ'), (0x1D727, 'M', u'μ'), (0x1D728, 'M', u'ν'), (0x1D729, 'M', u'ξ'), (0x1D72A, 'M', u'ο'), (0x1D72B, 'M', u'π'), (0x1D72C, 'M', u'ρ'), (0x1D72D, 'M', u'θ'), (0x1D72E, 'M', u'σ'), (0x1D72F, 'M', u'τ'), (0x1D730, 'M', u'υ'), (0x1D731, 'M', u'φ'), (0x1D732, 'M', u'χ'), (0x1D733, 'M', u'ψ'), (0x1D734, 'M', u'ω'), (0x1D735, 'M', u'∇'), (0x1D736, 'M', u'α'), (0x1D737, 'M', u'β'), (0x1D738, 'M', u'γ'), (0x1D739, 'M', u'δ'), (0x1D73A, 'M', u'ε'), (0x1D73B, 'M', u'ζ'), (0x1D73C, 'M', u'η'), (0x1D73D, 'M', u'θ'), (0x1D73E, 'M', u'ι'), (0x1D73F, 'M', u'κ'), (0x1D740, 'M', u'λ'), (0x1D741, 'M', u'μ'), (0x1D742, 'M', u'ν'), (0x1D743, 'M', u'ξ'), (0x1D744, 'M', u'ο'), (0x1D745, 'M', u'π'), (0x1D746, 'M', u'ρ'), (0x1D747, 'M', u'σ'), (0x1D749, 'M', u'τ'), (0x1D74A, 'M', u'υ'), (0x1D74B, 'M', u'φ'), (0x1D74C, 'M', u'χ'), (0x1D74D, 'M', u'ψ'), (0x1D74E, 'M', u'ω'), (0x1D74F, 'M', u'∂'), (0x1D750, 'M', u'ε'), (0x1D751, 'M', u'θ'), (0x1D752, 'M', u'κ'), (0x1D753, 'M', u'φ'), (0x1D754, 'M', u'ρ'), (0x1D755, 'M', u'π'), (0x1D756, 'M', u'α'), (0x1D757, 'M', u'β'), (0x1D758, 'M', u'γ'), (0x1D759, 'M', u'δ'), (0x1D75A, 'M', u'ε'), (0x1D75B, 'M', u'ζ'), (0x1D75C, 'M', u'η'), (0x1D75D, 'M', u'θ'), (0x1D75E, 'M', u'ι'), (0x1D75F, 'M', u'κ'), (0x1D760, 'M', u'λ'), (0x1D761, 'M', u'μ'), (0x1D762, 'M', u'ν'), (0x1D763, 'M', u'ξ'), (0x1D764, 'M', u'ο'), (0x1D765, 'M', u'π'), (0x1D766, 'M', u'ρ'), (0x1D767, 'M', u'θ'), (0x1D768, 'M', u'σ'), (0x1D769, 'M', u'τ'), (0x1D76A, 'M', u'υ'), (0x1D76B, 'M', u'φ'), (0x1D76C, 'M', u'χ'), (0x1D76D, 'M', u'ψ'), (0x1D76E, 'M', u'ω'), (0x1D76F, 'M', u'∇'), (0x1D770, 'M', u'α'), (0x1D771, 'M', u'β'), (0x1D772, 'M', u'γ'), (0x1D773, 'M', u'δ'), (0x1D774, 'M', u'ε'), (0x1D775, 'M', u'ζ'), (0x1D776, 'M', u'η'), (0x1D777, 'M', u'θ'), (0x1D778, 'M', u'ι'), (0x1D779, 'M', u'κ'), (0x1D77A, 'M', u'λ'), (0x1D77B, 'M', u'μ'), (0x1D77C, 'M', u'ν'), (0x1D77D, 'M', u'ξ'), (0x1D77E, 'M', u'ο'), (0x1D77F, 'M', u'π'), (0x1D780, 'M', u'ρ'), (0x1D781, 'M', u'σ'), (0x1D783, 'M', u'τ'), (0x1D784, 'M', u'υ'), (0x1D785, 'M', u'φ'), (0x1D786, 'M', u'χ'), (0x1D787, 'M', u'ψ'), ] def _seg_67(): return [ (0x1D788, 'M', u'ω'), (0x1D789, 'M', u'∂'), (0x1D78A, 'M', u'ε'), (0x1D78B, 'M', u'θ'), (0x1D78C, 'M', u'κ'), (0x1D78D, 'M', u'φ'), (0x1D78E, 'M', u'ρ'), (0x1D78F, 'M', u'π'), (0x1D790, 'M', u'α'), (0x1D791, 'M', u'β'), (0x1D792, 'M', u'γ'), (0x1D793, 'M', u'δ'), (0x1D794, 'M', u'ε'), (0x1D795, 'M', u'ζ'), (0x1D796, 'M', u'η'), (0x1D797, 'M', u'θ'), (0x1D798, 'M', u'ι'), (0x1D799, 'M', u'κ'), (0x1D79A, 'M', u'λ'), (0x1D79B, 'M', u'μ'), (0x1D79C, 'M', u'ν'), (0x1D79D, 'M', u'ξ'), (0x1D79E, 'M', u'ο'), (0x1D79F, 'M', u'π'), (0x1D7A0, 'M', u'ρ'), (0x1D7A1, 'M', u'θ'), (0x1D7A2, 'M', u'σ'), (0x1D7A3, 'M', u'τ'), (0x1D7A4, 'M', u'υ'), (0x1D7A5, 'M', u'φ'), (0x1D7A6, 'M', u'χ'), (0x1D7A7, 'M', u'ψ'), (0x1D7A8, 'M', u'ω'), (0x1D7A9, 'M', u'∇'), (0x1D7AA, 'M', u'α'), (0x1D7AB, 'M', u'β'), (0x1D7AC, 'M', u'γ'), (0x1D7AD, 'M', u'δ'), (0x1D7AE, 'M', u'ε'), (0x1D7AF, 'M', u'ζ'), (0x1D7B0, 'M', u'η'), (0x1D7B1, 'M', u'θ'), (0x1D7B2, 'M', u'ι'), (0x1D7B3, 'M', u'κ'), (0x1D7B4, 'M', u'λ'), (0x1D7B5, 'M', 
u'μ'), (0x1D7B6, 'M', u'ν'), (0x1D7B7, 'M', u'ξ'), (0x1D7B8, 'M', u'ο'), (0x1D7B9, 'M', u'π'), (0x1D7BA, 'M', u'ρ'), (0x1D7BB, 'M', u'σ'), (0x1D7BD, 'M', u'τ'), (0x1D7BE, 'M', u'υ'), (0x1D7BF, 'M', u'φ'), (0x1D7C0, 'M', u'χ'), (0x1D7C1, 'M', u'ψ'), (0x1D7C2, 'M', u'ω'), (0x1D7C3, 'M', u'∂'), (0x1D7C4, 'M', u'ε'), (0x1D7C5, 'M', u'θ'), (0x1D7C6, 'M', u'κ'), (0x1D7C7, 'M', u'φ'), (0x1D7C8, 'M', u'ρ'), (0x1D7C9, 'M', u'π'), (0x1D7CA, 'M', u'ϝ'), (0x1D7CC, 'X'), (0x1D7CE, 'M', u'0'), (0x1D7CF, 'M', u'1'), (0x1D7D0, 'M', u'2'), (0x1D7D1, 'M', u'3'), (0x1D7D2, 'M', u'4'), (0x1D7D3, 'M', u'5'), (0x1D7D4, 'M', u'6'), (0x1D7D5, 'M', u'7'), (0x1D7D6, 'M', u'8'), (0x1D7D7, 'M', u'9'), (0x1D7D8, 'M', u'0'), (0x1D7D9, 'M', u'1'), (0x1D7DA, 'M', u'2'), (0x1D7DB, 'M', u'3'), (0x1D7DC, 'M', u'4'), (0x1D7DD, 'M', u'5'), (0x1D7DE, 'M', u'6'), (0x1D7DF, 'M', u'7'), (0x1D7E0, 'M', u'8'), (0x1D7E1, 'M', u'9'), (0x1D7E2, 'M', u'0'), (0x1D7E3, 'M', u'1'), (0x1D7E4, 'M', u'2'), (0x1D7E5, 'M', u'3'), (0x1D7E6, 'M', u'4'), (0x1D7E7, 'M', u'5'), (0x1D7E8, 'M', u'6'), (0x1D7E9, 'M', u'7'), (0x1D7EA, 'M', u'8'), (0x1D7EB, 'M', u'9'), (0x1D7EC, 'M', u'0'), (0x1D7ED, 'M', u'1'), (0x1D7EE, 'M', u'2'), ] def _seg_68(): return [ (0x1D7EF, 'M', u'3'), (0x1D7F0, 'M', u'4'), (0x1D7F1, 'M', u'5'), (0x1D7F2, 'M', u'6'), (0x1D7F3, 'M', u'7'), (0x1D7F4, 'M', u'8'), (0x1D7F5, 'M', u'9'), (0x1D7F6, 'M', u'0'), (0x1D7F7, 'M', u'1'), (0x1D7F8, 'M', u'2'), (0x1D7F9, 'M', u'3'), (0x1D7FA, 'M', u'4'), (0x1D7FB, 'M', u'5'), (0x1D7FC, 'M', u'6'), (0x1D7FD, 'M', u'7'), (0x1D7FE, 'M', u'8'), (0x1D7FF, 'M', u'9'), (0x1D800, 'V'), (0x1DA8C, 'X'), (0x1DA9B, 'V'), (0x1DAA0, 'X'), (0x1DAA1, 'V'), (0x1DAB0, 'X'), (0x1E000, 'V'), (0x1E007, 'X'), (0x1E008, 'V'), (0x1E019, 'X'), (0x1E01B, 'V'), (0x1E022, 'X'), (0x1E023, 'V'), (0x1E025, 'X'), (0x1E026, 'V'), (0x1E02B, 'X'), (0x1E800, 'V'), (0x1E8C5, 'X'), (0x1E8C7, 'V'), (0x1E8D7, 'X'), (0x1E900, 'M', u'𞤢'), (0x1E901, 'M', u'𞤣'), (0x1E902, 'M', u'𞤤'), (0x1E903, 'M', u'𞤥'), (0x1E904, 'M', u'𞤦'), (0x1E905, 'M', u'𞤧'), (0x1E906, 'M', u'𞤨'), (0x1E907, 'M', u'𞤩'), (0x1E908, 'M', u'𞤪'), (0x1E909, 'M', u'𞤫'), (0x1E90A, 'M', u'𞤬'), (0x1E90B, 'M', u'𞤭'), (0x1E90C, 'M', u'𞤮'), (0x1E90D, 'M', u'𞤯'), (0x1E90E, 'M', u'𞤰'), (0x1E90F, 'M', u'𞤱'), (0x1E910, 'M', u'𞤲'), (0x1E911, 'M', u'𞤳'), (0x1E912, 'M', u'𞤴'), (0x1E913, 'M', u'𞤵'), (0x1E914, 'M', u'𞤶'), (0x1E915, 'M', u'𞤷'), (0x1E916, 'M', u'𞤸'), (0x1E917, 'M', u'𞤹'), (0x1E918, 'M', u'𞤺'), (0x1E919, 'M', u'𞤻'), (0x1E91A, 'M', u'𞤼'), (0x1E91B, 'M', u'𞤽'), (0x1E91C, 'M', u'𞤾'), (0x1E91D, 'M', u'𞤿'), (0x1E91E, 'M', u'𞥀'), (0x1E91F, 'M', u'𞥁'), (0x1E920, 'M', u'𞥂'), (0x1E921, 'M', u'𞥃'), (0x1E922, 'V'), (0x1E94B, 'X'), (0x1E950, 'V'), (0x1E95A, 'X'), (0x1E95E, 'V'), (0x1E960, 'X'), (0x1EC71, 'V'), (0x1ECB5, 'X'), (0x1EE00, 'M', u'ا'), (0x1EE01, 'M', u'ب'), (0x1EE02, 'M', u'ج'), (0x1EE03, 'M', u'د'), (0x1EE04, 'X'), (0x1EE05, 'M', u'و'), (0x1EE06, 'M', u'ز'), (0x1EE07, 'M', u'ح'), (0x1EE08, 'M', u'ط'), (0x1EE09, 'M', u'ي'), (0x1EE0A, 'M', u'ك'), (0x1EE0B, 'M', u'ل'), (0x1EE0C, 'M', u'م'), (0x1EE0D, 'M', u'ن'), (0x1EE0E, 'M', u'س'), (0x1EE0F, 'M', u'ع'), (0x1EE10, 'M', u'ف'), (0x1EE11, 'M', u'ص'), (0x1EE12, 'M', u'ق'), (0x1EE13, 'M', u'ر'), (0x1EE14, 'M', u'ش'), ] def _seg_69(): return [ (0x1EE15, 'M', u'ت'), (0x1EE16, 'M', u'ث'), (0x1EE17, 'M', u'خ'), (0x1EE18, 'M', u'ذ'), (0x1EE19, 'M', u'ض'), (0x1EE1A, 'M', u'ظ'), (0x1EE1B, 'M', u'غ'), (0x1EE1C, 'M', u'ٮ'), (0x1EE1D, 'M', u'ں'), (0x1EE1E, 'M', u'ڡ'), (0x1EE1F, 'M', u'ٯ'), (0x1EE20, 'X'), (0x1EE21, 'M', u'ب'), 
(0x1EE22, 'M', u'ج'), (0x1EE23, 'X'), (0x1EE24, 'M', u'ه'), (0x1EE25, 'X'), (0x1EE27, 'M', u'ح'), (0x1EE28, 'X'), (0x1EE29, 'M', u'ي'), (0x1EE2A, 'M', u'ك'), (0x1EE2B, 'M', u'ل'), (0x1EE2C, 'M', u'م'), (0x1EE2D, 'M', u'ن'), (0x1EE2E, 'M', u'س'), (0x1EE2F, 'M', u'ع'), (0x1EE30, 'M', u'ف'), (0x1EE31, 'M', u'ص'), (0x1EE32, 'M', u'ق'), (0x1EE33, 'X'), (0x1EE34, 'M', u'ش'), (0x1EE35, 'M', u'ت'), (0x1EE36, 'M', u'ث'), (0x1EE37, 'M', u'خ'), (0x1EE38, 'X'), (0x1EE39, 'M', u'ض'), (0x1EE3A, 'X'), (0x1EE3B, 'M', u'غ'), (0x1EE3C, 'X'), (0x1EE42, 'M', u'ج'), (0x1EE43, 'X'), (0x1EE47, 'M', u'ح'), (0x1EE48, 'X'), (0x1EE49, 'M', u'ي'), (0x1EE4A, 'X'), (0x1EE4B, 'M', u'ل'), (0x1EE4C, 'X'), (0x1EE4D, 'M', u'ن'), (0x1EE4E, 'M', u'س'), (0x1EE4F, 'M', u'ع'), (0x1EE50, 'X'), (0x1EE51, 'M', u'ص'), (0x1EE52, 'M', u'ق'), (0x1EE53, 'X'), (0x1EE54, 'M', u'ش'), (0x1EE55, 'X'), (0x1EE57, 'M', u'خ'), (0x1EE58, 'X'), (0x1EE59, 'M', u'ض'), (0x1EE5A, 'X'), (0x1EE5B, 'M', u'غ'), (0x1EE5C, 'X'), (0x1EE5D, 'M', u'ں'), (0x1EE5E, 'X'), (0x1EE5F, 'M', u'ٯ'), (0x1EE60, 'X'), (0x1EE61, 'M', u'ب'), (0x1EE62, 'M', u'ج'), (0x1EE63, 'X'), (0x1EE64, 'M', u'ه'), (0x1EE65, 'X'), (0x1EE67, 'M', u'ح'), (0x1EE68, 'M', u'ط'), (0x1EE69, 'M', u'ي'), (0x1EE6A, 'M', u'ك'), (0x1EE6B, 'X'), (0x1EE6C, 'M', u'م'), (0x1EE6D, 'M', u'ن'), (0x1EE6E, 'M', u'س'), (0x1EE6F, 'M', u'ع'), (0x1EE70, 'M', u'ف'), (0x1EE71, 'M', u'ص'), (0x1EE72, 'M', u'ق'), (0x1EE73, 'X'), (0x1EE74, 'M', u'ش'), (0x1EE75, 'M', u'ت'), (0x1EE76, 'M', u'ث'), (0x1EE77, 'M', u'خ'), (0x1EE78, 'X'), (0x1EE79, 'M', u'ض'), (0x1EE7A, 'M', u'ظ'), (0x1EE7B, 'M', u'غ'), (0x1EE7C, 'M', u'ٮ'), (0x1EE7D, 'X'), (0x1EE7E, 'M', u'ڡ'), (0x1EE7F, 'X'), (0x1EE80, 'M', u'ا'), (0x1EE81, 'M', u'ب'), (0x1EE82, 'M', u'ج'), (0x1EE83, 'M', u'د'), ] def _seg_70(): return [ (0x1EE84, 'M', u'ه'), (0x1EE85, 'M', u'و'), (0x1EE86, 'M', u'ز'), (0x1EE87, 'M', u'ح'), (0x1EE88, 'M', u'ط'), (0x1EE89, 'M', u'ي'), (0x1EE8A, 'X'), (0x1EE8B, 'M', u'ل'), (0x1EE8C, 'M', u'م'), (0x1EE8D, 'M', u'ن'), (0x1EE8E, 'M', u'س'), (0x1EE8F, 'M', u'ع'), (0x1EE90, 'M', u'ف'), (0x1EE91, 'M', u'ص'), (0x1EE92, 'M', u'ق'), (0x1EE93, 'M', u'ر'), (0x1EE94, 'M', u'ش'), (0x1EE95, 'M', u'ت'), (0x1EE96, 'M', u'ث'), (0x1EE97, 'M', u'خ'), (0x1EE98, 'M', u'ذ'), (0x1EE99, 'M', u'ض'), (0x1EE9A, 'M', u'ظ'), (0x1EE9B, 'M', u'غ'), (0x1EE9C, 'X'), (0x1EEA1, 'M', u'ب'), (0x1EEA2, 'M', u'ج'), (0x1EEA3, 'M', u'د'), (0x1EEA4, 'X'), (0x1EEA5, 'M', u'و'), (0x1EEA6, 'M', u'ز'), (0x1EEA7, 'M', u'ح'), (0x1EEA8, 'M', u'ط'), (0x1EEA9, 'M', u'ي'), (0x1EEAA, 'X'), (0x1EEAB, 'M', u'ل'), (0x1EEAC, 'M', u'م'), (0x1EEAD, 'M', u'ن'), (0x1EEAE, 'M', u'س'), (0x1EEAF, 'M', u'ع'), (0x1EEB0, 'M', u'ف'), (0x1EEB1, 'M', u'ص'), (0x1EEB2, 'M', u'ق'), (0x1EEB3, 'M', u'ر'), (0x1EEB4, 'M', u'ش'), (0x1EEB5, 'M', u'ت'), (0x1EEB6, 'M', u'ث'), (0x1EEB7, 'M', u'خ'), (0x1EEB8, 'M', u'ذ'), (0x1EEB9, 'M', u'ض'), (0x1EEBA, 'M', u'ظ'), (0x1EEBB, 'M', u'غ'), (0x1EEBC, 'X'), (0x1EEF0, 'V'), (0x1EEF2, 'X'), (0x1F000, 'V'), (0x1F02C, 'X'), (0x1F030, 'V'), (0x1F094, 'X'), (0x1F0A0, 'V'), (0x1F0AF, 'X'), (0x1F0B1, 'V'), (0x1F0C0, 'X'), (0x1F0C1, 'V'), (0x1F0D0, 'X'), (0x1F0D1, 'V'), (0x1F0F6, 'X'), (0x1F101, '3', u'0,'), (0x1F102, '3', u'1,'), (0x1F103, '3', u'2,'), (0x1F104, '3', u'3,'), (0x1F105, '3', u'4,'), (0x1F106, '3', u'5,'), (0x1F107, '3', u'6,'), (0x1F108, '3', u'7,'), (0x1F109, '3', u'8,'), (0x1F10A, '3', u'9,'), (0x1F10B, 'V'), (0x1F10D, 'X'), (0x1F110, '3', u'(a)'), (0x1F111, '3', u'(b)'), (0x1F112, '3', u'(c)'), (0x1F113, '3', u'(d)'), (0x1F114, '3', u'(e)'), (0x1F115, '3', u'(f)'), 
(0x1F116, '3', u'(g)'), (0x1F117, '3', u'(h)'), (0x1F118, '3', u'(i)'), (0x1F119, '3', u'(j)'), (0x1F11A, '3', u'(k)'), (0x1F11B, '3', u'(l)'), (0x1F11C, '3', u'(m)'), (0x1F11D, '3', u'(n)'), (0x1F11E, '3', u'(o)'), (0x1F11F, '3', u'(p)'), (0x1F120, '3', u'(q)'), (0x1F121, '3', u'(r)'), (0x1F122, '3', u'(s)'), (0x1F123, '3', u'(t)'), (0x1F124, '3', u'(u)'), ] def _seg_71(): return [ (0x1F125, '3', u'(v)'), (0x1F126, '3', u'(w)'), (0x1F127, '3', u'(x)'), (0x1F128, '3', u'(y)'), (0x1F129, '3', u'(z)'), (0x1F12A, 'M', u'〔s〕'), (0x1F12B, 'M', u'c'), (0x1F12C, 'M', u'r'), (0x1F12D, 'M', u'cd'), (0x1F12E, 'M', u'wz'), (0x1F12F, 'V'), (0x1F130, 'M', u'a'), (0x1F131, 'M', u'b'), (0x1F132, 'M', u'c'), (0x1F133, 'M', u'd'), (0x1F134, 'M', u'e'), (0x1F135, 'M', u'f'), (0x1F136, 'M', u'g'), (0x1F137, 'M', u'h'), (0x1F138, 'M', u'i'), (0x1F139, 'M', u'j'), (0x1F13A, 'M', u'k'), (0x1F13B, 'M', u'l'), (0x1F13C, 'M', u'm'), (0x1F13D, 'M', u'n'), (0x1F13E, 'M', u'o'), (0x1F13F, 'M', u'p'), (0x1F140, 'M', u'q'), (0x1F141, 'M', u'r'), (0x1F142, 'M', u's'), (0x1F143, 'M', u't'), (0x1F144, 'M', u'u'), (0x1F145, 'M', u'v'), (0x1F146, 'M', u'w'), (0x1F147, 'M', u'x'), (0x1F148, 'M', u'y'), (0x1F149, 'M', u'z'), (0x1F14A, 'M', u'hv'), (0x1F14B, 'M', u'mv'), (0x1F14C, 'M', u'sd'), (0x1F14D, 'M', u'ss'), (0x1F14E, 'M', u'ppv'), (0x1F14F, 'M', u'wc'), (0x1F150, 'V'), (0x1F16A, 'M', u'mc'), (0x1F16B, 'M', u'md'), (0x1F16C, 'X'), (0x1F170, 'V'), (0x1F190, 'M', u'dj'), (0x1F191, 'V'), (0x1F1AD, 'X'), (0x1F1E6, 'V'), (0x1F200, 'M', u'ほか'), (0x1F201, 'M', u'ココ'), (0x1F202, 'M', u'サ'), (0x1F203, 'X'), (0x1F210, 'M', u'手'), (0x1F211, 'M', u'字'), (0x1F212, 'M', u'双'), (0x1F213, 'M', u'デ'), (0x1F214, 'M', u'二'), (0x1F215, 'M', u'多'), (0x1F216, 'M', u'解'), (0x1F217, 'M', u'天'), (0x1F218, 'M', u'交'), (0x1F219, 'M', u'映'), (0x1F21A, 'M', u'無'), (0x1F21B, 'M', u'料'), (0x1F21C, 'M', u'前'), (0x1F21D, 'M', u'後'), (0x1F21E, 'M', u'再'), (0x1F21F, 'M', u'新'), (0x1F220, 'M', u'初'), (0x1F221, 'M', u'終'), (0x1F222, 'M', u'生'), (0x1F223, 'M', u'販'), (0x1F224, 'M', u'声'), (0x1F225, 'M', u'吹'), (0x1F226, 'M', u'演'), (0x1F227, 'M', u'投'), (0x1F228, 'M', u'捕'), (0x1F229, 'M', u'一'), (0x1F22A, 'M', u'三'), (0x1F22B, 'M', u'遊'), (0x1F22C, 'M', u'左'), (0x1F22D, 'M', u'中'), (0x1F22E, 'M', u'右'), (0x1F22F, 'M', u'指'), (0x1F230, 'M', u'走'), (0x1F231, 'M', u'打'), (0x1F232, 'M', u'禁'), (0x1F233, 'M', u'空'), (0x1F234, 'M', u'合'), (0x1F235, 'M', u'満'), (0x1F236, 'M', u'有'), (0x1F237, 'M', u'月'), (0x1F238, 'M', u'申'), (0x1F239, 'M', u'割'), (0x1F23A, 'M', u'営'), (0x1F23B, 'M', u'配'), ] def _seg_72(): return [ (0x1F23C, 'X'), (0x1F240, 'M', u'〔本〕'), (0x1F241, 'M', u'〔三〕'), (0x1F242, 'M', u'〔二〕'), (0x1F243, 'M', u'〔安〕'), (0x1F244, 'M', u'〔点〕'), (0x1F245, 'M', u'〔打〕'), (0x1F246, 'M', u'〔盗〕'), (0x1F247, 'M', u'〔勝〕'), (0x1F248, 'M', u'〔敗〕'), (0x1F249, 'X'), (0x1F250, 'M', u'得'), (0x1F251, 'M', u'可'), (0x1F252, 'X'), (0x1F260, 'V'), (0x1F266, 'X'), (0x1F300, 'V'), (0x1F6D5, 'X'), (0x1F6E0, 'V'), (0x1F6ED, 'X'), (0x1F6F0, 'V'), (0x1F6FA, 'X'), (0x1F700, 'V'), (0x1F774, 'X'), (0x1F780, 'V'), (0x1F7D9, 'X'), (0x1F800, 'V'), (0x1F80C, 'X'), (0x1F810, 'V'), (0x1F848, 'X'), (0x1F850, 'V'), (0x1F85A, 'X'), (0x1F860, 'V'), (0x1F888, 'X'), (0x1F890, 'V'), (0x1F8AE, 'X'), (0x1F900, 'V'), (0x1F90C, 'X'), (0x1F910, 'V'), (0x1F93F, 'X'), (0x1F940, 'V'), (0x1F971, 'X'), (0x1F973, 'V'), (0x1F977, 'X'), (0x1F97A, 'V'), (0x1F97B, 'X'), (0x1F97C, 'V'), (0x1F9A3, 'X'), (0x1F9B0, 'V'), (0x1F9BA, 'X'), (0x1F9C0, 'V'), (0x1F9C3, 'X'), (0x1F9D0, 'V'), (0x1FA00, 'X'), (0x1FA60, 'V'), 
(0x1FA6E, 'X'), (0x20000, 'V'), (0x2A6D7, 'X'), (0x2A700, 'V'), (0x2B735, 'X'), (0x2B740, 'V'), (0x2B81E, 'X'), (0x2B820, 'V'), (0x2CEA2, 'X'), (0x2CEB0, 'V'), (0x2EBE1, 'X'), (0x2F800, 'M', u'丽'), (0x2F801, 'M', u'丸'), (0x2F802, 'M', u'乁'), (0x2F803, 'M', u'𠄢'), (0x2F804, 'M', u'你'), (0x2F805, 'M', u'侮'), (0x2F806, 'M', u'侻'), (0x2F807, 'M', u'倂'), (0x2F808, 'M', u'偺'), (0x2F809, 'M', u'備'), (0x2F80A, 'M', u'僧'), (0x2F80B, 'M', u'像'), (0x2F80C, 'M', u'㒞'), (0x2F80D, 'M', u'𠘺'), (0x2F80E, 'M', u'免'), (0x2F80F, 'M', u'兔'), (0x2F810, 'M', u'兤'), (0x2F811, 'M', u'具'), (0x2F812, 'M', u'𠔜'), (0x2F813, 'M', u'㒹'), (0x2F814, 'M', u'內'), (0x2F815, 'M', u'再'), (0x2F816, 'M', u'𠕋'), (0x2F817, 'M', u'冗'), (0x2F818, 'M', u'冤'), (0x2F819, 'M', u'仌'), (0x2F81A, 'M', u'冬'), (0x2F81B, 'M', u'况'), (0x2F81C, 'M', u'𩇟'), (0x2F81D, 'M', u'凵'), (0x2F81E, 'M', u'刃'), (0x2F81F, 'M', u'㓟'), (0x2F820, 'M', u'刻'), (0x2F821, 'M', u'剆'), ] def _seg_73(): return [ (0x2F822, 'M', u'割'), (0x2F823, 'M', u'剷'), (0x2F824, 'M', u'㔕'), (0x2F825, 'M', u'勇'), (0x2F826, 'M', u'勉'), (0x2F827, 'M', u'勤'), (0x2F828, 'M', u'勺'), (0x2F829, 'M', u'包'), (0x2F82A, 'M', u'匆'), (0x2F82B, 'M', u'北'), (0x2F82C, 'M', u'卉'), (0x2F82D, 'M', u'卑'), (0x2F82E, 'M', u'博'), (0x2F82F, 'M', u'即'), (0x2F830, 'M', u'卽'), (0x2F831, 'M', u'卿'), (0x2F834, 'M', u'𠨬'), (0x2F835, 'M', u'灰'), (0x2F836, 'M', u'及'), (0x2F837, 'M', u'叟'), (0x2F838, 'M', u'𠭣'), (0x2F839, 'M', u'叫'), (0x2F83A, 'M', u'叱'), (0x2F83B, 'M', u'吆'), (0x2F83C, 'M', u'咞'), (0x2F83D, 'M', u'吸'), (0x2F83E, 'M', u'呈'), (0x2F83F, 'M', u'周'), (0x2F840, 'M', u'咢'), (0x2F841, 'M', u'哶'), (0x2F842, 'M', u'唐'), (0x2F843, 'M', u'啓'), (0x2F844, 'M', u'啣'), (0x2F845, 'M', u'善'), (0x2F847, 'M', u'喙'), (0x2F848, 'M', u'喫'), (0x2F849, 'M', u'喳'), (0x2F84A, 'M', u'嗂'), (0x2F84B, 'M', u'圖'), (0x2F84C, 'M', u'嘆'), (0x2F84D, 'M', u'圗'), (0x2F84E, 'M', u'噑'), (0x2F84F, 'M', u'噴'), (0x2F850, 'M', u'切'), (0x2F851, 'M', u'壮'), (0x2F852, 'M', u'城'), (0x2F853, 'M', u'埴'), (0x2F854, 'M', u'堍'), (0x2F855, 'M', u'型'), (0x2F856, 'M', u'堲'), (0x2F857, 'M', u'報'), (0x2F858, 'M', u'墬'), (0x2F859, 'M', u'𡓤'), (0x2F85A, 'M', u'売'), (0x2F85B, 'M', u'壷'), (0x2F85C, 'M', u'夆'), (0x2F85D, 'M', u'多'), (0x2F85E, 'M', u'夢'), (0x2F85F, 'M', u'奢'), (0x2F860, 'M', u'𡚨'), (0x2F861, 'M', u'𡛪'), (0x2F862, 'M', u'姬'), (0x2F863, 'M', u'娛'), (0x2F864, 'M', u'娧'), (0x2F865, 'M', u'姘'), (0x2F866, 'M', u'婦'), (0x2F867, 'M', u'㛮'), (0x2F868, 'X'), (0x2F869, 'M', u'嬈'), (0x2F86A, 'M', u'嬾'), (0x2F86C, 'M', u'𡧈'), (0x2F86D, 'M', u'寃'), (0x2F86E, 'M', u'寘'), (0x2F86F, 'M', u'寧'), (0x2F870, 'M', u'寳'), (0x2F871, 'M', u'𡬘'), (0x2F872, 'M', u'寿'), (0x2F873, 'M', u'将'), (0x2F874, 'X'), (0x2F875, 'M', u'尢'), (0x2F876, 'M', u'㞁'), (0x2F877, 'M', u'屠'), (0x2F878, 'M', u'屮'), (0x2F879, 'M', u'峀'), (0x2F87A, 'M', u'岍'), (0x2F87B, 'M', u'𡷤'), (0x2F87C, 'M', u'嵃'), (0x2F87D, 'M', u'𡷦'), (0x2F87E, 'M', u'嵮'), (0x2F87F, 'M', u'嵫'), (0x2F880, 'M', u'嵼'), (0x2F881, 'M', u'巡'), (0x2F882, 'M', u'巢'), (0x2F883, 'M', u'㠯'), (0x2F884, 'M', u'巽'), (0x2F885, 'M', u'帨'), (0x2F886, 'M', u'帽'), (0x2F887, 'M', u'幩'), (0x2F888, 'M', u'㡢'), (0x2F889, 'M', u'𢆃'), ] def _seg_74(): return [ (0x2F88A, 'M', u'㡼'), (0x2F88B, 'M', u'庰'), (0x2F88C, 'M', u'庳'), (0x2F88D, 'M', u'庶'), (0x2F88E, 'M', u'廊'), (0x2F88F, 'M', u'𪎒'), (0x2F890, 'M', u'廾'), (0x2F891, 'M', u'𢌱'), (0x2F893, 'M', u'舁'), (0x2F894, 'M', u'弢'), (0x2F896, 'M', u'㣇'), (0x2F897, 'M', u'𣊸'), (0x2F898, 'M', u'𦇚'), (0x2F899, 'M', u'形'), (0x2F89A, 'M', u'彫'), (0x2F89B, 'M', u'㣣'), (0x2F89C, 'M', u'徚'), (0x2F89D, 'M', 
u'忍'), (0x2F89E, 'M', u'志'), (0x2F89F, 'M', u'忹'), (0x2F8A0, 'M', u'悁'), (0x2F8A1, 'M', u'㤺'), (0x2F8A2, 'M', u'㤜'), (0x2F8A3, 'M', u'悔'), (0x2F8A4, 'M', u'𢛔'), (0x2F8A5, 'M', u'惇'), (0x2F8A6, 'M', u'慈'), (0x2F8A7, 'M', u'慌'), (0x2F8A8, 'M', u'慎'), (0x2F8A9, 'M', u'慌'), (0x2F8AA, 'M', u'慺'), (0x2F8AB, 'M', u'憎'), (0x2F8AC, 'M', u'憲'), (0x2F8AD, 'M', u'憤'), (0x2F8AE, 'M', u'憯'), (0x2F8AF, 'M', u'懞'), (0x2F8B0, 'M', u'懲'), (0x2F8B1, 'M', u'懶'), (0x2F8B2, 'M', u'成'), (0x2F8B3, 'M', u'戛'), (0x2F8B4, 'M', u'扝'), (0x2F8B5, 'M', u'抱'), (0x2F8B6, 'M', u'拔'), (0x2F8B7, 'M', u'捐'), (0x2F8B8, 'M', u'𢬌'), (0x2F8B9, 'M', u'挽'), (0x2F8BA, 'M', u'拼'), (0x2F8BB, 'M', u'捨'), (0x2F8BC, 'M', u'掃'), (0x2F8BD, 'M', u'揤'), (0x2F8BE, 'M', u'𢯱'), (0x2F8BF, 'M', u'搢'), (0x2F8C0, 'M', u'揅'), (0x2F8C1, 'M', u'掩'), (0x2F8C2, 'M', u'㨮'), (0x2F8C3, 'M', u'摩'), (0x2F8C4, 'M', u'摾'), (0x2F8C5, 'M', u'撝'), (0x2F8C6, 'M', u'摷'), (0x2F8C7, 'M', u'㩬'), (0x2F8C8, 'M', u'敏'), (0x2F8C9, 'M', u'敬'), (0x2F8CA, 'M', u'𣀊'), (0x2F8CB, 'M', u'旣'), (0x2F8CC, 'M', u'書'), (0x2F8CD, 'M', u'晉'), (0x2F8CE, 'M', u'㬙'), (0x2F8CF, 'M', u'暑'), (0x2F8D0, 'M', u'㬈'), (0x2F8D1, 'M', u'㫤'), (0x2F8D2, 'M', u'冒'), (0x2F8D3, 'M', u'冕'), (0x2F8D4, 'M', u'最'), (0x2F8D5, 'M', u'暜'), (0x2F8D6, 'M', u'肭'), (0x2F8D7, 'M', u'䏙'), (0x2F8D8, 'M', u'朗'), (0x2F8D9, 'M', u'望'), (0x2F8DA, 'M', u'朡'), (0x2F8DB, 'M', u'杞'), (0x2F8DC, 'M', u'杓'), (0x2F8DD, 'M', u'𣏃'), (0x2F8DE, 'M', u'㭉'), (0x2F8DF, 'M', u'柺'), (0x2F8E0, 'M', u'枅'), (0x2F8E1, 'M', u'桒'), (0x2F8E2, 'M', u'梅'), (0x2F8E3, 'M', u'𣑭'), (0x2F8E4, 'M', u'梎'), (0x2F8E5, 'M', u'栟'), (0x2F8E6, 'M', u'椔'), (0x2F8E7, 'M', u'㮝'), (0x2F8E8, 'M', u'楂'), (0x2F8E9, 'M', u'榣'), (0x2F8EA, 'M', u'槪'), (0x2F8EB, 'M', u'檨'), (0x2F8EC, 'M', u'𣚣'), (0x2F8ED, 'M', u'櫛'), (0x2F8EE, 'M', u'㰘'), (0x2F8EF, 'M', u'次'), ] def _seg_75(): return [ (0x2F8F0, 'M', u'𣢧'), (0x2F8F1, 'M', u'歔'), (0x2F8F2, 'M', u'㱎'), (0x2F8F3, 'M', u'歲'), (0x2F8F4, 'M', u'殟'), (0x2F8F5, 'M', u'殺'), (0x2F8F6, 'M', u'殻'), (0x2F8F7, 'M', u'𣪍'), (0x2F8F8, 'M', u'𡴋'), (0x2F8F9, 'M', u'𣫺'), (0x2F8FA, 'M', u'汎'), (0x2F8FB, 'M', u'𣲼'), (0x2F8FC, 'M', u'沿'), (0x2F8FD, 'M', u'泍'), (0x2F8FE, 'M', u'汧'), (0x2F8FF, 'M', u'洖'), (0x2F900, 'M', u'派'), (0x2F901, 'M', u'海'), (0x2F902, 'M', u'流'), (0x2F903, 'M', u'浩'), (0x2F904, 'M', u'浸'), (0x2F905, 'M', u'涅'), (0x2F906, 'M', u'𣴞'), (0x2F907, 'M', u'洴'), (0x2F908, 'M', u'港'), (0x2F909, 'M', u'湮'), (0x2F90A, 'M', u'㴳'), (0x2F90B, 'M', u'滋'), (0x2F90C, 'M', u'滇'), (0x2F90D, 'M', u'𣻑'), (0x2F90E, 'M', u'淹'), (0x2F90F, 'M', u'潮'), (0x2F910, 'M', u'𣽞'), (0x2F911, 'M', u'𣾎'), (0x2F912, 'M', u'濆'), (0x2F913, 'M', u'瀹'), (0x2F914, 'M', u'瀞'), (0x2F915, 'M', u'瀛'), (0x2F916, 'M', u'㶖'), (0x2F917, 'M', u'灊'), (0x2F918, 'M', u'災'), (0x2F919, 'M', u'灷'), (0x2F91A, 'M', u'炭'), (0x2F91B, 'M', u'𠔥'), (0x2F91C, 'M', u'煅'), (0x2F91D, 'M', u'𤉣'), (0x2F91E, 'M', u'熜'), (0x2F91F, 'X'), (0x2F920, 'M', u'爨'), (0x2F921, 'M', u'爵'), (0x2F922, 'M', u'牐'), (0x2F923, 'M', u'𤘈'), (0x2F924, 'M', u'犀'), (0x2F925, 'M', u'犕'), (0x2F926, 'M', u'𤜵'), (0x2F927, 'M', u'𤠔'), (0x2F928, 'M', u'獺'), (0x2F929, 'M', u'王'), (0x2F92A, 'M', u'㺬'), (0x2F92B, 'M', u'玥'), (0x2F92C, 'M', u'㺸'), (0x2F92E, 'M', u'瑇'), (0x2F92F, 'M', u'瑜'), (0x2F930, 'M', u'瑱'), (0x2F931, 'M', u'璅'), (0x2F932, 'M', u'瓊'), (0x2F933, 'M', u'㼛'), (0x2F934, 'M', u'甤'), (0x2F935, 'M', u'𤰶'), (0x2F936, 'M', u'甾'), (0x2F937, 'M', u'𤲒'), (0x2F938, 'M', u'異'), (0x2F939, 'M', u'𢆟'), (0x2F93A, 'M', u'瘐'), (0x2F93B, 'M', u'𤾡'), (0x2F93C, 'M', u'𤾸'), (0x2F93D, 'M', u'𥁄'), (0x2F93E, 'M', u'㿼'), 
(0x2F93F, 'M', u'䀈'), (0x2F940, 'M', u'直'), (0x2F941, 'M', u'𥃳'), (0x2F942, 'M', u'𥃲'), (0x2F943, 'M', u'𥄙'), (0x2F944, 'M', u'𥄳'), (0x2F945, 'M', u'眞'), (0x2F946, 'M', u'真'), (0x2F948, 'M', u'睊'), (0x2F949, 'M', u'䀹'), (0x2F94A, 'M', u'瞋'), (0x2F94B, 'M', u'䁆'), (0x2F94C, 'M', u'䂖'), (0x2F94D, 'M', u'𥐝'), (0x2F94E, 'M', u'硎'), (0x2F94F, 'M', u'碌'), (0x2F950, 'M', u'磌'), (0x2F951, 'M', u'䃣'), (0x2F952, 'M', u'𥘦'), (0x2F953, 'M', u'祖'), (0x2F954, 'M', u'𥚚'), (0x2F955, 'M', u'𥛅'), ] def _seg_76(): return [ (0x2F956, 'M', u'福'), (0x2F957, 'M', u'秫'), (0x2F958, 'M', u'䄯'), (0x2F959, 'M', u'穀'), (0x2F95A, 'M', u'穊'), (0x2F95B, 'M', u'穏'), (0x2F95C, 'M', u'𥥼'), (0x2F95D, 'M', u'𥪧'), (0x2F95F, 'X'), (0x2F960, 'M', u'䈂'), (0x2F961, 'M', u'𥮫'), (0x2F962, 'M', u'篆'), (0x2F963, 'M', u'築'), (0x2F964, 'M', u'䈧'), (0x2F965, 'M', u'𥲀'), (0x2F966, 'M', u'糒'), (0x2F967, 'M', u'䊠'), (0x2F968, 'M', u'糨'), (0x2F969, 'M', u'糣'), (0x2F96A, 'M', u'紀'), (0x2F96B, 'M', u'𥾆'), (0x2F96C, 'M', u'絣'), (0x2F96D, 'M', u'䌁'), (0x2F96E, 'M', u'緇'), (0x2F96F, 'M', u'縂'), (0x2F970, 'M', u'繅'), (0x2F971, 'M', u'䌴'), (0x2F972, 'M', u'𦈨'), (0x2F973, 'M', u'𦉇'), (0x2F974, 'M', u'䍙'), (0x2F975, 'M', u'𦋙'), (0x2F976, 'M', u'罺'), (0x2F977, 'M', u'𦌾'), (0x2F978, 'M', u'羕'), (0x2F979, 'M', u'翺'), (0x2F97A, 'M', u'者'), (0x2F97B, 'M', u'𦓚'), (0x2F97C, 'M', u'𦔣'), (0x2F97D, 'M', u'聠'), (0x2F97E, 'M', u'𦖨'), (0x2F97F, 'M', u'聰'), (0x2F980, 'M', u'𣍟'), (0x2F981, 'M', u'䏕'), (0x2F982, 'M', u'育'), (0x2F983, 'M', u'脃'), (0x2F984, 'M', u'䐋'), (0x2F985, 'M', u'脾'), (0x2F986, 'M', u'媵'), (0x2F987, 'M', u'𦞧'), (0x2F988, 'M', u'𦞵'), (0x2F989, 'M', u'𣎓'), (0x2F98A, 'M', u'𣎜'), (0x2F98B, 'M', u'舁'), (0x2F98C, 'M', u'舄'), (0x2F98D, 'M', u'辞'), (0x2F98E, 'M', u'䑫'), (0x2F98F, 'M', u'芑'), (0x2F990, 'M', u'芋'), (0x2F991, 'M', u'芝'), (0x2F992, 'M', u'劳'), (0x2F993, 'M', u'花'), (0x2F994, 'M', u'芳'), (0x2F995, 'M', u'芽'), (0x2F996, 'M', u'苦'), (0x2F997, 'M', u'𦬼'), (0x2F998, 'M', u'若'), (0x2F999, 'M', u'茝'), (0x2F99A, 'M', u'荣'), (0x2F99B, 'M', u'莭'), (0x2F99C, 'M', u'茣'), (0x2F99D, 'M', u'莽'), (0x2F99E, 'M', u'菧'), (0x2F99F, 'M', u'著'), (0x2F9A0, 'M', u'荓'), (0x2F9A1, 'M', u'菊'), (0x2F9A2, 'M', u'菌'), (0x2F9A3, 'M', u'菜'), (0x2F9A4, 'M', u'𦰶'), (0x2F9A5, 'M', u'𦵫'), (0x2F9A6, 'M', u'𦳕'), (0x2F9A7, 'M', u'䔫'), (0x2F9A8, 'M', u'蓱'), (0x2F9A9, 'M', u'蓳'), (0x2F9AA, 'M', u'蔖'), (0x2F9AB, 'M', u'𧏊'), (0x2F9AC, 'M', u'蕤'), (0x2F9AD, 'M', u'𦼬'), (0x2F9AE, 'M', u'䕝'), (0x2F9AF, 'M', u'䕡'), (0x2F9B0, 'M', u'𦾱'), (0x2F9B1, 'M', u'𧃒'), (0x2F9B2, 'M', u'䕫'), (0x2F9B3, 'M', u'虐'), (0x2F9B4, 'M', u'虜'), (0x2F9B5, 'M', u'虧'), (0x2F9B6, 'M', u'虩'), (0x2F9B7, 'M', u'蚩'), (0x2F9B8, 'M', u'蚈'), (0x2F9B9, 'M', u'蜎'), (0x2F9BA, 'M', u'蛢'), ] def _seg_77(): return [ (0x2F9BB, 'M', u'蝹'), (0x2F9BC, 'M', u'蜨'), (0x2F9BD, 'M', u'蝫'), (0x2F9BE, 'M', u'螆'), (0x2F9BF, 'X'), (0x2F9C0, 'M', u'蟡'), (0x2F9C1, 'M', u'蠁'), (0x2F9C2, 'M', u'䗹'), (0x2F9C3, 'M', u'衠'), (0x2F9C4, 'M', u'衣'), (0x2F9C5, 'M', u'𧙧'), (0x2F9C6, 'M', u'裗'), (0x2F9C7, 'M', u'裞'), (0x2F9C8, 'M', u'䘵'), (0x2F9C9, 'M', u'裺'), (0x2F9CA, 'M', u'㒻'), (0x2F9CB, 'M', u'𧢮'), (0x2F9CC, 'M', u'𧥦'), (0x2F9CD, 'M', u'䚾'), (0x2F9CE, 'M', u'䛇'), (0x2F9CF, 'M', u'誠'), (0x2F9D0, 'M', u'諭'), (0x2F9D1, 'M', u'變'), (0x2F9D2, 'M', u'豕'), (0x2F9D3, 'M', u'𧲨'), (0x2F9D4, 'M', u'貫'), (0x2F9D5, 'M', u'賁'), (0x2F9D6, 'M', u'贛'), (0x2F9D7, 'M', u'起'), (0x2F9D8, 'M', u'𧼯'), (0x2F9D9, 'M', u'𠠄'), (0x2F9DA, 'M', u'跋'), (0x2F9DB, 'M', u'趼'), (0x2F9DC, 'M', u'跰'), (0x2F9DD, 'M', u'𠣞'), (0x2F9DE, 'M', u'軔'), (0x2F9DF, 'M', u'輸'), (0x2F9E0, 'M', 
u'𨗒'), (0x2F9E1, 'M', u'𨗭'), (0x2F9E2, 'M', u'邔'), (0x2F9E3, 'M', u'郱'), (0x2F9E4, 'M', u'鄑'), (0x2F9E5, 'M', u'𨜮'), (0x2F9E6, 'M', u'鄛'), (0x2F9E7, 'M', u'鈸'), (0x2F9E8, 'M', u'鋗'), (0x2F9E9, 'M', u'鋘'), (0x2F9EA, 'M', u'鉼'), (0x2F9EB, 'M', u'鏹'), (0x2F9EC, 'M', u'鐕'), (0x2F9ED, 'M', u'𨯺'), (0x2F9EE, 'M', u'開'), (0x2F9EF, 'M', u'䦕'), (0x2F9F0, 'M', u'閷'), (0x2F9F1, 'M', u'𨵷'), (0x2F9F2, 'M', u'䧦'), (0x2F9F3, 'M', u'雃'), (0x2F9F4, 'M', u'嶲'), (0x2F9F5, 'M', u'霣'), (0x2F9F6, 'M', u'𩅅'), (0x2F9F7, 'M', u'𩈚'), (0x2F9F8, 'M', u'䩮'), (0x2F9F9, 'M', u'䩶'), (0x2F9FA, 'M', u'韠'), (0x2F9FB, 'M', u'𩐊'), (0x2F9FC, 'M', u'䪲'), (0x2F9FD, 'M', u'𩒖'), (0x2F9FE, 'M', u'頋'), (0x2FA00, 'M', u'頩'), (0x2FA01, 'M', u'𩖶'), (0x2FA02, 'M', u'飢'), (0x2FA03, 'M', u'䬳'), (0x2FA04, 'M', u'餩'), (0x2FA05, 'M', u'馧'), (0x2FA06, 'M', u'駂'), (0x2FA07, 'M', u'駾'), (0x2FA08, 'M', u'䯎'), (0x2FA09, 'M', u'𩬰'), (0x2FA0A, 'M', u'鬒'), (0x2FA0B, 'M', u'鱀'), (0x2FA0C, 'M', u'鳽'), (0x2FA0D, 'M', u'䳎'), (0x2FA0E, 'M', u'䳭'), (0x2FA0F, 'M', u'鵧'), (0x2FA10, 'M', u'𪃎'), (0x2FA11, 'M', u'䳸'), (0x2FA12, 'M', u'𪄅'), (0x2FA13, 'M', u'𪈎'), (0x2FA14, 'M', u'𪊑'), (0x2FA15, 'M', u'麻'), (0x2FA16, 'M', u'䵖'), (0x2FA17, 'M', u'黹'), (0x2FA18, 'M', u'黾'), (0x2FA19, 'M', u'鼅'), (0x2FA1A, 'M', u'鼏'), (0x2FA1B, 'M', u'鼖'), (0x2FA1C, 'M', u'鼻'), (0x2FA1D, 'M', u'𪘀'), (0x2FA1E, 'X'), (0xE0100, 'I'), ] def _seg_78(): return [ (0xE01F0, 'X'), ] uts46data = tuple( _seg_0() + _seg_1() + _seg_2() + _seg_3() + _seg_4() + _seg_5() + _seg_6() + _seg_7() + _seg_8() + _seg_9() + _seg_10() + _seg_11() + _seg_12() + _seg_13() + _seg_14() + _seg_15() + _seg_16() + _seg_17() + _seg_18() + _seg_19() + _seg_20() + _seg_21() + _seg_22() + _seg_23() + _seg_24() + _seg_25() + _seg_26() + _seg_27() + _seg_28() + _seg_29() + _seg_30() + _seg_31() + _seg_32() + _seg_33() + _seg_34() + _seg_35() + _seg_36() + _seg_37() + _seg_38() + _seg_39() + _seg_40() + _seg_41() + _seg_42() + _seg_43() + _seg_44() + _seg_45() + _seg_46() + _seg_47() + _seg_48() + _seg_49() + _seg_50() + _seg_51() + _seg_52() + _seg_53() + _seg_54() + _seg_55() + _seg_56() + _seg_57() + _seg_58() + _seg_59() + _seg_60() + _seg_61() + _seg_62() + _seg_63() + _seg_64() + _seg_65() + _seg_66() + _seg_67() + _seg_68() + _seg_69() + _seg_70() + _seg_71() + _seg_72() + _seg_73() + _seg_74() + _seg_75() + _seg_76() + _seg_77() + _seg_78() )
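# Usage sketch (illustrative, not part of the vendored file): the table
# assembled above is a tuple of (codepoint, status[, mapping]) rows sorted
# by codepoint, where each row applies to every codepoint up to the next
# row's start. A lookup is therefore a bisect on the first tuple element;
# the helper name `lookup_uts46` below is hypothetical.
import bisect

def lookup_uts46(code_point, table):
    """Return the (codepoint, status[, mapping]) row covering code_point."""
    # (code_point, 'Z') sorts after every real row that starts at
    # code_point (the status codes seen above are '3', 'I', 'M', 'V', 'X',
    # all < 'Z'), so bisect_left - 1 lands on the last row whose start
    # is <= code_point.
    return table[bisect.bisect_left(table, (code_point, 'Z')) - 1]

# e.g. lookup_uts46(0x1D7CE, uts46data) == (0x1D7CE, 'M', u'0')
#      (mathematical bold digit zero is mapped to plain '0')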
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/ipaddress.py
# Copyright 2007 Google Inc. # Licensed to PSF under a Contributor Agreement. """A fast, lightweight IPv4/IPv6 manipulation library in Python. This library is used to create/poke/manipulate IPv4 and IPv6 addresses and networks. """ from __future__ import unicode_literals import itertools import struct __version__ = '1.0.22' # Compatibility functions _compat_int_types = (int,) try: _compat_int_types = (int, long) except NameError: pass try: _compat_str = unicode except NameError: _compat_str = str assert bytes != str if b'\0'[0] == 0: # Python 3 semantics def _compat_bytes_to_byte_vals(byt): return byt else: def _compat_bytes_to_byte_vals(byt): return [struct.unpack(b'!B', b)[0] for b in byt] try: _compat_int_from_byte_vals = int.from_bytes except AttributeError: def _compat_int_from_byte_vals(bytvals, endianess): assert endianess == 'big' res = 0 for bv in bytvals: assert isinstance(bv, _compat_int_types) res = (res << 8) + bv return res def _compat_to_bytes(intval, length, endianess): assert isinstance(intval, _compat_int_types) assert endianess == 'big' if length == 4: if intval < 0 or intval >= 2 ** 32: raise struct.error("integer out of range for 'I' format code") return struct.pack(b'!I', intval) elif length == 16: if intval < 0 or intval >= 2 ** 128: raise struct.error("integer out of range for 'QQ' format code") return struct.pack(b'!QQ', intval >> 64, intval & 0xffffffffffffffff) else: raise NotImplementedError() if hasattr(int, 'bit_length'): # Not int.bit_length , since that won't work in 2.7 where long exists def _compat_bit_length(i): return i.bit_length() else: def _compat_bit_length(i): for res in itertools.count(): if i >> res == 0: return res def _compat_range(start, end, step=1): assert step > 0 i = start while i < end: yield i i += step class _TotalOrderingMixin(object): __slots__ = () # Helper that derives the other comparison operations from # __lt__ and __eq__ # We avoid functools.total_ordering because it doesn't handle # NotImplemented correctly yet (http://bugs.python.org/issue10042) def __eq__(self, other): raise NotImplementedError def __ne__(self, other): equal = self.__eq__(other) if equal is NotImplemented: return NotImplemented return not equal def __lt__(self, other): raise NotImplementedError def __le__(self, other): less = self.__lt__(other) if less is NotImplemented or not less: return self.__eq__(other) return less def __gt__(self, other): less = self.__lt__(other) if less is NotImplemented: return NotImplemented equal = self.__eq__(other) if equal is NotImplemented:
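# (Illustrative note.) Subclasses supply __eq__ and __lt__; this mixin
# derives the remaining comparisons from those two, e.g. a > b is
# computed as ``not (a < b or a == b)``, and a NotImplemented result
# from either primitive is passed through unchanged rather than being
# coerced to a bool -- the behaviour the comment above says
# functools.total_ordering lacked at the time.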
return NotImplemented return not (less or equal) def __ge__(self, other): less = self.__lt__(other) if less is NotImplemented: return NotImplemented return not less IPV4LENGTH = 32 IPV6LENGTH = 128 class AddressValueError(ValueError): """A Value Error related to the address.""" class NetmaskValueError(ValueError): """A Value Error related to the netmask.""" def ip_address(address): """Take an IP string/int and return an object of the correct type. Args: address: A string or integer, the IP address. Either IPv4 or IPv6 addresses may be supplied; integers less than 2**32 will be considered to be IPv4 by default. Returns: An IPv4Address or IPv6Address object. Raises: ValueError: if the *address* passed isn't either a v4 or a v6 address """ try: return IPv4Address(address) except (AddressValueError, NetmaskValueError): pass try: return IPv6Address(address) except (AddressValueError, NetmaskValueError): pass if isinstance(address, bytes): raise AddressValueError( '%r does not appear to be an IPv4 or IPv6 address. ' 'Did you pass in a bytes (str in Python 2) instead of' ' a unicode object?' % address) raise ValueError('%r does not appear to be an IPv4 or IPv6 address' % address) def ip_network(address, strict=True): """Take an IP string/int and return an object of the correct type. Args: address: A string or integer, the IP network. Either IPv4 or IPv6 networks may be supplied; integers less than 2**32 will be considered to be IPv4 by default. Returns: An IPv4Network or IPv6Network object. Raises: ValueError: if the string passed isn't either a v4 or a v6 address. Or if the network has host bits set. """ try: return IPv4Network(address, strict) except (AddressValueError, NetmaskValueError): pass try: return IPv6Network(address, strict) except (AddressValueError, NetmaskValueError): pass if isinstance(address, bytes): raise AddressValueError( '%r does not appear to be an IPv4 or IPv6 network. ' 'Did you pass in a bytes (str in Python 2) instead of' ' a unicode object?' % address) raise ValueError('%r does not appear to be an IPv4 or IPv6 network' % address) def ip_interface(address): """Take an IP string/int and return an object of the correct type. Args: address: A string or integer, the IP address. Either IPv4 or IPv6 addresses may be supplied; integers less than 2**32 will be considered to be IPv4 by default. Returns: An IPv4Interface or IPv6Interface object. Raises: ValueError: if the string passed isn't either a v4 or a v6 address. Notes: The IPv?Interface classes describe an Address on a particular Network, so they're basically a combination of both the Address and Network classes. """ try: return IPv4Interface(address) except (AddressValueError, NetmaskValueError): pass try: return IPv6Interface(address) except (AddressValueError, NetmaskValueError): pass raise ValueError('%r does not appear to be an IPv4 or IPv6 interface' % address) def v4_int_to_packed(address): """Represent an address as 4 packed bytes in network (big-endian) order. Args: address: An integer representation of an IPv4 IP address. Returns: The integer address packed as 4 bytes in network (big-endian) order. Raises: ValueError: If the integer is negative or too large to be an IPv4 IP address. """ try: return _compat_to_bytes(address, 4, 'big') except (struct.error, OverflowError): raise ValueError("Address negative or too large for IPv4") def v6_int_to_packed(address): """Represent an address as 16 packed bytes in network (big-endian) order. Args: address: An integer representation of an IPv6 IP address. 
Returns: The integer address packed as 16 bytes in network (big-endian) order. """ try: return _compat_to_bytes(address, 16, 'big') except (struct.error, OverflowError): raise ValueError("Address negative or too large for IPv6") def _split_optional_netmask(address): """Helper to split the netmask and raise AddressValueError if needed""" addr = _compat_str(address).split('/') if len(addr) > 2: raise AddressValueError("Only one '/' permitted in %r" % address) return addr def _find_address_range(addresses): """Find a sequence of sorted deduplicated IPv#Address. Args: addresses: a list of IPv#Address objects. Yields: A tuple containing the first and last IP addresses in the sequence. """ it = iter(addresses) first = last = next(it) for ip in it: if ip._ip != last._ip + 1: yield first, last first = ip last = ip yield first, last def _count_righthand_zero_bits(number, bits): """Count the number of zero bits on the right hand side. Args: number: an integer. bits: maximum number of bits to count. Returns: The number of zero bits on the right hand side of the number. """ if number == 0: return bits return min(bits, _compat_bit_length(~number & (number - 1))) def summarize_address_range(first, last): """Summarize a network range given the first and last IP addresses. Example: >>> list(summarize_address_range(IPv4Address('192.0.2.0'), ... IPv4Address('192.0.2.130'))) ... #doctest: +NORMALIZE_WHITESPACE [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/31'), IPv4Network('192.0.2.130/32')] Args: first: the first IPv4Address or IPv6Address in the range. last: the last IPv4Address or IPv6Address in the range. Returns: An iterator of the summarized IPv(4|6) network objects. Raise: TypeError: If the first and last objects are not IP addresses. If the first and last objects are not the same version. ValueError: If the last object is not greater than the first. If the version of the first address is not 4 or 6. """ if (not (isinstance(first, _BaseAddress) and isinstance(last, _BaseAddress))): raise TypeError('first and last must be IP addresses, not networks') if first.version != last.version: raise TypeError("%s and %s are not of the same version" % ( first, last)) if first > last: raise ValueError('last IP address must be greater than first') if first.version == 4: ip = IPv4Network elif first.version == 6: ip = IPv6Network else: raise ValueError('unknown IP version') ip_bits = first._max_prefixlen first_int = first._ip last_int = last._ip while first_int <= last_int: nbits = min(_count_righthand_zero_bits(first_int, ip_bits), _compat_bit_length(last_int - first_int + 1) - 1) net = ip((first_int, ip_bits - nbits)) yield net first_int += 1 << nbits if first_int - 1 == ip._ALL_ONES: break def _collapse_addresses_internal(addresses): """Loops through the addresses, collapsing concurrent netblocks. Example: ip1 = IPv4Network('192.0.2.0/26') ip2 = IPv4Network('192.0.2.64/26') ip3 = IPv4Network('192.0.2.128/26') ip4 = IPv4Network('192.0.2.192/26') _collapse_addresses_internal([ip1, ip2, ip3, ip4]) -> [IPv4Network('192.0.2.0/24')] This shouldn't be called directly; it is called via collapse_addresses([]). Args: addresses: A list of IPv4Network's or IPv6Network's Returns: A list of IPv4Network's or IPv6Network's depending on what we were passed. 
""" # First merge to_merge = list(addresses) subnets = {} while to_merge: net = to_merge.pop() supernet = net.supernet() existing = subnets.get(supernet) if existing is None: subnets[supernet] = net elif existing != net: # Merge consecutive subnets del subnets[supernet] to_merge.append(supernet) # Then iterate over resulting networks, skipping subsumed subnets last = None for net in sorted(subnets.values()): if last is not None: # Since they are sorted, # last.network_address <= net.network_address is a given. if last.broadcast_address >= net.broadcast_address: continue yield net last = net def collapse_addresses(addresses): """Collapse a list of IP objects. Example: collapse_addresses([IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/25')]) -> [IPv4Network('192.0.2.0/24')] Args: addresses: An iterator of IPv4Network or IPv6Network objects. Returns: An iterator of the collapsed IPv(4|6)Network objects. Raises: TypeError: If passed a list of mixed version objects. """ addrs = [] ips = [] nets = [] # split IP addresses and networks for ip in addresses: if isinstance(ip, _BaseAddress): if ips and ips[-1]._version != ip._version: raise TypeError("%s and %s are not of the same version" % ( ip, ips[-1])) ips.append(ip) elif ip._prefixlen == ip._max_prefixlen: if ips and ips[-1]._version != ip._version: raise TypeError("%s and %s are not of the same version" % ( ip, ips[-1])) try: ips.append(ip.ip) except AttributeError: ips.append(ip.network_address) else: if nets and nets[-1]._version != ip._version: raise TypeError("%s and %s are not of the same version" % ( ip, nets[-1])) nets.append(ip) # sort and dedup ips = sorted(set(ips)) # find consecutive address ranges in the sorted sequence and summarize them if ips: for first, last in _find_address_range(ips): addrs.extend(summarize_address_range(first, last)) return _collapse_addresses_internal(addrs + nets) def get_mixed_type_key(obj): """Return a key suitable for sorting between networks and addresses. Address and Network objects are not sortable by default; they're fundamentally different so the expression IPv4Address('192.0.2.0') <= IPv4Network('192.0.2.0/24') doesn't make any sense. There are some times however, where you may wish to have ipaddress sort these for you anyway. If you need to do this, you can use this function as the key= argument to sorted(). Args: obj: either a Network or Address object. Returns: appropriate key. 
""" if isinstance(obj, _BaseNetwork): return obj._get_networks_key() elif isinstance(obj, _BaseAddress): return obj._get_address_key() return NotImplemented class _IPAddressBase(_TotalOrderingMixin): """The mother class.""" __slots__ = () @property def exploded(self): """Return the longhand version of the IP address as a string.""" return self._explode_shorthand_ip_string() @property def compressed(self): """Return the shorthand version of the IP address as a string.""" return _compat_str(self) @property def reverse_pointer(self): """The name of the reverse DNS pointer for the IP address, e.g.: >>> ipaddress.ip_address("127.0.0.1").reverse_pointer '1.0.0.127.in-addr.arpa' >>> ipaddress.ip_address("2001:db8::1").reverse_pointer '1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa' """ return self._reverse_pointer() @property def version(self): msg = '%200s has no version specified' % (type(self),) raise NotImplementedError(msg) def _check_int_address(self, address): if address < 0: msg = "%d (< 0) is not permitted as an IPv%d address" raise AddressValueError(msg % (address, self._version)) if address > self._ALL_ONES: msg = "%d (>= 2**%d) is not permitted as an IPv%d address" raise AddressValueError(msg % (address, self._max_prefixlen, self._version)) def _check_packed_address(self, address, expected_len): address_len = len(address) if address_len != expected_len: msg = ( '%r (len %d != %d) is not permitted as an IPv%d address. ' 'Did you pass in a bytes (str in Python 2) instead of' ' a unicode object?') raise AddressValueError(msg % (address, address_len, expected_len, self._version)) @classmethod def _ip_int_from_prefix(cls, prefixlen): """Turn the prefix length into a bitwise netmask Args: prefixlen: An integer, the prefix length. Returns: An integer. """ return cls._ALL_ONES ^ (cls._ALL_ONES >> prefixlen) @classmethod def _prefix_from_ip_int(cls, ip_int): """Return prefix length from the bitwise netmask. Args: ip_int: An integer, the netmask in expanded bitwise format Returns: An integer, the prefix length. Raises: ValueError: If the input intermingles zeroes & ones """ trailing_zeroes = _count_righthand_zero_bits(ip_int, cls._max_prefixlen) prefixlen = cls._max_prefixlen - trailing_zeroes leading_ones = ip_int >> trailing_zeroes all_ones = (1 << prefixlen) - 1 if leading_ones != all_ones: byteslen = cls._max_prefixlen // 8 details = _compat_to_bytes(ip_int, byteslen, 'big') msg = 'Netmask pattern %r mixes zeroes & ones' raise ValueError(msg % details) return prefixlen @classmethod def _report_invalid_netmask(cls, netmask_str): msg = '%r is not a valid netmask' % netmask_str raise NetmaskValueError(msg) @classmethod def _prefix_from_prefix_string(cls, prefixlen_str): """Return prefix length from a numeric string Args: prefixlen_str: The string to be converted Returns: An integer, the prefix length. Raises: NetmaskValueError: If the input is not a valid netmask """ # int allows a leading +/- as well as surrounding whitespace, # so we ensure that isn't the case if not _BaseV4._DECIMAL_DIGITS.issuperset(prefixlen_str): cls._report_invalid_netmask(prefixlen_str) try: prefixlen = int(prefixlen_str) except ValueError: cls._report_invalid_netmask(prefixlen_str) if not (0 <= prefixlen <= cls._max_prefixlen): cls._report_invalid_netmask(prefixlen_str) return prefixlen @classmethod def _prefix_from_ip_string(cls, ip_str): """Turn a netmask/hostmask string into a prefix length Args: ip_str: The netmask/hostmask to be converted Returns: An integer, the prefix length. 
Raises: NetmaskValueError: If the input is not a valid netmask/hostmask """ # Parse the netmask/hostmask like an IP address. try: ip_int = cls._ip_int_from_string(ip_str) except AddressValueError: cls._report_invalid_netmask(ip_str) # Try matching a netmask (this would be /1*0*/ as a bitwise regexp). # Note that the two ambiguous cases (all-ones and all-zeroes) are # treated as netmasks. try: return cls._prefix_from_ip_int(ip_int) except ValueError: pass # Invert the bits, and try matching a /0+1+/ hostmask instead. ip_int ^= cls._ALL_ONES try: return cls._prefix_from_ip_int(ip_int) except ValueError: cls._report_invalid_netmask(ip_str) def __reduce__(self): return self.__class__, (_compat_str(self),) class _BaseAddress(_IPAddressBase): """A generic IP object. This IP class contains the version independent methods which are used by single IP addresses. """ __slots__ = () def __int__(self): return self._ip def __eq__(self, other): try: return (self._ip == other._ip and self._version == other._version) except AttributeError: return NotImplemented def __lt__(self, other): if not isinstance(other, _IPAddressBase): return NotImplemented if not isinstance(other, _BaseAddress): raise TypeError('%s and %s are not of the same type' % ( self, other)) if self._version != other._version: raise TypeError('%s and %s are not of the same version' % ( self, other)) if self._ip != other._ip: return self._ip < other._ip return False # Shorthand for Integer addition and subtraction. This is not # meant to ever support addition/subtraction of addresses. def __add__(self, other): if not isinstance(other, _compat_int_types): return NotImplemented return self.__class__(int(self) + other) def __sub__(self, other): if not isinstance(other, _compat_int_types): return NotImplemented return self.__class__(int(self) - other) def __repr__(self): return '%s(%r)' % (self.__class__.__name__, _compat_str(self)) def __str__(self): return _compat_str(self._string_from_ip_int(self._ip)) def __hash__(self): return hash(hex(int(self._ip))) def _get_address_key(self): return (self._version, self) def __reduce__(self): return self.__class__, (self._ip,) class _BaseNetwork(_IPAddressBase): """A generic IP network object. This IP class contains the version independent methods which are used by networks. """ def __init__(self, address): self._cache = {} def __repr__(self): return '%s(%r)' % (self.__class__.__name__, _compat_str(self)) def __str__(self): return '%s/%d' % (self.network_address, self.prefixlen) def hosts(self): """Generate Iterator over usable hosts in a network. This is like __iter__ except it doesn't return the network or broadcast addresses. 
""" network = int(self.network_address) broadcast = int(self.broadcast_address) for x in _compat_range(network + 1, broadcast): yield self._address_class(x) def __iter__(self): network = int(self.network_address) broadcast = int(self.broadcast_address) for x in _compat_range(network, broadcast + 1): yield self._address_class(x) def __getitem__(self, n): network = int(self.network_address) broadcast = int(self.broadcast_address) if n >= 0: if network + n > broadcast: raise IndexError('address out of range') return self._address_class(network + n) else: n += 1 if broadcast + n < network: raise IndexError('address out of range') return self._address_class(broadcast + n) def __lt__(self, other): if not isinstance(other, _IPAddressBase): return NotImplemented if not isinstance(other, _BaseNetwork): raise TypeError('%s and %s are not of the same type' % ( self, other)) if self._version != other._version: raise TypeError('%s and %s are not of the same version' % ( self, other)) if self.network_address != other.network_address: return self.network_address < other.network_address if self.netmask != other.netmask: return self.netmask < other.netmask return False def __eq__(self, other): try: return (self._version == other._version and self.network_address == other.network_address and int(self.netmask) == int(other.netmask)) except AttributeError: return NotImplemented def __hash__(self): return hash(int(self.network_address) ^ int(self.netmask)) def __contains__(self, other): # always false if one is v4 and the other is v6. if self._version != other._version: return False # dealing with another network. if isinstance(other, _BaseNetwork): return False # dealing with another address else: # address return (int(self.network_address) <= int(other._ip) <= int(self.broadcast_address)) def overlaps(self, other): """Tell if self is partly contained in other.""" return self.network_address in other or ( self.broadcast_address in other or ( other.network_address in self or ( other.broadcast_address in self))) @property def broadcast_address(self): x = self._cache.get('broadcast_address') if x is None: x = self._address_class(int(self.network_address) | int(self.hostmask)) self._cache['broadcast_address'] = x return x @property def hostmask(self): x = self._cache.get('hostmask') if x is None: x = self._address_class(int(self.netmask) ^ self._ALL_ONES) self._cache['hostmask'] = x return x @property def with_prefixlen(self): return '%s/%d' % (self.network_address, self._prefixlen) @property def with_netmask(self): return '%s/%s' % (self.network_address, self.netmask) @property def with_hostmask(self): return '%s/%s' % (self.network_address, self.hostmask) @property def num_addresses(self): """Number of hosts in the current subnet.""" return int(self.broadcast_address) - int(self.network_address) + 1 @property def _address_class(self): # Returning bare address objects (rather than interfaces) allows for # more consistent behaviour across the network address, broadcast # address and individual host addresses. msg = '%200s has no associated address class' % (type(self),) raise NotImplementedError(msg) @property def prefixlen(self): return self._prefixlen def address_exclude(self, other): """Remove an address from a larger block. 
For example: addr1 = ip_network('192.0.2.0/28') addr2 = ip_network('192.0.2.1/32') list(addr1.address_exclude(addr2)) = [IPv4Network('192.0.2.0/32'), IPv4Network('192.0.2.2/31'), IPv4Network('192.0.2.4/30'), IPv4Network('192.0.2.8/29')] or IPv6: addr1 = ip_network('2001:db8::1/32') addr2 = ip_network('2001:db8::1/128') list(addr1.address_exclude(addr2)) = [ip_network('2001:db8::1/128'), ip_network('2001:db8::2/127'), ip_network('2001:db8::4/126'), ip_network('2001:db8::8/125'), ... ip_network('2001:db8:8000::/33')] Args: other: An IPv4Network or IPv6Network object of the same type. Returns: An iterator of the IPv(4|6)Network objects which is self minus other. Raises: TypeError: If self and other are of differing address versions, or if other is not a network object. ValueError: If other is not completely contained by self. """ if not self._version == other._version: raise TypeError("%s and %s are not of the same version" % ( self, other)) if not isinstance(other, _BaseNetwork): raise TypeError("%s is not a network object" % other) if not other.subnet_of(self): raise ValueError('%s not contained in %s' % (other, self)) if other == self: return # Make sure we're comparing the network of other. other = other.__class__('%s/%s' % (other.network_address, other.prefixlen)) s1, s2 = self.subnets() while s1 != other and s2 != other: if other.subnet_of(s1): yield s2 s1, s2 = s1.subnets() elif other.subnet_of(s2): yield s1 s1, s2 = s2.subnets() else: # If we got here, there's a bug somewhere. raise AssertionError('Error performing exclusion: ' 's1: %s s2: %s other: %s' % (s1, s2, other)) if s1 == other: yield s2 elif s2 == other: yield s1 else: # If we got here, there's a bug somewhere. raise AssertionError('Error performing exclusion: ' 's1: %s s2: %s other: %s' % (s1, s2, other)) def compare_networks(self, other): """Compare two IP objects. This is only concerned about the comparison of the integer representation of the network addresses. This means that the host bits aren't considered at all in this method. If you want to compare host bits, you can easily enough do a 'HostA._ip < HostB._ip' Args: other: An IP object. Returns: If the IP versions of self and other are the same, returns: -1 if self < other: eg: IPv4Network('192.0.2.0/25') < IPv4Network('192.0.2.128/25') IPv6Network('2001:db8::1000/124') < IPv6Network('2001:db8::2000/124') 0 if self == other eg: IPv4Network('192.0.2.0/24') == IPv4Network('192.0.2.0/24') IPv6Network('2001:db8::1000/124') == IPv6Network('2001:db8::1000/124') 1 if self > other eg: IPv4Network('192.0.2.128/25') > IPv4Network('192.0.2.0/25') IPv6Network('2001:db8::2000/124') > IPv6Network('2001:db8::1000/124') Raises: TypeError if the IP versions are different. """ # does this need to raise a ValueError? if self._version != other._version: raise TypeError('%s and %s are not of the same type' % ( self, other)) # self._version == other._version below here: if self.network_address < other.network_address: return -1 if self.network_address > other.network_address: return 1 # self.network_address == other.network_address below here: if self.netmask < other.netmask: return -1 if self.netmask > other.netmask: return 1 return 0 def _get_networks_key(self): """Network-only key function. Returns an object that identifies this address' network and netmask. This function is a suitable "key" argument for sorted() and list.sort(). 
""" return (self._version, self.network_address, self.netmask) def subnets(self, prefixlen_diff=1, new_prefix=None): """The subnets which join to make the current subnet. In the case that self contains only one IP (self._prefixlen == 32 for IPv4 or self._prefixlen == 128 for IPv6), yield an iterator with just ourself. Args: prefixlen_diff: An integer, the amount the prefix length should be increased by. This should not be set if new_prefix is also set. new_prefix: The desired new prefix length. This must be a larger number (smaller prefix) than the existing prefix. This should not be set if prefixlen_diff is also set. Returns: An iterator of IPv(4|6) objects. Raises: ValueError: The prefixlen_diff is too small or too large. OR prefixlen_diff and new_prefix are both set or new_prefix is a smaller number than the current prefix (smaller number means a larger network) """ if self._prefixlen == self._max_prefixlen: yield self return if new_prefix is not None: if new_prefix < self._prefixlen: raise ValueError('new prefix must be longer') if prefixlen_diff != 1: raise ValueError('cannot set prefixlen_diff and new_prefix') prefixlen_diff = new_prefix - self._prefixlen if prefixlen_diff < 0: raise ValueError('prefix length diff must be > 0') new_prefixlen = self._prefixlen + prefixlen_diff if new_prefixlen > self._max_prefixlen: raise ValueError( 'prefix length diff %d is invalid for netblock %s' % ( new_prefixlen, self)) start = int(self.network_address) end = int(self.broadcast_address) + 1 step = (int(self.hostmask) + 1) >> prefixlen_diff for new_addr in _compat_range(start, end, step): current = self.__class__((new_addr, new_prefixlen)) yield current def supernet(self, prefixlen_diff=1, new_prefix=None): """The supernet containing the current network. Args: prefixlen_diff: An integer, the amount the prefix length of the network should be decreased by. For example, given a /24 network and a prefixlen_diff of 3, a supernet with a /21 netmask is returned. Returns: An IPv4 network object. Raises: ValueError: If self.prefixlen - prefixlen_diff < 0. I.e., you have a negative prefix length. OR If prefixlen_diff and new_prefix are both set or new_prefix is a larger number than the current prefix (larger number means a smaller network) """ if self._prefixlen == 0: return self if new_prefix is not None: if new_prefix > self._prefixlen: raise ValueError('new prefix must be shorter') if prefixlen_diff != 1: raise ValueError('cannot set prefixlen_diff and new_prefix') prefixlen_diff = self._prefixlen - new_prefix new_prefixlen = self.prefixlen - prefixlen_diff if new_prefixlen < 0: raise ValueError( 'current prefixlen is %d, cannot have a prefixlen_diff of %d' % (self.prefixlen, prefixlen_diff)) return self.__class__(( int(self.network_address) & (int(self.netmask) << prefixlen_diff), new_prefixlen)) @property def is_multicast(self): """Test if the address is reserved for multicast use. Returns: A boolean, True if the address is a multicast address. See RFC 2373 2.7 for details. """ return (self.network_address.is_multicast and self.broadcast_address.is_multicast) @staticmethod def _is_subnet_of(a, b): try: # Always false if one is v4 and the other is v6. 
if a._version != b._version: raise TypeError("%s and %s are not of the same version" % (a, b)) return (b.network_address <= a.network_address and b.broadcast_address >= a.broadcast_address) except AttributeError: raise TypeError("Unable to test subnet containment " "between %s and %s" % (a, b)) def subnet_of(self, other): """Return True if this network is a subnet of other.""" return self._is_subnet_of(self, other) def supernet_of(self, other): """Return True if this network is a supernet of other.""" return self._is_subnet_of(other, self) @property def is_reserved(self): """Test if the address is otherwise IETF reserved. Returns: A boolean, True if the address is within one of the reserved IPv6 Network ranges. """ return (self.network_address.is_reserved and self.broadcast_address.is_reserved) @property def is_link_local(self): """Test if the address is reserved for link-local. Returns: A boolean, True if the address is reserved per RFC 4291. """ return (self.network_address.is_link_local and self.broadcast_address.is_link_local) @property def is_private(self): """Test if this address is allocated for private networks. Returns: A boolean, True if the address is reserved per iana-ipv4-special-registry or iana-ipv6-special-registry. """ return (self.network_address.is_private and self.broadcast_address.is_private) @property def is_global(self): """Test if this address is allocated for public networks. Returns: A boolean, True if the address is not reserved per iana-ipv4-special-registry or iana-ipv6-special-registry. """ return not self.is_private @property def is_unspecified(self): """Test if the address is unspecified. Returns: A boolean, True if this is the unspecified address as defined in RFC 2373 2.5.2. """ return (self.network_address.is_unspecified and self.broadcast_address.is_unspecified) @property def is_loopback(self): """Test if the address is a loopback address. Returns: A boolean, True if the address is a loopback address as defined in RFC 2373 2.5.3. """ return (self.network_address.is_loopback and self.broadcast_address.is_loopback) class _BaseV4(object): """Base IPv4 object. The following methods are used by IPv4 objects in both single IP addresses and networks. """ __slots__ = () _version = 4 # Equivalent to 255.255.255.255 or 32 bits of 1's. _ALL_ONES = (2 ** IPV4LENGTH) - 1 _DECIMAL_DIGITS = frozenset('0123456789') # the valid octets for host and netmasks. only useful for IPv4. _valid_mask_octets = frozenset([255, 254, 252, 248, 240, 224, 192, 128, 0]) _max_prefixlen = IPV4LENGTH # There are only a handful of valid v4 netmasks, so we cache them all # when constructed (see _make_netmask()). _netmask_cache = {} def _explode_shorthand_ip_string(self): return _compat_str(self) @classmethod def _make_netmask(cls, arg): """Make a (netmask, prefix_len) tuple from the given argument. Argument can be: - an integer (the prefix length) - a string representing the prefix length (e.g. "24") - a string representing the prefix netmask (e.g. "255.255.255.0") """ if arg not in cls._netmask_cache: if isinstance(arg, _compat_int_types): prefixlen = arg else: try: # Check for a netmask in prefix length form prefixlen = cls._prefix_from_prefix_string(arg) except NetmaskValueError: # Check for a netmask or hostmask in dotted-quad form. # This may raise NetmaskValueError.
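# (Illustrative example.) The three accepted spellings converge here:
# _make_netmask(24), _make_netmask(u'24') and
# _make_netmask(u'255.255.255.0') are distinct cache keys but all
# produce the same pair, (IPv4Address(u'255.255.255.0'), 24).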
prefixlen = cls._prefix_from_ip_string(arg) netmask = IPv4Address(cls._ip_int_from_prefix(prefixlen)) cls._netmask_cache[arg] = netmask, prefixlen return cls._netmask_cache[arg] @classmethod def _ip_int_from_string(cls, ip_str): """Turn the given IP string into an integer for comparison. Args: ip_str: A string, the IP ip_str. Returns: The IP ip_str as an integer. Raises: AddressValueError: if ip_str isn't a valid IPv4 Address. """ if not ip_str: raise AddressValueError('Address cannot be empty') octets = ip_str.split('.') if len(octets) != 4: raise AddressValueError("Expected 4 octets in %r" % ip_str) try: return _compat_int_from_byte_vals( map(cls._parse_octet, octets), 'big') except ValueError as exc: raise AddressValueError("%s in %r" % (exc, ip_str)) @classmethod def _parse_octet(cls, octet_str): """Convert a decimal octet into an integer. Args: octet_str: A string, the number to parse. Returns: The octet as an integer. Raises: ValueError: if the octet isn't strictly a decimal from [0..255]. """ if not octet_str: raise ValueError("Empty octet not permitted") # Whitelist the characters, since int() allows a lot of bizarre stuff. if not cls._DECIMAL_DIGITS.issuperset(octet_str): msg = "Only decimal digits permitted in %r" raise ValueError(msg % octet_str) # We do the length check second, since the invalid character error # is likely to be more informative for the user if len(octet_str) > 3: msg = "At most 3 characters permitted in %r" raise ValueError(msg % octet_str) # Convert to integer (we know digits are legal) octet_int = int(octet_str, 10) # Any octets that look like they *might* be written in octal, # and which don't look exactly the same in both octal and # decimal are rejected as ambiguous if octet_int > 7 and octet_str[0] == '0': msg = "Ambiguous (octal/decimal) value in %r not permitted" raise ValueError(msg % octet_str) if octet_int > 255: raise ValueError("Octet %d (> 255) not permitted" % octet_int) return octet_int @classmethod def _string_from_ip_int(cls, ip_int): """Turns a 32-bit integer into dotted decimal notation. Args: ip_int: An integer, the IP address. Returns: The IP address as a string in dotted decimal notation. """ return '.'.join(_compat_str(struct.unpack(b'!B', b)[0] if isinstance(b, bytes) else b) for b in _compat_to_bytes(ip_int, 4, 'big')) def _is_hostmask(self, ip_str): """Test if the IP string is a hostmask (rather than a netmask). Args: ip_str: A string, the potential hostmask. Returns: A boolean, True if the IP string is a hostmask. """ bits = ip_str.split('.') try: parts = [x for x in map(int, bits) if x in self._valid_mask_octets] except ValueError: return False if len(parts) != len(bits): return False if parts[0] < parts[-1]: return True return False def _reverse_pointer(self): """Return the reverse DNS pointer name for the IPv4 address. This implements the method described in RFC1035 3.5. """ reverse_octets = _compat_str(self).split('.')[::-1] return '.'.join(reverse_octets) + '.in-addr.arpa' @property def max_prefixlen(self): return self._max_prefixlen @property def version(self): return self._version class IPv4Address(_BaseV4, _BaseAddress): """Represent and manipulate single IPv4 Addresses.""" __slots__ = ('_ip', '__weakref__') def __init__(self, address): """ Args: address: A string or integer representing the IP Additionally, an integer can be passed, so IPv4Address('192.0.2.1') == IPv4Address(3221225985). 
or, more generally IPv4Address(int(IPv4Address('192.0.2.1'))) == IPv4Address('192.0.2.1') Raises: AddressValueError: If ipaddress isn't a valid IPv4 address. """ # Efficient constructor from integer. if isinstance(address, _compat_int_types): self._check_int_address(address) self._ip = address return # Constructing from a packed address if isinstance(address, bytes): self._check_packed_address(address, 4) bvs = _compat_bytes_to_byte_vals(address) self._ip = _compat_int_from_byte_vals(bvs, 'big') return # Assume input argument to be string or any object representation # which converts into a formatted IP string. addr_str = _compat_str(address) if '/' in addr_str: raise AddressValueError("Unexpected '/' in %r" % address) self._ip = self._ip_int_from_string(addr_str) @property def packed(self): """The binary representation of this address.""" return v4_int_to_packed(self._ip) @property def is_reserved(self): """Test if the address is otherwise IETF reserved. Returns: A boolean, True if the address is within the reserved IPv4 Network range. """ return self in self._constants._reserved_network @property def is_private(self): """Test if this address is allocated for private networks. Returns: A boolean, True if the address is reserved per iana-ipv4-special-registry. """ return any(self in net for net in self._constants._private_networks) @property def is_global(self): return ( self not in self._constants._public_network and not self.is_private) @property def is_multicast(self): """Test if the address is reserved for multicast use. Returns: A boolean, True if the address is multicast. See RFC 3171 for details. """ return self in self._constants._multicast_network @property def is_unspecified(self): """Test if the address is unspecified. Returns: A boolean, True if this is the unspecified address as defined in RFC 5735 3. """ return self == self._constants._unspecified_address @property def is_loopback(self): """Test if the address is a loopback address. Returns: A boolean, True if the address is a loopback per RFC 3330. """ return self in self._constants._loopback_network @property def is_link_local(self): """Test if the address is reserved for link-local. Returns: A boolean, True if the address is link-local per RFC 3927. """ return self in self._constants._linklocal_network class IPv4Interface(IPv4Address): def __init__(self, address): if isinstance(address, (bytes, _compat_int_types)): IPv4Address.__init__(self, address) self.network = IPv4Network(self._ip) self._prefixlen = self._max_prefixlen return if isinstance(address, tuple): IPv4Address.__init__(self, address[0]) if len(address) > 1: self._prefixlen = int(address[1]) else: self._prefixlen = self._max_prefixlen self.network = IPv4Network(address, strict=False) self.netmask = self.network.netmask self.hostmask = self.network.hostmask return addr = _split_optional_netmask(address) IPv4Address.__init__(self, addr[0]) self.network = IPv4Network(address, strict=False) self._prefixlen = self.network._prefixlen self.netmask = self.network.netmask self.hostmask = self.network.hostmask def __str__(self): return '%s/%d' % (self._string_from_ip_int(self._ip), self.network.prefixlen) def __eq__(self, other): address_equal = IPv4Address.__eq__(self, other) if not address_equal or address_equal is NotImplemented: return address_equal try: return self.network == other.network except AttributeError: # An interface with an associated network is NOT the # same as an unassociated address. That's why the hash # takes the extra info into account. 
            return False

    def __lt__(self, other):
        address_less = IPv4Address.__lt__(self, other)
        if address_less is NotImplemented:
            return NotImplemented
        try:
            return (self.network < other.network or
                    self.network == other.network and address_less)
        except AttributeError:
            # We *do* allow addresses and interfaces to be sorted. The
            # unassociated address is considered less than all interfaces.
            return False

    def __hash__(self):
        return self._ip ^ self._prefixlen ^ int(self.network.network_address)

    __reduce__ = _IPAddressBase.__reduce__

    @property
    def ip(self):
        return IPv4Address(self._ip)

    @property
    def with_prefixlen(self):
        return '%s/%s' % (self._string_from_ip_int(self._ip),
                          self._prefixlen)

    @property
    def with_netmask(self):
        return '%s/%s' % (self._string_from_ip_int(self._ip),
                          self.netmask)

    @property
    def with_hostmask(self):
        return '%s/%s' % (self._string_from_ip_int(self._ip),
                          self.hostmask)


class IPv4Network(_BaseV4, _BaseNetwork):
    """This class represents and manipulates 32-bit IPv4 networks and
    addresses.

    Attributes: [examples for IPv4Network('192.0.2.0/27')]
        .network_address: IPv4Address('192.0.2.0')
        .hostmask: IPv4Address('0.0.0.31')
        .broadcast_address: IPv4Address('192.0.2.31')
        .netmask: IPv4Address('255.255.255.224')
        .prefixlen: 27

    """
    # Class to use when creating address objects
    _address_class = IPv4Address

    def __init__(self, address, strict=True):
        """Instantiate a new IPv4 network object.

        Args:
            address: A string or integer representing the IP [& network].
              '192.0.2.0/24'
              '192.0.2.0/255.255.255.0'
              '192.0.0.2/0.0.0.255'
              are all functionally the same in IPv4. Similarly,
              '192.0.2.1'
              '192.0.2.1/255.255.255.255'
              '192.0.2.1/32'
              are also functionally equivalent. That is to say, failing to
              provide a subnetmask will create an object with a mask of /32.

              If the mask (portion after the / in the argument) is given in
              dotted quad form, it is treated as a netmask if it starts with
              a non-zero field (e.g. /255.0.0.0 == /8) and as a hostmask if
              it starts with a zero field (e.g. 0.255.255.255 == /8), with
              the single exception of an all-zero mask which is treated as a
              netmask == /0. If no mask is given, a default of /32 is used.

              Additionally, an integer can be passed, so
              IPv4Network('192.0.2.1') == IPv4Network(3221225985)
              or, more generally
              IPv4Interface(int(IPv4Interface('192.0.2.1'))) ==
                IPv4Interface('192.0.2.1')

        Raises:
            AddressValueError: If ipaddress isn't a valid IPv4 address.
            NetmaskValueError: If the netmask isn't valid for
              an IPv4 address.
            ValueError: If strict is True and a network address is not
              supplied.

        """
        _BaseNetwork.__init__(self, address)

        # Constructing from a packed address or integer
        if isinstance(address, (_compat_int_types, bytes)):
            self.network_address = IPv4Address(address)
            self.netmask, self._prefixlen = self._make_netmask(
                self._max_prefixlen)
            # fixme: address/network test here.
            return

        if isinstance(address, tuple):
            if len(address) > 1:
                arg = address[1]
            else:
                # We weren't given an address[1]
                arg = self._max_prefixlen
            self.network_address = IPv4Address(address[0])
            self.netmask, self._prefixlen = self._make_netmask(arg)
            packed = int(self.network_address)
            if packed & int(self.netmask) != packed:
                if strict:
                    raise ValueError('%s has host bits set' % self)
                else:
                    self.network_address = IPv4Address(packed &
                                                       int(self.netmask))
            return

        # Assume input argument to be string or any object representation
        # which converts into a formatted IP prefix string.
addr = _split_optional_netmask(address) self.network_address = IPv4Address(self._ip_int_from_string(addr[0])) if len(addr) == 2: arg = addr[1] else: arg = self._max_prefixlen self.netmask, self._prefixlen = self._make_netmask(arg) if strict: if (IPv4Address(int(self.network_address) & int(self.netmask)) != self.network_address): raise ValueError('%s has host bits set' % self) self.network_address = IPv4Address(int(self.network_address) & int(self.netmask)) if self._prefixlen == (self._max_prefixlen - 1): self.hosts = self.__iter__ @property def is_global(self): """Test if this address is allocated for public networks. Returns: A boolean, True if the address is not reserved per iana-ipv4-special-registry. """ return (not (self.network_address in IPv4Network('100.64.0.0/10') and self.broadcast_address in IPv4Network('100.64.0.0/10')) and not self.is_private) class _IPv4Constants(object): _linklocal_network = IPv4Network('169.254.0.0/16') _loopback_network = IPv4Network('127.0.0.0/8') _multicast_network = IPv4Network('224.0.0.0/4') _public_network = IPv4Network('100.64.0.0/10') _private_networks = [ IPv4Network('0.0.0.0/8'), IPv4Network('10.0.0.0/8'), IPv4Network('127.0.0.0/8'), IPv4Network('169.254.0.0/16'), IPv4Network('172.16.0.0/12'), IPv4Network('192.0.0.0/29'), IPv4Network('192.0.0.170/31'), IPv4Network('192.0.2.0/24'), IPv4Network('192.168.0.0/16'), IPv4Network('198.18.0.0/15'), IPv4Network('198.51.100.0/24'), IPv4Network('203.0.113.0/24'), IPv4Network('240.0.0.0/4'), IPv4Network('255.255.255.255/32'), ] _reserved_network = IPv4Network('240.0.0.0/4') _unspecified_address = IPv4Address('0.0.0.0') IPv4Address._constants = _IPv4Constants class _BaseV6(object): """Base IPv6 object. The following methods are used by IPv6 objects in both single IP addresses and networks. """ __slots__ = () _version = 6 _ALL_ONES = (2 ** IPV6LENGTH) - 1 _HEXTET_COUNT = 8 _HEX_DIGITS = frozenset('0123456789ABCDEFabcdef') _max_prefixlen = IPV6LENGTH # There are only a bunch of valid v6 netmasks, so we cache them all # when constructed (see _make_netmask()). _netmask_cache = {} @classmethod def _make_netmask(cls, arg): """Make a (netmask, prefix_len) tuple from the given argument. Argument can be: - an integer (the prefix length) - a string representing the prefix length (e.g. "24") - a string representing the prefix netmask (e.g. "255.255.255.0") """ if arg not in cls._netmask_cache: if isinstance(arg, _compat_int_types): prefixlen = arg else: prefixlen = cls._prefix_from_prefix_string(arg) netmask = IPv6Address(cls._ip_int_from_prefix(prefixlen)) cls._netmask_cache[arg] = netmask, prefixlen return cls._netmask_cache[arg] @classmethod def _ip_int_from_string(cls, ip_str): """Turn an IPv6 ip_str into an integer. Args: ip_str: A string, the IPv6 ip_str. Returns: An int, the IPv6 address Raises: AddressValueError: if ip_str isn't a valid IPv6 Address. """ if not ip_str: raise AddressValueError('Address cannot be empty') parts = ip_str.split(':') # An IPv6 address needs at least 2 colons (3 parts). _min_parts = 3 if len(parts) < _min_parts: msg = "At least %d parts expected in %r" % (_min_parts, ip_str) raise AddressValueError(msg) # If the address has an IPv4-style suffix, convert it to hexadecimal. if '.' in parts[-1]: try: ipv4_int = IPv4Address(parts.pop())._ip except AddressValueError as exc: raise AddressValueError("%s in %r" % (exc, ip_str)) parts.append('%x' % ((ipv4_int >> 16) & 0xFFFF)) parts.append('%x' % (ipv4_int & 0xFFFF)) # An IPv6 address can't have more than 8 colons (9 parts). 
# The extra colon comes from using the "::" notation for a single # leading or trailing zero part. _max_parts = cls._HEXTET_COUNT + 1 if len(parts) > _max_parts: msg = "At most %d colons permitted in %r" % ( _max_parts - 1, ip_str) raise AddressValueError(msg) # Disregarding the endpoints, find '::' with nothing in between. # This indicates that a run of zeroes has been skipped. skip_index = None for i in _compat_range(1, len(parts) - 1): if not parts[i]: if skip_index is not None: # Can't have more than one '::' msg = "At most one '::' permitted in %r" % ip_str raise AddressValueError(msg) skip_index = i # parts_hi is the number of parts to copy from above/before the '::' # parts_lo is the number of parts to copy from below/after the '::' if skip_index is not None: # If we found a '::', then check if it also covers the endpoints. parts_hi = skip_index parts_lo = len(parts) - skip_index - 1 if not parts[0]: parts_hi -= 1 if parts_hi: msg = "Leading ':' only permitted as part of '::' in %r" raise AddressValueError(msg % ip_str) # ^: requires ^:: if not parts[-1]: parts_lo -= 1 if parts_lo: msg = "Trailing ':' only permitted as part of '::' in %r" raise AddressValueError(msg % ip_str) # :$ requires ::$ parts_skipped = cls._HEXTET_COUNT - (parts_hi + parts_lo) if parts_skipped < 1: msg = "Expected at most %d other parts with '::' in %r" raise AddressValueError(msg % (cls._HEXTET_COUNT - 1, ip_str)) else: # Otherwise, allocate the entire address to parts_hi. The # endpoints could still be empty, but _parse_hextet() will check # for that. if len(parts) != cls._HEXTET_COUNT: msg = "Exactly %d parts expected without '::' in %r" raise AddressValueError(msg % (cls._HEXTET_COUNT, ip_str)) if not parts[0]: msg = "Leading ':' only permitted as part of '::' in %r" raise AddressValueError(msg % ip_str) # ^: requires ^:: if not parts[-1]: msg = "Trailing ':' only permitted as part of '::' in %r" raise AddressValueError(msg % ip_str) # :$ requires ::$ parts_hi = len(parts) parts_lo = 0 parts_skipped = 0 try: # Now, parse the hextets into a 128-bit integer. ip_int = 0 for i in range(parts_hi): ip_int <<= 16 ip_int |= cls._parse_hextet(parts[i]) ip_int <<= 16 * parts_skipped for i in range(-parts_lo, 0): ip_int <<= 16 ip_int |= cls._parse_hextet(parts[i]) return ip_int except ValueError as exc: raise AddressValueError("%s in %r" % (exc, ip_str)) @classmethod def _parse_hextet(cls, hextet_str): """Convert an IPv6 hextet string into an integer. Args: hextet_str: A string, the number to parse. Returns: The hextet as an integer. Raises: ValueError: if the input isn't strictly a hex number from [0..FFFF]. """ # Whitelist the characters, since int() allows a lot of bizarre stuff. if not cls._HEX_DIGITS.issuperset(hextet_str): raise ValueError("Only hex digits permitted in %r" % hextet_str) # We do the length check second, since the invalid character error # is likely to be more informative for the user if len(hextet_str) > 4: msg = "At most 4 characters permitted in %r" raise ValueError(msg % hextet_str) # Length check means we can skip checking the integer value return int(hextet_str, 16) @classmethod def _compress_hextets(cls, hextets): """Compresses a list of hextets. Compresses a list of strings, replacing the longest continuous sequence of "0" in the list with "" and adding empty strings at the beginning or at the end of the string such that subsequently calling ":".join(hextets) will produce the compressed version of the IPv6 address. Args: hextets: A list of strings, the hextets to compress. 
Returns: A list of strings. """ best_doublecolon_start = -1 best_doublecolon_len = 0 doublecolon_start = -1 doublecolon_len = 0 for index, hextet in enumerate(hextets): if hextet == '0': doublecolon_len += 1 if doublecolon_start == -1: # Start of a sequence of zeros. doublecolon_start = index if doublecolon_len > best_doublecolon_len: # This is the longest sequence of zeros so far. best_doublecolon_len = doublecolon_len best_doublecolon_start = doublecolon_start else: doublecolon_len = 0 doublecolon_start = -1 if best_doublecolon_len > 1: best_doublecolon_end = (best_doublecolon_start + best_doublecolon_len) # For zeros at the end of the address. if best_doublecolon_end == len(hextets): hextets += [''] hextets[best_doublecolon_start:best_doublecolon_end] = [''] # For zeros at the beginning of the address. if best_doublecolon_start == 0: hextets = [''] + hextets return hextets @classmethod def _string_from_ip_int(cls, ip_int=None): """Turns a 128-bit integer into hexadecimal notation. Args: ip_int: An integer, the IP address. Returns: A string, the hexadecimal representation of the address. Raises: ValueError: The address is bigger than 128 bits of all ones. """ if ip_int is None: ip_int = int(cls._ip) if ip_int > cls._ALL_ONES: raise ValueError('IPv6 address is too large') hex_str = '%032x' % ip_int hextets = ['%x' % int(hex_str[x:x + 4], 16) for x in range(0, 32, 4)] hextets = cls._compress_hextets(hextets) return ':'.join(hextets) def _explode_shorthand_ip_string(self): """Expand a shortened IPv6 address. Args: ip_str: A string, the IPv6 address. Returns: A string, the expanded IPv6 address. """ if isinstance(self, IPv6Network): ip_str = _compat_str(self.network_address) elif isinstance(self, IPv6Interface): ip_str = _compat_str(self.ip) else: ip_str = _compat_str(self) ip_int = self._ip_int_from_string(ip_str) hex_str = '%032x' % ip_int parts = [hex_str[x:x + 4] for x in range(0, 32, 4)] if isinstance(self, (_BaseNetwork, IPv6Interface)): return '%s/%d' % (':'.join(parts), self._prefixlen) return ':'.join(parts) def _reverse_pointer(self): """Return the reverse DNS pointer name for the IPv6 address. This implements the method described in RFC3596 2.5. """ reverse_chars = self.exploded[::-1].replace(':', '') return '.'.join(reverse_chars) + '.ip6.arpa' @property def max_prefixlen(self): return self._max_prefixlen @property def version(self): return self._version class IPv6Address(_BaseV6, _BaseAddress): """Represent and manipulate single IPv6 Addresses.""" __slots__ = ('_ip', '__weakref__') def __init__(self, address): """Instantiate a new IPv6 address object. Args: address: A string or integer representing the IP Additionally, an integer can be passed, so IPv6Address('2001:db8::') == IPv6Address(42540766411282592856903984951653826560) or, more generally IPv6Address(int(IPv6Address('2001:db8::'))) == IPv6Address('2001:db8::') Raises: AddressValueError: If address isn't a valid IPv6 address. """ # Efficient constructor from integer. if isinstance(address, _compat_int_types): self._check_int_address(address) self._ip = address return # Constructing from a packed address if isinstance(address, bytes): self._check_packed_address(address, 16) bvs = _compat_bytes_to_byte_vals(address) self._ip = _compat_int_from_byte_vals(bvs, 'big') return # Assume input argument to be string or any object representation # which converts into a formatted IP string. 
addr_str = _compat_str(address) if '/' in addr_str: raise AddressValueError("Unexpected '/' in %r" % address) self._ip = self._ip_int_from_string(addr_str) @property def packed(self): """The binary representation of this address.""" return v6_int_to_packed(self._ip) @property def is_multicast(self): """Test if the address is reserved for multicast use. Returns: A boolean, True if the address is a multicast address. See RFC 2373 2.7 for details. """ return self in self._constants._multicast_network @property def is_reserved(self): """Test if the address is otherwise IETF reserved. Returns: A boolean, True if the address is within one of the reserved IPv6 Network ranges. """ return any(self in x for x in self._constants._reserved_networks) @property def is_link_local(self): """Test if the address is reserved for link-local. Returns: A boolean, True if the address is reserved per RFC 4291. """ return self in self._constants._linklocal_network @property def is_site_local(self): """Test if the address is reserved for site-local. Note that the site-local address space has been deprecated by RFC 3879. Use is_private to test if this address is in the space of unique local addresses as defined by RFC 4193. Returns: A boolean, True if the address is reserved per RFC 3513 2.5.6. """ return self in self._constants._sitelocal_network @property def is_private(self): """Test if this address is allocated for private networks. Returns: A boolean, True if the address is reserved per iana-ipv6-special-registry. """ return any(self in net for net in self._constants._private_networks) @property def is_global(self): """Test if this address is allocated for public networks. Returns: A boolean, true if the address is not reserved per iana-ipv6-special-registry. """ return not self.is_private @property def is_unspecified(self): """Test if the address is unspecified. Returns: A boolean, True if this is the unspecified address as defined in RFC 2373 2.5.2. """ return self._ip == 0 @property def is_loopback(self): """Test if the address is a loopback address. Returns: A boolean, True if the address is a loopback address as defined in RFC 2373 2.5.3. """ return self._ip == 1 @property def ipv4_mapped(self): """Return the IPv4 mapped address. Returns: If the IPv6 address is a v4 mapped address, return the IPv4 mapped address. Return None otherwise. """ if (self._ip >> 32) != 0xFFFF: return None return IPv4Address(self._ip & 0xFFFFFFFF) @property def teredo(self): """Tuple of embedded teredo IPs. Returns: Tuple of the (server, client) IPs or None if the address doesn't appear to be a teredo address (doesn't start with 2001::/32) """ if (self._ip >> 96) != 0x20010000: return None return (IPv4Address((self._ip >> 64) & 0xFFFFFFFF), IPv4Address(~self._ip & 0xFFFFFFFF)) @property def sixtofour(self): """Return the IPv4 6to4 embedded address. Returns: The IPv4 6to4-embedded address if present or None if the address doesn't appear to contain a 6to4 embedded address. 
""" if (self._ip >> 112) != 0x2002: return None return IPv4Address((self._ip >> 80) & 0xFFFFFFFF) class IPv6Interface(IPv6Address): def __init__(self, address): if isinstance(address, (bytes, _compat_int_types)): IPv6Address.__init__(self, address) self.network = IPv6Network(self._ip) self._prefixlen = self._max_prefixlen return if isinstance(address, tuple): IPv6Address.__init__(self, address[0]) if len(address) > 1: self._prefixlen = int(address[1]) else: self._prefixlen = self._max_prefixlen self.network = IPv6Network(address, strict=False) self.netmask = self.network.netmask self.hostmask = self.network.hostmask return addr = _split_optional_netmask(address) IPv6Address.__init__(self, addr[0]) self.network = IPv6Network(address, strict=False) self.netmask = self.network.netmask self._prefixlen = self.network._prefixlen self.hostmask = self.network.hostmask def __str__(self): return '%s/%d' % (self._string_from_ip_int(self._ip), self.network.prefixlen) def __eq__(self, other): address_equal = IPv6Address.__eq__(self, other) if not address_equal or address_equal is NotImplemented: return address_equal try: return self.network == other.network except AttributeError: # An interface with an associated network is NOT the # same as an unassociated address. That's why the hash # takes the extra info into account. return False def __lt__(self, other): address_less = IPv6Address.__lt__(self, other) if address_less is NotImplemented: return NotImplemented try: return (self.network < other.network or self.network == other.network and address_less) except AttributeError: # We *do* allow addresses and interfaces to be sorted. The # unassociated address is considered less than all interfaces. return False def __hash__(self): return self._ip ^ self._prefixlen ^ int(self.network.network_address) __reduce__ = _IPAddressBase.__reduce__ @property def ip(self): return IPv6Address(self._ip) @property def with_prefixlen(self): return '%s/%s' % (self._string_from_ip_int(self._ip), self._prefixlen) @property def with_netmask(self): return '%s/%s' % (self._string_from_ip_int(self._ip), self.netmask) @property def with_hostmask(self): return '%s/%s' % (self._string_from_ip_int(self._ip), self.hostmask) @property def is_unspecified(self): return self._ip == 0 and self.network.is_unspecified @property def is_loopback(self): return self._ip == 1 and self.network.is_loopback class IPv6Network(_BaseV6, _BaseNetwork): """This class represents and manipulates 128-bit IPv6 networks. Attributes: [examples for IPv6('2001:db8::1000/124')] .network_address: IPv6Address('2001:db8::1000') .hostmask: IPv6Address('::f') .broadcast_address: IPv6Address('2001:db8::100f') .netmask: IPv6Address('ffff:ffff:ffff:ffff:ffff:ffff:ffff:fff0') .prefixlen: 124 """ # Class to use when creating address objects _address_class = IPv6Address def __init__(self, address, strict=True): """Instantiate a new IPv6 Network object. Args: address: A string or integer representing the IPv6 network or the IP and prefix/netmask. '2001:db8::/128' '2001:db8:0000:0000:0000:0000:0000:0000/128' '2001:db8::' are all functionally the same in IPv6. That is to say, failing to provide a subnetmask will create an object with a mask of /128. Additionally, an integer can be passed, so IPv6Network('2001:db8::') == IPv6Network(42540766411282592856903984951653826560) or, more generally IPv6Network(int(IPv6Network('2001:db8::'))) == IPv6Network('2001:db8::') strict: A boolean. 
If true, ensure that we have been passed A true network address, eg, 2001:db8::1000/124 and not an IP address on a network, eg, 2001:db8::1/124. Raises: AddressValueError: If address isn't a valid IPv6 address. NetmaskValueError: If the netmask isn't valid for an IPv6 address. ValueError: If strict was True and a network address was not supplied. """ _BaseNetwork.__init__(self, address) # Efficient constructor from integer or packed address if isinstance(address, (bytes, _compat_int_types)): self.network_address = IPv6Address(address) self.netmask, self._prefixlen = self._make_netmask( self._max_prefixlen) return if isinstance(address, tuple): if len(address) > 1: arg = address[1] else: arg = self._max_prefixlen self.netmask, self._prefixlen = self._make_netmask(arg) self.network_address = IPv6Address(address[0]) packed = int(self.network_address) if packed & int(self.netmask) != packed: if strict: raise ValueError('%s has host bits set' % self) else: self.network_address = IPv6Address(packed & int(self.netmask)) return # Assume input argument to be string or any object representation # which converts into a formatted IP prefix string. addr = _split_optional_netmask(address) self.network_address = IPv6Address(self._ip_int_from_string(addr[0])) if len(addr) == 2: arg = addr[1] else: arg = self._max_prefixlen self.netmask, self._prefixlen = self._make_netmask(arg) if strict: if (IPv6Address(int(self.network_address) & int(self.netmask)) != self.network_address): raise ValueError('%s has host bits set' % self) self.network_address = IPv6Address(int(self.network_address) & int(self.netmask)) if self._prefixlen == (self._max_prefixlen - 1): self.hosts = self.__iter__ def hosts(self): """Generate Iterator over usable hosts in a network. This is like __iter__ except it doesn't return the Subnet-Router anycast address. """ network = int(self.network_address) broadcast = int(self.broadcast_address) for x in _compat_range(network + 1, broadcast + 1): yield self._address_class(x) @property def is_site_local(self): """Test if the address is reserved for site-local. Note that the site-local address space has been deprecated by RFC 3879. Use is_private to test if this address is in the space of unique local addresses as defined by RFC 4193. Returns: A boolean, True if the address is reserved per RFC 3513 2.5.6. 
""" return (self.network_address.is_site_local and self.broadcast_address.is_site_local) class _IPv6Constants(object): _linklocal_network = IPv6Network('fe80::/10') _multicast_network = IPv6Network('ff00::/8') _private_networks = [ IPv6Network('::1/128'), IPv6Network('::/128'), IPv6Network('::ffff:0:0/96'), IPv6Network('100::/64'), IPv6Network('2001::/23'), IPv6Network('2001:2::/48'), IPv6Network('2001:db8::/32'), IPv6Network('2001:10::/28'), IPv6Network('fc00::/7'), IPv6Network('fe80::/10'), ] _reserved_networks = [ IPv6Network('::/8'), IPv6Network('100::/8'), IPv6Network('200::/7'), IPv6Network('400::/6'), IPv6Network('800::/5'), IPv6Network('1000::/4'), IPv6Network('4000::/3'), IPv6Network('6000::/3'), IPv6Network('8000::/3'), IPv6Network('A000::/3'), IPv6Network('C000::/3'), IPv6Network('E000::/4'), IPv6Network('F000::/5'), IPv6Network('F800::/6'), IPv6Network('FE00::/9'), ] _sitelocal_network = IPv6Network('fec0::/10') IPv6Address._constants = _IPv6Constants ��������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000034�00000000000�011452� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������28 mtime=1604735100.0033276 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/��������������������������������0000755�0001750�0001750�00000000000�00000000000�026255� 5����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/__init__.py
# -*- coding: utf-8 -*-

"""
lockfile.py - Platform-independent advisory file locks.

Requires Python 2.5 unless you apply 2.4.diff
Locking is done on a per-thread basis instead of a per-process basis.

Usage:

>>> lock = LockFile('somefile')
>>> try:
...     lock.acquire()
... except AlreadyLocked:
...     print 'somefile', 'is locked already.'
... except LockFailed:
...     print 'somefile', 'can\\'t be locked.'
... else:
...     print 'got lock'
got lock
>>> print lock.is_locked()
True
>>> lock.release()

>>> lock = LockFile('somefile')
>>> print lock.is_locked()
False
>>> with lock:
...    print lock.is_locked()
True
>>> print lock.is_locked()
False

>>> lock = LockFile('somefile')
>>> # It is okay to lock twice from the same thread...
>>> with lock:
...     lock.acquire()
...
>>> # Though no counter is kept, so you can't unlock multiple times...
>>> print lock.is_locked()
False

Exceptions:

    Error - base class for other exceptions
        LockError - base class for all locking exceptions
            AlreadyLocked - Another thread or process already holds the lock
            LockFailed - Lock failed for some other reason
        UnlockError - base class for all unlocking exceptions
            AlreadyUnlocked - File was not locked.
            NotMyLock - File was locked but not by the current thread/process
"""

from __future__ import absolute_import

import functools
import os
import socket
import threading
import warnings

# Work with PEP8 and non-PEP8 versions of threading module.
if not hasattr(threading, "current_thread"):
    threading.current_thread = threading.currentThread
if not hasattr(threading.Thread, "get_name"):
    threading.Thread.get_name = threading.Thread.getName

__all__ = ['Error', 'LockError', 'LockTimeout', 'AlreadyLocked',
           'LockFailed', 'UnlockError', 'NotLocked', 'NotMyLock',
           'LinkFileLock', 'MkdirFileLock', 'SQLiteFileLock',
           'LockBase', 'locked']


class Error(Exception):
    """
    Base class for other exceptions.

    >>> try:
    ...   raise Error
    ... except Exception:
    ...   pass
    """
    pass


class LockError(Error):
    """
    Base class for error arising from attempts to acquire the lock.

    >>> try:
    ...   raise LockError
    ... except Error:
    ...   pass
    """
    pass


class LockTimeout(LockError):
    """Raised when lock creation fails within a user-defined period of time.

    >>> try:
    ...   raise LockTimeout
    ... except LockError:
    ...   pass
    """
    pass


class AlreadyLocked(LockError):
    """Some other thread/process is locking the file.

    >>> try:
    ...   raise AlreadyLocked
    ... except LockError:
    ...   pass
    """
    pass


class LockFailed(LockError):
    """Lock file creation failed for some other reason.

    >>> try:
    ...   raise LockFailed
    ... except LockError:
    ...   pass
    """
    pass


class UnlockError(Error):
    """
    Base class for errors arising from attempts to release the lock.

    >>> try:
    ...   raise UnlockError
    ... except Error:
    ...   pass
    """
    pass


class NotLocked(UnlockError):
    """Raised when an attempt is made to unlock an unlocked file.

    >>> try:
    ...   raise NotLocked
    ... except UnlockError:
    ...   pass
    """
    pass


class NotMyLock(UnlockError):
    """Raised when an attempt is made to unlock a file someone else locked.

    >>> try:
    ...   raise NotMyLock
    ... except UnlockError:
    ...   pass
    """
    pass


class _SharedBase(object):
    def __init__(self, path):
        self.path = path

    def acquire(self, timeout=None):
        """
        Acquire the lock.

        * If timeout is omitted (or None), wait forever trying to lock the
          file.

        * If timeout > 0, try to acquire the lock for that many seconds.  If
          the lock period expires and the file is still locked, raise
          LockTimeout.

        * If timeout <= 0, raise AlreadyLocked immediately if the file is
          already locked.
        """
        raise NotImplementedError("implement in subclass")

    def release(self):
        """
        Release the lock.

        If the file is not locked, raise NotLocked.
        """
        raise NotImplementedError("implement in subclass")

    def __enter__(self):
        """
        Context manager support.
        """
        self.acquire()
        return self

    def __exit__(self, *_exc):
        """
        Context manager support.
        """
        self.release()

    def __repr__(self):
        return "<%s: %r>" % (self.__class__.__name__, self.path)


class LockBase(_SharedBase):
    """Base class for platform-specific lock classes."""
    def __init__(self, path, threaded=True, timeout=None):
        """
        >>> lock = LockBase('somefile')
        >>> lock = LockBase('somefile', threaded=False)
        """
        super(LockBase, self).__init__(path)
        self.lock_file = os.path.abspath(path) + ".lock"
        self.hostname = socket.gethostname()
        self.pid = os.getpid()
        if threaded:
            t = threading.current_thread()
            # Thread objects in Python 2.4 and earlier do not have ident
            # attrs.  Work around that.
            ident = getattr(t, "ident", hash(t))
            self.tname = "-%x" % (ident & 0xffffffff)
        else:
            self.tname = ""
        dirname = os.path.dirname(self.lock_file)

        # unique name is mostly about the current process, but must
        # also contain the path -- otherwise, two adjacent locked
        # files conflict (one file gets locked, creating lock-file and
        # unique file, the other one gets locked, creating lock-file
        # and overwriting the already existing lock-file, then one
        # gets unlocked, deleting both lock-file and unique file,
        # finally the last lock errors out upon releasing.)
        self.unique_name = os.path.join(dirname,
                                        "%s%s.%s%s" % (self.hostname,
                                                       self.tname,
                                                       self.pid,
                                                       hash(self.path)))
        self.timeout = timeout

    def is_locked(self):
        """
        Tell whether or not the file is locked.
        """
        raise NotImplementedError("implement in subclass")

    def i_am_locking(self):
        """
        Return True if this object is locking the file.
        """
        raise NotImplementedError("implement in subclass")

    def break_lock(self):
        """
        Remove a lock.  Useful if a locking thread failed to unlock.
        """
        raise NotImplementedError("implement in subclass")

    def __repr__(self):
        return "<%s: %r -- %r>" % (self.__class__.__name__, self.unique_name,
                                   self.path)


def _fl_helper(cls, mod, *args, **kwds):
    warnings.warn("Import from %s module instead of lockfile package" % mod,
                  DeprecationWarning, stacklevel=2)
    # This is a bit funky, but it's only for a while.  The way the unit tests
    # are constructed this function winds up as an unbound method, so it
    # actually takes three args, not two.  We want to toss out self.
    if not isinstance(args[0], str):
        # We are testing, avoid the first arg
        args = args[1:]
    if len(args) == 1 and not kwds:
        kwds["threaded"] = True
    return cls(*args, **kwds)


def LinkFileLock(*args, **kwds):
    """Factory function provided for backwards compatibility.

    Do not use in new code.  Instead, import LinkLockFile from the
    lockfile.linklockfile module.
    """
    from . import linklockfile
    return _fl_helper(linklockfile.LinkLockFile, "lockfile.linklockfile",
                      *args, **kwds)


def MkdirFileLock(*args, **kwds):
    """Factory function provided for backwards compatibility.

    Do not use in new code.  Instead, import MkdirLockFile from the
    lockfile.mkdirlockfile module.
    """
    from . import mkdirlockfile
    return _fl_helper(mkdirlockfile.MkdirLockFile, "lockfile.mkdirlockfile",
                      *args, **kwds)


def SQLiteFileLock(*args, **kwds):
    """Factory function provided for backwards compatibility.

    Do not use in new code.  Instead, import SQLiteLockFile from the
    lockfile.sqlitelockfile module.
    """
    from . import sqlitelockfile
    return _fl_helper(sqlitelockfile.SQLiteLockFile,
                      "lockfile.sqlitelockfile", *args, **kwds)


def locked(path, timeout=None):
    """Decorator which enables locks for decorated function.

    Arguments:
     - path: path for lockfile.
     - timeout (optional): Timeout for acquiring lock.

     Usage:
         @locked('/var/run/myname', timeout=0)
         def myname(...):
             ...
    """
    def decor(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            lock = FileLock(path, timeout=timeout)
            lock.acquire()
            try:
                return func(*args, **kwargs)
            finally:
                lock.release()
        return wrapper
    return decor


if hasattr(os, "link"):
    from . import linklockfile as _llf
    LockFile = _llf.LinkLockFile
else:
    from . import mkdirlockfile as _mlf
    LockFile = _mlf.MkdirLockFile

FileLock = LockFile
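# --- Illustrative usage (editor's note): a minimal sketch of the public API
# defined above, following the acquire() timeout semantics documented in
# _SharedBase. It assumes the package is importable as `lockfile` (in this
# vendored tree the actual import path is `pip._vendor.lockfile`); the lock
# path is hypothetical.
from lockfile import FileLock, LockTimeout

lock = FileLock('/tmp/somefile')
try:
    lock.acquire(timeout=3)  # timeout > 0: wait up to 3 seconds
except LockTimeout:
    print('still locked by someone else after 3 seconds')
else:
    try:
        pass  # ... work with the protected resource ...
    finally:
        lock.release()

# Equivalent context-manager form; __enter__ calls acquire() with no
# timeout, i.e. it waits indefinitely.
with FileLock('/tmp/somefile'):
    pass  # lock held here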
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/linklockfile.py
from __future__ import absolute_import

import time
import os

from . import (LockBase, LockFailed, NotLocked, NotMyLock, LockTimeout,
               AlreadyLocked)


class LinkLockFile(LockBase):
    """Lock access to a file using atomic property of link(2).

    >>> lock = LinkLockFile('somefile')
    >>> lock = LinkLockFile('somefile', threaded=False)
    """

    def acquire(self, timeout=None):
        try:
            open(self.unique_name, "wb").close()
        except IOError:
            raise LockFailed("failed to create %s" % self.unique_name)

        timeout = timeout if timeout is not None else self.timeout
        end_time = time.time()
        if timeout is not None and timeout > 0:
            end_time += timeout

        while True:
            # Try and create a hard link to it.
            try:
                os.link(self.unique_name, self.lock_file)
            except OSError:
                # Link creation failed.  Maybe we've double-locked?
                nlinks = os.stat(self.unique_name).st_nlink
                if nlinks == 2:
                    # The original link plus the one I created == 2.  We're
                    # good to go.
                    return
                else:
                    # Otherwise the lock creation failed.
                    if timeout is not None and time.time() > end_time:
                        os.unlink(self.unique_name)
                        if timeout > 0:
                            raise LockTimeout("Timeout waiting to acquire"
                                              " lock for %s" % self.path)
                        else:
                            raise AlreadyLocked("%s is already locked" %
                                                self.path)
                    time.sleep(timeout is not None and timeout / 10 or 0.1)
            else:
                # Link creation succeeded.  We're good to go.
                return

    def release(self):
        if not self.is_locked():
            raise NotLocked("%s is not locked" % self.path)
        elif not os.path.exists(self.unique_name):
            raise NotMyLock("%s is locked, but not by me" % self.path)
        os.unlink(self.unique_name)
        os.unlink(self.lock_file)

    def is_locked(self):
        return os.path.exists(self.lock_file)

    def i_am_locking(self):
        return (self.is_locked() and
                os.path.exists(self.unique_name) and
                os.stat(self.unique_name).st_nlink == 2)

    def break_lock(self):
        if os.path.exists(self.lock_file):
            os.unlink(self.lock_file)
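# --- Illustrative sketch (editor's note): the essence of the hard-link trick
# used by LinkLockFile.acquire() above. link(2) is atomic, and st_nlink == 2
# proves our unique file is the one the lock file points at. POSIX-style;
# file names are hypothetical.
import os
import tempfile

fd, unique = tempfile.mkstemp()
os.close(fd)
lock = unique + '.lock'
try:
    os.link(unique, lock)  # atomic: fails if `lock` already exists
    assert os.stat(unique).st_nlink == 2  # original + lock link: we own it
finally:
    if os.path.exists(lock):
        os.unlink(lock)
    os.unlink(unique)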
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/mkdirlockfile.py
from __future__ import absolute_import, division

import time
import os
import sys
import errno

from . import (LockBase, LockFailed, NotLocked, NotMyLock, LockTimeout,
               AlreadyLocked)


class MkdirLockFile(LockBase):
    """Lock file by creating a directory."""
    def __init__(self, path, threaded=True, timeout=None):
        """
        >>> lock = MkdirLockFile('somefile')
        >>> lock = MkdirLockFile('somefile', threaded=False)
        """
        LockBase.__init__(self, path, threaded, timeout)
        # Lock file itself is a directory.  Place the unique file name into
        # it.
        self.unique_name = os.path.join(self.lock_file,
                                        "%s.%s%s" % (self.hostname,
                                                     self.tname,
                                                     self.pid))

    def acquire(self, timeout=None):
        timeout = timeout if timeout is not None else self.timeout
        end_time = time.time()
        if timeout is not None and timeout > 0:
            end_time += timeout

        if timeout is None:
            wait = 0.1
        else:
            wait = max(0, timeout / 10)

        while True:
            try:
                os.mkdir(self.lock_file)
            except OSError:
                err = sys.exc_info()[1]
                if err.errno == errno.EEXIST:
                    # Already locked.
                    if os.path.exists(self.unique_name):
                        # Already locked by me.
                        return
                    if timeout is not None and time.time() > end_time:
                        if timeout > 0:
                            raise LockTimeout("Timeout waiting to acquire"
                                              " lock for %s" % self.path)
                        else:
                            # Someone else has the lock.
                            raise AlreadyLocked("%s is already locked" %
                                                self.path)
                    time.sleep(wait)
                else:
                    # Couldn't create the lock for some other reason
                    raise LockFailed("failed to create %s" % self.lock_file)
            else:
                open(self.unique_name, "wb").close()
                return

    def release(self):
        if not self.is_locked():
            raise NotLocked("%s is not locked" % self.path)
        elif not os.path.exists(self.unique_name):
            raise NotMyLock("%s is locked, but not by me" % self.path)
        os.unlink(self.unique_name)
        os.rmdir(self.lock_file)

    def is_locked(self):
        return os.path.exists(self.lock_file)

    def i_am_locking(self):
        return (self.is_locked() and
                os.path.exists(self.unique_name))

    def break_lock(self):
        if os.path.exists(self.lock_file):
            for name in os.listdir(self.lock_file):
                os.unlink(os.path.join(self.lock_file, name))
            os.rmdir(self.lock_file)

# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/pidlockfile.py
# -*- coding: utf-8 -*-

# pidlockfile.py
#
# Copyright © 2008–2009 Ben Finney <ben+python@benfinney.id.au>
#
# This is free software: you may copy, modify, and/or distribute this work
# under the terms of the Python Software Foundation License, version 2 or
# later as published by the Python Software Foundation.
# No warranty expressed or implied. See the file LICENSE.PSF-2 for details.

""" Lockfile behaviour implemented via Unix PID files.
    """

from __future__ import absolute_import

import errno
import os
import time

from . import (LockBase, AlreadyLocked, LockFailed, NotLocked, NotMyLock,
               LockTimeout)


class PIDLockFile(LockBase):
    """ Lockfile implemented as a Unix PID file.
The lock file is a normal file named by the attribute `path`. A lock's PID file contains a single line of text, containing the process ID (PID) of the process that acquired the lock. >>> lock = PIDLockFile('somefile') >>> lock = PIDLockFile('somefile') """ def __init__(self, path, threaded=False, timeout=None): # pid lockfiles don't support threaded operation, so always force # False as the threaded arg. LockBase.__init__(self, path, False, timeout) self.unique_name = self.path def read_pid(self): """ Get the PID from the lock file. """ return read_pid_from_pidfile(self.path) def is_locked(self): """ Test if the lock is currently held. The lock is held if the PID file for this lock exists. """ return os.path.exists(self.path) def i_am_locking(self): """ Test if the lock is held by the current process. Returns ``True`` if the current process ID matches the number stored in the PID file. """ return self.is_locked() and os.getpid() == self.read_pid() def acquire(self, timeout=None): """ Acquire the lock. Creates the PID file for this lock, or raises an error if the lock could not be acquired. """ timeout = timeout if timeout is not None else self.timeout end_time = time.time() if timeout is not None and timeout > 0: end_time += timeout while True: try: write_pid_to_pidfile(self.path) except OSError as exc: if exc.errno == errno.EEXIST: # The lock creation failed. Maybe sleep a bit. if time.time() > end_time: if timeout is not None and timeout > 0: raise LockTimeout("Timeout waiting to acquire" " lock for %s" % self.path) else: raise AlreadyLocked("%s is already locked" % self.path) time.sleep(timeout is not None and timeout / 10 or 0.1) else: raise LockFailed("failed to create %s" % self.path) else: return def release(self): """ Release the lock. Removes the PID file to release the lock, or raises an error if the current process does not hold the lock. """ if not self.is_locked(): raise NotLocked("%s is not locked" % self.path) if not self.i_am_locking(): raise NotMyLock("%s is locked, but not by me" % self.path) remove_existing_pidfile(self.path) def break_lock(self): """ Break an existing lock. Removes the PID file if it already exists, otherwise does nothing. """ remove_existing_pidfile(self.path) def read_pid_from_pidfile(pidfile_path): """ Read the PID recorded in the named PID file. Read and return the numeric PID recorded as text in the named PID file. If the PID file cannot be read, or if the content is not a valid PID, return ``None``. """ pid = None try: pidfile = open(pidfile_path, 'r') except IOError: pass else: # According to the FHS 2.3 section on PID files in /var/run: # # The file must consist of the process identifier in # ASCII-encoded decimal, followed by a newline character. # # Programs that read PID files should be somewhat flexible # in what they accept; i.e., they should ignore extra # whitespace, leading zeroes, absence of the trailing # newline, or additional lines in the PID file. line = pidfile.readline().strip() try: pid = int(line) except ValueError: pass pidfile.close() return pid def write_pid_to_pidfile(pidfile_path): """ Write the PID in the named PID file. Get the numeric process ID (“PID”) of the current process and write it to the named file as a line of text. 
""" open_flags = (os.O_CREAT | os.O_EXCL | os.O_WRONLY) open_mode = 0o644 pidfile_fd = os.open(pidfile_path, open_flags, open_mode) pidfile = os.fdopen(pidfile_fd, 'w') # According to the FHS 2.3 section on PID files in /var/run: # # The file must consist of the process identifier in # ASCII-encoded decimal, followed by a newline character. For # example, if crond was process number 25, /var/run/crond.pid # would contain three characters: two, five, and newline. pid = os.getpid() pidfile.write("%s\n" % pid) pidfile.close() def remove_existing_pidfile(pidfile_path): """ Remove the named PID file if it exists. Removing a PID file that doesn't already exist puts us in the desired state, so we ignore the condition if the file does not exist. """ try: os.remove(pidfile_path) except OSError as exc: if exc.errno == errno.ENOENT: pass else: raise ������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/sqlitelockfile.py���������������0000644�0001750�0001750�00000012602�00000000000�031642� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import, division import time import os try: unicode except NameError: unicode = str from . import LockBase, NotLocked, NotMyLock, LockTimeout, AlreadyLocked class SQLiteLockFile(LockBase): "Demonstrate SQL-based locking." 
testdb = None def __init__(self, path, threaded=True, timeout=None): """ >>> lock = SQLiteLockFile('somefile') >>> lock = SQLiteLockFile('somefile', threaded=False) """ LockBase.__init__(self, path, threaded, timeout) self.lock_file = unicode(self.lock_file) self.unique_name = unicode(self.unique_name) if SQLiteLockFile.testdb is None: import tempfile _fd, testdb = tempfile.mkstemp() os.close(_fd) os.unlink(testdb) del _fd, tempfile SQLiteLockFile.testdb = testdb import sqlite3 self.connection = sqlite3.connect(SQLiteLockFile.testdb) c = self.connection.cursor() try: c.execute("create table locks" "(" " lock_file varchar(32)," " unique_name varchar(32)" ")") except sqlite3.OperationalError: pass else: self.connection.commit() import atexit atexit.register(os.unlink, SQLiteLockFile.testdb) def acquire(self, timeout=None): timeout = timeout if timeout is not None else self.timeout end_time = time.time() if timeout is not None and timeout > 0: end_time += timeout if timeout is None: wait = 0.1 elif timeout <= 0: wait = 0 else: wait = timeout / 10 cursor = self.connection.cursor() while True: if not self.is_locked(): # Not locked. Try to lock it. cursor.execute("insert into locks" " (lock_file, unique_name)" " values" " (?, ?)", (self.lock_file, self.unique_name)) self.connection.commit() # Check to see if we are the only lock holder. cursor.execute("select * from locks" " where unique_name = ?", (self.unique_name,)) rows = cursor.fetchall() if len(rows) > 1: # Nope. Someone else got there. Remove our lock. cursor.execute("delete from locks" " where unique_name = ?", (self.unique_name,)) self.connection.commit() else: # Yup. We're done, so go home. return else: # Check to see if we are the only lock holder. cursor.execute("select * from locks" " where unique_name = ?", (self.unique_name,)) rows = cursor.fetchall() if len(rows) == 1: # We're the locker, so go home. return # Maybe we should wait a bit longer. if timeout is not None and time.time() > end_time: if timeout > 0: # No more waiting. raise LockTimeout("Timeout waiting to acquire" " lock for %s" % self.path) else: # Someone else has the lock and we are impatient.. raise AlreadyLocked("%s is already locked" % self.path) # Well, okay. We'll give it a bit longer. time.sleep(wait) def release(self): if not self.is_locked(): raise NotLocked("%s is not locked" % self.path) if not self.i_am_locking(): raise NotMyLock("%s is locked, but not by me (by %s)" % (self.unique_name, self._who_is_locking())) cursor = self.connection.cursor() cursor.execute("delete from locks" " where unique_name = ?", (self.unique_name,)) self.connection.commit() def _who_is_locking(self): cursor = self.connection.cursor() cursor.execute("select unique_name from locks" " where lock_file = ?", (self.lock_file,)) return cursor.fetchone()[0] def is_locked(self): cursor = self.connection.cursor() cursor.execute("select * from locks" " where lock_file = ?", (self.lock_file,)) rows = cursor.fetchall() return not not rows def i_am_locking(self): cursor = self.connection.cursor() cursor.execute("select * from locks" " where lock_file = ?" 
" and unique_name = ?", (self.lock_file, self.unique_name)) return not not cursor.fetchall() def break_lock(self): cursor = self.connection.cursor() cursor.execute("delete from locks" " where lock_file = ?", (self.lock_file,)) self.connection.commit() ������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/lockfile/symlinklockfile.py��������������0000644�0001750�0001750�00000005070�00000000000�032030� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import import os import time from . import (LockBase, NotLocked, NotMyLock, LockTimeout, AlreadyLocked) class SymlinkLockFile(LockBase): """Lock access to a file using symlink(2).""" def __init__(self, path, threaded=True, timeout=None): # super(SymlinkLockFile).__init(...) LockBase.__init__(self, path, threaded, timeout) # split it back! self.unique_name = os.path.split(self.unique_name)[1] def acquire(self, timeout=None): # Hopefully unnecessary for symlink. # try: # open(self.unique_name, "wb").close() # except IOError: # raise LockFailed("failed to create %s" % self.unique_name) timeout = timeout if timeout is not None else self.timeout end_time = time.time() if timeout is not None and timeout > 0: end_time += timeout while True: # Try and create a symbolic link to it. try: os.symlink(self.unique_name, self.lock_file) except OSError: # Link creation failed. Maybe we've double-locked? if self.i_am_locking(): # Linked to out unique name. Proceed. return else: # Otherwise the lock creation failed. if timeout is not None and time.time() > end_time: if timeout > 0: raise LockTimeout("Timeout waiting to acquire" " lock for %s" % self.path) else: raise AlreadyLocked("%s is already locked" % self.path) time.sleep(timeout / 10 if timeout is not None else 0.1) else: # Link creation succeeded. We're good to go. 
                return

    def release(self):
        if not self.is_locked():
            raise NotLocked("%s is not locked" % self.path)
        elif not self.i_am_locking():
            raise NotMyLock("%s is locked, but not by me" % self.path)
        os.unlink(self.lock_file)

    def is_locked(self):
        return os.path.islink(self.lock_file)

    def i_am_locking(self):
        return (os.path.islink(self.lock_file)
                and os.readlink(self.lock_file) == self.unique_name)

    def break_lock(self):
        if os.path.islink(self.lock_file):  # exists && link
            os.unlink(self.lock_file)
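

# --- Editor's usage sketch (not part of the vendored sources) ---
# Both lock classes above implement the same LockBase protocol: acquire()
# waits up to `timeout` seconds (raising LockTimeout on expiry, or
# AlreadyLocked when timeout <= 0), and release() raises NotLocked/NotMyLock
# on misuse.  The import path below assumes the vendored layout used
# throughout this tree.

from pip._vendor.lockfile.symlinklockfile import SymlinkLockFile

lock = SymlinkLockFile("/tmp/shared.txt", timeout=10)
try:
    lock.acquire()                      # waits at most 10 seconds
    with open("/tmp/shared.txt", "a") as f:
        f.write("exclusive update\n")   # critical section
finally:
    if lock.i_am_locking():
        lock.release()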
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/msgpack/__init__.py

# coding: utf-8
from pip._vendor.msgpack._version import version
from pip._vendor.msgpack.exceptions import *

from collections import namedtuple


class ExtType(namedtuple('ExtType', 'code data')):
    """ExtType represents ext type in msgpack."""
    def __new__(cls, code, data):
        if not isinstance(code, int):
            raise TypeError("code must be int")
        if not isinstance(data, bytes):
            raise TypeError("data must be bytes")
        if not 0 <= code <= 127:
            raise ValueError("code must be 0~127")
        return super(ExtType, cls).__new__(cls, code, data)


import os
if os.environ.get('MSGPACK_PUREPYTHON'):
    from pip._vendor.msgpack.fallback import Packer, unpackb, Unpacker
else:
    try:
        from pip._vendor.msgpack._cmsgpack import Packer, unpackb, Unpacker
    except ImportError:
        from pip._vendor.msgpack.fallback import Packer, unpackb, Unpacker


def pack(o, stream, **kwargs):
    """
    Pack object `o` and write it to `stream`

    See :class:`Packer` for options.
    """
    packer = Packer(**kwargs)
    stream.write(packer.pack(o))


def packb(o, **kwargs):
    """
    Pack object `o` and return packed bytes

    See :class:`Packer` for options.
    """
    return Packer(**kwargs).pack(o)


def unpack(stream, **kwargs):
    """
    Unpack an object from `stream`.

    Raises `ExtraData` when `stream` contains extra bytes.
    See :class:`Unpacker` for options.
    """
    data = stream.read()
    return unpackb(data, **kwargs)


# alias for compatibility to simplejson/marshal/pickle.
load = unpack
loads = unpackb

dump = pack
dumps = packb


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/msgpack/_version.py

version = (0, 6, 1)
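

# --- Editor's usage sketch (not part of the vendored sources) ---
# Round-tripping a value through the module-level helpers defined above;
# packb/unpackb are the bytes-oriented counterparts of pack/unpack.

from pip._vendor import msgpack

payload = {"id": 7, "tags": ["a", "b"], "ratio": 0.5}
packed = msgpack.packb(payload, use_bin_type=True)  # serialize to bytes
restored = msgpack.unpackb(packed, raw=False)       # decode raw as str
assert restored == payload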
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/msgpack/exceptions.py

class UnpackException(Exception):
    """Base class for some exceptions raised while unpacking.

    NOTE: unpack may raise exception other than subclass of
    UnpackException.  If you want to catch all error, catch
    Exception instead.
    """


class BufferFull(UnpackException):
    pass


class OutOfData(UnpackException):
    pass


class FormatError(ValueError, UnpackException):
    """Invalid msgpack format"""


class StackError(ValueError, UnpackException):
    """Too nested"""


# Deprecated.  Use ValueError instead
UnpackValueError = ValueError


class ExtraData(UnpackValueError):
    """ExtraData is raised when there is trailing data.

    This exception is raised while only one-shot (not streaming)
    unpack.
    """

    def __init__(self, unpacked, extra):
        self.unpacked = unpacked
        self.extra = extra

    def __str__(self):
        return "unpack(b) received extra data."


# Deprecated.  Use Exception instead to catch all exception during packing.
PackException = Exception
PackValueError = ValueError
PackOverflowError = OverflowError


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/msgpack/fallback.py

"""Fallback pure Python implementation of msgpack"""

import sys
import struct
import warnings


if sys.version_info[0] == 2:
    PY2 = True
    int_types = (int, long)

    def dict_iteritems(d):
        return d.iteritems()
else:
    PY2 = False
    int_types = int
    unicode = str
    xrange = range

    def dict_iteritems(d):
        return d.items()


if sys.version_info < (3, 5):
    # Ugly hack...
    RecursionError = RuntimeError

    def _is_recursionerror(e):
        return len(e.args) == 1 and isinstance(e.args[0], str) and \
            e.args[0].startswith('maximum recursion depth exceeded')
else:
    def _is_recursionerror(e):
        return True


if hasattr(sys, 'pypy_version_info'):
    # cStringIO is slow on PyPy, StringIO is faster.  However: PyPy's own
    # StringBuilder is fastest.
from __pypy__ import newlist_hint try: from __pypy__.builders import BytesBuilder as StringBuilder except ImportError: from __pypy__.builders import StringBuilder USING_STRINGBUILDER = True class StringIO(object): def __init__(self, s=b''): if s: self.builder = StringBuilder(len(s)) self.builder.append(s) else: self.builder = StringBuilder() def write(self, s): if isinstance(s, memoryview): s = s.tobytes() elif isinstance(s, bytearray): s = bytes(s) self.builder.append(s) def getvalue(self): return self.builder.build() else: USING_STRINGBUILDER = False from io import BytesIO as StringIO newlist_hint = lambda size: [] from pip._vendor.msgpack.exceptions import ( BufferFull, OutOfData, ExtraData, FormatError, StackError, ) from pip._vendor.msgpack import ExtType EX_SKIP = 0 EX_CONSTRUCT = 1 EX_READ_ARRAY_HEADER = 2 EX_READ_MAP_HEADER = 3 TYPE_IMMEDIATE = 0 TYPE_ARRAY = 1 TYPE_MAP = 2 TYPE_RAW = 3 TYPE_BIN = 4 TYPE_EXT = 5 DEFAULT_RECURSE_LIMIT = 511 def _check_type_strict(obj, t, type=type, tuple=tuple): if type(t) is tuple: return type(obj) in t else: return type(obj) is t def _get_data_from_buffer(obj): try: view = memoryview(obj) except TypeError: # try to use legacy buffer protocol if 2.7, otherwise re-raise if PY2: view = memoryview(buffer(obj)) warnings.warn("using old buffer interface to unpack %s; " "this leads to unpacking errors if slicing is used and " "will be removed in a future version" % type(obj), RuntimeWarning, stacklevel=3) else: raise if view.itemsize != 1: raise ValueError("cannot unpack from multi-byte object") return view def unpack(stream, **kwargs): warnings.warn( "Direct calling implementation's unpack() is deprecated, Use msgpack.unpack() or unpackb() instead.", DeprecationWarning, stacklevel=2) data = stream.read() return unpackb(data, **kwargs) def unpackb(packed, **kwargs): """ Unpack an object from `packed`. Raises ``ExtraData`` when *packed* contains extra bytes. Raises ``ValueError`` when *packed* is incomplete. Raises ``FormatError`` when *packed* is not valid msgpack. Raises ``StackError`` when *packed* contains too nested. Other exceptions can be raised during unpacking. See :class:`Unpacker` for options. """ unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs) unpacker.feed(packed) try: ret = unpacker._unpack() except OutOfData: raise ValueError("Unpack failed: incomplete input") except RecursionError as e: if _is_recursionerror(e): raise StackError raise if unpacker._got_extradata(): raise ExtraData(ret, unpacker._get_extradata()) return ret if sys.version_info < (2, 7, 6): def _unpack_from(f, b, o=0): """Explicit typcast for legacy struct.unpack_from""" return struct.unpack_from(f, bytes(b), o) else: _unpack_from = struct.unpack_from class Unpacker(object): """Streaming unpacker. arguments: :param file_like: File-like object having `.read(n)` method. If specified, unpacker reads serialized data from it and :meth:`feed()` is not usable. :param int read_size: Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`) :param bool use_list: If true, unpack msgpack array to Python list. Otherwise, unpack to Python tuple. (default: True) :param bool raw: If true, unpack msgpack raw to Python bytes (default). Otherwise, unpack to Python str (or unicode on Python 2) by decoding with UTF-8 encoding (recommended). Currently, the default is true, but it will be changed to false in near future. So you must specify it explicitly for keeping backward compatibility. *encoding* option which is deprecated overrides this option. 
:param bool strict_map_key: If true, only str or bytes are accepted for map (dict) keys. It's False by default for backward-compatibility. But it will be True from msgpack 1.0. :param callable object_hook: When specified, it should be callable. Unpacker calls it with a dict argument after unpacking msgpack map. (See also simplejson) :param callable object_pairs_hook: When specified, it should be callable. Unpacker calls it with a list of key-value pairs after unpacking msgpack map. (See also simplejson) :param str encoding: Encoding used for decoding msgpack raw. If it is None (default), msgpack raw is deserialized to Python bytes. :param str unicode_errors: (deprecated) Used for decoding msgpack raw with *encoding*. (default: `'strict'`) :param int max_buffer_size: Limits size of data waiting unpacked. 0 means system's INT_MAX (default). Raises `BufferFull` exception when it is insufficient. You should set this parameter when unpacking data from untrusted source. :param int max_str_len: Deprecated, use *max_buffer_size* instead. Limits max length of str. (default: max_buffer_size or 1024*1024) :param int max_bin_len: Deprecated, use *max_buffer_size* instead. Limits max length of bin. (default: max_buffer_size or 1024*1024) :param int max_array_len: Limits max length of array. (default: max_buffer_size or 128*1024) :param int max_map_len: Limits max length of map. (default: max_buffer_size//2 or 32*1024) :param int max_ext_len: Deprecated, use *max_buffer_size* instead. Limits max size of ext type. (default: max_buffer_size or 1024*1024) Example of streaming deserialize from file-like object:: unpacker = Unpacker(file_like, raw=False, max_buffer_size=10*1024*1024) for o in unpacker: process(o) Example of streaming deserialize from socket:: unpacker = Unpacker(raw=False, max_buffer_size=10*1024*1024) while True: buf = sock.recv(1024**2) if not buf: break unpacker.feed(buf) for o in unpacker: process(o) Raises ``ExtraData`` when *packed* contains extra bytes. Raises ``OutOfData`` when *packed* is incomplete. Raises ``FormatError`` when *packed* is not valid msgpack. Raises ``StackError`` when *packed* contains too nested. Other exceptions can be raised during unpacking. """ def __init__(self, file_like=None, read_size=0, use_list=True, raw=True, strict_map_key=False, object_hook=None, object_pairs_hook=None, list_hook=None, encoding=None, unicode_errors=None, max_buffer_size=0, ext_hook=ExtType, max_str_len=-1, max_bin_len=-1, max_array_len=-1, max_map_len=-1, max_ext_len=-1): if encoding is not None: warnings.warn( "encoding is deprecated, Use raw=False instead.", DeprecationWarning, stacklevel=2) if unicode_errors is None: unicode_errors = 'strict' if file_like is None: self._feeding = True else: if not callable(file_like.read): raise TypeError("`file_like.read` must be callable") self.file_like = file_like self._feeding = False #: array of bytes fed. self._buffer = bytearray() #: Which position we currently reads self._buff_i = 0 # When Unpacker is used as an iterable, between the calls to next(), # the buffer is not "consumed" completely, for efficiency sake. # Instead, it is done sloppily. To make sure we raise BufferFull at # the correct moments, we have to keep track of how sloppy we were. # Furthermore, when the buffer is incomplete (that is: in the case # we raise an OutOfData) we need to rollback the buffer to the correct # state, which _buf_checkpoint records. 
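        # (Editor's note, summarizing the protocol implemented below: feed()
        # appends raw bytes to _buffer; _unpack() advances _buff_i as it reads;
        # _consume() commits that advance by moving _buf_checkpoint forward,
        # while an OutOfData rollback rewinds _buff_i to _buf_checkpoint so a
        # partial object can be retried once more data has been fed.)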
self._buf_checkpoint = 0 if max_str_len == -1: max_str_len = max_buffer_size or 1024*1024 if max_bin_len == -1: max_bin_len = max_buffer_size or 1024*1024 if max_array_len == -1: max_array_len = max_buffer_size or 128*1024 if max_map_len == -1: max_map_len = max_buffer_size//2 or 32*1024 if max_ext_len == -1: max_ext_len = max_buffer_size or 1024*1024 self._max_buffer_size = max_buffer_size or 2**31-1 if read_size > self._max_buffer_size: raise ValueError("read_size must be smaller than max_buffer_size") self._read_size = read_size or min(self._max_buffer_size, 16*1024) self._raw = bool(raw) self._strict_map_key = bool(strict_map_key) self._encoding = encoding self._unicode_errors = unicode_errors self._use_list = use_list self._list_hook = list_hook self._object_hook = object_hook self._object_pairs_hook = object_pairs_hook self._ext_hook = ext_hook self._max_str_len = max_str_len self._max_bin_len = max_bin_len self._max_array_len = max_array_len self._max_map_len = max_map_len self._max_ext_len = max_ext_len self._stream_offset = 0 if list_hook is not None and not callable(list_hook): raise TypeError('`list_hook` is not callable') if object_hook is not None and not callable(object_hook): raise TypeError('`object_hook` is not callable') if object_pairs_hook is not None and not callable(object_pairs_hook): raise TypeError('`object_pairs_hook` is not callable') if object_hook is not None and object_pairs_hook is not None: raise TypeError("object_pairs_hook and object_hook are mutually " "exclusive") if not callable(ext_hook): raise TypeError("`ext_hook` is not callable") def feed(self, next_bytes): assert self._feeding view = _get_data_from_buffer(next_bytes) if (len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size): raise BufferFull # Strip buffer before checkpoint before reading file. if self._buf_checkpoint > 0: del self._buffer[:self._buf_checkpoint] self._buff_i -= self._buf_checkpoint self._buf_checkpoint = 0 # Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython self._buffer.extend(view) def _consume(self): """ Gets rid of the used parts of the buffer. """ self._stream_offset += self._buff_i - self._buf_checkpoint self._buf_checkpoint = self._buff_i def _got_extradata(self): return self._buff_i < len(self._buffer) def _get_extradata(self): return self._buffer[self._buff_i:] def read_bytes(self, n): return self._read(n) def _read(self, n): # (int) -> bytearray self._reserve(n) i = self._buff_i self._buff_i = i+n return self._buffer[i:i+n] def _reserve(self, n): remain_bytes = len(self._buffer) - self._buff_i - n # Fast path: buffer has n bytes already if remain_bytes >= 0: return if self._feeding: self._buff_i = self._buf_checkpoint raise OutOfData # Strip buffer before checkpoint before reading file. 
if self._buf_checkpoint > 0: del self._buffer[:self._buf_checkpoint] self._buff_i -= self._buf_checkpoint self._buf_checkpoint = 0 # Read from file remain_bytes = -remain_bytes while remain_bytes > 0: to_read_bytes = max(self._read_size, remain_bytes) read_data = self.file_like.read(to_read_bytes) if not read_data: break assert isinstance(read_data, bytes) self._buffer += read_data remain_bytes -= len(read_data) if len(self._buffer) < n + self._buff_i: self._buff_i = 0 # rollback raise OutOfData def _read_header(self, execute=EX_CONSTRUCT): typ = TYPE_IMMEDIATE n = 0 obj = None self._reserve(1) b = self._buffer[self._buff_i] self._buff_i += 1 if b & 0b10000000 == 0: obj = b elif b & 0b11100000 == 0b11100000: obj = -1 - (b ^ 0xff) elif b & 0b11100000 == 0b10100000: n = b & 0b00011111 typ = TYPE_RAW if n > self._max_str_len: raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len) obj = self._read(n) elif b & 0b11110000 == 0b10010000: n = b & 0b00001111 typ = TYPE_ARRAY if n > self._max_array_len: raise ValueError("%s exceeds max_array_len(%s)", n, self._max_array_len) elif b & 0b11110000 == 0b10000000: n = b & 0b00001111 typ = TYPE_MAP if n > self._max_map_len: raise ValueError("%s exceeds max_map_len(%s)", n, self._max_map_len) elif b == 0xc0: obj = None elif b == 0xc2: obj = False elif b == 0xc3: obj = True elif b == 0xc4: typ = TYPE_BIN self._reserve(1) n = self._buffer[self._buff_i] self._buff_i += 1 if n > self._max_bin_len: raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len)) obj = self._read(n) elif b == 0xc5: typ = TYPE_BIN self._reserve(2) n = _unpack_from(">H", self._buffer, self._buff_i)[0] self._buff_i += 2 if n > self._max_bin_len: raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len)) obj = self._read(n) elif b == 0xc6: typ = TYPE_BIN self._reserve(4) n = _unpack_from(">I", self._buffer, self._buff_i)[0] self._buff_i += 4 if n > self._max_bin_len: raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len)) obj = self._read(n) elif b == 0xc7: # ext 8 typ = TYPE_EXT self._reserve(2) L, n = _unpack_from('Bb', self._buffer, self._buff_i) self._buff_i += 2 if L > self._max_ext_len: raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len)) obj = self._read(L) elif b == 0xc8: # ext 16 typ = TYPE_EXT self._reserve(3) L, n = _unpack_from('>Hb', self._buffer, self._buff_i) self._buff_i += 3 if L > self._max_ext_len: raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len)) obj = self._read(L) elif b == 0xc9: # ext 32 typ = TYPE_EXT self._reserve(5) L, n = _unpack_from('>Ib', self._buffer, self._buff_i) self._buff_i += 5 if L > self._max_ext_len: raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len)) obj = self._read(L) elif b == 0xca: self._reserve(4) obj = _unpack_from(">f", self._buffer, self._buff_i)[0] self._buff_i += 4 elif b == 0xcb: self._reserve(8) obj = _unpack_from(">d", self._buffer, self._buff_i)[0] self._buff_i += 8 elif b == 0xcc: self._reserve(1) obj = self._buffer[self._buff_i] self._buff_i += 1 elif b == 0xcd: self._reserve(2) obj = _unpack_from(">H", self._buffer, self._buff_i)[0] self._buff_i += 2 elif b == 0xce: self._reserve(4) obj = _unpack_from(">I", self._buffer, self._buff_i)[0] self._buff_i += 4 elif b == 0xcf: self._reserve(8) obj = _unpack_from(">Q", self._buffer, self._buff_i)[0] self._buff_i += 8 elif b == 0xd0: self._reserve(1) obj = _unpack_from("b", self._buffer, self._buff_i)[0] self._buff_i += 1 elif b == 0xd1: self._reserve(2) obj = 
_unpack_from(">h", self._buffer, self._buff_i)[0] self._buff_i += 2 elif b == 0xd2: self._reserve(4) obj = _unpack_from(">i", self._buffer, self._buff_i)[0] self._buff_i += 4 elif b == 0xd3: self._reserve(8) obj = _unpack_from(">q", self._buffer, self._buff_i)[0] self._buff_i += 8 elif b == 0xd4: # fixext 1 typ = TYPE_EXT if self._max_ext_len < 1: raise ValueError("%s exceeds max_ext_len(%s)" % (1, self._max_ext_len)) self._reserve(2) n, obj = _unpack_from("b1s", self._buffer, self._buff_i) self._buff_i += 2 elif b == 0xd5: # fixext 2 typ = TYPE_EXT if self._max_ext_len < 2: raise ValueError("%s exceeds max_ext_len(%s)" % (2, self._max_ext_len)) self._reserve(3) n, obj = _unpack_from("b2s", self._buffer, self._buff_i) self._buff_i += 3 elif b == 0xd6: # fixext 4 typ = TYPE_EXT if self._max_ext_len < 4: raise ValueError("%s exceeds max_ext_len(%s)" % (4, self._max_ext_len)) self._reserve(5) n, obj = _unpack_from("b4s", self._buffer, self._buff_i) self._buff_i += 5 elif b == 0xd7: # fixext 8 typ = TYPE_EXT if self._max_ext_len < 8: raise ValueError("%s exceeds max_ext_len(%s)" % (8, self._max_ext_len)) self._reserve(9) n, obj = _unpack_from("b8s", self._buffer, self._buff_i) self._buff_i += 9 elif b == 0xd8: # fixext 16 typ = TYPE_EXT if self._max_ext_len < 16: raise ValueError("%s exceeds max_ext_len(%s)" % (16, self._max_ext_len)) self._reserve(17) n, obj = _unpack_from("b16s", self._buffer, self._buff_i) self._buff_i += 17 elif b == 0xd9: typ = TYPE_RAW self._reserve(1) n = self._buffer[self._buff_i] self._buff_i += 1 if n > self._max_str_len: raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len) obj = self._read(n) elif b == 0xda: typ = TYPE_RAW self._reserve(2) n, = _unpack_from(">H", self._buffer, self._buff_i) self._buff_i += 2 if n > self._max_str_len: raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len) obj = self._read(n) elif b == 0xdb: typ = TYPE_RAW self._reserve(4) n, = _unpack_from(">I", self._buffer, self._buff_i) self._buff_i += 4 if n > self._max_str_len: raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len) obj = self._read(n) elif b == 0xdc: typ = TYPE_ARRAY self._reserve(2) n, = _unpack_from(">H", self._buffer, self._buff_i) self._buff_i += 2 if n > self._max_array_len: raise ValueError("%s exceeds max_array_len(%s)", n, self._max_array_len) elif b == 0xdd: typ = TYPE_ARRAY self._reserve(4) n, = _unpack_from(">I", self._buffer, self._buff_i) self._buff_i += 4 if n > self._max_array_len: raise ValueError("%s exceeds max_array_len(%s)", n, self._max_array_len) elif b == 0xde: self._reserve(2) n, = _unpack_from(">H", self._buffer, self._buff_i) self._buff_i += 2 if n > self._max_map_len: raise ValueError("%s exceeds max_map_len(%s)", n, self._max_map_len) typ = TYPE_MAP elif b == 0xdf: self._reserve(4) n, = _unpack_from(">I", self._buffer, self._buff_i) self._buff_i += 4 if n > self._max_map_len: raise ValueError("%s exceeds max_map_len(%s)", n, self._max_map_len) typ = TYPE_MAP else: raise FormatError("Unknown header: 0x%x" % b) return typ, n, obj def _unpack(self, execute=EX_CONSTRUCT): typ, n, obj = self._read_header(execute) if execute == EX_READ_ARRAY_HEADER: if typ != TYPE_ARRAY: raise ValueError("Expected array") return n if execute == EX_READ_MAP_HEADER: if typ != TYPE_MAP: raise ValueError("Expected map") return n # TODO should we eliminate the recursion? 
if typ == TYPE_ARRAY: if execute == EX_SKIP: for i in xrange(n): # TODO check whether we need to call `list_hook` self._unpack(EX_SKIP) return ret = newlist_hint(n) for i in xrange(n): ret.append(self._unpack(EX_CONSTRUCT)) if self._list_hook is not None: ret = self._list_hook(ret) # TODO is the interaction between `list_hook` and `use_list` ok? return ret if self._use_list else tuple(ret) if typ == TYPE_MAP: if execute == EX_SKIP: for i in xrange(n): # TODO check whether we need to call hooks self._unpack(EX_SKIP) self._unpack(EX_SKIP) return if self._object_pairs_hook is not None: ret = self._object_pairs_hook( (self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT)) for _ in xrange(n)) else: ret = {} for _ in xrange(n): key = self._unpack(EX_CONSTRUCT) if self._strict_map_key and type(key) not in (unicode, bytes): raise ValueError("%s is not allowed for map key" % str(type(key))) ret[key] = self._unpack(EX_CONSTRUCT) if self._object_hook is not None: ret = self._object_hook(ret) return ret if execute == EX_SKIP: return if typ == TYPE_RAW: if self._encoding is not None: obj = obj.decode(self._encoding, self._unicode_errors) elif self._raw: obj = bytes(obj) else: obj = obj.decode('utf_8') return obj if typ == TYPE_EXT: return self._ext_hook(n, bytes(obj)) if typ == TYPE_BIN: return bytes(obj) assert typ == TYPE_IMMEDIATE return obj def __iter__(self): return self def __next__(self): try: ret = self._unpack(EX_CONSTRUCT) self._consume() return ret except OutOfData: self._consume() raise StopIteration except RecursionError: raise StackError next = __next__ def skip(self): self._unpack(EX_SKIP) self._consume() def unpack(self): try: ret = self._unpack(EX_CONSTRUCT) except RecursionError: raise StackError self._consume() return ret def read_array_header(self): ret = self._unpack(EX_READ_ARRAY_HEADER) self._consume() return ret def read_map_header(self): ret = self._unpack(EX_READ_MAP_HEADER) self._consume() return ret def tell(self): return self._stream_offset class Packer(object): """ MessagePack Packer usage: packer = Packer() astream.write(packer.pack(a)) astream.write(packer.pack(b)) Packer's constructor has some keyword arguments: :param callable default: Convert user type to builtin type that Packer supports. See also simplejson's document. :param bool use_single_float: Use single precision float type for float. (default: False) :param bool autoreset: Reset buffer after each pack and return its content as `bytes`. (default: True). If set this to false, use `bytes()` to get content and `.reset()` to clear buffer. :param bool use_bin_type: Use bin type introduced in msgpack spec 2.0 for bytes. It also enables str8 type for unicode. :param bool strict_types: If set to true, types will be checked to be exact. Derived classes from serializeable types will not be serialized and will be treated as unsupported type and forwarded to default. Additionally tuples will not be serialized as lists. This is useful when trying to implement accurate serialization for python types. :param str encoding: (deprecated) Convert unicode to bytes with this encoding. (default: 'utf-8') :param str unicode_errors: Error handler for encoding unicode. 
(default: 'strict') """ def __init__(self, default=None, encoding=None, unicode_errors=None, use_single_float=False, autoreset=True, use_bin_type=False, strict_types=False): if encoding is None: encoding = 'utf_8' else: warnings.warn( "encoding is deprecated, Use raw=False instead.", DeprecationWarning, stacklevel=2) if unicode_errors is None: unicode_errors = 'strict' self._strict_types = strict_types self._use_float = use_single_float self._autoreset = autoreset self._use_bin_type = use_bin_type self._encoding = encoding self._unicode_errors = unicode_errors self._buffer = StringIO() if default is not None: if not callable(default): raise TypeError("default must be callable") self._default = default def _pack(self, obj, nest_limit=DEFAULT_RECURSE_LIMIT, check=isinstance, check_type_strict=_check_type_strict): default_used = False if self._strict_types: check = check_type_strict list_types = list else: list_types = (list, tuple) while True: if nest_limit < 0: raise ValueError("recursion limit exceeded") if obj is None: return self._buffer.write(b"\xc0") if check(obj, bool): if obj: return self._buffer.write(b"\xc3") return self._buffer.write(b"\xc2") if check(obj, int_types): if 0 <= obj < 0x80: return self._buffer.write(struct.pack("B", obj)) if -0x20 <= obj < 0: return self._buffer.write(struct.pack("b", obj)) if 0x80 <= obj <= 0xff: return self._buffer.write(struct.pack("BB", 0xcc, obj)) if -0x80 <= obj < 0: return self._buffer.write(struct.pack(">Bb", 0xd0, obj)) if 0xff < obj <= 0xffff: return self._buffer.write(struct.pack(">BH", 0xcd, obj)) if -0x8000 <= obj < -0x80: return self._buffer.write(struct.pack(">Bh", 0xd1, obj)) if 0xffff < obj <= 0xffffffff: return self._buffer.write(struct.pack(">BI", 0xce, obj)) if -0x80000000 <= obj < -0x8000: return self._buffer.write(struct.pack(">Bi", 0xd2, obj)) if 0xffffffff < obj <= 0xffffffffffffffff: return self._buffer.write(struct.pack(">BQ", 0xcf, obj)) if -0x8000000000000000 <= obj < -0x80000000: return self._buffer.write(struct.pack(">Bq", 0xd3, obj)) if not default_used and self._default is not None: obj = self._default(obj) default_used = True continue raise OverflowError("Integer value out of range") if check(obj, (bytes, bytearray)): n = len(obj) if n >= 2**32: raise ValueError("%s is too large" % type(obj).__name__) self._pack_bin_header(n) return self._buffer.write(obj) if check(obj, unicode): if self._encoding is None: raise TypeError( "Can't encode unicode string: " "no encoding is specified") obj = obj.encode(self._encoding, self._unicode_errors) n = len(obj) if n >= 2**32: raise ValueError("String is too large") self._pack_raw_header(n) return self._buffer.write(obj) if check(obj, memoryview): n = len(obj) * obj.itemsize if n >= 2**32: raise ValueError("Memoryview is too large") self._pack_bin_header(n) return self._buffer.write(obj) if check(obj, float): if self._use_float: return self._buffer.write(struct.pack(">Bf", 0xca, obj)) return self._buffer.write(struct.pack(">Bd", 0xcb, obj)) if check(obj, ExtType): code = obj.code data = obj.data assert isinstance(code, int) assert isinstance(data, bytes) L = len(data) if L == 1: self._buffer.write(b'\xd4') elif L == 2: self._buffer.write(b'\xd5') elif L == 4: self._buffer.write(b'\xd6') elif L == 8: self._buffer.write(b'\xd7') elif L == 16: self._buffer.write(b'\xd8') elif L <= 0xff: self._buffer.write(struct.pack(">BB", 0xc7, L)) elif L <= 0xffff: self._buffer.write(struct.pack(">BH", 0xc8, L)) else: self._buffer.write(struct.pack(">BI", 0xc9, L)) 
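                # (Editor's note on the wire layout produced here: one marker
                # byte -- 0xd4..0xd8 for fixext 1/2/4/8/16, or 0xc7/0xc8/0xc9
                # with a big-endian length for ext 8/16/32 -- followed by the
                # signed type-code byte and the raw payload written below.)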
self._buffer.write(struct.pack("b", code)) self._buffer.write(data) return if check(obj, list_types): n = len(obj) self._pack_array_header(n) for i in xrange(n): self._pack(obj[i], nest_limit - 1) return if check(obj, dict): return self._pack_map_pairs(len(obj), dict_iteritems(obj), nest_limit - 1) if not default_used and self._default is not None: obj = self._default(obj) default_used = 1 continue raise TypeError("Cannot serialize %r" % (obj, )) def pack(self, obj): try: self._pack(obj) except: self._buffer = StringIO() # force reset raise if self._autoreset: ret = self._buffer.getvalue() self._buffer = StringIO() return ret def pack_map_pairs(self, pairs): self._pack_map_pairs(len(pairs), pairs) if self._autoreset: ret = self._buffer.getvalue() self._buffer = StringIO() return ret def pack_array_header(self, n): if n >= 2**32: raise ValueError self._pack_array_header(n) if self._autoreset: ret = self._buffer.getvalue() self._buffer = StringIO() return ret def pack_map_header(self, n): if n >= 2**32: raise ValueError self._pack_map_header(n) if self._autoreset: ret = self._buffer.getvalue() self._buffer = StringIO() return ret def pack_ext_type(self, typecode, data): if not isinstance(typecode, int): raise TypeError("typecode must have int type.") if not 0 <= typecode <= 127: raise ValueError("typecode should be 0-127") if not isinstance(data, bytes): raise TypeError("data must have bytes type") L = len(data) if L > 0xffffffff: raise ValueError("Too large data") if L == 1: self._buffer.write(b'\xd4') elif L == 2: self._buffer.write(b'\xd5') elif L == 4: self._buffer.write(b'\xd6') elif L == 8: self._buffer.write(b'\xd7') elif L == 16: self._buffer.write(b'\xd8') elif L <= 0xff: self._buffer.write(b'\xc7' + struct.pack('B', L)) elif L <= 0xffff: self._buffer.write(b'\xc8' + struct.pack('>H', L)) else: self._buffer.write(b'\xc9' + struct.pack('>I', L)) self._buffer.write(struct.pack('B', typecode)) self._buffer.write(data) def _pack_array_header(self, n): if n <= 0x0f: return self._buffer.write(struct.pack('B', 0x90 + n)) if n <= 0xffff: return self._buffer.write(struct.pack(">BH", 0xdc, n)) if n <= 0xffffffff: return self._buffer.write(struct.pack(">BI", 0xdd, n)) raise ValueError("Array is too large") def _pack_map_header(self, n): if n <= 0x0f: return self._buffer.write(struct.pack('B', 0x80 + n)) if n <= 0xffff: return self._buffer.write(struct.pack(">BH", 0xde, n)) if n <= 0xffffffff: return self._buffer.write(struct.pack(">BI", 0xdf, n)) raise ValueError("Dict is too large") def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT): self._pack_map_header(n) for (k, v) in pairs: self._pack(k, nest_limit - 1) self._pack(v, nest_limit - 1) def _pack_raw_header(self, n): if n <= 0x1f: self._buffer.write(struct.pack('B', 0xa0 + n)) elif self._use_bin_type and n <= 0xff: self._buffer.write(struct.pack('>BB', 0xd9, n)) elif n <= 0xffff: self._buffer.write(struct.pack(">BH", 0xda, n)) elif n <= 0xffffffff: self._buffer.write(struct.pack(">BI", 0xdb, n)) else: raise ValueError('Raw is too large') def _pack_bin_header(self, n): if not self._use_bin_type: return self._pack_raw_header(n) elif n <= 0xff: return self._buffer.write(struct.pack('>BB', 0xc4, n)) elif n <= 0xffff: return self._buffer.write(struct.pack(">BH", 0xc5, n)) elif n <= 0xffffffff: return self._buffer.write(struct.pack(">BI", 0xc6, n)) else: raise ValueError('Bin is too large') def bytes(self): """Return internal buffer contents as bytes object""" return self._buffer.getvalue() def reset(self): """Reset internal 
        buffer.

        This method is useful only when autoreset=False.
        """
        self._buffer = StringIO()

    def getbuffer(self):
        """Return view of internal buffer."""
        if USING_STRINGBUILDER or PY2:
            return memoryview(self.bytes())
        else:
            return self._buffer.getbuffer()
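

# --- Editor's usage sketch (not part of the vendored sources) ---
# Streaming deserialization with the Packer/Unpacker defined above, driven
# from an in-memory feed instead of the docstring's socket example.

from pip._vendor.msgpack import Packer, Unpacker

packer = Packer(use_bin_type=True)
stream = packer.pack([1, 2, 3]) + packer.pack({"done": True})

unpacker = Unpacker(raw=False, max_buffer_size=1024 * 1024)
unpacker.feed(stream)        # feed() may be called repeatedly with chunks
for obj in unpacker:         # yields each complete object as it arrives
    print(obj)               # [1, 2, 3] then {'done': True}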
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/__about__.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

__all__ = [
    "__title__",
    "__summary__",
    "__uri__",
    "__version__",
    "__author__",
    "__email__",
    "__license__",
    "__copyright__",
]

__title__ = "packaging"
__summary__ = "Core utilities for Python packages"
__uri__ = "https://github.com/pypa/packaging"

__version__ = "19.0"

__author__ = "Donald Stufft and individual contributors"
__email__ = "donald@stufft.io"

__license__ = "BSD or Apache License, Version 2.0"
__copyright__ = "Copyright 2014-2019 %s" % __author__


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/__init__.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

from .__about__ import (
    __author__, __copyright__, __email__, __license__, __summary__,
    __title__, __uri__, __version__,
)

__all__ = [
    "__title__",
    "__summary__",
    "__uri__",
    "__version__",
    "__author__",
    "__email__",
    "__license__",
    "__copyright__",
]


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/_compat.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import sys


PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3

# flake8: noqa

if PY3:
    string_types = (str,)
else:
    string_types = (basestring,)


def with_metaclass(meta, *bases):
    """
    Create a base class with a metaclass.
    """
    # This requires a bit of explanation: the basic idea is to make a dummy
    # metaclass for one level of class instantiation that replaces itself with
    # the actual metaclass.
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, "temporary_class", (), {})
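

# --- Editor's usage sketch (not part of the vendored sources) ---
# What with_metaclass() buys us: one class statement that works under both
# Python 2 and 3 metaclass syntaxes.  `Meta` and `Widget` are illustrative
# names, not part of this module.

class Meta(type):
    def __new__(mcls, name, bases, ns):
        ns.setdefault("tag", name.lower())
        return super(Meta, mcls).__new__(mcls, name, bases, ns)

class Widget(with_metaclass(Meta, object)):
    pass

assert type(Widget) is Meta and Widget.tag == "widget"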
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/_structures.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function


class Infinity(object):
    def __repr__(self):
        return "Infinity"

    def __hash__(self):
        return hash(repr(self))

    def __lt__(self, other):
        return False

    def __le__(self, other):
        return False

    def __eq__(self, other):
        return isinstance(other, self.__class__)

    def __ne__(self, other):
        return not isinstance(other, self.__class__)

    def __gt__(self, other):
        return True

    def __ge__(self, other):
        return True

    def __neg__(self):
        return NegativeInfinity


Infinity = Infinity()


class NegativeInfinity(object):
    def __repr__(self):
        return "-Infinity"

    def __hash__(self):
        return hash(repr(self))

    def __lt__(self, other):
        return True

    def __le__(self, other):
        return True

    def __eq__(self, other):
        return isinstance(other, self.__class__)

    def __ne__(self, other):
        return not isinstance(other, self.__class__)

    def __gt__(self, other):
        return False

    def __ge__(self, other):
        return False

    def __neg__(self):
        return Infinity


NegativeInfinity = NegativeInfinity()


cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/markers.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import operator
import os
import platform
import sys

from pip._vendor.pyparsing import ParseException, ParseResults, stringStart, stringEnd
from pip._vendor.pyparsing import ZeroOrMore, Group, Forward, QuotedString
from pip._vendor.pyparsing import Literal as L  # noqa

from ._compat import string_types
from .specifiers import Specifier, InvalidSpecifier


__all__ = [
    "InvalidMarker",
    "UndefinedComparison",
    "UndefinedEnvironmentName",
    "Marker",
    "default_environment",
]


class InvalidMarker(ValueError):
    """
    An invalid marker was found, users should refer to PEP 508.
""" class UndefinedComparison(ValueError): """ An invalid operation was attempted on a value that doesn't support it. """ class UndefinedEnvironmentName(ValueError): """ A name was attempted to be used that does not exist inside of the environment. """ class Node(object): def __init__(self, value): self.value = value def __str__(self): return str(self.value) def __repr__(self): return "<{0}({1!r})>".format(self.__class__.__name__, str(self)) def serialize(self): raise NotImplementedError class Variable(Node): def serialize(self): return str(self) class Value(Node): def serialize(self): return '"{0}"'.format(self) class Op(Node): def serialize(self): return str(self) VARIABLE = ( L("implementation_version") | L("platform_python_implementation") | L("implementation_name") | L("python_full_version") | L("platform_release") | L("platform_version") | L("platform_machine") | L("platform_system") | L("python_version") | L("sys_platform") | L("os_name") | L("os.name") | L("sys.platform") # PEP-345 | L("platform.version") # PEP-345 | L("platform.machine") # PEP-345 | L("platform.python_implementation") # PEP-345 | L("python_implementation") # PEP-345 | L("extra") # undocumented setuptools legacy ) ALIASES = { "os.name": "os_name", "sys.platform": "sys_platform", "platform.version": "platform_version", "platform.machine": "platform_machine", "platform.python_implementation": "platform_python_implementation", "python_implementation": "platform_python_implementation", } VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0]))) VERSION_CMP = ( L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<") ) MARKER_OP = VERSION_CMP | L("not in") | L("in") MARKER_OP.setParseAction(lambda s, l, t: Op(t[0])) MARKER_VALUE = QuotedString("'") | QuotedString('"') MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0])) BOOLOP = L("and") | L("or") MARKER_VAR = VARIABLE | MARKER_VALUE MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR) MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0])) LPAREN = L("(").suppress() RPAREN = L(")").suppress() MARKER_EXPR = Forward() MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN) MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR) MARKER = stringStart + MARKER_EXPR + stringEnd def _coerce_parse_result(results): if isinstance(results, ParseResults): return [_coerce_parse_result(i) for i in results] else: return results def _format_marker(marker, first=True): assert isinstance(marker, (list, tuple, string_types)) # Sometimes we have a structure like [[...]] which is a single item list # where the single item is itself it's own list. In that case we want skip # the rest of this function so that we don't get extraneous () on the # outside. 
def _format_marker(marker, first=True):
    assert isinstance(marker, (list, tuple, string_types))

    # Sometimes we have a structure like [[...]] which is a single-item list
    # where the single item is itself its own list. In that case we want to
    # skip the rest of this function so that we don't get extraneous () on
    # the outside.
    if (
        isinstance(marker, list)
        and len(marker) == 1
        and isinstance(marker[0], (list, tuple))
    ):
        return _format_marker(marker[0])

    if isinstance(marker, list):
        inner = (_format_marker(m, first=False) for m in marker)
        if first:
            return " ".join(inner)
        else:
            return "(" + " ".join(inner) + ")"
    elif isinstance(marker, tuple):
        return " ".join([m.serialize() for m in marker])
    else:
        return marker


_operators = {
    "in": lambda lhs, rhs: lhs in rhs,
    "not in": lambda lhs, rhs: lhs not in rhs,
    "<": operator.lt,
    "<=": operator.le,
    "==": operator.eq,
    "!=": operator.ne,
    ">=": operator.ge,
    ">": operator.gt,
}


def _eval_op(lhs, op, rhs):
    try:
        spec = Specifier("".join([op.serialize(), rhs]))
    except InvalidSpecifier:
        pass
    else:
        return spec.contains(lhs)

    oper = _operators.get(op.serialize())
    if oper is None:
        raise UndefinedComparison(
            "Undefined {0!r} on {1!r} and {2!r}.".format(op, lhs, rhs)
        )

    return oper(lhs, rhs)


_undefined = object()


def _get_env(environment, name):
    value = environment.get(name, _undefined)

    if value is _undefined:
        raise UndefinedEnvironmentName(
            "{0!r} does not exist in evaluation environment.".format(name)
        )

    return value


def _evaluate_markers(markers, environment):
    groups = [[]]

    for marker in markers:
        assert isinstance(marker, (list, tuple, string_types))

        if isinstance(marker, list):
            groups[-1].append(_evaluate_markers(marker, environment))
        elif isinstance(marker, tuple):
            lhs, op, rhs = marker

            if isinstance(lhs, Variable):
                lhs_value = _get_env(environment, lhs.value)
                rhs_value = rhs.value
            else:
                lhs_value = lhs.value
                rhs_value = _get_env(environment, rhs.value)

            groups[-1].append(_eval_op(lhs_value, op, rhs_value))
        else:
            assert marker in ["and", "or"]
            if marker == "or":
                groups.append([])

    return any(all(item) for item in groups)


def format_full_version(info):
    version = "{0.major}.{0.minor}.{0.micro}".format(info)
    kind = info.releaselevel
    if kind != "final":
        version += kind[0] + str(info.serial)
    return version


def default_environment():
    if hasattr(sys, "implementation"):
        iver = format_full_version(sys.implementation.version)
        implementation_name = sys.implementation.name
    else:
        iver = "0"
        implementation_name = ""

    return {
        "implementation_name": implementation_name,
        "implementation_version": iver,
        "os_name": os.name,
        "platform_machine": platform.machine(),
        "platform_release": platform.release(),
        "platform_system": platform.system(),
        "platform_version": platform.version(),
        "python_full_version": platform.python_version(),
        "platform_python_implementation": platform.python_implementation(),
        "python_version": platform.python_version()[:3],
        "sys_platform": sys.platform,
    }
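# Illustrative sketch (not part of the upstream module): the dictionary
# returned by default_environment() supplies the variable values that the
# Marker class below evaluates against; the exact values vary by host.
if __name__ == "__main__":
    import pprint
    pprint.pprint(default_environment())
    # e.g. {'implementation_name': 'cpython', 'os_name': 'posix',
    #       'python_version': '3.7', 'sys_platform': 'linux', ...}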
""" current_environment = default_environment() if environment is not None: current_environment.update(environment) return _evaluate_markers(self._markers, current_environment) ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/requirements.py����������������0000644�0001750�0001750�00000011134�00000000000�031506� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This file is dual licensed under the terms of the Apache License, Version # 2.0, and the BSD License. See the LICENSE file in the root of this repository # for complete details. from __future__ import absolute_import, division, print_function import string import re from pip._vendor.pyparsing import stringStart, stringEnd, originalTextFor, ParseException from pip._vendor.pyparsing import ZeroOrMore, Word, Optional, Regex, Combine from pip._vendor.pyparsing import Literal as L # noqa from pip._vendor.six.moves.urllib import parse as urlparse from .markers import MARKER_EXPR, Marker from .specifiers import LegacySpecifier, Specifier, SpecifierSet class InvalidRequirement(ValueError): """ An invalid requirement was found, users should refer to PEP 508. 
""" ALPHANUM = Word(string.ascii_letters + string.digits) LBRACKET = L("[").suppress() RBRACKET = L("]").suppress() LPAREN = L("(").suppress() RPAREN = L(")").suppress() COMMA = L(",").suppress() SEMICOLON = L(";").suppress() AT = L("@").suppress() PUNCTUATION = Word("-_.") IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM) IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END)) NAME = IDENTIFIER("name") EXTRA = IDENTIFIER URI = Regex(r"[^ ]+")("url") URL = AT + URI EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA) EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras") VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE) VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE) VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY VERSION_MANY = Combine( VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False )("_raw_spec") _VERSION_SPEC = Optional(((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY)) _VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or "") VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier") VERSION_SPEC.setParseAction(lambda s, l, t: t[1]) MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") MARKER_EXPR.setParseAction( lambda s, l, t: Marker(s[t._original_start : t._original_end]) ) MARKER_SEPARATOR = SEMICOLON MARKER = MARKER_SEPARATOR + MARKER_EXPR VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER) URL_AND_MARKER = URL + Optional(MARKER) NAMED_REQUIREMENT = NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER) REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd # pyparsing isn't thread safe during initialization, so we do it eagerly, see # issue #104 REQUIREMENT.parseString("x[]") class Requirement(object): """Parse a requirement. Parse a given requirement string into its parts, such as name, specifier, URL, and extras. Raises InvalidRequirement on a badly-formed requirement string. """ # TODO: Can we test whether something is contained within a requirement? # If so how do we do that? Do we need to test against the _name_ of # the thing as well as the version? What about the markers? # TODO: Can we normalize the name and extra name? 
    def __init__(self, requirement_string):
        try:
            req = REQUIREMENT.parseString(requirement_string)
        except ParseException as e:
            raise InvalidRequirement(
                'Parse error at "{0!r}": {1}'.format(
                    requirement_string[e.loc : e.loc + 8], e.msg
                )
            )

        self.name = req.name
        if req.url:
            parsed_url = urlparse.urlparse(req.url)
            if parsed_url.scheme == "file":
                if urlparse.urlunparse(parsed_url) != req.url:
                    raise InvalidRequirement("Invalid URL given")
            elif not (parsed_url.scheme and parsed_url.netloc) or (
                not parsed_url.scheme and not parsed_url.netloc
            ):
                raise InvalidRequirement("Invalid URL: {0}".format(req.url))
            self.url = req.url
        else:
            self.url = None
        self.extras = set(req.extras.asList() if req.extras else [])
        self.specifier = SpecifierSet(req.specifier)
        self.marker = req.marker if req.marker else None

    def __str__(self):
        parts = [self.name]

        if self.extras:
            parts.append("[{0}]".format(",".join(sorted(self.extras))))

        if self.specifier:
            parts.append(str(self.specifier))

        if self.url:
            parts.append("@ {0}".format(self.url))
            if self.marker:
                parts.append(" ")

        if self.marker:
            parts.append("; {0}".format(self.marker))

        return "".join(parts)

    def __repr__(self):
        return "<Requirement({0!r})>".format(str(self))
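# Illustrative usage sketch (not part of the upstream module), parsing a
# PEP 508 requirement string:
if __name__ == "__main__":
    req = Requirement("requests[security,socks]>=2.8.1,==2.8.*; python_version < '2.7'")
    assert req.name == "requests"
    assert req.extras == {"security", "socks"}
    assert str(req.specifier) == "==2.8.*,>=2.8.1"  # specifiers sort in str()
    assert req.url is None and req.marker is not None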
# === cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/specifiers.py ===
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import abc
import functools
import itertools
import re

from ._compat import string_types, with_metaclass
from .version import Version, LegacyVersion, parse


class InvalidSpecifier(ValueError):
    """
    An invalid specifier was found, users should refer to PEP 440.
    """


class BaseSpecifier(with_metaclass(abc.ABCMeta, object)):
    @abc.abstractmethod
    def __str__(self):
        """
        Returns the str representation of this Specifier-like object. This
        should be representative of the Specifier itself.
        """

    @abc.abstractmethod
    def __hash__(self):
        """
        Returns a hash value for this Specifier-like object.
        """

    @abc.abstractmethod
    def __eq__(self, other):
        """
        Returns a boolean representing whether or not the two Specifier-like
        objects are equal.
        """

    @abc.abstractmethod
    def __ne__(self, other):
        """
        Returns a boolean representing whether or not the two Specifier-like
        objects are not equal.
        """

    @abc.abstractproperty
    def prereleases(self):
        """
        Returns whether or not pre-releases as a whole are allowed by this
        specifier.
        """

    @prereleases.setter
    def prereleases(self, value):
        """
        Sets whether or not pre-releases as a whole are allowed by this
        specifier.
        """

    @abc.abstractmethod
    def contains(self, item, prereleases=None):
        """
        Determines if the given item is contained within this specifier.
        """

    @abc.abstractmethod
    def filter(self, iterable, prereleases=None):
        """
        Takes an iterable of items and filters them so that only items which
        are contained within this specifier are allowed in it.
        """


class _IndividualSpecifier(BaseSpecifier):

    _operators = {}

    def __init__(self, spec="", prereleases=None):
        match = self._regex.search(spec)
        if not match:
            raise InvalidSpecifier("Invalid specifier: '{0}'".format(spec))

        self._spec = (match.group("operator").strip(), match.group("version").strip())

        # Store whether or not this Specifier should accept prereleases
        self._prereleases = prereleases

    def __repr__(self):
        pre = (
            ", prereleases={0!r}".format(self.prereleases)
            if self._prereleases is not None
            else ""
        )

        return "<{0}({1!r}{2})>".format(self.__class__.__name__, str(self), pre)

    def __str__(self):
        return "{0}{1}".format(*self._spec)

    def __hash__(self):
        return hash(self._spec)

    def __eq__(self, other):
        if isinstance(other, string_types):
            try:
                other = self.__class__(other)
            except InvalidSpecifier:
                return NotImplemented
        elif not isinstance(other, self.__class__):
            return NotImplemented

        return self._spec == other._spec

    def __ne__(self, other):
        if isinstance(other, string_types):
            try:
                other = self.__class__(other)
            except InvalidSpecifier:
                return NotImplemented
        elif not isinstance(other, self.__class__):
            return NotImplemented

        return self._spec != other._spec

    def _get_operator(self, op):
        return getattr(self, "_compare_{0}".format(self._operators[op]))

    def _coerce_version(self, version):
        if not isinstance(version, (LegacyVersion, Version)):
            version = parse(version)
        return version

    @property
    def operator(self):
        return self._spec[0]

    @property
    def version(self):
        return self._spec[1]

    @property
    def prereleases(self):
        return self._prereleases

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value

    def __contains__(self, item):
        return self.contains(item)
    def contains(self, item, prereleases=None):
        # Determine if prereleases are to be allowed or not.
        if prereleases is None:
            prereleases = self.prereleases

        # Normalize item to a Version or LegacyVersion, this allows us to have
        # a shortcut for ``"2.0" in Specifier(">=2")``.
        item = self._coerce_version(item)

        # Determine if we should be supporting prereleases in this specifier
        # or not; if we do not support prereleases then we can short-circuit
        # the logic if this version is a prerelease.
        if item.is_prerelease and not prereleases:
            return False

        # Actually do the comparison to determine if this item is contained
        # within this Specifier or not.
        return self._get_operator(self.operator)(item, self.version)

    def filter(self, iterable, prereleases=None):
        yielded = False
        found_prereleases = []

        kw = {"prereleases": prereleases if prereleases is not None else True}

        # Attempt to iterate over all the values in the iterable and if any of
        # them match, yield them.
        for version in iterable:
            parsed_version = self._coerce_version(version)

            if self.contains(parsed_version, **kw):
                # If our version is a prerelease, and we were not set to allow
                # prereleases, then we'll store it for later in case nothing
                # else matches this specifier.
                if parsed_version.is_prerelease and not (
                    prereleases or self.prereleases
                ):
                    found_prereleases.append(version)
                # Either this is not a prerelease, or we should have been
                # accepting prereleases from the beginning.
                else:
                    yielded = True
                    yield version

        # Now that we've iterated over everything, determine if we've yielded
        # any values, and if we have not and we have any prereleases stored up
        # then we will go ahead and yield the prereleases.
        if not yielded and found_prereleases:
            for version in found_prereleases:
                yield version


class LegacySpecifier(_IndividualSpecifier):

    _regex_str = r"""
        (?P<operator>(==|!=|<=|>=|<|>))
        \s*
        (?P<version>
            [^,;\s)]* # Since this is a "legacy" specifier, and the version
                      # string can be just about anything, we match everything
                      # except for whitespace, a semi-colon for marker support,
                      # a closing paren since versions can be enclosed in
                      # them, and a comma since it's a version separator.
        )
        """

    _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)

    _operators = {
        "==": "equal",
        "!=": "not_equal",
        "<=": "less_than_equal",
        ">=": "greater_than_equal",
        "<": "less_than",
        ">": "greater_than",
    }

    def _coerce_version(self, version):
        if not isinstance(version, LegacyVersion):
            version = LegacyVersion(str(version))
        return version

    def _compare_equal(self, prospective, spec):
        return prospective == self._coerce_version(spec)

    def _compare_not_equal(self, prospective, spec):
        return prospective != self._coerce_version(spec)

    def _compare_less_than_equal(self, prospective, spec):
        return prospective <= self._coerce_version(spec)

    def _compare_greater_than_equal(self, prospective, spec):
        return prospective >= self._coerce_version(spec)

    def _compare_less_than(self, prospective, spec):
        return prospective < self._coerce_version(spec)

    def _compare_greater_than(self, prospective, spec):
        return prospective > self._coerce_version(spec)
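# Illustrative sketch (not part of the upstream module): LegacySpecifier
# uses setuptools' pre-PEP-440 ordering, so even non-standard version
# strings can be matched (they are coerced to LegacyVersion, defined in
# version.py further below).
if __name__ == "__main__":
    legacy = LegacySpecifier(">=1.0")
    assert legacy.contains("1.0-custom-build")  # not valid PEP 440, still matches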
def _require_version_compare(fn):
    @functools.wraps(fn)
    def wrapped(self, prospective, spec):
        if not isinstance(prospective, Version):
            return False
        return fn(self, prospective, spec)

    return wrapped


class Specifier(_IndividualSpecifier):

    _regex_str = r"""
        (?P<operator>(~=|==|!=|<=|>=|<|>|===))
        (?P<version>
            (?:
                # The identity operators allow for an escape hatch that will
                # do an exact string match of the version you wish to install.
                # This will not be parsed by PEP 440 and we cannot determine
                # any semantic meaning from it. This operator is discouraged
                # but included entirely as an escape hatch.
                (?<====)  # Only match for the identity operator
                \s*
                [^\s]*    # We just match everything, except for whitespace
                          # since we are only testing for strict identity.
            )
            |
            (?:
                # The (non)equality operators allow for wild card and local
                # versions to be specified so we have to define these two
                # operators separately to enable that.
                (?<===|!=)            # Only match for equals and not equals

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)*   # release
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?

                # You cannot use a wild card and a dev or local version
                # together so group them with a | and make them optional.
                (?:
                    (?:[-_\.]?dev[-_\.]?[0-9]*)?          # dev release
                    (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)?  # local
                    |
                    \.\*  # Wild card syntax of .*
                )?
            )
            |
            (?:
                # The compatible operator requires at least two digits in the
                # release segment.
                (?<=~=)               # Only match for the compatible operator

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)+   # release  (We have a + instead of a *)
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?
                (?:[-_\.]?dev[-_\.]?[0-9]*)?  # dev release
            )
            |
            (?:
                # All other operators only allow a sub set of what the
                # (non)equality operators do. Specifically they do not allow
                # local versions to be specified nor do they allow the prefix
                # matching wild cards.
                (?<!==|!=|~=)         # We have special cases for these
                                      # operators so we want to make sure they
                                      # don't match here.

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)*   # release
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?
                (?:[-_\.]?dev[-_\.]?[0-9]*)?  # dev release
            )
        )
        """

    _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)

    _operators = {
        "~=": "compatible",
        "==": "equal",
        "!=": "not_equal",
        "<=": "less_than_equal",
        ">=": "greater_than_equal",
        "<": "less_than",
        ">": "greater_than",
        "===": "arbitrary",
    }
    @_require_version_compare
    def _compare_compatible(self, prospective, spec):
        # Compatible releases have an equivalent combination of >= and ==.
        # That is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
        # implement this in terms of the other specifiers instead of
        # implementing it ourselves. The only thing we need to do is construct
        # the other specifiers.

        # We want everything but the last item in the version, but we want to
        # ignore post and dev releases and we want to treat the pre-release as
        # its own separate segment.
        prefix = ".".join(
            list(
                itertools.takewhile(
                    lambda x: (not x.startswith("post") and not x.startswith("dev")),
                    _version_split(spec),
                )
            )[:-1]
        )

        # Add the prefix notation to the end of our string
        prefix += ".*"

        return self._get_operator(">=")(prospective, spec) and self._get_operator("==")(
            prospective, prefix
        )

    @_require_version_compare
    def _compare_equal(self, prospective, spec):
        # We need special logic to handle prefix matching
        if spec.endswith(".*"):
            # In the case of prefix matching we want to ignore the local
            # segment.
            prospective = Version(prospective.public)
            # Split the spec out by dots, and pretend that there is an
            # implicit dot in between a release segment and a pre-release
            # segment.
            spec = _version_split(spec[:-2])  # Remove the trailing .*

            # Split the prospective version out by dots, and pretend that
            # there is an implicit dot in between a release segment and a
            # pre-release segment.
            prospective = _version_split(str(prospective))

            # Shorten the prospective version to be the same length as the
            # spec so that we can determine if the specifier is a prefix of
            # the prospective version or not.
            prospective = prospective[: len(spec)]

            # Pad out our two sides with zeros so that they both equal the
            # same length.
            spec, prospective = _pad_version(spec, prospective)
        else:
            # Convert our spec string into a Version
            spec = Version(spec)

            # If the specifier does not have a local segment, then we want to
            # act as if the prospective version also does not have a local
            # segment.
            if not spec.local:
                prospective = Version(prospective.public)

        return prospective == spec

    @_require_version_compare
    def _compare_not_equal(self, prospective, spec):
        return not self._compare_equal(prospective, spec)

    @_require_version_compare
    def _compare_less_than_equal(self, prospective, spec):
        return prospective <= Version(spec)

    @_require_version_compare
    def _compare_greater_than_equal(self, prospective, spec):
        return prospective >= Version(spec)

    @_require_version_compare
    def _compare_less_than(self, prospective, spec):
        # Convert our spec to a Version instance, since we'll want to work
        # with it as a version.
        spec = Version(spec)

        # Check to see if the prospective version is less than the spec
        # version. If it's not we can short-circuit and just return False now
        # instead of doing extra unneeded work.
        if not prospective < spec:
            return False

        # This special case is here so that, unless the specifier itself
        # includes a pre-release version, we do not accept pre-release
        # versions of the version mentioned in the specifier (e.g. <3.1
        # should not match 3.1.dev0, but should match 3.0.dev0).
        if not spec.is_prerelease and prospective.is_prerelease:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # If we've gotten to here, it means that the prospective version is
        # both less than the spec version *and* it's not a pre-release of the
        # same version in the spec.
        return True

    @_require_version_compare
    def _compare_greater_than(self, prospective, spec):
        # Convert our spec to a Version instance, since we'll want to work
        # with it as a version.
        spec = Version(spec)

        # Check to see if the prospective version is greater than the spec
        # version. If it's not we can short-circuit and just return False now
        # instead of doing extra unneeded work.
        if not prospective > spec:
            return False

        # This special case is here so that, unless the specifier itself
        # includes a post-release version, we do not accept post-release
        # versions of the version mentioned in the specifier (e.g. >3.1
        # should not match 3.0.post0, but should match 3.2.post0).
        if not spec.is_postrelease and prospective.is_postrelease:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # Ensure that we do not allow a local version of the version
        # mentioned in the specifier, which is technically greater than, to
        # match.
        if prospective.local is not None:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # If we've gotten to here, it means that the prospective version is
        # both greater than the spec version *and* it's not a post-release of
        # the same version in the spec.
        return True
    def _compare_arbitrary(self, prospective, spec):
        return str(prospective).lower() == str(spec).lower()

    @property
    def prereleases(self):
        # If there is an explicit prereleases set for this, then we'll just
        # blindly use that.
        if self._prereleases is not None:
            return self._prereleases

        # Look at all of our specifiers and determine if they are inclusive
        # operators, and if they are, whether they include an explicit
        # prerelease.
        operator, version = self._spec
        if operator in ["==", ">=", "<=", "~=", "==="]:
            # The == specifier can include a trailing .*, if it does we
            # want to remove it before parsing.
            if operator == "==" and version.endswith(".*"):
                version = version[:-2]

            # Parse the version, and if it is a pre-release then this
            # specifier allows pre-releases.
            if parse(version).is_prerelease:
                return True

        return False

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value


_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$")


def _version_split(version):
    result = []
    for item in version.split("."):
        match = _prefix_regex.search(item)
        if match:
            result.extend(match.groups())
        else:
            result.append(item)
    return result


def _pad_version(left, right):
    left_split, right_split = [], []

    # Get the release segment of our versions
    left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left)))
    right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right)))

    # Get the rest of our versions
    left_split.append(left[len(left_split[0]) :])
    right_split.append(right[len(right_split[0]) :])

    # Insert our padding
    left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0])))
    right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0])))

    return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split)))
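# Illustrative sketch (not part of the upstream module): the compatible-
# release operator ~=2.2 behaves like >=2.2,==2.*, and == prefix matching
# ignores the local segment.
if __name__ == "__main__":
    s = Specifier("~=2.2")
    assert s.contains("2.5")
    assert not s.contains("3.0")
    assert Specifier("==2.2.*").contains("2.2.1+local")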
class SpecifierSet(BaseSpecifier):
    def __init__(self, specifiers="", prereleases=None):
        # Split on , to break each individual specifier into its own item,
        # and strip each item to remove leading/trailing whitespace.
        specifiers = [s.strip() for s in specifiers.split(",") if s.strip()]

        # Parse each individual specifier, attempting first to make it a
        # Specifier and falling back to a LegacySpecifier.
        parsed = set()
        for specifier in specifiers:
            try:
                parsed.add(Specifier(specifier))
            except InvalidSpecifier:
                parsed.add(LegacySpecifier(specifier))

        # Turn our parsed specifiers into a frozen set and save them for
        # later.
        self._specs = frozenset(parsed)

        # Store our prereleases value so we can use it later to determine if
        # we accept prereleases or not.
        self._prereleases = prereleases

    def __repr__(self):
        pre = (
            ", prereleases={0!r}".format(self.prereleases)
            if self._prereleases is not None
            else ""
        )

        return "<SpecifierSet({0!r}{1})>".format(str(self), pre)

    def __str__(self):
        return ",".join(sorted(str(s) for s in self._specs))

    def __hash__(self):
        return hash(self._specs)

    def __and__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        specifier = SpecifierSet()
        specifier._specs = frozenset(self._specs | other._specs)

        if self._prereleases is None and other._prereleases is not None:
            specifier._prereleases = other._prereleases
        elif self._prereleases is not None and other._prereleases is None:
            specifier._prereleases = self._prereleases
        elif self._prereleases == other._prereleases:
            specifier._prereleases = self._prereleases
        else:
            raise ValueError(
                "Cannot combine SpecifierSets with True and False prerelease "
                "overrides."
            )

        return specifier

    def __eq__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif isinstance(other, _IndividualSpecifier):
            other = SpecifierSet(str(other))
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        return self._specs == other._specs

    def __ne__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif isinstance(other, _IndividualSpecifier):
            other = SpecifierSet(str(other))
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        return self._specs != other._specs

    def __len__(self):
        return len(self._specs)

    def __iter__(self):
        return iter(self._specs)

    @property
    def prereleases(self):
        # If we have been given an explicit prerelease modifier, then we'll
        # pass that through here.
        if self._prereleases is not None:
            return self._prereleases

        # If we don't have any specifiers, and we don't have a forced value,
        # then we'll just return None since we don't know if this should have
        # pre-releases or not.
        if not self._specs:
            return None

        # Otherwise we'll see if any of the given specifiers accept
        # prereleases, if any of them do we'll return True, otherwise False.
        return any(s.prereleases for s in self._specs)

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value

    def __contains__(self, item):
        return self.contains(item)

    def contains(self, item, prereleases=None):
        # Ensure that our item is a Version or LegacyVersion instance.
        if not isinstance(item, (LegacyVersion, Version)):
            item = parse(item)

        # Determine if we're forcing a prerelease or not; if we're not
        # forcing one for this particular call, then we'll use whatever the
        # SpecifierSet thinks for whether or not we should support
        # prereleases.
        if prereleases is None:
            prereleases = self.prereleases

        # We can determine if we're going to allow pre-releases by looking to
        # see if any of the underlying items supports them. If none of them
        # do and this item is a pre-release then we do not allow it and we
        # can short-circuit that here.
        # Note: This means that 1.0.dev1 would not be contained in something
        # like >=1.0.devabc however it would be in >=1.0.debabc,>0.0.dev0
        if not prereleases and item.is_prerelease:
            return False

        # We simply dispatch to the underlying specs here to make sure that
        # the given version is contained within all of them.
        # Note: This use of all() here means that an empty set of specifiers
        # will always return True, this is an explicit design decision.
        return all(s.contains(item, prereleases=prereleases) for s in self._specs)
    def filter(self, iterable, prereleases=None):
        # Determine if we're forcing a prerelease or not; if we're not
        # forcing one for this particular filter call, then we'll use
        # whatever the SpecifierSet thinks for whether or not we should
        # support prereleases.
        if prereleases is None:
            prereleases = self.prereleases

        # If we have any specifiers, then we want to wrap our iterable in the
        # filter method for each one, this will act as a logical AND amongst
        # each specifier.
        if self._specs:
            for spec in self._specs:
                iterable = spec.filter(iterable, prereleases=bool(prereleases))
            return iterable
        # If we do not have any specifiers, then we need to have a rough
        # filter which will filter out any pre-releases, unless there are no
        # final releases, and which will filter out LegacyVersion in general.
        else:
            filtered = []
            found_prereleases = []

            for item in iterable:
                # Ensure that we have some kind of Version class for this
                # item.
                if not isinstance(item, (LegacyVersion, Version)):
                    parsed_version = parse(item)
                else:
                    parsed_version = item

                # Filter out any item which is parsed as a LegacyVersion
                if isinstance(parsed_version, LegacyVersion):
                    continue

                # Store any item which is a pre-release for later unless
                # we've already found a final version or we are accepting
                # prereleases
                if parsed_version.is_prerelease and not prereleases:
                    if not filtered:
                        found_prereleases.append(item)
                else:
                    filtered.append(item)

            # If we've found no items except for pre-releases, then we'll go
            # ahead and use the pre-releases
            if not filtered and found_prereleases and prereleases is None:
                return found_prereleases

            return filtered
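# Illustrative usage sketch (not part of the upstream module): a
# SpecifierSet ANDs its individual specifiers together, and filter()
# excludes pre-releases by default.
if __name__ == "__main__":
    spec_set = SpecifierSet(">=1.0,<2.0")
    assert "1.4" in spec_set
    assert list(spec_set.filter(["0.9", "1.2", "1.5.dev0", "2.1"])) == ["1.2"]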
""" try: version = Version(version) except InvalidVersion: # Legacy versions cannot be normalized return version parts = [] # Epoch if version.epoch != 0: parts.append("{0}!".format(version.epoch)) # Release segment # NB: This strips trailing '.0's to normalize parts.append(re.sub(r"(\.0)+$", "", ".".join(str(x) for x in version.release))) # Pre-release if version.pre is not None: parts.append("".join(str(x) for x in version.pre)) # Post-release if version.post is not None: parts.append(".post{0}".format(version.post)) # Development release if version.dev is not None: parts.append(".dev{0}".format(version.dev)) # Local version segment if version.local is not None: parts.append("+{0}".format(version.local)) return "".join(parts) ����������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/packaging/version.py���������������������0000644�0001750�0001750�00000027312�00000000000�030455� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This file is dual licensed under the terms of the Apache License, Version # 2.0, and the BSD License. See the LICENSE file in the root of this repository # for complete details. from __future__ import absolute_import, division, print_function import collections import itertools import re from ._structures import Infinity __all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"] _Version = collections.namedtuple( "_Version", ["epoch", "release", "dev", "pre", "post", "local"] ) def parse(version): """ Parse the given version string and return either a :class:`Version` object or a :class:`LegacyVersion` object depending on if the given version is a valid PEP 440 version or a legacy version. """ try: return Version(version) except InvalidVersion: return LegacyVersion(version) class InvalidVersion(ValueError): """ An invalid version was found, users should refer to PEP 440. 
""" class _BaseVersion(object): def __hash__(self): return hash(self._key) def __lt__(self, other): return self._compare(other, lambda s, o: s < o) def __le__(self, other): return self._compare(other, lambda s, o: s <= o) def __eq__(self, other): return self._compare(other, lambda s, o: s == o) def __ge__(self, other): return self._compare(other, lambda s, o: s >= o) def __gt__(self, other): return self._compare(other, lambda s, o: s > o) def __ne__(self, other): return self._compare(other, lambda s, o: s != o) def _compare(self, other, method): if not isinstance(other, _BaseVersion): return NotImplemented return method(self._key, other._key) class LegacyVersion(_BaseVersion): def __init__(self, version): self._version = str(version) self._key = _legacy_cmpkey(self._version) def __str__(self): return self._version def __repr__(self): return "<LegacyVersion({0})>".format(repr(str(self))) @property def public(self): return self._version @property def base_version(self): return self._version @property def epoch(self): return -1 @property def release(self): return None @property def pre(self): return None @property def post(self): return None @property def dev(self): return None @property def local(self): return None @property def is_prerelease(self): return False @property def is_postrelease(self): return False @property def is_devrelease(self): return False _legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE) _legacy_version_replacement_map = { "pre": "c", "preview": "c", "-": "final-", "rc": "c", "dev": "@", } def _parse_version_parts(s): for part in _legacy_version_component_re.split(s): part = _legacy_version_replacement_map.get(part, part) if not part or part == ".": continue if part[:1] in "0123456789": # pad for numeric comparison yield part.zfill(8) else: yield "*" + part # ensure that alpha/beta/candidate are before final yield "*final" def _legacy_cmpkey(version): # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch # greater than or equal to 0. This will effectively put the LegacyVersion, # which uses the defacto standard originally implemented by setuptools, # as before all PEP 440 versions. epoch = -1 # This scheme is taken from pkg_resources.parse_version setuptools prior to # it's adoption of the packaging library. parts = [] for part in _parse_version_parts(version.lower()): if part.startswith("*"): # remove "-" before a prerelease tag if part < "*final": while parts and parts[-1] == "*final-": parts.pop() # remove trailing zeros from each series of numeric parts while parts and parts[-1] == "00000000": parts.pop() parts.append(part) parts = tuple(parts) return epoch, parts # Deliberately not anchored to the start and end of the string, to make it # easier for 3rd party code to reuse VERSION_PATTERN = r""" v? (?: (?:(?P<epoch>[0-9]+)!)? # epoch (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment (?P<pre> # pre-release [-_\.]? (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview)) [-_\.]? (?P<pre_n>[0-9]+)? )? (?P<post> # post release (?:-(?P<post_n1>[0-9]+)) | (?: [-_\.]? (?P<post_l>post|rev|r) [-_\.]? (?P<post_n2>[0-9]+)? ) )? (?P<dev> # dev release [-_\.]? (?P<dev_l>dev) [-_\.]? (?P<dev_n>[0-9]+)? )? ) (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? 
# Deliberately not anchored to the start and end of the string, to make it
# easier for 3rd party code to reuse
VERSION_PATTERN = r"""
    v?
    (?:
        (?:(?P<epoch>[0-9]+)!)?                           # epoch
        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
        (?P<pre>                                          # pre-release
            [-_\.]?
            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
            [-_\.]?
            (?P<pre_n>[0-9]+)?
        )?
        (?P<post>                                         # post release
            (?:-(?P<post_n1>[0-9]+))
            |
            (?:
                [-_\.]?
                (?P<post_l>post|rev|r)
                [-_\.]?
                (?P<post_n2>[0-9]+)?
            )
        )?
        (?P<dev>                                          # dev release
            [-_\.]?
            (?P<dev_l>dev)
            [-_\.]?
            (?P<dev_n>[0-9]+)?
        )?
    )
    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
"""


class Version(_BaseVersion):

    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)

    def __init__(self, version):
        # Validate the version and parse it into pieces
        match = self._regex.search(version)
        if not match:
            raise InvalidVersion("Invalid version: '{0}'".format(version))

        # Store the parsed out pieces of the version
        self._version = _Version(
            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
            release=tuple(int(i) for i in match.group("release").split(".")),
            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
            post=_parse_letter_version(
                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
            ),
            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
            local=_parse_local_version(match.group("local")),
        )

        # Generate a key which will be used for sorting
        self._key = _cmpkey(
            self._version.epoch,
            self._version.release,
            self._version.pre,
            self._version.post,
            self._version.dev,
            self._version.local,
        )

    def __repr__(self):
        return "<Version({0})>".format(repr(str(self)))

    def __str__(self):
        parts = []

        # Epoch
        if self.epoch != 0:
            parts.append("{0}!".format(self.epoch))

        # Release segment
        parts.append(".".join(str(x) for x in self.release))

        # Pre-release
        if self.pre is not None:
            parts.append("".join(str(x) for x in self.pre))

        # Post-release
        if self.post is not None:
            parts.append(".post{0}".format(self.post))

        # Development release
        if self.dev is not None:
            parts.append(".dev{0}".format(self.dev))

        # Local version segment
        if self.local is not None:
            parts.append("+{0}".format(self.local))

        return "".join(parts)

    @property
    def epoch(self):
        return self._version.epoch

    @property
    def release(self):
        return self._version.release

    @property
    def pre(self):
        return self._version.pre

    @property
    def post(self):
        return self._version.post[1] if self._version.post else None

    @property
    def dev(self):
        return self._version.dev[1] if self._version.dev else None

    @property
    def local(self):
        if self._version.local:
            return ".".join(str(x) for x in self._version.local)
        else:
            return None

    @property
    def public(self):
        return str(self).split("+", 1)[0]

    @property
    def base_version(self):
        parts = []

        # Epoch
        if self.epoch != 0:
            parts.append("{0}!".format(self.epoch))

        # Release segment
        parts.append(".".join(str(x) for x in self.release))

        return "".join(parts)

    @property
    def is_prerelease(self):
        return self.dev is not None or self.pre is not None

    @property
    def is_postrelease(self):
        return self.post is not None

    @property
    def is_devrelease(self):
        return self.dev is not None


def _parse_letter_version(letter, number):
    if letter:
        # We consider there to be an implicit 0 in a pre-release if there is
        # not a numeral associated with it.
        if number is None:
            number = 0

        # We normalize any letters to their lower case form
        letter = letter.lower()

        # We consider some words to be alternate spellings of other words and
        # in those cases we want to normalize the spellings to our preferred
        # spelling.
        if letter == "alpha":
            letter = "a"
        elif letter == "beta":
            letter = "b"
        elif letter in ["c", "pre", "preview"]:
            letter = "rc"
        elif letter in ["rev", "r"]:
            letter = "post"

        return letter, int(number)
    if not letter and number:
        # We assume if we are given a number, but we are not given a letter
        # then this is using the implicit post release syntax (e.g. 1.0-1)
        letter = "post"

        return letter, int(number)
""" if local is not None: return tuple( part.lower() if not part.isdigit() else int(part) for part in _local_version_separators.split(local) ) def _cmpkey(epoch, release, pre, post, dev, local): # When we compare a release version, we want to compare it with all of the # trailing zeros removed. So we'll use a reverse the list, drop all the now # leading zeros until we come to something non zero, then take the rest # re-reverse it back into the correct order and make it a tuple and use # that for our sorting key. release = tuple( reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release)))) ) # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0. # We'll do this by abusing the pre segment, but we _only_ want to do this # if there is not a pre or a post segment. If we have one of those then # the normal sorting rules will handle this case correctly. if pre is None and post is None and dev is not None: pre = -Infinity # Versions without a pre-release (except as noted above) should sort after # those with one. elif pre is None: pre = Infinity # Versions without a post segment should sort before those with one. if post is None: post = -Infinity # Versions without a development segment should sort after those with one. if dev is None: dev = Infinity if local is None: # Versions without a local segment should sort before those with one. local = -Infinity else: # Versions with a local segment need that segment parsed to implement # the sorting rules in PEP440. # - Alpha numeric segments sort before numeric segments # - Alpha numeric segments sort lexicographically # - Numeric segments sort numerically # - Shorter versions sort before longer versions when the prefixes # match exactly local = tuple((i, "") if isinstance(i, int) else (-Infinity, i) for i in local) return epoch, release, pre, post, dev, local ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000034�00000000000�011452� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������28 mtime=1604735100.0099943 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/����������������������������������0000755�0001750�0001750�00000000000�00000000000�025506� 
# === cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/__init__.py ===
"""Wrappers to build Python packages using PEP 517 hooks
"""

__version__ = '0.5.0'
# === cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py ===
"""This is invoked in a subprocess to call the build backend hooks.

It expects:
- Command line args: hook_name, control_dir
- Environment variable: PEP517_BUILD_BACKEND=entry.point:spec
- control_dir/input.json:
  - {"kwargs": {...}}

Results:
- control_dir/output.json
  - {"return_val": ...}
"""
from glob import glob
from importlib import import_module
import os
from os.path import join as pjoin
import re
import shutil
import sys

# This is run as a script, not a module, so it can't do a relative import
import compat


class BackendUnavailable(Exception):
    """Raised if we cannot import the backend"""


def _build_backend():
    """Find and load the build backend"""
    ep = os.environ['PEP517_BUILD_BACKEND']
    mod_path, _, obj_path = ep.partition(':')
    try:
        obj = import_module(mod_path)
    except ImportError:
        raise BackendUnavailable
    if obj_path:
        for path_part in obj_path.split('.'):
            obj = getattr(obj, path_part)
    return obj


def get_requires_for_build_wheel(config_settings):
    """Invoke the optional get_requires_for_build_wheel hook

    Returns [] if the hook is not defined.
    """
    backend = _build_backend()
    try:
        hook = backend.get_requires_for_build_wheel
    except AttributeError:
        return []
    else:
        return hook(config_settings)


def prepare_metadata_for_build_wheel(metadata_directory, config_settings):
    """Invoke optional prepare_metadata_for_build_wheel

    Implements a fallback by building a wheel if the hook isn't defined.
    """
    backend = _build_backend()
    try:
        hook = backend.prepare_metadata_for_build_wheel
    except AttributeError:
        return _get_wheel_metadata_from_wheel(backend, metadata_directory,
                                              config_settings)
    else:
        return hook(metadata_directory, config_settings)


WHEEL_BUILT_MARKER = 'PEP517_ALREADY_BUILT_WHEEL'


def _dist_info_files(whl_zip):
    """Identify the .dist-info folder inside a wheel ZipFile."""
    res = []
    for path in whl_zip.namelist():
        m = re.match(r'[^/\\]+-[^/\\]+\.dist-info/', path)
        if m:
            res.append(path)
    if res:
        return res
    raise Exception("No .dist-info folder found in wheel")
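# Sketch of the resolution performed by _build_backend() above: the env var
# names a module, optionally followed by a colon-separated attribute path
# (the "mybackend.hooks:api" backend string is hypothetical):
#
#   PEP517_BUILD_BACKEND=setuptools.build_meta -> import setuptools.build_meta
#   PEP517_BUILD_BACKEND=mybackend.hooks:api   -> import mybackend.hooks, then
#                                                 walk getattr(..., "api")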
""" from zipfile import ZipFile whl_basename = backend.build_wheel(metadata_directory, config_settings) with open(os.path.join(metadata_directory, WHEEL_BUILT_MARKER), 'wb'): pass # Touch marker file whl_file = os.path.join(metadata_directory, whl_basename) with ZipFile(whl_file) as zipf: dist_info = _dist_info_files(zipf) zipf.extractall(path=metadata_directory, members=dist_info) return dist_info[0].split('/')[0] def _find_already_built_wheel(metadata_directory): """Check for a wheel already built during the get_wheel_metadata hook. """ if not metadata_directory: return None metadata_parent = os.path.dirname(metadata_directory) if not os.path.isfile(pjoin(metadata_parent, WHEEL_BUILT_MARKER)): return None whl_files = glob(os.path.join(metadata_parent, '*.whl')) if not whl_files: print('Found wheel built marker, but no .whl files') return None if len(whl_files) > 1: print('Found multiple .whl files; unspecified behaviour. ' 'Will call build_wheel.') return None # Exactly one .whl file return whl_files[0] def build_wheel(wheel_directory, config_settings, metadata_directory=None): """Invoke the mandatory build_wheel hook. If a wheel was already built in the prepare_metadata_for_build_wheel fallback, this will copy it rather than rebuilding the wheel. """ prebuilt_whl = _find_already_built_wheel(metadata_directory) if prebuilt_whl: shutil.copy2(prebuilt_whl, wheel_directory) return os.path.basename(prebuilt_whl) return _build_backend().build_wheel(wheel_directory, config_settings, metadata_directory) def get_requires_for_build_sdist(config_settings): """Invoke the optional get_requires_for_build_wheel hook Returns [] if the hook is not defined. """ backend = _build_backend() try: hook = backend.get_requires_for_build_sdist except AttributeError: return [] else: return hook(config_settings) class _DummyException(Exception): """Nothing should ever raise this exception""" class GotUnsupportedOperation(Exception): """For internal use when backend raises UnsupportedOperation""" def build_sdist(sdist_directory, config_settings): """Invoke the mandatory build_sdist hook.""" backend = _build_backend() try: return backend.build_sdist(sdist_directory, config_settings) except getattr(backend, 'UnsupportedOperation', _DummyException): raise GotUnsupportedOperation HOOK_NAMES = { 'get_requires_for_build_wheel', 'prepare_metadata_for_build_wheel', 'build_wheel', 'get_requires_for_build_sdist', 'build_sdist', } def main(): if len(sys.argv) < 3: sys.exit("Needs args: hook_name, control_dir") hook_name = sys.argv[1] control_dir = sys.argv[2] if hook_name not in HOOK_NAMES: sys.exit("Unknown hook: %s" % hook_name) hook = globals()[hook_name] hook_input = compat.read_json(pjoin(control_dir, 'input.json')) json_out = {'unsupported': False, 'return_val': None} try: json_out['return_val'] = hook(**hook_input['kwargs']) except BackendUnavailable: json_out['no_backend'] = True except GotUnsupportedOperation: json_out['unsupported'] = True compat.write_json(json_out, pjoin(control_dir, 'output.json'), indent=2) if __name__ == '__main__': main() �����������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/build.py:

"""Build a project using PEP 517 hooks.
"""
import argparse
import logging
import os
import contextlib
from pip._vendor import pytoml
import shutil
import errno
import tempfile

from .envbuild import BuildEnvironment
from .wrappers import Pep517HookCaller

log = logging.getLogger(__name__)


@contextlib.contextmanager
def tempdir():
    td = tempfile.mkdtemp()
    try:
        yield td
    finally:
        shutil.rmtree(td)


def _do_build(hooks, env, dist, dest):
    get_requires_name = 'get_requires_for_build_{dist}'.format(**locals())
    get_requires = getattr(hooks, get_requires_name)
    reqs = get_requires({})
    log.info('Got build requires: %s', reqs)

    env.pip_install(reqs)
    log.info('Installed dynamic build dependencies')

    with tempdir() as td:
        log.info('Trying to build %s in %s', dist, td)
        build_name = 'build_{dist}'.format(**locals())
        build = getattr(hooks, build_name)
        filename = build(td, {})
        source = os.path.join(td, filename)
        shutil.move(source, os.path.join(dest, os.path.basename(filename)))
""" try: return os.mkdir(*args, **kwargs) except OSError as exc: if exc.errno != errno.EEXIST: raise def build(source_dir, dist, dest=None): pyproject = os.path.join(source_dir, 'pyproject.toml') dest = os.path.join(source_dir, dest or 'dist') mkdir_p(dest) with open(pyproject) as f: pyproject_data = pytoml.load(f) # Ensure the mandatory data can be loaded buildsys = pyproject_data['build-system'] requires = buildsys['requires'] backend = buildsys['build-backend'] hooks = Pep517HookCaller(source_dir, backend) with BuildEnvironment() as env: env.pip_install(requires) _do_build(hooks, env, dist, dest) parser = argparse.ArgumentParser() parser.add_argument( 'source_dir', help="A directory containing pyproject.toml", ) parser.add_argument( '--binary', '-b', action='store_true', default=False, ) parser.add_argument( '--source', '-s', action='store_true', default=False, ) parser.add_argument( '--out-dir', '-o', help="Destination in which to save the builds relative to source dir", ) def main(args): # determine which dists to build dists = list(filter(None, ( 'sdist' if args.source or not args.binary else None, 'wheel' if args.binary or not args.source else None, ))) for dist in dists: build(args.source_dir, dist, args.out_dir) if __name__ == '__main__': main(parser.parse_args()) �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/check.py��������������������������0000644�0001750�0001750�00000013375�00000000000�027146� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������"""Check a project and backend by attempting to build using PEP 517 hooks. 
""" import argparse import logging import os from os.path import isfile, join as pjoin from pip._vendor.pytoml import TomlError, load as toml_load import shutil from subprocess import CalledProcessError import sys import tarfile from tempfile import mkdtemp import zipfile from .colorlog import enable_colourful_output from .envbuild import BuildEnvironment from .wrappers import Pep517HookCaller log = logging.getLogger(__name__) def check_build_sdist(hooks, build_sys_requires): with BuildEnvironment() as env: try: env.pip_install(build_sys_requires) log.info('Installed static build dependencies') except CalledProcessError: log.error('Failed to install static build dependencies') return False try: reqs = hooks.get_requires_for_build_sdist({}) log.info('Got build requires: %s', reqs) except Exception: log.error('Failure in get_requires_for_build_sdist', exc_info=True) return False try: env.pip_install(reqs) log.info('Installed dynamic build dependencies') except CalledProcessError: log.error('Failed to install dynamic build dependencies') return False td = mkdtemp() log.info('Trying to build sdist in %s', td) try: try: filename = hooks.build_sdist(td, {}) log.info('build_sdist returned %r', filename) except Exception: log.info('Failure in build_sdist', exc_info=True) return False if not filename.endswith('.tar.gz'): log.error( "Filename %s doesn't have .tar.gz extension", filename) return False path = pjoin(td, filename) if isfile(path): log.info("Output file %s exists", path) else: log.error("Output file %s does not exist", path) return False if tarfile.is_tarfile(path): log.info("Output file is a tar file") else: log.error("Output file is not a tar file") return False finally: shutil.rmtree(td) return True def check_build_wheel(hooks, build_sys_requires): with BuildEnvironment() as env: try: env.pip_install(build_sys_requires) log.info('Installed static build dependencies') except CalledProcessError: log.error('Failed to install static build dependencies') return False try: reqs = hooks.get_requires_for_build_wheel({}) log.info('Got build requires: %s', reqs) except Exception: log.error('Failure in get_requires_for_build_sdist', exc_info=True) return False try: env.pip_install(reqs) log.info('Installed dynamic build dependencies') except CalledProcessError: log.error('Failed to install dynamic build dependencies') return False td = mkdtemp() log.info('Trying to build wheel in %s', td) try: try: filename = hooks.build_wheel(td, {}) log.info('build_wheel returned %r', filename) except Exception: log.info('Failure in build_wheel', exc_info=True) return False if not filename.endswith('.whl'): log.error("Filename %s doesn't have .whl extension", filename) return False path = pjoin(td, filename) if isfile(path): log.info("Output file %s exists", path) else: log.error("Output file %s does not exist", path) return False if zipfile.is_zipfile(path): log.info("Output file is a zip file") else: log.error("Output file is not a zip file") return False finally: shutil.rmtree(td) return True def check(source_dir): pyproject = pjoin(source_dir, 'pyproject.toml') if isfile(pyproject): log.info('Found pyproject.toml') else: log.error('Missing pyproject.toml') return False try: with open(pyproject) as f: pyproject_data = toml_load(f) # Ensure the mandatory data can be loaded buildsys = pyproject_data['build-system'] requires = buildsys['requires'] backend = buildsys['build-backend'] log.info('Loaded pyproject.toml') except (TomlError, KeyError): log.error("Invalid pyproject.toml", exc_info=True) return False 
    hooks = Pep517HookCaller(source_dir, backend)

    sdist_ok = check_build_sdist(hooks, requires)
    wheel_ok = check_build_wheel(hooks, requires)

    if not sdist_ok:
        log.warning('Sdist checks failed; scroll up to see')
    if not wheel_ok:
        log.warning('Wheel checks failed')

    return sdist_ok


def main(argv=None):
    ap = argparse.ArgumentParser()
    ap.add_argument(
        'source_dir',
        help="A directory containing pyproject.toml")
    args = ap.parse_args(argv)

    enable_colourful_output()

    ok = check(args.source_dir)

    if ok:
        print(ansi('Checks passed', 'green'))
    else:
        print(ansi('Checks failed', 'red'))
        sys.exit(1)


ansi_codes = {
    'reset': '\x1b[0m',
    'bold': '\x1b[1m',
    'red': '\x1b[31m',
    'green': '\x1b[32m',
}


def ansi(s, attr):
    if os.name != 'nt' and sys.stdout.isatty():
        return ansi_codes[attr] + str(s) + ansi_codes['reset']
    else:
        return str(s)


if __name__ == '__main__':
    main()
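
A minimal usage sketch of the check() entry point, assuming a project
directory './myproject' (a placeholder name):

# Sketch: run the sdist/wheel conformance checks from Python.
import logging
from pip._vendor.pep517.check import check

logging.basicConfig(level=logging.INFO)
ok = check('./myproject')
print('passed' if ok else 'failed')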
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/colorlog.py:

"""Nicer log formatting with colours.

Code copied from Tornado, Apache licensed.
"""
# Copyright 2012 Facebook
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import sys

try:
    import curses
except ImportError:
    curses = None


def _stderr_supports_color():
    color = False
    if curses and hasattr(sys.stderr, 'isatty') and sys.stderr.isatty():
        try:
            curses.setupterm()
            if curses.tigetnum("colors") > 0:
                color = True
        except Exception:
            pass
    return color


class LogFormatter(logging.Formatter):
    """Log formatter with colour support
    """
    DEFAULT_COLORS = {
        logging.INFO: 2,  # Green
        logging.WARNING: 3,  # Yellow
        logging.ERROR: 1,  # Red
        logging.CRITICAL: 1,
    }

    def __init__(self, color=True, datefmt=None):
        r"""
        :arg bool color: Enables color support.
        :arg string fmt: Log message format.
          It will be applied to the attributes dict of log records. The
          text between ``%(color)s`` and ``%(end_color)s`` will be colored
          depending on the level if color support is on.
        :arg dict colors: color mappings from logging level to terminal color
          code
        :arg string datefmt: Datetime format.
          Used for formatting ``(asctime)`` placeholder in ``prefix_fmt``.

        .. versionchanged:: 3.2
           Added ``fmt`` and ``datefmt`` arguments.
        """
        logging.Formatter.__init__(self, datefmt=datefmt)

        self._colors = {}
        if color and _stderr_supports_color():
            # The curses module has some str/bytes confusion in
            # python3. Until version 3.2.3, most methods return
            # bytes, but only accept strings. In addition, we want to
            # output these strings with the logging module, which
            # works with unicode strings. The explicit calls to
            # unicode() below are harmless in python2 but will do the
            # right conversion in python 3.
            fg_color = (curses.tigetstr("setaf") or
                        curses.tigetstr("setf") or "")

            if (3, 0) < sys.version_info < (3, 2, 3):
                fg_color = str(fg_color, "ascii")

            for levelno, code in self.DEFAULT_COLORS.items():
                self._colors[levelno] = str(
                    curses.tparm(fg_color, code), "ascii")
            self._normal = str(curses.tigetstr("sgr0"), "ascii")

            scr = curses.initscr()
            self.termwidth = scr.getmaxyx()[1]
            curses.endwin()
        else:
            self._normal = ''
            # Default width is usually 80, but too wide is
            # worse than too narrow
            self.termwidth = 70

    def formatMessage(self, record):
        mlen = len(record.message)
        right_text = '{initial}-{name}'.format(initial=record.levelname[0],
                                               name=record.name)
        if mlen + len(right_text) < self.termwidth:
            space = ' ' * (self.termwidth - (mlen + len(right_text)))
        else:
            space = ' '

        if record.levelno in self._colors:
            start_color = self._colors[record.levelno]
            end_color = self._normal
        else:
            start_color = end_color = ''

        return record.message + space + start_color + right_text + end_color


def enable_colourful_output(level=logging.INFO):
    handler = logging.StreamHandler()
    handler.setFormatter(LogFormatter())
    logging.root.addHandler(handler)
    logging.root.setLevel(level)
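
A short sketch of wiring this formatter into the logging tree; the 'demo'
logger name is arbitrary:

# Sketch: enable the colour formatter defined above on the root logger.
import logging
from pip._vendor.pep517.colorlog import enable_colourful_output

enable_colourful_output(logging.DEBUG)
logging.getLogger('demo').info('hello')  # message plus right-aligned "I-demo" tag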
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/compat.py:

"""Handle reading and writing JSON in UTF-8, on Python 3 and 2."""
import json
import sys

if sys.version_info[0] >= 3:
    # Python 3
    def write_json(obj, path, **kwargs):
        with open(path, 'w', encoding='utf-8') as f:
            json.dump(obj, f, **kwargs)

    def read_json(path):
        with open(path, 'r', encoding='utf-8') as f:
            return json.load(f)

else:
    # Python 2
    def write_json(obj, path, **kwargs):
        with open(path, 'wb') as f:
            json.dump(obj, f, encoding='utf-8', **kwargs)

    def read_json(path):
        with open(path, 'rb') as f:
            return json.load(f)
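
These two helpers back the input.json/output.json exchange used by
_in_process.py; a round-trip sketch follows (the temporary path is
illustrative):

# Sketch: write and re-read a JSON payload with the compat helpers.
import os
import tempfile
from pip._vendor.pep517.compat import read_json, write_json

path = os.path.join(tempfile.mkdtemp(), 'input.json')
write_json({'kwargs': {'config_settings': None}}, path, indent=2)
assert read_json(path) == {'kwargs': {'config_settings': None}}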
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/envbuild.py:

"""Build wheels/sdists by installing build deps to a temporary environment.
"""

import os
import logging
from pip._vendor import pytoml
import shutil
from subprocess import check_call
import sys
from sysconfig import get_paths
from tempfile import mkdtemp

from .wrappers import Pep517HookCaller

log = logging.getLogger(__name__)


def _load_pyproject(source_dir):
    with open(os.path.join(source_dir, 'pyproject.toml')) as f:
        pyproject_data = pytoml.load(f)
    buildsys = pyproject_data['build-system']
    return buildsys['requires'], buildsys['build-backend']


class BuildEnvironment(object):
    """Context manager to install build deps in a simple temporary environment

    Based on code I wrote for pip, which is MIT licensed.
    """
    # Copyright (c) 2008-2016 The pip developers (see AUTHORS.txt file)
    #
    # Permission is hereby granted, free of charge, to any person obtaining
    # a copy of this software and associated documentation files (the
    # "Software"), to deal in the Software without restriction, including
    # without limitation the rights to use, copy, modify, merge, publish,
    # distribute, sublicense, and/or sell copies of the Software, and to
    # permit persons to whom the Software is furnished to do so, subject to
    # the following conditions:
    #
    # The above copyright notice and this permission notice shall be
    # included in all copies or substantial portions of the Software.
    #
    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
    # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
    # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
    # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
    # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
    # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
    # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
    path = None

    def __init__(self, cleanup=True):
        self._cleanup = cleanup

    def __enter__(self):
        self.path = mkdtemp(prefix='pep517-build-env-')
        log.info('Temporary build environment: %s', self.path)

        self.save_path = os.environ.get('PATH', None)
        self.save_pythonpath = os.environ.get('PYTHONPATH', None)

        install_scheme = 'nt' if (os.name == 'nt') else 'posix_prefix'
        install_dirs = get_paths(install_scheme, vars={
            'base': self.path,
            'platbase': self.path,
        })

        scripts = install_dirs['scripts']
        if self.save_path:
            os.environ['PATH'] = scripts + os.pathsep + self.save_path
        else:
            os.environ['PATH'] = scripts + os.pathsep + os.defpath

        if install_dirs['purelib'] == install_dirs['platlib']:
            lib_dirs = install_dirs['purelib']
        else:
            lib_dirs = install_dirs['purelib'] + os.pathsep + \
                install_dirs['platlib']
        if self.save_pythonpath:
            os.environ['PYTHONPATH'] = lib_dirs + os.pathsep + \
                self.save_pythonpath
        else:
            os.environ['PYTHONPATH'] = lib_dirs

        return self

    def pip_install(self, reqs):
        """Install dependencies into this env by calling pip in a subprocess"""
        if not reqs:
            return
        log.info('Calling pip to install %s', reqs)
        check_call([
            sys.executable, '-m', 'pip', 'install', '--ignore-installed',
            '--prefix', self.path] + list(reqs))

    def __exit__(self, exc_type, exc_val, exc_tb):
        needs_cleanup = (
            self._cleanup and
            self.path is not None and
            os.path.isdir(self.path)
        )
        if needs_cleanup:
            shutil.rmtree(self.path)

        if self.save_path is None:
            os.environ.pop('PATH', None)
        else:
            os.environ['PATH'] = self.save_path

        if self.save_pythonpath is None:
            os.environ.pop('PYTHONPATH', None)
        else:
            os.environ['PYTHONPATH'] = self.save_pythonpath


def build_wheel(source_dir, wheel_dir, config_settings=None):
    """Build a wheel from a source directory using PEP 517 hooks.

    :param str source_dir: Source directory containing pyproject.toml
    :param str wheel_dir: Target directory to create wheel in
    :param dict config_settings: Options to pass to build backend

    This is a blocking function which will run pip in a subprocess to install
    build requirements.
    """
    if config_settings is None:
        config_settings = {}
    requires, backend = _load_pyproject(source_dir)
    hooks = Pep517HookCaller(source_dir, backend)

    with BuildEnvironment() as env:
        env.pip_install(requires)
        reqs = hooks.get_requires_for_build_wheel(config_settings)
        env.pip_install(reqs)
        return hooks.build_wheel(wheel_dir, config_settings)


def build_sdist(source_dir, sdist_dir, config_settings=None):
    """Build an sdist from a source directory using PEP 517 hooks.

    :param str source_dir: Source directory containing pyproject.toml
    :param str sdist_dir: Target directory to place sdist in
    :param dict config_settings: Options to pass to build backend

    This is a blocking function which will run pip in a subprocess to install
    build requirements.
""" if config_settings is None: config_settings = {} requires, backend = _load_pyproject(source_dir) hooks = Pep517HookCaller(source_dir, backend) with BuildEnvironment() as env: env.pip_install(requires) reqs = hooks.get_requires_for_build_sdist(config_settings) env.pip_install(reqs) return hooks.build_sdist(sdist_dir, config_settings) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/wrappers.py�����������������������0000644�0001750�0001750�00000013430�00000000000�027724� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from contextlib import contextmanager import os from os.path import dirname, abspath, join as pjoin import shutil from subprocess import check_call import sys from tempfile import mkdtemp from . import compat _in_proc_script = pjoin(dirname(abspath(__file__)), '_in_process.py') @contextmanager def tempdir(): td = mkdtemp() try: yield td finally: shutil.rmtree(td) class BackendUnavailable(Exception): """Will be raised if the backend cannot be imported in the hook process.""" class UnsupportedOperation(Exception): """May be raised by build_sdist if the backend indicates that it can't.""" def default_subprocess_runner(cmd, cwd=None, extra_environ=None): """The default method of calling the wrapper subprocess.""" env = os.environ.copy() if extra_environ: env.update(extra_environ) check_call(cmd, cwd=cwd, env=env) class Pep517HookCaller(object): """A wrapper around a source directory to be built with a PEP 517 backend. source_dir : The path to the source directory, containing pyproject.toml. backend : The build backend spec, as per PEP 517, from pyproject.toml. """ def __init__(self, source_dir, build_backend): self.source_dir = abspath(source_dir) self.build_backend = build_backend self._subprocess_runner = default_subprocess_runner # TODO: Is this over-engineered? 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pep517/wrappers.py:

from contextlib import contextmanager
import os
from os.path import dirname, abspath, join as pjoin
import shutil
from subprocess import check_call
import sys
from tempfile import mkdtemp

from . import compat

_in_proc_script = pjoin(dirname(abspath(__file__)), '_in_process.py')


@contextmanager
def tempdir():
    td = mkdtemp()
    try:
        yield td
    finally:
        shutil.rmtree(td)


class BackendUnavailable(Exception):
    """Will be raised if the backend cannot be imported in the hook process."""


class UnsupportedOperation(Exception):
    """May be raised by build_sdist if the backend indicates that it can't."""


def default_subprocess_runner(cmd, cwd=None, extra_environ=None):
    """The default method of calling the wrapper subprocess."""
    env = os.environ.copy()
    if extra_environ:
        env.update(extra_environ)

    check_call(cmd, cwd=cwd, env=env)


class Pep517HookCaller(object):
    """A wrapper around a source directory to be built with a PEP 517 backend.

    source_dir : The path to the source directory, containing pyproject.toml.
    backend : The build backend spec, as per PEP 517, from pyproject.toml.
    """
    def __init__(self, source_dir, build_backend):
        self.source_dir = abspath(source_dir)
        self.build_backend = build_backend
        self._subprocess_runner = default_subprocess_runner

    # TODO: Is this over-engineered? Maybe frontends only need to
    # set this when creating the wrapper, not on every call.
    @contextmanager
    def subprocess_runner(self, runner):
        prev = self._subprocess_runner
        self._subprocess_runner = runner
        yield
        self._subprocess_runner = prev

    def get_requires_for_build_wheel(self, config_settings=None):
        """Identify packages required for building a wheel

        Returns a list of dependency specifications, e.g.:
            ["wheel >= 0.25", "setuptools"]

        This does not include requirements specified in pyproject.toml.
        It returns the result of calling the equivalently named hook in a
        subprocess.
        """
        return self._call_hook('get_requires_for_build_wheel', {
            'config_settings': config_settings
        })

    def prepare_metadata_for_build_wheel(
            self, metadata_directory, config_settings=None):
        """Prepare a *.dist-info folder with metadata for this project.

        Returns the name of the newly created folder.

        If the build backend defines a hook with this name, it will be called
        in a subprocess. If not, the backend will be asked to build a wheel,
        and the dist-info extracted from that.
        """
        return self._call_hook('prepare_metadata_for_build_wheel', {
            'metadata_directory': abspath(metadata_directory),
            'config_settings': config_settings,
        })

    def build_wheel(
            self, wheel_directory, config_settings=None,
            metadata_directory=None):
        """Build a wheel from this project.

        Returns the name of the newly created file.

        In general, this will call the 'build_wheel' hook in the backend.
        However, if that was previously called by
        'prepare_metadata_for_build_wheel', and the same metadata_directory is
        used, the previously built wheel will be copied to wheel_directory.
        """
        if metadata_directory is not None:
            metadata_directory = abspath(metadata_directory)
        return self._call_hook('build_wheel', {
            'wheel_directory': abspath(wheel_directory),
            'config_settings': config_settings,
            'metadata_directory': metadata_directory,
        })

    def get_requires_for_build_sdist(self, config_settings=None):
        """Identify packages required for building an sdist

        Returns a list of dependency specifications, e.g.:
            ["setuptools >= 26"]

        This does not include requirements specified in pyproject.toml.
        It returns the result of calling the equivalently named hook in a
        subprocess.
        """
        return self._call_hook('get_requires_for_build_sdist', {
            'config_settings': config_settings
        })

    def build_sdist(self, sdist_directory, config_settings=None):
        """Build an sdist from this project.

        Returns the name of the newly created file.

        This calls the 'build_sdist' backend hook in a subprocess.
        """
        return self._call_hook('build_sdist', {
            'sdist_directory': abspath(sdist_directory),
            'config_settings': config_settings,
        })

    def _call_hook(self, hook_name, kwargs):
        # On Python 2, pytoml returns Unicode values (which is correct) but
        # the environment passed to check_call needs to contain string
        # values. We convert here by encoding using ASCII (the backend can
        # only contain letters, digits and _, . and : characters, and will
        # be used as a Python identifier, so non-ASCII content is wrong on
        # Python 2 in any case).
        if sys.version_info[0] == 2:
            build_backend = self.build_backend.encode('ASCII')
        else:
            build_backend = self.build_backend

        with tempdir() as td:
            compat.write_json({'kwargs': kwargs}, pjoin(td, 'input.json'),
                              indent=2)

            # Run the hook in a subprocess
            self._subprocess_runner(
                [sys.executable, _in_proc_script, hook_name, td],
                cwd=self.source_dir,
                extra_environ={'PEP517_BUILD_BACKEND': build_backend}
            )

            data = compat.read_json(pjoin(td, 'output.json'))
            if data.get('unsupported'):
                raise UnsupportedOperation
            if data.get('no_backend'):
                raise BackendUnavailable
            return data['return_val']
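
A hedged sketch of driving the wrapper directly, including a custom runner
installed via the subprocess_runner context manager; the './myproject' path
and the backend spec 'flit_core.buildapi' are only examples of the PEP 517
entry-point syntax, not requirements of this module:

# Sketch: call PEP 517 hooks for a source tree via Pep517HookCaller.
import os
import subprocess
from pip._vendor.pep517.wrappers import Pep517HookCaller

hooks = Pep517HookCaller('./myproject', 'flit_core.buildapi')
print(hooks.get_requires_for_build_wheel())

def quiet_runner(cmd, cwd=None, extra_environ=None):
    # Same contract as default_subprocess_runner, but swallows stdout.
    env = os.environ.copy()
    if extra_environ:
        env.update(extra_environ)
    subprocess.check_output(cmd, cwd=cwd, env=env)

with hooks.subprocess_runner(quiet_runner):
    whl_name = hooks.build_wheel('./wheelhouse')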
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pkg_resources/__init__.py:

# coding: utf-8
"""
Package resource API
--------------------

A resource is a logical file contained within a package, or a logical
subdirectory thereof.  The package resource API expects resource names
to have their path parts separated with ``/``, *not* whatever the local
path separator is.  Do not use os.path operations to manipulate resource
names being passed into the API.

The package resource API is designed to work with normal filesystem packages,
.egg files, and unpacked .egg files.  It can also work in a limited way with
.zip files and with custom PEP 302 loaders that support the ``get_data()``
method.
"""

from __future__ import absolute_import

import sys
import os
import io
import time
import re
import types
import zipfile
import zipimport
import warnings
import stat
import functools
import pkgutil
import operator
import platform
import collections
import plistlib
import email.parser
import errno
import tempfile
import textwrap
import itertools
import inspect
import ntpath
import posixpath
from pkgutil import get_importer

try:
    import _imp
except ImportError:
    # Python 3.2 compatibility
    import imp as _imp

try:
    FileExistsError
except NameError:
    FileExistsError = OSError

from pip._vendor import six
from pip._vendor.six.moves import urllib, map, filter

# capture these to bypass sandboxing
from os import utime
try:
    from os import mkdir, rename, unlink
    WRITE_SUPPORT = True
except ImportError:
    # no write support, probably under GAE
    WRITE_SUPPORT = False

from os import open as os_open
from os.path import isdir, split

try:
    import importlib.machinery as importlib_machinery
    # access attribute to force import under delayed import mechanisms.
    importlib_machinery.__name__
except ImportError:
    importlib_machinery = None

from . import py31compat
from pip._vendor import appdirs
from pip._vendor import packaging
__import__('pip._vendor.packaging.version')
__import__('pip._vendor.packaging.specifiers')
__import__('pip._vendor.packaging.requirements')
__import__('pip._vendor.packaging.markers')

__metaclass__ = type

if (3, 0) < sys.version_info < (3, 4):
    raise RuntimeError("Python 3.4 or later is required")

if six.PY2:
    # Those builtin exceptions are only defined in Python 3
    PermissionError = None
    NotADirectoryError = None

# declare some globals that will be defined later to
# satisfy the linters.
require = None
working_set = None
add_activation_listener = None
resources_stream = None
cleanup_resources = None
resource_dir = None
resource_stream = None
set_extraction_path = None
resource_isdir = None
resource_string = None
iter_entry_points = None
resource_listdir = None
resource_filename = None
resource_exists = None
_distribution_finders = None
_namespace_handlers = None
_namespace_packages = None
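
# --- Illustrative aside (not part of pkg_resources itself) ---
# The module docstring above fixes '/' as the resource-name separator.
# 'mypkg' and 'data/config.json' are hypothetical names for this sketch:
#
#     import pkg_resources
#     raw = pkg_resources.resource_string('mypkg', 'data/config.json')
#     have_dir = pkg_resources.resource_isdir('mypkg', 'data')
#
# Note the '/' separator, regardless of the local OS path separator.
# --- end aside ---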
""" def parse_version(v): try: return packaging.version.Version(v) except packaging.version.InvalidVersion: return packaging.version.LegacyVersion(v) _state_vars = {} def _declare_state(vartype, **kw): globals().update(kw) _state_vars.update(dict.fromkeys(kw, vartype)) def __getstate__(): state = {} g = globals() for k, v in _state_vars.items(): state[k] = g['_sget_' + v](g[k]) return state def __setstate__(state): g = globals() for k, v in state.items(): g['_sset_' + _state_vars[k]](k, g[k], v) return state def _sget_dict(val): return val.copy() def _sset_dict(key, ob, state): ob.clear() ob.update(state) def _sget_object(val): return val.__getstate__() def _sset_object(key, ob, state): ob.__setstate__(state) _sget_none = _sset_none = lambda *args: None def get_supported_platform(): """Return this platform's maximum compatible version. distutils.util.get_platform() normally reports the minimum version of Mac OS X that would be required to *use* extensions produced by distutils. But what we want when checking compatibility is to know the version of Mac OS X that we are *running*. To allow usage of packages that explicitly require a newer version of Mac OS X, we must also know the current version of the OS. If this condition occurs for any other platform with a version in its platform strings, this function should be extended accordingly. """ plat = get_build_platform() m = macosVersionString.match(plat) if m is not None and sys.platform == "darwin": try: plat = 'macosx-%s-%s' % ('.'.join(_macosx_vers()[:2]), m.group(3)) except ValueError: # not Mac OS X pass return plat __all__ = [ # Basic resource access and distribution/entry point discovery 'require', 'run_script', 'get_provider', 'get_distribution', 'load_entry_point', 'get_entry_map', 'get_entry_info', 'iter_entry_points', 'resource_string', 'resource_stream', 'resource_filename', 'resource_listdir', 'resource_exists', 'resource_isdir', # Environmental control 'declare_namespace', 'working_set', 'add_activation_listener', 'find_distributions', 'set_extraction_path', 'cleanup_resources', 'get_default_cache', # Primary implementation classes 'Environment', 'WorkingSet', 'ResourceManager', 'Distribution', 'Requirement', 'EntryPoint', # Exceptions 'ResolutionError', 'VersionConflict', 'DistributionNotFound', 'UnknownExtra', 'ExtractionError', # Warnings 'PEP440Warning', # Parsing functions and string utilities 'parse_requirements', 'parse_version', 'safe_name', 'safe_version', 'get_platform', 'compatible_platforms', 'yield_lines', 'split_sections', 'safe_extra', 'to_filename', 'invalid_marker', 'evaluate_marker', # filesystem utilities 'ensure_directory', 'normalize_path', # Distribution "precedence" constants 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', 'DEVELOP_DIST', # "Provider" interfaces, implementations, and registration/lookup APIs 'IMetadataProvider', 'IResourceProvider', 'FileMetadata', 'PathMetadata', 'EggMetadata', 'EmptyProvider', 'empty_provider', 'NullProvider', 'EggProvider', 'DefaultProvider', 'ZipProvider', 'register_finder', 'register_namespace_handler', 'register_loader_type', 'fixup_namespace_packages', 'get_importer', # Warnings 'PkgResourcesDeprecationWarning', # Deprecated/backward compatibility only 'run_main', 'AvailableDistributions', ] class ResolutionError(Exception): """Abstract base for dependency resolution errors""" def __repr__(self): return self.__class__.__name__ + repr(self.args) class VersionConflict(ResolutionError): """ An already-installed version conflicts with the requested version. 
class VersionConflict(ResolutionError):
    """
    An already-installed version conflicts with the requested version.

    Should be initialized with the installed Distribution and the requested
    Requirement.
    """

    _template = "{self.dist} is installed but {self.req} is required"

    @property
    def dist(self):
        return self.args[0]

    @property
    def req(self):
        return self.args[1]

    def report(self):
        return self._template.format(**locals())

    def with_context(self, required_by):
        """
        If required_by is non-empty, return a version of self that is a
        ContextualVersionConflict.
        """
        if not required_by:
            return self
        args = self.args + (required_by,)
        return ContextualVersionConflict(*args)


class ContextualVersionConflict(VersionConflict):
    """
    A VersionConflict that accepts a third parameter, the set of the
    requirements that required the installed Distribution.
    """

    _template = VersionConflict._template + ' by {self.required_by}'

    @property
    def required_by(self):
        return self.args[2]


class DistributionNotFound(ResolutionError):
    """A requested distribution was not found"""

    _template = ("The '{self.req}' distribution was not found "
                 "and is required by {self.requirers_str}")

    @property
    def req(self):
        return self.args[0]

    @property
    def requirers(self):
        return self.args[1]

    @property
    def requirers_str(self):
        if not self.requirers:
            return 'the application'
        return ', '.join(self.requirers)

    def report(self):
        return self._template.format(**locals())

    def __str__(self):
        return self.report()


class UnknownExtra(ResolutionError):
    """Distribution doesn't have an "extra feature" of the given name"""


_provider_factories = {}

PY_MAJOR = sys.version[:3]
EGG_DIST = 3
BINARY_DIST = 2
SOURCE_DIST = 1
CHECKOUT_DIST = 0
DEVELOP_DIST = -1


def register_loader_type(loader_type, provider_factory):
    """Register `provider_factory` to make providers for `loader_type`

    `loader_type` is the type or class of a PEP 302 ``module.__loader__``,
    and `provider_factory` is a function that, passed a *module* object,
    returns an ``IResourceProvider`` for that module.
    """
    _provider_factories[loader_type] = provider_factory


def get_provider(moduleOrReq):
    """Return an IResourceProvider for the named module or requirement"""
    if isinstance(moduleOrReq, Requirement):
        return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
    try:
        module = sys.modules[moduleOrReq]
    except KeyError:
        __import__(moduleOrReq)
        module = sys.modules[moduleOrReq]
    loader = getattr(module, '__loader__', None)
    return _find_adapter(_provider_factories, loader)(module)


def _macosx_vers(_cache=[]):
    if not _cache:
        version = platform.mac_ver()[0]
        # fallback for MacPorts
        if version == '':
            plist = '/System/Library/CoreServices/SystemVersion.plist'
            if os.path.exists(plist):
                if hasattr(plistlib, 'readPlist'):
                    plist_content = plistlib.readPlist(plist)
                    if 'ProductVersion' in plist_content:
                        version = plist_content['ProductVersion']

        _cache.append(version.split('.'))

    return _cache[0]


def _macosx_arch(machine):
    return {'PowerPC': 'ppc', 'Power_Macintosh': 'ppc'}.get(machine, machine)
""" from sysconfig import get_platform plat = get_platform() if sys.platform == "darwin" and not plat.startswith('macosx-'): try: version = _macosx_vers() machine = os.uname()[4].replace(" ", "_") return "macosx-%d.%d-%s" % ( int(version[0]), int(version[1]), _macosx_arch(machine), ) except ValueError: # if someone is running a non-Mac darwin system, this will fall # through to the default implementation pass return plat macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)") darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)") # XXX backward compat get_platform = get_build_platform def compatible_platforms(provided, required): """Can code for the `provided` platform run on the `required` platform? Returns true if either platform is ``None``, or the platforms are equal. XXX Needs compatibility checks for Linux and other unixy OSes. """ if provided is None or required is None or provided == required: # easy case return True # Mac OS X special cases reqMac = macosVersionString.match(required) if reqMac: provMac = macosVersionString.match(provided) # is this a Mac package? if not provMac: # this is backwards compatibility for packages built before # setuptools 0.6. All packages built after this point will # use the new macosx designation. provDarwin = darwinVersionString.match(provided) if provDarwin: dversion = int(provDarwin.group(1)) macosversion = "%s.%s" % (reqMac.group(1), reqMac.group(2)) if dversion == 7 and macosversion >= "10.3" or \ dversion == 8 and macosversion >= "10.4": return True # egg isn't macosx or legacy darwin return False # are they the same major version and machine type? if provMac.group(1) != reqMac.group(1) or \ provMac.group(3) != reqMac.group(3): return False # is the required OS major update >= the provided one? 
def run_script(dist_spec, script_name):
    """Locate distribution `dist_spec` and run its `script_name` script"""
    ns = sys._getframe(1).f_globals
    name = ns['__name__']
    ns.clear()
    ns['__name__'] = name
    require(dist_spec)[0].run_script(script_name, ns)


# backward compatibility
run_main = run_script


def get_distribution(dist):
    """Return a current distribution object for a Requirement or string"""
    if isinstance(dist, six.string_types):
        dist = Requirement.parse(dist)
    if isinstance(dist, Requirement):
        dist = get_provider(dist)
    if not isinstance(dist, Distribution):
        raise TypeError("Expected string, Requirement, or Distribution", dist)
    return dist


def load_entry_point(dist, group, name):
    """Return `name` entry point of `group` for `dist` or raise ImportError"""
    return get_distribution(dist).load_entry_point(group, name)


def get_entry_map(dist, group=None):
    """Return the entry point map for `group`, or the full entry map"""
    return get_distribution(dist).get_entry_map(group)


def get_entry_info(dist, group, name):
    """Return the EntryPoint object for `group`+`name`, or ``None``"""
    return get_distribution(dist).get_entry_info(group, name)


class IMetadataProvider:
    def has_metadata(name):
        """Does the package's distribution contain the named metadata?"""

    def get_metadata(name):
        """The named metadata resource as a string"""

    def get_metadata_lines(name):
        """Yield named metadata resource as list of non-blank non-comment lines

        Leading and trailing whitespace is stripped from each line, and lines
        with ``#`` as the first non-blank character are omitted."""

    def metadata_isdir(name):
        """Is the named metadata a directory?  (like ``os.path.isdir()``)"""

    def metadata_listdir(name):
        """List of metadata names in the directory (like ``os.listdir()``)"""

    def run_script(script_name, namespace):
        """Execute the named script in the supplied namespace dictionary"""


class IResourceProvider(IMetadataProvider):
    """An object that provides access to package resources"""

    def get_resource_filename(manager, resource_name):
        """Return a true filesystem path for `resource_name`

        `manager` must be an ``IResourceManager``"""

    def get_resource_stream(manager, resource_name):
        """Return a readable file-like object for `resource_name`

        `manager` must be an ``IResourceManager``"""

    def get_resource_string(manager, resource_name):
        """Return a string containing the contents of `resource_name`

        `manager` must be an ``IResourceManager``"""

    def has_resource(resource_name):
        """Does the package contain the named resource?"""

    def resource_isdir(resource_name):
        """Is the named resource a directory?  (like ``os.path.isdir()``)"""

    def resource_listdir(resource_name):
        """List of resource names in the directory (like ``os.listdir()``)"""
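
# --- Illustrative aside (not part of pkg_resources itself) ---
# The helpers above resolve entry points through the active distribution;
# 'pip' and its console script are used as a familiar example:
#
#     ep = get_entry_info('pip', 'console_scripts', 'pip')   # EntryPoint or None
#     main = load_entry_point('pip', 'console_scripts', 'pip')
#     get_entry_map('pip', 'console_scripts')                # name -> EntryPoint
# --- end aside ---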
""" ws = cls() try: from __main__ import __requires__ except ImportError: # The main program does not list any requirements return ws # ensure the requirements are met try: ws.require(__requires__) except VersionConflict: return cls._build_from_requirements(__requires__) return ws @classmethod def _build_from_requirements(cls, req_spec): """ Build a working set from a requirement spec. Rewrites sys.path. """ # try it without defaults already on sys.path # by starting with an empty path ws = cls([]) reqs = parse_requirements(req_spec) dists = ws.resolve(reqs, Environment()) for dist in dists: ws.add(dist) # add any missing entries from sys.path for entry in sys.path: if entry not in ws.entries: ws.add_entry(entry) # then copy back to sys.path sys.path[:] = ws.entries return ws def add_entry(self, entry): """Add a path item to ``.entries``, finding any distributions on it ``find_distributions(entry, True)`` is used to find distributions corresponding to the path entry, and they are added. `entry` is always appended to ``.entries``, even if it is already present. (This is because ``sys.path`` can contain the same value more than once, and the ``.entries`` of the ``sys.path`` WorkingSet should always equal ``sys.path``.) """ self.entry_keys.setdefault(entry, []) self.entries.append(entry) for dist in find_distributions(entry, True): self.add(dist, entry, False) def __contains__(self, dist): """True if `dist` is the active distribution for its project""" return self.by_key.get(dist.key) == dist def find(self, req): """Find a distribution matching requirement `req` If there is an active distribution for the requested project, this returns it as long as it meets the version requirement specified by `req`. But, if there is an active distribution for the project and it does *not* meet the `req` requirement, ``VersionConflict`` is raised. If there is no active distribution for the requested project, ``None`` is returned. """ dist = self.by_key.get(req.key) if dist is not None and dist not in req: # XXX add more info raise VersionConflict(dist, req) return dist def iter_entry_points(self, group, name=None): """Yield entry point objects from `group` matching `name` If `name` is None, yields all entry points in `group` from all distributions in the working set, otherwise only ones matching both `group` and `name` are yielded (in distribution order). """ return ( entry for dist in self for entry in dist.get_entry_map(group).values() if name is None or name == entry.name ) def run_script(self, requires, script_name): """Locate distribution for `requires` and run `script_name` script""" ns = sys._getframe(1).f_globals name = ns['__name__'] ns.clear() ns['__name__'] = name self.require(requires)[0].run_script(script_name, ns) def __iter__(self): """Yield distributions for non-duplicate projects in the working set The yield order is the order in which the items' path entries were added to the working set. """ seen = {} for item in self.entries: if item not in self.entry_keys: # workaround a cache issue continue for key in self.entry_keys[item]: if key not in seen: seen[key] = 1 yield self.by_key[key] def add(self, dist, entry=None, insert=True, replace=False): """Add `dist` to working set, associated with `entry` If `entry` is unspecified, it defaults to the ``.location`` of `dist`. On exit from this routine, `entry` is added to the end of the working set's ``.entries`` (if it wasn't already present). 
        `dist` is only added to the working set if it's for a project that
        doesn't already have a distribution in the set, unless `replace=True`.
        If it's added, any callbacks registered with the ``subscribe()``
        method will be called.
        """
        if insert:
            dist.insert_on(self.entries, entry, replace=replace)

        if entry is None:
            entry = dist.location
        keys = self.entry_keys.setdefault(entry, [])
        keys2 = self.entry_keys.setdefault(dist.location, [])
        if not replace and dist.key in self.by_key:
            # ignore hidden distros
            return

        self.by_key[dist.key] = dist
        if dist.key not in keys:
            keys.append(dist.key)
        if dist.key not in keys2:
            keys2.append(dist.key)
        self._added_new(dist)

    def resolve(self, requirements, env=None, installer=None,
                replace_conflicting=False, extras=None):
        """List all distributions needed to (recursively) meet `requirements`

        `requirements` must be a sequence of ``Requirement`` objects.  `env`,
        if supplied, should be an ``Environment`` instance.  If
        not supplied, it defaults to all distributions available within any
        entry or distribution in the working set.  `installer`, if supplied,
        will be invoked with each requirement that cannot be met by an
        already-installed distribution; it should return a ``Distribution``
        or ``None``.

        Unless `replace_conflicting=True`, raises a VersionConflict exception
        if any requirements are found on the path that have the correct name
        but the wrong version.  Otherwise, if an `installer` is supplied it
        will be invoked to obtain the correct version of the requirement and
        activate it.

        `extras` is a list of the extras to be used with these requirements.
        This is important because extra requirements may look like `my_req;
        extra = "my_extra"`, which would otherwise be interpreted as a purely
        optional requirement.  Instead, we want to be able to assert that
        these requirements are truly required.
        """
        # set up the stack
        requirements = list(requirements)[::-1]
        # set of processed requirements
        processed = {}
        # key -> dist
        best = {}
        to_activate = []

        req_extras = _ReqExtras()

        # Mapping of requirement to set of distributions that required it;
        # useful for reporting info about conflicts.
required_by = collections.defaultdict(set) while requirements: # process dependencies breadth-first req = requirements.pop(0) if req in processed: # Ignore cyclic or redundant dependencies continue if not req_extras.markers_pass(req, extras): continue dist = best.get(req.key) if dist is None: # Find the best distribution and add it to the map dist = self.by_key.get(req.key) if dist is None or (dist not in req and replace_conflicting): ws = self if env is None: if dist is None: env = Environment(self.entries) else: # Use an empty environment and workingset to avoid # any further conflicts with the conflicting # distribution env = Environment([]) ws = WorkingSet([]) dist = best[req.key] = env.best_match( req, ws, installer, replace_conflicting=replace_conflicting ) if dist is None: requirers = required_by.get(req, None) raise DistributionNotFound(req, requirers) to_activate.append(dist) if dist not in req: # Oops, the "best" so far conflicts with a dependency dependent_req = required_by[req] raise VersionConflict(dist, req).with_context(dependent_req) # push the new requirements onto the stack new_requirements = dist.requires(req.extras)[::-1] requirements.extend(new_requirements) # Register the new requirements needed by req for new_requirement in new_requirements: required_by[new_requirement].add(req.project_name) req_extras[new_requirement] = req.extras processed[req] = True # return list of distros to activate return to_activate def find_plugins( self, plugin_env, full_env=None, installer=None, fallback=True): """Find all activatable distributions in `plugin_env` Example usage:: distributions, errors = working_set.find_plugins( Environment(plugin_dirlist) ) # add plugins+libs to sys.path map(working_set.add, distributions) # display errors print('Could not load', errors) The `plugin_env` should be an ``Environment`` instance that contains only distributions that are in the project's "plugin directory" or directories. The `full_env`, if supplied, should be an ``Environment`` that contains all currently-available distributions. If `full_env` is not supplied, one is created automatically from the ``WorkingSet`` this method is called on, which will typically mean that every directory on ``sys.path`` will be scanned for distributions. `installer` is a standard installer callback as used by the ``resolve()`` method. The `fallback` flag indicates whether we should attempt to resolve older versions of a plugin if the newest version cannot be resolved. This method returns a 2-tuple: (`distributions`, `error_info`), where `distributions` is a list of the distributions found in `plugin_env` that were loadable, along with any other distributions that are needed to resolve their dependencies. `error_info` is a dictionary mapping unloadable plugin distributions to an exception instance describing the error that occurred. Usually this will be a ``DistributionNotFound`` or ``VersionConflict`` instance. 
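        A hypothetical follow-up for inspecting `error_info` (names mirror
        the example above and are illustrative)::

            distributions, errors = working_set.find_plugins(
                Environment(plugin_dirlist)
            )
            for dist, error in errors.items():
                print('Skipping plugin %s: %s' % (dist, error))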
""" plugin_projects = list(plugin_env) # scan project names in alphabetic order plugin_projects.sort() error_info = {} distributions = {} if full_env is None: env = Environment(self.entries) env += plugin_env else: env = full_env + plugin_env shadow_set = self.__class__([]) # put all our entries in shadow_set list(map(shadow_set.add, self)) for project_name in plugin_projects: for dist in plugin_env[project_name]: req = [dist.as_requirement()] try: resolvees = shadow_set.resolve(req, env, installer) except ResolutionError as v: # save error info error_info[dist] = v if fallback: # try the next older version of project continue else: # give up on this project, keep going break else: list(map(shadow_set.add, resolvees)) distributions.update(dict.fromkeys(resolvees)) # success, no need to try any more versions of this project break distributions = list(distributions) distributions.sort() return distributions, error_info def require(self, *requirements): """Ensure that distributions matching `requirements` are activated `requirements` must be a string or a (possibly-nested) sequence thereof, specifying the distributions and versions required. The return value is a sequence of the distributions that needed to be activated to fulfill the requirements; all relevant distributions are included, even if they were already activated in this working set. """ needed = self.resolve(parse_requirements(requirements)) for dist in needed: self.add(dist) return needed def subscribe(self, callback, existing=True): """Invoke `callback` for all distributions If `existing=True` (default), call on all existing ones, as well. """ if callback in self.callbacks: return self.callbacks.append(callback) if not existing: return for dist in self: callback(dist) def _added_new(self, dist): for callback in self.callbacks: callback(dist) def __getstate__(self): return ( self.entries[:], self.entry_keys.copy(), self.by_key.copy(), self.callbacks[:] ) def __setstate__(self, e_k_b_c): entries, keys, by_key, callbacks = e_k_b_c self.entries = entries[:] self.entry_keys = keys.copy() self.by_key = by_key.copy() self.callbacks = callbacks[:] class _ReqExtras(dict): """ Map each requirement to the extras that demanded it. """ def markers_pass(self, req, extras=None): """ Evaluate markers for req against each extra that demanded it. Return False if the req has a marker and fails evaluation. Otherwise, return True. """ extra_evals = ( req.marker.evaluate({'extra': extra}) for extra in self.get(req, ()) + (extras or (None,)) ) return not req.marker or any(extra_evals) class Environment: """Searchable snapshot of distributions on a search path""" def __init__( self, search_path=None, platform=get_supported_platform(), python=PY_MAJOR): """Snapshot distributions available on a search path Any distributions found on `search_path` are added to the environment. `search_path` should be a sequence of ``sys.path`` items. If not supplied, ``sys.path`` is used. `platform` is an optional string specifying the name of the platform that platform-specific distributions must be compatible with. If unspecified, it defaults to the current platform. `python` is an optional string naming the desired version of Python (e.g. ``'3.6'``); it defaults to the current version. You may explicitly set `platform` (and/or `python`) to ``None`` if you wish to map *all* distributions, not just those compatible with the running platform or Python version. 
""" self._distmap = {} self.platform = platform self.python = python self.scan(search_path) def can_add(self, dist): """Is distribution `dist` acceptable for this environment? The distribution must match the platform and python version requirements specified when this environment was created, or False is returned. """ py_compat = ( self.python is None or dist.py_version is None or dist.py_version == self.python ) return py_compat and compatible_platforms(dist.platform, self.platform) def remove(self, dist): """Remove `dist` from the environment""" self._distmap[dist.key].remove(dist) def scan(self, search_path=None): """Scan `search_path` for distributions usable in this environment Any distributions found are added to the environment. `search_path` should be a sequence of ``sys.path`` items. If not supplied, ``sys.path`` is used. Only distributions conforming to the platform/python version defined at initialization are added. """ if search_path is None: search_path = sys.path for item in search_path: for dist in find_distributions(item): self.add(dist) def __getitem__(self, project_name): """Return a newest-to-oldest list of distributions for `project_name` Uses case-insensitive `project_name` comparison, assuming all the project's distributions use their project's name converted to all lowercase as their key. """ distribution_key = project_name.lower() return self._distmap.get(distribution_key, []) def add(self, dist): """Add `dist` if we ``can_add()`` it and it has not already been added """ if self.can_add(dist) and dist.has_version(): dists = self._distmap.setdefault(dist.key, []) if dist not in dists: dists.append(dist) dists.sort(key=operator.attrgetter('hashcmp'), reverse=True) def best_match( self, req, working_set, installer=None, replace_conflicting=False): """Find distribution best matching `req` and usable on `working_set` This calls the ``find(req)`` method of the `working_set` to see if a suitable distribution is already active. (This may raise ``VersionConflict`` if an unsuitable version of the project is already active in the specified `working_set`.) If a suitable distribution isn't active, this method returns the newest distribution in the environment that meets the ``Requirement`` in `req`. If no suitable distribution is found, and `installer` is supplied, then the result of calling the environment's ``obtain(req, installer)`` method will be returned. """ try: dist = working_set.find(req) except VersionConflict: if not replace_conflicting: raise dist = None if dist is not None: return dist for dist in self[req.key]: if dist in req: return dist # try to download/install return self.obtain(req, installer) def obtain(self, requirement, installer=None): """Obtain a distribution matching `requirement` (e.g. via download) Obtain a distro that matches requirement (e.g. via download). In the base ``Environment`` class, this routine just returns ``installer(requirement)``, unless `installer` is None, in which case None is returned instead. 
This method is a hook that allows subclasses to attempt other ways of obtaining a distribution before falling back to the `installer` argument.""" if installer is not None: return installer(requirement) def __iter__(self): """Yield the unique project names of the available distributions""" for key in self._distmap.keys(): if self[key]: yield key def __iadd__(self, other): """In-place addition of a distribution or environment""" if isinstance(other, Distribution): self.add(other) elif isinstance(other, Environment): for project in other: for dist in other[project]: self.add(dist) else: raise TypeError("Can't add %r to environment" % (other,)) return self def __add__(self, other): """Add an environment or distribution to an environment""" new = self.__class__([], platform=None, python=None) for env in self, other: new += env return new # XXX backward compatibility AvailableDistributions = Environment class ExtractionError(RuntimeError): """An error occurred extracting a resource The following attributes are available from instances of this exception: manager The resource manager that raised this exception cache_path The base directory for resource extraction original_error The exception instance that caused extraction to fail """ class ResourceManager: """Manage resource extraction and packages""" extraction_path = None def __init__(self): self.cached_files = {} def resource_exists(self, package_or_requirement, resource_name): """Does the named resource exist?""" return get_provider(package_or_requirement).has_resource(resource_name) def resource_isdir(self, package_or_requirement, resource_name): """Is the named resource an existing directory?""" return get_provider(package_or_requirement).resource_isdir( resource_name ) def resource_filename(self, package_or_requirement, resource_name): """Return a true filesystem path for specified resource""" return get_provider(package_or_requirement).get_resource_filename( self, resource_name ) def resource_stream(self, package_or_requirement, resource_name): """Return a readable file-like object for specified resource""" return get_provider(package_or_requirement).get_resource_stream( self, resource_name ) def resource_string(self, package_or_requirement, resource_name): """Return specified resource as a string""" return get_provider(package_or_requirement).get_resource_string( self, resource_name ) def resource_listdir(self, package_or_requirement, resource_name): """List the contents of the named resource directory""" return get_provider(package_or_requirement).resource_listdir( resource_name ) def extraction_error(self): """Give an error message for problems extracting file(s)""" old_exc = sys.exc_info()[1] cache_path = self.extraction_path or get_default_cache() tmpl = textwrap.dedent(""" Can't extract file(s) to egg cache The following error occurred while trying to extract file(s) to the Python egg cache: {old_exc} The Python egg cache directory is currently set to: {cache_path} Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory. """).lstrip() err = ExtractionError(tmpl.format(**locals())) err.manager = self err.cache_path = cache_path err.original_error = old_exc raise err def get_cache_path(self, archive_name, names=()): """Return absolute location in cache for `archive_name` and `names` The parent directory of the resulting path will be created if it does not already exist. 
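        For instance (an illustrative call; the names are hypothetical)::

            target = manager.get_cache_path(
                'Foo-1.0-py3.7.egg', ('data', 'blob.bin')
            )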
`archive_name` should be the base filename of the enclosing egg (which may not be the name of the enclosing zipfile!), including its ".egg" extension. `names`, if provided, should be a sequence of path name parts "under" the egg's extraction location. This method should only be called by resource providers that need to obtain an extraction location, and only for names they intend to extract, as it tracks the generated names for possible cleanup later. """ extract_path = self.extraction_path or get_default_cache() target_path = os.path.join(extract_path, archive_name + '-tmp', *names) try: _bypass_ensure_directory(target_path) except Exception: self.extraction_error() self._warn_unsafe_extraction_path(extract_path) self.cached_files[target_path] = 1 return target_path @staticmethod def _warn_unsafe_extraction_path(path): """ If the default extraction path is overridden and set to an insecure location, such as /tmp, it opens up an opportunity for an attacker to replace an extracted file with an unauthorized payload. Warn the user if a known insecure location is used. See Distribute #375 for more details. """ if os.name == 'nt' and not path.startswith(os.environ['windir']): # On Windows, permissions are generally restrictive by default # and temp directories are not writable by other users, so # bypass the warning. return mode = os.stat(path).st_mode if mode & stat.S_IWOTH or mode & stat.S_IWGRP: msg = ( "%s is writable by group/others and vulnerable to attack " "when " "used with get_resource_filename. Consider a more secure " "location (set with .set_extraction_path or the " "PYTHON_EGG_CACHE environment variable)." % path ) warnings.warn(msg, UserWarning) def postprocess(self, tempname, filename): """Perform any platform-specific postprocessing of `tempname` This is where Mac header rewrites should be done; other platforms don't have anything special they should do. Resource providers should call this method ONLY after successfully extracting a compressed resource. They must NOT call it on resources that are already in the filesystem. `tempname` is the current (temporary) name of the file, and `filename` is the name it will be renamed to by the caller after this routine returns. """ if os.name == 'posix': # Make the resource executable mode = ((os.stat(tempname).st_mode) | 0o555) & 0o7777 os.chmod(tempname, mode) def set_extraction_path(self, path): """Set the base path where resources will be extracted to, if needed. If you do not call this routine before any extractions take place, the path defaults to the return value of ``get_default_cache()``. (Which is based on the ``PYTHON_EGG_CACHE`` environment variable, with various platform-specific fallbacks. See that routine's documentation for more details.) Resources are extracted to subdirectories of this path based upon information given by the ``IResourceProvider``. You may set this to a temporary directory, but then you must call ``cleanup_resources()`` to delete the extracted files when done. There is no guarantee that ``cleanup_resources()`` will be able to remove all extracted files. (Note: you may not change the extraction path for a given resource manager once resources have been extracted, unless you first call ``cleanup_resources()``.) 
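        A minimal sketch of the temporary-directory pattern described above
        (illustrative, not prescriptive)::

            import atexit, tempfile
            manager = ResourceManager()
            manager.set_extraction_path(tempfile.mkdtemp())
            atexit.register(manager.cleanup_resources)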
""" if self.cached_files: raise ValueError( "Can't change extraction path, files already extracted" ) self.extraction_path = path def cleanup_resources(self, force=False): """ Delete all extracted resource files and directories, returning a list of the file and directory names that could not be successfully removed. This function does not have any concurrency protection, so it should generally only be called when the extraction path is a temporary directory exclusive to a single process. This method is not automatically called; you must call it explicitly or register it as an ``atexit`` function if you wish to ensure cleanup of a temporary directory used for extractions. """ # XXX def get_default_cache(): """ Return the ``PYTHON_EGG_CACHE`` environment variable or a platform-relevant user cache dir for an app named "Python-Eggs". """ return ( os.environ.get('PYTHON_EGG_CACHE') or appdirs.user_cache_dir(appname='Python-Eggs') ) def safe_name(name): """Convert an arbitrary string to a standard distribution name Any runs of non-alphanumeric/. characters are replaced with a single '-'. """ return re.sub('[^A-Za-z0-9.]+', '-', name) def safe_version(version): """ Convert an arbitrary string to a standard version string """ try: # normalize the version return str(packaging.version.Version(version)) except packaging.version.InvalidVersion: version = version.replace(' ', '.') return re.sub('[^A-Za-z0-9.]+', '-', version) def safe_extra(extra): """Convert an arbitrary string to a standard 'extra' name Any runs of non-alphanumeric characters are replaced with a single '_', and the result is always lowercased. """ return re.sub('[^A-Za-z0-9.-]+', '_', extra).lower() def to_filename(name): """Convert a project or version name to its filename-escaped form Any '-' characters are currently replaced with '_'. """ return name.replace('-', '_') def invalid_marker(text): """ Validate text as a PEP 508 environment marker; return an exception if invalid or False otherwise. """ try: evaluate_marker(text) except SyntaxError as e: e.filename = None e.lineno = None return e return False def evaluate_marker(text, extra=None): """ Evaluate a PEP 508 environment marker. Return a boolean indicating the marker result in this environment. Raise SyntaxError if marker is invalid. This implementation uses the 'pyparsing' module. 
""" try: marker = packaging.markers.Marker(text) return marker.evaluate() except packaging.markers.InvalidMarker as e: raise SyntaxError(e) class NullProvider: """Try to implement resources and metadata for arbitrary PEP 302 loaders""" egg_name = None egg_info = None loader = None def __init__(self, module): self.loader = getattr(module, '__loader__', None) self.module_path = os.path.dirname(getattr(module, '__file__', '')) def get_resource_filename(self, manager, resource_name): return self._fn(self.module_path, resource_name) def get_resource_stream(self, manager, resource_name): return io.BytesIO(self.get_resource_string(manager, resource_name)) def get_resource_string(self, manager, resource_name): return self._get(self._fn(self.module_path, resource_name)) def has_resource(self, resource_name): return self._has(self._fn(self.module_path, resource_name)) def _get_metadata_path(self, name): return self._fn(self.egg_info, name) def has_metadata(self, name): if not self.egg_info: return self.egg_info path = self._get_metadata_path(name) return self._has(path) def get_metadata(self, name): if not self.egg_info: return "" value = self._get(self._fn(self.egg_info, name)) return value.decode('utf-8') if six.PY3 else value def get_metadata_lines(self, name): return yield_lines(self.get_metadata(name)) def resource_isdir(self, resource_name): return self._isdir(self._fn(self.module_path, resource_name)) def metadata_isdir(self, name): return self.egg_info and self._isdir(self._fn(self.egg_info, name)) def resource_listdir(self, resource_name): return self._listdir(self._fn(self.module_path, resource_name)) def metadata_listdir(self, name): if self.egg_info: return self._listdir(self._fn(self.egg_info, name)) return [] def run_script(self, script_name, namespace): script = 'scripts/' + script_name if not self.has_metadata(script): raise ResolutionError( "Script {script!r} not found in metadata at {self.egg_info!r}" .format(**locals()), ) script_text = self.get_metadata(script).replace('\r\n', '\n') script_text = script_text.replace('\r', '\n') script_filename = self._fn(self.egg_info, script) namespace['__file__'] = script_filename if os.path.exists(script_filename): source = open(script_filename).read() code = compile(source, script_filename, 'exec') exec(code, namespace, namespace) else: from linecache import cache cache[script_filename] = ( len(script_text), 0, script_text.split('\n'), script_filename ) script_code = compile(script_text, script_filename, 'exec') exec(script_code, namespace, namespace) def _has(self, path): raise NotImplementedError( "Can't perform this operation for unregistered loader type" ) def _isdir(self, path): raise NotImplementedError( "Can't perform this operation for unregistered loader type" ) def _listdir(self, path): raise NotImplementedError( "Can't perform this operation for unregistered loader type" ) def _fn(self, base, resource_name): self._validate_resource_path(resource_name) if resource_name: return os.path.join(base, *resource_name.split('/')) return base @staticmethod def _validate_resource_path(path): """ Validate the resource paths according to the docs. 
https://setuptools.readthedocs.io/en/latest/pkg_resources.html#basic-resource-access >>> warned = getfixture('recwarn') >>> warnings.simplefilter('always') >>> vrp = NullProvider._validate_resource_path >>> vrp('foo/bar.txt') >>> bool(warned) False >>> vrp('../foo/bar.txt') >>> bool(warned) True >>> warned.clear() >>> vrp('/foo/bar.txt') >>> bool(warned) True >>> vrp('foo/../../bar.txt') >>> bool(warned) True >>> warned.clear() >>> vrp('foo/f../bar.txt') >>> bool(warned) False Windows path separators are straight-up disallowed. >>> vrp(r'\\foo/bar.txt') Traceback (most recent call last): ... ValueError: Use of .. or absolute path in a resource path \ is not allowed. >>> vrp(r'C:\\foo/bar.txt') Traceback (most recent call last): ... ValueError: Use of .. or absolute path in a resource path \ is not allowed. Blank values are allowed >>> vrp('') >>> bool(warned) False Non-string values are not. >>> vrp(None) Traceback (most recent call last): ... AttributeError: ... """ invalid = ( os.path.pardir in path.split(posixpath.sep) or posixpath.isabs(path) or ntpath.isabs(path) ) if not invalid: return msg = "Use of .. or absolute path in a resource path is not allowed." # Aggressively disallow Windows absolute paths if ntpath.isabs(path) and not posixpath.isabs(path): raise ValueError(msg) # for compatibility, warn; in future # raise ValueError(msg) warnings.warn( msg[:-1] + " and will raise exceptions in a future release.", DeprecationWarning, stacklevel=4, ) def _get(self, path): if hasattr(self.loader, 'get_data'): return self.loader.get_data(path) raise NotImplementedError( "Can't perform this operation for loaders without 'get_data()'" ) register_loader_type(object, NullProvider) class EggProvider(NullProvider): """Provider based on a virtual filesystem""" def __init__(self, module): NullProvider.__init__(self, module) self._setup_prefix() def _setup_prefix(self): # we assume here that our metadata may be nested inside a "basket" # of multiple eggs; that's why we use module_path instead of .archive path = self.module_path old = None while path != old: if _is_egg_path(path): self.egg_name = os.path.basename(path) self.egg_info = os.path.join(path, 'EGG-INFO') self.egg_root = path break old = path path, base = os.path.split(path) class DefaultProvider(EggProvider): """Provides access to package resources in the filesystem""" def _has(self, path): return os.path.exists(path) def _isdir(self, path): return os.path.isdir(path) def _listdir(self, path): return os.listdir(path) def get_resource_stream(self, manager, resource_name): return open(self._fn(self.module_path, resource_name), 'rb') def _get(self, path): with open(path, 'rb') as stream: return stream.read() @classmethod def _register(cls): loader_names = 'SourceFileLoader', 'SourcelessFileLoader', for name in loader_names: loader_cls = getattr(importlib_machinery, name, type(None)) register_loader_type(loader_cls, cls) DefaultProvider._register() class EmptyProvider(NullProvider): """Provider that returns nothing for all requests""" module_path = None _isdir = _has = lambda self, path: False def _get(self, path): return '' def _listdir(self, path): return [] def __init__(self): pass empty_provider = EmptyProvider() class ZipManifests(dict): """ zip manifest builder """ @classmethod def build(cls, path): """ Build a dictionary similar to the zipimport directory caches, except instead of tuples, store ZipInfo objects. Use a platform-specific path separator (os.sep) for the path keys for compatibility with pypy on Windows. 
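        Illustrative usage (``some.egg`` is a hypothetical zip path)::

            manifest = ZipManifests.build('some.egg')
            info = manifest['EGG-INFO' + os.sep + 'PKG-INFO']  # a ZipInfo, if present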
""" with zipfile.ZipFile(path) as zfile: items = ( ( name.replace('/', os.sep), zfile.getinfo(name), ) for name in zfile.namelist() ) return dict(items) load = build class MemoizedZipManifests(ZipManifests): """ Memoized zipfile manifests. """ manifest_mod = collections.namedtuple('manifest_mod', 'manifest mtime') def load(self, path): """ Load a manifest at path or return a suitable manifest already loaded. """ path = os.path.normpath(path) mtime = os.stat(path).st_mtime if path not in self or self[path].mtime != mtime: manifest = self.build(path) self[path] = self.manifest_mod(manifest, mtime) return self[path].manifest class ZipProvider(EggProvider): """Resource support for zips and eggs""" eagers = None _zip_manifests = MemoizedZipManifests() def __init__(self, module): EggProvider.__init__(self, module) self.zip_pre = self.loader.archive + os.sep def _zipinfo_name(self, fspath): # Convert a virtual filename (full path to file) into a zipfile subpath # usable with the zipimport directory cache for our target archive fspath = fspath.rstrip(os.sep) if fspath == self.loader.archive: return '' if fspath.startswith(self.zip_pre): return fspath[len(self.zip_pre):] raise AssertionError( "%s is not a subpath of %s" % (fspath, self.zip_pre) ) def _parts(self, zip_path): # Convert a zipfile subpath into an egg-relative path part list. # pseudo-fs path fspath = self.zip_pre + zip_path if fspath.startswith(self.egg_root + os.sep): return fspath[len(self.egg_root) + 1:].split(os.sep) raise AssertionError( "%s is not a subpath of %s" % (fspath, self.egg_root) ) @property def zipinfo(self): return self._zip_manifests.load(self.loader.archive) def get_resource_filename(self, manager, resource_name): if not self.egg_name: raise NotImplementedError( "resource_filename() only supported for .egg, not .zip" ) # no need to lock for extraction, since we use temp names zip_path = self._resource_to_zip(resource_name) eagers = self._get_eager_resources() if '/'.join(self._parts(zip_path)) in eagers: for name in eagers: self._extract_resource(manager, self._eager_to_zip(name)) return self._extract_resource(manager, zip_path) @staticmethod def _get_date_and_size(zip_stat): size = zip_stat.file_size # ymdhms+wday, yday, dst date_time = zip_stat.date_time + (0, 0, -1) # 1980 offset already done timestamp = time.mktime(date_time) return timestamp, size def _extract_resource(self, manager, zip_path): if zip_path in self._index(): for name in self._index()[zip_path]: last = self._extract_resource( manager, os.path.join(zip_path, name) ) # return the extracted directory name return os.path.dirname(last) timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) if not WRITE_SUPPORT: raise IOError('"os.rename" and "os.unlink" are not supported ' 'on this platform') try: real_path = manager.get_cache_path( self.egg_name, self._parts(zip_path) ) if self._is_current(real_path, zip_path): return real_path outf, tmpnam = _mkstemp( ".$extract", dir=os.path.dirname(real_path), ) os.write(outf, self.loader.get_data(zip_path)) os.close(outf) utime(tmpnam, (timestamp, timestamp)) manager.postprocess(tmpnam, real_path) try: rename(tmpnam, real_path) except os.error: if os.path.isfile(real_path): if self._is_current(real_path, zip_path): # the file became current since it was checked above, # so proceed. 
return real_path # Windows, del old file and retry elif os.name == 'nt': unlink(real_path) rename(tmpnam, real_path) return real_path raise except os.error: # report a user-friendly error manager.extraction_error() return real_path def _is_current(self, file_path, zip_path): """ Return True if the file_path is current for this zip_path """ timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) if not os.path.isfile(file_path): return False stat = os.stat(file_path) if stat.st_size != size or stat.st_mtime != timestamp: return False # check that the contents match zip_contents = self.loader.get_data(zip_path) with open(file_path, 'rb') as f: file_contents = f.read() return zip_contents == file_contents def _get_eager_resources(self): if self.eagers is None: eagers = [] for name in ('native_libs.txt', 'eager_resources.txt'): if self.has_metadata(name): eagers.extend(self.get_metadata_lines(name)) self.eagers = eagers return self.eagers def _index(self): try: return self._dirindex except AttributeError: ind = {} for path in self.zipinfo: parts = path.split(os.sep) while parts: parent = os.sep.join(parts[:-1]) if parent in ind: ind[parent].append(parts[-1]) break else: ind[parent] = [parts.pop()] self._dirindex = ind return ind def _has(self, fspath): zip_path = self._zipinfo_name(fspath) return zip_path in self.zipinfo or zip_path in self._index() def _isdir(self, fspath): return self._zipinfo_name(fspath) in self._index() def _listdir(self, fspath): return list(self._index().get(self._zipinfo_name(fspath), ())) def _eager_to_zip(self, resource_name): return self._zipinfo_name(self._fn(self.egg_root, resource_name)) def _resource_to_zip(self, resource_name): return self._zipinfo_name(self._fn(self.module_path, resource_name)) register_loader_type(zipimport.zipimporter, ZipProvider) class FileMetadata(EmptyProvider): """Metadata handler for standalone PKG-INFO files Usage:: metadata = FileMetadata("/path/to/PKG-INFO") This provider rejects all data and metadata requests except for PKG-INFO, which is treated as existing, and will be the contents of the file at the provided location. 
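    A hedged sketch of pairing it with a distribution, mirroring what
    ``distributions_from_metadata`` does below (the path is illustrative)::

        path = "/path/to/PKG-INFO"
        metadata = FileMetadata(path)
        dist = Distribution.from_location(
            os.path.dirname(path), os.path.basename(path), metadata
        )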
""" def __init__(self, path): self.path = path def _get_metadata_path(self, name): return self.path def has_metadata(self, name): return name == 'PKG-INFO' and os.path.isfile(self.path) def get_metadata(self, name): if name != 'PKG-INFO': raise KeyError("No metadata except PKG-INFO is available") with io.open(self.path, encoding='utf-8', errors="replace") as f: metadata = f.read() self._warn_on_replacement(metadata) return metadata def _warn_on_replacement(self, metadata): # Python 2.7 compat for: replacement_char = '�' replacement_char = b'\xef\xbf\xbd'.decode('utf-8') if replacement_char in metadata: tmpl = "{self.path} could not be properly decoded in UTF-8" msg = tmpl.format(**locals()) warnings.warn(msg) def get_metadata_lines(self, name): return yield_lines(self.get_metadata(name)) class PathMetadata(DefaultProvider): """Metadata provider for egg directories Usage:: # Development eggs: egg_info = "/path/to/PackageName.egg-info" base_dir = os.path.dirname(egg_info) metadata = PathMetadata(base_dir, egg_info) dist_name = os.path.splitext(os.path.basename(egg_info))[0] dist = Distribution(basedir, project_name=dist_name, metadata=metadata) # Unpacked egg directories: egg_path = "/path/to/PackageName-ver-pyver-etc.egg" metadata = PathMetadata(egg_path, os.path.join(egg_path,'EGG-INFO')) dist = Distribution.from_filename(egg_path, metadata=metadata) """ def __init__(self, path, egg_info): self.module_path = path self.egg_info = egg_info class EggMetadata(ZipProvider): """Metadata provider for .egg files""" def __init__(self, importer): """Create a metadata provider from a zipimporter""" self.zip_pre = importer.archive + os.sep self.loader = importer if importer.prefix: self.module_path = os.path.join(importer.archive, importer.prefix) else: self.module_path = importer.archive self._setup_prefix() _declare_state('dict', _distribution_finders={}) def register_finder(importer_type, distribution_finder): """Register `distribution_finder` to find distributions in sys.path items `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item handler), and `distribution_finder` is a callable that, passed a path item and the importer instance, yields ``Distribution`` instances found on that path item. See ``pkg_resources.find_on_path`` for an example.""" _distribution_finders[importer_type] = distribution_finder def find_distributions(path_item, only=False): """Yield distributions accessible via `path_item`""" importer = get_importer(path_item) finder = _find_adapter(_distribution_finders, importer) return finder(importer, path_item, only) def find_eggs_in_zip(importer, path_item, only=False): """ Find eggs in zip files; possibly multiple nested eggs. 
""" if importer.archive.endswith('.whl'): # wheels are not supported with this finder # they don't have PKG-INFO metadata, and won't ever contain eggs return metadata = EggMetadata(importer) if metadata.has_metadata('PKG-INFO'): yield Distribution.from_filename(path_item, metadata=metadata) if only: # don't yield nested distros return for subitem in metadata.resource_listdir(''): if _is_egg_path(subitem): subpath = os.path.join(path_item, subitem) dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath) for dist in dists: yield dist elif subitem.lower().endswith('.dist-info'): subpath = os.path.join(path_item, subitem) submeta = EggMetadata(zipimport.zipimporter(subpath)) submeta.egg_info = subpath yield Distribution.from_location(path_item, subitem, submeta) register_finder(zipimport.zipimporter, find_eggs_in_zip) def find_nothing(importer, path_item, only=False): return () register_finder(object, find_nothing) def _by_version_descending(names): """ Given a list of filenames, return them in descending order by version number. >>> names = 'bar', 'foo', 'Python-2.7.10.egg', 'Python-2.7.2.egg' >>> _by_version_descending(names) ['Python-2.7.10.egg', 'Python-2.7.2.egg', 'foo', 'bar'] >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.egg' >>> _by_version_descending(names) ['Setuptools-1.2.3.egg', 'Setuptools-1.2.3b1.egg'] >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.post1.egg' >>> _by_version_descending(names) ['Setuptools-1.2.3.post1.egg', 'Setuptools-1.2.3b1.egg'] """ def _by_version(name): """ Parse each component of the filename """ name, ext = os.path.splitext(name) parts = itertools.chain(name.split('-'), [ext]) return [packaging.version.parse(part) for part in parts] return sorted(names, key=_by_version, reverse=True) def find_on_path(importer, path_item, only=False): """Yield distributions accessible on a sys.path directory""" path_item = _normalize_cached(path_item) if _is_unpacked_egg(path_item): yield Distribution.from_filename( path_item, metadata=PathMetadata( path_item, os.path.join(path_item, 'EGG-INFO') ) ) return entries = safe_listdir(path_item) # for performance, before sorting by version, # screen entries for only those that will yield # distributions filtered = ( entry for entry in entries if dist_factory(path_item, entry, only) ) # scan for .egg and .egg-info in directory path_item_entries = _by_version_descending(filtered) for entry in path_item_entries: fullpath = os.path.join(path_item, entry) factory = dist_factory(path_item, entry, only) for dist in factory(fullpath): yield dist def dist_factory(path_item, entry, only): """ Return a dist_factory for a path_item and entry """ lower = entry.lower() is_meta = any(map(lower.endswith, ('.egg-info', '.dist-info'))) return ( distributions_from_metadata if is_meta else find_distributions if not only and _is_egg_path(entry) else resolve_egg_link if not only and lower.endswith('.egg-link') else NoDists() ) class NoDists: """ >>> bool(NoDists()) False >>> list(NoDists()('anything')) [] """ def __bool__(self): return False if six.PY2: __nonzero__ = __bool__ def __call__(self, fullpath): return iter(()) def safe_listdir(path): """ Attempt to list contents of path, but suppress some exceptions. 
""" try: return os.listdir(path) except (PermissionError, NotADirectoryError): pass except OSError as e: # Ignore the directory if does not exist, not a directory or # permission denied ignorable = ( e.errno in (errno.ENOTDIR, errno.EACCES, errno.ENOENT) # Python 2 on Windows needs to be handled this way :( or getattr(e, "winerror", None) == 267 ) if not ignorable: raise return () def distributions_from_metadata(path): root = os.path.dirname(path) if os.path.isdir(path): if len(os.listdir(path)) == 0: # empty metadata dir; skip return metadata = PathMetadata(root, path) else: metadata = FileMetadata(path) entry = os.path.basename(path) yield Distribution.from_location( root, entry, metadata, precedence=DEVELOP_DIST, ) def non_empty_lines(path): """ Yield non-empty lines from file at path """ with open(path) as f: for line in f: line = line.strip() if line: yield line def resolve_egg_link(path): """ Given a path to an .egg-link, resolve distributions present in the referenced path. """ referenced_paths = non_empty_lines(path) resolved_paths = ( os.path.join(os.path.dirname(path), ref) for ref in referenced_paths ) dist_groups = map(find_distributions, resolved_paths) return next(dist_groups, ()) register_finder(pkgutil.ImpImporter, find_on_path) if hasattr(importlib_machinery, 'FileFinder'): register_finder(importlib_machinery.FileFinder, find_on_path) _declare_state('dict', _namespace_handlers={}) _declare_state('dict', _namespace_packages={}) def register_namespace_handler(importer_type, namespace_handler): """Register `namespace_handler` to declare namespace packages `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item handler), and `namespace_handler` is a callable like this:: def namespace_handler(importer, path_entry, moduleName, module): # return a path_entry to use for child packages Namespace handlers are only called if the importer object has already agreed that it can handle the relevant path item, and they should only return a subpath if the module __path__ does not already contain an equivalent subpath. For an example namespace handler, see ``pkg_resources.file_ns_handler``. """ _namespace_handlers[importer_type] = namespace_handler def _handle_ns(packageName, path_item): """Ensure that named package includes a subpath of path_item (if needed)""" importer = get_importer(path_item) if importer is None: return None # capture warnings due to #1111 with warnings.catch_warnings(): warnings.simplefilter("ignore") loader = importer.find_module(packageName) if loader is None: return None module = sys.modules.get(packageName) if module is None: module = sys.modules[packageName] = types.ModuleType(packageName) module.__path__ = [] _set_parent_ns(packageName) elif not hasattr(module, '__path__'): raise TypeError("Not a package:", packageName) handler = _find_adapter(_namespace_handlers, importer) subpath = handler(importer, path_item, packageName, module) if subpath is not None: path = module.__path__ path.append(subpath) loader.load_module(packageName) _rebuild_mod_path(path, packageName, module) return subpath def _rebuild_mod_path(orig_path, package_name, module): """ Rebuild module.__path__ ensuring that all entries are ordered corresponding to their sys.path order """ sys_path = [_normalize_cached(p) for p in sys.path] def safe_sys_path_index(entry): """ Workaround for #520 and #513. 
""" try: return sys_path.index(entry) except ValueError: return float('inf') def position_in_sys_path(path): """ Return the ordinal of the path based on its position in sys.path """ path_parts = path.split(os.sep) module_parts = package_name.count('.') + 1 parts = path_parts[:-module_parts] return safe_sys_path_index(_normalize_cached(os.sep.join(parts))) new_path = sorted(orig_path, key=position_in_sys_path) new_path = [_normalize_cached(p) for p in new_path] if isinstance(module.__path__, list): module.__path__[:] = new_path else: module.__path__ = new_path def declare_namespace(packageName): """Declare that package 'packageName' is a namespace package""" _imp.acquire_lock() try: if packageName in _namespace_packages: return path = sys.path parent, _, _ = packageName.rpartition('.') if parent: declare_namespace(parent) if parent not in _namespace_packages: __import__(parent) try: path = sys.modules[parent].__path__ except AttributeError: raise TypeError("Not a package:", parent) # Track what packages are namespaces, so when new path items are added, # they can be updated _namespace_packages.setdefault(parent or None, []).append(packageName) _namespace_packages.setdefault(packageName, []) for path_item in path: # Ensure all the parent's path items are reflected in the child, # if they apply _handle_ns(packageName, path_item) finally: _imp.release_lock() def fixup_namespace_packages(path_item, parent=None): """Ensure that previously-declared namespace packages include path_item""" _imp.acquire_lock() try: for package in _namespace_packages.get(parent, ()): subpath = _handle_ns(package, path_item) if subpath: fixup_namespace_packages(subpath, package) finally: _imp.release_lock() def file_ns_handler(importer, path_item, packageName, module): """Compute an ns-package subpath for a filesystem or zipfile importer""" subpath = os.path.join(path_item, packageName.split('.')[-1]) normalized = _normalize_cached(subpath) for item in module.__path__: if _normalize_cached(item) == normalized: break else: # Only return the path if it's not already there return subpath register_namespace_handler(pkgutil.ImpImporter, file_ns_handler) register_namespace_handler(zipimport.zipimporter, file_ns_handler) if hasattr(importlib_machinery, 'FileFinder'): register_namespace_handler(importlib_machinery.FileFinder, file_ns_handler) def null_ns_handler(importer, path_item, packageName, module): return None register_namespace_handler(object, null_ns_handler) def normalize_path(filename): """Normalize a file/dir name for comparison purposes""" return os.path.normcase(os.path.realpath(os.path.normpath(_cygwin_patch(filename)))) def _cygwin_patch(filename): # pragma: nocover """ Contrary to POSIX 2008, on Cygwin, getcwd (3) contains symlink components. Using os.path.abspath() works around this limitation. A fix in os.getcwd() would probably better, in Cygwin even more so, except that this seems to be by design... """ return os.path.abspath(filename) if sys.platform == 'cygwin' else filename def _normalize_cached(filename, _cache={}): try: return _cache[filename] except KeyError: _cache[filename] = result = normalize_path(filename) return result def _is_egg_path(path): """ Determine if given path appears to be an egg. """ return path.lower().endswith('.egg') def _is_unpacked_egg(path): """ Determine if given path appears to be an unpacked egg. 
""" return ( _is_egg_path(path) and os.path.isfile(os.path.join(path, 'EGG-INFO', 'PKG-INFO')) ) def _set_parent_ns(packageName): parts = packageName.split('.') name = parts.pop() if parts: parent = '.'.join(parts) setattr(sys.modules[parent], name, sys.modules[packageName]) def yield_lines(strs): """Yield non-empty/non-comment lines of a string or sequence""" if isinstance(strs, six.string_types): for s in strs.splitlines(): s = s.strip() # skip blank lines/comments if s and not s.startswith('#'): yield s else: for ss in strs: for s in yield_lines(ss): yield s MODULE = re.compile(r"\w+(\.\w+)*$").match EGG_NAME = re.compile( r""" (?P<name>[^-]+) ( -(?P<ver>[^-]+) ( -py(?P<pyver>[^-]+) ( -(?P<plat>.+) )? )? )? """, re.VERBOSE | re.IGNORECASE, ).match class EntryPoint: """Object representing an advertised importable object""" def __init__(self, name, module_name, attrs=(), extras=(), dist=None): if not MODULE(module_name): raise ValueError("Invalid module name", module_name) self.name = name self.module_name = module_name self.attrs = tuple(attrs) self.extras = tuple(extras) self.dist = dist def __str__(self): s = "%s = %s" % (self.name, self.module_name) if self.attrs: s += ':' + '.'.join(self.attrs) if self.extras: s += ' [%s]' % ','.join(self.extras) return s def __repr__(self): return "EntryPoint.parse(%r)" % str(self) def load(self, require=True, *args, **kwargs): """ Require packages for this EntryPoint, then resolve it. """ if not require or args or kwargs: warnings.warn( "Parameters to load are deprecated. Call .resolve and " ".require separately.", PkgResourcesDeprecationWarning, stacklevel=2, ) if require: self.require(*args, **kwargs) return self.resolve() def resolve(self): """ Resolve the entry point from its module and attrs. """ module = __import__(self.module_name, fromlist=['__name__'], level=0) try: return functools.reduce(getattr, self.attrs, module) except AttributeError as exc: raise ImportError(str(exc)) def require(self, env=None, installer=None): if self.extras and not self.dist: raise UnknownExtra("Can't require() without a distribution", self) # Get the requirements for this entry point with all its extras and # then resolve them. We have to pass `extras` along when resolving so # that the working set knows what extras we want. Otherwise, for # dist-info distributions, the working set will assume that the # requirements for that extra are purely optional and skip over them. 
reqs = self.dist.requires(self.extras) items = working_set.resolve(reqs, env, installer, extras=self.extras) list(map(working_set.add, items)) pattern = re.compile( r'\s*' r'(?P<name>.+?)\s*' r'=\s*' r'(?P<module>[\w.]+)\s*' r'(:\s*(?P<attr>[\w.]+))?\s*' r'(?P<extras>\[.*\])?\s*$' ) @classmethod def parse(cls, src, dist=None): """Parse a single entry point from string `src` Entry point syntax follows the form:: name = some.module:some.attr [extra1, extra2] The entry name and module name are required, but the ``:attrs`` and ``[extras]`` parts are optional """ m = cls.pattern.match(src) if not m: msg = "EntryPoint must be in 'name=module:attrs [extras]' format" raise ValueError(msg, src) res = m.groupdict() extras = cls._parse_extras(res['extras']) attrs = res['attr'].split('.') if res['attr'] else () return cls(res['name'], res['module'], attrs, extras, dist) @classmethod def _parse_extras(cls, extras_spec): if not extras_spec: return () req = Requirement.parse('x' + extras_spec) if req.specs: raise ValueError() return req.extras @classmethod def parse_group(cls, group, lines, dist=None): """Parse an entry point group""" if not MODULE(group): raise ValueError("Invalid group name", group) this = {} for line in yield_lines(lines): ep = cls.parse(line, dist) if ep.name in this: raise ValueError("Duplicate entry point", group, ep.name) this[ep.name] = ep return this @classmethod def parse_map(cls, data, dist=None): """Parse a map of entry point groups""" if isinstance(data, dict): data = data.items() else: data = split_sections(data) maps = {} for group, lines in data: if group is None: if not lines: continue raise ValueError("Entry points must be listed in groups") group = group.strip() if group in maps: raise ValueError("Duplicate group name", group) maps[group] = cls.parse_group(group, lines, dist) return maps def _remove_md5_fragment(location): if not location: return '' parsed = urllib.parse.urlparse(location) if parsed[-1].startswith('md5='): return urllib.parse.urlunparse(parsed[:-1] + ('',)) return location def _version_from_file(lines): """ Given an iterable of lines from a Metadata file, return the value of the Version field, if present, or None otherwise. 
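    For example (illustrative)::

        _version_from_file(['Name: foo', 'Version: 1.2'])  # -> '1.2'
        _version_from_file(['Name: foo'])                  # -> None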
""" def is_version_line(line): return line.lower().startswith('version:') version_lines = filter(is_version_line, lines) line = next(iter(version_lines), '') _, _, value = line.partition(':') return safe_version(value.strip()) or None class Distribution: """Wrap an actual or potential sys.path entry w/metadata""" PKG_INFO = 'PKG-INFO' def __init__( self, location=None, metadata=None, project_name=None, version=None, py_version=PY_MAJOR, platform=None, precedence=EGG_DIST): self.project_name = safe_name(project_name or 'Unknown') if version is not None: self._version = safe_version(version) self.py_version = py_version self.platform = platform self.location = location self.precedence = precedence self._provider = metadata or empty_provider @classmethod def from_location(cls, location, basename, metadata=None, **kw): project_name, version, py_version, platform = [None] * 4 basename, ext = os.path.splitext(basename) if ext.lower() in _distributionImpl: cls = _distributionImpl[ext.lower()] match = EGG_NAME(basename) if match: project_name, version, py_version, platform = match.group( 'name', 'ver', 'pyver', 'plat' ) return cls( location, metadata, project_name=project_name, version=version, py_version=py_version, platform=platform, **kw )._reload_version() def _reload_version(self): return self @property def hashcmp(self): return ( self.parsed_version, self.precedence, self.key, _remove_md5_fragment(self.location), self.py_version or '', self.platform or '', ) def __hash__(self): return hash(self.hashcmp) def __lt__(self, other): return self.hashcmp < other.hashcmp def __le__(self, other): return self.hashcmp <= other.hashcmp def __gt__(self, other): return self.hashcmp > other.hashcmp def __ge__(self, other): return self.hashcmp >= other.hashcmp def __eq__(self, other): if not isinstance(other, self.__class__): # It's not a Distribution, so they are not equal return False return self.hashcmp == other.hashcmp def __ne__(self, other): return not self == other # These properties have to be lazy so that we don't have to load any # metadata until/unless it's actually needed. (i.e., some distributions # may not know their name or version without loading PKG-INFO) @property def key(self): try: return self._key except AttributeError: self._key = key = self.project_name.lower() return key @property def parsed_version(self): if not hasattr(self, "_parsed_version"): self._parsed_version = parse_version(self.version) return self._parsed_version def _warn_legacy_version(self): LV = packaging.version.LegacyVersion is_legacy = isinstance(self._parsed_version, LV) if not is_legacy: return # While an empty version is technically a legacy version and # is not a valid PEP 440 version, it's also unlikely to # actually come from someone and instead it is more likely that # it comes from setuptools attempting to parse a filename and # including it in the list. So for that we'll gate this warning # on if the version is anything at all or not. if not self.version: return tmpl = textwrap.dedent(""" '{project_name} ({version})' is being parsed as a legacy, non PEP 440, version. You may find odd behavior and sort order. In particular it will be sorted as less than 0.0. It is recommended to migrate to PEP 440 compatible versions. 
""").strip().replace('\n', ' ') warnings.warn(tmpl.format(**vars(self)), PEP440Warning) @property def version(self): try: return self._version except AttributeError: version = self._get_version() if version is None: path = self._get_metadata_path_for_display(self.PKG_INFO) msg = ( "Missing 'Version:' header and/or {} file at path: {}" ).format(self.PKG_INFO, path) raise ValueError(msg, self) return version @property def _dep_map(self): """ A map of extra to its list of (direct) requirements for this distribution, including the null extra. """ try: return self.__dep_map except AttributeError: self.__dep_map = self._filter_extras(self._build_dep_map()) return self.__dep_map @staticmethod def _filter_extras(dm): """ Given a mapping of extras to dependencies, strip off environment markers and filter out any dependencies not matching the markers. """ for extra in list(filter(None, dm)): new_extra = extra reqs = dm.pop(extra) new_extra, _, marker = extra.partition(':') fails_marker = marker and ( invalid_marker(marker) or not evaluate_marker(marker) ) if fails_marker: reqs = [] new_extra = safe_extra(new_extra) or None dm.setdefault(new_extra, []).extend(reqs) return dm def _build_dep_map(self): dm = {} for name in 'requires.txt', 'depends.txt': for extra, reqs in split_sections(self._get_metadata(name)): dm.setdefault(extra, []).extend(parse_requirements(reqs)) return dm def requires(self, extras=()): """List of Requirements needed for this distro if `extras` are used""" dm = self._dep_map deps = [] deps.extend(dm.get(None, ())) for ext in extras: try: deps.extend(dm[safe_extra(ext)]) except KeyError: raise UnknownExtra( "%s has no such extra feature %r" % (self, ext) ) return deps def _get_metadata_path_for_display(self, name): """ Return the path to the given metadata file, if available. """ try: # We need to access _get_metadata_path() on the provider object # directly rather than through this class's __getattr__() # since _get_metadata_path() is marked private. path = self._provider._get_metadata_path(name) # Handle exceptions e.g. in case the distribution's metadata # provider doesn't support _get_metadata_path(). 
except Exception: return '[could not detect]' return path def _get_metadata(self, name): if self.has_metadata(name): for line in self.get_metadata_lines(name): yield line def _get_version(self): lines = self._get_metadata(self.PKG_INFO) version = _version_from_file(lines) return version def activate(self, path=None, replace=False): """Ensure distribution is importable on `path` (default=sys.path)""" if path is None: path = sys.path self.insert_on(path, replace=replace) if path is sys.path: fixup_namespace_packages(self.location) for pkg in self._get_metadata('namespace_packages.txt'): if pkg in sys.modules: declare_namespace(pkg) def egg_name(self): """Return what this distribution's standard .egg filename should be""" filename = "%s-%s-py%s" % ( to_filename(self.project_name), to_filename(self.version), self.py_version or PY_MAJOR ) if self.platform: filename += '-' + self.platform return filename def __repr__(self): if self.location: return "%s (%s)" % (self, self.location) else: return str(self) def __str__(self): try: version = getattr(self, 'version', None) except ValueError: version = None version = version or "[unknown version]" return "%s %s" % (self.project_name, version) def __getattr__(self, attr): """Delegate all unrecognized public attributes to .metadata provider""" if attr.startswith('_'): raise AttributeError(attr) return getattr(self._provider, attr) def __dir__(self): return list( set(super(Distribution, self).__dir__()) | set( attr for attr in self._provider.__dir__() if not attr.startswith('_') ) ) if not hasattr(object, '__dir__'): # python 2.7 not supported del __dir__ @classmethod def from_filename(cls, filename, metadata=None, **kw): return cls.from_location( _normalize_cached(filename), os.path.basename(filename), metadata, **kw ) def as_requirement(self): """Return a ``Requirement`` that matches this distribution exactly""" if isinstance(self.parsed_version, packaging.version.Version): spec = "%s==%s" % (self.project_name, self.parsed_version) else: spec = "%s===%s" % (self.project_name, self.parsed_version) return Requirement.parse(spec) def load_entry_point(self, group, name): """Return the `name` entry point of `group` or raise ImportError""" ep = self.get_entry_info(group, name) if ep is None: raise ImportError("Entry point %r not found" % ((group, name),)) return ep.load() def get_entry_map(self, group=None): """Return the entry point map for `group`, or the full entry map""" try: ep_map = self._ep_map except AttributeError: ep_map = self._ep_map = EntryPoint.parse_map( self._get_metadata('entry_points.txt'), self ) if group is not None: return ep_map.get(group, {}) return ep_map def get_entry_info(self, group, name): """Return the EntryPoint object for `group`+`name`, or ``None``""" return self.get_entry_map(group).get(name) def insert_on(self, path, loc=None, replace=False): """Ensure self.location is on path If replace=False (default): - If location is already in path anywhere, do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent. - Else: add to the end of path. If replace=True: - If location is already on path anywhere (not eggs) or higher priority than its parent (eggs) do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent, removing any lower-priority entries. - Else: add it to the front of path. 
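        A hedged sketch of the common case (paths are illustrative)::

            dist = Distribution.from_filename('/plugins/Foo-1.0-py3.7.egg')
            dist.insert_on(sys.path)  # the egg becomes importable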
""" loc = loc or self.location if not loc: return nloc = _normalize_cached(loc) bdir = os.path.dirname(nloc) npath = [(p and _normalize_cached(p) or p) for p in path] for p, item in enumerate(npath): if item == nloc: if replace: break else: # don't modify path (even removing duplicates) if # found and not replace return elif item == bdir and self.precedence == EGG_DIST: # if it's an .egg, give it precedence over its directory # UNLESS it's already been added to sys.path and replace=False if (not replace) and nloc in npath[p:]: return if path is sys.path: self.check_version_conflict() path.insert(p, loc) npath.insert(p, nloc) break else: if path is sys.path: self.check_version_conflict() if replace: path.insert(0, loc) else: path.append(loc) return # p is the spot where we found or inserted loc; now remove duplicates while True: try: np = npath.index(nloc, p + 1) except ValueError: break else: del npath[np], path[np] # ha! p = np return def check_version_conflict(self): if self.key == 'setuptools': # ignore the inevitable setuptools self-conflicts :( return nsp = dict.fromkeys(self._get_metadata('namespace_packages.txt')) loc = normalize_path(self.location) for modname in self._get_metadata('top_level.txt'): if (modname not in sys.modules or modname in nsp or modname in _namespace_packages): continue if modname in ('pkg_resources', 'setuptools', 'site'): continue fn = getattr(sys.modules[modname], '__file__', None) if fn and (normalize_path(fn).startswith(loc) or fn.startswith(self.location)): continue issue_warning( "Module %s was already imported from %s, but %s is being added" " to sys.path" % (modname, fn, self.location), ) def has_version(self): try: self.version except ValueError: issue_warning("Unbuilt egg for " + repr(self)) return False return True def clone(self, **kw): """Copy this distribution, substituting in any changed keyword args""" names = 'project_name version py_version platform location precedence' for attr in names.split(): kw.setdefault(attr, getattr(self, attr, None)) kw.setdefault('metadata', self._provider) return self.__class__(**kw) @property def extras(self): return [dep for dep in self._dep_map if dep] class EggInfoDistribution(Distribution): def _reload_version(self): """ Packages installed by distutils (e.g. numpy or scipy), which uses an old safe_version, and so their version numbers can get mangled when converted to filenames (e.g., 1.11.0.dev0+2329eae to 1.11.0.dev0_2329eae). These distributions will not be parsed properly downstream by Distribution and safe_version, so take an extra step and try to get the version number from the metadata file itself instead of the filename. """ md_version = self._get_version() if md_version: self._version = md_version return self class DistInfoDistribution(Distribution): """ Wrap an actual or potential sys.path entry w/metadata, .dist-info style. 
""" PKG_INFO = 'METADATA' EQEQ = re.compile(r"([\(,])\s*(\d.*?)\s*([,\)])") @property def _parsed_pkg_info(self): """Parse and cache metadata""" try: return self._pkg_info except AttributeError: metadata = self.get_metadata(self.PKG_INFO) self._pkg_info = email.parser.Parser().parsestr(metadata) return self._pkg_info @property def _dep_map(self): try: return self.__dep_map except AttributeError: self.__dep_map = self._compute_dependencies() return self.__dep_map def _compute_dependencies(self): """Recompute this distribution's dependencies.""" dm = self.__dep_map = {None: []} reqs = [] # Including any condition expressions for req in self._parsed_pkg_info.get_all('Requires-Dist') or []: reqs.extend(parse_requirements(req)) def reqs_for_extra(extra): for req in reqs: if not req.marker or req.marker.evaluate({'extra': extra}): yield req common = frozenset(reqs_for_extra(None)) dm[None].extend(common) for extra in self._parsed_pkg_info.get_all('Provides-Extra') or []: s_extra = safe_extra(extra.strip()) dm[s_extra] = list(frozenset(reqs_for_extra(extra)) - common) return dm _distributionImpl = { '.egg': Distribution, '.egg-info': EggInfoDistribution, '.dist-info': DistInfoDistribution, } def issue_warning(*args, **kw): level = 1 g = globals() try: # find the first stack frame that is *not* code in # the pkg_resources module, to use for the warning while sys._getframe(level).f_globals is g: level += 1 except ValueError: pass warnings.warn(stacklevel=level + 1, *args, **kw) class RequirementParseError(ValueError): def __str__(self): return ' '.join(self.args) def parse_requirements(strs): """Yield ``Requirement`` objects for each specification in `strs` `strs` must be a string, or a (possibly-nested) iterable thereof. """ # create a steppable iterator, so we can handle \-continuations lines = iter(yield_lines(strs)) for line in lines: # Drop comments -- a hash without a space may be in a URL. if ' #' in line: line = line[:line.find(' #')] # If there is a line continuation, drop it, and append the next line. if line.endswith('\\'): line = line[:-2].strip() try: line += next(lines) except StopIteration: return yield Requirement(line) class Requirement(packaging.requirements.Requirement): def __init__(self, requirement_string): """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!""" try: super(Requirement, self).__init__(requirement_string) except packaging.requirements.InvalidRequirement as e: raise RequirementParseError(str(e)) self.unsafe_name = self.name project_name = safe_name(self.name) self.project_name, self.key = project_name, project_name.lower() self.specs = [ (spec.operator, spec.version) for spec in self.specifier] self.extras = tuple(map(safe_extra, self.extras)) self.hashCmp = ( self.key, self.specifier, frozenset(self.extras), str(self.marker) if self.marker else None, ) self.__hash = hash(self.hashCmp) def __eq__(self, other): return ( isinstance(other, Requirement) and self.hashCmp == other.hashCmp ) def __ne__(self, other): return not self == other def __contains__(self, item): if isinstance(item, Distribution): if item.key != self.key: return False item = item.version # Allow prereleases always in order to match the previous behavior of # this method. In the future this should be smarter and follow PEP 440 # more accurately. 
class Requirement(packaging.requirements.Requirement):
    def __init__(self, requirement_string):
        """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!"""
        try:
            super(Requirement, self).__init__(requirement_string)
        except packaging.requirements.InvalidRequirement as e:
            raise RequirementParseError(str(e))

        self.unsafe_name = self.name
        project_name = safe_name(self.name)
        self.project_name, self.key = project_name, project_name.lower()
        self.specs = [
            (spec.operator, spec.version) for spec in self.specifier]
        self.extras = tuple(map(safe_extra, self.extras))
        self.hashCmp = (
            self.key,
            self.specifier,
            frozenset(self.extras),
            str(self.marker) if self.marker else None,
        )
        self.__hash = hash(self.hashCmp)

    def __eq__(self, other):
        return (
            isinstance(other, Requirement) and
            self.hashCmp == other.hashCmp
        )

    def __ne__(self, other):
        return not self == other

    def __contains__(self, item):
        if isinstance(item, Distribution):
            if item.key != self.key:
                return False

            item = item.version

        # Allow prereleases always in order to match the previous behavior of
        # this method. In the future this should be smarter and follow PEP 440
        # more accurately.
        return self.specifier.contains(item, prereleases=True)

    def __hash__(self):
        return self.__hash

    def __repr__(self):
        return "Requirement.parse(%r)" % str(self)

    @staticmethod
    def parse(s):
        req, = parse_requirements(s)
        return req


def _always_object(classes):
    """
    Ensure object appears in the mro even
    for old-style classes.
    """
    if object not in classes:
        return classes + (object,)
    return classes


def _find_adapter(registry, ob):
    """Return an adapter factory for `ob` from `registry`"""
    types = _always_object(inspect.getmro(getattr(ob, '__class__', type(ob))))
    for t in types:
        if t in registry:
            return registry[t]


def ensure_directory(path):
    """Ensure that the parent directory of `path` exists"""
    dirname = os.path.dirname(path)
    py31compat.makedirs(dirname, exist_ok=True)


def _bypass_ensure_directory(path):
    """Sandbox-bypassing version of ensure_directory()"""
    if not WRITE_SUPPORT:
        raise IOError('"os.mkdir" not supported on this platform.')
    dirname, filename = split(path)
    if dirname and filename and not isdir(dirname):
        _bypass_ensure_directory(dirname)
        try:
            mkdir(dirname, 0o755)
        except FileExistsError:
            pass


def split_sections(s):
    """Split a string or iterable thereof into (section, content) pairs

    Each ``section`` is a stripped version of the section header ("[section]")
    and each ``content`` is a list of stripped lines excluding blank lines and
    comment-only lines.  If there are any such lines before the first section
    header, they're returned in a first ``section`` of ``None``.
    """
    section = None
    content = []
    for line in yield_lines(s):
        if line.startswith("["):
            if line.endswith("]"):
                if section or content:
                    yield section, content
                section = line[1:-1].strip()
                content = []
            else:
                raise ValueError("Invalid section heading", line)
        else:
            content.append(line)

    # wrap up last segment
    yield section, content


def _mkstemp(*args, **kw):
    old_open = os.open
    try:
        # temporarily bypass sandboxing
        os.open = os_open
        return tempfile.mkstemp(*args, **kw)
    finally:
        # and then put it back
        os.open = old_open


# Silence the PEP440Warning by default, so that end users don't get hit by it
# randomly just because they use pkg_resources. We want to append the rule
# because we want earlier uses of filterwarnings to take precedence over this
# one.
warnings.filterwarnings("ignore", category=PEP440Warning, append=True)


# from jaraco.functools 1.3
def _call_aside(f, *args, **kwargs):
    f(*args, **kwargs)
    return f


@_call_aside
def _initialize(g=globals()):
    "Set up global resource manager (deliberately not state-saved)"
    manager = ResourceManager()
    g['_manager'] = manager
    g.update(
        (name, getattr(manager, name))
        for name in dir(manager)
        if not name.startswith('_')
    )
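# --- Editor's example (illustrative sketch, not part of the vendored source).
# split_sections() is what turns entry_points.txt-style text into
# (section, lines) pairs; comment-only and blank lines are dropped by
# yield_lines(). The section and entry names here are made up:
def _demo_split_sections():
    from pkg_resources import split_sections

    text = """
    [console_scripts]
    demo = demo.cli:main

    [demo.plugins]
    # plugin hooks are listed here
    alpha = demo.plugins:alpha
    """
    assert list(split_sections(text)) == [
        ('console_scripts', ['demo = demo.cli:main']),
        ('demo.plugins', ['alpha = demo.plugins:alpha']),
    ]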
@_call_aside
def _initialize_master_working_set():
    """
    Prepare the master working set and make the ``require()``
    API available.

    This function has explicit effects on the global state
    of pkg_resources. It is intended to be invoked once at
    the initialization of this module.

    Invocation by other packages is unsupported and done
    at their own risk.
    """
    working_set = WorkingSet._build_master()
    _declare_state('object', working_set=working_set)

    require = working_set.require
    iter_entry_points = working_set.iter_entry_points
    add_activation_listener = working_set.subscribe
    run_script = working_set.run_script
    # backward compatibility
    run_main = run_script
    # Activate all distributions already on sys.path with replace=False and
    # ensure that all distributions added to the working set in the future
    # (e.g. by calling ``require()``) will get activated as well,
    # with higher priority (replace=True).
    tuple(
        dist.activate(replace=False)
        for dist in working_set
    )
    add_activation_listener(
        lambda dist: dist.activate(replace=True),
        existing=False,
    )
    working_set.entries = []
    # match order
    list(map(working_set.add_entry, sys.path))
    globals().update(locals())


class PkgResourcesDeprecationWarning(Warning):
    """
    Base class for warning about deprecations in ``pkg_resources``

    This class is not derived from ``DeprecationWarning``, and as such is
    visible by default.
    """


# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pkg_resources/py31compat.py =====

import os
import errno
import sys

from pip._vendor import six


def _makedirs_31(path, exist_ok=False):
    try:
        os.makedirs(path)
    except OSError as exc:
        if not exist_ok or exc.errno != errno.EEXIST:
            raise


# rely on compatibility behavior until mode considerations
# and exists_ok considerations are disentangled.
# See https://github.com/pypa/setuptools/pull/1083#issuecomment-315168663
needs_makedirs = (
    six.PY2 or
    (3, 4) <= sys.version_info < (3, 4, 1)
)
makedirs = _makedirs_31 if needs_makedirs else os.makedirs
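# --- Editor's example (illustrative sketch, not part of the vendored source).
# On any interpreter, the `makedirs` alias above behaves like os.makedirs
# with a working exist_ok flag; the helper name is hypothetical:
def _demo_makedirs():
    import os.path
    import tempfile

    target = os.path.join(tempfile.mkdtemp(), 'a', 'b')
    makedirs(target, exist_ok=True)
    makedirs(target, exist_ok=True)   # second call is a no-op, not an OSError
    assert os.path.isdir(target)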
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/progress/__init__.py =====

# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>
#
# Permission to use, copy, modify, and distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import division, print_function

from collections import deque
from datetime import timedelta
from math import ceil
from sys import stderr
try:
    from time import monotonic
except ImportError:
    from time import time as monotonic


__version__ = '1.5'

HIDE_CURSOR = '\x1b[?25l'
SHOW_CURSOR = '\x1b[?25h'


class Infinite(object):
    file = stderr
    sma_window = 10         # Simple Moving Average window
    check_tty = True
    hide_cursor = True

    def __init__(self, message='', **kwargs):
        self.index = 0
        self.start_ts = monotonic()
        self.avg = 0
        self._avg_update_ts = self.start_ts
        self._ts = self.start_ts
        self._xput = deque(maxlen=self.sma_window)
        for key, val in kwargs.items():
            setattr(self, key, val)

        self._width = 0
        self.message = message

        if self.file and self.is_tty():
            if self.hide_cursor:
                print(HIDE_CURSOR, end='', file=self.file)
            print(self.message, end='', file=self.file)
            self.file.flush()

    def __getitem__(self, key):
        if key.startswith('_'):
            return None
        return getattr(self, key, None)

    @property
    def elapsed(self):
        return int(monotonic() - self.start_ts)

    @property
    def elapsed_td(self):
        return timedelta(seconds=self.elapsed)

    def update_avg(self, n, dt):
        if n > 0:
            xput_len = len(self._xput)
            self._xput.append(dt / n)
            now = monotonic()
            # update when we're still filling _xput, then after every second
            if (xput_len < self.sma_window or
                    now - self._avg_update_ts > 1):
                self.avg = sum(self._xput) / len(self._xput)
                self._avg_update_ts = now

    def update(self):
        pass

    def start(self):
        pass

    def clearln(self):
        if self.file and self.is_tty():
            print('\r\x1b[K', end='', file=self.file)

    def write(self, s):
        if self.file and self.is_tty():
            line = self.message + s.ljust(self._width)
            print('\r' + line, end='', file=self.file)
            self._width = max(self._width, len(s))
            self.file.flush()

    def writeln(self, line):
        if self.file and self.is_tty():
            self.clearln()
            print(line, end='', file=self.file)
            self.file.flush()

    def finish(self):
        if self.file and self.is_tty():
            print(file=self.file)
            if self.hide_cursor:
                print(SHOW_CURSOR, end='', file=self.file)

    def is_tty(self):
        return self.file.isatty() if self.check_tty else True

    def next(self, n=1):
        now = monotonic()
        dt = now - self._ts
        self.update_avg(n, dt)
        self._ts = now
        self.index = self.index + n
        self.update()

    def iter(self, it):
        with self:
            for x in it:
                yield x
                self.next()

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.finish()
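# --- Editor's example (illustrative sketch, not part of the vendored source).
# Infinite is the open-ended base class: next() advances self.index and
# calls update(), which subclasses override to draw something. A minimal
# subclass and driver would look like this (names are made up):
#
#     class Dots(Infinite):
#         def update(self):
#             self.write('.' * (self.index % 4))
#
#     bar = Dots('working ')
#     for _ in range(100):
#         bar.next()
#     bar.finish()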
class Progress(Infinite):
    def __init__(self, *args, **kwargs):
        super(Progress, self).__init__(*args, **kwargs)
        self.max = kwargs.get('max', 100)

    @property
    def eta(self):
        return int(ceil(self.avg * self.remaining))

    @property
    def eta_td(self):
        return timedelta(seconds=self.eta)

    @property
    def percent(self):
        return self.progress * 100

    @property
    def progress(self):
        return min(1, self.index / self.max)

    @property
    def remaining(self):
        return max(self.max - self.index, 0)

    def start(self):
        self.update()

    def goto(self, index):
        incr = index - self.index
        self.next(incr)

    def iter(self, it):
        try:
            self.max = len(it)
        except TypeError:
            pass

        with self:
            for x in it:
                yield x
                self.next()
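# --- Editor's example (illustrative sketch, not part of the vendored source).
# Progress layers max/progress/percent/remaining on top of Infinite; the
# derived values are plain arithmetic on self.index:
def _demo_progress_arithmetic():
    p = Progress(max=4)
    p.next()
    p.next()
    assert p.progress == 0.5
    assert p.percent == 50.0
    assert p.remaining == 2
    p.finish()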
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/progress/bar.py =====

# -*- coding: utf-8 -*-
# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>
#
# Permission to use, copy, modify, and distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import unicode_literals

import sys

from . import Progress


class Bar(Progress):
    width = 32
    suffix = '%(index)d/%(max)d'
    bar_prefix = ' |'
    bar_suffix = '| '
    empty_fill = ' '
    fill = '#'

    def update(self):
        filled_length = int(self.width * self.progress)
        empty_length = self.width - filled_length

        message = self.message % self
        bar = self.fill * filled_length
        empty = self.empty_fill * empty_length
        suffix = self.suffix % self
        line = ''.join([message, self.bar_prefix, bar, empty,
                        self.bar_suffix, suffix])
        self.writeln(line)


class ChargingBar(Bar):
    suffix = '%(percent)d%%'
    bar_prefix = ' '
    bar_suffix = ' '
    empty_fill = '∙'
    fill = '█'


class FillingSquaresBar(ChargingBar):
    empty_fill = '▢'
    fill = '▣'


class FillingCirclesBar(ChargingBar):
    empty_fill = '◯'
    fill = '◉'


class IncrementalBar(Bar):
    if sys.platform.startswith('win'):
        phases = (u' ', u'▌', u'█')
    else:
        phases = (' ', '▏', '▎', '▍', '▌', '▋', '▊', '▉', '█')

    def update(self):
        nphases = len(self.phases)
        filled_len = self.width * self.progress
        nfull = int(filled_len)                      # Number of full chars
        phase = int((filled_len - nfull) * nphases)  # Phase of last char
        nempty = self.width - nfull                  # Number of empty chars

        message = self.message % self
        bar = self.phases[-1] * nfull
        current = self.phases[phase] if phase > 0 else ''
        empty = self.empty_fill * max(0, nempty - len(current))
        suffix = self.suffix % self
        line = ''.join([message, self.bar_prefix, bar, current, empty,
                        self.bar_suffix, suffix])
        self.writeln(line)


class PixelBar(IncrementalBar):
    phases = ('⡀', '⡄', '⡆', '⡇', '⣇', '⣧', '⣷', '⣿')


class ShadyBar(IncrementalBar):
    phases = (' ', '░', '▒', '▓', '█')
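# --- Editor's example (illustrative sketch, not part of the vendored source).
# Bar renders via self.writeln(); the message/suffix templates are filled
# through Infinite.__getitem__, so any public attribute or property
# ('index', 'max', 'percent', 'eta', ...) can appear in them:
def _demo_bar():
    import time

    bar = Bar('Processing', max=20, suffix='%(percent)d%% (eta %(eta)ds)')
    for _ in bar.iter(range(20)):
        time.sleep(0.01)   # stand-in for real work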
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/progress/counter.py =====

# -*- coding: utf-8 -*-
# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>
#
# Permission to use, copy, modify, and distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import unicode_literals

from . import Infinite, Progress


class Counter(Infinite):
    def update(self):
        self.write(str(self.index))


class Countdown(Progress):
    def update(self):
        self.write(str(self.remaining))


class Stack(Progress):
    phases = (' ', '▁', '▂', '▃', '▄', '▅', '▆', '▇', '█')

    def update(self):
        nphases = len(self.phases)
        i = min(nphases - 1, int(self.progress * nphases))
        self.write(self.phases[i])


class Pie(Stack):
    phases = ('○', '◔', '◑', '◕', '●')
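# --- Editor's example (illustrative sketch, not part of the vendored source).
# Counter just prints the running index; Pie picks one of its phase glyphs
# from Progress.progress:
def _demo_counters():
    counter = Counter('Items ')
    for _ in range(100):
        counter.next()
    counter.finish()

    pie = Pie('Disk usage ', max=8)
    for _ in pie.iter(range(8)):
        pass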
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/progress/spinner.py =====

# -*- coding: utf-8 -*-
# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>
#
# Permission to use, copy, modify, and distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import unicode_literals

from . import Infinite


class Spinner(Infinite):
    phases = ('-', '\\', '|', '/')
    hide_cursor = True

    def update(self):
        i = self.index % len(self.phases)
        self.write(self.phases[i])


class PieSpinner(Spinner):
    phases = ['◷', '◶', '◵', '◴']


class MoonSpinner(Spinner):
    phases = ['◑', '◒', '◐', '◓']


class LineSpinner(Spinner):
    phases = ['⎺', '⎻', '⎼', '⎽', '⎼', '⎻']


class PixelSpinner(Spinner):
    phases = ['⣾', '⣷', '⣯', '⣟', '⡿', '⢿', '⣻', '⣽']
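# --- Editor's example (illustrative sketch, not part of the vendored source).
# A Spinner cycles through its phases on every next(); a custom glyph set
# only needs a new `phases` sequence (this subclass is made up):
class ArrowSpinner(Spinner):
    phases = ['←', '↖', '↑', '↗', '→', '↘', '↓', '↙']


def _demo_spinner():
    spin = ArrowSpinner('Loading ')
    for _ in range(32):
        spin.next()
    spin.finish()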
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pyparsing.py =====

# -*- coding: utf-8 -*-
# module pyparsing.py
#
# Copyright (c) 2003-2019 Paul T. McGuire
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#

__doc__ = \
"""
pyparsing module - Classes and methods to define and execute parsing grammars
=============================================================================

The pyparsing module is an alternative approach to creating and
executing simple grammars, vs. the traditional lex/yacc approach, or the
use of regular expressions.  With pyparsing, you don't need to learn
a new syntax for defining grammars or matching expressions - the parsing
module provides a library of classes that you use to construct the
grammar directly in Python.

Here is a program to parse "Hello, World!" (or any greeting of the form
``"<salutation>, <addressee>!"``), built up using :class:`Word`,
:class:`Literal`, and :class:`And` elements
(the :class:`'+'<ParserElement.__add__>` operators create :class:`And` expressions,
and the strings are auto-converted to :class:`Literal` expressions)::

    from pip._vendor.pyparsing import Word, alphas

    # define grammar of a greeting
    greet = Word(alphas) + "," + Word(alphas) + "!"

    hello = "Hello, World!"
    print (hello, "->", greet.parseString(hello))

The program outputs the following::

    Hello, World! -> ['Hello', ',', 'World', '!']

The Python representation of the grammar is quite readable, owing to the
self-explanatory class names, and the use of '+', '|' and '^' operators.

The :class:`ParseResults` object returned from
:class:`ParserElement.parseString` can be
accessed as a nested list, a dictionary, or an object with named
attributes.

The pyparsing module handles some of the problems that are typically
vexing when writing text parsers:

 - extra or missing whitespace (the above program will also handle
   "Hello , World !", etc.)
 - quoted strings
 - embedded comments

Getting Started -
-----------------
Visit the classes :class:`ParserElement` and :class:`ParseResults` to
see the base classes that most other pyparsing classes inherit from.
Use the docstrings for examples of how to: - construct literal match expressions from :class:`Literal` and :class:`CaselessLiteral` classes - construct character word-group expressions using the :class:`Word` class - see how to create repetitive expressions using :class:`ZeroOrMore` and :class:`OneOrMore` classes - use :class:`'+'<And>`, :class:`'|'<MatchFirst>`, :class:`'^'<Or>`, and :class:`'&'<Each>` operators to combine simple expressions into more complex ones - associate names with your parsed results using :class:`ParserElement.setResultsName` - find some helpful expression short-cuts like :class:`delimitedList` and :class:`oneOf` - find more useful common expressions in the :class:`pyparsing_common` namespace class """ __version__ = "2.4.0" __versionTime__ = "07 Apr 2019 18:28 UTC" __author__ = "Paul McGuire <ptmcg@users.sourceforge.net>" import string from weakref import ref as wkref import copy import sys import warnings import re import sre_constants import collections import pprint import traceback import types from datetime import datetime try: # Python 3 from itertools import filterfalse except ImportError: from itertools import ifilterfalse as filterfalse try: from _thread import RLock except ImportError: from threading import RLock try: # Python 3 from collections.abc import Iterable from collections.abc import MutableMapping except ImportError: # Python 2.7 from collections import Iterable from collections import MutableMapping try: from collections import OrderedDict as _OrderedDict except ImportError: try: from ordereddict import OrderedDict as _OrderedDict except ImportError: _OrderedDict = None try: from types import SimpleNamespace except ImportError: class SimpleNamespace: pass # version compatibility configuration __compat__ = SimpleNamespace() __compat__.__doc__ = """ A cross-version compatibility configuration for pyparsing features that will be released in a future version. By setting values in this configuration to True, those features can be enabled in prior versions for compatibility development and testing. 
- collect_all_And_tokens - flag to enable fix for Issue #63 that fixes erroneous grouping of results names when an And expression is nested within an Or or MatchFirst; set to True to enable bugfix to be released in pyparsing 2.4 """ __compat__.collect_all_And_tokens = True #~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) ) __all__ = [ '__version__', '__versionTime__', '__author__', '__compat__', 'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty', 'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal', 'PrecededBy', 'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or', 'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException', 'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException', 'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter', 'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore', 'Char', 'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col', 'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString', 'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'hexnums', 'htmlComment', 'javaStyleComment', 'line', 'lineEnd', 'lineStart', 'lineno', 'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral', 'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables', 'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity', 'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd', 'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute', 'indentedBlock', 'originalTextFor', 'ungroup', 'infixNotation','locatedExpr', 'withClass', 'CloseMatch', 'tokenMap', 'pyparsing_common', 'pyparsing_unicode', 'unicode_set', ] system_version = tuple(sys.version_info)[:3] PY_3 = system_version[0] == 3 if PY_3: _MAX_INT = sys.maxsize basestring = str unichr = chr unicode = str _ustr = str # build list of single arg builtins, that can be used as parse actions singleArgBuiltins = [sum, len, sorted, reversed, list, tuple, set, any, all, min, max] else: _MAX_INT = sys.maxint range = xrange def _ustr(obj): """Drop-in replacement for str(obj) that tries to be Unicode friendly. It first tries str(obj). If that fails with a UnicodeEncodeError, then it tries unicode(obj). It then < returns the unicode object | encodes it with the default encoding | ... >. """ if isinstance(obj,unicode): return obj try: # If this works, then _ustr(obj) has the same behaviour as str(obj), so # it won't break any existing code. return str(obj) except UnicodeEncodeError: # Else encode it ret = unicode(obj).encode(sys.getdefaultencoding(), 'xmlcharrefreplace') xmlcharref = Regex(r'&#\d+;') xmlcharref.setParseAction(lambda t: '\\u' + hex(int(t[0][2:-1]))[2:]) return xmlcharref.transformString(ret) # build list of single arg builtins, tolerant of Python version, that can be used as parse actions singleArgBuiltins = [] import __builtin__ for fname in "sum len sorted reversed list tuple set any all min max".split(): try: singleArgBuiltins.append(getattr(__builtin__,fname)) except AttributeError: continue _generatorType = type((y for y in range(1))) def _xml_escape(data): """Escape &, <, >, ", ', etc. 
in a string of data.""" # ampersand must be replaced first from_symbols = '&><"\'' to_symbols = ('&'+s+';' for s in "amp gt lt quot apos".split()) for from_,to_ in zip(from_symbols, to_symbols): data = data.replace(from_, to_) return data alphas = string.ascii_uppercase + string.ascii_lowercase nums = "0123456789" hexnums = nums + "ABCDEFabcdef" alphanums = alphas + nums _bslash = chr(92) printables = "".join(c for c in string.printable if c not in string.whitespace) class ParseBaseException(Exception): """base exception class for all parsing runtime exceptions""" # Performance tuning: we construct a *lot* of these, so keep this # constructor as small and fast as possible def __init__( self, pstr, loc=0, msg=None, elem=None ): self.loc = loc if msg is None: self.msg = pstr self.pstr = "" else: self.msg = msg self.pstr = pstr self.parserElement = elem self.args = (pstr, loc, msg) @classmethod def _from_exception(cls, pe): """ internal factory method to simplify creating one type of ParseException from another - avoids having __init__ signature conflicts among subclasses """ return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement) def __getattr__( self, aname ): """supported attributes by name are: - lineno - returns the line number of the exception text - col - returns the column number of the exception text - line - returns the line containing the exception text """ if( aname == "lineno" ): return lineno( self.loc, self.pstr ) elif( aname in ("col", "column") ): return col( self.loc, self.pstr ) elif( aname == "line" ): return line( self.loc, self.pstr ) else: raise AttributeError(aname) def __str__( self ): return "%s (at char %d), (line:%d, col:%d)" % \ ( self.msg, self.loc, self.lineno, self.column ) def __repr__( self ): return _ustr(self) def markInputline( self, markerString = ">!<" ): """Extracts the exception line from the input string, and marks the location of the exception with a special symbol. """ line_str = self.line line_column = self.column - 1 if markerString: line_str = "".join((line_str[:line_column], markerString, line_str[line_column:])) return line_str.strip() def __dir__(self): return "lineno col line".split() + dir(type(self)) class ParseException(ParseBaseException): """ Exception thrown when parse expressions don't match class; supported attributes by name are: - lineno - returns the line number of the exception text - col - returns the column number of the exception text - line - returns the line containing the exception text Example:: try: Word(nums).setName("integer").parseString("ABC") except ParseException as pe: print(pe) print("column: {}".format(pe.col)) prints:: Expected integer (at char 0), (line:1, col:1) column: 1 """ @staticmethod def explain(exc, depth=16): """ Method to take an exception and translate the Python internal traceback into a list of the pyparsing expressions that caused the exception to be raised. Parameters: - exc - exception raised during parsing (need not be a ParseException, in support of Python exceptions that might be raised in a parse action) - depth (default=16) - number of levels back in the stack trace to list expression and function names; if None, the full stack trace names will be listed; if 0, only the failing input line, marker, and exception string will be shown Returns a multi-line string listing the ParserElements and/or function names in the exception's stack trace. Note: the diagnostic output will include string representations of the expressions that failed to parse. 
These representations will be more helpful if you use `setName` to give identifiable names to your expressions. Otherwise they will use the default string forms, which may be cryptic to read. explain() is only supported under Python 3. """ import inspect if depth is None: depth = sys.getrecursionlimit() ret = [] if isinstance(exc, ParseBaseException): ret.append(exc.line) ret.append(' ' * (exc.col - 1) + '^') ret.append("{0}: {1}".format(type(exc).__name__, exc)) if depth > 0: callers = inspect.getinnerframes(exc.__traceback__, context=depth) seen = set() for i, ff in enumerate(callers[-depth:]): frm = ff[0] f_self = frm.f_locals.get('self', None) if isinstance(f_self, ParserElement): if frm.f_code.co_name not in ('parseImpl', '_parseNoCache'): continue if f_self in seen: continue seen.add(f_self) self_type = type(f_self) ret.append("{0}.{1} - {2}".format(self_type.__module__, self_type.__name__, f_self)) elif f_self is not None: self_type = type(f_self) ret.append("{0}.{1}".format(self_type.__module__, self_type.__name__)) else: code = frm.f_code if code.co_name in ('wrapper', '<module>'): continue ret.append("{0}".format(code.co_name)) depth -= 1 if not depth: break return '\n'.join(ret) class ParseFatalException(ParseBaseException): """user-throwable exception thrown when inconsistent parse content is found; stops all parsing immediately""" pass class ParseSyntaxException(ParseFatalException): """just like :class:`ParseFatalException`, but thrown internally when an :class:`ErrorStop<And._ErrorStop>` ('-' operator) indicates that parsing is to stop immediately because an unbacktrackable syntax error has been found. """ pass #~ class ReparseException(ParseBaseException): #~ """Experimental class - parse actions can raise this exception to cause #~ pyparsing to reparse the input string: #~ - with a modified input string, and/or #~ - with a modified start location #~ Set the values of the ReparseException in the constructor, and raise the #~ exception in a parse action to cause pyparsing to use the new string/location. #~ Setting the values as None causes no change to be made. #~ """ #~ def __init_( self, newstring, restartLoc ): #~ self.newParseText = newstring #~ self.reparseLoc = restartLoc class RecursiveGrammarException(Exception): """exception thrown by :class:`ParserElement.validate` if the grammar could be improperly recursive """ def __init__( self, parseElementList ): self.parseElementTrace = parseElementList def __str__( self ): return "RecursiveGrammarException: %s" % self.parseElementTrace class _ParseResultsWithOffset(object): def __init__(self,p1,p2): self.tup = (p1,p2) def __getitem__(self,i): return self.tup[i] def __repr__(self): return repr(self.tup[0]) def setOffset(self,i): self.tup = (self.tup[0],i) class ParseResults(object): """Structured parse results, to provide multiple means of access to the parsed data: - as a list (``len(results)``) - by list index (``results[0], results[1]``, etc.) 
- by attribute (``results.<resultsName>`` - see :class:`ParserElement.setResultsName`) Example:: integer = Word(nums) date_str = (integer.setResultsName("year") + '/' + integer.setResultsName("month") + '/' + integer.setResultsName("day")) # equivalent form: # date_str = integer("year") + '/' + integer("month") + '/' + integer("day") # parseString returns a ParseResults object result = date_str.parseString("1999/12/31") def test(s, fn=repr): print("%s -> %s" % (s, fn(eval(s)))) test("list(result)") test("result[0]") test("result['month']") test("result.day") test("'month' in result") test("'minutes' in result") test("result.dump()", str) prints:: list(result) -> ['1999', '/', '12', '/', '31'] result[0] -> '1999' result['month'] -> '12' result.day -> '31' 'month' in result -> True 'minutes' in result -> False result.dump() -> ['1999', '/', '12', '/', '31'] - day: 31 - month: 12 - year: 1999 """ def __new__(cls, toklist=None, name=None, asList=True, modal=True ): if isinstance(toklist, cls): return toklist retobj = object.__new__(cls) retobj.__doinit = True return retobj # Performance tuning: we construct a *lot* of these, so keep this # constructor as small and fast as possible def __init__( self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance ): if self.__doinit: self.__doinit = False self.__name = None self.__parent = None self.__accumNames = {} self.__asList = asList self.__modal = modal if toklist is None: toklist = [] if isinstance(toklist, list): self.__toklist = toklist[:] elif isinstance(toklist, _generatorType): self.__toklist = list(toklist) else: self.__toklist = [toklist] self.__tokdict = dict() if name is not None and name: if not modal: self.__accumNames[name] = 0 if isinstance(name,int): name = _ustr(name) # will always return a str, but use _ustr for consistency self.__name = name if not (isinstance(toklist, (type(None), basestring, list)) and toklist in (None,'',[])): if isinstance(toklist,basestring): toklist = [ toklist ] if asList: if isinstance(toklist,ParseResults): self[name] = _ParseResultsWithOffset(ParseResults(toklist.__toklist), 0) else: self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0) self[name].__name = name else: try: self[name] = toklist[0] except (KeyError,TypeError,IndexError): self[name] = toklist def __getitem__( self, i ): if isinstance( i, (int,slice) ): return self.__toklist[i] else: if i not in self.__accumNames: return self.__tokdict[i][-1][0] else: return ParseResults([ v[0] for v in self.__tokdict[i] ]) def __setitem__( self, k, v, isinstance=isinstance ): if isinstance(v,_ParseResultsWithOffset): self.__tokdict[k] = self.__tokdict.get(k,list()) + [v] sub = v[0] elif isinstance(k,(int,slice)): self.__toklist[k] = v sub = v else: self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)] sub = v if isinstance(sub,ParseResults): sub.__parent = wkref(self) def __delitem__( self, i ): if isinstance(i,(int,slice)): mylen = len( self.__toklist ) del self.__toklist[i] # convert int to slice if isinstance(i, int): if i < 0: i += mylen i = slice(i, i+1) # get removed indices removed = list(range(*i.indices(mylen))) removed.reverse() # fixup indices in token dictionary for name,occurrences in self.__tokdict.items(): for j in removed: for k, (value, position) in enumerate(occurrences): occurrences[k] = _ParseResultsWithOffset(value, position - (position > j)) else: del self.__tokdict[i] def __contains__( self, k ): return k in self.__tokdict def __len__( self ): return len( self.__toklist ) def 
__bool__(self): return ( not not self.__toklist ) __nonzero__ = __bool__ def __iter__( self ): return iter( self.__toklist ) def __reversed__( self ): return iter( self.__toklist[::-1] ) def _iterkeys( self ): if hasattr(self.__tokdict, "iterkeys"): return self.__tokdict.iterkeys() else: return iter(self.__tokdict) def _itervalues( self ): return (self[k] for k in self._iterkeys()) def _iteritems( self ): return ((k, self[k]) for k in self._iterkeys()) if PY_3: keys = _iterkeys """Returns an iterator of all named result keys.""" values = _itervalues """Returns an iterator of all named result values.""" items = _iteritems """Returns an iterator of all named result key-value tuples.""" else: iterkeys = _iterkeys """Returns an iterator of all named result keys (Python 2.x only).""" itervalues = _itervalues """Returns an iterator of all named result values (Python 2.x only).""" iteritems = _iteritems """Returns an iterator of all named result key-value tuples (Python 2.x only).""" def keys( self ): """Returns all named result keys (as a list in Python 2.x, as an iterator in Python 3.x).""" return list(self.iterkeys()) def values( self ): """Returns all named result values (as a list in Python 2.x, as an iterator in Python 3.x).""" return list(self.itervalues()) def items( self ): """Returns all named result key-values (as a list of tuples in Python 2.x, as an iterator in Python 3.x).""" return list(self.iteritems()) def haskeys( self ): """Since keys() returns an iterator, this method is helpful in bypassing code that looks for the existence of any defined results names.""" return bool(self.__tokdict) def pop( self, *args, **kwargs): """ Removes and returns item at specified index (default= ``last``). Supports both ``list`` and ``dict`` semantics for ``pop()``. If passed no argument or an integer argument, it will use ``list`` semantics and pop tokens from the list of parsed tokens. If passed a non-integer argument (most likely a string), it will use ``dict`` semantics and pop the corresponding value from any defined results names. A second default return value argument is supported, just as in ``dict.pop()``. Example:: def remove_first(tokens): tokens.pop(0) print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] print(OneOrMore(Word(nums)).addParseAction(remove_first).parseString("0 123 321")) # -> ['123', '321'] label = Word(alphas) patt = label("LABEL") + OneOrMore(Word(nums)) print(patt.parseString("AAB 123 321").dump()) # Use pop() in a parse action to remove named result (note that corresponding value is not # removed from list form of results) def remove_LABEL(tokens): tokens.pop("LABEL") return tokens patt.addParseAction(remove_LABEL) print(patt.parseString("AAB 123 321").dump()) prints:: ['AAB', '123', '321'] - LABEL: AAB ['AAB', '123', '321'] """ if not args: args = [-1] for k,v in kwargs.items(): if k == 'default': args = (args[0], v) else: raise TypeError("pop() got an unexpected keyword argument '%s'" % k) if (isinstance(args[0], int) or len(args) == 1 or args[0] in self): index = args[0] ret = self[index] del self[index] return ret else: defaultvalue = args[1] return defaultvalue def get(self, key, defaultValue=None): """ Returns named result matching the given key, or if there is no such name, then returns the given ``defaultValue`` or ``None`` if no ``defaultValue`` is specified. Similar to ``dict.get()``. 
Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString("1999/12/31") print(result.get("year")) # -> '1999' print(result.get("hour", "not specified")) # -> 'not specified' print(result.get("hour")) # -> None """ if key in self: return self[key] else: return defaultValue def insert( self, index, insStr ): """ Inserts new element at location index in the list of parsed tokens. Similar to ``list.insert()``. Example:: print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] # use a parse action to insert the parse location in the front of the parsed results def insert_locn(locn, tokens): tokens.insert(0, locn) print(OneOrMore(Word(nums)).addParseAction(insert_locn).parseString("0 123 321")) # -> [0, '0', '123', '321'] """ self.__toklist.insert(index, insStr) # fixup indices in token dictionary for name,occurrences in self.__tokdict.items(): for k, (value, position) in enumerate(occurrences): occurrences[k] = _ParseResultsWithOffset(value, position + (position > index)) def append( self, item ): """ Add single element to end of ParseResults list of elements. Example:: print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] # use a parse action to compute the sum of the parsed integers, and add it to the end def append_sum(tokens): tokens.append(sum(map(int, tokens))) print(OneOrMore(Word(nums)).addParseAction(append_sum).parseString("0 123 321")) # -> ['0', '123', '321', 444] """ self.__toklist.append(item) def extend( self, itemseq ): """ Add sequence of elements to end of ParseResults list of elements. Example:: patt = OneOrMore(Word(alphas)) # use a parse action to append the reverse of the matched strings, to make a palindrome def make_palindrome(tokens): tokens.extend(reversed([t[::-1] for t in tokens])) return ''.join(tokens) print(patt.addParseAction(make_palindrome).parseString("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' """ if isinstance(itemseq, ParseResults): self.__iadd__(itemseq) else: self.__toklist.extend(itemseq) def clear( self ): """ Clear all elements and results names. 
""" del self.__toklist[:] self.__tokdict.clear() def __getattr__( self, name ): try: return self[name] except KeyError: return "" if name in self.__tokdict: if name not in self.__accumNames: return self.__tokdict[name][-1][0] else: return ParseResults([ v[0] for v in self.__tokdict[name] ]) else: return "" def __add__( self, other ): ret = self.copy() ret += other return ret def __iadd__( self, other ): if other.__tokdict: offset = len(self.__toklist) addoffset = lambda a: offset if a<0 else a+offset otheritems = other.__tokdict.items() otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) ) for (k,vlist) in otheritems for v in vlist] for k,v in otherdictitems: self[k] = v if isinstance(v[0],ParseResults): v[0].__parent = wkref(self) self.__toklist += other.__toklist self.__accumNames.update( other.__accumNames ) return self def __radd__(self, other): if isinstance(other,int) and other == 0: # useful for merging many ParseResults using sum() builtin return self.copy() else: # this may raise a TypeError - so be it return other + self def __repr__( self ): return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) ) def __str__( self ): return '[' + ', '.join(_ustr(i) if isinstance(i, ParseResults) else repr(i) for i in self.__toklist) + ']' def _asStringList( self, sep='' ): out = [] for item in self.__toklist: if out and sep: out.append(sep) if isinstance( item, ParseResults ): out += item._asStringList() else: out.append( _ustr(item) ) return out def asList( self ): """ Returns the parse results as a nested list of matching tokens, all converted to strings. Example:: patt = OneOrMore(Word(alphas)) result = patt.parseString("sldkj lsdkj sldkj") # even though the result prints in string-like form, it is actually a pyparsing ParseResults print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj'] # Use asList() to create an actual list result_list = result.asList() print(type(result_list), result_list) # -> <class 'list'> ['sldkj', 'lsdkj', 'sldkj'] """ return [res.asList() if isinstance(res,ParseResults) else res for res in self.__toklist] def asDict( self ): """ Returns the named parse results as a nested dictionary. Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString('12/31/1999') print(type(result), repr(result)) # -> <class 'pyparsing.ParseResults'> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) result_dict = result.asDict() print(type(result_dict), repr(result_dict)) # -> <class 'dict'> {'day': '1999', 'year': '12', 'month': '31'} # even though a ParseResults supports dict-like access, sometime you just need to have a dict import json print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable print(json.dumps(result.asDict())) # -> {"month": "31", "day": "1999", "year": "12"} """ if PY_3: item_fn = self.items else: item_fn = self.iteritems def toItem(obj): if isinstance(obj, ParseResults): if obj.haskeys(): return obj.asDict() else: return [toItem(v) for v in obj] else: return obj return dict((k,toItem(v)) for k,v in item_fn()) def copy( self ): """ Returns a new copy of a :class:`ParseResults` object. 
""" ret = ParseResults( self.__toklist ) ret.__tokdict = dict(self.__tokdict.items()) ret.__parent = self.__parent ret.__accumNames.update( self.__accumNames ) ret.__name = self.__name return ret def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ): """ (Deprecated) Returns the parse results as XML. Tags are created for tokens and lists that have defined results names. """ nl = "\n" out = [] namedItems = dict((v[1],k) for (k,vlist) in self.__tokdict.items() for v in vlist) nextLevelIndent = indent + " " # collapse out indents if formatting is not desired if not formatted: indent = "" nextLevelIndent = "" nl = "" selfTag = None if doctag is not None: selfTag = doctag else: if self.__name: selfTag = self.__name if not selfTag: if namedItemsOnly: return "" else: selfTag = "ITEM" out += [ nl, indent, "<", selfTag, ">" ] for i,res in enumerate(self.__toklist): if isinstance(res,ParseResults): if i in namedItems: out += [ res.asXML(namedItems[i], namedItemsOnly and doctag is None, nextLevelIndent, formatted)] else: out += [ res.asXML(None, namedItemsOnly and doctag is None, nextLevelIndent, formatted)] else: # individual token, see if there is a name for it resTag = None if i in namedItems: resTag = namedItems[i] if not resTag: if namedItemsOnly: continue else: resTag = "ITEM" xmlBodyText = _xml_escape(_ustr(res)) out += [ nl, nextLevelIndent, "<", resTag, ">", xmlBodyText, "</", resTag, ">" ] out += [ nl, indent, "</", selfTag, ">" ] return "".join(out) def __lookup(self,sub): for k,vlist in self.__tokdict.items(): for v,loc in vlist: if sub is v: return k return None def getName(self): r""" Returns the results name for this token expression. Useful when several different expressions might match at a particular location. Example:: integer = Word(nums) ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") house_number_expr = Suppress('#') + Word(nums, alphanums) user_data = (Group(house_number_expr)("house_number") | Group(ssn_expr)("ssn") | Group(integer)("age")) user_info = OneOrMore(user_data) result = user_info.parseString("22 111-22-3333 #221B") for item in result: print(item.getName(), ':', item[0]) prints:: age : 22 ssn : 111-22-3333 house_number : 221B """ if self.__name: return self.__name elif self.__parent: par = self.__parent() if par: return par.__lookup(self) else: return None elif (len(self) == 1 and len(self.__tokdict) == 1 and next(iter(self.__tokdict.values()))[0][1] in (0,-1)): return next(iter(self.__tokdict.keys())) else: return None def dump(self, indent='', depth=0, full=True): """ Diagnostic method for listing out the contents of a :class:`ParseResults`. Accepts an optional ``indent`` argument so that this string can be embedded in a nested display of other data. 
Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString('12/31/1999') print(result.dump()) prints:: ['12', '/', '31', '/', '1999'] - day: 1999 - month: 31 - year: 12 """ out = [] NL = '\n' out.append( indent+_ustr(self.asList()) ) if full: if self.haskeys(): items = sorted((str(k), v) for k,v in self.items()) for k,v in items: if out: out.append(NL) out.append( "%s%s- %s: " % (indent,(' '*depth), k) ) if isinstance(v,ParseResults): if v: out.append( v.dump(indent,depth+1) ) else: out.append(_ustr(v)) else: out.append(repr(v)) elif any(isinstance(vv,ParseResults) for vv in self): v = self for i,vv in enumerate(v): if isinstance(vv,ParseResults): out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),vv.dump(indent,depth+1) )) else: out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),_ustr(vv))) return "".join(out) def pprint(self, *args, **kwargs): """ Pretty-printer for parsed results as a list, using the `pprint <https://docs.python.org/3/library/pprint.html>`_ module. Accepts additional positional or keyword args as defined for `pprint.pprint <https://docs.python.org/3/library/pprint.html#pprint.pprint>`_ . Example:: ident = Word(alphas, alphanums) num = Word(nums) func = Forward() term = ident | num | Group('(' + func + ')') func <<= ident + Group(Optional(delimitedList(term))) result = func.parseString("fna a,b,(fnb c,d,200),100") result.pprint(width=40) prints:: ['fna', ['a', 'b', ['(', 'fnb', ['c', 'd', '200'], ')'], '100']] """ pprint.pprint(self.asList(), *args, **kwargs) # add support for pickle protocol def __getstate__(self): return ( self.__toklist, ( self.__tokdict.copy(), self.__parent is not None and self.__parent() or None, self.__accumNames, self.__name ) ) def __setstate__(self,state): self.__toklist = state[0] (self.__tokdict, par, inAccumNames, self.__name) = state[1] self.__accumNames = {} self.__accumNames.update(inAccumNames) if par is not None: self.__parent = wkref(par) else: self.__parent = None def __getnewargs__(self): return self.__toklist, self.__name, self.__asList, self.__modal def __dir__(self): return (dir(type(self)) + list(self.keys())) MutableMapping.register(ParseResults) def col (loc,strg): """Returns current column within a string, counting newlines as line separators. The first column is number 1. Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See :class:`ParserElement.parseString` for more information on parsing strings containing ``<TAB>`` s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string. """ s = strg return 1 if 0<loc<len(s) and s[loc-1] == '\n' else loc - s.rfind("\n", 0, loc) def lineno(loc,strg): """Returns current line number within a string, counting newlines as line separators. The first line is number 1. Note - the default parsing behavior is to expand tabs in the input string before starting the parsing process. See :class:`ParserElement.parseString` for more information on parsing strings containing ``<TAB>`` s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string. """ return strg.count("\n",0,loc) + 1 def line( loc, strg ): """Returns the line of text containing loc within a string, counting newlines as line separators. 
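
    Example (a minimal sketch; ``s`` is a hypothetical input, built with
    ``chr(10)`` to avoid embedding newline escapes in this docstring)::

        s = "abc" + chr(10) + "def" + chr(10) + "ghi"
        line(5, s)   # -> 'def'   (location 5 falls inside the second line)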
""" lastCR = strg.rfind("\n", 0, loc) nextCR = strg.find("\n", loc) if nextCR >= 0: return strg[lastCR+1:nextCR] else: return strg[lastCR+1:] def _defaultStartDebugAction( instring, loc, expr ): print (("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))) def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ): print ("Matched " + _ustr(expr) + " -> " + str(toks.asList())) def _defaultExceptionDebugAction( instring, loc, expr, exc ): print ("Exception raised:" + _ustr(exc)) def nullDebugAction(*args): """'Do-nothing' debug action, to suppress debugging output during parsing.""" pass # Only works on Python 3.x - nonlocal is toxic to Python 2 installs #~ 'decorator to trim function calls to match the arity of the target' #~ def _trim_arity(func, maxargs=3): #~ if func in singleArgBuiltins: #~ return lambda s,l,t: func(t) #~ limit = 0 #~ foundArity = False #~ def wrapper(*args): #~ nonlocal limit,foundArity #~ while 1: #~ try: #~ ret = func(*args[limit:]) #~ foundArity = True #~ return ret #~ except TypeError: #~ if limit == maxargs or foundArity: #~ raise #~ limit += 1 #~ continue #~ return wrapper # this version is Python 2.x-3.x cross-compatible 'decorator to trim function calls to match the arity of the target' def _trim_arity(func, maxargs=2): if func in singleArgBuiltins: return lambda s,l,t: func(t) limit = [0] foundArity = [False] # traceback return data structure changed in Py3.5 - normalize back to plain tuples if system_version[:2] >= (3,5): def extract_stack(limit=0): # special handling for Python 3.5.0 - extra deep call stack by 1 offset = -3 if system_version == (3,5,0) else -2 frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset] return [frame_summary[:2]] def extract_tb(tb, limit=0): frames = traceback.extract_tb(tb, limit=limit) frame_summary = frames[-1] return [frame_summary[:2]] else: extract_stack = traceback.extract_stack extract_tb = traceback.extract_tb # synthesize what would be returned by traceback.extract_stack at the call to # user's parse action 'func', so that we don't incur call penalty at parse time LINE_DIFF = 6 # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!! 
this_line = extract_stack(limit=2)[-1] pa_call_line_synth = (this_line[0], this_line[1]+LINE_DIFF) def wrapper(*args): while 1: try: ret = func(*args[limit[0]:]) foundArity[0] = True return ret except TypeError: # re-raise TypeErrors if they did not come from our arity testing if foundArity[0]: raise else: try: tb = sys.exc_info()[-1] if not extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth: raise finally: del tb if limit[0] <= maxargs: limit[0] += 1 continue raise # copy func name to wrapper for sensible debug output func_name = "<parse action>" try: func_name = getattr(func, '__name__', getattr(func, '__class__').__name__) except Exception: func_name = str(func) wrapper.__name__ = func_name return wrapper class ParserElement(object): """Abstract base level parser element class.""" DEFAULT_WHITE_CHARS = " \n\t\r" verbose_stacktrace = False @staticmethod def setDefaultWhitespaceChars( chars ): r""" Overrides the default whitespace chars Example:: # default whitespace chars are space, <TAB> and newline OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl'] # change to just treat newline as significant ParserElement.setDefaultWhitespaceChars(" \t") OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def'] """ ParserElement.DEFAULT_WHITE_CHARS = chars @staticmethod def inlineLiteralsUsing(cls): """ Set class to be used for inclusion of string literals into a parser. Example:: # default literal class used is Literal integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") date_str.parseString("1999/12/31") # -> ['1999', '/', '12', '/', '31'] # change to Suppress ParserElement.inlineLiteralsUsing(Suppress) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") date_str.parseString("1999/12/31") # -> ['1999', '12', '31'] """ ParserElement._literalStringClass = cls def __init__( self, savelist=False ): self.parseAction = list() self.failAction = None #~ self.name = "<unknown>" # don't define self.name, let subclasses try/except upcall self.strRepr = None self.resultsName = None self.saveAsList = savelist self.skipWhitespace = True self.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS) self.copyDefaultWhiteChars = True self.mayReturnEmpty = False # used when checking for left-recursion self.keepTabs = False self.ignoreExprs = list() self.debug = False self.streamlined = False self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index self.errmsg = "" self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all) self.debugActions = ( None, None, None ) #custom debug actions self.re = None self.callPreparse = True # used to avoid redundant calls to preParse self.callDuringTry = False def copy( self ): """ Make a copy of this :class:`ParserElement`. Useful for defining different parse actions for the same parsing pattern, using copies of the original parse element. 
Example:: integer = Word(nums).setParseAction(lambda toks: int(toks[0])) integerK = integer.copy().addParseAction(lambda toks: toks[0]*1024) + Suppress("K") integerM = integer.copy().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") print(OneOrMore(integerK | integerM | integer).parseString("5K 100 640K 256M")) prints:: [5120, 100, 655360, 268435456] Equivalent form of ``expr.copy()`` is just ``expr()``:: integerM = integer().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") """ cpy = copy.copy( self ) cpy.parseAction = self.parseAction[:] cpy.ignoreExprs = self.ignoreExprs[:] if self.copyDefaultWhiteChars: cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS return cpy def setName( self, name ): """ Define name for this expression, makes debugging and exception messages clearer. Example:: Word(nums).parseString("ABC") # -> Exception: Expected W:(0123...) (at char 0), (line:1, col:1) Word(nums).setName("integer").parseString("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1) """ self.name = name self.errmsg = "Expected " + self.name if hasattr(self,"exception"): self.exception.msg = self.errmsg return self def setResultsName( self, name, listAllMatches=False ): """ Define name for referencing matching tokens as a nested attribute of the returned parse results. NOTE: this returns a *copy* of the original :class:`ParserElement` object; this is so that the client can define a basic element, such as an integer, and reference it in multiple places with different names. You can also set results names using the abbreviated syntax, ``expr("name")`` in place of ``expr.setResultsName("name")`` - see :class:`__call__`. Example:: date_str = (integer.setResultsName("year") + '/' + integer.setResultsName("month") + '/' + integer.setResultsName("day")) # equivalent form: date_str = integer("year") + '/' + integer("month") + '/' + integer("day") """ newself = self.copy() if name.endswith("*"): name = name[:-1] listAllMatches=True newself.resultsName = name newself.modalResults = not listAllMatches return newself def setBreak(self,breakFlag = True): """Method to invoke the Python pdb debugger when this element is about to be parsed. Set ``breakFlag`` to True to enable, False to disable. """ if breakFlag: _parseMethod = self._parse def breaker(instring, loc, doActions=True, callPreParse=True): import pdb pdb.set_trace() return _parseMethod( instring, loc, doActions, callPreParse ) breaker._originalParseMethod = _parseMethod self._parse = breaker else: if hasattr(self._parse,"_originalParseMethod"): self._parse = self._parse._originalParseMethod return self def setParseAction( self, *fns, **kwargs ): """ Define one or more actions to perform when successfully matching parse element definition. Parse action fn is a callable method with 0-3 arguments, called as ``fn(s,loc,toks)`` , ``fn(loc,toks)`` , ``fn(toks)`` , or just ``fn()`` , where: - s = the original string being parsed (see note below) - loc = the location of the matching substring - toks = a list of the matched tokens, packaged as a :class:`ParseResults` object If the functions in fns modify the tokens, they can return them as the return value from fn, and the modified list of tokens will replace the original. Otherwise, fn does not need to return any value. 
        Optional keyword arguments:
        - callDuringTry = (default= ``False``) indicate if parse action should be run during lookaheads and alternate testing

        Note: the default parsing behavior is to expand tabs in the input string
        before starting the parsing process.  See :class:`parseString` for more
        information on parsing strings containing ``<TAB>`` s, and suggested
        methods to maintain a consistent view of the parsed string, the parse
        location, and line and column positions within the parsed string.

        Example::

            integer = Word(nums)
            date_str = integer + '/' + integer + '/' + integer

            date_str.parseString("1999/12/31")  # -> ['1999', '/', '12', '/', '31']

            # use parse action to convert to ints at parse time
            integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
            date_str = integer + '/' + integer + '/' + integer

            # note that integer fields are now ints, not strings
            date_str.parseString("1999/12/31")  # -> [1999, '/', 12, '/', 31]
        """
        self.parseAction = list(map(_trim_arity, list(fns)))
        self.callDuringTry = kwargs.get("callDuringTry", False)
        return self

    def addParseAction( self, *fns, **kwargs ):
        """
        Add one or more parse actions to expression's list of parse actions. See :class:`setParseAction`.

        See examples in :class:`copy`.
        """
        self.parseAction += list(map(_trim_arity, list(fns)))
        self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False)
        return self

    def addCondition(self, *fns, **kwargs):
        """Add a boolean predicate function to expression's list of parse actions. See
        :class:`setParseAction` for function call signatures. Unlike ``setParseAction``,
        functions passed to ``addCondition`` need to return boolean success/fail of the condition.

        Optional keyword arguments:
        - message = define a custom message to be used in the raised exception
        - fatal   = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise ParseException

        Example::

            integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
            year_int = integer.copy()
            year_int.addCondition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later")
            date_str = year_int + '/' + integer + '/' + integer

            result = date_str.parseString("1999/12/31")  # -> Exception: Only support years 2000 and later (at char 0), (line:1, col:1)
        """
        msg = kwargs.get("message", "failed user-defined condition")
        exc_type = ParseFatalException if kwargs.get("fatal", False) else ParseException
        for fn in fns:
            fn = _trim_arity(fn)
            # bind fn as a default argument, so that each wrapper keeps its own
            # condition function; a plain closure would late-bind the loop
            # variable and leave every wrapper testing only the last fn in fns
            def pa(s,l,t, fn=fn):
                if not bool(fn(s,l,t)):
                    raise exc_type(s,l,msg)
            self.parseAction.append(pa)
        self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False)
        return self

    def setFailAction( self, fn ):
        """Define action to perform if parsing fails at this expression.
        Fail action fn is a callable function that takes the arguments
        ``fn(s,loc,expr,err)`` where:
        - s = string being parsed
        - loc = location where expression match was attempted and failed
        - expr = the parse expression that failed
        - err = the exception thrown
        The function returns no value.
It may throw :class:`ParseFatalException` if it is desired to stop parsing immediately.""" self.failAction = fn return self def _skipIgnorables( self, instring, loc ): exprsFound = True while exprsFound: exprsFound = False for e in self.ignoreExprs: try: while 1: loc,dummy = e._parse( instring, loc ) exprsFound = True except ParseException: pass return loc def preParse( self, instring, loc ): if self.ignoreExprs: loc = self._skipIgnorables( instring, loc ) if self.skipWhitespace: wt = self.whiteChars instrlen = len(instring) while loc < instrlen and instring[loc] in wt: loc += 1 return loc def parseImpl( self, instring, loc, doActions=True ): return loc, [] def postParse( self, instring, loc, tokenlist ): return tokenlist #~ @profile def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ): debugging = ( self.debug ) #and doActions ) if debugging or self.failAction: #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) )) if (self.debugActions[0] ): self.debugActions[0]( instring, loc, self ) if callPreParse and self.callPreparse: preloc = self.preParse( instring, loc ) else: preloc = loc tokensStart = preloc try: try: loc,tokens = self.parseImpl( instring, preloc, doActions ) except IndexError: raise ParseException( instring, len(instring), self.errmsg, self ) except ParseBaseException as err: #~ print ("Exception raised:", err) if self.debugActions[2]: self.debugActions[2]( instring, tokensStart, self, err ) if self.failAction: self.failAction( instring, tokensStart, self, err ) raise else: if callPreParse and self.callPreparse: preloc = self.preParse( instring, loc ) else: preloc = loc tokensStart = preloc if self.mayIndexError or preloc >= len(instring): try: loc,tokens = self.parseImpl( instring, preloc, doActions ) except IndexError: raise ParseException( instring, len(instring), self.errmsg, self ) else: loc,tokens = self.parseImpl( instring, preloc, doActions ) tokens = self.postParse( instring, loc, tokens ) retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults ) if self.parseAction and (doActions or self.callDuringTry): if debugging: try: for fn in self.parseAction: try: tokens = fn( instring, tokensStart, retTokens ) except IndexError as parse_action_exc: exc = ParseException("exception raised in parse action") exc.__cause__ = parse_action_exc raise exc if tokens is not None and tokens is not retTokens: retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList and isinstance(tokens,(ParseResults,list)), modal=self.modalResults ) except ParseBaseException as err: #~ print "Exception raised in user parse action:", err if (self.debugActions[2] ): self.debugActions[2]( instring, tokensStart, self, err ) raise else: for fn in self.parseAction: try: tokens = fn( instring, tokensStart, retTokens ) except IndexError as parse_action_exc: exc = ParseException("exception raised in parse action") exc.__cause__ = parse_action_exc raise exc if tokens is not None and tokens is not retTokens: retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList and isinstance(tokens,(ParseResults,list)), modal=self.modalResults ) if debugging: #~ print ("Matched",self,"->",retTokens.asList()) if (self.debugActions[1] ): self.debugActions[1]( instring, tokensStart, loc, self, retTokens ) return loc, retTokens def tryParse( self, instring, loc ): try: return self._parse( instring, loc, doActions=False )[0] except ParseFatalException: raise ParseException( instring, loc, 
self.errmsg, self) def canParseNext(self, instring, loc): try: self.tryParse(instring, loc) except (ParseException, IndexError): return False else: return True class _UnboundedCache(object): def __init__(self): cache = {} self.not_in_cache = not_in_cache = object() def get(self, key): return cache.get(key, not_in_cache) def set(self, key, value): cache[key] = value def clear(self): cache.clear() def cache_len(self): return len(cache) self.get = types.MethodType(get, self) self.set = types.MethodType(set, self) self.clear = types.MethodType(clear, self) self.__len__ = types.MethodType(cache_len, self) if _OrderedDict is not None: class _FifoCache(object): def __init__(self, size): self.not_in_cache = not_in_cache = object() cache = _OrderedDict() def get(self, key): return cache.get(key, not_in_cache) def set(self, key, value): cache[key] = value while len(cache) > size: try: cache.popitem(False) except KeyError: pass def clear(self): cache.clear() def cache_len(self): return len(cache) self.get = types.MethodType(get, self) self.set = types.MethodType(set, self) self.clear = types.MethodType(clear, self) self.__len__ = types.MethodType(cache_len, self) else: class _FifoCache(object): def __init__(self, size): self.not_in_cache = not_in_cache = object() cache = {} key_fifo = collections.deque([], size) def get(self, key): return cache.get(key, not_in_cache) def set(self, key, value): cache[key] = value while len(key_fifo) > size: cache.pop(key_fifo.popleft(), None) key_fifo.append(key) def clear(self): cache.clear() key_fifo.clear() def cache_len(self): return len(cache) self.get = types.MethodType(get, self) self.set = types.MethodType(set, self) self.clear = types.MethodType(clear, self) self.__len__ = types.MethodType(cache_len, self) # argument cache for optimizing repeated calls when backtracking through recursive expressions packrat_cache = {} # this is set later by enabledPackrat(); this is here so that resetCache() doesn't fail packrat_cache_lock = RLock() packrat_cache_stats = [0, 0] # this method gets repeatedly called during backtracking with the same arguments - # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression def _parseCache( self, instring, loc, doActions=True, callPreParse=True ): HIT, MISS = 0, 1 lookup = (self, instring, loc, callPreParse, doActions) with ParserElement.packrat_cache_lock: cache = ParserElement.packrat_cache value = cache.get(lookup) if value is cache.not_in_cache: ParserElement.packrat_cache_stats[MISS] += 1 try: value = self._parseNoCache(instring, loc, doActions, callPreParse) except ParseBaseException as pe: # cache a copy of the exception, without the traceback cache.set(lookup, pe.__class__(*pe.args)) raise else: cache.set(lookup, (value[0], value[1].copy())) return value else: ParserElement.packrat_cache_stats[HIT] += 1 if isinstance(value, Exception): raise value return (value[0], value[1].copy()) _parse = _parseNoCache @staticmethod def resetCache(): ParserElement.packrat_cache.clear() ParserElement.packrat_cache_stats[:] = [0] * len(ParserElement.packrat_cache_stats) _packratEnabled = False @staticmethod def enablePackrat(cache_size_limit=128): """Enables "packrat" parsing, which adds memoizing to the parsing logic. Repeated parse attempts at the same string location (which happens often in many complex grammars) can immediately return a cached value, instead of re-executing parsing/validating code. Memoizing is done of both valid results and parsing exceptions. 
Parameters: - cache_size_limit - (default= ``128``) - if an integer value is provided will limit the size of the packrat cache; if None is passed, then the cache size will be unbounded; if 0 is passed, the cache will be effectively disabled. This speedup may break existing programs that use parse actions that have side-effects. For this reason, packrat parsing is disabled when you first import pyparsing. To activate the packrat feature, your program must call the class method :class:`ParserElement.enablePackrat`. For best results, call ``enablePackrat()`` immediately after importing pyparsing. Example:: from pip._vendor import pyparsing pyparsing.ParserElement.enablePackrat() """ if not ParserElement._packratEnabled: ParserElement._packratEnabled = True if cache_size_limit is None: ParserElement.packrat_cache = ParserElement._UnboundedCache() else: ParserElement.packrat_cache = ParserElement._FifoCache(cache_size_limit) ParserElement._parse = ParserElement._parseCache def parseString( self, instring, parseAll=False ): """ Execute the parse expression with the given string. This is the main interface to the client code, once the complete expression has been built. If you want the grammar to require that the entire input string be successfully parsed, then set ``parseAll`` to True (equivalent to ending the grammar with ``StringEnd()``). Note: ``parseString`` implicitly calls ``expandtabs()`` on the input string, in order to report proper column numbers in parse actions. If the input string contains tabs and the grammar uses parse actions that use the ``loc`` argument to index into the string being parsed, you can ensure you have a consistent view of the input string by: - calling ``parseWithTabs`` on your grammar before calling ``parseString`` (see :class:`parseWithTabs`) - define your parse action using the full ``(s,loc,toks)`` signature, and reference the input string using the parse action's ``s`` argument - explictly expand the tabs in your input string before calling ``parseString`` Example:: Word('a').parseString('aaaaabaaa') # -> ['aaaaa'] Word('a').parseString('aaaaabaaa', parseAll=True) # -> Exception: Expected end of text """ ParserElement.resetCache() if not self.streamlined: self.streamline() #~ self.saveAsList = True for e in self.ignoreExprs: e.streamline() if not self.keepTabs: instring = instring.expandtabs() try: loc, tokens = self._parse( instring, 0 ) if parseAll: loc = self.preParse( instring, loc ) se = Empty() + StringEnd() se._parse( instring, loc ) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc else: return tokens def scanString( self, instring, maxMatches=_MAX_INT, overlap=False ): """ Scan the input string for expression matches. Each match will return the matching tokens, start location, and end location. May be called with optional ``maxMatches`` argument, to clip scanning after 'n' matches are found. If ``overlap`` is specified, then overlapping matches will be reported. Note that the start and end locations are reported relative to the string being parsed. See :class:`parseString` for more information on parsing strings with embedded tabs. 
Example:: source = "sldjf123lsdjjkf345sldkjf879lkjsfd987" print(source) for tokens,start,end in Word(alphas).scanString(source): print(' '*start + '^'*(end-start)) print(' '*start + tokens[0]) prints:: sldjf123lsdjjkf345sldkjf879lkjsfd987 ^^^^^ sldjf ^^^^^^^ lsdjjkf ^^^^^^ sldkjf ^^^^^^ lkjsfd """ if not self.streamlined: self.streamline() for e in self.ignoreExprs: e.streamline() if not self.keepTabs: instring = _ustr(instring).expandtabs() instrlen = len(instring) loc = 0 preparseFn = self.preParse parseFn = self._parse ParserElement.resetCache() matches = 0 try: while loc <= instrlen and matches < maxMatches: try: preloc = preparseFn( instring, loc ) nextLoc,tokens = parseFn( instring, preloc, callPreParse=False ) except ParseException: loc = preloc+1 else: if nextLoc > loc: matches += 1 yield tokens, preloc, nextLoc if overlap: nextloc = preparseFn( instring, loc ) if nextloc > loc: loc = nextLoc else: loc += 1 else: loc = nextLoc else: loc = preloc+1 except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def transformString( self, instring ): """ Extension to :class:`scanString`, to modify matching text with modified tokens that may be returned from a parse action. To use ``transformString``, define a grammar and attach a parse action to it that modifies the returned token list. Invoking ``transformString()`` on a target string will then scan for matches, and replace the matched text patterns according to the logic in the parse action. ``transformString()`` returns the resulting transformed string. Example:: wd = Word(alphas) wd.setParseAction(lambda toks: toks[0].title()) print(wd.transformString("now is the winter of our discontent made glorious summer by this sun of york.")) prints:: Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York. """ out = [] lastE = 0 # force preservation of <TAB>s, to minimize unwanted transformation of string, and to # keep string locs straight between transformString and scanString self.keepTabs = True try: for t,s,e in self.scanString( instring ): out.append( instring[lastE:s] ) if t: if isinstance(t,ParseResults): out += t.asList() elif isinstance(t,list): out += t else: out.append(t) lastE = e out.append(instring[lastE:]) out = [o for o in out if o] return "".join(map(_ustr,_flatten(out))) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def searchString( self, instring, maxMatches=_MAX_INT ): """ Another extension to :class:`scanString`, simplifying the access to the tokens found to match the given parse expression. May be called with optional ``maxMatches`` argument, to clip searching after 'n' matches are found. 
Example:: # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters cap_word = Word(alphas.upper(), alphas.lower()) print(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity")) # the sum() builtin can be used to merge results into a single ParseResults object print(sum(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity"))) prints:: [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']] ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity'] """ try: return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ]) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def split(self, instring, maxsplit=_MAX_INT, includeSeparators=False): """ Generator method to split a string using the given expression as a separator. May be called with optional ``maxsplit`` argument, to limit the number of splits; and the optional ``includeSeparators`` argument (default= ``False``), if the separating matching text should be included in the split results. Example:: punc = oneOf(list(".,;:/-!?")) print(list(punc.split("This, this?, this sentence, is badly punctuated!"))) prints:: ['This', ' this', '', ' this sentence', ' is badly punctuated', ''] """ splits = 0 last = 0 for t,s,e in self.scanString(instring, maxMatches=maxsplit): yield instring[last:s] if includeSeparators: yield t[0] last = e yield instring[last:] def __add__(self, other ): """ Implementation of + operator - returns :class:`And`. Adding strings to a ParserElement converts them to :class:`Literal`s by default. Example:: greet = Word(alphas) + "," + Word(alphas) + "!" hello = "Hello, World!" print (hello, "->", greet.parseString(hello)) prints:: Hello, World! -> ['Hello', ',', 'World', '!'] """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return And( [ self, other ] ) def __radd__(self, other ): """ Implementation of + operator when left operand is not a :class:`ParserElement` """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other + self def __sub__(self, other): """ Implementation of - operator, returns :class:`And` with error stop """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return self + And._ErrorStop() + other def __rsub__(self, other ): """ Implementation of - operator when left operand is not a :class:`ParserElement` """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other - self def __mul__(self,other): """ Implementation of * operator, allows use of ``expr * 3`` in place of ``expr + expr + expr``. 
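
        Example (an illustrative sketch)::

            ab = Literal("ab")
            (ab * 3).parseString("ababab")   # -> ['ab', 'ab', 'ab']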
        Expressions may also be multiplied by a 2-integer tuple, similar to
        ``{min,max}`` multipliers in regular expressions.  Tuples may also
        include ``None`` as in:
        - ``expr*(n,None)`` or ``expr*(n,)`` is equivalent
          to ``expr*n + ZeroOrMore(expr)``
          (read as "at least n instances of ``expr``")
        - ``expr*(None,n)`` is equivalent to ``expr*(0,n)``
          (read as "0 to n instances of ``expr``")
        - ``expr*(None,None)`` is equivalent to ``ZeroOrMore(expr)``
        - ``expr*(1,None)`` is equivalent to ``OneOrMore(expr)``

        Note that ``expr*(None,n)`` does not raise an exception if
        more than n exprs exist in the input stream; that is,
        ``expr*(None,n)`` does not enforce a maximum number of expr
        occurrences.  If this behavior is desired, then write
        ``expr*(None,n) + ~expr``
        """
        if isinstance(other,int):
            minElements, optElements = other,0
        elif isinstance(other,tuple):
            other = (other + (None, None))[:2]
            if other[0] is None:
                other = (0, other[1])
            if isinstance(other[0],int) and other[1] is None:
                if other[0] == 0:
                    return ZeroOrMore(self)
                if other[0] == 1:
                    return OneOrMore(self)
                else:
                    return self*other[0] + ZeroOrMore(self)
            elif isinstance(other[0],int) and isinstance(other[1],int):
                minElements, optElements = other
                optElements -= minElements
            else:
                # use % formatting so the offending types appear in the message,
                # instead of being passed as extra exception args
                raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects" % (type(other[0]), type(other[1])))
        else:
            raise TypeError("cannot multiply 'ParserElement' and '%s' objects" % type(other))

        if minElements < 0:
            raise ValueError("cannot multiply ParserElement by negative value")
        if optElements < 0:
            raise ValueError("second tuple value must be greater or equal to first tuple value")
        if minElements == optElements == 0:
            raise ValueError("cannot multiply ParserElement by 0 or (0,0)")

        if (optElements):
            def makeOptionalList(n):
                if n>1:
                    return Optional(self + makeOptionalList(n-1))
                else:
                    return Optional(self)
            if minElements:
                if minElements == 1:
                    ret = self + makeOptionalList(optElements)
                else:
                    ret = And([self]*minElements) + makeOptionalList(optElements)
            else:
                ret = makeOptionalList(optElements)
        else:
            if minElements == 1:
                ret = self
            else:
                ret = And([self]*minElements)
        return ret

    def __rmul__(self, other):
        return self.__mul__(other)

    def __or__(self, other ):
        """
        Implementation of | operator - returns :class:`MatchFirst`
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                          SyntaxWarning, stacklevel=2)
            return None
        return MatchFirst( [ self, other ] )

    def __ror__(self, other ):
        """
        Implementation of | operator when left operand is not a :class:`ParserElement`
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                          SyntaxWarning, stacklevel=2)
            return None
        return other | self

    def __xor__(self, other ):
        """
        Implementation of ^ operator - returns :class:`Or`
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                          SyntaxWarning, stacklevel=2)
            return None
        return Or( [ self, other ] )

    def __rxor__(self, other ):
        """
        Implementation of ^ operator when left operand is not a :class:`ParserElement`
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement
): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other ^ self def __and__(self, other ): """ Implementation of & operator - returns :class:`Each` """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return Each( [ self, other ] ) def __rand__(self, other ): """ Implementation of & operator when left operand is not a :class:`ParserElement` """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other & self def __invert__( self ): """ Implementation of ~ operator - returns :class:`NotAny` """ return NotAny( self ) def __call__(self, name=None): """ Shortcut for :class:`setResultsName`, with ``listAllMatches=False``. If ``name`` is given with a trailing ``'*'`` character, then ``listAllMatches`` will be passed as ``True``. If ``name` is omitted, same as calling :class:`copy`. Example:: # these are equivalent userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno") userdata = Word(alphas)("name") + Word(nums+"-")("socsecno") """ if name is not None: return self.setResultsName(name) else: return self.copy() def suppress( self ): """ Suppresses the output of this :class:`ParserElement`; useful to keep punctuation from cluttering up returned output. """ return Suppress( self ) def leaveWhitespace( self ): """ Disables the skipping of whitespace before matching the characters in the :class:`ParserElement`'s defined pattern. This is normally only used internally by the pyparsing module, but may be needed in some whitespace-sensitive grammars. """ self.skipWhitespace = False return self def setWhitespaceChars( self, chars ): """ Overrides the default whitespace chars """ self.skipWhitespace = True self.whiteChars = chars self.copyDefaultWhiteChars = False return self def parseWithTabs( self ): """ Overrides default behavior to expand ``<TAB>``s to spaces before parsing the input string. Must be called before ``parseString`` when the input grammar contains elements that match ``<TAB>`` characters. """ self.keepTabs = True return self def ignore( self, other ): """ Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns. Example:: patt = OneOrMore(Word(alphas)) patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj'] patt.ignore(cStyleComment) patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj', 'lskjd'] """ if isinstance(other, basestring): other = Suppress(other) if isinstance( other, Suppress ): if other not in self.ignoreExprs: self.ignoreExprs.append(other) else: self.ignoreExprs.append( Suppress( other.copy() ) ) return self def setDebugActions( self, startAction, successAction, exceptionAction ): """ Enable display of debugging messages while doing pattern matching. """ self.debugActions = (startAction or _defaultStartDebugAction, successAction or _defaultSuccessDebugAction, exceptionAction or _defaultExceptionDebugAction) self.debug = True return self def setDebug( self, flag=True ): """ Enable display of debugging messages while doing pattern matching. 
Set ``flag`` to True to enable, False to disable. Example:: wd = Word(alphas).setName("alphaword") integer = Word(nums).setName("numword") term = wd | integer # turn on debugging for wd wd.setDebug() OneOrMore(term).parseString("abc 123 xyz 890") prints:: Match alphaword at loc 0(1,1) Matched alphaword -> ['abc'] Match alphaword at loc 3(1,4) Exception raised:Expected alphaword (at char 4), (line:1, col:5) Match alphaword at loc 7(1,8) Matched alphaword -> ['xyz'] Match alphaword at loc 11(1,12) Exception raised:Expected alphaword (at char 12), (line:1, col:13) Match alphaword at loc 15(1,16) Exception raised:Expected alphaword (at char 15), (line:1, col:16) The output shown is that produced by the default debug actions - custom debug actions can be specified using :class:`setDebugActions`. Prior to attempting to match the ``wd`` expression, the debugging message ``"Match <exprname> at loc <n>(<line>,<col>)"`` is shown. Then if the parse succeeds, a ``"Matched"`` message is shown, or an ``"Exception raised"`` message is shown. Also note the use of :class:`setName` to assign a human-readable name to the expression, which makes debugging and exception messages easier to understand - for instance, the default name created for the :class:`Word` expression without calling ``setName`` is ``"W:(ABCD...)"``. """ if flag: self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction ) else: self.debug = False return self def __str__( self ): return self.name def __repr__( self ): return _ustr(self) def streamline( self ): self.streamlined = True self.strRepr = None return self def checkRecursion( self, parseElementList ): pass def validate( self, validateTrace=[] ): """ Check defined expressions for valid structure, check for infinite recursive definitions. """ self.checkRecursion( [] ) def parseFile( self, file_or_filename, parseAll=False ): """ Execute the parse expression on the given file or filename. If a filename is specified (instead of a file object), the entire file is opened, read, and closed before parsing. """ try: file_contents = file_or_filename.read() except AttributeError: with open(file_or_filename, "r") as f: file_contents = f.read() try: return self.parseString(file_contents, parseAll) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def __eq__(self,other): if isinstance(other, ParserElement): return self is other or vars(self) == vars(other) elif isinstance(other, basestring): return self.matches(other) else: return super(ParserElement,self)==other def __ne__(self,other): return not (self == other) def __hash__(self): return hash(id(self)) def __req__(self,other): return self == other def __rne__(self,other): return not (self == other) def matches(self, testString, parseAll=True): """ Method for quick testing of a parser against a test string. Good for simple inline microtests of sub expressions while building up larger parser. 
Parameters: - testString - to test against this expression for a match - parseAll - (default= ``True``) - flag to pass to :class:`parseString` when running tests Example:: expr = Word(nums) assert expr.matches("100") """ try: self.parseString(_ustr(testString), parseAll=parseAll) return True except ParseBaseException: return False def runTests(self, tests, parseAll=True, comment='#', fullDump=True, printResults=True, failureTests=False, postParse=None): """ Execute the parse expression on a series of test strings, showing each test, the parsed results or where the parse failed. Quick and easy way to run a parse expression against a list of sample strings. Parameters: - tests - a list of separate test strings, or a multiline string of test strings - parseAll - (default= ``True``) - flag to pass to :class:`parseString` when running tests - comment - (default= ``'#'``) - expression for indicating embedded comments in the test string; pass None to disable comment filtering - fullDump - (default= ``True``) - dump results as list followed by results names in nested outline; if False, only dump nested list - printResults - (default= ``True``) prints test output to stdout - failureTests - (default= ``False``) indicates if these tests are expected to fail parsing - postParse - (default= ``None``) optional callback for successful parse results; called as `fn(test_string, parse_results)` and returns a string to be added to the test output Returns: a (success, results) tuple, where success indicates that all tests succeeded (or failed if ``failureTests`` is True), and the results contain a list of lines of each test's output Example:: number_expr = pyparsing_common.number.copy() result = number_expr.runTests(''' # unsigned integer 100 # negative integer -100 # float with scientific notation 6.02e23 # integer with scientific notation 1e-12 ''') print("Success" if result[0] else "Failed!") result = number_expr.runTests(''' # stray character 100Z # missing leading digit before '.' -.100 # too many '.' 3.14.159 ''', failureTests=True) print("Success" if result[0] else "Failed!") prints:: # unsigned integer 100 [100] # negative integer -100 [-100] # float with scientific notation 6.02e23 [6.02e+23] # integer with scientific notation 1e-12 [1e-12] Success # stray character 100Z ^ FAIL: Expected end of text (at char 3), (line:1, col:4) # missing leading digit before '.' -.100 ^ FAIL: Expected {real number with scientific notation | real number | signed integer} (at char 0), (line:1, col:1) # too many '.' 3.14.159 ^ FAIL: Expected end of text (at char 4), (line:1, col:5) Success Each test string must be on a single line. If you want to test a string that spans multiple lines, create a test like this:: expr.runTest(r"this is a test\\n of strings that spans \\n 3 lines") (Note that this is a raw string literal, you must include the leading 'r'.) 
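
        A minimal sketch of a ``postParse`` callback, reusing ``number_expr``
        from the example above (the callback name here is hypothetical)::

            def show_type(test_string, result):
                # return a string to be appended to this test's output
                return "parsed value type: %s" % type(result[0]).__name__

            number_expr.runTests('''
                100
                6.02e23
                ''', postParse=show_type)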
""" if isinstance(tests, basestring): tests = list(map(str.strip, tests.rstrip().splitlines())) if isinstance(comment, basestring): comment = Literal(comment) allResults = [] comments = [] success = True for t in tests: if comment is not None and comment.matches(t, False) or comments and not t: comments.append(t) continue if not t: continue out = ['\n'.join(comments), t] comments = [] try: # convert newline marks to actual newlines, and strip leading BOM if present NL = Literal(r'\n').addParseAction(replaceWith('\n')).ignore(quotedString) BOM = '\ufeff' t = NL.transformString(t.lstrip(BOM)) result = self.parseString(t, parseAll=parseAll) out.append(result.dump(full=fullDump)) success = success and not failureTests if postParse is not None: try: pp_value = postParse(t, result) if pp_value is not None: out.append(str(pp_value)) except Exception as e: out.append("{0} failed: {1}: {2}".format(postParse.__name__, type(e).__name__, e)) except ParseBaseException as pe: fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else "" if '\n' in t: out.append(line(pe.loc, t)) out.append(' '*(col(pe.loc,t)-1) + '^' + fatal) else: out.append(' '*pe.loc + '^' + fatal) out.append("FAIL: " + str(pe)) success = success and failureTests result = pe except Exception as exc: out.append("FAIL-EXCEPTION: " + str(exc)) success = success and failureTests result = exc if printResults: if fullDump: out.append('') print('\n'.join(out)) allResults.append((t, result)) return success, allResults class Token(ParserElement): """Abstract :class:`ParserElement` subclass, for defining atomic matching patterns. """ def __init__( self ): super(Token,self).__init__( savelist=False ) class Empty(Token): """An empty token, will always match. """ def __init__( self ): super(Empty,self).__init__() self.name = "Empty" self.mayReturnEmpty = True self.mayIndexError = False class NoMatch(Token): """A token that will never match. """ def __init__( self ): super(NoMatch,self).__init__() self.name = "NoMatch" self.mayReturnEmpty = True self.mayIndexError = False self.errmsg = "Unmatchable token" def parseImpl( self, instring, loc, doActions=True ): raise ParseException(instring, loc, self.errmsg, self) class Literal(Token): """Token to exactly match a specified string. Example:: Literal('blah').parseString('blah') # -> ['blah'] Literal('blah').parseString('blahfooblah') # -> ['blah'] Literal('blah').parseString('bla') # -> Exception: Expected "blah" For case-insensitive matching, use :class:`CaselessLiteral`. For keyword matching (force word break before and after the matched string), use :class:`Keyword` or :class:`CaselessKeyword`. 
""" def __init__( self, matchString ): super(Literal,self).__init__() self.match = matchString self.matchLen = len(matchString) try: self.firstMatchChar = matchString[0] except IndexError: warnings.warn("null string passed to Literal; use Empty() instead", SyntaxWarning, stacklevel=2) self.__class__ = Empty self.name = '"%s"' % _ustr(self.match) self.errmsg = "Expected " + self.name self.mayReturnEmpty = False self.mayIndexError = False # Performance tuning: this routine gets called a *lot* # if this is a single character match string and the first character matches, # short-circuit as quickly as possible, and avoid calling startswith #~ @profile def parseImpl( self, instring, loc, doActions=True ): if (instring[loc] == self.firstMatchChar and (self.matchLen==1 or instring.startswith(self.match,loc)) ): return loc+self.matchLen, self.match raise ParseException(instring, loc, self.errmsg, self) _L = Literal ParserElement._literalStringClass = Literal class Keyword(Token): """Token to exactly match a specified string as a keyword, that is, it must be immediately followed by a non-keyword character. Compare with :class:`Literal`: - ``Literal("if")`` will match the leading ``'if'`` in ``'ifAndOnlyIf'``. - ``Keyword("if")`` will not; it will only match the leading ``'if'`` in ``'if x=1'``, or ``'if(y==2)'`` Accepts two optional constructor arguments in addition to the keyword string: - ``identChars`` is a string of characters that would be valid identifier characters, defaulting to all alphanumerics + "_" and "$" - ``caseless`` allows case-insensitive matching, default is ``False``. Example:: Keyword("start").parseString("start") # -> ['start'] Keyword("start").parseString("starting") # -> Exception For case-insensitive matching, use :class:`CaselessKeyword`. """ DEFAULT_KEYWORD_CHARS = alphanums+"_$" def __init__( self, matchString, identChars=None, caseless=False ): super(Keyword,self).__init__() if identChars is None: identChars = Keyword.DEFAULT_KEYWORD_CHARS self.match = matchString self.matchLen = len(matchString) try: self.firstMatchChar = matchString[0] except IndexError: warnings.warn("null string passed to Keyword; use Empty() instead", SyntaxWarning, stacklevel=2) self.name = '"%s"' % self.match self.errmsg = "Expected " + self.name self.mayReturnEmpty = False self.mayIndexError = False self.caseless = caseless if caseless: self.caselessmatch = matchString.upper() identChars = identChars.upper() self.identChars = set(identChars) def parseImpl( self, instring, loc, doActions=True ): if self.caseless: if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and (loc == 0 or instring[loc-1].upper() not in self.identChars) ): return loc+self.matchLen, self.match else: if (instring[loc] == self.firstMatchChar and (self.matchLen==1 or instring.startswith(self.match,loc)) and (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and (loc == 0 or instring[loc-1] not in self.identChars) ): return loc+self.matchLen, self.match raise ParseException(instring, loc, self.errmsg, self) def copy(self): c = super(Keyword,self).copy() c.identChars = Keyword.DEFAULT_KEYWORD_CHARS return c @staticmethod def setDefaultKeywordChars( chars ): """Overrides the default Keyword chars """ Keyword.DEFAULT_KEYWORD_CHARS = chars class CaselessLiteral(Literal): """Token to match a specified string, ignoring case of letters. 
Note: the matched results will always be in the case of the given match string, NOT the case of the input text. Example:: OneOrMore(CaselessLiteral("CMD")).parseString("cmd CMD Cmd10") # -> ['CMD', 'CMD', 'CMD'] (Contrast with example for :class:`CaselessKeyword`.) """ def __init__( self, matchString ): super(CaselessLiteral,self).__init__( matchString.upper() ) # Preserve the defining literal. self.returnString = matchString self.name = "'%s'" % self.returnString self.errmsg = "Expected " + self.name def parseImpl( self, instring, loc, doActions=True ): if instring[ loc:loc+self.matchLen ].upper() == self.match: return loc+self.matchLen, self.returnString raise ParseException(instring, loc, self.errmsg, self) class CaselessKeyword(Keyword): """ Caseless version of :class:`Keyword`. Example:: OneOrMore(CaselessKeyword("CMD")).parseString("cmd CMD Cmd10") # -> ['CMD', 'CMD'] (Contrast with example for :class:`CaselessLiteral`.) """ def __init__( self, matchString, identChars=None ): super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True ) class CloseMatch(Token): """A variation on :class:`Literal` which matches "close" matches, that is, strings with at most 'n' mismatching characters. :class:`CloseMatch` takes parameters: - ``match_string`` - string to be matched - ``maxMismatches`` - (``default=1``) maximum number of mismatches allowed to count as a match The results from a successful parse will contain the matched text from the input string and the following named results: - ``mismatches`` - a list of the positions within the match_string where mismatches were found - ``original`` - the original match_string used to compare against the input string If ``mismatches`` is an empty list, then the match was an exact match. Example:: patt = CloseMatch("ATCATCGAATGGA") patt.parseString("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']}) patt.parseString("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1) # exact match patt.parseString("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']}) # close match allowing up to 2 mismatches patt = CloseMatch("ATCATCGAATGGA", maxMismatches=2) patt.parseString("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']}) """ def __init__(self, match_string, maxMismatches=1): super(CloseMatch,self).__init__() self.name = match_string self.match_string = match_string self.maxMismatches = maxMismatches self.errmsg = "Expected %r (with up to %d mismatches)" % (self.match_string, self.maxMismatches) self.mayIndexError = False self.mayReturnEmpty = False def parseImpl( self, instring, loc, doActions=True ): start = loc instrlen = len(instring) maxloc = start + len(self.match_string) if maxloc <= instrlen: match_string = self.match_string match_stringloc = 0 mismatches = [] maxMismatches = self.maxMismatches for match_stringloc,s_m in enumerate(zip(instring[loc:maxloc], self.match_string)): src,mat = s_m if src != mat: mismatches.append(match_stringloc) if len(mismatches) > maxMismatches: break else: loc = match_stringloc + 1 results = ParseResults([instring[start:loc]]) results['original'] = self.match_string results['mismatches'] = mismatches return loc, results raise ParseException(instring, loc, self.errmsg, self) class Word(Token): """Token for matching words composed of allowed character sets. 
Defined with string containing all allowed initial characters, an optional string containing allowed body characters (if omitted, defaults to the initial character set), and an optional minimum, maximum, and/or exact length. The default value for ``min`` is 1 (a minimum value < 1 is not valid); the default values for ``max`` and ``exact`` are 0, meaning no maximum or exact length restriction. An optional ``excludeChars`` parameter can list characters that might be found in the input ``bodyChars`` string; useful to define a word of all printables except for one or two characters, for instance. :class:`srange` is useful for defining custom character set strings for defining ``Word`` expressions, using range notation from regular expression character sets. A common mistake is to use :class:`Word` to match a specific literal string, as in ``Word("Address")``. Remember that :class:`Word` uses the string argument to define *sets* of matchable characters. This expression would match "Add", "AAA", "dAred", or any other word made up of the characters 'A', 'd', 'r', 'e', and 's'. To match an exact literal string, use :class:`Literal` or :class:`Keyword`. pyparsing includes helper strings for building Words: - :class:`alphas` - :class:`nums` - :class:`alphanums` - :class:`hexnums` - :class:`alphas8bit` (alphabetic characters in ASCII range 128-255 - accented, tilded, umlauted, etc.) - :class:`punc8bit` (non-alphabetic characters in ASCII range 128-255 - currency, symbols, superscripts, diacriticals, etc.) - :class:`printables` (any non-whitespace character) Example:: # a word composed of digits integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9")) # a word with a leading capital, and zero or more lowercase capital_word = Word(alphas.upper(), alphas.lower()) # hostnames are alphanumeric, with leading alpha, and '-' hostname = Word(alphas, alphanums+'-') # roman numeral (not a strict parser, accepts invalid mix of characters) roman = Word("IVXLCDM") # any string of non-whitespace characters, except for ',' csv_value = Word(printables, excludeChars=",") """ def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False, excludeChars=None ): super(Word,self).__init__() if excludeChars: excludeChars = set(excludeChars) initChars = ''.join(c for c in initChars if c not in excludeChars) if bodyChars: bodyChars = ''.join(c for c in bodyChars if c not in excludeChars) self.initCharsOrig = initChars self.initChars = set(initChars) if bodyChars : self.bodyCharsOrig = bodyChars self.bodyChars = set(bodyChars) else: self.bodyCharsOrig = initChars self.bodyChars = set(initChars) self.maxSpecified = max > 0 if min < 1: raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted") self.minLen = min if max > 0: self.maxLen = max else: self.maxLen = _MAX_INT if exact > 0: self.maxLen = exact self.minLen = exact self.name = _ustr(self) self.errmsg = "Expected " + self.name self.mayIndexError = False self.asKeyword = asKeyword if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0): if self.bodyCharsOrig == self.initCharsOrig: self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig) elif len(self.initCharsOrig) == 1: self.reString = "%s[%s]*" % \ (re.escape(self.initCharsOrig), _escapeRegexRangeChars(self.bodyCharsOrig),) else: self.reString = "[%s][%s]*" % \ (_escapeRegexRangeChars(self.initCharsOrig), _escapeRegexRangeChars(self.bodyCharsOrig),) if self.asKeyword: self.reString 
= r"\b"+self.reString+r"\b" try: self.re = re.compile( self.reString ) except Exception: self.re = None def parseImpl( self, instring, loc, doActions=True ): if self.re: result = self.re.match(instring,loc) if not result: raise ParseException(instring, loc, self.errmsg, self) loc = result.end() return loc, result.group() if instring[loc] not in self.initChars: raise ParseException(instring, loc, self.errmsg, self) start = loc loc += 1 instrlen = len(instring) bodychars = self.bodyChars maxloc = start + self.maxLen maxloc = min( maxloc, instrlen ) while loc < maxloc and instring[loc] in bodychars: loc += 1 throwException = False if loc - start < self.minLen: throwException = True elif self.maxSpecified and loc < instrlen and instring[loc] in bodychars: throwException = True elif self.asKeyword: if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars): throwException = True if throwException: raise ParseException(instring, loc, self.errmsg, self) return loc, instring[start:loc] def __str__( self ): try: return super(Word,self).__str__() except Exception: pass if self.strRepr is None: def charsAsStr(s): if len(s)>4: return s[:4]+"..." else: return s if ( self.initCharsOrig != self.bodyCharsOrig ): self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) ) else: self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig) return self.strRepr class Char(Word): """A short-cut class for defining ``Word(characters, exact=1)``, when defining a match of any single character in a string of characters. """ def __init__(self, charset, asKeyword=False, excludeChars=None): super(Char, self).__init__(charset, exact=1, asKeyword=asKeyword, excludeChars=excludeChars) self.reString = "[%s]" % _escapeRegexRangeChars(self.initCharsOrig) self.re = re.compile( self.reString ) class Regex(Token): r"""Token for matching strings that match a given regular expression. Defined with string specifying the regular expression in a form recognized by the stdlib Python `re module <https://docs.python.org/3/library/re.html>`_. If the given regex contains named groups (defined using ``(?P<name>...)``), these will be preserved as named parse results. Example:: realnum = Regex(r"[+-]?\d+\.\d*") date = Regex(r'(?P<year>\d{4})-(?P<month>\d\d?)-(?P<day>\d\d?)') # ref: https://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression roman = Regex(r"M{0,4}(CM|CD|D?{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})") """ compiledREtype = type(re.compile("[A-Z]")) def __init__( self, pattern, flags=0, asGroupList=False, asMatch=False): """The parameters ``pattern`` and ``flags`` are passed to the ``re.compile()`` function as-is. See the Python `re module <https://docs.python.org/3/library/re.html>`_ module for an explanation of the acceptable patterns and flags. 
""" super(Regex,self).__init__() if isinstance(pattern, basestring): if not pattern: warnings.warn("null string passed to Regex; use Empty() instead", SyntaxWarning, stacklevel=2) self.pattern = pattern self.flags = flags try: self.re = re.compile(self.pattern, self.flags) self.reString = self.pattern except sre_constants.error: warnings.warn("invalid pattern (%s) passed to Regex" % pattern, SyntaxWarning, stacklevel=2) raise elif isinstance(pattern, Regex.compiledREtype): self.re = pattern self.pattern = \ self.reString = str(pattern) self.flags = flags else: raise ValueError("Regex may only be constructed with a string or a compiled RE object") self.name = _ustr(self) self.errmsg = "Expected " + self.name self.mayIndexError = False self.mayReturnEmpty = True self.asGroupList = asGroupList self.asMatch = asMatch if self.asGroupList: self.parseImpl = self.parseImplAsGroupList if self.asMatch: self.parseImpl = self.parseImplAsMatch def parseImpl(self, instring, loc, doActions=True): result = self.re.match(instring,loc) if not result: raise ParseException(instring, loc, self.errmsg, self) loc = result.end() ret = ParseResults(result.group()) d = result.groupdict() if d: for k, v in d.items(): ret[k] = v return loc, ret def parseImplAsGroupList(self, instring, loc, doActions=True): result = self.re.match(instring,loc) if not result: raise ParseException(instring, loc, self.errmsg, self) loc = result.end() ret = result.groups() return loc, ret def parseImplAsMatch(self, instring, loc, doActions=True): result = self.re.match(instring,loc) if not result: raise ParseException(instring, loc, self.errmsg, self) loc = result.end() ret = result return loc, ret def __str__( self ): try: return super(Regex,self).__str__() except Exception: pass if self.strRepr is None: self.strRepr = "Re:(%s)" % repr(self.pattern) return self.strRepr def sub(self, repl): r""" Return Regex with an attached parse action to transform the parsed result as if called using `re.sub(expr, repl, string) <https://docs.python.org/3/library/re.html#re.sub>`_. Example:: make_html = Regex(r"(\w+):(.*?):").sub(r"<\1>\2</\1>") print(make_html.transformString("h1:main title:")) # prints "<h1>main title</h1>" """ if self.asGroupList: warnings.warn("cannot use sub() with Regex(asGroupList=True)", SyntaxWarning, stacklevel=2) raise SyntaxError() if self.asMatch and callable(repl): warnings.warn("cannot use sub() with a callable with Regex(asMatch=True)", SyntaxWarning, stacklevel=2) raise SyntaxError() if self.asMatch: def pa(tokens): return tokens[0].expand(repl) else: def pa(tokens): return self.re.sub(repl, tokens[0]) return self.addParseAction(pa) class QuotedString(Token): r""" Token for matching strings that are delimited by quoting characters. Defined with the following parameters: - quoteChar - string of one or more characters defining the quote delimiting string - escChar - character to escape quotes, typically backslash (default= ``None`` ) - escQuote - special quote sequence to escape an embedded quote string (such as SQL's ``""`` to escape an embedded ``"``) (default= ``None`` ) - multiline - boolean indicating whether quotes can span multiple lines (default= ``False`` ) - unquoteResults - boolean indicating whether the matched text should be unquoted (default= ``True`` ) - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default= ``None`` => same as quoteChar) - convertWhitespaceEscapes - convert escaped whitespace (``'\t'``, ``'\n'``, etc.) 
to actual whitespace (default= ``True`` ) Example:: qs = QuotedString('"') print(qs.searchString('lsjdf "This is the quote" sldjf')) complex_qs = QuotedString('{{', endQuoteChar='}}') print(complex_qs.searchString('lsjdf {{This is the "quote"}} sldjf')) sql_qs = QuotedString('"', escQuote='""') print(sql_qs.searchString('lsjdf "This is the quote with ""embedded"" quotes" sldjf')) prints:: [['This is the quote']] [['This is the "quote"']] [['This is the quote with "embedded" quotes']] """ def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None, convertWhitespaceEscapes=True): super(QuotedString,self).__init__() # remove white space from quote chars - won't work anyway quoteChar = quoteChar.strip() if not quoteChar: warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2) raise SyntaxError() if endQuoteChar is None: endQuoteChar = quoteChar else: endQuoteChar = endQuoteChar.strip() if not endQuoteChar: warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2) raise SyntaxError() self.quoteChar = quoteChar self.quoteCharLen = len(quoteChar) self.firstQuoteChar = quoteChar[0] self.endQuoteChar = endQuoteChar self.endQuoteCharLen = len(endQuoteChar) self.escChar = escChar self.escQuote = escQuote self.unquoteResults = unquoteResults self.convertWhitespaceEscapes = convertWhitespaceEscapes if multiline: self.flags = re.MULTILINE | re.DOTALL self.pattern = r'%s(?:[^%s%s]' % \ ( re.escape(self.quoteChar), _escapeRegexRangeChars(self.endQuoteChar[0]), (escChar is not None and _escapeRegexRangeChars(escChar) or '') ) else: self.flags = 0 self.pattern = r'%s(?:[^%s\n\r%s]' % \ ( re.escape(self.quoteChar), _escapeRegexRangeChars(self.endQuoteChar[0]), (escChar is not None and _escapeRegexRangeChars(escChar) or '') ) if len(self.endQuoteChar) > 1: self.pattern += ( '|(?:' + ')|(?:'.join("%s[^%s]" % (re.escape(self.endQuoteChar[:i]), _escapeRegexRangeChars(self.endQuoteChar[i])) for i in range(len(self.endQuoteChar)-1,0,-1)) + ')' ) if escQuote: self.pattern += (r'|(?:%s)' % re.escape(escQuote)) if escChar: self.pattern += (r'|(?:%s.)' % re.escape(escChar)) self.escCharReplacePattern = re.escape(self.escChar)+"(.)" self.pattern += (r')*%s' % re.escape(self.endQuoteChar)) try: self.re = re.compile(self.pattern, self.flags) self.reString = self.pattern except sre_constants.error: warnings.warn("invalid pattern (%s) passed to QuotedString" % self.pattern, SyntaxWarning, stacklevel=2) raise self.name = _ustr(self) self.errmsg = "Expected " + self.name self.mayIndexError = False self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None if not result: raise ParseException(instring, loc, self.errmsg, self) loc = result.end() ret = result.group() if self.unquoteResults: # strip off quotes ret = ret[self.quoteCharLen:-self.endQuoteCharLen] if isinstance(ret,basestring): # replace escaped whitespace if '\\' in ret and self.convertWhitespaceEscapes: ws_map = { r'\t' : '\t', r'\n' : '\n', r'\f' : '\f', r'\r' : '\r', } for wslit,wschar in ws_map.items(): ret = ret.replace(wslit, wschar) # replace escaped characters if self.escChar: ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret) # replace escaped quotes if self.escQuote: ret = ret.replace(self.escQuote, self.endQuoteChar) return loc, ret def __str__( self ): try: return super(QuotedString,self).__str__() except Exception: pass if self.strRepr is None:
self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar) return self.strRepr class CharsNotIn(Token): """Token for matching words composed of characters *not* in a given set (will include whitespace in matched characters if not listed in the provided exclusion set - see example). Defined with string containing all disallowed characters, and an optional minimum, maximum, and/or exact length. The default value for ``min`` is 1 (a minimum value < 1 is not valid); the default values for ``max`` and ``exact`` are 0, meaning no maximum or exact length restriction. Example:: # define a comma-separated-value as anything that is not a ',' csv_value = CharsNotIn(',') print(delimitedList(csv_value).parseString("dkls,lsdkjf,s12 34,@!#,213")) prints:: ['dkls', 'lsdkjf', 's12 34', '@!#', '213'] """ def __init__( self, notChars, min=1, max=0, exact=0 ): super(CharsNotIn,self).__init__() self.skipWhitespace = False self.notChars = notChars if min < 1: raise ValueError( "cannot specify a minimum length < 1; use " + "Optional(CharsNotIn()) if zero-length char group is permitted") self.minLen = min if max > 0: self.maxLen = max else: self.maxLen = _MAX_INT if exact > 0: self.maxLen = exact self.minLen = exact self.name = _ustr(self) self.errmsg = "Expected " + self.name self.mayReturnEmpty = ( self.minLen == 0 ) self.mayIndexError = False def parseImpl( self, instring, loc, doActions=True ): if instring[loc] in self.notChars: raise ParseException(instring, loc, self.errmsg, self) start = loc loc += 1 notchars = self.notChars maxlen = min( start+self.maxLen, len(instring) ) while loc < maxlen and \ (instring[loc] not in notchars): loc += 1 if loc - start < self.minLen: raise ParseException(instring, loc, self.errmsg, self) return loc, instring[start:loc] def __str__( self ): try: return super(CharsNotIn, self).__str__() except Exception: pass if self.strRepr is None: if len(self.notChars) > 4: self.strRepr = "!W:(%s...)" % self.notChars[:4] else: self.strRepr = "!W:(%s)" % self.notChars return self.strRepr class White(Token): """Special matching class for matching whitespace. Normally, whitespace is ignored by pyparsing grammars. This class is included when some whitespace structures are significant. Define with a string containing the whitespace characters to be matched; default is ``" \\t\\r\\n"``. Also takes optional ``min``, ``max``, and ``exact`` arguments, as defined for the :class:`Word` class. 
""" whiteStrs = { ' ' : '<SP>', '\t': '<TAB>', '\n': '<LF>', '\r': '<CR>', '\f': '<FF>', 'u\00A0': '<NBSP>', 'u\1680': '<OGHAM_SPACE_MARK>', 'u\180E': '<MONGOLIAN_VOWEL_SEPARATOR>', 'u\2000': '<EN_QUAD>', 'u\2001': '<EM_QUAD>', 'u\2002': '<EN_SPACE>', 'u\2003': '<EM_SPACE>', 'u\2004': '<THREE-PER-EM_SPACE>', 'u\2005': '<FOUR-PER-EM_SPACE>', 'u\2006': '<SIX-PER-EM_SPACE>', 'u\2007': '<FIGURE_SPACE>', 'u\2008': '<PUNCTUATION_SPACE>', 'u\2009': '<THIN_SPACE>', 'u\200A': '<HAIR_SPACE>', 'u\200B': '<ZERO_WIDTH_SPACE>', 'u\202F': '<NNBSP>', 'u\205F': '<MMSP>', 'u\3000': '<IDEOGRAPHIC_SPACE>', } def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0): super(White,self).__init__() self.matchWhite = ws self.setWhitespaceChars( "".join(c for c in self.whiteChars if c not in self.matchWhite) ) #~ self.leaveWhitespace() self.name = ("".join(White.whiteStrs[c] for c in self.matchWhite)) self.mayReturnEmpty = True self.errmsg = "Expected " + self.name self.minLen = min if max > 0: self.maxLen = max else: self.maxLen = _MAX_INT if exact > 0: self.maxLen = exact self.minLen = exact def parseImpl( self, instring, loc, doActions=True ): if instring[loc] not in self.matchWhite: raise ParseException(instring, loc, self.errmsg, self) start = loc loc += 1 maxloc = start + self.maxLen maxloc = min( maxloc, len(instring) ) while loc < maxloc and instring[loc] in self.matchWhite: loc += 1 if loc - start < self.minLen: raise ParseException(instring, loc, self.errmsg, self) return loc, instring[start:loc] class _PositionToken(Token): def __init__( self ): super(_PositionToken,self).__init__() self.name=self.__class__.__name__ self.mayReturnEmpty = True self.mayIndexError = False class GoToColumn(_PositionToken): """Token to advance to a specific column of input text; useful for tabular report scraping. 
""" def __init__( self, colno ): super(GoToColumn,self).__init__() self.col = colno def preParse( self, instring, loc ): if col(loc,instring) != self.col: instrlen = len(instring) if self.ignoreExprs: loc = self._skipIgnorables( instring, loc ) while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col : loc += 1 return loc def parseImpl( self, instring, loc, doActions=True ): thiscol = col( loc, instring ) if thiscol > self.col: raise ParseException( instring, loc, "Text not in expected column", self ) newloc = loc + self.col - thiscol ret = instring[ loc: newloc ] return newloc, ret class LineStart(_PositionToken): r"""Matches if current position is at the beginning of a line within the parse string Example:: test = '''\ AAA this line AAA and this line AAA but not this one B AAA and definitely not this one ''' for t in (LineStart() + 'AAA' + restOfLine).searchString(test): print(t) prints:: ['AAA', ' this line'] ['AAA', ' and this line'] """ def __init__( self ): super(LineStart,self).__init__() self.errmsg = "Expected start of line" def parseImpl( self, instring, loc, doActions=True ): if col(loc, instring) == 1: return loc, [] raise ParseException(instring, loc, self.errmsg, self) class LineEnd(_PositionToken): """Matches if current position is at the end of a line within the parse string """ def __init__( self ): super(LineEnd,self).__init__() self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") ) self.errmsg = "Expected end of line" def parseImpl( self, instring, loc, doActions=True ): if loc<len(instring): if instring[loc] == "\n": return loc+1, "\n" else: raise ParseException(instring, loc, self.errmsg, self) elif loc == len(instring): return loc+1, [] else: raise ParseException(instring, loc, self.errmsg, self) class StringStart(_PositionToken): """Matches if current position is at the beginning of the parse string """ def __init__( self ): super(StringStart,self).__init__() self.errmsg = "Expected start of text" def parseImpl( self, instring, loc, doActions=True ): if loc != 0: # see if entire string up to here is just whitespace and ignoreables if loc != self.preParse( instring, 0 ): raise ParseException(instring, loc, self.errmsg, self) return loc, [] class StringEnd(_PositionToken): """Matches if current position is at the end of the parse string """ def __init__( self ): super(StringEnd,self).__init__() self.errmsg = "Expected end of text" def parseImpl( self, instring, loc, doActions=True ): if loc < len(instring): raise ParseException(instring, loc, self.errmsg, self) elif loc == len(instring): return loc+1, [] elif loc > len(instring): return loc, [] else: raise ParseException(instring, loc, self.errmsg, self) class WordStart(_PositionToken): """Matches if the current position is at the beginning of a Word, and is not preceded by any character in a given set of ``wordChars`` (default= ``printables``). To emulate the ``\b`` behavior of regular expressions, use ``WordStart(alphanums)``. ``WordStart`` will also match at the beginning of the string being parsed, or at the beginning of a line. 
""" def __init__(self, wordChars = printables): super(WordStart,self).__init__() self.wordChars = set(wordChars) self.errmsg = "Not at the start of a word" def parseImpl(self, instring, loc, doActions=True ): if loc != 0: if (instring[loc-1] in self.wordChars or instring[loc] not in self.wordChars): raise ParseException(instring, loc, self.errmsg, self) return loc, [] class WordEnd(_PositionToken): """Matches if the current position is at the end of a Word, and is not followed by any character in a given set of ``wordChars`` (default= ``printables``). To emulate the ``\b`` behavior of regular expressions, use ``WordEnd(alphanums)``. ``WordEnd`` will also match at the end of the string being parsed, or at the end of a line. """ def __init__(self, wordChars = printables): super(WordEnd,self).__init__() self.wordChars = set(wordChars) self.skipWhitespace = False self.errmsg = "Not at the end of a word" def parseImpl(self, instring, loc, doActions=True ): instrlen = len(instring) if instrlen>0 and loc<instrlen: if (instring[loc] in self.wordChars or instring[loc-1] not in self.wordChars): raise ParseException(instring, loc, self.errmsg, self) return loc, [] class ParseExpression(ParserElement): """Abstract subclass of ParserElement, for combining and post-processing parsed tokens. """ def __init__( self, exprs, savelist = False ): super(ParseExpression,self).__init__(savelist) if isinstance( exprs, _generatorType ): exprs = list(exprs) if isinstance( exprs, basestring ): self.exprs = [ ParserElement._literalStringClass( exprs ) ] elif isinstance( exprs, Iterable ): exprs = list(exprs) # if sequence of strings provided, wrap with Literal if all(isinstance(expr, basestring) for expr in exprs): exprs = map(ParserElement._literalStringClass, exprs) self.exprs = list(exprs) else: try: self.exprs = list( exprs ) except TypeError: self.exprs = [ exprs ] self.callPreparse = False def __getitem__( self, i ): return self.exprs[i] def append( self, other ): self.exprs.append( other ) self.strRepr = None return self def leaveWhitespace( self ): """Extends ``leaveWhitespace`` defined in base class, and also invokes ``leaveWhitespace`` on all contained expressions.""" self.skipWhitespace = False self.exprs = [ e.copy() for e in self.exprs ] for e in self.exprs: e.leaveWhitespace() return self def ignore( self, other ): if isinstance( other, Suppress ): if other not in self.ignoreExprs: super( ParseExpression, self).ignore( other ) for e in self.exprs: e.ignore( self.ignoreExprs[-1] ) else: super( ParseExpression, self).ignore( other ) for e in self.exprs: e.ignore( self.ignoreExprs[-1] ) return self def __str__( self ): try: return super(ParseExpression,self).__str__() except Exception: pass if self.strRepr is None: self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.exprs) ) return self.strRepr def streamline( self ): super(ParseExpression,self).streamline() for e in self.exprs: e.streamline() # collapse nested And's of the form And( And( And( a,b), c), d) to And( a,b,c,d ) # but only if there are no parse actions or resultsNames on the nested And's # (likewise for Or's and MatchFirst's) if ( len(self.exprs) == 2 ): other = self.exprs[0] if ( isinstance( other, self.__class__ ) and not(other.parseAction) and other.resultsName is None and not other.debug ): self.exprs = other.exprs[:] + [ self.exprs[1] ] self.strRepr = None self.mayReturnEmpty |= other.mayReturnEmpty self.mayIndexError |= other.mayIndexError other = self.exprs[-1] if ( isinstance( other, self.__class__ ) and 
not(other.parseAction) and other.resultsName is None and not other.debug ): self.exprs = self.exprs[:-1] + other.exprs[:] self.strRepr = None self.mayReturnEmpty |= other.mayReturnEmpty self.mayIndexError |= other.mayIndexError self.errmsg = "Expected " + _ustr(self) return self def validate( self, validateTrace=[] ): tmp = validateTrace[:]+[self] for e in self.exprs: e.validate(tmp) self.checkRecursion( [] ) def copy(self): ret = super(ParseExpression,self).copy() ret.exprs = [e.copy() for e in self.exprs] return ret class And(ParseExpression): """ Requires all given :class:`ParseExpression` s to be found in the given order. Expressions may be separated by whitespace. May be constructed using the ``'+'`` operator. May also be constructed using the ``'-'`` operator, which will suppress backtracking. Example:: integer = Word(nums) name_expr = OneOrMore(Word(alphas)) expr = And([integer("id"),name_expr("name"),integer("age")]) # more easily written as: expr = integer("id") + name_expr("name") + integer("age") """ class _ErrorStop(Empty): def __init__(self, *args, **kwargs): super(And._ErrorStop,self).__init__(*args, **kwargs) self.name = '-' self.leaveWhitespace() def __init__( self, exprs, savelist = True ): super(And,self).__init__(exprs, savelist) self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) self.setWhitespaceChars( self.exprs[0].whiteChars ) self.skipWhitespace = self.exprs[0].skipWhitespace self.callPreparse = True def streamline(self): super(And, self).streamline() self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) return self def parseImpl( self, instring, loc, doActions=True ): # pass False as last arg to _parse for first element, since we already # pre-parsed the string as part of our And pre-parsing loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False ) errorStop = False for e in self.exprs[1:]: if isinstance(e, And._ErrorStop): errorStop = True continue if errorStop: try: loc, exprtokens = e._parse( instring, loc, doActions ) except ParseSyntaxException: raise except ParseBaseException as pe: pe.__traceback__ = None raise ParseSyntaxException._from_exception(pe) except IndexError: raise ParseSyntaxException(instring, len(instring), self.errmsg, self) else: loc, exprtokens = e._parse( instring, loc, doActions ) if exprtokens or exprtokens.haskeys(): resultlist += exprtokens return loc, resultlist def __iadd__(self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) return self.append( other ) #And( [ self, other ] ) def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) if not e.mayReturnEmpty: break def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr class Or(ParseExpression): """Requires that at least one :class:`ParseExpression` is found. If two expressions match, the expression that matches the longest string will be used. May be constructed using the ``'^'`` operator. Example:: # construct Or using '^' operator number = Word(nums) ^ Combine(Word(nums) + '.' 
+ Word(nums)) print(number.searchString("123 3.1416 789")) prints:: [['123'], ['3.1416'], ['789']] """ def __init__( self, exprs, savelist = False ): super(Or,self).__init__(exprs, savelist) if self.exprs: self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) else: self.mayReturnEmpty = True def streamline(self): super(Or, self).streamline() if __compat__.collect_all_And_tokens: self.saveAsList = any(e.saveAsList for e in self.exprs) return self def parseImpl( self, instring, loc, doActions=True ): maxExcLoc = -1 maxException = None matches = [] for e in self.exprs: try: loc2 = e.tryParse( instring, loc ) except ParseException as err: err.__traceback__ = None if err.loc > maxExcLoc: maxException = err maxExcLoc = err.loc except IndexError: if len(instring) > maxExcLoc: maxException = ParseException(instring,len(instring),e.errmsg,self) maxExcLoc = len(instring) else: # save match among all matches, to retry longest to shortest matches.append((loc2, e)) if matches: matches.sort(key=lambda x: -x[0]) for _,e in matches: try: return e._parse( instring, loc, doActions ) except ParseException as err: err.__traceback__ = None if err.loc > maxExcLoc: maxException = err maxExcLoc = err.loc if maxException is not None: maxException.msg = self.errmsg raise maxException else: raise ParseException(instring, loc, "no defined alternatives to match", self) def __ixor__(self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) return self.append( other ) #Or( [ self, other ] ) def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " ^ ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) class MatchFirst(ParseExpression): """Requires that at least one :class:`ParseExpression` is found. If two expressions match, the first one listed is the one that will match. May be constructed using the ``'|'`` operator. Example:: # construct MatchFirst using '|' operator # watch the order of expressions to match number = Word(nums) | Combine(Word(nums) + '.' + Word(nums)) print(number.searchString("123 3.1416 789")) # Fail! -> [['123'], ['3'], ['1416'], ['789']] # put more selective expression first number = Combine(Word(nums) + '.' 
+ Word(nums)) | Word(nums) print(number.searchString("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']] """ def __init__( self, exprs, savelist = False ): super(MatchFirst,self).__init__(exprs, savelist) if self.exprs: self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) else: self.mayReturnEmpty = True def streamline(self): super(MatchFirst, self).streamline() if __compat__.collect_all_And_tokens: self.saveAsList = any(e.saveAsList for e in self.exprs) return self def parseImpl( self, instring, loc, doActions=True ): maxExcLoc = -1 maxException = None for e in self.exprs: try: ret = e._parse( instring, loc, doActions ) return ret except ParseException as err: if err.loc > maxExcLoc: maxException = err maxExcLoc = err.loc except IndexError: if len(instring) > maxExcLoc: maxException = ParseException(instring,len(instring),e.errmsg,self) maxExcLoc = len(instring) # only got here if no expression matched, raise exception for match that made it the furthest else: if maxException is not None: maxException.msg = self.errmsg raise maxException else: raise ParseException(instring, loc, "no defined alternatives to match", self) def __ior__(self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) return self.append( other ) #MatchFirst( [ self, other ] ) def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " | ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) class Each(ParseExpression): """Requires all given :class:`ParseExpression` s to be found, but in any order. Expressions may be separated by whitespace. May be constructed using the ``'&'`` operator. 
Example:: color = oneOf("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN") shape_type = oneOf("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON") integer = Word(nums) shape_attr = "shape:" + shape_type("shape") posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn") color_attr = "color:" + color("color") size_attr = "size:" + integer("size") # use Each (using operator '&') to accept attributes in any order # (shape and posn are required, color and size are optional) shape_spec = shape_attr & posn_attr & Optional(color_attr) & Optional(size_attr) shape_spec.runTests(''' shape: SQUARE color: BLACK posn: 100, 120 shape: CIRCLE size: 50 color: BLUE posn: 50,80 color:GREEN size:20 shape:TRIANGLE posn:20,40 ''' ) prints:: shape: SQUARE color: BLACK posn: 100, 120 ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']] - color: BLACK - posn: ['100', ',', '120'] - x: 100 - y: 120 - shape: SQUARE shape: CIRCLE size: 50 color: BLUE posn: 50,80 ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']] - color: BLUE - posn: ['50', ',', '80'] - x: 50 - y: 80 - shape: CIRCLE - size: 50 color: GREEN size: 20 shape: TRIANGLE posn: 20,40 ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']] - color: GREEN - posn: ['20', ',', '40'] - x: 20 - y: 40 - shape: TRIANGLE - size: 20 """ def __init__( self, exprs, savelist = True ): super(Each,self).__init__(exprs, savelist) self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) self.skipWhitespace = True self.initExprGroups = True self.saveAsList = True def streamline(self): super(Each, self).streamline() self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) return self def parseImpl( self, instring, loc, doActions=True ): if self.initExprGroups: self.opt1map = dict((id(e.expr),e) for e in self.exprs if isinstance(e,Optional)) opt1 = [ e.expr for e in self.exprs if isinstance(e,Optional) ] opt2 = [ e for e in self.exprs if e.mayReturnEmpty and not isinstance(e,Optional)] self.optionals = opt1 + opt2 self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ] self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ] self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ] self.required += self.multirequired self.initExprGroups = False tmpLoc = loc tmpReqd = self.required[:] tmpOpt = self.optionals[:] matchOrder = [] keepMatching = True while keepMatching: tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired failed = [] for e in tmpExprs: try: tmpLoc = e.tryParse( instring, tmpLoc ) except ParseException: failed.append(e) else: matchOrder.append(self.opt1map.get(id(e),e)) if e in tmpReqd: tmpReqd.remove(e) elif e in tmpOpt: tmpOpt.remove(e) if len(failed) == len(tmpExprs): keepMatching = False if tmpReqd: missing = ", ".join(_ustr(e) for e in tmpReqd) raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing ) # add any unmatched Optionals, in case they have default values defined matchOrder += [e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt] resultlist = [] for e in matchOrder: loc,results = e._parse(instring,loc,doActions) resultlist.append(results) finalResults = sum(resultlist, ParseResults([])) return loc, finalResults def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " & ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr def checkRecursion( 
self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) class ParseElementEnhance(ParserElement): """Abstract subclass of :class:`ParserElement`, for combining and post-processing parsed tokens. """ def __init__( self, expr, savelist=False ): super(ParseElementEnhance,self).__init__(savelist) if isinstance( expr, basestring ): if issubclass(ParserElement._literalStringClass, Token): expr = ParserElement._literalStringClass(expr) else: expr = ParserElement._literalStringClass(Literal(expr)) self.expr = expr self.strRepr = None if expr is not None: self.mayIndexError = expr.mayIndexError self.mayReturnEmpty = expr.mayReturnEmpty self.setWhitespaceChars( expr.whiteChars ) self.skipWhitespace = expr.skipWhitespace self.saveAsList = expr.saveAsList self.callPreparse = expr.callPreparse self.ignoreExprs.extend(expr.ignoreExprs) def parseImpl( self, instring, loc, doActions=True ): if self.expr is not None: return self.expr._parse( instring, loc, doActions, callPreParse=False ) else: raise ParseException("",loc,self.errmsg,self) def leaveWhitespace( self ): self.skipWhitespace = False self.expr = self.expr.copy() if self.expr is not None: self.expr.leaveWhitespace() return self def ignore( self, other ): if isinstance( other, Suppress ): if other not in self.ignoreExprs: super( ParseElementEnhance, self).ignore( other ) if self.expr is not None: self.expr.ignore( self.ignoreExprs[-1] ) else: super( ParseElementEnhance, self).ignore( other ) if self.expr is not None: self.expr.ignore( self.ignoreExprs[-1] ) return self def streamline( self ): super(ParseElementEnhance,self).streamline() if self.expr is not None: self.expr.streamline() return self def checkRecursion( self, parseElementList ): if self in parseElementList: raise RecursiveGrammarException( parseElementList+[self] ) subRecCheckList = parseElementList[:] + [ self ] if self.expr is not None: self.expr.checkRecursion( subRecCheckList ) def validate( self, validateTrace=[] ): tmp = validateTrace[:]+[self] if self.expr is not None: self.expr.validate(tmp) self.checkRecursion( [] ) def __str__( self ): try: return super(ParseElementEnhance,self).__str__() except Exception: pass if self.strRepr is None and self.expr is not None: self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) ) return self.strRepr class FollowedBy(ParseElementEnhance): """Lookahead matching of the given parse expression. ``FollowedBy`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. ``FollowedBy`` always returns a null token list. If any results names are defined in the lookahead expression, those *will* be returned for access by name. Example:: # use FollowedBy to match a label only if it is followed by a ':' data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString("shape: SQUARE color: BLACK posn: upper left").pprint() prints:: [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']] """ def __init__( self, expr ): super(FollowedBy,self).__init__(expr) self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): _, ret = self.expr._parse(instring, loc, doActions=doActions) del ret[:] return loc, ret class PrecededBy(ParseElementEnhance): """Lookbehind matching of the given parse expression. 
``PrecededBy`` does not advance the parsing position within the input string, it only verifies that the specified parse expression matches prior to the current position. ``PrecededBy`` always returns a null token list, but if a results name is defined on the given expression, it is returned. Parameters: - expr - expression that must match prior to the current parse location - retreat - (default= ``None``) - (int) maximum number of characters to lookbehind prior to the current parse location If the lookbehind expression is a string, Literal, Keyword, or a Word or CharsNotIn with a specified exact or maximum length, then the retreat parameter is not required. Otherwise, retreat must be specified to give a maximum number of characters to look back from the current parse position for a lookbehind match. Example:: # VB-style variable names with type prefixes int_var = PrecededBy("#") + pyparsing_common.identifier str_var = PrecededBy("$") + pyparsing_common.identifier """ def __init__(self, expr, retreat=None): super(PrecededBy, self).__init__(expr) self.expr = self.expr().leaveWhitespace() self.mayReturnEmpty = True self.mayIndexError = False self.exact = False if isinstance(expr, str): retreat = len(expr) self.exact = True elif isinstance(expr, (Literal, Keyword)): retreat = expr.matchLen self.exact = True elif isinstance(expr, (Word, CharsNotIn)) and expr.maxLen != _MAX_INT: retreat = expr.maxLen self.exact = True elif isinstance(expr, _PositionToken): retreat = 0 self.exact = True self.retreat = retreat self.errmsg = "not preceded by " + str(expr) self.skipWhitespace = False def parseImpl(self, instring, loc=0, doActions=True): if self.exact: if loc < self.retreat: raise ParseException(instring, loc, self.errmsg) start = loc - self.retreat _, ret = self.expr._parse(instring, start) else: # retreat specified a maximum lookbehind window, iterate test_expr = self.expr + StringEnd() instring_slice = instring[:loc] last_expr = ParseException(instring, loc, self.errmsg) for offset in range(1, min(loc, self.retreat+1)): try: _, ret = test_expr._parse(instring_slice, loc-offset) except ParseBaseException as pbe: last_expr = pbe else: break else: raise last_expr # return empty list of tokens, but preserve any defined results names del ret[:] return loc, ret class NotAny(ParseElementEnhance): """Lookahead to disallow matching with the given parse expression. ``NotAny`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression does *not* match at the current position. Also, ``NotAny`` does *not* skip over leading whitespace. ``NotAny`` always returns a null token list. May be constructed using the '~' operator. Example:: AND, OR, NOT = map(CaselessKeyword, "AND OR NOT".split()) # take care not to mistake keywords for identifiers ident = ~(AND | OR | NOT) + Word(alphas) boolean_term = Optional(NOT) + ident # very crude boolean expression - to support parenthesis groups and # operation hierarchy, use infixNotation boolean_expr = boolean_term + ZeroOrMore((AND | OR) + boolean_term) # integers that are followed by "." 
are actually floats integer = Word(nums) + ~Char(".") """ def __init__( self, expr ): super(NotAny,self).__init__(expr) #~ self.leaveWhitespace() self.skipWhitespace = False # do NOT use self.leaveWhitespace(), don't want to propagate to exprs self.mayReturnEmpty = True self.errmsg = "Found unwanted token, "+_ustr(self.expr) def parseImpl( self, instring, loc, doActions=True ): if self.expr.canParseNext(instring, loc): raise ParseException(instring, loc, self.errmsg, self) return loc, [] def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "~{" + _ustr(self.expr) + "}" return self.strRepr class _MultipleMatch(ParseElementEnhance): def __init__( self, expr, stopOn=None): super(_MultipleMatch, self).__init__(expr) self.saveAsList = True ender = stopOn if isinstance(ender, basestring): ender = ParserElement._literalStringClass(ender) self.not_ender = ~ender if ender is not None else None def parseImpl( self, instring, loc, doActions=True ): self_expr_parse = self.expr._parse self_skip_ignorables = self._skipIgnorables check_ender = self.not_ender is not None if check_ender: try_not_ender = self.not_ender.tryParse # must be at least one (but first see if we are the stopOn sentinel; # if so, fail) if check_ender: try_not_ender(instring, loc) loc, tokens = self_expr_parse( instring, loc, doActions, callPreParse=False ) try: hasIgnoreExprs = (not not self.ignoreExprs) while 1: if check_ender: try_not_ender(instring, loc) if hasIgnoreExprs: preloc = self_skip_ignorables( instring, loc ) else: preloc = loc loc, tmptokens = self_expr_parse( instring, preloc, doActions ) if tmptokens or tmptokens.haskeys(): tokens += tmptokens except (ParseException,IndexError): pass return loc, tokens class OneOrMore(_MultipleMatch): """Repetition of one or more of the given expression. Parameters: - expr - expression that must match one or more times - stopOn - (default= ``None``) - expression for a terminating sentinel (only required if the sentinel would ordinarily match the repetition expression) Example:: data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) text = "shape: SQUARE posn: upper left color: BLACK" OneOrMore(attr_expr).parseString(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']] # use stopOn attribute for OneOrMore to avoid reading label string as part of the data attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']] # could also be written as (attr_expr * (1,)).parseString(text).pprint() """ def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + _ustr(self.expr) + "}..." return self.strRepr class ZeroOrMore(_MultipleMatch): """Optional repetition of zero or more of the given expression. 
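For example, a minimal sketch (names are illustrative)::

    # an identifier followed by any number of integer arguments
    command = Word(alphas) + ZeroOrMore(Word(nums))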
Parameters: - expr - expression that must match zero or more times - stopOn - (default= ``None``) - expression for a terminating sentinel (only required if the sentinel would ordinarily match the repetition expression) Example: similar to :class:`OneOrMore` """ def __init__( self, expr, stopOn=None): super(ZeroOrMore,self).__init__(expr, stopOn=stopOn) self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): try: return super(ZeroOrMore, self).parseImpl(instring, loc, doActions) except (ParseException,IndexError): return loc, [] def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "[" + _ustr(self.expr) + "]..." return self.strRepr class _NullToken(object): def __bool__(self): return False __nonzero__ = __bool__ def __str__(self): return "" _optionalNotMatched = _NullToken() class Optional(ParseElementEnhance): """Optional matching of the given expression. Parameters: - expr - expression that may match zero or one time - default (optional) - value to be returned if the optional expression is not found. Example:: # US postal code can be a 5-digit zip, plus optional 4-digit qualifier zip = Combine(Word(nums, exact=5) + Optional('-' + Word(nums, exact=4))) zip.runTests(''' # traditional ZIP code 12345 # ZIP+4 form 12101-0001 # invalid ZIP 98765- ''') prints:: # traditional ZIP code 12345 ['12345'] # ZIP+4 form 12101-0001 ['12101-0001'] # invalid ZIP 98765- ^ FAIL: Expected end of text (at char 5), (line:1, col:6) """ def __init__( self, expr, default=_optionalNotMatched ): super(Optional,self).__init__( expr, savelist=False ) self.saveAsList = self.expr.saveAsList self.defaultValue = default self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): try: loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False ) except (ParseException,IndexError): if self.defaultValue is not _optionalNotMatched: if self.expr.resultsName: tokens = ParseResults([ self.defaultValue ]) tokens[self.expr.resultsName] = self.defaultValue else: tokens = [ self.defaultValue ] else: tokens = [] return loc, tokens def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "[" + _ustr(self.expr) + "]" return self.strRepr class SkipTo(ParseElementEnhance): """Token for skipping over all undefined text until the matched expression is found. Parameters: - expr - target expression marking the end of the data to be skipped - include - (default= ``False``) if True, the target expression is also parsed (the skipped text and target expression are returned as a 2-element list).
- ignore - (default= ``None``) used to define grammars (typically quoted strings and comments) that might contain false matches to the target expression - failOn - (default= ``None``) define expressions that are not allowed to be included in the skipped text; if found before the target expression is found, the SkipTo is not a match Example:: report = ''' Outstanding Issues Report - 1 Jan 2000 # | Severity | Description | Days Open -----+----------+-------------------------------------------+----------- 101 | Critical | Intermittent system crash | 6 94 | Cosmetic | Spelling error on Login ('log|n') | 14 79 | Minor | System slow when running too many reports | 47 ''' integer = Word(nums) SEP = Suppress('|') # use SkipTo to simply match everything up until the next SEP # - ignore quoted strings, so that a '|' character inside a quoted string does not match # - parse action will call token.strip() for each matched token, i.e., the description body string_data = SkipTo(SEP, ignore=quotedString) string_data.setParseAction(tokenMap(str.strip)) ticket_expr = (integer("issue_num") + SEP + string_data("sev") + SEP + string_data("desc") + SEP + integer("days_open")) for tkt in ticket_expr.searchString(report): print(tkt.dump()) prints:: ['101', 'Critical', 'Intermittent system crash', '6'] - days_open: 6 - desc: Intermittent system crash - issue_num: 101 - sev: Critical ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14'] - days_open: 14 - desc: Spelling error on Login ('log|n') - issue_num: 94 - sev: Cosmetic ['79', 'Minor', 'System slow when running too many reports', '47'] - days_open: 47 - desc: System slow when running too many reports - issue_num: 79 - sev: Minor """ def __init__( self, other, include=False, ignore=None, failOn=None ): super( SkipTo, self ).__init__( other ) self.ignoreExpr = ignore self.mayReturnEmpty = True self.mayIndexError = False self.includeMatch = include self.saveAsList = False if isinstance(failOn, basestring): self.failOn = ParserElement._literalStringClass(failOn) else: self.failOn = failOn self.errmsg = "No match found for "+_ustr(self.expr) def parseImpl( self, instring, loc, doActions=True ): startloc = loc instrlen = len(instring) expr = self.expr expr_parse = self.expr._parse self_failOn_canParseNext = self.failOn.canParseNext if self.failOn is not None else None self_ignoreExpr_tryParse = self.ignoreExpr.tryParse if self.ignoreExpr is not None else None tmploc = loc while tmploc <= instrlen: if self_failOn_canParseNext is not None: # break if failOn expression matches if self_failOn_canParseNext(instring, tmploc): break if self_ignoreExpr_tryParse is not None: # advance past ignore expressions while 1: try: tmploc = self_ignoreExpr_tryParse(instring, tmploc) except ParseBaseException: break try: expr_parse(instring, tmploc, doActions=False, callPreParse=False) except (ParseException, IndexError): # no match, advance loc in string tmploc += 1 else: # matched skipto expr, done break else: # ran off the end of the input string without matching skipto expr, fail raise ParseException(instring, loc, self.errmsg, self) # build up return values loc = tmploc skiptext = instring[startloc:loc] skipresult = ParseResults(skiptext) if self.includeMatch: loc, mat = expr_parse(instring,loc,doActions,callPreParse=False) skipresult += mat return loc, skipresult class Forward(ParseElementEnhance): """Forward declaration of an expression to be defined later - used for recursive grammars, such as algebraic infix notation.
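A minimal sketch of such a recursive grammar (names are illustrative)::

    # nested parenthesized groups of integers, e.g. "(1 (2 3) 4)"
    LPAR, RPAR = map(Suppress, "()")
    group = Forward()
    group <<= Group(LPAR + ZeroOrMore(Word(nums) | group) + RPAR)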
When the expression is known, it is assigned to the ``Forward`` variable using the '<<' operator. Note: take care when assigning to ``Forward`` not to overlook precedence of operators. Specifically, '|' has a lower precedence than '<<', so that:: fwdExpr << a | b | c will actually be evaluated as:: (fwdExpr << a) | b | c thereby leaving b and c out as parseable alternatives. It is recommended that you explicitly group the values inserted into the ``Forward``:: fwdExpr << (a | b | c) Converting to use the '<<=' operator instead will avoid this problem. See :class:`ParseResults.pprint` for an example of a recursive parser created using ``Forward``. """ def __init__( self, other=None ): super(Forward,self).__init__( other, savelist=False ) def __lshift__( self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass(other) self.expr = other self.strRepr = None self.mayIndexError = self.expr.mayIndexError self.mayReturnEmpty = self.expr.mayReturnEmpty self.setWhitespaceChars( self.expr.whiteChars ) self.skipWhitespace = self.expr.skipWhitespace self.saveAsList = self.expr.saveAsList self.ignoreExprs.extend(self.expr.ignoreExprs) return self def __ilshift__(self, other): return self << other def leaveWhitespace( self ): self.skipWhitespace = False return self def streamline( self ): if not self.streamlined: self.streamlined = True if self.expr is not None: self.expr.streamline() return self def validate( self, validateTrace=[] ): if self not in validateTrace: tmp = validateTrace[:]+[self] if self.expr is not None: self.expr.validate(tmp) self.checkRecursion([]) def __str__( self ): if hasattr(self,"name"): return self.name # Avoid infinite recursion by setting a temporary name self.name = self.__class__.__name__ + ": ..." # Use the string representation of main expression. try: if self.expr is not None: retString = _ustr(self.expr) else: retString = "None" finally: del self.name return self.__class__.__name__ + ": " + retString def copy(self): if self.expr is not None: return super(Forward,self).copy() else: ret = Forward() ret <<= self return ret class TokenConverter(ParseElementEnhance): """ Abstract subclass of :class:`ParseExpression`, for converting parsed results. """ def __init__( self, expr, savelist=False ): super(TokenConverter,self).__init__( expr )#, savelist ) self.saveAsList = False class Combine(TokenConverter): """Converter to concatenate all matching tokens to a single string. By default, the matching patterns must also be contiguous in the input string; this can be disabled by specifying ``'adjacent=False'`` in the constructor. Example:: real = Word(nums) + '.' + Word(nums) print(real.parseString('3.1416')) # -> ['3', '.', '1416'] # will also erroneously match the following print(real.parseString('3. 1416')) # -> ['3', '.', '1416'] real = Combine(Word(nums) + '.' + Word(nums)) print(real.parseString('3.1416')) # -> ['3.1416'] # no match when there are internal spaces print(real.parseString('3. 1416')) # -> Exception: Expected W:(0123...) 
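The ``joinString`` and ``adjacent`` constructor arguments (see ``__init__`` below) control the string inserted between the joined tokens and whether the matched tokens must be contiguous in the input; a hypothetical sketch::

    # join two non-adjacent words with a single space
    full_name = Combine(Word(alphas) + Word(alphas), joinString=" ", adjacent=False)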
""" def __init__( self, expr, joinString="", adjacent=True ): super(Combine,self).__init__( expr ) # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself if adjacent: self.leaveWhitespace() self.adjacent = adjacent self.skipWhitespace = True self.joinString = joinString self.callPreparse = True def ignore( self, other ): if self.adjacent: ParserElement.ignore(self, other) else: super( Combine, self).ignore( other ) return self def postParse( self, instring, loc, tokenlist ): retToks = tokenlist.copy() del retToks[:] retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults) if self.resultsName and retToks.haskeys(): return [ retToks ] else: return retToks class Group(TokenConverter): """Converter to return the matched tokens as a list - useful for returning tokens of :class:`ZeroOrMore` and :class:`OneOrMore` expressions. Example:: ident = Word(alphas) num = Word(nums) term = ident | num func = ident + Optional(delimitedList(term)) print(func.parseString("fn a,b,100")) # -> ['fn', 'a', 'b', '100'] func = ident + Group(Optional(delimitedList(term))) print(func.parseString("fn a,b,100")) # -> ['fn', ['a', 'b', '100']] """ def __init__( self, expr ): super(Group,self).__init__( expr ) self.saveAsList = True def postParse( self, instring, loc, tokenlist ): return [ tokenlist ] class Dict(TokenConverter): """Converter to return a repetitive expression as a list, but also as a dictionary. Each element can also be referenced using the first token in the expression as its key. Useful for tabular report scraping when the first column can be used as a item key. Example:: data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) text = "shape: SQUARE posn: upper left color: light blue texture: burlap" attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) # print attributes as plain groups print(OneOrMore(attr_expr).parseString(text).dump()) # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names result = Dict(OneOrMore(Group(attr_expr))).parseString(text) print(result.dump()) # access named fields as dict entries, or output as dict print(result['shape']) print(result.asDict()) prints:: ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap'] [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - color: light blue - posn: upper left - shape: SQUARE - texture: burlap SQUARE {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'} See more examples at :class:`ParseResults` of accessing fields by results name. 
""" def __init__( self, expr ): super(Dict,self).__init__( expr ) self.saveAsList = True def postParse( self, instring, loc, tokenlist ): for i,tok in enumerate(tokenlist): if len(tok) == 0: continue ikey = tok[0] if isinstance(ikey,int): ikey = _ustr(tok[0]).strip() if len(tok)==1: tokenlist[ikey] = _ParseResultsWithOffset("",i) elif len(tok)==2 and not isinstance(tok[1],ParseResults): tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i) else: dictvalue = tok.copy() #ParseResults(i) del dictvalue[0] if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.haskeys()): tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i) else: tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i) if self.resultsName: return [ tokenlist ] else: return tokenlist class Suppress(TokenConverter): """Converter for ignoring the results of a parsed expression. Example:: source = "a, b, c,d" wd = Word(alphas) wd_list1 = wd + ZeroOrMore(',' + wd) print(wd_list1.parseString(source)) # often, delimiters that are useful during parsing are just in the # way afterward - use Suppress to keep them out of the parsed output wd_list2 = wd + ZeroOrMore(Suppress(',') + wd) print(wd_list2.parseString(source)) prints:: ['a', ',', 'b', ',', 'c', ',', 'd'] ['a', 'b', 'c', 'd'] (See also :class:`delimitedList`.) """ def postParse( self, instring, loc, tokenlist ): return [] def suppress( self ): return self class OnlyOnce(object): """Wrapper for parse actions, to ensure they are only called once. """ def __init__(self, methodCall): self.callable = _trim_arity(methodCall) self.called = False def __call__(self,s,l,t): if not self.called: results = self.callable(s,l,t) self.called = True return results raise ParseException(s,l,"") def reset(self): self.called = False def traceParseAction(f): """Decorator for debugging parse actions. When the parse action is called, this decorator will print ``">> entering method-name(line:<current_source_line>, <parse_location>, <matched_tokens>)"``. When the parse action completes, the decorator will print ``"<<"`` followed by the returned value, or any exception that the parse action raised. Example:: wd = Word(alphas) @traceParseAction def remove_duplicate_chars(tokens): return ''.join(sorted(set(''.join(tokens)))) wds = OneOrMore(wd).setParseAction(remove_duplicate_chars) print(wds.parseString("slkdjs sld sldd sdlf sdljf")) prints:: >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {})) <<leaving remove_duplicate_chars (ret: 'dfjkls') ['dfjkls'] """ f = _trim_arity(f) def z(*paArgs): thisFunc = f.__name__ s,l,t = paArgs[-3:] if len(paArgs)>3: thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc sys.stderr.write( ">>entering %s(line: '%s', %d, %r)\n" % (thisFunc,line(l,s),l,t) ) try: ret = f(*paArgs) except Exception as exc: sys.stderr.write( "<<leaving %s (exception: %s)\n" % (thisFunc,exc) ) raise sys.stderr.write( "<<leaving %s (ret: %r)\n" % (thisFunc,ret) ) return ret try: z.__name__ = f.__name__ except AttributeError: pass return z # # global helpers # def delimitedList( expr, delim=",", combine=False ): """Helper to define a delimited list of expressions - the delimiter defaults to ','. By default, the list elements and delimiters can have intervening whitespace, and comments, but this can be overridden by passing ``combine=True`` in the constructor. 
    If ``combine`` is set to ``True``, the matching tokens are
    returned as a single token string, with the delimiters included;
    otherwise, the matching tokens are returned as a list of tokens,
    with the delimiters suppressed.

    Example::

        delimitedList(Word(alphas)).parseString("aa,bb,cc") # -> ['aa', 'bb', 'cc']
        delimitedList(Word(hexnums), delim=':', combine=True).parseString("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE']
    """
    dlName = _ustr(expr) + " [" + _ustr(delim) + " " + _ustr(expr) + "]..."
    if combine:
        return Combine(expr + ZeroOrMore(delim + expr)).setName(dlName)
    else:
        return (expr + ZeroOrMore(Suppress(delim) + expr)).setName(dlName)

def countedArray(expr, intExpr=None):
    """Helper to define a counted list of expressions.

    This helper defines a pattern of the form::

        integer expr expr expr...

    where the leading integer tells how many expr expressions follow.
    The matched tokens are returned as a list of expr tokens; the
    leading count token is suppressed.

    If ``intExpr`` is specified, it should be a pyparsing expression
    that produces an integer value.

    Example::

        countedArray(Word(alphas)).parseString('2 ab cd ef')  # -> ['ab', 'cd']

        # in this parser, the leading integer value is given in binary,
        # '10' indicating that 2 values are in the array
        binaryConstant = Word('01').setParseAction(lambda t: int(t[0], 2))
        countedArray(Word(alphas), intExpr=binaryConstant).parseString('10 ab cd ef')  # -> ['ab', 'cd']
    """
    arrayExpr = Forward()
    def countFieldParseAction(s, l, t):
        n = t[0]
        arrayExpr << (n and Group(And([expr] * n)) or Group(empty))
        return []
    if intExpr is None:
        intExpr = Word(nums).setParseAction(lambda t: int(t[0]))
    else:
        intExpr = intExpr.copy()
    intExpr.setName("arrayLen")
    intExpr.addParseAction(countFieldParseAction, callDuringTry=True)
    return (intExpr + arrayExpr).setName('(len) ' + _ustr(expr) + '...')

def _flatten(L):
    ret = []
    for i in L:
        if isinstance(i, list):
            ret.extend(_flatten(i))
        else:
            ret.append(i)
    return ret

def matchPreviousLiteral(expr):
    """Helper to define an expression that is indirectly defined from
    the tokens matched in a previous expression, that is, it looks for
    a 'repeat' of a previous expression.  For example::

        first = Word(nums)
        second = matchPreviousLiteral(first)
        matchExpr = first + ":" + second

    will match ``"1:1"``, but not ``"1:2"``.  Because this matches a
    previous literal, it will also match the leading ``"1:1"`` in
    ``"1:10"``. If this is not desired, use :class:`matchPreviousExpr`.
    Do *not* use with packrat parsing enabled.
    """
    rep = Forward()
    def copyTokenToRepeater(s, l, t):
        if t:
            if len(t) == 1:
                rep << t[0]
            else:
                # flatten t tokens
                tflat = _flatten(t.asList())
                rep << And(Literal(tt) for tt in tflat)
        else:
            rep << Empty()
    expr.addParseAction(copyTokenToRepeater, callDuringTry=True)
    rep.setName('(prev) ' + _ustr(expr))
    return rep

def matchPreviousExpr(expr):
    """Helper to define an expression that is indirectly defined from
    the tokens matched in a previous expression, that is, it looks for
    a 'repeat' of a previous expression.  For example::

        first = Word(nums)
        second = matchPreviousExpr(first)
        matchExpr = first + ":" + second

    will match ``"1:1"``, but not ``"1:2"``.  Because this matches by
    expressions, it will *not* match the leading ``"1:1"`` in
    ``"1:10"``; the expressions are evaluated first, and then compared,
    so ``"1"`` is compared with ``"10"``. Do *not* use with packrat
    parsing enabled.
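    An illustrative sketch of the difference from :class:`matchPreviousLiteral`::

        first = Word(nums)
        match_lit  = first + ":" + matchPreviousLiteral(first)
        match_expr = first + ":" + matchPreviousExpr(first)
        match_lit.parseString("1:10")     # matches - '1' is a literal prefix of '10'
        # match_expr.parseString("1:10")  # would raise ParseException - '1' != '10'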
""" rep = Forward() e2 = expr.copy() rep <<= e2 def copyTokenToRepeater(s,l,t): matchTokens = _flatten(t.asList()) def mustMatchTheseTokens(s,l,t): theseTokens = _flatten(t.asList()) if theseTokens != matchTokens: raise ParseException("",0,"") rep.setParseAction( mustMatchTheseTokens, callDuringTry=True ) expr.addParseAction(copyTokenToRepeater, callDuringTry=True) rep.setName('(prev) ' + _ustr(expr)) return rep def _escapeRegexRangeChars(s): #~ escape these chars: ^-] for c in r"\^-]": s = s.replace(c,_bslash+c) s = s.replace("\n",r"\n") s = s.replace("\t",r"\t") return _ustr(s) def oneOf( strs, caseless=False, useRegex=True ): """Helper to quickly define a set of alternative Literals, and makes sure to do longest-first testing when there is a conflict, regardless of the input order, but returns a :class:`MatchFirst` for best performance. Parameters: - strs - a string of space-delimited literals, or a collection of string literals - caseless - (default= ``False``) - treat all literals as caseless - useRegex - (default= ``True``) - as an optimization, will generate a Regex object; otherwise, will generate a :class:`MatchFirst` object (if ``caseless=True``, or if creating a :class:`Regex` raises an exception) Example:: comp_oper = oneOf("< = > <= >= !=") var = Word(alphas) number = Word(nums) term = var | number comparison_expr = term + comp_oper + term print(comparison_expr.searchString("B = 12 AA=23 B<=AA AA>12")) prints:: [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] """ if caseless: isequal = ( lambda a,b: a.upper() == b.upper() ) masks = ( lambda a,b: b.upper().startswith(a.upper()) ) parseElementClass = CaselessLiteral else: isequal = ( lambda a,b: a == b ) masks = ( lambda a,b: b.startswith(a) ) parseElementClass = Literal symbols = [] if isinstance(strs,basestring): symbols = strs.split() elif isinstance(strs, Iterable): symbols = list(strs) else: warnings.warn("Invalid argument to oneOf, expected string or iterable", SyntaxWarning, stacklevel=2) if not symbols: return NoMatch() i = 0 while i < len(symbols)-1: cur = symbols[i] for j,other in enumerate(symbols[i+1:]): if ( isequal(other, cur) ): del symbols[i+j+1] break elif ( masks(cur, other) ): del symbols[i+j+1] symbols.insert(i,other) cur = other break else: i += 1 if not caseless and useRegex: #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] )) try: if len(symbols)==len("".join(symbols)): return Regex( "[%s]" % "".join(_escapeRegexRangeChars(sym) for sym in symbols) ).setName(' | '.join(symbols)) else: return Regex( "|".join(re.escape(sym) for sym in symbols) ).setName(' | '.join(symbols)) except Exception: warnings.warn("Exception creating Regex for oneOf, building MatchFirst", SyntaxWarning, stacklevel=2) # last resort, just use MatchFirst return MatchFirst(parseElementClass(sym) for sym in symbols).setName(' | '.join(symbols)) def dictOf( key, value ): """Helper to easily and clearly define a dictionary by specifying the respective patterns for the key and value. Takes care of defining the :class:`Dict`, :class:`ZeroOrMore`, and :class:`Group` tokens in the proper order. The key pattern can include delimiting markers or punctuation, as long as they are suppressed, thereby leaving the significant key text. The value pattern can include named results, so that the :class:`Dict` results can include named token fields. 
Example:: text = "shape: SQUARE posn: upper left color: light blue texture: burlap" attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) print(OneOrMore(attr_expr).parseString(text).dump()) attr_label = label attr_value = Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join) # similar to Dict, but simpler call format result = dictOf(attr_label, attr_value).parseString(text) print(result.dump()) print(result['shape']) print(result.shape) # object attribute access works too print(result.asDict()) prints:: [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - color: light blue - posn: upper left - shape: SQUARE - texture: burlap SQUARE SQUARE {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} """ return Dict(OneOrMore(Group(key + value))) def originalTextFor(expr, asString=True): """Helper to return the original, untokenized text for a given expression. Useful to restore the parsed fields of an HTML start tag into the raw tag text itself, or to revert separate tokens with intervening whitespace back to the original matching input text. By default, returns astring containing the original parsed text. If the optional ``asString`` argument is passed as ``False``, then the return value is a :class:`ParseResults` containing any results names that were originally matched, and a single token containing the original matched text from the input string. So if the expression passed to :class:`originalTextFor` contains expressions with defined results names, you must set ``asString`` to ``False`` if you want to preserve those results name values. Example:: src = "this is test <b> bold <i>text</i> </b> normal text " for tag in ("b","i"): opener,closer = makeHTMLTags(tag) patt = originalTextFor(opener + SkipTo(closer) + closer) print(patt.searchString(src)[0]) prints:: ['<b> bold <i>text</i> </b>'] ['<i>text</i>'] """ locMarker = Empty().setParseAction(lambda s,loc,t: loc) endlocMarker = locMarker.copy() endlocMarker.callPreparse = False matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") if asString: extractText = lambda s,l,t: s[t._original_start:t._original_end] else: def extractText(s,l,t): t[:] = [s[t.pop('_original_start'):t.pop('_original_end')]] matchExpr.setParseAction(extractText) matchExpr.ignoreExprs = expr.ignoreExprs return matchExpr def ungroup(expr): """Helper to undo pyparsing's default grouping of And expressions, even if all but one are non-empty. """ return TokenConverter(expr).addParseAction(lambda t:t[0]) def locatedExpr(expr): """Helper to decorate a returned token with its starting and ending locations in the input string. 
This helper adds the following results names: - locn_start = location where matched expression begins - locn_end = location where matched expression ends - value = the actual parsed results Be careful if the input text contains ``<TAB>`` characters, you may want to call :class:`ParserElement.parseWithTabs` Example:: wd = Word(alphas) for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): print(match) prints:: [[0, 'ljsdf', 5]] [[8, 'lksdjjf', 15]] [[18, 'lkkjj', 23]] """ locator = Empty().setParseAction(lambda s,l,t: l) return Group(locator("locn_start") + expr("value") + locator.copy().leaveWhitespace()("locn_end")) # convenience constants for positional expressions empty = Empty().setName("empty") lineStart = LineStart().setName("lineStart") lineEnd = LineEnd().setName("lineEnd") stringStart = StringStart().setName("stringStart") stringEnd = StringEnd().setName("stringEnd") _escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1]) _escapedHexChar = Regex(r"\\0?[xX][0-9a-fA-F]+").setParseAction(lambda s,l,t:unichr(int(t[0].lstrip(r'\0x'),16))) _escapedOctChar = Regex(r"\\0[0-7]+").setParseAction(lambda s,l,t:unichr(int(t[0][1:],8))) _singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | CharsNotIn(r'\]', exact=1) _charRange = Group(_singleChar + Suppress("-") + _singleChar) _reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]" def srange(s): r"""Helper to easily define string ranges for use in Word construction. Borrows syntax from regexp '[]' string range definitions:: srange("[0-9]") -> "0123456789" srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz" srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_" The input string must be enclosed in []'s, and the returned string is the expanded character set joined into a single string. The values enclosed in the []'s may be: - a single character - an escaped character with a leading backslash (such as ``\-`` or ``\]``) - an escaped hex character with a leading ``'\x'`` (``\x21``, which is a ``'!'`` character) (``\0x##`` is also supported for backwards compatibility) - an escaped octal character with a leading ``'\0'`` (``\041``, which is a ``'!'`` character) - a range of any of the above, separated by a dash (``'a-z'``, etc.) - any combination of the above (``'aeiouy'``, ``'a-zA-Z0-9_$'``, etc.) """ _expanded = lambda p: p if not isinstance(p,ParseResults) else ''.join(unichr(c) for c in range(ord(p[0]),ord(p[1])+1)) try: return "".join(_expanded(part) for part in _reBracketExpr.parseString(s).body) except Exception: return "" def matchOnlyAtCol(n): """Helper method for defining parse actions that require matching at a specific column in the input text. """ def verifyCol(strg,locn,toks): if col(locn,strg) != n: raise ParseException(strg,locn,"matched token not at column %d" % n) return verifyCol def replaceWith(replStr): """Helper method for common parse actions that simply return a literal value. Especially useful when used with :class:`transformString<ParserElement.transformString>` (). Example:: num = Word(nums).setParseAction(lambda toks: int(toks[0])) na = oneOf("N/A NA").setParseAction(replaceWith(math.nan)) term = na | num OneOrMore(term).parseString("324 234 N/A 234") # -> [324, 234, nan, 234] """ return lambda s,l,t: [replStr] def removeQuotes(s,l,t): """Helper parse action for removing quotation marks from parsed quoted strings. 
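    (Under the hood it simply strips the first and last characters of the
    matched token - an illustrative sketch)::

        removeQuotes("", 0, ['"hi"'])    # -> 'hi'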
Example:: # by default, quotation marks are included in parsed results quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] # use removeQuotes to strip quotation marks from parsed results quotedString.setParseAction(removeQuotes) quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] """ return t[0][1:-1] def tokenMap(func, *args): """Helper to define a parse action by mapping a function to all elements of a ParseResults list. If any additional args are passed, they are forwarded to the given function as additional arguments after the token, as in ``hex_integer = Word(hexnums).setParseAction(tokenMap(int, 16))``, which will convert the parsed data to an integer using base 16. Example (compare the last to example in :class:`ParserElement.transformString`:: hex_ints = OneOrMore(Word(hexnums)).setParseAction(tokenMap(int, 16)) hex_ints.runTests(''' 00 11 22 aa FF 0a 0d 1a ''') upperword = Word(alphas).setParseAction(tokenMap(str.upper)) OneOrMore(upperword).runTests(''' my kingdom for a horse ''') wd = Word(alphas).setParseAction(tokenMap(str.title)) OneOrMore(wd).setParseAction(' '.join).runTests(''' now is the winter of our discontent made glorious summer by this sun of york ''') prints:: 00 11 22 aa FF 0a 0d 1a [0, 17, 34, 170, 255, 10, 13, 26] my kingdom for a horse ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE'] now is the winter of our discontent made glorious summer by this sun of york ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York'] """ def pa(s,l,t): return [func(tokn, *args) for tokn in t] try: func_name = getattr(func, '__name__', getattr(func, '__class__').__name__) except Exception: func_name = str(func) pa.__name__ = func_name return pa upcaseTokens = tokenMap(lambda t: _ustr(t).upper()) """(Deprecated) Helper parse action to convert tokens to upper case. Deprecated in favor of :class:`pyparsing_common.upcaseTokens`""" downcaseTokens = tokenMap(lambda t: _ustr(t).lower()) """(Deprecated) Helper parse action to convert tokens to lower case. 
Deprecated in favor of :class:`pyparsing_common.downcaseTokens`""" def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")): """Internal helper to construct opening and closing tag expressions, given a tag name""" if isinstance(tagStr,basestring): resname = tagStr tagStr = Keyword(tagStr, caseless=not xml) else: resname = tagStr.name tagAttrName = Word(alphas,alphanums+"_-:") if (xml): tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes ) openTag = (suppress_LT + tagStr("tag") + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue ))) + Optional("/", default=[False])("empty").setParseAction(lambda s,l,t:t[0]=='/') + suppress_GT) else: tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printables, excludeChars=">") openTag = (suppress_LT + tagStr("tag") + Dict(ZeroOrMore(Group(tagAttrName.setParseAction(downcaseTokens) + Optional(Suppress("=") + tagAttrValue)))) + Optional("/",default=[False])("empty").setParseAction(lambda s,l,t:t[0]=='/') + suppress_GT) closeTag = Combine(_L("</") + tagStr + ">", adjacent=False) openTag.setName("<%s>" % resname) # add start<tagname> results name in parse action now that ungrouped names are not reported at two levels openTag.addParseAction(lambda t: t.__setitem__("start"+"".join(resname.replace(":"," ").title().split()), t.copy())) closeTag = closeTag("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % resname) openTag.tag = resname closeTag.tag = resname openTag.tag_body = SkipTo(closeTag()) return openTag, closeTag def makeHTMLTags(tagStr): """Helper to construct opening and closing tag expressions for HTML, given a tag name. Matches tags in either upper or lower case, attributes with namespaces and with quoted or unquoted values. Example:: text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>' # makeHTMLTags returns pyparsing expressions for the opening and # closing tags as a 2-tuple a,a_end = makeHTMLTags("A") link_expr = a + SkipTo(a_end)("link_text") + a_end for link in link_expr.searchString(text): # attributes in the <A> tag (like "href" shown here) are # also accessible as named results print(link.link_text, '->', link.href) prints:: pyparsing -> https://github.com/pyparsing/pyparsing/wiki """ return _makeTags( tagStr, False ) def makeXMLTags(tagStr): """Helper to construct opening and closing tag expressions for XML, given a tag name. Matches tags only in the given upper/lower case. Example: similar to :class:`makeHTMLTags` """ return _makeTags( tagStr, True ) def withAttribute(*args,**attrDict): """Helper to create a validating parse action to be used with start tags created with :class:`makeXMLTags` or :class:`makeHTMLTags`. Use ``withAttribute`` to qualify a starting tag with a required attribute value, to avoid false matches on common tags such as ``<TD>`` or ``<DIV>``. Call ``withAttribute`` with a series of attribute names and values. Specify the list of filter attributes names and values as: - keyword arguments, as in ``(align="right")``, or - as an explicit dict with ``**`` operator, when an attribute name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}`` - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align","right"))`` For attribute names with a namespace prefix, you must use the second form. Attribute names are matched insensitive to upper/lower case. 
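    For instance (a minimal hypothetical sketch - the tuple form is required
    for namespaced attribute names)::

        match_ns_attrs = withAttribute(("ns1:class", "Customer"), ("ns2:align", "right"))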
If just testing for ``class`` (with or without a namespace), use :class:`withClass`. To verify that the attribute exists, but without specifying a value, pass ``withAttribute.ANY_VALUE`` as the value. Example:: html = ''' <div> Some text <div type="grid">1 4 0 1 0</div> <div type="graph">1,3 2,3 1,1</div> <div>this has no type</div> </div> ''' div,div_end = makeHTMLTags("div") # only match div tag having a type attribute with value "grid" div_grid = div().setParseAction(withAttribute(type="grid")) grid_expr = div_grid + SkipTo(div | div_end)("body") for grid_header in grid_expr.searchString(html): print(grid_header.body) # construct a match with any div tag having a type attribute, regardless of the value div_any_type = div().setParseAction(withAttribute(type=withAttribute.ANY_VALUE)) div_expr = div_any_type + SkipTo(div | div_end)("body") for div_header in div_expr.searchString(html): print(div_header.body) prints:: 1 4 0 1 0 1 4 0 1 0 1,3 2,3 1,1 """ if args: attrs = args[:] else: attrs = attrDict.items() attrs = [(k,v) for k,v in attrs] def pa(s,l,tokens): for attrName,attrValue in attrs: if attrName not in tokens: raise ParseException(s,l,"no matching attribute " + attrName) if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue: raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" % (attrName, tokens[attrName], attrValue)) return pa withAttribute.ANY_VALUE = object() def withClass(classname, namespace=''): """Simplified version of :class:`withAttribute` when matching on a div class - made difficult because ``class`` is a reserved word in Python. Example:: html = ''' <div> Some text <div class="grid">1 4 0 1 0</div> <div class="graph">1,3 2,3 1,1</div> <div>this <div> has no class</div> </div> ''' div,div_end = makeHTMLTags("div") div_grid = div().setParseAction(withClass("grid")) grid_expr = div_grid + SkipTo(div | div_end)("body") for grid_header in grid_expr.searchString(html): print(grid_header.body) div_any_type = div().setParseAction(withClass(withAttribute.ANY_VALUE)) div_expr = div_any_type + SkipTo(div | div_end)("body") for div_header in div_expr.searchString(html): print(div_header.body) prints:: 1 4 0 1 0 1 4 0 1 0 1,3 2,3 1,1 """ classattr = "%s:class" % namespace if namespace else "class" return withAttribute(**{classattr : classname}) opAssoc = SimpleNamespace() opAssoc.LEFT = object() opAssoc.RIGHT = object() def infixNotation( baseExpr, opList, lpar=Suppress('('), rpar=Suppress(')') ): """Helper method for constructing grammars of expressions made up of operators working in a precedence hierarchy. Operators may be unary or binary, left- or right-associative. Parse actions can also be attached to operator expressions. The generated parser will also recognize the use of parentheses to override operator precedences (see example below). Note: if you define a deep operator list, you may see performance issues when using infixNotation. See :class:`ParserElement.enablePackrat` for a mechanism to potentially improve your parser performance. 
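    (An illustrative one-liner - packrat memoization can be switched on before
    building a deep operator grammar)::

        ParserElement.enablePackrat()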
Parameters: - baseExpr - expression representing the most basic element for the nested - opList - list of tuples, one for each operator precedence level in the expression grammar; each tuple is of the form ``(opExpr, numTerms, rightLeftAssoc, parseAction)``, where: - opExpr is the pyparsing expression for the operator; may also be a string, which will be converted to a Literal; if numTerms is 3, opExpr is a tuple of two expressions, for the two operators separating the 3 terms - numTerms is the number of terms for this operator (must be 1, 2, or 3) - rightLeftAssoc is the indicator whether the operator is right or left associative, using the pyparsing-defined constants ``opAssoc.RIGHT`` and ``opAssoc.LEFT``. - parseAction is the parse action to be associated with expressions matching this operator expression (the parse action tuple member may be omitted); if the parse action is passed a tuple or list of functions, this is equivalent to calling ``setParseAction(*fn)`` (:class:`ParserElement.setParseAction`) - lpar - expression for matching left-parentheses (default= ``Suppress('(')``) - rpar - expression for matching right-parentheses (default= ``Suppress(')')``) Example:: # simple example of four-function arithmetic with ints and # variable names integer = pyparsing_common.signed_integer varname = pyparsing_common.identifier arith_expr = infixNotation(integer | varname, [ ('-', 1, opAssoc.RIGHT), (oneOf('* /'), 2, opAssoc.LEFT), (oneOf('+ -'), 2, opAssoc.LEFT), ]) arith_expr.runTests(''' 5+3*6 (5+3)*6 -2--11 ''', fullDump=False) prints:: 5+3*6 [[5, '+', [3, '*', 6]]] (5+3)*6 [[[5, '+', 3], '*', 6]] -2--11 [[['-', 2], '-', ['-', 11]]] """ # captive version of FollowedBy that does not do parse actions or capture results names class _FB(FollowedBy): def parseImpl(self, instring, loc, doActions=True): self.expr.tryParse(instring, loc) return loc, [] ret = Forward() lastExpr = baseExpr | ( lpar + ret + rpar ) for i,operDef in enumerate(opList): opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4] termName = "%s term" % opExpr if arity < 3 else "%s%s term" % opExpr if arity == 3: if opExpr is None or len(opExpr) != 2: raise ValueError( "if numterms=3, opExpr must be a tuple or list of two expressions") opExpr1, opExpr2 = opExpr thisExpr = Forward().setName(termName) if rightLeftAssoc == opAssoc.LEFT: if arity == 1: matchExpr = _FB(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) ) elif arity == 2: if opExpr is not None: matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) ) else: matchExpr = _FB(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) ) elif arity == 3: matchExpr = _FB(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \ Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr ) else: raise ValueError("operator must be unary (1), binary (2), or ternary (3)") elif rightLeftAssoc == opAssoc.RIGHT: if arity == 1: # try to avoid LR with this extra test if not isinstance(opExpr, Optional): opExpr = Optional(opExpr) matchExpr = _FB(opExpr.expr + thisExpr) + Group( opExpr + thisExpr ) elif arity == 2: if opExpr is not None: matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) ) else: matchExpr = _FB(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) ) elif arity == 3: matchExpr = _FB(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \ Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr ) else: raise ValueError("operator must be unary (1), binary (2), or 
ternary (3)") else: raise ValueError("operator must indicate right or left associativity") if pa: if isinstance(pa, (tuple, list)): matchExpr.setParseAction(*pa) else: matchExpr.setParseAction(pa) thisExpr <<= ( matchExpr.setName(termName) | lastExpr ) lastExpr = thisExpr ret <<= lastExpr return ret operatorPrecedence = infixNotation """(Deprecated) Former name of :class:`infixNotation`, will be dropped in a future release.""" dblQuotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"').setName("string enclosed in double quotes") sglQuotedString = Combine(Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("string enclosed in single quotes") quotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"'| Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("quotedString using single or double quotes") unicodeString = Combine(_L('u') + quotedString.copy()).setName("unicode string literal") def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString.copy()): """Helper method for defining nested lists enclosed in opening and closing delimiters ("(" and ")" are the default). Parameters: - opener - opening character for a nested list (default= ``"("``); can also be a pyparsing expression - closer - closing character for a nested list (default= ``")"``); can also be a pyparsing expression - content - expression for items within the nested lists (default= ``None``) - ignoreExpr - expression for ignoring opening and closing delimiters (default= :class:`quotedString`) If an expression is not provided for the content argument, the nested expression will capture all whitespace-delimited content between delimiters as a list of separate values. Use the ``ignoreExpr`` argument to define expressions that may contain opening or closing characters that should not be treated as opening or closing characters for nesting, such as quotedString or a comment expression. Specify multiple expressions using an :class:`Or` or :class:`MatchFirst`. The default is :class:`quotedString`, but if no expressions are to be ignored, then pass ``None`` for this argument. 
Example:: data_type = oneOf("void int short long char float double") decl_data_type = Combine(data_type + Optional(Word('*'))) ident = Word(alphas+'_', alphanums+'_') number = pyparsing_common.number arg = Group(decl_data_type + ident) LPAR,RPAR = map(Suppress, "()") code_body = nestedExpr('{', '}', ignoreExpr=(quotedString | cStyleComment)) c_function = (decl_data_type("type") + ident("name") + LPAR + Optional(delimitedList(arg), [])("args") + RPAR + code_body("body")) c_function.ignore(cStyleComment) source_code = ''' int is_odd(int x) { return (x%2); } int dec_to_hex(char hchar) { if (hchar >= '0' && hchar <= '9') { return (ord(hchar)-ord('0')); } else { return (10+ord(hchar)-ord('A')); } } ''' for func in c_function.searchString(source_code): print("%(name)s (%(type)s) args: %(args)s" % func) prints:: is_odd (int) args: [['int', 'x']] dec_to_hex (int) args: [['char', 'hchar']] """ if opener == closer: raise ValueError("opening and closing strings cannot be the same") if content is None: if isinstance(opener,basestring) and isinstance(closer,basestring): if len(opener) == 1 and len(closer)==1: if ignoreExpr is not None: content = (Combine(OneOrMore(~ignoreExpr + CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: content = (empty.copy()+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS ).setParseAction(lambda t:t[0].strip())) else: if ignoreExpr is not None: content = (Combine(OneOrMore(~ignoreExpr + ~Literal(opener) + ~Literal(closer) + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: raise ValueError("opening and closing arguments must be strings if no content expression is given") ret = Forward() if ignoreExpr is not None: ret <<= Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) ) else: ret <<= Group( Suppress(opener) + ZeroOrMore( ret | content ) + Suppress(closer) ) ret.setName('nested %s%s expression' % (opener,closer)) return ret def indentedBlock(blockStatementExpr, indentStack, indent=True): """Helper method for defining space-delimited indentation blocks, such as those used to define block statements in Python source code. Parameters: - blockStatementExpr - expression defining syntax of statement that is repeated within the indented block - indentStack - list created by caller to manage indentation stack (multiple statementWithIndentedBlock expressions within a single grammar should share a common indentStack) - indent - boolean indicating whether block must be indented beyond the the current level; set to False for block of left-most statements (default= ``True``) A valid block must contain at least one ``blockStatement``. 
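    (An illustrative sketch with a hypothetical ``stmt`` expression - multiple
    block types within one grammar should share a single indentation stack)::

        indent_stack = [1]
        if_body  = indentedBlock(stmt, indent_stack)
        def_body = indentedBlock(stmt, indent_stack)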
Example:: data = ''' def A(z): A1 B = 100 G = A2 A2 A3 B def BB(a,b,c): BB1 def BBA(): bba1 bba2 bba3 C D def spam(x,y): def eggs(z): pass ''' indentStack = [1] stmt = Forward() identifier = Word(alphas, alphanums) funcDecl = ("def" + identifier + Group( "(" + Optional( delimitedList(identifier) ) + ")" ) + ":") func_body = indentedBlock(stmt, indentStack) funcDef = Group( funcDecl + func_body ) rvalue = Forward() funcCall = Group(identifier + "(" + Optional(delimitedList(rvalue)) + ")") rvalue << (funcCall | identifier | Word(nums)) assignment = Group(identifier + "=" + rvalue) stmt << ( funcDef | assignment | identifier ) module_body = OneOrMore(stmt) parseTree = module_body.parseString(data) parseTree.pprint() prints:: [['def', 'A', ['(', 'z', ')'], ':', [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], 'B', ['def', 'BB', ['(', 'a', 'b', 'c', ')'], ':', [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], 'C', 'D', ['def', 'spam', ['(', 'x', 'y', ')'], ':', [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] """ backup_stack = indentStack[:] def reset_stack(): indentStack[:] = backup_stack def checkPeerIndent(s,l,t): if l >= len(s): return curCol = col(l,s) if curCol != indentStack[-1]: if curCol > indentStack[-1]: raise ParseException(s,l,"illegal nesting") raise ParseException(s,l,"not a peer entry") def checkSubIndent(s,l,t): curCol = col(l,s) if curCol > indentStack[-1]: indentStack.append( curCol ) else: raise ParseException(s,l,"not a subentry") def checkUnindent(s,l,t): if l >= len(s): return curCol = col(l,s) if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]): raise ParseException(s,l,"not an unindent") indentStack.pop() NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress()) INDENT = (Empty() + Empty().setParseAction(checkSubIndent)).setName('INDENT') PEER = Empty().setParseAction(checkPeerIndent).setName('') UNDENT = Empty().setParseAction(checkUnindent).setName('UNINDENT') if indent: smExpr = Group( Optional(NL) + #~ FollowedBy(blockStatementExpr) + INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT) else: smExpr = Group( Optional(NL) + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) ) smExpr.setFailAction(lambda a, b, c, d: reset_stack()) blockStatementExpr.ignore(_bslash + LineEnd()) return smExpr.setName('indented block') alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]") punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]") anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:").setName('any tag')) _htmlEntityMap = dict(zip("gt lt amp nbsp quot apos".split(),'><& "\'')) commonHTMLEntity = Regex('&(?P<entity>' + '|'.join(_htmlEntityMap.keys()) +");").setName("common HTML entity") def replaceHTMLEntity(t): """Helper parser action to replace common HTML entities with their special characters""" return _htmlEntityMap.get(t.entity) # it's easy to get these comment structures wrong - they're very common, so may as well make them available cStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/').setName("C style comment") "Comment of the form ``/* ... */``" htmlComment = Regex(r"<!--[\s\S]*?-->").setName("HTML comment") "Comment of the form ``<!-- ... -->``" restOfLine = Regex(r".*").leaveWhitespace().setName("rest of line") dblSlashComment = Regex(r"//(?:\\\n|[^\n])*").setName("// comment") "Comment of the form ``// ... 
(to end of line)``" cppStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/'| dblSlashComment).setName("C++ style comment") "Comment of either form :class:`cStyleComment` or :class:`dblSlashComment`" javaStyleComment = cppStyleComment "Same as :class:`cppStyleComment`" pythonStyleComment = Regex(r"#.*").setName("Python style comment") "Comment of the form ``# ... (to end of line)``" _commasepitem = Combine(OneOrMore(Word(printables, excludeChars=',') + Optional( Word(" \t") + ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem") commaSeparatedList = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("commaSeparatedList") """(Deprecated) Predefined expression of 1 or more printable words or quoted strings, separated by commas. This expression is deprecated in favor of :class:`pyparsing_common.comma_separated_list`. """ # some other useful expressions - using lower-case class name since we are really using this as a namespace class pyparsing_common: """Here are some common low-level expressions that may be useful in jump-starting parser development: - numeric forms (:class:`integers<integer>`, :class:`reals<real>`, :class:`scientific notation<sci_real>`) - common :class:`programming identifiers<identifier>` - network addresses (:class:`MAC<mac_address>`, :class:`IPv4<ipv4_address>`, :class:`IPv6<ipv6_address>`) - ISO8601 :class:`dates<iso8601_date>` and :class:`datetime<iso8601_datetime>` - :class:`UUID<uuid>` - :class:`comma-separated list<comma_separated_list>` Parse actions: - :class:`convertToInteger` - :class:`convertToFloat` - :class:`convertToDate` - :class:`convertToDatetime` - :class:`stripHTMLTags` - :class:`upcaseTokens` - :class:`downcaseTokens` Example:: pyparsing_common.number.runTests(''' # any int or real number, returned as the appropriate type 100 -100 +100 3.14159 6.02e23 1e-12 ''') pyparsing_common.fnumber.runTests(''' # any int or real number, returned as float 100 -100 +100 3.14159 6.02e23 1e-12 ''') pyparsing_common.hex_integer.runTests(''' # hex numbers 100 FF ''') pyparsing_common.fraction.runTests(''' # fractions 1/2 -3/4 ''') pyparsing_common.mixed_integer.runTests(''' # mixed fractions 1 1/2 -3/4 1-3/4 ''') import uuid pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) pyparsing_common.uuid.runTests(''' # uuid 12345678-1234-5678-1234-567812345678 ''') prints:: # any int or real number, returned as the appropriate type 100 [100] -100 [-100] +100 [100] 3.14159 [3.14159] 6.02e23 [6.02e+23] 1e-12 [1e-12] # any int or real number, returned as float 100 [100.0] -100 [-100.0] +100 [100.0] 3.14159 [3.14159] 6.02e23 [6.02e+23] 1e-12 [1e-12] # hex numbers 100 [256] FF [255] # fractions 1/2 [0.5] -3/4 [-0.75] # mixed fractions 1 [1] 1/2 [0.5] -3/4 [-0.75] 1-3/4 [1.75] # uuid 12345678-1234-5678-1234-567812345678 [UUID('12345678-1234-5678-1234-567812345678')] """ convertToInteger = tokenMap(int) """ Parse action for converting parsed integers to Python int """ convertToFloat = tokenMap(float) """ Parse action for converting parsed numbers to Python float """ integer = Word(nums).setName("integer").setParseAction(convertToInteger) """expression that parses an unsigned integer, returns an int""" hex_integer = Word(hexnums).setName("hex integer").setParseAction(tokenMap(int,16)) """expression that parses a hexadecimal integer, returns an int""" signed_integer = Regex(r'[+-]?\d+').setName("signed integer").setParseAction(convertToInteger) """expression that parses an integer with optional leading sign, returns an int""" 
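    # An illustrative sketch (added examples, not from the original source):
    # each of these expressions both parses and converts its text, e.g.
    #     pyparsing_common.integer.parseString("42")      # -> [42]
    #     pyparsing_common.hex_integer.parseString("1F")  # -> [31]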
fraction = (signed_integer().setParseAction(convertToFloat) + '/' + signed_integer().setParseAction(convertToFloat)).setName("fraction") """fractional expression of an integer divided by an integer, returns a float""" fraction.addParseAction(lambda t: t[0]/t[-1]) mixed_integer = (fraction | signed_integer + Optional(Optional('-').suppress() + fraction)).setName("fraction or mixed integer-fraction") """mixed integer of the form 'integer - fraction', with optional leading integer, returns float""" mixed_integer.addParseAction(sum) real = Regex(r'[+-]?\d+\.\d*').setName("real number").setParseAction(convertToFloat) """expression that parses a floating point number and returns a float""" sci_real = Regex(r'[+-]?\d+([eE][+-]?\d+|\.\d*([eE][+-]?\d+)?)').setName("real number with scientific notation").setParseAction(convertToFloat) """expression that parses a floating point number with optional scientific notation and returns a float""" # streamlining this expression makes the docs nicer-looking number = (sci_real | real | signed_integer).streamline() """any numeric expression, returns the corresponding Python type""" fnumber = Regex(r'[+-]?\d+\.?\d*([eE][+-]?\d+)?').setName("fnumber").setParseAction(convertToFloat) """any int or real number, returned as float""" identifier = Word(alphas+'_', alphanums+'_').setName("identifier") """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')""" ipv4_address = Regex(r'(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}').setName("IPv4 address") "IPv4 address (``0.0.0.0 - 255.255.255.255``)" _ipv6_part = Regex(r'[0-9a-fA-F]{1,4}').setName("hex_integer") _full_ipv6_address = (_ipv6_part + (':' + _ipv6_part)*7).setName("full IPv6 address") _short_ipv6_address = (Optional(_ipv6_part + (':' + _ipv6_part)*(0,6)) + "::" + Optional(_ipv6_part + (':' + _ipv6_part)*(0,6))).setName("short IPv6 address") _short_ipv6_address.addCondition(lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8) _mixed_ipv6_address = ("::ffff:" + ipv4_address).setName("mixed IPv6 address") ipv6_address = Combine((_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).setName("IPv6 address")).setName("IPv6 address") "IPv6 address (long, short, or mixed form)" mac_address = Regex(r'[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}').setName("MAC address") "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' 
delimiters)" @staticmethod def convertToDate(fmt="%Y-%m-%d"): """ Helper to create a parse action for converting parsed date string to Python datetime.date Params - - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%d"``) Example:: date_expr = pyparsing_common.iso8601_date.copy() date_expr.setParseAction(pyparsing_common.convertToDate()) print(date_expr.parseString("1999-12-31")) prints:: [datetime.date(1999, 12, 31)] """ def cvt_fn(s,l,t): try: return datetime.strptime(t[0], fmt).date() except ValueError as ve: raise ParseException(s, l, str(ve)) return cvt_fn @staticmethod def convertToDatetime(fmt="%Y-%m-%dT%H:%M:%S.%f"): """Helper to create a parse action for converting parsed datetime string to Python datetime.datetime Params - - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%dT%H:%M:%S.%f"``) Example:: dt_expr = pyparsing_common.iso8601_datetime.copy() dt_expr.setParseAction(pyparsing_common.convertToDatetime()) print(dt_expr.parseString("1999-12-31T23:59:59.999")) prints:: [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)] """ def cvt_fn(s,l,t): try: return datetime.strptime(t[0], fmt) except ValueError as ve: raise ParseException(s, l, str(ve)) return cvt_fn iso8601_date = Regex(r'(?P<year>\d{4})(?:-(?P<month>\d\d)(?:-(?P<day>\d\d))?)?').setName("ISO8601 date") "ISO8601 date (``yyyy-mm-dd``)" iso8601_datetime = Regex(r'(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d)[T ](?P<hour>\d\d):(?P<minute>\d\d)(:(?P<second>\d\d(\.\d*)?)?)?(?P<tz>Z|[+-]\d\d:?\d\d)?').setName("ISO8601 datetime") "ISO8601 datetime (``yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)``) - trailing seconds, milliseconds, and timezone optional; accepts separating ``'T'`` or ``' '``" uuid = Regex(r'[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}').setName("UUID") "UUID (``xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx``)" _html_stripper = anyOpenTag.suppress() | anyCloseTag.suppress() @staticmethod def stripHTMLTags(s, l, tokens): """Parse action to remove HTML tags from web page HTML source Example:: # strip HTML links from normal text text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>' td,td_end = makeHTMLTags("TD") table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end print(table_text.parseString(text).body) Prints:: More info at the pyparsing wiki page """ return pyparsing_common._html_stripper.transformString(tokens[0]) _commasepitem = Combine(OneOrMore(~Literal(",") + ~LineEnd() + Word(printables, excludeChars=',') + Optional( White(" \t") ) ) ).streamline().setName("commaItem") comma_separated_list = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("comma separated list") """Predefined expression of 1 or more printable words or quoted strings, separated by commas.""" upcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).upper())) """Parse action to convert tokens to upper case.""" downcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).lower())) """Parse action to convert tokens to lower case.""" class _lazyclassproperty(object): def __init__(self, fn): self.fn = fn self.__doc__ = fn.__doc__ self.__name__ = fn.__name__ def __get__(self, obj, cls): if cls is None: cls = type(obj) if not hasattr(cls, '_intern') or any(cls._intern is getattr(superclass, '_intern', []) for superclass in cls.__mro__[1:]): cls._intern = {} attrname = self.fn.__name__ if attrname not in cls._intern: cls._intern[attrname] = self.fn(cls) return cls._intern[attrname] class 
unicode_set(object): """ A set of Unicode characters, for language-specific strings for ``alphas``, ``nums``, ``alphanums``, and ``printables``. A unicode_set is defined by a list of ranges in the Unicode character set, in a class attribute ``_ranges``, such as:: _ranges = [(0x0020, 0x007e), (0x00a0, 0x00ff),] A unicode set can also be defined using multiple inheritance of other unicode sets:: class CJK(Chinese, Japanese, Korean): pass """ _ranges = [] @classmethod def _get_chars_for_ranges(cls): ret = [] for cc in cls.__mro__: if cc is unicode_set: break for rr in cc._ranges: ret.extend(range(rr[0], rr[-1]+1)) return [unichr(c) for c in sorted(set(ret))] @_lazyclassproperty def printables(cls): "all non-whitespace characters in this range" return u''.join(filterfalse(unicode.isspace, cls._get_chars_for_ranges())) @_lazyclassproperty def alphas(cls): "all alphabetic characters in this range" return u''.join(filter(unicode.isalpha, cls._get_chars_for_ranges())) @_lazyclassproperty def nums(cls): "all numeric digit characters in this range" return u''.join(filter(unicode.isdigit, cls._get_chars_for_ranges())) @_lazyclassproperty def alphanums(cls): "all alphanumeric characters in this range" return cls.alphas + cls.nums class pyparsing_unicode(unicode_set): """ A namespace class for defining common language unicode_sets. """ _ranges = [(32, sys.maxunicode)] class Latin1(unicode_set): "Unicode set for Latin-1 Unicode Character Range" _ranges = [(0x0020, 0x007e), (0x00a0, 0x00ff),] class LatinA(unicode_set): "Unicode set for Latin-A Unicode Character Range" _ranges = [(0x0100, 0x017f),] class LatinB(unicode_set): "Unicode set for Latin-B Unicode Character Range" _ranges = [(0x0180, 0x024f),] class Greek(unicode_set): "Unicode set for Greek Unicode Character Ranges" _ranges = [ (0x0370, 0x03ff), (0x1f00, 0x1f15), (0x1f18, 0x1f1d), (0x1f20, 0x1f45), (0x1f48, 0x1f4d), (0x1f50, 0x1f57), (0x1f59,), (0x1f5b,), (0x1f5d,), (0x1f5f, 0x1f7d), (0x1f80, 0x1fb4), (0x1fb6, 0x1fc4), (0x1fc6, 0x1fd3), (0x1fd6, 0x1fdb), (0x1fdd, 0x1fef), (0x1ff2, 0x1ff4), (0x1ff6, 0x1ffe), ] class Cyrillic(unicode_set): "Unicode set for Cyrillic Unicode Character Range" _ranges = [(0x0400, 0x04ff)] class Chinese(unicode_set): "Unicode set for Chinese Unicode Character Range" _ranges = [(0x4e00, 0x9fff), (0x3000, 0x303f), ] class Japanese(unicode_set): "Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges" _ranges = [ ] class Kanji(unicode_set): "Unicode set for Kanji Unicode Character Range" _ranges = [(0x4E00, 0x9Fbf), (0x3000, 0x303f), ] class Hiragana(unicode_set): "Unicode set for Hiragana Unicode Character Range" _ranges = [(0x3040, 0x309f), ] class Katakana(unicode_set): "Unicode set for Katakana Unicode Character Range" _ranges = [(0x30a0, 0x30ff), ] class Korean(unicode_set): "Unicode set for Korean Unicode Character Range" _ranges = [(0xac00, 0xd7af), (0x1100, 0x11ff), (0x3130, 0x318f), (0xa960, 0xa97f), (0xd7b0, 0xd7ff), (0x3000, 0x303f), ] class CJK(Chinese, Japanese, Korean): "Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range" pass class Thai(unicode_set): "Unicode set for Thai Unicode Character Range" _ranges = [(0x0e01, 0x0e3a), (0x0e3f, 0x0e5b), ] class Arabic(unicode_set): "Unicode set for Arabic Unicode Character Range" _ranges = [(0x0600, 0x061b), (0x061e, 0x06ff), (0x0700, 0x077f), ] class Hebrew(unicode_set): "Unicode set for Hebrew Unicode Character Range" _ranges = [(0x0590, 0x05ff), ] class Devanagari(unicode_set): "Unicode 
set for Devanagari Unicode Character Range"
        _ranges = [(0x0900, 0x097f), (0xa8e0, 0xa8ff)]

pyparsing_unicode.Japanese._ranges = (pyparsing_unicode.Japanese.Kanji._ranges
                                      + pyparsing_unicode.Japanese.Hiragana._ranges
                                      + pyparsing_unicode.Japanese.Katakana._ranges)

# define ranges in language character sets
if PY_3:
    setattr(pyparsing_unicode, "العربية", pyparsing_unicode.Arabic)
    setattr(pyparsing_unicode, "中文", pyparsing_unicode.Chinese)
    setattr(pyparsing_unicode, "кириллица", pyparsing_unicode.Cyrillic)
    setattr(pyparsing_unicode, "Ελληνικά", pyparsing_unicode.Greek)
    setattr(pyparsing_unicode, "עִברִית", pyparsing_unicode.Hebrew)
    setattr(pyparsing_unicode, "日本語", pyparsing_unicode.Japanese)
    setattr(pyparsing_unicode.Japanese, "漢字", pyparsing_unicode.Japanese.Kanji)
    setattr(pyparsing_unicode.Japanese, "カタカナ", pyparsing_unicode.Japanese.Katakana)
    setattr(pyparsing_unicode.Japanese, "ひらがな", pyparsing_unicode.Japanese.Hiragana)
    setattr(pyparsing_unicode, "한국어", pyparsing_unicode.Korean)
    setattr(pyparsing_unicode, "ไทย", pyparsing_unicode.Thai)
    setattr(pyparsing_unicode, "देवनागरी", pyparsing_unicode.Devanagari)


if __name__ == "__main__":

    selectToken = CaselessLiteral("select")
    fromToken = CaselessLiteral("from")

    ident = Word(alphas, alphanums + "_$")

    columnName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens)
    columnNameList = Group(delimitedList(columnName)).setName("columns")
    columnSpec = ('*' | columnNameList)

    tableName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens)
    tableNameList = Group(delimitedList(tableName)).setName("tables")

    simpleSQL = selectToken("command") + columnSpec("columns") + fromToken + tableNameList("tables")

    # demo runTests method, including embedded comments in test string
    simpleSQL.runTests("""
        # '*' as column list and dotted table name
        select * from SYS.XYZZY

        # caseless match on "SELECT", and casts back to "select"
        SELECT * from XYZZY, ABC

        # list of column names, and mixed case SELECT keyword
        Select AA,BB,CC from Sys.dual

        # multiple tables
        Select A, B, C from Sys.dual, Table2

        # invalid SELECT keyword - should fail
        Xelect A, B, C from Sys.dual

        # incomplete command - should fail
        Select

        # invalid column name - should fail
        Select ^^^ frox Sys.dual
        """)

    pyparsing_common.number.runTests("""
        100
        -100
        +100
        3.14159
        6.02e23
        1e-12
        """)

    # any int or real number, returned as float
    pyparsing_common.fnumber.runTests("""
        100
        -100
        +100
        3.14159
        6.02e23
        1e-12
        """)

    pyparsing_common.hex_integer.runTests("""
        100
        FF
        """)

    import uuid
    pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID))
    pyparsing_common.uuid.runTests("""
        12345678-1234-5678-1234-567812345678
        """)
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/__init__.py ----

from .core import TomlError
from .parser import load, loads
from .test import translate_to_test
from .writer import dump, dumps
x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/core.py���������������������������0000644�0001750�0001750�00000000775�00000000000�027324� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������class TomlError(RuntimeError): def __init__(self, message, line, col, filename): RuntimeError.__init__(self, message, line, col, filename) self.message = message self.line = line self.col = col self.filename = filename def __str__(self): return '{}({}, {}): {}'.format(self.filename, self.line, self.col, self.message) def __repr__(self): return 'TomlError({!r}, {!r}, {!r}, {!r})'.format(self.message, self.line, self.col, self.filename) ���././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/parser.py�������������������������0000644�0001750�0001750�00000024126�00000000000�027664� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������import string, re, sys, datetime from .core import TomlError from .utils import rfc3339_re, parse_rfc3339_re if sys.version_info[0] == 2: _chr = unichr else: _chr = chr def load(fin, 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/parser.py

import string, re, sys, datetime

from .core import TomlError
from .utils import rfc3339_re, parse_rfc3339_re

if sys.version_info[0] == 2:
    _chr = unichr
else:
    _chr = chr


def load(fin, translate=lambda t, x, v: v, object_pairs_hook=dict):
    return loads(fin.read(), translate=translate, object_pairs_hook=object_pairs_hook,
                 filename=getattr(fin, 'name', repr(fin)))


def loads(s, filename='<string>', translate=lambda t, x, v: v, object_pairs_hook=dict):
    if isinstance(s, bytes):
        s = s.decode('utf-8')

    s = s.replace('\r\n', '\n')

    root = object_pairs_hook()
    tables = object_pairs_hook()
    scope = root

    src = _Source(s, filename=filename)
    ast = _p_toml(src, object_pairs_hook=object_pairs_hook)

    def error(msg):
        raise TomlError(msg, pos[0], pos[1], filename)

    def process_value(v, object_pairs_hook):
        kind, text, value, pos = v
        if kind == 'str' and value.startswith('\n'):
            value = value[1:]
        if kind == 'array':
            if value and any(k != value[0][0] for k, t, v, p in value[1:]):
                error('array-type-mismatch')
            value = [process_value(item, object_pairs_hook=object_pairs_hook) for item in value]
        elif kind == 'table':
            value = object_pairs_hook([(k, process_value(value[k], object_pairs_hook=object_pairs_hook)) for k in value])
        return translate(kind, text, value)

    for kind, value, pos in ast:
        if kind == 'kv':
            k, v = value
            if k in scope:
                error('duplicate_keys. Key "{0}" was used more than once.'.format(k))
            scope[k] = process_value(v, object_pairs_hook=object_pairs_hook)
        else:
            is_table_array = (kind == 'table_array')
            cur = tables
            for name in value[:-1]:
                if isinstance(cur.get(name), list):
                    d, cur = cur[name][-1]
                else:
                    d, cur = cur.setdefault(name, (None, object_pairs_hook()))

            scope = object_pairs_hook()
            name = value[-1]
            if name not in cur:
                if is_table_array:
                    cur[name] = [(scope, object_pairs_hook())]
                else:
                    cur[name] = (scope, object_pairs_hook())
            elif isinstance(cur[name], list):
                if not is_table_array:
                    error('table_type_mismatch')
                cur[name].append((scope, object_pairs_hook()))
            else:
                if is_table_array:
                    error('table_type_mismatch')
                old_scope, next_table = cur[name]
                if old_scope is not None:
                    error('duplicate_tables')
                cur[name] = (scope, next_table)

    def merge_tables(scope, tables):
        if scope is None:
            scope = object_pairs_hook()
        for k in tables:
            if k in scope:
                error('key_table_conflict')
            v = tables[k]
            if isinstance(v, list):
                scope[k] = [merge_tables(sc, tbl) for sc, tbl in v]
            else:
                scope[k] = merge_tables(v[0], v[1])

        return scope

    return merge_tables(root, tables)
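Both load and loads thread object_pairs_hook through every table they build, so callers can choose the mapping type; the parser internals follow below. A sketch that preserves key order on Python versions whose plain dict does not (hypothetical two-key document):

from collections import OrderedDict
from pip._vendor import pytoml

doc = pytoml.loads('b = 1\na = 2', object_pairs_hook=OrderedDict)
assert isinstance(doc, OrderedDict)
assert list(doc) == ['b', 'a']  # source order is preserved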
class _Source:
    def __init__(self, s, filename=None):
        self.s = s
        self._pos = (1, 1)
        self._last = None
        self._filename = filename
        self.backtrack_stack = []

    def last(self):
        return self._last

    def pos(self):
        return self._pos

    def fail(self):
        return self._expect(None)

    def consume_dot(self):
        if self.s:
            self._last = self.s[0]
            self.s = self.s[1:]
            self._advance(self._last)
            return self._last
        return None

    def expect_dot(self):
        return self._expect(self.consume_dot())

    def consume_eof(self):
        if not self.s:
            self._last = ''
            return True
        return False

    def expect_eof(self):
        return self._expect(self.consume_eof())

    def consume(self, s):
        if self.s.startswith(s):
            self.s = self.s[len(s):]
            self._last = s
            self._advance(s)
            return True
        return False

    def expect(self, s):
        return self._expect(self.consume(s))

    def consume_re(self, re):
        m = re.match(self.s)
        if m:
            self.s = self.s[len(m.group(0)):]
            self._last = m
            self._advance(m.group(0))
            return m
        return None

    def expect_re(self, re):
        return self._expect(self.consume_re(re))

    def __enter__(self):
        self.backtrack_stack.append((self.s, self._pos))

    def __exit__(self, type, value, traceback):
        if type is None:
            self.backtrack_stack.pop()
        else:
            self.s, self._pos = self.backtrack_stack.pop()
        return type == TomlError

    def commit(self):
        self.backtrack_stack[-1] = (self.s, self._pos)

    def _expect(self, r):
        if not r:
            raise TomlError('msg', self._pos[0], self._pos[1], self._filename)
        return r

    def _advance(self, s):
        suffix_pos = s.rfind('\n')
        if suffix_pos == -1:
            self._pos = (self._pos[0], self._pos[1] + len(s))
        else:
            self._pos = (self._pos[0] + s.count('\n'), len(s) - suffix_pos)


_ews_re = re.compile(r'(?:[ \t]|#[^\n]*\n|#[^\n]*\Z|\n)*')
def _p_ews(s):
    s.expect_re(_ews_re)

_ws_re = re.compile(r'[ \t]*')
def _p_ws(s):
    s.expect_re(_ws_re)

_escapes = {'b': '\b', 'n': '\n', 'r': '\r', 't': '\t', '"': '"', '\\': '\\', 'f': '\f'}

_basicstr_re = re.compile(r'[^"\\\000-\037]*')
_short_uni_re = re.compile(r'u([0-9a-fA-F]{4})')
_long_uni_re = re.compile(r'U([0-9a-fA-F]{8})')
_escapes_re = re.compile(r'[btnfr\"\\]')
_newline_esc_re = re.compile('\n[ \t\n]*')

def _p_basicstr_content(s, content=_basicstr_re):
    res = []
    while True:
        res.append(s.expect_re(content).group(0))
        if not s.consume('\\'):
            break
        if s.consume_re(_newline_esc_re):
            pass
        elif s.consume_re(_short_uni_re) or s.consume_re(_long_uni_re):
            v = int(s.last().group(1), 16)
            if 0xd800 <= v < 0xe000:
                s.fail()
            res.append(_chr(v))
        else:
            s.expect_re(_escapes_re)
            res.append(_escapes[s.last().group(0)])
    return ''.join(res)

_key_re = re.compile(r'[0-9a-zA-Z-_]+')
def _p_key(s):
    with s:
        s.expect('"')
        r = _p_basicstr_content(s, _basicstr_re)
        s.expect('"')
        return r
    if s.consume('\''):
        if s.consume('\'\''):
            r = s.expect_re(_litstr_ml_re).group(0)
            s.expect('\'\'\'')
        else:
            r = s.expect_re(_litstr_re).group(0)
            s.expect('\'')
        return r
    return s.expect_re(_key_re).group(0)

_float_re = re.compile(r'[+-]?(?:0|[1-9](?:_?\d)*)(?:\.\d(?:_?\d)*)?(?:[eE][+-]?(?:\d(?:_?\d)*))?')

_basicstr_ml_re = re.compile(r'(?:""?(?!")|[^"\\\000-\011\013-\037])*')
_litstr_re = re.compile(r"[^'\000\010\012-\037]*")
_litstr_ml_re = re.compile(r"(?:(?:|'|'')(?:[^'\000-\010\013-\037]))*")
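The `with s:` blocks used throughout these functions rely on _Source's context-manager protocol: __enter__ snapshots the remaining input and position, and __exit__ rewinds them whenever a TomlError escapes, which is how alternatives (such as the quoted versus bare key cases in _p_key) are tried in turn. A standalone sketch of that pattern, poking at the internal class directly (internal API, illustrative only):

from pip._vendor.pytoml.parser import _Source

src = _Source('abc')
with src:                 # snapshot taken
    src.expect('xyz')     # fails, raises TomlError, __exit__ rewinds
# The TomlError was swallowed by __exit__; the input is untouched.
assert src.consume('abc')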
def _p_value(s, object_pairs_hook):
    pos = s.pos()

    if s.consume('true'):
        return 'bool', s.last(), True, pos
    if s.consume('false'):
        return 'bool', s.last(), False, pos

    if s.consume('"'):
        if s.consume('""'):
            r = _p_basicstr_content(s, _basicstr_ml_re)
            s.expect('"""')
        else:
            r = _p_basicstr_content(s, _basicstr_re)
            s.expect('"')
        return 'str', r, r, pos

    if s.consume('\''):
        if s.consume('\'\''):
            r = s.expect_re(_litstr_ml_re).group(0)
            s.expect('\'\'\'')
        else:
            r = s.expect_re(_litstr_re).group(0)
            s.expect('\'')
        return 'str', r, r, pos

    if s.consume_re(rfc3339_re):
        m = s.last()
        return 'datetime', m.group(0), parse_rfc3339_re(m), pos

    if s.consume_re(_float_re):
        m = s.last().group(0)
        r = m.replace('_', '')
        if '.' in m or 'e' in m or 'E' in m:
            return 'float', m, float(r), pos
        else:
            return 'int', m, int(r, 10), pos

    if s.consume('['):
        items = []
        with s:
            while True:
                _p_ews(s)
                items.append(_p_value(s, object_pairs_hook=object_pairs_hook))
                s.commit()
                _p_ews(s)
                s.expect(',')
                s.commit()
        _p_ews(s)
        s.expect(']')
        return 'array', None, items, pos

    if s.consume('{'):
        _p_ws(s)
        items = object_pairs_hook()
        if not s.consume('}'):
            k = _p_key(s)
            _p_ws(s)
            s.expect('=')
            _p_ws(s)
            items[k] = _p_value(s, object_pairs_hook=object_pairs_hook)
            _p_ws(s)
            while s.consume(','):
                _p_ws(s)
                k = _p_key(s)
                _p_ws(s)
                s.expect('=')
                _p_ws(s)
                items[k] = _p_value(s, object_pairs_hook=object_pairs_hook)
                _p_ws(s)
            s.expect('}')
        return 'table', None, items, pos

    s.fail()

def _p_stmt(s, object_pairs_hook):
    pos = s.pos()
    if s.consume('['):
        is_array = s.consume('[')
        _p_ws(s)
        keys = [_p_key(s)]
        _p_ws(s)
        while s.consume('.'):
            _p_ws(s)
            keys.append(_p_key(s))
            _p_ws(s)
        s.expect(']')
        if is_array:
            s.expect(']')
        return 'table_array' if is_array else 'table', keys, pos

    key = _p_key(s)
    _p_ws(s)
    s.expect('=')
    _p_ws(s)
    value = _p_value(s, object_pairs_hook=object_pairs_hook)
    return 'kv', (key, value), pos

_stmtsep_re = re.compile(r'(?:[ \t]*(?:#[^\n]*)?\n)+[ \t]*')
def _p_toml(s, object_pairs_hook):
    stmts = []
    _p_ews(s)
    with s:
        stmts.append(_p_stmt(s, object_pairs_hook=object_pairs_hook))
        while True:
            s.commit()
            s.expect_re(_stmtsep_re)
            stmts.append(_p_stmt(s, object_pairs_hook=object_pairs_hook))
    _p_ews(s)
    s.expect_eof()
    return stmts
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/test.py

import datetime
from .utils import format_rfc3339

try:
    _string_types = (str, unicode)
    _int_types = (int, long)
except NameError:
    _string_types = str
    _int_types = int


def translate_to_test(v):
    if isinstance(v, dict):
        return {k: translate_to_test(v) for k, v in v.items()}
    if isinstance(v, list):
        a = [translate_to_test(x) for x in v]
        if v and isinstance(v[0], dict):
            return a
        else:
            return {'type': 'array', 'value': a}
    if isinstance(v, datetime.datetime):
        return {'type': 'datetime', 'value': format_rfc3339(v)}
    if isinstance(v, bool):
        return {'type': 'bool', 'value': 'true' if v else 'false'}
    if isinstance(v, _int_types):
        return {'type': 'integer', 'value': str(v)}
    if isinstance(v, float):
        return {'type': 'float', 'value': '{:.17}'.format(v)}
    if isinstance(v, _string_types):
        return {'type': 'string', 'value': v}
    raise RuntimeError('unexpected value: {!r}'.format(v))
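translate_to_test rewrites a decoded document into the tagged {'type': ..., 'value': ...} shape used by the toml-test compliance suite, turning every scalar into a string (note the bool check runs before the integer check, since bool is an int subclass). A small sketch:

from pip._vendor.pytoml import loads, translate_to_test

doc = loads('n = 5\nok = true')
print(translate_to_test(doc))
# {'n': {'type': 'integer', 'value': '5'},
#  'ok': {'type': 'bool', 'value': 'true'}}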
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/utils.py

import datetime
import re

rfc3339_re = re.compile(r'(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(\.\d+)?(?:Z|([+-]\d{2}):(\d{2}))')

def parse_rfc3339(v):
    m = rfc3339_re.match(v)
    if not m or m.group(0) != v:
        return None
    return parse_rfc3339_re(m)

def parse_rfc3339_re(m):
    r = map(int, m.groups()[:6])
    if m.group(7):
        micro = float(m.group(7))
    else:
        micro = 0

    if m.group(8):
        g = int(m.group(8), 10) * 60 + int(m.group(9), 10)
        tz = _TimeZone(datetime.timedelta(0, g * 60))
    else:
        tz = _TimeZone(datetime.timedelta(0, 0))

    y, m, d, H, M, S = r
    return datetime.datetime(y, m, d, H, M, S, int(micro * 1000000), tz)


def format_rfc3339(v):
    offs = v.utcoffset()
    offs = int(offs.total_seconds()) // 60 if offs is not None else 0

    if offs == 0:
        suffix = 'Z'
    else:
        if offs > 0:
            suffix = '+'
        else:
            suffix = '-'
            offs = -offs
        suffix = '{0}{1:02}:{2:02}'.format(suffix, offs // 60, offs % 60)

    if v.microsecond:
        return v.strftime('%Y-%m-%dT%H:%M:%S.%f') + suffix
    else:
        return v.strftime('%Y-%m-%dT%H:%M:%S') + suffix


class _TimeZone(datetime.tzinfo):
    def __init__(self, offset):
        self._offset = offset

    def utcoffset(self, dt):
        return self._offset

    def dst(self, dt):
        return None

    def tzname(self, dt):
        m = self._offset.total_seconds() // 60
        if m < 0:
            res = '-'
            m = -m
        else:
            res = '+'
        h = m // 60
        m = m - h * 60
        return '{}{:.02}{:.02}'.format(res, h, m)
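parse_rfc3339 and format_rfc3339 are near-inverses: the parser builds an aware datetime carrying the private fixed-offset _TimeZone, and the formatter writes it back out with a Z or ±HH:MM suffix. A round-trip sketch:

from pip._vendor.pytoml.utils import parse_rfc3339, format_rfc3339

dt = parse_rfc3339('1979-05-27T07:32:00Z')
print(dt.year, dt.utcoffset())   # 1979 0:00:00
print(format_rfc3339(dt))        # 1979-05-27T07:32:00Z

assert parse_rfc3339('not a date') is None  # non-matches return None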
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/pytoml/writer.py

from __future__ import unicode_literals
import io, datetime, math, string, sys

from .utils import format_rfc3339

if sys.version_info[0] == 3:
    long = int
    unicode = str


def dumps(obj, sort_keys=False):
    fout = io.StringIO()
    dump(obj, fout, sort_keys=sort_keys)
    return fout.getvalue()


_escapes = {'\n': 'n', '\r': 'r', '\\': '\\', '\t': 't', '\b': 'b', '\f': 'f', '"': '"'}


def _escape_string(s):
    res = []
    start = 0

    def flush():
        if start != i:
            res.append(s[start:i])
        return i + 1

    i = 0
    while i < len(s):
        c = s[i]
        if c in '"\\\n\r\t\b\f':
            start = flush()
            res.append('\\' + _escapes[c])
        elif ord(c) < 0x20:
            start = flush()
            res.append('\\u%04x' % ord(c))
        i += 1

    flush()
    return '"' + ''.join(res) + '"'


_key_chars = string.digits + string.ascii_letters + '-_'
def _escape_id(s):
    if any(c not in _key_chars for c in s):
        return _escape_string(s)
    return s


def _format_value(v):
    if isinstance(v, bool):
        return 'true' if v else 'false'
    if isinstance(v, int) or isinstance(v, long):
        return unicode(v)
    if isinstance(v, float):
        if math.isnan(v) or math.isinf(v):
            raise ValueError("{0} is not a valid TOML value".format(v))
        else:
            return repr(v)
    elif isinstance(v, unicode) or isinstance(v, bytes):
        return _escape_string(v)
    elif isinstance(v, datetime.datetime):
        return format_rfc3339(v)
    elif isinstance(v, list):
        return '[{0}]'.format(', '.join(_format_value(obj) for obj in v))
    elif isinstance(v, dict):
        return '{{{0}}}'.format(', '.join('{} = {}'.format(_escape_id(k), _format_value(obj)) for k, obj in v.items()))
    else:
        raise RuntimeError(v)


def dump(obj, fout, sort_keys=False):
    tables = [((), obj, False)]

    while tables:
        name, table, is_array = tables.pop()
        if name:
            section_name = '.'.join(_escape_id(c) for c in name)
            if is_array:
                fout.write('[[{0}]]\n'.format(section_name))
            else:
                fout.write('[{0}]\n'.format(section_name))

        table_keys = sorted(table.keys()) if sort_keys else table.keys()
        new_tables = []
        has_kv = False
        for k in table_keys:
            v = table[k]
            if isinstance(v, dict):
                new_tables.append((name + (k,), v, False))
            elif isinstance(v, list) and v and all(isinstance(o, dict) for o in v):
                new_tables.extend((name + (k,), d, True) for d in v)
            elif v is None:
                # based on mojombo's comment: https://github.com/toml-lang/toml/issues/146#issuecomment-25019344
                fout.write('#{} = null # To use: uncomment and replace null with value\n'.format(_escape_id(k)))
                has_kv = True
            else:
                fout.write('{0} = {1}\n'.format(_escape_id(k), _format_value(v)))
                has_kv = True

        tables.extend(reversed(new_tables))

        if (name or has_kv) and tables:
            fout.write('\n')
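One quirk of the writer worth calling out: TOML has no null, so dump emits a None-valued key as a commented-out line (per the toml-lang issue linked in the code) instead of failing. A sketch:

from pip._vendor import pytoml

print(pytoml.dumps({'port': 8080, 'password': None}, sort_keys=True))
# #password = null # To use: uncomment and replace null with value
# port = 8080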
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/__init__.py

# -*- coding: utf-8 -*-

#   __
#  /__)  _  _     _   _ _/   _
# / (   (- (/ (/ (- _)  /  _)
#          /

"""
Requests HTTP Library
~~~~~~~~~~~~~~~~~~~~~

Requests is an HTTP library, written in Python, for human beings.
Basic GET usage:

   >>> import requests
   >>> r = requests.get('https://www.python.org')
   >>> r.status_code
   200
   >>> 'Python is a programming language' in r.content
   True

... or POST:

   >>> payload = dict(key1='value1', key2='value2')
   >>> r = requests.post('https://httpbin.org/post', data=payload)
   >>> print(r.text)
   {
     ...
     "form": {
       "key2": "value2",
       "key1": "value1"
     },
     ...
   }

The other HTTP methods are supported - see `requests.api`. Full documentation
is at <http://python-requests.org>.

:copyright: (c) 2017 by Kenneth Reitz.
:license: Apache 2.0, see LICENSE for more details.
"""

from pip._vendor import urllib3
from pip._vendor import chardet
import warnings
from .exceptions import RequestsDependencyWarning


def check_compatibility(urllib3_version, chardet_version):
    urllib3_version = urllib3_version.split('.')
    assert urllib3_version != ['dev']  # Verify urllib3 isn't installed from git.

    # Sometimes, urllib3 only reports its version as 16.1.
    if len(urllib3_version) == 2:
        urllib3_version.append('0')

    # Check urllib3 for compatibility.
    major, minor, patch = urllib3_version  # noqa: F811
    major, minor, patch = int(major), int(minor), int(patch)
    # urllib3 >= 1.21.1, <= 1.25
    assert major == 1
    assert minor >= 21
    assert minor <= 25

    # Check chardet for compatibility.
    major, minor, patch = chardet_version.split('.')[:3]
    major, minor, patch = int(major), int(minor), int(patch)
    # chardet >= 3.0.2, < 3.1.0
    assert major == 3
    assert minor < 1
    assert patch >= 2


def _check_cryptography(cryptography_version):
    # cryptography < 1.3.4
    try:
        cryptography_version = list(map(int, cryptography_version.split('.')))
    except ValueError:
        return

    if cryptography_version < [1, 3, 4]:
        warning = 'Old version of cryptography ({}) may cause slowdown.'.format(cryptography_version)
        warnings.warn(warning, RequestsDependencyWarning)
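check_compatibility is pure assertion logic: it accepts urllib3 1.21.x through 1.25.x (padding two-part version strings with '.0') and chardet 3.0.2 <= v < 3.1. A quick sketch of the gate the import-time guard below relies on (hypothetical version strings):

from pip._vendor.requests import check_compatibility

check_compatibility('1.24.1', '3.0.4')     # fine: inside both supported ranges
try:
    check_compatibility('2.0.0', '3.0.4')  # urllib3 2.x is rejected
except AssertionError:
    print('incompatible urllib3')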
# Check imported dependencies for compatibility.
try:
    check_compatibility(urllib3.__version__, chardet.__version__)
except (AssertionError, ValueError):
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
                  "version!".format(urllib3.__version__, chardet.__version__),
                  RequestsDependencyWarning)

# Attempt to enable urllib3's SNI support, if possible
from pip._internal.utils.compat import WINDOWS
if not WINDOWS:
    try:
        from pip._vendor.urllib3.contrib import pyopenssl
        pyopenssl.inject_into_urllib3()

        # Check cryptography version
        from cryptography import __version__ as cryptography_version
        _check_cryptography(cryptography_version)
    except ImportError:
        pass

# urllib3's DependencyWarnings should be silenced.
from pip._vendor.urllib3.exceptions import DependencyWarning
warnings.simplefilter('ignore', DependencyWarning)

from .__version__ import __title__, __description__, __url__, __version__
from .__version__ import __build__, __author__, __author_email__, __license__
from .__version__ import __copyright__, __cake__

from . import utils
from . import packages
from .models import Request, Response, PreparedRequest
from .api import request, get, head, post, patch, put, delete, options
from .sessions import session, Session
from .status_codes import codes
from .exceptions import (
    RequestException, Timeout, URLRequired,
    TooManyRedirects, HTTPError, ConnectionError,
    FileModeWarning, ConnectTimeout, ReadTimeout
)

# Set default logging handler to avoid "No handler found" warnings.
import logging
from logging import NullHandler

logging.getLogger(__name__).addHandler(NullHandler())

# FileModeWarnings go off per the default.
warnings.simplefilter('default', FileModeWarning, append=True)
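Because only a NullHandler is installed above, applications opt in to the library's log output themselves. A sketch using nothing but the stdlib logging API; the logger name mirrors the import path, which for this vendored copy is 'pip._vendor.requests':

import logging

logging.basicConfig(level=logging.DEBUG)  # root handler for everything
# Opt in to this package's (otherwise silenced) logger explicitly:
logging.getLogger('pip._vendor.requests').setLevel(logging.DEBUG)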
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/__version__.py

# .-. .-. .-. . . .-. .-. .-. .-.
# |(  |-  |.| | | |-  `-.  |  `-.
# ' ' `-' `-`.`-' `-' `-'  '  `-'

__title__ = 'requests'
__description__ = 'Python HTTP for Humans.'
__url__ = 'http://python-requests.org'
__version__ = '2.22.0'
__build__ = 0x022200
__author__ = 'Kenneth Reitz'
__author_email__ = 'me@kennethreitz.org'
__license__ = 'Apache 2.0'
__copyright__ = 'Copyright 2019 Kenneth Reitz'
__cake__ = u'\u2728 \U0001f370 \u2728'

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/_internal_utils.py

# -*- coding: utf-8 -*-

"""
requests._internal_utils
~~~~~~~~~~~~~~

Provides utility functions that are consumed internally by Requests
which depend on extremely few external helpers (such as compat)
"""

from .compat import is_py2, builtin_str, str


def to_native_string(string, encoding='ascii'):
    """Given a string object, regardless of type, returns a representation of
    that string in the native string type, encoding and decoding where
    necessary. This assumes ASCII unless told otherwise.
    """
    if isinstance(string, builtin_str):
        out = string
    else:
        if is_py2:
            out = string.encode(encoding)
        else:
            out = string.decode(encoding)

    return out


def unicode_is_ascii(u_string):
    """Determine if unicode string only contains ASCII characters.

    :param str u_string: unicode string to check. Must be unicode
        and not Python 2 `str`.
    :rtype: bool
    """
    assert isinstance(u_string, str)
    try:
        u_string.encode('ascii')
        return True
    except UnicodeEncodeError:
        return False
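Both helpers are tiny but load-bearing for the py2/py3 header handling elsewhere in the package. A sketch of their behaviour on Python 3:

from pip._vendor.requests._internal_utils import to_native_string, unicode_is_ascii

assert to_native_string(b'Host') == 'Host'     # bytes decoded to the native str
assert unicode_is_ascii(u'token') is True
assert unicode_is_ascii(u'caf\xe9') is False   # é falls outside ASCII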
""" import os.path import socket from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url from pip._vendor.urllib3.response import HTTPResponse from pip._vendor.urllib3.util import parse_url from pip._vendor.urllib3.util import Timeout as TimeoutSauce from pip._vendor.urllib3.util.retry import Retry from pip._vendor.urllib3.exceptions import ClosedPoolError from pip._vendor.urllib3.exceptions import ConnectTimeoutError from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError from pip._vendor.urllib3.exceptions import MaxRetryError from pip._vendor.urllib3.exceptions import NewConnectionError from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError from pip._vendor.urllib3.exceptions import ProtocolError from pip._vendor.urllib3.exceptions import ReadTimeoutError from pip._vendor.urllib3.exceptions import SSLError as _SSLError from pip._vendor.urllib3.exceptions import ResponseError from pip._vendor.urllib3.exceptions import LocationValueError from .models import Response from .compat import urlparse, basestring from .utils import (DEFAULT_CA_BUNDLE_PATH, extract_zipped_paths, get_encoding_from_headers, prepend_scheme_if_needed, get_auth_from_url, urldefragauth, select_proxy) from .structures import CaseInsensitiveDict from .cookies import extract_cookies_to_jar from .exceptions import (ConnectionError, ConnectTimeout, ReadTimeout, SSLError, ProxyError, RetryError, InvalidSchema, InvalidProxyURL, InvalidURL) from .auth import _basic_auth_str try: from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager except ImportError: def SOCKSProxyManager(*args, **kwargs): raise InvalidSchema("Missing dependencies for SOCKS support.") DEFAULT_POOLBLOCK = False DEFAULT_POOLSIZE = 10 DEFAULT_RETRIES = 0 DEFAULT_POOL_TIMEOUT = None class BaseAdapter(object): """The Base Transport Adapter""" def __init__(self): super(BaseAdapter, self).__init__() def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. """ raise NotImplementedError def close(self): """Cleans up adapter specific items.""" raise NotImplementedError class HTTPAdapter(BaseAdapter): """The built-in HTTP Adapter for urllib3. Provides a general-case interface for Requests sessions to contact HTTP and HTTPS urls by implementing the Transport Adapter interface. This class will usually be created by the :class:`Session <Session>` class under the covers. :param pool_connections: The number of urllib3 connection pools to cache. :param pool_maxsize: The maximum number of connections to save in the pool. :param max_retries: The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. 
class HTTPAdapter(BaseAdapter):
    """The built-in HTTP Adapter for urllib3.

    Provides a general-case interface for Requests sessions to contact HTTP and
    HTTPS urls by implementing the Transport Adapter interface. This class will
    usually be created by the :class:`Session <Session>` class under the
    covers.

    :param pool_connections: The number of urllib3 connection pools to cache.
    :param pool_maxsize: The maximum number of connections to save in the pool.
    :param max_retries: The maximum number of retries each connection
        should attempt. Note, this applies only to failed DNS lookups, socket
        connections and connection timeouts, never to requests where data has
        made it to the server. By default, Requests does not retry failed
        connections. If you need granular control over the conditions under
        which we retry a request, import urllib3's ``Retry`` class and pass
        that instead.
    :param pool_block: Whether the connection pool should block for connections.

    Usage::

      >>> import requests
      >>> s = requests.Session()
      >>> a = requests.adapters.HTTPAdapter(max_retries=3)
      >>> s.mount('http://', a)
    """
    __attrs__ = ['max_retries', 'config', '_pool_connections', '_pool_maxsize',
                 '_pool_block']

    def __init__(self, pool_connections=DEFAULT_POOLSIZE,
                 pool_maxsize=DEFAULT_POOLSIZE, max_retries=DEFAULT_RETRIES,
                 pool_block=DEFAULT_POOLBLOCK):
        if max_retries == DEFAULT_RETRIES:
            self.max_retries = Retry(0, read=False)
        else:
            self.max_retries = Retry.from_int(max_retries)
        self.config = {}
        self.proxy_manager = {}

        super(HTTPAdapter, self).__init__()

        self._pool_connections = pool_connections
        self._pool_maxsize = pool_maxsize
        self._pool_block = pool_block

        self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block)

    def __getstate__(self):
        return {attr: getattr(self, attr, None) for attr in self.__attrs__}

    def __setstate__(self, state):
        # Can't handle by adding 'proxy_manager' to self.__attrs__ because
        # self.poolmanager uses a lambda function, which isn't pickleable.
        self.proxy_manager = {}
        self.config = {}

        for attr, value in state.items():
            setattr(self, attr, value)

        self.init_poolmanager(self._pool_connections, self._pool_maxsize,
                              block=self._pool_block)

    def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs):
        """Initializes a urllib3 PoolManager.

        This method should not be called from user code, and is only
        exposed for use when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param connections: The number of urllib3 connection pools to cache.
        :param maxsize: The maximum number of connections to save in the pool.
        :param block: Block when no free connections are available.
        :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager.
        """
        # save these values for pickling
        self._pool_connections = connections
        self._pool_maxsize = maxsize
        self._pool_block = block

        self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                       block=block, strict=True, **pool_kwargs)

    def proxy_manager_for(self, proxy, **proxy_kwargs):
        """Return urllib3 ProxyManager for the given proxy.

        This method should not be called from user code, and is only
        exposed for use when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param proxy: The proxy to return a urllib3 ProxyManager for.
        :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager.
        :returns: ProxyManager
        :rtype: urllib3.ProxyManager
        """
        if proxy in self.proxy_manager:
            manager = self.proxy_manager[proxy]
        elif proxy.lower().startswith('socks'):
            username, password = get_auth_from_url(proxy)
            manager = self.proxy_manager[proxy] = SOCKSProxyManager(
                proxy,
                username=username,
                password=password,
                num_pools=self._pool_connections,
                maxsize=self._pool_maxsize,
                block=self._pool_block,
                **proxy_kwargs
            )
        else:
            proxy_headers = self.proxy_headers(proxy)
            manager = self.proxy_manager[proxy] = proxy_from_url(
                proxy,
                proxy_headers=proxy_headers,
                num_pools=self._pool_connections,
                maxsize=self._pool_maxsize,
                block=self._pool_block,
                **proxy_kwargs)

        return manager
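An aside before the remaining methods: everything __init__ accepts maps straight onto urllib3's PoolManager, so tuning pooling or retries is just a matter of mounting a configured adapter, exactly as the class docstring's usage shows. A slightly fuller sketch:

from pip._vendor import requests
from pip._vendor.requests.adapters import HTTPAdapter

s = requests.Session()
adapter = HTTPAdapter(pool_connections=20,  # distinct host pools to cache
                      pool_maxsize=50,      # connections kept per pool
                      max_retries=3)        # retry DNS/connect failures only
s.mount('https://', adapter)
# All https:// requests made on this session now share the tuned pool.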
    def cert_verify(self, conn, url, verify, cert):
        """Verify a SSL certificate. This method should not be called from user
        code, and is only exposed for use when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param conn: The urllib3 connection object associated with the cert.
        :param url: The requested URL.
        :param verify: Either a boolean, in which case it controls whether we verify
            the server's TLS certificate, or a string, in which case it must be a path
            to a CA bundle to use
        :param cert: The SSL certificate to verify.
        """
        if url.lower().startswith('https') and verify:

            cert_loc = None

            # Allow self-specified cert location.
            if verify is not True:
                cert_loc = verify

            if not cert_loc:
                cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH)

            if not cert_loc or not os.path.exists(cert_loc):
                raise IOError("Could not find a suitable TLS CA certificate bundle, "
                              "invalid path: {}".format(cert_loc))

            conn.cert_reqs = 'CERT_REQUIRED'

            if not os.path.isdir(cert_loc):
                conn.ca_certs = cert_loc
            else:
                conn.ca_cert_dir = cert_loc
        else:
            conn.cert_reqs = 'CERT_NONE'
            conn.ca_certs = None
            conn.ca_cert_dir = None

        if cert:
            if not isinstance(cert, basestring):
                conn.cert_file = cert[0]
                conn.key_file = cert[1]
            else:
                conn.cert_file = cert
                conn.key_file = None
            if conn.cert_file and not os.path.exists(conn.cert_file):
                raise IOError("Could not find the TLS certificate file, "
                              "invalid path: {}".format(conn.cert_file))
            if conn.key_file and not os.path.exists(conn.key_file):
                raise IOError("Could not find the TLS key file, "
                              "invalid path: {}".format(conn.key_file))

    def build_response(self, req, resp):
        """Builds a :class:`Response <requests.Response>` object from a urllib3
        response. This should not be called from user code, and is only exposed
        for use when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`

        :param req: The :class:`PreparedRequest <PreparedRequest>` used to generate the response.
        :param resp: The urllib3 response object.
        :rtype: requests.Response
        """
        response = Response()

        # Fallback to None if there's no status_code, for whatever reason.
        response.status_code = getattr(resp, 'status', None)

        # Make headers case-insensitive.
        response.headers = CaseInsensitiveDict(getattr(resp, 'headers', {}))

        # Set encoding.
        response.encoding = get_encoding_from_headers(response.headers)
        response.raw = resp
        response.reason = response.raw.reason

        if isinstance(req.url, bytes):
            response.url = req.url.decode('utf-8')
        else:
            response.url = req.url

        # Add new cookies from the server.
        extract_cookies_to_jar(response.cookies, req, resp)

        # Give the Response some context.
        response.request = req
        response.connection = self

        return response

    def get_connection(self, url, proxies=None):
        """Returns a urllib3 connection for the given URL. This should not be
        called from user code, and is only exposed for use when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param url: The URL to connect to.
        :param proxies: (optional) A Requests-style dictionary of proxies used on this request.
        :rtype: urllib3.ConnectionPool
        """
        proxy = select_proxy(url, proxies)

        if proxy:
            proxy = prepend_scheme_if_needed(proxy, 'http')
            proxy_url = parse_url(proxy)
            if not proxy_url.host:
                raise InvalidProxyURL("Please check proxy URL. It is malformed"
                                      " and could be missing the host.")
            proxy_manager = self.proxy_manager_for(proxy)
            conn = proxy_manager.connection_from_url(url)
        else:
            # Only scheme should be lower case
            parsed = urlparse(url)
            url = parsed.geturl()
            conn = self.poolmanager.connection_from_url(url)

        return conn

    def close(self):
        """Disposes of any internal state.

        Currently, this closes the PoolManager and any active ProxyManager,
        which closes any pooled connections.
        """
        self.poolmanager.clear()
        for proxy in self.proxy_manager.values():
            proxy.clear()

    def request_url(self, request, proxies):
        """Obtain the url to use when making the final request.

        If the message is being sent through a HTTP proxy, the full URL has to
        be used. Otherwise, we should only use the path portion of the URL.

        This should not be called from user code, and is only exposed for use
        when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs.
        :rtype: str
        """
        proxy = select_proxy(request.url, proxies)
        scheme = urlparse(request.url).scheme

        is_proxied_http_request = (proxy and scheme != 'https')
        using_socks_proxy = False
        if proxy:
            proxy_scheme = urlparse(proxy).scheme.lower()
            using_socks_proxy = proxy_scheme.startswith('socks')

        url = request.path_url
        if is_proxied_http_request and not using_socks_proxy:
            url = urldefragauth(request.url)

        return url

    def add_headers(self, request, **kwargs):
        """Add any headers needed by the connection. As of v2.0 this does
        nothing by default, but is left for overriding by users that subclass
        the :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        This should not be called from user code, and is only exposed for use
        when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param request: The :class:`PreparedRequest <PreparedRequest>` to add headers to.
        :param kwargs: The keyword arguments from the call to send().
        """
        pass

    def proxy_headers(self, proxy):
        """Returns a dictionary of the headers to add to any request sent
        through a proxy. This works with urllib3 magic to ensure that they are
        correctly sent to the proxy, rather than in a tunnelled request if
        CONNECT is being used.

        This should not be called from user code, and is only exposed for use
        when subclassing the
        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

        :param proxy: The url of the proxy being used for this request.
        :rtype: dict
        """
        headers = {}
        username, password = get_auth_from_url(proxy)

        if username:
            headers['Proxy-Authorization'] = _basic_auth_str(username, password)

        return headers
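A quick illustration before send(): proxy_headers only produces a Proxy-Authorization header when credentials are embedded in the proxy URL itself. A sketch (hypothetical proxy address):

from pip._vendor.requests.adapters import HTTPAdapter

a = HTTPAdapter()
print(a.proxy_headers('http://user:pass@10.0.0.1:3128/'))
# {'Proxy-Authorization': 'Basic dXNlcjpwYXNz'}
print(a.proxy_headers('http://10.0.0.1:3128/'))  # no credentials -> {}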
    def send(self, request, stream=False, timeout=None, verify=True,
             cert=None, proxies=None):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) <timeouts>` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection(request.url, proxies)
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(request, stream=stream, timeout=timeout, verify=verify,
                         cert=cert, proxies=proxies)

        chunked = not (request.body is None or 'Content-Length' in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError as e:
                # this may raise a string formatting error.
                err = ("Invalid timeout {}. Pass a (connect, read) "
                       "timeout tuple, or a single float to set "
                       "both timeouts to the same value".format(timeout))
                raise ValueError(err)
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            if not chunked:
                # Send the request.
                resp = conn.urlopen(
                    method=request.method,
                    url=url,
                    body=request.body,
                    headers=request.headers,
                    redirect=False,
                    assert_same_host=False,
                    preload_content=False,
                    decode_content=False,
                    retries=self.max_retries,
                    timeout=timeout
                )
            else:
                if hasattr(conn, 'proxy_pool'):
                    conn = conn.proxy_pool

                low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)

                try:
                    low_conn.putrequest(request.method,
                                        url,
                                        skip_accept_encoding=True)

                    for header, value in request.headers.items():
                        low_conn.putheader(header, value)

                    low_conn.endheaders()

                    for i in request.body:
                        low_conn.send(hex(len(i))[2:].encode('utf-8'))
                        low_conn.send(b'\r\n')
                        low_conn.send(i)
                        low_conn.send(b'\r\n')
                    low_conn.send(b'0\r\n\r\n')

                    # Receive the response from the server
                    try:
                        # For Python 2.7, use buffering of HTTP responses
                        r = low_conn.getresponse(buffering=True)
                    except TypeError:
                        # For compatibility with Python 3.3+
                        r = low_conn.getresponse()

                    resp = HTTPResponse.from_httplib(
                        r,
                        pool=conn,
                        connection=low_conn,
                        preload_content=False,
                        decode_content=False
                    )
                except:
                    # If we hit any problems here, clean up the connection.
                    # Then, reraise so that we can handle the actual exception.
                    low_conn.close()
                    raise

        except (ProtocolError, socket.error) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

            raise ConnectionError(e, request=request)

        except ClosedPoolError as e:
            raise ConnectionError(e, request=request)

        except _ProxyError as e:
            raise ProxyError(e)

        except (_SSLError, _HTTPError) as e:
            if isinstance(e, _SSLError):
                # This branch is for urllib3 versions earlier than v1.22
                raise SSLError(e, request=request)
            elif isinstance(e, ReadTimeoutError):
                raise ReadTimeout(e, request=request)
            else:
                raise

        return self.build_response(request, resp)
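send() normalizes every accepted timeout spelling into a urllib3 Timeout: a single float applies to both the connect and read phases, while a 2-tuple splits them. At the public API level that looks like the following (httpbin URL borrowed from the library's own docstrings; requires network access):

from pip._vendor import requests

# 3.05s to establish the connection, 27s for the server to send data.
r = requests.get('https://httpbin.org/get', timeout=(3.05, 27))

# A single float sets both phases to the same value.
r = requests.get('https://httpbin.org/get', timeout=5)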
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/api.py

# -*- coding: utf-8 -*-

"""
requests.api
~~~~~~~~~~~~

This module implements the Requests API.

:copyright: (c) 2012 by Kenneth Reitz.
:license: Apache2, see LICENSE for more details.
"""

from . import sessions


def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary, list of tuples or bytes to send
        in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, list of tuples, bytes, or file-like
        object to send in the body of the :class:`Request`.
    :param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``)
        for multipart encoding upload. ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``,
        3-tuple ``('filename', fileobj, 'content_type')`` or a 4-tuple
        ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object
        containing additional headers to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How many seconds to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) Either a boolean, in which case it controls whether we verify
        the server's TLS certificate, or a string, in which case it must be a path
        to a CA bundle to use. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

      >>> import requests
      >>> req = requests.request('GET', 'https://httpbin.org/get')
      >>> req
      <Response [200]>
    """

    # By using the 'with' statement we are sure the session is closed, thus we
    # avoid leaving sockets open which can trigger a ResourceWarning in some
    # cases, and look like a memory leak in others.
    with sessions.Session() as session:
        return session.request(method=method, url=url, **kwargs)


def get(url, params=None, **kwargs):
    r"""Sends a GET request.

    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary, list of tuples or bytes to send
        in the query string for the :class:`Request`.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    kwargs.setdefault('allow_redirects', True)
    return request('get', url, params=params, **kwargs)


def options(url, **kwargs):
    r"""Sends an OPTIONS request.

    :param url: URL for the new :class:`Request` object.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    kwargs.setdefault('allow_redirects', True)
    return request('options', url, **kwargs)


def head(url, **kwargs):
    r"""Sends a HEAD request.

    :param url: URL for the new :class:`Request` object.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    kwargs.setdefault('allow_redirects', False)
    return request('head', url, **kwargs)


def post(url, data=None, json=None, **kwargs):
    r"""Sends a POST request.

    :param url: URL for the new :class:`Request` object.
    :param data: (optional) Dictionary, list of tuples, bytes, or file-like
        object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    return request('post', url, data=data, json=json, **kwargs)


def put(url, data=None, **kwargs):
    r"""Sends a PUT request.

    :param url: URL for the new :class:`Request` object.
    :param data: (optional) Dictionary, list of tuples, bytes, or file-like
        object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    return request('put', url, data=data, **kwargs)


def patch(url, data=None, **kwargs):
    r"""Sends a PATCH request.

    :param url: URL for the new :class:`Request` object.
    :param data: (optional) Dictionary, list of tuples, bytes, or file-like
        object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    return request('patch', url, data=data, **kwargs)


def delete(url, **kwargs):
    r"""Sends a DELETE request.

    :param url: URL for the new :class:`Request` object.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    return request('delete', url, **kwargs)
""" import os import re import time import hashlib import threading import warnings from base64 import b64encode from .compat import urlparse, str, basestring from .cookies import extract_cookies_to_jar from ._internal_utils import to_native_string from .utils import parse_dict_header CONTENT_TYPE_FORM_URLENCODED = 'application/x-www-form-urlencoded' CONTENT_TYPE_MULTI_PART = 'multipart/form-data' def _basic_auth_str(username, password): """Returns a Basic Auth string.""" # "I want us to put a big-ol' comment on top of it that # says that this behaviour is dumb but we need to preserve # it because people are relying on it." # - Lukasa # # These are here solely to maintain backwards compatibility # for things like ints. This will be removed in 3.0.0. if not isinstance(username, basestring): warnings.warn( "Non-string usernames will no longer be supported in Requests " "3.0.0. Please convert the object you've passed in ({!r}) to " "a string or bytes object in the near future to avoid " "problems.".format(username), category=DeprecationWarning, ) username = str(username) if not isinstance(password, basestring): warnings.warn( "Non-string passwords will no longer be supported in Requests " "3.0.0. Please convert the object you've passed in ({!r}) to " "a string or bytes object in the near future to avoid " "problems.".format(password), category=DeprecationWarning, ) password = str(password) # -- End Removal -- if isinstance(username, str): username = username.encode('latin1') if isinstance(password, str): password = password.encode('latin1') authstr = 'Basic ' + to_native_string( b64encode(b':'.join((username, password))).strip() ) return authstr class AuthBase(object): """Base class that all auth implementations derive from""" def __call__(self, r): raise NotImplementedError('Auth hooks must be callable.') class HTTPBasicAuth(AuthBase): """Attaches HTTP Basic Authentication to the given Request object.""" def __init__(self, username, password): self.username = username self.password = password def __eq__(self, other): return all([ self.username == getattr(other, 'username', None), self.password == getattr(other, 'password', None) ]) def __ne__(self, other): return not self == other def __call__(self, r): r.headers['Authorization'] = _basic_auth_str(self.username, self.password) return r class HTTPProxyAuth(HTTPBasicAuth): """Attaches HTTP Proxy Authentication to a given Request object.""" def __call__(self, r): r.headers['Proxy-Authorization'] = _basic_auth_str(self.username, self.password) return r class HTTPDigestAuth(AuthBase): """Attaches HTTP Digest Authentication to the given Request object.""" def __init__(self, username, password): self.username = username self.password = password # Keep state in per-thread local storage self._thread_local = threading.local() def init_per_thread_state(self): # Ensure state is initialized just once per-thread if not hasattr(self._thread_local, 'init'): self._thread_local.init = True self._thread_local.last_nonce = '' self._thread_local.nonce_count = 0 self._thread_local.chal = {} self._thread_local.pos = None self._thread_local.num_401_calls = None def build_digest_header(self, method, url): """ :rtype: str """ realm = self._thread_local.chal['realm'] nonce = self._thread_local.chal['nonce'] qop = self._thread_local.chal.get('qop') algorithm = self._thread_local.chal.get('algorithm') opaque = self._thread_local.chal.get('opaque') hash_utf8 = None if algorithm is None: _algorithm = 'MD5' else: _algorithm = algorithm.upper() # lambdas assume digest 
modules are imported at the top level if _algorithm == 'MD5' or _algorithm == 'MD5-SESS': def md5_utf8(x): if isinstance(x, str): x = x.encode('utf-8') return hashlib.md5(x).hexdigest() hash_utf8 = md5_utf8 elif _algorithm == 'SHA': def sha_utf8(x): if isinstance(x, str): x = x.encode('utf-8') return hashlib.sha1(x).hexdigest() hash_utf8 = sha_utf8 elif _algorithm == 'SHA-256': def sha256_utf8(x): if isinstance(x, str): x = x.encode('utf-8') return hashlib.sha256(x).hexdigest() hash_utf8 = sha256_utf8 elif _algorithm == 'SHA-512': def sha512_utf8(x): if isinstance(x, str): x = x.encode('utf-8') return hashlib.sha512(x).hexdigest() hash_utf8 = sha512_utf8 KD = lambda s, d: hash_utf8("%s:%s" % (s, d)) if hash_utf8 is None: return None # XXX not implemented yet entdig = None p_parsed = urlparse(url) #: path is request-uri defined in RFC 2616 which should not be empty path = p_parsed.path or "/" if p_parsed.query: path += '?' + p_parsed.query A1 = '%s:%s:%s' % (self.username, realm, self.password) A2 = '%s:%s' % (method, path) HA1 = hash_utf8(A1) HA2 = hash_utf8(A2) if nonce == self._thread_local.last_nonce: self._thread_local.nonce_count += 1 else: self._thread_local.nonce_count = 1 ncvalue = '%08x' % self._thread_local.nonce_count s = str(self._thread_local.nonce_count).encode('utf-8') s += nonce.encode('utf-8') s += time.ctime().encode('utf-8') s += os.urandom(8) cnonce = (hashlib.sha1(s).hexdigest()[:16]) if _algorithm == 'MD5-SESS': HA1 = hash_utf8('%s:%s:%s' % (HA1, nonce, cnonce)) if not qop: respdig = KD(HA1, "%s:%s" % (nonce, HA2)) elif qop == 'auth' or 'auth' in qop.split(','): noncebit = "%s:%s:%s:%s:%s" % ( nonce, ncvalue, cnonce, 'auth', HA2 ) respdig = KD(HA1, noncebit) else: # XXX handle auth-int. return None self._thread_local.last_nonce = nonce # XXX should the partial digests be encoded too? base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ 'response="%s"' % (self.username, realm, nonce, path, respdig) if opaque: base += ', opaque="%s"' % opaque if algorithm: base += ', algorithm="%s"' % algorithm if entdig: base += ', digest="%s"' % entdig if qop: base += ', qop="auth", nc=%s, cnonce="%s"' % (ncvalue, cnonce) return 'Digest %s' % (base) def handle_redirect(self, r, **kwargs): """Reset num_401_calls counter on redirects.""" if r.is_redirect: self._thread_local.num_401_calls = 1 def handle_401(self, r, **kwargs): """ Takes the given response and tries digest-auth, if needed. :rtype: requests.Response """ # If response is not 4xx, do not auth # See https://github.com/requests/requests/issues/3772 if not 400 <= r.status_code < 500: self._thread_local.num_401_calls = 1 return r if self._thread_local.pos is not None: # Rewind the file position indicator of the body to where # it was to resend the request. r.request.body.seek(self._thread_local.pos) s_auth = r.headers.get('www-authenticate', '') if 'digest' in s_auth.lower() and self._thread_local.num_401_calls < 2: self._thread_local.num_401_calls += 1 pat = re.compile(r'digest ', flags=re.IGNORECASE) self._thread_local.chal = parse_dict_header(pat.sub('', s_auth, count=1)) # Consume content and release the original connection # to allow our new request to reuse the same one. 
r.content r.close() prep = r.request.copy() extract_cookies_to_jar(prep._cookies, r.request, r.raw) prep.prepare_cookies(prep._cookies) prep.headers['Authorization'] = self.build_digest_header( prep.method, prep.url) _r = r.connection.send(prep, **kwargs) _r.history.append(r) _r.request = prep return _r self._thread_local.num_401_calls = 1 return r def __call__(self, r): # Initialize per-thread state, if needed self.init_per_thread_state() # If we have a saved nonce, skip the 401 if self._thread_local.last_nonce: r.headers['Authorization'] = self.build_digest_header(r.method, r.url) try: self._thread_local.pos = r.body.tell() except AttributeError: # In the case of HTTPDigestAuth being reused and the body of # the previous request was a file-like object, pos has the # file position of the previous body. Ensure it's set to # None. self._thread_local.pos = None r.register_hook('response', self.handle_401) r.register_hook('response', self.handle_redirect) self._thread_local.num_401_calls = 1 return r def __eq__(self, other): return all([ self.username == getattr(other, 'username', None), self.password == getattr(other, 'password', None) ]) def __ne__(self, other): return not self == other
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/certs.py
#!/usr/bin/env python # -*- coding: utf-8 -*- """ requests.certs ~~~~~~~~~~~~~~ This module returns the preferred default CA certificate bundle. There is only one — the one from the certifi package. If you are packaging Requests, e.g., for a Linux distribution or a managed environment, you can change the definition of where() to return a separately packaged CA bundle.
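A quick way to check which bundle is active (a sketch; the exact path depends on the installation)::

    >>> import requests.certs
    >>> requests.certs.where()  # doctest: +SKIP
    '/usr/local/lib/python3.7/site-packages/certifi/cacert.pem'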
""" from pip._vendor.certifi import where if __name__ == '__main__': print(where()) �����������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/compat.py�����������������������0000644�0001750�0001750�00000003625�00000000000�030203� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- """ requests.compat ~~~~~~~~~~~~~~~ This module handles import compatibility issues between Python 2 and Python 3. """ from pip._vendor import chardet import sys # ------- # Pythons # ------- # Syntax sugar. _ver = sys.version_info #: Python 2.x? is_py2 = (_ver[0] == 2) #: Python 3.x? is_py3 = (_ver[0] == 3) # Note: We've patched out simplejson support in pip because it prevents # upgrading simplejson on Windows. # try: # import simplejson as json # except (ImportError, SyntaxError): # # simplejson does not support Python 3.2, it throws a SyntaxError # # because of u'...' Unicode literals. 
import json # --------- # Specifics # --------- if is_py2: from urllib import ( quote, unquote, quote_plus, unquote_plus, urlencode, getproxies, proxy_bypass, proxy_bypass_environment, getproxies_environment) from urlparse import urlparse, urlunparse, urljoin, urlsplit, urldefrag from urllib2 import parse_http_list import cookielib from Cookie import Morsel from StringIO import StringIO from collections import Callable, Mapping, MutableMapping, OrderedDict builtin_str = str bytes = str str = unicode basestring = basestring numeric_types = (int, long, float) integer_types = (int, long) elif is_py3: from urllib.parse import urlparse, urlunparse, urljoin, urlsplit, urlencode, quote, unquote, quote_plus, unquote_plus, urldefrag from urllib.request import parse_http_list, getproxies, proxy_bypass, proxy_bypass_environment, getproxies_environment from http import cookiejar as cookielib from http.cookies import Morsel from io import StringIO from collections import OrderedDict from collections.abc import Callable, Mapping, MutableMapping builtin_str = str str = str bytes = bytes basestring = (str, bytes) numeric_types = (int, float) integer_types = (int,)
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/cookies.py
# -*- coding: utf-8 -*- """ requests.cookies ~~~~~~~~~~~~~~~~ Compatibility code to be able to use `cookielib.CookieJar` with requests. requests.utils imports from here, so be careful with imports. """ import copy import time import calendar from ._internal_utils import to_native_string from .compat import cookielib, urlparse, urlunparse, Morsel, MutableMapping try: import threading except ImportError: import dummy_threading as threading class MockRequest(object): """Wraps a `requests.Request` to mimic a `urllib2.Request`. The code in `cookielib.CookieJar` expects this interface in order to correctly manage cookie policies, i.e., determine whether a cookie can be set, given the domains of the request and the cookie.
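For example (a sketch of the internal flow; ``prepared`` stands for the request being sent)::

    >>> mock = MockRequest(prepared)
    >>> mock.get_host()  # doctest: +SKIP
    'example.com'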
The original request object is read-only. The client is responsible for collecting the new headers via `get_new_headers()` and interpreting them appropriately. You probably want `get_cookie_header`, defined below. """ def __init__(self, request): self._r = request self._new_headers = {} self.type = urlparse(self._r.url).scheme def get_type(self): return self.type def get_host(self): return urlparse(self._r.url).netloc def get_origin_req_host(self): return self.get_host() def get_full_url(self): # Only return the response's URL if the user hasn't set the Host # header if not self._r.headers.get('Host'): return self._r.url # If they did set it, retrieve it and reconstruct the expected domain host = to_native_string(self._r.headers['Host'], encoding='utf-8') parsed = urlparse(self._r.url) # Reconstruct the URL as we expect it return urlunparse([ parsed.scheme, host, parsed.path, parsed.params, parsed.query, parsed.fragment ]) def is_unverifiable(self): return True def has_header(self, name): return name in self._r.headers or name in self._new_headers def get_header(self, name, default=None): return self._r.headers.get(name, self._new_headers.get(name, default)) def add_header(self, key, val): """cookielib has no legitimate use for this method; add it back if you find one.""" raise NotImplementedError("Cookie headers should be added with add_unredirected_header()") def add_unredirected_header(self, name, value): self._new_headers[name] = value def get_new_headers(self): return self._new_headers @property def unverifiable(self): return self.is_unverifiable() @property def origin_req_host(self): return self.get_origin_req_host() @property def host(self): return self.get_host() class MockResponse(object): """Wraps a `httplib.HTTPMessage` to mimic a `urllib.addinfourl`. ...what? Basically, expose the parsed HTTP headers from the server response the way `cookielib` expects to see them. """ def __init__(self, headers): """Make a MockResponse for `cookielib` to read. :param headers: a httplib.HTTPMessage or analogous carrying the headers """ self._headers = headers def info(self): return self._headers def getheaders(self, name): # Return the headers rather than silently dropping them. return self._headers.getheaders(name) def extract_cookies_to_jar(jar, request, response): """Extract the cookies from the response into a CookieJar. :param jar: cookielib.CookieJar (not necessarily a RequestsCookieJar) :param request: our own requests.Request object :param response: urllib3.HTTPResponse object """ if not (hasattr(response, '_original_response') and response._original_response): return # the _original_response field is the wrapped httplib.HTTPResponse object, req = MockRequest(request) # pull out the HTTPMessage with the headers and put it in the mock: res = MockResponse(response._original_response.msg) jar.extract_cookies(res, req) def get_cookie_header(jar, request): """ Produce an appropriate Cookie header string to be sent with `request`, or None. :rtype: str """ r = MockRequest(request) jar.add_cookie_header(r) return r.get_new_headers().get('Cookie') def remove_cookie_by_name(cookiejar, name, domain=None, path=None): """Unsets a cookie by name, by default over all domains and paths. Wraps CookieJar.clear(), is O(n).
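A sketch of typical use (assuming ``jar`` already holds a cookie named ``'sessionid'``)::

    >>> remove_cookie_by_name(jar, 'sessionid', domain='example.com', path='/')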
""" clearables = [] for cookie in cookiejar: if cookie.name != name: continue if domain is not None and domain != cookie.domain: continue if path is not None and path != cookie.path: continue clearables.append((cookie.domain, cookie.path, cookie.name)) for domain, path, name in clearables: cookiejar.clear(domain, path, name) class CookieConflictError(RuntimeError): """There are two cookies that meet the criteria specified in the cookie jar. Use .get and .set and include domain and path args in order to be more specific. """ class RequestsCookieJar(cookielib.CookieJar, MutableMapping): """Compatibility class; is a cookielib.CookieJar, but exposes a dict interface. This is the CookieJar we create by default for requests and sessions that don't specify one, since some clients may expect response.cookies and session.cookies to support dict operations. Requests does not use the dict interface internally; it's just for compatibility with external client code. All requests code should work out of the box with externally provided instances of ``CookieJar``, e.g. ``LWPCookieJar`` and ``FileCookieJar``. Unlike a regular CookieJar, this class is pickleable. .. warning:: dictionary operations that are normally O(1) may be O(n). """ def get(self, name, default=None, domain=None, path=None): """Dict-like get() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains. .. warning:: operation is O(n), not O(1). """ try: return self._find_no_duplicates(name, domain, path) except KeyError: return default def set(self, name, value, **kwargs): """Dict-like set() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains. """ # support client code that unsets cookies by assignment of a None value: if value is None: remove_cookie_by_name(self, name, domain=kwargs.get('domain'), path=kwargs.get('path')) return if isinstance(value, Morsel): c = morsel_to_cookie(value) else: c = create_cookie(name, value, **kwargs) self.set_cookie(c) return c def iterkeys(self): """Dict-like iterkeys() that returns an iterator of names of cookies from the jar. .. seealso:: itervalues() and iteritems(). """ for cookie in iter(self): yield cookie.name def keys(self): """Dict-like keys() that returns a list of names of cookies from the jar. .. seealso:: values() and items(). """ return list(self.iterkeys()) def itervalues(self): """Dict-like itervalues() that returns an iterator of values of cookies from the jar. .. seealso:: iterkeys() and iteritems(). """ for cookie in iter(self): yield cookie.value def values(self): """Dict-like values() that returns a list of values of cookies from the jar. .. seealso:: keys() and items(). """ return list(self.itervalues()) def iteritems(self): """Dict-like iteritems() that returns an iterator of name-value tuples from the jar. .. seealso:: iterkeys() and itervalues(). """ for cookie in iter(self): yield cookie.name, cookie.value def items(self): """Dict-like items() that returns a list of name-value tuples from the jar. Allows client-code to call ``dict(RequestsCookieJar)`` and get a vanilla python dict of key value pairs. .. seealso:: keys() and values(). 
""" return list(self.iteritems()) def list_domains(self): """Utility method to list all the domains in the jar.""" domains = [] for cookie in iter(self): if cookie.domain not in domains: domains.append(cookie.domain) return domains def list_paths(self): """Utility method to list all the paths in the jar.""" paths = [] for cookie in iter(self): if cookie.path not in paths: paths.append(cookie.path) return paths def multiple_domains(self): """Returns True if there are multiple domains in the jar. Returns False otherwise. :rtype: bool """ domains = [] for cookie in iter(self): if cookie.domain is not None and cookie.domain in domains: return True domains.append(cookie.domain) return False # there is only one domain in jar def get_dict(self, domain=None, path=None): """Takes as an argument an optional domain and path and returns a plain old Python dict of name-value pairs of cookies that meet the requirements. :rtype: dict """ dictionary = {} for cookie in iter(self): if ( (domain is None or cookie.domain == domain) and (path is None or cookie.path == path) ): dictionary[cookie.name] = cookie.value return dictionary def __contains__(self, name): try: return super(RequestsCookieJar, self).__contains__(name) except CookieConflictError: return True def __getitem__(self, name): """Dict-like __getitem__() for compatibility with client code. Throws exception if there are more than one cookie with name. In that case, use the more explicit get() method instead. .. warning:: operation is O(n), not O(1). """ return self._find_no_duplicates(name) def __setitem__(self, name, value): """Dict-like __setitem__ for compatibility with client code. Throws exception if there is already a cookie of that name in the jar. In that case, use the more explicit set() method instead. """ self.set(name, value) def __delitem__(self, name): """Deletes a cookie given a name. Wraps ``cookielib.CookieJar``'s ``remove_cookie_by_name()``. """ remove_cookie_by_name(self, name) def set_cookie(self, cookie, *args, **kwargs): if hasattr(cookie.value, 'startswith') and cookie.value.startswith('"') and cookie.value.endswith('"'): cookie.value = cookie.value.replace('\\"', '') return super(RequestsCookieJar, self).set_cookie(cookie, *args, **kwargs) def update(self, other): """Updates this jar with cookies from another CookieJar or dict-like""" if isinstance(other, cookielib.CookieJar): for cookie in other: self.set_cookie(copy.copy(cookie)) else: super(RequestsCookieJar, self).update(other) def _find(self, name, domain=None, path=None): """Requests uses this method internally to get cookie values. If there are conflicting cookies, _find arbitrarily chooses one. See _find_no_duplicates if you want an exception thrown if there are conflicting cookies. :param name: a string containing name of cookie :param domain: (optional) string containing domain of cookie :param path: (optional) string containing path of cookie :return: cookie.value """ for cookie in iter(self): if cookie.name == name: if domain is None or cookie.domain == domain: if path is None or cookie.path == path: return cookie.value raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path)) def _find_no_duplicates(self, name, domain=None, path=None): """Both ``__get_item__`` and ``get`` call this function: it's never used elsewhere in Requests. 
:param name: a string containing name of cookie :param domain: (optional) string containing domain of cookie :param path: (optional) string containing path of cookie :raises KeyError: if cookie is not found :raises CookieConflictError: if there are multiple cookies that match name and optionally domain and path :return: cookie.value """ toReturn = None for cookie in iter(self): if cookie.name == name: if domain is None or cookie.domain == domain: if path is None or cookie.path == path: if toReturn is not None: # if there are multiple cookies that meet passed in criteria raise CookieConflictError('There are multiple cookies with name, %r' % (name)) toReturn = cookie.value # we will eventually return this as long as no cookie conflict if toReturn: return toReturn raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path)) def __getstate__(self): """Unlike a normal CookieJar, this class is pickleable.""" state = self.__dict__.copy() # remove the unpickleable RLock object state.pop('_cookies_lock') return state def __setstate__(self, state): """Unlike a normal CookieJar, this class is pickleable.""" self.__dict__.update(state) if '_cookies_lock' not in self.__dict__: self._cookies_lock = threading.RLock() def copy(self): """Return a copy of this RequestsCookieJar.""" new_cj = RequestsCookieJar() new_cj.set_policy(self.get_policy()) new_cj.update(self) return new_cj def get_policy(self): """Return the CookiePolicy instance used.""" return self._policy def _copy_cookie_jar(jar): if jar is None: return None if hasattr(jar, 'copy'): # We're dealing with an instance of RequestsCookieJar return jar.copy() # We're dealing with a generic CookieJar instance new_jar = copy.copy(jar) new_jar.clear() for cookie in jar: new_jar.set_cookie(copy.copy(cookie)) return new_jar def create_cookie(name, value, **kwargs): """Make a cookie from underspecified parameters. By default, the pair of `name` and `value` will be set for the domain '' and sent on every request (this is sometimes called a "supercookie"). 
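For example (a sketch)::

    >>> c = create_cookie('session', 'abc123', domain='example.com', secure=True)
    >>> c.domain, c.secure
    ('example.com', True)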
""" result = { 'version': 0, 'name': name, 'value': value, 'port': None, 'domain': '', 'path': '/', 'secure': False, 'expires': None, 'discard': True, 'comment': None, 'comment_url': None, 'rest': {'HttpOnly': None}, 'rfc2109': False, } badargs = set(kwargs) - set(result) if badargs: err = 'create_cookie() got unexpected keyword arguments: %s' raise TypeError(err % list(badargs)) result.update(kwargs) result['port_specified'] = bool(result['port']) result['domain_specified'] = bool(result['domain']) result['domain_initial_dot'] = result['domain'].startswith('.') result['path_specified'] = bool(result['path']) return cookielib.Cookie(**result) def morsel_to_cookie(morsel): """Convert a Morsel object into a Cookie containing the one k/v pair.""" expires = None if morsel['max-age']: try: expires = int(time.time() + int(morsel['max-age'])) except ValueError: raise TypeError('max-age: %s must be integer' % morsel['max-age']) elif morsel['expires']: time_template = '%a, %d-%b-%Y %H:%M:%S GMT' expires = calendar.timegm( time.strptime(morsel['expires'], time_template) ) return create_cookie( comment=morsel['comment'], comment_url=bool(morsel['comment']), discard=False, domain=morsel['domain'], expires=expires, name=morsel.key, path=morsel['path'], port=None, rest={'HttpOnly': morsel['httponly']}, rfc2109=False, secure=bool(morsel['secure']), value=morsel.value, version=morsel['version'] or 0, ) def cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True): """Returns a CookieJar from a key/value dictionary. :param cookie_dict: Dict of key/values to insert into CookieJar. :param cookiejar: (optional) A cookiejar to add the cookies to. :param overwrite: (optional) If False, will not replace cookies already in the jar with new ones. :rtype: CookieJar """ if cookiejar is None: cookiejar = RequestsCookieJar() if cookie_dict is not None: names_from_jar = [cookie.name for cookie in cookiejar] for name in cookie_dict: if overwrite or (name not in names_from_jar): cookiejar.set_cookie(create_cookie(name, cookie_dict[name])) return cookiejar def merge_cookies(cookiejar, cookies): """Add cookies to cookiejar and returns a merged CookieJar. :param cookiejar: CookieJar object to add the cookies to. :param cookies: Dictionary or CookieJar object to be added. 
:rtype: CookieJar """ if not isinstance(cookiejar, cookielib.CookieJar): raise ValueError('You can only merge into CookieJar') if isinstance(cookies, dict): cookiejar = cookiejar_from_dict( cookies, cookiejar=cookiejar, overwrite=False) elif isinstance(cookies, cookielib.CookieJar): try: cookiejar.update(cookies) except AttributeError: for cookie_in_jar in cookies: cookiejar.set_cookie(cookie_in_jar) return cookiejar
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/exceptions.py
# -*- coding: utf-8 -*- """ requests.exceptions ~~~~~~~~~~~~~~~~~~~ This module contains the set of Requests' exceptions. """ from pip._vendor.urllib3.exceptions import HTTPError as BaseHTTPError class RequestException(IOError): """There was an ambiguous exception that occurred while handling your request. """ def __init__(self, *args, **kwargs): """Initialize RequestException with `request` and `response` objects.""" response = kwargs.pop('response', None) self.response = response self.request = kwargs.pop('request', None) if (response is not None and not self.request and hasattr(response, 'request')): self.request = self.response.request super(RequestException, self).__init__(*args, **kwargs) class HTTPError(RequestException): """An HTTP error occurred.""" class ConnectionError(RequestException): """A Connection error occurred.""" class ProxyError(ConnectionError): """A proxy error occurred.""" class SSLError(ConnectionError): """An SSL error occurred.""" class Timeout(RequestException): """The request timed out. Catching this error will catch both :exc:`~requests.exceptions.ConnectTimeout` and :exc:`~requests.exceptions.ReadTimeout` errors. """ class ConnectTimeout(ConnectionError, Timeout): """The request timed out while trying to connect to the remote server. Requests that produced this error are safe to retry.
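A sketch of a retry loop built on that guarantee (the URL and retry count are illustrative)::

    >>> import requests
    >>> for _ in range(3):  # doctest: +SKIP
    ...     try:
    ...         r = requests.get('https://example.com/', timeout=3.05)
    ...         break
    ...     except requests.exceptions.ConnectTimeout:
    ...         continue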
""" class ReadTimeout(Timeout): """The server did not send any data in the allotted amount of time.""" class URLRequired(RequestException): """A valid URL is required to make a request.""" class TooManyRedirects(RequestException): """Too many redirects.""" class MissingSchema(RequestException, ValueError): """The URL schema (e.g. http or https) is missing.""" class InvalidSchema(RequestException, ValueError): """See defaults.py for valid schemas.""" class InvalidURL(RequestException, ValueError): """The URL provided was somehow invalid.""" class InvalidHeader(RequestException, ValueError): """The header value provided was somehow invalid.""" class InvalidProxyURL(InvalidURL): """The proxy URL provided is invalid.""" class ChunkedEncodingError(RequestException): """The server declared chunked encoding but sent an invalid chunk.""" class ContentDecodingError(RequestException, BaseHTTPError): """Failed to decode response content""" class StreamConsumedError(RequestException, TypeError): """The content for this response was already consumed""" class RetryError(RequestException): """Custom retries logic failed""" class UnrewindableBodyError(RequestException): """Requests encountered an error when trying to rewind a body""" # Warnings class RequestsWarning(Warning): """Base warning for Requests.""" pass class FileModeWarning(RequestsWarning, DeprecationWarning): """A file was opened in text mode, but Requests determined its binary length.""" pass class RequestsDependencyWarning(RequestsWarning): """An imported dependency doesn't match the expected version range.""" pass ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/help.py�������������������������0000644�0001750�0001750�00000006772�00000000000�027656� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������"""Module containing bug report 
helper(s).""" from __future__ import print_function import json import platform import sys import ssl from pip._vendor import idna from pip._vendor import urllib3 from pip._vendor import chardet from . import __version__ as requests_version try: from pip._vendor.urllib3.contrib import pyopenssl except ImportError: pyopenssl = None OpenSSL = None cryptography = None else: import OpenSSL import cryptography def _implementation(): """Return a dict with the Python implementation and version. Provide both the name and the version of the Python implementation currently running. For example, on CPython 2.7.5 it will return {'name': 'CPython', 'version': '2.7.5'}. This function works best on CPython and PyPy: in particular, it probably doesn't work for Jython or IronPython. Future investigation should be done to work out the correct shape of the code for those platforms. """ implementation = platform.python_implementation() if implementation == 'CPython': implementation_version = platform.python_version() elif implementation == 'PyPy': implementation_version = '%s.%s.%s' % (sys.pypy_version_info.major, sys.pypy_version_info.minor, sys.pypy_version_info.micro) if sys.pypy_version_info.releaselevel != 'final': implementation_version = ''.join([ implementation_version, sys.pypy_version_info.releaselevel ]) elif implementation == 'Jython': implementation_version = platform.python_version() # Complete Guess elif implementation == 'IronPython': implementation_version = platform.python_version() # Complete Guess else: implementation_version = 'Unknown' return {'name': implementation, 'version': implementation_version} def info(): """Generate information for a bug report.""" try: platform_info = { 'system': platform.system(), 'release': platform.release(), } except IOError: platform_info = { 'system': 'Unknown', 'release': 'Unknown', } implementation_info = _implementation() urllib3_info = {'version': urllib3.__version__} chardet_info = {'version': chardet.__version__} pyopenssl_info = { 'version': None, 'openssl_version': '', } if OpenSSL: pyopenssl_info = { 'version': OpenSSL.__version__, 'openssl_version': '%x' % OpenSSL.SSL.OPENSSL_VERSION_NUMBER, } cryptography_info = { 'version': getattr(cryptography, '__version__', ''), } idna_info = { 'version': getattr(idna, '__version__', ''), } system_ssl = ssl.OPENSSL_VERSION_NUMBER system_ssl_info = { 'version': '%x' % system_ssl if system_ssl is not None else '' } return { 'platform': platform_info, 'implementation': implementation_info, 'system_ssl': system_ssl_info, 'using_pyopenssl': pyopenssl is not None, 'pyOpenSSL': pyopenssl_info, 'urllib3': urllib3_info, 'chardet': chardet_info, 'cryptography': cryptography_info, 'idna': idna_info, 'requests': { 'version': requests_version, }, } def main(): """Pretty-print the bug information as JSON.""" print(json.dumps(info(), sort_keys=True, indent=2)) if __name__ == '__main__': main() ������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/hooks.py
# -*- coding: utf-8 -*- """ requests.hooks ~~~~~~~~~~~~~~ This module provides the capabilities for the Requests hooks system. Available hooks: ``response``: The response generated from a Request. """ HOOKS = ['response'] def default_hooks(): return {event: [] for event in HOOKS} # TODO: response is the only one def dispatch_hook(key, hooks, hook_data, **kwargs): """Dispatches a hook dictionary on a given piece of data.""" hooks = hooks or {} hooks = hooks.get(key) if hooks: if hasattr(hooks, '__call__'): hooks = [hooks] for hook in hooks: _hook_data = hook(hook_data, **kwargs) if _hook_data is not None: hook_data = _hook_data return hook_data
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/models.py
# -*- coding: utf-8 -*- """ requests.models ~~~~~~~~~~~~~~~ This module contains the primary
objects that power Requests. """ import datetime import sys # Import encoding now, to avoid implicit import later. # Implicit import within threads may cause LookupError when standard library is in a ZIP, # such as in Embedded Python. See https://github.com/requests/requests/issues/3578. import encodings.idna from pip._vendor.urllib3.fields import RequestField from pip._vendor.urllib3.filepost import encode_multipart_formdata from pip._vendor.urllib3.util import parse_url from pip._vendor.urllib3.exceptions import ( DecodeError, ReadTimeoutError, ProtocolError, LocationParseError) from io import UnsupportedOperation from .hooks import default_hooks from .structures import CaseInsensitiveDict from .auth import HTTPBasicAuth from .cookies import cookiejar_from_dict, get_cookie_header, _copy_cookie_jar from .exceptions import ( HTTPError, MissingSchema, InvalidURL, ChunkedEncodingError, ContentDecodingError, ConnectionError, StreamConsumedError) from ._internal_utils import to_native_string, unicode_is_ascii from .utils import ( guess_filename, get_auth_from_url, requote_uri, stream_decode_response_unicode, to_key_val_list, parse_header_links, iter_slices, guess_json_utf, super_len, check_header_validity) from .compat import ( Callable, Mapping, cookielib, urlunparse, urlsplit, urlencode, str, bytes, is_py2, chardet, builtin_str, basestring) from .compat import json as complexjson from .status_codes import codes #: The set of HTTP status codes that indicate an automatically #: processable redirect. REDIRECT_STATI = ( codes.moved, # 301 codes.found, # 302 codes.other, # 303 codes.temporary_redirect, # 307 codes.permanent_redirect, # 308 ) DEFAULT_REDIRECT_LIMIT = 30 CONTENT_CHUNK_SIZE = 10 * 1024 ITER_CHUNK_SIZE = 512 class RequestEncodingMixin(object): @property def path_url(self): """Build the path URL to use.""" url = [] p = urlsplit(self.url) path = p.path if not path: path = '/' url.append(path) query = p.query if query: url.append('?') url.append(query) return ''.join(url) @staticmethod def _encode_params(data): """Encode parameters in a piece of data. Will successfully encode parameters when passed as a dict or a list of 2-tuples. Order is retained if data is a list of 2-tuples but arbitrary if parameters are supplied as a dict. """ if isinstance(data, (str, bytes)): return data elif hasattr(data, 'read'): return data elif hasattr(data, '__iter__'): result = [] for k, vs in to_key_val_list(data): if isinstance(vs, basestring) or not hasattr(vs, '__iter__'): vs = [vs] for v in vs: if v is not None: result.append( (k.encode('utf-8') if isinstance(k, str) else k, v.encode('utf-8') if isinstance(v, str) else v)) return urlencode(result, doseq=True) else: return data @staticmethod def _encode_files(files, data): """Build the body for a multipart/form-data request. Will successfully encode files when passed as a dict or a list of tuples. Order is retained if data is a list of tuples but arbitrary if parameters are supplied as a dict. The tuples may be 2-tuples (filename, fileobj), 3-tuples (filename, fileobj, content_type) or 4-tuples (filename, fileobj, content_type, custom_headers).
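For example (a sketch; the field name, filename, and header are illustrative)::

    files = {'report': ('report.csv', open('report.csv', 'rb'),
                        'text/csv', {'X-Meta': '1'})}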
""" if (not files): raise ValueError("Files must be provided.") elif isinstance(data, basestring): raise ValueError("Data must not be a string.") new_fields = [] fields = to_key_val_list(data or {}) files = to_key_val_list(files or {}) for field, val in fields: if isinstance(val, basestring) or not hasattr(val, '__iter__'): val = [val] for v in val: if v is not None: # Don't call str() on bytestrings: in Py3 it all goes wrong. if not isinstance(v, bytes): v = str(v) new_fields.append( (field.decode('utf-8') if isinstance(field, bytes) else field, v.encode('utf-8') if isinstance(v, str) else v)) for (k, v) in files: # support for explicit filename ft = None fh = None if isinstance(v, (tuple, list)): if len(v) == 2: fn, fp = v elif len(v) == 3: fn, fp, ft = v else: fn, fp, ft, fh = v else: fn = guess_filename(v) or k fp = v if isinstance(fp, (str, bytes, bytearray)): fdata = fp elif hasattr(fp, 'read'): fdata = fp.read() elif fp is None: continue else: fdata = fp rf = RequestField(name=k, data=fdata, filename=fn, headers=fh) rf.make_multipart(content_type=ft) new_fields.append(rf) body, content_type = encode_multipart_formdata(new_fields) return body, content_type class RequestHooksMixin(object): def register_hook(self, event, hook): """Properly register a hook.""" if event not in self.hooks: raise ValueError('Unsupported event specified, with event name "%s"' % (event)) if isinstance(hook, Callable): self.hooks[event].append(hook) elif hasattr(hook, '__iter__'): self.hooks[event].extend(h for h in hook if isinstance(h, Callable)) def deregister_hook(self, event, hook): """Deregister a previously registered hook. Returns True if the hook existed, False if not. """ try: self.hooks[event].remove(hook) return True except ValueError: return False class Request(RequestHooksMixin): """A user-created :class:`Request <Request>` object. Used to prepare a :class:`PreparedRequest <PreparedRequest>`, which is sent to the server. :param method: HTTP method to use. :param url: URL to send. :param headers: dictionary of headers to send. :param files: dictionary of {filename: fileobject} files to multipart upload. :param data: the body to attach to the request. If a dictionary or list of tuples ``[(key, value)]`` is provided, form-encoding will take place. :param json: json for the body to attach to the request (if files or data is not specified). :param params: URL parameters to append to the URL. If a dictionary or list of tuples ``[(key, value)]`` is provided, form-encoding will take place. :param auth: Auth handler or (user, pass) tuple. :param cookies: dictionary or CookieJar of cookies to attach to this request. :param hooks: dictionary of callback hooks, for internal usage. Usage:: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> req.prepare() <PreparedRequest [GET]> """ def __init__(self, method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None): # Default empty dicts for dict params. 
data = [] if data is None else data files = [] if files is None else files headers = {} if headers is None else headers params = {} if params is None else params hooks = {} if hooks is None else hooks self.hooks = default_hooks() for (k, v) in list(hooks.items()): self.register_hook(event=k, hook=v) self.method = method self.url = url self.headers = headers self.files = files self.data = data self.json = json self.params = params self.auth = auth self.cookies = cookies def __repr__(self): return '<Request [%s]>' % (self.method) def prepare(self): """Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it.""" p = PreparedRequest() p.prepare( method=self.method, url=self.url, headers=self.headers, files=self.files, data=self.data, json=self.json, params=self.params, auth=self.auth, cookies=self.cookies, hooks=self.hooks, ) return p class PreparedRequest(RequestEncodingMixin, RequestHooksMixin): """The fully mutable :class:`PreparedRequest <PreparedRequest>` object, containing the exact bytes that will be sent to the server. Generated from either a :class:`Request <Request>` object or manually. Usage:: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> r = req.prepare() >>> r <PreparedRequest [GET]> >>> s = requests.Session() >>> s.send(r) <Response [200]> """ def __init__(self): #: HTTP verb to send to the server. self.method = None #: HTTP URL to send the request to. self.url = None #: dictionary of HTTP headers. self.headers = None # The `CookieJar` used to create the Cookie header will be stored here # after prepare_cookies is called self._cookies = None #: request body to send to the server. self.body = None #: dictionary of callback hooks, for internal usage. self.hooks = default_hooks() #: integer denoting starting position of a readable file-like body. self._body_position = None def prepare(self, method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None): """Prepares the entire request with the given parameters.""" self.prepare_method(method) self.prepare_url(url, params) self.prepare_headers(headers) self.prepare_cookies(cookies) self.prepare_body(data, files, json) self.prepare_auth(auth, url) # Note that prepare_auth must be last to enable authentication schemes # such as OAuth to work on a fully prepared request. # This MUST go after prepare_auth. Authenticators could add a hook self.prepare_hooks(hooks) def __repr__(self): return '<PreparedRequest [%s]>' % (self.method) def copy(self): p = PreparedRequest() p.method = self.method p.url = self.url p.headers = self.headers.copy() if self.headers is not None else None p._cookies = _copy_cookie_jar(self._cookies) p.body = self.body p.hooks = self.hooks p._body_position = self._body_position return p def prepare_method(self, method): """Prepares the given HTTP method.""" self.method = method if self.method is not None: self.method = to_native_string(self.method.upper()) @staticmethod def _get_idna_encoded_host(host): from pip._vendor import idna try: host = idna.encode(host, uts46=True).decode('utf-8') except idna.IDNAError: raise UnicodeError return host def prepare_url(self, url, params): """Prepares the given HTTP URL.""" #: Accept objects that have string representations. #: We're unable to blindly call unicode/str functions #: as this will include the bytestring indicator (b'') #: on python 3.x.
#: https://github.com/requests/requests/pull/2238 if isinstance(url, bytes): url = url.decode('utf8') else: url = unicode(url) if is_py2 else str(url) # Remove leading whitespaces from url url = url.lstrip() # Don't do any URL preparation for non-HTTP schemes like `mailto`, # `data` etc to work around exceptions from `url_parse`, which # handles RFC 3986 only. if ':' in url and not url.lower().startswith('http'): self.url = url return # Support for unicode domain names and paths. try: scheme, auth, host, port, path, query, fragment = parse_url(url) except LocationParseError as e: raise InvalidURL(*e.args) if not scheme: error = ("Invalid URL {0!r}: No schema supplied. Perhaps you meant http://{0}?") error = error.format(to_native_string(url, 'utf8')) raise MissingSchema(error) if not host: raise InvalidURL("Invalid URL %r: No host supplied" % url) # In general, we want to try IDNA encoding the hostname if the string contains # non-ASCII characters. This allows users to automatically get the correct IDNA # behaviour. For strings containing only ASCII characters, we need to also verify # it doesn't start with a wildcard (*), before allowing the unencoded hostname. if not unicode_is_ascii(host): try: host = self._get_idna_encoded_host(host) except UnicodeError: raise InvalidURL('URL has an invalid label.') elif host.startswith(u'*'): raise InvalidURL('URL has an invalid label.') # Carefully reconstruct the network location netloc = auth or '' if netloc: netloc += '@' netloc += host if port: netloc += ':' + str(port) # Bare domains aren't valid URLs. if not path: path = '/' if is_py2: if isinstance(scheme, str): scheme = scheme.encode('utf-8') if isinstance(netloc, str): netloc = netloc.encode('utf-8') if isinstance(path, str): path = path.encode('utf-8') if isinstance(query, str): query = query.encode('utf-8') if isinstance(fragment, str): fragment = fragment.encode('utf-8') if isinstance(params, (str, bytes)): params = to_native_string(params) enc_params = self._encode_params(params) if enc_params: if query: query = '%s&%s' % (query, enc_params) else: query = enc_params url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment])) self.url = url def prepare_headers(self, headers): """Prepares the given HTTP headers.""" self.headers = CaseInsensitiveDict() if headers: for header in headers.items(): # Raise exception on invalid header value. check_header_validity(header) name, value = header self.headers[to_native_string(name)] = value def prepare_body(self, data, files, json=None): """Prepares the given HTTP body data.""" # Check if file, fo, generator, iterator. # If not, run through normal process. # Nottin' on you. body = None content_type = None if not data and json is not None: # urllib3 requires a bytes-like body. Python 2's json.dumps # provides this natively, but Python 3 gives a Unicode string. content_type = 'application/json' body = complexjson.dumps(json) if not isinstance(body, bytes): body = body.encode('utf-8') is_stream = all([ hasattr(data, '__iter__'), not isinstance(data, (basestring, list, tuple, Mapping)) ]) try: length = super_len(data) except (TypeError, AttributeError, UnsupportedOperation): length = None if is_stream: body = data if getattr(body, 'tell', None) is not None: # Record the current file position before reading. # This will allow us to rewind a file in the event # of a redirect. 
try: self._body_position = body.tell() except (IOError, OSError): # This differentiates from None, allowing us to catch # a failed `tell()` later when trying to rewind the body self._body_position = object() if files: raise NotImplementedError('Streamed bodies and files are mutually exclusive.') if length: self.headers['Content-Length'] = builtin_str(length) else: self.headers['Transfer-Encoding'] = 'chunked' else: # Multi-part file uploads. if files: (body, content_type) = self._encode_files(files, data) else: if data: body = self._encode_params(data) if isinstance(data, basestring) or hasattr(data, 'read'): content_type = None else: content_type = 'application/x-www-form-urlencoded' self.prepare_content_length(body) # Add content-type if it wasn't explicitly provided. if content_type and ('content-type' not in self.headers): self.headers['Content-Type'] = content_type self.body = body def prepare_content_length(self, body): """Prepare Content-Length header based on request method and body""" if body is not None: length = super_len(body) if length: # If length exists, set it. Otherwise, we fallback # to Transfer-Encoding: chunked. self.headers['Content-Length'] = builtin_str(length) elif self.method not in ('GET', 'HEAD') and self.headers.get('Content-Length') is None: # Set Content-Length to 0 for methods that can have a body # but don't provide one. (i.e. not GET or HEAD) self.headers['Content-Length'] = '0' def prepare_auth(self, auth, url=''): """Prepares the given HTTP auth data.""" # If no Auth is explicitly provided, extract it from the URL first. if auth is None: url_auth = get_auth_from_url(self.url) auth = url_auth if any(url_auth) else None if auth: if isinstance(auth, tuple) and len(auth) == 2: # special-case basic HTTP auth auth = HTTPBasicAuth(*auth) # Allow auth to make its changes. r = auth(self) # Update self to reflect the auth changes. self.__dict__.update(r.__dict__) # Recompute Content-Length self.prepare_content_length(self.body) def prepare_cookies(self, cookies): """Prepares the given HTTP cookie data. This function eventually generates a ``Cookie`` header from the given cookies using cookielib. Due to cookielib's design, the header will not be regenerated if it already exists, meaning this function can only be called once for the life of the :class:`PreparedRequest <PreparedRequest>` object. Any subsequent calls to ``prepare_cookies`` will have no actual effect, unless the "Cookie" header is removed beforehand. """ if isinstance(cookies, cookielib.CookieJar): self._cookies = cookies else: self._cookies = cookiejar_from_dict(cookies) cookie_header = get_cookie_header(self._cookies, self) if cookie_header is not None: self.headers['Cookie'] = cookie_header def prepare_hooks(self, hooks): """Prepares the given hooks.""" # hooks can be passed as None to the prepare method and to this # method. To prevent iterating over None, simply use an empty list # if hooks is False-y hooks = hooks or [] for event in hooks: self.register_hook(event, hooks[event]) class Response(object): """The :class:`Response <Response>` object, which contains a server's response to an HTTP request. """ __attrs__ = [ '_content', 'status_code', 'headers', 'url', 'history', 'encoding', 'reason', 'cookies', 'elapsed', 'request' ] def __init__(self): self._content = False self._content_consumed = False self._next = None #: Integer Code of responded HTTP Status, e.g. 404 or 200. self.status_code = None #: Case-insensitive Dictionary of Response Headers. 
#: For example, ``headers['content-encoding']`` will return the #: value of a ``'Content-Encoding'`` response header. self.headers = CaseInsensitiveDict() #: File-like object representation of response (for advanced usage). #: Use of ``raw`` requires that ``stream=True`` be set on the request. # This requirement does not apply for use internally to Requests. self.raw = None #: Final URL location of Response. self.url = None #: Encoding to decode with when accessing r.text. self.encoding = None #: A list of :class:`Response <Response>` objects from #: the history of the Request. Any redirect responses will end #: up here. The list is sorted from the oldest to the most recent request. self.history = [] #: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK". self.reason = None #: A CookieJar of Cookies the server sent back. self.cookies = cookiejar_from_dict({}) #: The amount of time elapsed between sending the request #: and the arrival of the response (as a timedelta). #: This property specifically measures the time taken between sending #: the first byte of the request and finishing parsing the headers. It #: is therefore unaffected by consuming the response content or the #: value of the ``stream`` keyword argument. self.elapsed = datetime.timedelta(0) #: The :class:`PreparedRequest <PreparedRequest>` object to which this #: is a response. self.request = None def __enter__(self): return self def __exit__(self, *args): self.close() def __getstate__(self): # Consume everything; accessing the content attribute makes # sure the content has been fully read. if not self._content_consumed: self.content return {attr: getattr(self, attr, None) for attr in self.__attrs__} def __setstate__(self, state): for name, value in state.items(): setattr(self, name, value) # pickled objects do not have .raw setattr(self, '_content_consumed', True) setattr(self, 'raw', None) def __repr__(self): return '<Response [%s]>' % (self.status_code) def __bool__(self): """Returns True if :attr:`status_code` is less than 400. This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code, is between 200 and 400, this will return True. This is **not** a check to see if the response code is ``200 OK``. """ return self.ok def __nonzero__(self): """Returns True if :attr:`status_code` is less than 400. This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code, is between 200 and 400, this will return True. This is **not** a check to see if the response code is ``200 OK``. """ return self.ok def __iter__(self): """Allows you to use a response as an iterator.""" return self.iter_content(128) @property def ok(self): """Returns True if :attr:`status_code` is less than 400, False if not. This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code is between 200 and 400, this will return True. This is **not** a check to see if the response code is ``200 OK``. """ try: self.raise_for_status() except HTTPError: return False return True @property def is_redirect(self): """True if this Response is a well-formed HTTP redirect that could have been processed automatically (by :meth:`Session.resolve_redirects`). 
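        Concretely, that means the response carries a ``Location`` header and
        its status code is one of 301, 302, 303, 307 or 308 (``REDIRECT_STATI``).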
""" return ('location' in self.headers and self.status_code in REDIRECT_STATI) @property def is_permanent_redirect(self): """True if this Response one of the permanent versions of redirect.""" return ('location' in self.headers and self.status_code in (codes.moved_permanently, codes.permanent_redirect)) @property def next(self): """Returns a PreparedRequest for the next request in a redirect chain, if there is one.""" return self._next @property def apparent_encoding(self): """The apparent encoding, provided by the chardet library.""" return chardet.detect(self.content)['encoding'] def iter_content(self, chunk_size=1, decode_unicode=False): """Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place. chunk_size must be of type int or None. A value of None will function differently depending on the value of `stream`. stream=True will read data as it arrives in whatever size the chunks are received. If stream=False, data is returned as a single chunk. If decode_unicode is True, content will be decoded using the best available encoding based on the response. """ def generate(): # Special case for urllib3. if hasattr(self.raw, 'stream'): try: for chunk in self.raw.stream(chunk_size, decode_content=True): yield chunk except ProtocolError as e: raise ChunkedEncodingError(e) except DecodeError as e: raise ContentDecodingError(e) except ReadTimeoutError as e: raise ConnectionError(e) else: # Standard file-like object. while True: chunk = self.raw.read(chunk_size) if not chunk: break yield chunk self._content_consumed = True if self._content_consumed and isinstance(self._content, bool): raise StreamConsumedError() elif chunk_size is not None and not isinstance(chunk_size, int): raise TypeError("chunk_size must be an int, it is instead a %s." % type(chunk_size)) # simulate reading small chunks of the content reused_chunks = iter_slices(self._content, chunk_size) stream_chunks = generate() chunks = reused_chunks if self._content_consumed else stream_chunks if decode_unicode: chunks = stream_decode_response_unicode(chunks, self) return chunks def iter_lines(self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=False, delimiter=None): """Iterates over the response data, one line at a time. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. .. note:: This method is not reentrant safe. """ pending = None for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode): if pending is not None: chunk = pending + chunk if delimiter: lines = chunk.split(delimiter) else: lines = chunk.splitlines() if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]: pending = lines.pop() else: pending = None for line in lines: yield line if pending is not None: yield pending @property def content(self): """Content of the response, in bytes.""" if self._content is False: # Read the contents. if self._content_consumed: raise RuntimeError( 'The content for this response was already consumed') if self.status_code == 0 or self.raw is None: self._content = None else: self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b'' self._content_consumed = True # don't need to release the connection; that's been handled by urllib3 # since we exhausted the data. 
return self._content @property def text(self): """Content of the response, in unicode. If Response.encoding is None, encoding will be guessed using ``chardet``. The encoding of the response content is determined based solely on HTTP headers, following RFC 2616 to the letter. If you can take advantage of non-HTTP knowledge to make a better guess at the encoding, you should set ``r.encoding`` appropriately before accessing this property. """ # Try charset from content-type content = None encoding = self.encoding if not self.content: return str('') # Fallback to auto-detected encoding. if self.encoding is None: encoding = self.apparent_encoding # Decode unicode from given encoding. try: content = str(self.content, encoding, errors='replace') except (LookupError, TypeError): # A LookupError is raised if the encoding was not found which could # indicate a misspelling or similar mistake. # # A TypeError can be raised if encoding is None # # So we try blindly encoding. content = str(self.content, errors='replace') return content def json(self, **kwargs): r"""Returns the json-encoded content of a response, if any. :param \*\*kwargs: Optional arguments that ``json.loads`` takes. :raises ValueError: If the response body does not contain valid json. """ if not self.encoding and self.content and len(self.content) > 3: # No encoding set. JSON RFC 4627 section 3 states we should expect # UTF-8, -16 or -32. Detect which one to use; If the detection or # decoding fails, fall back to `self.text` (using chardet to make # a best guess). encoding = guess_json_utf(self.content) if encoding is not None: try: return complexjson.loads( self.content.decode(encoding), **kwargs ) except UnicodeDecodeError: # Wrong UTF codec detected; usually because it's not UTF-8 # but some other 8-bit codec. This is an RFC violation, # and the server didn't bother to tell us what codec *was* # used. pass return complexjson.loads(self.text, **kwargs) @property def links(self): """Returns the parsed header links of the response, if any.""" header = self.headers.get('link') # l = MultiDict() l = {} if header: links = parse_header_links(header) for link in links: key = link.get('rel') or link.get('url') l[key] = link return l def raise_for_status(self): """Raises stored :class:`HTTPError`, if one occurred.""" http_error_msg = '' if isinstance(self.reason, bytes): # We attempt to decode utf-8 first because some servers # choose to localize their reason strings. If the string # isn't utf-8, we fall back to iso-8859-1 for all other # encodings. (See PR #3538) try: reason = self.reason.decode('utf-8') except UnicodeDecodeError: reason = self.reason.decode('iso-8859-1') else: reason = self.reason if 400 <= self.status_code < 500: http_error_msg = u'%s Client Error: %s for url: %s' % (self.status_code, reason, self.url) elif 500 <= self.status_code < 600: http_error_msg = u'%s Server Error: %s for url: %s' % (self.status_code, reason, self.url) if http_error_msg: raise HTTPError(http_error_msg, response=self) def close(self): """Releases the connection back to the pool. Once this method has been called the underlying ``raw`` object must not be accessed again. 
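        Using the response as a context manager (a ``with`` block) calls this
        automatically on exit, via :meth:`__exit__`.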
        *Note: Should not normally need to be called explicitly.*
        """
        if not self._content_consumed:
            self.raw.close()

        release_conn = getattr(self.raw, 'release_conn', None)
        if release_conn is not None:
            release_conn()


# ----------------------------------------------------------------------------
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/packages.py
# ----------------------------------------------------------------------------

import sys

# This code exists for backwards compatibility reasons.
# I don't like it either. Just look the other way. :)

for package in ('urllib3', 'idna', 'chardet'):
    vendored_package = "pip._vendor." + package
    locals()[package] = __import__(vendored_package)
    # This traversal is apparently necessary such that the identities are
    # preserved (requests.packages.urllib3.* is urllib3.*)
    for mod in list(sys.modules):
        if mod == vendored_package or mod.startswith(vendored_package + '.'):
            unprefixed_mod = mod[len("pip._vendor."):]
            sys.modules['pip._vendor.requests.packages.' + unprefixed_mod] = sys.modules[mod]

# Kinda cool, though, right?
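
# A short doctest-style illustration of the aliasing above (assuming pip's
# vendored tree is importable; the point is module *identity*, not equal names):
#
#     >>> import sys
#     >>> import pip._vendor.requests.packages
#     >>> from pip._vendor import urllib3
#     >>> sys.modules['pip._vendor.requests.packages.urllib3'] is urllib3
#     True
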
# ----------------------------------------------------------------------------
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/sessions.py
# ----------------------------------------------------------------------------

# -*- coding: utf-8 -*-

"""
requests.session
~~~~~~~~~~~~~~~~

This module provides a Session object to manage and persist settings across
requests (cookies, auth, proxies).
"""
import os
import sys
import time
from datetime import timedelta

from .auth import _basic_auth_str
from .compat import cookielib, is_py3, OrderedDict, urljoin, urlparse, Mapping
from .cookies import (
    cookiejar_from_dict, extract_cookies_to_jar, RequestsCookieJar, merge_cookies)
from .models import Request, PreparedRequest, DEFAULT_REDIRECT_LIMIT
from .hooks import default_hooks, dispatch_hook
from ._internal_utils import to_native_string
from .utils import to_key_val_list, default_headers, DEFAULT_PORTS
from .exceptions import (
    TooManyRedirects, InvalidSchema, ChunkedEncodingError, ContentDecodingError)

from .structures import CaseInsensitiveDict
from .adapters import HTTPAdapter

from .utils import (
    requote_uri, get_environ_proxies, get_netrc_auth, should_bypass_proxies,
    get_auth_from_url, rewind_body
)

from .status_codes import codes

# formerly defined here, reexposed here for backward compatibility
from .models import REDIRECT_STATI

# Preferred clock, based on which one is more accurate on a given system.
if sys.platform == 'win32':
    try:  # Python 3.4+
        preferred_clock = time.perf_counter
    except AttributeError:  # Earlier than Python 3.
        preferred_clock = time.clock
else:
    preferred_clock = time.time


def merge_setting(request_setting, session_setting, dict_class=OrderedDict):
    """Determines appropriate setting for a given request, taking into account
    the explicit setting on that request, and the setting in the session.
If a setting is a dictionary, they will be merged together using `dict_class` """ if session_setting is None: return request_setting if request_setting is None: return session_setting # Bypass if not a dictionary (e.g. verify) if not ( isinstance(session_setting, Mapping) and isinstance(request_setting, Mapping) ): return request_setting merged_setting = dict_class(to_key_val_list(session_setting)) merged_setting.update(to_key_val_list(request_setting)) # Remove keys that are set to None. Extract keys first to avoid altering # the dictionary during iteration. none_keys = [k for (k, v) in merged_setting.items() if v is None] for key in none_keys: del merged_setting[key] return merged_setting def merge_hooks(request_hooks, session_hooks, dict_class=OrderedDict): """Properly merges both requests and session hooks. This is necessary because when request_hooks == {'response': []}, the merge breaks Session hooks entirely. """ if session_hooks is None or session_hooks.get('response') == []: return request_hooks if request_hooks is None or request_hooks.get('response') == []: return session_hooks return merge_setting(request_hooks, session_hooks, dict_class) class SessionRedirectMixin(object): def get_redirect_target(self, resp): """Receives a Response. Returns a redirect URI or ``None``""" # Due to the nature of how requests processes redirects this method will # be called at least once upon the original response and at least twice # on each subsequent redirect response (if any). # If a custom mixin is used to handle this logic, it may be advantageous # to cache the redirect location onto the response object as a private # attribute. if resp.is_redirect: location = resp.headers['location'] # Currently the underlying http module on py3 decode headers # in latin1, but empirical evidence suggests that latin1 is very # rarely used with non-ASCII characters in HTTP headers. # It is more likely to get UTF8 header rather than latin1. # This causes incorrect handling of UTF8 encoded location headers. # To solve this, we re-encode the location in latin1. if is_py3: location = location.encode('latin1') return to_native_string(location, 'utf8') return None def should_strip_auth(self, old_url, new_url): """Decide whether Authorization header should be removed when redirecting""" old_parsed = urlparse(old_url) new_parsed = urlparse(new_url) if old_parsed.hostname != new_parsed.hostname: return True # Special case: allow http -> https redirect when using the standard # ports. This isn't specified by RFC 7235, but is kept to avoid # breaking backwards compatibility with older versions of requests # that allowed any redirects on the same host. if (old_parsed.scheme == 'http' and old_parsed.port in (80, None) and new_parsed.scheme == 'https' and new_parsed.port in (443, None)): return False # Handle default port usage corresponding to scheme. changed_port = old_parsed.port != new_parsed.port changed_scheme = old_parsed.scheme != new_parsed.scheme default_port = (DEFAULT_PORTS.get(old_parsed.scheme, None), None) if (not changed_scheme and old_parsed.port in default_port and new_parsed.port in default_port): return False # Standard case: root URI must match return changed_port or changed_scheme def resolve_redirects(self, resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs): """Receives a Response. 
Returns a generator of Responses or Requests.""" hist = [] # keep track of history url = self.get_redirect_target(resp) previous_fragment = urlparse(req.url).fragment while url: prepared_request = req.copy() # Update history and keep track of redirects. # resp.history must ignore the original request in this loop hist.append(resp) resp.history = hist[1:] try: resp.content # Consume socket so it can be released except (ChunkedEncodingError, ContentDecodingError, RuntimeError): resp.raw.read(decode_content=False) if len(resp.history) >= self.max_redirects: raise TooManyRedirects('Exceeded %s redirects.' % self.max_redirects, response=resp) # Release the connection back into the pool. resp.close() # Handle redirection without scheme (see: RFC 1808 Section 4) if url.startswith('//'): parsed_rurl = urlparse(resp.url) url = '%s:%s' % (to_native_string(parsed_rurl.scheme), url) # Normalize url case and attach previous fragment if needed (RFC 7231 7.1.2) parsed = urlparse(url) if parsed.fragment == '' and previous_fragment: parsed = parsed._replace(fragment=previous_fragment) elif parsed.fragment: previous_fragment = parsed.fragment url = parsed.geturl() # Facilitate relative 'location' headers, as allowed by RFC 7231. # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource') # Compliant with RFC3986, we percent encode the url. if not parsed.netloc: url = urljoin(resp.url, requote_uri(url)) else: url = requote_uri(url) prepared_request.url = to_native_string(url) self.rebuild_method(prepared_request, resp) # https://github.com/requests/requests/issues/1084 if resp.status_code not in (codes.temporary_redirect, codes.permanent_redirect): # https://github.com/requests/requests/issues/3490 purged_headers = ('Content-Length', 'Content-Type', 'Transfer-Encoding') for header in purged_headers: prepared_request.headers.pop(header, None) prepared_request.body = None headers = prepared_request.headers try: del headers['Cookie'] except KeyError: pass # Extract any cookies sent on the response to the cookiejar # in the new request. Because we've mutated our copied prepared # request, use the old one that we haven't yet touched. extract_cookies_to_jar(prepared_request._cookies, req, resp.raw) merge_cookies(prepared_request._cookies, self.cookies) prepared_request.prepare_cookies(prepared_request._cookies) # Rebuild auth and proxy information. proxies = self.rebuild_proxies(prepared_request, proxies) self.rebuild_auth(prepared_request, resp) # A failed tell() sets `_body_position` to `object()`. This non-None # value ensures `rewindable` will be True, allowing us to raise an # UnrewindableBodyError, instead of hanging the connection. rewindable = ( prepared_request._body_position is not None and ('Content-Length' in headers or 'Transfer-Encoding' in headers) ) # Attempt to rewind consumed file-like object. if rewindable: rewind_body(prepared_request) # Override the original request. req = prepared_request if yield_requests: yield req else: resp = self.send( req, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, allow_redirects=False, **adapter_kwargs ) extract_cookies_to_jar(self.cookies, prepared_request, resp.raw) # extract redirect url, if any, for the next loop url = self.get_redirect_target(resp) yield resp def rebuild_auth(self, prepared_request, response): """When being redirected we may want to strip authentication from the request to avoid leaking credentials. 
This method intelligently removes and reapplies authentication where possible to avoid credential loss. """ headers = prepared_request.headers url = prepared_request.url if 'Authorization' in headers and self.should_strip_auth(response.request.url, url): # If we get redirected to a new host, we should strip out any # authentication headers. del headers['Authorization'] # .netrc might have more auth for us on our new host. new_auth = get_netrc_auth(url) if self.trust_env else None if new_auth is not None: prepared_request.prepare_auth(new_auth) return def rebuild_proxies(self, prepared_request, proxies): """This method re-evaluates the proxy configuration by considering the environment variables. If we are redirected to a URL covered by NO_PROXY, we strip the proxy configuration. Otherwise, we set missing proxy keys for this URL (in case they were stripped by a previous redirect). This method also replaces the Proxy-Authorization header where necessary. :rtype: dict """ proxies = proxies if proxies is not None else {} headers = prepared_request.headers url = prepared_request.url scheme = urlparse(url).scheme new_proxies = proxies.copy() no_proxy = proxies.get('no_proxy') bypass_proxy = should_bypass_proxies(url, no_proxy=no_proxy) if self.trust_env and not bypass_proxy: environ_proxies = get_environ_proxies(url, no_proxy=no_proxy) proxy = environ_proxies.get(scheme, environ_proxies.get('all')) if proxy: new_proxies.setdefault(scheme, proxy) if 'Proxy-Authorization' in headers: del headers['Proxy-Authorization'] try: username, password = get_auth_from_url(new_proxies[scheme]) except KeyError: username, password = None, None if username and password: headers['Proxy-Authorization'] = _basic_auth_str(username, password) return new_proxies def rebuild_method(self, prepared_request, response): """When being redirected we may want to change the method of the request based on certain specs or browser behavior. """ method = prepared_request.method # https://tools.ietf.org/html/rfc7231#section-6.4.4 if response.status_code == codes.see_other and method != 'HEAD': method = 'GET' # Do what the browsers do, despite standards... # First, turn 302s into GETs. if response.status_code == codes.found and method != 'HEAD': method = 'GET' # Second, if a POST is responded to with a 301, turn it into a GET. # This bizarre behaviour is explained in Issue 1704. if response.status_code == codes.moved and method == 'POST': method = 'GET' prepared_request.method = method class Session(SessionRedirectMixin): """A Requests session. Provides cookie persistence, connection-pooling, and configuration. Basic Usage:: >>> import requests >>> s = requests.Session() >>> s.get('https://httpbin.org/get') <Response [200]> Or as a context manager:: >>> with requests.Session() as s: >>> s.get('https://httpbin.org/get') <Response [200]> """ __attrs__ = [ 'headers', 'cookies', 'auth', 'proxies', 'hooks', 'params', 'verify', 'cert', 'prefetch', 'adapters', 'stream', 'trust_env', 'max_redirects', ] def __init__(self): #: A case-insensitive dictionary of headers to be sent on each #: :class:`Request <Request>` sent from this #: :class:`Session <Session>`. self.headers = default_headers() #: Default Authentication tuple or object to attach to #: :class:`Request <Request>`. self.auth = None #: Dictionary mapping protocol or protocol and host to the URL of the proxy #: (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to #: be used on each :class:`Request <Request>`. self.proxies = {} #: Event-handling hooks. 
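        #: By default this is ``{'response': []}``. An illustrative use,
        #: assuming the public API: ``s.hooks['response'].append(hook)`` with
        #: ``hook = lambda r, *args, **kwargs: r`` runs for every response
        #: produced by this session.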
self.hooks = default_hooks() #: Dictionary of querystring data to attach to each #: :class:`Request <Request>`. The dictionary values may be lists for #: representing multivalued query parameters. self.params = {} #: Stream response content default. self.stream = False #: SSL Verification default. self.verify = True #: SSL client certificate default, if String, path to ssl client #: cert file (.pem). If Tuple, ('cert', 'key') pair. self.cert = None #: Maximum number of redirects allowed. If the request exceeds this #: limit, a :class:`TooManyRedirects` exception is raised. #: This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is #: 30. self.max_redirects = DEFAULT_REDIRECT_LIMIT #: Trust environment settings for proxy configuration, default #: authentication and similar. self.trust_env = True #: A CookieJar containing all currently outstanding cookies set on this #: session. By default it is a #: :class:`RequestsCookieJar <requests.cookies.RequestsCookieJar>`, but #: may be any other ``cookielib.CookieJar`` compatible object. self.cookies = cookiejar_from_dict({}) # Default connection adapters. self.adapters = OrderedDict() self.mount('https://', HTTPAdapter()) self.mount('http://', HTTPAdapter()) def __enter__(self): return self def __exit__(self, *args): self.close() def prepare_request(self, request): """Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it. The :class:`PreparedRequest` has settings merged from the :class:`Request <Request>` instance and those of the :class:`Session`. :param request: :class:`Request` instance to prepare with this session's settings. :rtype: requests.PreparedRequest """ cookies = request.cookies or {} # Bootstrap CookieJar. if not isinstance(cookies, cookielib.CookieJar): cookies = cookiejar_from_dict(cookies) # Merge with session cookies merged_cookies = merge_cookies( merge_cookies(RequestsCookieJar(), self.cookies), cookies) # Set environment's basic authentication if not explicitly set. auth = request.auth if self.trust_env and not auth and not self.auth: auth = get_netrc_auth(request.url) p = PreparedRequest() p.prepare( method=request.method.upper(), url=request.url, files=request.files, data=request.data, json=request.json, headers=merge_setting(request.headers, self.headers, dict_class=CaseInsensitiveDict), params=merge_setting(request.params, self.params), auth=merge_setting(auth, self.auth), cookies=merged_cookies, hooks=merge_hooks(request.hooks, self.hooks), ) return p def request(self, method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None): """Constructs a :class:`Request <Request>`, prepares it and sends it. Returns :class:`Response <Response>` object. :param method: method for the new :class:`Request` object. :param url: URL for the new :class:`Request` object. :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`. :param data: (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the :class:`Request`. :param json: (optional) json to send in the body of the :class:`Request`. :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`. :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`. :param files: (optional) Dictionary of ``'filename': file-like-objects`` for multipart encoding upload. 
:param auth: (optional) Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple :param allow_redirects: (optional) Set to True by default. :type allow_redirects: bool :param proxies: (optional) Dictionary mapping protocol or protocol and hostname to the URL of the proxy. :param stream: (optional) whether to immediately download the response content. Defaults to ``False``. :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to ``True``. :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair. :rtype: requests.Response """ # Create the Request. req = Request( method=method.upper(), url=url, headers=headers, files=files, data=data or {}, json=json, params=params or {}, auth=auth, cookies=cookies, hooks=hooks, ) prep = self.prepare_request(req) proxies = proxies or {} settings = self.merge_environment_settings( prep.url, proxies, stream, verify, cert ) # Send the request. send_kwargs = { 'timeout': timeout, 'allow_redirects': allow_redirects, } send_kwargs.update(settings) resp = self.send(prep, **send_kwargs) return resp def get(self, url, **kwargs): r"""Sends a GET request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ kwargs.setdefault('allow_redirects', True) return self.request('GET', url, **kwargs) def options(self, url, **kwargs): r"""Sends a OPTIONS request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ kwargs.setdefault('allow_redirects', True) return self.request('OPTIONS', url, **kwargs) def head(self, url, **kwargs): r"""Sends a HEAD request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ kwargs.setdefault('allow_redirects', False) return self.request('HEAD', url, **kwargs) def post(self, url, data=None, json=None, **kwargs): r"""Sends a POST request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param data: (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the :class:`Request`. :param json: (optional) json to send in the body of the :class:`Request`. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ return self.request('POST', url, data=data, json=json, **kwargs) def put(self, url, data=None, **kwargs): r"""Sends a PUT request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param data: (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the :class:`Request`. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ return self.request('PUT', url, data=data, **kwargs) def patch(self, url, data=None, **kwargs): r"""Sends a PATCH request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. 
:param data: (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the :class:`Request`. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ return self.request('PATCH', url, data=data, **kwargs) def delete(self, url, **kwargs): r"""Sends a DELETE request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param \*\*kwargs: Optional arguments that ``request`` takes. :rtype: requests.Response """ return self.request('DELETE', url, **kwargs) def send(self, request, **kwargs): """Send a given PreparedRequest. :rtype: requests.Response """ # Set defaults that the hooks can utilize to ensure they always have # the correct parameters to reproduce the previous request. kwargs.setdefault('stream', self.stream) kwargs.setdefault('verify', self.verify) kwargs.setdefault('cert', self.cert) kwargs.setdefault('proxies', self.proxies) # It's possible that users might accidentally send a Request object. # Guard against that specific failure case. if isinstance(request, Request): raise ValueError('You can only send PreparedRequests.') # Set up variables needed for resolve_redirects and dispatching of hooks allow_redirects = kwargs.pop('allow_redirects', True) stream = kwargs.get('stream') hooks = request.hooks # Get the appropriate adapter to use adapter = self.get_adapter(url=request.url) # Start time (approximately) of the request start = preferred_clock() # Send the request r = adapter.send(request, **kwargs) # Total elapsed time of the request (approximately) elapsed = preferred_clock() - start r.elapsed = timedelta(seconds=elapsed) # Response manipulation hooks r = dispatch_hook('response', hooks, r, **kwargs) # Persist cookies if r.history: # If the hooks create history then we want those cookies too for resp in r.history: extract_cookies_to_jar(self.cookies, resp.request, resp.raw) extract_cookies_to_jar(self.cookies, request, r.raw) # Redirect resolving generator. gen = self.resolve_redirects(r, request, **kwargs) # Resolve redirects if allowed. history = [resp for resp in gen] if allow_redirects else [] # Shuffle things around if there's history. if history: # Insert the first (original) request at the start history.insert(0, r) # Get the last request made r = history.pop() r.history = history # If redirects aren't being followed, store the response on the Request for Response.next(). if not allow_redirects: try: r._next = next(self.resolve_redirects(r, request, yield_requests=True, **kwargs)) except StopIteration: pass if not stream: r.content return r def merge_environment_settings(self, url, proxies, stream, verify, cert): """ Check the environment and merge it with some settings. :rtype: dict """ # Gather clues from the surrounding environment. if self.trust_env: # Set environment's proxies. no_proxy = proxies.get('no_proxy') if proxies is not None else None env_proxies = get_environ_proxies(url, no_proxy=no_proxy) for (k, v) in env_proxies.items(): proxies.setdefault(k, v) # Look for requests environment configuration and be compatible # with cURL. if verify is True or verify is None: verify = (os.environ.get('REQUESTS_CA_BUNDLE') or os.environ.get('CURL_CA_BUNDLE')) # Merge all the kwargs. 
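        # (Illustrative, assuming a POSIX shell: with ``trust_env`` left True
        # and, say, ``REQUESTS_CA_BUNDLE=/path/ca.pem`` plus
        # ``HTTPS_PROXY=http://proxy:3128`` exported, a request made with
        # ``verify=True`` picks up the CA bundle and the proxy map gains an
        # ``https`` entry before the merge below.)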
        proxies = merge_setting(proxies, self.proxies)
        stream = merge_setting(stream, self.stream)
        verify = merge_setting(verify, self.verify)
        cert = merge_setting(cert, self.cert)

        return {'verify': verify, 'proxies': proxies, 'stream': stream,
                'cert': cert}

    def get_adapter(self, url):
        """
        Returns the appropriate connection adapter for the given URL.

        :rtype: requests.adapters.BaseAdapter
        """
        for (prefix, adapter) in self.adapters.items():

            if url.lower().startswith(prefix.lower()):
                return adapter

        # Nothing matches :-/
        raise InvalidSchema("No connection adapters were found for '%s'" % url)

    def close(self):
        """Closes all adapters and as such the session"""
        for v in self.adapters.values():
            v.close()

    def mount(self, prefix, adapter):
        """Registers a connection adapter to a prefix.

        Adapters are sorted in descending order by prefix length.
        """
        self.adapters[prefix] = adapter
        keys_to_move = [k for k in self.adapters if len(k) < len(prefix)]

        for key in keys_to_move:
            self.adapters[key] = self.adapters.pop(key)

    def __getstate__(self):
        state = {attr: getattr(self, attr, None) for attr in self.__attrs__}
        return state

    def __setstate__(self, state):
        for attr, value in state.items():
            setattr(self, attr, value)


def session():
    """
    Returns a :class:`Session` for context-management.

    .. deprecated:: 1.0.0

        This method has been deprecated since version 1.0.0 and is only kept for
        backwards compatibility. New code should use
        :class:`~requests.sessions.Session` to create a session. This may be
        removed at a future date.

    :rtype: Session
    """
    return Session()


# ----------------------------------------------------------------------------
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/status_codes.py
# ----------------------------------------------------------------------------

# -*- coding: utf-8 -*-

r"""
The ``codes`` object defines a mapping from common names for HTTP statuses to
their numerical codes, accessible either as attributes or as dictionary items. >>> requests.codes['temporary_redirect'] 307 >>> requests.codes.teapot 418 >>> requests.codes['\o/'] 200 Some codes have multiple names, and both upper- and lower-case versions of the names are allowed. For example, ``codes.ok``, ``codes.OK``, and ``codes.okay`` all correspond to the HTTP status code 200. """ from .structures import LookupDict _codes = { # Informational. 100: ('continue',), 101: ('switching_protocols',), 102: ('processing',), 103: ('checkpoint',), 122: ('uri_too_long', 'request_uri_too_long'), 200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '✓'), 201: ('created',), 202: ('accepted',), 203: ('non_authoritative_info', 'non_authoritative_information'), 204: ('no_content',), 205: ('reset_content', 'reset'), 206: ('partial_content', 'partial'), 207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'), 208: ('already_reported',), 226: ('im_used',), # Redirection. 300: ('multiple_choices',), 301: ('moved_permanently', 'moved', '\\o-'), 302: ('found',), 303: ('see_other', 'other'), 304: ('not_modified',), 305: ('use_proxy',), 306: ('switch_proxy',), 307: ('temporary_redirect', 'temporary_moved', 'temporary'), 308: ('permanent_redirect', 'resume_incomplete', 'resume',), # These 2 to be removed in 3.0 # Client Error. 400: ('bad_request', 'bad'), 401: ('unauthorized',), 402: ('payment_required', 'payment'), 403: ('forbidden',), 404: ('not_found', '-o-'), 405: ('method_not_allowed', 'not_allowed'), 406: ('not_acceptable',), 407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'), 408: ('request_timeout', 'timeout'), 409: ('conflict',), 410: ('gone',), 411: ('length_required',), 412: ('precondition_failed', 'precondition'), 413: ('request_entity_too_large',), 414: ('request_uri_too_large',), 415: ('unsupported_media_type', 'unsupported_media', 'media_type'), 416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'), 417: ('expectation_failed',), 418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'), 421: ('misdirected_request',), 422: ('unprocessable_entity', 'unprocessable'), 423: ('locked',), 424: ('failed_dependency', 'dependency'), 425: ('unordered_collection', 'unordered'), 426: ('upgrade_required', 'upgrade'), 428: ('precondition_required', 'precondition'), 429: ('too_many_requests', 'too_many'), 431: ('header_fields_too_large', 'fields_too_large'), 444: ('no_response', 'none'), 449: ('retry_with', 'retry'), 450: ('blocked_by_windows_parental_controls', 'parental_controls'), 451: ('unavailable_for_legal_reasons', 'legal_reasons'), 499: ('client_closed_request',), # Server Error. 
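    # (Illustrative, assuming the public API: every name in the tuples below
    # becomes an attribute on ``codes`` via ``_init()``, e.g.
    #     >>> requests.codes.server_error
    #     500
    # )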
    500: ('internal_server_error', 'server_error', '/o\\', '✗'),
    501: ('not_implemented',),
    502: ('bad_gateway',),
    503: ('service_unavailable', 'unavailable'),
    504: ('gateway_timeout',),
    505: ('http_version_not_supported', 'http_version'),
    506: ('variant_also_negotiates',),
    507: ('insufficient_storage',),
    509: ('bandwidth_limit_exceeded', 'bandwidth'),
    510: ('not_extended',),
    511: ('network_authentication_required', 'network_auth', 'network_authentication'),
}

codes = LookupDict(name='status_codes')


def _init():
    for code, titles in _codes.items():
        for title in titles:
            setattr(codes, title, code)
            if not title.startswith(('\\', '/')):
                setattr(codes, title.upper(), code)

    def doc(code):
        names = ', '.join('``%s``' % n for n in _codes[code])
        return '* %d: %s' % (code, names)

    global __doc__
    __doc__ = (__doc__ + '\n' +
               '\n'.join(doc(code) for code in sorted(_codes))
               if __doc__ is not None else None)


_init()


# ----------------------------------------------------------------------------
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/structures.py
# ----------------------------------------------------------------------------

# -*- coding: utf-8 -*-

"""
requests.structures
~~~~~~~~~~~~~~~~~~~

Data structures that power Requests.
"""

from .compat import OrderedDict, Mapping, MutableMapping


class CaseInsensitiveDict(MutableMapping):
    """A case-insensitive ``dict``-like object.

    Implements all methods and operations of
    ``MutableMapping`` as well as dict's ``copy``. Also
    provides ``lower_items``.

    All keys are expected to be strings. The structure remembers the
    case of the last key to be set, and ``iter(instance)``,
    ``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
    will contain case-sensitive keys.
    However, querying and contains testing is case insensitive::

        cid = CaseInsensitiveDict()
        cid['Accept'] = 'application/json'
        cid['aCCEPT'] == 'application/json'  # True
        list(cid) == ['Accept']  # True

    For example, ``headers['content-encoding']`` will return the
    value of a ``'Content-Encoding'`` response header, regardless
    of how the header name was originally stored.

    If the constructor, ``.update``, or equality comparison
    operations are given keys that have equal ``.lower()``s, the
    behavior is undefined.
    """

    def __init__(self, data=None, **kwargs):
        self._store = OrderedDict()
        if data is None:
            data = {}
        self.update(data, **kwargs)

    def __setitem__(self, key, value):
        # Use the lowercased key for lookups, but store the actual
        # key alongside the value.
        self._store[key.lower()] = (key, value)

    def __getitem__(self, key):
        return self._store[key.lower()][1]

    def __delitem__(self, key):
        del self._store[key.lower()]

    def __iter__(self):
        return (casedkey for casedkey, mappedvalue in self._store.values())

    def __len__(self):
        return len(self._store)

    def lower_items(self):
        """Like iteritems(), but with all lowercase keys."""
        return (
            (lowerkey, keyval[1])
            for (lowerkey, keyval)
            in self._store.items()
        )

    def __eq__(self, other):
        if isinstance(other, Mapping):
            other = CaseInsensitiveDict(other)
        else:
            return NotImplemented
        # Compare insensitively
        return dict(self.lower_items()) == dict(other.lower_items())

    # Copy is required
    def copy(self):
        return CaseInsensitiveDict(self._store.values())

    def __repr__(self):
        return str(dict(self.items()))


class LookupDict(dict):
    """Dictionary lookup object."""

    def __init__(self, name=None):
        self.name = name
        super(LookupDict, self).__init__()

    def __repr__(self):
        return '<lookup \'%s\'>' % (self.name)

    def __getitem__(self, key):
        # We allow fall-through here, so values default to None
        return self.__dict__.get(key, None)

    def get(self, key, default=None):
        return self.__dict__.get(key, default)
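
# A brief doctest-style illustration of the two structures above (assuming
# they are imported from this module):
#
#     >>> cid = CaseInsensitiveDict({'Accept': 'application/json'})
#     >>> cid['aCCEPT']
#     'application/json'
#     >>> list(cid)                     # the original casing is remembered
#     ['Accept']
#     >>> ld = LookupDict(name='demo')
#     >>> ld['missing'] is None         # lookups fall through to None
#     True
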
# ----------------------------------------------------------------------------
# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/requests/utils.py
# ----------------------------------------------------------------------------

# -*- coding: utf-8 -*-

"""
requests.utils
~~~~~~~~~~~~~~

This module provides utility functions that are used within Requests
that are also useful for external consumption.
"""

import codecs
import contextlib
import io
import os
import re
import socket
import struct
import sys
import tempfile
import warnings
import zipfile

from .__version__ import __version__
from . import certs
# to_native_string is unused here, but imported here for backwards compatibility
from ._internal_utils import to_native_string
from .compat import parse_http_list as _parse_list_header
from .compat import (
    quote, urlparse, bytes, str, OrderedDict, unquote, getproxies,
    proxy_bypass, urlunparse, basestring, integer_types, is_py3,
    proxy_bypass_environment, getproxies_environment, Mapping)
from .cookies import cookiejar_from_dict
from .structures import CaseInsensitiveDict
from .exceptions import (
    InvalidURL, InvalidHeader, FileModeWarning, UnrewindableBodyError)

NETRC_FILES = ('.netrc', '_netrc')

DEFAULT_CA_BUNDLE_PATH = certs.where()

DEFAULT_PORTS = {'http': 80, 'https': 443}


if sys.platform == 'win32':
    # provide a proxy_bypass version on Windows without DNS lookups

    def proxy_bypass_registry(host):
        try:
            if is_py3:
                import winreg
            else:
                import _winreg as winreg
        except ImportError:
            return False

        try:
            internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
            # ProxyEnable could be REG_SZ or REG_DWORD, normalizing it
            proxyEnable = int(winreg.QueryValueEx(internetSettings,
                                                  'ProxyEnable')[0])
            # ProxyOverride is almost always a string
            proxyOverride = winreg.QueryValueEx(internetSettings,
                                                'ProxyOverride')[0]
        except OSError:
            return False
        if not proxyEnable or not proxyOverride:
            return False

        # make a check value list from the registry entry: replace the
        # '<local>' string by the localhost entry and the corresponding
        # canonical entry.
        proxyOverride = proxyOverride.split(';')
        # now check if we match one of the registry values.
        for test in proxyOverride:
            if test == '<local>':
                if '.' not in host:
                    return True
            test = test.replace(".", r"\.")     # mask dots
            test = test.replace("*", r".*")     # change glob sequence
            test = test.replace("?", r".")      # change glob char
            if re.match(test, host, re.I):
                return True
        return False

    def proxy_bypass(host):  # noqa
        """Return True, if the host should be bypassed.

        Checks proxy settings gathered from the environment, if specified,
        or the registry.
        """
        if getproxies_environment():
            return proxy_bypass_environment(host)
        else:
            return proxy_bypass_registry(host)


def dict_to_sequence(d):
    """Returns an internal sequence dictionary update."""

    if hasattr(d, 'items'):
        d = d.items()

    return d


def super_len(o):
    total_length = None
    current_position = 0

    if hasattr(o, '__len__'):
        total_length = len(o)

    elif hasattr(o, 'len'):
        total_length = o.len

    elif hasattr(o, 'fileno'):
        try:
            fileno = o.fileno()
        except io.UnsupportedOperation:
            pass
        else:
            total_length = os.fstat(fileno).st_size

            # Having used fstat to determine the file length, we need to
            # confirm that this file was opened up in binary mode.
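            # (Illustrative: a file opened as ``open(path)`` rather than
            # ``open(path, 'rb')`` still reports its on-disk byte size via
            # fstat, which may disagree with what text-mode reads return,
            # hence the warning below.)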
if 'b' not in o.mode: warnings.warn(( "Requests has determined the content-length for this " "request using the binary size of the file: however, the " "file has been opened in text mode (i.e. without the 'b' " "flag in the mode). This may lead to an incorrect " "content-length. In Requests 3.0, support will be removed " "for files in text mode."), FileModeWarning ) if hasattr(o, 'tell'): try: current_position = o.tell() except (OSError, IOError): # This can happen in some weird situations, such as when the file # is actually a special file descriptor like stdin. In this # instance, we don't know what the length is, so set it to zero and # let requests chunk it instead. if total_length is not None: current_position = total_length else: if hasattr(o, 'seek') and total_length is None: # StringIO and BytesIO have seek but no useable fileno try: # seek to end of file o.seek(0, 2) total_length = o.tell() # seek back to current position to support # partially read file-like objects o.seek(current_position or 0) except (OSError, IOError): total_length = 0 if total_length is None: total_length = 0 return max(0, total_length - current_position) def get_netrc_auth(url, raise_errors=False): """Returns the Requests tuple auth for a given url from netrc.""" try: from netrc import netrc, NetrcParseError netrc_path = None for f in NETRC_FILES: try: loc = os.path.expanduser('~/{}'.format(f)) except KeyError: # os.path.expanduser can fail when $HOME is undefined and # getpwuid fails. See https://bugs.python.org/issue20164 & # https://github.com/requests/requests/issues/1846 return if os.path.exists(loc): netrc_path = loc break # Abort early if there isn't one. if netrc_path is None: return ri = urlparse(url) # Strip port numbers from netloc. This weird `if...encode`` dance is # used for Python 3.2, which doesn't support unicode literals. splitstr = b':' if isinstance(url, str): splitstr = splitstr.decode('ascii') host = ri.netloc.split(splitstr)[0] try: _netrc = netrc(netrc_path).authenticators(host) if _netrc: # Return with login / password login_i = (0 if _netrc[0] else 1) return (_netrc[login_i], _netrc[2]) except (NetrcParseError, IOError): # If there was a parsing error or a permissions issue reading the file, # we'll just skip netrc auth unless explicitly asked to raise errors. if raise_errors: raise # AppEngine hackiness. except (ImportError, AttributeError): pass def guess_filename(obj): """Tries to guess the filename of the given object.""" name = getattr(obj, 'name', None) if (name and isinstance(name, basestring) and name[0] != '<' and name[-1] != '>'): return os.path.basename(name) def extract_zipped_paths(path): """Replace nonexistent paths that look like they refer to a member of a zip archive with the location of an extracted copy of the target, or else just return the provided path unchanged. 
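    For instance (illustrative paths), ``/tmp/wheel.zip/certs/cacert.pem``,
    where ``/tmp/wheel.zip`` exists but the member path does not, is extracted
    into the temp directory and that extracted location is returned.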
""" if os.path.exists(path): # this is already a valid path, no need to do anything further return path # find the first valid part of the provided path and treat that as a zip archive # assume the rest of the path is the name of a member in the archive archive, member = os.path.split(path) while archive and not os.path.exists(archive): archive, prefix = os.path.split(archive) member = '/'.join([prefix, member]) if not zipfile.is_zipfile(archive): return path zip_file = zipfile.ZipFile(archive) if member not in zip_file.namelist(): return path # we have a valid zip archive and a valid member of that archive tmp = tempfile.gettempdir() extracted_path = os.path.join(tmp, *member.split('/')) if not os.path.exists(extracted_path): extracted_path = zip_file.extract(member, path=tmp) return extracted_path def from_key_val_list(value): """Take an object and test to see if it can be represented as a dictionary. Unless it can not be represented as such, return an OrderedDict, e.g., :: >>> from_key_val_list([('key', 'val')]) OrderedDict([('key', 'val')]) >>> from_key_val_list('string') ValueError: cannot encode objects that are not 2-tuples >>> from_key_val_list({'key': 'val'}) OrderedDict([('key', 'val')]) :rtype: OrderedDict """ if value is None: return None if isinstance(value, (str, bytes, bool, int)): raise ValueError('cannot encode objects that are not 2-tuples') return OrderedDict(value) def to_key_val_list(value): """Take an object and test to see if it can be represented as a dictionary. If it can be, return a list of tuples, e.g., :: >>> to_key_val_list([('key', 'val')]) [('key', 'val')] >>> to_key_val_list({'key': 'val'}) [('key', 'val')] >>> to_key_val_list('string') ValueError: cannot encode objects that are not 2-tuples. :rtype: list """ if value is None: return None if isinstance(value, (str, bytes, bool, int)): raise ValueError('cannot encode objects that are not 2-tuples') if isinstance(value, Mapping): value = value.items() return list(value) # From mitsuhiko/werkzeug (used with permission). def parse_list_header(value): """Parse lists as described by RFC 2068 Section 2. In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing. It basically works like :func:`parse_set_header` just that items may appear multiple times and case sensitivity is preserved. The return value is a standard :class:`list`: >>> parse_list_header('token, "quoted value"') ['token', 'quoted value'] To create a header from the :class:`list` again, use the :func:`dump_header` function. :param value: a string with a list header. :return: :class:`list` :rtype: list """ result = [] for item in _parse_list_header(value): if item[:1] == item[-1:] == '"': item = unquote_header_value(item[1:-1]) result.append(item) return result # From mitsuhiko/werkzeug (used with permission). def parse_dict_header(value): """Parse lists of key, value pairs as described by RFC 2068 Section 2 and convert them into a python dict: >>> d = parse_dict_header('foo="is a fish", bar="as well"') >>> type(d) is dict True >>> sorted(d.items()) [('bar', 'as well'), ('foo', 'is a fish')] If there is no value for a key it will be `None`: >>> parse_dict_header('key_without_value') {'key_without_value': None} To create a header from the :class:`dict` again, use the :func:`dump_header` function. :param value: a string with a dict header. 
# From mitsuhiko/werkzeug (used with permission).
def parse_list_header(value):
    """Parse lists as described by RFC 2068 Section 2.

    In particular, parse comma-separated lists where the elements of
    the list may include quoted-strings. A quoted-string could
    contain a comma. A non-quoted string could have quotes in the
    middle. Quotes are removed automatically after parsing.

    It basically works like :func:`parse_set_header` just that items
    may appear multiple times and case sensitivity is preserved.

    The return value is a standard :class:`list`:

    >>> parse_list_header('token, "quoted value"')
    ['token', 'quoted value']

    To create a header from the :class:`list` again, use the
    :func:`dump_header` function.

    :param value: a string with a list header.
    :return: :class:`list`
    :rtype: list
    """
    result = []
    for item in _parse_list_header(value):
        if item[:1] == item[-1:] == '"':
            item = unquote_header_value(item[1:-1])
        result.append(item)
    return result


# From mitsuhiko/werkzeug (used with permission).
def parse_dict_header(value):
    """Parse lists of key, value pairs as described by RFC 2068 Section 2 and
    convert them into a python dict:

    >>> d = parse_dict_header('foo="is a fish", bar="as well"')
    >>> type(d) is dict
    True
    >>> sorted(d.items())
    [('bar', 'as well'), ('foo', 'is a fish')]

    If there is no value for a key it will be `None`:

    >>> parse_dict_header('key_without_value')
    {'key_without_value': None}

    To create a header from the :class:`dict` again, use the
    :func:`dump_header` function.

    :param value: a string with a dict header.
    :return: :class:`dict`
    :rtype: dict
    """
    result = {}
    for item in _parse_list_header(value):
        if '=' not in item:
            result[item] = None
            continue
        name, value = item.split('=', 1)
        if value[:1] == value[-1:] == '"':
            value = unquote_header_value(value[1:-1])
        result[name] = value
    return result


# From mitsuhiko/werkzeug (used with permission).
def unquote_header_value(value, is_filename=False):
    r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
    This does not use the real unquoting but what browsers are actually
    using for quoting.

    :param value: the header value to unquote.
    :rtype: str
    """
    if value and value[0] == value[-1] == '"':
        # this is not the real unquoting, but fixing this so that the
        # RFC is met will result in bugs with internet explorer and
        # probably some other browsers as well. IE for example is
        # uploading files with "C:\foo\bar.txt" as filename
        value = value[1:-1]

        # if this is a filename and the starting characters look like
        # a UNC path, then just return the value without quotes. Using the
        # replace sequence below on a UNC path has the effect of turning
        # the leading double slash into a single slash and then
        # _fix_ie_filename() doesn't work correctly. See #458.
        if not is_filename or value[:2] != '\\\\':
            return value.replace('\\\\', '\\').replace('\\"', '"')
    return value


def dict_from_cookiejar(cj):
    """Returns a key/value dictionary from a CookieJar.

    :param cj: CookieJar object to extract cookies from.
    :rtype: dict
    """
    cookie_dict = {}

    for cookie in cj:
        cookie_dict[cookie.name] = cookie.value

    return cookie_dict


def add_dict_to_cookiejar(cj, cookie_dict):
    """Returns a CookieJar from a key/value dictionary.

    :param cj: CookieJar to insert cookies into.
    :param cookie_dict: Dict of key/values to insert into CookieJar.
    :rtype: CookieJar
    """
    return cookiejar_from_dict(cookie_dict, cj)


def get_encodings_from_content(content):
    """Returns encodings from given content string.

    :param content: bytestring to extract encodings from.
    """
    warnings.warn((
        'In requests 3.0, get_encodings_from_content will be removed. For '
        'more information, please see the discussion on issue #2266. (This'
        ' warning should only appear once.)'),
        DeprecationWarning)

    charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
    pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
    xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')

    return (charset_re.findall(content) +
            pragma_re.findall(content) +
            xml_re.findall(content))


def _parse_content_type_header(header):
    """Returns content type and parameters from a given header

    :param header: string
    :return: tuple containing content type and dictionary of parameters
    """
    tokens = header.split(';')
    content_type, params = tokens[0].strip(), tokens[1:]
    params_dict = {}
    items_to_strip = "\"' "

    for param in params:
        param = param.strip()
        if param:
            key, value = param, True
            index_of_equals = param.find("=")
            if index_of_equals != -1:
                key = param[:index_of_equals].strip(items_to_strip)
                value = param[index_of_equals + 1:].strip(items_to_strip)
            params_dict[key.lower()] = value
    return content_type, params_dict
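
# Illustration of the parser above (not part of the vendored module;
# _parse_content_type_header is private, so this call is for exposition only):
ctype, params = _parse_content_type_header('text/html; charset="UTF-8"')
assert ctype == 'text/html'
assert params == {'charset': 'UTF-8'}
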
def get_encoding_from_headers(headers):
    """Returns encodings from given HTTP Header Dict.

    :param headers: dictionary to extract encoding from.
    :rtype: str
    """

    content_type = headers.get('content-type')

    if not content_type:
        return None

    content_type, params = _parse_content_type_header(content_type)

    if 'charset' in params:
        return params['charset'].strip("'\"")

    if 'text' in content_type:
        return 'ISO-8859-1'


def stream_decode_response_unicode(iterator, r):
    """Stream decodes an iterator."""

    if r.encoding is None:
        for item in iterator:
            yield item
        return

    decoder = codecs.getincrementaldecoder(r.encoding)(errors='replace')
    for chunk in iterator:
        rv = decoder.decode(chunk)
        if rv:
            yield rv
    rv = decoder.decode(b'', final=True)
    if rv:
        yield rv


def iter_slices(string, slice_length):
    """Iterate over slices of a string."""
    pos = 0
    if slice_length is None or slice_length <= 0:
        slice_length = len(string)
    while pos < len(string):
        yield string[pos:pos + slice_length]
        pos += slice_length


def get_unicode_from_response(r):
    """Returns the requested content back in unicode.

    :param r: Response object to get unicode content from.

    Tried:

    1. charset from content-type
    2. fall back and replace all unicode characters

    :rtype: str
    """
    warnings.warn((
        'In requests 3.0, get_unicode_from_response will be removed. For '
        'more information, please see the discussion on issue #2266. (This'
        ' warning should only appear once.)'),
        DeprecationWarning)

    tried_encodings = []

    # Try charset from content-type
    encoding = get_encoding_from_headers(r.headers)

    if encoding:
        try:
            return str(r.content, encoding)
        except UnicodeError:
            tried_encodings.append(encoding)

    # Fall back:
    try:
        return str(r.content, encoding, errors='replace')
    except TypeError:
        return r.content


# The unreserved URI characters (RFC 3986)
UNRESERVED_SET = frozenset(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" + "0123456789-._~")


def unquote_unreserved(uri):
    """Un-escape any percent-escape sequences in a URI that are unreserved
    characters. This leaves all reserved, illegal and non-ASCII bytes encoded.

    :rtype: str
    """
    parts = uri.split('%')
    for i in range(1, len(parts)):
        h = parts[i][0:2]
        if len(h) == 2 and h.isalnum():
            try:
                c = chr(int(h, 16))
            except ValueError:
                raise InvalidURL("Invalid percent-escape sequence: '%s'" % h)

            if c in UNRESERVED_SET:
                parts[i] = c + parts[i][2:]
            else:
                parts[i] = '%' + parts[i]
        else:
            parts[i] = '%' + parts[i]
    return ''.join(parts)


def requote_uri(uri):
    """Re-quote the given URI.

    This function passes the given URI through an unquote/quote cycle to
    ensure that it is fully and consistently quoted.

    :rtype: str
    """
    safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
    safe_without_percent = "!#$&'()*+,/:;=?@[]~"
    try:
        # Unquote only the unreserved characters
        # Then quote only illegal characters (do not quote reserved,
        # unreserved, or '%')
        return quote(unquote_unreserved(uri), safe=safe_with_percent)
    except InvalidURL:
        # We couldn't unquote the given URI, so let's try quoting it, but
        # there may be unquoted '%'s in the URI. We need to make sure they're
        # properly quoted so they do not cause issues elsewhere.
        return quote(uri, safe=safe_without_percent)
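
# Round-trip sketch (not part of the vendored module): unreserved escapes
# are unquoted, reserved ones are kept, so requoting stays stable:
assert unquote_unreserved('http://example.com/%7Euser/a%2Fb') == 'http://example.com/~user/a%2Fb'
assert requote_uri('http://example.com/a b') == 'http://example.com/a%20b'
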
def address_in_network(ip, net):
    """This function allows you to check if an IP belongs to a network subnet

    Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24
             returns False if ip = 192.168.1.1 and net = 192.168.100.0/24

    :rtype: bool
    """
    ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0]
    netaddr, bits = net.split('/')
    netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0]
    network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask
    return (ipaddr & netmask) == (network & netmask)


def dotted_netmask(mask):
    """Converts mask from /xx format to xxx.xxx.xxx.xxx

    Example: if mask is 24, the function returns 255.255.255.0

    :rtype: str
    """
    bits = 0xffffffff ^ (1 << 32 - mask) - 1
    return socket.inet_ntoa(struct.pack('>I', bits))


def is_ipv4_address(string_ip):
    """
    :rtype: bool
    """
    try:
        socket.inet_aton(string_ip)
    except socket.error:
        return False
    return True


def is_valid_cidr(string_network):
    """
    Very simple check of the CIDR format in the no_proxy variable.

    :rtype: bool
    """
    if string_network.count('/') == 1:
        try:
            mask = int(string_network.split('/')[1])
        except ValueError:
            return False

        if mask < 1 or mask > 32:
            return False

        try:
            socket.inet_aton(string_network.split('/')[0])
        except socket.error:
            return False
    else:
        return False
    return True
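
# Worked example of the netmask math above (not part of the vendored module):
# for a /24, 0xffffffff ^ ((1 << (32 - 24)) - 1) keeps the top 24 bits set,
# i.e. 255.255.255.0:
assert dotted_netmask(24) == '255.255.255.0'
assert address_in_network('192.168.1.1', '192.168.1.0/24')
assert not address_in_network('192.168.1.1', '192.168.100.0/24')
assert is_valid_cidr('192.168.1.0/24') and not is_valid_cidr('192.168.1.0')
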
@contextlib.contextmanager
def set_environ(env_name, value):
    """Set the environment variable 'env_name' to 'value'.

    Save the previous value, yield, and then restore the previous value stored
    in the environment variable 'env_name'. If 'value' is None, do nothing.
    """
    value_changed = value is not None
    if value_changed:
        old_value = os.environ.get(env_name)
        os.environ[env_name] = value
    try:
        yield
    finally:
        if value_changed:
            if old_value is None:
                del os.environ[env_name]
            else:
                os.environ[env_name] = old_value


def should_bypass_proxies(url, no_proxy):
    """
    Returns whether we should bypass proxies or not.

    :rtype: bool
    """
    # Prioritize lowercase environment variables over uppercase
    # to keep a consistent behaviour with other http projects (curl, wget).
    get_proxy = lambda k: os.environ.get(k) or os.environ.get(k.upper())

    # First check whether no_proxy is defined. If it is, check that the URL
    # we're getting isn't in the no_proxy list.
    no_proxy_arg = no_proxy
    if no_proxy is None:
        no_proxy = get_proxy('no_proxy')
    parsed = urlparse(url)

    if parsed.hostname is None:
        # URLs don't always have hostnames, e.g. file:/// urls.
        return True

    if no_proxy:
        # We need to check whether we match here. We need to see if we match
        # the end of the hostname, both with and without the port.
        no_proxy = (
            host for host in no_proxy.replace(' ', '').split(',')
            if host
        )

        if is_ipv4_address(parsed.hostname):
            for proxy_ip in no_proxy:
                if is_valid_cidr(proxy_ip):
                    if address_in_network(parsed.hostname, proxy_ip):
                        return True
                elif parsed.hostname == proxy_ip:
                    # If the no_proxy entry was defined in plain IP notation
                    # instead of CIDR notation and matches the host's IP
                    return True
        else:
            host_with_port = parsed.hostname
            if parsed.port:
                host_with_port += ':{}'.format(parsed.port)

            for host in no_proxy:
                if parsed.hostname.endswith(host) or host_with_port.endswith(host):
                    # The URL does match something in no_proxy, so we don't want
                    # to apply the proxies on this URL.
                    return True

    with set_environ('no_proxy', no_proxy_arg):
        # parsed.hostname can be `None` in cases such as a file URI.
        try:
            bypass = proxy_bypass(parsed.hostname)
        except (TypeError, socket.gaierror):
            bypass = False

    if bypass:
        return True

    return False


def get_environ_proxies(url, no_proxy=None):
    """
    Return a dict of environment proxies.

    :rtype: dict
    """
    if should_bypass_proxies(url, no_proxy=no_proxy):
        return {}
    else:
        return getproxies()


def select_proxy(url, proxies):
    """Select a proxy for the url, if applicable.

    :param url: The URL of the request
    :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
    """
    proxies = proxies or {}
    urlparts = urlparse(url)
    if urlparts.hostname is None:
        return proxies.get(urlparts.scheme, proxies.get('all'))

    proxy_keys = [
        urlparts.scheme + '://' + urlparts.hostname,
        urlparts.scheme,
        'all://' + urlparts.hostname,
        'all',
    ]
    proxy = None
    for proxy_key in proxy_keys:
        if proxy_key in proxies:
            proxy = proxies[proxy_key]
            break

    return proxy


def default_user_agent(name="python-requests"):
    """
    Return a string representing the default user agent.

    :rtype: str
    """
    return '%s/%s' % (name, __version__)


def default_headers():
    """
    :rtype: requests.structures.CaseInsensitiveDict
    """
    return CaseInsensitiveDict({
        'User-Agent': default_user_agent(),
        'Accept-Encoding': ', '.join(('gzip', 'deflate')),
        'Accept': '*/*',
        'Connection': 'keep-alive',
    })


def parse_header_links(value):
    """Return a list of parsed link headers, e.g.

    Link: <http:/.../front.jpeg>; rel=front; type="image/jpeg",<http://.../back.jpeg>; rel=back;type="image/jpeg"

    :rtype: list
    """

    links = []

    replace_chars = ' \'"'

    value = value.strip(replace_chars)
    if not value:
        return links

    for val in re.split(', *<', value):
        try:
            url, params = val.split(';', 1)
        except ValueError:
            url, params = val, ''

        link = {'url': url.strip('<> \'"')}

        for param in params.split(';'):
            try:
                key, value = param.split('=')
            except ValueError:
                break

            link[key.strip(replace_chars)] = value.strip(replace_chars)

        links.append(link)

    return links


# Null bytes; no need to recreate these on each call to guess_json_utf
_null = '\x00'.encode('ascii')  # encoding to ASCII for Python 3
_null2 = _null * 2
_null3 = _null * 3


def guess_json_utf(data):
    """
    :rtype: str
    """
    # JSON always starts with two ASCII characters, so detection is as
    # easy as counting the nulls and from their location and count
    # determine the encoding. Also detect a BOM, if present.
    sample = data[:4]
    if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
        return 'utf-32'     # BOM included
    if sample[:3] == codecs.BOM_UTF8:
        return 'utf-8-sig'  # BOM included, MS style (discouraged)
    if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
        return 'utf-16'     # BOM included
    nullcount = sample.count(_null)
    if nullcount == 0:
        return 'utf-8'
    if nullcount == 2:
        if sample[::2] == _null2:   # 1st and 3rd are null
            return 'utf-16-be'
        if sample[1::2] == _null2:  # 2nd and 4th are null
            return 'utf-16-le'
        # Did not detect 2 valid UTF-16 ascii-range characters
    if nullcount == 3:
        if sample[:3] == _null3:
            return 'utf-32-be'
        if sample[1:] == _null3:
            return 'utf-32-le'
        # Did not detect a valid UTF-32 ascii-range character
    return None
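
# Sketch of the null-byte heuristic (not part of the vendored module):
# ASCII JSON in UTF-8 has no NULs in its first four bytes, while UTF-16
# with a BOM is recognized from the BOM itself:
assert guess_json_utf(b'{"a": 1}') == 'utf-8'
assert guess_json_utf('{"a": 1}'.encode('utf-16')) == 'utf-16'
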
def prepend_scheme_if_needed(url, new_scheme):
    """Given a URL that may or may not have a scheme, prepend the given scheme.
    Does not replace a present scheme with the one provided as an argument.

    :rtype: str
    """
    scheme, netloc, path, params, query, fragment = urlparse(url, new_scheme)

    # urlparse is a finicky beast, and sometimes decides that there isn't a
    # netloc present. Assume that it's being over-cautious, and switch netloc
    # and path if urlparse decided there was no netloc.
    if not netloc:
        netloc, path = path, netloc

    return urlunparse((scheme, netloc, path, params, query, fragment))


def get_auth_from_url(url):
    """Given a url with authentication components, extract them into a tuple of
    username,password.

    :rtype: (str,str)
    """
    parsed = urlparse(url)

    try:
        auth = (unquote(parsed.username), unquote(parsed.password))
    except (AttributeError, TypeError):
        auth = ('', '')

    return auth


# Moved outside of function to avoid recompile every call
_CLEAN_HEADER_REGEX_BYTE = re.compile(b'^\\S[^\\r\\n]*$|^$')
_CLEAN_HEADER_REGEX_STR = re.compile(r'^\S[^\r\n]*$|^$')


def check_header_validity(header):
    """Verifies that header value is a string which doesn't contain
    leading whitespace or return characters. This prevents unintended
    header injection.

    :param header: tuple, in the format (name, value).
    """
    name, value = header

    if isinstance(value, bytes):
        pat = _CLEAN_HEADER_REGEX_BYTE
    else:
        pat = _CLEAN_HEADER_REGEX_STR
    try:
        if not pat.match(value):
            raise InvalidHeader("Invalid return character or leading space in header: %s" % name)
    except TypeError:
        raise InvalidHeader("Value for header {%s: %s} must be of type str or "
                            "bytes, not %s" % (name, value, type(value)))


def urldefragauth(url):
    """
    Given a url remove the fragment and the authentication part.

    :rtype: str
    """
    scheme, netloc, path, params, query, fragment = urlparse(url)

    # see func:`prepend_scheme_if_needed`
    if not netloc:
        netloc, path = path, netloc

    netloc = netloc.rsplit('@', 1)[-1]

    return urlunparse((scheme, netloc, path, params, query, ''))


def rewind_body(prepared_request):
    """Move file pointer back to its recorded starting position
    so it can be read again on redirect.
    """
    body_seek = getattr(prepared_request.body, 'seek', None)
    if body_seek is not None and isinstance(prepared_request._body_position, integer_types):
        try:
            body_seek(prepared_request._body_position)
        except (IOError, OSError):
            raise UnrewindableBodyError("An error occurred when rewinding request "
                                        "body for redirect.")
    else:
        raise UnrewindableBodyError("Unable to rewind request body for redirect.")
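
# Usage sketch for the URL helpers above (not part of the vendored module):
assert get_auth_from_url('https://alice:s3cret@example.com/repo') == ('alice', 's3cret')
assert urldefragauth('https://alice:s3cret@example.com/repo#readme') == 'https://example.com/repo'
assert prepend_scheme_if_needed('example.com/path', 'http') == 'http://example.com/path'
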

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/retrying.py

## Copyright 2013-2014 Ray Holder
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.

import random
from pip._vendor import six
import sys
import time
import traceback


# sys.maxint / 2, since Python 3.2 doesn't have a sys.maxint...
MAX_WAIT = 1073741823


def retry(*dargs, **dkw):
    """
    Decorator function that instantiates the Retrying object

    @param *dargs: positional arguments passed to Retrying object
    @param **dkw: keyword arguments passed to the Retrying object
    """
    # support both @retry and @retry() as valid syntax
    if len(dargs) == 1 and callable(dargs[0]):
        def wrap_simple(f):

            @six.wraps(f)
            def wrapped_f(*args, **kw):
                return Retrying().call(f, *args, **kw)

            return wrapped_f

        return wrap_simple(dargs[0])

    else:
        def wrap(f):

            @six.wraps(f)
            def wrapped_f(*args, **kw):
                return Retrying(*dargs, **dkw).call(f, *args, **kw)

            return wrapped_f

        return wrap
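
# Minimal decorator sketch (not part of the vendored module). flaky_fetch is
# hypothetical, and this is pip's private vendored copy; the standalone
# "retrying" package on PyPI exposes the same decorator:
@retry(stop_max_attempt_number=3, wait_fixed=100)
def flaky_fetch():
    # Retried up to 3 times, sleeping 100 ms between attempts; the last
    # exception is re-raised once the stop condition is met.
    raise IOError("transient failure")
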
class Retrying(object):

    def __init__(self,
                 stop=None, wait=None,
                 stop_max_attempt_number=None,
                 stop_max_delay=None,
                 wait_fixed=None,
                 wait_random_min=None, wait_random_max=None,
                 wait_incrementing_start=None, wait_incrementing_increment=None,
                 wait_exponential_multiplier=None, wait_exponential_max=None,
                 retry_on_exception=None,
                 retry_on_result=None,
                 wrap_exception=False,
                 stop_func=None,
                 wait_func=None,
                 wait_jitter_max=None):

        self._stop_max_attempt_number = 5 if stop_max_attempt_number is None else stop_max_attempt_number
        self._stop_max_delay = 100 if stop_max_delay is None else stop_max_delay
        self._wait_fixed = 1000 if wait_fixed is None else wait_fixed
        self._wait_random_min = 0 if wait_random_min is None else wait_random_min
        self._wait_random_max = 1000 if wait_random_max is None else wait_random_max
        self._wait_incrementing_start = 0 if wait_incrementing_start is None else wait_incrementing_start
        self._wait_incrementing_increment = 100 if wait_incrementing_increment is None else wait_incrementing_increment
        self._wait_exponential_multiplier = 1 if wait_exponential_multiplier is None else wait_exponential_multiplier
        self._wait_exponential_max = MAX_WAIT if wait_exponential_max is None else wait_exponential_max
        self._wait_jitter_max = 0 if wait_jitter_max is None else wait_jitter_max

        # TODO add chaining of stop behaviors
        # stop behavior
        stop_funcs = []
        if stop_max_attempt_number is not None:
            stop_funcs.append(self.stop_after_attempt)

        if stop_max_delay is not None:
            stop_funcs.append(self.stop_after_delay)

        if stop_func is not None:
            self.stop = stop_func

        elif stop is None:
            self.stop = lambda attempts, delay: any(f(attempts, delay) for f in stop_funcs)

        else:
            self.stop = getattr(self, stop)

        # TODO add chaining of wait behaviors
        # wait behavior
        wait_funcs = [lambda *args, **kwargs: 0]
        if wait_fixed is not None:
            wait_funcs.append(self.fixed_sleep)

        if wait_random_min is not None or wait_random_max is not None:
            wait_funcs.append(self.random_sleep)

        if wait_incrementing_start is not None or wait_incrementing_increment is not None:
            wait_funcs.append(self.incrementing_sleep)

        if wait_exponential_multiplier is not None or wait_exponential_max is not None:
            wait_funcs.append(self.exponential_sleep)

        if wait_func is not None:
            self.wait = wait_func

        elif wait is None:
            self.wait = lambda attempts, delay: max(f(attempts, delay) for f in wait_funcs)

        else:
            self.wait = getattr(self, wait)

        # retry on exception filter
        if retry_on_exception is None:
            self._retry_on_exception = self.always_reject
        else:
            self._retry_on_exception = retry_on_exception

        # TODO simplify retrying by Exception types
        # retry on result filter
        if retry_on_result is None:
            self._retry_on_result = self.never_reject
        else:
            self._retry_on_result = retry_on_result

        self._wrap_exception = wrap_exception

    def stop_after_attempt(self, previous_attempt_number, delay_since_first_attempt_ms):
        """Stop after the previous attempt >= stop_max_attempt_number."""
        return previous_attempt_number >= self._stop_max_attempt_number

    def stop_after_delay(self, previous_attempt_number, delay_since_first_attempt_ms):
        """Stop after the time from the first attempt >= stop_max_delay."""
        return delay_since_first_attempt_ms >= self._stop_max_delay

    def no_sleep(self, previous_attempt_number, delay_since_first_attempt_ms):
        """Don't sleep at all before retrying."""
        return 0

    def fixed_sleep(self, previous_attempt_number, delay_since_first_attempt_ms):
        """Sleep a fixed amount of time between each retry."""
        return self._wait_fixed

    def random_sleep(self, previous_attempt_number, delay_since_first_attempt_ms):
        """Sleep a random amount of time between wait_random_min and wait_random_max"""
        return random.randint(self._wait_random_min, self._wait_random_max)

    def incrementing_sleep(self, previous_attempt_number, delay_since_first_attempt_ms):
        """
        Sleep an incremental amount of time after each attempt, starting at
        wait_incrementing_start and incrementing by wait_incrementing_increment
        """
        result = self._wait_incrementing_start + (self._wait_incrementing_increment * (previous_attempt_number - 1))
        if result < 0:
            result = 0
        return result

    def exponential_sleep(self, previous_attempt_number, delay_since_first_attempt_ms):
        exp = 2 ** previous_attempt_number
        result = self._wait_exponential_multiplier * exp
        if result > self._wait_exponential_max:
            result = self._wait_exponential_max
        if result < 0:
            result = 0
        return result

    def never_reject(self, result):
        return False

    def always_reject(self, result):
        return True

    def should_reject(self, attempt):
        reject = False
        if attempt.has_exception:
            reject |= self._retry_on_exception(attempt.value[1])
        else:
            reject |= self._retry_on_result(attempt.value)
        return reject
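
# Worked example of the exponential schedule above (not part of the vendored
# module). With multiplier 1 (ms) and a 10 s cap, attempt n waits
# min(2**n, 10000) milliseconds:
r = Retrying(wait_exponential_multiplier=1, wait_exponential_max=10000)
assert [r.exponential_sleep(n, 0) for n in (1, 2, 3, 4)] == [2, 4, 8, 16]
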
    def call(self, fn, *args, **kwargs):
        start_time = int(round(time.time() * 1000))
        attempt_number = 1
        while True:
            try:
                attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
            except:
                tb = sys.exc_info()
                attempt = Attempt(tb, attempt_number, True)

            if not self.should_reject(attempt):
                return attempt.get(self._wrap_exception)

            delay_since_first_attempt_ms = int(round(time.time() * 1000)) - start_time
            if self.stop(attempt_number, delay_since_first_attempt_ms):
                if not self._wrap_exception and attempt.has_exception:
                    # get() on an attempt with an exception should cause it to be raised, but raise just in case
                    raise attempt.get()
                else:
                    raise RetryError(attempt)
            else:
                sleep = self.wait(attempt_number, delay_since_first_attempt_ms)
                if self._wait_jitter_max:
                    jitter = random.random() * self._wait_jitter_max
                    sleep = sleep + max(0, jitter)
                time.sleep(sleep / 1000.0)

            attempt_number += 1


class Attempt(object):
    """
    An Attempt encapsulates a call to a target function that may end as a
    normal return value from the function or an Exception depending on what
    occurred during the execution.
    """

    def __init__(self, value, attempt_number, has_exception):
        self.value = value
        self.attempt_number = attempt_number
        self.has_exception = has_exception

    def get(self, wrap_exception=False):
        """
        Return the return value of this Attempt instance or raise an Exception.
        If wrap_exception is true, this Attempt is wrapped inside of a
        RetryError before being raised.
        """
        if self.has_exception:
            if wrap_exception:
                raise RetryError(self)
            else:
                six.reraise(self.value[0], self.value[1], self.value[2])
        else:
            return self.value

    def __repr__(self):
        if self.has_exception:
            return "Attempts: {0}, Error:\n{1}".format(self.attempt_number, "".join(traceback.format_tb(self.value[2])))
        else:
            return "Attempts: {0}, Value: {1}".format(self.attempt_number, self.value)


class RetryError(Exception):
    """
    A RetryError encapsulates the last Attempt instance right before giving up.
    """

    def __init__(self, last_attempt):
        self.last_attempt = last_attempt

    def __str__(self):
        return "RetryError[{0}]".format(self.last_attempt)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/six.py

# Copyright (c) 2010-2018 Benjamin Peterson # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell #
copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. """Utilities for writing code that runs on Python 2 and 3""" from __future__ import absolute_import import functools import itertools import operator import sys import types __author__ = "Benjamin Peterson <benjamin@python.org>" __version__ = "1.12.0" # Useful for very coarse version differentiation. PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PY34 = sys.version_info[0:2] >= (3, 4) if PY3: string_types = str, integer_types = int, class_types = type, text_type = str binary_type = bytes MAXSIZE = sys.maxsize else: string_types = basestring, integer_types = (int, long) class_types = (type, types.ClassType) text_type = unicode binary_type = str if sys.platform.startswith("java"): # Jython always uses 32 bits. MAXSIZE = int((1 << 31) - 1) else: # It's possible to have sizeof(long) != sizeof(Py_ssize_t). class X(object): def __len__(self): return 1 << 31 try: len(X()) except OverflowError: # 32-bit MAXSIZE = int((1 << 31) - 1) else: # 64-bit MAXSIZE = int((1 << 63) - 1) del X def _add_doc(func, doc): """Add documentation to a function.""" func.__doc__ = doc def _import_module(name): """Import module, returning the module after the last dot.""" __import__(name) return sys.modules[name] class _LazyDescr(object): def __init__(self, name): self.name = name def __get__(self, obj, tp): result = self._resolve() setattr(obj, self.name, result) # Invokes __set__. try: # This is a bit ugly, but it avoids running this again by # removing this descriptor. 
delattr(obj.__class__, self.name) except AttributeError: pass return result class MovedModule(_LazyDescr): def __init__(self, name, old, new=None): super(MovedModule, self).__init__(name) if PY3: if new is None: new = name self.mod = new else: self.mod = old def _resolve(self): return _import_module(self.mod) def __getattr__(self, attr): _module = self._resolve() value = getattr(_module, attr) setattr(self, attr, value) return value class _LazyModule(types.ModuleType): def __init__(self, name): super(_LazyModule, self).__init__(name) self.__doc__ = self.__class__.__doc__ def __dir__(self): attrs = ["__doc__", "__name__"] attrs += [attr.name for attr in self._moved_attributes] return attrs # Subclasses should override this _moved_attributes = [] class MovedAttribute(_LazyDescr): def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): super(MovedAttribute, self).__init__(name) if PY3: if new_mod is None: new_mod = name self.mod = new_mod if new_attr is None: if old_attr is None: new_attr = name else: new_attr = old_attr self.attr = new_attr else: self.mod = old_mod if old_attr is None: old_attr = name self.attr = old_attr def _resolve(self): module = _import_module(self.mod) return getattr(module, self.attr) class _SixMetaPathImporter(object): """ A meta path importer to import six.moves and its submodules. This class implements a PEP302 finder and loader. It should be compatible with Python 2.5 and all existing versions of Python3 """ def __init__(self, six_module_name): self.name = six_module_name self.known_modules = {} def _add_module(self, mod, *fullnames): for fullname in fullnames: self.known_modules[self.name + "." + fullname] = mod def _get_module(self, fullname): return self.known_modules[self.name + "." + fullname] def find_module(self, fullname, path=None): if fullname in self.known_modules: return self return None def __get_module(self, fullname): try: return self.known_modules[fullname] except KeyError: raise ImportError("This loader does not know module " + fullname) def load_module(self, fullname): try: # in case of a reload return sys.modules[fullname] except KeyError: pass mod = self.__get_module(fullname) if isinstance(mod, MovedModule): mod = mod._resolve() else: mod.__loader__ = self sys.modules[fullname] = mod return mod def is_package(self, fullname): """ Return true, if the named module is a package. 
We need this method to get correct spec objects with Python 3.4 (see PEP451) """ return hasattr(self.__get_module(fullname), "__path__") def get_code(self, fullname): """Return None Required, if is_package is implemented""" self.__get_module(fullname) # eventually raises ImportError return None get_source = get_code # same as get_code _importer = _SixMetaPathImporter(__name__) class _MovedItems(_LazyModule): """Lazy loading of moved objects""" __path__ = [] # mark as package _moved_attributes = [ MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), MovedAttribute("intern", "__builtin__", "sys"), MovedAttribute("map", "itertools", "builtins", "imap", "map"), MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), MovedAttribute("getoutput", "commands", "subprocess"), MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), MovedAttribute("reduce", "__builtin__", "functools"), MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), MovedAttribute("StringIO", "StringIO", "io"), MovedAttribute("UserDict", "UserDict", "collections"), MovedAttribute("UserList", "UserList", "collections"), MovedAttribute("UserString", "UserString", "collections"), MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), MovedModule("builtins", "__builtin__"), MovedModule("configparser", "ConfigParser"), MovedModule("copyreg", "copy_reg"), MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), MovedModule("http_cookies", "Cookie", "http.cookies"), MovedModule("html_entities", "htmlentitydefs", "html.entities"), MovedModule("html_parser", "HTMLParser", "html.parser"), MovedModule("http_client", "httplib", "http.client"), MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), MovedModule("cPickle", "cPickle", "pickle"), MovedModule("queue", "Queue"), MovedModule("reprlib", "repr"), MovedModule("socketserver", "SocketServer"), MovedModule("_thread", "thread", "_thread"), MovedModule("tkinter", "Tkinter"), MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), MovedModule("tkinter_tix", "Tix", "tkinter.tix"), MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), MovedModule("tkinter_constants", 
"Tkconstants", "tkinter.constants"), MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), MovedModule("tkinter_font", "tkFont", "tkinter.font"), MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), ] # Add windows specific modules. if sys.platform == "win32": _moved_attributes += [ MovedModule("winreg", "_winreg"), ] for attr in _moved_attributes: setattr(_MovedItems, attr.name, attr) if isinstance(attr, MovedModule): _importer._add_module(attr, "moves." + attr.name) del attr _MovedItems._moved_attributes = _moved_attributes moves = _MovedItems(__name__ + ".moves") _importer._add_module(moves, "moves") class Module_six_moves_urllib_parse(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_parse""" _urllib_parse_moved_attributes = [ MovedAttribute("ParseResult", "urlparse", "urllib.parse"), MovedAttribute("SplitResult", "urlparse", "urllib.parse"), MovedAttribute("parse_qs", "urlparse", "urllib.parse"), MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), MovedAttribute("urldefrag", "urlparse", "urllib.parse"), MovedAttribute("urljoin", "urlparse", "urllib.parse"), MovedAttribute("urlparse", "urlparse", "urllib.parse"), MovedAttribute("urlsplit", "urlparse", "urllib.parse"), MovedAttribute("urlunparse", "urlparse", "urllib.parse"), MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), MovedAttribute("quote", "urllib", "urllib.parse"), MovedAttribute("quote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote", "urllib", "urllib.parse"), MovedAttribute("unquote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes"), MovedAttribute("urlencode", "urllib", "urllib.parse"), MovedAttribute("splitquery", "urllib", "urllib.parse"), MovedAttribute("splittag", "urllib", "urllib.parse"), MovedAttribute("splituser", "urllib", "urllib.parse"), MovedAttribute("splitvalue", "urllib", "urllib.parse"), MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), MovedAttribute("uses_params", "urlparse", "urllib.parse"), MovedAttribute("uses_query", "urlparse", "urllib.parse"), MovedAttribute("uses_relative", "urlparse", "urllib.parse"), ] for attr in _urllib_parse_moved_attributes: setattr(Module_six_moves_urllib_parse, attr.name, attr) del attr Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes _importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), "moves.urllib_parse", "moves.urllib.parse") class Module_six_moves_urllib_error(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_error""" _urllib_error_moved_attributes = [ MovedAttribute("URLError", "urllib2", "urllib.error"), MovedAttribute("HTTPError", "urllib2", 
"urllib.error"), MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), ] for attr in _urllib_error_moved_attributes: setattr(Module_six_moves_urllib_error, attr.name, attr) del attr Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), "moves.urllib_error", "moves.urllib.error") class Module_six_moves_urllib_request(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_request""" _urllib_request_moved_attributes = [ MovedAttribute("urlopen", "urllib2", "urllib.request"), MovedAttribute("install_opener", "urllib2", "urllib.request"), MovedAttribute("build_opener", "urllib2", "urllib.request"), MovedAttribute("pathname2url", "urllib", "urllib.request"), MovedAttribute("url2pathname", "urllib", "urllib.request"), MovedAttribute("getproxies", "urllib", "urllib.request"), MovedAttribute("Request", "urllib2", "urllib.request"), MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), MovedAttribute("BaseHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), MovedAttribute("FileHandler", "urllib2", "urllib.request"), MovedAttribute("FTPHandler", "urllib2", "urllib.request"), MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), MovedAttribute("urlretrieve", "urllib", "urllib.request"), MovedAttribute("urlcleanup", "urllib", "urllib.request"), MovedAttribute("URLopener", "urllib", "urllib.request"), MovedAttribute("FancyURLopener", "urllib", "urllib.request"), MovedAttribute("proxy_bypass", "urllib", "urllib.request"), MovedAttribute("parse_http_list", "urllib2", "urllib.request"), MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), ] for attr in _urllib_request_moved_attributes: setattr(Module_six_moves_urllib_request, attr.name, attr) del attr Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), "moves.urllib_request", "moves.urllib.request") class Module_six_moves_urllib_response(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_response""" _urllib_response_moved_attributes = [ MovedAttribute("addbase", "urllib", "urllib.response"), MovedAttribute("addclosehook", "urllib", "urllib.response"), MovedAttribute("addinfo", "urllib", "urllib.response"), MovedAttribute("addinfourl", "urllib", "urllib.response"), ] for attr in 
_urllib_response_moved_attributes: setattr(Module_six_moves_urllib_response, attr.name, attr) del attr Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), "moves.urllib_response", "moves.urllib.response") class Module_six_moves_urllib_robotparser(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_robotparser""" _urllib_robotparser_moved_attributes = [ MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), ] for attr in _urllib_robotparser_moved_attributes: setattr(Module_six_moves_urllib_robotparser, attr.name, attr) del attr Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), "moves.urllib_robotparser", "moves.urllib.robotparser") class Module_six_moves_urllib(types.ModuleType): """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" __path__ = [] # mark as package parse = _importer._get_module("moves.urllib_parse") error = _importer._get_module("moves.urllib_error") request = _importer._get_module("moves.urllib_request") response = _importer._get_module("moves.urllib_response") robotparser = _importer._get_module("moves.urllib_robotparser") def __dir__(self): return ['parse', 'error', 'request', 'response', 'robotparser'] _importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib") def add_move(move): """Add an item to six.moves.""" setattr(_MovedItems, move.name, move) def remove_move(name): """Remove item from six.moves.""" try: delattr(_MovedItems, name) except AttributeError: try: del moves.__dict__[name] except KeyError: raise AttributeError("no such move, %r" % (name,)) if PY3: _meth_func = "__func__" _meth_self = "__self__" _func_closure = "__closure__" _func_code = "__code__" _func_defaults = "__defaults__" _func_globals = "__globals__" else: _meth_func = "im_func" _meth_self = "im_self" _func_closure = "func_closure" _func_code = "func_code" _func_defaults = "func_defaults" _func_globals = "func_globals" try: advance_iterator = next except NameError: def advance_iterator(it): return it.next() next = advance_iterator try: callable = callable except NameError: def callable(obj): return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) if PY3: def get_unbound_function(unbound): return unbound create_bound_method = types.MethodType def create_unbound_method(func, cls): return func Iterator = object else: def get_unbound_function(unbound): return unbound.im_func def create_bound_method(func, obj): return types.MethodType(func, obj, obj.__class__) def create_unbound_method(func, cls): return types.MethodType(func, None, cls) class Iterator(object): def next(self): return type(self).__next__(self) callable = callable _add_doc(get_unbound_function, """Get the function out of a possibly unbound function""") get_method_function = operator.attrgetter(_meth_func) get_method_self = operator.attrgetter(_meth_self) get_function_closure = operator.attrgetter(_func_closure) get_function_code = operator.attrgetter(_func_code) get_function_defaults = operator.attrgetter(_func_defaults) get_function_globals = operator.attrgetter(_func_globals) if PY3: def iterkeys(d, **kw): return iter(d.keys(**kw)) def itervalues(d, **kw): return iter(d.values(**kw)) def iteritems(d, **kw): return iter(d.items(**kw)) def iterlists(d, **kw): return 
iter(d.lists(**kw)) viewkeys = operator.methodcaller("keys") viewvalues = operator.methodcaller("values") viewitems = operator.methodcaller("items") else: def iterkeys(d, **kw): return d.iterkeys(**kw) def itervalues(d, **kw): return d.itervalues(**kw) def iteritems(d, **kw): return d.iteritems(**kw) def iterlists(d, **kw): return d.iterlists(**kw) viewkeys = operator.methodcaller("viewkeys") viewvalues = operator.methodcaller("viewvalues") viewitems = operator.methodcaller("viewitems") _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") _add_doc(itervalues, "Return an iterator over the values of a dictionary.") _add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") _add_doc(iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary.") if PY3: def b(s): return s.encode("latin-1") def u(s): return s unichr = chr import struct int2byte = struct.Struct(">B").pack del struct byte2int = operator.itemgetter(0) indexbytes = operator.getitem iterbytes = iter import io StringIO = io.StringIO BytesIO = io.BytesIO _assertCountEqual = "assertCountEqual" if sys.version_info[1] <= 1: _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" else: _assertRaisesRegex = "assertRaisesRegex" _assertRegex = "assertRegex" else: def b(s): return s # Workaround for standalone backslash def u(s): return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") unichr = unichr int2byte = chr def byte2int(bs): return ord(bs[0]) def indexbytes(buf, i): return ord(buf[i]) iterbytes = functools.partial(itertools.imap, ord) import StringIO StringIO = BytesIO = StringIO.StringIO _assertCountEqual = "assertItemsEqual" _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" _add_doc(b, """Byte literal""") _add_doc(u, """Text literal""") def assertCountEqual(self, *args, **kwargs): return getattr(self, _assertCountEqual)(*args, **kwargs) def assertRaisesRegex(self, *args, **kwargs): return getattr(self, _assertRaisesRegex)(*args, **kwargs) def assertRegex(self, *args, **kwargs): return getattr(self, _assertRegex)(*args, **kwargs) if PY3: exec_ = getattr(moves.builtins, "exec") def reraise(tp, value, tb=None): try: if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value finally: value = None tb = None else: def exec_(_code_, _globs_=None, _locs_=None): """Execute code in a namespace.""" if _globs_ is None: frame = sys._getframe(1) _globs_ = frame.f_globals if _locs_ is None: _locs_ = frame.f_locals del frame elif _locs_ is None: _locs_ = _globs_ exec("""exec _code_ in _globs_, _locs_""") exec_("""def reraise(tp, value, tb=None): try: raise tp, value, tb finally: tb = None """) if sys.version_info[:2] == (3, 2): exec_("""def raise_from(value, from_value): try: if from_value is None: raise value raise value from from_value finally: value = None """) elif sys.version_info[:2] > (3, 2): exec_("""def raise_from(value, from_value): try: raise value from from_value finally: value = None """) else: def raise_from(value, from_value): raise value print_ = getattr(moves.builtins, "print", None) if print_ is None: def print_(*args, **kwargs): """The new-style print function for Python 2.4 and 2.5.""" fp = kwargs.pop("file", sys.stdout) if fp is None: return def write(data): if not isinstance(data, basestring): data = str(data) # If the file has an encoding, encode unicode with it. 
if (isinstance(fp, file) and isinstance(data, unicode) and fp.encoding is not None): errors = getattr(fp, "errors", None) if errors is None: errors = "strict" data = data.encode(fp.encoding, errors) fp.write(data) want_unicode = False sep = kwargs.pop("sep", None) if sep is not None: if isinstance(sep, unicode): want_unicode = True elif not isinstance(sep, str): raise TypeError("sep must be None or a string") end = kwargs.pop("end", None) if end is not None: if isinstance(end, unicode): want_unicode = True elif not isinstance(end, str): raise TypeError("end must be None or a string") if kwargs: raise TypeError("invalid keyword arguments to print()") if not want_unicode: for arg in args: if isinstance(arg, unicode): want_unicode = True break if want_unicode: newline = unicode("\n") space = unicode(" ") else: newline = "\n" space = " " if sep is None: sep = space if end is None: end = newline for i, arg in enumerate(args): if i: write(sep) write(arg) write(end) if sys.version_info[:2] < (3, 3): _print = print_ def print_(*args, **kwargs): fp = kwargs.get("file", sys.stdout) flush = kwargs.pop("flush", False) _print(*args, **kwargs) if flush and fp is not None: fp.flush() _add_doc(reraise, """Reraise an exception.""") if sys.version_info[0:2] < (3, 4): def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, updated=functools.WRAPPER_UPDATES): def wrapper(f): f = functools.wraps(wrapped, assigned, updated)(f) f.__wrapped__ = wrapped return f return wrapper else: wraps = functools.wraps def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(type): def __new__(cls, name, this_bases, d): return meta(name, bases, d) @classmethod def __prepare__(cls, name, this_bases): return meta.__prepare__(name, bases) return type.__new__(metaclass, 'temporary_class', (), {}) def add_metaclass(metaclass): """Class decorator for creating a class with a metaclass.""" def wrapper(cls): orig_vars = cls.__dict__.copy() slots = orig_vars.get('__slots__') if slots is not None: if isinstance(slots, str): slots = [slots] for slots_var in slots: orig_vars.pop(slots_var) orig_vars.pop('__dict__', None) orig_vars.pop('__weakref__', None) if hasattr(cls, '__qualname__'): orig_vars['__qualname__'] = cls.__qualname__ return metaclass(cls.__name__, cls.__bases__, orig_vars) return wrapper def ensure_binary(s, encoding='utf-8', errors='strict'): """Coerce **s** to six.binary_type. For Python 2: - `unicode` -> encoded to `str` - `str` -> `str` For Python 3: - `str` -> encoded to `bytes` - `bytes` -> `bytes` """ if isinstance(s, text_type): return s.encode(encoding, errors) elif isinstance(s, binary_type): return s else: raise TypeError("not expecting type '%s'" % type(s)) def ensure_str(s, encoding='utf-8', errors='strict'): """Coerce *s* to `str`. For Python 2: - `unicode` -> encoded to `str` - `str` -> `str` For Python 3: - `str` -> `str` - `bytes` -> decoded to `str` """ if not isinstance(s, (text_type, binary_type)): raise TypeError("not expecting type '%s'" % type(s)) if PY2 and isinstance(s, text_type): s = s.encode(encoding, errors) elif PY3 and isinstance(s, binary_type): s = s.decode(encoding, errors) return s def ensure_text(s, encoding='utf-8', errors='strict'): """Coerce *s* to six.text_type. 
For Python 2: - `unicode` -> `unicode` - `str` -> `unicode` For Python 3: - `str` -> `str` - `bytes` -> decoded to `str` """ if isinstance(s, binary_type): return s.decode(encoding, errors) elif isinstance(s, text_type): return s else: raise TypeError("not expecting type '%s'" % type(s)) def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if PY2: if '__str__' not in klass.__dict__: raise ValueError("@python_2_unicode_compatible cannot be applied " "to %s because it doesn't define __str__()." % klass.__name__) klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode('utf-8') return klass # Complete the moves implementation. # This code is at the end of this module to speed up module loading. # Turn this module into a package. __path__ = [] # required for PEP 302 and PEP 451 __package__ = __name__ # see PEP 366 @ReservedAssignment if globals().get("__spec__") is not None: __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable # Remove other six meta path importers, since they cause problems. This can # happen if six is removed from sys.modules and then reloaded. (Setuptools does # this for some reason.) if sys.meta_path: for i, importer in enumerate(sys.meta_path): # Here's some real nastiness: Another "instance" of the six module might # be floating around. Therefore, we can't use isinstance() to check for # the six meta path importer, since the other six instance will have # inserted an importer with different class. if (type(importer).__name__ == "_SixMetaPathImporter" and importer.name == __name__): del sys.meta_path[i] break del i, importer # Finally, add the importer to the meta path import hook. 
sys.meta_path.append(_importer)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/__init__.py

"""
urllib3 - Thread-safe connection pooling and re-using.
"""
""" from __future__ import absolute_import import warnings from .connectionpool import ( HTTPConnectionPool, HTTPSConnectionPool, connection_from_url ) from . import exceptions from .filepost import encode_multipart_formdata from .poolmanager import PoolManager, ProxyManager, proxy_from_url from .response import HTTPResponse from .util.request import make_headers from .util.url import get_host from .util.timeout import Timeout from .util.retry import Retry # Set default logging handler to avoid "No handler found" warnings. import logging from logging import NullHandler __author__ = 'Andrey Petrov (andrey.petrov@shazow.net)' __license__ = 'MIT' __version__ = '1.25.3' __all__ = ( 'HTTPConnectionPool', 'HTTPSConnectionPool', 'PoolManager', 'ProxyManager', 'HTTPResponse', 'Retry', 'Timeout', 'add_stderr_logger', 'connection_from_url', 'disable_warnings', 'encode_multipart_formdata', 'get_host', 'make_headers', 'proxy_from_url', ) logging.getLogger(__name__).addHandler(NullHandler()) def add_stderr_logger(level=logging.DEBUG): """ Helper for quickly adding a StreamHandler to the logger. Useful for debugging. Returns the handler after adding it. """ # This method needs to be in this __init__.py to get the __name__ correct # even if urllib3 is vendored within another package. logger = logging.getLogger(__name__) handler = logging.StreamHandler() handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s')) logger.addHandler(handler) logger.setLevel(level) logger.debug('Added a stderr logging handler to logger: %s', __name__) return handler # ... Clean up. del NullHandler # All warning filters *must* be appended unless you're really certain that they # shouldn't be: otherwise, it's very hard for users to use most Python # mechanisms to silence them. # SecurityWarning's always go off by default. warnings.simplefilter('always', exceptions.SecurityWarning, append=True) # SubjectAltNameWarning's should go off once per host warnings.simplefilter('default', exceptions.SubjectAltNameWarning, append=True) # InsecurePlatformWarning's don't vary between requests, so we keep it default. warnings.simplefilter('default', exceptions.InsecurePlatformWarning, append=True) # SNIMissingWarnings should go off only once. warnings.simplefilter('default', exceptions.SNIMissingWarning, append=True) def disable_warnings(category=exceptions.HTTPWarning): """ Helper for quickly disabling all urllib3 warnings. 
""" warnings.simplefilter('ignore', category) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/_collections.py������������������0000644�0001750�0001750�00000024772�00000000000�031104� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import try: from collections.abc import Mapping, MutableMapping except ImportError: from collections import Mapping, MutableMapping try: from threading import RLock except ImportError: # Platform-specific: No threads available class RLock: def __enter__(self): pass def __exit__(self, exc_type, exc_value, traceback): pass from collections import OrderedDict from .exceptions import InvalidHeader from .packages.six import iterkeys, itervalues, PY3 __all__ = ['RecentlyUsedContainer', 'HTTPHeaderDict'] _Null = object() class RecentlyUsedContainer(MutableMapping): """ Provides a thread-safe dict-like container which maintains up to ``maxsize`` keys while throwing away the least-recently-used keys beyond ``maxsize``. :param maxsize: Maximum number of recent elements to retain. :param dispose_func: Every time an item is evicted from the container, ``dispose_func(value)`` is called. Callback which will get called """ ContainerCls = OrderedDict def __init__(self, maxsize=10, dispose_func=None): self._maxsize = maxsize self.dispose_func = dispose_func self._container = self.ContainerCls() self.lock = RLock() def __getitem__(self, key): # Re-insert the item, moving it to the end of the eviction line. 


class HTTPHeaderDict(MutableMapping):
    """
    :param headers:
        An iterable of field-value pairs. Must not contain multiple field names
        when compared case-insensitively.

    :param kwargs:
        Additional field-value pairs to pass in to ``dict.update``.

    A ``dict`` like container for storing HTTP Headers.

    Field names are stored and compared case-insensitively in compliance with
    RFC 7230. Iteration provides the first case-sensitive key seen for each
    case-insensitive pair.

    Using ``__setitem__`` syntax overwrites fields that compare equal
    case-insensitively in order to maintain ``dict``'s api. For fields that
    compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add``
    in a loop.

    If multiple fields that are equal case-insensitively are passed to the
    constructor or ``.update``, the behavior is undefined and some will be
    lost.

    >>> headers = HTTPHeaderDict()
    >>> headers.add('Set-Cookie', 'foo=bar')
    >>> headers.add('set-cookie', 'baz=quxx')
    >>> headers['content-length'] = '7'
    >>> headers['SET-cookie']
    'foo=bar, baz=quxx'
    >>> headers['Content-Length']
    '7'
    """

    def __init__(self, headers=None, **kwargs):
        super(HTTPHeaderDict, self).__init__()
        self._container = OrderedDict()
        if headers is not None:
            if isinstance(headers, HTTPHeaderDict):
                self._copy_from(headers)
            else:
                self.extend(headers)
        if kwargs:
            self.extend(kwargs)

    def __setitem__(self, key, val):
        self._container[key.lower()] = [key, val]
        return self._container[key.lower()]

    def __getitem__(self, key):
        val = self._container[key.lower()]
        return ', '.join(val[1:])

    def __delitem__(self, key):
        del self._container[key.lower()]

    def __contains__(self, key):
        return key.lower() in self._container

    def __eq__(self, other):
        if not isinstance(other, Mapping) and not hasattr(other, 'keys'):
            return False
        if not isinstance(other, type(self)):
            other = type(self)(other)
        return (dict((k.lower(), v) for k, v in self.itermerged()) ==
                dict((k.lower(), v) for k, v in other.itermerged()))

    def __ne__(self, other):
        return not self.__eq__(other)

    if not PY3:  # Python 2
        iterkeys = MutableMapping.iterkeys
        itervalues = MutableMapping.itervalues

    __marker = object()

    def __len__(self):
        return len(self._container)

    def __iter__(self):
        # Only provide the originally cased names
        for vals in self._container.values():
            yield vals[0]

    def pop(self, key, default=__marker):
        '''D.pop(k[,d]) -> v, remove specified key and return the
        corresponding value. If key is not found, d is returned if given,
        otherwise KeyError is raised.
        '''
        # Using the MutableMapping function directly fails due to the private marker.
        # Using ordinary dict.pop would expose the internal structures.
        # So let's reinvent the wheel.
        try:
            value = self[key]
        except KeyError:
            if default is self.__marker:
                raise
            return default
        else:
            del self[key]
            return value

    def discard(self, key):
        try:
            del self[key]
        except KeyError:
            pass

    def add(self, key, val):
        """Adds a (name, value) pair, doesn't overwrite the value if it already
        exists.

        >>> headers = HTTPHeaderDict(foo='bar')
        >>> headers.add('Foo', 'baz')
        >>> headers['foo']
        'bar, baz'
        """
        key_lower = key.lower()
        new_vals = [key, val]
        # Keep the common case aka no item present as fast as possible
        vals = self._container.setdefault(key_lower, new_vals)
        if new_vals is not vals:
            vals.append(val)

    def extend(self, *args, **kwargs):
        """Generic import function for any type of header-like object.
        Adapted version of MutableMapping.update in order to insert items
        with self.add instead of self.__setitem__
        """
        if len(args) > 1:
            raise TypeError("extend() takes at most 1 positional "
                            "arguments ({0} given)".format(len(args)))
        other = args[0] if len(args) >= 1 else ()

        if isinstance(other, HTTPHeaderDict):
            for key, val in other.iteritems():
                self.add(key, val)
        elif isinstance(other, Mapping):
            for key in other:
                self.add(key, other[key])
        elif hasattr(other, "keys"):
            for key in other.keys():
                self.add(key, other[key])
        else:
            for key, value in other:
                self.add(key, value)

        for key, value in kwargs.items():
            self.add(key, value)
    def getlist(self, key, default=__marker):
        """Returns a list of all the values for the named field. Returns an
        empty list if the key doesn't exist."""
        try:
            vals = self._container[key.lower()]
        except KeyError:
            if default is self.__marker:
                return []
            return default
        else:
            return vals[1:]

    # Backwards compatibility for httplib
    getheaders = getlist
    getallmatchingheaders = getlist
    iget = getlist

    # Backwards compatibility for http.cookiejar
    get_all = getlist

    def __repr__(self):
        return "%s(%s)" % (type(self).__name__, dict(self.itermerged()))

    def _copy_from(self, other):
        for key in other:
            val = other.getlist(key)
            if isinstance(val, list):
                # Don't need to convert tuples
                val = list(val)
            self._container[key.lower()] = [key] + val

    def copy(self):
        clone = type(self)()
        clone._copy_from(self)
        return clone

    def iteritems(self):
        """Iterate over all header lines, including duplicate ones."""
        for key in self:
            vals = self._container[key.lower()]
            for val in vals[1:]:
                yield vals[0], val

    def itermerged(self):
        """Iterate over all headers, merging duplicate ones together."""
        for key in self:
            val = self._container[key.lower()]
            yield val[0], ', '.join(val[1:])

    def items(self):
        return list(self.iteritems())

    @classmethod
    def from_httplib(cls, message):  # Python 2
        """Read headers from a Python 2 httplib message object."""
        # python2.7 does not expose a proper API for exporting multiheaders
        # efficiently. This function re-reads raw lines from the message
        # object and extracts the multiheaders properly.
        obs_fold_continued_leaders = (' ', '\t')
        headers = []

        for line in message.headers:
            if line.startswith(obs_fold_continued_leaders):
                if not headers:
                    # We received a header line that starts with OWS as described
                    # in RFC-7230 S3.2.4. This indicates a multiline header, but
                    # there exists no previous header to which we can attach it.
                    raise InvalidHeader(
                        'Header continuation with no previous header: %s' % line
                    )
                else:
                    key, value = headers[-1]
                    headers[-1] = (key, value + ' ' + line.strip())
                    continue

            key, value = line.split(':', 1)
            headers.append((key, value.strip()))

        return cls(headers)
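
# Illustrative sketch (not part of this module): reading multi-valued headers
# back out of an HTTPHeaderDict.
#
#     headers = HTTPHeaderDict()
#     headers.add('Set-Cookie', 'a=1')
#     headers.add('Set-Cookie', 'b=2')
#     headers['set-cookie']          # -> 'a=1, b=2' (values merged)
#     headers.getlist('Set-Cookie')  # -> ['a=1', 'b=2'] (individual values)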


# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/connection.py
from __future__ import absolute_import

import datetime
import logging
import os
import socket
from socket import error as SocketError, timeout as SocketTimeout
import warnings

from .packages import six
from .packages.six.moves.http_client import HTTPConnection as _HTTPConnection
from .packages.six.moves.http_client import HTTPException  # noqa: F401

try:  # Compiled with SSL?
    import ssl
    BaseSSLError = ssl.SSLError
except (ImportError, AttributeError):  # Platform-specific: No SSL.
    ssl = None

    class BaseSSLError(BaseException):
        pass


try:
    # Python 3: not a no-op, we're adding this to the namespace so it can be imported.
    ConnectionError = ConnectionError
except NameError:
    # Python 2
    class ConnectionError(Exception):
        pass


from .exceptions import (
    NewConnectionError,
    ConnectTimeoutError,
    SubjectAltNameWarning,
    SystemTimeWarning,
)
from .packages.ssl_match_hostname import match_hostname, CertificateError

from .util.ssl_ import (
    resolve_cert_reqs,
    resolve_ssl_version,
    assert_fingerprint,
    create_urllib3_context,
    ssl_wrap_socket
)

from .util import connection

from ._collections import HTTPHeaderDict

log = logging.getLogger(__name__)

port_by_scheme = {
    'http': 80,
    'https': 443,
}

# When updating RECENT_DATE, move it to within two years of the current date,
# and not less than 6 months ago.
# Example: if Today is 2018-01-01, then RECENT_DATE should be any date on or
# after 2016-01-01 (today - 2 years) AND before 2017-07-01 (today - 6 months)
RECENT_DATE = datetime.date(2017, 6, 30)


class DummyConnection(object):
    """Used to detect a failed ConnectionCls import."""
    pass


class HTTPConnection(_HTTPConnection, object):
    """
    Based on httplib.HTTPConnection but provides an extra constructor
    backwards-compatibility layer between older and newer Pythons.

    Additional keyword parameters are used to configure attributes of the
    connection. Accepted parameters include:

    - ``strict``: See the documentation on :class:`urllib3.connectionpool.HTTPConnectionPool`
    - ``source_address``: Set the source address for the current connection.
    - ``socket_options``: Set specific options on the underlying socket. If not
      specified, then defaults are loaded from
      ``HTTPConnection.default_socket_options`` which includes disabling Nagle's
      algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy.

      For example, if you wish to enable TCP Keep Alive in addition to the
      defaults, you might pass (see also the sketch below)::

          HTTPConnection.default_socket_options + [
              (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
          ]

      Or you may want to disable the defaults by passing an empty list
      (e.g., ``[]``).
    """

    default_port = port_by_scheme['http']

    #: Disable Nagle's algorithm by default.
    #: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]``
    default_socket_options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]

    #: Whether this connection verifies the host's certificate.
    is_verified = False

    def __init__(self, *args, **kw):
        if six.PY3:
            kw.pop('strict', None)

        # Pre-set source_address.
        self.source_address = kw.get('source_address')

        #: The socket options provided by the user. If no options are
        #: provided, we use the default options.
        self.socket_options = kw.pop('socket_options', self.default_socket_options)

        _HTTPConnection.__init__(self, *args, **kw)
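
    # Illustrative sketch (not part of this module): constructing a connection
    # with the extra socket options described in the class docstring above.
    # Host and port are invented for the example.
    #
    #     opts = HTTPConnection.default_socket_options + [
    #         (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
    #     ]
    #     conn = HTTPConnection('example.com', 80, socket_options=opts)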
    @property
    def host(self):
        """
        Getter method to remove any trailing dots that indicate the hostname is
        an FQDN.

        In general, SSL certificates don't include the trailing dot indicating
        a fully-qualified domain name, and thus, they don't validate properly
        when checked against a domain name that includes the dot. In addition,
        some servers may not expect to receive the trailing dot when provided.

        However, the hostname with trailing dot is critical to DNS resolution;
        doing a lookup with the trailing dot will properly only resolve the
        appropriate FQDN, whereas a lookup without a trailing dot will search
        the system's search domain list. Thus, it's important to keep the
        original host around for use only in those cases where it's appropriate
        (i.e., when doing DNS lookup to establish the actual TCP connection
        across which we're going to send HTTP requests).
        """
        return self._dns_host.rstrip('.')

    @host.setter
    def host(self, value):
        """
        Setter for the `host` property.

        We assume that only urllib3 uses the _dns_host attribute; httplib
        itself only uses `host`, and it seems reasonable that other libraries
        follow suit.
        """
        self._dns_host = value

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = connection.create_connection(
                (self._dns_host, self.port), self.timeout, **extra_kw)

        except SocketTimeout:
            raise ConnectTimeoutError(
                self, "Connection to %s timed out. (connect timeout=%s)" %
                (self.host, self.timeout))

        except SocketError as e:
            raise NewConnectionError(
                self, "Failed to establish a new connection: %s" % e)

        return conn

    def _prepare_conn(self, conn):
        self.sock = conn
        # Google App Engine's httplib does not define _tunnel_host
        if getattr(self, '_tunnel_host', None):
            # TODO: Fix tunnel so it doesn't depend on self.sock state.
            self._tunnel()
            # Mark this connection as not reusable
            self.auto_open = 0

    def connect(self):
        conn = self._new_conn()
        self._prepare_conn(conn)

    def request_chunked(self, method, url, body=None, headers=None):
        """
        Alternative to the common request method, which sends the
        body with chunked encoding and not as one block
        """
        headers = HTTPHeaderDict(headers if headers is not None else {})
        skip_accept_encoding = 'accept-encoding' in headers
        skip_host = 'host' in headers
        self.putrequest(
            method,
            url,
            skip_accept_encoding=skip_accept_encoding,
            skip_host=skip_host
        )
        for header, value in headers.items():
            self.putheader(header, value)
        if 'transfer-encoding' not in headers:
            self.putheader('Transfer-Encoding', 'chunked')
        self.endheaders()

        if body is not None:
            stringish_types = six.string_types + (bytes,)
            if isinstance(body, stringish_types):
                body = (body,)
            for chunk in body:
                if not chunk:
                    continue
                if not isinstance(chunk, bytes):
                    chunk = chunk.encode('utf8')
                len_str = hex(len(chunk))[2:]
                self.send(len_str.encode('utf-8'))
                self.send(b'\r\n')
                self.send(chunk)
                self.send(b'\r\n')

        # After the if clause, to always have a closed body
        self.send(b'0\r\n\r\n')
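
# Illustrative sketch (not part of this module): the wire format that
# request_chunked() above produces for a body of [b'hello', b'world']:
#
#     5\r\nhello\r\n
#     5\r\nworld\r\n
#     0\r\n\r\n          <- terminating zero-length chunk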


class HTTPSConnection(HTTPConnection):
    default_port = port_by_scheme['https']

    ssl_version = None

    def __init__(self, host, port=None, key_file=None, cert_file=None,
                 key_password=None, strict=None,
                 timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                 ssl_context=None, server_hostname=None, **kw):

        HTTPConnection.__init__(self, host, port, strict=strict,
                                timeout=timeout, **kw)

        self.key_file = key_file
        self.cert_file = cert_file
        self.key_password = key_password
        self.ssl_context = ssl_context
        self.server_hostname = server_hostname

        # Required property for Google AppEngine 1.9.0 which otherwise causes
        # HTTPS requests to go out as HTTP. (See Issue #356)
        self._protocol = 'https'

    def connect(self):
        conn = self._new_conn()
        self._prepare_conn(conn)

        # Wrap socket using verification with the root certs in
        # trusted_root_certs
        default_ssl_context = False
        if self.ssl_context is None:
            default_ssl_context = True
            self.ssl_context = create_urllib3_context(
                ssl_version=resolve_ssl_version(self.ssl_version),
                cert_reqs=resolve_cert_reqs(self.cert_reqs),
            )

        # Try to load OS default certs if none are given.
        # Works well on Windows (requires Python3.4+)
        context = self.ssl_context
        if (not self.ca_certs and not self.ca_cert_dir and default_ssl_context
                and hasattr(context, 'load_default_certs')):
            context.load_default_certs()

        self.sock = ssl_wrap_socket(
            sock=conn,
            keyfile=self.key_file,
            certfile=self.cert_file,
            key_password=self.key_password,
            ssl_context=self.ssl_context,
            server_hostname=self.server_hostname
        )


class VerifiedHTTPSConnection(HTTPSConnection):
    """
    Based on httplib.HTTPSConnection but wraps the socket with
    SSL certification.
    """
    cert_reqs = None
    ca_certs = None
    ca_cert_dir = None
    ssl_version = None
    assert_fingerprint = None

    def set_cert(self, key_file=None, cert_file=None,
                 cert_reqs=None, key_password=None, ca_certs=None,
                 assert_hostname=None, assert_fingerprint=None,
                 ca_cert_dir=None):
        """
        This method should only be called once, before the connection is used.
        """
        # If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also
        # have an SSLContext object in which case we'll use its verify_mode.
        if cert_reqs is None:
            if self.ssl_context is not None:
                cert_reqs = self.ssl_context.verify_mode
            else:
                cert_reqs = resolve_cert_reqs(None)

        self.key_file = key_file
        self.cert_file = cert_file
        self.cert_reqs = cert_reqs
        self.key_password = key_password
        self.assert_hostname = assert_hostname
        self.assert_fingerprint = assert_fingerprint
        self.ca_certs = ca_certs and os.path.expanduser(ca_certs)
        self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir)
    def connect(self):
        # Add certificate verification
        conn = self._new_conn()
        hostname = self.host

        # Google App Engine's httplib does not define _tunnel_host
        if getattr(self, '_tunnel_host', None):
            self.sock = conn
            # Calls self._set_hostport(), so self.host is
            # self._tunnel_host below.
            self._tunnel()
            # Mark this connection as not reusable
            self.auto_open = 0

            # Override the host with the one we're requesting data from.
            hostname = self._tunnel_host

        server_hostname = hostname
        if self.server_hostname is not None:
            server_hostname = self.server_hostname

        is_time_off = datetime.date.today() < RECENT_DATE
        if is_time_off:
            warnings.warn((
                'System time is way off (before {0}). This will probably '
                'lead to SSL verification errors').format(RECENT_DATE),
                SystemTimeWarning
            )

        # Wrap socket using verification with the root certs in
        # trusted_root_certs
        default_ssl_context = False
        if self.ssl_context is None:
            default_ssl_context = True
            self.ssl_context = create_urllib3_context(
                ssl_version=resolve_ssl_version(self.ssl_version),
                cert_reqs=resolve_cert_reqs(self.cert_reqs),
            )

        context = self.ssl_context
        context.verify_mode = resolve_cert_reqs(self.cert_reqs)

        # Try to load OS default certs if none are given.
        # Works well on Windows (requires Python3.4+)
        if (not self.ca_certs and not self.ca_cert_dir and default_ssl_context
                and hasattr(context, 'load_default_certs')):
            context.load_default_certs()

        self.sock = ssl_wrap_socket(
            sock=conn,
            keyfile=self.key_file,
            certfile=self.cert_file,
            key_password=self.key_password,
            ca_certs=self.ca_certs,
            ca_cert_dir=self.ca_cert_dir,
            server_hostname=server_hostname,
            ssl_context=context)

        if self.assert_fingerprint:
            assert_fingerprint(self.sock.getpeercert(binary_form=True),
                               self.assert_fingerprint)
        elif context.verify_mode != ssl.CERT_NONE \
                and not getattr(context, 'check_hostname', False) \
                and self.assert_hostname is not False:
            # While urllib3 attempts to always turn off hostname matching from
            # the TLS library, this cannot always be done. So we check whether
            # the TLS Library still thinks it's matching hostnames.
            cert = self.sock.getpeercert()
            if not cert.get('subjectAltName', ()):
                warnings.warn((
                    'Certificate for {0} has no `subjectAltName`, falling back to check for a '
                    '`commonName` for now. This feature is being removed by major browsers and '
                    'deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 '
                    'for details.)'.format(hostname)),
                    SubjectAltNameWarning
                )
            _match_hostname(cert, self.assert_hostname or server_hostname)

        self.is_verified = (
            context.verify_mode == ssl.CERT_REQUIRED or
            self.assert_fingerprint is not None
        )


def _match_hostname(cert, asserted_hostname):
    try:
        match_hostname(cert, asserted_hostname)
    except CertificateError as e:
        log.error(
            'Certificate did not match expected hostname: %s. '
            'Certificate: %s', asserted_hostname, cert
        )
        # Add cert to exception and reraise so client code can inspect
        # the cert when catching the exception, if they want to
        e._peer_cert = cert
        raise


if ssl:
    # Make a copy for testing.
    UnverifiedHTTPSConnection = HTTPSConnection
    HTTPSConnection = VerifiedHTTPSConnection
else:
    HTTPSConnection = DummyConnection
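
# Illustrative sketch (not part of this module): opening a verified HTTPS
# connection directly. The CA bundle path is invented; the string form of
# cert_reqs is assumed to be resolved by resolve_cert_reqs().
#
#     conn = VerifiedHTTPSConnection('example.com', 443)
#     conn.set_cert(ca_certs='/path/to/cacert.pem', cert_reqs='CERT_REQUIRED')
#     conn.connect()       # raises on certificate or hostname mismatch
#     conn.is_verified     # True once verification succeeded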


# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py
from __future__ import absolute_import

import errno
import logging
import sys
import warnings

from socket import error as SocketError, timeout as SocketTimeout
import socket


from .exceptions import (
    ClosedPoolError,
    ProtocolError,
    EmptyPoolError,
    HeaderParsingError,
    HostChangedError,
    LocationValueError,
    MaxRetryError,
    ProxyError,
    ReadTimeoutError,
    SSLError,
    TimeoutError,
    InsecureRequestWarning,
    NewConnectionError,
)
from .packages.ssl_match_hostname import CertificateError
from .packages import six
from .packages.six.moves import queue
from .packages.rfc3986.normalizers import normalize_host
from .connection import (
    port_by_scheme,
    DummyConnection,
    HTTPConnection, HTTPSConnection, VerifiedHTTPSConnection,
    HTTPException, BaseSSLError,
)
from .request import RequestMethods
from .response import HTTPResponse

from .util.connection import is_connection_dropped
from .util.request import set_file_position
from .util.response import assert_header_parsing
from .util.retry import Retry
from .util.timeout import Timeout
from .util.url import get_host, Url, NORMALIZABLE_SCHEMES
from .util.queue import LifoQueue


xrange = six.moves.xrange

log = logging.getLogger(__name__)

_Default = object()


# Pool objects
class ConnectionPool(object):
    """
    Base class for all connection pools, such as
    :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`.
    """

    scheme = None
    QueueCls = LifoQueue

    def __init__(self, host, port=None):
        if not host:
            raise LocationValueError("No host specified.")

        self.host = _normalize_host(host, scheme=self.scheme)
        self._proxy_host = host.lower()
        self.port = port

    def __str__(self):
        return '%s(host=%r, port=%r)' % (type(self).__name__,
                                         self.host, self.port)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        # Return False to re-raise any potential exceptions
        return False

    def close(self):
        """
        Close all pooled connections and disable the pool.
        """
        pass


# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252
_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
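
# Illustrative sketch (not part of this module): pools are context managers,
# so the __exit__ above guarantees close() runs. The host is invented.
#
#     with HTTPConnectionPool('example.com', maxsize=2) as pool:
#         r = pool.request('GET', '/')
#     # pool.close() has been called here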
""" scheme = None QueueCls = LifoQueue def __init__(self, host, port=None): if not host: raise LocationValueError("No host specified.") self.host = _normalize_host(host, scheme=self.scheme) self._proxy_host = host.lower() self.port = port def __str__(self): return '%s(host=%r, port=%r)' % (type(self).__name__, self.host, self.port) def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() # Return False to re-raise any potential exceptions return False def close(self): """ Close all pooled connections and disable the pool. """ pass # This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252 _blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK} class HTTPConnectionPool(ConnectionPool, RequestMethods): """ Thread-safe connection pool for one host. :param host: Host used for this HTTP Connection (e.g. "localhost"), passed into :class:`httplib.HTTPConnection`. :param port: Port used for this HTTP Connection (None is equivalent to 80), passed into :class:`httplib.HTTPConnection`. :param strict: Causes BadStatusLine to be raised if the status line can't be parsed as a valid HTTP/1.0 or 1.1 status line, passed into :class:`httplib.HTTPConnection`. .. note:: Only works in Python 2. This parameter is ignored in Python 3. :param timeout: Socket timeout in seconds for each individual connection. This can be a float or integer, which sets the timeout for the HTTP request, or an instance of :class:`urllib3.util.Timeout` which gives you more fine-grained control over request timeouts. After the constructor has been parsed, this is always a `urllib3.util.Timeout` object. :param maxsize: Number of connections to save that can be reused. More than 1 is useful in multithreaded situations. If ``block`` is set to False, more connections will be created but they will not be saved once they've been used. :param block: If set to True, no more than ``maxsize`` connections will be used at a time. When no free connections are available, the call will block until a connection has been released. This is a useful side effect for particular multithreaded situations where one does not want to use more than maxsize connections per host to prevent flooding. :param headers: Headers to include with all requests, unless other headers are given explicitly. :param retries: Retry configuration to use by default with requests in this pool. :param _proxy: Parsed proxy URL, should not be used directly, instead, see :class:`urllib3.connectionpool.ProxyManager`" :param _proxy_headers: A dictionary with proxy headers, should not be used directly, instead, see :class:`urllib3.connectionpool.ProxyManager`" :param \\**conn_kw: Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`, :class:`urllib3.connection.HTTPSConnection` instances. 
""" scheme = 'http' ConnectionCls = HTTPConnection ResponseCls = HTTPResponse def __init__(self, host, port=None, strict=False, timeout=Timeout.DEFAULT_TIMEOUT, maxsize=1, block=False, headers=None, retries=None, _proxy=None, _proxy_headers=None, **conn_kw): ConnectionPool.__init__(self, host, port) RequestMethods.__init__(self, headers) self.strict = strict if not isinstance(timeout, Timeout): timeout = Timeout.from_float(timeout) if retries is None: retries = Retry.DEFAULT self.timeout = timeout self.retries = retries self.pool = self.QueueCls(maxsize) self.block = block self.proxy = _proxy self.proxy_headers = _proxy_headers or {} # Fill the queue up so that doing get() on it will block properly for _ in xrange(maxsize): self.pool.put(None) # These are mostly for testing and debugging purposes. self.num_connections = 0 self.num_requests = 0 self.conn_kw = conn_kw if self.proxy: # Enable Nagle's algorithm for proxies, to avoid packet fragmentation. # We cannot know if the user has added default socket options, so we cannot replace the # list. self.conn_kw.setdefault('socket_options', []) def _new_conn(self): """ Return a fresh :class:`HTTPConnection`. """ self.num_connections += 1 log.debug("Starting new HTTP connection (%d): %s:%s", self.num_connections, self.host, self.port or "80") conn = self.ConnectionCls(host=self.host, port=self.port, timeout=self.timeout.connect_timeout, strict=self.strict, **self.conn_kw) return conn def _get_conn(self, timeout=None): """ Get a connection. Will return a pooled connection if one is available. If no connections are available and :prop:`.block` is ``False``, then a fresh connection is returned. :param timeout: Seconds to wait before giving up and raising :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and :prop:`.block` is ``True``. """ conn = None try: conn = self.pool.get(block=self.block, timeout=timeout) except AttributeError: # self.pool is None raise ClosedPoolError(self, "Pool is closed.") except queue.Empty: if self.block: raise EmptyPoolError(self, "Pool reached maximum size and no more " "connections are allowed.") pass # Oh well, we'll create a new connection then # If this is a persistent connection, check if it got disconnected if conn and is_connection_dropped(conn): log.debug("Resetting dropped connection: %s", self.host) conn.close() if getattr(conn, 'auto_open', 1) == 0: # This is a proxied connection that has been mutated by # httplib._tunnel() and cannot be reused (since it would # attempt to bypass the proxy) conn = None return conn or self._new_conn() def _put_conn(self, conn): """ Put a connection back into the pool. :param conn: Connection object for the current host and port as returned by :meth:`._new_conn` or :meth:`._get_conn`. If the pool is already full, the connection is closed and discarded because we exceeded maxsize. If connections are discarded frequently, then maxsize should be increased. If the pool is closed, then the connection will be closed and discarded. """ try: self.pool.put(conn, block=False) return # Everything is dandy, done. except AttributeError: # self.pool is None. pass except queue.Full: # This should never happen if self.block == True log.warning( "Connection pool is full, discarding connection: %s", self.host) # Connection never got put back into the pool, close it. if conn: conn.close() def _validate_conn(self, conn): """ Called right before a request is made, after the socket is created. """ pass def _prepare_proxy(self, conn): # Nothing to do for HTTP connections. 
    def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
                      **httplib_request_kw):
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param timeout:
            Socket timeout in seconds for the request. This can be a
            float or integer, which will set the same timeout value for
            the socket connect and the socket read, or an instance of
            :class:`urllib3.util.Timeout`, which gives you more fine-grained
            control over your timeouts.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = timeout_obj.connect_timeout

        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # conn.request() calls httplib.*.request, not the method in
        # urllib3.request. It also calls makefile (recv) on the socket.
        if chunked:
            conn.request_chunked(method, url, **httplib_request_kw)
        else:
            conn.request(method, url, **httplib_request_kw)

        # Reset the timeout for the recv() on the socket
        read_timeout = timeout_obj.read_timeout

        # App Engine doesn't have a sock attr
        if getattr(conn, 'sock', None):
            # In Python 3 socket.py will catch EAGAIN and return None when you
            # try and read into the file pointer created by http.client, which
            # instead raises a BadStatusLine exception. Instead of catching
            # the exception and assuming all BadStatusLine exceptions are read
            # timeouts, check for a zero timeout before making the request.
            if read_timeout == 0:
                raise ReadTimeoutError(
                    self, url, "Read timed out. (read timeout=%s)" % read_timeout)
            if read_timeout is Timeout.DEFAULT_TIMEOUT:
                conn.sock.settimeout(socket.getdefaulttimeout())
            else:  # None or a value
                conn.sock.settimeout(read_timeout)

        # Receive the response from the server
        try:
            try:
                # Python 2.7, use buffering of HTTP responses
                httplib_response = conn.getresponse(buffering=True)
            except TypeError:
                # Python 3
                try:
                    httplib_response = conn.getresponse()
                except Exception as e:
                    # Remove the TypeError from the exception chain in Python 3;
                    # otherwise it looks like a programming error was the cause.
                    six.raise_from(e, None)
        except (SocketTimeout, BaseSSLError, SocketError) as e:
            self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
            raise

        # AppEngine doesn't have a version attr.
        http_version = getattr(conn, '_http_vsn_str', 'HTTP/?')
        log.debug("%s://%s:%s \"%s %s %s\" %s %s", self.scheme, self.host,
                  self.port, method, url, http_version,
                  httplib_response.status, httplib_response.length)

        try:
            assert_header_parsing(httplib_response.msg)
        except (HeaderParsingError, TypeError) as hpe:  # Platform-specific: Python 3
            log.warning(
                'Failed to parse headers (url=%s): %s',
                self._absolute_url(url), hpe, exc_info=True)

        return httplib_response

    def _absolute_url(self, path):
        return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url

    def close(self):
        """
        Close all pooled connections and disable the pool.
        """
        if self.pool is None:
            return
        # Disable access to the pool
        old_pool, self.pool = self.pool, None

        try:
            while True:
                conn = old_pool.get(block=False)
                if conn:
                    conn.close()

        except queue.Empty:
            pass  # Done.

    def is_same_host(self, url):
        """
        Check if the given ``url`` is a member of the same host as this
        connection pool.
        """
        if url.startswith('/'):
            return True

        # TODO: Add optional support for socket.gethostbyname checking.
        scheme, host, port = get_host(url)
        if host is not None:
            host = _normalize_host(host, scheme=scheme)

        # Use explicit default port for comparison when none is given
        if self.port and not port:
            port = port_by_scheme.get(scheme)
        elif not self.port and port == port_by_scheme.get(scheme):
            port = None

        return (scheme, host, port) == (self.scheme, self.host, self.port)
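
    # Illustrative sketch (not part of this module): the two timeout forms
    # accepted by _get_timeout() above, a bare number or a Timeout instance.
    #
    #     pool.urlopen('GET', '/', timeout=2.5)  # same value for connect+read
    #     pool.urlopen('GET', '/', timeout=Timeout(connect=1.0, read=10.0))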
    def urlopen(self, method, url, body=None, headers=None, retries=None,
                redirect=True, assert_same_host=True, timeout=_Default,
                pool_timeout=None, release_conn=None, chunked=False,
                body_pos=None, **response_kw):
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method provided
           by :class:`.RequestMethods`, such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param body:
            Data to send in the request body (useful for creating
            POST requests, see HTTPConnectionPool.post_url for
            more convenience).

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When False, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of
            ``response_kw.get('preload_content', True)``.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.

        :param \\**response_kw:
            Additional parameters are passed to
            :meth:`urllib3.response.HTTPResponse.from_httplib`
        """
        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = response_kw.get('preload_content', True)

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] <https://github.com/shazow/urllib3/issues/651>
        release_this_conn = release_conn

        # Merge the proxy headers. Only do this in HTTP. We have to copy the
        # headers dict so we can safely change it without those changes being
        # reflected in anyone else's copy.
        if self.scheme == 'http':
            headers = headers.copy()
            headers.update(self.proxy_headers)

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout

            is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
            if is_new_proxy_conn:
                self._prepare_proxy(conn)

            # Make the request on the httplib connection object.
            httplib_response = self._make_request(conn, method, url,
                                                  timeout=timeout_obj,
                                                  body=body, headers=headers,
                                                  chunked=chunked)

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Pass method to Response for length checking
            response_kw['request_method'] = method

            # Import httplib's response into our own wrapper object
            response = self.ResponseCls.from_httplib(httplib_response,
                                                     pool=self,
                                                     connection=response_conn,
                                                     retries=retries,
                                                     **response_kw)

            # Everything went great!
            clean_exit = True

        except queue.Empty:
            # Timed out by queue.
            raise EmptyPoolError(self, "No pool connections are available.")

        except (TimeoutError, HTTPException, SocketError, ProtocolError,
                BaseSSLError, SSLError, CertificateError) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            if isinstance(e, (BaseSSLError, CertificateError)):
                e = SSLError(e)
            elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
                e = ProxyError('Cannot connect to proxy.', e)
            elif isinstance(e, (SocketError, HTTPException)):
                e = ProtocolError('Connection aborted.', e)

            retries = retries.increment(method, url, error=e, _pool=self,
                                        _stacktrace=sys.exc_info()[2])
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                conn = conn and conn.close()
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning("Retrying (%r) after connection "
                        "broken by '%r': %s", retries, err, url)
            return self.urlopen(method, url, body, headers, retries,
                                redirect, assert_same_host,
                                timeout=timeout, pool_timeout=pool_timeout,
                                release_conn=release_conn, body_pos=body_pos,
                                **response_kw)

        def drain_and_release_conn(response):
            try:
                # discard any remaining response body, the connection will be
                # released back to the pool once the entire response is read
                response.read()
            except (TimeoutError, HTTPException, SocketError, ProtocolError,
                    BaseSSLError, SSLError):
                pass

        # Handle redirect?
        redirect_location = redirect and response.get_redirect_location()
        if redirect_location:
            if response.status == 303:
                method = 'GET'

            try:
                retries = retries.increment(method, url, response=response, _pool=self)
            except MaxRetryError:
                if retries.raise_on_redirect:
                    # Drain and release the connection for this response, since
                    # we're not returning it to be released manually.
                    drain_and_release_conn(response)
                    raise
                return response

            # drain and return the connection to the pool before recursing
            drain_and_release_conn(response)

            retries.sleep_for_retry(response)
            log.debug("Redirecting %s -> %s", url, redirect_location)
            return self.urlopen(
                method, redirect_location, body, headers,
                retries=retries, redirect=redirect,
                assert_same_host=assert_same_host,
                timeout=timeout, pool_timeout=pool_timeout,
                release_conn=release_conn, body_pos=body_pos,
                **response_kw)

        # Check if we should retry the HTTP response.
        has_retry_after = bool(response.getheader('Retry-After'))
        if retries.is_retry(method, response.status, has_retry_after):
            try:
                retries = retries.increment(method, url, response=response, _pool=self)
            except MaxRetryError:
                if retries.raise_on_status:
                    # Drain and release the connection for this response, since
                    # we're not returning it to be released manually.
                    drain_and_release_conn(response)
                    raise
                return response

            # drain and return the connection to the pool before recursing
            drain_and_release_conn(response)

            retries.sleep(response)
            log.debug("Retry: %s", url)
            return self.urlopen(
                method, url, body, headers,
                retries=retries, redirect=redirect,
                assert_same_host=assert_same_host,
                timeout=timeout, pool_timeout=pool_timeout,
                release_conn=release_conn,
                body_pos=body_pos, **response_kw)

        return response
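
    # Illustrative sketch (not part of this module): the retries parameter
    # documented above, using a Retry object for status-based retries. The
    # import path assumes this vendored copy of urllib3.
    #
    #     from pip._vendor.urllib3.util.retry import Retry
    #
    #     retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503])
    #     r = pool.urlopen('GET', '/', retries=retry)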
""" self.num_connections += 1 log.debug("Starting new HTTPS connection (%d): %s:%s", self.num_connections, self.host, self.port or "443") if not self.ConnectionCls or self.ConnectionCls is DummyConnection: raise SSLError("Can't connect to HTTPS URL because the SSL " "module is not available.") actual_host = self.host actual_port = self.port if self.proxy is not None: actual_host = self.proxy.host actual_port = self.proxy.port conn = self.ConnectionCls(host=actual_host, port=actual_port, timeout=self.timeout.connect_timeout, strict=self.strict, cert_file=self.cert_file, key_file=self.key_file, key_password=self.key_password, **self.conn_kw) return self._prepare_conn(conn) def _validate_conn(self, conn): """ Called right before a request is made, after the socket is created. """ super(HTTPSConnectionPool, self)._validate_conn(conn) # Force connect early to allow us to validate the connection. if not getattr(conn, 'sock', None): # AppEngine might not have `.sock` conn.connect() if not conn.is_verified: warnings.warn(( 'Unverified HTTPS request is being made. ' 'Adding certificate verification is strongly advised. See: ' 'https://urllib3.readthedocs.io/en/latest/advanced-usage.html' '#ssl-warnings'), InsecureRequestWarning) def connection_from_url(url, **kw): """ Given a url, return an :class:`.ConnectionPool` instance of its host. This is a shortcut for not having to parse out the scheme, host, and port of the url before creating an :class:`.ConnectionPool` instance. :param url: Absolute URL string that must include the scheme. Port is optional. :param \\**kw: Passes additional parameters to the constructor of the appropriate :class:`.ConnectionPool`. Useful for specifying things like timeout, maxsize, headers, etc. Example:: >>> conn = connection_from_url('http://google.com/') >>> r = conn.request('GET', '/') """ scheme, host, port = get_host(url) port = port or port_by_scheme.get(scheme, 80) if scheme == 'https': return HTTPSConnectionPool(host, port=port, **kw) else: return HTTPConnectionPool(host, port=port, **kw) def _normalize_host(host, scheme): """ Normalize hosts for comparisons and use with sockets. """ # httplib doesn't like it when we include brackets in IPv6 addresses # Specifically, if we include brackets but also pass the port then # httplib crazily doubles up the square brackets on the Host header. # Instead, we need to make sure we never pass ``None`` as the port. # However, for backward compatibility reasons we can't actually # *assert* that. 


def _normalize_host(host, scheme):
    """
    Normalize hosts for comparisons and use with sockets.
    """

    # httplib doesn't like it when we include brackets in IPv6 addresses
    # Specifically, if we include brackets but also pass the port then
    # httplib crazily doubles up the square brackets on the Host header.
    # Instead, we need to make sure we never pass ``None`` as the port.
    # However, for backward compatibility reasons we can't actually
    # *assert* that. See http://bugs.python.org/issue28539
    if host.startswith('[') and host.endswith(']'):
        host = host.strip('[]')
    if scheme in NORMALIZABLE_SCHEMES:
        host = normalize_host(host)
    return host


# File: cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/__init__.py
# (empty file)
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/__init__.py (empty file)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py

"""
This module provides means to detect the App Engine environment.
"""

import os


def is_appengine():
    return (is_local_appengine() or
            is_prod_appengine() or
            is_prod_appengine_mvms())


def is_appengine_sandbox():
    return is_appengine() and not is_prod_appengine_mvms()


def is_local_appengine():
    return ('APPENGINE_RUNTIME' in os.environ and
            'Development/' in os.environ['SERVER_SOFTWARE'])


def is_prod_appengine():
    return ('APPENGINE_RUNTIME' in os.environ and
            'Google App Engine/' in os.environ['SERVER_SOFTWARE'] and
            not is_prod_appengine_mvms())


def is_prod_appengine_mvms():
    return os.environ.get('GAE_VM', False) == 'true'

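# ---------------------------------------------------------------------------
# A minimal sketch (not part of the vendored sources) of how the detection
# helpers above are meant to be combined: pick a transport based on the
# runtime environment. ``PoolManager`` is the normal socket-backed choice
# outside App Engine.
from pip._vendor.urllib3 import PoolManager
from pip._vendor.urllib3.contrib import _appengine_environ

if _appengine_environ.is_appengine_sandbox():
    # A URLFetch-backed manager would be appropriate here (see appengine.py).
    print("App Engine sandbox detected")
else:
    http = PoolManager()
    print("regular environment; using socket-based PoolManager")
# ---------------------------------------------------------------------------
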
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py (empty file)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_securetransport/bindings.py

"""
This module uses ctypes to bind a whole bunch of functions and constants from
SecureTransport.
The goal here is to provide the low-level API to SecureTransport. These are
essentially the C-level functions and constants, and they're pretty gross to
work with.

This code is a bastardised version of the code found in Will Bond's oscrypto
library. An enormous debt is owed to him for blazing this trail for us. For
that reason, this code should be considered to be covered both by urllib3's
license and by oscrypto's:

    Copyright (c) 2015-2016 Will Bond <will@wbond.net>

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the "Software"),
    to deal in the Software without restriction, including without limitation
    the rights to use, copy, modify, merge, publish, distribute, sublicense,
    and/or sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in
    all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    DEALINGS IN THE SOFTWARE.
"""
from __future__ import absolute_import

import platform
from ctypes.util import find_library
from ctypes import (
    c_void_p, c_int32, c_char_p, c_size_t, c_byte, c_uint32, c_ulong, c_long,
    c_bool
)
from ctypes import CDLL, POINTER, CFUNCTYPE


security_path = find_library('Security')
if not security_path:
    raise ImportError('The library Security could not be found')


core_foundation_path = find_library('CoreFoundation')
if not core_foundation_path:
    raise ImportError('The library CoreFoundation could not be found')


version = platform.mac_ver()[0]
version_info = tuple(map(int, version.split('.')))
if version_info < (10, 8):
    raise OSError(
        'Only OS X 10.8 and newer are supported, not %s.%s' % (
            version_info[0], version_info[1]
        )
    )

Security = CDLL(security_path, use_errno=True)
CoreFoundation = CDLL(core_foundation_path, use_errno=True)

Boolean = c_bool
CFIndex = c_long
CFStringEncoding = c_uint32
CFData = c_void_p
CFString = c_void_p
CFArray = c_void_p
CFMutableArray = c_void_p
CFDictionary = c_void_p
CFError = c_void_p
CFType = c_void_p
CFTypeID = c_ulong

CFTypeRef = POINTER(CFType)
CFAllocatorRef = c_void_p

OSStatus = c_int32

CFDataRef = POINTER(CFData)
CFStringRef = POINTER(CFString)
CFArrayRef = POINTER(CFArray)
CFMutableArrayRef = POINTER(CFMutableArray)
CFDictionaryRef = POINTER(CFDictionary)
CFArrayCallBacks = c_void_p
CFDictionaryKeyCallBacks = c_void_p
CFDictionaryValueCallBacks = c_void_p

SecCertificateRef = POINTER(c_void_p)
SecExternalFormat = c_uint32
SecExternalItemType = c_uint32
SecIdentityRef = POINTER(c_void_p)
SecItemImportExportFlags = c_uint32
SecItemImportExportKeyParameters = c_void_p
SecKeychainRef = POINTER(c_void_p)
SSLProtocol = c_uint32
SSLCipherSuite = c_uint32
SSLContextRef = POINTER(c_void_p)
SecTrustRef = POINTER(c_void_p)
SSLConnectionRef = c_uint32
SecTrustResultType = c_uint32
SecTrustOptionFlags = c_uint32
SSLProtocolSide = c_uint32
SSLConnectionType = c_uint32
SSLSessionOption = c_uint32


try:
    Security.SecItemImport.argtypes = [
        CFDataRef,
        CFStringRef,
        POINTER(SecExternalFormat),
        POINTER(SecExternalItemType),
        SecItemImportExportFlags,
        POINTER(SecItemImportExportKeyParameters),
        SecKeychainRef,
        POINTER(CFArrayRef),
    ]
    Security.SecItemImport.restype = OSStatus

    Security.SecCertificateGetTypeID.argtypes = []
    Security.SecCertificateGetTypeID.restype = CFTypeID

    Security.SecIdentityGetTypeID.argtypes = []
    Security.SecIdentityGetTypeID.restype = CFTypeID

    Security.SecKeyGetTypeID.argtypes = []
    Security.SecKeyGetTypeID.restype = CFTypeID

    Security.SecCertificateCreateWithData.argtypes = [
        CFAllocatorRef,
        CFDataRef
    ]
    Security.SecCertificateCreateWithData.restype = SecCertificateRef

    Security.SecCertificateCopyData.argtypes = [
        SecCertificateRef
    ]
    Security.SecCertificateCopyData.restype = CFDataRef

    Security.SecCopyErrorMessageString.argtypes = [
        OSStatus,
        c_void_p
    ]
    Security.SecCopyErrorMessageString.restype = CFStringRef

    Security.SecIdentityCreateWithCertificate.argtypes = [
        CFTypeRef,
        SecCertificateRef,
        POINTER(SecIdentityRef)
    ]
    Security.SecIdentityCreateWithCertificate.restype = OSStatus

    Security.SecKeychainCreate.argtypes = [
        c_char_p,
        c_uint32,
        c_void_p,
        Boolean,
        c_void_p,
        POINTER(SecKeychainRef)
    ]
    Security.SecKeychainCreate.restype = OSStatus

    Security.SecKeychainDelete.argtypes = [
        SecKeychainRef
    ]
    Security.SecKeychainDelete.restype = OSStatus

    Security.SecPKCS12Import.argtypes = [
        CFDataRef,
        CFDictionaryRef,
        POINTER(CFArrayRef)
    ]
    Security.SecPKCS12Import.restype = OSStatus

    SSLReadFunc = CFUNCTYPE(OSStatus,
                            SSLConnectionRef,
                            c_void_p,
                            POINTER(c_size_t))
    SSLWriteFunc = CFUNCTYPE(OSStatus,
                             SSLConnectionRef,
                             POINTER(c_byte),
                             POINTER(c_size_t))

    Security.SSLSetIOFuncs.argtypes = [
        SSLContextRef,
        SSLReadFunc,
        SSLWriteFunc
    ]
    Security.SSLSetIOFuncs.restype = OSStatus

    Security.SSLSetPeerID.argtypes = [
        SSLContextRef,
        c_char_p,
        c_size_t
    ]
    Security.SSLSetPeerID.restype = OSStatus

    Security.SSLSetCertificate.argtypes = [
        SSLContextRef,
        CFArrayRef
    ]
    Security.SSLSetCertificate.restype = OSStatus

    Security.SSLSetCertificateAuthorities.argtypes = [
        SSLContextRef,
        CFTypeRef,
        Boolean
    ]
    Security.SSLSetCertificateAuthorities.restype = OSStatus

    Security.SSLSetConnection.argtypes = [
        SSLContextRef,
        SSLConnectionRef
    ]
    Security.SSLSetConnection.restype = OSStatus

    Security.SSLSetPeerDomainName.argtypes = [
        SSLContextRef,
        c_char_p,
        c_size_t
    ]
    Security.SSLSetPeerDomainName.restype = OSStatus

    Security.SSLHandshake.argtypes = [
        SSLContextRef
    ]
    Security.SSLHandshake.restype = OSStatus

    Security.SSLRead.argtypes = [
        SSLContextRef,
        c_char_p,
        c_size_t,
        POINTER(c_size_t)
    ]
    Security.SSLRead.restype = OSStatus

    Security.SSLWrite.argtypes = [
        SSLContextRef,
        c_char_p,
        c_size_t,
        POINTER(c_size_t)
    ]
    Security.SSLWrite.restype = OSStatus

    Security.SSLClose.argtypes = [
        SSLContextRef
    ]
    Security.SSLClose.restype = OSStatus

    Security.SSLGetNumberSupportedCiphers.argtypes = [
        SSLContextRef,
        POINTER(c_size_t)
    ]
    Security.SSLGetNumberSupportedCiphers.restype = OSStatus

    Security.SSLGetSupportedCiphers.argtypes = [
        SSLContextRef,
        POINTER(SSLCipherSuite),
        POINTER(c_size_t)
    ]
    Security.SSLGetSupportedCiphers.restype = OSStatus

    Security.SSLSetEnabledCiphers.argtypes = [
        SSLContextRef,
        POINTER(SSLCipherSuite),
        c_size_t
    ]
    Security.SSLSetEnabledCiphers.restype = OSStatus

    Security.SSLGetNumberEnabledCiphers.argtypes = [
        SSLContextRef,
        POINTER(c_size_t)
    ]
    Security.SSLGetNumberEnabledCiphers.restype = OSStatus

    Security.SSLGetEnabledCiphers.argtypes = [
        SSLContextRef,
        POINTER(SSLCipherSuite),
        POINTER(c_size_t)
    ]
    Security.SSLGetEnabledCiphers.restype = OSStatus

    Security.SSLGetNegotiatedCipher.argtypes = [
        SSLContextRef,
        POINTER(SSLCipherSuite)
    ]
    Security.SSLGetNegotiatedCipher.restype = OSStatus

    Security.SSLGetNegotiatedProtocolVersion.argtypes = [
        SSLContextRef,
        POINTER(SSLProtocol)
    ]
    Security.SSLGetNegotiatedProtocolVersion.restype = OSStatus

    Security.SSLCopyPeerTrust.argtypes = [
        SSLContextRef,
        POINTER(SecTrustRef)
    ]
    Security.SSLCopyPeerTrust.restype = OSStatus

    Security.SecTrustSetAnchorCertificates.argtypes = [
        SecTrustRef,
        CFArrayRef
    ]
    Security.SecTrustSetAnchorCertificates.restype = OSStatus

    Security.SecTrustSetAnchorCertificatesOnly.argtypes = [
        SecTrustRef,
        Boolean
    ]
    Security.SecTrustSetAnchorCertificatesOnly.restype = OSStatus

    Security.SecTrustEvaluate.argtypes = [
        SecTrustRef,
        POINTER(SecTrustResultType)
    ]
    Security.SecTrustEvaluate.restype = OSStatus

    Security.SecTrustGetCertificateCount.argtypes = [
        SecTrustRef
    ]
    Security.SecTrustGetCertificateCount.restype = CFIndex

    Security.SecTrustGetCertificateAtIndex.argtypes = [
        SecTrustRef,
        CFIndex
    ]
    Security.SecTrustGetCertificateAtIndex.restype = SecCertificateRef

    Security.SSLCreateContext.argtypes = [
        CFAllocatorRef,
        SSLProtocolSide,
        SSLConnectionType
    ]
    Security.SSLCreateContext.restype = SSLContextRef

    Security.SSLSetSessionOption.argtypes = [
        SSLContextRef,
        SSLSessionOption,
        Boolean
    ]
    Security.SSLSetSessionOption.restype = OSStatus

    Security.SSLSetProtocolVersionMin.argtypes = [
        SSLContextRef,
        SSLProtocol
    ]
    Security.SSLSetProtocolVersionMin.restype = OSStatus

    Security.SSLSetProtocolVersionMax.argtypes = [
        SSLContextRef,
        SSLProtocol
    ]
    Security.SSLSetProtocolVersionMax.restype = OSStatus

    Security.SecCopyErrorMessageString.argtypes = [
        OSStatus,
        c_void_p
    ]
    Security.SecCopyErrorMessageString.restype = CFStringRef

    Security.SSLReadFunc = SSLReadFunc
    Security.SSLWriteFunc = SSLWriteFunc
    Security.SSLContextRef = SSLContextRef
    Security.SSLProtocol = SSLProtocol
    Security.SSLCipherSuite = SSLCipherSuite
    Security.SecIdentityRef = SecIdentityRef
    Security.SecKeychainRef = SecKeychainRef
    Security.SecTrustRef = SecTrustRef
    Security.SecTrustResultType = SecTrustResultType
    Security.SecExternalFormat = SecExternalFormat
    Security.OSStatus = OSStatus

    Security.kSecImportExportPassphrase = CFStringRef.in_dll(
        Security, 'kSecImportExportPassphrase'
    )
    Security.kSecImportItemIdentity = CFStringRef.in_dll(
        Security, 'kSecImportItemIdentity'
    )

    # CoreFoundation time!
    CoreFoundation.CFRetain.argtypes = [
        CFTypeRef
    ]
    CoreFoundation.CFRetain.restype = CFTypeRef

    CoreFoundation.CFRelease.argtypes = [
        CFTypeRef
    ]
    CoreFoundation.CFRelease.restype = None

    CoreFoundation.CFGetTypeID.argtypes = [
        CFTypeRef
    ]
    CoreFoundation.CFGetTypeID.restype = CFTypeID

    CoreFoundation.CFStringCreateWithCString.argtypes = [
        CFAllocatorRef,
        c_char_p,
        CFStringEncoding
    ]
    CoreFoundation.CFStringCreateWithCString.restype = CFStringRef

    CoreFoundation.CFStringGetCStringPtr.argtypes = [
        CFStringRef,
        CFStringEncoding
    ]
    CoreFoundation.CFStringGetCStringPtr.restype = c_char_p

    CoreFoundation.CFStringGetCString.argtypes = [
        CFStringRef,
        c_char_p,
        CFIndex,
        CFStringEncoding
    ]
    CoreFoundation.CFStringGetCString.restype = c_bool

    CoreFoundation.CFDataCreate.argtypes = [
        CFAllocatorRef,
        c_char_p,
        CFIndex
    ]
    CoreFoundation.CFDataCreate.restype = CFDataRef

    CoreFoundation.CFDataGetLength.argtypes = [
        CFDataRef
    ]
    CoreFoundation.CFDataGetLength.restype = CFIndex

    CoreFoundation.CFDataGetBytePtr.argtypes = [
        CFDataRef
    ]
    CoreFoundation.CFDataGetBytePtr.restype = c_void_p

    CoreFoundation.CFDictionaryCreate.argtypes = [
        CFAllocatorRef,
        POINTER(CFTypeRef),
        POINTER(CFTypeRef),
        CFIndex,
        CFDictionaryKeyCallBacks,
        CFDictionaryValueCallBacks
    ]
    CoreFoundation.CFDictionaryCreate.restype = CFDictionaryRef

    CoreFoundation.CFDictionaryGetValue.argtypes = [
        CFDictionaryRef,
        CFTypeRef
    ]
    CoreFoundation.CFDictionaryGetValue.restype = CFTypeRef

    CoreFoundation.CFArrayCreate.argtypes = [
        CFAllocatorRef,
        POINTER(CFTypeRef),
        CFIndex,
        CFArrayCallBacks,
    ]
    CoreFoundation.CFArrayCreate.restype = CFArrayRef

    CoreFoundation.CFArrayCreateMutable.argtypes = [
        CFAllocatorRef,
        CFIndex,
        CFArrayCallBacks
    ]
    CoreFoundation.CFArrayCreateMutable.restype = CFMutableArrayRef

    CoreFoundation.CFArrayAppendValue.argtypes = [
        CFMutableArrayRef,
        c_void_p
    ]
    CoreFoundation.CFArrayAppendValue.restype = None

    CoreFoundation.CFArrayGetCount.argtypes = [
        CFArrayRef
    ]
    CoreFoundation.CFArrayGetCount.restype = CFIndex

    CoreFoundation.CFArrayGetValueAtIndex.argtypes = [
        CFArrayRef,
        CFIndex
    ]
    CoreFoundation.CFArrayGetValueAtIndex.restype = c_void_p

    CoreFoundation.kCFAllocatorDefault = CFAllocatorRef.in_dll(
        CoreFoundation, 'kCFAllocatorDefault'
    )
    CoreFoundation.kCFTypeArrayCallBacks = c_void_p.in_dll(
        CoreFoundation, 'kCFTypeArrayCallBacks'
    )
    CoreFoundation.kCFTypeDictionaryKeyCallBacks = c_void_p.in_dll(
        CoreFoundation, 'kCFTypeDictionaryKeyCallBacks'
    )
    CoreFoundation.kCFTypeDictionaryValueCallBacks = c_void_p.in_dll(
        CoreFoundation, 'kCFTypeDictionaryValueCallBacks'
    )

    CoreFoundation.CFTypeRef = CFTypeRef
    CoreFoundation.CFArrayRef = CFArrayRef
    CoreFoundation.CFStringRef = CFStringRef
    CoreFoundation.CFDictionaryRef = CFDictionaryRef

except AttributeError:
    raise ImportError('Error initializing ctypes')


class CFConst(object):
    """
    A class object that acts as essentially a namespace for CoreFoundation
    constants.
    """
    kCFStringEncodingUTF8 = CFStringEncoding(0x08000100)


class SecurityConst(object):
    """
    A class object that acts as essentially a namespace for Security constants.
    """
    kSSLSessionOptionBreakOnServerAuth = 0

    kSSLProtocol2 = 1
    kSSLProtocol3 = 2
    kTLSProtocol1 = 4
    kTLSProtocol11 = 7
    kTLSProtocol12 = 8
    kTLSProtocol13 = 10
    kTLSProtocolMaxSupported = 999

    kSSLClientSide = 1
    kSSLStreamType = 0

    kSecFormatPEMSequence = 10

    kSecTrustResultInvalid = 0
    kSecTrustResultProceed = 1
    # This gap is present on purpose: this was kSecTrustResultConfirm, which
    # is deprecated.
    kSecTrustResultDeny = 3
    kSecTrustResultUnspecified = 4
    kSecTrustResultRecoverableTrustFailure = 5
    kSecTrustResultFatalTrustFailure = 6
    kSecTrustResultOtherError = 7

    errSSLProtocol = -9800
    errSSLWouldBlock = -9803
    errSSLClosedGraceful = -9805
    errSSLClosedNoNotify = -9816
    errSSLClosedAbort = -9806

    errSSLXCertChainInvalid = -9807
    errSSLCrypto = -9809
    errSSLInternal = -9810
    errSSLCertExpired = -9814
    errSSLCertNotYetValid = -9815
    errSSLUnknownRootCert = -9812
    errSSLNoRootCert = -9813
    errSSLHostNameMismatch = -9843
    errSSLPeerHandshakeFail = -9824
    errSSLPeerUserCancelled = -9839
    errSSLWeakPeerEphemeralDHKey = -9850
    errSSLServerAuthCompleted = -9841
    errSSLRecordOverflow = -9847

    errSecVerifyFailed = -67808
    errSecNoTrustSettings = -25263
    errSecItemNotFound = -25300
    errSecInvalidTrustSettings = -25262

    # Cipher suites. We only pick the ones our default cipher string allows.
    # Source: https://developer.apple.com/documentation/security/1550981-ssl_cipher_suite_values
    TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 = 0xC02C
    TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = 0xC030
    TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = 0xC02B
    TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = 0xC02F
    TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA9
    TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA8
    TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = 0x009F
    TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = 0x009E
    TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 0xC024
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 = 0xC028
    TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA = 0xC00A
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA = 0xC014
    TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 = 0x006B
    TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039
    TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 = 0xC023
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 = 0xC027
    TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA = 0xC009
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA = 0xC013
    TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 = 0x0067
    TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033
    TLS_RSA_WITH_AES_256_GCM_SHA384 = 0x009D
    TLS_RSA_WITH_AES_128_GCM_SHA256 = 0x009C
    TLS_RSA_WITH_AES_256_CBC_SHA256 = 0x003D
    TLS_RSA_WITH_AES_128_CBC_SHA256 = 0x003C
    TLS_RSA_WITH_AES_256_CBC_SHA = 0x0035
    TLS_RSA_WITH_AES_128_CBC_SHA = 0x002F
    TLS_AES_128_GCM_SHA256 = 0x1301
    TLS_AES_256_GCM_SHA384 = 0x1302
    TLS_AES_128_CCM_8_SHA256 = 0x1305
    TLS_AES_128_CCM_SHA256 = 0x1304

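# ---------------------------------------------------------------------------
# A minimal, platform-neutral sketch (not part of the vendored sources) of
# the ctypes binding pattern the module above applies to SecureTransport:
# locate a shared library, then declare ``argtypes`` and ``restype`` for each
# function before calling it. libc's ``strlen`` stands in for the Security
# framework functions.
from ctypes import CDLL, c_char_p, c_size_t
from ctypes.util import find_library

libc_path = find_library('c')          # may be None, e.g. on Windows
if libc_path:
    libc = CDLL(libc_path)
    libc.strlen.argtypes = [c_char_p]  # declare the C signature...
    libc.strlen.restype = c_size_t     # ...so ctypes converts values correctly
    assert libc.strlen(b'hello') == 5
# ---------------------------------------------------------------------------
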
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py

"""
Low-level helpers for the SecureTransport bindings.

These are Python functions that are not directly related to the high-level
APIs but are necessary to get them to work. They include a whole bunch of
low-level CoreFoundation messing about and memory management. The concerns in
this module are almost entirely about trying to avoid memory leaks and
providing appropriate and useful assistance to the higher-level code.
"""
import base64
import ctypes
import itertools
import re
import os
import ssl
import tempfile

from .bindings import Security, CoreFoundation, CFConst


# This regular expression is used to grab PEM data out of a PEM bundle.
_PEM_CERTS_RE = re.compile(
    b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL
)


def _cf_data_from_bytes(bytestring):
    """
    Given a bytestring, create a CFData object from it. This CFData object must
    be CFReleased by the caller.
    """
    return CoreFoundation.CFDataCreate(
        CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring)
    )


def _cf_dictionary_from_tuples(tuples):
    """
    Given a list of Python tuples, create an associated CFDictionary.
    """
    dictionary_size = len(tuples)

    # We need to get the dictionary keys and values out in the same order.
    keys = (t[0] for t in tuples)
    values = (t[1] for t in tuples)
    cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys)
    cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values)

    return CoreFoundation.CFDictionaryCreate(
        CoreFoundation.kCFAllocatorDefault,
        cf_keys,
        cf_values,
        dictionary_size,
        CoreFoundation.kCFTypeDictionaryKeyCallBacks,
        CoreFoundation.kCFTypeDictionaryValueCallBacks,
    )


def _cf_string_to_unicode(value):
    """
    Creates a Unicode string from a CFString object. Used entirely for error
    reporting.

    Yes, it annoys me quite a lot that this function is this complex.
    """
    value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p))

    string = CoreFoundation.CFStringGetCStringPtr(
        value_as_void_p,
        CFConst.kCFStringEncodingUTF8
    )
    if string is None:
        buffer = ctypes.create_string_buffer(1024)
        result = CoreFoundation.CFStringGetCString(
            value_as_void_p,
            buffer,
            1024,
            CFConst.kCFStringEncodingUTF8
        )
        if not result:
            raise OSError('Error copying C string from CFStringRef')
        string = buffer.value
    if string is not None:
        string = string.decode('utf-8')
    return string


def _assert_no_error(error, exception_class=None):
    """
    Checks the return code and throws an exception if there is an error to
    report
    """
    if error == 0:
        return

    cf_error_string = Security.SecCopyErrorMessageString(error, None)
    output = _cf_string_to_unicode(cf_error_string)
    CoreFoundation.CFRelease(cf_error_string)

    if output is None or output == u'':
        output = u'OSStatus %s' % error

    if exception_class is None:
        exception_class = ssl.SSLError

    raise exception_class(output)


def _cert_array_from_pem(pem_bundle):
    """
    Given a bundle of certs in PEM format, turns them into a CFArray of certs
    that can be used to validate a cert chain.
    """
    # Normalize the PEM bundle's line endings.
    pem_bundle = pem_bundle.replace(b"\r\n", b"\n")

    der_certs = [
        base64.b64decode(match.group(1))
        for match in _PEM_CERTS_RE.finditer(pem_bundle)
    ]
    if not der_certs:
        raise ssl.SSLError("No root certificates specified")

    cert_array = CoreFoundation.CFArrayCreateMutable(
        CoreFoundation.kCFAllocatorDefault,
        0,
        ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks)
    )
    if not cert_array:
        raise ssl.SSLError("Unable to allocate memory!")

    try:
        for der_bytes in der_certs:
            certdata = _cf_data_from_bytes(der_bytes)
            if not certdata:
                raise ssl.SSLError("Unable to allocate memory!")
            cert = Security.SecCertificateCreateWithData(
                CoreFoundation.kCFAllocatorDefault, certdata
            )
            CoreFoundation.CFRelease(certdata)
            if not cert:
                raise ssl.SSLError("Unable to build cert object!")
            CoreFoundation.CFArrayAppendValue(cert_array, cert)
            CoreFoundation.CFRelease(cert)
    except Exception:
        # We need to free the array before the exception bubbles further.
        # We only want to do that if an error occurs: otherwise, the caller
        # should free.
        CoreFoundation.CFRelease(cert_array)
        raise

    return cert_array


def _is_cert(item):
    """
    Returns True if a given CFTypeRef is a certificate.
    """
    expected = Security.SecCertificateGetTypeID()
    return CoreFoundation.CFGetTypeID(item) == expected


def _is_identity(item):
    """
    Returns True if a given CFTypeRef is an identity.
    """
    expected = Security.SecIdentityGetTypeID()
    return CoreFoundation.CFGetTypeID(item) == expected


def _temporary_keychain():
    """
    This function creates a temporary Mac keychain that we can use to work with
    credentials. This keychain uses a one-time password and a temporary file to
    store the data. We expect to have one keychain per socket. The returned
    SecKeychainRef must be freed by the caller, including calling
    SecKeychainDelete.

    Returns a tuple of the SecKeychainRef and the path to the temporary
    directory that contains it.
    """
    # Unfortunately, SecKeychainCreate requires a path to a keychain. This
    # means we cannot use mkstemp to use a generic temporary file. Instead,
    # we're going to create a temporary directory and a filename to use there.
    # This filename will be 8 random bytes expanded into base16 (hex). We also
    # need some random bytes to password-protect the keychain we're creating,
    # so we ask for 40 random bytes.
    random_bytes = os.urandom(40)
    filename = base64.b16encode(random_bytes[:8]).decode('utf-8')
    password = base64.b16encode(random_bytes[8:])  # Must be valid UTF-8
    tempdirectory = tempfile.mkdtemp()

    keychain_path = os.path.join(tempdirectory, filename).encode('utf-8')

    # We now want to create the keychain itself.
    keychain = Security.SecKeychainRef()
    status = Security.SecKeychainCreate(
        keychain_path,
        len(password),
        password,
        False,
        None,
        ctypes.byref(keychain)
    )
    _assert_no_error(status)

    # Having created the keychain, we want to pass it off to the caller.
    return keychain, tempdirectory


def _load_items_from_file(keychain, path):
    """
    Given a single file, loads all the trust objects from it into arrays and
    the keychain.
    Returns a tuple of lists: the first list is a list of identities, the
    second a list of certs.
    """
    certificates = []
    identities = []
    result_array = None

    with open(path, 'rb') as f:
        raw_filedata = f.read()

    try:
        filedata = CoreFoundation.CFDataCreate(
            CoreFoundation.kCFAllocatorDefault,
            raw_filedata,
            len(raw_filedata)
        )
        result_array = CoreFoundation.CFArrayRef()
        result = Security.SecItemImport(
            filedata,                   # cert data
            None,                       # Filename, leaving it out for now
            None,                       # What the type of the file is, we don't care
            None,                       # what's in the file, we don't care
            0,                          # import flags
            None,                       # key params, can include passphrase in the future
            keychain,                   # The keychain to insert into
            ctypes.byref(result_array)  # Results
        )
        _assert_no_error(result)

        # A CFArray is not very useful to us as an intermediary
        # representation, so we are going to extract the objects we want
        # and then free the array. We don't need to keep hold of keys: the
        # keychain already has them!
        result_count = CoreFoundation.CFArrayGetCount(result_array)
        for index in range(result_count):
            item = CoreFoundation.CFArrayGetValueAtIndex(
                result_array, index
            )
            item = ctypes.cast(item, CoreFoundation.CFTypeRef)

            if _is_cert(item):
                CoreFoundation.CFRetain(item)
                certificates.append(item)
            elif _is_identity(item):
                CoreFoundation.CFRetain(item)
                identities.append(item)
    finally:
        if result_array:
            CoreFoundation.CFRelease(result_array)

        CoreFoundation.CFRelease(filedata)

    return (identities, certificates)


def _load_client_cert_chain(keychain, *paths):
    """
    Load certificates and maybe keys from a number of files. Has the end goal
    of returning a CFArray containing one SecIdentityRef, and then zero or more
    SecCertificateRef objects, suitable for use as a client certificate trust
    chain.
    """
    # Ok, the strategy.
    #
    # This relies on knowing that macOS will not give you a SecIdentityRef
    # unless you have imported a key into a keychain. This is a somewhat
    # artificial limitation of macOS (for example, it doesn't necessarily
    # affect iOS), but there is nothing inside Security.framework that lets you
    # get a SecIdentityRef without having a key in a keychain.
    #
    # So the policy here is we take all the files and iterate them in order.
    # Each one will use SecItemImport to have one or more objects loaded from
    # it. We will also point at a keychain that macOS can use to work with the
    # private key.
    #
    # Once we have all the objects, we'll check what we actually have. If we
    # already have a SecIdentityRef in hand, fab: we'll use that. Otherwise,
    # we'll take the first certificate (which we assume to be our leaf) and
    # ask the keychain to give us a SecIdentityRef with that cert's associated
    # key.
    #
    # We'll then return a CFArray containing the trust chain: one
    # SecIdentityRef and then zero-or-more SecCertificateRef objects. The
    # responsibility for freeing this CFArray will be with the caller. This
    # CFArray must remain alive for the entire connection, so in practice it
    # will be stored with a single SSLSocket, along with the reference to the
    # keychain.
    certificates = []
    identities = []

    # Filter out bad paths.
    paths = (path for path in paths if path)

    try:
        for file_path in paths:
            new_identities, new_certs = _load_items_from_file(
                keychain, file_path
            )
            identities.extend(new_identities)
            certificates.extend(new_certs)

        # Ok, we have everything. The question is: do we have an identity? If
        # not, we want to grab one from the first cert we have.
        if not identities:
            new_identity = Security.SecIdentityRef()
            status = Security.SecIdentityCreateWithCertificate(
                keychain,
                certificates[0],
                ctypes.byref(new_identity)
            )
            _assert_no_error(status)
            identities.append(new_identity)

            # We now want to release the original certificate, as we no longer
            # need it.
            CoreFoundation.CFRelease(certificates.pop(0))

        # We now need to build a new CFArray that holds the trust chain.
        trust_chain = CoreFoundation.CFArrayCreateMutable(
            CoreFoundation.kCFAllocatorDefault,
            0,
            ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
        )
        for item in itertools.chain(identities, certificates):
            # ArrayAppendValue does a CFRetain on the item. That's fine,
            # because the finally block will release our other refs to them.
            CoreFoundation.CFArrayAppendValue(trust_chain, item)

        return trust_chain
    finally:
        for obj in itertools.chain(identities, certificates):
            CoreFoundation.CFRelease(obj)

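# ---------------------------------------------------------------------------
# An illustrative, self-contained sketch (not part of the vendored sources)
# of the PEM-splitting step that ``_cert_array_from_pem`` above performs
# before handing DER bytes to SecureTransport. The certificate body below is
# a made-up placeholder, not a real certificate.
import base64
import re

_PEM_RE = re.compile(
    b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL
)

bundle = (b"-----BEGIN CERTIFICATE-----\n"
          b"AAECAwQ=\n"
          b"-----END CERTIFICATE-----\n")

der_blobs = [base64.b64decode(m.group(1)) for m in _PEM_RE.finditer(bundle)]
print(len(der_blobs), der_blobs[0])  # one blob of raw (here: fake) DER bytes
# ---------------------------------------------------------------------------
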
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/appengine.py

"""
This module provides a pool manager that uses Google App Engine's
`URLFetch Service <https://cloud.google.com/appengine/docs/python/urlfetch>`_.

Example usage::

    from pip._vendor.urllib3 import PoolManager
    from pip._vendor.urllib3.contrib.appengine import AppEngineManager, is_appengine_sandbox

    if is_appengine_sandbox():
        # AppEngineManager uses AppEngine's URLFetch API behind the scenes
        http = AppEngineManager()
    else:
        # PoolManager uses a socket-level API behind the scenes
        http = PoolManager()

    r = http.request('GET', 'https://google.com/')

There are `limitations <https://cloud.google.com/appengine/docs/python/\
urlfetch/#Python_Quotas_and_limits>`_ to the URLFetch service and it may not
be the best choice for your application. There are three options for using
urllib3 on Google App Engine:

1. You can use :class:`AppEngineManager` with URLFetch. URLFetch is
   cost-effective in many circumstances as long as your usage is within the
   limitations.
2. You can use a normal :class:`~urllib3.PoolManager` by enabling sockets.
   Sockets also have `limitations and restrictions
   <https://cloud.google.com/appengine/docs/python/sockets/\
   #limitations-and-restrictions>`_ and have a lower free quota than URLFetch.
   To use sockets, be sure to specify the following in your ``app.yaml``::

       env_variables:
           GAE_USE_SOCKETS_HTTPLIB : 'true'

3. If you are using `App Engine Flexible
   <https://cloud.google.com/appengine/docs/flexible/>`_, you can use the
   standard :class:`PoolManager` without any configuration or special
   environment variables.
"""

from __future__ import absolute_import
import io
import logging
import warnings
from ..packages.six.moves.urllib.parse import urljoin

from ..exceptions import (
    HTTPError,
    HTTPWarning,
    MaxRetryError,
    ProtocolError,
    TimeoutError,
    SSLError
)
from ..request import RequestMethods
from ..response import HTTPResponse
from ..util.timeout import Timeout
from ..util.retry import Retry
from . import _appengine_environ

try:
    from google.appengine.api import urlfetch
except ImportError:
    urlfetch = None


log = logging.getLogger(__name__)


class AppEnginePlatformWarning(HTTPWarning):
    pass


class AppEnginePlatformError(HTTPError):
    pass


class AppEngineManager(RequestMethods):
    """
    Connection manager for Google App Engine sandbox applications.

    This manager uses the URLFetch service directly instead of using the
    emulated httplib, and is subject to URLFetch limitations as described in
    the App Engine documentation `here
    <https://cloud.google.com/appengine/docs/python/urlfetch>`_.

    Notably it will raise an :class:`AppEnginePlatformError` if:
        * URLFetch is not available.
        * If you attempt to use this on App Engine Flexible, as full socket
          support is available.
        * If a request size is more than 10 megabytes.
        * If a response size is more than 32 megabytes.
        * If you use an unsupported request method such as OPTIONS.

    Beyond those cases, it will raise normal urllib3 errors.
    """

    def __init__(self, headers=None, retries=None, validate_certificate=True,
                 urlfetch_retries=True):
        if not urlfetch:
            raise AppEnginePlatformError(
                "URLFetch is not available in this environment.")

        if is_prod_appengine_mvms():
            raise AppEnginePlatformError(
                "Use normal urllib3.PoolManager instead of AppEngineManager "
                "on Managed VMs, as using URLFetch is not necessary in "
                "this environment.")

        warnings.warn(
            "urllib3 is using URLFetch on Google App Engine sandbox instead "
            "of sockets. To use sockets directly instead of URLFetch see "
            "https://urllib3.readthedocs.io/en/latest/reference/urllib3.contrib.html.",
            AppEnginePlatformWarning)

        RequestMethods.__init__(self, headers)
        self.validate_certificate = validate_certificate
        self.urlfetch_retries = urlfetch_retries

        self.retries = retries or Retry.DEFAULT

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Return False to re-raise any potential exceptions
        return False

    def urlopen(self, method, url, body=None, headers=None,
                retries=None, redirect=True, timeout=Timeout.DEFAULT_TIMEOUT,
                **response_kw):

        retries = self._get_retries(retries, redirect)

        try:
            follow_redirects = (
                redirect and
                retries.redirect != 0 and
                retries.total)
            response = urlfetch.fetch(
                url,
                payload=body,
                method=method,
                headers=headers or {},
                allow_truncated=False,
                follow_redirects=self.urlfetch_retries and follow_redirects,
                deadline=self._get_absolute_timeout(timeout),
                validate_certificate=self.validate_certificate,
            )
        except urlfetch.DeadlineExceededError as e:
            raise TimeoutError(self, e)

        except urlfetch.InvalidURLError as e:
            if 'too large' in str(e):
                raise AppEnginePlatformError(
                    "URLFetch request too large, URLFetch only "
                    "supports requests up to 10mb in size.", e)
            raise ProtocolError(e)

        except urlfetch.DownloadError as e:
            if 'Too many redirects' in str(e):
                raise MaxRetryError(self, url, reason=e)
            raise ProtocolError(e)

        except urlfetch.ResponseTooLargeError as e:
            raise AppEnginePlatformError(
                "URLFetch response too large, URLFetch only supports "
                "responses up to 32mb in size.", e)

        except urlfetch.SSLCertificateError as e:
            raise SSLError(e)

        except urlfetch.InvalidMethodError as e:
            raise AppEnginePlatformError(
                "URLFetch does not support method: %s" % method, e)

        http_response = self._urlfetch_response_to_http_response(
            response, retries=retries, **response_kw)

        # Handle redirect?
        redirect_location = redirect and http_response.get_redirect_location()
        if redirect_location:
            # Check for redirect response
            if (self.urlfetch_retries and retries.raise_on_redirect):
                raise MaxRetryError(self, url, "too many redirects")
            else:
                if http_response.status == 303:
                    method = 'GET'

                try:
                    retries = retries.increment(method, url, response=http_response, _pool=self)
                except MaxRetryError:
                    if retries.raise_on_redirect:
                        raise MaxRetryError(self, url, "too many redirects")
                    return http_response

                retries.sleep_for_retry(http_response)
                log.debug("Redirecting %s -> %s", url, redirect_location)
                redirect_url = urljoin(url, redirect_location)
                return self.urlopen(
                    method, redirect_url, body, headers,
                    retries=retries, redirect=redirect,
                    timeout=timeout, **response_kw)

        # Check if we should retry the HTTP response.
        has_retry_after = bool(http_response.getheader('Retry-After'))
        if retries.is_retry(method, http_response.status, has_retry_after):
            retries = retries.increment(
                method, url, response=http_response, _pool=self)
            log.debug("Retry: %s", url)
            retries.sleep(http_response)
            return self.urlopen(
                method, url,
                body=body, headers=headers,
                retries=retries, redirect=redirect,
                timeout=timeout, **response_kw)

        return http_response

    def _urlfetch_response_to_http_response(self, urlfetch_resp, **response_kw):

        if is_prod_appengine():
            # Production GAE handles deflate encoding automatically, but does
            # not remove the encoding header.
            content_encoding = urlfetch_resp.headers.get('content-encoding')

            if content_encoding == 'deflate':
                del urlfetch_resp.headers['content-encoding']

        transfer_encoding = urlfetch_resp.headers.get('transfer-encoding')
        # We have a full response's content,
        # so let's make sure we don't report ourselves as chunked data.
        if transfer_encoding == 'chunked':
            encodings = transfer_encoding.split(",")
            encodings.remove('chunked')
            urlfetch_resp.headers['transfer-encoding'] = ','.join(encodings)

        original_response = HTTPResponse(
            # In order for decoding to work, we must present the content as
            # a file-like object.
            body=io.BytesIO(urlfetch_resp.content),
            msg=urlfetch_resp.header_msg,
            headers=urlfetch_resp.headers,
            status=urlfetch_resp.status_code,
            **response_kw
        )

        return HTTPResponse(
            body=io.BytesIO(urlfetch_resp.content),
            headers=urlfetch_resp.headers,
            status=urlfetch_resp.status_code,
            original_response=original_response,
            **response_kw
        )

    def _get_absolute_timeout(self, timeout):
        if timeout is Timeout.DEFAULT_TIMEOUT:
            return None  # Defer to URLFetch's default.
        if isinstance(timeout, Timeout):
            if timeout._read is not None or timeout._connect is not None:
                warnings.warn(
                    "URLFetch does not support granular timeout settings, "
                    "reverting to total or default URLFetch timeout.",
                    AppEnginePlatformWarning)
            return timeout.total
        return timeout

    def _get_retries(self, retries, redirect):
        if not isinstance(retries, Retry):
            retries = Retry.from_int(
                retries, redirect=redirect, default=self.retries)

        if retries.connect or retries.read or retries.redirect:
            warnings.warn(
                "URLFetch only supports total retries and does not "
                "recognize connect, read, or redirect retry parameters.",
                AppEnginePlatformWarning)

        return retries


# Alias methods from _appengine_environ to maintain public API interface.
is_appengine = _appengine_environ.is_appengine
is_appengine_sandbox = _appengine_environ.is_appengine_sandbox
is_local_appengine = _appengine_environ.is_local_appengine
is_prod_appengine = _appengine_environ.is_prod_appengine
is_prod_appengine_mvms = _appengine_environ.is_prod_appengine_mvms

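# ---------------------------------------------------------------------------
# A hedged sketch (not part of the vendored sources) of configuring
# ``AppEngineManager`` given the warnings above: URLFetch honours only
# *total* retries and a *total* timeout, so granular settings are avoided.
# This only does real work inside the App Engine sandbox; ``example.org`` is
# a placeholder host.
from pip._vendor.urllib3.util.retry import Retry
from pip._vendor.urllib3.util.timeout import Timeout

try:
    from pip._vendor.urllib3.contrib.appengine import AppEngineManager
    http = AppEngineManager(retries=Retry(total=3))
    r = http.request('GET', 'https://example.org/',
                     timeout=Timeout(total=10))  # becomes the URLFetch deadline
except Exception:
    pass  # not running inside the App Engine sandbox
# ---------------------------------------------------------------------------
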
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/ntlmpool.py

"""
NTLM authenticating pool, contributed by erikcederstran

Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10
"""
from __future__ import absolute_import

from logging import getLogger
from ntlm import ntlm

from .. import HTTPSConnectionPool
from ..packages.six.moves.http_client import HTTPSConnection


log = getLogger(__name__)


class NTLMConnectionPool(HTTPSConnectionPool):
    """
    Implements an NTLM authentication version of an urllib3 connection pool
    """

    scheme = 'https'

    def __init__(self, user, pw, authurl, *args, **kwargs):
        """
        authurl is a random URL on the server that is protected by NTLM.
        user is the Windows user, probably in the DOMAIN\\username format.
        pw is the password for the user.
        """
        super(NTLMConnectionPool, self).__init__(*args, **kwargs)
        self.authurl = authurl
        self.rawuser = user
        user_parts = user.split('\\', 1)
        self.domain = user_parts[0].upper()
        self.user = user_parts[1]
        self.pw = pw

    def _new_conn(self):
        # Performs the NTLM handshake that secures the connection. The socket
        # must be kept open while requests are performed.
        self.num_connections += 1
        log.debug('Starting NTLM HTTPS connection no. %d: https://%s%s',
                  self.num_connections, self.host, self.authurl)

        headers = {'Connection': 'Keep-Alive'}
        req_header = 'Authorization'
        resp_header = 'www-authenticate'

        conn = HTTPSConnection(host=self.host, port=self.port)

        # Send negotiation message
        headers[req_header] = (
            'NTLM %s' % ntlm.create_NTLM_NEGOTIATE_MESSAGE(self.rawuser))
        log.debug('Request headers: %s', headers)
        conn.request('GET', self.authurl, None, headers)
        res = conn.getresponse()
        reshdr = dict(res.getheaders())
        log.debug('Response status: %s %s', res.status, res.reason)
        log.debug('Response headers: %s', reshdr)
        log.debug('Response data: %s [...]', res.read(100))

        # Remove the reference to the socket, so that it can not be closed by
        # the response object (we want to keep the socket open)
        res.fp = None

        # Server should respond with a challenge message
        auth_header_values = reshdr[resp_header].split(', ')
        auth_header_value = None
        for s in auth_header_values:
            if s[:5] == 'NTLM ':
                auth_header_value = s[5:]
        if auth_header_value is None:
            raise Exception('Unexpected %s response header: %s' %
                            (resp_header, reshdr[resp_header]))

        # Send authentication message
        ServerChallenge, NegotiateFlags = \
            ntlm.parse_NTLM_CHALLENGE_MESSAGE(auth_header_value)
        auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE(ServerChallenge,
                                                         self.user,
                                                         self.domain,
                                                         self.pw,
                                                         NegotiateFlags)
        headers[req_header] = 'NTLM %s' % auth_msg
        log.debug('Request headers: %s', headers)
        conn.request('GET', self.authurl, None, headers)
        res = conn.getresponse()
        log.debug('Response status: %s %s', res.status, res.reason)
        log.debug('Response headers: %s', dict(res.getheaders()))
        log.debug('Response data: %s [...]', res.read()[:100])
        if res.status != 200:
            if res.status == 401:
                raise Exception('Server rejected request: wrong '
                                'username or password')
            raise Exception('Wrong server response: %s %s' %
                            (res.status, res.reason))

        res.fp = None
        log.debug('Connection established')
        return conn

    def urlopen(self, method, url, body=None, headers=None, retries=3,
                redirect=True, assert_same_host=True):
        if headers is None:
            headers = {}
        headers['Connection'] = 'Keep-Alive'
        return super(NTLMConnectionPool, self).urlopen(method, url, body,
                                                       headers, retries,
                                                       redirect,
                                                       assert_same_host)

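# ---------------------------------------------------------------------------
# A hedged usage sketch (not part of the vendored sources) for
# ``NTLMConnectionPool`` above. The host, credentials, and protected URL are
# all placeholders, and the optional ``ntlm`` package must be installed for
# the import to succeed.
from pip._vendor.urllib3.contrib.ntlmpool import NTLMConnectionPool

pool = NTLMConnectionPool(user='MYDOMAIN\\alice',   # DOMAIN\username format
                          pw='s3cret',
                          authurl='/protected/',    # any NTLM-protected URL
                          host='intranet.example',  # keyword args are passed
                          port=443)                 # on to HTTPSConnectionPool
response = pool.urlopen('GET', '/protected/resource')
# ---------------------------------------------------------------------------
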
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py

"""
SSL with SNI_-support for Python 2. Follow these instructions if you would
like to verify SSL certificates in Python 2. Note, the default libraries do
*not* do certificate checking; you need to do additional work to validate
certificates yourself.

This needs the following packages installed:

* pyOpenSSL (tested with 16.0.0)
* cryptography (minimum 1.3.4, from pyopenssl)
* idna (minimum 2.0, from cryptography)

However, pyopenssl depends on cryptography, which depends on idna, so while we
use all three directly here we end up having relatively few packages required.

You can install them with the following command:

    pip install pyopenssl cryptography idna

To activate certificate checking, call
:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code
before you begin making HTTP requests. This can be done in a ``sitecustomize``
module, or at any other time before your application begins using ``urllib3``,
like this::

    try:
        import urllib3.contrib.pyopenssl
        urllib3.contrib.pyopenssl.inject_into_urllib3()
    except ImportError:
        pass

Now you can use :mod:`urllib3` as you normally would, and it will support SNI
when the required modules are installed.

Activating this module also has the positive side effect of disabling SSL/TLS
compression in Python 2 (see `CRIME attack`_).

If you want to configure the default list of supported cipher suites, you can
set the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.

.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
"""
from __future__ import absolute_import

import OpenSSL.SSL
from cryptography import x509
from cryptography.hazmat.backends.openssl import backend as openssl_backend
from cryptography.hazmat.backends.openssl.x509 import _Certificate
try:
    from cryptography.x509 import UnsupportedExtension
except ImportError:
    # UnsupportedExtension is gone in cryptography >= 2.1.0
    class UnsupportedExtension(Exception):
        pass

from socket import timeout, error as SocketError
from io import BytesIO

try:  # Platform-specific: Python 2
    from socket import _fileobject
except ImportError:  # Platform-specific: Python 3
    _fileobject = None
    from ..packages.backports.makefile import backport_makefile

import logging
import ssl
from ..packages import six
import sys

from .. import util


__all__ = ['inject_into_urllib3', 'extract_from_urllib3']

# SNI always works.
HAS_SNI = True

# Map from urllib3 to PyOpenSSL compatible parameter-values.
_openssl_versions = {
    util.PROTOCOL_TLS: OpenSSL.SSL.SSLv23_METHOD,
    ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
}

if hasattr(ssl, 'PROTOCOL_SSLv3') and hasattr(OpenSSL.SSL, 'SSLv3_METHOD'):
    _openssl_versions[ssl.PROTOCOL_SSLv3] = OpenSSL.SSL.SSLv3_METHOD

if hasattr(ssl, 'PROTOCOL_TLSv1_1') and hasattr(OpenSSL.SSL, 'TLSv1_1_METHOD'):
    _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD

if hasattr(ssl, 'PROTOCOL_TLSv1_2') and hasattr(OpenSSL.SSL, 'TLSv1_2_METHOD'):
    _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD


_stdlib_to_openssl_verify = {
    ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,
    ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,
    ssl.CERT_REQUIRED:
        OpenSSL.SSL.VERIFY_PEER + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
}
_openssl_to_stdlib_verify = dict(
    (v, k) for k, v in _stdlib_to_openssl_verify.items()
)

# OpenSSL will only write 16K at a time
SSL_WRITE_BLOCKSIZE = 16384

orig_util_HAS_SNI = util.HAS_SNI
orig_util_SSLContext = util.ssl_.SSLContext


log = logging.getLogger(__name__)


def inject_into_urllib3():
    'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'

    _validate_dependencies_met()

    util.SSLContext = PyOpenSSLContext
    util.ssl_.SSLContext = PyOpenSSLContext
    util.HAS_SNI = HAS_SNI
    util.ssl_.HAS_SNI = HAS_SNI
    util.IS_PYOPENSSL = True
    util.ssl_.IS_PYOPENSSL = True


def extract_from_urllib3():
    'Undo monkey-patching by :func:`inject_into_urllib3`.'

    util.SSLContext = orig_util_SSLContext
    util.ssl_.SSLContext = orig_util_SSLContext
    util.HAS_SNI = orig_util_HAS_SNI
    util.ssl_.HAS_SNI = orig_util_HAS_SNI
    util.IS_PYOPENSSL = False
    util.ssl_.IS_PYOPENSSL = False


def _validate_dependencies_met():
    """
    Verifies that PyOpenSSL's package-level dependencies have been met.
    Throws `ImportError` if they are not met.
    """
    # Method added in `cryptography==1.1`; not available in older versions
    from cryptography.x509.extensions import Extensions
    if getattr(Extensions, "get_extension_for_class", None) is None:
        raise ImportError("'cryptography' module missing required "
                          "functionality. Try upgrading to v1.3.4 or newer.")

    # pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The
    # _x509 attribute is only present on those versions.
    from OpenSSL.crypto import X509
    x509 = X509()
    if getattr(x509, "_x509", None) is None:
        raise ImportError("'pyOpenSSL' module missing required "
                          "functionality. Try upgrading to v0.14 or newer.")


def _dnsname_to_stdlib(name):
    """
    Converts a dNSName SubjectAlternativeName field to the form used by the
    standard library on the given Python version.

    Cryptography produces a dNSName as a unicode string that was idna-decoded
    from ASCII bytes. We need to idna-encode that string to get it back, and
    then on Python 3 we also need to convert to unicode via UTF-8 (the stdlib
    uses PyUnicode_FromStringAndSize on it, which decodes via UTF-8).

    If the name cannot be idna-encoded then we return None signalling that
    the name given should be skipped.
    """
    def idna_encode(name):
        """
        Borrowed wholesale from the Python Cryptography Project. It turns out
        that we can't just safely call `idna.encode`: it can explode for
        wildcard names. This avoids that problem.
        """
        from pip._vendor import idna

        try:
            for prefix in [u'*.', u'.']:
                if name.startswith(prefix):
                    name = name[len(prefix):]
                    return prefix.encode('ascii') + idna.encode(name)
            return idna.encode(name)
        except idna.core.IDNAError:
            return None

    # Don't send IPv6 addresses through the IDNA encoder.
    if ':' in name:
        return name

    name = idna_encode(name)
    if name is None:
        return None
    elif sys.version_info >= (3, 0):
        name = name.decode('utf-8')
    return name


def get_subj_alt_name(peer_cert):
    """
    Given an PyOpenSSL certificate, provides all the subject alternative names.
    """
    # Pass the cert to cryptography, which has much better APIs for this.
    if hasattr(peer_cert, "to_cryptography"):
        cert = peer_cert.to_cryptography()
    else:
        # This is technically using private APIs, but should work across all
        # relevant versions before PyOpenSSL got a proper API for this.
        cert = _Certificate(openssl_backend, peer_cert._x509)

    # We want to find the SAN extension. Ask Cryptography to locate it (it's
    # faster than looping in Python)
    try:
        ext = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName
        ).value
    except x509.ExtensionNotFound:
        # No such extension, return the empty list.
        return []
    except (x509.DuplicateExtension, UnsupportedExtension,
            x509.UnsupportedGeneralNameType, UnicodeError) as e:
        # A problem has been found with the quality of the certificate. Assume
        # no SAN field is present.
        log.warning(
            "A problem was encountered with the certificate that prevented "
            "urllib3 from finding the SubjectAlternativeName field. This can "
            "affect certificate validation. The error was %s",
            e,
        )
        return []

    # We want to return dNSName and iPAddress fields. We need to cast the IPs
    # back to strings because the match_hostname function wants them as
    # strings.
    # Sadly the DNS names need to be idna encoded and then, on Python 3, UTF-8
    # decoded. This is pretty frustrating, but that's what the standard
    # library does with certificates, and so we need to attempt to do the
    # same.
    # We also want to skip over names which cannot be idna encoded.
    names = [
        ('DNS', name) for name in map(_dnsname_to_stdlib,
                                      ext.get_values_for_type(x509.DNSName))
        if name is not None
    ]
    names.extend(
        ('IP Address', str(name))
        for name in ext.get_values_for_type(x509.IPAddress)
    )

    return names


class WrappedSocket(object):
    '''API-compatibility wrapper for Python OpenSSL's Connection-class.

    Note: _makefile_refs, _drop() and _reuse() are needed for the garbage
    collector of pypy.
    '''

    def __init__(self, connection, socket, suppress_ragged_eofs=True):
        self.connection = connection
        self.socket = socket
        self.suppress_ragged_eofs = suppress_ragged_eofs
        self._makefile_refs = 0
        self._closed = False

    def fileno(self):
        return self.socket.fileno()

    # Copy-pasted from Python 3.5 source code
    def _decref_socketios(self):
        if self._makefile_refs > 0:
            self._makefile_refs -= 1
        if self._closed:
            self.close()

    def recv(self, *args, **kwargs):
        try:
            data = self.connection.recv(*args, **kwargs)
        except OpenSSL.SSL.SysCallError as e:
            if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):
                return b''
            else:
                raise SocketError(str(e))
        except OpenSSL.SSL.ZeroReturnError:
            if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
                return b''
            else:
                raise
        except OpenSSL.SSL.WantReadError:
            if not util.wait_for_read(self.socket, self.socket.gettimeout()):
                raise timeout('The read operation timed out')
            else:
                return self.recv(*args, **kwargs)

        # TLS 1.3 post-handshake authentication
        except OpenSSL.SSL.Error as e:
            raise ssl.SSLError("read error: %r" % e)
        else:
            return data

    def recv_into(self, *args, **kwargs):
        try:
            return self.connection.recv_into(*args, **kwargs)
        except OpenSSL.SSL.SysCallError as e:
            if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):
                return 0
            else:
                raise SocketError(str(e))
        except OpenSSL.SSL.ZeroReturnError:
            if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
                return 0
            else:
                raise
        except OpenSSL.SSL.WantReadError:
            if not util.wait_for_read(self.socket, self.socket.gettimeout()):
                raise timeout('The read operation timed out')
            else:
                return self.recv_into(*args, **kwargs)

        # TLS 1.3 post-handshake authentication
        except OpenSSL.SSL.Error as e:
            raise ssl.SSLError("read error: %r" % e)

    def settimeout(self, timeout):
        return self.socket.settimeout(timeout)

    def _send_until_done(self, data):
        while True:
            try:
                return self.connection.send(data)
            except OpenSSL.SSL.WantWriteError:
                if not util.wait_for_write(self.socket, self.socket.gettimeout()):
                    raise timeout()
                continue
            except OpenSSL.SSL.SysCallError as e:
                raise SocketError(str(e))

    def sendall(self, data):
        total_sent = 0
        while total_sent < len(data):
            sent = self._send_until_done(
                data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE])
            total_sent += sent

    def shutdown(self):
        # FIXME rethrow compatible exceptions should we ever use this
        self.connection.shutdown()

    def close(self):
        if self._makefile_refs < 1:
            try:
                self._closed = True
                return self.connection.close()
            except OpenSSL.SSL.Error:
                return
        else:
            self._makefile_refs -= 1

    def getpeercert(self, binary_form=False):
        x509 = self.connection.get_peer_certificate()

        if not x509:
            return x509

        if binary_form:
            return OpenSSL.crypto.dump_certificate(
                OpenSSL.crypto.FILETYPE_ASN1,
                x509)

        return {
            'subject': (
                (('commonName', x509.get_subject().CN),),
            ),
            'subjectAltName': get_subj_alt_name(x509)
        }

    def version(self):
        return self.connection.get_protocol_version_name()

    def _reuse(self):
        self._makefile_refs += 1

    def _drop(self):
        if self._makefile_refs < 1:
            self.close()
        else:
            self._makefile_refs -= 1


if _fileobject:  # Platform-specific: Python 2
    def makefile(self, mode, bufsize=-1):
        self._makefile_refs += 1
        return _fileobject(self, mode, bufsize, close=True)
else:  # Platform-specific: Python 3
    makefile = backport_makefile

WrappedSocket.makefile = makefile


class PyOpenSSLContext(object):
    """
    I am a wrapper class for the PyOpenSSL ``Context`` object. I am responsible
    for translating the interface of the standard library ``SSLContext``
    object to calls into PyOpenSSL.
""" def __init__(self, protocol): self.protocol = _openssl_versions[protocol] self._ctx = OpenSSL.SSL.Context(self.protocol) self._options = 0 self.check_hostname = False @property def options(self): return self._options @options.setter def options(self, value): self._options = value self._ctx.set_options(value) @property def verify_mode(self): return _openssl_to_stdlib_verify[self._ctx.get_verify_mode()] @verify_mode.setter def verify_mode(self, value): self._ctx.set_verify( _stdlib_to_openssl_verify[value], _verify_callback ) def set_default_verify_paths(self): self._ctx.set_default_verify_paths() def set_ciphers(self, ciphers): if isinstance(ciphers, six.text_type): ciphers = ciphers.encode('utf-8') self._ctx.set_cipher_list(ciphers) def load_verify_locations(self, cafile=None, capath=None, cadata=None): if cafile is not None: cafile = cafile.encode('utf-8') if capath is not None: capath = capath.encode('utf-8') self._ctx.load_verify_locations(cafile, capath) if cadata is not None: self._ctx.load_verify_locations(BytesIO(cadata)) def load_cert_chain(self, certfile, keyfile=None, password=None): self._ctx.use_certificate_chain_file(certfile) if password is not None: if not isinstance(password, six.binary_type): password = password.encode('utf-8') self._ctx.set_passwd_cb(lambda *_: password) self._ctx.use_privatekey_file(keyfile or certfile) def wrap_socket(self, sock, server_side=False, do_handshake_on_connect=True, suppress_ragged_eofs=True, server_hostname=None): cnx = OpenSSL.SSL.Connection(self._ctx, sock) if isinstance(server_hostname, six.text_type): # Platform-specific: Python 3 server_hostname = server_hostname.encode('utf-8') if server_hostname is not None: cnx.set_tlsext_host_name(server_hostname) cnx.set_connect_state() while True: try: cnx.do_handshake() except OpenSSL.SSL.WantReadError: if not util.wait_for_read(sock, sock.gettimeout()): raise timeout('select timed out') continue except OpenSSL.SSL.Error as e: raise ssl.SSLError('bad handshake: %r' % e) break return WrappedSocket(cnx, sock) def _verify_callback(cnx, x509, err_no, err_depth, return_code): return err_no == 0 ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 
# =========================================================================
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/securetransport.py
# =========================================================================
"""
SecureTransport support for urllib3 via ctypes.

This makes platform-native TLS available to urllib3 users on macOS without the
use of a compiler. This is an important feature because the Python Package
Index is moving to become a TLSv1.2-or-higher server, and the default OpenSSL
that ships with macOS is not capable of doing TLSv1.2. The only way to resolve
this is to give macOS users an alternative solution to the problem, and that
solution is to use SecureTransport.

We use ctypes here because this solution must not require a compiler. That's
because pip is not allowed to require a compiler either.

This is not intended to be a seriously long-term solution to this problem.
The hope is that PEP 543 will eventually solve this issue for us, at which
point we can retire this contrib module. But in the short term, we need to
solve the impending tire fire that is Python on Mac without this kind of
contrib module. So...here we are.

To use this module, simply import and inject it::

    import urllib3.contrib.securetransport
    urllib3.contrib.securetransport.inject_into_urllib3()

Happy TLSing!

This code is a bastardised version of the code found in Will Bond's oscrypto
library. An enormous debt is owed to him for blazing this trail for us. For
that reason, this code should be considered to be covered both by urllib3's
license and by oscrypto's:

    Copyright (c) 2015-2016 Will Bond <will@wbond.net>

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the "Software"),
    to deal in the Software without restriction, including without limitation
    the rights to use, copy, modify, merge, publish, distribute, sublicense,
    and/or sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included
    in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
    THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    DEALINGS IN THE SOFTWARE.
""" from __future__ import absolute_import import contextlib import ctypes import errno import os.path import shutil import socket import ssl import threading import weakref from .. import util from ._securetransport.bindings import ( Security, SecurityConst, CoreFoundation ) from ._securetransport.low_level import ( _assert_no_error, _cert_array_from_pem, _temporary_keychain, _load_client_cert_chain ) try: # Platform-specific: Python 2 from socket import _fileobject except ImportError: # Platform-specific: Python 3 _fileobject = None from ..packages.backports.makefile import backport_makefile __all__ = ['inject_into_urllib3', 'extract_from_urllib3'] # SNI always works HAS_SNI = True orig_util_HAS_SNI = util.HAS_SNI orig_util_SSLContext = util.ssl_.SSLContext # This dictionary is used by the read callback to obtain a handle to the # calling wrapped socket. This is a pretty silly approach, but for now it'll # do. I feel like I should be able to smuggle a handle to the wrapped socket # directly in the SSLConnectionRef, but for now this approach will work I # guess. # # We need to lock around this structure for inserts, but we don't do it for # reads/writes in the callbacks. The reasoning here goes as follows: # # 1. It is not possible to call into the callbacks before the dictionary is # populated, so once in the callback the id must be in the dictionary. # 2. The callbacks don't mutate the dictionary, they only read from it, and # so cannot conflict with any of the insertions. # # This is good: if we had to lock in the callbacks we'd drastically slow down # the performance of this code. _connection_refs = weakref.WeakValueDictionary() _connection_ref_lock = threading.Lock() # Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over # for no better reason than we need *a* limit, and this one is right there. SSL_WRITE_BLOCKSIZE = 16384 # This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to # individual cipher suites. We need to do this because this is how # SecureTransport wants them. 
CIPHER_SUITES = [ SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, SecurityConst.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, SecurityConst.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, SecurityConst.TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, SecurityConst.TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA, SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA, SecurityConst.TLS_AES_256_GCM_SHA384, SecurityConst.TLS_AES_128_GCM_SHA256, SecurityConst.TLS_RSA_WITH_AES_256_GCM_SHA384, SecurityConst.TLS_RSA_WITH_AES_128_GCM_SHA256, SecurityConst.TLS_AES_128_CCM_8_SHA256, SecurityConst.TLS_AES_128_CCM_SHA256, SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA256, SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA256, SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA, SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA, ] # Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of # TLSv1 and a high of TLSv1.3. For everything else, we pin to that version. # TLSv1 to 1.2 are supported on macOS 10.8+ and TLSv1.3 is macOS 10.13+ _protocol_to_min_max = { util.PROTOCOL_TLS: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocolMaxSupported), } if hasattr(ssl, "PROTOCOL_SSLv2"): _protocol_to_min_max[ssl.PROTOCOL_SSLv2] = ( SecurityConst.kSSLProtocol2, SecurityConst.kSSLProtocol2 ) if hasattr(ssl, "PROTOCOL_SSLv3"): _protocol_to_min_max[ssl.PROTOCOL_SSLv3] = ( SecurityConst.kSSLProtocol3, SecurityConst.kSSLProtocol3 ) if hasattr(ssl, "PROTOCOL_TLSv1"): _protocol_to_min_max[ssl.PROTOCOL_TLSv1] = ( SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol1 ) if hasattr(ssl, "PROTOCOL_TLSv1_1"): _protocol_to_min_max[ssl.PROTOCOL_TLSv1_1] = ( SecurityConst.kTLSProtocol11, SecurityConst.kTLSProtocol11 ) if hasattr(ssl, "PROTOCOL_TLSv1_2"): _protocol_to_min_max[ssl.PROTOCOL_TLSv1_2] = ( SecurityConst.kTLSProtocol12, SecurityConst.kTLSProtocol12 ) def inject_into_urllib3(): """ Monkey-patch urllib3 with SecureTransport-backed SSL-support. """ util.SSLContext = SecureTransportContext util.ssl_.SSLContext = SecureTransportContext util.HAS_SNI = HAS_SNI util.ssl_.HAS_SNI = HAS_SNI util.IS_SECURETRANSPORT = True util.ssl_.IS_SECURETRANSPORT = True def extract_from_urllib3(): """ Undo monkey-patching by :func:`inject_into_urllib3`. """ util.SSLContext = orig_util_SSLContext util.ssl_.SSLContext = orig_util_SSLContext util.HAS_SNI = orig_util_HAS_SNI util.ssl_.HAS_SNI = orig_util_HAS_SNI util.IS_SECURETRANSPORT = False util.ssl_.IS_SECURETRANSPORT = False def _read_callback(connection_id, data_buffer, data_length_pointer): """ SecureTransport read callback. This is called by ST to request that data be returned from the socket. 
""" wrapped_socket = None try: wrapped_socket = _connection_refs.get(connection_id) if wrapped_socket is None: return SecurityConst.errSSLInternal base_socket = wrapped_socket.socket requested_length = data_length_pointer[0] timeout = wrapped_socket.gettimeout() error = None read_count = 0 try: while read_count < requested_length: if timeout is None or timeout >= 0: if not util.wait_for_read(base_socket, timeout): raise socket.error(errno.EAGAIN, 'timed out') remaining = requested_length - read_count buffer = (ctypes.c_char * remaining).from_address( data_buffer + read_count ) chunk_size = base_socket.recv_into(buffer, remaining) read_count += chunk_size if not chunk_size: if not read_count: return SecurityConst.errSSLClosedGraceful break except (socket.error) as e: error = e.errno if error is not None and error != errno.EAGAIN: data_length_pointer[0] = read_count if error == errno.ECONNRESET or error == errno.EPIPE: return SecurityConst.errSSLClosedAbort raise data_length_pointer[0] = read_count if read_count != requested_length: return SecurityConst.errSSLWouldBlock return 0 except Exception as e: if wrapped_socket is not None: wrapped_socket._exception = e return SecurityConst.errSSLInternal def _write_callback(connection_id, data_buffer, data_length_pointer): """ SecureTransport write callback. This is called by ST to request that data actually be sent on the network. """ wrapped_socket = None try: wrapped_socket = _connection_refs.get(connection_id) if wrapped_socket is None: return SecurityConst.errSSLInternal base_socket = wrapped_socket.socket bytes_to_write = data_length_pointer[0] data = ctypes.string_at(data_buffer, bytes_to_write) timeout = wrapped_socket.gettimeout() error = None sent = 0 try: while sent < bytes_to_write: if timeout is None or timeout >= 0: if not util.wait_for_write(base_socket, timeout): raise socket.error(errno.EAGAIN, 'timed out') chunk_sent = base_socket.send(data) sent += chunk_sent # This has some needless copying here, but I'm not sure there's # much value in optimising this data path. data = data[chunk_sent:] except (socket.error) as e: error = e.errno if error is not None and error != errno.EAGAIN: data_length_pointer[0] = sent if error == errno.ECONNRESET or error == errno.EPIPE: return SecurityConst.errSSLClosedAbort raise data_length_pointer[0] = sent if sent != bytes_to_write: return SecurityConst.errSSLWouldBlock return 0 except Exception as e: if wrapped_socket is not None: wrapped_socket._exception = e return SecurityConst.errSSLInternal # We need to keep these two objects references alive: if they get GC'd while # in use then SecureTransport could attempt to call a function that is in freed # memory. That would be...uh...bad. Yeah, that's the word. Bad. _read_callback_pointer = Security.SSLReadFunc(_read_callback) _write_callback_pointer = Security.SSLWriteFunc(_write_callback) class WrappedSocket(object): """ API-compatibility wrapper for Python's OpenSSL wrapped socket object. Note: _makefile_refs, _drop(), and _reuse() are needed for the garbage collector of PyPy. """ def __init__(self, socket): self.socket = socket self.context = None self._makefile_refs = 0 self._closed = False self._exception = None self._keychain = None self._keychain_dir = None self._client_cert_chain = None # We save off the previously-configured timeout and then set it to # zero. 
This is done because we use select and friends to handle the # timeouts, but if we leave the timeout set on the lower socket then # Python will "kindly" call select on that socket again for us. Avoid # that by forcing the timeout to zero. self._timeout = self.socket.gettimeout() self.socket.settimeout(0) @contextlib.contextmanager def _raise_on_error(self): """ A context manager that can be used to wrap calls that do I/O from SecureTransport. If any of the I/O callbacks hit an exception, this context manager will correctly propagate the exception after the fact. This avoids silently swallowing those exceptions. It also correctly forces the socket closed. """ self._exception = None # We explicitly don't catch around this yield because in the unlikely # event that an exception was hit in the block we don't want to swallow # it. yield if self._exception is not None: exception, self._exception = self._exception, None self.close() raise exception def _set_ciphers(self): """ Sets up the allowed ciphers. By default this matches the set in util.ssl_.DEFAULT_CIPHERS, at least as supported by macOS. This is done custom and doesn't allow changing at this time, mostly because parsing OpenSSL cipher strings is going to be a freaking nightmare. """ ciphers = (Security.SSLCipherSuite * len(CIPHER_SUITES))(*CIPHER_SUITES) result = Security.SSLSetEnabledCiphers( self.context, ciphers, len(CIPHER_SUITES) ) _assert_no_error(result) def _custom_validate(self, verify, trust_bundle): """ Called when we have set custom validation. We do this in two cases: first, when cert validation is entirely disabled; and second, when using a custom trust DB. """ # If we disabled cert validation, just say: cool. if not verify: return # We want data in memory, so load it up. if os.path.isfile(trust_bundle): with open(trust_bundle, 'rb') as f: trust_bundle = f.read() cert_array = None trust = Security.SecTrustRef() try: # Get a CFArray that contains the certs we want. cert_array = _cert_array_from_pem(trust_bundle) # Ok, now the hard part. We want to get the SecTrustRef that ST has # created for this connection, shove our CAs into it, tell ST to # ignore everything else it knows, and then ask if it can build a # chain. This is a buuuunch of code. result = Security.SSLCopyPeerTrust( self.context, ctypes.byref(trust) ) _assert_no_error(result) if not trust: raise ssl.SSLError("Failed to copy trust reference") result = Security.SecTrustSetAnchorCertificates(trust, cert_array) _assert_no_error(result) result = Security.SecTrustSetAnchorCertificatesOnly(trust, True) _assert_no_error(result) trust_result = Security.SecTrustResultType() result = Security.SecTrustEvaluate( trust, ctypes.byref(trust_result) ) _assert_no_error(result) finally: if trust: CoreFoundation.CFRelease(trust) if cert_array is not None: CoreFoundation.CFRelease(cert_array) # Ok, now we can look at what the result was. successes = ( SecurityConst.kSecTrustResultUnspecified, SecurityConst.kSecTrustResultProceed ) if trust_result.value not in successes: raise ssl.SSLError( "certificate verify failed, error code: %d" % trust_result.value ) def handshake(self, server_hostname, verify, trust_bundle, min_version, max_version, client_cert, client_key, client_key_passphrase): """ Actually performs the TLS handshake. This is run automatically by wrapped socket, and shouldn't be needed in user code. """ # First, we do the initial bits of connection setup. We need to create # a context, set its I/O funcs, and set the connection reference. 
self.context = Security.SSLCreateContext( None, SecurityConst.kSSLClientSide, SecurityConst.kSSLStreamType ) result = Security.SSLSetIOFuncs( self.context, _read_callback_pointer, _write_callback_pointer ) _assert_no_error(result) # Here we need to compute the handle to use. We do this by taking the # id of self modulo 2**31 - 1. If this is already in the dictionary, we # just keep incrementing by one until we find a free space. with _connection_ref_lock: handle = id(self) % 2147483647 while handle in _connection_refs: handle = (handle + 1) % 2147483647 _connection_refs[handle] = self result = Security.SSLSetConnection(self.context, handle) _assert_no_error(result) # If we have a server hostname, we should set that too. if server_hostname: if not isinstance(server_hostname, bytes): server_hostname = server_hostname.encode('utf-8') result = Security.SSLSetPeerDomainName( self.context, server_hostname, len(server_hostname) ) _assert_no_error(result) # Setup the ciphers. self._set_ciphers() # Set the minimum and maximum TLS versions. result = Security.SSLSetProtocolVersionMin(self.context, min_version) _assert_no_error(result) # TLS 1.3 isn't necessarily enabled by the OS # so we have to detect when we error out and try # setting TLS 1.3 if it's allowed. kTLSProtocolMaxSupported # was added in macOS 10.13 along with kTLSProtocol13. result = Security.SSLSetProtocolVersionMax(self.context, max_version) if result != 0 and max_version == SecurityConst.kTLSProtocolMaxSupported: result = Security.SSLSetProtocolVersionMax(self.context, SecurityConst.kTLSProtocol12) _assert_no_error(result) # If there's a trust DB, we need to use it. We do that by telling # SecureTransport to break on server auth. We also do that if we don't # want to validate the certs at all: we just won't actually do any # authing in that case. if not verify or trust_bundle is not None: result = Security.SSLSetSessionOption( self.context, SecurityConst.kSSLSessionOptionBreakOnServerAuth, True ) _assert_no_error(result) # If there's a client cert, we need to use it. if client_cert: self._keychain, self._keychain_dir = _temporary_keychain() self._client_cert_chain = _load_client_cert_chain( self._keychain, client_cert, client_key ) result = Security.SSLSetCertificate( self.context, self._client_cert_chain ) _assert_no_error(result) while True: with self._raise_on_error(): result = Security.SSLHandshake(self.context) if result == SecurityConst.errSSLWouldBlock: raise socket.timeout("handshake timed out") elif result == SecurityConst.errSSLServerAuthCompleted: self._custom_validate(verify, trust_bundle) continue else: _assert_no_error(result) break def fileno(self): return self.socket.fileno() # Copy-pasted from Python 3.5 source code def _decref_socketios(self): if self._makefile_refs > 0: self._makefile_refs -= 1 if self._closed: self.close() def recv(self, bufsiz): buffer = ctypes.create_string_buffer(bufsiz) bytes_read = self.recv_into(buffer, bufsiz) data = buffer[:bytes_read] return data def recv_into(self, buffer, nbytes=None): # Read short on EOF. if self._closed: return 0 if nbytes is None: nbytes = len(buffer) buffer = (ctypes.c_char * nbytes).from_buffer(buffer) processed_bytes = ctypes.c_size_t(0) with self._raise_on_error(): result = Security.SSLRead( self.context, buffer, nbytes, ctypes.byref(processed_bytes) ) # There are some result codes that we want to treat as "not always # errors". Specifically, those are errSSLWouldBlock, # errSSLClosedGraceful, and errSSLClosedNoNotify. 
if (result == SecurityConst.errSSLWouldBlock): # If we didn't process any bytes, then this was just a time out. # However, we can get errSSLWouldBlock in situations when we *did* # read some data, and in those cases we should just read "short" # and return. if processed_bytes.value == 0: # Timed out, no data read. raise socket.timeout("recv timed out") elif result in (SecurityConst.errSSLClosedGraceful, SecurityConst.errSSLClosedNoNotify): # The remote peer has closed this connection. We should do so as # well. Note that we don't actually return here because in # principle this could actually be fired along with return data. # It's unlikely though. self.close() else: _assert_no_error(result) # Ok, we read and probably succeeded. We should return whatever data # was actually read. return processed_bytes.value def settimeout(self, timeout): self._timeout = timeout def gettimeout(self): return self._timeout def send(self, data): processed_bytes = ctypes.c_size_t(0) with self._raise_on_error(): result = Security.SSLWrite( self.context, data, len(data), ctypes.byref(processed_bytes) ) if result == SecurityConst.errSSLWouldBlock and processed_bytes.value == 0: # Timed out raise socket.timeout("send timed out") else: _assert_no_error(result) # We sent, and probably succeeded. Tell them how much we sent. return processed_bytes.value def sendall(self, data): total_sent = 0 while total_sent < len(data): sent = self.send(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE]) total_sent += sent def shutdown(self): with self._raise_on_error(): Security.SSLClose(self.context) def close(self): # TODO: should I do clean shutdown here? Do I have to? if self._makefile_refs < 1: self._closed = True if self.context: CoreFoundation.CFRelease(self.context) self.context = None if self._client_cert_chain: CoreFoundation.CFRelease(self._client_cert_chain) self._client_cert_chain = None if self._keychain: Security.SecKeychainDelete(self._keychain) CoreFoundation.CFRelease(self._keychain) shutil.rmtree(self._keychain_dir) self._keychain = self._keychain_dir = None return self.socket.close() else: self._makefile_refs -= 1 def getpeercert(self, binary_form=False): # Urgh, annoying. # # Here's how we do this: # # 1. Call SSLCopyPeerTrust to get hold of the trust object for this # connection. # 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf. # 3. To get the CN, call SecCertificateCopyCommonName and process that # string so that it's of the appropriate type. # 4. To get the SAN, we need to do something a bit more complex: # a. Call SecCertificateCopyValues to get the data, requesting # kSecOIDSubjectAltName. # b. Mess about with this dictionary to try to get the SANs out. # # This is gross. Really gross. It's going to be a few hundred LoC extra # just to repeat something that SecureTransport can *already do*. So my # operating assumption at this time is that what we want to do is # instead to just flag to urllib3 that it shouldn't do its own hostname # validation when using SecureTransport. if not binary_form: raise ValueError( "SecureTransport only supports dumping binary certs" ) trust = Security.SecTrustRef() certdata = None der_bytes = None try: # Grab the trust store. result = Security.SSLCopyPeerTrust( self.context, ctypes.byref(trust) ) _assert_no_error(result) if not trust: # Probably we haven't done the handshake yet. No biggie. return None cert_count = Security.SecTrustGetCertificateCount(trust) if not cert_count: # Also a case that might happen if we haven't handshaked. # Handshook? Handshaken? 
return None leaf = Security.SecTrustGetCertificateAtIndex(trust, 0) assert leaf # Ok, now we want the DER bytes. certdata = Security.SecCertificateCopyData(leaf) assert certdata data_length = CoreFoundation.CFDataGetLength(certdata) data_buffer = CoreFoundation.CFDataGetBytePtr(certdata) der_bytes = ctypes.string_at(data_buffer, data_length) finally: if certdata: CoreFoundation.CFRelease(certdata) if trust: CoreFoundation.CFRelease(trust) return der_bytes def version(self): protocol = Security.SSLProtocol() result = Security.SSLGetNegotiatedProtocolVersion(self.context, ctypes.byref(protocol)) _assert_no_error(result) if protocol.value == SecurityConst.kTLSProtocol13: return 'TLSv1.3' elif protocol.value == SecurityConst.kTLSProtocol12: return 'TLSv1.2' elif protocol.value == SecurityConst.kTLSProtocol11: return 'TLSv1.1' elif protocol.value == SecurityConst.kTLSProtocol1: return 'TLSv1' elif protocol.value == SecurityConst.kSSLProtocol3: return 'SSLv3' elif protocol.value == SecurityConst.kSSLProtocol2: return 'SSLv2' else: raise ssl.SSLError('Unknown TLS version: %r' % protocol) def _reuse(self): self._makefile_refs += 1 def _drop(self): if self._makefile_refs < 1: self.close() else: self._makefile_refs -= 1 if _fileobject: # Platform-specific: Python 2 def makefile(self, mode, bufsize=-1): self._makefile_refs += 1 return _fileobject(self, mode, bufsize, close=True) else: # Platform-specific: Python 3 def makefile(self, mode="r", buffering=None, *args, **kwargs): # We disable buffering with SecureTransport because it conflicts with # the buffering that ST does internally (see issue #1153 for more). buffering = 0 return backport_makefile(self, mode, buffering, *args, **kwargs) WrappedSocket.makefile = makefile class SecureTransportContext(object): """ I am a wrapper class for the SecureTransport library, to translate the interface of the standard library ``SSLContext`` object to calls into SecureTransport. """ def __init__(self, protocol): self._min_version, self._max_version = _protocol_to_min_max[protocol] self._options = 0 self._verify = False self._trust_bundle = None self._client_cert = None self._client_key = None self._client_key_passphrase = None @property def check_hostname(self): """ SecureTransport cannot have its hostname checking disabled. For more, see the comment on getpeercert() in this file. """ return True @check_hostname.setter def check_hostname(self, value): """ SecureTransport cannot have its hostname checking disabled. For more, see the comment on getpeercert() in this file. """ pass @property def options(self): # TODO: Well, crap. # # So this is the bit of the code that is the most likely to cause us # trouble. Essentially we need to enumerate all of the SSL options that # users might want to use and try to see if we can sensibly translate # them, or whether we should just ignore them. return self._options @options.setter def options(self, value): # TODO: Update in line with above. self._options = value @property def verify_mode(self): return ssl.CERT_REQUIRED if self._verify else ssl.CERT_NONE @verify_mode.setter def verify_mode(self, value): self._verify = True if value == ssl.CERT_REQUIRED else False def set_default_verify_paths(self): # So, this has to do something a bit weird. Specifically, what it does # is nothing. # # This means that, if we had previously had load_verify_locations # called, this does not undo that. 
        # We need to do that because it turns out that the rest of the urllib3
        # code will attempt to load the default verify paths if it hasn't been
        # told about any paths, even if the context itself was configured some
        # time earlier. We resolve that by just ignoring it.
        pass

    def load_default_certs(self):
        return self.set_default_verify_paths()

    def set_ciphers(self, ciphers):
        # For now, we just require the default cipher string.
        if ciphers != util.ssl_.DEFAULT_CIPHERS:
            raise ValueError(
                "SecureTransport doesn't support custom cipher strings"
            )

    def load_verify_locations(self, cafile=None, capath=None, cadata=None):
        # OK, we only really support cadata and cafile.
        if capath is not None:
            raise ValueError(
                "SecureTransport does not support cert directories"
            )

        self._trust_bundle = cafile or cadata

    def load_cert_chain(self, certfile, keyfile=None, password=None):
        self._client_cert = certfile
        self._client_key = keyfile
        # wrap_socket() reads self._client_key_passphrase, so the password
        # must be stored under that attribute.
        self._client_key_passphrase = password

    def wrap_socket(self, sock, server_side=False,
                    do_handshake_on_connect=True, suppress_ragged_eofs=True,
                    server_hostname=None):
        # So, what do we do here? Firstly, we assert some properties. This is
        # a stripped down shim, so there is some functionality we don't
        # support. See PEP 543 for the real deal.
        assert not server_side
        assert do_handshake_on_connect
        assert suppress_ragged_eofs

        # Ok, we're good to go. Now we want to create the wrapped socket
        # object and store it in the appropriate place.
        wrapped_socket = WrappedSocket(sock)

        # Now we can handshake
        wrapped_socket.handshake(
            server_hostname, self._verify, self._trust_bundle,
            self._min_version, self._max_version, self._client_cert,
            self._client_key, self._client_key_passphrase
        )
        return wrapped_socket
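# ---------------------------------------------------------------------------
# A stand-alone sketch of the connection-handle scheme used above (a
# reconstruction for illustration, not part of the vendored file). As the
# comments in handshake() explain, a weak reference to each WrappedSocket is
# smuggled through SSLSetConnection as an integer handle, id(self) modulo
# 2**31 - 1, probing linearly on collision; the read/write callbacks then
# recover the socket from the dictionary without locking.

import threading
import weakref

_refs = weakref.WeakValueDictionary()
_ref_lock = threading.Lock()


def _allocate_handle(obj):
    # Mirrors the allocation loop in WrappedSocket.handshake().
    with _ref_lock:
        handle = id(obj) % 2147483647
        while handle in _refs:
            handle = (handle + 1) % 2147483647
        _refs[handle] = obj
        return handle


class _Conn(object):
    pass


_demo_conn = _Conn()
_demo_handle = _allocate_handle(_demo_conn)   # small int usable as a C handle
assert _refs.get(_demo_handle) is _demo_conn  # callbacks look it up this way
# ---------------------------------------------------------------------------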
# =========================================================================
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/contrib/socks.py
# =========================================================================
# -*- coding: utf-8 -*-
"""
This module contains provisional support for SOCKS proxies from within
urllib3. This module supports SOCKS4, SOCKS4A (an extension of SOCKS4), and
SOCKS5. To enable its functionality, either install PySocks or install this
module with the ``socks`` extra.

The SOCKS implementation supports the full range of urllib3 features. It also
supports the following SOCKS features:

- SOCKS4A (``proxy_url='socks4a://...``)
- SOCKS4 (``proxy_url='socks4://...``)
- SOCKS5 with remote DNS (``proxy_url='socks5h://...``)
- SOCKS5 with local DNS (``proxy_url='socks5://...``)
- Usernames and passwords for the SOCKS proxy

.. note::
   It is recommended to use ``socks5h://`` or ``socks4a://`` schemes in
   your ``proxy_url`` to ensure that DNS resolution is done from the remote
   server instead of client-side when connecting to a domain name.

SOCKS4 supports IPv4 and domain names with the SOCKS4A extension. SOCKS5
supports IPv4, IPv6, and domain names.

When connecting to a SOCKS4 proxy the ``username`` portion of the
``proxy_url`` will be sent as the ``userid`` section of the SOCKS request::

    proxy_url="socks4a://<userid>@proxy-host"

When connecting to a SOCKS5 proxy the ``username`` and ``password`` portion of
the ``proxy_url`` will be sent as the username/password to authenticate with
the proxy::

    proxy_url="socks5h://<username>:<password>@proxy-host"

"""
from __future__ import absolute_import

try:
    import socks
except ImportError:
    import warnings
    from ..exceptions import DependencyWarning

    warnings.warn((
        'SOCKS support in urllib3 requires the installation of optional '
        'dependencies: specifically, PySocks. For more information, see '
        'https://urllib3.readthedocs.io/en/latest/contrib.html#socks-proxies'
        ),
        DependencyWarning
    )
    raise

from socket import error as SocketError, timeout as SocketTimeout

from ..connection import (
    HTTPConnection, HTTPSConnection
)
from ..connectionpool import (
    HTTPConnectionPool, HTTPSConnectionPool
)
from ..exceptions import ConnectTimeoutError, NewConnectionError
from ..poolmanager import PoolManager
from ..util.url import parse_url

try:
    import ssl
except ImportError:
    ssl = None


class SOCKSConnection(HTTPConnection):
    """
    A plain-text HTTP connection that connects via a SOCKS proxy.
    """
    def __init__(self, *args, **kwargs):
        self._socks_options = kwargs.pop('_socks_options')
        super(SOCKSConnection, self).__init__(*args, **kwargs)

    def _new_conn(self):
        """
        Establish a new connection via the SOCKS proxy.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = socks.create_connection(
                (self.host, self.port),
                proxy_type=self._socks_options['socks_version'],
                proxy_addr=self._socks_options['proxy_host'],
                proxy_port=self._socks_options['proxy_port'],
                proxy_username=self._socks_options['username'],
                proxy_password=self._socks_options['password'],
                proxy_rdns=self._socks_options['rdns'],
                timeout=self.timeout,
                **extra_kw
            )

        except SocketTimeout:
            raise ConnectTimeoutError(
                self, "Connection to %s timed out. (connect timeout=%s)" %
                (self.host, self.timeout))

        except socks.ProxyError as e:
            # This is fragile as hell, but it seems to be the only way to
            # raise useful errors here.
            if e.socket_err:
                error = e.socket_err
                if isinstance(error, SocketTimeout):
                    raise ConnectTimeoutError(
                        self,
                        "Connection to %s timed out. (connect timeout=%s)" %
                        (self.host, self.timeout)
                    )
                else:
                    raise NewConnectionError(
                        self,
                        "Failed to establish a new connection: %s" % error
                    )
            else:
                raise NewConnectionError(
                    self,
                    "Failed to establish a new connection: %s" % e
                )

        except SocketError as e:  # Defensive: PySocks should catch all these.
            raise NewConnectionError(
                self, "Failed to establish a new connection: %s" % e)

        return conn


# We don't need to duplicate the Verified/Unverified distinction from
# urllib3/connection.py here because the HTTPSConnection will already have
# been correctly set to either the Verified or Unverified form by that module.
# This means the SOCKSHTTPSConnection will automatically be the correct type.
class SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection):
    pass


class SOCKSHTTPConnectionPool(HTTPConnectionPool):
    ConnectionCls = SOCKSConnection


class SOCKSHTTPSConnectionPool(HTTPSConnectionPool):
    ConnectionCls = SOCKSHTTPSConnection


class SOCKSProxyManager(PoolManager):
    """
    A version of the urllib3 ProxyManager that routes connections via the
    defined SOCKS proxy.
    """
    pool_classes_by_scheme = {
        'http': SOCKSHTTPConnectionPool,
        'https': SOCKSHTTPSConnectionPool,
    }

    def __init__(self, proxy_url, username=None, password=None,
                 num_pools=10, headers=None, **connection_pool_kw):
        parsed = parse_url(proxy_url)

        if username is None and password is None and parsed.auth is not None:
            split = parsed.auth.split(':')
            if len(split) == 2:
                username, password = split
        if parsed.scheme == 'socks5':
            socks_version = socks.PROXY_TYPE_SOCKS5
            rdns = False
        elif parsed.scheme == 'socks5h':
            socks_version = socks.PROXY_TYPE_SOCKS5
            rdns = True
        elif parsed.scheme == 'socks4':
            socks_version = socks.PROXY_TYPE_SOCKS4
            rdns = False
        elif parsed.scheme == 'socks4a':
            socks_version = socks.PROXY_TYPE_SOCKS4
            rdns = True
        else:
            raise ValueError(
                "Unable to determine SOCKS version from %s" % proxy_url
            )

        self.proxy_url = proxy_url

        socks_options = {
            'socks_version': socks_version,
            'proxy_host': parsed.host,
            'proxy_port': parsed.port,
            'username': username,
            'password': password,
            'rdns': rdns
        }
        connection_pool_kw['_socks_options'] = socks_options

        super(SOCKSProxyManager, self).__init__(
            num_pools, headers, **connection_pool_kw
        )

        self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme
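# ---------------------------------------------------------------------------
# A minimal usage sketch for the module above (not part of the vendored
# file). It assumes PySocks is installed and that a SOCKS5 proxy is listening
# on localhost:1080 (a made-up address for illustration). SOCKSProxyManager
# is a drop-in replacement for PoolManager; the 'socks5h' scheme asks the
# proxy to resolve DNS remotely, as the docstring above recommends.

def _socks_demo():
    from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager

    proxy = SOCKSProxyManager('socks5h://user:pass@localhost:1080/')
    r = proxy.request('GET', 'https://example.com/')
    return r.status
# ---------------------------------------------------------------------------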
# =========================================================================
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/exceptions.py
# =========================================================================
from __future__ import absolute_import
from .packages.six.moves.http_client import (
    IncompleteRead as httplib_IncompleteRead
)

# Base Exceptions


class HTTPError(Exception):
    "Base exception used by this module."
    pass


class HTTPWarning(Warning):
    "Base warning used by this module."
    pass


class PoolError(HTTPError):
    "Base exception for errors caused within a pool."
    def __init__(self, pool, message):
        self.pool = pool
        HTTPError.__init__(self, "%s: %s" % (pool, message))

    def __reduce__(self):
        # For pickling purposes.
        return self.__class__, (None, None)


class RequestError(PoolError):
    "Base exception for PoolErrors that have associated URLs."
    def __init__(self, pool, url, message):
        self.url = url
        PoolError.__init__(self, pool, message)

    def __reduce__(self):
        # For pickling purposes.
        return self.__class__, (None, self.url, None)


class SSLError(HTTPError):
    "Raised when SSL certificate fails in an HTTPS connection."
    pass


class ProxyError(HTTPError):
    "Raised when the connection to a proxy fails."
    pass


class DecodeError(HTTPError):
    "Raised when automatic decoding based on Content-Type fails."
    pass


class ProtocolError(HTTPError):
    "Raised when something unexpected happens mid-request/response."
    pass


#: Renamed to ProtocolError but aliased for backwards compatibility.
ConnectionError = ProtocolError


# Leaf Exceptions


class MaxRetryError(RequestError):
    """Raised when the maximum number of retries is exceeded.

    :param pool: The connection pool
    :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
    :param string url: The requested URL
    :param exceptions.Exception reason: The underlying error

    """
    def __init__(self, pool, url, reason=None):
        self.reason = reason

        message = "Max retries exceeded with url: %s (Caused by %r)" % (
            url, reason)

        RequestError.__init__(self, pool, url, message)


class HostChangedError(RequestError):
    "Raised when an existing pool gets a request for a foreign host."
    def __init__(self, pool, url, retries=3):
        message = "Tried to open a foreign host with url: %s" % url
        RequestError.__init__(self, pool, url, message)
        self.retries = retries


class TimeoutStateError(HTTPError):
    """ Raised when passing an invalid state to a timeout """
    pass


class TimeoutError(HTTPError):
    """ Raised when a socket timeout error occurs.

    Catching this error will catch both :exc:`ReadTimeoutErrors
    <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
""" pass class ReadTimeoutError(TimeoutError, RequestError): "Raised when a socket timeout occurs while receiving data from a server" pass # This timeout error does not have a URL attached and needs to inherit from the # base HTTPError class ConnectTimeoutError(TimeoutError): "Raised when a socket timeout occurs while connecting to a server" pass class NewConnectionError(ConnectTimeoutError, PoolError): "Raised when we fail to establish a new connection. Usually ECONNREFUSED." pass class EmptyPoolError(PoolError): "Raised when a pool runs out of connections and no more are allowed." pass class ClosedPoolError(PoolError): "Raised when a request enters a pool after the pool has been closed." pass class LocationValueError(ValueError, HTTPError): "Raised when there is something wrong with a given URL input." pass class LocationParseError(LocationValueError): "Raised when get_host or similar fails to parse the URL input." def __init__(self, location): message = "Failed to parse: %s" % location HTTPError.__init__(self, message) self.location = location class ResponseError(HTTPError): "Used as a container for an error reason supplied in a MaxRetryError." GENERIC_ERROR = 'too many error responses' SPECIFIC_ERROR = 'too many {status_code} error responses' class SecurityWarning(HTTPWarning): "Warned when performing security reducing actions" pass class SubjectAltNameWarning(SecurityWarning): "Warned when connecting to a host with a certificate missing a SAN." pass class InsecureRequestWarning(SecurityWarning): "Warned when making an unverified HTTPS request." pass class SystemTimeWarning(SecurityWarning): "Warned when system time is suspected to be wrong" pass class InsecurePlatformWarning(SecurityWarning): "Warned when certain SSL configuration is not available on a platform." pass class SNIMissingWarning(HTTPWarning): "Warned when making a HTTPS request without SNI available." pass class DependencyWarning(HTTPWarning): """ Warned when an attempt is made to import a module with missing optional dependencies. """ pass class ResponseNotChunked(ProtocolError, ValueError): "Response needs to be chunked in order to read it as chunks." pass class BodyNotHttplibCompatible(HTTPError): """ Body should be httplib.HTTPResponse like (have an fp attribute which returns raw chunks) for read_chunked(). """ pass class IncompleteRead(HTTPError, httplib_IncompleteRead): """ Response length doesn't match expected Content-Length Subclass of http_client.IncompleteRead to allow int value for `partial` to avoid creating large objects on streamed reads. """ def __init__(self, partial, expected): super(IncompleteRead, self).__init__(partial, expected) def __repr__(self): return ('IncompleteRead(%i bytes read, ' '%i more expected)' % (self.partial, self.expected)) class InvalidHeader(HTTPError): "The header provided was somehow invalid." pass class ProxySchemeUnknown(AssertionError, ValueError): "ProxyManager does not support the supplied scheme" # TODO(t-8ch): Stop inheriting from AssertionError in v2.0. def __init__(self, scheme): message = "Not supported proxy scheme %s" % scheme super(ProxySchemeUnknown, self).__init__(message) class HeaderParsingError(HTTPError): "Raised by assert_header_parsing, but we convert it to a log.warning statement." 
    def __init__(self, defects, unparsed_data):
        message = '%s, unparsed data: %r' % (defects or 'Unknown', unparsed_data)
        super(HeaderParsingError, self).__init__(message)


class UnrewindableBodyError(HTTPError):
    "urllib3 encountered an error when trying to rewind a body"
    pass


# =========================================================================
# cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/fields.py
# =========================================================================
from __future__ import absolute_import

import email.utils
import mimetypes
import re

from .packages import six


def guess_content_type(filename, default='application/octet-stream'):
    """
    Guess the "Content-Type" of a file.

    :param filename:
        The filename to guess the "Content-Type" of using :mod:`mimetypes`.
    :param default:
        If no "Content-Type" can be guessed, default to `default`.
    """
    if filename:
        return mimetypes.guess_type(filename)[0] or default

    return default


def format_header_param_rfc2231(name, value):
    """
    Helper function to format and quote a single header parameter using the
    strategy defined in RFC 2231.

    Particularly useful for header parameters which might contain
    non-ASCII values, like file names. This follows RFC 2388 Section 4.4.

    :param name:
        The name of the parameter, a string expected to be ASCII only.
    :param value:
        The value of the parameter, provided as ``bytes`` or ``str``.
    :ret:
        An RFC-2231-formatted unicode string.
    """
    if isinstance(value, six.binary_type):
        value = value.decode("utf-8")

    if not any(ch in value for ch in '"\\\r\n'):
        result = u'%s="%s"' % (name, value)
        try:
            result.encode('ascii')
        except (UnicodeEncodeError, UnicodeDecodeError):
            pass
        else:
            return result

    if not six.PY3:  # Python 2:
        value = value.encode('utf-8')

    # encode_rfc2231 accepts an encoded string and returns an ascii-encoded
    # string in Python 2 but accepts and returns unicode strings in Python 3
    value = email.utils.encode_rfc2231(value, 'utf-8')
    value = '%s*=%s' % (name, value)

    if not six.PY3:  # Python 2:
        value = value.decode('utf-8')

    return value


_HTML5_REPLACEMENTS = {
    u"\u0022": u"%22",
    # Replace "\" with "\\".
u"\u005C": u"\u005C\u005C", u"\u005C": u"\u005C\u005C", } # All control characters from 0x00 to 0x1F *except* 0x1B. _HTML5_REPLACEMENTS.update({ six.unichr(cc): u"%{:02X}".format(cc) for cc in range(0x00, 0x1F+1) if cc not in (0x1B,) }) def _replace_multiple(value, needles_and_replacements): def replacer(match): return needles_and_replacements[match.group(0)] pattern = re.compile( r"|".join([ re.escape(needle) for needle in needles_and_replacements.keys() ]) ) result = pattern.sub(replacer, value) return result def format_header_param_html5(name, value): """ Helper function to format and quote a single header parameter using the HTML5 strategy. Particularly useful for header parameters which might contain non-ASCII values, like file names. This follows the `HTML5 Working Draft Section 4.10.22.7`_ and matches the behavior of curl and modern browsers. .. _HTML5 Working Draft Section 4.10.22.7: https://w3c.github.io/html/sec-forms.html#multipart-form-data :param name: The name of the parameter, a string expected to be ASCII only. :param value: The value of the parameter, provided as ``bytes`` or `str``. :ret: A unicode string, stripped of troublesome characters. """ if isinstance(value, six.binary_type): value = value.decode("utf-8") value = _replace_multiple(value, _HTML5_REPLACEMENTS) return u'%s="%s"' % (name, value) # For backwards-compatibility. format_header_param = format_header_param_html5 class RequestField(object): """ A data container for request body parameters. :param name: The name of this request field. Must be unicode. :param data: The data/value body. :param filename: An optional filename of the request field. Must be unicode. :param headers: An optional dict-like object of headers to initially use for the field. :param header_formatter: An optional callable that is used to encode and format the headers. By default, this is :func:`format_header_param_html5`. """ def __init__( self, name, data, filename=None, headers=None, header_formatter=format_header_param_html5): self._name = name self._filename = filename self.data = data self.headers = {} if headers: self.headers = dict(headers) self.header_formatter = header_formatter @classmethod def from_tuples( cls, fieldname, value, header_formatter=format_header_param_html5): """ A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters. Supports constructing :class:`~urllib3.fields.RequestField` from parameter of key/value strings AND key/filetuple. A filetuple is a (filename, data, MIME type) tuple where the MIME type is optional. For example:: 'foo': 'bar', 'fakefile': ('foofile.txt', 'contents of foofile'), 'realfile': ('barfile.txt', open('realfile').read()), 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'), 'nonamefile': 'contents of nonamefile field', Field names and filenames must be unicode. """ if isinstance(value, tuple): if len(value) == 3: filename, data, content_type = value else: filename, data = value content_type = guess_content_type(filename) else: filename = None content_type = None data = value request_param = cls( fieldname, data, filename=filename, header_formatter=header_formatter) request_param.make_multipart(content_type=content_type) return request_param def _render_part(self, name, value): """ Overridable helper function to format a single header parameter. By default, this calls ``self.header_formatter``. :param name: The name of the parameter, a string expected to be ASCII only. :param value: The value of the parameter, provided as a unicode string. 
""" return self.header_formatter(name, value) def _render_parts(self, header_parts): """ Helper function to format and quote a single header. Useful for single headers that are composed of multiple items. E.g., 'Content-Disposition' fields. :param header_parts: A sequence of (k, v) tuples or a :class:`dict` of (k, v) to format as `k1="v1"; k2="v2"; ...`. """ parts = [] iterable = header_parts if isinstance(header_parts, dict): iterable = header_parts.items() for name, value in iterable: if value is not None: parts.append(self._render_part(name, value)) return u'; '.join(parts) def render_headers(self): """ Renders the headers for this request field. """ lines = [] sort_keys = ['Content-Disposition', 'Content-Type', 'Content-Location'] for sort_key in sort_keys: if self.headers.get(sort_key, False): lines.append(u'%s: %s' % (sort_key, self.headers[sort_key])) for header_name, header_value in self.headers.items(): if header_name not in sort_keys: if header_value: lines.append(u'%s: %s' % (header_name, header_value)) lines.append(u'\r\n') return u'\r\n'.join(lines) def make_multipart(self, content_disposition=None, content_type=None, content_location=None): """ Makes this request field into a multipart request field. This method overrides "Content-Disposition", "Content-Type" and "Content-Location" headers to the request parameter. :param content_type: The 'Content-Type' of the request body. :param content_location: The 'Content-Location' of the request body. """ self.headers['Content-Disposition'] = content_disposition or u'form-data' self.headers['Content-Disposition'] += u'; '.join([ u'', self._render_parts( ((u'name', self._name), (u'filename', self._filename)) ) ]) self.headers['Content-Type'] = content_type self.headers['Content-Location'] = content_location ���������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/filepost.py����������������������0000644�0001750�0001750�00000004604�00000000000�030244� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import import binascii import codecs import os from io import 
BytesIO from .packages import six from .packages.six import b from .fields import RequestField writer = codecs.lookup('utf-8')[3] def choose_boundary(): """ Our embarrassingly-simple replacement for mimetools.choose_boundary. """ boundary = binascii.hexlify(os.urandom(16)) if six.PY3: boundary = boundary.decode('ascii') return boundary def iter_field_objects(fields): """ Iterate over fields. Supports list of (k, v) tuples and dicts, and lists of :class:`~urllib3.fields.RequestField`. """ if isinstance(fields, dict): i = six.iteritems(fields) else: i = iter(fields) for field in i: if isinstance(field, RequestField): yield field else: yield RequestField.from_tuples(*field) def iter_fields(fields): """ .. deprecated:: 1.6 Iterate over fields. The addition of :class:`~urllib3.fields.RequestField` makes this function obsolete. Instead, use :func:`iter_field_objects`, which returns :class:`~urllib3.fields.RequestField` objects. Supports list of (k, v) tuples and dicts. """ if isinstance(fields, dict): return ((k, v) for k, v in six.iteritems(fields)) return ((k, v) for k, v in fields) def encode_multipart_formdata(fields, boundary=None): """ Encode a dictionary of ``fields`` using the multipart/form-data MIME format. :param fields: Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`). :param boundary: If not specified, then a random boundary will be generated using :func:`urllib3.filepost.choose_boundary`. """ body = BytesIO() if boundary is None: boundary = choose_boundary() for field in iter_field_objects(fields): body.write(b('--%s\r\n' % (boundary))) writer(body).write(field.render_headers()) data = field.data if isinstance(data, int): data = str(data) # Backwards compatibility if isinstance(data, six.text_type): writer(body).write(data) else: body.write(data) body.write(b'\r\n') body.write(b('--%s--\r\n' % (boundary))) content_type = str('multipart/form-data; boundary=%s' % boundary) return body.getvalue(), content_type ����������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000033�00000000000�011451� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������27 mtime=1604735100.026661 �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/������������������������0000755�0001750�0001750�00000000000�00000000000�027617� 
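For example, the encoder above can be exercised end-to-end like this (a hedged sketch; the fixed boundary is chosen only to make the output deterministic)::

    from pip._vendor.urllib3.filepost import encode_multipart_formdata

    body, content_type = encode_multipart_formdata(
        {'token': 'abc123',
         'upload': ('notes.txt', b'hello world', 'text/plain')},
        boundary='deadbeef')
    print(content_type)        # multipart/form-data; boundary=deadbeef
    print(body.decode('utf-8'))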
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/__init__.py

from __future__ import absolute_import

from . import ssl_match_hostname

__all__ = ('ssl_match_hostname', )

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/backports/__init__.py
(empty file; its tar entry records a size of zero)
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py

# -*- coding: utf-8 -*-
"""
backports.makefile
~~~~~~~~~~~~~~~~~~

Backports the Python 3 ``socket.makefile`` method for use with anything that
wants to create a "fake" socket object.
"""
import io

from socket import SocketIO


def backport_makefile(self, mode="r", buffering=None, encoding=None,
                      errors=None, newline=None):
    """
    Backport of ``socket.makefile`` from Python 3.5.
    """
    if not set(mode) <= {"r", "w", "b"}:
        raise ValueError(
            "invalid mode %r (only r, w, b allowed)" % (mode,)
        )
    writing = "w" in mode
    reading = "r" in mode or not writing
    assert reading or writing
    binary = "b" in mode
    rawmode = ""
    if reading:
        rawmode += "r"
    if writing:
        rawmode += "w"
    raw = SocketIO(self, rawmode)
    self._makefile_refs += 1
    if buffering is None:
        buffering = -1
    if buffering < 0:
        buffering = io.DEFAULT_BUFFER_SIZE
    if buffering == 0:
        if not binary:
            raise ValueError("unbuffered streams must be binary")
        return raw
    if reading and writing:
        buffer = io.BufferedRWPair(raw, raw, buffering)
    elif reading:
        buffer = io.BufferedReader(raw, buffering)
    else:
        assert writing
        buffer = io.BufferedWriter(raw, buffering)
    if binary:
        return buffer
    text = io.TextIOWrapper(buffer, encoding, errors, newline)
    text.mode = mode
    return text
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/__init__.py

# -*- coding: utf-8 -*-
# Copyright (c) 2014 Rackspace
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
An implementation of semantics and validations described in RFC 3986.

See http://rfc3986.readthedocs.io/ for detailed documentation.

:copyright: (c) 2014 Rackspace
:license: Apache v2.0, see LICENSE for details
"""

from .api import iri_reference
from .api import IRIReference
from .api import is_valid_uri
from .api import normalize_uri
from .api import uri_reference
from .api import URIReference
from .api import urlparse
from .parseresult import ParseResult

__title__ = 'rfc3986'
__author__ = 'Ian Stapleton Cordasco'
__author_email__ = 'graffatcolmingov@gmail.com'
__license__ = 'Apache v2.0'
__copyright__ = 'Copyright 2014 Rackspace'
__version__ = '1.3.2'

__all__ = (
    'ParseResult',
    'URIReference',
    'IRIReference',
    'is_valid_uri',
    'normalize_uri',
    'uri_reference',
    'iri_reference',
    'urlparse',
    '__title__',
    '__author__',
    '__author_email__',
    '__license__',
    '__copyright__',
    '__version__',
)
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/_mixin.py

"""Module containing the implementation of the URIMixin class."""
import warnings

from . import exceptions as exc
from . import misc
from . import normalizers
from . import validators


class URIMixin(object):
    """Mixin with all shared methods for URIs and IRIs."""

    __hash__ = tuple.__hash__

    def authority_info(self):
        """Return a dictionary with the ``userinfo``, ``host``, and ``port``.

        If the authority is not valid, it will raise a
        :class:`~rfc3986.exceptions.InvalidAuthority` Exception.

        :returns:
            ``{'userinfo': 'username:password', 'host': 'www.example.com',
            'port': '80'}``
        :rtype: dict
        :raises rfc3986.exceptions.InvalidAuthority:
            If the authority is not ``None`` and can not be parsed.
        """
        if not self.authority:
            return {'userinfo': None, 'host': None, 'port': None}

        match = self._match_subauthority()

        if match is None:
            # In this case, we have an authority that was parsed from the URI
            # Reference, but it cannot be further parsed by our
            # misc.SUBAUTHORITY_MATCHER. In this case it must not be a valid
            # authority.
            raise exc.InvalidAuthority(self.authority.encode(self.encoding))

        # We had a match, now let's ensure that it is actually a valid host
        # address if it is IPv4
        matches = match.groupdict()
        host = matches.get('host')

        if (host and misc.IPv4_MATCHER.match(host) and not
                validators.valid_ipv4_host_address(host)):
            # If we have a host, it appears to be IPv4 and it does not have
            # valid bytes, it is an InvalidAuthority.
            raise exc.InvalidAuthority(self.authority.encode(self.encoding))

        return matches

    def _match_subauthority(self):
        return misc.SUBAUTHORITY_MATCHER.match(self.authority)

    @property
    def host(self):
        """If present, a string representing the host."""
        try:
            authority = self.authority_info()
        except exc.InvalidAuthority:
            return None
        return authority['host']

    @property
    def port(self):
        """If present, the port extracted from the authority."""
        try:
            authority = self.authority_info()
        except exc.InvalidAuthority:
            return None
        return authority['port']

    @property
    def userinfo(self):
        """If present, the userinfo extracted from the authority."""
        try:
            authority = self.authority_info()
        except exc.InvalidAuthority:
            return None
        return authority['userinfo']

    def is_absolute(self):
        """Determine if this URI Reference is an absolute URI.

        See http://tools.ietf.org/html/rfc3986#section-4.3 for explanation.

        :returns:
            ``True`` if it is an absolute URI, ``False`` otherwise.
        :rtype: bool
        """
        return bool(misc.ABSOLUTE_URI_MATCHER.match(self.unsplit()))

    def is_valid(self, **kwargs):
        """Determine if the URI is valid.

        .. deprecated:: 1.1.0

            Use the :class:`~rfc3986.validators.Validator` object instead.

        :param bool require_scheme:
            Set to ``True`` if you wish to require the presence of the scheme
            component.
        :param bool require_authority:
            Set to ``True`` if you wish to require the presence of the
            authority component.
        :param bool require_path:
            Set to ``True`` if you wish to require the presence of the path
            component.
        :param bool require_query:
            Set to ``True`` if you wish to require the presence of the query
            component.
        :param bool require_fragment:
            Set to ``True`` if you wish to require the presence of the
            fragment component.
        :returns:
            ``True`` if the URI is valid. ``False`` otherwise.
        :rtype: bool
        """
        warnings.warn("Please use rfc3986.validators.Validator instead. "
                      "This method will be eventually removed.",
                      DeprecationWarning)
        validators = [
            (self.scheme_is_valid, kwargs.get('require_scheme', False)),
            (self.authority_is_valid, kwargs.get('require_authority', False)),
            (self.path_is_valid, kwargs.get('require_path', False)),
            (self.query_is_valid, kwargs.get('require_query', False)),
            (self.fragment_is_valid, kwargs.get('require_fragment', False)),
        ]
        return all(v(r) for v, r in validators)

    def authority_is_valid(self, require=False):
        """Determine if the authority component is valid.

        .. deprecated:: 1.1.0

            Use the :class:`~rfc3986.validators.Validator` object instead.

        :param bool require:
            Set to ``True`` to require the presence of this component.
        :returns:
            ``True`` if the authority is valid. ``False`` otherwise.
        :rtype: bool
        """
        warnings.warn("Please use rfc3986.validators.Validator instead. "
                      "This method will be eventually removed.",
                      DeprecationWarning)
        try:
            self.authority_info()
        except exc.InvalidAuthority:
            return False

        return validators.authority_is_valid(
            self.authority,
            host=self.host,
            require=require,
        )

    def scheme_is_valid(self, require=False):
        """Determine if the scheme component is valid.

        .. deprecated:: 1.1.0

            Use the :class:`~rfc3986.validators.Validator` object instead.

        :param bool require:
            Set to ``True`` to require the presence of this component.
        :returns:
            ``True`` if the scheme is valid. ``False`` otherwise.
        :rtype: bool
        """
        warnings.warn("Please use rfc3986.validators.Validator instead. "
                      "This method will be eventually removed.",
                      DeprecationWarning)
        return validators.scheme_is_valid(self.scheme, require)

    def path_is_valid(self, require=False):
        """Determine if the path component is valid.

        .. deprecated:: 1.1.0

            Use the :class:`~rfc3986.validators.Validator` object instead.

        :param bool require:
            Set to ``True`` to require the presence of this component.
        :returns:
            ``True`` if the path is valid. ``False`` otherwise.
        :rtype: bool
        """
        warnings.warn("Please use rfc3986.validators.Validator instead. "
                      "This method will be eventually removed.",
                      DeprecationWarning)
        return validators.path_is_valid(self.path, require)

    def query_is_valid(self, require=False):
        """Determine if the query component is valid.

        .. deprecated:: 1.1.0

            Use the :class:`~rfc3986.validators.Validator` object instead.

        :param bool require:
            Set to ``True`` to require the presence of this component.
        :returns:
            ``True`` if the query is valid. ``False`` otherwise.
        :rtype: bool
        """
        warnings.warn("Please use rfc3986.validators.Validator instead. "
                      "This method will be eventually removed.",
                      DeprecationWarning)
        return validators.query_is_valid(self.query, require)

    def fragment_is_valid(self, require=False):
        """Determine if the fragment component is valid.

        .. deprecated:: 1.1.0

            Use the :class:`~rfc3986.validators.Validator` object instead.

        :param bool require:
            Set to ``True`` to require the presence of this component.
        :returns:
            ``True`` if the fragment is valid. ``False`` otherwise.
        :rtype: bool
        """
        warnings.warn("Please use rfc3986.validators.Validator instead. "
                      "This method will be eventually removed.",
                      DeprecationWarning)
        return validators.fragment_is_valid(self.fragment, require)

    def normalized_equality(self, other_ref):
        """Compare this URIReference to another URIReference.

        :param URIReference other_ref:
            (required), The reference with which we're comparing.
        :returns:
            ``True`` if the references are equal, ``False`` otherwise.
        :rtype: bool
        """
        return tuple(self.normalize()) == tuple(other_ref.normalize())

    def resolve_with(self, base_uri, strict=False):
        """Use an absolute URI Reference to resolve this relative reference.

        Assuming this is a relative reference that you would like to resolve,
        use the provided base URI to resolve it.

        See http://tools.ietf.org/html/rfc3986#section-5 for more information.

        :param base_uri:
            Either a string or URIReference. It must be an absolute URI or
            it will raise an exception.
        :returns:
            A new URIReference which is the result of resolving this
            reference using ``base_uri``.
        :rtype: :class:`URIReference`
        :raises rfc3986.exceptions.ResolutionError:
            If the ``base_uri`` is not an absolute URI.
        """
        if not isinstance(base_uri, URIMixin):
            base_uri = type(self).from_string(base_uri)

        if not base_uri.is_absolute():
            raise exc.ResolutionError(base_uri)

        # This is optional per
        # http://tools.ietf.org/html/rfc3986#section-5.2.1
        base_uri = base_uri.normalize()

        # The reference we're resolving
        resolving = self

        if not strict and resolving.scheme == base_uri.scheme:
            resolving = resolving.copy_with(scheme=None)

        # http://tools.ietf.org/html/rfc3986#page-32
        if resolving.scheme is not None:
            target = resolving.copy_with(
                path=normalizers.normalize_path(resolving.path)
            )
        else:
            if resolving.authority is not None:
                target = resolving.copy_with(
                    scheme=base_uri.scheme,
                    path=normalizers.normalize_path(resolving.path)
                )
            else:
                if resolving.path is None:
                    if resolving.query is not None:
                        query = resolving.query
                    else:
                        query = base_uri.query
                    target = resolving.copy_with(
                        scheme=base_uri.scheme,
                        authority=base_uri.authority,
                        path=base_uri.path,
                        query=query
                    )
                else:
                    if resolving.path.startswith('/'):
                        path = normalizers.normalize_path(resolving.path)
                    else:
                        path = normalizers.normalize_path(
                            misc.merge_paths(base_uri, resolving.path)
                        )
                    target = resolving.copy_with(
                        scheme=base_uri.scheme,
                        authority=base_uri.authority,
                        path=path,
                        query=resolving.query
                    )
        return target

    def unsplit(self):
        """Create a URI string from the components.

        :returns: The URI Reference reconstituted as a string.
        :rtype: str
        """
        # See http://tools.ietf.org/html/rfc3986#section-5.3
        result_list = []
        if self.scheme:
            result_list.extend([self.scheme, ':'])
        if self.authority:
            result_list.extend(['//', self.authority])
        if self.path:
            result_list.append(self.path)
        if self.query is not None:
            result_list.extend(['?', self.query])
        if self.fragment is not None:
            result_list.extend(['#', self.fragment])
        return ''.join(result_list)

    def copy_with(self, scheme=misc.UseExisting, authority=misc.UseExisting,
                  path=misc.UseExisting, query=misc.UseExisting,
                  fragment=misc.UseExisting):
        """Create a copy of this reference with the new components.

        :param str scheme:
            (optional) The scheme to use for the new reference.
        :param str authority:
            (optional) The authority to use for the new reference.
        :param str path:
            (optional) The path to use for the new reference.
        :param str query:
            (optional) The query to use for the new reference.
        :param str fragment:
            (optional) The fragment to use for the new reference.
        :returns:
            New URIReference with provided components.
        :rtype: URIReference
        """
        attributes = {
            'scheme': scheme,
            'authority': authority,
            'path': path,
            'query': query,
            'fragment': fragment,
        }
        for key, value in list(attributes.items()):
            if value is misc.UseExisting:
                del attributes[key]
        uri = self._replace(**attributes)
        uri.encoding = self.encoding
        return uri
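The resolution and copying logic above is reachable through the public API; a brief hedged illustration (vendored import path assumed)::

    from pip._vendor.urllib3.packages.rfc3986 import uri_reference

    base = uri_reference('https://example.com/docs/a/b')
    ref = uri_reference('../other/page?x=1')
    print(ref.resolve_with(base, strict=True).unsplit())
    # https://example.com/docs/other/page?x=1
    print(base.copy_with(fragment='intro').unsplit())
    # https://example.com/docs/a/b#intro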
"""Module for the regular expressions crafted from ABNF.""" import sys # https://tools.ietf.org/html/rfc3986#page-13 GEN_DELIMS = GENERIC_DELIMITERS = ":/?#[]@" GENERIC_DELIMITERS_SET = set(GENERIC_DELIMITERS) # https://tools.ietf.org/html/rfc3986#page-13 SUB_DELIMS = SUB_DELIMITERS = "!$&'()*+,;=" SUB_DELIMITERS_SET = set(SUB_DELIMITERS) # Escape the '*' for use in regular expressions SUB_DELIMITERS_RE = r"!$&'()\*+,;=" RESERVED_CHARS_SET = GENERIC_DELIMITERS_SET.union(SUB_DELIMITERS_SET) ALPHA = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' DIGIT = '0123456789' # https://tools.ietf.org/html/rfc3986#section-2.3 UNRESERVED = UNRESERVED_CHARS = ALPHA + DIGIT + r'._!-' UNRESERVED_CHARS_SET = set(UNRESERVED_CHARS) NON_PCT_ENCODED_SET = RESERVED_CHARS_SET.union(UNRESERVED_CHARS_SET) # We need to escape the '-' in this case: UNRESERVED_RE = r'A-Za-z0-9._~\-' # Percent encoded character values PERCENT_ENCODED = PCT_ENCODED = '%[A-Fa-f0-9]{2}' PCHAR = '([' + UNRESERVED_RE + SUB_DELIMITERS_RE + ':@]|%s)' % PCT_ENCODED # NOTE(sigmavirus24): We're going to use more strict regular expressions # than appear in Appendix B for scheme. This will prevent over-eager # consuming of items that aren't schemes. SCHEME_RE = '[a-zA-Z][a-zA-Z0-9+.-]*' _AUTHORITY_RE = '[^/?#]*' _PATH_RE = '[^?#]*' _QUERY_RE = '[^#]*' _FRAGMENT_RE = '.*' # Extracted from http://tools.ietf.org/html/rfc3986#appendix-B COMPONENT_PATTERN_DICT = { 'scheme': SCHEME_RE, 'authority': _AUTHORITY_RE, 'path': _PATH_RE, 'query': _QUERY_RE, 'fragment': _FRAGMENT_RE, } # See http://tools.ietf.org/html/rfc3986#appendix-B # In this case, we name each of the important matches so we can use # SRE_Match#groupdict to parse the values out if we so choose. This is also # modified to ignore other matches that are not important to the parsing of # the reference so we can also simply use SRE_Match#groups. URL_PARSING_RE = ( r'(?:(?P<scheme>{scheme}):)?(?://(?P<authority>{authority}))?' r'(?P<path>{path})(?:\?(?P<query>{query}))?' r'(?:#(?P<fragment>{fragment}))?' 
).format(**COMPONENT_PATTERN_DICT) # ######################### # Authority Matcher Section # ######################### # Host patterns, see: http://tools.ietf.org/html/rfc3986#section-3.2.2 # The pattern for a regular name, e.g., www.google.com, api.github.com REGULAR_NAME_RE = REG_NAME = '((?:{0}|[{1}])*)'.format( '%[0-9A-Fa-f]{2}', SUB_DELIMITERS_RE + UNRESERVED_RE ) # The pattern for an IPv4 address, e.g., 192.168.255.255, 127.0.0.1, IPv4_RE = r'([0-9]{1,3}\.){3}[0-9]{1,3}' # Hexadecimal characters used in each piece of an IPv6 address HEXDIG_RE = '[0-9A-Fa-f]{1,4}' # Least-significant 32 bits of an IPv6 address LS32_RE = '({hex}:{hex}|{ipv4})'.format(hex=HEXDIG_RE, ipv4=IPv4_RE) # Substitutions into the following patterns for IPv6 patterns defined # http://tools.ietf.org/html/rfc3986#page-20 _subs = {'hex': HEXDIG_RE, 'ls32': LS32_RE} # Below: h16 = hexdig, see: https://tools.ietf.org/html/rfc5234 for details # about ABNF (Augmented Backus-Naur Form) use in the comments variations = [ # 6( h16 ":" ) ls32 '(%(hex)s:){6}%(ls32)s' % _subs, # "::" 5( h16 ":" ) ls32 '::(%(hex)s:){5}%(ls32)s' % _subs, # [ h16 ] "::" 4( h16 ":" ) ls32 '(%(hex)s)?::(%(hex)s:){4}%(ls32)s' % _subs, # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32 '((%(hex)s:)?%(hex)s)?::(%(hex)s:){3}%(ls32)s' % _subs, # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32 '((%(hex)s:){0,2}%(hex)s)?::(%(hex)s:){2}%(ls32)s' % _subs, # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32 '((%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s' % _subs, # [ *4( h16 ":" ) h16 ] "::" ls32 '((%(hex)s:){0,4}%(hex)s)?::%(ls32)s' % _subs, # [ *5( h16 ":" ) h16 ] "::" h16 '((%(hex)s:){0,5}%(hex)s)?::%(hex)s' % _subs, # [ *6( h16 ":" ) h16 ] "::" '((%(hex)s:){0,6}%(hex)s)?::' % _subs, ] IPv6_RE = '(({0})|({1})|({2})|({3})|({4})|({5})|({6})|({7})|({8}))'.format( *variations ) IPv_FUTURE_RE = r'v[0-9A-Fa-f]+\.[%s]+' % ( UNRESERVED_RE + SUB_DELIMITERS_RE + ':' ) # RFC 6874 Zone ID ABNF ZONE_ID = '(?:[' + UNRESERVED_RE + ']|' + PCT_ENCODED + ')+' IPv6_ADDRZ_RFC4007_RE = IPv6_RE + '(?:(?:%25|%)' + ZONE_ID + ')?' IPv6_ADDRZ_RE = IPv6_RE + '(?:%25' + ZONE_ID + ')?' IP_LITERAL_RE = r'\[({0}|{1})\]'.format( IPv6_ADDRZ_RFC4007_RE, IPv_FUTURE_RE, ) # Pattern for matching the host piece of the authority HOST_RE = HOST_PATTERN = '({0}|{1}|{2})'.format( REG_NAME, IPv4_RE, IP_LITERAL_RE, ) USERINFO_RE = '^([' + UNRESERVED_RE + SUB_DELIMITERS_RE + ':]|%s)+' % ( PCT_ENCODED ) PORT_RE = '[0-9]{1,5}' # #################### # Path Matcher Section # #################### # See http://tools.ietf.org/html/rfc3986#section-3.3 for more information # about the path patterns defined below. segments = { 'segment': PCHAR + '*', # Non-zero length segment 'segment-nz': PCHAR + '+', # Non-zero length segment without ":" 'segment-nz-nc': PCHAR.replace(':', '') + '+' } # Path types taken from Section 3.3 (linked above) PATH_EMPTY = '^$' PATH_ROOTLESS = '%(segment-nz)s(/%(segment)s)*' % segments PATH_NOSCHEME = '%(segment-nz-nc)s(/%(segment)s)*' % segments PATH_ABSOLUTE = '/(%s)?' 
% PATH_ROOTLESS PATH_ABEMPTY = '(/%(segment)s)*' % segments PATH_RE = '^(%s|%s|%s|%s|%s)$' % ( PATH_ABEMPTY, PATH_ABSOLUTE, PATH_NOSCHEME, PATH_ROOTLESS, PATH_EMPTY ) FRAGMENT_RE = QUERY_RE = ( '^([/?:@' + UNRESERVED_RE + SUB_DELIMITERS_RE + ']|%s)*$' % PCT_ENCODED ) # ########################## # Relative reference matcher # ########################## # See http://tools.ietf.org/html/rfc3986#section-4.2 for details RELATIVE_PART_RE = '(//%s%s|%s|%s|%s)' % ( COMPONENT_PATTERN_DICT['authority'], PATH_ABEMPTY, PATH_ABSOLUTE, PATH_NOSCHEME, PATH_EMPTY, ) # See http://tools.ietf.org/html/rfc3986#section-3 for definition HIER_PART_RE = '(//%s%s|%s|%s|%s)' % ( COMPONENT_PATTERN_DICT['authority'], PATH_ABEMPTY, PATH_ABSOLUTE, PATH_ROOTLESS, PATH_EMPTY, ) # ############### # IRIs / RFC 3987 # ############### # Only wide-unicode gets the high-ranges of UCSCHAR if sys.maxunicode > 0xFFFF: # pragma: no cover IPRIVATE = u'\uE000-\uF8FF\U000F0000-\U000FFFFD\U00100000-\U0010FFFD' UCSCHAR_RE = ( u'\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF' u'\U00010000-\U0001FFFD\U00020000-\U0002FFFD' u'\U00030000-\U0003FFFD\U00040000-\U0004FFFD' u'\U00050000-\U0005FFFD\U00060000-\U0006FFFD' u'\U00070000-\U0007FFFD\U00080000-\U0008FFFD' u'\U00090000-\U0009FFFD\U000A0000-\U000AFFFD' u'\U000B0000-\U000BFFFD\U000C0000-\U000CFFFD' u'\U000D0000-\U000DFFFD\U000E1000-\U000EFFFD' ) else: # pragma: no cover IPRIVATE = u'\uE000-\uF8FF' UCSCHAR_RE = ( u'\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF' ) IUNRESERVED_RE = u'A-Za-z0-9\\._~\\-' + UCSCHAR_RE IPCHAR = u'([' + IUNRESERVED_RE + SUB_DELIMITERS_RE + u':@]|%s)' % PCT_ENCODED isegments = { 'isegment': IPCHAR + u'*', # Non-zero length segment 'isegment-nz': IPCHAR + u'+', # Non-zero length segment without ":" 'isegment-nz-nc': IPCHAR.replace(':', '') + u'+' } IPATH_ROOTLESS = u'%(isegment-nz)s(/%(isegment)s)*' % isegments IPATH_NOSCHEME = u'%(isegment-nz-nc)s(/%(isegment)s)*' % isegments IPATH_ABSOLUTE = u'/(?:%s)?' 
% IPATH_ROOTLESS IPATH_ABEMPTY = u'(?:/%(isegment)s)*' % isegments IPATH_RE = u'^(?:%s|%s|%s|%s|%s)$' % ( IPATH_ABEMPTY, IPATH_ABSOLUTE, IPATH_NOSCHEME, IPATH_ROOTLESS, PATH_EMPTY ) IREGULAR_NAME_RE = IREG_NAME = u'(?:{0}|[{1}])*'.format( u'%[0-9A-Fa-f]{2}', SUB_DELIMITERS_RE + IUNRESERVED_RE ) IHOST_RE = IHOST_PATTERN = u'({0}|{1}|{2})'.format( IREG_NAME, IPv4_RE, IP_LITERAL_RE, ) IUSERINFO_RE = u'^(?:[' + IUNRESERVED_RE + SUB_DELIMITERS_RE + u':]|%s)+' % ( PCT_ENCODED ) IFRAGMENT_RE = (u'^(?:[/?:@' + IUNRESERVED_RE + SUB_DELIMITERS_RE + u']|%s)*$' % PCT_ENCODED) IQUERY_RE = (u'^(?:[/?:@' + IUNRESERVED_RE + SUB_DELIMITERS_RE + IPRIVATE + u']|%s)*$' % PCT_ENCODED) IRELATIVE_PART_RE = u'(//%s%s|%s|%s|%s)' % ( COMPONENT_PATTERN_DICT['authority'], IPATH_ABEMPTY, IPATH_ABSOLUTE, IPATH_NOSCHEME, PATH_EMPTY, ) IHIER_PART_RE = u'(//%s%s|%s|%s|%s)' % ( COMPONENT_PATTERN_DICT['authority'], IPATH_ABEMPTY, IPATH_ABSOLUTE, IPATH_ROOTLESS, PATH_EMPTY, ) ���������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/api.py����������0000644�0001750�0001750�00000007457�00000000000�032063� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # Copyright (c) 2014 Rackspace # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Module containing the simple and functional API for rfc3986. This module defines functions and provides access to the public attributes and classes of rfc3986. 
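For example, the Appendix-B-style pattern assembled above can be compiled and used directly (a hedged sketch)::

    import re

    from pip._vendor.urllib3.packages.rfc3986 import abnf_regexp

    matcher = re.compile(abnf_regexp.URL_PARSING_RE)
    parts = matcher.match(
        'https://example.com:8042/over/there?name=ferret#nose')
    print(parts.groupdict())
    # {'scheme': 'https', 'authority': 'example.com:8042',
    #  'path': '/over/there', 'query': 'name=ferret', 'fragment': 'nose'}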
""" from .iri import IRIReference from .parseresult import ParseResult from .uri import URIReference def uri_reference(uri, encoding='utf-8'): """Parse a URI string into a URIReference. This is a convenience function. You could achieve the same end by using ``URIReference.from_string(uri)``. :param str uri: The URI which needs to be parsed into a reference. :param str encoding: The encoding of the string provided :returns: A parsed URI :rtype: :class:`URIReference` """ return URIReference.from_string(uri, encoding) def iri_reference(iri, encoding='utf-8'): """Parse a IRI string into an IRIReference. This is a convenience function. You could achieve the same end by using ``IRIReference.from_string(iri)``. :param str iri: The IRI which needs to be parsed into a reference. :param str encoding: The encoding of the string provided :returns: A parsed IRI :rtype: :class:`IRIReference` """ return IRIReference.from_string(iri, encoding) def is_valid_uri(uri, encoding='utf-8', **kwargs): """Determine if the URI given is valid. This is a convenience function. You could use either ``uri_reference(uri).is_valid()`` or ``URIReference.from_string(uri).is_valid()`` to achieve the same result. :param str uri: The URI to be validated. :param str encoding: The encoding of the string provided :param bool require_scheme: Set to ``True`` if you wish to require the presence of the scheme component. :param bool require_authority: Set to ``True`` if you wish to require the presence of the authority component. :param bool require_path: Set to ``True`` if you wish to require the presence of the path component. :param bool require_query: Set to ``True`` if you wish to require the presence of the query component. :param bool require_fragment: Set to ``True`` if you wish to require the presence of the fragment component. :returns: ``True`` if the URI is valid, ``False`` otherwise. :rtype: bool """ return URIReference.from_string(uri, encoding).is_valid(**kwargs) def normalize_uri(uri, encoding='utf-8'): """Normalize the given URI. This is a convenience function. You could use either ``uri_reference(uri).normalize().unsplit()`` or ``URIReference.from_string(uri).normalize().unsplit()`` instead. :param str uri: The URI to be normalized. :param str encoding: The encoding of the string provided :returns: The normalized URI. :rtype: str """ normalized_reference = URIReference.from_string(uri, encoding).normalize() return normalized_reference.unsplit() def urlparse(uri, encoding='utf-8'): """Parse a given URI and return a ParseResult. This is a partial replacement of the standard library's urlparse function. :param str uri: The URI to be parsed. :param str encoding: The encoding of the string provided. 
:returns: A parsed URI :rtype: :class:`~rfc3986.parseresult.ParseResult` """ return ParseResult.from_string(uri, encoding, strict=False) �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/builder.py������0000644�0001750�0001750�00000022551�00000000000�032730� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- # Copyright (c) 2017 Ian Stapleton Cordasco # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Module containing the logic for the URIBuilder object.""" from . import compat from . import normalizers from . import uri class URIBuilder(object): """Object to aid in building up a URI Reference from parts. .. note:: This object should be instantiated by the user, but it's recommended that it is not provided with arguments. Instead, use the available method to populate the fields. """ def __init__(self, scheme=None, userinfo=None, host=None, port=None, path=None, query=None, fragment=None): """Initialize our URI builder. 
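A short hedged tour of the convenience functions above (vendored import path assumed)::

    from pip._vendor.urllib3.packages.rfc3986 import api

    print(api.normalize_uri('HTTPS://Example.COM/%7efoo'))
    # https://example.com/~foo
    print(api.urlparse('https://example.com/a?b=c').query)
    # b=c
    print(api.uri_reference('https://example.com').is_absolute())
    # True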
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/builder.py

# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ian Stapleton Cordasco
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module containing the logic for the URIBuilder object."""
from . import compat
from . import normalizers
from . import uri


class URIBuilder(object):
    """Object to aid in building up a URI Reference from parts.

    .. note::

        This object should be instantiated by the user, but it's recommended
        that it is not provided with arguments. Instead, use the available
        method to populate the fields.
    """

    def __init__(self, scheme=None, userinfo=None, host=None, port=None,
                 path=None, query=None, fragment=None):
        """Initialize our URI builder.

        :param str scheme: (optional)
        :param str userinfo: (optional)
        :param str host: (optional)
        :param int port: (optional)
        :param str path: (optional)
        :param str query: (optional)
        :param str fragment: (optional)
        """
        self.scheme = scheme
        self.userinfo = userinfo
        self.host = host
        self.port = port
        self.path = path
        self.query = query
        self.fragment = fragment

    def __repr__(self):
        """Provide a convenient view of our builder object."""
        formatstr = ('URIBuilder(scheme={b.scheme}, userinfo={b.userinfo}, '
                     'host={b.host}, port={b.port}, path={b.path}, '
                     'query={b.query}, fragment={b.fragment})')
        return formatstr.format(b=self)

    def add_scheme(self, scheme):
        """Add a scheme to our builder object.

        After normalizing, this will generate a new URIBuilder instance with
        the specified scheme and all other attributes the same.

        .. code-block:: python

            >>> URIBuilder().add_scheme('HTTPS')
            URIBuilder(scheme='https', userinfo=None, host=None, port=None,
                    path=None, query=None, fragment=None)

        """
        scheme = normalizers.normalize_scheme(scheme)
        return URIBuilder(
            scheme=scheme,
            userinfo=self.userinfo,
            host=self.host,
            port=self.port,
            path=self.path,
            query=self.query,
            fragment=self.fragment,
        )

    def add_credentials(self, username, password):
        """Add credentials as the userinfo portion of the URI.

        .. code-block:: python

            >>> URIBuilder().add_credentials('root', 's3crete')
            URIBuilder(scheme=None, userinfo='root:s3crete', host=None,
                    port=None, path=None, query=None, fragment=None)

            >>> URIBuilder().add_credentials('root', None)
            URIBuilder(scheme=None, userinfo='root', host=None,
                    port=None, path=None, query=None, fragment=None)
        """
        if username is None:
            raise ValueError('Username cannot be None')
        userinfo = normalizers.normalize_username(username)

        if password is not None:
            userinfo = '{}:{}'.format(
                userinfo,
                normalizers.normalize_password(password),
            )

        return URIBuilder(
            scheme=self.scheme,
            userinfo=userinfo,
            host=self.host,
            port=self.port,
            path=self.path,
            query=self.query,
            fragment=self.fragment,
        )

    def add_host(self, host):
        """Add hostname to the URI.

        .. code-block:: python

            >>> URIBuilder().add_host('google.com')
            URIBuilder(scheme=None, userinfo=None, host='google.com',
                    port=None, path=None, query=None, fragment=None)

        """
        return URIBuilder(
            scheme=self.scheme,
            userinfo=self.userinfo,
            host=normalizers.normalize_host(host),
            port=self.port,
            path=self.path,
            query=self.query,
            fragment=self.fragment,
        )

    def add_port(self, port):
        """Add port to the URI.

        .. code-block:: python

            >>> URIBuilder().add_port(80)
            URIBuilder(scheme=None, userinfo=None, host=None, port='80',
                    path=None, query=None, fragment=None)

            >>> URIBuilder().add_port(443)
            URIBuilder(scheme=None, userinfo=None, host=None, port='443',
                    path=None, query=None, fragment=None)

        """
        port_int = int(port)
        if port_int < 0:
            raise ValueError(
                'ports are not allowed to be negative. You provided {}'.format(
                    port_int,
                )
            )
        if port_int > 65535:
            raise ValueError(
                'ports are not allowed to be larger than 65535. '
                'You provided {}'.format(
                    port_int,
                )
            )

        return URIBuilder(
            scheme=self.scheme,
            userinfo=self.userinfo,
            host=self.host,
            port='{}'.format(port_int),
            path=self.path,
            query=self.query,
            fragment=self.fragment,
        )

    def add_path(self, path):
        """Add a path to the URI.

        .. code-block:: python

            >>> URIBuilder().add_path('sigmavirus24/rfc3986')
            URIBuilder(scheme=None, userinfo=None, host=None, port=None,
                    path='/sigmavirus24/rfc3986', query=None, fragment=None)

            >>> URIBuilder().add_path('/checkout.php')
            URIBuilder(scheme=None, userinfo=None, host=None, port=None,
                    path='/checkout.php', query=None, fragment=None)

        """
        if not path.startswith('/'):
            path = '/{}'.format(path)

        return URIBuilder(
            scheme=self.scheme,
            userinfo=self.userinfo,
            host=self.host,
            port=self.port,
            path=normalizers.normalize_path(path),
            query=self.query,
            fragment=self.fragment,
        )

    def add_query_from(self, query_items):
        """Generate and add a query from a dictionary or list of tuples.

        .. code-block:: python

            >>> URIBuilder().add_query_from({'a': 'b c'})
            URIBuilder(scheme=None, userinfo=None, host=None, port=None,
                    path=None, query='a=b+c', fragment=None)

            >>> URIBuilder().add_query_from([('a', 'b c')])
            URIBuilder(scheme=None, userinfo=None, host=None, port=None,
                    path=None, query='a=b+c', fragment=None)

        """
        query = normalizers.normalize_query(compat.urlencode(query_items))

        return URIBuilder(
            scheme=self.scheme,
            userinfo=self.userinfo,
            host=self.host,
            port=self.port,
            path=self.path,
            query=query,
            fragment=self.fragment,
        )

    def add_query(self, query):
        """Add a pre-formatted query string to the URI.

        .. code-block:: python

            >>> URIBuilder().add_query('a=b&c=d')
            URIBuilder(scheme=None, userinfo=None, host=None, port=None,
                    path=None, query='a=b&c=d', fragment=None)

        """
        return URIBuilder(
            scheme=self.scheme,
            userinfo=self.userinfo,
            host=self.host,
            port=self.port,
            path=self.path,
            query=normalizers.normalize_query(query),
            fragment=self.fragment,
        )

    def add_fragment(self, fragment):
        """Add a fragment to the URI.

        .. code-block:: python

            >>> URIBuilder().add_fragment('section-2.6.1')
            URIBuilder(scheme=None, userinfo=None, host=None, port=None,
                    path=None, query=None, fragment='section-2.6.1')

        """
        return URIBuilder(
            scheme=self.scheme,
            userinfo=self.userinfo,
            host=self.host,
            port=self.port,
            path=self.path,
            query=self.query,
            fragment=normalizers.normalize_fragment(fragment),
        )

    def finalize(self):
        """Create a URIReference from our builder.

        .. code-block:: python

            >>> URIBuilder().add_scheme('https').add_host('github.com'
            ...     ).add_path('sigmavirus24/rfc3986').finalize().unsplit()
            'https://github.com/sigmavirus24/rfc3986'

            >>> URIBuilder().add_scheme('https').add_host('github.com'
            ...     ).add_path('sigmavirus24/rfc3986').add_credentials(
            ...     'sigmavirus24', 'not-re@l').finalize().unsplit()
            'https://sigmavirus24:not-re%40l@github.com/sigmavirus24/rfc3986'

        """
        return uri.URIReference(
            self.scheme,
            normalizers.normalize_authority(
                (self.userinfo, self.host, self.port)
            ),
            self.path,
            self.query,
            self.fragment,
        )
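Putting the builder methods above together (a hedged sketch; note that ``finalize()`` percent-encodes the ``@`` in the password, as in the doctest above)::

    from pip._vendor.urllib3.packages.rfc3986.builder import URIBuilder

    url = (URIBuilder()
           .add_scheme('https')
           .add_credentials('user', 'p@ss')
           .add_host('api.example.com')
           .add_port(8443)
           .add_path('v1/search')
           .add_query_from({'q': 'cypari2'})
           .add_fragment('results')
           .finalize()
           .unsplit())
    print(url)
    # https://user:p%40ss@api.example.com:8443/v1/search?q=cypari2#results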
"""Compatibility module for Python 2 and 3 support.""" import sys try: from urllib.parse import quote as urlquote except ImportError: # Python 2.x from urllib import quote as urlquote try: from urllib.parse import urlencode except ImportError: # Python 2.x from urllib import urlencode __all__ = ( 'to_bytes', 'to_str', 'urlquote', 'urlencode', ) PY3 = (3, 0) <= sys.version_info < (4, 0) PY2 = (2, 6) <= sys.version_info < (2, 8) if PY3: unicode = str # Python 3.x def to_str(b, encoding='utf-8'): """Ensure that b is text in the specified encoding.""" if hasattr(b, 'decode') and not isinstance(b, unicode): b = b.decode(encoding) return b def to_bytes(s, encoding='utf-8'): """Ensure that s is converted to bytes from the encoding.""" if hasattr(s, 'encode') and not isinstance(s, bytes): s = s.encode(encoding) return s �����������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/exceptions.py���0000644�0001750�0001750�00000007277�00000000000�033473� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- """Exceptions module for rfc3986.""" from . 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/exceptions.py

# -*- coding: utf-8 -*- """Exceptions module for rfc3986.""" from . import compat class RFC3986Exception(Exception): """Base class for all rfc3986 exception classes.""" pass class InvalidAuthority(RFC3986Exception): """Exception when the authority string is invalid.""" def __init__(self, authority): """Initialize the exception with the invalid authority.""" super(InvalidAuthority, self).__init__( u"The authority ({0}) is not valid.".format( compat.to_str(authority))) class InvalidPort(RFC3986Exception): """Exception when the port is invalid.""" def __init__(self, port): """Initialize the exception with the invalid port.""" super(InvalidPort, self).__init__( 'The port ("{0}") is not valid.'.format(port)) class ResolutionError(RFC3986Exception): """Exception to indicate a failure to resolve a URI.""" def __init__(self, uri): """Initialize the error with the failed URI.""" super(ResolutionError, self).__init__( "{0} is not an absolute URI.".format(uri.unsplit())) class ValidationError(RFC3986Exception): """Exception raised during validation of a URI.""" pass class MissingComponentError(ValidationError): """Exception raised when a required component is missing.""" def __init__(self, uri, *component_names): """Initialize the error with the missing component name.""" verb = 'was' if len(component_names) > 1: verb = 'were' self.uri = uri self.components = sorted(component_names) components = ', '.join(self.components) super(MissingComponentError, self).__init__( "{} {} required but missing".format(components, verb), uri, self.components, ) class UnpermittedComponentError(ValidationError): """Exception raised when a component has an unpermitted value.""" def __init__(self, component_name, component_value, allowed_values): """Initialize the error with the unpermitted component.""" super(UnpermittedComponentError, self).__init__( "{} was required to be one of {!r} but was {!r}".format( component_name, list(sorted(allowed_values)), component_value, ), component_name, component_value, allowed_values, ) self.component_name = component_name self.component_value = component_value self.allowed_values = allowed_values class PasswordForbidden(ValidationError): """Exception raised when a URL has a password in the userinfo section.""" def __init__(self, uri): """Initialize the error with the URI that failed validation.""" unsplit = getattr(uri, 'unsplit', lambda: uri) super(PasswordForbidden, self).__init__( '"{}" contained a password when validation forbade it'.format( unsplit() ) ) self.uri = uri class InvalidComponentsError(ValidationError): """Exception raised when one or more components are invalid.""" def __init__(self, uri, *component_names): """Initialize the error with the invalid component name(s).""" verb = 'was' if len(component_names) > 1: verb = 'were' self.uri = uri self.components = sorted(component_names) components = ', '.join(self.components) super(InvalidComponentsError, self).__init__( "{} {} found to be invalid".format(components, verb), uri, self.components, ) class MissingDependencyError(RFC3986Exception): """Exception raised when an IRI is encoded without the 'idna' module."""

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/iri.py

"""Module containing the implementation of the IRIReference class.""" # -*- coding: utf-8 -*- # Copyright (c) 2014 Rackspace # Copyright (c) 2015 Ian Stapleton Cordasco # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from collections import namedtuple from . import compat from . import exceptions from . import misc from . import normalizers from . import uri try: from pip._vendor import idna except ImportError: # pragma: no cover idna = None class IRIReference(namedtuple('IRIReference', misc.URI_COMPONENTS), uri.URIMixin): """Immutable object representing a parsed IRI Reference. Can be encoded into a URIReference object via the procedure specified in RFC 3987 Section 3.1. .. note:: The IRI submodule is a new interface and may possibly change in the future. Check for changes to the interface when upgrading.
""" slots = () def __new__(cls, scheme, authority, path, query, fragment, encoding='utf-8'): """Create a new IRIReference.""" ref = super(IRIReference, cls).__new__( cls, scheme or None, authority or None, path or None, query, fragment) ref.encoding = encoding return ref def __eq__(self, other): """Compare this reference to another.""" other_ref = other if isinstance(other, tuple): other_ref = self.__class__(*other) elif not isinstance(other, IRIReference): try: other_ref = self.__class__.from_string(other) except TypeError: raise TypeError( 'Unable to compare {0}() to {1}()'.format( type(self).__name__, type(other).__name__)) # See http://tools.ietf.org/html/rfc3986#section-6.2 return tuple(self) == tuple(other_ref) def _match_subauthority(self): return misc.ISUBAUTHORITY_MATCHER.match(self.authority) @classmethod def from_string(cls, iri_string, encoding='utf-8'): """Parse a IRI reference from the given unicode IRI string. :param str iri_string: Unicode IRI to be parsed into a reference. :param str encoding: The encoding of the string provided :returns: :class:`IRIReference` or subclass thereof """ iri_string = compat.to_str(iri_string, encoding) split_iri = misc.IRI_MATCHER.match(iri_string).groupdict() return cls( split_iri['scheme'], split_iri['authority'], normalizers.encode_component(split_iri['path'], encoding), normalizers.encode_component(split_iri['query'], encoding), normalizers.encode_component(split_iri['fragment'], encoding), encoding, ) def encode(self, idna_encoder=None): # noqa: C901 """Encode an IRIReference into a URIReference instance. If the ``idna`` module is installed or the ``rfc3986[idna]`` extra is used then unicode characters in the IRI host component will be encoded with IDNA2008. :param idna_encoder: Function that encodes each part of the host component If not given will raise an exception if the IRI contains a host component. 
:rtype: uri.URIReference :returns: A URI reference """ authority = self.authority if authority: if idna_encoder is None: if idna is None: # pragma: no cover raise exceptions.MissingDependencyError( "Could not import the 'idna' module " "and the IRI hostname requires encoding" ) def idna_encoder(name): if any(ord(c) > 128 for c in name): try: return idna.encode(name.lower(), strict=True, std3_rules=True) except idna.IDNAError: raise exceptions.InvalidAuthority(self.authority) return name authority = "" if self.host: authority = ".".join([compat.to_str(idna_encoder(part)) for part in self.host.split(".")]) if self.userinfo is not None: authority = (normalizers.encode_component( self.userinfo, self.encoding) + '@' + authority) if self.port is not None: authority += ":" + str(self.port) return uri.URIReference(self.scheme, authority, path=self.path, query=self.query, fragment=self.fragment, encoding=self.encoding)
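# Editor's note: a sketch of IRIReference.encode() above. It assumes the
# optional 'idna' dependency is importable; without it, encoding an IRI
# whose host contains non-ASCII labels raises MissingDependencyError,
# and the exact IDNA output below is an assumption about the idna module.
from pip._vendor.urllib3.packages.rfc3986 import iri

ref = iri.IRIReference.from_string(u'http://b\xfccher.example/caf\xe9')
# Non-ASCII host labels are IDNA-encoded; the path was already
# percent-encoded by from_string().
print(ref.encode().unsplit())  # http://xn--bcher-kva.example/caf%C3%A9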
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/misc.py

# -*- coding: utf-8 -*- # Copyright (c) 2014 Rackspace # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Module containing compiled regular expressions and constants. This module contains important constants, patterns, and compiled regular expressions for parsing and validating URIs and their components. """ import re from . import abnf_regexp # These are enumerated for the named tuple used as a superclass of # URIReference URI_COMPONENTS = ['scheme', 'authority', 'path', 'query', 'fragment'] important_characters = { 'generic_delimiters': abnf_regexp.GENERIC_DELIMITERS, 'sub_delimiters': abnf_regexp.SUB_DELIMITERS, # We need to escape the '*' in this case 're_sub_delimiters': abnf_regexp.SUB_DELIMITERS_RE, 'unreserved_chars': abnf_regexp.UNRESERVED_CHARS, # We need to escape the '-' in this case: 're_unreserved': abnf_regexp.UNRESERVED_RE, } # For details about delimiters and reserved characters, see: # http://tools.ietf.org/html/rfc3986#section-2.2 GENERIC_DELIMITERS = abnf_regexp.GENERIC_DELIMITERS_SET SUB_DELIMITERS = abnf_regexp.SUB_DELIMITERS_SET RESERVED_CHARS = abnf_regexp.RESERVED_CHARS_SET # For details about unreserved characters, see: # http://tools.ietf.org/html/rfc3986#section-2.3 UNRESERVED_CHARS = abnf_regexp.UNRESERVED_CHARS_SET NON_PCT_ENCODED = abnf_regexp.NON_PCT_ENCODED_SET URI_MATCHER = re.compile(abnf_regexp.URL_PARSING_RE) SUBAUTHORITY_MATCHER = re.compile(( '^(?:(?P<userinfo>{0})@)?' # userinfo '(?P<host>{1})' # host ':?(?P<port>{2})?$' # port ).format(abnf_regexp.USERINFO_RE, abnf_regexp.HOST_PATTERN, abnf_regexp.PORT_RE)) HOST_MATCHER = re.compile('^' + abnf_regexp.HOST_RE + '$') IPv4_MATCHER = re.compile('^' + abnf_regexp.IPv4_RE + '$') IPv6_MATCHER = re.compile(r'^\[' + abnf_regexp.IPv6_ADDRZ_RFC4007_RE + r'\]$') # Used by host validator IPv6_NO_RFC4007_MATCHER = re.compile(r'^\[%s\]$' % ( abnf_regexp.IPv6_ADDRZ_RE )) # Matcher used to validate path components PATH_MATCHER = re.compile(abnf_regexp.PATH_RE) # ################################## # Query and Fragment Matcher Section # ################################## QUERY_MATCHER = re.compile(abnf_regexp.QUERY_RE) FRAGMENT_MATCHER = QUERY_MATCHER # Scheme validation, see: http://tools.ietf.org/html/rfc3986#section-3.1 SCHEME_MATCHER = re.compile('^{0}$'.format(abnf_regexp.SCHEME_RE)) RELATIVE_REF_MATCHER = re.compile(r'^%s(\?%s)?(#%s)?$' % ( abnf_regexp.RELATIVE_PART_RE, abnf_regexp.QUERY_RE, abnf_regexp.FRAGMENT_RE, )) # See http://tools.ietf.org/html/rfc3986#section-4.3 ABSOLUTE_URI_MATCHER = re.compile(r'^%s:%s(\?%s)?$' % ( abnf_regexp.COMPONENT_PATTERN_DICT['scheme'], abnf_regexp.HIER_PART_RE, abnf_regexp.QUERY_RE[1:-1], )) # ############### # IRIs / RFC 3987 # ############### IRI_MATCHER = re.compile(abnf_regexp.URL_PARSING_RE, re.UNICODE) ISUBAUTHORITY_MATCHER = re.compile(( u'^(?:(?P<userinfo>{0})@)?'
# iuserinfo u'(?P<host>{1})' # ihost u':?(?P<port>{2})?$' # port ).format(abnf_regexp.IUSERINFO_RE, abnf_regexp.IHOST_RE, abnf_regexp.PORT_RE), re.UNICODE) # Path merger as defined in http://tools.ietf.org/html/rfc3986#section-5.2.3 def merge_paths(base_uri, relative_path): """Merge a base URI's path with a relative URI's path.""" if base_uri.path is None and base_uri.authority is not None: return '/' + relative_path else: path = base_uri.path or '' index = path.rfind('/') return path[:index] + '/' + relative_path UseExisting = object()
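# Editor's note: a small sketch of merge_paths() above, which implements
# the merge step of RFC 3986 Section 5.2.3: a relative path replaces
# everything after the last '/' of the base path, and a base with an
# authority but no path yields '/' plus the relative path.
from pip._vendor.urllib3.packages.rfc3986 import misc, uri

base = uri.URIReference.from_string('http://example.com/a/b/c')
print(misc.merge_paths(base, 'd'))  # /a/b/d

rootless = uri.URIReference.from_string('http://example.com')
print(misc.merge_paths(rootless, 'd'))  # /d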
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/normalizers.py

# -*- coding: utf-8 -*- # Copyright (c) 2014 Rackspace # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Module with functions to normalize components.""" import re from . import compat from . import misc def normalize_scheme(scheme): """Normalize the scheme component.""" return scheme.lower() def normalize_authority(authority): """Normalize an authority tuple to a string.""" userinfo, host, port = authority result = '' if userinfo: result += normalize_percent_characters(userinfo) + '@' if host: result += normalize_host(host) if port: result += ':' + port return result def normalize_username(username): """Normalize a username to make it safe to include in userinfo.""" return compat.urlquote(username) def normalize_password(password): """Normalize a password to make it safe for userinfo.""" return compat.urlquote(password) def normalize_host(host): """Normalize a host string.""" if misc.IPv6_MATCHER.match(host): percent = host.find('%') if percent != -1: percent_25 = host.find('%25') # Replace RFC 4007 IPv6 Zone ID delimiter '%' with '%25' # from RFC 6874. If the host is '[<IPv6 addr>%25]' then we # assume RFC 4007 and normalize to '[<IPV6 addr>%2525]' if percent_25 == -1 or percent < percent_25 or \ (percent == percent_25 and percent_25 == len(host) - 4): host = host.replace('%', '%25', 1) # Don't normalize the casing of the Zone ID return host[:percent].lower() + host[percent:] return host.lower() def normalize_path(path): """Normalize the path string.""" if not path: return path path = normalize_percent_characters(path) return remove_dot_segments(path) def normalize_query(query): """Normalize the query string.""" if not query: return query return normalize_percent_characters(query) def normalize_fragment(fragment): """Normalize the fragment string.""" if not fragment: return fragment return normalize_percent_characters(fragment) PERCENT_MATCHER = re.compile('%[A-Fa-f0-9]{2}') def normalize_percent_characters(s): """All percent characters should be upper-cased. For example, ``"%3afoo%DF%ab"`` should be turned into ``"%3Afoo%DF%AB"``. """ matches = set(PERCENT_MATCHER.findall(s)) for m in matches: if not m.isupper(): s = s.replace(m, m.upper()) return s def remove_dot_segments(s): """Remove dot segments from the string. See also Section 5.2.4 of :rfc:`3986`. """ # See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code segments = s.split('/') # Turn the path into a list of segments output = [] # Initialize the variable to use to store output for segment in segments: # '.' is the current directory, so ignore it, it is superfluous if segment == '.': continue # Anything other than '..', should be appended to the output elif segment != '..': output.append(segment) # In this case segment == '..', if we can, we should pop the last # element elif output: output.pop() # If the path starts with '/' and the output is empty or the first string # is non-empty if s.startswith('/') and (not output or output[0]): output.insert(0, '') # If the path starts with '/.' or '/..' ensure we add one more empty # string to add a trailing '/' if s.endswith(('/.', '/..')): output.append('') return '/'.join(output) def encode_component(uri_component, encoding): """Encode the specific component in the provided encoding.""" if uri_component is None: return uri_component # Try to see if the component we're encoding is already percent-encoded # so we can skip all '%' characters but still encode all others.
percent_encodings = len(PERCENT_MATCHER.findall( compat.to_str(uri_component, encoding))) uri_bytes = compat.to_bytes(uri_component, encoding) is_percent_encoded = percent_encodings == uri_bytes.count(b'%') encoded_uri = bytearray() for i in range(0, len(uri_bytes)): # Will return a single character bytestring on both Python 2 & 3 byte = uri_bytes[i:i+1] byte_ord = ord(byte) if ((is_percent_encoded and byte == b'%') or (byte_ord < 128 and byte.decode() in misc.NON_PCT_ENCODED)): encoded_uri.extend(byte) continue encoded_uri.extend('%{0:02x}'.format(byte_ord).encode().upper()) return encoded_uri.decode(encoding)
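# Editor's note: a sketch of the normalizer helpers above; percent
# escapes are upper-cased (never decoded) and dot segments are resolved
# per RFC 3986 Section 5.2.4.
from pip._vendor.urllib3.packages.rfc3986 import normalizers

assert normalizers.normalize_percent_characters('%3afoo%DF%ab') == '%3Afoo%DF%AB'
assert normalizers.remove_dot_segments('/a/b/../c/./d') == '/a/c/d'
assert normalizers.normalize_host('EXAMPLE.COM') == 'example.com'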
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/parseresult.py

# -*- coding: utf-8 -*- # Copyright (c) 2015 Ian Stapleton Cordasco # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Module containing the urlparse compatibility logic.""" from collections import namedtuple from . import compat from . import exceptions from . import misc from . import normalizers from . import uri __all__ = ('ParseResult', 'ParseResultBytes') PARSED_COMPONENTS = ('scheme', 'userinfo', 'host', 'port', 'path', 'query', 'fragment') class ParseResultMixin(object): def _generate_authority(self, attributes): # I swear I did not align the comparisons below. That's just how they # happened to align based on pep8 and attribute lengths. userinfo, host, port = (attributes[p] for p in ('userinfo', 'host', 'port')) if (self.userinfo != userinfo or self.host != host or self.port != port): if port: port = '{0}'.format(port) return normalizers.normalize_authority( (compat.to_str(userinfo, self.encoding), compat.to_str(host, self.encoding), port) ) return self.authority def geturl(self): """Shim to match the standard library method.""" return self.unsplit() @property def hostname(self): """Shim to match the standard library.""" return self.host @property def netloc(self): """Shim to match the standard library.""" return self.authority @property def params(self): """Shim to match the standard library.""" return self.query class ParseResult(namedtuple('ParseResult', PARSED_COMPONENTS), ParseResultMixin): """Implementation of urlparse compatibility class. This uses the URIReference logic to handle compatibility with the urlparse.ParseResult class. """ slots = () def __new__(cls, scheme, userinfo, host, port, path, query, fragment, uri_ref, encoding='utf-8'): """Create a new ParseResult.""" parse_result = super(ParseResult, cls).__new__( cls, scheme or None, userinfo or None, host, port or None, path or None, query, fragment) parse_result.encoding = encoding parse_result.reference = uri_ref return parse_result @classmethod def from_parts(cls, scheme=None, userinfo=None, host=None, port=None, path=None, query=None, fragment=None, encoding='utf-8'): """Create a ParseResult instance from its parts.""" authority = '' if userinfo is not None: authority += userinfo + '@' if host is not None: authority += host if port is not None: authority += ':{0}'.format(port) uri_ref = uri.URIReference(scheme=scheme, authority=authority, path=path, query=query, fragment=fragment, encoding=encoding).normalize() userinfo, host, port = authority_from(uri_ref, strict=True) return cls(scheme=uri_ref.scheme, userinfo=userinfo, host=host, port=port, path=uri_ref.path, query=uri_ref.query, fragment=uri_ref.fragment, uri_ref=uri_ref, encoding=encoding) @classmethod def from_string(cls, uri_string, encoding='utf-8', strict=True, lazy_normalize=True): """Parse a URI from the given unicode URI string. :param str uri_string: Unicode URI to be parsed into a reference. :param str encoding: The encoding of the string provided :param bool strict: Parse strictly according to :rfc:`3986` if True. If False, parse similarly to the standard library's urlparse function.
:returns: :class:`ParseResult` or subclass thereof """ reference = uri.URIReference.from_string(uri_string, encoding) if not lazy_normalize: reference = reference.normalize() userinfo, host, port = authority_from(reference, strict) return cls(scheme=reference.scheme, userinfo=userinfo, host=host, port=port, path=reference.path, query=reference.query, fragment=reference.fragment, uri_ref=reference, encoding=encoding) @property def authority(self): """Return the normalized authority.""" return self.reference.authority def copy_with(self, scheme=misc.UseExisting, userinfo=misc.UseExisting, host=misc.UseExisting, port=misc.UseExisting, path=misc.UseExisting, query=misc.UseExisting, fragment=misc.UseExisting): """Create a copy of this instance replacing with specified parts.""" attributes = zip(PARSED_COMPONENTS, (scheme, userinfo, host, port, path, query, fragment)) attrs_dict = {} for name, value in attributes: if value is misc.UseExisting: value = getattr(self, name) attrs_dict[name] = value authority = self._generate_authority(attrs_dict) ref = self.reference.copy_with(scheme=attrs_dict['scheme'], authority=authority, path=attrs_dict['path'], query=attrs_dict['query'], fragment=attrs_dict['fragment']) return ParseResult(uri_ref=ref, encoding=self.encoding, **attrs_dict) def encode(self, encoding=None): """Convert to an instance of ParseResultBytes.""" encoding = encoding or self.encoding attrs = dict( zip(PARSED_COMPONENTS, (attr.encode(encoding) if hasattr(attr, 'encode') else attr for attr in self))) return ParseResultBytes( uri_ref=self.reference, encoding=encoding, **attrs ) def unsplit(self, use_idna=False): """Create a URI string from the components. :returns: The parsed URI reconstituted as a string. :rtype: str """ parse_result = self if use_idna and self.host: hostbytes = self.host.encode('idna') host = hostbytes.decode(self.encoding) parse_result = self.copy_with(host=host) return parse_result.reference.unsplit() class ParseResultBytes(namedtuple('ParseResultBytes', PARSED_COMPONENTS), ParseResultMixin): """Compatibility shim for the urlparse.ParseResultBytes object.""" def __new__(cls, scheme, userinfo, host, port, path, query, fragment, uri_ref, encoding='utf-8', lazy_normalize=True): """Create a new ParseResultBytes instance.""" parse_result = super(ParseResultBytes, cls).__new__( cls, scheme or None, userinfo or None, host, port or None, path or None, query or None, fragment or None) parse_result.encoding = encoding parse_result.reference = uri_ref parse_result.lazy_normalize = lazy_normalize return parse_result @classmethod def from_parts(cls, scheme=None, userinfo=None, host=None, port=None, path=None, query=None, fragment=None, encoding='utf-8', lazy_normalize=True): """Create a ParseResult instance from its parts.""" authority = '' if userinfo is not None: authority += userinfo + '@' if host is not None: authority += host if port is not None: authority += ':{0}'.format(int(port)) uri_ref = uri.URIReference(scheme=scheme, authority=authority, path=path, query=query, fragment=fragment, encoding=encoding) if not lazy_normalize: uri_ref = uri_ref.normalize() to_bytes = compat.to_bytes userinfo, host, port = authority_from(uri_ref, strict=True) return cls(scheme=to_bytes(scheme, encoding), userinfo=to_bytes(userinfo, encoding), host=to_bytes(host, encoding), port=port, path=to_bytes(path, encoding), query=to_bytes(query, encoding), fragment=to_bytes(fragment, encoding), uri_ref=uri_ref, encoding=encoding, lazy_normalize=lazy_normalize) @classmethod def from_string(cls, 
uri_string, encoding='utf-8', strict=True, lazy_normalize=True): """Parse a URI from the given unicode URI string. :param str uri_string: Unicode URI to be parsed into a reference. :param str encoding: The encoding of the string provided :param bool strict: Parse strictly according to :rfc:`3986` if True. If False, parse similarly to the standard library's urlparse function. :returns: :class:`ParseResultBytes` or subclass thereof """ reference = uri.URIReference.from_string(uri_string, encoding) if not lazy_normalize: reference = reference.normalize() userinfo, host, port = authority_from(reference, strict) to_bytes = compat.to_bytes return cls(scheme=to_bytes(reference.scheme, encoding), userinfo=to_bytes(userinfo, encoding), host=to_bytes(host, encoding), port=port, path=to_bytes(reference.path, encoding), query=to_bytes(reference.query, encoding), fragment=to_bytes(reference.fragment, encoding), uri_ref=reference, encoding=encoding, lazy_normalize=lazy_normalize) @property def authority(self): """Return the normalized authority.""" return self.reference.authority.encode(self.encoding) def copy_with(self, scheme=misc.UseExisting, userinfo=misc.UseExisting, host=misc.UseExisting, port=misc.UseExisting, path=misc.UseExisting, query=misc.UseExisting, fragment=misc.UseExisting, lazy_normalize=True): """Create a copy of this instance replacing with specified parts.""" attributes = zip(PARSED_COMPONENTS, (scheme, userinfo, host, port, path, query, fragment)) attrs_dict = {} for name, value in attributes: if value is misc.UseExisting: value = getattr(self, name) if not isinstance(value, bytes) and hasattr(value, 'encode'): value = value.encode(self.encoding) attrs_dict[name] = value authority = self._generate_authority(attrs_dict) to_str = compat.to_str ref = self.reference.copy_with( scheme=to_str(attrs_dict['scheme'], self.encoding), authority=to_str(authority, self.encoding), path=to_str(attrs_dict['path'], self.encoding), query=to_str(attrs_dict['query'], self.encoding), fragment=to_str(attrs_dict['fragment'], self.encoding) ) if not lazy_normalize: ref = ref.normalize() return ParseResultBytes( uri_ref=ref, encoding=self.encoding, lazy_normalize=lazy_normalize, **attrs_dict ) def unsplit(self, use_idna=False): """Create a URI bytes object from the components. :returns: The parsed URI reconstituted as a string. 
:rtype: bytes """ parse_result = self if use_idna and self.host: # self.host is bytes, to encode to idna, we need to decode it # first host = self.host.decode(self.encoding) hostbytes = host.encode('idna') parse_result = self.copy_with(host=hostbytes) if self.lazy_normalize: parse_result = parse_result.copy_with(lazy_normalize=False) uri = parse_result.reference.unsplit() return uri.encode(self.encoding) def split_authority(authority): # Initialize our expected return values userinfo = host = port = None # Initialize an extra var we may need to use extra_host = None # Set-up rest in case there is no userinfo portion rest = authority if '@' in authority: userinfo, rest = authority.rsplit('@', 1) # Handle IPv6 host addresses if rest.startswith('['): host, rest = rest.split(']', 1) host += ']' if ':' in rest: extra_host, port = rest.split(':', 1) elif not host and rest: host = rest if extra_host and not host: host = extra_host return userinfo, host, port def authority_from(reference, strict): try: subauthority = reference.authority_info() except exceptions.InvalidAuthority: if strict: raise userinfo, host, port = split_authority(reference.authority) else: # Thanks to Richard Barrell for this idea: # https://twitter.com/0x2ba22e11/status/617338811975139328 userinfo, host, port = (subauthority.get(p) for p in ('userinfo', 'host', 'port')) if port: try: port = int(port) except ValueError: raise exceptions.InvalidPort(port) return userinfo, host, port
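# Editor's note: a sketch of the urlparse-style shim above. Attribute
# names mirror the standard library (hostname, netloc), while copy_with
# returns a new immutable result instead of mutating in place.
from pip._vendor.urllib3.packages.rfc3986 import parseresult

result = parseresult.ParseResult.from_string('https://user@example.com:443/base')
print(result.host, result.port)  # example.com 443
print(result.copy_with(path='/other').unsplit())
# https://user@example.com:443/other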
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/uri.py

"""Module containing the implementation of the URIReference class.""" # -*- coding: utf-8 -*- # Copyright (c) 2014 Rackspace # Copyright (c) 2015 Ian Stapleton Cordasco # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from collections import namedtuple from . import compat from . import misc from . import normalizers from ._mixin import URIMixin class URIReference(namedtuple('URIReference', misc.URI_COMPONENTS), URIMixin): """Immutable object representing a parsed URI Reference. .. note:: This class is not intended to be directly instantiated by the user. This object exposes attributes for the following components of a URI: - scheme - authority - path - query - fragment .. attribute:: scheme The scheme that was parsed for the URI Reference. For example, ``http``, ``https``, ``smtp``, ``imap``, etc. .. attribute:: authority Component of the URI that contains the user information, host, and port sub-components. For example, ``google.com``, ``127.0.0.1:5000``, ``username@[::1]``, ``username:password@example.com:443``, etc. .. attribute:: path The path that was parsed for the given URI Reference. For example, ``/``, ``/index.php``, etc. .. attribute:: query The query component for a given URI Reference. For example, ``a=b``, ``a=b%20c``, ``a=b+c``, ``a=b,c=d,e=%20f``, etc. .. attribute:: fragment The fragment component of a URI. For example, ``section-3.1``. This class also provides extra attributes for easier access to information like the subcomponents of the authority component. .. attribute:: userinfo The user information parsed from the authority. .. attribute:: host The hostname, IPv4, or IPv6 address parsed from the authority. .. attribute:: port The port parsed from the authority. """ slots = () def __new__(cls, scheme, authority, path, query, fragment, encoding='utf-8'): """Create a new URIReference.""" ref = super(URIReference, cls).__new__( cls, scheme or None, authority or None, path or None, query, fragment) ref.encoding = encoding return ref __hash__ = tuple.__hash__ def __eq__(self, other): """Compare this reference to another.""" other_ref = other if isinstance(other, tuple): other_ref = URIReference(*other) elif not isinstance(other, URIReference): try: other_ref = URIReference.from_string(other) except TypeError: raise TypeError( 'Unable to compare URIReference() to {0}()'.format( type(other).__name__)) # See http://tools.ietf.org/html/rfc3986#section-6.2 naive_equality = tuple(self) == tuple(other_ref) return naive_equality or self.normalized_equality(other_ref) def normalize(self): """Normalize this reference as described in Section 6.2.2. This is not an in-place normalization. Instead this creates a new URIReference. :returns: A new reference object with normalized components. :rtype: URIReference """ # See http://tools.ietf.org/html/rfc3986#section-6.2.2 for logic in # this method. return URIReference(normalizers.normalize_scheme(self.scheme or ''), normalizers.normalize_authority( (self.userinfo, self.host, self.port)), normalizers.normalize_path(self.path or ''), normalizers.normalize_query(self.query), normalizers.normalize_fragment(self.fragment), self.encoding) @classmethod def from_string(cls, uri_string, encoding='utf-8'): """Parse a URI reference from the given unicode URI string. :param str uri_string: Unicode URI to be parsed into a reference.
:param str encoding: The encoding of the string provided :returns: :class:`URIReference` or subclass thereof """ uri_string = compat.to_str(uri_string, encoding) split_uri = misc.URI_MATCHER.match(uri_string).groupdict() return cls( split_uri['scheme'], split_uri['authority'], normalizers.encode_component(split_uri['path'], encoding), normalizers.encode_component(split_uri['query'], encoding), normalizers.encode_component(split_uri['fragment'], encoding), encoding, )
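# Editor's note: a sketch of URIReference comparison above. Equality
# falls back to normalized_equality(), provided by the URIMixin from the
# ._mixin module (not shown in this listing), so differences in scheme
# and host casing or in dot segments do not matter.
from pip._vendor.urllib3.packages.rfc3986 import uri

a = uri.URIReference.from_string('HTTPS://Example.COM/./path')
b = uri.URIReference.from_string('https://example.com/path')
assert a == b
assert a.normalize().unsplit() == 'https://example.com/path'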
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/rfc3986/validators.py

# -*- coding: utf-8 -*- # Copyright (c) 2017 Ian Stapleton Cordasco # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Module containing the validation logic for rfc3986.""" from . import exceptions from . import misc from . import normalizers class Validator(object): """Object used to configure validation of all objects in rfc3986. .. versionadded:: 1.0 Example usage:: >>> from rfc3986 import api, validators >>> uri = api.uri_reference('https://github.com/') >>> validator = validators.Validator().require_presence_of( ... 'scheme', 'host', 'path', ... ).allow_schemes( ... 'http', 'https', ... ).allow_hosts( ... '127.0.0.1', 'github.com', ... ) >>> validator.validate(uri) >>> invalid_uri = api.uri_reference('imap://mail.google.com') >>> validator.validate(invalid_uri) Traceback (most recent call last): ... rfc3986.exceptions.MissingComponentError: ('path was required but missing', URIReference(scheme=u'imap', authority=u'mail.google.com', path=None, query=None, fragment=None), ['path']) """ COMPONENT_NAMES = frozenset([ 'scheme', 'userinfo', 'host', 'port', 'path', 'query', 'fragment', ]) def __init__(self): """Initialize our default validations.""" self.allowed_schemes = set() self.allowed_hosts = set() self.allowed_ports = set() self.allow_password = True self.required_components = { 'scheme': False, 'userinfo': False, 'host': False, 'port': False, 'path': False, 'query': False, 'fragment': False, } self.validated_components = self.required_components.copy() def allow_schemes(self, *schemes): """Require the scheme to be one of the provided schemes. .. versionadded:: 1.0 :param schemes: Schemes, without ``://`` that are allowed. :returns: The validator instance. :rtype: Validator """ for scheme in schemes: self.allowed_schemes.add(normalizers.normalize_scheme(scheme)) return self def allow_hosts(self, *hosts): """Require the host to be one of the provided hosts. .. versionadded:: 1.0 :param hosts: Hosts that are allowed. :returns: The validator instance. :rtype: Validator """ for host in hosts: self.allowed_hosts.add(normalizers.normalize_host(host)) return self def allow_ports(self, *ports): """Require the port to be one of the provided ports. .. versionadded:: 1.0 :param ports: Ports that are allowed. :returns: The validator instance. :rtype: Validator """ for port in ports: port_int = int(port, base=10) if 0 <= port_int <= 65535: self.allowed_ports.add(port) return self def allow_use_of_password(self): """Allow passwords to be present in the URI. .. versionadded:: 1.0 :returns: The validator instance. :rtype: Validator """ self.allow_password = True return self def forbid_use_of_password(self): """Prevent passwords from being included in the URI. .. versionadded:: 1.0 :returns: The validator instance. :rtype: Validator """ self.allow_password = False return self def check_validity_of(self, *components): """Check the validity of the components provided. This can be specified repeatedly. .. versionadded:: 1.1 :param components: Names of components from :attr:`Validator.COMPONENT_NAMES`. :returns: The validator instance. :rtype: Validator """ components = [c.lower() for c in components] for component in components: if component not in self.COMPONENT_NAMES: raise ValueError( '"{}" is not a valid component'.format(component) ) self.validated_components.update({ component: True for component in components }) return self def require_presence_of(self, *components): """Require the components provided. This can be specified repeatedly. .. versionadded:: 1.0 :param components: Names of components from :attr:`Validator.COMPONENT_NAMES`. :returns: The validator instance. :rtype: Validator """ components = [c.lower() for c in components] for component in components: if component not in self.COMPONENT_NAMES: raise ValueError( '"{}" is not a valid component'.format(component) ) self.required_components.update({ component: True for component in components }) return self def validate(self, uri): """Check a URI for conditions specified on this validator. .. versionadded:: 1.0 :param uri: Parsed URI to validate.
:type uri: rfc3986.uri.URIReference :raises MissingComponentError: When a required component is missing. :raises UnpermittedComponentError: When a component is not one of those allowed. :raises PasswordForbidden: When a password is present in the userinfo component but is not permitted by configuration. :raises InvalidComponentsError: When a component was found to be invalid. """ if not self.allow_password: check_password(uri) required_components = [ component for component, required in self.required_components.items() if required ] validated_components = [ component for component, required in self.validated_components.items() if required ] if required_components: ensure_required_components_exist(uri, required_components) if validated_components: ensure_components_are_valid(uri, validated_components) ensure_one_of(self.allowed_schemes, uri, 'scheme') ensure_one_of(self.allowed_hosts, uri, 'host') ensure_one_of(self.allowed_ports, uri, 'port') def check_password(uri): """Assert that there is no password present in the uri.""" userinfo = uri.userinfo if not userinfo: return credentials = userinfo.split(':', 1) if len(credentials) <= 1: return raise exceptions.PasswordForbidden(uri) def ensure_one_of(allowed_values, uri, attribute): """Assert that the uri's attribute is one of the allowed values.""" value = getattr(uri, attribute) if value is not None and allowed_values and value not in allowed_values: raise exceptions.UnpermittedComponentError( attribute, value, allowed_values, ) def ensure_required_components_exist(uri, required_components): """Assert that all required components are present in the URI.""" missing_components = sorted([ component for component in required_components if getattr(uri, component) is None ]) if missing_components: raise exceptions.MissingComponentError(uri, *missing_components) def is_valid(value, matcher, require): """Determine if a value is valid based on the provided matcher. :param str value: Value to validate. :param matcher: Compiled regular expression to use to validate the value. :param require: Whether or not the value is required. """ if require: return (value is not None and matcher.match(value)) # require is False and value is not None return value is None or matcher.match(value) def authority_is_valid(authority, host=None, require=False): """Determine if the authority string is valid. :param str authority: The authority to validate. :param str host: (optional) The host portion of the authority to validate. :param bool require: (optional) Specify if authority must not be None. :returns: ``True`` if valid, ``False`` otherwise :rtype: bool """ validated = is_valid(authority, misc.SUBAUTHORITY_MATCHER, require) if validated and host is not None: return host_is_valid(host, require) return validated def host_is_valid(host, require=False): """Determine if the host string is valid. :param str host: The host to validate. :param bool require: (optional) Specify if host must not be None. :returns: ``True`` if valid, ``False`` otherwise :rtype: bool """ validated = is_valid(host, misc.HOST_MATCHER, require) if validated and host is not None and misc.IPv4_MATCHER.match(host): return valid_ipv4_host_address(host) elif validated and host is not None and misc.IPv6_MATCHER.match(host): return misc.IPv6_NO_RFC4007_MATCHER.match(host) is not None return validated def scheme_is_valid(scheme, require=False): """Determine if the scheme is valid. :param str scheme: The scheme string to validate. 
:param bool require: (optional) Set to ``True`` to require the presence of a scheme. :returns: ``True`` if the scheme is valid. ``False`` otherwise. :rtype: bool """ return is_valid(scheme, misc.SCHEME_MATCHER, require) def path_is_valid(path, require=False): """Determine if the path component is valid. :param str path: The path string to validate. :param bool require: (optional) Set to ``True`` to require the presence of a path. :returns: ``True`` if the path is valid. ``False`` otherwise. :rtype: bool """ return is_valid(path, misc.PATH_MATCHER, require) def query_is_valid(query, require=False): """Determine if the query component is valid. :param str query: The query string to validate. :param bool require: (optional) Set to ``True`` to require the presence of a query. :returns: ``True`` if the query is valid. ``False`` otherwise. :rtype: bool """ return is_valid(query, misc.QUERY_MATCHER, require) def fragment_is_valid(fragment, require=False): """Determine if the fragment component is valid. :param str fragment: The fragment string to validate. :param bool require: (optional) Set to ``True`` to require the presence of a fragment. :returns: ``True`` if the fragment is valid. ``False`` otherwise. :rtype: bool """ return is_valid(fragment, misc.FRAGMENT_MATCHER, require) def valid_ipv4_host_address(host): """Determine if the given host is a valid IPv4 address.""" # If the host exists, and it might be IPv4, check each byte in the # address. return all([0 <= int(byte, base=10) <= 255 for byte in host.split('.')]) _COMPONENT_VALIDATORS = { 'scheme': scheme_is_valid, 'path': path_is_valid, 'query': query_is_valid, 'fragment': fragment_is_valid, } _SUBAUTHORITY_VALIDATORS = set(['userinfo', 'host', 'port']) def subauthority_component_is_valid(uri, component): """Determine if the userinfo, host, and port are valid.""" try: subauthority_dict = uri.authority_info() except exceptions.InvalidAuthority: return False # If we can parse the authority into sub-components and we're not # validating the port, we can assume it's valid. if component == 'host': return host_is_valid(subauthority_dict['host']) elif component != 'port': return True try: port = int(subauthority_dict['port']) except TypeError: # If the port wasn't provided it'll be None and int(None) raises a # TypeError return True return (0 <= port <= 65535) def ensure_components_are_valid(uri, validated_components): """Assert that all components are valid in the URI.""" invalid_components = set([]) for component in validated_components: if component in _SUBAUTHORITY_VALIDATORS: if not subauthority_component_is_valid(uri, component): invalid_components.add(component) # Python's peephole optimizer means that while this continue *is* # actually executed, coverage.py cannot detect that. 
See also, # https://bitbucket.org/ned/coveragepy/issues/198/continue-marked-as-not-covered continue # nocov: Python 2.7, 3.3, 3.4 validator = _COMPONENT_VALIDATORS[component] if not validator(getattr(uri, component)): invalid_components.add(component) if invalid_components: raise exceptions.InvalidComponentsError(uri, *invalid_components)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/six.py

"""Utilities for writing code that runs on Python 2 and 3""" # Copyright (c) 2010-2015 Benjamin Peterson # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE.
from __future__ import absolute_import import functools import itertools import operator import sys import types __author__ = "Benjamin Peterson <benjamin@python.org>" __version__ = "1.10.0" # Useful for very coarse version differentiation. PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PY34 = sys.version_info[0:2] >= (3, 4) if PY3: string_types = str, integer_types = int, class_types = type, text_type = str binary_type = bytes MAXSIZE = sys.maxsize else: string_types = basestring, integer_types = (int, long) class_types = (type, types.ClassType) text_type = unicode binary_type = str if sys.platform.startswith("java"): # Jython always uses 32 bits. MAXSIZE = int((1 << 31) - 1) else: # It's possible to have sizeof(long) != sizeof(Py_ssize_t). class X(object): def __len__(self): return 1 << 31 try: len(X()) except OverflowError: # 32-bit MAXSIZE = int((1 << 31) - 1) else: # 64-bit MAXSIZE = int((1 << 63) - 1) del X def _add_doc(func, doc): """Add documentation to a function.""" func.__doc__ = doc def _import_module(name): """Import module, returning the module after the last dot.""" __import__(name) return sys.modules[name] class _LazyDescr(object): def __init__(self, name): self.name = name def __get__(self, obj, tp): result = self._resolve() setattr(obj, self.name, result) # Invokes __set__. try: # This is a bit ugly, but it avoids running this again by # removing this descriptor. delattr(obj.__class__, self.name) except AttributeError: pass return result class MovedModule(_LazyDescr): def __init__(self, name, old, new=None): super(MovedModule, self).__init__(name) if PY3: if new is None: new = name self.mod = new else: self.mod = old def _resolve(self): return _import_module(self.mod) def __getattr__(self, attr): _module = self._resolve() value = getattr(_module, attr) setattr(self, attr, value) return value class _LazyModule(types.ModuleType): def __init__(self, name): super(_LazyModule, self).__init__(name) self.__doc__ = self.__class__.__doc__ def __dir__(self): attrs = ["__doc__", "__name__"] attrs += [attr.name for attr in self._moved_attributes] return attrs # Subclasses should override this _moved_attributes = [] class MovedAttribute(_LazyDescr): def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): super(MovedAttribute, self).__init__(name) if PY3: if new_mod is None: new_mod = name self.mod = new_mod if new_attr is None: if old_attr is None: new_attr = name else: new_attr = old_attr self.attr = new_attr else: self.mod = old_mod if old_attr is None: old_attr = name self.attr = old_attr def _resolve(self): module = _import_module(self.mod) return getattr(module, self.attr) class _SixMetaPathImporter(object): """ A meta path importer to import six.moves and its submodules. This class implements a PEP302 finder and loader. It should be compatible with Python 2.5 and all existing versions of Python3 """ def __init__(self, six_module_name): self.name = six_module_name self.known_modules = {} def _add_module(self, mod, *fullnames): for fullname in fullnames: self.known_modules[self.name + "." + fullname] = mod def _get_module(self, fullname): return self.known_modules[self.name + "." 
+ fullname] def find_module(self, fullname, path=None): if fullname in self.known_modules: return self return None def __get_module(self, fullname): try: return self.known_modules[fullname] except KeyError: raise ImportError("This loader does not know module " + fullname) def load_module(self, fullname): try: # in case of a reload return sys.modules[fullname] except KeyError: pass mod = self.__get_module(fullname) if isinstance(mod, MovedModule): mod = mod._resolve() else: mod.__loader__ = self sys.modules[fullname] = mod return mod def is_package(self, fullname): """ Return true, if the named module is a package. We need this method to get correct spec objects with Python 3.4 (see PEP451) """ return hasattr(self.__get_module(fullname), "__path__") def get_code(self, fullname): """Return None Required, if is_package is implemented""" self.__get_module(fullname) # eventually raises ImportError return None get_source = get_code # same as get_code _importer = _SixMetaPathImporter(__name__) class _MovedItems(_LazyModule): """Lazy loading of moved objects""" __path__ = [] # mark as package _moved_attributes = [ MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), MovedAttribute("intern", "__builtin__", "sys"), MovedAttribute("map", "itertools", "builtins", "imap", "map"), MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), MovedAttribute("reduce", "__builtin__", "functools"), MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), MovedAttribute("StringIO", "StringIO", "io"), MovedAttribute("UserDict", "UserDict", "collections"), MovedAttribute("UserList", "UserList", "collections"), MovedAttribute("UserString", "UserString", "collections"), MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), MovedModule("builtins", "__builtin__"), MovedModule("configparser", "ConfigParser"), MovedModule("copyreg", "copy_reg"), MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), MovedModule("http_cookies", "Cookie", "http.cookies"), MovedModule("html_entities", "htmlentitydefs", "html.entities"), MovedModule("html_parser", "HTMLParser", "html.parser"), MovedModule("http_client", "httplib", "http.client"), MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), MovedModule("cPickle", "cPickle", "pickle"), MovedModule("queue", "Queue"), MovedModule("reprlib", "repr"), MovedModule("socketserver", "SocketServer"), 
MovedModule("_thread", "thread", "_thread"), MovedModule("tkinter", "Tkinter"), MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), MovedModule("tkinter_tix", "Tix", "tkinter.tix"), MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), MovedModule("tkinter_font", "tkFont", "tkinter.font"), MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), ] # Add windows specific modules. if sys.platform == "win32": _moved_attributes += [ MovedModule("winreg", "_winreg"), ] for attr in _moved_attributes: setattr(_MovedItems, attr.name, attr) if isinstance(attr, MovedModule): _importer._add_module(attr, "moves." + attr.name) del attr _MovedItems._moved_attributes = _moved_attributes moves = _MovedItems(__name__ + ".moves") _importer._add_module(moves, "moves") class Module_six_moves_urllib_parse(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_parse""" _urllib_parse_moved_attributes = [ MovedAttribute("ParseResult", "urlparse", "urllib.parse"), MovedAttribute("SplitResult", "urlparse", "urllib.parse"), MovedAttribute("parse_qs", "urlparse", "urllib.parse"), MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), MovedAttribute("urldefrag", "urlparse", "urllib.parse"), MovedAttribute("urljoin", "urlparse", "urllib.parse"), MovedAttribute("urlparse", "urlparse", "urllib.parse"), MovedAttribute("urlsplit", "urlparse", "urllib.parse"), MovedAttribute("urlunparse", "urlparse", "urllib.parse"), MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), MovedAttribute("quote", "urllib", "urllib.parse"), MovedAttribute("quote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote", "urllib", "urllib.parse"), MovedAttribute("unquote_plus", "urllib", "urllib.parse"), MovedAttribute("urlencode", "urllib", "urllib.parse"), MovedAttribute("splitquery", "urllib", "urllib.parse"), MovedAttribute("splittag", "urllib", "urllib.parse"), MovedAttribute("splituser", "urllib", "urllib.parse"), MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), MovedAttribute("uses_params", "urlparse", "urllib.parse"), MovedAttribute("uses_query", "urlparse", "urllib.parse"), MovedAttribute("uses_relative", "urlparse", "urllib.parse"), ] for attr in _urllib_parse_moved_attributes: setattr(Module_six_moves_urllib_parse, attr.name, attr) del attr Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes 
_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), "moves.urllib_parse", "moves.urllib.parse") class Module_six_moves_urllib_error(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_error""" _urllib_error_moved_attributes = [ MovedAttribute("URLError", "urllib2", "urllib.error"), MovedAttribute("HTTPError", "urllib2", "urllib.error"), MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), ] for attr in _urllib_error_moved_attributes: setattr(Module_six_moves_urllib_error, attr.name, attr) del attr Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), "moves.urllib_error", "moves.urllib.error") class Module_six_moves_urllib_request(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_request""" _urllib_request_moved_attributes = [ MovedAttribute("urlopen", "urllib2", "urllib.request"), MovedAttribute("install_opener", "urllib2", "urllib.request"), MovedAttribute("build_opener", "urllib2", "urllib.request"), MovedAttribute("pathname2url", "urllib", "urllib.request"), MovedAttribute("url2pathname", "urllib", "urllib.request"), MovedAttribute("getproxies", "urllib", "urllib.request"), MovedAttribute("Request", "urllib2", "urllib.request"), MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), MovedAttribute("BaseHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), MovedAttribute("FileHandler", "urllib2", "urllib.request"), MovedAttribute("FTPHandler", "urllib2", "urllib.request"), MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), MovedAttribute("urlretrieve", "urllib", "urllib.request"), MovedAttribute("urlcleanup", "urllib", "urllib.request"), MovedAttribute("URLopener", "urllib", "urllib.request"), MovedAttribute("FancyURLopener", "urllib", "urllib.request"), MovedAttribute("proxy_bypass", "urllib", "urllib.request"), ] for attr in _urllib_request_moved_attributes: setattr(Module_six_moves_urllib_request, attr.name, attr) del attr Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), "moves.urllib_request", "moves.urllib.request") class Module_six_moves_urllib_response(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_response""" _urllib_response_moved_attributes = [ 
MovedAttribute("addbase", "urllib", "urllib.response"), MovedAttribute("addclosehook", "urllib", "urllib.response"), MovedAttribute("addinfo", "urllib", "urllib.response"), MovedAttribute("addinfourl", "urllib", "urllib.response"), ] for attr in _urllib_response_moved_attributes: setattr(Module_six_moves_urllib_response, attr.name, attr) del attr Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), "moves.urllib_response", "moves.urllib.response") class Module_six_moves_urllib_robotparser(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_robotparser""" _urllib_robotparser_moved_attributes = [ MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), ] for attr in _urllib_robotparser_moved_attributes: setattr(Module_six_moves_urllib_robotparser, attr.name, attr) del attr Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), "moves.urllib_robotparser", "moves.urllib.robotparser") class Module_six_moves_urllib(types.ModuleType): """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" __path__ = [] # mark as package parse = _importer._get_module("moves.urllib_parse") error = _importer._get_module("moves.urllib_error") request = _importer._get_module("moves.urllib_request") response = _importer._get_module("moves.urllib_response") robotparser = _importer._get_module("moves.urllib_robotparser") def __dir__(self): return ['parse', 'error', 'request', 'response', 'robotparser'] _importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib") def add_move(move): """Add an item to six.moves.""" setattr(_MovedItems, move.name, move) def remove_move(name): """Remove item from six.moves.""" try: delattr(_MovedItems, name) except AttributeError: try: del moves.__dict__[name] except KeyError: raise AttributeError("no such move, %r" % (name,)) if PY3: _meth_func = "__func__" _meth_self = "__self__" _func_closure = "__closure__" _func_code = "__code__" _func_defaults = "__defaults__" _func_globals = "__globals__" else: _meth_func = "im_func" _meth_self = "im_self" _func_closure = "func_closure" _func_code = "func_code" _func_defaults = "func_defaults" _func_globals = "func_globals" try: advance_iterator = next except NameError: def advance_iterator(it): return it.next() next = advance_iterator try: callable = callable except NameError: def callable(obj): return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) if PY3: def get_unbound_function(unbound): return unbound create_bound_method = types.MethodType def create_unbound_method(func, cls): return func Iterator = object else: def get_unbound_function(unbound): return unbound.im_func def create_bound_method(func, obj): return types.MethodType(func, obj, obj.__class__) def create_unbound_method(func, cls): return types.MethodType(func, None, cls) class Iterator(object): def next(self): return type(self).__next__(self) callable = callable _add_doc(get_unbound_function, """Get the function out of a possibly unbound function""") get_method_function = operator.attrgetter(_meth_func) get_method_self = operator.attrgetter(_meth_self) get_function_closure = operator.attrgetter(_func_closure) get_function_code = operator.attrgetter(_func_code) get_function_defaults = operator.attrgetter(_func_defaults) 
get_function_globals = operator.attrgetter(_func_globals) if PY3: def iterkeys(d, **kw): return iter(d.keys(**kw)) def itervalues(d, **kw): return iter(d.values(**kw)) def iteritems(d, **kw): return iter(d.items(**kw)) def iterlists(d, **kw): return iter(d.lists(**kw)) viewkeys = operator.methodcaller("keys") viewvalues = operator.methodcaller("values") viewitems = operator.methodcaller("items") else: def iterkeys(d, **kw): return d.iterkeys(**kw) def itervalues(d, **kw): return d.itervalues(**kw) def iteritems(d, **kw): return d.iteritems(**kw) def iterlists(d, **kw): return d.iterlists(**kw) viewkeys = operator.methodcaller("viewkeys") viewvalues = operator.methodcaller("viewvalues") viewitems = operator.methodcaller("viewitems") _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") _add_doc(itervalues, "Return an iterator over the values of a dictionary.") _add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") _add_doc(iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary.") if PY3: def b(s): return s.encode("latin-1") def u(s): return s unichr = chr import struct int2byte = struct.Struct(">B").pack del struct byte2int = operator.itemgetter(0) indexbytes = operator.getitem iterbytes = iter import io StringIO = io.StringIO BytesIO = io.BytesIO _assertCountEqual = "assertCountEqual" if sys.version_info[1] <= 1: _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" else: _assertRaisesRegex = "assertRaisesRegex" _assertRegex = "assertRegex" else: def b(s): return s # Workaround for standalone backslash def u(s): return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") unichr = unichr int2byte = chr def byte2int(bs): return ord(bs[0]) def indexbytes(buf, i): return ord(buf[i]) iterbytes = functools.partial(itertools.imap, ord) import StringIO StringIO = BytesIO = StringIO.StringIO _assertCountEqual = "assertItemsEqual" _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" _add_doc(b, """Byte literal""") _add_doc(u, """Text literal""") def assertCountEqual(self, *args, **kwargs): return getattr(self, _assertCountEqual)(*args, **kwargs) def assertRaisesRegex(self, *args, **kwargs): return getattr(self, _assertRaisesRegex)(*args, **kwargs) def assertRegex(self, *args, **kwargs): return getattr(self, _assertRegex)(*args, **kwargs) if PY3: exec_ = getattr(moves.builtins, "exec") def reraise(tp, value, tb=None): if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value else: def exec_(_code_, _globs_=None, _locs_=None): """Execute code in a namespace.""" if _globs_ is None: frame = sys._getframe(1) _globs_ = frame.f_globals if _locs_ is None: _locs_ = frame.f_locals del frame elif _locs_ is None: _locs_ = _globs_ exec("""exec _code_ in _globs_, _locs_""") exec_("""def reraise(tp, value, tb=None): raise tp, value, tb """) if sys.version_info[:2] == (3, 2): exec_("""def raise_from(value, from_value): if from_value is None: raise value raise value from from_value """) elif sys.version_info[:2] > (3, 2): exec_("""def raise_from(value, from_value): raise value from from_value """) else: def raise_from(value, from_value): raise value print_ = getattr(moves.builtins, "print", None) if print_ is None: def print_(*args, **kwargs): """The new-style print function for Python 2.4 and 2.5.""" fp = kwargs.pop("file", sys.stdout) if fp is None: return def write(data): if not isinstance(data, basestring): data = str(data) # If the file has 
an encoding, encode unicode with it. if (isinstance(fp, file) and isinstance(data, unicode) and fp.encoding is not None): errors = getattr(fp, "errors", None) if errors is None: errors = "strict" data = data.encode(fp.encoding, errors) fp.write(data) want_unicode = False sep = kwargs.pop("sep", None) if sep is not None: if isinstance(sep, unicode): want_unicode = True elif not isinstance(sep, str): raise TypeError("sep must be None or a string") end = kwargs.pop("end", None) if end is not None: if isinstance(end, unicode): want_unicode = True elif not isinstance(end, str): raise TypeError("end must be None or a string") if kwargs: raise TypeError("invalid keyword arguments to print()") if not want_unicode: for arg in args: if isinstance(arg, unicode): want_unicode = True break if want_unicode: newline = unicode("\n") space = unicode(" ") else: newline = "\n" space = " " if sep is None: sep = space if end is None: end = newline for i, arg in enumerate(args): if i: write(sep) write(arg) write(end) if sys.version_info[:2] < (3, 3): _print = print_ def print_(*args, **kwargs): fp = kwargs.get("file", sys.stdout) flush = kwargs.pop("flush", False) _print(*args, **kwargs) if flush and fp is not None: fp.flush() _add_doc(reraise, """Reraise an exception.""") if sys.version_info[0:2] < (3, 4): def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, updated=functools.WRAPPER_UPDATES): def wrapper(f): f = functools.wraps(wrapped, assigned, updated)(f) f.__wrapped__ = wrapped return f return wrapper else: wraps = functools.wraps def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(meta): def __new__(cls, name, this_bases, d): return meta(name, bases, d) return type.__new__(metaclass, 'temporary_class', (), {}) def add_metaclass(metaclass): """Class decorator for creating a class with a metaclass.""" def wrapper(cls): orig_vars = cls.__dict__.copy() slots = orig_vars.get('__slots__') if slots is not None: if isinstance(slots, str): slots = [slots] for slots_var in slots: orig_vars.pop(slots_var) orig_vars.pop('__dict__', None) orig_vars.pop('__weakref__', None) return metaclass(cls.__name__, cls.__bases__, orig_vars) return wrapper def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if PY2: if '__str__' not in klass.__dict__: raise ValueError("@python_2_unicode_compatible cannot be applied " "to %s because it doesn't define __str__()." % klass.__name__) klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode('utf-8') return klass # Complete the moves implementation. # This code is at the end of this module to speed up module loading. # Turn this module into a package. __path__ = [] # required for PEP 302 and PEP 451 __package__ = __name__ # see PEP 366 @ReservedAssignment if globals().get("__spec__") is not None: __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable # Remove other six meta path importers, since they cause problems. This can # happen if six is removed from sys.modules and then reloaded. (Setuptools does # this for some reason.) 
if sys.meta_path:
    for i, importer in enumerate(sys.meta_path):
        # Here's some real nastiness: Another "instance" of the six module might
        # be floating around. Therefore, we can't use isinstance() to check for
        # the six meta path importer, since the other six instance will have
        # inserted an importer with different class.
        if (type(importer).__name__ == "_SixMetaPathImporter" and
                importer.name == __name__):
            del sys.meta_path[i]
            break
    del i, importer

# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)
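A hedged sketch of the metaclass helpers defined earlier in this module (again assuming a standalone `six` install, not the vendored copy): `with_metaclass()` builds a throwaway intermediate metaclass so the real one is applied exactly once, in a form that is valid syntax under both Python 2 and 3.

import six

class Meta(type):
    def __new__(mcls, name, bases, namespace):
        # Tag every class created through this metaclass.
        namespace.setdefault("tagged", True)
        return super(Meta, mcls).__new__(mcls, name, bases, namespace)

class Tagged(six.with_metaclass(Meta, object)):
    pass

assert Tagged.tagged       # attribute injected by Meta.__new__
assert type(Tagged) is Meta  # the temporary metaclass replaced itself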
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py ----

import sys

try:
    # Our match_hostname function is the same as 3.5's, so we only want to
    # import the match_hostname function if it's at least that good.
    if sys.version_info < (3, 5):
        raise ImportError("Fallback to vendored code")

    from ssl import CertificateError, match_hostname
except ImportError:
    try:
        # Backport of the function from a pypi module
        from backports.ssl_match_hostname import CertificateError, match_hostname
    except ImportError:
        # Our vendored copy
        from ._implementation import CertificateError, match_hostname

# Not needed, but documenting what we provide.
__all__ = ('CertificateError', 'match_hostname')

# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/packages/ssl_match_hostname/_implementation.py ----

"""The match_hostname() function from Python 3.3.3, essential when using SSL."""

# Note: This file is under the PSF license as the code comes from the python
# stdlib. http://docs.python.org/3/license.html

import re
import sys

# ipaddress has been backported to 2.6+ in pypi. If it is installed on the
# system, use it to handle IPAddress ServerAltnames (this was added in
# python-3.5) otherwise only do DNS matching. This allows
# backports.ssl_match_hostname to continue to be used in Python 2.7.
try:
    from pip._vendor import ipaddress
except ImportError:
    ipaddress = None

__version__ = '3.5.0.1'


class CertificateError(ValueError):
    pass


def _dnsname_match(dn, hostname, max_wildcards=1):
    """Matching according to RFC 6125, section 6.4.3

    http://tools.ietf.org/html/rfc6125#section-6.4.3
    """
    pats = []
    if not dn:
        return False

    # Ported from python3-syntax:
    # leftmost, *remainder = dn.split(r'.')
    parts = dn.split(r'.')
    leftmost = parts[0]
    remainder = parts[1:]

    wildcards = leftmost.count('*')
    if wildcards > max_wildcards:
        # Issue #17980: avoid denials of service by refusing more
        # than one wildcard per fragment. A survey of established
        # policy among SSL implementations showed it to be a
        # reasonable choice.
        raise CertificateError(
            "too many wildcards in certificate DNS name: " + repr(dn))

    # speed up common case w/o wildcards
    if not wildcards:
        return dn.lower() == hostname.lower()

    # RFC 6125, section 6.4.3, subitem 1.
    # The client SHOULD NOT attempt to match a presented identifier in which
    # the wildcard character comprises a label other than the left-most label.
    if leftmost == '*':
        # When '*' is a fragment by itself, it matches a non-empty dotless
        # fragment.
        pats.append('[^.]+')
    elif leftmost.startswith('xn--') or hostname.startswith('xn--'):
        # RFC 6125, section 6.4.3, subitem 3.
        # The client SHOULD NOT attempt to match a presented identifier
        # where the wildcard character is embedded within an A-label or
        # U-label of an internationalized domain name.
        pats.append(re.escape(leftmost))
    else:
        # Otherwise, '*' matches any dotless string, e.g. www*
        pats.append(re.escape(leftmost).replace(r'\*', '[^.]*'))

    # add the remaining fragments, ignore any wildcards
    for frag in remainder:
        pats.append(re.escape(frag))

    pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE)
    return pat.match(hostname)


def _to_unicode(obj):
    if isinstance(obj, str) and sys.version_info < (3,):
        obj = unicode(obj, encoding='ascii', errors='strict')
    return obj


def _ipaddress_match(ipname, host_ip):
    """Exact matching of IP addresses.

    RFC 6125 explicitly doesn't define an algorithm for this
    (section 1.7.2 - "Out of Scope").
    """
    # OpenSSL may add a trailing newline to a subjectAltName's IP address
    # Divergence from upstream: ipaddress can't handle byte str
    ip = ipaddress.ip_address(_to_unicode(ipname).rstrip())
    return ip == host_ip


def match_hostname(cert, hostname):
    """Verify that *cert* (in decoded format as returned by
    SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125
    rules are followed, but IP addresses are not accepted for *hostname*.

    CertificateError is raised on failure. On success, the function
    returns nothing.
    """
    if not cert:
        raise ValueError("empty or no certificate, match_hostname needs a "
                         "SSL socket or SSL context with either "
                         "CERT_OPTIONAL or CERT_REQUIRED")
    try:
        # Divergence from upstream: ipaddress can't handle byte str
        host_ip = ipaddress.ip_address(_to_unicode(hostname))
    except ValueError:
        # Not an IP address (common case)
        host_ip = None
    except UnicodeError:
        # Divergence from upstream: Have to deal with ipaddress not taking
        # byte strings.
        # addresses should be all ascii, so we consider it not
        # an ipaddress in this case
        host_ip = None
    except AttributeError:
        # Divergence from upstream: Make ipaddress library optional
        if ipaddress is None:
            host_ip = None
        else:
            raise
    dnsnames = []
    san = cert.get('subjectAltName', ())
    for key, value in san:
        if key == 'DNS':
            if host_ip is None and _dnsname_match(value, hostname):
                return
            dnsnames.append(value)
        elif key == 'IP Address':
            if host_ip is not None and _ipaddress_match(value, host_ip):
                return
            dnsnames.append(value)
    if not dnsnames:
        # The subject is only checked when there is no dNSName entry
        # in subjectAltName
        for sub in cert.get('subject', ()):
            for key, value in sub:
                # XXX according to RFC 2818, the most specific Common Name
                # must be used.
                if key == 'commonName':
                    if _dnsname_match(value, hostname):
                        return
                    dnsnames.append(value)
    if len(dnsnames) > 1:
        raise CertificateError("hostname %r "
                               "doesn't match either of %s"
                               % (hostname, ', '.join(map(repr, dnsnames))))
    elif len(dnsnames) == 1:
        raise CertificateError("hostname %r "
                               "doesn't match %r"
                               % (hostname, dnsnames[0]))
    else:
        raise CertificateError("no appropriate commonName or "
                               "subjectAltName fields were found")
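A hedged usage sketch of the function above (the import path assumes this vendored layout): `match_hostname()` returns None on success and raises `CertificateError` otherwise. The certificate dict here is hand-built for illustration, in the shape returned by `SSLSocket.getpeercert()`.

from pip._vendor.urllib3.packages.ssl_match_hostname import (
    CertificateError, match_hostname,
)

cert = {
    'subject': ((('commonName', 'example.com'),),),
    'subjectAltName': (('DNS', 'example.com'), ('DNS', '*.example.com')),
}

match_hostname(cert, 'www.example.com')  # wildcard SAN matches; returns None

try:
    match_hostname(cert, 'evil.org')
except CertificateError as exc:
    print(exc)  # hostname 'evil.org' doesn't match either of ...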
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/poolmanager.py ----

from __future__ import absolute_import
import collections
import functools
import logging

from ._collections import RecentlyUsedContainer
from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool
from .connectionpool import port_by_scheme
from .exceptions import LocationValueError, MaxRetryError, ProxySchemeUnknown
from .packages import six
from .packages.six.moves.urllib.parse import urljoin
from .request import RequestMethods
from .util.url import parse_url
from .util.retry import Retry

__all__ = ['PoolManager', 'ProxyManager', 'proxy_from_url']

log = logging.getLogger(__name__)

SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs',
                'ssl_version', 'ca_cert_dir', 'ssl_context',
                'key_password')

# All known keyword arguments that could be provided to the pool manager, its
# pools, or the underlying connections. This is used to construct a pool key.
_key_fields = (
    'key_scheme',  # str
    'key_host',  # str
    'key_port',  # int
    'key_timeout',  # int or float or Timeout
    'key_retries',  # int or Retry
    'key_strict',  # bool
    'key_block',  # bool
    'key_source_address',  # str
    'key_key_file',  # str
    'key_key_password',  # str
    'key_cert_file',  # str
    'key_cert_reqs',  # str
    'key_ca_certs',  # str
    'key_ssl_version',  # str
    'key_ca_cert_dir',  # str
    'key_ssl_context',  # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext
    'key_maxsize',  # int
    'key_headers',  # dict
    'key__proxy',  # parsed proxy url
    'key__proxy_headers',  # dict
    'key_socket_options',  # list of (level (int), optname (int), value (int or str)) tuples
    'key__socks_options',  # dict
    'key_assert_hostname',  # bool or string
    'key_assert_fingerprint',  # str
    'key_server_hostname',  # str
)

#: The namedtuple class used to construct keys for the connection pool.
#: All custom key schemes should include the fields in this key at a minimum.
PoolKey = collections.namedtuple('PoolKey', _key_fields)


def _default_key_normalizer(key_class, request_context):
    """
    Create a pool key out of a request context dictionary.

    According to RFC 3986, both the scheme and host are case-insensitive.
    Therefore, this function normalizes both before constructing the pool
    key for an HTTPS request. If you wish to change this behaviour, provide
    alternate callables to ``key_fn_by_scheme``.

    :param key_class:
        The class to use when constructing the key. This should be a
        namedtuple with the ``scheme`` and ``host`` keys at a minimum.
    :type key_class: namedtuple
    :param request_context:
        A dictionary-like object that contain the context for a request.
    :type request_context: dict

    :return: A namedtuple that can be used as a connection pool key.
    :rtype: PoolKey
    """
    # Since we mutate the dictionary, make a copy first
    context = request_context.copy()
    context['scheme'] = context['scheme'].lower()
    context['host'] = context['host'].lower()

    # These are both dictionaries and need to be transformed into frozensets
    for key in ('headers', '_proxy_headers', '_socks_options'):
        if key in context and context[key] is not None:
            context[key] = frozenset(context[key].items())

    # The socket_options key may be a list and needs to be transformed into a
    # tuple.
    socket_opts = context.get('socket_options')
    if socket_opts is not None:
        context['socket_options'] = tuple(socket_opts)

    # Map the kwargs to the names in the namedtuple - this is necessary since
    # namedtuples can't have fields starting with '_'.
    for key in list(context.keys()):
        context['key_' + key] = context.pop(key)

    # Default to ``None`` for keys missing from the context
    for field in key_class._fields:
        if field not in context:
            context[field] = None

    return key_class(**context)


#: A dictionary that maps a scheme to a callable that creates a pool key.
#: This can be used to alter the way pool keys are constructed, if desired.
#: Each PoolManager makes a copy of this dictionary so they can be configured
#: globally here, or individually on the instance.
key_fn_by_scheme = { 'http': functools.partial(_default_key_normalizer, PoolKey), 'https': functools.partial(_default_key_normalizer, PoolKey), } pool_classes_by_scheme = { 'http': HTTPConnectionPool, 'https': HTTPSConnectionPool, } class PoolManager(RequestMethods): """ Allows for arbitrary requests while transparently keeping track of necessary connection pools for you. :param num_pools: Number of connection pools to cache before discarding the least recently used pool. :param headers: Headers to include with all requests, unless other headers are given explicitly. :param \\**connection_pool_kw: Additional parameters are used to create fresh :class:`urllib3.connectionpool.ConnectionPool` instances. Example:: >>> manager = PoolManager(num_pools=2) >>> r = manager.request('GET', 'http://google.com/') >>> r = manager.request('GET', 'http://google.com/mail') >>> r = manager.request('GET', 'http://yahoo.com/') >>> len(manager.pools) 2 """ proxy = None def __init__(self, num_pools=10, headers=None, **connection_pool_kw): RequestMethods.__init__(self, headers) self.connection_pool_kw = connection_pool_kw self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close()) # Locally set the pool classes and keys so other PoolManagers can # override them. self.pool_classes_by_scheme = pool_classes_by_scheme self.key_fn_by_scheme = key_fn_by_scheme.copy() def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.clear() # Return False to re-raise any potential exceptions return False def _new_pool(self, scheme, host, port, request_context=None): """ Create a new :class:`ConnectionPool` based on host, port, scheme, and any additional pool keyword arguments. If ``request_context`` is provided, it is provided as keyword arguments to the pool class used. This method is used to actually create the connection pools handed out by :meth:`connection_from_url` and companion methods. It is intended to be overridden for customization. """ pool_cls = self.pool_classes_by_scheme[scheme] if request_context is None: request_context = self.connection_pool_kw.copy() # Although the context has everything necessary to create the pool, # this function has historically only used the scheme, host, and port # in the positional args. When an API change is acceptable these can # be removed. for key in ('scheme', 'host', 'port'): request_context.pop(key, None) if scheme == 'http': for kw in SSL_KEYWORDS: request_context.pop(kw, None) return pool_cls(host, port, **request_context) def clear(self): """ Empty our store of pools and direct them all to close. This will not affect in-flight connections, but they will not be re-used after completion. """ self.pools.clear() def connection_from_host(self, host, port=None, scheme='http', pool_kwargs=None): """ Get a :class:`ConnectionPool` based on the host, port, and scheme. If ``port`` isn't given, it will be derived from the ``scheme`` using ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is provided, it is merged with the instance's ``connection_pool_kw`` variable and used to create the new connection pool, if one is needed. 
""" if not host: raise LocationValueError("No host specified.") request_context = self._merge_pool_kwargs(pool_kwargs) request_context['scheme'] = scheme or 'http' if not port: port = port_by_scheme.get(request_context['scheme'].lower(), 80) request_context['port'] = port request_context['host'] = host return self.connection_from_context(request_context) def connection_from_context(self, request_context): """ Get a :class:`ConnectionPool` based on the request context. ``request_context`` must at least contain the ``scheme`` key and its value must be a key in ``key_fn_by_scheme`` instance variable. """ scheme = request_context['scheme'].lower() pool_key_constructor = self.key_fn_by_scheme[scheme] pool_key = pool_key_constructor(request_context) return self.connection_from_pool_key(pool_key, request_context=request_context) def connection_from_pool_key(self, pool_key, request_context=None): """ Get a :class:`ConnectionPool` based on the provided pool key. ``pool_key`` should be a namedtuple that only contains immutable objects. At a minimum it must have the ``scheme``, ``host``, and ``port`` fields. """ with self.pools.lock: # If the scheme, host, or port doesn't match existing open # connections, open a new ConnectionPool. pool = self.pools.get(pool_key) if pool: return pool # Make a fresh ConnectionPool of the desired type scheme = request_context['scheme'] host = request_context['host'] port = request_context['port'] pool = self._new_pool(scheme, host, port, request_context=request_context) self.pools[pool_key] = pool return pool def connection_from_url(self, url, pool_kwargs=None): """ Similar to :func:`urllib3.connectionpool.connection_from_url`. If ``pool_kwargs`` is not provided and a new pool needs to be constructed, ``self.connection_pool_kw`` is used to initialize the :class:`urllib3.connectionpool.ConnectionPool`. If ``pool_kwargs`` is provided, it is used instead. Note that if a new pool does not need to be created for the request, the provided ``pool_kwargs`` are not used. """ u = parse_url(url) return self.connection_from_host(u.host, port=u.port, scheme=u.scheme, pool_kwargs=pool_kwargs) def _merge_pool_kwargs(self, override): """ Merge a dictionary of override values for self.connection_pool_kw. This does not modify self.connection_pool_kw and returns a new dict. Any keys in the override dictionary with a value of ``None`` are removed from the merged dictionary. """ base_pool_kwargs = self.connection_pool_kw.copy() if override: for key, value in override.items(): if value is None: try: del base_pool_kwargs[key] except KeyError: pass else: base_pool_kwargs[key] = value return base_pool_kwargs def urlopen(self, method, url, redirect=True, **kw): """ Same as :meth:`urllib3.connectionpool.HTTPConnectionPool.urlopen` with custom cross-host redirect logic and only sends the request-uri portion of the ``url``. The given ``url`` parameter must be absolute, such that an appropriate :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. """ u = parse_url(url) conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) kw['assert_same_host'] = False kw['redirect'] = False if 'headers' not in kw: kw['headers'] = self.headers.copy() if self.proxy is not None and u.scheme == "http": response = conn.urlopen(method, url, **kw) else: response = conn.urlopen(method, u.request_uri, **kw) redirect_location = redirect and response.get_redirect_location() if not redirect_location: return response # Support relative URLs for redirecting. 
redirect_location = urljoin(url, redirect_location) # RFC 7231, Section 6.4.4 if response.status == 303: method = 'GET' retries = kw.get('retries') if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect) # Strip headers marked as unsafe to forward to the redirected location. # Check remove_headers_on_redirect to avoid a potential network call within # conn.is_same_host() which may use socket.gethostbyname() in the future. if (retries.remove_headers_on_redirect and not conn.is_same_host(redirect_location)): headers = list(six.iterkeys(kw['headers'])) for header in headers: if header.lower() in retries.remove_headers_on_redirect: kw['headers'].pop(header, None) try: retries = retries.increment(method, url, response=response, _pool=conn) except MaxRetryError: if retries.raise_on_redirect: raise return response kw['retries'] = retries kw['redirect'] = redirect log.info("Redirecting %s -> %s", url, redirect_location) return self.urlopen(method, redirect_location, **kw) class ProxyManager(PoolManager): """ Behaves just like :class:`PoolManager`, but sends all requests through the defined proxy, using the CONNECT method for HTTPS URLs. :param proxy_url: The URL of the proxy to be used. :param proxy_headers: A dictionary containing headers that will be sent to the proxy. In case of HTTP they are being sent with each request, while in the HTTPS/CONNECT case they are sent only once. Could be used for proxy authentication. Example: >>> proxy = urllib3.ProxyManager('http://localhost:3128/') >>> r1 = proxy.request('GET', 'http://google.com/') >>> r2 = proxy.request('GET', 'http://httpbin.org/') >>> len(proxy.pools) 1 >>> r3 = proxy.request('GET', 'https://httpbin.org/') >>> r4 = proxy.request('GET', 'https://twitter.com/') >>> len(proxy.pools) 3 """ def __init__(self, proxy_url, num_pools=10, headers=None, proxy_headers=None, **connection_pool_kw): if isinstance(proxy_url, HTTPConnectionPool): proxy_url = '%s://%s:%i' % (proxy_url.scheme, proxy_url.host, proxy_url.port) proxy = parse_url(proxy_url) if not proxy.port: port = port_by_scheme.get(proxy.scheme, 80) proxy = proxy._replace(port=port) if proxy.scheme not in ("http", "https"): raise ProxySchemeUnknown(proxy.scheme) self.proxy = proxy self.proxy_headers = proxy_headers or {} connection_pool_kw['_proxy'] = self.proxy connection_pool_kw['_proxy_headers'] = self.proxy_headers super(ProxyManager, self).__init__( num_pools, headers, **connection_pool_kw) def connection_from_host(self, host, port=None, scheme='http', pool_kwargs=None): if scheme == "https": return super(ProxyManager, self).connection_from_host( host, port, scheme, pool_kwargs=pool_kwargs) return super(ProxyManager, self).connection_from_host( self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs) def _set_proxy_headers(self, url, headers=None): """ Sets headers needed by proxies: specifically, the Accept and Host headers. Only sets headers not provided by the user. """ headers_ = {'Accept': '*/*'} netloc = parse_url(url).netloc if netloc: headers_['Host'] = netloc if headers: headers_.update(headers) return headers_ def urlopen(self, method, url, redirect=True, **kw): "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute." u = parse_url(url) if u.scheme == "http": # For proxied HTTPS requests, httplib sets the necessary headers # on the CONNECT to the proxy. For HTTP, we'll definitely # need to set 'Host' at the very least. 
            headers = kw.get('headers', self.headers)
            kw['headers'] = self._set_proxy_headers(url, headers)

        return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw)


def proxy_from_url(url, **kw):
    return ProxyManager(proxy_url=url, **kw)
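A hedged sketch of the pool-key normalization described above (the import path assumes this vendored layout): `_default_key_normalizer()` lower-cases the case-insensitive parts of the request context and fills every remaining `PoolKey` field with None, so requests to the same target hash to the same connection pool.

from pip._vendor.urllib3.poolmanager import PoolKey, _default_key_normalizer

context = {'scheme': 'HTTPS', 'host': 'Example.COM', 'port': 443}
key = _default_key_normalizer(PoolKey, context)

assert key.key_scheme == 'https'       # scheme is lower-cased
assert key.key_host == 'example.com'   # host is lower-cased
assert key.key_timeout is None         # unspecified fields default to None

# Two contexts differing only in case produce the same pool key.
same = _default_key_normalizer(PoolKey, {'scheme': 'https', 'host': 'example.com', 'port': 443})
assert key == same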
""" _encode_url_methods = {'DELETE', 'GET', 'HEAD', 'OPTIONS'} def __init__(self, headers=None): self.headers = headers or {} def urlopen(self, method, url, body=None, headers=None, encode_multipart=True, multipart_boundary=None, **kw): # Abstract raise NotImplementedError("Classes extending RequestMethods must implement " "their own ``urlopen`` method.") def request(self, method, url, fields=None, headers=None, **urlopen_kw): """ Make a request using :meth:`urlopen` with the appropriate encoding of ``fields`` based on the ``method`` used. This is a convenience method that requires the least amount of manual effort. It can be used in most situations, while still having the option to drop down to more specific methods when necessary, such as :meth:`request_encode_url`, :meth:`request_encode_body`, or even the lowest level :meth:`urlopen`. """ method = method.upper() urlopen_kw['request_url'] = url if method in self._encode_url_methods: return self.request_encode_url(method, url, fields=fields, headers=headers, **urlopen_kw) else: return self.request_encode_body(method, url, fields=fields, headers=headers, **urlopen_kw) def request_encode_url(self, method, url, fields=None, headers=None, **urlopen_kw): """ Make a request using :meth:`urlopen` with the ``fields`` encoded in the url. This is useful for request methods like GET, HEAD, DELETE, etc. """ if headers is None: headers = self.headers extra_kw = {'headers': headers} extra_kw.update(urlopen_kw) if fields: url += '?' + urlencode(fields) return self.urlopen(method, url, **extra_kw) def request_encode_body(self, method, url, fields=None, headers=None, encode_multipart=True, multipart_boundary=None, **urlopen_kw): """ Make a request using :meth:`urlopen` with the ``fields`` encoded in the body. This is useful for request methods like POST, PUT, PATCH, etc. When ``encode_multipart=True`` (default), then :meth:`urllib3.filepost.encode_multipart_formdata` is used to encode the payload with the appropriate content type. Otherwise :meth:`urllib.urlencode` is used with the 'application/x-www-form-urlencoded' content type. Multipart encoding must be used when posting files, and it's reasonably safe to use it in other times too. However, it may break request signing, such as with OAuth. Supports an optional ``fields`` parameter of key/value strings AND key/filetuple. A filetuple is a (filename, data, MIME type) tuple where the MIME type is optional. For example:: fields = { 'foo': 'bar', 'fakefile': ('foofile.txt', 'contents of foofile'), 'realfile': ('barfile.txt', open('realfile').read()), 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'), 'nonamefile': 'contents of nonamefile field', } When uploading a file, providing a filename (the first parameter of the tuple) is optional but recommended to best mimic behavior of browsers. Note that if ``headers`` are supplied, the 'Content-Type' header will be overwritten because it depends on the dynamic random boundary string which is used to compose the body of the request. The random boundary string can be explicitly set with the ``multipart_boundary`` parameter. 
""" if headers is None: headers = self.headers extra_kw = {'headers': {}} if fields: if 'body' in urlopen_kw: raise TypeError( "request got values for both 'fields' and 'body', can only specify one.") if encode_multipart: body, content_type = encode_multipart_formdata(fields, boundary=multipart_boundary) else: body, content_type = urlencode(fields), 'application/x-www-form-urlencoded' extra_kw['body'] = body extra_kw['headers'] = {'Content-Type': content_type} extra_kw['headers'].update(headers) extra_kw.update(urlopen_kw) return self.urlopen(method, url, **extra_kw) ���������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py����������������������0000644�0001750�0001750�00000065043�00000000000�030261� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from __future__ import absolute_import from contextlib import contextmanager import zlib import io import logging from socket import timeout as SocketTimeout from socket import error as SocketError try: import brotli except ImportError: brotli = None from ._collections import HTTPHeaderDict from .exceptions import ( BodyNotHttplibCompatible, ProtocolError, DecodeError, ReadTimeoutError, ResponseNotChunked, IncompleteRead, InvalidHeader ) from .packages.six import string_types as basestring, PY3 from .packages.six.moves import http_client as httplib from .connection import HTTPException, BaseSSLError from .util.response import is_fp_closed, is_response_to_head log = logging.getLogger(__name__) class DeflateDecoder(object): def __init__(self): self._first_try = True self._data = b'' self._obj = zlib.decompressobj() def __getattr__(self, name): return getattr(self._obj, name) def decompress(self, data): if not data: return data if not self._first_try: return self._obj.decompress(data) self._data += data try: decompressed = self._obj.decompress(data) if decompressed: self._first_try = False self._data = None return decompressed except zlib.error: self._first_try = False self._obj = zlib.decompressobj(-zlib.MAX_WBITS) try: return self.decompress(self._data) finally: 
                self._data = None


class GzipDecoderState(object):

    FIRST_MEMBER = 0
    OTHER_MEMBERS = 1
    SWALLOW_DATA = 2


class GzipDecoder(object):

    def __init__(self):
        self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
        self._state = GzipDecoderState.FIRST_MEMBER

    def __getattr__(self, name):
        return getattr(self._obj, name)

    def decompress(self, data):
        ret = bytearray()
        if self._state == GzipDecoderState.SWALLOW_DATA or not data:
            return bytes(ret)
        while True:
            try:
                ret += self._obj.decompress(data)
            except zlib.error:
                previous_state = self._state
                # Ignore data after the first error
                self._state = GzipDecoderState.SWALLOW_DATA
                if previous_state == GzipDecoderState.OTHER_MEMBERS:
                    # Allow trailing garbage acceptable in other gzip clients
                    return bytes(ret)
                raise
            data = self._obj.unused_data
            if not data:
                return bytes(ret)
            self._state = GzipDecoderState.OTHER_MEMBERS
            self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)


if brotli is not None:
    class BrotliDecoder(object):
        # Supports both 'brotlipy' and 'Brotli' packages
        # since they share an import name. The top branches
        # are for 'brotlipy' and bottom branches for 'Brotli'
        def __init__(self):
            self._obj = brotli.Decompressor()

        def decompress(self, data):
            if hasattr(self._obj, 'decompress'):
                return self._obj.decompress(data)
            return self._obj.process(data)

        def flush(self):
            if hasattr(self._obj, 'flush'):
                return self._obj.flush()
            return b''


class MultiDecoder(object):
    """
    From RFC7231:
        If one or more encodings have been applied to a representation, the
        sender that applied the encodings MUST generate a Content-Encoding
        header field that lists the content codings in the order in which
        they were applied.
    """

    def __init__(self, modes):
        self._decoders = [_get_decoder(m.strip()) for m in modes.split(',')]

    def flush(self):
        return self._decoders[0].flush()

    def decompress(self, data):
        for d in reversed(self._decoders):
            data = d.decompress(data)
        return data


def _get_decoder(mode):
    if ',' in mode:
        return MultiDecoder(mode)

    if mode == 'gzip':
        return GzipDecoder()

    if brotli is not None and mode == 'br':
        return BrotliDecoder()

    return DeflateDecoder()


class HTTPResponse(io.IOBase):
    """
    HTTP Response container.

    Backwards-compatible to httplib's HTTPResponse but the response ``body`` is
    loaded and decoded on-demand when the ``data`` property is accessed.  This
    class is also compatible with the Python standard library's :mod:`io`
    module, and can hence be treated as a readable object in the context of that
    framework.

    Extra parameters for behaviour not present in httplib.HTTPResponse:

    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param original_response:
        When this HTTPResponse wrapper is generated from an
        httplib.HTTPResponse object, it's convenient to include the original
        for debug purposes. It's otherwise unused.

    :param retries:
        The retries contains the last :class:`~urllib3.util.retry.Retry` that
        was used during the request.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
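
    Example (illustrative addition, not in the original docstring; wraps an
    in-memory body rather than a live socket)::

        from io import BytesIO
        resp = HTTPResponse(body=BytesIO(b'hello'), status=200,
                            preload_content=False)
        resp.read(2)   # b'he'
        resp.read()    # b'llo'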
""" CONTENT_DECODERS = ['gzip', 'deflate'] if brotli is not None: CONTENT_DECODERS += ['br'] REDIRECT_STATUSES = [301, 302, 303, 307, 308] def __init__(self, body='', headers=None, status=0, version=0, reason=None, strict=0, preload_content=True, decode_content=True, original_response=None, pool=None, connection=None, msg=None, retries=None, enforce_content_length=False, request_method=None, request_url=None): if isinstance(headers, HTTPHeaderDict): self.headers = headers else: self.headers = HTTPHeaderDict(headers) self.status = status self.version = version self.reason = reason self.strict = strict self.decode_content = decode_content self.retries = retries self.enforce_content_length = enforce_content_length self._decoder = None self._body = None self._fp = None self._original_response = original_response self._fp_bytes_read = 0 self.msg = msg self._request_url = request_url if body and isinstance(body, (basestring, bytes)): self._body = body self._pool = pool self._connection = connection if hasattr(body, 'read'): self._fp = body # Are we using the chunked-style of transfer encoding? self.chunked = False self.chunk_left = None tr_enc = self.headers.get('transfer-encoding', '').lower() # Don't incur the penalty of creating a list and then discarding it encodings = (enc.strip() for enc in tr_enc.split(",")) if "chunked" in encodings: self.chunked = True # Determine length of response self.length_remaining = self._init_length(request_method) # If requested, preload the body. if preload_content and not self._body: self._body = self.read(decode_content=decode_content) def get_redirect_location(self): """ Should we redirect and where to? :returns: Truthy redirect location string if we got a redirect status code and valid location. ``None`` if redirect status and no location. ``False`` if not a redirect status code. """ if self.status in self.REDIRECT_STATUSES: return self.headers.get('location') return False def release_conn(self): if not self._pool or not self._connection: return self._pool._put_conn(self._connection) self._connection = None @property def data(self): # For backwords-compat with earlier urllib3 0.4 and earlier. if self._body: return self._body if self._fp: return self.read(cache_content=True) @property def connection(self): return self._connection def isclosed(self): return is_fp_closed(self._fp) def tell(self): """ Obtain the number of bytes pulled over the wire so far. May differ from the amount of content returned by :meth:``HTTPResponse.read`` if bytes are encoded on the wire (e.g, compressed). """ return self._fp_bytes_read def _init_length(self, request_method): """ Set initial length value for Response content if available. """ length = self.headers.get('content-length') if length is not None: if self.chunked: # This Response will fail with an IncompleteRead if it can't be # received as chunked. This method falls back to attempt reading # the response before raising an exception. log.warning("Received response with both Content-Length and " "Transfer-Encoding set. This is expressly forbidden " "by RFC 7230 sec 3.3.2. Ignoring Content-Length and " "attempting to process response as Transfer-Encoding: " "chunked.") return None try: # RFC 7230 section 3.3.2 specifies multiple content lengths can # be sent in a single Content-Length header # (e.g. Content-Length: 42, 42). This line ensures the values # are all valid ints and that as long as the `set` length is 1, # all values are the same. Otherwise, the header is invalid. 
                lengths = set([int(val) for val in length.split(',')])
                if len(lengths) > 1:
                    raise InvalidHeader("Content-Length contained multiple "
                                        "unmatching values (%s)" % length)
                length = lengths.pop()
            except ValueError:
                length = None
            else:
                if length < 0:
                    length = None

        # Convert status to int for comparison
        # In some cases, httplib returns a status of "_UNKNOWN"
        try:
            status = int(self.status)
        except ValueError:
            status = 0

        # Check for responses that shouldn't include a body
        if status in (204, 304) or 100 <= status < 200 or request_method == 'HEAD':
            length = 0

        return length

    def _init_decoder(self):
        """
        Set-up the _decoder attribute if necessary.
        """
        # Note: content-encoding value should be case-insensitive, per RFC 7230
        # Section 3.2
        content_encoding = self.headers.get('content-encoding', '').lower()
        if self._decoder is None:
            if content_encoding in self.CONTENT_DECODERS:
                self._decoder = _get_decoder(content_encoding)
            elif ',' in content_encoding:
                encodings = [
                    e.strip() for e in content_encoding.split(',')
                    if e.strip() in self.CONTENT_DECODERS]
                if len(encodings):
                    self._decoder = _get_decoder(content_encoding)

    DECODER_ERROR_CLASSES = (IOError, zlib.error)
    if brotli is not None:
        DECODER_ERROR_CLASSES += (brotli.error,)

    def _decode(self, data, decode_content, flush_decoder):
        """
        Decode the data passed in and potentially flush the decoder.
        """
        if not decode_content:
            return data

        try:
            if self._decoder:
                data = self._decoder.decompress(data)
        except self.DECODER_ERROR_CLASSES as e:
            content_encoding = self.headers.get('content-encoding', '').lower()
            raise DecodeError(
                "Received response with content-encoding: %s, but "
                "failed to decode it." % content_encoding, e)
        if flush_decoder:
            data += self._flush_decoder()

        return data

    def _flush_decoder(self):
        """
        Flushes the decoder. Should only be called if the decoder is actually
        being used.
        """
        if self._decoder:
            buf = self._decoder.decompress(b'')
            return buf + self._decoder.flush()

        return b''

    @contextmanager
    def _error_catcher(self):
        """
        Catch low-level python exceptions, instead re-raising urllib3
        variants, so that low-level exceptions are not leaked in the
        high-level api.

        On exit, release the connection back to the pool.
        """
        clean_exit = False

        try:
            try:
                yield

            except SocketTimeout:
                # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but
                # there is yet no clean way to get at it from this context.
                raise ReadTimeoutError(self._pool, None, 'Read timed out.')

            except BaseSSLError as e:
                # FIXME: Is there a better way to differentiate between SSLErrors?
                if 'read operation timed out' not in str(e):  # Defensive:
                    # This shouldn't happen but just in case we're missing an edge
                    # case, let's avoid swallowing SSL errors.
                    raise

                raise ReadTimeoutError(self._pool, None, 'Read timed out.')

            except (HTTPException, SocketError) as e:
                # This includes IncompleteRead.
                raise ProtocolError('Connection broken: %r' % e, e)

            # If no exception is thrown, we should avoid cleaning up
            # unnecessarily.
            clean_exit = True
        finally:
            # If we didn't terminate cleanly, we need to throw away our
            # connection.
            if not clean_exit:
                # The response may not be closed but we're not going to use it
                # anymore so close it now to ensure that the connection is
                # released back to the pool.
                if self._original_response:
                    self._original_response.close()

                # Closing the response may not actually be sufficient to close
                # everything, so if we have a hold of the connection close that
                # too.
                if self._connection:
                    self._connection.close()

            # If we hold the original response but it's closed now, we should
            # return the connection back to the pool.
            if self._original_response and self._original_response.isclosed():
                self.release_conn()

    def read(self, amt=None, decode_content=None, cache_content=False):
        """
        Similar to :meth:`httplib.HTTPResponse.read`, but with two additional
        parameters: ``decode_content`` and ``cache_content``.

        :param amt:
            How much of the content to read. If specified, caching is skipped
            because it doesn't make sense to cache partial content as the full
            response.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param cache_content:
            If True, will save the returned data such that the same result is
            returned despite the state of the underlying file object. This is
            useful if you want the ``.data`` property to continue working after
            having ``.read()`` the file object. (Overridden if ``amt`` is set.)
        """
        self._init_decoder()
        if decode_content is None:
            decode_content = self.decode_content

        if self._fp is None:
            return

        flush_decoder = False
        data = None

        with self._error_catcher():
            if amt is None:
                # cStringIO doesn't like amt=None
                data = self._fp.read()
                flush_decoder = True
            else:
                cache_content = False
                data = self._fp.read(amt)
                if amt != 0 and not data:  # Platform-specific: Buggy versions of Python.
                    # Close the connection when no data is returned
                    #
                    # This is redundant to what httplib/http.client _should_
                    # already do. However, versions of python released before
                    # December 15, 2012 (http://bugs.python.org/issue16298) do
                    # not properly close the connection in all cases. There is
                    # no harm in redundantly calling close.
                    self._fp.close()
                    flush_decoder = True
                    if self.enforce_content_length and self.length_remaining not in (0, None):
                        # This is an edge case that httplib failed to cover due
                        # to concerns of backward compatibility. We're
                        # addressing it here to make sure IncompleteRead is
                        # raised during streaming, so all calls with incorrect
                        # Content-Length are caught.
                        raise IncompleteRead(self._fp_bytes_read, self.length_remaining)

        if data:
            self._fp_bytes_read += len(data)
            if self.length_remaining is not None:
                self.length_remaining -= len(data)

            data = self._decode(data, decode_content, flush_decoder)

            if cache_content:
                self._body = data

        return data

    def stream(self, amt=2**16, decode_content=None):
        """
        A generator wrapper for the read() method. A call will block until
        ``amt`` bytes have been read from the connection or until the
        connection is closed.

        :param amt:
            How much of the content to read. The generator will return up to
            ``amt`` bytes per iteration, but may return less. This is
            particularly likely when using compressed data. However, the empty
            string will never be returned.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        """
        if self.chunked and self.supports_chunked_reads():
            for line in self.read_chunked(amt, decode_content=decode_content):
                yield line
        else:
            while not is_fp_closed(self._fp):
                data = self.read(amt=amt, decode_content=decode_content)

                if data:
                    yield data

    @classmethod
    def from_httplib(ResponseCls, r, **response_kw):
        """
        Given an :class:`httplib.HTTPResponse` instance ``r``, return a
        corresponding :class:`urllib3.response.HTTPResponse` object.

        Remaining parameters are passed to the HTTPResponse constructor, along
        with ``original_response=r``.
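
        Example (illustrative addition, not in the original docstring;
        ``conn`` stands for an ``httplib``/``http.client`` connection that
        has already issued a request)::

            r = conn.getresponse()
            resp = HTTPResponse.from_httplib(r, preload_content=False)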
""" headers = r.msg if not isinstance(headers, HTTPHeaderDict): if PY3: headers = HTTPHeaderDict(headers.items()) else: # Python 2.7 headers = HTTPHeaderDict.from_httplib(headers) # HTTPResponse objects in Python 3 don't have a .strict attribute strict = getattr(r, 'strict', 0) resp = ResponseCls(body=r, headers=headers, status=r.status, version=r.version, reason=r.reason, strict=strict, original_response=r, **response_kw) return resp # Backwards-compatibility methods for httplib.HTTPResponse def getheaders(self): return self.headers def getheader(self, name, default=None): return self.headers.get(name, default) # Backwards compatibility for http.cookiejar def info(self): return self.headers # Overrides from io.IOBase def close(self): if not self.closed: self._fp.close() if self._connection: self._connection.close() @property def closed(self): if self._fp is None: return True elif hasattr(self._fp, 'isclosed'): return self._fp.isclosed() elif hasattr(self._fp, 'closed'): return self._fp.closed else: return True def fileno(self): if self._fp is None: raise IOError("HTTPResponse has no file to get a fileno from") elif hasattr(self._fp, "fileno"): return self._fp.fileno() else: raise IOError("The file-like object this HTTPResponse is wrapped " "around has no file descriptor") def flush(self): if self._fp is not None and hasattr(self._fp, 'flush'): return self._fp.flush() def readable(self): # This method is required for `io` module compatibility. return True def readinto(self, b): # This method is required for `io` module compatibility. temp = self.read(len(b)) if len(temp) == 0: return 0 else: b[:len(temp)] = temp return len(temp) def supports_chunked_reads(self): """ Checks if the underlying file-like object looks like a httplib.HTTPResponse object. We do this by testing for the fp attribute. If it is present we assume it returns raw chunks as processed by read_chunked(). """ return hasattr(self._fp, 'fp') def _update_chunk_length(self): # First, we'll figure out length of a chunk and then # we'll try to read it from socket. if self.chunk_left is not None: return line = self._fp.fp.readline() line = line.split(b';', 1)[0] try: self.chunk_left = int(line, 16) except ValueError: # Invalid chunked protocol response, abort. self.close() raise httplib.IncompleteRead(line) def _handle_chunk(self, amt): returned_chunk = None if amt is None: chunk = self._fp._safe_read(self.chunk_left) returned_chunk = chunk self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. self.chunk_left = None elif amt < self.chunk_left: value = self._fp._safe_read(amt) self.chunk_left = self.chunk_left - amt returned_chunk = value elif amt == self.chunk_left: value = self._fp._safe_read(amt) self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. self.chunk_left = None returned_chunk = value else: # amt > self.chunk_left returned_chunk = self._fp._safe_read(self.chunk_left) self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. self.chunk_left = None return returned_chunk def read_chunked(self, amt=None, decode_content=None): """ Similar to :meth:`HTTPResponse.read`, but with an additional parameter: ``decode_content``. :param amt: How much of the content to read. If specified, caching is skipped because it doesn't make sense to cache partial content as the full response. :param decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. """ self._init_decoder() # FIXME: Rewrite this method and make it a class with a better structured logic. 
        if not self.chunked:
            raise ResponseNotChunked(
                "Response is not chunked. "
                "Header 'transfer-encoding: chunked' is missing.")
        if not self.supports_chunked_reads():
            raise BodyNotHttplibCompatible(
                "Body should be httplib.HTTPResponse like. "
                "It should have an fp attribute which returns raw chunks.")

        with self._error_catcher():
            # Don't bother reading the body of a HEAD request.
            if self._original_response and is_response_to_head(self._original_response):
                self._original_response.close()
                return

            # If a response is already read and closed
            # then return immediately.
            if self._fp.fp is None:
                return

            while True:
                self._update_chunk_length()
                if self.chunk_left == 0:
                    break
                chunk = self._handle_chunk(amt)
                decoded = self._decode(chunk, decode_content=decode_content,
                                       flush_decoder=False)
                if decoded:
                    yield decoded

            if decode_content:
                # On CPython and PyPy, we should never need to flush the
                # decoder. However, on Jython we *might* need to, so
                # lets defensively do it anyway.
                decoded = self._flush_decoder()
                if decoded:  # Platform-specific: Jython.
                    yield decoded

            # Chunk content ends with \r\n: discard it.
            while True:
                line = self._fp.fp.readline()
                if not line:
                    # Some sites may not end with '\r\n'.
                    break
                if line == b'\r\n':
                    break

            # We read everything; close the "file".
            if self._original_response:
                self._original_response.close()

    def geturl(self):
        """
        Returns the URL that was the source of this response.
        If the request that generated this response redirected, this method
        will return the final redirect location.
        """
        if self.retries is not None and len(self.retries.history):
            return self.retries.history[-1].redirect_location
        else:
            return self._request_url

    def __iter__(self):
        buffer = [b""]
        for chunk in self.stream(decode_content=True):
            if b"\n" in chunk:
                chunk = chunk.split(b"\n")
                yield b"".join(buffer) + chunk[0] + b"\n"
                for x in chunk[1:-1]:
                    yield x + b"\n"
                if chunk[-1]:
                    buffer = [chunk[-1]]
                else:
                    buffer = []
            else:
                buffer.append(chunk)
        if buffer:
            yield b"".join(buffer)
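
# --- Illustrative sketch (editor addition, not part of the original module):
# exercises HTTPResponse's on-demand read machinery against an in-memory
# body instead of a live socket.
if __name__ == '__main__':
    from io import BytesIO

    resp = HTTPResponse(body=BytesIO(b'alpha\nbeta\ngamma'), status=200,
                        preload_content=False)
    # __iter__ splits the streamed body on newlines:
    assert list(resp) == [b'alpha\n', b'beta\n', b'gamma']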
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/__init__.py =====

from __future__ import absolute_import
# For backwards compatibility, provide imports that used to be here.
from .connection import is_connection_dropped
from .request import make_headers
from .response import is_fp_closed
from .ssl_ import (
    SSLContext,
    HAS_SNI,
    IS_PYOPENSSL,
    IS_SECURETRANSPORT,
    assert_fingerprint,
    resolve_cert_reqs,
    resolve_ssl_version,
    ssl_wrap_socket,
    PROTOCOL_TLS,
)
from .timeout import (
    current_time,
    Timeout,
)
from .retry import Retry
from .url import (
    get_host,
    parse_url,
    split_first,
    Url,
)
from .wait import (
    wait_for_read,
    wait_for_write
)

__all__ = (
    'HAS_SNI',
    'IS_PYOPENSSL',
    'IS_SECURETRANSPORT',
    'SSLContext',
    'PROTOCOL_TLS',
    'Retry',
    'Timeout',
    'Url',
    'assert_fingerprint',
    'current_time',
    'is_connection_dropped',
    'is_fp_closed',
    'get_host',
    'parse_url',
    'make_headers',
    'resolve_cert_reqs',
    'resolve_ssl_version',
    'split_first',
    'ssl_wrap_socket',
    'wait_for_read',
    'wait_for_write'
)


# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/connection.py =====

from __future__ import absolute_import
import socket
from .wait import NoWayToWaitForSocketError, wait_for_read
from ..contrib import _appengine_environ


def is_connection_dropped(conn):  # Platform-specific
    """
    Returns True if the connection is dropped and should be closed.

    :param conn:
        :class:`httplib.HTTPConnection` object.

    Note: For platforms like AppEngine, this will always return ``False`` to
    let the platform handle connection recycling transparently for us.
    """
    sock = getattr(conn, 'sock', False)
    if sock is False:  # Platform-specific: AppEngine
        return False
    if sock is None:  # Connection already closed (such as by httplib).
        return True
    try:
        # Returns True if readable, which here means it's been dropped
        return wait_for_read(sock, timeout=0.0)
    except NoWayToWaitForSocketError:  # Platform-specific: AppEngine
        return False


# This function is copied from socket.py in the Python 2.7 standard
# library test suite. Added to its signature is only `socket_options`.
# One additional modification is that we avoid binding to IPv6 servers
# discovered in DNS if the system doesn't have IPv6 functionality.
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                      source_address=None, socket_options=None):
    """Connect to *address* and return the socket object.

    Convenience function.  Connect to *address* (a 2-tuple ``(host, port)``)
    and return the socket object.  Passing the optional *timeout* parameter
    will set the timeout on the socket instance before attempting to connect.
    If no *timeout* is supplied, the global default timeout setting returned
    by :func:`getdefaulttimeout` is used.  If *source_address* is set it must
    be a tuple of (host, port) for the socket to bind as a source address
    before making the connection.  A host of '' or port 0 tells the OS to use
    the default.
    """

    host, port = address
    if host.startswith('['):
        host = host.strip('[]')
    err = None

    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
    # The original create_connection function always returns all records.
    family = allowed_gai_family()

    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
        af, socktype, proto, canonname, sa = res
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)

            # If provided, set socket level options before connecting.
            _set_socket_options(sock, socket_options)

            if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                sock.settimeout(timeout)
            if source_address:
                sock.bind(source_address)
            sock.connect(sa)
            return sock

        except socket.error as e:
            err = e
            if sock is not None:
                sock.close()
                sock = None

    if err is not None:
        raise err

    raise socket.error("getaddrinfo returns an empty list")


def _set_socket_options(sock, options):
    if options is None:
        return

    for opt in options:
        sock.setsockopt(*opt)


def allowed_gai_family():
    """This function is designed to work in the context of
    getaddrinfo, where family=socket.AF_UNSPEC is the default and
    will perform a DNS search for both IPv6 and IPv4 records."""

    family = socket.AF_INET
    if HAS_IPV6:
        family = socket.AF_UNSPEC
    return family


def _has_ipv6(host):
    """ Returns True if the system can bind an IPv6 address. """
    sock = None
    has_ipv6 = False

    # App Engine doesn't support IPV6 sockets and actually has a quota on the
    # number of sockets that can be used, so just early out here instead of
    # creating a socket needlessly.
    # See https://github.com/urllib3/urllib3/issues/1446
    if _appengine_environ.is_appengine_sandbox():
        return False

    if socket.has_ipv6:
        # has_ipv6 returns true if cPython was compiled with IPv6 support.
        # It does not tell us if the system has IPv6 support enabled. To
        # determine that we must bind to an IPv6 address.
        # https://github.com/shazow/urllib3/pull/611
        # https://bugs.python.org/issue658327
        try:
            sock = socket.socket(socket.AF_INET6)
            sock.bind((host, 0))
            has_ipv6 = True
        except Exception:
            pass

    if sock:
        sock.close()
    return has_ipv6


HAS_IPV6 = _has_ipv6('::1')


# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/queue.py =====

import collections
from ..packages import six
from ..packages.six.moves import queue

if six.PY2:
    # Queue is imported for side effects on MS Windows. See issue #229.
    import Queue as _unused_module_Queue  # noqa: F401


class LifoQueue(queue.Queue):
    def _init(self, _):
        self.queue = collections.deque()

    def _qsize(self, len=len):
        return len(self.queue)

    def _put(self, item):
        self.queue.append(item)

    def _get(self):
        return self.queue.pop()


# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/request.py =====

from __future__ import absolute_import
from base64 import b64encode

from ..packages.six import b, integer_types
from ..exceptions import UnrewindableBodyError

ACCEPT_ENCODING = 'gzip,deflate'
try:
    import brotli as _unused_module_brotli  # noqa: F401
except ImportError:
    pass
else:
    ACCEPT_ENCODING += ',br'

_FAILEDTELL = object()


def make_headers(keep_alive=None, accept_encoding=None, user_agent=None,
                 basic_auth=None, proxy_basic_auth=None, disable_cache=None):
    """
    Shortcuts for generating request headers.

    :param keep_alive:
        If ``True``, adds 'connection: keep-alive' header.

    :param accept_encoding:
        Can be a boolean, list, or string.
        ``True`` translates to 'gzip,deflate'.
        List will get joined by comma.
        String will be used as provided.

    :param user_agent:
        String representing the user-agent you want, such as
        "python-urllib3/0.6"

    :param basic_auth:
        Colon-separated username:password string for 'authorization: basic ...'
        auth header.

    :param proxy_basic_auth:
        Colon-separated username:password string for
        'proxy-authorization: basic ...' auth header.

    :param disable_cache:
        If ``True``, adds 'cache-control: no-cache' header.
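
    Example with basic auth (an illustrative addition to the original
    examples below)::

        >>> make_headers(basic_auth='user:passwd')
        {'authorization': 'Basic dXNlcjpwYXNzd2Q='}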

    Example::

        >>> make_headers(keep_alive=True, user_agent="Batman/1.0")
        {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'}
        >>> make_headers(accept_encoding=True)
        {'accept-encoding': 'gzip,deflate'}
    """
    headers = {}
    if accept_encoding:
        if isinstance(accept_encoding, str):
            pass
        elif isinstance(accept_encoding, list):
            accept_encoding = ','.join(accept_encoding)
        else:
            accept_encoding = ACCEPT_ENCODING
        headers['accept-encoding'] = accept_encoding

    if user_agent:
        headers['user-agent'] = user_agent

    if keep_alive:
        headers['connection'] = 'keep-alive'

    if basic_auth:
        headers['authorization'] = 'Basic ' + \
            b64encode(b(basic_auth)).decode('utf-8')

    if proxy_basic_auth:
        headers['proxy-authorization'] = 'Basic ' + \
            b64encode(b(proxy_basic_auth)).decode('utf-8')

    if disable_cache:
        headers['cache-control'] = 'no-cache'

    return headers


def set_file_position(body, pos):
    """
    If a position is provided, move file to that point.
    Otherwise, we'll attempt to record a position for future use.
    """
    if pos is not None:
        rewind_body(body, pos)
    elif getattr(body, 'tell', None) is not None:
        try:
            pos = body.tell()
        except (IOError, OSError):
            # This differentiates from None, allowing us to catch
            # a failed `tell()` later when trying to rewind the body.
            pos = _FAILEDTELL

    return pos


def rewind_body(body, body_pos):
    """
    Attempt to rewind body to a certain position.
    Primarily used for request redirects and retries.

    :param body:
        File-like object that supports seek.

    :param int body_pos:
        Position to seek to in file.
    """
    body_seek = getattr(body, 'seek', None)
    if body_seek is not None and isinstance(body_pos, integer_types):
        try:
            body_seek(body_pos)
        except (IOError, OSError):
            raise UnrewindableBodyError("An error occurred when rewinding request "
                                        "body for redirect/retry.")
    elif body_pos is _FAILEDTELL:
        raise UnrewindableBodyError("Unable to record file position for rewinding "
                                    "request body during a redirect/retry.")
    else:
        raise ValueError("body_pos must be of type integer, "
                         "instead it was %s."
                         % type(body_pos))


# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/response.py =====

from __future__ import absolute_import
from ..packages.six.moves import http_client as httplib

from ..exceptions import HeaderParsingError


def is_fp_closed(obj):
    """
    Checks whether a given file-like object is closed.

    :param obj:
        The file-like object to check.
    """

    try:
        # Check `isclosed()` first, in case Python3 doesn't set `closed`.
        # GH Issue #928
        return obj.isclosed()
    except AttributeError:
        pass

    try:
        # Check via the official file-like-object way.
        return obj.closed
    except AttributeError:
        pass

    try:
        # Check if the object is a container for another file-like object that
        # gets released on exhaustion (e.g. HTTPResponse).
        return obj.fp is None
    except AttributeError:
        pass

    raise ValueError("Unable to determine whether fp is closed.")


def assert_header_parsing(headers):
    """
    Asserts whether all headers have been successfully parsed.
    Extracts encountered errors from the result of parsing headers.

    Only works on Python 3.

    :param headers: Headers to verify.
    :type headers: `httplib.HTTPMessage`.

    :raises urllib3.exceptions.HeaderParsingError:
        If parsing errors are found.
    """

    # This will fail silently if we pass in the wrong kind of parameter.
    # To make debugging easier add an explicit check.
    if not isinstance(headers, httplib.HTTPMessage):
        raise TypeError('expected httplib.Message, got {0}.'.format(
            type(headers)))

    defects = getattr(headers, 'defects', None)
    get_payload = getattr(headers, 'get_payload', None)

    unparsed_data = None
    if get_payload:
        # get_payload is actually email.message.Message.get_payload;
        # we're only interested in the result if it's not a multipart message
        if not headers.is_multipart():
            payload = get_payload()

            if isinstance(payload, (bytes, str)):
                unparsed_data = payload

    if defects or unparsed_data:
        raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)


def is_response_to_head(response):
    """
    Checks whether the request of a response has been a HEAD-request.
    Handles the quirks of AppEngine.

    :param response:
        Response to check whether it corresponds to a HEAD request.
    :type response: :class:`httplib.HTTPResponse`
    """
    # FIXME: Can we do this somehow without accessing private httplib _method?
    method = response._method
    if isinstance(method, int):  # Platform-specific: Appengine
        return method == 3
    return method.upper() == 'HEAD'


# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/retry.py =====

from __future__ import absolute_import
import time
import logging
from collections import namedtuple
from itertools import takewhile
import email
import re

from ..exceptions import (
    ConnectTimeoutError,
    MaxRetryError,
    ProtocolError,
    ReadTimeoutError,
    ResponseError,
    InvalidHeader,
)
from ..packages import six


log = logging.getLogger(__name__)


# Data structure for representing the metadata of requests that result in a retry.
RequestHistory = namedtuple('RequestHistory', ["method", "url", "error",
                                               "status", "redirect_location"])


class Retry(object):
    """ Retry configuration.

    Each retry attempt will create a new Retry object with updated values, so
    they can be safely reused.

    Retries can be defined as a default for a pool::

        retries = Retry(connect=5, read=2, redirect=5)
        http = PoolManager(retries=retries)
        response = http.request('GET', 'http://example.com/')

    Or per-request (which overrides the default for the pool)::

        response = http.request('GET', 'http://example.com/', retries=Retry(10))

    Retries can be disabled by passing ``False``::

        response = http.request('GET', 'http://example.com/', retries=False)

    Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless
    retries are disabled, in which case the causing exception will be raised.

    :param int total:
        Total number of retries to allow. Takes precedence over other counts.

        Set to ``None`` to remove this constraint and fall back on other
        counts. It's a good idea to set this to some sensibly-high value to
        account for unexpected edge cases and avoid infinite retry loops.

        Set to ``0`` to fail on the first retry.

        Set to ``False`` to disable and imply ``raise_on_redirect=False``.

    :param int connect:
        How many connection-related errors to retry on.

        These are errors raised before the request is sent to the remote
        server, which we assume has not triggered the server to process the
        request.

        Set to ``0`` to fail on the first retry of this type.

    :param int read:
        How many times to retry on read errors.

        These errors are raised after the request was sent to the server, so
        the request may have side-effects.

        Set to ``0`` to fail on the first retry of this type.

    :param int redirect:
        How many redirects to perform. Limit this to avoid infinite redirect
        loops.

        A redirect is an HTTP response with a status code 301, 302, 303, 307
        or 308.

        Set to ``0`` to fail on the first retry of this type.

        Set to ``False`` to disable and imply ``raise_on_redirect=False``.

    :param int status:
        How many times to retry on bad status codes.

        These are retries made on responses, where status code matches
        ``status_forcelist``.

        Set to ``0`` to fail on the first retry of this type.

    :param iterable method_whitelist:
        Set of uppercased HTTP method verbs that we should retry on.

        By default, we only retry on methods which are considered to be
        idempotent (multiple requests with the same parameters end with the
        same state). See :attr:`Retry.DEFAULT_METHOD_WHITELIST`.

        Set to a ``False`` value to retry on any verb.

    :param iterable status_forcelist:
        A set of integer HTTP status codes that we should force a retry on.
        A retry is initiated if the request method is in ``method_whitelist``
        and the response status code is in ``status_forcelist``.

        By default, this is disabled with ``None``.

    :param float backoff_factor:
        A backoff factor to apply between attempts after the second try
        (most errors are resolved immediately by a second try without a
        delay). urllib3 will sleep for::

            {backoff factor} * (2 ** ({number of total retries} - 1))

        seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep
        for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer
        than :attr:`Retry.BACKOFF_MAX`.

        By default, backoff is disabled (set to 0).

    :param bool raise_on_redirect: Whether, if the number of redirects is
        exhausted, to raise a MaxRetryError, or to return a response with a
        response code in the 3xx range.
    :param bool raise_on_status: Similar meaning to ``raise_on_redirect``:
        whether we should raise an exception, or return a response,
        if status falls in ``status_forcelist`` range and retries have
        been exhausted.

    :param tuple history: The history of the request encountered during
        each call to :meth:`~Retry.increment`. The list is in the order
        the requests occurred. Each list item is of class
        :class:`RequestHistory`.

    :param bool respect_retry_after_header:
        Whether to respect Retry-After header on status codes defined as
        :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not.

    :param iterable remove_headers_on_redirect:
        Sequence of headers to remove from the request when a response
        indicating a redirect is returned before firing off the redirected
        request.
    """

    DEFAULT_METHOD_WHITELIST = frozenset([
        'HEAD', 'GET', 'PUT', 'DELETE', 'OPTIONS', 'TRACE'])

    RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503])

    DEFAULT_REDIRECT_HEADERS_BLACKLIST = frozenset(['Authorization'])

    #: Maximum backoff time.
    BACKOFF_MAX = 120

    def __init__(self, total=10, connect=None, read=None, redirect=None, status=None,
                 method_whitelist=DEFAULT_METHOD_WHITELIST, status_forcelist=None,
                 backoff_factor=0, raise_on_redirect=True, raise_on_status=True,
                 history=None, respect_retry_after_header=True,
                 remove_headers_on_redirect=DEFAULT_REDIRECT_HEADERS_BLACKLIST):

        self.total = total
        self.connect = connect
        self.read = read
        self.status = status

        if redirect is False or total is False:
            redirect = 0
            raise_on_redirect = False

        self.redirect = redirect
        self.status_forcelist = status_forcelist or set()
        self.method_whitelist = method_whitelist
        self.backoff_factor = backoff_factor
        self.raise_on_redirect = raise_on_redirect
        self.raise_on_status = raise_on_status
        self.history = history or tuple()
        self.respect_retry_after_header = respect_retry_after_header
        self.remove_headers_on_redirect = frozenset([
            h.lower() for h in remove_headers_on_redirect])

    def new(self, **kw):
        params = dict(
            total=self.total,
            connect=self.connect, read=self.read, redirect=self.redirect,
            status=self.status,
            method_whitelist=self.method_whitelist,
            status_forcelist=self.status_forcelist,
            backoff_factor=self.backoff_factor,
            raise_on_redirect=self.raise_on_redirect,
            raise_on_status=self.raise_on_status,
            history=self.history,
            remove_headers_on_redirect=self.remove_headers_on_redirect
        )
        params.update(kw)
        return type(self)(**params)

    @classmethod
    def from_int(cls, retries, redirect=True, default=None):
        """ Backwards-compatibility for the old retries format."""
        if retries is None:
            retries = default if default is not None else cls.DEFAULT

        if isinstance(retries, Retry):
            return retries

        redirect = bool(redirect) and None
        new_retries = cls(retries, redirect=redirect)
        log.debug("Converted retries value: %r -> %r", retries, new_retries)
        return new_retries

    def get_backoff_time(self):
        """ Formula for computing the current backoff

        :rtype: float
        """
        # We want to consider only the last consecutive errors sequence (Ignore redirects).
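        # Worked example (editor addition, illustrative): with
        # backoff_factor=0.2 and three consecutive errors in the history,
        # consecutive_errors_len is 3, so the backoff computed below is
        # 0.2 * (2 ** (3 - 1)) = 0.8 seconds, capped at BACKOFF_MAX.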
        consecutive_errors_len = len(list(takewhile(
            lambda x: x.redirect_location is None,
            reversed(self.history))))
        if consecutive_errors_len <= 1:
            return 0

        backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1))
        return min(self.BACKOFF_MAX, backoff_value)

    def parse_retry_after(self, retry_after):
        # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4
        if re.match(r"^\s*[0-9]+\s*$", retry_after):
            seconds = int(retry_after)
        else:
            retry_date_tuple = email.utils.parsedate(retry_after)
            if retry_date_tuple is None:
                raise InvalidHeader("Invalid Retry-After header: %s" % retry_after)
            retry_date = time.mktime(retry_date_tuple)
            seconds = retry_date - time.time()

        if seconds < 0:
            seconds = 0

        return seconds

    def get_retry_after(self, response):
        """ Get the value of Retry-After in seconds. """

        retry_after = response.getheader("Retry-After")

        if retry_after is None:
            return None

        return self.parse_retry_after(retry_after)

    def sleep_for_retry(self, response=None):
        retry_after = self.get_retry_after(response)
        if retry_after:
            time.sleep(retry_after)
            return True

        return False

    def _sleep_backoff(self):
        backoff = self.get_backoff_time()
        if backoff <= 0:
            return
        time.sleep(backoff)

    def sleep(self, response=None):
        """ Sleep between retry attempts.

        This method will respect a server's ``Retry-After`` response header
        and sleep the duration of the time requested. If that is not present,
        it will use an exponential backoff. By default, the backoff factor is
        0 and this method will return immediately.
        """

        if response:
            slept = self.sleep_for_retry(response)
            if slept:
                return

        self._sleep_backoff()

    def _is_connection_error(self, err):
        """ Errors when we're fairly sure that the server did not receive the
        request, so it should be safe to retry.
        """
        return isinstance(err, ConnectTimeoutError)

    def _is_read_error(self, err):
        """ Errors that occur after the request has been started, so we should
        assume that the server began processing it.
        """
        return isinstance(err, (ReadTimeoutError, ProtocolError))

    def _is_method_retryable(self, method):
        """ Checks if a given HTTP method should be retried upon, depending if
        it is included on the method whitelist.
        """
        if self.method_whitelist and method.upper() not in self.method_whitelist:
            return False

        return True

    def is_retry(self, method, status_code, has_retry_after=False):
        """ Is this method/status code retryable? (Based on whitelists and
        control variables such as the number of total retries to allow,
        whether to respect the Retry-After header, whether this header is
        present, and whether the returned status code is on the list of
        status codes to be retried upon on the presence of the aforementioned
        header)
        """
        if not self._is_method_retryable(method):
            return False

        if self.status_forcelist and status_code in self.status_forcelist:
            return True

        return (self.total and self.respect_retry_after_header and
                has_retry_after and (status_code in self.RETRY_AFTER_STATUS_CODES))

    def is_exhausted(self):
        """ Are we out of retries? """
        retry_counts = (self.total, self.connect, self.read, self.redirect, self.status)
        retry_counts = list(filter(None, retry_counts))
        if not retry_counts:
            return False

        return min(retry_counts) < 0

    def increment(self, method=None, url=None, response=None, error=None,
                  _pool=None, _stacktrace=None):
        """ Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.HTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise six.reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        cause = 'unknown'
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise six.reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or not self._is_method_retryable(method):
                raise six.reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = 'too many redirects'
            redirect_location = response.get_redirect_location()
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the whitelist
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(
                    status_code=response.status)
                status = response.status

        history = self.history + (RequestHistory(method, url, error, status,
                                                 redirect_location),)

        new_retry = self.new(
            total=total,
            connect=connect, read=read, redirect=redirect, status=status_count,
            history=history)

        if new_retry.is_exhausted():
            raise MaxRetryError(_pool, url, error or ResponseError(cause))

        log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)

        return new_retry

    def __repr__(self):
        return ('{cls.__name__}(total={self.total}, connect={self.connect}, '
                'read={self.read}, redirect={self.redirect}, status={self.status})').format(
                    cls=type(self), self=self)


# For backwards compatibility (equivalent to pre-v1.9):
Retry.DEFAULT = Retry(3)
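
# --- Illustrative usage sketch (editor addition, not part of the original
# module). It mirrors the examples in the Retry docstring above; the target
# URL is a placeholder and the request requires network access.
if __name__ == '__main__':
    from pip._vendor.urllib3 import PoolManager

    retries = Retry(total=5, backoff_factor=0.2, status_forcelist=[502, 503])
    http = PoolManager(retries=retries)
    response = http.request('GET', 'http://example.com/')
    print(response.status)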
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/ssl_.py =====
from __future__ import absolute_import
import errno
import warnings
import hmac
import re

from binascii import hexlify, unhexlify
from hashlib import md5, sha1, sha256

from ..exceptions import SSLError, InsecurePlatformWarning, SNIMissingWarning
from ..packages import six
from ..packages.rfc3986 import abnf_regexp


SSLContext = None
HAS_SNI = False
IS_PYOPENSSL = False
IS_SECURETRANSPORT = False

# Maps the length of a digest to a possible hash function producing this digest
HASHFUNC_MAP = {
    32: md5,
    40: sha1,
    64: sha256,
}


def _const_compare_digest_backport(a, b):
    """Compare two digests of equal length in constant time.

    The digests must be of type str/bytes.
    Returns True if the digests match, and False otherwise.
    """
    result = abs(len(a) - len(b))
    for l, r in zip(bytearray(a), bytearray(b)):
        result |= l ^ r
    return result == 0


_const_compare_digest = getattr(hmac, 'compare_digest',
                                _const_compare_digest_backport)

# Borrow rfc3986's regular expressions for IPv4
# and IPv6 addresses for use in is_ipaddress()
_IP_ADDRESS_REGEX = re.compile(
    r'^(?:%s|%s|%s)$' % (
        abnf_regexp.IPv4_RE,
        abnf_regexp.IPv6_RE,
        abnf_regexp.IPv6_ADDRZ_RFC4007_RE
    )
)

try:  # Test for SSL features
    import ssl
    from ssl import wrap_socket, CERT_REQUIRED
    from ssl import HAS_SNI  # Has SNI?
except ImportError:
    pass

try:  # Platform-specific: Python 3.6
    from ssl import PROTOCOL_TLS
    PROTOCOL_SSLv23 = PROTOCOL_TLS
except ImportError:
    try:
        from ssl import PROTOCOL_SSLv23 as PROTOCOL_TLS
        PROTOCOL_SSLv23 = PROTOCOL_TLS
    except ImportError:
        PROTOCOL_SSLv23 = PROTOCOL_TLS = 2

try:
    from ssl import OP_NO_SSLv2, OP_NO_SSLv3, OP_NO_COMPRESSION
except ImportError:
    OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000
    OP_NO_COMPRESSION = 0x20000


# A secure default.
# Sources for more information on TLS ciphers:
#
# - https://wiki.mozilla.org/Security/Server_Side_TLS
# - https://www.ssllabs.com/projects/best-practices/index.html
# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
#
# The general intent is:
# - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
# - prefer ECDHE over DHE for better performance,
# - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance
#   and security,
# - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common,
# - disable NULL authentication, MD5 MACs, DSS, and other insecure ciphers
#   for security reasons.
# - NOTE: TLS 1.3 cipher suites are managed through a different interface
#   not exposed by CPython (yet!) and are enabled by default if they're
#   available.
DEFAULT_CIPHERS = ':'.join([
    'ECDHE+AESGCM',
    'ECDHE+CHACHA20',
    'DHE+AESGCM',
    'DHE+CHACHA20',
    'ECDH+AESGCM',
    'DH+AESGCM',
    'ECDH+AES',
    'DH+AES',
    'RSA+AESGCM',
    'RSA+AES',
    '!aNULL',
    '!eNULL',
    '!MD5',
    '!DSS',
])

try:
    from ssl import SSLContext  # Modern SSL?
except ImportError:
    class SSLContext(object):  # Platform-specific: Python 2
        def __init__(self, protocol_version):
            self.protocol = protocol_version
            # Use default values from a real SSLContext
            self.check_hostname = False
            self.verify_mode = ssl.CERT_NONE
            self.ca_certs = None
            self.options = 0
            self.certfile = None
            self.keyfile = None
            self.ciphers = None

        def load_cert_chain(self, certfile, keyfile):
            self.certfile = certfile
            self.keyfile = keyfile

        def load_verify_locations(self, cafile=None, capath=None):
            self.ca_certs = cafile

            if capath is not None:
                raise SSLError("CA directories not supported in older Pythons")

        def set_ciphers(self, cipher_suite):
            self.ciphers = cipher_suite

        def wrap_socket(self, socket, server_hostname=None, server_side=False):
            warnings.warn(
                'A true SSLContext object is not available. This prevents '
                'urllib3 from configuring SSL appropriately and may cause '
                'certain SSL connections to fail. You can upgrade to a newer '
                'version of Python to solve this. For more information, see '
                'https://urllib3.readthedocs.io/en/latest/advanced-usage.html'
                '#ssl-warnings',
                InsecurePlatformWarning
            )
            kwargs = {
                'keyfile': self.keyfile,
                'certfile': self.certfile,
                'ca_certs': self.ca_certs,
                'cert_reqs': self.verify_mode,
                'ssl_version': self.protocol,
                'server_side': server_side,
            }
            return wrap_socket(socket, ciphers=self.ciphers, **kwargs)


def assert_fingerprint(cert, fingerprint):
    """Checks if given fingerprint matches the supplied certificate.

    :param cert:
        Certificate as bytes object.
    :param fingerprint:
        Fingerprint as string of hexdigits, can be interspersed by colons.
    """
    fingerprint = fingerprint.replace(':', '').lower()
    digest_length = len(fingerprint)
    hashfunc = HASHFUNC_MAP.get(digest_length)
    if not hashfunc:
        raise SSLError(
            'Fingerprint of invalid length: {0}'.format(fingerprint))

    # We need encode() here for py32; works on py2 and py33.
    fingerprint_bytes = unhexlify(fingerprint.encode())

    cert_digest = hashfunc(cert).digest()

    if not _const_compare_digest(cert_digest, fingerprint_bytes):
        raise SSLError('Fingerprints did not match. Expected "{0}", got "{1}".'
                       .format(fingerprint, hexlify(cert_digest)))


def resolve_cert_reqs(candidate):
    """Resolves the argument to a numeric constant, which can be passed to
    the wrap_socket function/method from the ssl module. Defaults to
    :data:`ssl.CERT_REQUIRED`. If given a string it is assumed to be the name
    of the constant in the :mod:`ssl` module or its abbreviation. (So you can
    specify `REQUIRED` instead of `CERT_REQUIRED`.) If it's neither `None`
    nor a string we assume it is already the numeric constant which can
    directly be passed to wrap_socket.
    """
    if candidate is None:
        return CERT_REQUIRED

    if isinstance(candidate, str):
        res = getattr(ssl, candidate, None)
        if res is None:
            res = getattr(ssl, 'CERT_' + candidate)
        return res

    return candidate


def resolve_ssl_version(candidate):
    """Like resolve_cert_reqs."""
    if candidate is None:
        return PROTOCOL_TLS

    if isinstance(candidate, str):
        res = getattr(ssl, candidate, None)
        if res is None:
            res = getattr(ssl, 'PROTOCOL_' + candidate)
        return res

    return candidate
def create_urllib3_context(ssl_version=None, cert_reqs=None,
                           options=None, ciphers=None):
    """All arguments have the same meaning as ``ssl_wrap_socket``.

    By default, this function does a lot of the same work that
    ``ssl.create_default_context`` does on Python 3.4+. It:

    - Disables SSLv2, SSLv3, and compression
    - Sets a restricted set of server ciphers

    If you wish to enable SSLv3, you can do::

        from pip._vendor.urllib3.util import ssl_
        context = ssl_.create_urllib3_context()
        context.options &= ~ssl_.OP_NO_SSLv3

    You can do the same to enable compression (substituting ``COMPRESSION``
    for ``SSLv3`` in the last line above).

    :param ssl_version:
        The desired protocol version to use. This will default to
        PROTOCOL_SSLv23 which will negotiate the highest protocol that both
        the server and your installation of OpenSSL support.
    :param cert_reqs:
        Whether to require certificate verification. This defaults to
        ``ssl.CERT_REQUIRED``.
    :param options:
        Specific OpenSSL options. These default to ``ssl.OP_NO_SSLv2``,
        ``ssl.OP_NO_SSLv3``, ``ssl.OP_NO_COMPRESSION``.
    :param ciphers:
        Which cipher suites to allow the server to select.
    :returns:
        Constructed SSLContext object with specified options
    :rtype: SSLContext
    """
    context = SSLContext(ssl_version or PROTOCOL_TLS)

    context.set_ciphers(ciphers or DEFAULT_CIPHERS)

    # Setting the default here, as we may have no ssl module on import
    cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs

    if options is None:
        options = 0
        # SSLv2 is easily broken and is considered harmful and dangerous
        options |= OP_NO_SSLv2
        # SSLv3 has several problems and is now dangerous
        options |= OP_NO_SSLv3
        # Disable compression to prevent CRIME attacks for OpenSSL 1.0+
        # (issue #309)
        options |= OP_NO_COMPRESSION

    context.options |= options

    context.verify_mode = cert_reqs
    if getattr(context, 'check_hostname', None) is not None:
        # Platform-specific: Python 3.2
        # We do our own verification, including fingerprints and alternative
        # hostnames. So disable it here
        context.check_hostname = False
    return context


def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
                    ca_certs=None, server_hostname=None,
                    ssl_version=None, ciphers=None, ssl_context=None,
                    ca_cert_dir=None, key_password=None):
    """All arguments except for server_hostname, ssl_context, and ca_cert_dir
    have the same meaning as they do when using :func:`ssl.wrap_socket`.

    :param server_hostname:
        When SNI is supported, the expected hostname of the certificate
    :param ssl_context:
        A pre-made :class:`SSLContext` object. If none is provided, one will
        be created using :func:`create_urllib3_context`.
    :param ciphers:
        A string of ciphers we wish the client to support.
    :param ca_cert_dir:
        A directory containing CA certificates in multiple separate files, as
        supported by OpenSSL's -CApath flag or the capath argument to
        SSLContext.load_verify_locations().
    :param key_password:
        Optional password if the keyfile is encrypted.
    """
    context = ssl_context
    if context is None:
        # Note: This branch of code and all the variables in it are no longer
        # used by urllib3 itself. We should consider deprecating and removing
        # this code.
        context = create_urllib3_context(ssl_version, cert_reqs,
                                         ciphers=ciphers)

    if ca_certs or ca_cert_dir:
        try:
            context.load_verify_locations(ca_certs, ca_cert_dir)
        except IOError as e:  # Platform-specific: Python 2.7
            raise SSLError(e)
        # Py33 raises FileNotFoundError which subclasses OSError
        # These are not equivalent unless we check the errno attribute
        except OSError as e:  # Platform-specific: Python 3.3 and beyond
            if e.errno == errno.ENOENT:
                raise SSLError(e)
            raise
    elif ssl_context is None and hasattr(context, 'load_default_certs'):
        # try to load OS default certs; works well on Windows
        # (requires Python 3.4+)
        context.load_default_certs()

    # Attempt to detect if we get the goofy behavior of the
    # keyfile being encrypted and OpenSSL asking for the
    # passphrase via the terminal and instead error out.
    if keyfile and key_password is None and _is_key_file_encrypted(keyfile):
        raise SSLError("Client private key is encrypted, password is required")

    if certfile:
        if key_password is None:
            context.load_cert_chain(certfile, keyfile)
        else:
            context.load_cert_chain(certfile, keyfile, key_password)

    # If we detect server_hostname is an IP address then the SNI
    # extension should not be used according to RFC 3546 Section 3.1
    # We shouldn't warn the user if SNI isn't available but we would
    # not be using SNI anyways due to IP address for server_hostname.
    if ((server_hostname is not None and not is_ipaddress(server_hostname))
            or IS_SECURETRANSPORT):
        if HAS_SNI and server_hostname is not None:
            return context.wrap_socket(sock, server_hostname=server_hostname)

        warnings.warn(
            'An HTTPS request has been made, but the SNI (Server Name '
            'Indication) extension to TLS is not available on this platform. '
            'This may cause the server to present an incorrect TLS '
            'certificate, which can cause validation failures. You can '
            'upgrade to a newer version of Python to solve this. For more '
            'information, see '
            'https://urllib3.readthedocs.io/en/latest/advanced-usage.html'
            '#ssl-warnings',
            SNIMissingWarning
        )

    return context.wrap_socket(sock)


def is_ipaddress(hostname):
    """Detects whether the hostname given is an IPv4 or IPv6 address.
    Also detects IPv6 addresses with Zone IDs.

    :param str hostname: Hostname to examine.
    :return: True if the hostname is an IP address, False otherwise.
    """
    if six.PY3 and isinstance(hostname, bytes):
        # IDN A-label bytes are ASCII compatible.
        hostname = hostname.decode('ascii')
    return _IP_ADDRESS_REGEX.match(hostname) is not None


def _is_key_file_encrypted(key_file):
    """Detects if a key file is encrypted or not."""
    with open(key_file, 'r') as f:
        for line in f:
            # Look for Proc-Type: 4,ENCRYPTED
            if 'ENCRYPTED' in line:
                return True

    return False
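# --- Illustrative sketch (not part of the vendored file): wrapping a plain
# socket with the helpers above.  Assumes network access to example.org and
# that the module is importable as pip._vendor.urllib3.util.ssl_.
if __name__ == "__main__":
    import socket
    from pip._vendor.urllib3.util.ssl_ import (create_urllib3_context,
                                               ssl_wrap_socket)

    ctx = create_urllib3_context()   # TLS, no SSLv2/v3, restricted ciphers
    ctx.load_default_certs()         # use the OS trust store for this demo
    raw = socket.create_connection(("example.org", 443))
    # server_hostname is not an IP address, so SNI is used when available.
    tls = ssl_wrap_socket(raw, server_hostname="example.org", ssl_context=ctx)
    print(tls.version())
    tls.close()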
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/timeout.py =====
from __future__ import absolute_import

# The default socket timeout, used by httplib to indicate that no timeout was
# specified by the user
from socket import _GLOBAL_DEFAULT_TIMEOUT
import time

from ..exceptions import TimeoutStateError

# A sentinel value to indicate that no timeout was specified by the user in
# urllib3
_Default = object()


# Use time.monotonic if available.
current_time = getattr(time, "monotonic", time.time)


class Timeout(object):
    """Timeout configuration.

    Timeouts can be defined as a default for a pool::

        timeout = Timeout(connect=2.0, read=7.0)
        http = PoolManager(timeout=timeout)
        response = http.request('GET', 'http://example.com/')

    Or per-request (which overrides the default for the pool)::

        response = http.request('GET', 'http://example.com/',
                                timeout=Timeout(10))

    Timeouts can be disabled by setting all the parameters to ``None``::

        no_timeout = Timeout(connect=None, read=None)
        response = http.request('GET', 'http://example.com/',
                                timeout=no_timeout)

    :param total:
        This combines the connect and read timeouts into one; the read
        timeout will be set to the time leftover from the connect attempt.
        In the event that both a connect timeout and a total are specified,
        or a read timeout and a total are specified, the shorter timeout will
        be applied.

        Defaults to None.
    :type total: integer, float, or None

    :param connect:
        The maximum amount of time to wait for a connection attempt to a
        server to succeed. Omitting the parameter will default the connect
        timeout to the system default, probably `the global default timeout
        in socket.py
        <http://hg.python.org/cpython/file/603b4d593758/Lib/socket.py#l535>`_.
        None will set an infinite timeout for connection attempts.
    :type connect: integer, float, or None

    :param read:
        The maximum amount of time to wait between consecutive read
        operations for a response from the server. Omitting the parameter
        will default the read timeout to the system default, probably `the
        global default timeout in socket.py
        <http://hg.python.org/cpython/file/603b4d593758/Lib/socket.py#l535>`_.
        None will set an infinite timeout.
    :type read: integer, float, or None

    .. note::

        Many factors can affect the total amount of time for urllib3 to
        return an HTTP response.

        For example, Python's DNS resolver does not obey the timeout
        specified on the socket. Other factors that can affect total request
        time include high CPU load, high swap, the program running at a low
        priority level, or other behaviors.

        In addition, the read and total timeouts only measure the time
        between read operations on the socket connecting the client and the
        server, not the total amount of time for the request to return a
        complete response. For most requests, the timeout is raised because
        the server has not sent the first byte in the specified time. This
        is not always the case; if a server streams one byte every fifteen
        seconds, a timeout of 20 seconds will not trigger, even though the
        request will take several minutes to complete.

        If your goal is to cut off any request after a set amount of wall
        clock time, consider having a second "watcher" thread to cut off a
        slow request.
    """

    #: A sentinel object representing the default timeout value
    DEFAULT_TIMEOUT = _GLOBAL_DEFAULT_TIMEOUT

    def __init__(self, total=None, connect=_Default, read=_Default):
        self._connect = self._validate_timeout(connect, 'connect')
        self._read = self._validate_timeout(read, 'read')
        self.total = self._validate_timeout(total, 'total')
        self._start_connect = None

    def __str__(self):
        return '%s(connect=%r, read=%r, total=%r)' % (
            type(self).__name__, self._connect, self._read, self.total)

    @classmethod
    def _validate_timeout(cls, value, name):
        """Check that a timeout attribute is valid.

        :param value: The timeout value to validate
        :param name: The name of the timeout attribute to validate. This is
            used to specify in error messages.
        :return: The validated and casted version of the given value.
        :raises ValueError: If it is a numeric value less than or equal to
            zero, or the type is not an integer, float, or None.
        """
        if value is _Default:
            return cls.DEFAULT_TIMEOUT

        if value is None or value is cls.DEFAULT_TIMEOUT:
            return value

        if isinstance(value, bool):
            raise ValueError("Timeout cannot be a boolean value. It must "
                             "be an int, float or None.")
        try:
            float(value)
        except (TypeError, ValueError):
            raise ValueError("Timeout value %s was %s, but it must be an "
                             "int, float or None." % (name, value))

        try:
            if value <= 0:
                raise ValueError("Attempted to set %s timeout to %s, but the "
                                 "timeout cannot be set to a value less "
                                 "than or equal to 0." % (name, value))
        except TypeError:  # Python 3
            raise ValueError("Timeout value %s was %s, but it must be an "
                             "int, float or None." % (name, value))

        return value
    @classmethod
    def from_float(cls, timeout):
        """Create a new Timeout from a legacy timeout value.

        The timeout value used by httplib.py sets the same timeout on the
        connect(), and recv() socket requests. This creates a
        :class:`Timeout` object that sets the individual timeouts to the
        ``timeout`` value passed to this function.

        :param timeout: The legacy timeout value.
        :type timeout: integer, float, sentinel default object, or None
        :return: Timeout object
        :rtype: :class:`Timeout`
        """
        return Timeout(read=timeout, connect=timeout)

    def clone(self):
        """Create a copy of the timeout object.

        Timeout properties are stored per-pool but each request needs a
        fresh Timeout object to ensure each one has its own start/stop
        configured.

        :return: a copy of the timeout object
        :rtype: :class:`Timeout`
        """
        # We can't use copy.deepcopy because that will also create a new
        # object for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a
        # sentinel to detect the user default.
        return Timeout(connect=self._connect, read=self._read,
                       total=self.total)

    def start_connect(self):
        """Start the timeout clock, used during a connect() attempt.

        :raises urllib3.exceptions.TimeoutStateError: if you attempt
            to start a timer that has been started already.
        """
        if self._start_connect is not None:
            raise TimeoutStateError("Timeout timer has already been started.")
        self._start_connect = current_time()
        return self._start_connect

    def get_connect_duration(self):
        """Gets the time elapsed since the call to :meth:`start_connect`.

        :return: Elapsed time.
        :rtype: float
        :raises urllib3.exceptions.TimeoutStateError: if you attempt
            to get duration for a timer that hasn't been started.
        """
        if self._start_connect is None:
            raise TimeoutStateError("Can't get connect duration for "
                                    "timer that has not started.")
        return current_time() - self._start_connect

    @property
    def connect_timeout(self):
        """Get the value to use when setting a connection timeout.

        This will be a positive float or integer, the value None
        (never timeout), or the default system timeout.

        :return: Connect timeout.
        :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None
        """
        if self.total is None:
            return self._connect

        if self._connect is None or self._connect is self.DEFAULT_TIMEOUT:
            return self.total

        return min(self._connect, self.total)
    @property
    def read_timeout(self):
        """Get the value for the read timeout.

        This assumes some time has elapsed in the connection timeout and
        computes the read timeout appropriately.

        If self.total is set, the read timeout is dependent on the amount of
        time taken by the connect timeout. If the connection time has not
        been established, a :exc:`~urllib3.exceptions.TimeoutStateError`
        will be raised.

        :return: Value to use for the read timeout.
        :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None
        :raises urllib3.exceptions.TimeoutStateError: If
            :meth:`start_connect` has not yet been called on this object.
        """
        if (self.total is not None and
                self.total is not self.DEFAULT_TIMEOUT and
                self._read is not None and
                self._read is not self.DEFAULT_TIMEOUT):
            # In case the connect timeout has not yet been established.
            if self._start_connect is None:
                return self._read
            return max(0, min(self.total - self.get_connect_duration(),
                              self._read))
        elif self.total is not None and self.total is not self.DEFAULT_TIMEOUT:
            return max(0, self.total - self.get_connect_duration())
        else:
            return self._read
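# --- Illustrative sketch (not part of the vendored file): how a total
# timeout is split between connect and read.  Assumes the module is
# importable as pip._vendor.urllib3.util.timeout.
if __name__ == "__main__":
    from pip._vendor.urllib3.util.timeout import Timeout

    t = Timeout(total=10.0, connect=3.0, read=8.0).clone()
    print(t.connect_timeout)   # min(connect, total) -> 3.0
    t.start_connect()          # start the clock before connecting
    # read_timeout is whatever of `total` the connect phase left over,
    # never more than `read`: max(0, min(total - elapsed, read)).
    print(t.read_timeout)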
""" __slots__ = () def __new__(cls, scheme=None, auth=None, host=None, port=None, path=None, query=None, fragment=None): if path and not path.startswith('/'): path = '/' + path if scheme is not None: scheme = scheme.lower() return super(Url, cls).__new__(cls, scheme, auth, host, port, path, query, fragment) @property def hostname(self): """For backwards-compatibility with urlparse. We're nice like that.""" return self.host @property def request_uri(self): """Absolute path including the query string.""" uri = self.path or '/' if self.query is not None: uri += '?' + self.query return uri @property def netloc(self): """Network location including host and port""" if self.port: return '%s:%d' % (self.host, self.port) return self.host @property def url(self): """ Convert self into a url This function should more or less round-trip with :func:`.parse_url`. The returned url may not be exactly the same as the url inputted to :func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls with a blank port will have : removed). Example: :: >>> U = parse_url('http://google.com/mail/') >>> U.url 'http://google.com/mail/' >>> Url('http', 'username:password', 'host.com', 80, ... '/path', 'query', 'fragment').url 'http://username:password@host.com:80/path?query#fragment' """ scheme, auth, host, port, path, query, fragment = self url = u'' # We use "is not None" we want things to happen with empty strings (or 0 port) if scheme is not None: url += scheme + u'://' if auth is not None: url += auth + u'@' if host is not None: url += host if port is not None: url += u':' + str(port) if path is not None: url += path if query is not None: url += u'?' + query if fragment is not None: url += u'#' + fragment return url def __str__(self): return self.url def split_first(s, delims): """ .. deprecated:: 1.25 Given a string and an iterable of delimiters, split on the first found delimiter. Return two split parts and the matched delimiter. If not found, then the first part is the full input string. Example:: >>> split_first('foo/bar?baz', '?/=') ('foo', 'bar?baz', '/') >>> split_first('foo/bar?baz', '123') ('foo/bar?baz', '', None) Scales linearly with number of delims. Not ideal for large number of delims. """ min_idx = None min_delim = None for d in delims: idx = s.find(d) if idx < 0: continue if min_idx is None or idx < min_idx: min_idx = idx min_delim = d if min_idx is None or min_idx < 0: return s, '', None return s[:min_idx], s[min_idx + 1:], min_delim def _encode_invalid_chars(component, allowed_chars, encoding='utf-8'): """Percent-encodes a URI component without reapplying onto an already percent-encoded component. Based on rfc3986.normalizers.encode_component() """ if component is None: return component # Try to see if the component we're encoding is already percent-encoded # so we can skip all '%' characters but still encode all others. 
def _encode_invalid_chars(component, allowed_chars, encoding='utf-8'):
    """Percent-encodes a URI component without reapplying onto an already
    percent-encoded component. Based on
    rfc3986.normalizers.encode_component()
    """
    if component is None:
        return component

    # Try to see if the component we're encoding is already percent-encoded
    # so we can skip all '%' characters but still encode all others.
    percent_encodings = len(normalizers.PERCENT_MATCHER.findall(
        compat.to_str(component, encoding)))

    uri_bytes = component.encode('utf-8', 'surrogatepass')
    is_percent_encoded = percent_encodings == uri_bytes.count(b'%')

    encoded_component = bytearray()

    for i in range(0, len(uri_bytes)):
        # Will return a single character bytestring on both Python 2 & 3
        byte = uri_bytes[i:i + 1]
        byte_ord = ord(byte)
        if ((is_percent_encoded and byte == b'%') or
                (byte_ord < 128 and byte.decode() in allowed_chars)):
            encoded_component.extend(byte)
            continue
        encoded_component.extend('%{0:02x}'.format(byte_ord).encode().upper())

    return encoded_component.decode(encoding)


def parse_url(url):
    """Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is
    performed to parse incomplete urls. Fields not provided will be None.
    This parser is RFC 3986 compliant.

    :param str url: URL to parse into a :class:`.Url` namedtuple.

    Partly backwards-compatible with :mod:`urlparse`.

    Example::

        >>> parse_url('http://google.com/mail/')
        Url(scheme='http', host='google.com', port=None, path='/mail/', ...)
        >>> parse_url('google.com:80')
        Url(scheme=None, host='google.com', port=80, path=None, ...)
        >>> parse_url('/foo?bar')
        Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...)
    """
    if not url:
        # Empty
        return Url()

    is_string = not isinstance(url, six.binary_type)

    # RFC 3986 doesn't like URLs that have a host but don't start
    # with a scheme and we support URLs like that so we need to
    # detect that problem and add an empty scheme indication.
    # We don't get hurt on path-only URLs here as it's stripped
    # off and given an empty scheme anyways.
    if not SCHEME_REGEX.search(url):
        url = "//" + url

    def idna_encode(name):
        if name and any([ord(x) > 128 for x in name]):
            try:
                from pip._vendor import idna
            except ImportError:
                raise LocationParseError(
                    "Unable to parse URL without the 'idna' module")
            try:
                return idna.encode(name.lower(), strict=True, std3_rules=True)
            except idna.IDNAError:
                raise LocationParseError(
                    u"Name '%s' is not a valid IDNA label" % name)
        return name

    try:
        split_iri = misc.IRI_MATCHER.match(compat.to_str(url)).groupdict()
        iri_ref = rfc3986.IRIReference(
            split_iri['scheme'], split_iri['authority'],
            _encode_invalid_chars(split_iri['path'], PATH_CHARS),
            _encode_invalid_chars(split_iri['query'], QUERY_CHARS),
            _encode_invalid_chars(split_iri['fragment'], FRAGMENT_CHARS)
        )
        has_authority = iri_ref.authority is not None
        uri_ref = iri_ref.encode(idna_encoder=idna_encode)
    except (ValueError, RFC3986Exception):
        return six.raise_from(LocationParseError(url), None)

    # rfc3986 strips the authority if it's invalid
    if has_authority and uri_ref.authority is None:
        raise LocationParseError(url)

    # Only normalize schemes we understand to not break http+unix
    # or other schemes that don't follow RFC 3986.
    if uri_ref.scheme is None or uri_ref.scheme.lower() in NORMALIZABLE_SCHEMES:
        uri_ref = uri_ref.normalize()

    # Validate all URIReference components and ensure that all
    # components that were set before are still set after
    # normalization has completed.
    validator = Validator()
    try:
        validator.check_validity_of(
            *validator.COMPONENT_NAMES
        ).validate(uri_ref)
    except ValidationError:
        return six.raise_from(LocationParseError(url), None)

    # For the sake of backwards compatibility we put empty
    # string values for path if there are any defined values
    # beyond the path in the URL.
    # TODO: Remove this when we break backwards compatibility.
    path = uri_ref.path
    if not path:
        if (uri_ref.query is not None or uri_ref.fragment is not None):
            path = ""
        else:
            path = None

    # Ensure that each part of the URL is a `str` for
    # backwards compatibility.
    def to_input_type(x):
        if x is None:
            return None
        elif not is_string and not isinstance(x, six.binary_type):
            return x.encode('utf-8')
        return x

    return Url(
        scheme=to_input_type(uri_ref.scheme),
        auth=to_input_type(uri_ref.userinfo),
        host=to_input_type(uri_ref.host),
        port=int(uri_ref.port) if uri_ref.port is not None else None,
        path=to_input_type(path),
        query=to_input_type(uri_ref.query),
        fragment=to_input_type(uri_ref.fragment)
    )


def get_host(url):
    """Deprecated. Use :func:`parse_url` instead."""
    p = parse_url(url)
    return p.scheme or 'http', p.hostname, p.port
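# --- Illustrative sketch (not part of the vendored file): what parse_url()
# returns for a fully populated URL.
if __name__ == "__main__":
    u = parse_url("https://user:pw@example.com:8443/a/b?x=1#frag")
    assert u.scheme == "https" and u.auth == "user:pw"
    assert u.host == "example.com" and u.port == 8443
    assert (u.path, u.query, u.fragment) == ("/a/b", "x=1", "frag")
    # get_host() infers http when no scheme is given:
    assert get_host("example.com:80") == ("http", "example.com", 80)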
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/urllib3/util/wait.py =====
import errno
from functools import partial
import select
import sys
try:
    from time import monotonic
except ImportError:
    from time import time as monotonic

__all__ = ["NoWayToWaitForSocketError", "wait_for_read", "wait_for_write"]


class NoWayToWaitForSocketError(Exception):
    pass


# How should we wait on sockets?
#
# There are two types of APIs you can use for waiting on sockets: the fancy
# modern stateful APIs like epoll/kqueue, and the older stateless APIs like
# select/poll. The stateful APIs are more efficient when you have a lot of
# sockets to keep track of, because you can set them up once and then use
# them lots of times. But we only ever want to wait on a single socket at a
# time and don't want to keep track of state, so the stateless APIs are
# actually more efficient. So we want to use select() or poll().
#
# Now, how do we choose between select() and poll()? On traditional Unixes,
# select() has a strange calling convention that makes it slow, or fail
# altogether, for high-numbered file descriptors. The point of poll() is to
# fix that, so on Unixes, we prefer poll().
#
# On Windows, there is no poll() (or at least Python doesn't provide a
# wrapper for it), but that's OK, because on Windows, select() doesn't have
# this strange calling convention; plain select() works fine.
#
# So: on Windows we use select(), and everywhere else we use poll(). We also
# fall back to select() in case poll() is somehow broken or missing.

if sys.version_info >= (3, 5):
    # Modern Python, that retries syscalls by default
    def _retry_on_intr(fn, timeout):
        return fn(timeout)
else:
    # Old and broken Pythons.
    def _retry_on_intr(fn, timeout):
        if timeout is None:
            deadline = float("inf")
        else:
            deadline = monotonic() + timeout

        while True:
            try:
                return fn(timeout)
            # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7
            except (OSError, select.error) as e:
                # 'e.args[0]' incantation works for both OSError and
                # select.error
                if e.args[0] != errno.EINTR:
                    raise
                else:
                    timeout = deadline - monotonic()
                    if timeout < 0:
                        timeout = 0
                    if timeout == float("inf"):
                        timeout = None
                    continue


def select_wait_for_socket(sock, read=False, write=False, timeout=None):
    if not read and not write:
        raise RuntimeError("must specify at least one of read=True, write=True")
    rcheck = []
    wcheck = []
    if read:
        rcheck.append(sock)
    if write:
        wcheck.append(sock)
    # When doing a non-blocking connect, most systems signal success by
    # marking the socket writable. Windows, though, signals success by
    # marking it as "exceptional". We paper over the difference by checking
    # the write sockets for both conditions. (The stdlib selectors module
    # does the same thing.)
    fn = partial(select.select, rcheck, wcheck, wcheck)
    rready, wready, xready = _retry_on_intr(fn, timeout)
    return bool(rready or wready or xready)


def poll_wait_for_socket(sock, read=False, write=False, timeout=None):
    if not read and not write:
        raise RuntimeError("must specify at least one of read=True, write=True")
    mask = 0
    if read:
        mask |= select.POLLIN
    if write:
        mask |= select.POLLOUT
    poll_obj = select.poll()
    poll_obj.register(sock, mask)

    # For some reason, poll() takes timeout in milliseconds
    def do_poll(t):
        if t is not None:
            t *= 1000
        return poll_obj.poll(t)

    return bool(_retry_on_intr(do_poll, timeout))


def null_wait_for_socket(*args, **kwargs):
    raise NoWayToWaitForSocketError("no select-equivalent available")


def _have_working_poll():
    # Apparently some systems have a select.poll that fails as soon as you
    # try to use it, either due to strange configuration or broken
    # monkeypatching from libraries like eventlet/greenlet.
    try:
        poll_obj = select.poll()
        _retry_on_intr(poll_obj.poll, 0)
    except (AttributeError, OSError):
        return False
    else:
        return True


def wait_for_socket(*args, **kwargs):
    # We delay choosing which implementation to use until the first time
    # we're called. We could do it at import time, but then we might make
    # the wrong decision if someone goes wild with monkeypatching
    # select.poll after we're imported.
    global wait_for_socket
    if _have_working_poll():
        wait_for_socket = poll_wait_for_socket
    elif hasattr(select, "select"):
        wait_for_socket = select_wait_for_socket
    else:  # Platform-specific: Appengine.
        wait_for_socket = null_wait_for_socket
    return wait_for_socket(*args, **kwargs)
def wait_for_read(sock, timeout=None):
    """Waits for reading to be available on a given socket.
    Returns True if the socket is readable, or False if the timeout expired.
    """
    return wait_for_socket(sock, read=True, timeout=timeout)


def wait_for_write(sock, timeout=None):
    """Waits for writing to be available on a given socket.
    Returns True if the socket is writable, or False if the timeout expired.
    """
    return wait_for_socket(sock, write=True, timeout=timeout)
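# --- Illustrative sketch (not part of the vendored file): waiting on a
# socket pair.  Assumes a platform where socket.socketpair() is available.
if __name__ == "__main__":
    import socket
    a, b = socket.socketpair()
    assert not wait_for_read(a, timeout=0.1)   # nothing to read yet
    b.sendall(b"x")
    assert wait_for_read(a, timeout=1.0)       # data is now pending
    a.close()
    b.close()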
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/webencodings/__init__.py =====
# coding: utf-8
"""

    webencodings
    ~~~~~~~~~~~~

    This is a Python implementation of the `WHATWG Encoding standard
    <http://encoding.spec.whatwg.org/>`. See README for details.

    :copyright: Copyright 2012 by Simon Sapin
    :license: BSD, see LICENSE for details.

"""

from __future__ import unicode_literals

import codecs

from .labels import LABELS


VERSION = '0.5.1'


# Some names in Encoding are not valid Python aliases. Remap these.
PYTHON_NAMES = {
    'iso-8859-8-i': 'iso-8859-8',
    'x-mac-cyrillic': 'mac-cyrillic',
    'macintosh': 'mac-roman',
    'windows-874': 'cp874'}

CACHE = {}


def ascii_lower(string):
    r"""Transform (only) ASCII letters to lower case: A-Z is mapped to a-z.

    :param string: A Unicode string.
    :returns: A new Unicode string.

    This is used for `ASCII case-insensitive
    <http://encoding.spec.whatwg.org/#ascii-case-insensitive>`_
    matching of encoding labels.
    The same matching is also used, among other things,
    for `CSS keywords <http://dev.w3.org/csswg/css-values/#keywords>`_.

    This is different from the :meth:`~py:str.lower` method of Unicode
    strings which also affects non-ASCII characters,
    sometimes mapping them into the ASCII range:

        >>> keyword = u'Bac\N{KELVIN SIGN}ground'
        >>> assert keyword.lower() == u'background'
        >>> assert ascii_lower(keyword) != keyword.lower()
        >>> assert ascii_lower(keyword) == u'bac\N{KELVIN SIGN}ground'

    """
    # This turns out to be faster than unicode.translate()
    return string.encode('utf8').lower().decode('utf8')


def lookup(label):
    """
    Look for an encoding by its label.
    This is the spec's `get an encoding
    <http://encoding.spec.whatwg.org/#concept-encoding-get>`_ algorithm.
    Supported labels are listed there.

    :param label: A string.
    :returns:
        An :class:`Encoding` object, or :obj:`None` for an unknown label.

    """
    # Only strip ASCII whitespace: U+0009, U+000A, U+000C, U+000D, and U+0020.
    label = ascii_lower(label.strip('\t\n\f\r '))
    name = LABELS.get(label)
    if name is None:
        return None
    encoding = CACHE.get(name)
    if encoding is None:
        if name == 'x-user-defined':
            from .x_user_defined import codec_info
        else:
            python_name = PYTHON_NAMES.get(name, name)
            # Any python_name value that gets to here should be valid.
            codec_info = codecs.lookup(python_name)
        encoding = Encoding(name, codec_info)
        CACHE[name] = encoding
    return encoding


def _get_encoding(encoding_or_label):
    """
    Accept either an encoding object or label.

    :param encoding: An :class:`Encoding` object or a label string.
    :returns: An :class:`Encoding` object.
    :raises: :exc:`~exceptions.LookupError` for an unknown label.

    """
    if hasattr(encoding_or_label, 'codec_info'):
        return encoding_or_label

    encoding = lookup(encoding_or_label)
    if encoding is None:
        raise LookupError('Unknown encoding label: %r' % encoding_or_label)
    return encoding
""" def __init__(self, name, codec_info): self.name = name self.codec_info = codec_info def __repr__(self): return '<Encoding %s>' % self.name #: The UTF-8 encoding. Should be used for new content and formats. UTF8 = lookup('utf-8') _UTF16LE = lookup('utf-16le') _UTF16BE = lookup('utf-16be') def decode(input, fallback_encoding, errors='replace'): """ Decode a single string. :param input: A byte string :param fallback_encoding: An :class:`Encoding` object or a label string. The encoding to use if :obj:`input` does note have a BOM. :param errors: Type of error handling. See :func:`codecs.register`. :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. :return: A ``(output, encoding)`` tuple of an Unicode string and an :obj:`Encoding`. """ # Fail early if `encoding` is an invalid label. fallback_encoding = _get_encoding(fallback_encoding) bom_encoding, input = _detect_bom(input) encoding = bom_encoding or fallback_encoding return encoding.codec_info.decode(input, errors)[0], encoding def _detect_bom(input): """Return (bom_encoding, input), with any BOM removed from the input.""" if input.startswith(b'\xFF\xFE'): return _UTF16LE, input[2:] if input.startswith(b'\xFE\xFF'): return _UTF16BE, input[2:] if input.startswith(b'\xEF\xBB\xBF'): return UTF8, input[3:] return None, input def encode(input, encoding=UTF8, errors='strict'): """ Encode a single string. :param input: An Unicode string. :param encoding: An :class:`Encoding` object or a label string. :param errors: Type of error handling. See :func:`codecs.register`. :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. :return: A byte string. """ return _get_encoding(encoding).codec_info.encode(input, errors)[0] def iter_decode(input, fallback_encoding, errors='replace'): """ "Pull"-based decoder. :param input: An iterable of byte strings. The input is first consumed just enough to determine the encoding based on the precense of a BOM, then consumed on demand when the return value is. :param fallback_encoding: An :class:`Encoding` object or a label string. The encoding to use if :obj:`input` does note have a BOM. :param errors: Type of error handling. See :func:`codecs.register`. :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. :returns: An ``(output, encoding)`` tuple. :obj:`output` is an iterable of Unicode strings, :obj:`encoding` is the :obj:`Encoding` that is being used. """ decoder = IncrementalDecoder(fallback_encoding, errors) generator = _iter_decode_generator(input, decoder) encoding = next(generator) return generator, encoding def _iter_decode_generator(input, decoder): """Return a generator that first yields the :obj:`Encoding`, then yields output chukns as Unicode strings. """ decode = decoder.decode input = iter(input) for chunck in input: output = decode(chunck) if output: assert decoder.encoding is not None yield decoder.encoding yield output break else: # Input exhausted without determining the encoding output = decode(b'', final=True) assert decoder.encoding is not None yield decoder.encoding if output: yield output return for chunck in input: output = decode(chunck) if output: yield output output = decode(b'', final=True) if output: yield output def iter_encode(input, encoding=UTF8, errors='strict'): """ “Pull”-based encoder. :param input: An iterable of Unicode strings. :param encoding: An :class:`Encoding` object or a label string. :param errors: Type of error handling. See :func:`codecs.register`. 
def iter_encode(input, encoding=UTF8, errors='strict'):
    """
    “Pull”-based encoder.

    :param input: An iterable of Unicode strings.
    :param encoding: An :class:`Encoding` object or a label string.
    :param errors: Type of error handling. See :func:`codecs.register`.
    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
    :returns: An iterable of byte strings.

    """
    # Fail early if `encoding` is an invalid label.
    encode = IncrementalEncoder(encoding, errors).encode
    return _iter_encode_generator(input, encode)


def _iter_encode_generator(input, encode):
    for chunk in input:
        output = encode(chunk)
        if output:
            yield output
    output = encode('', final=True)
    if output:
        yield output


class IncrementalDecoder(object):
    """
    “Push”-based decoder.

    :param fallback_encoding:
        An :class:`Encoding` object or a label string.
        The encoding to use if :obj:`input` does not have a BOM.
    :param errors: Type of error handling. See :func:`codecs.register`.
    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.

    """
    def __init__(self, fallback_encoding, errors='replace'):
        # Fail early if `encoding` is an invalid label.
        self._fallback_encoding = _get_encoding(fallback_encoding)
        self._errors = errors
        self._buffer = b''
        self._decoder = None
        #: The actual :class:`Encoding` that is being used,
        #: or :obj:`None` if that is not determined yet.
        #: (I.e. if there is not enough input yet to determine
        #: if there is a BOM.)
        self.encoding = None  # Not known yet.

    def decode(self, input, final=False):
        """Decode one chunk of the input.

        :param input: A byte string.
        :param final:
            Indicate that no more input is available.
            Must be :obj:`True` if this is the last call.
        :returns: A Unicode string.

        """
        decoder = self._decoder
        if decoder is not None:
            return decoder(input, final)

        input = self._buffer + input
        encoding, input = _detect_bom(input)
        if encoding is None:
            if len(input) < 3 and not final:  # Not enough data yet.
                self._buffer = input
                return ''
            else:  # No BOM
                encoding = self._fallback_encoding
        decoder = encoding.codec_info.incrementaldecoder(self._errors).decode
        self._decoder = decoder
        self.encoding = encoding
        return decoder(input, final)


class IncrementalEncoder(object):
    """
    “Push”-based encoder.

    :param encoding: An :class:`Encoding` object or a label string.
    :param errors: Type of error handling. See :func:`codecs.register`.
    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.

    .. method:: encode(input, final=False)

        :param input: A Unicode string.
        :param final:
            Indicate that no more input is available.
            Must be :obj:`True` if this is the last call.
        :returns: A byte string.

    """
""" def __init__(self, encoding=UTF8, errors='strict'): encoding = _get_encoding(encoding) self.encode = encoding.codec_info.incrementalencoder(errors).encode �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430190.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/webencodings/labels.py�������������������0000644�0001750�0001750�00000021423�00000000000�030752� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������""" webencodings.labels ~~~~~~~~~~~~~~~~~~~ Map encoding labels to their name. :copyright: Copyright 2012 by Simon Sapin :license: BSD, see LICENSE for details. """ # XXX Do not edit! 
# ===== cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/webencodings/labels.py =====
"""

    webencodings.labels
    ~~~~~~~~~~~~~~~~~~~

    Map encoding labels to their name.

    :copyright: Copyright 2012 by Simon Sapin
    :license: BSD, see LICENSE for details.

"""

# XXX Do not edit!
# This file is automatically generated by mklabels.py

LABELS = {
    'unicode-1-1-utf-8': 'utf-8', 'utf-8': 'utf-8', 'utf8': 'utf-8',

    '866': 'ibm866', 'cp866': 'ibm866', 'csibm866': 'ibm866',
    'ibm866': 'ibm866',

    'csisolatin2': 'iso-8859-2', 'iso-8859-2': 'iso-8859-2',
    'iso-ir-101': 'iso-8859-2', 'iso8859-2': 'iso-8859-2',
    'iso88592': 'iso-8859-2', 'iso_8859-2': 'iso-8859-2',
    'iso_8859-2:1987': 'iso-8859-2', 'l2': 'iso-8859-2',
    'latin2': 'iso-8859-2',

    'csisolatin3': 'iso-8859-3', 'iso-8859-3': 'iso-8859-3',
    'iso-ir-109': 'iso-8859-3', 'iso8859-3': 'iso-8859-3',
    'iso88593': 'iso-8859-3', 'iso_8859-3': 'iso-8859-3',
    'iso_8859-3:1988': 'iso-8859-3', 'l3': 'iso-8859-3',
    'latin3': 'iso-8859-3',

    'csisolatin4': 'iso-8859-4', 'iso-8859-4': 'iso-8859-4',
    'iso-ir-110': 'iso-8859-4', 'iso8859-4': 'iso-8859-4',
    'iso88594': 'iso-8859-4', 'iso_8859-4': 'iso-8859-4',
    'iso_8859-4:1988': 'iso-8859-4', 'l4': 'iso-8859-4',
    'latin4': 'iso-8859-4',

    'csisolatincyrillic': 'iso-8859-5', 'cyrillic': 'iso-8859-5',
    'iso-8859-5': 'iso-8859-5', 'iso-ir-144': 'iso-8859-5',
    'iso8859-5': 'iso-8859-5', 'iso88595': 'iso-8859-5',
    'iso_8859-5': 'iso-8859-5', 'iso_8859-5:1988': 'iso-8859-5',

    'arabic': 'iso-8859-6', 'asmo-708': 'iso-8859-6',
    'csiso88596e': 'iso-8859-6', 'csiso88596i': 'iso-8859-6',
    'csisolatinarabic': 'iso-8859-6', 'ecma-114': 'iso-8859-6',
    'iso-8859-6': 'iso-8859-6', 'iso-8859-6-e': 'iso-8859-6',
    'iso-8859-6-i': 'iso-8859-6', 'iso-ir-127': 'iso-8859-6',
    'iso8859-6': 'iso-8859-6', 'iso88596': 'iso-8859-6',
    'iso_8859-6': 'iso-8859-6', 'iso_8859-6:1987': 'iso-8859-6',

    'csisolatingreek': 'iso-8859-7', 'ecma-118': 'iso-8859-7',
    'elot_928': 'iso-8859-7', 'greek': 'iso-8859-7',
    'greek8': 'iso-8859-7', 'iso-8859-7': 'iso-8859-7',
    'iso-ir-126': 'iso-8859-7', 'iso8859-7': 'iso-8859-7',
    'iso88597': 'iso-8859-7', 'iso_8859-7': 'iso-8859-7',
    'iso_8859-7:1987': 'iso-8859-7', 'sun_eu_greek': 'iso-8859-7',

    'csiso88598e': 'iso-8859-8', 'csisolatinhebrew': 'iso-8859-8',
    'hebrew': 'iso-8859-8', 'iso-8859-8': 'iso-8859-8',
    'iso-8859-8-e': 'iso-8859-8', 'iso-ir-138': 'iso-8859-8',
    'iso8859-8': 'iso-8859-8', 'iso88598': 'iso-8859-8',
    'iso_8859-8': 'iso-8859-8', 'iso_8859-8:1988': 'iso-8859-8',
    'visual': 'iso-8859-8',

    'csiso88598i': 'iso-8859-8-i', 'iso-8859-8-i': 'iso-8859-8-i',
    'logical': 'iso-8859-8-i',

    'csisolatin6': 'iso-8859-10', 'iso-8859-10': 'iso-8859-10',
    'iso-ir-157': 'iso-8859-10', 'iso8859-10': 'iso-8859-10',
    'iso885910': 'iso-8859-10', 'l6': 'iso-8859-10',
    'latin6': 'iso-8859-10',

    'iso-8859-13': 'iso-8859-13', 'iso8859-13': 'iso-8859-13',
    'iso885913': 'iso-8859-13',

    'iso-8859-14': 'iso-8859-14', 'iso8859-14': 'iso-8859-14',
    'iso885914': 'iso-8859-14',

    'csisolatin9': 'iso-8859-15', 'iso-8859-15': 'iso-8859-15',
    'iso8859-15': 'iso-8859-15', 'iso885915': 'iso-8859-15',
    'iso_8859-15': 'iso-8859-15', 'l9': 'iso-8859-15',

    'iso-8859-16': 'iso-8859-16',

    'cskoi8r': 'koi8-r', 'koi': 'koi8-r', 'koi8': 'koi8-r',
    'koi8-r': 'koi8-r', 'koi8_r': 'koi8-r',

    'koi8-u': 'koi8-u',

    'csmacintosh': 'macintosh', 'mac': 'macintosh',
    'macintosh': 'macintosh', 'x-mac-roman': 'macintosh',

    'dos-874': 'windows-874', 'iso-8859-11': 'windows-874',
    'iso8859-11': 'windows-874', 'iso885911': 'windows-874',
    'tis-620': 'windows-874', 'windows-874': 'windows-874',

    'cp1250': 'windows-1250', 'windows-1250': 'windows-1250',
    'x-cp1250': 'windows-1250',

    'cp1251': 'windows-1251', 'windows-1251': 'windows-1251',
    'x-cp1251': 'windows-1251',

    'ansi_x3.4-1968': 'windows-1252', 'ascii': 'windows-1252',
    'cp1252': 'windows-1252', 'cp819': 'windows-1252',
    'csisolatin1': 'windows-1252',
    'ibm819': 'windows-1252', 'iso-8859-1': 'windows-1252',
    'iso-ir-100': 'windows-1252', 'iso8859-1': 'windows-1252',
    'iso88591': 'windows-1252', 'iso_8859-1': 'windows-1252',
    'iso_8859-1:1987': 'windows-1252', 'l1': 'windows-1252',
    'latin1': 'windows-1252', 'us-ascii': 'windows-1252',
    'windows-1252': 'windows-1252', 'x-cp1252': 'windows-1252',

    'cp1253': 'windows-1253', 'windows-1253': 'windows-1253',
    'x-cp1253': 'windows-1253',

    'cp1254': 'windows-1254', 'csisolatin5': 'windows-1254',
    'iso-8859-9': 'windows-1254', 'iso-ir-148': 'windows-1254',
    'iso8859-9': 'windows-1254', 'iso88599': 'windows-1254',
    'iso_8859-9': 'windows-1254', 'iso_8859-9:1989': 'windows-1254',
    'l5': 'windows-1254', 'latin5': 'windows-1254',
    'windows-1254': 'windows-1254', 'x-cp1254': 'windows-1254',

    'cp1255': 'windows-1255', 'windows-1255': 'windows-1255',
    'x-cp1255': 'windows-1255',

    'cp1256': 'windows-1256', 'windows-1256': 'windows-1256',
    'x-cp1256': 'windows-1256',

    'cp1257': 'windows-1257', 'windows-1257': 'windows-1257',
    'x-cp1257': 'windows-1257',

    'cp1258': 'windows-1258', 'windows-1258': 'windows-1258',
    'x-cp1258': 'windows-1258',

    'x-mac-cyrillic': 'x-mac-cyrillic', 'x-mac-ukrainian': 'x-mac-cyrillic',

    'chinese': 'gbk', 'csgb2312': 'gbk', 'csiso58gb231280': 'gbk',
    'gb2312': 'gbk', 'gb_2312': 'gbk', 'gb_2312-80': 'gbk',
    'gbk': 'gbk', 'iso-ir-58': 'gbk', 'x-gbk': 'gbk',

    'gb18030': 'gb18030',

    'hz-gb-2312': 'hz-gb-2312',

    'big5': 'big5', 'big5-hkscs': 'big5', 'cn-big5': 'big5',
    'csbig5': 'big5', 'x-x-big5': 'big5',

    'cseucpkdfmtjapanese': 'euc-jp', 'euc-jp': 'euc-jp',
    'x-euc-jp': 'euc-jp',

    'csiso2022jp': 'iso-2022-jp', 'iso-2022-jp': 'iso-2022-jp',

    'csshiftjis': 'shift_jis', 'ms_kanji': 'shift_jis',
    'shift-jis': 'shift_jis', 'shift_jis': 'shift_jis',
    'sjis': 'shift_jis', 'windows-31j': 'shift_jis',
    'x-sjis': 'shift_jis',

    'cseuckr': 'euc-kr', 'csksc56011987': 'euc-kr', 'euc-kr': 'euc-kr',
    'iso-ir-149': 'euc-kr', 'korean': 'euc-kr',
    'ks_c_5601-1987': 'euc-kr', 'ks_c_5601-1989': 'euc-kr',
    'ksc5601': 'euc-kr', 'ksc_5601': 'euc-kr', 'windows-949': 'euc-kr',

    'csiso2022kr': 'iso-2022-kr', 'iso-2022-kr': 'iso-2022-kr',

    'utf-16be': 'utf-16be',

    'utf-16': 'utf-16le', 'utf-16le': 'utf-16le',

    'x-user-defined': 'x-user-defined',
}
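# --- Illustrative sketch (not part of the vendored file): LABELS maps many
# aliases onto one canonical name, which lookup() normalizes further.
# Assumes the package is importable as pip._vendor.webencodings.
if __name__ == "__main__":
    from pip._vendor import webencodings
    assert LABELS['latin1'] == 'windows-1252'   # per the WHATWG spec
    assert webencodings.lookup(' Latin1\t').name == 'windows-1252'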
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/webencodings/mklabels.py

"""

    webencodings.mklabels
    ~~~~~~~~~~~~~~~~~~~~~

    Regenerate the webencodings.labels module.

    :copyright: Copyright 2012 by Simon Sapin
    :license: BSD, see LICENSE for details.

"""

import json

try:
    from urllib import urlopen
except ImportError:
    from urllib.request import urlopen


def assert_lower(string):
    assert string == string.lower()
    return string


def generate(url):
    parts = ['''\
"""

    webencodings.labels
    ~~~~~~~~~~~~~~~~~~~

    Map encoding labels to their name.

    :copyright: Copyright 2012 by Simon Sapin
    :license: BSD, see LICENSE for details.

"""

# XXX Do not edit!
# This file is automatically generated by mklabels.py

LABELS = {
''']
    labels = [
        (repr(assert_lower(label)).lstrip('u'),
         repr(encoding['name']).lstrip('u'))
        for category in json.loads(urlopen(url).read().decode('ascii'))
        for encoding in category['encodings']
        for label in encoding['labels']]
    max_len = max(len(label) for label, name in labels)
    parts.extend(
        '    %s:%s %s,\n' % (label, ' ' * (max_len - len(label)), name)
        for label, name in labels)
    parts.append('}')
    return ''.join(parts)


if __name__ == '__main__':
    print(generate('http://encoding.spec.whatwg.org/encodings.json'))
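The labels table above is rebuilt by running this script against the WHATWG
registry. A sketch of a regeneration run (requires network access; writing to
``labels.py`` is the caller's choice, not something the script does itself)::

    # Rewrite labels.py from the live encodings registry:
    source = generate('http://encoding.spec.whatwg.org/encodings.json')
    with open('labels.py', 'w') as out:
        out.write(source)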
cypari2-2.1.2/venv/lib/python3.7/site-packages/pip/_vendor/webencodings/tests.py

# coding: utf-8
"""

    webencodings.tests
    ~~~~~~~~~~~~~~~~~~

    A basic test suite for Encoding.

    :copyright: Copyright 2012 by Simon Sapin
    :license: BSD, see LICENSE for details.

"""

from __future__ import unicode_literals

from . import (lookup, LABELS, decode, encode, iter_decode, iter_encode,
               IncrementalDecoder, IncrementalEncoder, UTF8)


def assert_raises(exception, function, *args, **kwargs):
    try:
        function(*args, **kwargs)
    except exception:
        return
    else:  # pragma: no cover
        raise AssertionError('Did not raise %s.' % exception)


def test_labels():
    assert lookup('utf-8').name == 'utf-8'
    assert lookup('Utf-8').name == 'utf-8'
    assert lookup('UTF-8').name == 'utf-8'
    assert lookup('utf8').name == 'utf-8'
    assert lookup('utf8').name == 'utf-8'
    assert lookup('utf8 ').name == 'utf-8'
    assert lookup(' \r\nutf8\t').name == 'utf-8'
    assert lookup('u8') is None  # Python label.
    assert lookup('utf-8\u00a0') is None  # Non-ASCII white space.

    assert lookup('US-ASCII').name == 'windows-1252'
    assert lookup('iso-8859-1').name == 'windows-1252'
    assert lookup('latin1').name == 'windows-1252'
    assert lookup('LATIN1').name == 'windows-1252'
    assert lookup('latin-1') is None
    assert lookup('LATİN1') is None  # ASCII-only case insensitivity.
def test_all_labels():
    for label in LABELS:
        assert decode(b'', label) == ('', lookup(label))
        assert encode('', label) == b''
        for repeat in [0, 1, 12]:
            output, _ = iter_decode([b''] * repeat, label)
            assert list(output) == []
            assert list(iter_encode([''] * repeat, label)) == []
        decoder = IncrementalDecoder(label)
        assert decoder.decode(b'') == ''
        assert decoder.decode(b'', final=True) == ''
        encoder = IncrementalEncoder(label)
        assert encoder.encode('') == b''
        assert encoder.encode('', final=True) == b''
    # All encoding names are valid labels too:
    for name in set(LABELS.values()):
        assert lookup(name).name == name


def test_invalid_label():
    assert_raises(LookupError, decode, b'\xEF\xBB\xBF\xc3\xa9', 'invalid')
    assert_raises(LookupError, encode, 'é', 'invalid')
    assert_raises(LookupError, iter_decode, [], 'invalid')
    assert_raises(LookupError, iter_encode, [], 'invalid')
    assert_raises(LookupError, IncrementalDecoder, 'invalid')
    assert_raises(LookupError, IncrementalEncoder, 'invalid')


def test_decode():
    assert decode(b'\x80', 'latin1') == ('€', lookup('latin1'))
    assert decode(b'\x80', lookup('latin1')) == ('€', lookup('latin1'))
    assert decode(b'\xc3\xa9', 'utf8') == ('é', lookup('utf8'))
    assert decode(b'\xc3\xa9', UTF8) == ('é', lookup('utf8'))
    assert decode(b'\xc3\xa9', 'ascii') == ('Ã©', lookup('ascii'))
    assert decode(b'\xEF\xBB\xBF\xc3\xa9', 'ascii') == ('é', lookup('utf8'))  # UTF-8 with BOM

    assert decode(b'\xFE\xFF\x00\xe9', 'ascii') == ('é', lookup('utf-16be'))  # UTF-16-BE with BOM
    assert decode(b'\xFF\xFE\xe9\x00', 'ascii') == ('é', lookup('utf-16le'))  # UTF-16-LE with BOM
    assert decode(b'\xFE\xFF\xe9\x00', 'ascii') == ('\ue900', lookup('utf-16be'))
    assert decode(b'\xFF\xFE\x00\xe9', 'ascii') == ('\ue900', lookup('utf-16le'))

    assert decode(b'\x00\xe9', 'UTF-16BE') == ('é', lookup('utf-16be'))
    assert decode(b'\xe9\x00', 'UTF-16LE') == ('é', lookup('utf-16le'))
    assert decode(b'\xe9\x00', 'UTF-16') == ('é', lookup('utf-16le'))

    assert decode(b'\xe9\x00', 'UTF-16BE') == ('\ue900', lookup('utf-16be'))
    assert decode(b'\x00\xe9', 'UTF-16LE') == ('\ue900', lookup('utf-16le'))
    assert decode(b'\x00\xe9', 'UTF-16') == ('\ue900', lookup('utf-16le'))


def test_encode():
    assert encode('é', 'latin1') == b'\xe9'
    assert encode('é', 'utf8') == b'\xc3\xa9'
    assert encode('é', 'utf8') == b'\xc3\xa9'
    assert encode('é', 'utf-16') == b'\xe9\x00'
    assert encode('é', 'utf-16le') == b'\xe9\x00'
    assert encode('é', 'utf-16be') == b'\x00\xe9'


def test_iter_decode():
    def iter_decode_to_string(input, fallback_encoding):
        output, _encoding = iter_decode(input, fallback_encoding)
        return ''.join(output)
    assert iter_decode_to_string([], 'latin1') == ''
    assert iter_decode_to_string([b''], 'latin1') == ''
    assert iter_decode_to_string([b'\xe9'], 'latin1') == 'é'
    assert iter_decode_to_string([b'hello'], 'latin1') == 'hello'
    assert iter_decode_to_string([b'he', b'llo'], 'latin1') == 'hello'
    assert iter_decode_to_string([b'hell', b'o'], 'latin1') == 'hello'
    assert iter_decode_to_string([b'\xc3\xa9'], 'latin1') == 'Ã©'
    assert iter_decode_to_string([b'\xEF\xBB\xBF\xc3\xa9'], 'latin1') == 'é'
    assert iter_decode_to_string([
        b'\xEF\xBB\xBF', b'\xc3', b'\xa9'], 'latin1') == 'é'
    assert iter_decode_to_string([
        b'\xEF\xBB\xBF', b'a', b'\xc3'], 'latin1') == 'a\uFFFD'
    assert iter_decode_to_string([
        b'', b'\xEF', b'', b'', b'\xBB\xBF\xc3', b'\xa9'], 'latin1') == 'é'
    assert iter_decode_to_string([b'\xEF\xBB\xBF'], 'latin1') == ''
    assert iter_decode_to_string([b'\xEF\xBB'], 'latin1') == 'ï»'
    assert iter_decode_to_string([b'\xFE\xFF\x00\xe9'], 'latin1') == 'é'
    assert iter_decode_to_string([b'\xFF\xFE\xe9\x00'], 'latin1') == 'é'
    assert iter_decode_to_string([
        b'', b'\xFF', b'', b'', b'\xFE\xe9', b'\x00'], 'latin1') == 'é'
    assert iter_decode_to_string([
        b'', b'h\xe9', b'llo'], 'x-user-defined') == 'h\uF7E9llo'


def test_iter_encode():
    assert b''.join(iter_encode([], 'latin1')) == b''
    assert b''.join(iter_encode([''], 'latin1')) == b''
    assert b''.join(iter_encode(['é'], 'latin1')) == b'\xe9'
    assert b''.join(iter_encode(['', 'é', '', ''], 'latin1')) == b'\xe9'
    assert b''.join(iter_encode(['', 'é', '', ''], 'utf-16')) == b'\xe9\x00'
    assert b''.join(iter_encode(['', 'é', '', ''], 'utf-16le')) == b'\xe9\x00'
    assert b''.join(iter_encode(['', 'é', '', ''], 'utf-16be')) == b'\x00\xe9'
    assert b''.join(iter_encode([
        '', 'h\uF7E9', '', 'llo'], 'x-user-defined')) == b'h\xe9llo'


def test_x_user_defined():
    encoded = b'2,\x0c\x0b\x1aO\xd9#\xcb\x0f\xc9\xbbt\xcf\xa8\xca'
    decoded = '2,\x0c\x0b\x1aO\uf7d9#\uf7cb\x0f\uf7c9\uf7bbt\uf7cf\uf7a8\uf7ca'
    assert decode(encoded, 'x-user-defined') == (decoded, lookup('x-user-defined'))
    assert encode(decoded, 'x-user-defined') == encoded

    # ASCII-only input round-trips unchanged.
    encoded = b'aa'
    decoded = 'aa'
    assert decode(encoded, 'x-user-defined') == (decoded, lookup('x-user-defined'))
    assert encode(decoded, 'x-user-defined') == encoded
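Taken together, the tests above pin down the public API. A condensed usage
sketch (assuming the standalone ``webencodings`` package is importable; within
this tree the same module is vendored as ``pip._vendor.webencodings``)::

    from webencodings import decode, encode, lookup

    assert lookup('Latin1').name == 'windows-1252'        # label normalization
    text, enc = decode(b'\xef\xbb\xbf\xc3\xa9', 'ascii')  # a BOM beats the fallback label
    assert (text, enc.name) == ('é', 'utf-8')
    assert encode('é', 'utf-8') == b'\xc3\xa9'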
""" from __future__ import unicode_literals import codecs ### Codec APIs class Codec(codecs.Codec): def encode(self, input, errors='strict'): return codecs.charmap_encode(input, errors, encoding_table) def decode(self, input, errors='strict'): return codecs.charmap_decode(input, errors, decoding_table) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.charmap_encode(input, self.errors, encoding_table)[0] class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): return codecs.charmap_decode(input, self.errors, decoding_table)[0] class StreamWriter(Codec, codecs.StreamWriter): pass class StreamReader(Codec, codecs.StreamReader): pass ### encodings module API codec_info = codecs.CodecInfo( name='x-user-defined', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, ) ### Decoding Table # Python 3: # for c in range(256): print(' %r' % chr(c if c < 128 else c + 0xF700)) decoding_table = ( '\x00' '\x01' '\x02' '\x03' '\x04' '\x05' '\x06' '\x07' '\x08' '\t' '\n' '\x0b' '\x0c' '\r' '\x0e' '\x0f' '\x10' '\x11' '\x12' '\x13' '\x14' '\x15' '\x16' '\x17' '\x18' '\x19' '\x1a' '\x1b' '\x1c' '\x1d' '\x1e' '\x1f' ' ' '!' '"' '#' '$' '%' '&' "'" '(' ')' '*' '+' ',' '-' '.' '/' '0' '1' '2' '3' '4' '5' '6' '7' '8' '9' ':' ';' '<' '=' '>' '?' '@' 'A' 'B' 'C' 'D' 'E' 'F' 'G' 'H' 'I' 'J' 'K' 'L' 'M' 'N' 'O' 'P' 'Q' 'R' 'S' 'T' 'U' 'V' 'W' 'X' 'Y' 'Z' '[' '\\' ']' '^' '_' '`' 'a' 'b' 'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j' 'k' 'l' 'm' 'n' 'o' 'p' 'q' 'r' 's' 't' 'u' 'v' 'w' 'x' 'y' 'z' '{' '|' '}' '~' '\x7f' '\uf780' '\uf781' '\uf782' '\uf783' '\uf784' '\uf785' '\uf786' '\uf787' '\uf788' '\uf789' '\uf78a' '\uf78b' '\uf78c' '\uf78d' '\uf78e' '\uf78f' '\uf790' '\uf791' '\uf792' '\uf793' '\uf794' '\uf795' '\uf796' '\uf797' '\uf798' '\uf799' '\uf79a' '\uf79b' '\uf79c' '\uf79d' '\uf79e' '\uf79f' '\uf7a0' '\uf7a1' '\uf7a2' '\uf7a3' '\uf7a4' '\uf7a5' '\uf7a6' '\uf7a7' '\uf7a8' '\uf7a9' '\uf7aa' '\uf7ab' '\uf7ac' '\uf7ad' '\uf7ae' '\uf7af' '\uf7b0' '\uf7b1' '\uf7b2' '\uf7b3' '\uf7b4' '\uf7b5' '\uf7b6' '\uf7b7' '\uf7b8' '\uf7b9' '\uf7ba' '\uf7bb' '\uf7bc' '\uf7bd' '\uf7be' '\uf7bf' '\uf7c0' '\uf7c1' '\uf7c2' '\uf7c3' '\uf7c4' '\uf7c5' '\uf7c6' '\uf7c7' '\uf7c8' '\uf7c9' '\uf7ca' '\uf7cb' '\uf7cc' '\uf7cd' '\uf7ce' '\uf7cf' '\uf7d0' '\uf7d1' '\uf7d2' '\uf7d3' '\uf7d4' '\uf7d5' '\uf7d6' '\uf7d7' '\uf7d8' '\uf7d9' '\uf7da' '\uf7db' '\uf7dc' '\uf7dd' '\uf7de' '\uf7df' '\uf7e0' '\uf7e1' '\uf7e2' '\uf7e3' '\uf7e4' '\uf7e5' '\uf7e6' '\uf7e7' '\uf7e8' '\uf7e9' '\uf7ea' '\uf7eb' '\uf7ec' '\uf7ed' '\uf7ee' '\uf7ef' '\uf7f0' '\uf7f1' '\uf7f2' '\uf7f3' '\uf7f4' '\uf7f5' '\uf7f6' '\uf7f7' '\uf7f8' '\uf7f9' '\uf7fa' '\uf7fb' '\uf7fc' '\uf7fd' '\uf7fe' '\uf7ff' ) ### Encoding table encoding_table = codecs.charmap_build(decoding_table) �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000034�00000000000�011452� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/__init__.py

# coding: utf-8
"""
Package resource API
--------------------

A resource is a logical file contained within a package, or a logical
subdirectory thereof.  The package resource API expects resource names
to have their path parts separated with ``/``, *not* whatever the local
path separator is.  Do not use os.path operations to manipulate resource
names being passed into the API.

The package resource API is designed to work with normal filesystem packages,
.egg files, and unpacked .egg files.  It can also work in a limited way with
.zip files and with custom PEP 302 loaders that support the ``get_data()``
method.
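A short illustrative example (``myapp`` and the resource path are
placeholders, not part of this API)::

    from pkg_resources import resource_exists, resource_string

    if resource_exists('myapp', 'templates/base.html'):
        data = resource_string('myapp', 'templates/base.html')  # bytes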
""" from __future__ import absolute_import import sys import os import io import time import re import types import zipfile import zipimport import warnings import stat import functools import pkgutil import operator import platform import collections import plistlib import email.parser import errno import tempfile import textwrap import itertools import inspect import ntpath import posixpath from pkgutil import get_importer try: import _imp except ImportError: # Python 3.2 compatibility import imp as _imp try: FileExistsError except NameError: FileExistsError = OSError from pkg_resources.extern import six from pkg_resources.extern.six.moves import urllib, map, filter # capture these to bypass sandboxing from os import utime try: from os import mkdir, rename, unlink WRITE_SUPPORT = True except ImportError: # no write support, probably under GAE WRITE_SUPPORT = False from os import open as os_open from os.path import isdir, split try: import importlib.machinery as importlib_machinery # access attribute to force import under delayed import mechanisms. importlib_machinery.__name__ except ImportError: importlib_machinery = None from . import py31compat from pkg_resources.extern import appdirs from pkg_resources.extern import packaging __import__('pkg_resources.extern.packaging.version') __import__('pkg_resources.extern.packaging.specifiers') __import__('pkg_resources.extern.packaging.requirements') __import__('pkg_resources.extern.packaging.markers') __metaclass__ = type if (3, 0) < sys.version_info < (3, 4): raise RuntimeError("Python 3.4 or later is required") if six.PY2: # Those builtin exceptions are only defined in Python 3 PermissionError = None NotADirectoryError = None # declare some globals that will be defined later to # satisfy the linters. require = None working_set = None add_activation_listener = None resources_stream = None cleanup_resources = None resource_dir = None resource_stream = None set_extraction_path = None resource_isdir = None resource_string = None iter_entry_points = None resource_listdir = None resource_filename = None resource_exists = None _distribution_finders = None _namespace_handlers = None _namespace_packages = None class PEP440Warning(RuntimeWarning): """ Used when there is an issue with a version or specifier not complying with PEP 440. """ def parse_version(v): try: return packaging.version.Version(v) except packaging.version.InvalidVersion: return packaging.version.LegacyVersion(v) _state_vars = {} def _declare_state(vartype, **kw): globals().update(kw) _state_vars.update(dict.fromkeys(kw, vartype)) def __getstate__(): state = {} g = globals() for k, v in _state_vars.items(): state[k] = g['_sget_' + v](g[k]) return state def __setstate__(state): g = globals() for k, v in state.items(): g['_sset_' + _state_vars[k]](k, g[k], v) return state def _sget_dict(val): return val.copy() def _sset_dict(key, ob, state): ob.clear() ob.update(state) def _sget_object(val): return val.__getstate__() def _sset_object(key, ob, state): ob.__setstate__(state) _sget_none = _sset_none = lambda *args: None def get_supported_platform(): """Return this platform's maximum compatible version. distutils.util.get_platform() normally reports the minimum version of Mac OS X that would be required to *use* extensions produced by distutils. But what we want when checking compatibility is to know the version of Mac OS X that we are *running*. To allow usage of packages that explicitly require a newer version of Mac OS X, we must also know the current version of the OS. 
If this condition occurs for any other platform with a version in its platform strings, this function should be extended accordingly. """ plat = get_build_platform() m = macosVersionString.match(plat) if m is not None and sys.platform == "darwin": try: plat = 'macosx-%s-%s' % ('.'.join(_macosx_vers()[:2]), m.group(3)) except ValueError: # not Mac OS X pass return plat __all__ = [ # Basic resource access and distribution/entry point discovery 'require', 'run_script', 'get_provider', 'get_distribution', 'load_entry_point', 'get_entry_map', 'get_entry_info', 'iter_entry_points', 'resource_string', 'resource_stream', 'resource_filename', 'resource_listdir', 'resource_exists', 'resource_isdir', # Environmental control 'declare_namespace', 'working_set', 'add_activation_listener', 'find_distributions', 'set_extraction_path', 'cleanup_resources', 'get_default_cache', # Primary implementation classes 'Environment', 'WorkingSet', 'ResourceManager', 'Distribution', 'Requirement', 'EntryPoint', # Exceptions 'ResolutionError', 'VersionConflict', 'DistributionNotFound', 'UnknownExtra', 'ExtractionError', # Warnings 'PEP440Warning', # Parsing functions and string utilities 'parse_requirements', 'parse_version', 'safe_name', 'safe_version', 'get_platform', 'compatible_platforms', 'yield_lines', 'split_sections', 'safe_extra', 'to_filename', 'invalid_marker', 'evaluate_marker', # filesystem utilities 'ensure_directory', 'normalize_path', # Distribution "precedence" constants 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', 'DEVELOP_DIST', # "Provider" interfaces, implementations, and registration/lookup APIs 'IMetadataProvider', 'IResourceProvider', 'FileMetadata', 'PathMetadata', 'EggMetadata', 'EmptyProvider', 'empty_provider', 'NullProvider', 'EggProvider', 'DefaultProvider', 'ZipProvider', 'register_finder', 'register_namespace_handler', 'register_loader_type', 'fixup_namespace_packages', 'get_importer', # Warnings 'PkgResourcesDeprecationWarning', # Deprecated/backward compatibility only 'run_main', 'AvailableDistributions', ] class ResolutionError(Exception): """Abstract base for dependency resolution errors""" def __repr__(self): return self.__class__.__name__ + repr(self.args) class VersionConflict(ResolutionError): """ An already-installed version conflicts with the requested version. Should be initialized with the installed Distribution and the requested Requirement. """ _template = "{self.dist} is installed but {self.req} is required" @property def dist(self): return self.args[0] @property def req(self): return self.args[1] def report(self): return self._template.format(**locals()) def with_context(self, required_by): """ If required_by is non-empty, return a version of self that is a ContextualVersionConflict. """ if not required_by: return self args = self.args + (required_by,) return ContextualVersionConflict(*args) class ContextualVersionConflict(VersionConflict): """ A VersionConflict that accepts a third parameter, the set of the requirements that required the installed Distribution. 
""" _template = VersionConflict._template + ' by {self.required_by}' @property def required_by(self): return self.args[2] class DistributionNotFound(ResolutionError): """A requested distribution was not found""" _template = ("The '{self.req}' distribution was not found " "and is required by {self.requirers_str}") @property def req(self): return self.args[0] @property def requirers(self): return self.args[1] @property def requirers_str(self): if not self.requirers: return 'the application' return ', '.join(self.requirers) def report(self): return self._template.format(**locals()) def __str__(self): return self.report() class UnknownExtra(ResolutionError): """Distribution doesn't have an "extra feature" of the given name""" _provider_factories = {} PY_MAJOR = sys.version[:3] EGG_DIST = 3 BINARY_DIST = 2 SOURCE_DIST = 1 CHECKOUT_DIST = 0 DEVELOP_DIST = -1 def register_loader_type(loader_type, provider_factory): """Register `provider_factory` to make providers for `loader_type` `loader_type` is the type or class of a PEP 302 ``module.__loader__``, and `provider_factory` is a function that, passed a *module* object, returns an ``IResourceProvider`` for that module. """ _provider_factories[loader_type] = provider_factory def get_provider(moduleOrReq): """Return an IResourceProvider for the named module or requirement""" if isinstance(moduleOrReq, Requirement): return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0] try: module = sys.modules[moduleOrReq] except KeyError: __import__(moduleOrReq) module = sys.modules[moduleOrReq] loader = getattr(module, '__loader__', None) return _find_adapter(_provider_factories, loader)(module) def _macosx_vers(_cache=[]): if not _cache: version = platform.mac_ver()[0] # fallback for MacPorts if version == '': plist = '/System/Library/CoreServices/SystemVersion.plist' if os.path.exists(plist): if hasattr(plistlib, 'readPlist'): plist_content = plistlib.readPlist(plist) if 'ProductVersion' in plist_content: version = plist_content['ProductVersion'] _cache.append(version.split('.')) return _cache[0] def _macosx_arch(machine): return {'PowerPC': 'ppc', 'Power_Macintosh': 'ppc'}.get(machine, machine) def get_build_platform(): """Return this platform's string for platform-specific distributions XXX Currently this is the same as ``distutils.util.get_platform()``, but it needs some hacks for Linux and Mac OS X. """ from sysconfig import get_platform plat = get_platform() if sys.platform == "darwin" and not plat.startswith('macosx-'): try: version = _macosx_vers() machine = os.uname()[4].replace(" ", "_") return "macosx-%d.%d-%s" % ( int(version[0]), int(version[1]), _macosx_arch(machine), ) except ValueError: # if someone is running a non-Mac darwin system, this will fall # through to the default implementation pass return plat macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)") darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)") # XXX backward compat get_platform = get_build_platform def compatible_platforms(provided, required): """Can code for the `provided` platform run on the `required` platform? Returns true if either platform is ``None``, or the platforms are equal. XXX Needs compatibility checks for Linux and other unixy OSes. """ if provided is None or required is None or provided == required: # easy case return True # Mac OS X special cases reqMac = macosVersionString.match(required) if reqMac: provMac = macosVersionString.match(provided) # is this a Mac package? 
if not provMac: # this is backwards compatibility for packages built before # setuptools 0.6. All packages built after this point will # use the new macosx designation. provDarwin = darwinVersionString.match(provided) if provDarwin: dversion = int(provDarwin.group(1)) macosversion = "%s.%s" % (reqMac.group(1), reqMac.group(2)) if dversion == 7 and macosversion >= "10.3" or \ dversion == 8 and macosversion >= "10.4": return True # egg isn't macosx or legacy darwin return False # are they the same major version and machine type? if provMac.group(1) != reqMac.group(1) or \ provMac.group(3) != reqMac.group(3): return False # is the required OS major update >= the provided one? if int(provMac.group(2)) > int(reqMac.group(2)): return False return True # XXX Linux and other platforms' special cases should go here return False def run_script(dist_spec, script_name): """Locate distribution `dist_spec` and run its `script_name` script""" ns = sys._getframe(1).f_globals name = ns['__name__'] ns.clear() ns['__name__'] = name require(dist_spec)[0].run_script(script_name, ns) # backward compatibility run_main = run_script def get_distribution(dist): """Return a current distribution object for a Requirement or string""" if isinstance(dist, six.string_types): dist = Requirement.parse(dist) if isinstance(dist, Requirement): dist = get_provider(dist) if not isinstance(dist, Distribution): raise TypeError("Expected string, Requirement, or Distribution", dist) return dist def load_entry_point(dist, group, name): """Return `name` entry point of `group` for `dist` or raise ImportError""" return get_distribution(dist).load_entry_point(group, name) def get_entry_map(dist, group=None): """Return the entry point map for `group`, or the full entry map""" return get_distribution(dist).get_entry_map(group) def get_entry_info(dist, group, name): """Return the EntryPoint object for `group`+`name`, or ``None``""" return get_distribution(dist).get_entry_info(group, name) class IMetadataProvider: def has_metadata(name): """Does the package's distribution contain the named metadata?""" def get_metadata(name): """The named metadata resource as a string""" def get_metadata_lines(name): """Yield named metadata resource as list of non-blank non-comment lines Leading and trailing whitespace is stripped from each line, and lines with ``#`` as the first non-blank character are omitted.""" def metadata_isdir(name): """Is the named metadata a directory? (like ``os.path.isdir()``)""" def metadata_listdir(name): """List of metadata names in the directory (like ``os.listdir()``)""" def run_script(script_name, namespace): """Execute the named script in the supplied namespace dictionary""" class IResourceProvider(IMetadataProvider): """An object that provides access to package resources""" def get_resource_filename(manager, resource_name): """Return a true filesystem path for `resource_name` `manager` must be an ``IResourceManager``""" def get_resource_stream(manager, resource_name): """Return a readable file-like object for `resource_name` `manager` must be an ``IResourceManager``""" def get_resource_string(manager, resource_name): """Return a string containing the contents of `resource_name` `manager` must be an ``IResourceManager``""" def has_resource(resource_name): """Does the package contain the named resource?""" def resource_isdir(resource_name): """Is the named resource a directory? 
(like ``os.path.isdir()``)""" def resource_listdir(resource_name): """List of resource names in the directory (like ``os.listdir()``)""" class WorkingSet: """A collection of active distributions on sys.path (or a similar list)""" def __init__(self, entries=None): """Create working set from list of path entries (default=sys.path)""" self.entries = [] self.entry_keys = {} self.by_key = {} self.callbacks = [] if entries is None: entries = sys.path for entry in entries: self.add_entry(entry) @classmethod def _build_master(cls): """ Prepare the master working set. """ ws = cls() try: from __main__ import __requires__ except ImportError: # The main program does not list any requirements return ws # ensure the requirements are met try: ws.require(__requires__) except VersionConflict: return cls._build_from_requirements(__requires__) return ws @classmethod def _build_from_requirements(cls, req_spec): """ Build a working set from a requirement spec. Rewrites sys.path. """ # try it without defaults already on sys.path # by starting with an empty path ws = cls([]) reqs = parse_requirements(req_spec) dists = ws.resolve(reqs, Environment()) for dist in dists: ws.add(dist) # add any missing entries from sys.path for entry in sys.path: if entry not in ws.entries: ws.add_entry(entry) # then copy back to sys.path sys.path[:] = ws.entries return ws def add_entry(self, entry): """Add a path item to ``.entries``, finding any distributions on it ``find_distributions(entry, True)`` is used to find distributions corresponding to the path entry, and they are added. `entry` is always appended to ``.entries``, even if it is already present. (This is because ``sys.path`` can contain the same value more than once, and the ``.entries`` of the ``sys.path`` WorkingSet should always equal ``sys.path``.) """ self.entry_keys.setdefault(entry, []) self.entries.append(entry) for dist in find_distributions(entry, True): self.add(dist, entry, False) def __contains__(self, dist): """True if `dist` is the active distribution for its project""" return self.by_key.get(dist.key) == dist def find(self, req): """Find a distribution matching requirement `req` If there is an active distribution for the requested project, this returns it as long as it meets the version requirement specified by `req`. But, if there is an active distribution for the project and it does *not* meet the `req` requirement, ``VersionConflict`` is raised. If there is no active distribution for the requested project, ``None`` is returned. """ dist = self.by_key.get(req.key) if dist is not None and dist not in req: # XXX add more info raise VersionConflict(dist, req) return dist def iter_entry_points(self, group, name=None): """Yield entry point objects from `group` matching `name` If `name` is None, yields all entry points in `group` from all distributions in the working set, otherwise only ones matching both `group` and `name` are yielded (in distribution order). """ return ( entry for dist in self for entry in dist.get_entry_map(group).values() if name is None or name == entry.name ) def run_script(self, requires, script_name): """Locate distribution for `requires` and run `script_name` script""" ns = sys._getframe(1).f_globals name = ns['__name__'] ns.clear() ns['__name__'] = name self.require(requires)[0].run_script(script_name, ns) def __iter__(self): """Yield distributions for non-duplicate projects in the working set The yield order is the order in which the items' path entries were added to the working set. 
""" seen = {} for item in self.entries: if item not in self.entry_keys: # workaround a cache issue continue for key in self.entry_keys[item]: if key not in seen: seen[key] = 1 yield self.by_key[key] def add(self, dist, entry=None, insert=True, replace=False): """Add `dist` to working set, associated with `entry` If `entry` is unspecified, it defaults to the ``.location`` of `dist`. On exit from this routine, `entry` is added to the end of the working set's ``.entries`` (if it wasn't already present). `dist` is only added to the working set if it's for a project that doesn't already have a distribution in the set, unless `replace=True`. If it's added, any callbacks registered with the ``subscribe()`` method will be called. """ if insert: dist.insert_on(self.entries, entry, replace=replace) if entry is None: entry = dist.location keys = self.entry_keys.setdefault(entry, []) keys2 = self.entry_keys.setdefault(dist.location, []) if not replace and dist.key in self.by_key: # ignore hidden distros return self.by_key[dist.key] = dist if dist.key not in keys: keys.append(dist.key) if dist.key not in keys2: keys2.append(dist.key) self._added_new(dist) def resolve(self, requirements, env=None, installer=None, replace_conflicting=False, extras=None): """List all distributions needed to (recursively) meet `requirements` `requirements` must be a sequence of ``Requirement`` objects. `env`, if supplied, should be an ``Environment`` instance. If not supplied, it defaults to all distributions available within any entry or distribution in the working set. `installer`, if supplied, will be invoked with each requirement that cannot be met by an already-installed distribution; it should return a ``Distribution`` or ``None``. Unless `replace_conflicting=True`, raises a VersionConflict exception if any requirements are found on the path that have the correct name but the wrong version. Otherwise, if an `installer` is supplied it will be invoked to obtain the correct version of the requirement and activate it. `extras` is a list of the extras to be used with these requirements. This is important because extra requirements may look like `my_req; extra = "my_extra"`, which would otherwise be interpreted as a purely optional requirement. Instead, we want to be able to assert that these requirements are truly required. """ # set up the stack requirements = list(requirements)[::-1] # set of processed requirements processed = {} # key -> dist best = {} to_activate = [] req_extras = _ReqExtras() # Mapping of requirement to set of distributions that required it; # useful for reporting info about conflicts. 
required_by = collections.defaultdict(set) while requirements: # process dependencies breadth-first req = requirements.pop(0) if req in processed: # Ignore cyclic or redundant dependencies continue if not req_extras.markers_pass(req, extras): continue dist = best.get(req.key) if dist is None: # Find the best distribution and add it to the map dist = self.by_key.get(req.key) if dist is None or (dist not in req and replace_conflicting): ws = self if env is None: if dist is None: env = Environment(self.entries) else: # Use an empty environment and workingset to avoid # any further conflicts with the conflicting # distribution env = Environment([]) ws = WorkingSet([]) dist = best[req.key] = env.best_match( req, ws, installer, replace_conflicting=replace_conflicting ) if dist is None: requirers = required_by.get(req, None) raise DistributionNotFound(req, requirers) to_activate.append(dist) if dist not in req: # Oops, the "best" so far conflicts with a dependency dependent_req = required_by[req] raise VersionConflict(dist, req).with_context(dependent_req) # push the new requirements onto the stack new_requirements = dist.requires(req.extras)[::-1] requirements.extend(new_requirements) # Register the new requirements needed by req for new_requirement in new_requirements: required_by[new_requirement].add(req.project_name) req_extras[new_requirement] = req.extras processed[req] = True # return list of distros to activate return to_activate def find_plugins( self, plugin_env, full_env=None, installer=None, fallback=True): """Find all activatable distributions in `plugin_env` Example usage:: distributions, errors = working_set.find_plugins( Environment(plugin_dirlist) ) # add plugins+libs to sys.path map(working_set.add, distributions) # display errors print('Could not load', errors) The `plugin_env` should be an ``Environment`` instance that contains only distributions that are in the project's "plugin directory" or directories. The `full_env`, if supplied, should be an ``Environment`` contains all currently-available distributions. If `full_env` is not supplied, one is created automatically from the ``WorkingSet`` this method is called on, which will typically mean that every directory on ``sys.path`` will be scanned for distributions. `installer` is a standard installer callback as used by the ``resolve()`` method. The `fallback` flag indicates whether we should attempt to resolve older versions of a plugin if the newest version cannot be resolved. This method returns a 2-tuple: (`distributions`, `error_info`), where `distributions` is a list of the distributions found in `plugin_env` that were loadable, along with any other distributions that are needed to resolve their dependencies. `error_info` is a dictionary mapping unloadable plugin distributions to an exception instance describing the error that occurred. Usually this will be a ``DistributionNotFound`` or ``VersionConflict`` instance. 
""" plugin_projects = list(plugin_env) # scan project names in alphabetic order plugin_projects.sort() error_info = {} distributions = {} if full_env is None: env = Environment(self.entries) env += plugin_env else: env = full_env + plugin_env shadow_set = self.__class__([]) # put all our entries in shadow_set list(map(shadow_set.add, self)) for project_name in plugin_projects: for dist in plugin_env[project_name]: req = [dist.as_requirement()] try: resolvees = shadow_set.resolve(req, env, installer) except ResolutionError as v: # save error info error_info[dist] = v if fallback: # try the next older version of project continue else: # give up on this project, keep going break else: list(map(shadow_set.add, resolvees)) distributions.update(dict.fromkeys(resolvees)) # success, no need to try any more versions of this project break distributions = list(distributions) distributions.sort() return distributions, error_info def require(self, *requirements): """Ensure that distributions matching `requirements` are activated `requirements` must be a string or a (possibly-nested) sequence thereof, specifying the distributions and versions required. The return value is a sequence of the distributions that needed to be activated to fulfill the requirements; all relevant distributions are included, even if they were already activated in this working set. """ needed = self.resolve(parse_requirements(requirements)) for dist in needed: self.add(dist) return needed def subscribe(self, callback, existing=True): """Invoke `callback` for all distributions If `existing=True` (default), call on all existing ones, as well. """ if callback in self.callbacks: return self.callbacks.append(callback) if not existing: return for dist in self: callback(dist) def _added_new(self, dist): for callback in self.callbacks: callback(dist) def __getstate__(self): return ( self.entries[:], self.entry_keys.copy(), self.by_key.copy(), self.callbacks[:] ) def __setstate__(self, e_k_b_c): entries, keys, by_key, callbacks = e_k_b_c self.entries = entries[:] self.entry_keys = keys.copy() self.by_key = by_key.copy() self.callbacks = callbacks[:] class _ReqExtras(dict): """ Map each requirement to the extras that demanded it. """ def markers_pass(self, req, extras=None): """ Evaluate markers for req against each extra that demanded it. Return False if the req has a marker and fails evaluation. Otherwise, return True. """ extra_evals = ( req.marker.evaluate({'extra': extra}) for extra in self.get(req, ()) + (extras or (None,)) ) return not req.marker or any(extra_evals) class Environment: """Searchable snapshot of distributions on a search path""" def __init__( self, search_path=None, platform=get_supported_platform(), python=PY_MAJOR): """Snapshot distributions available on a search path Any distributions found on `search_path` are added to the environment. `search_path` should be a sequence of ``sys.path`` items. If not supplied, ``sys.path`` is used. `platform` is an optional string specifying the name of the platform that platform-specific distributions must be compatible with. If unspecified, it defaults to the current platform. `python` is an optional string naming the desired version of Python (e.g. ``'3.6'``); it defaults to the current version. You may explicitly set `platform` (and/or `python`) to ``None`` if you wish to map *all* distributions, not just those compatible with the running platform or Python version. 
""" self._distmap = {} self.platform = platform self.python = python self.scan(search_path) def can_add(self, dist): """Is distribution `dist` acceptable for this environment? The distribution must match the platform and python version requirements specified when this environment was created, or False is returned. """ py_compat = ( self.python is None or dist.py_version is None or dist.py_version == self.python ) return py_compat and compatible_platforms(dist.platform, self.platform) def remove(self, dist): """Remove `dist` from the environment""" self._distmap[dist.key].remove(dist) def scan(self, search_path=None): """Scan `search_path` for distributions usable in this environment Any distributions found are added to the environment. `search_path` should be a sequence of ``sys.path`` items. If not supplied, ``sys.path`` is used. Only distributions conforming to the platform/python version defined at initialization are added. """ if search_path is None: search_path = sys.path for item in search_path: for dist in find_distributions(item): self.add(dist) def __getitem__(self, project_name): """Return a newest-to-oldest list of distributions for `project_name` Uses case-insensitive `project_name` comparison, assuming all the project's distributions use their project's name converted to all lowercase as their key. """ distribution_key = project_name.lower() return self._distmap.get(distribution_key, []) def add(self, dist): """Add `dist` if we ``can_add()`` it and it has not already been added """ if self.can_add(dist) and dist.has_version(): dists = self._distmap.setdefault(dist.key, []) if dist not in dists: dists.append(dist) dists.sort(key=operator.attrgetter('hashcmp'), reverse=True) def best_match( self, req, working_set, installer=None, replace_conflicting=False): """Find distribution best matching `req` and usable on `working_set` This calls the ``find(req)`` method of the `working_set` to see if a suitable distribution is already active. (This may raise ``VersionConflict`` if an unsuitable version of the project is already active in the specified `working_set`.) If a suitable distribution isn't active, this method returns the newest distribution in the environment that meets the ``Requirement`` in `req`. If no suitable distribution is found, and `installer` is supplied, then the result of calling the environment's ``obtain(req, installer)`` method will be returned. """ try: dist = working_set.find(req) except VersionConflict: if not replace_conflicting: raise dist = None if dist is not None: return dist for dist in self[req.key]: if dist in req: return dist # try to download/install return self.obtain(req, installer) def obtain(self, requirement, installer=None): """Obtain a distribution matching `requirement` (e.g. via download) Obtain a distro that matches requirement (e.g. via download). In the base ``Environment`` class, this routine just returns ``installer(requirement)``, unless `installer` is None, in which case None is returned instead. 
This method is a hook that allows subclasses to attempt other ways of obtaining a distribution before falling back to the `installer` argument.""" if installer is not None: return installer(requirement) def __iter__(self): """Yield the unique project names of the available distributions""" for key in self._distmap.keys(): if self[key]: yield key def __iadd__(self, other): """In-place addition of a distribution or environment""" if isinstance(other, Distribution): self.add(other) elif isinstance(other, Environment): for project in other: for dist in other[project]: self.add(dist) else: raise TypeError("Can't add %r to environment" % (other,)) return self def __add__(self, other): """Add an environment or distribution to an environment""" new = self.__class__([], platform=None, python=None) for env in self, other: new += env return new # XXX backward compatibility AvailableDistributions = Environment class ExtractionError(RuntimeError): """An error occurred extracting a resource The following attributes are available from instances of this exception: manager The resource manager that raised this exception cache_path The base directory for resource extraction original_error The exception instance that caused extraction to fail """ class ResourceManager: """Manage resource extraction and packages""" extraction_path = None def __init__(self): self.cached_files = {} def resource_exists(self, package_or_requirement, resource_name): """Does the named resource exist?""" return get_provider(package_or_requirement).has_resource(resource_name) def resource_isdir(self, package_or_requirement, resource_name): """Is the named resource an existing directory?""" return get_provider(package_or_requirement).resource_isdir( resource_name ) def resource_filename(self, package_or_requirement, resource_name): """Return a true filesystem path for specified resource""" return get_provider(package_or_requirement).get_resource_filename( self, resource_name ) def resource_stream(self, package_or_requirement, resource_name): """Return a readable file-like object for specified resource""" return get_provider(package_or_requirement).get_resource_stream( self, resource_name ) def resource_string(self, package_or_requirement, resource_name): """Return specified resource as a string""" return get_provider(package_or_requirement).get_resource_string( self, resource_name ) def resource_listdir(self, package_or_requirement, resource_name): """List the contents of the named resource directory""" return get_provider(package_or_requirement).resource_listdir( resource_name ) def extraction_error(self): """Give an error message for problems extracting file(s)""" old_exc = sys.exc_info()[1] cache_path = self.extraction_path or get_default_cache() tmpl = textwrap.dedent(""" Can't extract file(s) to egg cache The following error occurred while trying to extract file(s) to the Python egg cache: {old_exc} The Python egg cache directory is currently set to: {cache_path} Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory. """).lstrip() err = ExtractionError(tmpl.format(**locals())) err.manager = self err.cache_path = cache_path err.original_error = old_exc raise err def get_cache_path(self, archive_name, names=()): """Return absolute location in cache for `archive_name` and `names` The parent directory of the resulting path will be created if it does not already exist. 
`archive_name` should be the base filename of the enclosing egg (which may not be the name of the enclosing zipfile!), including its ".egg" extension. `names`, if provided, should be a sequence of path name parts "under" the egg's extraction location. This method should only be called by resource providers that need to obtain an extraction location, and only for names they intend to extract, as it tracks the generated names for possible cleanup later. """ extract_path = self.extraction_path or get_default_cache() target_path = os.path.join(extract_path, archive_name + '-tmp', *names) try: _bypass_ensure_directory(target_path) except Exception: self.extraction_error() self._warn_unsafe_extraction_path(extract_path) self.cached_files[target_path] = 1 return target_path @staticmethod def _warn_unsafe_extraction_path(path): """ If the default extraction path is overridden and set to an insecure location, such as /tmp, it opens up an opportunity for an attacker to replace an extracted file with an unauthorized payload. Warn the user if a known insecure location is used. See Distribute #375 for more details. """ if os.name == 'nt' and not path.startswith(os.environ['windir']): # On Windows, permissions are generally restrictive by default # and temp directories are not writable by other users, so # bypass the warning. return mode = os.stat(path).st_mode if mode & stat.S_IWOTH or mode & stat.S_IWGRP: msg = ( "%s is writable by group/others and vulnerable to attack " "when " "used with get_resource_filename. Consider a more secure " "location (set with .set_extraction_path or the " "PYTHON_EGG_CACHE environment variable)." % path ) warnings.warn(msg, UserWarning) def postprocess(self, tempname, filename): """Perform any platform-specific postprocessing of `tempname` This is where Mac header rewrites should be done; other platforms don't have anything special they should do. Resource providers should call this method ONLY after successfully extracting a compressed resource. They must NOT call it on resources that are already in the filesystem. `tempname` is the current (temporary) name of the file, and `filename` is the name it will be renamed to by the caller after this routine returns. """ if os.name == 'posix': # Make the resource executable mode = ((os.stat(tempname).st_mode) | 0o555) & 0o7777 os.chmod(tempname, mode) def set_extraction_path(self, path): """Set the base path where resources will be extracted to, if needed. If you do not call this routine before any extractions take place, the path defaults to the return value of ``get_default_cache()``. (Which is based on the ``PYTHON_EGG_CACHE`` environment variable, with various platform-specific fallbacks. See that routine's documentation for more details.) Resources are extracted to subdirectories of this path based upon information given by the ``IResourceProvider``. You may set this to a temporary directory, but then you must call ``cleanup_resources()`` to delete the extracted files when done. There is no guarantee that ``cleanup_resources()`` will be able to remove all extracted files. (Note: you may not change the extraction path for a given resource manager once resources have been extracted, unless you first call ``cleanup_resources()``.) 
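For example, a throw-away extraction directory cleaned up at exit (an
illustrative sketch, ``manager`` being a ``ResourceManager`` instance)::

    import atexit
    import tempfile

    manager.set_extraction_path(tempfile.mkdtemp())
    atexit.register(manager.cleanup_resources)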
""" if self.cached_files: raise ValueError( "Can't change extraction path, files already extracted" ) self.extraction_path = path def cleanup_resources(self, force=False): """ Delete all extracted resource files and directories, returning a list of the file and directory names that could not be successfully removed. This function does not have any concurrency protection, so it should generally only be called when the extraction path is a temporary directory exclusive to a single process. This method is not automatically called; you must call it explicitly or register it as an ``atexit`` function if you wish to ensure cleanup of a temporary directory used for extractions. """ # XXX def get_default_cache(): """ Return the ``PYTHON_EGG_CACHE`` environment variable or a platform-relevant user cache dir for an app named "Python-Eggs". """ return ( os.environ.get('PYTHON_EGG_CACHE') or appdirs.user_cache_dir(appname='Python-Eggs') ) def safe_name(name): """Convert an arbitrary string to a standard distribution name Any runs of non-alphanumeric/. characters are replaced with a single '-'. """ return re.sub('[^A-Za-z0-9.]+', '-', name) def safe_version(version): """ Convert an arbitrary string to a standard version string """ try: # normalize the version return str(packaging.version.Version(version)) except packaging.version.InvalidVersion: version = version.replace(' ', '.') return re.sub('[^A-Za-z0-9.]+', '-', version) def safe_extra(extra): """Convert an arbitrary string to a standard 'extra' name Any runs of non-alphanumeric characters are replaced with a single '_', and the result is always lowercased. """ return re.sub('[^A-Za-z0-9.-]+', '_', extra).lower() def to_filename(name): """Convert a project or version name to its filename-escaped form Any '-' characters are currently replaced with '_'. """ return name.replace('-', '_') def invalid_marker(text): """ Validate text as a PEP 508 environment marker; return an exception if invalid or False otherwise. """ try: evaluate_marker(text) except SyntaxError as e: e.filename = None e.lineno = None return e return False def evaluate_marker(text, extra=None): """ Evaluate a PEP 508 environment marker. Return a boolean indicating the marker result in this environment. Raise SyntaxError if marker is invalid. This implementation uses the 'pyparsing' module. 
""" try: marker = packaging.markers.Marker(text) return marker.evaluate() except packaging.markers.InvalidMarker as e: raise SyntaxError(e) class NullProvider: """Try to implement resources and metadata for arbitrary PEP 302 loaders""" egg_name = None egg_info = None loader = None def __init__(self, module): self.loader = getattr(module, '__loader__', None) self.module_path = os.path.dirname(getattr(module, '__file__', '')) def get_resource_filename(self, manager, resource_name): return self._fn(self.module_path, resource_name) def get_resource_stream(self, manager, resource_name): return io.BytesIO(self.get_resource_string(manager, resource_name)) def get_resource_string(self, manager, resource_name): return self._get(self._fn(self.module_path, resource_name)) def has_resource(self, resource_name): return self._has(self._fn(self.module_path, resource_name)) def _get_metadata_path(self, name): return self._fn(self.egg_info, name) def has_metadata(self, name): if not self.egg_info: return self.egg_info path = self._get_metadata_path(name) return self._has(path) def get_metadata(self, name): if not self.egg_info: return "" value = self._get(self._fn(self.egg_info, name)) return value.decode('utf-8') if six.PY3 else value def get_metadata_lines(self, name): return yield_lines(self.get_metadata(name)) def resource_isdir(self, resource_name): return self._isdir(self._fn(self.module_path, resource_name)) def metadata_isdir(self, name): return self.egg_info and self._isdir(self._fn(self.egg_info, name)) def resource_listdir(self, resource_name): return self._listdir(self._fn(self.module_path, resource_name)) def metadata_listdir(self, name): if self.egg_info: return self._listdir(self._fn(self.egg_info, name)) return [] def run_script(self, script_name, namespace): script = 'scripts/' + script_name if not self.has_metadata(script): raise ResolutionError( "Script {script!r} not found in metadata at {self.egg_info!r}" .format(**locals()), ) script_text = self.get_metadata(script).replace('\r\n', '\n') script_text = script_text.replace('\r', '\n') script_filename = self._fn(self.egg_info, script) namespace['__file__'] = script_filename if os.path.exists(script_filename): source = open(script_filename).read() code = compile(source, script_filename, 'exec') exec(code, namespace, namespace) else: from linecache import cache cache[script_filename] = ( len(script_text), 0, script_text.split('\n'), script_filename ) script_code = compile(script_text, script_filename, 'exec') exec(script_code, namespace, namespace) def _has(self, path): raise NotImplementedError( "Can't perform this operation for unregistered loader type" ) def _isdir(self, path): raise NotImplementedError( "Can't perform this operation for unregistered loader type" ) def _listdir(self, path): raise NotImplementedError( "Can't perform this operation for unregistered loader type" ) def _fn(self, base, resource_name): self._validate_resource_path(resource_name) if resource_name: return os.path.join(base, *resource_name.split('/')) return base @staticmethod def _validate_resource_path(path): """ Validate the resource paths according to the docs. 
https://setuptools.readthedocs.io/en/latest/pkg_resources.html#basic-resource-access >>> warned = getfixture('recwarn') >>> warnings.simplefilter('always') >>> vrp = NullProvider._validate_resource_path >>> vrp('foo/bar.txt') >>> bool(warned) False >>> vrp('../foo/bar.txt') >>> bool(warned) True >>> warned.clear() >>> vrp('/foo/bar.txt') >>> bool(warned) True >>> vrp('foo/../../bar.txt') >>> bool(warned) True >>> warned.clear() >>> vrp('foo/f../bar.txt') >>> bool(warned) False Windows path separators are straight-up disallowed. >>> vrp(r'\\foo/bar.txt') Traceback (most recent call last): ... ValueError: Use of .. or absolute path in a resource path \ is not allowed. >>> vrp(r'C:\\foo/bar.txt') Traceback (most recent call last): ... ValueError: Use of .. or absolute path in a resource path \ is not allowed. Blank values are allowed >>> vrp('') >>> bool(warned) False Non-string values are not. >>> vrp(None) Traceback (most recent call last): ... AttributeError: ... """ invalid = ( os.path.pardir in path.split(posixpath.sep) or posixpath.isabs(path) or ntpath.isabs(path) ) if not invalid: return msg = "Use of .. or absolute path in a resource path is not allowed." # Aggressively disallow Windows absolute paths if ntpath.isabs(path) and not posixpath.isabs(path): raise ValueError(msg) # for compatibility, warn; in future # raise ValueError(msg) warnings.warn( msg[:-1] + " and will raise exceptions in a future release.", DeprecationWarning, stacklevel=4, ) def _get(self, path): if hasattr(self.loader, 'get_data'): return self.loader.get_data(path) raise NotImplementedError( "Can't perform this operation for loaders without 'get_data()'" ) register_loader_type(object, NullProvider) class EggProvider(NullProvider): """Provider based on a virtual filesystem""" def __init__(self, module): NullProvider.__init__(self, module) self._setup_prefix() def _setup_prefix(self): # we assume here that our metadata may be nested inside a "basket" # of multiple eggs; that's why we use module_path instead of .archive path = self.module_path old = None while path != old: if _is_egg_path(path): self.egg_name = os.path.basename(path) self.egg_info = os.path.join(path, 'EGG-INFO') self.egg_root = path break old = path path, base = os.path.split(path) class DefaultProvider(EggProvider): """Provides access to package resources in the filesystem""" def _has(self, path): return os.path.exists(path) def _isdir(self, path): return os.path.isdir(path) def _listdir(self, path): return os.listdir(path) def get_resource_stream(self, manager, resource_name): return open(self._fn(self.module_path, resource_name), 'rb') def _get(self, path): with open(path, 'rb') as stream: return stream.read() @classmethod def _register(cls): loader_names = 'SourceFileLoader', 'SourcelessFileLoader', for name in loader_names: loader_cls = getattr(importlib_machinery, name, type(None)) register_loader_type(loader_cls, cls) DefaultProvider._register() class EmptyProvider(NullProvider): """Provider that returns nothing for all requests""" module_path = None _isdir = _has = lambda self, path: False def _get(self, path): return '' def _listdir(self, path): return [] def __init__(self): pass empty_provider = EmptyProvider() class ZipManifests(dict): """ zip manifest builder """ @classmethod def build(cls, path): """ Build a dictionary similar to the zipimport directory caches, except instead of tuples, store ZipInfo objects. Use a platform-specific path separator (os.sep) for the path keys for compatibility with pypy on Windows. 
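An illustrative sketch (the egg path and member name are hypothetical):

>>> manifest = ZipManifests.build('/path/to/some.egg')  # doctest: +SKIP
>>> type(manifest[os.sep.join(['EGG-INFO', 'PKG-INFO'])])  # doctest: +SKIP
<class 'zipfile.ZipInfo'>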
""" with zipfile.ZipFile(path) as zfile: items = ( ( name.replace('/', os.sep), zfile.getinfo(name), ) for name in zfile.namelist() ) return dict(items) load = build class MemoizedZipManifests(ZipManifests): """ Memoized zipfile manifests. """ manifest_mod = collections.namedtuple('manifest_mod', 'manifest mtime') def load(self, path): """ Load a manifest at path or return a suitable manifest already loaded. """ path = os.path.normpath(path) mtime = os.stat(path).st_mtime if path not in self or self[path].mtime != mtime: manifest = self.build(path) self[path] = self.manifest_mod(manifest, mtime) return self[path].manifest class ZipProvider(EggProvider): """Resource support for zips and eggs""" eagers = None _zip_manifests = MemoizedZipManifests() def __init__(self, module): EggProvider.__init__(self, module) self.zip_pre = self.loader.archive + os.sep def _zipinfo_name(self, fspath): # Convert a virtual filename (full path to file) into a zipfile subpath # usable with the zipimport directory cache for our target archive fspath = fspath.rstrip(os.sep) if fspath == self.loader.archive: return '' if fspath.startswith(self.zip_pre): return fspath[len(self.zip_pre):] raise AssertionError( "%s is not a subpath of %s" % (fspath, self.zip_pre) ) def _parts(self, zip_path): # Convert a zipfile subpath into an egg-relative path part list. # pseudo-fs path fspath = self.zip_pre + zip_path if fspath.startswith(self.egg_root + os.sep): return fspath[len(self.egg_root) + 1:].split(os.sep) raise AssertionError( "%s is not a subpath of %s" % (fspath, self.egg_root) ) @property def zipinfo(self): return self._zip_manifests.load(self.loader.archive) def get_resource_filename(self, manager, resource_name): if not self.egg_name: raise NotImplementedError( "resource_filename() only supported for .egg, not .zip" ) # no need to lock for extraction, since we use temp names zip_path = self._resource_to_zip(resource_name) eagers = self._get_eager_resources() if '/'.join(self._parts(zip_path)) in eagers: for name in eagers: self._extract_resource(manager, self._eager_to_zip(name)) return self._extract_resource(manager, zip_path) @staticmethod def _get_date_and_size(zip_stat): size = zip_stat.file_size # ymdhms+wday, yday, dst date_time = zip_stat.date_time + (0, 0, -1) # 1980 offset already done timestamp = time.mktime(date_time) return timestamp, size def _extract_resource(self, manager, zip_path): if zip_path in self._index(): for name in self._index()[zip_path]: last = self._extract_resource( manager, os.path.join(zip_path, name) ) # return the extracted directory name return os.path.dirname(last) timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) if not WRITE_SUPPORT: raise IOError('"os.rename" and "os.unlink" are not supported ' 'on this platform') try: real_path = manager.get_cache_path( self.egg_name, self._parts(zip_path) ) if self._is_current(real_path, zip_path): return real_path outf, tmpnam = _mkstemp( ".$extract", dir=os.path.dirname(real_path), ) os.write(outf, self.loader.get_data(zip_path)) os.close(outf) utime(tmpnam, (timestamp, timestamp)) manager.postprocess(tmpnam, real_path) try: rename(tmpnam, real_path) except os.error: if os.path.isfile(real_path): if self._is_current(real_path, zip_path): # the file became current since it was checked above, # so proceed. 
return real_path # Windows, del old file and retry elif os.name == 'nt': unlink(real_path) rename(tmpnam, real_path) return real_path raise except os.error: # report a user-friendly error manager.extraction_error() return real_path def _is_current(self, file_path, zip_path): """ Return True if the file_path is current for this zip_path """ timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) if not os.path.isfile(file_path): return False stat = os.stat(file_path) if stat.st_size != size or stat.st_mtime != timestamp: return False # check that the contents match zip_contents = self.loader.get_data(zip_path) with open(file_path, 'rb') as f: file_contents = f.read() return zip_contents == file_contents def _get_eager_resources(self): if self.eagers is None: eagers = [] for name in ('native_libs.txt', 'eager_resources.txt'): if self.has_metadata(name): eagers.extend(self.get_metadata_lines(name)) self.eagers = eagers return self.eagers def _index(self): try: return self._dirindex except AttributeError: ind = {} for path in self.zipinfo: parts = path.split(os.sep) while parts: parent = os.sep.join(parts[:-1]) if parent in ind: ind[parent].append(parts[-1]) break else: ind[parent] = [parts.pop()] self._dirindex = ind return ind def _has(self, fspath): zip_path = self._zipinfo_name(fspath) return zip_path in self.zipinfo or zip_path in self._index() def _isdir(self, fspath): return self._zipinfo_name(fspath) in self._index() def _listdir(self, fspath): return list(self._index().get(self._zipinfo_name(fspath), ())) def _eager_to_zip(self, resource_name): return self._zipinfo_name(self._fn(self.egg_root, resource_name)) def _resource_to_zip(self, resource_name): return self._zipinfo_name(self._fn(self.module_path, resource_name)) register_loader_type(zipimport.zipimporter, ZipProvider) class FileMetadata(EmptyProvider): """Metadata handler for standalone PKG-INFO files Usage:: metadata = FileMetadata("/path/to/PKG-INFO") This provider rejects all data and metadata requests except for PKG-INFO, which is treated as existing, and will be the contents of the file at the provided location. 
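An illustrative sketch (the path is hypothetical and must exist for the first check to be True):

>>> metadata = FileMetadata('/path/to/PKG-INFO')  # doctest: +SKIP
>>> metadata.has_metadata('PKG-INFO')  # doctest: +SKIP
True
>>> metadata.has_metadata('requires.txt')  # anything else is rejected  # doctest: +SKIP
False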
""" def __init__(self, path): self.path = path def _get_metadata_path(self, name): return self.path def has_metadata(self, name): return name == 'PKG-INFO' and os.path.isfile(self.path) def get_metadata(self, name): if name != 'PKG-INFO': raise KeyError("No metadata except PKG-INFO is available") with io.open(self.path, encoding='utf-8', errors="replace") as f: metadata = f.read() self._warn_on_replacement(metadata) return metadata def _warn_on_replacement(self, metadata): # Python 2.7 compat for: replacement_char = '�' replacement_char = b'\xef\xbf\xbd'.decode('utf-8') if replacement_char in metadata: tmpl = "{self.path} could not be properly decoded in UTF-8" msg = tmpl.format(**locals()) warnings.warn(msg) def get_metadata_lines(self, name): return yield_lines(self.get_metadata(name)) class PathMetadata(DefaultProvider): """Metadata provider for egg directories Usage:: # Development eggs: egg_info = "/path/to/PackageName.egg-info" base_dir = os.path.dirname(egg_info) metadata = PathMetadata(base_dir, egg_info) dist_name = os.path.splitext(os.path.basename(egg_info))[0] dist = Distribution(basedir, project_name=dist_name, metadata=metadata) # Unpacked egg directories: egg_path = "/path/to/PackageName-ver-pyver-etc.egg" metadata = PathMetadata(egg_path, os.path.join(egg_path,'EGG-INFO')) dist = Distribution.from_filename(egg_path, metadata=metadata) """ def __init__(self, path, egg_info): self.module_path = path self.egg_info = egg_info class EggMetadata(ZipProvider): """Metadata provider for .egg files""" def __init__(self, importer): """Create a metadata provider from a zipimporter""" self.zip_pre = importer.archive + os.sep self.loader = importer if importer.prefix: self.module_path = os.path.join(importer.archive, importer.prefix) else: self.module_path = importer.archive self._setup_prefix() _declare_state('dict', _distribution_finders={}) def register_finder(importer_type, distribution_finder): """Register `distribution_finder` to find distributions in sys.path items `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item handler), and `distribution_finder` is a callable that, passed a path item and the importer instance, yields ``Distribution`` instances found on that path item. See ``pkg_resources.find_on_path`` for an example.""" _distribution_finders[importer_type] = distribution_finder def find_distributions(path_item, only=False): """Yield distributions accessible via `path_item`""" importer = get_importer(path_item) finder = _find_adapter(_distribution_finders, importer) return finder(importer, path_item, only) def find_eggs_in_zip(importer, path_item, only=False): """ Find eggs in zip files; possibly multiple nested eggs. 
""" if importer.archive.endswith('.whl'): # wheels are not supported with this finder # they don't have PKG-INFO metadata, and won't ever contain eggs return metadata = EggMetadata(importer) if metadata.has_metadata('PKG-INFO'): yield Distribution.from_filename(path_item, metadata=metadata) if only: # don't yield nested distros return for subitem in metadata.resource_listdir(''): if _is_egg_path(subitem): subpath = os.path.join(path_item, subitem) dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath) for dist in dists: yield dist elif subitem.lower().endswith('.dist-info'): subpath = os.path.join(path_item, subitem) submeta = EggMetadata(zipimport.zipimporter(subpath)) submeta.egg_info = subpath yield Distribution.from_location(path_item, subitem, submeta) register_finder(zipimport.zipimporter, find_eggs_in_zip) def find_nothing(importer, path_item, only=False): return () register_finder(object, find_nothing) def _by_version_descending(names): """ Given a list of filenames, return them in descending order by version number. >>> names = 'bar', 'foo', 'Python-2.7.10.egg', 'Python-2.7.2.egg' >>> _by_version_descending(names) ['Python-2.7.10.egg', 'Python-2.7.2.egg', 'foo', 'bar'] >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.egg' >>> _by_version_descending(names) ['Setuptools-1.2.3.egg', 'Setuptools-1.2.3b1.egg'] >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.post1.egg' >>> _by_version_descending(names) ['Setuptools-1.2.3.post1.egg', 'Setuptools-1.2.3b1.egg'] """ def _by_version(name): """ Parse each component of the filename """ name, ext = os.path.splitext(name) parts = itertools.chain(name.split('-'), [ext]) return [packaging.version.parse(part) for part in parts] return sorted(names, key=_by_version, reverse=True) def find_on_path(importer, path_item, only=False): """Yield distributions accessible on a sys.path directory""" path_item = _normalize_cached(path_item) if _is_unpacked_egg(path_item): yield Distribution.from_filename( path_item, metadata=PathMetadata( path_item, os.path.join(path_item, 'EGG-INFO') ) ) return entries = safe_listdir(path_item) # for performance, before sorting by version, # screen entries for only those that will yield # distributions filtered = ( entry for entry in entries if dist_factory(path_item, entry, only) ) # scan for .egg and .egg-info in directory path_item_entries = _by_version_descending(filtered) for entry in path_item_entries: fullpath = os.path.join(path_item, entry) factory = dist_factory(path_item, entry, only) for dist in factory(fullpath): yield dist def dist_factory(path_item, entry, only): """ Return a dist_factory for a path_item and entry """ lower = entry.lower() is_meta = any(map(lower.endswith, ('.egg-info', '.dist-info'))) return ( distributions_from_metadata if is_meta else find_distributions if not only and _is_egg_path(entry) else resolve_egg_link if not only and lower.endswith('.egg-link') else NoDists() ) class NoDists: """ >>> bool(NoDists()) False >>> list(NoDists()('anything')) [] """ def __bool__(self): return False if six.PY2: __nonzero__ = __bool__ def __call__(self, fullpath): return iter(()) def safe_listdir(path): """ Attempt to list contents of path, but suppress some exceptions. 
""" try: return os.listdir(path) except (PermissionError, NotADirectoryError): pass except OSError as e: # Ignore the directory if does not exist, not a directory or # permission denied ignorable = ( e.errno in (errno.ENOTDIR, errno.EACCES, errno.ENOENT) # Python 2 on Windows needs to be handled this way :( or getattr(e, "winerror", None) == 267 ) if not ignorable: raise return () def distributions_from_metadata(path): root = os.path.dirname(path) if os.path.isdir(path): if len(os.listdir(path)) == 0: # empty metadata dir; skip return metadata = PathMetadata(root, path) else: metadata = FileMetadata(path) entry = os.path.basename(path) yield Distribution.from_location( root, entry, metadata, precedence=DEVELOP_DIST, ) def non_empty_lines(path): """ Yield non-empty lines from file at path """ with open(path) as f: for line in f: line = line.strip() if line: yield line def resolve_egg_link(path): """ Given a path to an .egg-link, resolve distributions present in the referenced path. """ referenced_paths = non_empty_lines(path) resolved_paths = ( os.path.join(os.path.dirname(path), ref) for ref in referenced_paths ) dist_groups = map(find_distributions, resolved_paths) return next(dist_groups, ()) register_finder(pkgutil.ImpImporter, find_on_path) if hasattr(importlib_machinery, 'FileFinder'): register_finder(importlib_machinery.FileFinder, find_on_path) _declare_state('dict', _namespace_handlers={}) _declare_state('dict', _namespace_packages={}) def register_namespace_handler(importer_type, namespace_handler): """Register `namespace_handler` to declare namespace packages `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item handler), and `namespace_handler` is a callable like this:: def namespace_handler(importer, path_entry, moduleName, module): # return a path_entry to use for child packages Namespace handlers are only called if the importer object has already agreed that it can handle the relevant path item, and they should only return a subpath if the module __path__ does not already contain an equivalent subpath. For an example namespace handler, see ``pkg_resources.file_ns_handler``. """ _namespace_handlers[importer_type] = namespace_handler def _handle_ns(packageName, path_item): """Ensure that named package includes a subpath of path_item (if needed)""" importer = get_importer(path_item) if importer is None: return None # capture warnings due to #1111 with warnings.catch_warnings(): warnings.simplefilter("ignore") loader = importer.find_module(packageName) if loader is None: return None module = sys.modules.get(packageName) if module is None: module = sys.modules[packageName] = types.ModuleType(packageName) module.__path__ = [] _set_parent_ns(packageName) elif not hasattr(module, '__path__'): raise TypeError("Not a package:", packageName) handler = _find_adapter(_namespace_handlers, importer) subpath = handler(importer, path_item, packageName, module) if subpath is not None: path = module.__path__ path.append(subpath) loader.load_module(packageName) _rebuild_mod_path(path, packageName, module) return subpath def _rebuild_mod_path(orig_path, package_name, module): """ Rebuild module.__path__ ensuring that all entries are ordered corresponding to their sys.path order """ sys_path = [_normalize_cached(p) for p in sys.path] def safe_sys_path_index(entry): """ Workaround for #520 and #513. 
""" try: return sys_path.index(entry) except ValueError: return float('inf') def position_in_sys_path(path): """ Return the ordinal of the path based on its position in sys.path """ path_parts = path.split(os.sep) module_parts = package_name.count('.') + 1 parts = path_parts[:-module_parts] return safe_sys_path_index(_normalize_cached(os.sep.join(parts))) new_path = sorted(orig_path, key=position_in_sys_path) new_path = [_normalize_cached(p) for p in new_path] if isinstance(module.__path__, list): module.__path__[:] = new_path else: module.__path__ = new_path def declare_namespace(packageName): """Declare that package 'packageName' is a namespace package""" _imp.acquire_lock() try: if packageName in _namespace_packages: return path = sys.path parent, _, _ = packageName.rpartition('.') if parent: declare_namespace(parent) if parent not in _namespace_packages: __import__(parent) try: path = sys.modules[parent].__path__ except AttributeError: raise TypeError("Not a package:", parent) # Track what packages are namespaces, so when new path items are added, # they can be updated _namespace_packages.setdefault(parent or None, []).append(packageName) _namespace_packages.setdefault(packageName, []) for path_item in path: # Ensure all the parent's path items are reflected in the child, # if they apply _handle_ns(packageName, path_item) finally: _imp.release_lock() def fixup_namespace_packages(path_item, parent=None): """Ensure that previously-declared namespace packages include path_item""" _imp.acquire_lock() try: for package in _namespace_packages.get(parent, ()): subpath = _handle_ns(package, path_item) if subpath: fixup_namespace_packages(subpath, package) finally: _imp.release_lock() def file_ns_handler(importer, path_item, packageName, module): """Compute an ns-package subpath for a filesystem or zipfile importer""" subpath = os.path.join(path_item, packageName.split('.')[-1]) normalized = _normalize_cached(subpath) for item in module.__path__: if _normalize_cached(item) == normalized: break else: # Only return the path if it's not already there return subpath register_namespace_handler(pkgutil.ImpImporter, file_ns_handler) register_namespace_handler(zipimport.zipimporter, file_ns_handler) if hasattr(importlib_machinery, 'FileFinder'): register_namespace_handler(importlib_machinery.FileFinder, file_ns_handler) def null_ns_handler(importer, path_item, packageName, module): return None register_namespace_handler(object, null_ns_handler) def normalize_path(filename): """Normalize a file/dir name for comparison purposes""" return os.path.normcase(os.path.realpath(os.path.normpath(_cygwin_patch(filename)))) def _cygwin_patch(filename): # pragma: nocover """ Contrary to POSIX 2008, on Cygwin, getcwd (3) contains symlink components. Using os.path.abspath() works around this limitation. A fix in os.getcwd() would probably better, in Cygwin even more so, except that this seems to be by design... """ return os.path.abspath(filename) if sys.platform == 'cygwin' else filename def _normalize_cached(filename, _cache={}): try: return _cache[filename] except KeyError: _cache[filename] = result = normalize_path(filename) return result def _is_egg_path(path): """ Determine if given path appears to be an egg. """ return path.lower().endswith('.egg') def _is_unpacked_egg(path): """ Determine if given path appears to be an unpacked egg. 
""" return ( _is_egg_path(path) and os.path.isfile(os.path.join(path, 'EGG-INFO', 'PKG-INFO')) ) def _set_parent_ns(packageName): parts = packageName.split('.') name = parts.pop() if parts: parent = '.'.join(parts) setattr(sys.modules[parent], name, sys.modules[packageName]) def yield_lines(strs): """Yield non-empty/non-comment lines of a string or sequence""" if isinstance(strs, six.string_types): for s in strs.splitlines(): s = s.strip() # skip blank lines/comments if s and not s.startswith('#'): yield s else: for ss in strs: for s in yield_lines(ss): yield s MODULE = re.compile(r"\w+(\.\w+)*$").match EGG_NAME = re.compile( r""" (?P<name>[^-]+) ( -(?P<ver>[^-]+) ( -py(?P<pyver>[^-]+) ( -(?P<plat>.+) )? )? )? """, re.VERBOSE | re.IGNORECASE, ).match class EntryPoint: """Object representing an advertised importable object""" def __init__(self, name, module_name, attrs=(), extras=(), dist=None): if not MODULE(module_name): raise ValueError("Invalid module name", module_name) self.name = name self.module_name = module_name self.attrs = tuple(attrs) self.extras = tuple(extras) self.dist = dist def __str__(self): s = "%s = %s" % (self.name, self.module_name) if self.attrs: s += ':' + '.'.join(self.attrs) if self.extras: s += ' [%s]' % ','.join(self.extras) return s def __repr__(self): return "EntryPoint.parse(%r)" % str(self) def load(self, require=True, *args, **kwargs): """ Require packages for this EntryPoint, then resolve it. """ if not require or args or kwargs: warnings.warn( "Parameters to load are deprecated. Call .resolve and " ".require separately.", PkgResourcesDeprecationWarning, stacklevel=2, ) if require: self.require(*args, **kwargs) return self.resolve() def resolve(self): """ Resolve the entry point from its module and attrs. """ module = __import__(self.module_name, fromlist=['__name__'], level=0) try: return functools.reduce(getattr, self.attrs, module) except AttributeError as exc: raise ImportError(str(exc)) def require(self, env=None, installer=None): if self.extras and not self.dist: raise UnknownExtra("Can't require() without a distribution", self) # Get the requirements for this entry point with all its extras and # then resolve them. We have to pass `extras` along when resolving so # that the working set knows what extras we want. Otherwise, for # dist-info distributions, the working set will assume that the # requirements for that extra are purely optional and skip over them. 
reqs = self.dist.requires(self.extras) items = working_set.resolve(reqs, env, installer, extras=self.extras) list(map(working_set.add, items)) pattern = re.compile( r'\s*' r'(?P<name>.+?)\s*' r'=\s*' r'(?P<module>[\w.]+)\s*' r'(:\s*(?P<attr>[\w.]+))?\s*' r'(?P<extras>\[.*\])?\s*$' ) @classmethod def parse(cls, src, dist=None): """Parse a single entry point from string `src` Entry point syntax follows the form:: name = some.module:some.attr [extra1, extra2] The entry name and module name are required, but the ``:attrs`` and ``[extras]`` parts are optional """ m = cls.pattern.match(src) if not m: msg = "EntryPoint must be in 'name=module:attrs [extras]' format" raise ValueError(msg, src) res = m.groupdict() extras = cls._parse_extras(res['extras']) attrs = res['attr'].split('.') if res['attr'] else () return cls(res['name'], res['module'], attrs, extras, dist) @classmethod def _parse_extras(cls, extras_spec): if not extras_spec: return () req = Requirement.parse('x' + extras_spec) if req.specs: raise ValueError() return req.extras @classmethod def parse_group(cls, group, lines, dist=None): """Parse an entry point group""" if not MODULE(group): raise ValueError("Invalid group name", group) this = {} for line in yield_lines(lines): ep = cls.parse(line, dist) if ep.name in this: raise ValueError("Duplicate entry point", group, ep.name) this[ep.name] = ep return this @classmethod def parse_map(cls, data, dist=None): """Parse a map of entry point groups""" if isinstance(data, dict): data = data.items() else: data = split_sections(data) maps = {} for group, lines in data: if group is None: if not lines: continue raise ValueError("Entry points must be listed in groups") group = group.strip() if group in maps: raise ValueError("Duplicate group name", group) maps[group] = cls.parse_group(group, lines, dist) return maps def _remove_md5_fragment(location): if not location: return '' parsed = urllib.parse.urlparse(location) if parsed[-1].startswith('md5='): return urllib.parse.urlunparse(parsed[:-1] + ('',)) return location def _version_from_file(lines): """ Given an iterable of lines from a Metadata file, return the value of the Version field, if present, or None otherwise. 
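An illustrative sketch (the lines are hypothetical metadata):

>>> _version_from_file(['Name: foo', 'Version: 1.2.3'])
'1.2.3'
>>> _version_from_file(['Name: foo']) is None
True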
""" def is_version_line(line): return line.lower().startswith('version:') version_lines = filter(is_version_line, lines) line = next(iter(version_lines), '') _, _, value = line.partition(':') return safe_version(value.strip()) or None class Distribution: """Wrap an actual or potential sys.path entry w/metadata""" PKG_INFO = 'PKG-INFO' def __init__( self, location=None, metadata=None, project_name=None, version=None, py_version=PY_MAJOR, platform=None, precedence=EGG_DIST): self.project_name = safe_name(project_name or 'Unknown') if version is not None: self._version = safe_version(version) self.py_version = py_version self.platform = platform self.location = location self.precedence = precedence self._provider = metadata or empty_provider @classmethod def from_location(cls, location, basename, metadata=None, **kw): project_name, version, py_version, platform = [None] * 4 basename, ext = os.path.splitext(basename) if ext.lower() in _distributionImpl: cls = _distributionImpl[ext.lower()] match = EGG_NAME(basename) if match: project_name, version, py_version, platform = match.group( 'name', 'ver', 'pyver', 'plat' ) return cls( location, metadata, project_name=project_name, version=version, py_version=py_version, platform=platform, **kw )._reload_version() def _reload_version(self): return self @property def hashcmp(self): return ( self.parsed_version, self.precedence, self.key, _remove_md5_fragment(self.location), self.py_version or '', self.platform or '', ) def __hash__(self): return hash(self.hashcmp) def __lt__(self, other): return self.hashcmp < other.hashcmp def __le__(self, other): return self.hashcmp <= other.hashcmp def __gt__(self, other): return self.hashcmp > other.hashcmp def __ge__(self, other): return self.hashcmp >= other.hashcmp def __eq__(self, other): if not isinstance(other, self.__class__): # It's not a Distribution, so they are not equal return False return self.hashcmp == other.hashcmp def __ne__(self, other): return not self == other # These properties have to be lazy so that we don't have to load any # metadata until/unless it's actually needed. (i.e., some distributions # may not know their name or version without loading PKG-INFO) @property def key(self): try: return self._key except AttributeError: self._key = key = self.project_name.lower() return key @property def parsed_version(self): if not hasattr(self, "_parsed_version"): self._parsed_version = parse_version(self.version) return self._parsed_version def _warn_legacy_version(self): LV = packaging.version.LegacyVersion is_legacy = isinstance(self._parsed_version, LV) if not is_legacy: return # While an empty version is technically a legacy version and # is not a valid PEP 440 version, it's also unlikely to # actually come from someone and instead it is more likely that # it comes from setuptools attempting to parse a filename and # including it in the list. So for that we'll gate this warning # on if the version is anything at all or not. if not self.version: return tmpl = textwrap.dedent(""" '{project_name} ({version})' is being parsed as a legacy, non PEP 440, version. You may find odd behavior and sort order. In particular it will be sorted as less than 0.0. It is recommended to migrate to PEP 440 compatible versions. 
""").strip().replace('\n', ' ') warnings.warn(tmpl.format(**vars(self)), PEP440Warning) @property def version(self): try: return self._version except AttributeError: version = self._get_version() if version is None: path = self._get_metadata_path_for_display(self.PKG_INFO) msg = ( "Missing 'Version:' header and/or {} file at path: {}" ).format(self.PKG_INFO, path) raise ValueError(msg, self) return version @property def _dep_map(self): """ A map of extra to its list of (direct) requirements for this distribution, including the null extra. """ try: return self.__dep_map except AttributeError: self.__dep_map = self._filter_extras(self._build_dep_map()) return self.__dep_map @staticmethod def _filter_extras(dm): """ Given a mapping of extras to dependencies, strip off environment markers and filter out any dependencies not matching the markers. """ for extra in list(filter(None, dm)): new_extra = extra reqs = dm.pop(extra) new_extra, _, marker = extra.partition(':') fails_marker = marker and ( invalid_marker(marker) or not evaluate_marker(marker) ) if fails_marker: reqs = [] new_extra = safe_extra(new_extra) or None dm.setdefault(new_extra, []).extend(reqs) return dm def _build_dep_map(self): dm = {} for name in 'requires.txt', 'depends.txt': for extra, reqs in split_sections(self._get_metadata(name)): dm.setdefault(extra, []).extend(parse_requirements(reqs)) return dm def requires(self, extras=()): """List of Requirements needed for this distro if `extras` are used""" dm = self._dep_map deps = [] deps.extend(dm.get(None, ())) for ext in extras: try: deps.extend(dm[safe_extra(ext)]) except KeyError: raise UnknownExtra( "%s has no such extra feature %r" % (self, ext) ) return deps def _get_metadata_path_for_display(self, name): """ Return the path to the given metadata file, if available. """ try: # We need to access _get_metadata_path() on the provider object # directly rather than through this class's __getattr__() # since _get_metadata_path() is marked private. path = self._provider._get_metadata_path(name) # Handle exceptions e.g. in case the distribution's metadata # provider doesn't support _get_metadata_path(). 
except Exception: return '[could not detect]' return path def _get_metadata(self, name): if self.has_metadata(name): for line in self.get_metadata_lines(name): yield line def _get_version(self): lines = self._get_metadata(self.PKG_INFO) version = _version_from_file(lines) return version def activate(self, path=None, replace=False): """Ensure distribution is importable on `path` (default=sys.path)""" if path is None: path = sys.path self.insert_on(path, replace=replace) if path is sys.path: fixup_namespace_packages(self.location) for pkg in self._get_metadata('namespace_packages.txt'): if pkg in sys.modules: declare_namespace(pkg) def egg_name(self): """Return what this distribution's standard .egg filename should be""" filename = "%s-%s-py%s" % ( to_filename(self.project_name), to_filename(self.version), self.py_version or PY_MAJOR ) if self.platform: filename += '-' + self.platform return filename def __repr__(self): if self.location: return "%s (%s)" % (self, self.location) else: return str(self) def __str__(self): try: version = getattr(self, 'version', None) except ValueError: version = None version = version or "[unknown version]" return "%s %s" % (self.project_name, version) def __getattr__(self, attr): """Delegate all unrecognized public attributes to .metadata provider""" if attr.startswith('_'): raise AttributeError(attr) return getattr(self._provider, attr) def __dir__(self): return list( set(super(Distribution, self).__dir__()) | set( attr for attr in self._provider.__dir__() if not attr.startswith('_') ) ) if not hasattr(object, '__dir__'): # python 2.7 not supported del __dir__ @classmethod def from_filename(cls, filename, metadata=None, **kw): return cls.from_location( _normalize_cached(filename), os.path.basename(filename), metadata, **kw ) def as_requirement(self): """Return a ``Requirement`` that matches this distribution exactly""" if isinstance(self.parsed_version, packaging.version.Version): spec = "%s==%s" % (self.project_name, self.parsed_version) else: spec = "%s===%s" % (self.project_name, self.parsed_version) return Requirement.parse(spec) def load_entry_point(self, group, name): """Return the `name` entry point of `group` or raise ImportError""" ep = self.get_entry_info(group, name) if ep is None: raise ImportError("Entry point %r not found" % ((group, name),)) return ep.load() def get_entry_map(self, group=None): """Return the entry point map for `group`, or the full entry map""" try: ep_map = self._ep_map except AttributeError: ep_map = self._ep_map = EntryPoint.parse_map( self._get_metadata('entry_points.txt'), self ) if group is not None: return ep_map.get(group, {}) return ep_map def get_entry_info(self, group, name): """Return the EntryPoint object for `group`+`name`, or ``None``""" return self.get_entry_map(group).get(name) def insert_on(self, path, loc=None, replace=False): """Ensure self.location is on path If replace=False (default): - If location is already in path anywhere, do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent. - Else: add to the end of path. If replace=True: - If location is already on path anywhere (not eggs) or higher priority than its parent (eggs) do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent, removing any lower-priority entries. - Else: add it to the front of path. 
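An illustrative sketch of the replace=False case (paths are hypothetical; normalization may rewrite them on some platforms):

>>> dist = Distribution('/srv/eggs/Foo-1.0-py2.7.egg')  # doctest: +SKIP
>>> path = ['/srv/eggs']
>>> dist.insert_on(path)  # doctest: +SKIP
>>> path  # the egg is inserted ahead of its parent directory  # doctest: +SKIP
['/srv/eggs/Foo-1.0-py2.7.egg', '/srv/eggs']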
""" loc = loc or self.location if not loc: return nloc = _normalize_cached(loc) bdir = os.path.dirname(nloc) npath = [(p and _normalize_cached(p) or p) for p in path] for p, item in enumerate(npath): if item == nloc: if replace: break else: # don't modify path (even removing duplicates) if # found and not replace return elif item == bdir and self.precedence == EGG_DIST: # if it's an .egg, give it precedence over its directory # UNLESS it's already been added to sys.path and replace=False if (not replace) and nloc in npath[p:]: return if path is sys.path: self.check_version_conflict() path.insert(p, loc) npath.insert(p, nloc) break else: if path is sys.path: self.check_version_conflict() if replace: path.insert(0, loc) else: path.append(loc) return # p is the spot where we found or inserted loc; now remove duplicates while True: try: np = npath.index(nloc, p + 1) except ValueError: break else: del npath[np], path[np] # ha! p = np return def check_version_conflict(self): if self.key == 'setuptools': # ignore the inevitable setuptools self-conflicts :( return nsp = dict.fromkeys(self._get_metadata('namespace_packages.txt')) loc = normalize_path(self.location) for modname in self._get_metadata('top_level.txt'): if (modname not in sys.modules or modname in nsp or modname in _namespace_packages): continue if modname in ('pkg_resources', 'setuptools', 'site'): continue fn = getattr(sys.modules[modname], '__file__', None) if fn and (normalize_path(fn).startswith(loc) or fn.startswith(self.location)): continue issue_warning( "Module %s was already imported from %s, but %s is being added" " to sys.path" % (modname, fn, self.location), ) def has_version(self): try: self.version except ValueError: issue_warning("Unbuilt egg for " + repr(self)) return False return True def clone(self, **kw): """Copy this distribution, substituting in any changed keyword args""" names = 'project_name version py_version platform location precedence' for attr in names.split(): kw.setdefault(attr, getattr(self, attr, None)) kw.setdefault('metadata', self._provider) return self.__class__(**kw) @property def extras(self): return [dep for dep in self._dep_map if dep] class EggInfoDistribution(Distribution): def _reload_version(self): """ Packages installed by distutils (e.g. numpy or scipy), which uses an old safe_version, and so their version numbers can get mangled when converted to filenames (e.g., 1.11.0.dev0+2329eae to 1.11.0.dev0_2329eae). These distributions will not be parsed properly downstream by Distribution and safe_version, so take an extra step and try to get the version number from the metadata file itself instead of the filename. """ md_version = self._get_version() if md_version: self._version = md_version return self class DistInfoDistribution(Distribution): """ Wrap an actual or potential sys.path entry w/metadata, .dist-info style. 
""" PKG_INFO = 'METADATA' EQEQ = re.compile(r"([\(,])\s*(\d.*?)\s*([,\)])") @property def _parsed_pkg_info(self): """Parse and cache metadata""" try: return self._pkg_info except AttributeError: metadata = self.get_metadata(self.PKG_INFO) self._pkg_info = email.parser.Parser().parsestr(metadata) return self._pkg_info @property def _dep_map(self): try: return self.__dep_map except AttributeError: self.__dep_map = self._compute_dependencies() return self.__dep_map def _compute_dependencies(self): """Recompute this distribution's dependencies.""" dm = self.__dep_map = {None: []} reqs = [] # Including any condition expressions for req in self._parsed_pkg_info.get_all('Requires-Dist') or []: reqs.extend(parse_requirements(req)) def reqs_for_extra(extra): for req in reqs: if not req.marker or req.marker.evaluate({'extra': extra}): yield req common = frozenset(reqs_for_extra(None)) dm[None].extend(common) for extra in self._parsed_pkg_info.get_all('Provides-Extra') or []: s_extra = safe_extra(extra.strip()) dm[s_extra] = list(frozenset(reqs_for_extra(extra)) - common) return dm _distributionImpl = { '.egg': Distribution, '.egg-info': EggInfoDistribution, '.dist-info': DistInfoDistribution, } def issue_warning(*args, **kw): level = 1 g = globals() try: # find the first stack frame that is *not* code in # the pkg_resources module, to use for the warning while sys._getframe(level).f_globals is g: level += 1 except ValueError: pass warnings.warn(stacklevel=level + 1, *args, **kw) class RequirementParseError(ValueError): def __str__(self): return ' '.join(self.args) def parse_requirements(strs): """Yield ``Requirement`` objects for each specification in `strs` `strs` must be a string, or a (possibly-nested) iterable thereof. """ # create a steppable iterator, so we can handle \-continuations lines = iter(yield_lines(strs)) for line in lines: # Drop comments -- a hash without a space may be in a URL. if ' #' in line: line = line[:line.find(' #')] # If there is a line continuation, drop it, and append the next line. if line.endswith('\\'): line = line[:-2].strip() try: line += next(lines) except StopIteration: return yield Requirement(line) class Requirement(packaging.requirements.Requirement): def __init__(self, requirement_string): """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!""" try: super(Requirement, self).__init__(requirement_string) except packaging.requirements.InvalidRequirement as e: raise RequirementParseError(str(e)) self.unsafe_name = self.name project_name = safe_name(self.name) self.project_name, self.key = project_name, project_name.lower() self.specs = [ (spec.operator, spec.version) for spec in self.specifier] self.extras = tuple(map(safe_extra, self.extras)) self.hashCmp = ( self.key, self.specifier, frozenset(self.extras), str(self.marker) if self.marker else None, ) self.__hash = hash(self.hashCmp) def __eq__(self, other): return ( isinstance(other, Requirement) and self.hashCmp == other.hashCmp ) def __ne__(self, other): return not self == other def __contains__(self, item): if isinstance(item, Distribution): if item.key != self.key: return False item = item.version # Allow prereleases always in order to match the previous behavior of # this method. In the future this should be smarter and follow PEP 440 # more accurately. 
return self.specifier.contains(item, prereleases=True) def __hash__(self): return self.__hash def __repr__(self): return "Requirement.parse(%r)" % str(self) @staticmethod def parse(s): req, = parse_requirements(s) return req def _always_object(classes): """ Ensure object appears in the mro even for old-style classes. """ if object not in classes: return classes + (object,) return classes def _find_adapter(registry, ob): """Return an adapter factory for `ob` from `registry`""" types = _always_object(inspect.getmro(getattr(ob, '__class__', type(ob)))) for t in types: if t in registry: return registry[t] def ensure_directory(path): """Ensure that the parent directory of `path` exists""" dirname = os.path.dirname(path) py31compat.makedirs(dirname, exist_ok=True) def _bypass_ensure_directory(path): """Sandbox-bypassing version of ensure_directory()""" if not WRITE_SUPPORT: raise IOError('"os.mkdir" not supported on this platform.') dirname, filename = split(path) if dirname and filename and not isdir(dirname): _bypass_ensure_directory(dirname) try: mkdir(dirname, 0o755) except FileExistsError: pass def split_sections(s): """Split a string or iterable thereof into (section, content) pairs Each ``section`` is a stripped version of the section header ("[section]") and each ``content`` is a list of stripped lines excluding blank lines and comment-only lines. If there are any such lines before the first section header, they're returned in a first ``section`` of ``None``. """ section = None content = [] for line in yield_lines(s): if line.startswith("["): if line.endswith("]"): if section or content: yield section, content section = line[1:-1].strip() content = [] else: raise ValueError("Invalid section heading", line) else: content.append(line) # wrap up last segment yield section, content def _mkstemp(*args, **kw): old_open = os.open try: # temporarily bypass sandboxing os.open = os_open return tempfile.mkstemp(*args, **kw) finally: # and then put it back os.open = old_open # Silence the PEP440Warning by default, so that end users don't get hit by it # randomly just because they use pkg_resources. We want to append the rule # because we want earlier uses of filterwarnings to take precedence over this # one. warnings.filterwarnings("ignore", category=PEP440Warning, append=True) # from jaraco.functools 1.3 def _call_aside(f, *args, **kwargs): f(*args, **kwargs) return f @_call_aside def _initialize(g=globals()): "Set up global resource manager (deliberately not state-saved)" manager = ResourceManager() g['_manager'] = manager g.update( (name, getattr(manager, name)) for name in dir(manager) if not name.startswith('_') ) @_call_aside def _initialize_master_working_set(): """ Prepare the master working set and make the ``require()`` API available. This function has explicit effects on the global state of pkg_resources. It is intended to be invoked once at the initialization of this module. Invocation by other packages is unsupported and done at their own risk. """ working_set = WorkingSet._build_master() _declare_state('object', working_set=working_set) require = working_set.require iter_entry_points = working_set.iter_entry_points add_activation_listener = working_set.subscribe run_script = working_set.run_script # backward compatibility run_main = run_script # Activate all distributions already on sys.path with replace=False and # ensure that all distributions added to the working set in the future # (e.g. by calling ``require()``) will get activated as well, # with higher priority (replace=True). 
    tuple(
        dist.activate(replace=False)
        for dist in working_set
    )
    add_activation_listener(
        lambda dist: dist.activate(replace=True),
        existing=False,
    )
    working_set.entries = []
    # match order
    list(map(working_set.add_entry, sys.path))
    globals().update(locals())


class PkgResourcesDeprecationWarning(Warning):
    """
    Base class for warning about deprecations in ``pkg_resources``

    This class is not derived from ``DeprecationWarning``, and as such is
    visible by default.
    """

cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/__init__.py (empty file)

cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/appdirs.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2005-2010 ActiveState Software Inc.
# Copyright (c) 2013 Eddy Petrișor
"""Utilities for determining application-specific dirs.

See <http://github.com/ActiveState/appdirs> for details and usage.
"""
# Dev Notes:
# - MSDN on where to store app data files:
#   http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120
# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html

__version_info__ = (1, 4, 3)
__version__ = '.'.join(map(str, __version_info__))


import sys
import os

PY3 = sys.version_info[0] == 3

if PY3:
    unicode = str

if sys.platform.startswith('java'):
    import platform
    os_name = platform.java_ver()[3][0]
    if os_name.startswith('Windows'):  # "Windows XP", "Windows 7", etc.
        system = 'win32'
    elif os_name.startswith('Mac'):  # "Mac OS X", etc.
        system = 'darwin'
    else:  # "Linux", "SunOS", "FreeBSD", etc.
        # Setting this to "linux2" is not ideal, but only Windows or Mac
        # are actually checked for and the rest of the module expects
        # *sys.platform* style strings.
        system = 'linux2'
else:
    system = sys.platform


def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
    r"""Return full path to the user-specific data dir for this application.

        "appname" is the name of application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname.
You may pass False to disable it. "version" is an optional version path element to append to the path. You might want to use this if you want multiple versions of your app to be able to run independently. If used, this would typically be "<major>.<minor>". Only applied when appname is present. "roaming" (boolean, default False) can be set True to use the Windows roaming appdata directory. That means that for users on a Windows network setup for roaming profiles, this user data will be sync'd on login. See <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx> for a discussion of issues. Typical user data directories are: Mac OS X: ~/Library/Application Support/<AppName> Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName> Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName> Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName> Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName> For Unix, we follow the XDG spec and support $XDG_DATA_HOME. That means, by default "~/.local/share/<AppName>". """ if system == "win32": if appauthor is None: appauthor = appname const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" path = os.path.normpath(_get_win_folder(const)) if appname: if appauthor is not False: path = os.path.join(path, appauthor, appname) else: path = os.path.join(path, appname) elif system == 'darwin': path = os.path.expanduser('~/Library/Application Support/') if appname: path = os.path.join(path, appname) else: path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share")) if appname: path = os.path.join(path, appname) if appname and version: path = os.path.join(path, version) return path def site_data_dir(appname=None, appauthor=None, version=None, multipath=False): r"""Return full path to the user-shared data dir for this application. "appname" is the name of application. If None, just the system directory is returned. "appauthor" (only used on Windows) is the name of the appauthor or distributing body for this application. Typically it is the owning company name. This falls back to appname. You may pass False to disable it. "version" is an optional version path element to append to the path. You might want to use this if you want multiple versions of your app to be able to run independently. If used, this would typically be "<major>.<minor>". Only applied when appname is present. "multipath" is an optional parameter only applicable to *nix which indicates that the entire list of data dirs should be returned. By default, the first item from XDG_DATA_DIRS is returned, or '/usr/local/share/<AppName>', if XDG_DATA_DIRS is not set Typical site data directories are: Mac OS X: /Library/Application Support/<AppName> Unix: /usr/local/share/<AppName> or /usr/share/<AppName> Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName> Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7. For Unix, this is using the $XDG_DATA_DIRS[0] default. WARNING: Do not use this on Windows. See the Vista-Fail note above for why. 
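An illustrative sketch (result shown for a stock Linux environment with XDG_DATA_DIRS unset):

>>> site_data_dir('MyApp')  # doctest: +SKIP
'/usr/local/share/MyApp'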
""" if system == "win32": if appauthor is None: appauthor = appname path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA")) if appname: if appauthor is not False: path = os.path.join(path, appauthor, appname) else: path = os.path.join(path, appname) elif system == 'darwin': path = os.path.expanduser('/Library/Application Support') if appname: path = os.path.join(path, appname) else: # XDG default for $XDG_DATA_DIRS # only first, if multipath is False path = os.getenv('XDG_DATA_DIRS', os.pathsep.join(['/usr/local/share', '/usr/share'])) pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] if appname: if version: appname = os.path.join(appname, version) pathlist = [os.sep.join([x, appname]) for x in pathlist] if multipath: path = os.pathsep.join(pathlist) else: path = pathlist[0] return path if appname and version: path = os.path.join(path, version) return path def user_config_dir(appname=None, appauthor=None, version=None, roaming=False): r"""Return full path to the user-specific config dir for this application. "appname" is the name of application. If None, just the system directory is returned. "appauthor" (only used on Windows) is the name of the appauthor or distributing body for this application. Typically it is the owning company name. This falls back to appname. You may pass False to disable it. "version" is an optional version path element to append to the path. You might want to use this if you want multiple versions of your app to be able to run independently. If used, this would typically be "<major>.<minor>". Only applied when appname is present. "roaming" (boolean, default False) can be set True to use the Windows roaming appdata directory. That means that for users on a Windows network setup for roaming profiles, this user data will be sync'd on login. See <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx> for a discussion of issues. Typical user config directories are: Mac OS X: same as user_data_dir Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined Win *: same as user_data_dir For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME. That means, by default "~/.config/<AppName>". """ if system in ["win32", "darwin"]: path = user_data_dir(appname, appauthor, None, roaming) else: path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config")) if appname: path = os.path.join(path, appname) if appname and version: path = os.path.join(path, version) return path def site_config_dir(appname=None, appauthor=None, version=None, multipath=False): r"""Return full path to the user-shared data dir for this application. "appname" is the name of application. If None, just the system directory is returned. "appauthor" (only used on Windows) is the name of the appauthor or distributing body for this application. Typically it is the owning company name. This falls back to appname. You may pass False to disable it. "version" is an optional version path element to append to the path. You might want to use this if you want multiple versions of your app to be able to run independently. If used, this would typically be "<major>.<minor>". Only applied when appname is present. "multipath" is an optional parameter only applicable to *nix which indicates that the entire list of config dirs should be returned. 
def site_config_dir(appname=None, appauthor=None, version=None, multipath=False):
    r"""Return full path to the user-shared config dir for this application.

        "appname" is the name of application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You may
            pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "multipath" is an optional parameter only applicable to *nix
            which indicates that the entire list of config dirs should be
            returned. By default, the first item from XDG_CONFIG_DIRS is
            returned, or '/etc/xdg/<AppName>', if XDG_CONFIG_DIRS is not set

    Typical site config directories are:
        Mac OS X:   same as site_data_dir
        Unix:       /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in
                    $XDG_CONFIG_DIRS
        Win *:      same as site_data_dir
        Vista:      (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)

    For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False

    WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
    """
    if system in ["win32", "darwin"]:
        path = site_data_dir(appname, appauthor)
        if appname and version:
            path = os.path.join(path, version)
    else:
        # XDG default for $XDG_CONFIG_DIRS
        # only first, if multipath is False
        path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
        pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
        if appname:
            if version:
                appname = os.path.join(appname, version)
            pathlist = [os.sep.join([x, appname]) for x in pathlist]
        if multipath:
            path = os.pathsep.join(pathlist)
        else:
            path = pathlist[0]
    return path


def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True):
    r"""Return full path to the user-specific cache dir for this application.

        "appname" is the name of application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You may
            pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "opinion" (boolean) can be False to disable the appending of
            "Cache" to the base app data dir for Windows. See
            discussion below.

    Typical user cache directories are:
        Mac OS X:   ~/Library/Caches/<AppName>
        Unix:       ~/.cache/<AppName> (XDG default)
        Win XP:     C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache
        Vista:      C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache

    On Windows the only suggestion in the MSDN docs is that local settings go
    in the `CSIDL_LOCAL_APPDATA` directory. This is identical to the
    non-roaming app data dir (the default returned by `user_data_dir` above).
    Apps typically put cache data somewhere *under* the given dir here. Some
    examples:
        ...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
        ...\Acme\SuperApp\Cache\1.0

    OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
    This can be disabled with the `opinion=False` option.
    """
    if system == "win32":
        if appauthor is None:
            appauthor = appname
        path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
        if appname:
            if appauthor is not False:
                path = os.path.join(path, appauthor, appname)
            else:
                path = os.path.join(path, appname)
            if opinion:
                path = os.path.join(path, "Cache")
    elif system == 'darwin':
        path = os.path.expanduser('~/Library/Caches')
        if appname:
            path = os.path.join(path, appname)
    else:
        path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path
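# Editor's example (not in the upstream module): the "opinion" flag only
# affects Windows, where it appends "Cache"; user "alice" is hypothetical:
#
#   >>> user_cache_dir("SuperApp", "Acme")                  # Windows
#   'C:\\Users\\alice\\AppData\\Local\\Acme\\SuperApp\\Cache'
#   >>> user_cache_dir("SuperApp", "Acme", opinion=False)
#   'C:\\Users\\alice\\AppData\\Local\\Acme\\SuperApp'
#   >>> user_cache_dir("SuperApp")                          # Linux
#   '/home/alice/.cache/SuperApp'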
def user_state_dir(appname=None, appauthor=None, version=None, roaming=False):
    r"""Return full path to the user-specific state dir for this application.

        "appname" is the name of application.
            If None, just the system directory is returned.
        "appauthor" (only used on Windows) is the name of the
            appauthor or distributing body for this application. Typically
            it is the owning company name. This falls back to appname. You may
            pass False to disable it.
        "version" is an optional version path element to append to the
            path. You might want to use this if you want multiple versions
            of your app to be able to run independently. If used, this
            would typically be "<major>.<minor>".
            Only applied when appname is present.
        "roaming" (boolean, default False) can be set True to use the Windows
            roaming appdata directory. That means that for users on a Windows
            network setup for roaming profiles, this user data will be
            sync'd on login. See
            <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
            for a discussion of issues.

    Typical user state directories are:
        Mac OS X:   same as user_data_dir
        Unix:       ~/.local/state/<AppName>   # or in $XDG_STATE_HOME, if defined
        Win *:      same as user_data_dir

    For Unix, we follow this Debian proposal
    <https://wiki.debian.org/XDGBaseDirectorySpecification#state> to extend
    the XDG spec and support $XDG_STATE_HOME. That means, by default
    "~/.local/state/<AppName>".
    """
    if system in ["win32", "darwin"]:
        path = user_data_dir(appname, appauthor, None, roaming)
    else:
        path = os.getenv('XDG_STATE_HOME', os.path.expanduser("~/.local/state"))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path
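# Editor's example (not in the upstream module): state is kept separate from
# data only on Unix; elsewhere it falls back to user_data_dir. Hypothetical:
#
#   >>> user_state_dir("SuperApp")       # Linux, XDG_STATE_HOME unset
#   '/home/alice/.local/state/SuperApp'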
""" if system == "darwin": path = os.path.join( os.path.expanduser('~/Library/Logs'), appname) elif system == "win32": path = user_data_dir(appname, appauthor, version) version = False if opinion: path = os.path.join(path, "Logs") else: path = user_cache_dir(appname, appauthor, version) version = False if opinion: path = os.path.join(path, "log") if appname and version: path = os.path.join(path, version) return path class AppDirs(object): """Convenience wrapper for getting application dirs.""" def __init__(self, appname=None, appauthor=None, version=None, roaming=False, multipath=False): self.appname = appname self.appauthor = appauthor self.version = version self.roaming = roaming self.multipath = multipath @property def user_data_dir(self): return user_data_dir(self.appname, self.appauthor, version=self.version, roaming=self.roaming) @property def site_data_dir(self): return site_data_dir(self.appname, self.appauthor, version=self.version, multipath=self.multipath) @property def user_config_dir(self): return user_config_dir(self.appname, self.appauthor, version=self.version, roaming=self.roaming) @property def site_config_dir(self): return site_config_dir(self.appname, self.appauthor, version=self.version, multipath=self.multipath) @property def user_cache_dir(self): return user_cache_dir(self.appname, self.appauthor, version=self.version) @property def user_state_dir(self): return user_state_dir(self.appname, self.appauthor, version=self.version) @property def user_log_dir(self): return user_log_dir(self.appname, self.appauthor, version=self.version) #---- internal support stuff def _get_win_folder_from_registry(csidl_name): """This is a fallback technique at best. I'm not sure if using the registry for this guarantees us the correct answer for all CSIDL_* names. """ if PY3: import winreg as _winreg else: import _winreg shell_folder_name = { "CSIDL_APPDATA": "AppData", "CSIDL_COMMON_APPDATA": "Common AppData", "CSIDL_LOCAL_APPDATA": "Local AppData", }[csidl_name] key = _winreg.OpenKey( _winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" ) dir, type = _winreg.QueryValueEx(key, shell_folder_name) return dir def _get_win_folder_with_pywin32(csidl_name): from win32com.shell import shellcon, shell dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0) # Try to make this a unicode path because SHGetFolderPath does # not return unicode strings when there is unicode data in the # path. try: dir = unicode(dir) # Downgrade to short path name if have highbit chars. See # <http://bugs.activestate.com/show_bug.cgi?id=85099>. has_high_char = False for c in dir: if ord(c) > 255: has_high_char = True break if has_high_char: try: import win32api dir = win32api.GetShortPathName(dir) except ImportError: pass except UnicodeError: pass return dir def _get_win_folder_with_ctypes(csidl_name): import ctypes csidl_const = { "CSIDL_APPDATA": 26, "CSIDL_COMMON_APPDATA": 35, "CSIDL_LOCAL_APPDATA": 28, }[csidl_name] buf = ctypes.create_unicode_buffer(1024) ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) # Downgrade to short path name if have highbit chars. See # <http://bugs.activestate.com/show_bug.cgi?id=85099>. 
#---- internal support stuff

def _get_win_folder_from_registry(csidl_name):
    """This is a fallback technique at best. I'm not sure if using the
    registry for this guarantees us the correct answer for all CSIDL_*
    names.
    """
    if PY3:
        import winreg as _winreg
    else:
        import _winreg

    shell_folder_name = {
        "CSIDL_APPDATA": "AppData",
        "CSIDL_COMMON_APPDATA": "Common AppData",
        "CSIDL_LOCAL_APPDATA": "Local AppData",
    }[csidl_name]

    key = _winreg.OpenKey(
        _winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
    )
    dir, type = _winreg.QueryValueEx(key, shell_folder_name)
    return dir


def _get_win_folder_with_pywin32(csidl_name):
    from win32com.shell import shellcon, shell
    dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
    # Try to make this a unicode path because SHGetFolderPath does
    # not return unicode strings when there is unicode data in the
    # path.
    try:
        dir = unicode(dir)

        # Downgrade to short path name if have highbit chars. See
        # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
        has_high_char = False
        for c in dir:
            if ord(c) > 255:
                has_high_char = True
                break
        if has_high_char:
            try:
                import win32api
                dir = win32api.GetShortPathName(dir)
            except ImportError:
                pass
    except UnicodeError:
        pass
    return dir


def _get_win_folder_with_ctypes(csidl_name):
    import ctypes

    csidl_const = {
        "CSIDL_APPDATA": 26,
        "CSIDL_COMMON_APPDATA": 35,
        "CSIDL_LOCAL_APPDATA": 28,
    }[csidl_name]

    buf = ctypes.create_unicode_buffer(1024)
    ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)

    # Downgrade to short path name if have highbit chars. See
    # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
    has_high_char = False
    for c in buf:
        if ord(c) > 255:
            has_high_char = True
            break
    if has_high_char:
        buf2 = ctypes.create_unicode_buffer(1024)
        if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
            buf = buf2

    return buf.value


def _get_win_folder_with_jna(csidl_name):
    import array
    from com.sun import jna
    from com.sun.jna.platform import win32

    buf_size = win32.WinDef.MAX_PATH * 2
    buf = array.zeros('c', buf_size)
    shell = win32.Shell32.INSTANCE
    shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None,
                          win32.ShlObj.SHGFP_TYPE_CURRENT, buf)
    dir = jna.Native.toString(buf.tostring()).rstrip("\0")

    # Downgrade to short path name if have highbit chars. See
    # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
    has_high_char = False
    for c in dir:
        if ord(c) > 255:
            has_high_char = True
            break
    if has_high_char:
        buf = array.zeros('c', buf_size)
        kernel = win32.Kernel32.INSTANCE
        if kernel.GetShortPathName(dir, buf, buf_size):
            dir = jna.Native.toString(buf.tostring()).rstrip("\0")

    return dir


if system == "win32":
    try:
        import win32com.shell
        _get_win_folder = _get_win_folder_with_pywin32
    except ImportError:
        try:
            from ctypes import windll
            _get_win_folder = _get_win_folder_with_ctypes
        except ImportError:
            try:
                import com.sun.jna
                _get_win_folder = _get_win_folder_with_jna
            except ImportError:
                _get_win_folder = _get_win_folder_from_registry


#---- self test code

if __name__ == "__main__":
    appname = "MyApp"
    appauthor = "MyCompany"

    props = ("user_data_dir",
             "user_config_dir",
             "user_cache_dir",
             "user_state_dir",
             "user_log_dir",
             "site_data_dir",
             "site_config_dir")

    print("-- app dirs %s --" % __version__)

    print("-- app dirs (with optional 'version')")
    dirs = AppDirs(appname, appauthor, version="1.0")
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))

    print("\n-- app dirs (without optional 'version')")
    dirs = AppDirs(appname, appauthor)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))

    print("\n-- app dirs (without optional 'appauthor')")
    dirs = AppDirs(appname)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))

    print("\n-- app dirs (with disabled 'appauthor')")
    dirs = AppDirs(appname, appauthor=False)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))
# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/__about__.py ----
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

__all__ = [
    "__title__", "__summary__", "__uri__", "__version__", "__author__",
    "__email__", "__license__", "__copyright__",
]

__title__ = "packaging"
__summary__ = "Core utilities for Python packages"
__uri__ = "https://github.com/pypa/packaging"

__version__ = "16.8"

__author__ = "Donald Stufft and individual contributors"
__email__ = "donald@stufft.io"

__license__ = "BSD or Apache License, Version 2.0"
__copyright__ = "Copyright 2014-2016 %s" % __author__

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/__init__.py ----
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

from .__about__ import (
    __author__, __copyright__, __email__, __license__, __summary__, __title__,
    __uri__, __version__
)

__all__ = [
    "__title__", "__summary__", "__uri__", "__version__", "__author__",
    "__email__", "__license__", "__copyright__",
]

# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/_compat.py ----
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import absolute_import, division, print_function

import sys


PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3

# flake8: noqa

if PY3:
    string_types = str,
else:
    string_types = basestring,


def with_metaclass(meta, *bases):
    """
    Create a base class with a metaclass.
    """
    # This requires a bit of explanation: the basic idea is to make a dummy
    # metaclass for one level of class instantiation that replaces itself with
    # the actual metaclass.
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})
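# Editor's example (not part of the vendored module): with_metaclass() lets
# the same class statement carry a metaclass on both Python 2 and 3. "Meta"
# and "Base" are hypothetical names:
#
#   >>> class Meta(type):
#   ...     pass
#   >>> class Base(with_metaclass(Meta, object)):
#   ...     pass
#   >>> type(Base) is Meta
#   True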
# ---- file: cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/_structures.py ----
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import absolute_import, division, print_function


class Infinity(object):

    def __repr__(self):
        return "Infinity"

    def __hash__(self):
        return hash(repr(self))

    def __lt__(self, other):
        return False

    def __le__(self, other):
        return False

    def __eq__(self, other):
        return isinstance(other, self.__class__)

    def __ne__(self, other):
        return not isinstance(other, self.__class__)

    def __gt__(self, other):
        return True

    def __ge__(self, other):
        return True

    def __neg__(self):
        return NegativeInfinity

Infinity = Infinity()


class NegativeInfinity(object):

    def __repr__(self):
        return "-Infinity"

    def __hash__(self):
        return hash(repr(self))

    def __lt__(self, other):
        return True

    def __le__(self, other):
        return True

    def __eq__(self, other):
        return isinstance(other, self.__class__)

    def __ne__(self, other):
        return not isinstance(other, self.__class__)

    def __gt__(self, other):
        return False

    def __ge__(self, other):
        return False

    def __neg__(self):
        return Infinity

NegativeInfinity = NegativeInfinity()
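# Editor's example (not part of the vendored module): these singletons
# compare as the extremes of any ordering, which is how packaging's version
# module uses them when building sort keys:
#
#   >>> sorted([3, Infinity, NegativeInfinity, 7])
#   [-Infinity, 3, 7, Infinity]
#   >>> NegativeInfinity < 0 < Infinity
#   True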
""" class UndefinedComparison(ValueError): """ An invalid operation was attempted on a value that doesn't support it. """ class UndefinedEnvironmentName(ValueError): """ A name was attempted to be used that does not exist inside of the environment. """ class Node(object): def __init__(self, value): self.value = value def __str__(self): return str(self.value) def __repr__(self): return "<{0}({1!r})>".format(self.__class__.__name__, str(self)) def serialize(self): raise NotImplementedError class Variable(Node): def serialize(self): return str(self) class Value(Node): def serialize(self): return '"{0}"'.format(self) class Op(Node): def serialize(self): return str(self) VARIABLE = ( L("implementation_version") | L("platform_python_implementation") | L("implementation_name") | L("python_full_version") | L("platform_release") | L("platform_version") | L("platform_machine") | L("platform_system") | L("python_version") | L("sys_platform") | L("os_name") | L("os.name") | # PEP-345 L("sys.platform") | # PEP-345 L("platform.version") | # PEP-345 L("platform.machine") | # PEP-345 L("platform.python_implementation") | # PEP-345 L("python_implementation") | # undocumented setuptools legacy L("extra") ) ALIASES = { 'os.name': 'os_name', 'sys.platform': 'sys_platform', 'platform.version': 'platform_version', 'platform.machine': 'platform_machine', 'platform.python_implementation': 'platform_python_implementation', 'python_implementation': 'platform_python_implementation' } VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0]))) VERSION_CMP = ( L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<") ) MARKER_OP = VERSION_CMP | L("not in") | L("in") MARKER_OP.setParseAction(lambda s, l, t: Op(t[0])) MARKER_VALUE = QuotedString("'") | QuotedString('"') MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0])) BOOLOP = L("and") | L("or") MARKER_VAR = VARIABLE | MARKER_VALUE MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR) MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0])) LPAREN = L("(").suppress() RPAREN = L(")").suppress() MARKER_EXPR = Forward() MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN) MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR) MARKER = stringStart + MARKER_EXPR + stringEnd def _coerce_parse_result(results): if isinstance(results, ParseResults): return [_coerce_parse_result(i) for i in results] else: return results def _format_marker(marker, first=True): assert isinstance(marker, (list, tuple, string_types)) # Sometimes we have a structure like [[...]] which is a single item list # where the single item is itself it's own list. In that case we want skip # the rest of this function so that we don't get extraneous () on the # outside. 
def _format_marker(marker, first=True):
    assert isinstance(marker, (list, tuple, string_types))

    # Sometimes we have a structure like [[...]] which is a single item list
    # where the single item is itself its own list. In that case we want to
    # skip the rest of this function so that we don't get extraneous () on
    # the outside.
    if (isinstance(marker, list) and len(marker) == 1 and
            isinstance(marker[0], (list, tuple))):
        return _format_marker(marker[0])

    if isinstance(marker, list):
        inner = (_format_marker(m, first=False) for m in marker)
        if first:
            return " ".join(inner)
        else:
            return "(" + " ".join(inner) + ")"
    elif isinstance(marker, tuple):
        return " ".join([m.serialize() for m in marker])
    else:
        return marker


_operators = {
    "in": lambda lhs, rhs: lhs in rhs,
    "not in": lambda lhs, rhs: lhs not in rhs,
    "<": operator.lt,
    "<=": operator.le,
    "==": operator.eq,
    "!=": operator.ne,
    ">=": operator.ge,
    ">": operator.gt,
}


def _eval_op(lhs, op, rhs):
    try:
        spec = Specifier("".join([op.serialize(), rhs]))
    except InvalidSpecifier:
        pass
    else:
        return spec.contains(lhs)

    oper = _operators.get(op.serialize())
    if oper is None:
        raise UndefinedComparison(
            "Undefined {0!r} on {1!r} and {2!r}.".format(op, lhs, rhs)
        )

    return oper(lhs, rhs)


_undefined = object()


def _get_env(environment, name):
    value = environment.get(name, _undefined)

    if value is _undefined:
        raise UndefinedEnvironmentName(
            "{0!r} does not exist in evaluation environment.".format(name)
        )

    return value


def _evaluate_markers(markers, environment):
    groups = [[]]

    for marker in markers:
        assert isinstance(marker, (list, tuple, string_types))

        if isinstance(marker, list):
            groups[-1].append(_evaluate_markers(marker, environment))
        elif isinstance(marker, tuple):
            lhs, op, rhs = marker

            if isinstance(lhs, Variable):
                lhs_value = _get_env(environment, lhs.value)
                rhs_value = rhs.value
            else:
                lhs_value = lhs.value
                rhs_value = _get_env(environment, rhs.value)

            groups[-1].append(_eval_op(lhs_value, op, rhs_value))
        else:
            assert marker in ["and", "or"]
            if marker == "or":
                groups.append([])

    return any(all(item) for item in groups)


def format_full_version(info):
    version = '{0.major}.{0.minor}.{0.micro}'.format(info)
    kind = info.releaselevel
    if kind != 'final':
        version += kind[0] + str(info.serial)
    return version


def default_environment():
    if hasattr(sys, 'implementation'):
        iver = format_full_version(sys.implementation.version)
        implementation_name = sys.implementation.name
    else:
        iver = '0'
        implementation_name = ''

    return {
        "implementation_name": implementation_name,
        "implementation_version": iver,
        "os_name": os.name,
        "platform_machine": platform.machine(),
        "platform_release": platform.release(),
        "platform_system": platform.system(),
        "platform_version": platform.version(),
        "python_full_version": platform.python_version(),
        "platform_python_implementation": platform.python_implementation(),
        "python_version": platform.python_version()[:3],
        "sys_platform": sys.platform,
    }


class Marker(object):

    def __init__(self, marker):
        try:
            self._markers = _coerce_parse_result(MARKER.parseString(marker))
        except ParseException as e:
            err_str = "Invalid marker: {0!r}, parse error at {1!r}".format(
                marker, marker[e.loc:e.loc + 8])
            raise InvalidMarker(err_str)

    def __str__(self):
        return _format_marker(self._markers)

    def __repr__(self):
        return "<Marker({0!r})>".format(str(self))

    def evaluate(self, environment=None):
        """Evaluate a marker.

        Return the boolean from evaluating the given marker against the
        environment. environment is an optional argument to override all or
        part of the determined environment.

        The environment is determined from the current Python process.
        """
        current_environment = default_environment()
        if environment is not None:
            current_environment.update(environment)

        return _evaluate_markers(self._markers, current_environment)
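# Editor's example (not part of the vendored module): a short Marker session
# on a typical CPython 3 / posix interpreter (results vary by environment):
#
#   >>> m = Marker('python_version >= "2.7" and os_name == "posix"')
#   >>> m.evaluate()                      # against the running interpreter
#   True
#   >>> m.evaluate({"os_name": "nt"})     # override part of the environment
#   False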
""" current_environment = default_environment() if environment is not None: current_environment.update(environment) return _evaluate_markers(self._markers, current_environment) ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/requirements.py������0000644�0001750�0001750�00000010403�00000000000�033567� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This file is dual licensed under the terms of the Apache License, Version # 2.0, and the BSD License. See the LICENSE file in the root of this repository # for complete details. from __future__ import absolute_import, division, print_function import string import re from pkg_resources.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException from pkg_resources.extern.pyparsing import ZeroOrMore, Word, Optional, Regex, Combine from pkg_resources.extern.pyparsing import Literal as L # noqa from pkg_resources.extern.six.moves.urllib import parse as urlparse from .markers import MARKER_EXPR, Marker from .specifiers import LegacySpecifier, Specifier, SpecifierSet class InvalidRequirement(ValueError): """ An invalid requirement was found, users should refer to PEP 508. 
""" ALPHANUM = Word(string.ascii_letters + string.digits) LBRACKET = L("[").suppress() RBRACKET = L("]").suppress() LPAREN = L("(").suppress() RPAREN = L(")").suppress() COMMA = L(",").suppress() SEMICOLON = L(";").suppress() AT = L("@").suppress() PUNCTUATION = Word("-_.") IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM) IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END)) NAME = IDENTIFIER("name") EXTRA = IDENTIFIER URI = Regex(r'[^ ]+')("url") URL = (AT + URI) EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA) EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras") VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE) VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE) VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY VERSION_MANY = Combine(VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False)("_raw_spec") _VERSION_SPEC = Optional(((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY)) _VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or '') VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier") VERSION_SPEC.setParseAction(lambda s, l, t: t[1]) MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") MARKER_EXPR.setParseAction( lambda s, l, t: Marker(s[t._original_start:t._original_end]) ) MARKER_SEPERATOR = SEMICOLON MARKER = MARKER_SEPERATOR + MARKER_EXPR VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER) URL_AND_MARKER = URL + Optional(MARKER) NAMED_REQUIREMENT = \ NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER) REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd class Requirement(object): """Parse a requirement. Parse a given requirement string into its parts, such as name, specifier, URL, and extras. Raises InvalidRequirement on a badly-formed requirement string. """ # TODO: Can we test whether something is contained within a requirement? # If so how do we do that? Do we need to test against the _name_ of # the thing as well as the version? What about the markers? # TODO: Can we normalize the name and extra name? 
    def __init__(self, requirement_string):
        try:
            req = REQUIREMENT.parseString(requirement_string)
        except ParseException as e:
            raise InvalidRequirement(
                "Invalid requirement, parse error at \"{0!r}\"".format(
                    requirement_string[e.loc:e.loc + 8]))

        self.name = req.name
        if req.url:
            parsed_url = urlparse.urlparse(req.url)
            if not (parsed_url.scheme and parsed_url.netloc) or (
                    not parsed_url.scheme and not parsed_url.netloc):
                raise InvalidRequirement("Invalid URL given")
            self.url = req.url
        else:
            self.url = None
        self.extras = set(req.extras.asList() if req.extras else [])
        self.specifier = SpecifierSet(req.specifier)
        self.marker = req.marker if req.marker else None

    def __str__(self):
        parts = [self.name]

        if self.extras:
            parts.append("[{0}]".format(",".join(sorted(self.extras))))

        if self.specifier:
            parts.append(str(self.specifier))

        if self.url:
            parts.append("@ {0}".format(self.url))

        if self.marker:
            parts.append("; {0}".format(self.marker))

        return "".join(parts)

    def __repr__(self):
        return "<Requirement({0!r})>".format(str(self))
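# Editor's example (not part of the vendored module): parsing a PEP 508
# requirement string into its parts:
#
#   >>> r = Requirement('requests[security]>=2.8.1,==2.8.*; python_version < "2.7"')
#   >>> r.name, sorted(r.extras)
#   ('requests', ['security'])
#   >>> str(r.specifier)
#   '==2.8.*,>=2.8.1'
#   >>> str(r.marker)
#   'python_version < "2.7"'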
""" @abc.abstractmethod def __hash__(self): """ Returns a hash value for this Specifier like object. """ @abc.abstractmethod def __eq__(self, other): """ Returns a boolean representing whether or not the two Specifier like objects are equal. """ @abc.abstractmethod def __ne__(self, other): """ Returns a boolean representing whether or not the two Specifier like objects are not equal. """ @abc.abstractproperty def prereleases(self): """ Returns whether or not pre-releases as a whole are allowed by this specifier. """ @prereleases.setter def prereleases(self, value): """ Sets whether or not pre-releases as a whole are allowed by this specifier. """ @abc.abstractmethod def contains(self, item, prereleases=None): """ Determines if the given item is contained within this specifier. """ @abc.abstractmethod def filter(self, iterable, prereleases=None): """ Takes an iterable of items and filters them so that only items which are contained within this specifier are allowed in it. """ class _IndividualSpecifier(BaseSpecifier): _operators = {} def __init__(self, spec="", prereleases=None): match = self._regex.search(spec) if not match: raise InvalidSpecifier("Invalid specifier: '{0}'".format(spec)) self._spec = ( match.group("operator").strip(), match.group("version").strip(), ) # Store whether or not this Specifier should accept prereleases self._prereleases = prereleases def __repr__(self): pre = ( ", prereleases={0!r}".format(self.prereleases) if self._prereleases is not None else "" ) return "<{0}({1!r}{2})>".format( self.__class__.__name__, str(self), pre, ) def __str__(self): return "{0}{1}".format(*self._spec) def __hash__(self): return hash(self._spec) def __eq__(self, other): if isinstance(other, string_types): try: other = self.__class__(other) except InvalidSpecifier: return NotImplemented elif not isinstance(other, self.__class__): return NotImplemented return self._spec == other._spec def __ne__(self, other): if isinstance(other, string_types): try: other = self.__class__(other) except InvalidSpecifier: return NotImplemented elif not isinstance(other, self.__class__): return NotImplemented return self._spec != other._spec def _get_operator(self, op): return getattr(self, "_compare_{0}".format(self._operators[op])) def _coerce_version(self, version): if not isinstance(version, (LegacyVersion, Version)): version = parse(version) return version @property def operator(self): return self._spec[0] @property def version(self): return self._spec[1] @property def prereleases(self): return self._prereleases @prereleases.setter def prereleases(self, value): self._prereleases = value def __contains__(self, item): return self.contains(item) def contains(self, item, prereleases=None): # Determine if prereleases are to be allowed or not. if prereleases is None: prereleases = self.prereleases # Normalize item to a Version or LegacyVersion, this allows us to have # a shortcut for ``"2.0" in Specifier(">=2") item = self._coerce_version(item) # Determine if we should be supporting prereleases in this specifier # or not, if we do not support prereleases than we can short circuit # logic if this version is a prereleases. if item.is_prerelease and not prereleases: return False # Actually do the comparison to determine if this item is contained # within this Specifier or not. 
        return self._get_operator(self.operator)(item, self.version)

    def filter(self, iterable, prereleases=None):
        yielded = False
        found_prereleases = []

        kw = {"prereleases": prereleases if prereleases is not None else True}

        # Attempt to iterate over all the values in the iterable and if any of
        # them match, yield them.
        for version in iterable:
            parsed_version = self._coerce_version(version)

            if self.contains(parsed_version, **kw):
                # If our version is a prerelease, and we were not set to allow
                # prereleases, then we'll store it for later in case nothing
                # else matches this specifier.
                if (parsed_version.is_prerelease and not
                        (prereleases or self.prereleases)):
                    found_prereleases.append(version)
                # Either this is not a prerelease, or we should have been
                # accepting prereleases from the beginning.
                else:
                    yielded = True
                    yield version

        # Now that we've iterated over everything, determine if we've yielded
        # any values, and if we have not and we have any prereleases stored up
        # then we will go ahead and yield the prereleases.
        if not yielded and found_prereleases:
            for version in found_prereleases:
                yield version


class LegacySpecifier(_IndividualSpecifier):

    _regex_str = (
        r"""
        (?P<operator>(==|!=|<=|>=|<|>))
        \s*
        (?P<version>
            [^,;\s)]* # Since this is a "legacy" specifier, and the version
                      # string can be just about anything, we match everything
                      # except for whitespace, a semi-colon for marker support,
                      # a closing paren since versions can be enclosed in
                      # them, and a comma since it's a version separator.
        )
        """
    )

    _regex = re.compile(
        r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)

    _operators = {
        "==": "equal",
        "!=": "not_equal",
        "<=": "less_than_equal",
        ">=": "greater_than_equal",
        "<": "less_than",
        ">": "greater_than",
    }

    def _coerce_version(self, version):
        if not isinstance(version, LegacyVersion):
            version = LegacyVersion(str(version))
        return version

    def _compare_equal(self, prospective, spec):
        return prospective == self._coerce_version(spec)

    def _compare_not_equal(self, prospective, spec):
        return prospective != self._coerce_version(spec)

    def _compare_less_than_equal(self, prospective, spec):
        return prospective <= self._coerce_version(spec)

    def _compare_greater_than_equal(self, prospective, spec):
        return prospective >= self._coerce_version(spec)

    def _compare_less_than(self, prospective, spec):
        return prospective < self._coerce_version(spec)

    def _compare_greater_than(self, prospective, spec):
        return prospective > self._coerce_version(spec)


def _require_version_compare(fn):
    @functools.wraps(fn)
    def wrapped(self, prospective, spec):
        if not isinstance(prospective, Version):
            return False
        return fn(self, prospective, spec)
    return wrapped


class Specifier(_IndividualSpecifier):

    _regex_str = (
        r"""
        (?P<operator>(~=|==|!=|<=|>=|<|>|===))
        (?P<version>
            (?:
                # The identity operators allow for an escape hatch that will
                # do an exact string match of the version you wish to install.
                # This will not be parsed by PEP 440 and we cannot determine
                # any semantic meaning from it. This operator is discouraged
                # but included entirely as an escape hatch.
                (?<====)  # Only match for the identity operator
                \s*
                [^\s]*    # We just match everything, except for whitespace
                          # since we are only testing for strict identity.
            )
            |
            (?:
                # The (non)equality operators allow for wild card and local
                # versions to be specified so we have to define these two
                # operators separately to enable that.
                (?<===|!=)            # Only match for equals and not equals

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)*   # release
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?

                # You cannot use a wild card and a dev or local version
                # together so group them with a | and make them optional.
                (?:
                    (?:[-_\.]?dev[-_\.]?[0-9]*)?         # dev release
                    (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local
                    |
                    \.\*  # Wild card syntax of .*
                )?
            )
            |
            (?:
                # The compatible operator requires at least two digits in the
                # release segment.
                (?<=~=)               # Only match for the compatible operator

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)+   # release  (We have a + instead of a *)
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?
                (?:[-_\.]?dev[-_\.]?[0-9]*)?  # dev release
            )
            |
            (?:
                # All other operators only allow a sub set of what the
                # (non)equality operators do. Specifically they do not allow
                # local versions to be specified nor do they allow the prefix
                # matching wild cards.
                (?<!==|!=|~=)         # We have special cases for these
                                      # operators so we want to make sure they
                                      # don't match here.

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)*   # release
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?
                (?:[-_\.]?dev[-_\.]?[0-9]*)?  # dev release
            )
        )
        """
    )

    _regex = re.compile(
        r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)

    _operators = {
        "~=": "compatible",
        "==": "equal",
        "!=": "not_equal",
        "<=": "less_than_equal",
        ">=": "greater_than_equal",
        "<": "less_than",
        ">": "greater_than",
        "===": "arbitrary",
    }

    @_require_version_compare
    def _compare_compatible(self, prospective, spec):
        # Compatible releases have an equivalent combination of >= and ==.
        # That is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
        # implement this in terms of the other specifiers instead of
        # implementing it ourselves. The only thing we need to do is construct
        # the other specifiers.

        # We want everything but the last item in the version, but we want to
        # ignore post and dev releases and we want to treat the pre-release as
        # its own separate segment.
        prefix = ".".join(
            list(
                itertools.takewhile(
                    lambda x: (not x.startswith("post") and not
                               x.startswith("dev")),
                    _version_split(spec),
                )
            )[:-1]
        )

        # Add the prefix notation to the end of our string
        prefix += ".*"

        return (self._get_operator(">=")(prospective, spec) and
                self._get_operator("==")(prospective, prefix))

    @_require_version_compare
    def _compare_equal(self, prospective, spec):
        # We need special logic to handle prefix matching
        if spec.endswith(".*"):
            # In the case of prefix matching we want to ignore local segment.
            prospective = Version(prospective.public)

            # Split the spec out by dots, and pretend that there is an
            # implicit dot in between a release segment and a pre-release
            # segment.
            spec = _version_split(spec[:-2])  # Remove the trailing .*

            # Split the prospective version out by dots, and pretend that
            # there is an implicit dot in between a release segment and a
            # pre-release segment.
            prospective = _version_split(str(prospective))

            # Shorten the prospective version to be the same length as the
            # spec so that we can determine if the specifier is a prefix of
            # the prospective version or not.
            prospective = prospective[:len(spec)]

            # Pad out our two sides with zeros so that they both equal the
            # same length.
            spec, prospective = _pad_version(spec, prospective)
        else:
            # Convert our spec string into a Version
            spec = Version(spec)

            # If the specifier does not have a local segment, then we want to
            # act as if the prospective version also does not have a local
            # segment.
            if not spec.local:
                prospective = Version(prospective.public)

        return prospective == spec

    @_require_version_compare
    def _compare_not_equal(self, prospective, spec):
        return not self._compare_equal(prospective, spec)

    @_require_version_compare
    def _compare_less_than_equal(self, prospective, spec):
        return prospective <= Version(spec)

    @_require_version_compare
    def _compare_greater_than_equal(self, prospective, spec):
        return prospective >= Version(spec)

    @_require_version_compare
    def _compare_less_than(self, prospective, spec):
        # Convert our spec to a Version instance, since we'll want to work
        # with it as a version.
        spec = Version(spec)

        # Check to see if the prospective version is less than the spec
        # version. If it's not we can short circuit and just return False now
        # instead of doing extra unneeded work.
        if not prospective < spec:
            return False

        # This special case is here so that, unless the specifier itself
        # includes a pre-release version, we do not accept pre-release
        # versions for the version mentioned in the specifier (e.g. <3.1
        # should not match 3.1.dev0, but should match 3.0.dev0).
        if not spec.is_prerelease and prospective.is_prerelease:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # If we've gotten to here, it means that the prospective version is
        # both less than the spec version *and* it's not a pre-release of the
        # same version in the spec.
        return True

    @_require_version_compare
    def _compare_greater_than(self, prospective, spec):
        # Convert our spec to a Version instance, since we'll want to work
        # with it as a version.
        spec = Version(spec)

        # Check to see if the prospective version is greater than the spec
        # version. If it's not we can short circuit and just return False now
        # instead of doing extra unneeded work.
        if not prospective > spec:
            return False

        # This special case is here so that, unless the specifier itself
        # includes a post-release version, we do not accept post-release
        # versions for the version mentioned in the specifier (e.g. >3.1
        # should not match 3.0.post0, but should match 3.2.post0).
        if not spec.is_postrelease and prospective.is_postrelease:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # Ensure that we do not allow a local version of the version
        # mentioned in the specifier, which is technically greater than, to
        # match.
        if prospective.local is not None:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # If we've gotten to here, it means that the prospective version is
        # both greater than the spec version *and* it's not a pre-release of
        # the same version in the spec.
        return True

    def _compare_arbitrary(self, prospective, spec):
        return str(prospective).lower() == str(spec).lower()

    @property
    def prereleases(self):
        # If there is an explicit prereleases set for this, then we'll just
        # blindly use that.
        if self._prereleases is not None:
            return self._prereleases

        # Look at all of our specifiers and determine if they are inclusive
        # operators, and if they are if they are including an explicit
        # prerelease.
        operator, version = self._spec
        if operator in ["==", ">=", "<=", "~=", "==="]:
            # The == specifier can include a trailing .*, if it does we
            # want to remove it before parsing.
            if operator == "==" and version.endswith(".*"):
                version = version[:-2]

            # Parse the version, and if it is a pre-release then this
            # specifier allows pre-releases.
            if parse(version).is_prerelease:
                return True

        return False

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value
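# Editor's example (not part of the vendored module): the compatible-release
# operator in action; ~=2.2 behaves like >=2.2,==2.*:
#
#   >>> s = Specifier("~=2.2")
#   >>> s.contains("2.3")
#   True
#   >>> s.contains("3.0")
#   False
#   >>> "2.5" in s        # __contains__ dispatches to contains()
#   True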
if operator == "==" and version.endswith(".*"): version = version[:-2] # Parse the version, and if it is a pre-release than this # specifier allows pre-releases. if parse(version).is_prerelease: return True return False @prereleases.setter def prereleases(self, value): self._prereleases = value _prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$") def _version_split(version): result = [] for item in version.split("."): match = _prefix_regex.search(item) if match: result.extend(match.groups()) else: result.append(item) return result def _pad_version(left, right): left_split, right_split = [], [] # Get the release segment of our versions left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left))) right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right))) # Get the rest of our versions left_split.append(left[len(left_split[0]):]) right_split.append(right[len(right_split[0]):]) # Insert our padding left_split.insert( 1, ["0"] * max(0, len(right_split[0]) - len(left_split[0])), ) right_split.insert( 1, ["0"] * max(0, len(left_split[0]) - len(right_split[0])), ) return ( list(itertools.chain(*left_split)), list(itertools.chain(*right_split)), ) class SpecifierSet(BaseSpecifier): def __init__(self, specifiers="", prereleases=None): # Split on , to break each indidivual specifier into it's own item, and # strip each item to remove leading/trailing whitespace. specifiers = [s.strip() for s in specifiers.split(",") if s.strip()] # Parsed each individual specifier, attempting first to make it a # Specifier and falling back to a LegacySpecifier. parsed = set() for specifier in specifiers: try: parsed.add(Specifier(specifier)) except InvalidSpecifier: parsed.add(LegacySpecifier(specifier)) # Turn our parsed specifiers into a frozen set and save them for later. self._specs = frozenset(parsed) # Store our prereleases value so we can use it later to determine if # we accept prereleases or not. self._prereleases = prereleases def __repr__(self): pre = ( ", prereleases={0!r}".format(self.prereleases) if self._prereleases is not None else "" ) return "<SpecifierSet({0!r}{1})>".format(str(self), pre) def __str__(self): return ",".join(sorted(str(s) for s in self._specs)) def __hash__(self): return hash(self._specs) def __and__(self, other): if isinstance(other, string_types): other = SpecifierSet(other) elif not isinstance(other, SpecifierSet): return NotImplemented specifier = SpecifierSet() specifier._specs = frozenset(self._specs | other._specs) if self._prereleases is None and other._prereleases is not None: specifier._prereleases = other._prereleases elif self._prereleases is not None and other._prereleases is None: specifier._prereleases = self._prereleases elif self._prereleases == other._prereleases: specifier._prereleases = self._prereleases else: raise ValueError( "Cannot combine SpecifierSets with True and False prerelease " "overrides." 
    def __eq__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif isinstance(other, _IndividualSpecifier):
            other = SpecifierSet(str(other))
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        return self._specs == other._specs

    def __ne__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif isinstance(other, _IndividualSpecifier):
            other = SpecifierSet(str(other))
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        return self._specs != other._specs

    def __len__(self):
        return len(self._specs)

    def __iter__(self):
        return iter(self._specs)

    @property
    def prereleases(self):
        # If we have been given an explicit prerelease modifier, then we'll
        # pass that through here.
        if self._prereleases is not None:
            return self._prereleases

        # If we don't have any specifiers, and we don't have a forced value,
        # then we'll just return None since we don't know if this should have
        # pre-releases or not.
        if not self._specs:
            return None

        # Otherwise we'll see if any of the given specifiers accept
        # prereleases; if any of them do we'll return True, otherwise False.
        return any(s.prereleases for s in self._specs)

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value

    def __contains__(self, item):
        return self.contains(item)

    def contains(self, item, prereleases=None):
        # Ensure that our item is a Version or LegacyVersion instance.
        if not isinstance(item, (LegacyVersion, Version)):
            item = parse(item)

        # Determine if we're forcing a prerelease or not; if we're not forcing
        # one for this particular filter call, then we'll use whatever the
        # SpecifierSet thinks for whether or not we should support prereleases.
        if prereleases is None:
            prereleases = self.prereleases

        # We can determine if we're going to allow pre-releases by looking to
        # see if any of the underlying items support them. If none of them do
        # and this item is a pre-release then we do not allow it and we can
        # short circuit that here.
        # Note: This means that 1.0.dev1 would not be contained in something
        # like >=1.0.devabc, however it would be in >=1.0.devabc,>0.0.dev0
        if not prereleases and item.is_prerelease:
            return False

        # We simply dispatch to the underlying specs here to make sure that the
        # given version is contained within all of them.
        # Note: This use of all() here means that an empty set of specifiers
        # will always return True; this is an explicit design decision.
        return all(
            s.contains(item, prereleases=prereleases)
            for s in self._specs
        )

    def filter(self, iterable, prereleases=None):
        # Determine if we're forcing a prerelease or not; if we're not forcing
        # one for this particular filter call, then we'll use whatever the
        # SpecifierSet thinks for whether or not we should support prereleases.
        if prereleases is None:
            prereleases = self.prereleases

        # If we have any specifiers, then we want to wrap our iterable in the
        # filter method for each one; this will act as a logical AND amongst
        # each specifier.
        if self._specs:
            for spec in self._specs:
                iterable = spec.filter(iterable, prereleases=bool(prereleases))
            return iterable
        # If we do not have any specifiers, then we need to have a rough filter
        # which will filter out any pre-releases, unless there are no final
        # releases, and which will filter out LegacyVersion in general.
        else:
            filtered = []
            found_prereleases = []
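            # Editor's illustration (hypothetical session; not upstream code):
            #
            #     >>> "1.5" in SpecifierSet(">=1.0,<2.0")
            #     True
            #     >>> "1.3.dev1" in SpecifierSet(">=1.0")   # pre-release gated
            #     False
            #     >>> SpecifierSet().filter(["1.0", "2.0.dev1"])
            #     ['1.0']
            #     >>> SpecifierSet().filter(["1.0a1"])      # only pre-releases
            #     ['1.0a1']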
            for item in iterable:
                # Ensure that we have some kind of Version class for this item.
                if not isinstance(item, (LegacyVersion, Version)):
                    parsed_version = parse(item)
                else:
                    parsed_version = item

                # Filter out any item which is parsed as a LegacyVersion
                if isinstance(parsed_version, LegacyVersion):
                    continue

                # Store any item which is a pre-release for later unless we've
                # already found a final version or we are accepting prereleases
                if parsed_version.is_prerelease and not prereleases:
                    if not filtered:
                        found_prereleases.append(item)
                else:
                    filtered.append(item)

            # If we've found no items except for pre-releases, then we'll go
            # ahead and use the pre-releases
            if not filtered and found_prereleases and prereleases is None:
                return found_prereleases

            return filtered


cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/utils.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import re


_canonicalize_regex = re.compile(r"[-_.]+")


def canonicalize_name(name):
    # This is taken from PEP 503.
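    # Editor's illustration (hypothetical session): runs of "-", "_" and "."
    # collapse to a single "-" and the result is lowercased:
    #
    #     >>> canonicalize_name("Flask_SQLAlchemy")
    #     'flask-sqlalchemy'
    #     >>> canonicalize_name("zope.interface")
    #     'zope-interface'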
    return _canonicalize_regex.sub("-", name).lower()


cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import collections
import itertools
import re

from ._structures import Infinity


__all__ = [
    "parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"
]


_Version = collections.namedtuple(
    "_Version",
    ["epoch", "release", "dev", "pre", "post", "local"],
)


def parse(version):
    """
    Parse the given version string and return either a :class:`Version` object
    or a :class:`LegacyVersion` object depending on if the given version is
    a valid PEP 440 version or a legacy version.
    """
    try:
        return Version(version)
    except InvalidVersion:
        return LegacyVersion(version)


class InvalidVersion(ValueError):
    """
    An invalid version was found, users should refer to PEP 440.
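
    Example (editor's illustration)::

        >>> Version("french toast")
        Traceback (most recent call last):
            ...
        InvalidVersion: Invalid version: 'french toast'
        >>> parse("french toast")   # parse() falls back instead of raising
        <LegacyVersion('french toast')>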
""" class _BaseVersion(object): def __hash__(self): return hash(self._key) def __lt__(self, other): return self._compare(other, lambda s, o: s < o) def __le__(self, other): return self._compare(other, lambda s, o: s <= o) def __eq__(self, other): return self._compare(other, lambda s, o: s == o) def __ge__(self, other): return self._compare(other, lambda s, o: s >= o) def __gt__(self, other): return self._compare(other, lambda s, o: s > o) def __ne__(self, other): return self._compare(other, lambda s, o: s != o) def _compare(self, other, method): if not isinstance(other, _BaseVersion): return NotImplemented return method(self._key, other._key) class LegacyVersion(_BaseVersion): def __init__(self, version): self._version = str(version) self._key = _legacy_cmpkey(self._version) def __str__(self): return self._version def __repr__(self): return "<LegacyVersion({0})>".format(repr(str(self))) @property def public(self): return self._version @property def base_version(self): return self._version @property def local(self): return None @property def is_prerelease(self): return False @property def is_postrelease(self): return False _legacy_version_component_re = re.compile( r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE, ) _legacy_version_replacement_map = { "pre": "c", "preview": "c", "-": "final-", "rc": "c", "dev": "@", } def _parse_version_parts(s): for part in _legacy_version_component_re.split(s): part = _legacy_version_replacement_map.get(part, part) if not part or part == ".": continue if part[:1] in "0123456789": # pad for numeric comparison yield part.zfill(8) else: yield "*" + part # ensure that alpha/beta/candidate are before final yield "*final" def _legacy_cmpkey(version): # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch # greater than or equal to 0. This will effectively put the LegacyVersion, # which uses the defacto standard originally implemented by setuptools, # as before all PEP 440 versions. epoch = -1 # This scheme is taken from pkg_resources.parse_version setuptools prior to # it's adoption of the packaging library. parts = [] for part in _parse_version_parts(version.lower()): if part.startswith("*"): # remove "-" before a prerelease tag if part < "*final": while parts and parts[-1] == "*final-": parts.pop() # remove trailing zeros from each series of numeric parts while parts and parts[-1] == "00000000": parts.pop() parts.append(part) parts = tuple(parts) return epoch, parts # Deliberately not anchored to the start and end of the string, to make it # easier for 3rd party code to reuse VERSION_PATTERN = r""" v? (?: (?:(?P<epoch>[0-9]+)!)? # epoch (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment (?P<pre> # pre-release [-_\.]? (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview)) [-_\.]? (?P<pre_n>[0-9]+)? )? (?P<post> # post release (?:-(?P<post_n1>[0-9]+)) | (?: [-_\.]? (?P<post_l>post|rev|r) [-_\.]? (?P<post_n2>[0-9]+)? ) )? (?P<dev> # dev release [-_\.]? (?P<dev_l>dev) [-_\.]? (?P<dev_n>[0-9]+)? )? ) (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? 
# local version """ class Version(_BaseVersion): _regex = re.compile( r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE, ) def __init__(self, version): # Validate the version and parse it into pieces match = self._regex.search(version) if not match: raise InvalidVersion("Invalid version: '{0}'".format(version)) # Store the parsed out pieces of the version self._version = _Version( epoch=int(match.group("epoch")) if match.group("epoch") else 0, release=tuple(int(i) for i in match.group("release").split(".")), pre=_parse_letter_version( match.group("pre_l"), match.group("pre_n"), ), post=_parse_letter_version( match.group("post_l"), match.group("post_n1") or match.group("post_n2"), ), dev=_parse_letter_version( match.group("dev_l"), match.group("dev_n"), ), local=_parse_local_version(match.group("local")), ) # Generate a key which will be used for sorting self._key = _cmpkey( self._version.epoch, self._version.release, self._version.pre, self._version.post, self._version.dev, self._version.local, ) def __repr__(self): return "<Version({0})>".format(repr(str(self))) def __str__(self): parts = [] # Epoch if self._version.epoch != 0: parts.append("{0}!".format(self._version.epoch)) # Release segment parts.append(".".join(str(x) for x in self._version.release)) # Pre-release if self._version.pre is not None: parts.append("".join(str(x) for x in self._version.pre)) # Post-release if self._version.post is not None: parts.append(".post{0}".format(self._version.post[1])) # Development release if self._version.dev is not None: parts.append(".dev{0}".format(self._version.dev[1])) # Local version segment if self._version.local is not None: parts.append( "+{0}".format(".".join(str(x) for x in self._version.local)) ) return "".join(parts) @property def public(self): return str(self).split("+", 1)[0] @property def base_version(self): parts = [] # Epoch if self._version.epoch != 0: parts.append("{0}!".format(self._version.epoch)) # Release segment parts.append(".".join(str(x) for x in self._version.release)) return "".join(parts) @property def local(self): version_string = str(self) if "+" in version_string: return version_string.split("+", 1)[1] @property def is_prerelease(self): return bool(self._version.dev or self._version.pre) @property def is_postrelease(self): return bool(self._version.post) def _parse_letter_version(letter, number): if letter: # We consider there to be an implicit 0 in a pre-release if there is # not a numeral associated with it. if number is None: number = 0 # We normalize any letters to their lower case form letter = letter.lower() # We consider some words to be alternate spellings of other words and # in those cases we want to normalize the spellings to our preferred # spelling. if letter == "alpha": letter = "a" elif letter == "beta": letter = "b" elif letter in ["c", "pre", "preview"]: letter = "rc" elif letter in ["rev", "r"]: letter = "post" return letter, int(number) if not letter and number: # We assume if we are given a number, but we are not given a letter # then this is using the implicit post release syntax (e.g. 1.0-1) letter = "post" return letter, int(number) _local_version_seperators = re.compile(r"[\._-]") def _parse_local_version(local): """ Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve"). 
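
    Example (editor's illustration)::

        >>> _parse_local_version("abc.1.twelve")
        ('abc', 1, 'twelve')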
""" if local is not None: return tuple( part.lower() if not part.isdigit() else int(part) for part in _local_version_seperators.split(local) ) def _cmpkey(epoch, release, pre, post, dev, local): # When we compare a release version, we want to compare it with all of the # trailing zeros removed. So we'll use a reverse the list, drop all the now # leading zeros until we come to something non zero, then take the rest # re-reverse it back into the correct order and make it a tuple and use # that for our sorting key. release = tuple( reversed(list( itertools.dropwhile( lambda x: x == 0, reversed(release), ) )) ) # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0. # We'll do this by abusing the pre segment, but we _only_ want to do this # if there is not a pre or a post segment. If we have one of those then # the normal sorting rules will handle this case correctly. if pre is None and post is None and dev is not None: pre = -Infinity # Versions without a pre-release (except as noted above) should sort after # those with one. elif pre is None: pre = Infinity # Versions without a post segment should sort before those with one. if post is None: post = -Infinity # Versions without a development segment should sort after those with one. if dev is None: dev = Infinity if local is None: # Versions without a local segment should sort before those with one. local = -Infinity else: # Versions with a local segment need that segment parsed to implement # the sorting rules in PEP440. # - Alpha numeric segments sort before numeric segments # - Alpha numeric segments sort lexicographically # - Numeric segments sort numerically # - Shorter versions sort before longer versions when the prefixes # match exactly local = tuple( (i, "") if isinstance(i, int) else (-Infinity, i) for i in local ) return epoch, release, pre, post, dev, local ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py�������������������0000644�0001750�0001750�00000705167�00000000000�031156� 

cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py

# module pyparsing.py
#
# Copyright (c) 2003-2018  Paul T. McGuire
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#

__doc__ = \
"""
pyparsing module - Classes and methods to define and execute parsing grammars
=============================================================================

The pyparsing module is an alternative approach to creating and executing
simple grammars, vs. the traditional lex/yacc approach, or the use of
regular expressions. With pyparsing, you don't need to learn a new syntax
for defining grammars or matching expressions - the parsing module provides
a library of classes that you use to construct the grammar directly in
Python.

Here is a program to parse "Hello, World!" (or any greeting of the form
C{"<salutation>, <addressee>!"}), built up using L{Word}, L{Literal}, and
L{And} elements (L{'+'<ParserElement.__add__>} operator gives L{And}
expressions, strings are auto-converted to L{Literal} expressions)::

    from pyparsing import Word, alphas

    # define grammar of a greeting
    greet = Word(alphas) + "," + Word(alphas) + "!"

    hello = "Hello, World!"
    print (hello, "->", greet.parseString(hello))

The program outputs the following::

    Hello, World! -> ['Hello', ',', 'World', '!']

The Python representation of the grammar is quite readable, owing to the
self-explanatory class names, and the use of '+', '|' and '^' operators.

The L{ParseResults} object returned from
L{ParserElement.parseString<ParserElement.parseString>} can be accessed as a
nested list, a dictionary, or an object with named attributes.

The pyparsing module handles some of the problems that are typically vexing
when writing text parsers:
 - extra or missing whitespace (the above program will also handle
   "Hello,World!", "Hello , World !", etc.)
 - quoted strings
 - embedded comments

Getting Started -
-----------------
Visit the classes L{ParserElement} and L{ParseResults} to see the base classes
that most other pyparsing classes inherit from.
Use the docstrings for examples of how to: - construct literal match expressions from L{Literal} and L{CaselessLiteral} classes - construct character word-group expressions using the L{Word} class - see how to create repetitive expressions using L{ZeroOrMore} and L{OneOrMore} classes - use L{'+'<And>}, L{'|'<MatchFirst>}, L{'^'<Or>}, and L{'&'<Each>} operators to combine simple expressions into more complex ones - associate names with your parsed results using L{ParserElement.setResultsName} - find some helpful expression short-cuts like L{delimitedList} and L{oneOf} - find more useful common expressions in the L{pyparsing_common} namespace class """ __version__ = "2.2.1" __versionTime__ = "18 Sep 2018 00:49 UTC" __author__ = "Paul McGuire <ptmcg@users.sourceforge.net>" import string from weakref import ref as wkref import copy import sys import warnings import re import sre_constants import collections import pprint import traceback import types from datetime import datetime try: from _thread import RLock except ImportError: from threading import RLock try: # Python 3 from collections.abc import Iterable from collections.abc import MutableMapping except ImportError: # Python 2.7 from collections import Iterable from collections import MutableMapping try: from collections import OrderedDict as _OrderedDict except ImportError: try: from ordereddict import OrderedDict as _OrderedDict except ImportError: _OrderedDict = None #~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) ) __all__ = [ 'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty', 'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal', 'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or', 'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException', 'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException', 'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter', 'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore', 'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col', 'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString', 'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'hexnums', 'htmlComment', 'javaStyleComment', 'line', 'lineEnd', 'lineStart', 'lineno', 'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral', 'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables', 'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity', 'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd', 'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute', 'indentedBlock', 'originalTextFor', 'ungroup', 'infixNotation','locatedExpr', 'withClass', 'CloseMatch', 'tokenMap', 'pyparsing_common', ] system_version = tuple(sys.version_info)[:3] PY_3 = system_version[0] == 3 if PY_3: _MAX_INT = sys.maxsize basestring = str unichr = chr _ustr = str # build list of single arg builtins, that can be used as parse actions singleArgBuiltins = [sum, len, sorted, reversed, list, tuple, set, any, all, min, max] else: _MAX_INT = sys.maxint range = xrange def _ustr(obj): """Drop-in replacement for str(obj) that tries to be Unicode friendly. It first tries str(obj). 
If that fails with a UnicodeEncodeError, then it tries unicode(obj). It then < returns the unicode object | encodes it with the default encoding | ... >. """ if isinstance(obj,unicode): return obj try: # If this works, then _ustr(obj) has the same behaviour as str(obj), so # it won't break any existing code. return str(obj) except UnicodeEncodeError: # Else encode it ret = unicode(obj).encode(sys.getdefaultencoding(), 'xmlcharrefreplace') xmlcharref = Regex(r'&#\d+;') xmlcharref.setParseAction(lambda t: '\\u' + hex(int(t[0][2:-1]))[2:]) return xmlcharref.transformString(ret) # build list of single arg builtins, tolerant of Python version, that can be used as parse actions singleArgBuiltins = [] import __builtin__ for fname in "sum len sorted reversed list tuple set any all min max".split(): try: singleArgBuiltins.append(getattr(__builtin__,fname)) except AttributeError: continue _generatorType = type((y for y in range(1))) def _xml_escape(data): """Escape &, <, >, ", ', etc. in a string of data.""" # ampersand must be replaced first from_symbols = '&><"\'' to_symbols = ('&'+s+';' for s in "amp gt lt quot apos".split()) for from_,to_ in zip(from_symbols, to_symbols): data = data.replace(from_, to_) return data class _Constants(object): pass alphas = string.ascii_uppercase + string.ascii_lowercase nums = "0123456789" hexnums = nums + "ABCDEFabcdef" alphanums = alphas + nums _bslash = chr(92) printables = "".join(c for c in string.printable if c not in string.whitespace) class ParseBaseException(Exception): """base exception class for all parsing runtime exceptions""" # Performance tuning: we construct a *lot* of these, so keep this # constructor as small and fast as possible def __init__( self, pstr, loc=0, msg=None, elem=None ): self.loc = loc if msg is None: self.msg = pstr self.pstr = "" else: self.msg = msg self.pstr = pstr self.parserElement = elem self.args = (pstr, loc, msg) @classmethod def _from_exception(cls, pe): """ internal factory method to simplify creating one type of ParseException from another - avoids having __init__ signature conflicts among subclasses """ return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement) def __getattr__( self, aname ): """supported attributes by name are: - lineno - returns the line number of the exception text - col - returns the column number of the exception text - line - returns the line containing the exception text """ if( aname == "lineno" ): return lineno( self.loc, self.pstr ) elif( aname in ("col", "column") ): return col( self.loc, self.pstr ) elif( aname == "line" ): return line( self.loc, self.pstr ) else: raise AttributeError(aname) def __str__( self ): return "%s (at char %d), (line:%d, col:%d)" % \ ( self.msg, self.loc, self.lineno, self.column ) def __repr__( self ): return _ustr(self) def markInputline( self, markerString = ">!<" ): """Extracts the exception line from the input string, and marks the location of the exception with a special symbol. 
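
        Example (editor's illustration)::

            try:
                Word(nums).parseString("abc")
            except ParseException as pe:
                print(pe.markInputline())   # -> >!<abc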
""" line_str = self.line line_column = self.column - 1 if markerString: line_str = "".join((line_str[:line_column], markerString, line_str[line_column:])) return line_str.strip() def __dir__(self): return "lineno col line".split() + dir(type(self)) class ParseException(ParseBaseException): """ Exception thrown when parse expressions don't match class; supported attributes by name are: - lineno - returns the line number of the exception text - col - returns the column number of the exception text - line - returns the line containing the exception text Example:: try: Word(nums).setName("integer").parseString("ABC") except ParseException as pe: print(pe) print("column: {}".format(pe.col)) prints:: Expected integer (at char 0), (line:1, col:1) column: 1 """ pass class ParseFatalException(ParseBaseException): """user-throwable exception thrown when inconsistent parse content is found; stops all parsing immediately""" pass class ParseSyntaxException(ParseFatalException): """just like L{ParseFatalException}, but thrown internally when an L{ErrorStop<And._ErrorStop>} ('-' operator) indicates that parsing is to stop immediately because an unbacktrackable syntax error has been found""" pass #~ class ReparseException(ParseBaseException): #~ """Experimental class - parse actions can raise this exception to cause #~ pyparsing to reparse the input string: #~ - with a modified input string, and/or #~ - with a modified start location #~ Set the values of the ReparseException in the constructor, and raise the #~ exception in a parse action to cause pyparsing to use the new string/location. #~ Setting the values as None causes no change to be made. #~ """ #~ def __init_( self, newstring, restartLoc ): #~ self.newParseText = newstring #~ self.reparseLoc = restartLoc class RecursiveGrammarException(Exception): """exception thrown by L{ParserElement.validate} if the grammar could be improperly recursive""" def __init__( self, parseElementList ): self.parseElementTrace = parseElementList def __str__( self ): return "RecursiveGrammarException: %s" % self.parseElementTrace class _ParseResultsWithOffset(object): def __init__(self,p1,p2): self.tup = (p1,p2) def __getitem__(self,i): return self.tup[i] def __repr__(self): return repr(self.tup[0]) def setOffset(self,i): self.tup = (self.tup[0],i) class ParseResults(object): """ Structured parse results, to provide multiple means of access to the parsed data: - as a list (C{len(results)}) - by list index (C{results[0], results[1]}, etc.) 
- by attribute (C{results.<resultsName>} - see L{ParserElement.setResultsName}) Example:: integer = Word(nums) date_str = (integer.setResultsName("year") + '/' + integer.setResultsName("month") + '/' + integer.setResultsName("day")) # equivalent form: # date_str = integer("year") + '/' + integer("month") + '/' + integer("day") # parseString returns a ParseResults object result = date_str.parseString("1999/12/31") def test(s, fn=repr): print("%s -> %s" % (s, fn(eval(s)))) test("list(result)") test("result[0]") test("result['month']") test("result.day") test("'month' in result") test("'minutes' in result") test("result.dump()", str) prints:: list(result) -> ['1999', '/', '12', '/', '31'] result[0] -> '1999' result['month'] -> '12' result.day -> '31' 'month' in result -> True 'minutes' in result -> False result.dump() -> ['1999', '/', '12', '/', '31'] - day: 31 - month: 12 - year: 1999 """ def __new__(cls, toklist=None, name=None, asList=True, modal=True ): if isinstance(toklist, cls): return toklist retobj = object.__new__(cls) retobj.__doinit = True return retobj # Performance tuning: we construct a *lot* of these, so keep this # constructor as small and fast as possible def __init__( self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance ): if self.__doinit: self.__doinit = False self.__name = None self.__parent = None self.__accumNames = {} self.__asList = asList self.__modal = modal if toklist is None: toklist = [] if isinstance(toklist, list): self.__toklist = toklist[:] elif isinstance(toklist, _generatorType): self.__toklist = list(toklist) else: self.__toklist = [toklist] self.__tokdict = dict() if name is not None and name: if not modal: self.__accumNames[name] = 0 if isinstance(name,int): name = _ustr(name) # will always return a str, but use _ustr for consistency self.__name = name if not (isinstance(toklist, (type(None), basestring, list)) and toklist in (None,'',[])): if isinstance(toklist,basestring): toklist = [ toklist ] if asList: if isinstance(toklist,ParseResults): self[name] = _ParseResultsWithOffset(toklist.copy(),0) else: self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0) self[name].__name = name else: try: self[name] = toklist[0] except (KeyError,TypeError,IndexError): self[name] = toklist def __getitem__( self, i ): if isinstance( i, (int,slice) ): return self.__toklist[i] else: if i not in self.__accumNames: return self.__tokdict[i][-1][0] else: return ParseResults([ v[0] for v in self.__tokdict[i] ]) def __setitem__( self, k, v, isinstance=isinstance ): if isinstance(v,_ParseResultsWithOffset): self.__tokdict[k] = self.__tokdict.get(k,list()) + [v] sub = v[0] elif isinstance(k,(int,slice)): self.__toklist[k] = v sub = v else: self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)] sub = v if isinstance(sub,ParseResults): sub.__parent = wkref(self) def __delitem__( self, i ): if isinstance(i,(int,slice)): mylen = len( self.__toklist ) del self.__toklist[i] # convert int to slice if isinstance(i, int): if i < 0: i += mylen i = slice(i, i+1) # get removed indices removed = list(range(*i.indices(mylen))) removed.reverse() # fixup indices in token dictionary for name,occurrences in self.__tokdict.items(): for j in removed: for k, (value, position) in enumerate(occurrences): occurrences[k] = _ParseResultsWithOffset(value, position - (position > j)) else: del self.__tokdict[i] def __contains__( self, k ): return k in self.__tokdict def __len__( self ): return len( self.__toklist ) def __bool__(self): return ( 
not not self.__toklist ) __nonzero__ = __bool__ def __iter__( self ): return iter( self.__toklist ) def __reversed__( self ): return iter( self.__toklist[::-1] ) def _iterkeys( self ): if hasattr(self.__tokdict, "iterkeys"): return self.__tokdict.iterkeys() else: return iter(self.__tokdict) def _itervalues( self ): return (self[k] for k in self._iterkeys()) def _iteritems( self ): return ((k, self[k]) for k in self._iterkeys()) if PY_3: keys = _iterkeys """Returns an iterator of all named result keys (Python 3.x only).""" values = _itervalues """Returns an iterator of all named result values (Python 3.x only).""" items = _iteritems """Returns an iterator of all named result key-value tuples (Python 3.x only).""" else: iterkeys = _iterkeys """Returns an iterator of all named result keys (Python 2.x only).""" itervalues = _itervalues """Returns an iterator of all named result values (Python 2.x only).""" iteritems = _iteritems """Returns an iterator of all named result key-value tuples (Python 2.x only).""" def keys( self ): """Returns all named result keys (as a list in Python 2.x, as an iterator in Python 3.x).""" return list(self.iterkeys()) def values( self ): """Returns all named result values (as a list in Python 2.x, as an iterator in Python 3.x).""" return list(self.itervalues()) def items( self ): """Returns all named result key-values (as a list of tuples in Python 2.x, as an iterator in Python 3.x).""" return list(self.iteritems()) def haskeys( self ): """Since keys() returns an iterator, this method is helpful in bypassing code that looks for the existence of any defined results names.""" return bool(self.__tokdict) def pop( self, *args, **kwargs): """ Removes and returns item at specified index (default=C{last}). Supports both C{list} and C{dict} semantics for C{pop()}. If passed no argument or an integer argument, it will use C{list} semantics and pop tokens from the list of parsed tokens. If passed a non-integer argument (most likely a string), it will use C{dict} semantics and pop the corresponding value from any defined results names. A second default return value argument is supported, just as in C{dict.pop()}. Example:: def remove_first(tokens): tokens.pop(0) print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] print(OneOrMore(Word(nums)).addParseAction(remove_first).parseString("0 123 321")) # -> ['123', '321'] label = Word(alphas) patt = label("LABEL") + OneOrMore(Word(nums)) print(patt.parseString("AAB 123 321").dump()) # Use pop() in a parse action to remove named result (note that corresponding value is not # removed from list form of results) def remove_LABEL(tokens): tokens.pop("LABEL") return tokens patt.addParseAction(remove_LABEL) print(patt.parseString("AAB 123 321").dump()) prints:: ['AAB', '123', '321'] - LABEL: AAB ['AAB', '123', '321'] """ if not args: args = [-1] for k,v in kwargs.items(): if k == 'default': args = (args[0], v) else: raise TypeError("pop() got an unexpected keyword argument '%s'" % k) if (isinstance(args[0], int) or len(args) == 1 or args[0] in self): index = args[0] ret = self[index] del self[index] return ret else: defaultvalue = args[1] return defaultvalue def get(self, key, defaultValue=None): """ Returns named result matching the given key, or if there is no such name, then returns the given C{defaultValue} or C{None} if no C{defaultValue} is specified. Similar to C{dict.get()}. 
Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString("1999/12/31") print(result.get("year")) # -> '1999' print(result.get("hour", "not specified")) # -> 'not specified' print(result.get("hour")) # -> None """ if key in self: return self[key] else: return defaultValue def insert( self, index, insStr ): """ Inserts new element at location index in the list of parsed tokens. Similar to C{list.insert()}. Example:: print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] # use a parse action to insert the parse location in the front of the parsed results def insert_locn(locn, tokens): tokens.insert(0, locn) print(OneOrMore(Word(nums)).addParseAction(insert_locn).parseString("0 123 321")) # -> [0, '0', '123', '321'] """ self.__toklist.insert(index, insStr) # fixup indices in token dictionary for name,occurrences in self.__tokdict.items(): for k, (value, position) in enumerate(occurrences): occurrences[k] = _ParseResultsWithOffset(value, position + (position > index)) def append( self, item ): """ Add single element to end of ParseResults list of elements. Example:: print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] # use a parse action to compute the sum of the parsed integers, and add it to the end def append_sum(tokens): tokens.append(sum(map(int, tokens))) print(OneOrMore(Word(nums)).addParseAction(append_sum).parseString("0 123 321")) # -> ['0', '123', '321', 444] """ self.__toklist.append(item) def extend( self, itemseq ): """ Add sequence of elements to end of ParseResults list of elements. Example:: patt = OneOrMore(Word(alphas)) # use a parse action to append the reverse of the matched strings, to make a palindrome def make_palindrome(tokens): tokens.extend(reversed([t[::-1] for t in tokens])) return ''.join(tokens) print(patt.addParseAction(make_palindrome).parseString("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' """ if isinstance(itemseq, ParseResults): self += itemseq else: self.__toklist.extend(itemseq) def clear( self ): """ Clear all elements and results names. 
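
        Example (editor's illustration)::

            result = OneOrMore(Word(nums)).parseString("1 2 3")
            result.clear()
            print(result)   # -> []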
""" del self.__toklist[:] self.__tokdict.clear() def __getattr__( self, name ): try: return self[name] except KeyError: return "" if name in self.__tokdict: if name not in self.__accumNames: return self.__tokdict[name][-1][0] else: return ParseResults([ v[0] for v in self.__tokdict[name] ]) else: return "" def __add__( self, other ): ret = self.copy() ret += other return ret def __iadd__( self, other ): if other.__tokdict: offset = len(self.__toklist) addoffset = lambda a: offset if a<0 else a+offset otheritems = other.__tokdict.items() otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) ) for (k,vlist) in otheritems for v in vlist] for k,v in otherdictitems: self[k] = v if isinstance(v[0],ParseResults): v[0].__parent = wkref(self) self.__toklist += other.__toklist self.__accumNames.update( other.__accumNames ) return self def __radd__(self, other): if isinstance(other,int) and other == 0: # useful for merging many ParseResults using sum() builtin return self.copy() else: # this may raise a TypeError - so be it return other + self def __repr__( self ): return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) ) def __str__( self ): return '[' + ', '.join(_ustr(i) if isinstance(i, ParseResults) else repr(i) for i in self.__toklist) + ']' def _asStringList( self, sep='' ): out = [] for item in self.__toklist: if out and sep: out.append(sep) if isinstance( item, ParseResults ): out += item._asStringList() else: out.append( _ustr(item) ) return out def asList( self ): """ Returns the parse results as a nested list of matching tokens, all converted to strings. Example:: patt = OneOrMore(Word(alphas)) result = patt.parseString("sldkj lsdkj sldkj") # even though the result prints in string-like form, it is actually a pyparsing ParseResults print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj'] # Use asList() to create an actual list result_list = result.asList() print(type(result_list), result_list) # -> <class 'list'> ['sldkj', 'lsdkj', 'sldkj'] """ return [res.asList() if isinstance(res,ParseResults) else res for res in self.__toklist] def asDict( self ): """ Returns the named parse results as a nested dictionary. Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString('12/31/1999') print(type(result), repr(result)) # -> <class 'pyparsing.ParseResults'> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) result_dict = result.asDict() print(type(result_dict), repr(result_dict)) # -> <class 'dict'> {'day': '1999', 'year': '12', 'month': '31'} # even though a ParseResults supports dict-like access, sometime you just need to have a dict import json print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable print(json.dumps(result.asDict())) # -> {"month": "31", "day": "1999", "year": "12"} """ if PY_3: item_fn = self.items else: item_fn = self.iteritems def toItem(obj): if isinstance(obj, ParseResults): if obj.haskeys(): return obj.asDict() else: return [toItem(v) for v in obj] else: return obj return dict((k,toItem(v)) for k,v in item_fn()) def copy( self ): """ Returns a new copy of a C{ParseResults} object. 
""" ret = ParseResults( self.__toklist ) ret.__tokdict = self.__tokdict.copy() ret.__parent = self.__parent ret.__accumNames.update( self.__accumNames ) ret.__name = self.__name return ret def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ): """ (Deprecated) Returns the parse results as XML. Tags are created for tokens and lists that have defined results names. """ nl = "\n" out = [] namedItems = dict((v[1],k) for (k,vlist) in self.__tokdict.items() for v in vlist) nextLevelIndent = indent + " " # collapse out indents if formatting is not desired if not formatted: indent = "" nextLevelIndent = "" nl = "" selfTag = None if doctag is not None: selfTag = doctag else: if self.__name: selfTag = self.__name if not selfTag: if namedItemsOnly: return "" else: selfTag = "ITEM" out += [ nl, indent, "<", selfTag, ">" ] for i,res in enumerate(self.__toklist): if isinstance(res,ParseResults): if i in namedItems: out += [ res.asXML(namedItems[i], namedItemsOnly and doctag is None, nextLevelIndent, formatted)] else: out += [ res.asXML(None, namedItemsOnly and doctag is None, nextLevelIndent, formatted)] else: # individual token, see if there is a name for it resTag = None if i in namedItems: resTag = namedItems[i] if not resTag: if namedItemsOnly: continue else: resTag = "ITEM" xmlBodyText = _xml_escape(_ustr(res)) out += [ nl, nextLevelIndent, "<", resTag, ">", xmlBodyText, "</", resTag, ">" ] out += [ nl, indent, "</", selfTag, ">" ] return "".join(out) def __lookup(self,sub): for k,vlist in self.__tokdict.items(): for v,loc in vlist: if sub is v: return k return None def getName(self): r""" Returns the results name for this token expression. Useful when several different expressions might match at a particular location. Example:: integer = Word(nums) ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") house_number_expr = Suppress('#') + Word(nums, alphanums) user_data = (Group(house_number_expr)("house_number") | Group(ssn_expr)("ssn") | Group(integer)("age")) user_info = OneOrMore(user_data) result = user_info.parseString("22 111-22-3333 #221B") for item in result: print(item.getName(), ':', item[0]) prints:: age : 22 ssn : 111-22-3333 house_number : 221B """ if self.__name: return self.__name elif self.__parent: par = self.__parent() if par: return par.__lookup(self) else: return None elif (len(self) == 1 and len(self.__tokdict) == 1 and next(iter(self.__tokdict.values()))[0][1] in (0,-1)): return next(iter(self.__tokdict.keys())) else: return None def dump(self, indent='', depth=0, full=True): """ Diagnostic method for listing out the contents of a C{ParseResults}. Accepts an optional C{indent} argument so that this string can be embedded in a nested display of other data. 
Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString('12/31/1999') print(result.dump()) prints:: ['12', '/', '31', '/', '1999'] - day: 1999 - month: 31 - year: 12 """ out = [] NL = '\n' out.append( indent+_ustr(self.asList()) ) if full: if self.haskeys(): items = sorted((str(k), v) for k,v in self.items()) for k,v in items: if out: out.append(NL) out.append( "%s%s- %s: " % (indent,(' '*depth), k) ) if isinstance(v,ParseResults): if v: out.append( v.dump(indent,depth+1) ) else: out.append(_ustr(v)) else: out.append(repr(v)) elif any(isinstance(vv,ParseResults) for vv in self): v = self for i,vv in enumerate(v): if isinstance(vv,ParseResults): out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),vv.dump(indent,depth+1) )) else: out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),_ustr(vv))) return "".join(out) def pprint(self, *args, **kwargs): """ Pretty-printer for parsed results as a list, using the C{pprint} module. Accepts additional positional or keyword args as defined for the C{pprint.pprint} method. (U{http://docs.python.org/3/library/pprint.html#pprint.pprint}) Example:: ident = Word(alphas, alphanums) num = Word(nums) func = Forward() term = ident | num | Group('(' + func + ')') func <<= ident + Group(Optional(delimitedList(term))) result = func.parseString("fna a,b,(fnb c,d,200),100") result.pprint(width=40) prints:: ['fna', ['a', 'b', ['(', 'fnb', ['c', 'd', '200'], ')'], '100']] """ pprint.pprint(self.asList(), *args, **kwargs) # add support for pickle protocol def __getstate__(self): return ( self.__toklist, ( self.__tokdict.copy(), self.__parent is not None and self.__parent() or None, self.__accumNames, self.__name ) ) def __setstate__(self,state): self.__toklist = state[0] (self.__tokdict, par, inAccumNames, self.__name) = state[1] self.__accumNames = {} self.__accumNames.update(inAccumNames) if par is not None: self.__parent = wkref(par) else: self.__parent = None def __getnewargs__(self): return self.__toklist, self.__name, self.__asList, self.__modal def __dir__(self): return (dir(type(self)) + list(self.keys())) MutableMapping.register(ParseResults) def col (loc,strg): """Returns current column within a string, counting newlines as line separators. The first column is number 1. Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information on parsing strings containing C{<TAB>}s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string. """ s = strg return 1 if 0<loc<len(s) and s[loc-1] == '\n' else loc - s.rfind("\n", 0, loc) def lineno(loc,strg): """Returns current line number within a string, counting newlines as line separators. The first line is number 1. Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information on parsing strings containing C{<TAB>}s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string. """ return strg.count("\n",0,loc) + 1 def line( loc, strg ): """Returns the line of text containing loc within a string, counting newlines as line separators. 
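
    Example (editor's illustration)::

        line(5, "abc\ndef")   # -> 'def'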
""" lastCR = strg.rfind("\n", 0, loc) nextCR = strg.find("\n", loc) if nextCR >= 0: return strg[lastCR+1:nextCR] else: return strg[lastCR+1:] def _defaultStartDebugAction( instring, loc, expr ): print (("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))) def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ): print ("Matched " + _ustr(expr) + " -> " + str(toks.asList())) def _defaultExceptionDebugAction( instring, loc, expr, exc ): print ("Exception raised:" + _ustr(exc)) def nullDebugAction(*args): """'Do-nothing' debug action, to suppress debugging output during parsing.""" pass # Only works on Python 3.x - nonlocal is toxic to Python 2 installs #~ 'decorator to trim function calls to match the arity of the target' #~ def _trim_arity(func, maxargs=3): #~ if func in singleArgBuiltins: #~ return lambda s,l,t: func(t) #~ limit = 0 #~ foundArity = False #~ def wrapper(*args): #~ nonlocal limit,foundArity #~ while 1: #~ try: #~ ret = func(*args[limit:]) #~ foundArity = True #~ return ret #~ except TypeError: #~ if limit == maxargs or foundArity: #~ raise #~ limit += 1 #~ continue #~ return wrapper # this version is Python 2.x-3.x cross-compatible 'decorator to trim function calls to match the arity of the target' def _trim_arity(func, maxargs=2): if func in singleArgBuiltins: return lambda s,l,t: func(t) limit = [0] foundArity = [False] # traceback return data structure changed in Py3.5 - normalize back to plain tuples if system_version[:2] >= (3,5): def extract_stack(limit=0): # special handling for Python 3.5.0 - extra deep call stack by 1 offset = -3 if system_version == (3,5,0) else -2 frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset] return [frame_summary[:2]] def extract_tb(tb, limit=0): frames = traceback.extract_tb(tb, limit=limit) frame_summary = frames[-1] return [frame_summary[:2]] else: extract_stack = traceback.extract_stack extract_tb = traceback.extract_tb # synthesize what would be returned by traceback.extract_stack at the call to # user's parse action 'func', so that we don't incur call penalty at parse time LINE_DIFF = 6 # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!! 
this_line = extract_stack(limit=2)[-1] pa_call_line_synth = (this_line[0], this_line[1]+LINE_DIFF) def wrapper(*args): while 1: try: ret = func(*args[limit[0]:]) foundArity[0] = True return ret except TypeError: # re-raise TypeErrors if they did not come from our arity testing if foundArity[0]: raise else: try: tb = sys.exc_info()[-1] if not extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth: raise finally: del tb if limit[0] <= maxargs: limit[0] += 1 continue raise # copy func name to wrapper for sensible debug output func_name = "<parse action>" try: func_name = getattr(func, '__name__', getattr(func, '__class__').__name__) except Exception: func_name = str(func) wrapper.__name__ = func_name return wrapper class ParserElement(object): """Abstract base level parser element class.""" DEFAULT_WHITE_CHARS = " \n\t\r" verbose_stacktrace = False @staticmethod def setDefaultWhitespaceChars( chars ): r""" Overrides the default whitespace chars Example:: # default whitespace chars are space, <TAB> and newline OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl'] # change to just treat newline as significant ParserElement.setDefaultWhitespaceChars(" \t") OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def'] """ ParserElement.DEFAULT_WHITE_CHARS = chars @staticmethod def inlineLiteralsUsing(cls): """ Set class to be used for inclusion of string literals into a parser. Example:: # default literal class used is Literal integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") date_str.parseString("1999/12/31") # -> ['1999', '/', '12', '/', '31'] # change to Suppress ParserElement.inlineLiteralsUsing(Suppress) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") date_str.parseString("1999/12/31") # -> ['1999', '12', '31'] """ ParserElement._literalStringClass = cls def __init__( self, savelist=False ): self.parseAction = list() self.failAction = None #~ self.name = "<unknown>" # don't define self.name, let subclasses try/except upcall self.strRepr = None self.resultsName = None self.saveAsList = savelist self.skipWhitespace = True self.whiteChars = ParserElement.DEFAULT_WHITE_CHARS self.copyDefaultWhiteChars = True self.mayReturnEmpty = False # used when checking for left-recursion self.keepTabs = False self.ignoreExprs = list() self.debug = False self.streamlined = False self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index self.errmsg = "" self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all) self.debugActions = ( None, None, None ) #custom debug actions self.re = None self.callPreparse = True # used to avoid redundant calls to preParse self.callDuringTry = False def copy( self ): """ Make a copy of this C{ParserElement}. Useful for defining different parse actions for the same parsing pattern, using copies of the original parse element. 
Example:: integer = Word(nums).setParseAction(lambda toks: int(toks[0])) integerK = integer.copy().addParseAction(lambda toks: toks[0]*1024) + Suppress("K") integerM = integer.copy().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") print(OneOrMore(integerK | integerM | integer).parseString("5K 100 640K 256M")) prints:: [5120, 100, 655360, 268435456] Equivalent form of C{expr.copy()} is just C{expr()}:: integerM = integer().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") """ cpy = copy.copy( self ) cpy.parseAction = self.parseAction[:] cpy.ignoreExprs = self.ignoreExprs[:] if self.copyDefaultWhiteChars: cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS return cpy def setName( self, name ): """ Define name for this expression, makes debugging and exception messages clearer. Example:: Word(nums).parseString("ABC") # -> Exception: Expected W:(0123...) (at char 0), (line:1, col:1) Word(nums).setName("integer").parseString("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1) """ self.name = name self.errmsg = "Expected " + self.name if hasattr(self,"exception"): self.exception.msg = self.errmsg return self def setResultsName( self, name, listAllMatches=False ): """ Define name for referencing matching tokens as a nested attribute of the returned parse results. NOTE: this returns a *copy* of the original C{ParserElement} object; this is so that the client can define a basic element, such as an integer, and reference it in multiple places with different names. You can also set results names using the abbreviated syntax, C{expr("name")} in place of C{expr.setResultsName("name")} - see L{I{__call__}<__call__>}. Example:: date_str = (integer.setResultsName("year") + '/' + integer.setResultsName("month") + '/' + integer.setResultsName("day")) # equivalent form: date_str = integer("year") + '/' + integer("month") + '/' + integer("day") """ newself = self.copy() if name.endswith("*"): name = name[:-1] listAllMatches=True newself.resultsName = name newself.modalResults = not listAllMatches return newself def setBreak(self,breakFlag = True): """Method to invoke the Python pdb debugger when this element is about to be parsed. Set C{breakFlag} to True to enable, False to disable. """ if breakFlag: _parseMethod = self._parse def breaker(instring, loc, doActions=True, callPreParse=True): import pdb pdb.set_trace() return _parseMethod( instring, loc, doActions, callPreParse ) breaker._originalParseMethod = _parseMethod self._parse = breaker else: if hasattr(self._parse,"_originalParseMethod"): self._parse = self._parse._originalParseMethod return self def setParseAction( self, *fns, **kwargs ): """ Define one or more actions to perform when successfully matching parse element definition. Parse action fn is a callable method with 0-3 arguments, called as C{fn(s,loc,toks)}, C{fn(loc,toks)}, C{fn(toks)}, or just C{fn()}, where: - s = the original string being parsed (see note below) - loc = the location of the matching substring - toks = a list of the matched tokens, packaged as a C{L{ParseResults}} object If the functions in fns modify the tokens, they can return them as the return value from fn, and the modified list of tokens will replace the original. Otherwise, fn does not need to return any value. Optional keyword arguments: - callDuringTry = (default=C{False}) indicate if parse action should be run during lookaheads and alternate testing Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. 
    def setParseAction( self, *fns, **kwargs ):
        """
        Define one or more actions to perform when successfully matching parse element definition.
        Parse action fn is a callable method with 0-3 arguments, called as C{fn(s,loc,toks)},
        C{fn(loc,toks)}, C{fn(toks)}, or just C{fn()}, where:
         - s   = the original string being parsed (see note below)
         - loc = the location of the matching substring
         - toks = a list of the matched tokens, packaged as a C{L{ParseResults}} object
        If the functions in fns modify the tokens, they can return them as the return
        value from fn, and the modified list of tokens will replace the original.
        Otherwise, fn does not need to return any value.

        Optional keyword arguments:
         - callDuringTry = (default=C{False}) indicate if parse action should be run during lookaheads and alternate testing

        Note: the default parsing behavior is to expand tabs in the input string
        before starting the parsing process.  See L{I{parseString}<parseString>} for more information
        on parsing strings containing C{<TAB>}s, and suggested methods to maintain a
        consistent view of the parsed string, the parse location, and line and column
        positions within the parsed string.

        Example::
            integer = Word(nums)
            date_str = integer + '/' + integer + '/' + integer

            date_str.parseString("1999/12/31")  # -> ['1999', '/', '12', '/', '31']

            # use parse action to convert to ints at parse time
            integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
            date_str = integer + '/' + integer + '/' + integer

            # note that integer fields are now ints, not strings
            date_str.parseString("1999/12/31")  # -> [1999, '/', 12, '/', 31]
        """
        self.parseAction = list(map(_trim_arity, list(fns)))
        self.callDuringTry = kwargs.get("callDuringTry", False)
        return self

    def addParseAction( self, *fns, **kwargs ):
        """
        Add one or more parse actions to expression's list of parse actions. See L{I{setParseAction}<setParseAction>}.

        See examples in L{I{copy}<copy>}.
        """
        self.parseAction += list(map(_trim_arity, list(fns)))
        self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False)
        return self

    def addCondition(self, *fns, **kwargs):
        """Add a boolean predicate function to expression's list of parse actions. See
        L{I{setParseAction}<setParseAction>} for function call signatures. Unlike C{setParseAction},
        functions passed to C{addCondition} need to return boolean success/fail of the condition.

        Optional keyword arguments:
         - message = define a custom message to be used in the raised exception
         - fatal   = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise ParseException

        Example::
            integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
            year_int = integer.copy()
            year_int.addCondition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later")
            date_str = year_int + '/' + integer + '/' + integer

            result = date_str.parseString("1999/12/31")  # -> Exception: Only support years 2000 and later (at char 0), (line:1, col:1)
        """
        msg = kwargs.get("message", "failed user-defined condition")
        exc_type = ParseFatalException if kwargs.get("fatal", False) else ParseException
        for fn in fns:
            # bind fn as a default argument so that each condition closes over
            # its own function, rather than the last one bound in the loop
            def pa(s,l,t,fn=fn):
                if not bool(_trim_arity(fn)(s,l,t)):
                    raise exc_type(s,l,msg)
            self.parseAction.append(pa)
        self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False)
        return self
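    # Illustrative sketch (added commentary only): parse actions and conditions
    # compose - the action converts matched text to an int, then the condition
    # validates the value.  Passing fatal=True raises ParseFatalException and
    # aborts all alternatives instead of allowing backtracking.
    #
    #   port = Word(nums).setParseAction(lambda t: int(t[0]))
    #   port.addCondition(lambda t: 0 < t[0] < 65536,
    #                     message="port must be in 1..65535", fatal=True)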
    def setFailAction( self, fn ):
        """Define action to perform if parsing fails at this expression.
           Fail action fn is a callable function that takes the arguments
           C{fn(s,loc,expr,err)} where:
            - s = string being parsed
            - loc = location where expression match was attempted and failed
            - expr = the parse expression that failed
            - err = the exception thrown
           The function returns no value.  It may throw C{L{ParseFatalException}}
           if it is desired to stop parsing immediately."""
        self.failAction = fn
        return self

    def _skipIgnorables( self, instring, loc ):
        exprsFound = True
        while exprsFound:
            exprsFound = False
            for e in self.ignoreExprs:
                try:
                    while 1:
                        loc,dummy = e._parse( instring, loc )
                        exprsFound = True
                except ParseException:
                    pass
        return loc

    def preParse( self, instring, loc ):
        if self.ignoreExprs:
            loc = self._skipIgnorables( instring, loc )

        if self.skipWhitespace:
            wt = self.whiteChars
            instrlen = len(instring)
            while loc < instrlen and instring[loc] in wt:
                loc += 1

        return loc

    def parseImpl( self, instring, loc, doActions=True ):
        return loc, []

    def postParse( self, instring, loc, tokenlist ):
        return tokenlist

    #~ @profile
    def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ):
        debugging = ( self.debug ) #and doActions )

        if debugging or self.failAction:
            #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))
            if (self.debugActions[0] ):
                self.debugActions[0]( instring, loc, self )
            if callPreParse and self.callPreparse:
                preloc = self.preParse( instring, loc )
            else:
                preloc = loc
            tokensStart = preloc
            try:
                try:
                    loc,tokens = self.parseImpl( instring, preloc, doActions )
                except IndexError:
                    raise ParseException( instring, len(instring), self.errmsg, self )
            except ParseBaseException as err:
                #~ print ("Exception raised:", err)
                if self.debugActions[2]:
                    self.debugActions[2]( instring, tokensStart, self, err )
                if self.failAction:
                    self.failAction( instring, tokensStart, self, err )
                raise
        else:
            if callPreParse and self.callPreparse:
                preloc = self.preParse( instring, loc )
            else:
                preloc = loc
            tokensStart = preloc
            if self.mayIndexError or preloc >= len(instring):
                try:
                    loc,tokens = self.parseImpl( instring, preloc, doActions )
                except IndexError:
                    raise ParseException( instring, len(instring), self.errmsg, self )
            else:
                loc,tokens = self.parseImpl( instring, preloc, doActions )

        tokens = self.postParse( instring, loc, tokens )

        retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults )
        if self.parseAction and (doActions or self.callDuringTry):
            if debugging:
                try:
                    for fn in self.parseAction:
                        tokens = fn( instring, tokensStart, retTokens )
                        if tokens is not None:
                            retTokens = ParseResults( tokens,
                                                      self.resultsName,
                                                      asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),
                                                      modal=self.modalResults )
                except ParseBaseException as err:
                    #~ print "Exception raised in user parse action:", err
                    if (self.debugActions[2] ):
                        self.debugActions[2]( instring, tokensStart, self, err )
                    raise
            else:
                for fn in self.parseAction:
                    tokens = fn( instring, tokensStart, retTokens )
                    if tokens is not None:
                        retTokens = ParseResults( tokens,
                                                  self.resultsName,
                                                  asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),
                                                  modal=self.modalResults )

        if debugging:
            #~ print ("Matched",self,"->",retTokens.asList())
            if (self.debugActions[1] ):
                self.debugActions[1]( instring, tokensStart, loc, self, retTokens )

        return loc, retTokens

    def tryParse( self, instring, loc ):
        try:
            return self._parse( instring, loc, doActions=False )[0]
        except ParseFatalException:
            raise ParseException( instring, loc, self.errmsg, self)

    def canParseNext(self, instring, loc):
        try:
            self.tryParse(instring, loc)
        except (ParseException, IndexError):
            return False
        else:
            return True
    class _UnboundedCache(object):
        def __init__(self):
            cache = {}
            self.not_in_cache = not_in_cache = object()

            def get(self, key):
                return cache.get(key, not_in_cache)

            def set(self, key, value):
                cache[key] = value

            def clear(self):
                cache.clear()

            def cache_len(self):
                return len(cache)

            self.get = types.MethodType(get, self)
            self.set = types.MethodType(set, self)
            self.clear = types.MethodType(clear, self)
            self.__len__ = types.MethodType(cache_len, self)

    if _OrderedDict is not None:
        class _FifoCache(object):
            def __init__(self, size):
                self.not_in_cache = not_in_cache = object()

                cache = _OrderedDict()

                def get(self, key):
                    return cache.get(key, not_in_cache)

                def set(self, key, value):
                    cache[key] = value
                    while len(cache) > size:
                        try:
                            cache.popitem(False)
                        except KeyError:
                            pass

                def clear(self):
                    cache.clear()

                def cache_len(self):
                    return len(cache)

                self.get = types.MethodType(get, self)
                self.set = types.MethodType(set, self)
                self.clear = types.MethodType(clear, self)
                self.__len__ = types.MethodType(cache_len, self)

    else:
        class _FifoCache(object):
            def __init__(self, size):
                self.not_in_cache = not_in_cache = object()

                cache = {}
                key_fifo = collections.deque([], size)

                def get(self, key):
                    return cache.get(key, not_in_cache)

                def set(self, key, value):
                    cache[key] = value
                    while len(key_fifo) > size:
                        cache.pop(key_fifo.popleft(), None)
                    key_fifo.append(key)

                def clear(self):
                    cache.clear()
                    key_fifo.clear()

                def cache_len(self):
                    return len(cache)

                self.get = types.MethodType(get, self)
                self.set = types.MethodType(set, self)
                self.clear = types.MethodType(clear, self)
                self.__len__ = types.MethodType(cache_len, self)

    # argument cache for optimizing repeated calls when backtracking through recursive expressions
    packrat_cache = {} # this is set later by enablePackrat(); this is here so that resetCache() doesn't fail
    packrat_cache_lock = RLock()
    packrat_cache_stats = [0, 0]

    # this method gets repeatedly called during backtracking with the same arguments -
    # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression
    def _parseCache( self, instring, loc, doActions=True, callPreParse=True ):
        HIT, MISS = 0, 1
        lookup = (self, instring, loc, callPreParse, doActions)
        with ParserElement.packrat_cache_lock:
            cache = ParserElement.packrat_cache
            value = cache.get(lookup)
            if value is cache.not_in_cache:
                ParserElement.packrat_cache_stats[MISS] += 1
                try:
                    value = self._parseNoCache(instring, loc, doActions, callPreParse)
                except ParseBaseException as pe:
                    # cache a copy of the exception, without the traceback
                    cache.set(lookup, pe.__class__(*pe.args))
                    raise
                else:
                    cache.set(lookup, (value[0], value[1].copy()))
                    return value
            else:
                ParserElement.packrat_cache_stats[HIT] += 1
                if isinstance(value, Exception):
                    raise value
                return (value[0], value[1].copy())

    _parse = _parseNoCache

    @staticmethod
    def resetCache():
        ParserElement.packrat_cache.clear()
        ParserElement.packrat_cache_stats[:] = [0] * len(ParserElement.packrat_cache_stats)

    _packratEnabled = False
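    # Illustrative sketch (added commentary only): the packrat cache maps
    # (expression, string, location, flags) keys to previously computed parse
    # results.  With the OrderedDict-backed _FifoCache, the oldest entry is
    # evicted once the size limit is exceeded:
    #
    #   c = ParserElement._FifoCache(2)
    #   c.set('a', 1); c.set('b', 2); c.set('c', 3)
    #   c.get('a') is c.not_in_cache   # -> True ('a' was evicted first)
    #   c.get('c')                     # -> 3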
    @staticmethod
    def enablePackrat(cache_size_limit=128):
        """Enables "packrat" parsing, which adds memoizing to the parsing logic.
           Repeated parse attempts at the same string location (which happens
           often in many complex grammars) can immediately return a cached value,
           instead of re-executing parsing/validating code.  Memoizing is done of
           both valid results and parsing exceptions.

           Parameters:
            - cache_size_limit - (default=C{128}) - if an integer value is provided
              will limit the size of the packrat cache; if None is passed, then
              the cache size will be unbounded; if 0 is passed, the cache will
              be effectively disabled.

           This speedup may break existing programs that use parse actions that
           have side-effects.  For this reason, packrat parsing is disabled when
           you first import pyparsing.  To activate the packrat feature, your
           program must call the class method C{ParserElement.enablePackrat()}.  If
           your program uses C{psyco} to "compile as you go", you must call
           C{enablePackrat} before calling C{psyco.full()}.  If you do not do this,
           Python will crash.  For best results, call C{enablePackrat()} immediately
           after importing pyparsing.

           Example::
               import pyparsing
               pyparsing.ParserElement.enablePackrat()
        """
        if not ParserElement._packratEnabled:
            ParserElement._packratEnabled = True
            if cache_size_limit is None:
                ParserElement.packrat_cache = ParserElement._UnboundedCache()
            else:
                ParserElement.packrat_cache = ParserElement._FifoCache(cache_size_limit)
            ParserElement._parse = ParserElement._parseCache

    def parseString( self, instring, parseAll=False ):
        """
        Execute the parse expression with the given string.
        This is the main interface to the client code, once the complete
        expression has been built.

        If you want the grammar to require that the entire input string be
        successfully parsed, then set C{parseAll} to True (equivalent to ending
        the grammar with C{L{StringEnd()}}).

        Note: C{parseString} implicitly calls C{expandtabs()} on the input string,
        in order to report proper column numbers in parse actions.
        If the input string contains tabs and
        the grammar uses parse actions that use the C{loc} argument to index into the
        string being parsed, you can ensure you have a consistent view of the input
        string by:
         - calling C{parseWithTabs} on your grammar before calling C{parseString}
           (see L{I{parseWithTabs}<parseWithTabs>})
         - define your parse action using the full C{(s,loc,toks)} signature, and
           reference the input string using the parse action's C{s} argument
         - explicitly expand the tabs in your input string before calling
           C{parseString}

        Example::
            Word('a').parseString('aaaaabaaa')  # -> ['aaaaa']
            Word('a').parseString('aaaaabaaa', parseAll=True)  # -> Exception: Expected end of text
        """
        ParserElement.resetCache()
        if not self.streamlined:
            self.streamline()
            #~ self.saveAsList = True
        for e in self.ignoreExprs:
            e.streamline()
        if not self.keepTabs:
            instring = instring.expandtabs()
        try:
            loc, tokens = self._parse( instring, 0 )
            if parseAll:
                loc = self.preParse( instring, loc )
                se = Empty() + StringEnd()
                se._parse( instring, loc )
        except ParseBaseException as exc:
            if ParserElement.verbose_stacktrace:
                raise
            else:
                # catch and re-raise exception from here, clears out pyparsing internal stack trace
                raise exc
        else:
            return tokens
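    # Illustrative sketch (added commentary only): enabling packrat parsing
    # before a heavily backtracking grammar runs, then inspecting the
    # hit/miss counters kept in packrat_cache_stats ([hits, misses]);
    # grammar and big_input below are placeholders.
    #
    #   ParserElement.enablePackrat()
    #   grammar.parseString(big_input)
    #   hits, misses = ParserElement.packrat_cache_stats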
    def scanString( self, instring, maxMatches=_MAX_INT, overlap=False ):
        """
        Scan the input string for expression matches.  Each match will return the
        matching tokens, start location, and end location.  May be called with optional
        C{maxMatches} argument, to clip scanning after 'n' matches are found.  If
        C{overlap} is specified, then overlapping matches will be reported.

        Note that the start and end locations are reported relative to the string
        being parsed.  See L{I{parseString}<parseString>} for more information on parsing
        strings with embedded tabs.

        Example::
            source = "sldjf123lsdjjkf345sldkjf879lkjsfd987"
            print(source)
            for tokens,start,end in Word(alphas).scanString(source):
                print(' '*start + '^'*(end-start))
                print(' '*start + tokens[0])

        prints::

            sldjf123lsdjjkf345sldkjf879lkjsfd987
            ^^^^^
            sldjf
                    ^^^^^^^
                    lsdjjkf
                              ^^^^^^
                              sldkjf
                                       ^^^^^^
                                       lkjsfd
        """
        if not self.streamlined:
            self.streamline()
        for e in self.ignoreExprs:
            e.streamline()

        if not self.keepTabs:
            instring = _ustr(instring).expandtabs()
        instrlen = len(instring)
        loc = 0
        preparseFn = self.preParse
        parseFn = self._parse
        ParserElement.resetCache()
        matches = 0
        try:
            while loc <= instrlen and matches < maxMatches:
                try:
                    preloc = preparseFn( instring, loc )
                    nextLoc,tokens = parseFn( instring, preloc, callPreParse=False )
                except ParseException:
                    loc = preloc+1
                else:
                    if nextLoc > loc:
                        matches += 1
                        yield tokens, preloc, nextLoc
                        if overlap:
                            nextloc = preparseFn( instring, loc )
                            if nextloc > loc:
                                loc = nextLoc
                            else:
                                loc += 1
                        else:
                            loc = nextLoc
                    else:
                        loc = preloc+1
        except ParseBaseException as exc:
            if ParserElement.verbose_stacktrace:
                raise
            else:
                # catch and re-raise exception from here, clears out pyparsing internal stack trace
                raise exc

    def transformString( self, instring ):
        """
        Extension to C{L{scanString}}, to modify matching text with modified tokens that may
        be returned from a parse action.  To use C{transformString}, define a grammar and
        attach a parse action to it that modifies the returned token list.
        Invoking C{transformString()} on a target string will then scan for matches,
        and replace the matched text patterns according to the logic in the parse
        action.  C{transformString()} returns the resulting transformed string.

        Example::
            wd = Word(alphas)
            wd.setParseAction(lambda toks: toks[0].title())

            print(wd.transformString("now is the winter of our discontent made glorious summer by this sun of york."))
        Prints::
            Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York.
        """
        out = []
        lastE = 0
        # force preservation of <TAB>s, to minimize unwanted transformation of string, and to
        # keep string locs straight between transformString and scanString
        self.keepTabs = True
        try:
            for t,s,e in self.scanString( instring ):
                out.append( instring[lastE:s] )
                if t:
                    if isinstance(t,ParseResults):
                        out += t.asList()
                    elif isinstance(t,list):
                        out += t
                    else:
                        out.append(t)
                lastE = e
            out.append(instring[lastE:])
            out = [o for o in out if o]
            return "".join(map(_ustr,_flatten(out)))
        except ParseBaseException as exc:
            if ParserElement.verbose_stacktrace:
                raise
            else:
                # catch and re-raise exception from here, clears out pyparsing internal stack trace
                raise exc
    def searchString( self, instring, maxMatches=_MAX_INT ):
        """
        Another extension to C{L{scanString}}, simplifying the access to the tokens found
        to match the given parse expression.  May be called with optional
        C{maxMatches} argument, to clip searching after 'n' matches are found.

        Example::
            # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters
            cap_word = Word(alphas.upper(), alphas.lower())

            print(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity"))

            # the sum() builtin can be used to merge results into a single ParseResults object
            print(sum(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity")))
        prints::
            [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']]
            ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity']
        """
        try:
            return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ])
        except ParseBaseException as exc:
            if ParserElement.verbose_stacktrace:
                raise
            else:
                # catch and re-raise exception from here, clears out pyparsing internal stack trace
                raise exc

    def split(self, instring, maxsplit=_MAX_INT, includeSeparators=False):
        """
        Generator method to split a string using the given expression as a separator.
        May be called with optional C{maxsplit} argument, to limit the number of splits;
        and the optional C{includeSeparators} argument (default=C{False}), if the separating
        matching text should be included in the split results.

        Example::
            punc = oneOf(list(".,;:/-!?"))
            print(list(punc.split("This, this?, this sentence, is badly punctuated!")))
        prints::
            ['This', ' this', '', ' this sentence', ' is badly punctuated', '']
        """
        splits = 0
        last = 0
        for t,s,e in self.scanString(instring, maxMatches=maxsplit):
            yield instring[last:s]
            if includeSeparators:
                yield t[0]
            last = e
        yield instring[last:]

    def __add__(self, other ):
        """
        Implementation of + operator - returns C{L{And}}. Adding strings to a ParserElement
        converts them to L{Literal}s by default.

        Example::
            greet = Word(alphas) + "," + Word(alphas) + "!"
            hello = "Hello, World!"
            print (hello, "->", greet.parseString(hello))
        Prints::
            Hello, World! -> ['Hello', ',', 'World', '!']
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return And( [ self, other ] )

    def __radd__(self, other ):
        """
        Implementation of + operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return other + self

    def __sub__(self, other):
        """
        Implementation of - operator, returns C{L{And}} with error stop
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return self + And._ErrorStop() + other

    def __rsub__(self, other ):
        """
        Implementation of - operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return other - self
    def __mul__(self,other):
        """
        Implementation of * operator, allows use of C{expr * 3} in place of
        C{expr + expr + expr}.  Expressions may also be multiplied by a 2-integer
        tuple, similar to C{{min,max}} multipliers in regular expressions.  Tuples
        may also include C{None} as in:
         - C{expr*(n,None)} or C{expr*(n,)} is equivalent
              to C{expr*n + L{ZeroOrMore}(expr)}
              (read as "at least n instances of C{expr}")
         - C{expr*(None,n)} is equivalent to C{expr*(0,n)}
              (read as "0 to n instances of C{expr}")
         - C{expr*(None,None)} is equivalent to C{L{ZeroOrMore}(expr)}
         - C{expr*(1,None)} is equivalent to C{L{OneOrMore}(expr)}

        Note that C{expr*(None,n)} does not raise an exception if
        more than n exprs exist in the input stream; that is,
        C{expr*(None,n)} does not enforce a maximum number of expr
        occurrences.  If this behavior is desired, then write
        C{expr*(None,n) + ~expr}
        """
        if isinstance(other,int):
            minElements, optElements = other,0
        elif isinstance(other,tuple):
            other = (other + (None, None))[:2]
            if other[0] is None:
                other = (0, other[1])
            if isinstance(other[0],int) and other[1] is None:
                if other[0] == 0:
                    return ZeroOrMore(self)
                if other[0] == 1:
                    return OneOrMore(self)
                else:
                    return self*other[0] + ZeroOrMore(self)
            elif isinstance(other[0],int) and isinstance(other[1],int):
                minElements, optElements = other
                optElements -= minElements
            else:
                raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects", type(other[0]),type(other[1]))
        else:
            raise TypeError("cannot multiply 'ParserElement' and '%s' objects", type(other))

        if minElements < 0:
            raise ValueError("cannot multiply ParserElement by negative value")
        if optElements < 0:
            raise ValueError("second tuple value must be greater or equal to first tuple value")
        if minElements == optElements == 0:
            raise ValueError("cannot multiply ParserElement by 0 or (0,0)")

        if (optElements):
            def makeOptionalList(n):
                if n>1:
                    return Optional(self + makeOptionalList(n-1))
                else:
                    return Optional(self)
            if minElements:
                if minElements == 1:
                    ret = self + makeOptionalList(optElements)
                else:
                    ret = And([self]*minElements) + makeOptionalList(optElements)
            else:
                ret = makeOptionalList(optElements)
        else:
            if minElements == 1:
                ret = self
            else:
                ret = And([self]*minElements)
        return ret

    def __rmul__(self, other):
        return self.__mul__(other)

    def __or__(self, other ):
        """
        Implementation of | operator - returns C{L{MatchFirst}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return MatchFirst( [ self, other ] )

    def __ror__(self, other ):
        """
        Implementation of | operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return other | self

    def __xor__(self, other ):
        """
        Implementation of ^ operator - returns C{L{Or}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return Or( [ self, other ] )
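    # Illustrative sketch (added commentary only): '|' builds a MatchFirst
    # (the first alternative that matches wins), while '^' builds an Or (the
    # longest match wins), so the two can give different results on the same
    # input; Combine is defined later in this module.
    #
    #   number = Word(nums)
    #   ip = Combine(number + ('.' + number)*3)
    #   (number | ip).parseString("10.1.2.3")  # -> ['10']        (first match)
    #   (number ^ ip).parseString("10.1.2.3")  # -> ['10.1.2.3']  (longest match)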
    def __rxor__(self, other ):
        """
        Implementation of ^ operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return other ^ self

    def __and__(self, other ):
        """
        Implementation of & operator - returns C{L{Each}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return Each( [ self, other ] )

    def __rand__(self, other ):
        """
        Implementation of & operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return other & self

    def __invert__( self ):
        """
        Implementation of ~ operator - returns C{L{NotAny}}
        """
        return NotAny( self )

    def __call__(self, name=None):
        """
        Shortcut for C{L{setResultsName}}, with C{listAllMatches=False}.

        If C{name} is given with a trailing C{'*'} character, then C{listAllMatches} will be
        passed as C{True}.

        If C{name} is omitted, same as calling C{L{copy}}.

        Example::
            # these are equivalent
            userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno")
            userdata = Word(alphas)("name") + Word(nums+"-")("socsecno")
        """
        if name is not None:
            return self.setResultsName(name)
        else:
            return self.copy()

    def suppress( self ):
        """
        Suppresses the output of this C{ParserElement}; useful to keep punctuation from
        cluttering up returned output.
        """
        return Suppress( self )

    def leaveWhitespace( self ):
        """
        Disables the skipping of whitespace before matching the characters in the
        C{ParserElement}'s defined pattern.  This is normally only used internally by
        the pyparsing module, but may be needed in some whitespace-sensitive grammars.
        """
        self.skipWhitespace = False
        return self

    def setWhitespaceChars( self, chars ):
        """
        Overrides the default whitespace chars
        """
        self.skipWhitespace = True
        self.whiteChars = chars
        self.copyDefaultWhiteChars = False
        return self

    def parseWithTabs( self ):
        """
        Overrides default behavior to expand C{<TAB>}s to spaces before parsing the input string.
        Must be called before C{parseString} when the input grammar contains elements that
        match C{<TAB>} characters.
        """
        self.keepTabs = True
        return self

    def ignore( self, other ):
        """
        Define expression to be ignored (e.g., comments) while doing pattern
        matching; may be called repeatedly, to define multiple comment or other
        ignorable patterns.

        Example::
            patt = OneOrMore(Word(alphas))
            patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj']

            patt.ignore(cStyleComment)
            patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj', 'lskjd']
        """
        if isinstance(other, basestring):
            other = Suppress(other)

        if isinstance( other, Suppress ):
            if other not in self.ignoreExprs:
                self.ignoreExprs.append(other)
        else:
            self.ignoreExprs.append( Suppress( other.copy() ) )
        return self

    def setDebugActions( self, startAction, successAction, exceptionAction ):
        """
        Enable display of debugging messages while doing pattern matching.
        """
        self.debugActions = (startAction or _defaultStartDebugAction,
                             successAction or _defaultSuccessDebugAction,
                             exceptionAction or _defaultExceptionDebugAction)
        self.debug = True
        return self
    def setDebug( self, flag=True ):
        """
        Enable display of debugging messages while doing pattern matching.
        Set C{flag} to True to enable, False to disable.

        Example::
            wd = Word(alphas).setName("alphaword")
            integer = Word(nums).setName("numword")
            term = wd | integer

            # turn on debugging for wd
            wd.setDebug()

            OneOrMore(term).parseString("abc 123 xyz 890")

        prints::
            Match alphaword at loc 0(1,1)
            Matched alphaword -> ['abc']
            Match alphaword at loc 3(1,4)
            Exception raised:Expected alphaword (at char 4), (line:1, col:5)
            Match alphaword at loc 7(1,8)
            Matched alphaword -> ['xyz']
            Match alphaword at loc 11(1,12)
            Exception raised:Expected alphaword (at char 12), (line:1, col:13)
            Match alphaword at loc 15(1,16)
            Exception raised:Expected alphaword (at char 15), (line:1, col:16)

        The output shown is that produced by the default debug actions - custom debug actions can be
        specified using L{setDebugActions}. Prior to attempting
        to match the C{wd} expression, the debugging message C{"Match <exprname> at loc <n>(<line>,<col>)"}
        is shown. Then if the parse succeeds, a C{"Matched"} message is shown, or an C{"Exception raised"}
        message is shown. Also note the use of L{setName} to assign a human-readable name to the expression,
        which makes debugging and exception messages easier to understand - for instance, the default
        name created for the C{Word} expression without calling C{setName} is C{"W:(ABCD...)"}.
        """
        if flag:
            self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction )
        else:
            self.debug = False
        return self

    def __str__( self ):
        return self.name

    def __repr__( self ):
        return _ustr(self)

    def streamline( self ):
        self.streamlined = True
        self.strRepr = None
        return self

    def checkRecursion( self, parseElementList ):
        pass

    def validate( self, validateTrace=[] ):
        """
        Check defined expressions for valid structure, check for infinite recursive definitions.
        """
        self.checkRecursion( [] )

    def parseFile( self, file_or_filename, parseAll=False ):
        """
        Execute the parse expression on the given file or filename.
        If a filename is specified (instead of a file object),
        the entire file is opened, read, and closed before parsing.
        """
        try:
            file_contents = file_or_filename.read()
        except AttributeError:
            with open(file_or_filename, "r") as f:
                file_contents = f.read()
        try:
            return self.parseString(file_contents, parseAll)
        except ParseBaseException as exc:
            if ParserElement.verbose_stacktrace:
                raise
            else:
                # catch and re-raise exception from here, clears out pyparsing internal stack trace
                raise exc

    def __eq__(self,other):
        if isinstance(other, ParserElement):
            return self is other or vars(self) == vars(other)
        elif isinstance(other, basestring):
            return self.matches(other)
        else:
            return super(ParserElement,self)==other

    def __ne__(self,other):
        return not (self == other)

    def __hash__(self):
        return hash(id(self))

    def __req__(self,other):
        return self == other

    def __rne__(self,other):
        return not (self == other)
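    # Illustrative sketch (added commentary only): parseFile accepts either an
    # open file object or a filename; the grammar and path below are
    # placeholders.
    #
    #   config_grammar = OneOrMore(Word(alphanums + "=_"))
    #   results = config_grammar.parseFile("settings.cfg", parseAll=True)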
    def matches(self, testString, parseAll=True):
        """
        Method for quick testing of a parser against a test string. Good for simple
        inline microtests of sub expressions while building up larger parser.

        Parameters:
         - testString - to test against this expression for a match
         - parseAll - (default=C{True}) - flag to pass to C{L{parseString}} when running tests

        Example::
            expr = Word(nums)
            assert expr.matches("100")
        """
        try:
            self.parseString(_ustr(testString), parseAll=parseAll)
            return True
        except ParseBaseException:
            return False
""" if isinstance(tests, basestring): tests = list(map(str.strip, tests.rstrip().splitlines())) if isinstance(comment, basestring): comment = Literal(comment) allResults = [] comments = [] success = True for t in tests: if comment is not None and comment.matches(t, False) or comments and not t: comments.append(t) continue if not t: continue out = ['\n'.join(comments), t] comments = [] try: t = t.replace(r'\n','\n') result = self.parseString(t, parseAll=parseAll) out.append(result.dump(full=fullDump)) success = success and not failureTests except ParseBaseException as pe: fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else "" if '\n' in t: out.append(line(pe.loc, t)) out.append(' '*(col(pe.loc,t)-1) + '^' + fatal) else: out.append(' '*pe.loc + '^' + fatal) out.append("FAIL: " + str(pe)) success = success and failureTests result = pe except Exception as exc: out.append("FAIL-EXCEPTION: " + str(exc)) success = success and failureTests result = exc if printResults: if fullDump: out.append('') print('\n'.join(out)) allResults.append((t, result)) return success, allResults class Token(ParserElement): """ Abstract C{ParserElement} subclass, for defining atomic matching patterns. """ def __init__( self ): super(Token,self).__init__( savelist=False ) class Empty(Token): """ An empty token, will always match. """ def __init__( self ): super(Empty,self).__init__() self.name = "Empty" self.mayReturnEmpty = True self.mayIndexError = False class NoMatch(Token): """ A token that will never match. """ def __init__( self ): super(NoMatch,self).__init__() self.name = "NoMatch" self.mayReturnEmpty = True self.mayIndexError = False self.errmsg = "Unmatchable token" def parseImpl( self, instring, loc, doActions=True ): raise ParseException(instring, loc, self.errmsg, self) class Literal(Token): """ Token to exactly match a specified string. Example:: Literal('blah').parseString('blah') # -> ['blah'] Literal('blah').parseString('blahfooblah') # -> ['blah'] Literal('blah').parseString('bla') # -> Exception: Expected "blah" For case-insensitive matching, use L{CaselessLiteral}. For keyword matching (force word break before and after the matched string), use L{Keyword} or L{CaselessKeyword}. """ def __init__( self, matchString ): super(Literal,self).__init__() self.match = matchString self.matchLen = len(matchString) try: self.firstMatchChar = matchString[0] except IndexError: warnings.warn("null string passed to Literal; use Empty() instead", SyntaxWarning, stacklevel=2) self.__class__ = Empty self.name = '"%s"' % _ustr(self.match) self.errmsg = "Expected " + self.name self.mayReturnEmpty = False self.mayIndexError = False # Performance tuning: this routine gets called a *lot* # if this is a single character match string and the first character matches, # short-circuit as quickly as possible, and avoid calling startswith #~ @profile def parseImpl( self, instring, loc, doActions=True ): if (instring[loc] == self.firstMatchChar and (self.matchLen==1 or instring.startswith(self.match,loc)) ): return loc+self.matchLen, self.match raise ParseException(instring, loc, self.errmsg, self) _L = Literal ParserElement._literalStringClass = Literal class Keyword(Token): """ Token to exactly match a specified string as a keyword, that is, it must be immediately followed by a non-keyword character. Compare with C{L{Literal}}: - C{Literal("if")} will match the leading C{'if'} in C{'ifAndOnlyIf'}. 
class Keyword(Token):
    """
    Token to exactly match a specified string as a keyword, that is, it must be
    immediately followed by a non-keyword character.  Compare with C{L{Literal}}:
     - C{Literal("if")} will match the leading C{'if'} in C{'ifAndOnlyIf'}.
     - C{Keyword("if")} will not; it will only match the leading C{'if'} in C{'if x=1'}, or C{'if(y==2)'}
    Accepts two optional constructor arguments in addition to the keyword string:
     - C{identChars} is a string of characters that would be valid identifier characters,
          defaulting to all alphanumerics + "_" and "$"
     - C{caseless} allows case-insensitive matching, default is C{False}.

    Example::
        Keyword("start").parseString("start")  # -> ['start']
        Keyword("start").parseString("starting")  # -> Exception

    For case-insensitive matching, use L{CaselessKeyword}.
    """
    DEFAULT_KEYWORD_CHARS = alphanums+"_$"

    def __init__( self, matchString, identChars=None, caseless=False ):
        super(Keyword,self).__init__()
        if identChars is None:
            identChars = Keyword.DEFAULT_KEYWORD_CHARS
        self.match = matchString
        self.matchLen = len(matchString)
        try:
            self.firstMatchChar = matchString[0]
        except IndexError:
            warnings.warn("null string passed to Keyword; use Empty() instead",
                            SyntaxWarning, stacklevel=2)
        self.name = '"%s"' % self.match
        self.errmsg = "Expected " + self.name
        self.mayReturnEmpty = False
        self.mayIndexError = False
        self.caseless = caseless
        if caseless:
            self.caselessmatch = matchString.upper()
            identChars = identChars.upper()
        self.identChars = set(identChars)

    def parseImpl( self, instring, loc, doActions=True ):
        if self.caseless:
            if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and
                 (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and
                 (loc == 0 or instring[loc-1].upper() not in self.identChars) ):
                return loc+self.matchLen, self.match
        else:
            if (instring[loc] == self.firstMatchChar and
                (self.matchLen==1 or instring.startswith(self.match,loc)) and
                (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and
                (loc == 0 or instring[loc-1] not in self.identChars) ):
                return loc+self.matchLen, self.match
        raise ParseException(instring, loc, self.errmsg, self)

    def copy(self):
        c = super(Keyword,self).copy()
        c.identChars = Keyword.DEFAULT_KEYWORD_CHARS
        return c

    @staticmethod
    def setDefaultKeywordChars( chars ):
        """Overrides the default Keyword chars
        """
        Keyword.DEFAULT_KEYWORD_CHARS = chars

class CaselessLiteral(Literal):
    """
    Token to match a specified string, ignoring case of letters.
    Note: the matched results will always be in the case of the given
    match string, NOT the case of the input text.

    Example::
        OneOrMore(CaselessLiteral("CMD")).parseString("cmd CMD Cmd10") # -> ['CMD', 'CMD', 'CMD']

    (Contrast with example for L{CaselessKeyword}.)
    """
    def __init__( self, matchString ):
        super(CaselessLiteral,self).__init__( matchString.upper() )
        # Preserve the defining literal.
        self.returnString = matchString
        self.name = "'%s'" % self.returnString
        self.errmsg = "Expected " + self.name

    def parseImpl( self, instring, loc, doActions=True ):
        if instring[ loc:loc+self.matchLen ].upper() == self.match:
            return loc+self.matchLen, self.returnString
        raise ParseException(instring, loc, self.errmsg, self)
""" def __init__( self, matchString, identChars=None ): super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True ) def parseImpl( self, instring, loc, doActions=True ): if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) ): return loc+self.matchLen, self.match raise ParseException(instring, loc, self.errmsg, self) class CloseMatch(Token): """ A variation on L{Literal} which matches "close" matches, that is, strings with at most 'n' mismatching characters. C{CloseMatch} takes parameters: - C{match_string} - string to be matched - C{maxMismatches} - (C{default=1}) maximum number of mismatches allowed to count as a match The results from a successful parse will contain the matched text from the input string and the following named results: - C{mismatches} - a list of the positions within the match_string where mismatches were found - C{original} - the original match_string used to compare against the input string If C{mismatches} is an empty list, then the match was an exact match. Example:: patt = CloseMatch("ATCATCGAATGGA") patt.parseString("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']}) patt.parseString("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1) # exact match patt.parseString("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']}) # close match allowing up to 2 mismatches patt = CloseMatch("ATCATCGAATGGA", maxMismatches=2) patt.parseString("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']}) """ def __init__(self, match_string, maxMismatches=1): super(CloseMatch,self).__init__() self.name = match_string self.match_string = match_string self.maxMismatches = maxMismatches self.errmsg = "Expected %r (with up to %d mismatches)" % (self.match_string, self.maxMismatches) self.mayIndexError = False self.mayReturnEmpty = False def parseImpl( self, instring, loc, doActions=True ): start = loc instrlen = len(instring) maxloc = start + len(self.match_string) if maxloc <= instrlen: match_string = self.match_string match_stringloc = 0 mismatches = [] maxMismatches = self.maxMismatches for match_stringloc,s_m in enumerate(zip(instring[loc:maxloc], self.match_string)): src,mat = s_m if src != mat: mismatches.append(match_stringloc) if len(mismatches) > maxMismatches: break else: loc = match_stringloc + 1 results = ParseResults([instring[start:loc]]) results['original'] = self.match_string results['mismatches'] = mismatches return loc, results raise ParseException(instring, loc, self.errmsg, self) class Word(Token): """ Token for matching words composed of allowed character sets. Defined with string containing all allowed initial characters, an optional string containing allowed body characters (if omitted, defaults to the initial character set), and an optional minimum, maximum, and/or exact length. The default value for C{min} is 1 (a minimum value < 1 is not valid); the default values for C{max} and C{exact} are 0, meaning no maximum or exact length restriction. An optional C{excludeChars} parameter can list characters that might be found in the input C{bodyChars} string; useful to define a word of all printables except for one or two characters, for instance. 
class Word(Token):
    """
    Token for matching words composed of allowed character sets.
    Defined with string containing all allowed initial characters,
    an optional string containing allowed body characters (if omitted,
    defaults to the initial character set), and an optional minimum,
    maximum, and/or exact length.  The default value for C{min} is 1 (a
    minimum value < 1 is not valid); the default values for C{max} and C{exact}
    are 0, meaning no maximum or exact length restriction. An optional
    C{excludeChars} parameter can list characters that might be found in
    the input C{bodyChars} string; useful to define a word of all printables
    except for one or two characters, for instance.

    L{srange} is useful for defining custom character set strings for defining
    C{Word} expressions, using range notation from regular expression character sets.

    A common mistake is to use C{Word} to match a specific literal string, as in
    C{Word("Address")}. Remember that C{Word} uses the string argument to define
    I{sets} of matchable characters. This expression would match "Add", "AAA",
    "dAred", or any other word made up of the characters 'A', 'd', 'r', 'e', and 's'.
    To match an exact literal string, use L{Literal} or L{Keyword}.

    pyparsing includes helper strings for building Words:
     - L{alphas}
     - L{nums}
     - L{alphanums}
     - L{hexnums}
     - L{alphas8bit} (alphabetic characters in ASCII range 128-255 - accented, tilded, umlauted, etc.)
     - L{punc8bit} (non-alphabetic characters in ASCII range 128-255 - currency, symbols, superscripts, diacriticals, etc.)
     - L{printables} (any non-whitespace character)

    Example::
        # a word composed of digits
        integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9"))

        # a word with a leading capital, and zero or more lowercase
        capital_word = Word(alphas.upper(), alphas.lower())

        # hostnames are alphanumeric, with leading alpha, and '-'
        hostname = Word(alphas, alphanums+'-')

        # roman numeral (not a strict parser, accepts invalid mix of characters)
        roman = Word("IVXLCDM")

        # any string of non-whitespace characters, except for ','
        csv_value = Word(printables, excludeChars=",")
    """
    def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False, excludeChars=None ):
        super(Word,self).__init__()
        if excludeChars:
            initChars = ''.join(c for c in initChars if c not in excludeChars)
            if bodyChars:
                bodyChars = ''.join(c for c in bodyChars if c not in excludeChars)
        self.initCharsOrig = initChars
        self.initChars = set(initChars)
        if bodyChars :
            self.bodyCharsOrig = bodyChars
            self.bodyChars = set(bodyChars)
        else:
            self.bodyCharsOrig = initChars
            self.bodyChars = set(initChars)

        self.maxSpecified = max > 0

        if min < 1:
            raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted")

        self.minLen = min

        if max > 0:
            self.maxLen = max
        else:
            self.maxLen = _MAX_INT

        if exact > 0:
            self.maxLen = exact
            self.minLen = exact

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayIndexError = False
        self.asKeyword = asKeyword

        if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0):
            if self.bodyCharsOrig == self.initCharsOrig:
                self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig)
            elif len(self.initCharsOrig) == 1:
                self.reString = "%s[%s]*" % \
                                      (re.escape(self.initCharsOrig),
                                      _escapeRegexRangeChars(self.bodyCharsOrig),)
            else:
                self.reString = "[%s][%s]*" % \
                                      (_escapeRegexRangeChars(self.initCharsOrig),
                                      _escapeRegexRangeChars(self.bodyCharsOrig),)
            if self.asKeyword:
                self.reString = r"\b"+self.reString+r"\b"
            try:
                self.re = re.compile( self.reString )
            except Exception:
                self.re = None
    def parseImpl( self, instring, loc, doActions=True ):
        if self.re:
            result = self.re.match(instring,loc)
            if not result:
                raise ParseException(instring, loc, self.errmsg, self)

            loc = result.end()
            return loc, result.group()

        if not(instring[ loc ] in self.initChars):
            raise ParseException(instring, loc, self.errmsg, self)

        start = loc
        loc += 1
        instrlen = len(instring)
        bodychars = self.bodyChars
        maxloc = start + self.maxLen
        maxloc = min( maxloc, instrlen )
        while loc < maxloc and instring[loc] in bodychars:
            loc += 1

        throwException = False
        if loc - start < self.minLen:
            throwException = True
        if self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
            throwException = True
        if self.asKeyword:
            if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars):
                throwException = True

        if throwException:
            raise ParseException(instring, loc, self.errmsg, self)

        return loc, instring[start:loc]

    def __str__( self ):
        try:
            return super(Word,self).__str__()
        except Exception:
            pass

        if self.strRepr is None:

            def charsAsStr(s):
                if len(s)>4:
                    return s[:4]+"..."
                else:
                    return s

            if ( self.initCharsOrig != self.bodyCharsOrig ):
                self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) )
            else:
                self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)

        return self.strRepr


class Regex(Token):
    r"""
    Token for matching strings that match a given regular expression.
    Defined with string specifying the regular expression in a form recognized by the inbuilt Python re module.
    If the given regex contains named groups (defined using C{(?P<name>...)}), these will be preserved as
    named parse results.

    Example::
        realnum = Regex(r"[+-]?\d+\.\d*")
        date = Regex(r'(?P<year>\d{4})-(?P<month>\d\d?)-(?P<day>\d\d?)')
        # ref: http://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression
        roman = Regex(r"M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")
    """
    compiledREtype = type(re.compile("[A-Z]"))
    def __init__( self, pattern, flags=0):
        """The parameters C{pattern} and C{flags} are passed to the C{re.compile()} function as-is. See the Python C{re} module for an explanation of the acceptable patterns and flags."""
        super(Regex,self).__init__()

        if isinstance(pattern, basestring):
            if not pattern:
                warnings.warn("null string passed to Regex; use Empty() instead",
                        SyntaxWarning, stacklevel=2)

            self.pattern = pattern
            self.flags = flags

            try:
                self.re = re.compile(self.pattern, self.flags)
                self.reString = self.pattern
            except sre_constants.error:
                warnings.warn("invalid pattern (%s) passed to Regex" % pattern,
                    SyntaxWarning, stacklevel=2)
                raise

        elif isinstance(pattern, Regex.compiledREtype):
            self.re = pattern
            self.pattern = \
            self.reString = str(pattern)
            self.flags = flags

        else:
            raise ValueError("Regex may only be constructed with a string or a compiled RE object")

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayIndexError = False
        self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        result = self.re.match(instring,loc)
        if not result:
            raise ParseException(instring, loc, self.errmsg, self)

        loc = result.end()
        d = result.groupdict()
        ret = ParseResults(result.group())
        if d:
            for k in d:
                ret[k] = d[k]
        return loc,ret

    def __str__( self ):
        try:
            return super(Regex,self).__str__()
        except Exception:
            pass

        if self.strRepr is None:
            self.strRepr = "Re:(%s)" % repr(self.pattern)

        return self.strRepr
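# Illustrative sketch (added commentary only): named groups in a Regex pattern
# become named parse results, so fields can be read back by name:
#
#   date = Regex(r'(?P<year>\d{4})-(?P<month>\d\d?)-(?P<day>\d\d?)')
#   result = date.parseString("1999-12-31")
#   result['year']   # -> '1999'
#   result['day']    # -> '31'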
class QuotedString(Token):
    r"""
    Token for matching strings that are delimited by quoting characters.

    Defined with the following parameters:
        - quoteChar - string of one or more characters defining the quote delimiting string
        - escChar - character to escape quotes, typically backslash (default=C{None})
        - escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=C{None})
        - multiline - boolean indicating whether quotes can span multiple lines (default=C{False})
        - unquoteResults - boolean indicating whether the matched text should be unquoted (default=C{True})
        - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=C{None} => same as quoteChar)
        - convertWhitespaceEscapes - convert escaped whitespace (C{'\t'}, C{'\n'}, etc.) to actual whitespace (default=C{True})

    Example::
        qs = QuotedString('"')
        print(qs.searchString('lsjdf "This is the quote" sldjf'))
        complex_qs = QuotedString('{{', endQuoteChar='}}')
        print(complex_qs.searchString('lsjdf {{This is the "quote"}} sldjf'))
        sql_qs = QuotedString('"', escQuote='""')
        print(sql_qs.searchString('lsjdf "This is the quote with ""embedded"" quotes" sldjf'))
    prints::
        [['This is the quote']]
        [['This is the "quote"']]
        [['This is the quote with "embedded" quotes']]
    """
    def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None, convertWhitespaceEscapes=True):
        super(QuotedString,self).__init__()

        # remove white space from quote chars - won't work anyway
        quoteChar = quoteChar.strip()
        if not quoteChar:
            warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
            raise SyntaxError()

        if endQuoteChar is None:
            endQuoteChar = quoteChar
        else:
            endQuoteChar = endQuoteChar.strip()
            if not endQuoteChar:
                warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
                raise SyntaxError()

        self.quoteChar = quoteChar
        self.quoteCharLen = len(quoteChar)
        self.firstQuoteChar = quoteChar[0]
        self.endQuoteChar = endQuoteChar
        self.endQuoteCharLen = len(endQuoteChar)
        self.escChar = escChar
        self.escQuote = escQuote
        self.unquoteResults = unquoteResults
        self.convertWhitespaceEscapes = convertWhitespaceEscapes

        if multiline:
            self.flags = re.MULTILINE | re.DOTALL
            self.pattern = r'%s(?:[^%s%s]' % \
                ( re.escape(self.quoteChar),
                  _escapeRegexRangeChars(self.endQuoteChar[0]),
                  (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
        else:
            self.flags = 0
            self.pattern = r'%s(?:[^%s\n\r%s]' % \
                ( re.escape(self.quoteChar),
                  _escapeRegexRangeChars(self.endQuoteChar[0]),
                  (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
        if len(self.endQuoteChar) > 1:
            self.pattern += (
                '|(?:' + ')|(?:'.join("%s[^%s]" % (re.escape(self.endQuoteChar[:i]),
                                               _escapeRegexRangeChars(self.endQuoteChar[i]))
                                    for i in range(len(self.endQuoteChar)-1,0,-1)) + ')'
                )
        if escQuote:
            self.pattern += (r'|(?:%s)' % re.escape(escQuote))
        if escChar:
            self.pattern += (r'|(?:%s.)' % re.escape(escChar))
            self.escCharReplacePattern = re.escape(self.escChar)+"(.)"
        self.pattern += (r')*%s' % re.escape(self.endQuoteChar))

        try:
            self.re = re.compile(self.pattern, self.flags)
            self.reString = self.pattern
        except sre_constants.error:
            warnings.warn("invalid pattern (%s) passed to Regex" % self.pattern,
                SyntaxWarning, stacklevel=2)
            raise

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayIndexError = False
        self.mayReturnEmpty = True
    def parseImpl( self, instring, loc, doActions=True ):
        result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None
        if not result:
            raise ParseException(instring, loc, self.errmsg, self)

        loc = result.end()
        ret = result.group()

        if self.unquoteResults:

            # strip off quotes
            ret = ret[self.quoteCharLen:-self.endQuoteCharLen]

            if isinstance(ret,basestring):
                # replace escaped whitespace
                if '\\' in ret and self.convertWhitespaceEscapes:
                    ws_map = {
                        r'\t' : '\t',
                        r'\n' : '\n',
                        r'\f' : '\f',
                        r'\r' : '\r',
                    }
                    for wslit,wschar in ws_map.items():
                        ret = ret.replace(wslit, wschar)

                # replace escaped characters
                if self.escChar:
                    ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret)

                # replace escaped quotes
                if self.escQuote:
                    ret = ret.replace(self.escQuote, self.endQuoteChar)

        return loc, ret

    def __str__( self ):
        try:
            return super(QuotedString,self).__str__()
        except Exception:
            pass

        if self.strRepr is None:
            self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar)

        return self.strRepr


class CharsNotIn(Token):
    """
    Token for matching words composed of characters I{not} in a given set (will
    include whitespace in matched characters if not listed in the provided exclusion set - see example).
    Defined with string containing all disallowed characters, and an optional
    minimum, maximum, and/or exact length.  The default value for C{min} is 1 (a
    minimum value < 1 is not valid); the default values for C{max} and C{exact}
    are 0, meaning no maximum or exact length restriction.

    Example::
        # define a comma-separated-value as anything that is not a ','
        csv_value = CharsNotIn(',')
        print(delimitedList(csv_value).parseString("dkls,lsdkjf,s12 34,@!#,213"))
    prints::
        ['dkls', 'lsdkjf', 's12 34', '@!#', '213']
    """
    def __init__( self, notChars, min=1, max=0, exact=0 ):
        super(CharsNotIn,self).__init__()
        self.skipWhitespace = False
        self.notChars = notChars

        if min < 1:
            raise ValueError("cannot specify a minimum length < 1; use Optional(CharsNotIn()) if zero-length char group is permitted")

        self.minLen = min

        if max > 0:
            self.maxLen = max
        else:
            self.maxLen = _MAX_INT

        if exact > 0:
            self.maxLen = exact
            self.minLen = exact

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayReturnEmpty = ( self.minLen == 0 )
        self.mayIndexError = False

    def parseImpl( self, instring, loc, doActions=True ):
        if instring[loc] in self.notChars:
            raise ParseException(instring, loc, self.errmsg, self)

        start = loc
        loc += 1
        notchars = self.notChars
        maxlen = min( start+self.maxLen, len(instring) )
        while loc < maxlen and \
                (instring[loc] not in notchars):
            loc += 1

        if loc - start < self.minLen:
            raise ParseException(instring, loc, self.errmsg, self)

        return loc, instring[start:loc]

    def __str__( self ):
        try:
            return super(CharsNotIn, self).__str__()
        except Exception:
            pass

        if self.strRepr is None:
            if len(self.notChars) > 4:
                self.strRepr = "!W:(%s...)" % self.notChars[:4]
            else:
                self.strRepr = "!W:(%s)" % self.notChars

        return self.strRepr
""" whiteStrs = { " " : "<SPC>", "\t": "<TAB>", "\n": "<LF>", "\r": "<CR>", "\f": "<FF>", } def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0): super(White,self).__init__() self.matchWhite = ws self.setWhitespaceChars( "".join(c for c in self.whiteChars if c not in self.matchWhite) ) #~ self.leaveWhitespace() self.name = ("".join(White.whiteStrs[c] for c in self.matchWhite)) self.mayReturnEmpty = True self.errmsg = "Expected " + self.name self.minLen = min if max > 0: self.maxLen = max else: self.maxLen = _MAX_INT if exact > 0: self.maxLen = exact self.minLen = exact def parseImpl( self, instring, loc, doActions=True ): if not(instring[ loc ] in self.matchWhite): raise ParseException(instring, loc, self.errmsg, self) start = loc loc += 1 maxloc = start + self.maxLen maxloc = min( maxloc, len(instring) ) while loc < maxloc and instring[loc] in self.matchWhite: loc += 1 if loc - start < self.minLen: raise ParseException(instring, loc, self.errmsg, self) return loc, instring[start:loc] class _PositionToken(Token): def __init__( self ): super(_PositionToken,self).__init__() self.name=self.__class__.__name__ self.mayReturnEmpty = True self.mayIndexError = False class GoToColumn(_PositionToken): """ Token to advance to a specific column of input text; useful for tabular report scraping. """ def __init__( self, colno ): super(GoToColumn,self).__init__() self.col = colno def preParse( self, instring, loc ): if col(loc,instring) != self.col: instrlen = len(instring) if self.ignoreExprs: loc = self._skipIgnorables( instring, loc ) while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col : loc += 1 return loc def parseImpl( self, instring, loc, doActions=True ): thiscol = col( loc, instring ) if thiscol > self.col: raise ParseException( instring, loc, "Text not in expected column", self ) newloc = loc + self.col - thiscol ret = instring[ loc: newloc ] return newloc, ret class LineStart(_PositionToken): """ Matches if current position is at the beginning of a line within the parse string Example:: test = '''\ AAA this line AAA and this line AAA but not this one B AAA and definitely not this one ''' for t in (LineStart() + 'AAA' + restOfLine).searchString(test): print(t) Prints:: ['AAA', ' this line'] ['AAA', ' and this line'] """ def __init__( self ): super(LineStart,self).__init__() self.errmsg = "Expected start of line" def parseImpl( self, instring, loc, doActions=True ): if col(loc, instring) == 1: return loc, [] raise ParseException(instring, loc, self.errmsg, self) class LineEnd(_PositionToken): """ Matches if current position is at the end of a line within the parse string """ def __init__( self ): super(LineEnd,self).__init__() self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") ) self.errmsg = "Expected end of line" def parseImpl( self, instring, loc, doActions=True ): if loc<len(instring): if instring[loc] == "\n": return loc+1, "\n" else: raise ParseException(instring, loc, self.errmsg, self) elif loc == len(instring): return loc+1, [] else: raise ParseException(instring, loc, self.errmsg, self) class StringStart(_PositionToken): """ Matches if current position is at the beginning of the parse string """ def __init__( self ): super(StringStart,self).__init__() self.errmsg = "Expected start of text" def parseImpl( self, instring, loc, doActions=True ): if loc != 0: # see if entire string up to here is just whitespace and ignoreables if loc != self.preParse( instring, 0 ): raise ParseException(instring, loc, self.errmsg, self) return 
loc, [] class StringEnd(_PositionToken): """ Matches if current position is at the end of the parse string """ def __init__( self ): super(StringEnd,self).__init__() self.errmsg = "Expected end of text" def parseImpl( self, instring, loc, doActions=True ): if loc < len(instring): raise ParseException(instring, loc, self.errmsg, self) elif loc == len(instring): return loc+1, [] elif loc > len(instring): return loc, [] else: raise ParseException(instring, loc, self.errmsg, self) class WordStart(_PositionToken): """ Matches if the current position is at the beginning of a Word, and is not preceded by any character in a given set of C{wordChars} (default=C{printables}). To emulate the C{\b} behavior of regular expressions, use C{WordStart(alphanums)}. C{WordStart} will also match at the beginning of the string being parsed, or at the beginning of a line. """ def __init__(self, wordChars = printables): super(WordStart,self).__init__() self.wordChars = set(wordChars) self.errmsg = "Not at the start of a word" def parseImpl(self, instring, loc, doActions=True ): if loc != 0: if (instring[loc-1] in self.wordChars or instring[loc] not in self.wordChars): raise ParseException(instring, loc, self.errmsg, self) return loc, [] class WordEnd(_PositionToken): """ Matches if the current position is at the end of a Word, and is not followed by any character in a given set of C{wordChars} (default=C{printables}). To emulate the C{\b} behavior of regular expressions, use C{WordEnd(alphanums)}. C{WordEnd} will also match at the end of the string being parsed, or at the end of a line. """ def __init__(self, wordChars = printables): super(WordEnd,self).__init__() self.wordChars = set(wordChars) self.skipWhitespace = False self.errmsg = "Not at the end of a word" def parseImpl(self, instring, loc, doActions=True ): instrlen = len(instring) if instrlen>0 and loc<instrlen: if (instring[loc] in self.wordChars or instring[loc-1] not in self.wordChars): raise ParseException(instring, loc, self.errmsg, self) return loc, [] class ParseExpression(ParserElement): """ Abstract subclass of ParserElement, for combining and post-processing parsed tokens. 
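The concrete combining subclasses defined below are C{And}, C{Or}, C{MatchFirst}, and C{Each}.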
""" def __init__( self, exprs, savelist = False ): super(ParseExpression,self).__init__(savelist) if isinstance( exprs, _generatorType ): exprs = list(exprs) if isinstance( exprs, basestring ): self.exprs = [ ParserElement._literalStringClass( exprs ) ] elif isinstance( exprs, Iterable ): exprs = list(exprs) # if sequence of strings provided, wrap with Literal if all(isinstance(expr, basestring) for expr in exprs): exprs = map(ParserElement._literalStringClass, exprs) self.exprs = list(exprs) else: try: self.exprs = list( exprs ) except TypeError: self.exprs = [ exprs ] self.callPreparse = False def __getitem__( self, i ): return self.exprs[i] def append( self, other ): self.exprs.append( other ) self.strRepr = None return self def leaveWhitespace( self ): """Extends C{leaveWhitespace} defined in base class, and also invokes C{leaveWhitespace} on all contained expressions.""" self.skipWhitespace = False self.exprs = [ e.copy() for e in self.exprs ] for e in self.exprs: e.leaveWhitespace() return self def ignore( self, other ): if isinstance( other, Suppress ): if other not in self.ignoreExprs: super( ParseExpression, self).ignore( other ) for e in self.exprs: e.ignore( self.ignoreExprs[-1] ) else: super( ParseExpression, self).ignore( other ) for e in self.exprs: e.ignore( self.ignoreExprs[-1] ) return self def __str__( self ): try: return super(ParseExpression,self).__str__() except Exception: pass if self.strRepr is None: self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.exprs) ) return self.strRepr def streamline( self ): super(ParseExpression,self).streamline() for e in self.exprs: e.streamline() # collapse nested And's of the form And( And( And( a,b), c), d) to And( a,b,c,d ) # but only if there are no parse actions or resultsNames on the nested And's # (likewise for Or's and MatchFirst's) if ( len(self.exprs) == 2 ): other = self.exprs[0] if ( isinstance( other, self.__class__ ) and not(other.parseAction) and other.resultsName is None and not other.debug ): self.exprs = other.exprs[:] + [ self.exprs[1] ] self.strRepr = None self.mayReturnEmpty |= other.mayReturnEmpty self.mayIndexError |= other.mayIndexError other = self.exprs[-1] if ( isinstance( other, self.__class__ ) and not(other.parseAction) and other.resultsName is None and not other.debug ): self.exprs = self.exprs[:-1] + other.exprs[:] self.strRepr = None self.mayReturnEmpty |= other.mayReturnEmpty self.mayIndexError |= other.mayIndexError self.errmsg = "Expected " + _ustr(self) return self def setResultsName( self, name, listAllMatches=False ): ret = super(ParseExpression,self).setResultsName(name,listAllMatches) return ret def validate( self, validateTrace=[] ): tmp = validateTrace[:]+[self] for e in self.exprs: e.validate(tmp) self.checkRecursion( [] ) def copy(self): ret = super(ParseExpression,self).copy() ret.exprs = [e.copy() for e in self.exprs] return ret class And(ParseExpression): """ Requires all given C{ParseExpression}s to be found in the given order. Expressions may be separated by whitespace. May be constructed using the C{'+'} operator. May also be constructed using the C{'-'} operator, which will suppress backtracking. 
Example:: integer = Word(nums) name_expr = OneOrMore(Word(alphas)) expr = And([integer("id"),name_expr("name"),integer("age")]) # more easily written as: expr = integer("id") + name_expr("name") + integer("age") """ class _ErrorStop(Empty): def __init__(self, *args, **kwargs): super(And._ErrorStop,self).__init__(*args, **kwargs) self.name = '-' self.leaveWhitespace() def __init__( self, exprs, savelist = True ): super(And,self).__init__(exprs, savelist) self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) self.setWhitespaceChars( self.exprs[0].whiteChars ) self.skipWhitespace = self.exprs[0].skipWhitespace self.callPreparse = True def parseImpl( self, instring, loc, doActions=True ): # pass False as last arg to _parse for first element, since we already # pre-parsed the string as part of our And pre-parsing loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False ) errorStop = False for e in self.exprs[1:]: if isinstance(e, And._ErrorStop): errorStop = True continue if errorStop: try: loc, exprtokens = e._parse( instring, loc, doActions ) except ParseSyntaxException: raise except ParseBaseException as pe: pe.__traceback__ = None raise ParseSyntaxException._from_exception(pe) except IndexError: raise ParseSyntaxException(instring, len(instring), self.errmsg, self) else: loc, exprtokens = e._parse( instring, loc, doActions ) if exprtokens or exprtokens.haskeys(): resultlist += exprtokens return loc, resultlist def __iadd__(self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) return self.append( other ) #And( [ self, other ] ) def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) if not e.mayReturnEmpty: break def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr class Or(ParseExpression): """ Requires that at least one C{ParseExpression} is found. If two expressions match, the expression that matches the longest string will be used. May be constructed using the C{'^'} operator. Example:: # construct Or using '^' operator number = Word(nums) ^ Combine(Word(nums) + '.' 
+ Word(nums)) print(number.searchString("123 3.1416 789")) prints:: [['123'], ['3.1416'], ['789']] """ def __init__( self, exprs, savelist = False ): super(Or,self).__init__(exprs, savelist) if self.exprs: self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) else: self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): maxExcLoc = -1 maxException = None matches = [] for e in self.exprs: try: loc2 = e.tryParse( instring, loc ) except ParseException as err: err.__traceback__ = None if err.loc > maxExcLoc: maxException = err maxExcLoc = err.loc except IndexError: if len(instring) > maxExcLoc: maxException = ParseException(instring,len(instring),e.errmsg,self) maxExcLoc = len(instring) else: # save match among all matches, to retry longest to shortest matches.append((loc2, e)) if matches: matches.sort(key=lambda x: -x[0]) for _,e in matches: try: return e._parse( instring, loc, doActions ) except ParseException as err: err.__traceback__ = None if err.loc > maxExcLoc: maxException = err maxExcLoc = err.loc if maxException is not None: maxException.msg = self.errmsg raise maxException else: raise ParseException(instring, loc, "no defined alternatives to match", self) def __ixor__(self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) return self.append( other ) #Or( [ self, other ] ) def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " ^ ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) class MatchFirst(ParseExpression): """ Requires that at least one C{ParseExpression} is found. If two expressions match, the first one listed is the one that will match. May be constructed using the C{'|'} operator. Example:: # construct MatchFirst using '|' operator # watch the order of expressions to match number = Word(nums) | Combine(Word(nums) + '.' + Word(nums)) print(number.searchString("123 3.1416 789")) # Fail! -> [['123'], ['3'], ['1416'], ['789']] # put more selective expression first number = Combine(Word(nums) + '.' 
+ Word(nums)) | Word(nums) print(number.searchString("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']] """ def __init__( self, exprs, savelist = False ): super(MatchFirst,self).__init__(exprs, savelist) if self.exprs: self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) else: self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): maxExcLoc = -1 maxException = None for e in self.exprs: try: ret = e._parse( instring, loc, doActions ) return ret except ParseException as err: if err.loc > maxExcLoc: maxException = err maxExcLoc = err.loc except IndexError: if len(instring) > maxExcLoc: maxException = ParseException(instring,len(instring),e.errmsg,self) maxExcLoc = len(instring) # only got here if no expression matched, raise exception for match that made it the furthest else: if maxException is not None: maxException.msg = self.errmsg raise maxException else: raise ParseException(instring, loc, "no defined alternatives to match", self) def __ior__(self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) return self.append( other ) #MatchFirst( [ self, other ] ) def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " | ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) class Each(ParseExpression): """ Requires all given C{ParseExpression}s to be found, but in any order. Expressions may be separated by whitespace. May be constructed using the C{'&'} operator. Example:: color = oneOf("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN") shape_type = oneOf("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON") integer = Word(nums) shape_attr = "shape:" + shape_type("shape") posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn") color_attr = "color:" + color("color") size_attr = "size:" + integer("size") # use Each (using operator '&') to accept attributes in any order # (shape and posn are required, color and size are optional) shape_spec = shape_attr & posn_attr & Optional(color_attr) & Optional(size_attr) shape_spec.runTests(''' shape: SQUARE color: BLACK posn: 100, 120 shape: CIRCLE size: 50 color: BLUE posn: 50,80 color:GREEN size:20 shape:TRIANGLE posn:20,40 ''' ) prints:: shape: SQUARE color: BLACK posn: 100, 120 ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']] - color: BLACK - posn: ['100', ',', '120'] - x: 100 - y: 120 - shape: SQUARE shape: CIRCLE size: 50 color: BLUE posn: 50,80 ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']] - color: BLUE - posn: ['50', ',', '80'] - x: 50 - y: 80 - shape: CIRCLE - size: 50 color: GREEN size: 20 shape: TRIANGLE posn: 20,40 ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']] - color: GREEN - posn: ['20', ',', '40'] - x: 20 - y: 40 - shape: TRIANGLE - size: 20 """ def __init__( self, exprs, savelist = True ): super(Each,self).__init__(exprs, savelist) self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) self.skipWhitespace = True self.initExprGroups = True def parseImpl( self, instring, loc, doActions=True ): if self.initExprGroups: self.opt1map = dict((id(e.expr),e) for e in self.exprs if isinstance(e,Optional)) opt1 = [ e.expr for e in self.exprs if isinstance(e,Optional) ] opt2 = [ e for e in self.exprs if e.mayReturnEmpty and not 
isinstance(e,Optional)] self.optionals = opt1 + opt2 self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ] self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ] self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ] self.required += self.multirequired self.initExprGroups = False tmpLoc = loc tmpReqd = self.required[:] tmpOpt = self.optionals[:] matchOrder = [] keepMatching = True while keepMatching: tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired failed = [] for e in tmpExprs: try: tmpLoc = e.tryParse( instring, tmpLoc ) except ParseException: failed.append(e) else: matchOrder.append(self.opt1map.get(id(e),e)) if e in tmpReqd: tmpReqd.remove(e) elif e in tmpOpt: tmpOpt.remove(e) if len(failed) == len(tmpExprs): keepMatching = False if tmpReqd: missing = ", ".join(_ustr(e) for e in tmpReqd) raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing ) # add any unmatched Optionals, in case they have default values defined matchOrder += [e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt] resultlist = [] for e in matchOrder: loc,results = e._parse(instring,loc,doActions) resultlist.append(results) finalResults = sum(resultlist, ParseResults([])) return loc, finalResults def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + " & ".join(_ustr(e) for e in self.exprs) + "}" return self.strRepr def checkRecursion( self, parseElementList ): subRecCheckList = parseElementList[:] + [ self ] for e in self.exprs: e.checkRecursion( subRecCheckList ) class ParseElementEnhance(ParserElement): """ Abstract subclass of C{ParserElement}, for combining and post-processing parsed tokens. 
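The concrete subclasses defined below include C{FollowedBy}, C{NotAny}, C{OneOrMore}, C{ZeroOrMore}, C{Optional}, C{SkipTo}, C{Forward}, and the C{TokenConverter} family.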
""" def __init__( self, expr, savelist=False ): super(ParseElementEnhance,self).__init__(savelist) if isinstance( expr, basestring ): if issubclass(ParserElement._literalStringClass, Token): expr = ParserElement._literalStringClass(expr) else: expr = ParserElement._literalStringClass(Literal(expr)) self.expr = expr self.strRepr = None if expr is not None: self.mayIndexError = expr.mayIndexError self.mayReturnEmpty = expr.mayReturnEmpty self.setWhitespaceChars( expr.whiteChars ) self.skipWhitespace = expr.skipWhitespace self.saveAsList = expr.saveAsList self.callPreparse = expr.callPreparse self.ignoreExprs.extend(expr.ignoreExprs) def parseImpl( self, instring, loc, doActions=True ): if self.expr is not None: return self.expr._parse( instring, loc, doActions, callPreParse=False ) else: raise ParseException("",loc,self.errmsg,self) def leaveWhitespace( self ): self.skipWhitespace = False self.expr = self.expr.copy() if self.expr is not None: self.expr.leaveWhitespace() return self def ignore( self, other ): if isinstance( other, Suppress ): if other not in self.ignoreExprs: super( ParseElementEnhance, self).ignore( other ) if self.expr is not None: self.expr.ignore( self.ignoreExprs[-1] ) else: super( ParseElementEnhance, self).ignore( other ) if self.expr is not None: self.expr.ignore( self.ignoreExprs[-1] ) return self def streamline( self ): super(ParseElementEnhance,self).streamline() if self.expr is not None: self.expr.streamline() return self def checkRecursion( self, parseElementList ): if self in parseElementList: raise RecursiveGrammarException( parseElementList+[self] ) subRecCheckList = parseElementList[:] + [ self ] if self.expr is not None: self.expr.checkRecursion( subRecCheckList ) def validate( self, validateTrace=[] ): tmp = validateTrace[:]+[self] if self.expr is not None: self.expr.validate(tmp) self.checkRecursion( [] ) def __str__( self ): try: return super(ParseElementEnhance,self).__str__() except Exception: pass if self.strRepr is None and self.expr is not None: self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) ) return self.strRepr class FollowedBy(ParseElementEnhance): """ Lookahead matching of the given parse expression. C{FollowedBy} does I{not} advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. C{FollowedBy} always returns a null token list. Example:: # use FollowedBy to match a label only if it is followed by a ':' data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString("shape: SQUARE color: BLACK posn: upper left").pprint() prints:: [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']] """ def __init__( self, expr ): super(FollowedBy,self).__init__(expr) self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): self.expr.tryParse( instring, loc ) return loc, [] class NotAny(ParseElementEnhance): """ Lookahead to disallow matching with the given parse expression. C{NotAny} does I{not} advance the parsing position within the input string, it only verifies that the specified parse expression does I{not} match at the current position. Also, C{NotAny} does I{not} skip over leading whitespace. C{NotAny} always returns a null token list. May be constructed using the '~' operator. 
Example:: """ def __init__( self, expr ): super(NotAny,self).__init__(expr) #~ self.leaveWhitespace() self.skipWhitespace = False # do NOT use self.leaveWhitespace(), don't want to propagate to exprs self.mayReturnEmpty = True self.errmsg = "Found unwanted token, "+_ustr(self.expr) def parseImpl( self, instring, loc, doActions=True ): if self.expr.canParseNext(instring, loc): raise ParseException(instring, loc, self.errmsg, self) return loc, [] def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "~{" + _ustr(self.expr) + "}" return self.strRepr class _MultipleMatch(ParseElementEnhance): def __init__( self, expr, stopOn=None): super(_MultipleMatch, self).__init__(expr) self.saveAsList = True ender = stopOn if isinstance(ender, basestring): ender = ParserElement._literalStringClass(ender) self.not_ender = ~ender if ender is not None else None def parseImpl( self, instring, loc, doActions=True ): self_expr_parse = self.expr._parse self_skip_ignorables = self._skipIgnorables check_ender = self.not_ender is not None if check_ender: try_not_ender = self.not_ender.tryParse # must be at least one (but first see if we are the stopOn sentinel; # if so, fail) if check_ender: try_not_ender(instring, loc) loc, tokens = self_expr_parse( instring, loc, doActions, callPreParse=False ) try: hasIgnoreExprs = (not not self.ignoreExprs) while 1: if check_ender: try_not_ender(instring, loc) if hasIgnoreExprs: preloc = self_skip_ignorables( instring, loc ) else: preloc = loc loc, tmptokens = self_expr_parse( instring, preloc, doActions ) if tmptokens or tmptokens.haskeys(): tokens += tmptokens except (ParseException,IndexError): pass return loc, tokens class OneOrMore(_MultipleMatch): """ Repetition of one or more of the given expression. Parameters: - expr - expression that must match one or more times - stopOn - (default=C{None}) - expression for a terminating sentinel (only required if the sentinel would ordinarily match the repetition expression) Example:: data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) text = "shape: SQUARE posn: upper left color: BLACK" OneOrMore(attr_expr).parseString(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']] # use stopOn attribute for OneOrMore to avoid reading label string as part of the data attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']] # could also be written as (attr_expr * (1,)).parseString(text).pprint() """ def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "{" + _ustr(self.expr) + "}..." return self.strRepr class ZeroOrMore(_MultipleMatch): """ Optional repetition of zero or more of the given expression. 
Parameters: - expr - expression that must match zero or more times - stopOn - (default=C{None}) - expression for a terminating sentinel (only required if the sentinel would ordinarily match the repetition expression) Example: similar to L{OneOrMore} """ def __init__( self, expr, stopOn=None): super(ZeroOrMore,self).__init__(expr, stopOn=stopOn) self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): try: return super(ZeroOrMore, self).parseImpl(instring, loc, doActions) except (ParseException,IndexError): return loc, [] def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "[" + _ustr(self.expr) + "]..." return self.strRepr class _NullToken(object): def __bool__(self): return False __nonzero__ = __bool__ def __str__(self): return "" _optionalNotMatched = _NullToken() class Optional(ParseElementEnhance): """ Optional matching of the given expression. Parameters: - expr - expression that may match at most once - default (optional) - value to be returned if the optional expression is not found. Example:: # US postal code can be a 5-digit zip, plus optional 4-digit qualifier zip = Combine(Word(nums, exact=5) + Optional('-' + Word(nums, exact=4))) zip.runTests(''' # traditional ZIP code 12345 # ZIP+4 form 12101-0001 # invalid ZIP 98765- ''') prints:: # traditional ZIP code 12345 ['12345'] # ZIP+4 form 12101-0001 ['12101-0001'] # invalid ZIP 98765- ^ FAIL: Expected end of text (at char 5), (line:1, col:6) """ def __init__( self, expr, default=_optionalNotMatched ): super(Optional,self).__init__( expr, savelist=False ) self.saveAsList = self.expr.saveAsList self.defaultValue = default self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): try: loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False ) except (ParseException,IndexError): if self.defaultValue is not _optionalNotMatched: if self.expr.resultsName: tokens = ParseResults([ self.defaultValue ]) tokens[self.expr.resultsName] = self.defaultValue else: tokens = [ self.defaultValue ] else: tokens = [] return loc, tokens def __str__( self ): if hasattr(self,"name"): return self.name if self.strRepr is None: self.strRepr = "[" + _ustr(self.expr) + "]" return self.strRepr class SkipTo(ParseElementEnhance): """ Token for skipping over all undefined text until the matched expression is found. Parameters: - expr - target expression marking the end of the data to be skipped - include - (default=C{False}) if True, the target expression is also parsed (the skipped text and target expression are returned as a 2-element list).
- ignore - (default=C{None}) used to define grammars (typically quoted strings and comments) that might contain false matches to the target expression - failOn - (default=C{None}) define expressions that are not allowed to be included in the skipped test; if found before the target expression is found, the SkipTo is not a match Example:: report = ''' Outstanding Issues Report - 1 Jan 2000 # | Severity | Description | Days Open -----+----------+-------------------------------------------+----------- 101 | Critical | Intermittent system crash | 6 94 | Cosmetic | Spelling error on Login ('log|n') | 14 79 | Minor | System slow when running too many reports | 47 ''' integer = Word(nums) SEP = Suppress('|') # use SkipTo to simply match everything up until the next SEP # - ignore quoted strings, so that a '|' character inside a quoted string does not match # - parse action will call token.strip() for each matched token, i.e., the description body string_data = SkipTo(SEP, ignore=quotedString) string_data.setParseAction(tokenMap(str.strip)) ticket_expr = (integer("issue_num") + SEP + string_data("sev") + SEP + string_data("desc") + SEP + integer("days_open")) for tkt in ticket_expr.searchString(report): print tkt.dump() prints:: ['101', 'Critical', 'Intermittent system crash', '6'] - days_open: 6 - desc: Intermittent system crash - issue_num: 101 - sev: Critical ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14'] - days_open: 14 - desc: Spelling error on Login ('log|n') - issue_num: 94 - sev: Cosmetic ['79', 'Minor', 'System slow when running too many reports', '47'] - days_open: 47 - desc: System slow when running too many reports - issue_num: 79 - sev: Minor """ def __init__( self, other, include=False, ignore=None, failOn=None ): super( SkipTo, self ).__init__( other ) self.ignoreExpr = ignore self.mayReturnEmpty = True self.mayIndexError = False self.includeMatch = include self.asList = False if isinstance(failOn, basestring): self.failOn = ParserElement._literalStringClass(failOn) else: self.failOn = failOn self.errmsg = "No match found for "+_ustr(self.expr) def parseImpl( self, instring, loc, doActions=True ): startloc = loc instrlen = len(instring) expr = self.expr expr_parse = self.expr._parse self_failOn_canParseNext = self.failOn.canParseNext if self.failOn is not None else None self_ignoreExpr_tryParse = self.ignoreExpr.tryParse if self.ignoreExpr is not None else None tmploc = loc while tmploc <= instrlen: if self_failOn_canParseNext is not None: # break if failOn expression matches if self_failOn_canParseNext(instring, tmploc): break if self_ignoreExpr_tryParse is not None: # advance past ignore expressions while 1: try: tmploc = self_ignoreExpr_tryParse(instring, tmploc) except ParseBaseException: break try: expr_parse(instring, tmploc, doActions=False, callPreParse=False) except (ParseException, IndexError): # no match, advance loc in string tmploc += 1 else: # matched skipto expr, done break else: # ran off the end of the input string without matching skipto expr, fail raise ParseException(instring, loc, self.errmsg, self) # build up return values loc = tmploc skiptext = instring[startloc:loc] skipresult = ParseResults(skiptext) if self.includeMatch: loc, mat = expr_parse(instring,loc,doActions,callPreParse=False) skipresult += mat return loc, skipresult class Forward(ParseElementEnhance): """ Forward declaration of an expression to be defined later - used for recursive grammars, such as algebraic infix notation. 
When the expression is known, it is assigned to the C{Forward} variable using the '<<' operator. Note: take care when assigning to C{Forward} not to overlook precedence of operators. Specifically, '|' has a lower precedence than '<<', so that:: fwdExpr << a | b | c will actually be evaluated as:: (fwdExpr << a) | b | c thereby leaving b and c out as parseable alternatives. It is recommended that you explicitly group the values inserted into the C{Forward}:: fwdExpr << (a | b | c) Converting to use the '<<=' operator instead will avoid this problem. See L{ParseResults.pprint} for an example of a recursive parser created using C{Forward}. """ def __init__( self, other=None ): super(Forward,self).__init__( other, savelist=False ) def __lshift__( self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass(other) self.expr = other self.strRepr = None self.mayIndexError = self.expr.mayIndexError self.mayReturnEmpty = self.expr.mayReturnEmpty self.setWhitespaceChars( self.expr.whiteChars ) self.skipWhitespace = self.expr.skipWhitespace self.saveAsList = self.expr.saveAsList self.ignoreExprs.extend(self.expr.ignoreExprs) return self def __ilshift__(self, other): return self << other def leaveWhitespace( self ): self.skipWhitespace = False return self def streamline( self ): if not self.streamlined: self.streamlined = True if self.expr is not None: self.expr.streamline() return self def validate( self, validateTrace=[] ): if self not in validateTrace: tmp = validateTrace[:]+[self] if self.expr is not None: self.expr.validate(tmp) self.checkRecursion([]) def __str__( self ): if hasattr(self,"name"): return self.name return self.__class__.__name__ + ": ..." # stubbed out for now - creates awful memory and perf issues self._revertClass = self.__class__ self.__class__ = _ForwardNoRecurse try: if self.expr is not None: retString = _ustr(self.expr) else: retString = "None" finally: self.__class__ = self._revertClass return self.__class__.__name__ + ": " + retString def copy(self): if self.expr is not None: return super(Forward,self).copy() else: ret = Forward() ret <<= self return ret class _ForwardNoRecurse(Forward): def __str__( self ): return "..." class TokenConverter(ParseElementEnhance): """ Abstract subclass of C{ParseExpression}, for converting parsed results. """ def __init__( self, expr, savelist=False ): super(TokenConverter,self).__init__( expr )#, savelist ) self.saveAsList = False class Combine(TokenConverter): """ Converter to concatenate all matching tokens to a single string. By default, the matching patterns must also be contiguous in the input string; this can be disabled by specifying C{'adjacent=False'} in the constructor. Example:: real = Word(nums) + '.' + Word(nums) print(real.parseString('3.1416')) # -> ['3', '.', '1416'] # will also erroneously match the following print(real.parseString('3. 1416')) # -> ['3', '.', '1416'] real = Combine(Word(nums) + '.' + Word(nums)) print(real.parseString('3.1416')) # -> ['3.1416'] # no match when there are internal spaces print(real.parseString('3. 1416')) # -> Exception: Expected W:(0123...) 
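For comparison, a minimal sketch of the non-adjacent form (C{loose_real} is an illustrative name): passing C{adjacent=False} re-enables whitespace skipping between the component expressions while still concatenating whatever they match:: loose_real = Combine(Word(nums) + '.' + Word(nums), adjacent=False) print(loose_real.parseString('3. 1416')) # -> ['3.1416']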
""" def __init__( self, expr, joinString="", adjacent=True ): super(Combine,self).__init__( expr ) # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself if adjacent: self.leaveWhitespace() self.adjacent = adjacent self.skipWhitespace = True self.joinString = joinString self.callPreparse = True def ignore( self, other ): if self.adjacent: ParserElement.ignore(self, other) else: super( Combine, self).ignore( other ) return self def postParse( self, instring, loc, tokenlist ): retToks = tokenlist.copy() del retToks[:] retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults) if self.resultsName and retToks.haskeys(): return [ retToks ] else: return retToks class Group(TokenConverter): """ Converter to return the matched tokens as a list - useful for returning tokens of C{L{ZeroOrMore}} and C{L{OneOrMore}} expressions. Example:: ident = Word(alphas) num = Word(nums) term = ident | num func = ident + Optional(delimitedList(term)) print(func.parseString("fn a,b,100")) # -> ['fn', 'a', 'b', '100'] func = ident + Group(Optional(delimitedList(term))) print(func.parseString("fn a,b,100")) # -> ['fn', ['a', 'b', '100']] """ def __init__( self, expr ): super(Group,self).__init__( expr ) self.saveAsList = True def postParse( self, instring, loc, tokenlist ): return [ tokenlist ] class Dict(TokenConverter): """ Converter to return a repetitive expression as a list, but also as a dictionary. Each element can also be referenced using the first token in the expression as its key. Useful for tabular report scraping when the first column can be used as a item key. Example:: data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) text = "shape: SQUARE posn: upper left color: light blue texture: burlap" attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) # print attributes as plain groups print(OneOrMore(attr_expr).parseString(text).dump()) # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names result = Dict(OneOrMore(Group(attr_expr))).parseString(text) print(result.dump()) # access named fields as dict entries, or output as dict print(result['shape']) print(result.asDict()) prints:: ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap'] [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - color: light blue - posn: upper left - shape: SQUARE - texture: burlap SQUARE {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'} See more examples at L{ParseResults} of accessing fields by results name. 
""" def __init__( self, expr ): super(Dict,self).__init__( expr ) self.saveAsList = True def postParse( self, instring, loc, tokenlist ): for i,tok in enumerate(tokenlist): if len(tok) == 0: continue ikey = tok[0] if isinstance(ikey,int): ikey = _ustr(tok[0]).strip() if len(tok)==1: tokenlist[ikey] = _ParseResultsWithOffset("",i) elif len(tok)==2 and not isinstance(tok[1],ParseResults): tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i) else: dictvalue = tok.copy() #ParseResults(i) del dictvalue[0] if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.haskeys()): tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i) else: tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i) if self.resultsName: return [ tokenlist ] else: return tokenlist class Suppress(TokenConverter): """ Converter for ignoring the results of a parsed expression. Example:: source = "a, b, c,d" wd = Word(alphas) wd_list1 = wd + ZeroOrMore(',' + wd) print(wd_list1.parseString(source)) # often, delimiters that are useful during parsing are just in the # way afterward - use Suppress to keep them out of the parsed output wd_list2 = wd + ZeroOrMore(Suppress(',') + wd) print(wd_list2.parseString(source)) prints:: ['a', ',', 'b', ',', 'c', ',', 'd'] ['a', 'b', 'c', 'd'] (See also L{delimitedList}.) """ def postParse( self, instring, loc, tokenlist ): return [] def suppress( self ): return self class OnlyOnce(object): """ Wrapper for parse actions, to ensure they are only called once. """ def __init__(self, methodCall): self.callable = _trim_arity(methodCall) self.called = False def __call__(self,s,l,t): if not self.called: results = self.callable(s,l,t) self.called = True return results raise ParseException(s,l,"") def reset(self): self.called = False def traceParseAction(f): """ Decorator for debugging parse actions. When the parse action is called, this decorator will print C{">> entering I{method-name}(line:I{current_source_line}, I{parse_location}, I{matched_tokens})".} When the parse action completes, the decorator will print C{"<<"} followed by the returned value, or any exception that the parse action raised. Example:: wd = Word(alphas) @traceParseAction def remove_duplicate_chars(tokens): return ''.join(sorted(set(''.join(tokens)))) wds = OneOrMore(wd).setParseAction(remove_duplicate_chars) print(wds.parseString("slkdjs sld sldd sdlf sdljf")) prints:: >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {})) <<leaving remove_duplicate_chars (ret: 'dfjkls') ['dfjkls'] """ f = _trim_arity(f) def z(*paArgs): thisFunc = f.__name__ s,l,t = paArgs[-3:] if len(paArgs)>3: thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc sys.stderr.write( ">>entering %s(line: '%s', %d, %r)\n" % (thisFunc,line(l,s),l,t) ) try: ret = f(*paArgs) except Exception as exc: sys.stderr.write( "<<leaving %s (exception: %s)\n" % (thisFunc,exc) ) raise sys.stderr.write( "<<leaving %s (ret: %r)\n" % (thisFunc,ret) ) return ret try: z.__name__ = f.__name__ except AttributeError: pass return z # # global helpers # def delimitedList( expr, delim=",", combine=False ): """ Helper to define a delimited list of expressions - the delimiter defaults to ','. By default, the list elements and delimiters can have intervening whitespace, and comments, but this can be overridden by passing C{combine=True} in the constructor. 
If C{combine} is set to C{True}, the matching tokens are returned as a single token string, with the delimiters included; otherwise, the matching tokens are returned as a list of tokens, with the delimiters suppressed. Example:: delimitedList(Word(alphas)).parseString("aa,bb,cc") # -> ['aa', 'bb', 'cc'] delimitedList(Word(hexnums), delim=':', combine=True).parseString("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] """ dlName = _ustr(expr)+" ["+_ustr(delim)+" "+_ustr(expr)+"]..." if combine: return Combine( expr + ZeroOrMore( delim + expr ) ).setName(dlName) else: return ( expr + ZeroOrMore( Suppress( delim ) + expr ) ).setName(dlName) def countedArray( expr, intExpr=None ): """ Helper to define a counted list of expressions. This helper defines a pattern of the form:: integer expr expr expr... where the leading integer tells how many expr expressions follow. The matched tokens returns the array of expr tokens as a list - the leading count token is suppressed. If C{intExpr} is specified, it should be a pyparsing expression that produces an integer value. Example:: countedArray(Word(alphas)).parseString('2 ab cd ef') # -> ['ab', 'cd'] # in this parser, the leading integer value is given in binary, # '10' indicating that 2 values are in the array binaryConstant = Word('01').setParseAction(lambda t: int(t[0], 2)) countedArray(Word(alphas), intExpr=binaryConstant).parseString('10 ab cd ef') # -> ['ab', 'cd'] """ arrayExpr = Forward() def countFieldParseAction(s,l,t): n = t[0] arrayExpr << (n and Group(And([expr]*n)) or Group(empty)) return [] if intExpr is None: intExpr = Word(nums).setParseAction(lambda t:int(t[0])) else: intExpr = intExpr.copy() intExpr.setName("arrayLen") intExpr.addParseAction(countFieldParseAction, callDuringTry=True) return ( intExpr + arrayExpr ).setName('(len) ' + _ustr(expr) + '...') def _flatten(L): ret = [] for i in L: if isinstance(i,list): ret.extend(_flatten(i)) else: ret.append(i) return ret def matchPreviousLiteral(expr): """ Helper to define an expression that is indirectly defined from the tokens matched in a previous expression, that is, it looks for a 'repeat' of a previous expression. For example:: first = Word(nums) second = matchPreviousLiteral(first) matchExpr = first + ":" + second will match C{"1:1"}, but not C{"1:2"}. Because this matches a previous literal, will also match the leading C{"1:1"} in C{"1:10"}. If this is not desired, use C{matchPreviousExpr}. Do I{not} use with packrat parsing enabled. """ rep = Forward() def copyTokenToRepeater(s,l,t): if t: if len(t) == 1: rep << t[0] else: # flatten t tokens tflat = _flatten(t.asList()) rep << And(Literal(tt) for tt in tflat) else: rep << Empty() expr.addParseAction(copyTokenToRepeater, callDuringTry=True) rep.setName('(prev) ' + _ustr(expr)) return rep def matchPreviousExpr(expr): """ Helper to define an expression that is indirectly defined from the tokens matched in a previous expression, that is, it looks for a 'repeat' of a previous expression. For example:: first = Word(nums) second = matchPreviousExpr(first) matchExpr = first + ":" + second will match C{"1:1"}, but not C{"1:2"}. Because this matches by expressions, will I{not} match the leading C{"1:1"} in C{"1:10"}; the expressions are evaluated first, and then compared, so C{"1"} is compared with C{"10"}. Do I{not} use with packrat parsing enabled. 
""" rep = Forward() e2 = expr.copy() rep <<= e2 def copyTokenToRepeater(s,l,t): matchTokens = _flatten(t.asList()) def mustMatchTheseTokens(s,l,t): theseTokens = _flatten(t.asList()) if theseTokens != matchTokens: raise ParseException("",0,"") rep.setParseAction( mustMatchTheseTokens, callDuringTry=True ) expr.addParseAction(copyTokenToRepeater, callDuringTry=True) rep.setName('(prev) ' + _ustr(expr)) return rep def _escapeRegexRangeChars(s): #~ escape these chars: ^-] for c in r"\^-]": s = s.replace(c,_bslash+c) s = s.replace("\n",r"\n") s = s.replace("\t",r"\t") return _ustr(s) def oneOf( strs, caseless=False, useRegex=True ): """ Helper to quickly define a set of alternative Literals, and makes sure to do longest-first testing when there is a conflict, regardless of the input order, but returns a C{L{MatchFirst}} for best performance. Parameters: - strs - a string of space-delimited literals, or a collection of string literals - caseless - (default=C{False}) - treat all literals as caseless - useRegex - (default=C{True}) - as an optimization, will generate a Regex object; otherwise, will generate a C{MatchFirst} object (if C{caseless=True}, or if creating a C{Regex} raises an exception) Example:: comp_oper = oneOf("< = > <= >= !=") var = Word(alphas) number = Word(nums) term = var | number comparison_expr = term + comp_oper + term print(comparison_expr.searchString("B = 12 AA=23 B<=AA AA>12")) prints:: [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] """ if caseless: isequal = ( lambda a,b: a.upper() == b.upper() ) masks = ( lambda a,b: b.upper().startswith(a.upper()) ) parseElementClass = CaselessLiteral else: isequal = ( lambda a,b: a == b ) masks = ( lambda a,b: b.startswith(a) ) parseElementClass = Literal symbols = [] if isinstance(strs,basestring): symbols = strs.split() elif isinstance(strs, Iterable): symbols = list(strs) else: warnings.warn("Invalid argument to oneOf, expected string or iterable", SyntaxWarning, stacklevel=2) if not symbols: return NoMatch() i = 0 while i < len(symbols)-1: cur = symbols[i] for j,other in enumerate(symbols[i+1:]): if ( isequal(other, cur) ): del symbols[i+j+1] break elif ( masks(cur, other) ): del symbols[i+j+1] symbols.insert(i,other) cur = other break else: i += 1 if not caseless and useRegex: #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] )) try: if len(symbols)==len("".join(symbols)): return Regex( "[%s]" % "".join(_escapeRegexRangeChars(sym) for sym in symbols) ).setName(' | '.join(symbols)) else: return Regex( "|".join(re.escape(sym) for sym in symbols) ).setName(' | '.join(symbols)) except Exception: warnings.warn("Exception creating Regex for oneOf, building MatchFirst", SyntaxWarning, stacklevel=2) # last resort, just use MatchFirst return MatchFirst(parseElementClass(sym) for sym in symbols).setName(' | '.join(symbols)) def dictOf( key, value ): """ Helper to easily and clearly define a dictionary by specifying the respective patterns for the key and value. Takes care of defining the C{L{Dict}}, C{L{ZeroOrMore}}, and C{L{Group}} tokens in the proper order. The key pattern can include delimiting markers or punctuation, as long as they are suppressed, thereby leaving the significant key text. The value pattern can include named results, so that the C{Dict} results can include named token fields. 
Example:: text = "shape: SQUARE posn: upper left color: light blue texture: burlap" attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) print(OneOrMore(attr_expr).parseString(text).dump()) attr_label = label attr_value = Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join) # similar to Dict, but simpler call format result = dictOf(attr_label, attr_value).parseString(text) print(result.dump()) print(result['shape']) print(result.shape) # object attribute access works too print(result.asDict()) prints:: [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - color: light blue - posn: upper left - shape: SQUARE - texture: burlap SQUARE SQUARE {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} """ return Dict( ZeroOrMore( Group ( key + value ) ) ) def originalTextFor(expr, asString=True): """ Helper to return the original, untokenized text for a given expression. Useful to restore the parsed fields of an HTML start tag into the raw tag text itself, or to revert separate tokens with intervening whitespace back to the original matching input text. By default, returns astring containing the original parsed text. If the optional C{asString} argument is passed as C{False}, then the return value is a C{L{ParseResults}} containing any results names that were originally matched, and a single token containing the original matched text from the input string. So if the expression passed to C{L{originalTextFor}} contains expressions with defined results names, you must set C{asString} to C{False} if you want to preserve those results name values. Example:: src = "this is test <b> bold <i>text</i> </b> normal text " for tag in ("b","i"): opener,closer = makeHTMLTags(tag) patt = originalTextFor(opener + SkipTo(closer) + closer) print(patt.searchString(src)[0]) prints:: ['<b> bold <i>text</i> </b>'] ['<i>text</i>'] """ locMarker = Empty().setParseAction(lambda s,loc,t: loc) endlocMarker = locMarker.copy() endlocMarker.callPreparse = False matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") if asString: extractText = lambda s,l,t: s[t._original_start:t._original_end] else: def extractText(s,l,t): t[:] = [s[t.pop('_original_start'):t.pop('_original_end')]] matchExpr.setParseAction(extractText) matchExpr.ignoreExprs = expr.ignoreExprs return matchExpr def ungroup(expr): """ Helper to undo pyparsing's default grouping of And expressions, even if all but one are non-empty. """ return TokenConverter(expr).setParseAction(lambda t:t[0]) def locatedExpr(expr): """ Helper to decorate a returned token with its starting and ending locations in the input string. 
This helper adds the following results names: - locn_start = location where matched expression begins - locn_end = location where matched expression ends - value = the actual parsed results Be careful if the input text contains C{<TAB>} characters, you may want to call C{L{ParserElement.parseWithTabs}} Example:: wd = Word(alphas) for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): print(match) prints:: [[0, 'ljsdf', 5]] [[8, 'lksdjjf', 15]] [[18, 'lkkjj', 23]] """ locator = Empty().setParseAction(lambda s,l,t: l) return Group(locator("locn_start") + expr("value") + locator.copy().leaveWhitespace()("locn_end")) # convenience constants for positional expressions empty = Empty().setName("empty") lineStart = LineStart().setName("lineStart") lineEnd = LineEnd().setName("lineEnd") stringStart = StringStart().setName("stringStart") stringEnd = StringEnd().setName("stringEnd") _escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1]) _escapedHexChar = Regex(r"\\0?[xX][0-9a-fA-F]+").setParseAction(lambda s,l,t:unichr(int(t[0].lstrip(r'\0x'),16))) _escapedOctChar = Regex(r"\\0[0-7]+").setParseAction(lambda s,l,t:unichr(int(t[0][1:],8))) _singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | CharsNotIn(r'\]', exact=1) _charRange = Group(_singleChar + Suppress("-") + _singleChar) _reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]" def srange(s): r""" Helper to easily define string ranges for use in Word construction. Borrows syntax from regexp '[]' string range definitions:: srange("[0-9]") -> "0123456789" srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz" srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_" The input string must be enclosed in []'s, and the returned string is the expanded character set joined into a single string. The values enclosed in the []'s may be: - a single character - an escaped character with a leading backslash (such as C{\-} or C{\]}) - an escaped hex character with a leading C{'\x'} (C{\x21}, which is a C{'!'} character) (C{\0x##} is also supported for backwards compatibility) - an escaped octal character with a leading C{'\0'} (C{\041}, which is a C{'!'} character) - a range of any of the above, separated by a dash (C{'a-z'}, etc.) - any combination of the above (C{'aeiouy'}, C{'a-zA-Z0-9_$'}, etc.) """ _expanded = lambda p: p if not isinstance(p,ParseResults) else ''.join(unichr(c) for c in range(ord(p[0]),ord(p[1])+1)) try: return "".join(_expanded(part) for part in _reBracketExpr.parseString(s).body) except Exception: return "" def matchOnlyAtCol(n): """ Helper method for defining parse actions that require matching at a specific column in the input text. """ def verifyCol(strg,locn,toks): if col(locn,strg) != n: raise ParseException(strg,locn,"matched token not at column %d" % n) return verifyCol def replaceWith(replStr): """ Helper method for common parse actions that simply return a literal value. Especially useful when used with C{L{transformString<ParserElement.transformString>}()}. Example:: num = Word(nums).setParseAction(lambda toks: int(toks[0])) na = oneOf("N/A NA").setParseAction(replaceWith(math.nan)) term = na | num OneOrMore(term).parseString("324 234 N/A 234") # -> [324, 234, nan, 234] """ return lambda s,l,t: [replStr] def removeQuotes(s,l,t): """ Helper parse action for removing quotation marks from parsed quoted strings. 
Example:: # by default, quotation marks are included in parsed results quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] # use removeQuotes to strip quotation marks from parsed results quotedString.setParseAction(removeQuotes) quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] """ return t[0][1:-1] def tokenMap(func, *args): """ Helper to define a parse action by mapping a function to all elements of a ParseResults list. If any additional args are passed, they are forwarded to the given function as additional arguments after the token, as in C{hex_integer = Word(hexnums).setParseAction(tokenMap(int, 16))}, which will convert the parsed data to an integer using base 16. Example (compare the last example to the one in L{ParserElement.transformString}):: hex_ints = OneOrMore(Word(hexnums)).setParseAction(tokenMap(int, 16)) hex_ints.runTests(''' 00 11 22 aa FF 0a 0d 1a ''') upperword = Word(alphas).setParseAction(tokenMap(str.upper)) OneOrMore(upperword).runTests(''' my kingdom for a horse ''') wd = Word(alphas).setParseAction(tokenMap(str.title)) OneOrMore(wd).setParseAction(' '.join).runTests(''' now is the winter of our discontent made glorious summer by this sun of york ''') prints:: 00 11 22 aa FF 0a 0d 1a [0, 17, 34, 170, 255, 10, 13, 26] my kingdom for a horse ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE'] now is the winter of our discontent made glorious summer by this sun of york ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York'] """ def pa(s,l,t): return [func(tokn, *args) for tokn in t] try: func_name = getattr(func, '__name__', getattr(func, '__class__').__name__) except Exception: func_name = str(func) pa.__name__ = func_name return pa upcaseTokens = tokenMap(lambda t: _ustr(t).upper()) """(Deprecated) Helper parse action to convert tokens to upper case. Deprecated in favor of L{pyparsing_common.upcaseTokens}""" downcaseTokens = tokenMap(lambda t: _ustr(t).lower()) """(Deprecated) Helper parse action to convert tokens to lower case.
Deprecated in favor of L{pyparsing_common.downcaseTokens}""" def _makeTags(tagStr, xml): """Internal helper to construct opening and closing tag expressions, given a tag name""" if isinstance(tagStr,basestring): resname = tagStr tagStr = Keyword(tagStr, caseless=not xml) else: resname = tagStr.name tagAttrName = Word(alphas,alphanums+"_-:") if (xml): tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes ) openTag = Suppress("<") + tagStr("tag") + \ Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">") else: printablesLessRAbrack = "".join(c for c in printables if c not in ">") tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack) openTag = Suppress("<") + tagStr("tag") + \ Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \ Optional( Suppress("=") + tagAttrValue ) ))) + \ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">") closeTag = Combine(_L("</") + tagStr + ">") openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % resname) closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % resname) openTag.tag = resname closeTag.tag = resname return openTag, closeTag def makeHTMLTags(tagStr): """ Helper to construct opening and closing tag expressions for HTML, given a tag name. Matches tags in either upper or lower case, attributes with namespaces and with quoted or unquoted values. Example:: text = '<td>More info at the <a href="http://pyparsing.wikispaces.com">pyparsing</a> wiki page</td>' # makeHTMLTags returns pyparsing expressions for the opening and closing tags as a 2-tuple a,a_end = makeHTMLTags("A") link_expr = a + SkipTo(a_end)("link_text") + a_end for link in link_expr.searchString(text): # attributes in the <A> tag (like "href" shown here) are also accessible as named results print(link.link_text, '->', link.href) prints:: pyparsing -> http://pyparsing.wikispaces.com """ return _makeTags( tagStr, False ) def makeXMLTags(tagStr): """ Helper to construct opening and closing tag expressions for XML, given a tag name. Matches tags only in the given upper/lower case. Example: similar to L{makeHTMLTags} """ return _makeTags( tagStr, True ) def withAttribute(*args,**attrDict): """ Helper to create a validating parse action to be used with start tags created with C{L{makeXMLTags}} or C{L{makeHTMLTags}}. Use C{withAttribute} to qualify a starting tag with a required attribute value, to avoid false matches on common tags such as C{<TD>} or C{<DIV>}. Call C{withAttribute} with a series of attribute names and values. Specify the list of filter attributes names and values as: - keyword arguments, as in C{(align="right")}, or - as an explicit dict with C{**} operator, when an attribute name is also a Python reserved word, as in C{**{"class":"Customer", "align":"right"}} - a list of name-value tuples, as in ( ("ns1:class", "Customer"), ("ns2:align","right") ) For attribute names with a namespace prefix, you must use the second form. Attribute names are matched insensitive to upper/lower case. If just testing for C{class} (with or without a namespace), use C{L{withClass}}. To verify that the attribute exists, but without specifying a value, pass C{withAttribute.ANY_VALUE} as the value. 
Example:: html = ''' <div> Some text <div type="grid">1 4 0 1 0</div> <div type="graph">1,3 2,3 1,1</div> <div>this has no type</div> </div> ''' div,div_end = makeHTMLTags("div") # only match div tag having a type attribute with value "grid" div_grid = div().setParseAction(withAttribute(type="grid")) grid_expr = div_grid + SkipTo(div | div_end)("body") for grid_header in grid_expr.searchString(html): print(grid_header.body) # construct a match with any div tag having a type attribute, regardless of the value div_any_type = div().setParseAction(withAttribute(type=withAttribute.ANY_VALUE)) div_expr = div_any_type + SkipTo(div | div_end)("body") for div_header in div_expr.searchString(html): print(div_header.body) prints:: 1 4 0 1 0 1 4 0 1 0 1,3 2,3 1,1 """ if args: attrs = args[:] else: attrs = attrDict.items() attrs = [(k,v) for k,v in attrs] def pa(s,l,tokens): for attrName,attrValue in attrs: if attrName not in tokens: raise ParseException(s,l,"no matching attribute " + attrName) if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue: raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" % (attrName, tokens[attrName], attrValue)) return pa withAttribute.ANY_VALUE = object() def withClass(classname, namespace=''): """ Simplified version of C{L{withAttribute}} when matching on a div class - made difficult because C{class} is a reserved word in Python. Example:: html = ''' <div> Some text <div class="grid">1 4 0 1 0</div> <div class="graph">1,3 2,3 1,1</div> <div>this <div> has no class</div> </div> ''' div,div_end = makeHTMLTags("div") div_grid = div().setParseAction(withClass("grid")) grid_expr = div_grid + SkipTo(div | div_end)("body") for grid_header in grid_expr.searchString(html): print(grid_header.body) div_any_type = div().setParseAction(withClass(withAttribute.ANY_VALUE)) div_expr = div_any_type + SkipTo(div | div_end)("body") for div_header in div_expr.searchString(html): print(div_header.body) prints:: 1 4 0 1 0 1 4 0 1 0 1,3 2,3 1,1 """ classattr = "%s:class" % namespace if namespace else "class" return withAttribute(**{classattr : classname}) opAssoc = _Constants() opAssoc.LEFT = object() opAssoc.RIGHT = object() def infixNotation( baseExpr, opList, lpar=Suppress('('), rpar=Suppress(')') ): """ Helper method for constructing grammars of expressions made up of operators working in a precedence hierarchy. Operators may be unary or binary, left- or right-associative. Parse actions can also be attached to operator expressions. The generated parser will also recognize the use of parentheses to override operator precedences (see example below). Note: if you define a deep operator list, you may see performance issues when using infixNotation. See L{ParserElement.enablePackrat} for a mechanism to potentially improve your parser performance. Parameters: - baseExpr - expression representing the most basic element for the nested - opList - list of tuples, one for each operator precedence level in the expression grammar; each tuple is of the form (opExpr, numTerms, rightLeftAssoc, parseAction), where: - opExpr is the pyparsing expression for the operator; may also be a string, which will be converted to a Literal; if numTerms is 3, opExpr is a tuple of two expressions, for the two operators separating the 3 terms - numTerms is the number of terms for this operator (must be 1, 2, or 3) - rightLeftAssoc is the indicator whether the operator is right or left associative, using the pyparsing-defined constants C{opAssoc.RIGHT} and C{opAssoc.LEFT}. 
- parseAction is the parse action to be associated with expressions matching this operator expression (the parse action tuple member may be omitted); if the parse action is passed a tuple or list of functions, this is equivalent to calling C{setParseAction(*fn)} (L{ParserElement.setParseAction}) - lpar - expression for matching left-parentheses (default=C{Suppress('(')}) - rpar - expression for matching right-parentheses (default=C{Suppress(')')}) Example:: # simple example of four-function arithmetic with ints and variable names integer = pyparsing_common.signed_integer varname = pyparsing_common.identifier arith_expr = infixNotation(integer | varname, [ ('-', 1, opAssoc.RIGHT), (oneOf('* /'), 2, opAssoc.LEFT), (oneOf('+ -'), 2, opAssoc.LEFT), ]) arith_expr.runTests(''' 5+3*6 (5+3)*6 -2--11 ''', fullDump=False) prints:: 5+3*6 [[5, '+', [3, '*', 6]]] (5+3)*6 [[[5, '+', 3], '*', 6]] -2--11 [[['-', 2], '-', ['-', 11]]] """ ret = Forward() lastExpr = baseExpr | ( lpar + ret + rpar ) for i,operDef in enumerate(opList): opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4] termName = "%s term" % opExpr if arity < 3 else "%s%s term" % opExpr if arity == 3: if opExpr is None or len(opExpr) != 2: raise ValueError("if numterms=3, opExpr must be a tuple or list of two expressions") opExpr1, opExpr2 = opExpr thisExpr = Forward().setName(termName) if rightLeftAssoc == opAssoc.LEFT: if arity == 1: matchExpr = FollowedBy(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) ) elif arity == 2: if opExpr is not None: matchExpr = FollowedBy(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) ) else: matchExpr = FollowedBy(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) ) elif arity == 3: matchExpr = FollowedBy(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \ Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr ) else: raise ValueError("operator must be unary (1), binary (2), or ternary (3)") elif rightLeftAssoc == opAssoc.RIGHT: if arity == 1: # try to avoid LR with this extra test if not isinstance(opExpr, Optional): opExpr = Optional(opExpr) matchExpr = FollowedBy(opExpr.expr + thisExpr) + Group( opExpr + thisExpr ) elif arity == 2: if opExpr is not None: matchExpr = FollowedBy(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) ) else: matchExpr = FollowedBy(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) ) elif arity == 3: matchExpr = FollowedBy(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \ Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr ) else: raise ValueError("operator must be unary (1), binary (2), or ternary (3)") else: raise ValueError("operator must indicate right or left associativity") if pa: if isinstance(pa, (tuple, list)): matchExpr.setParseAction(*pa) else: matchExpr.setParseAction(pa) thisExpr <<= ( matchExpr.setName(termName) | lastExpr ) lastExpr = thisExpr ret <<= lastExpr return ret operatorPrecedence = infixNotation """(Deprecated) Former name of C{L{infixNotation}}, will be dropped in a future release.""" dblQuotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"').setName("string enclosed in double quotes") sglQuotedString = Combine(Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("string enclosed in single quotes") quotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"'| Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("quotedString using 
single or double quotes") unicodeString = Combine(_L('u') + quotedString.copy()).setName("unicode string literal") def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString.copy()): """ Helper method for defining nested lists enclosed in opening and closing delimiters ("(" and ")" are the default). Parameters: - opener - opening character for a nested list (default=C{"("}); can also be a pyparsing expression - closer - closing character for a nested list (default=C{")"}); can also be a pyparsing expression - content - expression for items within the nested lists (default=C{None}) - ignoreExpr - expression for ignoring opening and closing delimiters (default=C{quotedString}) If an expression is not provided for the content argument, the nested expression will capture all whitespace-delimited content between delimiters as a list of separate values. Use the C{ignoreExpr} argument to define expressions that may contain opening or closing characters that should not be treated as opening or closing characters for nesting, such as quotedString or a comment expression. Specify multiple expressions using an C{L{Or}} or C{L{MatchFirst}}. The default is L{quotedString}, but if no expressions are to be ignored, then pass C{None} for this argument. Example:: data_type = oneOf("void int short long char float double") decl_data_type = Combine(data_type + Optional(Word('*'))) ident = Word(alphas+'_', alphanums+'_') number = pyparsing_common.number arg = Group(decl_data_type + ident) LPAR,RPAR = map(Suppress, "()") code_body = nestedExpr('{', '}', ignoreExpr=(quotedString | cStyleComment)) c_function = (decl_data_type("type") + ident("name") + LPAR + Optional(delimitedList(arg), [])("args") + RPAR + code_body("body")) c_function.ignore(cStyleComment) source_code = ''' int is_odd(int x) { return (x%2); } int dec_to_hex(char hchar) { if (hchar >= '0' && hchar <= '9') { return (ord(hchar)-ord('0')); } else { return (10+ord(hchar)-ord('A')); } } ''' for func in c_function.searchString(source_code): print("%(name)s (%(type)s) args: %(args)s" % func) prints:: is_odd (int) args: [['int', 'x']] dec_to_hex (int) args: [['char', 'hchar']] """ if opener == closer: raise ValueError("opening and closing strings cannot be the same") if content is None: if isinstance(opener,basestring) and isinstance(closer,basestring): if len(opener) == 1 and len(closer)==1: if ignoreExpr is not None: content = (Combine(OneOrMore(~ignoreExpr + CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: content = (empty.copy()+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS ).setParseAction(lambda t:t[0].strip())) else: if ignoreExpr is not None: content = (Combine(OneOrMore(~ignoreExpr + ~Literal(opener) + ~Literal(closer) + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: raise ValueError("opening and closing arguments must be strings if no content expression is given") ret = Forward() if ignoreExpr is not None: ret <<= Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) ) else: ret <<= Group( Suppress(opener) + ZeroOrMore( ret | content ) + Suppress(closer) ) ret.setName('nested %s%s expression' % (opener,closer)) return ret def indentedBlock(blockStatementExpr, indentStack, indent=True): """ Helper 
method for defining space-delimited indentation blocks, such as those used to define block statements in Python source code. Parameters: - blockStatementExpr - expression defining syntax of statement that is repeated within the indented block - indentStack - list created by caller to manage indentation stack (multiple statementWithIndentedBlock expressions within a single grammar should share a common indentStack) - indent - boolean indicating whether block must be indented beyond the current level; set to False for block of left-most statements (default=C{True}) A valid block must contain at least one C{blockStatement}. Example:: data = ''' def A(z): A1 B = 100 G = A2 A2 A3 B def BB(a,b,c): BB1 def BBA(): bba1 bba2 bba3 C D def spam(x,y): def eggs(z): pass ''' indentStack = [1] stmt = Forward() identifier = Word(alphas, alphanums) funcDecl = ("def" + identifier + Group( "(" + Optional( delimitedList(identifier) ) + ")" ) + ":") func_body = indentedBlock(stmt, indentStack) funcDef = Group( funcDecl + func_body ) rvalue = Forward() funcCall = Group(identifier + "(" + Optional(delimitedList(rvalue)) + ")") rvalue << (funcCall | identifier | Word(nums)) assignment = Group(identifier + "=" + rvalue) stmt << ( funcDef | assignment | identifier ) module_body = OneOrMore(stmt) parseTree = module_body.parseString(data) parseTree.pprint() prints:: [['def', 'A', ['(', 'z', ')'], ':', [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], 'B', ['def', 'BB', ['(', 'a', 'b', 'c', ')'], ':', [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], 'C', 'D', ['def', 'spam', ['(', 'x', 'y', ')'], ':', [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] """ def checkPeerIndent(s,l,t): if l >= len(s): return curCol = col(l,s) if curCol != indentStack[-1]: if curCol > indentStack[-1]: raise ParseFatalException(s,l,"illegal nesting") raise ParseException(s,l,"not a peer entry") def checkSubIndent(s,l,t): curCol = col(l,s) if curCol > indentStack[-1]: indentStack.append( curCol ) else: raise ParseException(s,l,"not a subentry") def checkUnindent(s,l,t): if l >= len(s): return curCol = col(l,s) if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]): raise ParseException(s,l,"not an unindent") indentStack.pop() NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress()) INDENT = (Empty() + Empty().setParseAction(checkSubIndent)).setName('INDENT') PEER = Empty().setParseAction(checkPeerIndent).setName('') UNDENT = Empty().setParseAction(checkUnindent).setName('UNINDENT') if indent: smExpr = Group( Optional(NL) + #~ FollowedBy(blockStatementExpr) + INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT) else: smExpr = Group( Optional(NL) + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) ) blockStatementExpr.ignore(_bslash + LineEnd()) return smExpr.setName('indented block') alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]") punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]") anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:").setName('any tag')) _htmlEntityMap = dict(zip("gt lt amp nbsp quot apos".split(),'><& "\'')) commonHTMLEntity = Regex('&(?P<entity>' + '|'.join(_htmlEntityMap.keys()) +");").setName("common HTML entity") def replaceHTMLEntity(t): """Helper parser action to replace common HTML entities with their special characters""" return _htmlEntityMap.get(t.entity) # it's easy to get these comment structures wrong - they're very common, so may as well make them available
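# (Editor's sketch, not part of the library source.) A hedged illustration of
# the commonHTMLEntity/replaceHTMLEntity helpers defined just above, using the
# standard transformString() pattern; shown commented out so the module's
# import-time behavior is unchanged:
#
#   commonHTMLEntity.setParseAction(replaceHTMLEntity)
#   print(commonHTMLEntity.transformString("x &lt; 10 &amp;&amp; y &gt; 2"))
#   # -> 'x < 10 && y > 2'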
cStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/').setName("C style comment") "Comment of the form C{/* ... */}" htmlComment = Regex(r"<!--[\s\S]*?-->").setName("HTML comment") "Comment of the form C{<!-- ... -->}" restOfLine = Regex(r".*").leaveWhitespace().setName("rest of line") dblSlashComment = Regex(r"//(?:\\\n|[^\n])*").setName("// comment") "Comment of the form C{// ... (to end of line)}" cppStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/'| dblSlashComment).setName("C++ style comment") "Comment of either form C{L{cStyleComment}} or C{L{dblSlashComment}}" javaStyleComment = cppStyleComment "Same as C{L{cppStyleComment}}" pythonStyleComment = Regex(r"#.*").setName("Python style comment") "Comment of the form C{# ... (to end of line)}" _commasepitem = Combine(OneOrMore(Word(printables, excludeChars=',') + Optional( Word(" \t") + ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem") commaSeparatedList = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("commaSeparatedList") """(Deprecated) Predefined expression of 1 or more printable words or quoted strings, separated by commas. This expression is deprecated in favor of L{pyparsing_common.comma_separated_list}.""" # some other useful expressions - using lower-case class name since we are really using this as a namespace class pyparsing_common: """ Here are some common low-level expressions that may be useful in jump-starting parser development: - numeric forms (L{integers<integer>}, L{reals<real>}, L{scientific notation<sci_real>}) - common L{programming identifiers<identifier>} - network addresses (L{MAC<mac_address>}, L{IPv4<ipv4_address>}, L{IPv6<ipv6_address>}) - ISO8601 L{dates<iso8601_date>} and L{datetime<iso8601_datetime>} - L{UUID<uuid>} - L{comma-separated list<comma_separated_list>} Parse actions: - C{L{convertToInteger}} - C{L{convertToFloat}} - C{L{convertToDate}} - C{L{convertToDatetime}} - C{L{stripHTMLTags}} - C{L{upcaseTokens}} - C{L{downcaseTokens}} Example:: pyparsing_common.number.runTests(''' # any int or real number, returned as the appropriate type 100 -100 +100 3.14159 6.02e23 1e-12 ''') pyparsing_common.fnumber.runTests(''' # any int or real number, returned as float 100 -100 +100 3.14159 6.02e23 1e-12 ''') pyparsing_common.hex_integer.runTests(''' # hex numbers 100 FF ''') pyparsing_common.fraction.runTests(''' # fractions 1/2 -3/4 ''') pyparsing_common.mixed_integer.runTests(''' # mixed fractions 1 1/2 -3/4 1-3/4 ''') import uuid pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) pyparsing_common.uuid.runTests(''' # uuid 12345678-1234-5678-1234-567812345678 ''') prints:: # any int or real number, returned as the appropriate type 100 [100] -100 [-100] +100 [100] 3.14159 [3.14159] 6.02e23 [6.02e+23] 1e-12 [1e-12] # any int or real number, returned as float 100 [100.0] -100 [-100.0] +100 [100.0] 3.14159 [3.14159] 6.02e23 [6.02e+23] 1e-12 [1e-12] # hex numbers 100 [256] FF [255] # fractions 1/2 [0.5] -3/4 [-0.75] # mixed fractions 1 [1] 1/2 [0.5] -3/4 [-0.75] 1-3/4 [1.75] # uuid 12345678-1234-5678-1234-567812345678 [UUID('12345678-1234-5678-1234-567812345678')] """ convertToInteger = tokenMap(int) """ Parse action for converting parsed integers to Python int """ convertToFloat = tokenMap(float) """ Parse action for converting parsed numbers to Python float """ integer = Word(nums).setName("integer").setParseAction(convertToInteger) """expression that parses an unsigned integer, returns an int""" hex_integer = Word(hexnums).setName("hex 
integer").setParseAction(tokenMap(int,16)) """expression that parses a hexadecimal integer, returns an int""" signed_integer = Regex(r'[+-]?\d+').setName("signed integer").setParseAction(convertToInteger) """expression that parses an integer with optional leading sign, returns an int""" fraction = (signed_integer().setParseAction(convertToFloat) + '/' + signed_integer().setParseAction(convertToFloat)).setName("fraction") """fractional expression of an integer divided by an integer, returns a float""" fraction.addParseAction(lambda t: t[0]/t[-1]) mixed_integer = (fraction | signed_integer + Optional(Optional('-').suppress() + fraction)).setName("fraction or mixed integer-fraction") """mixed integer of the form 'integer - fraction', with optional leading integer, returns float""" mixed_integer.addParseAction(sum) real = Regex(r'[+-]?\d+\.\d*').setName("real number").setParseAction(convertToFloat) """expression that parses a floating point number and returns a float""" sci_real = Regex(r'[+-]?\d+([eE][+-]?\d+|\.\d*([eE][+-]?\d+)?)').setName("real number with scientific notation").setParseAction(convertToFloat) """expression that parses a floating point number with optional scientific notation and returns a float""" # streamlining this expression makes the docs nicer-looking number = (sci_real | real | signed_integer).streamline() """any numeric expression, returns the corresponding Python type""" fnumber = Regex(r'[+-]?\d+\.?\d*([eE][+-]?\d+)?').setName("fnumber").setParseAction(convertToFloat) """any int or real number, returned as float""" identifier = Word(alphas+'_', alphanums+'_').setName("identifier") """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')""" ipv4_address = Regex(r'(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}').setName("IPv4 address") "IPv4 address (C{0.0.0.0 - 255.255.255.255})" _ipv6_part = Regex(r'[0-9a-fA-F]{1,4}').setName("hex_integer") _full_ipv6_address = (_ipv6_part + (':' + _ipv6_part)*7).setName("full IPv6 address") _short_ipv6_address = (Optional(_ipv6_part + (':' + _ipv6_part)*(0,6)) + "::" + Optional(_ipv6_part + (':' + _ipv6_part)*(0,6))).setName("short IPv6 address") _short_ipv6_address.addCondition(lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8) _mixed_ipv6_address = ("::ffff:" + ipv4_address).setName("mixed IPv6 address") ipv6_address = Combine((_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).setName("IPv6 address")).setName("IPv6 address") "IPv6 address (long, short, or mixed form)" mac_address = Regex(r'[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}').setName("MAC address") "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' 
delimiters)" @staticmethod def convertToDate(fmt="%Y-%m-%d"): """ Helper to create a parse action for converting parsed date string to Python datetime.date Params - - fmt - format to be passed to datetime.strptime (default=C{"%Y-%m-%d"}) Example:: date_expr = pyparsing_common.iso8601_date.copy() date_expr.setParseAction(pyparsing_common.convertToDate()) print(date_expr.parseString("1999-12-31")) prints:: [datetime.date(1999, 12, 31)] """ def cvt_fn(s,l,t): try: return datetime.strptime(t[0], fmt).date() except ValueError as ve: raise ParseException(s, l, str(ve)) return cvt_fn @staticmethod def convertToDatetime(fmt="%Y-%m-%dT%H:%M:%S.%f"): """ Helper to create a parse action for converting parsed datetime string to Python datetime.datetime Params - - fmt - format to be passed to datetime.strptime (default=C{"%Y-%m-%dT%H:%M:%S.%f"}) Example:: dt_expr = pyparsing_common.iso8601_datetime.copy() dt_expr.setParseAction(pyparsing_common.convertToDatetime()) print(dt_expr.parseString("1999-12-31T23:59:59.999")) prints:: [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)] """ def cvt_fn(s,l,t): try: return datetime.strptime(t[0], fmt) except ValueError as ve: raise ParseException(s, l, str(ve)) return cvt_fn iso8601_date = Regex(r'(?P<year>\d{4})(?:-(?P<month>\d\d)(?:-(?P<day>\d\d))?)?').setName("ISO8601 date") "ISO8601 date (C{yyyy-mm-dd})" iso8601_datetime = Regex(r'(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d)[T ](?P<hour>\d\d):(?P<minute>\d\d)(:(?P<second>\d\d(\.\d*)?)?)?(?P<tz>Z|[+-]\d\d:?\d\d)?').setName("ISO8601 datetime") "ISO8601 datetime (C{yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)}) - trailing seconds, milliseconds, and timezone optional; accepts separating C{'T'} or C{' '}" uuid = Regex(r'[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}').setName("UUID") "UUID (C{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx})" _html_stripper = anyOpenTag.suppress() | anyCloseTag.suppress() @staticmethod def stripHTMLTags(s, l, tokens): """ Parse action to remove HTML tags from web page HTML source Example:: # strip HTML links from normal text text = '<td>More info at the <a href="http://pyparsing.wikispaces.com">pyparsing</a> wiki page</td>' td,td_end = makeHTMLTags("TD") table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end print(table_text.parseString(text).body) # -> 'More info at the pyparsing wiki page' """ return pyparsing_common._html_stripper.transformString(tokens[0]) _commasepitem = Combine(OneOrMore(~Literal(",") + ~LineEnd() + Word(printables, excludeChars=',') + Optional( White(" \t") ) ) ).streamline().setName("commaItem") comma_separated_list = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("comma separated list") """Predefined expression of 1 or more printable words or quoted strings, separated by commas.""" upcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).upper())) """Parse action to convert tokens to upper case.""" downcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).lower())) """Parse action to convert tokens to lower case.""" if __name__ == "__main__": selectToken = CaselessLiteral("select") fromToken = CaselessLiteral("from") ident = Word(alphas, alphanums + "_$") columnName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens) columnNameList = Group(delimitedList(columnName)).setName("columns") columnSpec = ('*' | columnNameList) tableName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens) tableNameList = Group(delimitedList(tableName)).setName("tables") simpleSQL = 
selectToken("command") + columnSpec("columns") + fromToken + tableNameList("tables") # demo runTests method, including embedded comments in test string simpleSQL.runTests(""" # '*' as column list and dotted table name select * from SYS.XYZZY # caseless match on "SELECT", and casts back to "select" SELECT * from XYZZY, ABC # list of column names, and mixed case SELECT keyword Select AA,BB,CC from Sys.dual # multiple tables Select A, B, C from Sys.dual, Table2 # invalid SELECT keyword - should fail Xelect A, B, C from Sys.dual # incomplete command - should fail Select # invalid column name - should fail Select ^^^ frox Sys.dual """) pyparsing_common.number.runTests(""" 100 -100 +100 3.14159 6.02e23 1e-12 """) # any int or real number, returned as float pyparsing_common.fnumber.runTests(""" 100 -100 +100 3.14159 6.02e23 1e-12 """) pyparsing_common.hex_integer.runTests(""" 100 FF """) import uuid pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) pyparsing_common.uuid.runTests(""" 12345678-1234-5678-1234-567812345678 """) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/_vendor/six.py�������������������������0000644�0001750�0001750�00000072622�00000000000�027736� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������"""Utilities for writing code that runs on Python 2 and 3""" # Copyright (c) 2010-2015 Benjamin Peterson # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice 
and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from __future__ import absolute_import import functools import itertools import operator import sys import types __author__ = "Benjamin Peterson <benjamin@python.org>" __version__ = "1.10.0" # Useful for very coarse version differentiation. PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PY34 = sys.version_info[0:2] >= (3, 4) if PY3: string_types = str, integer_types = int, class_types = type, text_type = str binary_type = bytes MAXSIZE = sys.maxsize else: string_types = basestring, integer_types = (int, long) class_types = (type, types.ClassType) text_type = unicode binary_type = str if sys.platform.startswith("java"): # Jython always uses 32 bits. MAXSIZE = int((1 << 31) - 1) else: # It's possible to have sizeof(long) != sizeof(Py_ssize_t). class X(object): def __len__(self): return 1 << 31 try: len(X()) except OverflowError: # 32-bit MAXSIZE = int((1 << 31) - 1) else: # 64-bit MAXSIZE = int((1 << 63) - 1) del X def _add_doc(func, doc): """Add documentation to a function.""" func.__doc__ = doc def _import_module(name): """Import module, returning the module after the last dot.""" __import__(name) return sys.modules[name] class _LazyDescr(object): def __init__(self, name): self.name = name def __get__(self, obj, tp): result = self._resolve() setattr(obj, self.name, result) # Invokes __set__. try: # This is a bit ugly, but it avoids running this again by # removing this descriptor. delattr(obj.__class__, self.name) except AttributeError: pass return result class MovedModule(_LazyDescr): def __init__(self, name, old, new=None): super(MovedModule, self).__init__(name) if PY3: if new is None: new = name self.mod = new else: self.mod = old def _resolve(self): return _import_module(self.mod) def __getattr__(self, attr): _module = self._resolve() value = getattr(_module, attr) setattr(self, attr, value) return value class _LazyModule(types.ModuleType): def __init__(self, name): super(_LazyModule, self).__init__(name) self.__doc__ = self.__class__.__doc__ def __dir__(self): attrs = ["__doc__", "__name__"] attrs += [attr.name for attr in self._moved_attributes] return attrs # Subclasses should override this _moved_attributes = [] class MovedAttribute(_LazyDescr): def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): super(MovedAttribute, self).__init__(name) if PY3: if new_mod is None: new_mod = name self.mod = new_mod if new_attr is None: if old_attr is None: new_attr = name else: new_attr = old_attr self.attr = new_attr else: self.mod = old_mod if old_attr is None: old_attr = name self.attr = old_attr def _resolve(self): module = _import_module(self.mod) return getattr(module, self.attr) class _SixMetaPathImporter(object): """ A meta path importer to import six.moves and its submodules. This class implements a PEP302 finder and loader. 
It should be compatible with Python 2.5 and all existing versions of Python3 """ def __init__(self, six_module_name): self.name = six_module_name self.known_modules = {} def _add_module(self, mod, *fullnames): for fullname in fullnames: self.known_modules[self.name + "." + fullname] = mod def _get_module(self, fullname): return self.known_modules[self.name + "." + fullname] def find_module(self, fullname, path=None): if fullname in self.known_modules: return self return None def __get_module(self, fullname): try: return self.known_modules[fullname] except KeyError: raise ImportError("This loader does not know module " + fullname) def load_module(self, fullname): try: # in case of a reload return sys.modules[fullname] except KeyError: pass mod = self.__get_module(fullname) if isinstance(mod, MovedModule): mod = mod._resolve() else: mod.__loader__ = self sys.modules[fullname] = mod return mod def is_package(self, fullname): """ Return true, if the named module is a package. We need this method to get correct spec objects with Python 3.4 (see PEP451) """ return hasattr(self.__get_module(fullname), "__path__") def get_code(self, fullname): """Return None Required, if is_package is implemented""" self.__get_module(fullname) # eventually raises ImportError return None get_source = get_code # same as get_code _importer = _SixMetaPathImporter(__name__) class _MovedItems(_LazyModule): """Lazy loading of moved objects""" __path__ = [] # mark as package _moved_attributes = [ MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), MovedAttribute("intern", "__builtin__", "sys"), MovedAttribute("map", "itertools", "builtins", "imap", "map"), MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), MovedAttribute("reduce", "__builtin__", "functools"), MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), MovedAttribute("StringIO", "StringIO", "io"), MovedAttribute("UserDict", "UserDict", "collections"), MovedAttribute("UserList", "UserList", "collections"), MovedAttribute("UserString", "UserString", "collections"), MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), MovedModule("builtins", "__builtin__"), MovedModule("configparser", "ConfigParser"), MovedModule("copyreg", "copy_reg"), MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), MovedModule("http_cookies", "Cookie", "http.cookies"), MovedModule("html_entities", "htmlentitydefs", "html.entities"), MovedModule("html_parser", "HTMLParser", "html.parser"), MovedModule("http_client", "httplib", "http.client"), MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), MovedModule("email_mime_base", "email.MIMEBase", 
"email.mime.base"), MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), MovedModule("cPickle", "cPickle", "pickle"), MovedModule("queue", "Queue"), MovedModule("reprlib", "repr"), MovedModule("socketserver", "SocketServer"), MovedModule("_thread", "thread", "_thread"), MovedModule("tkinter", "Tkinter"), MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), MovedModule("tkinter_tix", "Tix", "tkinter.tix"), MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), MovedModule("tkinter_font", "tkFont", "tkinter.font"), MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), ] # Add windows specific modules. if sys.platform == "win32": _moved_attributes += [ MovedModule("winreg", "_winreg"), ] for attr in _moved_attributes: setattr(_MovedItems, attr.name, attr) if isinstance(attr, MovedModule): _importer._add_module(attr, "moves." 
+ attr.name) del attr _MovedItems._moved_attributes = _moved_attributes moves = _MovedItems(__name__ + ".moves") _importer._add_module(moves, "moves") class Module_six_moves_urllib_parse(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_parse""" _urllib_parse_moved_attributes = [ MovedAttribute("ParseResult", "urlparse", "urllib.parse"), MovedAttribute("SplitResult", "urlparse", "urllib.parse"), MovedAttribute("parse_qs", "urlparse", "urllib.parse"), MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), MovedAttribute("urldefrag", "urlparse", "urllib.parse"), MovedAttribute("urljoin", "urlparse", "urllib.parse"), MovedAttribute("urlparse", "urlparse", "urllib.parse"), MovedAttribute("urlsplit", "urlparse", "urllib.parse"), MovedAttribute("urlunparse", "urlparse", "urllib.parse"), MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), MovedAttribute("quote", "urllib", "urllib.parse"), MovedAttribute("quote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote", "urllib", "urllib.parse"), MovedAttribute("unquote_plus", "urllib", "urllib.parse"), MovedAttribute("urlencode", "urllib", "urllib.parse"), MovedAttribute("splitquery", "urllib", "urllib.parse"), MovedAttribute("splittag", "urllib", "urllib.parse"), MovedAttribute("splituser", "urllib", "urllib.parse"), MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), MovedAttribute("uses_params", "urlparse", "urllib.parse"), MovedAttribute("uses_query", "urlparse", "urllib.parse"), MovedAttribute("uses_relative", "urlparse", "urllib.parse"), ] for attr in _urllib_parse_moved_attributes: setattr(Module_six_moves_urllib_parse, attr.name, attr) del attr Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes _importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), "moves.urllib_parse", "moves.urllib.parse") class Module_six_moves_urllib_error(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_error""" _urllib_error_moved_attributes = [ MovedAttribute("URLError", "urllib2", "urllib.error"), MovedAttribute("HTTPError", "urllib2", "urllib.error"), MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), ] for attr in _urllib_error_moved_attributes: setattr(Module_six_moves_urllib_error, attr.name, attr) del attr Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), "moves.urllib_error", "moves.urllib.error") class Module_six_moves_urllib_request(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_request""" _urllib_request_moved_attributes = [ MovedAttribute("urlopen", "urllib2", "urllib.request"), MovedAttribute("install_opener", "urllib2", "urllib.request"), MovedAttribute("build_opener", "urllib2", "urllib.request"), MovedAttribute("pathname2url", "urllib", "urllib.request"), MovedAttribute("url2pathname", "urllib", "urllib.request"), MovedAttribute("getproxies", "urllib", "urllib.request"), MovedAttribute("Request", "urllib2", "urllib.request"), MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), MovedAttribute("BaseHandler", "urllib2", "urllib.request"), 
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), MovedAttribute("FileHandler", "urllib2", "urllib.request"), MovedAttribute("FTPHandler", "urllib2", "urllib.request"), MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), MovedAttribute("urlretrieve", "urllib", "urllib.request"), MovedAttribute("urlcleanup", "urllib", "urllib.request"), MovedAttribute("URLopener", "urllib", "urllib.request"), MovedAttribute("FancyURLopener", "urllib", "urllib.request"), MovedAttribute("proxy_bypass", "urllib", "urllib.request"), ] for attr in _urllib_request_moved_attributes: setattr(Module_six_moves_urllib_request, attr.name, attr) del attr Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), "moves.urllib_request", "moves.urllib.request") class Module_six_moves_urllib_response(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_response""" _urllib_response_moved_attributes = [ MovedAttribute("addbase", "urllib", "urllib.response"), MovedAttribute("addclosehook", "urllib", "urllib.response"), MovedAttribute("addinfo", "urllib", "urllib.response"), MovedAttribute("addinfourl", "urllib", "urllib.response"), ] for attr in _urllib_response_moved_attributes: setattr(Module_six_moves_urllib_response, attr.name, attr) del attr Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), "moves.urllib_response", "moves.urllib.response") class Module_six_moves_urllib_robotparser(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_robotparser""" _urllib_robotparser_moved_attributes = [ MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), ] for attr in _urllib_robotparser_moved_attributes: setattr(Module_six_moves_urllib_robotparser, attr.name, attr) del attr Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), "moves.urllib_robotparser", "moves.urllib.robotparser") class Module_six_moves_urllib(types.ModuleType): """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" __path__ = [] # mark as package parse = _importer._get_module("moves.urllib_parse") error = _importer._get_module("moves.urllib_error") request = _importer._get_module("moves.urllib_request") response = _importer._get_module("moves.urllib_response") robotparser = _importer._get_module("moves.urllib_robotparser") def __dir__(self): return ['parse', 'error', 'request', 'response', 'robotparser'] 
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib") def add_move(move): """Add an item to six.moves.""" setattr(_MovedItems, move.name, move) def remove_move(name): """Remove item from six.moves.""" try: delattr(_MovedItems, name) except AttributeError: try: del moves.__dict__[name] except KeyError: raise AttributeError("no such move, %r" % (name,)) if PY3: _meth_func = "__func__" _meth_self = "__self__" _func_closure = "__closure__" _func_code = "__code__" _func_defaults = "__defaults__" _func_globals = "__globals__" else: _meth_func = "im_func" _meth_self = "im_self" _func_closure = "func_closure" _func_code = "func_code" _func_defaults = "func_defaults" _func_globals = "func_globals" try: advance_iterator = next except NameError: def advance_iterator(it): return it.next() next = advance_iterator try: callable = callable except NameError: def callable(obj): return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) if PY3: def get_unbound_function(unbound): return unbound create_bound_method = types.MethodType def create_unbound_method(func, cls): return func Iterator = object else: def get_unbound_function(unbound): return unbound.im_func def create_bound_method(func, obj): return types.MethodType(func, obj, obj.__class__) def create_unbound_method(func, cls): return types.MethodType(func, None, cls) class Iterator(object): def next(self): return type(self).__next__(self) callable = callable _add_doc(get_unbound_function, """Get the function out of a possibly unbound function""") get_method_function = operator.attrgetter(_meth_func) get_method_self = operator.attrgetter(_meth_self) get_function_closure = operator.attrgetter(_func_closure) get_function_code = operator.attrgetter(_func_code) get_function_defaults = operator.attrgetter(_func_defaults) get_function_globals = operator.attrgetter(_func_globals) if PY3: def iterkeys(d, **kw): return iter(d.keys(**kw)) def itervalues(d, **kw): return iter(d.values(**kw)) def iteritems(d, **kw): return iter(d.items(**kw)) def iterlists(d, **kw): return iter(d.lists(**kw)) viewkeys = operator.methodcaller("keys") viewvalues = operator.methodcaller("values") viewitems = operator.methodcaller("items") else: def iterkeys(d, **kw): return d.iterkeys(**kw) def itervalues(d, **kw): return d.itervalues(**kw) def iteritems(d, **kw): return d.iteritems(**kw) def iterlists(d, **kw): return d.iterlists(**kw) viewkeys = operator.methodcaller("viewkeys") viewvalues = operator.methodcaller("viewvalues") viewitems = operator.methodcaller("viewitems") _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") _add_doc(itervalues, "Return an iterator over the values of a dictionary.") _add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") _add_doc(iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary.") if PY3: def b(s): return s.encode("latin-1") def u(s): return s unichr = chr import struct int2byte = struct.Struct(">B").pack del struct byte2int = operator.itemgetter(0) indexbytes = operator.getitem iterbytes = iter import io StringIO = io.StringIO BytesIO = io.BytesIO _assertCountEqual = "assertCountEqual" if sys.version_info[1] <= 1: _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" else: _assertRaisesRegex = "assertRaisesRegex" _assertRegex = "assertRegex" else: def b(s): return s # Workaround for standalone backslash def u(s): return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") unichr = unichr int2byte 
= chr def byte2int(bs): return ord(bs[0]) def indexbytes(buf, i): return ord(buf[i]) iterbytes = functools.partial(itertools.imap, ord) import StringIO StringIO = BytesIO = StringIO.StringIO _assertCountEqual = "assertItemsEqual" _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" _add_doc(b, """Byte literal""") _add_doc(u, """Text literal""") def assertCountEqual(self, *args, **kwargs): return getattr(self, _assertCountEqual)(*args, **kwargs) def assertRaisesRegex(self, *args, **kwargs): return getattr(self, _assertRaisesRegex)(*args, **kwargs) def assertRegex(self, *args, **kwargs): return getattr(self, _assertRegex)(*args, **kwargs) if PY3: exec_ = getattr(moves.builtins, "exec") def reraise(tp, value, tb=None): if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value else: def exec_(_code_, _globs_=None, _locs_=None): """Execute code in a namespace.""" if _globs_ is None: frame = sys._getframe(1) _globs_ = frame.f_globals if _locs_ is None: _locs_ = frame.f_locals del frame elif _locs_ is None: _locs_ = _globs_ exec("""exec _code_ in _globs_, _locs_""") exec_("""def reraise(tp, value, tb=None): raise tp, value, tb """) if sys.version_info[:2] == (3, 2): exec_("""def raise_from(value, from_value): if from_value is None: raise value raise value from from_value """) elif sys.version_info[:2] > (3, 2): exec_("""def raise_from(value, from_value): raise value from from_value """) else: def raise_from(value, from_value): raise value print_ = getattr(moves.builtins, "print", None) if print_ is None: def print_(*args, **kwargs): """The new-style print function for Python 2.4 and 2.5.""" fp = kwargs.pop("file", sys.stdout) if fp is None: return def write(data): if not isinstance(data, basestring): data = str(data) # If the file has an encoding, encode unicode with it. 
if (isinstance(fp, file) and isinstance(data, unicode) and fp.encoding is not None): errors = getattr(fp, "errors", None) if errors is None: errors = "strict" data = data.encode(fp.encoding, errors) fp.write(data) want_unicode = False sep = kwargs.pop("sep", None) if sep is not None: if isinstance(sep, unicode): want_unicode = True elif not isinstance(sep, str): raise TypeError("sep must be None or a string") end = kwargs.pop("end", None) if end is not None: if isinstance(end, unicode): want_unicode = True elif not isinstance(end, str): raise TypeError("end must be None or a string") if kwargs: raise TypeError("invalid keyword arguments to print()") if not want_unicode: for arg in args: if isinstance(arg, unicode): want_unicode = True break if want_unicode: newline = unicode("\n") space = unicode(" ") else: newline = "\n" space = " " if sep is None: sep = space if end is None: end = newline for i, arg in enumerate(args): if i: write(sep) write(arg) write(end) if sys.version_info[:2] < (3, 3): _print = print_ def print_(*args, **kwargs): fp = kwargs.get("file", sys.stdout) flush = kwargs.pop("flush", False) _print(*args, **kwargs) if flush and fp is not None: fp.flush() _add_doc(reraise, """Reraise an exception.""") if sys.version_info[0:2] < (3, 4): def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, updated=functools.WRAPPER_UPDATES): def wrapper(f): f = functools.wraps(wrapped, assigned, updated)(f) f.__wrapped__ = wrapped return f return wrapper else: wraps = functools.wraps def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(meta): def __new__(cls, name, this_bases, d): return meta(name, bases, d) return type.__new__(metaclass, 'temporary_class', (), {}) def add_metaclass(metaclass): """Class decorator for creating a class with a metaclass.""" def wrapper(cls): orig_vars = cls.__dict__.copy() slots = orig_vars.get('__slots__') if slots is not None: if isinstance(slots, str): slots = [slots] for slots_var in slots: orig_vars.pop(slots_var) orig_vars.pop('__dict__', None) orig_vars.pop('__weakref__', None) return metaclass(cls.__name__, cls.__bases__, orig_vars) return wrapper def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if PY2: if '__str__' not in klass.__dict__: raise ValueError("@python_2_unicode_compatible cannot be applied " "to %s because it doesn't define __str__()." % klass.__name__) klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode('utf-8') return klass # Complete the moves implementation. # This code is at the end of this module to speed up module loading. # Turn this module into a package. __path__ = [] # required for PEP 302 and PEP 451 __package__ = __name__ # see PEP 366 @ReservedAssignment if globals().get("__spec__") is not None: __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable # Remove other six meta path importers, since they cause problems. This can # happen if six is removed from sys.modules and then reloaded. (Setuptools does # this for some reason.) 
if sys.meta_path:
    for i, importer in enumerate(sys.meta_path):
        # Here's some real nastiness: Another "instance" of the six module might
        # be floating around. Therefore, we can't use isinstance() to check for
        # the six meta path importer, since the other six instance will have
        # inserted an importer with different class.
        if (type(importer).__name__ == "_SixMetaPathImporter" and
                importer.name == __name__):
            del sys.meta_path[i]
            break
    del i, importer

# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)
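# ----------------------------------------------------------------------------
# (Editor's sketch, not part of the vendored file.) A minimal illustration of
# the six shims defined above, using only APIs present in this module:
import six
from six.moves import range  # xrange on Py2, the builtin range on Py3

d = {"a": 1, "b": 2}
for k, v in six.iteritems(d):      # d.iteritems() on Py2, iter(d.items()) on Py3
    print(k, v)

class Meta(type):
    pass

class Base(six.with_metaclass(Meta, object)):  # one metaclass syntax for Py2/Py3
    pass

total = sum(range(5))              # 10 on both major versions
# ----------------------------------------------------------------------------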
cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/extern/__init__.py

import sys


class VendorImporter:
    """
    A PEP 302 meta path importer for finding optionally-vendored
    or otherwise naturally-installed packages from root_name.
    """

    def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
        self.root_name = root_name
        self.vendored_names = set(vendored_names)
        self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')

    @property
    def search_path(self):
        """
        Search first the vendor package then as a natural package.
        """
        yield self.vendor_pkg + '.'
        yield ''

    def find_module(self, fullname, path=None):
        """
        Return self when fullname starts with root_name and the
        target module is one vendored through this importer.
        """
        root, base, target = fullname.partition(self.root_name + '.')
        if root:
            return
        if not any(map(target.startswith, self.vendored_names)):
            return
        return self

    def load_module(self, fullname):
        """
        Iterate over the search path to locate and load fullname.
        """
        root, base, target = fullname.partition(self.root_name + '.')
        for prefix in self.search_path:
            try:
                extant = prefix + target
                __import__(extant)
                mod = sys.modules[extant]
                sys.modules[fullname] = mod
                # mysterious hack:
                # Remove the reference to the extant package/module
                # on later Python versions to cause relative imports
                # in the vendor package to resolve the same modules
                # as those going through this importer.
                if prefix and sys.version_info > (3, 3):
                    del sys.modules[extant]
                return mod
            except ImportError:
                pass
        else:
            raise ImportError(
                "The '{target}' package is required; "
                "normally this is bundled with this package so if you get "
                "this warning, consult the packager of your "
                "distribution.".format(**locals())
            )

    def install(self):
        """
        Install this importer into sys.meta_path if not already present.
        """
        if self not in sys.meta_path:
            sys.meta_path.append(self)
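# (Editor's sketch, not part of the vendored file.) A quick check of the
# find_module() contract above: the importer only claims dotted names under
# its root that begin with one of the vendored names.
_demo = VendorImporter('pkg_resources.extern', vendored_names=('six',))
assert _demo.find_module('pkg_resources.extern.six') is _demo    # claimed
assert _demo.find_module('pkg_resources.extern.flask') is None   # not vendored
assert _demo.find_module('unrelated.module') is None             # outside the root
del _demo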
""" if self not in sys.meta_path: sys.meta_path.append(self) names = 'packaging', 'pyparsing', 'six', 'appdirs' VendorImporter(__name__, names).install() ��������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/pkg_resources/py31compat.py��������������������������0000644�0001750�0001750�00000001056�00000000000�027470� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������import os import errno import sys from .extern import six def _makedirs_31(path, exist_ok=False): try: os.makedirs(path) except OSError as exc: if not exist_ok or exc.errno != errno.EEXIST: raise # rely on compatibility behavior until mode considerations # and exists_ok considerations are disentangled. 
# See https://github.com/pypa/setuptools/pull/1083#issuecomment-315168663
needs_makedirs = (
    six.PY2 or
    (3, 4) <= sys.version_info < (3, 4, 1)
)
makedirs = _makedirs_31 if needs_makedirs else os.makedirs
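# (Editor's sketch, not part of the vendored file.) With the shim above,
# pkg_resources gets an os.makedirs() whose exist_ok= keyword also works on
# Python 2 and early 3.4; on modern interpreters `makedirs` is plain
# os.makedirs. Note py31compat was dropped from newer setuptools releases, so
# this import only works against the vintage vendored here.
import os.path
import tempfile
from pkg_resources import py31compat

target = os.path.join(tempfile.mkdtemp(), "a", "b")
py31compat.makedirs(target, exist_ok=True)  # creates the nested directories
py31compat.makedirs(target, exist_ok=True)  # second call is a no-op, not an error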
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/pyximport/__init__.py
======================================================================

from .pyximport import *

# replicate docstring
from .pyximport import __doc__

======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/pyximport/pyxbuild.py
======================================================================

"""Build a Pyrex file from .pyx source to .so loadable module using
the installed distutils infrastructure.
Call:

    out_fname = pyx_to_dll("foo.pyx")
"""

import os
import sys

from distutils.errors import DistutilsArgError, DistutilsError, CCompilerError
from distutils.extension import Extension
from distutils.util import grok_environment_error

try:
    from Cython.Distutils.old_build_ext import old_build_ext as build_ext
    HAS_CYTHON = True
except ImportError:
    HAS_CYTHON = False

DEBUG = 0

_reloads = {}


def pyx_to_dll(filename, ext=None, force_rebuild=0, build_in_temp=False,
               pyxbuild_dir=None, setup_args=None, reload_support=False,
               inplace=False):
    """Compile a PYX file to a DLL and return the name of the generated
    .so or .dll ."""
    assert os.path.exists(filename), "Could not find %s" % os.path.abspath(filename)

    path, name = os.path.split(os.path.abspath(filename))

    if not ext:
        modname, extension = os.path.splitext(name)
        assert extension in (".pyx", ".py"), extension
        if not HAS_CYTHON:
            filename = filename[:-len(extension)] + '.c'
        ext = Extension(name=modname, sources=[filename])

    if setup_args is None:
        setup_args = {}
    if not pyxbuild_dir:
        pyxbuild_dir = os.path.join(path, "_pyxbld")

    package_base_dir = path
    for package_name in ext.name.split('.')[-2::-1]:
        package_base_dir, pname = os.path.split(package_base_dir)
        if pname != package_name:
            # something is wrong - package path doesn't match file path
            package_base_dir = None
            break

    script_args = setup_args.get("script_args", [])
    if DEBUG or "--verbose" in script_args:
        quiet = "--verbose"
    else:
        quiet = "--quiet"
    args = [quiet, "build_ext"]
    if force_rebuild:
        args.append("--force")
    if inplace and package_base_dir:
        args.extend(['--build-lib', package_base_dir])
        if ext.name == '__init__' or ext.name.endswith('.__init__'):
            # package => provide __path__ early
            if not hasattr(ext, 'cython_directives'):
                ext.cython_directives = {'set_initial_path': 'SOURCEFILE'}
            elif 'set_initial_path' not in ext.cython_directives:
                ext.cython_directives['set_initial_path'] = 'SOURCEFILE'

    if HAS_CYTHON and build_in_temp:
        args.append("--pyrex-c-in-temp")
    sargs = setup_args.copy()
    sargs.update({
        "script_name": None,
        "script_args": args + script_args,
    })
    # late import, in case setuptools replaced it
    from distutils.dist import Distribution
    dist = Distribution(sargs)
    if not dist.ext_modules:
        dist.ext_modules = []
    dist.ext_modules.append(ext)
    if HAS_CYTHON:
        dist.cmdclass = {'build_ext': build_ext}
    build = dist.get_command_obj('build')
    build.build_base = pyxbuild_dir

    cfgfiles = dist.find_config_files()
    dist.parse_config_files(cfgfiles)

    try:
        ok = dist.parse_command_line()
    except DistutilsArgError:
        raise

    if DEBUG:
        print("options (after parsing command line):")
        dist.dump_option_dicts()
    assert ok

    try:
        obj_build_ext = dist.get_command_obj("build_ext")
        dist.run_commands()
        so_path = obj_build_ext.get_outputs()[0]
        if obj_build_ext.inplace:
            # Python distutils get_outputs() returns a wrong so_path
            # when --inplace ; see http://bugs.python.org/issue5977
            # workaround:
            so_path = os.path.join(os.path.dirname(filename),
                                   os.path.basename(so_path))
        if reload_support:
            org_path = so_path
            timestamp = os.path.getmtime(org_path)
            global _reloads
            last_timestamp, last_path, count = _reloads.get(org_path, (None, None, 0))
            if last_timestamp == timestamp:
                so_path = last_path
            else:
                basename = os.path.basename(org_path)
                while count < 100:
                    count += 1
                    r_path = os.path.join(obj_build_ext.build_lib,
                                          basename + '.reload%s' % count)
                    try:
                        import shutil  # late import / reload_support is: debugging
                        try:
                            # Try to unlink first --- if the .so file
                            # is mmapped by another process,
                            # overwriting its contents corrupts the
                            # loaded image (on Linux)
                            # and crashes the
                            # other process. On Windows, unlinking an
                            # open file just fails.
                            if os.path.isfile(r_path):
                                os.unlink(r_path)
                        except OSError:
                            continue
                        shutil.copy2(org_path, r_path)
                        so_path = r_path
                    except IOError:
                        continue
                    break
                else:
                    # used up all 100 slots
                    raise ImportError("reload count for %s reached maximum" % org_path)
                _reloads[org_path] = (timestamp, so_path, count)
        return so_path
    except KeyboardInterrupt:
        sys.exit(1)
    except (IOError, os.error):
        exc = sys.exc_info()[1]
        error = grok_environment_error(exc)
        if DEBUG:
            sys.stderr.write(error + "\n")
        raise


if __name__ == "__main__":
    pyx_to_dll("dummy.pyx")
    from . import test
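
# Illustrative sketch (not in the original module): driving the builder
# above by hand.  "hello.pyx" and the build directory are made up.
def _sketch_pyx_to_dll_usage():
    so_path = pyx_to_dll(
        "hello.pyx",                   # Cython source to compile
        force_rebuild=1,               # ignore timestamps, always rebuild
        pyxbuild_dir="/tmp/_pyxbld",   # where build artifacts go
    )
    print("extension module built at", so_path)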
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/pyximport/pyximport.py
======================================================================

"""
Import hooks; when installed with the install() function, these hooks
allow importing .pyx files as if they were Python modules.

If you want the hook installed every time you run Python
you can add it to your Python version by adding these lines to
sitecustomize.py (which you can create from scratch in site-packages
if it doesn't exist there or somewhere else on your python path)::

    import pyximport
    pyximport.install()

For instance on the Mac with a non-system Python 2.3, you could create
sitecustomize.py with only those two lines at
/usr/local/lib/python2.3/site-packages/sitecustomize.py .

A custom distutils.core.Extension instance and setup() args
(Distribution) for the build can be defined by a <modulename>.pyxbld
file like:

    # examplemod.pyxbld
    def make_ext(modname, pyxfilename):
        from distutils.extension import Extension
        return Extension(name=modname,
                         sources=[pyxfilename, 'hello.c'],
                         include_dirs=['/myinclude'])

    def make_setup_args():
        return dict(script_args=["--compiler=mingw32"])

Extra dependencies can be defined by a <modulename>.pyxdep .
See README.

Since Cython 0.11, the :mod:`pyximport` module also has experimental
compilation support for normal Python modules.  This allows you to
automatically run Cython on every .pyx and .py module that Python
imports, including parts of the standard library and installed
packages.  Cython will still fail to compile a lot of Python modules,
in which case the import mechanism will fall back to loading the
Python source modules instead.  The .py import mechanism is installed
like this::

    pyximport.install(pyimport=True)

Running this module as a top-level script will run a test and then
print the documentation.

This code is based on the Py2.3+ import protocol as described in
PEP 302.
"""

import glob
import imp
import os
import sys
from zipimport import zipimporter, ZipImportError

mod_name = "pyximport"

PYX_EXT = ".pyx"
PYXDEP_EXT = ".pyxdep"
PYXBLD_EXT = ".pyxbld"

DEBUG_IMPORT = False


def _print(message, args):
    if args:
        message = message % args
    print(message)


def _debug(message, *args):
    if DEBUG_IMPORT:
        _print(message, args)


def _info(message, *args):
    _print(message, args)


# Performance problem: for every PYX file that is imported, we will
# invoke the whole distutils infrastructure even if the module is
# already built.  It might be more efficient to only do it when the
# mod time of the .pyx is newer than the mod time of the .so but
# the question is how to get distutils to tell me the name of the .so
# before it builds it.  Maybe it is easy...but maybe the performance
# issue isn't real.
def _load_pyrex(name, filename):
    "Load a pyrex file given a name and filename."
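
# Illustrative sketch (not in the original module): the end-to-end
# workflow promised by the module docstring above.  "fib.pyx" is a
# made-up example module.
def _sketch_pyximport_workflow():
    with open("fib.pyx", "w") as f:
        f.write(
            "def fib(int n):\n"
            "    cdef int a = 0, b = 1, i\n"
            "    for i in range(n):\n"
            "        a, b = b, a + b\n"
            "    return a\n"
        )
    import pyximport
    pyximport.install()
    import fib          # compiled transparently on first import
    print(fib.fib(10))  # -> 55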
def get_distutils_extension(modname, pyxfilename, language_level=None):
#    try:
#        import hashlib
#    except ImportError:
#        import md5 as hashlib
#    extra = "_" + hashlib.md5(open(pyxfilename).read()).hexdigest()
#    modname = modname + extra
    extension_mod, setup_args = handle_special_build(modname, pyxfilename)
    if not extension_mod:
        if not isinstance(pyxfilename, str):
            # distutils is stupid in Py2 and requires exactly 'str'
            # => encode accidentally coerced unicode strings back to str
            pyxfilename = pyxfilename.encode(sys.getfilesystemencoding())
        from distutils.extension import Extension
        extension_mod = Extension(name=modname, sources=[pyxfilename])
        if language_level is not None:
            extension_mod.cython_directives = {'language_level': language_level}
    return extension_mod, setup_args


def handle_special_build(modname, pyxfilename):
    special_build = os.path.splitext(pyxfilename)[0] + PYXBLD_EXT
    ext = None
    setup_args = {}
    if os.path.exists(special_build):
        # globls = {}
        # locs = {}
        # execfile(special_build, globls, locs)
        # ext = locs["make_ext"](modname, pyxfilename)
        mod = imp.load_source("XXXX", special_build, open(special_build))
        make_ext = getattr(mod, 'make_ext', None)
        if make_ext:
            ext = make_ext(modname, pyxfilename)
            assert ext and ext.sources, \
                "make_ext in %s did not return Extension" % special_build
        make_setup_args = getattr(mod, 'make_setup_args', None)
        if make_setup_args:
            setup_args = make_setup_args()
            assert isinstance(setup_args, dict), \
                "make_setup_args in %s did not return a dict" % special_build
        assert ext or setup_args, \
            "neither make_ext nor make_setup_args in %s" % special_build
        ext.sources = [os.path.join(os.path.dirname(special_build), source)
                       for source in ext.sources]
    return ext, setup_args


def handle_dependencies(pyxfilename):
    testing = '_test_files' in globals()
    dependfile = os.path.splitext(pyxfilename)[0] + PYXDEP_EXT

    # by default let distutils decide whether to rebuild on its own
    # (it has a better idea of what the output file will be)

    # but we know more about dependencies so force a rebuild if
    # some of the dependencies are newer than the pyxfile.
    if os.path.exists(dependfile):
        depends = open(dependfile).readlines()
        depends = [depend.strip() for depend in depends]

        # gather dependencies in the "files" variable
        # the dependency file is itself a dependency
        files = [dependfile]
        for depend in depends:
            fullpath = os.path.join(os.path.dirname(dependfile), depend)
            files.extend(glob.glob(fullpath))

        # only for unit testing to see we did the right thing
        if testing:
            _test_files[:] = []  # $pycheck_no

        # if any file that the pyxfile depends upon is newer than
        # the pyx file, 'touch' the pyx file so that distutils will
        # be tricked into rebuilding it.
        for file in files:
            from distutils.dep_util import newer
            if newer(file, pyxfilename):
                _debug("Rebuilding %s because of %s", pyxfilename, file)
                filetime = os.path.getmtime(file)
                os.utime(pyxfilename, (filetime, filetime))
                if testing:
                    _test_files.append(file)
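
# Illustrative sketch (not in the original module): the .pyxdep
# convention consumed by handle_dependencies() above.  File names are
# made up; each line of the .pyxdep file is a glob pattern resolved
# relative to the dependency file itself.
def _sketch_pyxdep_file():
    with open("fib.pyxdep", "w") as f:
        f.write("fib_helpers.h\n*.pxi\n")
    # On the next import, handle_dependencies() re-touches fib.pyx
    # whenever fib_helpers.h or any .pxi file is newer, which tricks
    # distutils into rebuilding the extension.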
def build_module(name, pyxfilename, pyxbuild_dir=None, inplace=False, language_level=None):
    assert os.path.exists(pyxfilename), "Path does not exist: %s" % pyxfilename
    handle_dependencies(pyxfilename)

    extension_mod, setup_args = get_distutils_extension(name, pyxfilename, language_level)
    build_in_temp = pyxargs.build_in_temp
    sargs = pyxargs.setup_args.copy()
    sargs.update(setup_args)
    build_in_temp = sargs.pop('build_in_temp', build_in_temp)

    from . import pyxbuild
    so_path = pyxbuild.pyx_to_dll(pyxfilename, extension_mod,
                                  build_in_temp=build_in_temp,
                                  pyxbuild_dir=pyxbuild_dir,
                                  setup_args=sargs,
                                  inplace=inplace,
                                  reload_support=pyxargs.reload_support)
    assert os.path.exists(so_path), "Cannot find: %s" % so_path

    junkpath = os.path.join(os.path.dirname(so_path), name + "_*")  # very dangerous with --inplace ? yes, indeed, trying to eat my files ;)
    junkstuff = glob.glob(junkpath)
    for path in junkstuff:
        if path != so_path:
            try:
                os.remove(path)
            except IOError:
                _info("Couldn't remove %s", path)

    return so_path


def load_module(name, pyxfilename, pyxbuild_dir=None, is_package=False,
                build_inplace=False, language_level=None, so_path=None):
    try:
        if so_path is None:
            if is_package:
                module_name = name + '.__init__'
            else:
                module_name = name
            so_path = build_module(module_name, pyxfilename, pyxbuild_dir,
                                   inplace=build_inplace,
                                   language_level=language_level)
        mod = imp.load_dynamic(name, so_path)
        if is_package and not hasattr(mod, '__path__'):
            mod.__path__ = [os.path.dirname(so_path)]
        assert mod.__file__ == so_path, (mod.__file__, so_path)
    except Exception:
        if pyxargs.load_py_module_on_import_failure and pyxfilename.endswith('.py'):
            # try to fall back to normal import
            mod = imp.load_source(name, pyxfilename)
            assert mod.__file__ in (pyxfilename, pyxfilename + 'c', pyxfilename + 'o'), \
                (mod.__file__, pyxfilename)
        else:
            tb = sys.exc_info()[2]
            import traceback
            exc = ImportError("Building module %s failed: %s" % (
                name, traceback.format_exception_only(*sys.exc_info()[:2])))
            if sys.version_info[0] >= 3:
                raise exc.with_traceback(tb)
            else:
                exec("raise exc, None, tb", {'exc': exc, 'tb': tb})
    return mod


# import hooks

class PyxImporter(object):
    """A meta-path importer for .pyx files.
    """
    def __init__(self, extension=PYX_EXT, pyxbuild_dir=None, inplace=False,
                 language_level=None):
        self.extension = extension
        self.pyxbuild_dir = pyxbuild_dir
        self.inplace = inplace
        self.language_level = language_level

    def find_module(self, fullname, package_path=None):
        if fullname in sys.modules and not pyxargs.reload_support:
            return None  # only here when reload()

        # package_path might be a _NamespacePath. Convert that into a list...
        if package_path is not None and not isinstance(package_path, list):
            package_path = list(package_path)
        try:
            fp, pathname, (ext, mode, ty) = imp.find_module(fullname, package_path)
            if fp:
                fp.close()  # Python should offer a Default-Loader to avoid this double find/open!
            if pathname and ty == imp.PKG_DIRECTORY:
                pkg_file = os.path.join(pathname, '__init__' + self.extension)
                if os.path.isfile(pkg_file):
                    return PyxLoader(fullname, pathname,
                                     init_path=pkg_file,
                                     pyxbuild_dir=self.pyxbuild_dir,
                                     inplace=self.inplace,
                                     language_level=self.language_level)
            if pathname and pathname.endswith(self.extension):
                return PyxLoader(fullname, pathname,
                                 pyxbuild_dir=self.pyxbuild_dir,
                                 inplace=self.inplace,
                                 language_level=self.language_level)
            if ty != imp.C_EXTENSION:  # only when an extension, check if we have a .pyx next!
                return None

            # find .pyx fast, when .so/.pyd exist --inplace
            pyxpath = os.path.splitext(pathname)[0] + self.extension
            if os.path.isfile(pyxpath):
                return PyxLoader(fullname, pyxpath,
                                 pyxbuild_dir=self.pyxbuild_dir,
                                 inplace=self.inplace,
                                 language_level=self.language_level)

            # .so/.pyd's on PATH should not be remote from .pyx's
            # think no need to implement PyxArgs.importer_search_remote here?
        except ImportError:
            pass

        # searching sys.path ...
#if DEBUG_IMPORT: print "SEARCHING", fullname, package_path mod_parts = fullname.split('.') module_name = mod_parts[-1] pyx_module_name = module_name + self.extension # this may work, but it returns the file content, not its path #import pkgutil #pyx_source = pkgutil.get_data(package, pyx_module_name) paths = package_path or sys.path for path in paths: pyx_data = None if not path: path = os.getcwd() elif os.path.isfile(path): try: zi = zipimporter(path) pyx_data = zi.get_data(pyx_module_name) except (ZipImportError, IOError, OSError): continue # Module not found. # unzip the imported file into the build dir # FIXME: can interfere with later imports if build dir is in sys.path and comes before zip file path = self.pyxbuild_dir elif not os.path.isabs(path): path = os.path.abspath(path) pyx_module_path = os.path.join(path, pyx_module_name) if pyx_data is not None: if not os.path.exists(path): try: os.makedirs(path) except OSError: # concurrency issue? if not os.path.exists(path): raise with open(pyx_module_path, "wb") as f: f.write(pyx_data) elif not os.path.isfile(pyx_module_path): continue # Module not found. return PyxLoader(fullname, pyx_module_path, pyxbuild_dir=self.pyxbuild_dir, inplace=self.inplace, language_level=self.language_level) # not found, normal package, not a .pyx file, none of our business _debug("%s not found" % fullname) return None class PyImporter(PyxImporter): """A meta-path importer for normal .py files. """ def __init__(self, pyxbuild_dir=None, inplace=False, language_level=None): if language_level is None: language_level = sys.version_info[0] self.super = super(PyImporter, self) self.super.__init__(extension='.py', pyxbuild_dir=pyxbuild_dir, inplace=inplace, language_level=language_level) self.uncompilable_modules = {} self.blocked_modules = ['Cython', 'pyxbuild', 'pyximport.pyxbuild', 'distutils.extension', 'distutils.sysconfig'] def find_module(self, fullname, package_path=None): if fullname in sys.modules: return None if fullname.startswith('Cython.'): return None if fullname in self.blocked_modules: # prevent infinite recursion return None if _lib_loader.knows(fullname): return _lib_loader _debug("trying import of module '%s'", fullname) if fullname in self.uncompilable_modules: path, last_modified = self.uncompilable_modules[fullname] try: new_last_modified = os.stat(path).st_mtime if new_last_modified > last_modified: # import would fail again return None except OSError: # module is no longer where we found it, retry the import pass self.blocked_modules.append(fullname) try: importer = self.super.find_module(fullname, package_path) if importer is not None: if importer.init_path: path = importer.init_path real_name = fullname + '.__init__' else: path = importer.path real_name = fullname _debug("importer found path %s for module %s", path, real_name) try: so_path = build_module( real_name, path, pyxbuild_dir=self.pyxbuild_dir, language_level=self.language_level, inplace=self.inplace) _lib_loader.add_lib(fullname, path, so_path, is_package=bool(importer.init_path)) return _lib_loader except Exception: if DEBUG_IMPORT: import traceback traceback.print_exc() # build failed, not a compilable Python module try: last_modified = os.stat(path).st_mtime except OSError: last_modified = 0 self.uncompilable_modules[fullname] = (path, last_modified) importer = None finally: self.blocked_modules.pop() return importer class LibLoader(object): def __init__(self): self._libs = {} def load_module(self, fullname): try: source_path, so_path, is_package = self._libs[fullname] except 
class LibLoader(object):
    def __init__(self):
        self._libs = {}

    def load_module(self, fullname):
        try:
            source_path, so_path, is_package = self._libs[fullname]
        except KeyError:
            raise ValueError("invalid module %s" % fullname)
        _debug("Loading shared library module '%s' from %s", fullname, so_path)
        return load_module(fullname, source_path, so_path=so_path, is_package=is_package)

    def add_lib(self, fullname, path, so_path, is_package):
        self._libs[fullname] = (path, so_path, is_package)

    def knows(self, fullname):
        return fullname in self._libs

_lib_loader = LibLoader()


class PyxLoader(object):
    def __init__(self, fullname, path, init_path=None, pyxbuild_dir=None,
                 inplace=False, language_level=None):
        _debug("PyxLoader created for loading %s from %s (init path: %s)",
               fullname, path, init_path)
        self.fullname = fullname
        self.path, self.init_path = path, init_path
        self.pyxbuild_dir = pyxbuild_dir
        self.inplace = inplace
        self.language_level = language_level

    def load_module(self, fullname):
        assert self.fullname == fullname, (
            "invalid module, expected %s, got %s" % (
                self.fullname, fullname))
        if self.init_path:
            # package
            # print "PACKAGE", fullname
            module = load_module(fullname, self.init_path,
                                 self.pyxbuild_dir, is_package=True,
                                 build_inplace=self.inplace,
                                 language_level=self.language_level)
            module.__path__ = [self.path]
        else:
            # print "MODULE", fullname
            module = load_module(fullname, self.path,
                                 self.pyxbuild_dir,
                                 build_inplace=self.inplace,
                                 language_level=self.language_level)
        return module


# install args
class PyxArgs(object):
    build_dir = True
    build_in_temp = True
    setup_args = {}  # None

##pyxargs=None


def _have_importers():
    has_py_importer = False
    has_pyx_importer = False
    for importer in sys.meta_path:
        if isinstance(importer, PyxImporter):
            if isinstance(importer, PyImporter):
                has_py_importer = True
            else:
                has_pyx_importer = True

    return has_py_importer, has_pyx_importer


def install(pyximport=True, pyimport=False, build_dir=None, build_in_temp=True,
            setup_args=None, reload_support=False,
            load_py_module_on_import_failure=False, inplace=False,
            language_level=None):
    """ Main entry point for pyxinstall.

    Call this to install the ``.pyx`` import hook in
    your meta-path for a single Python process.  If you want it to be
    installed whenever you use Python, add it to your ``sitecustomize``
    (as described above).

    :param pyximport: If set to False, does not try to import ``.pyx`` files.

    :param pyimport: You can pass ``pyimport=True`` to also
        install the ``.py`` import hook
        in your meta-path.  Note, however, that it is rather experimental,
        will not work at all for some ``.py`` files and packages, and will
        heavily slow down your imports due to search and compilation.
        Use at your own risk.

    :param build_dir: By default, compiled modules will end up in a ``.pyxbld``
        directory in the user's home directory.  Passing a different path
        as ``build_dir`` will override this.

    :param build_in_temp: If ``False``, will produce the C files locally.
        Working with complex dependencies and debugging becomes more easy.
        This can principally interfere with existing files of the same name.

    :param setup_args: Dict of arguments for Distribution.
        See ``distutils.core.setup()``.

    :param reload_support: Enables support for dynamic
        ``reload(my_module)``, e.g. after a change in the Cython code.
        Additional files ``<so_path>.reloadNN`` may arise on that account, when
        the previously loaded module file cannot be overwritten.

    :param load_py_module_on_import_failure: If the compilation of a ``.py``
        file succeeds, but the subsequent import fails for some reason,
        retry the import with the normal ``.py`` module instead of the
        compiled module.
        Note that this may lead to unpredictable results for modules that
        change the system state during their import, as the second import
        will rerun these modifications in whatever state the system was left
        after the import of the compiled module failed.

    :param inplace: Install the compiled module (``.so`` for Linux and Mac /
        ``.pyd`` for Windows) next to the source file.

    :param language_level: The source language level to use: 2 or 3.
        The default is to use the language level of the current Python
        runtime for .py files and Py2 for ``.pyx`` files.
    """
    if setup_args is None:
        setup_args = {}
    if not build_dir:
        build_dir = os.path.join(os.path.expanduser('~'), '.pyxbld')

    global pyxargs
    pyxargs = PyxArgs()  # $pycheck_no
    pyxargs.build_dir = build_dir
    pyxargs.build_in_temp = build_in_temp
    pyxargs.setup_args = (setup_args or {}).copy()
    pyxargs.reload_support = reload_support
    pyxargs.load_py_module_on_import_failure = load_py_module_on_import_failure

    has_py_importer, has_pyx_importer = _have_importers()
    py_importer, pyx_importer = None, None

    if pyimport and not has_py_importer:
        py_importer = PyImporter(pyxbuild_dir=build_dir, inplace=inplace,
                                 language_level=language_level)
        # make sure we import Cython before we install the import hook
        import Cython.Compiler.Main, Cython.Compiler.Pipeline, Cython.Compiler.Optimize
        sys.meta_path.insert(0, py_importer)

    if pyximport and not has_pyx_importer:
        pyx_importer = PyxImporter(pyxbuild_dir=build_dir, inplace=inplace,
                                   language_level=language_level)
        sys.meta_path.append(pyx_importer)

    return py_importer, pyx_importer


def uninstall(py_importer, pyx_importer):
    """
    Uninstall an import hook.
    """
    try:
        sys.meta_path.remove(py_importer)
    except ValueError:
        pass

    try:
        sys.meta_path.remove(pyx_importer)
    except ValueError:
        pass


# MAIN

def show_docs():
    import __main__
    __main__.__name__ = mod_name
    for name in dir(__main__):
        item = getattr(__main__, name)
        try:
            setattr(item, "__module__", mod_name)
        except (AttributeError, TypeError):
            pass
    help(__main__)


if __name__ == '__main__':
    show_docs()
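
# Illustrative sketch (not in the original module): installing and
# removing the hooks with the options documented above.  The build
# directory is made up.
def _sketch_install_uninstall():
    py_imp, pyx_imp = install(
        build_dir="/tmp/pyxbld-demo",  # instead of ~/.pyxbld
        reload_support=True,           # allow reload() after edits
        language_level=3,
    )
    # ... import some .pyx modules here ...
    uninstall(py_imp, pyx_imp)         # remove the hooks again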
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/__init__.py
======================================================================

"""Extensions to the 'distutils' for large or complex distributions"""

import os
import sys
import functools
import distutils.core
import distutils.filelist
import re
from distutils.errors import DistutilsOptionError
from distutils.util import convert_path
from fnmatch import fnmatchcase

from ._deprecation_warning import SetuptoolsDeprecationWarning

from setuptools.extern.six import PY3, string_types
from setuptools.extern.six.moves import filter, map

import setuptools.version
from setuptools.extension import Extension
from setuptools.dist import Distribution, Feature
from setuptools.depends import Require
from . import monkey

__metaclass__ = type


__all__ = [
    'setup', 'Distribution', 'Feature', 'Command', 'Extension', 'Require',
    'SetuptoolsDeprecationWarning',
    'find_packages'
]

if PY3:
    __all__.append('find_namespace_packages')

__version__ = setuptools.version.__version__

bootstrap_install_from = None

# If we run 2to3 on .py files, should we also convert docstrings?
# Default: yes; assume that we can detect doctests reliably
run_2to3_on_doctests = True
# Standard package names for fixer packages
lib2to3_fixer_packages = ['lib2to3.fixes']


class PackageFinder:
    """
    Generate a list of all Python packages found within a directory
    """

    @classmethod
    def find(cls, where='.', exclude=(), include=('*',)):
        """Return a list of all Python packages found within directory 'where'

        'where' is the root directory which will be searched for packages.  It
        should be supplied as a "cross-platform" (i.e. URL-style) path; it
        will be converted to the appropriate local path syntax.

        'exclude' is a sequence of package names to exclude; '*' can be used
        as a wildcard in the names, such that 'foo.*' will exclude all
        subpackages of 'foo' (but not 'foo' itself).

        'include' is a sequence of package names to include.  If it's
        specified, only the named packages will be included.  If it's not
        specified, all found packages will be included.  'include' can contain
        shell style wildcard patterns just like 'exclude'.
        """

        return list(cls._find_packages_iter(
            convert_path(where),
            cls._build_filter('ez_setup', '*__pycache__', *exclude),
            cls._build_filter(*include)))

    @classmethod
    def _find_packages_iter(cls, where, exclude, include):
        """
        All the packages found in 'where' that pass the 'include' filter, but
        not the 'exclude' filter.
        """
        for root, dirs, files in os.walk(where, followlinks=True):
            # Copy dirs to iterate over it, then empty dirs.
            all_dirs = dirs[:]
            dirs[:] = []

            for dir in all_dirs:
                full_path = os.path.join(root, dir)
                rel_path = os.path.relpath(full_path, where)
                package = rel_path.replace(os.path.sep, '.')

                # Skip directory trees that are not valid packages
                if ('.' in dir or not cls._looks_like_package(full_path)):
                    continue

                # Should this package be included?
                if include(package) and not exclude(package):
                    yield package

                # Keep searching subdirectories, as there may be more packages
                # down there, even if the parent was excluded.
                dirs.append(dir)

    @staticmethod
    def _looks_like_package(path):
        """Does a directory look like a package?"""
        return os.path.isfile(os.path.join(path, '__init__.py'))

    @staticmethod
    def _build_filter(*patterns):
        """
        Given a list of patterns, return a callable that will be true only if
        the input matches at least one of the patterns.
        """
        return lambda name: any(fnmatchcase(name, pat=pat) for pat in patterns)


class PEP420PackageFinder(PackageFinder):
    @staticmethod
    def _looks_like_package(path):
        return True


find_packages = PackageFinder.find

if PY3:
    find_namespace_packages = PEP420PackageFinder.find


def _install_setup_requires(attrs):
    # Note: do not use `setuptools.Distribution` directly, as
    # our PEP 517 backend patches `distutils.core.Distribution`.
    dist = distutils.core.Distribution(dict(
        (k, v) for k, v in attrs.items()
        if k in ('dependency_links', 'setup_requires')
    ))
    # Honor setup.cfg's options.
    dist.parse_config_files(ignore_option_errors=True)
    if dist.setup_requires:
        dist.fetch_build_eggs(dist.setup_requires)


def setup(**attrs):
    # Make sure we have any requirements needed to interpret 'attrs'.
    _install_setup_requires(attrs)
    return distutils.core.setup(**attrs)

setup.__doc__ = distutils.core.setup.__doc__


_Command = monkey.get_unpatched(distutils.core.Command)


class Command(_Command):
    __doc__ = _Command.__doc__

    command_consumes_arguments = False

    def __init__(self, dist, **kw):
        """
        Construct the command for dist, updating
        vars(self) with any keyword parameters.
""" _Command.__init__(self, dist) vars(self).update(kw) def _ensure_stringlike(self, option, what, default=None): val = getattr(self, option) if val is None: setattr(self, option, default) return default elif not isinstance(val, string_types): raise DistutilsOptionError("'%s' must be a %s (got `%s`)" % (option, what, val)) return val def ensure_string_list(self, option): r"""Ensure that 'option' is a list of strings. If 'option' is currently a string, we split it either on /,\s*/ or /\s+/, so "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become ["foo", "bar", "baz"]. """ val = getattr(self, option) if val is None: return elif isinstance(val, string_types): setattr(self, option, re.split(r',\s*|\s+', val)) else: if isinstance(val, list): ok = all(isinstance(v, string_types) for v in val) else: ok = False if not ok: raise DistutilsOptionError( "'%s' must be a list of strings (got %r)" % (option, val)) def reinitialize_command(self, command, reinit_subcommands=0, **kw): cmd = _Command.reinitialize_command(self, command, reinit_subcommands) vars(cmd).update(kw) return cmd def _find_all_simple(path): """ Find all files under 'path' """ results = ( os.path.join(base, file) for base, dirs, files in os.walk(path, followlinks=True) for file in files ) return filter(os.path.isfile, results) def findall(dir=os.curdir): """ Find all files under 'dir' and return the list of full filenames. Unless dir is '.', return full filenames with dir prepended. """ files = _find_all_simple(dir) if dir == os.curdir: make_rel = functools.partial(os.path.relpath, start=dir) files = map(make_rel, files) return list(files) # Apply monkey patches monkey.patch_all() �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_deprecation_warning.py�������������������0000644�0001750�0001750�00000000332�00000000000�031213� 
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_deprecation_warning.py
======================================================================

class SetuptoolsDeprecationWarning(Warning):
    """
    Base class for warning deprecations in ``setuptools``

    This class is not derived from ``DeprecationWarning``, and as such is
    visible by default.
    """
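
# Illustrative sketch (not in the original module): because the class
# above does not derive from DeprecationWarning, its messages are shown
# by default, without any warning-filter tweaking.
def _sketch_warning_visibility():
    import warnings
    warnings.warn("demo deprecation message", SetuptoolsDeprecationWarning)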
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/__init__.py
======================================================================

(empty file)
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/__about__.py
======================================================================

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

__all__ = [
    "__title__", "__summary__", "__uri__", "__version__", "__author__",
    "__email__", "__license__", "__copyright__",
]

__title__ = "packaging"
__summary__ = "Core utilities for Python packages"
__uri__ = "https://github.com/pypa/packaging"

__version__ = "16.8"

__author__ = "Donald Stufft and individual contributors"
__email__ = "donald@stufft.io"

__license__ = "BSD or Apache License, Version 2.0"
__copyright__ = "Copyright 2014-2016 %s" % __author__

======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/__init__.py
======================================================================
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

from .__about__ import (
    __author__, __copyright__, __email__, __license__, __summary__, __title__,
    __uri__, __version__
)

__all__ = [
    "__title__", "__summary__", "__uri__", "__version__", "__author__",
    "__email__", "__license__", "__copyright__",
]

======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/_compat.py
======================================================================

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import sys


PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3

# flake8: noqa

if PY3:
    string_types = str,
else:
    string_types = basestring,


def with_metaclass(meta, *bases):
    """
    Create a base class with a metaclass.
    """
    # This requires a bit of explanation: the basic idea is to make a dummy
    # metaclass for one level of class instantiation that replaces itself with
    # the actual metaclass.
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})
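
# Illustrative sketch (not in the original module): declaring a class
# with a metaclass in a way that works on both Python 2 and 3, without
# the diverging `__metaclass__` / `metaclass=` spellings.  The names
# below are made up for the example.
class _SketchMeta(type):
    def __new__(mcls, name, bases, ns):
        ns.setdefault("tag", name.lower())
        return super(_SketchMeta, mcls).__new__(mcls, name, bases, ns)

class _SketchTagged(with_metaclass(_SketchMeta, object)):
    # _SketchTagged.tag == "_sketchtagged", injected by the metaclass.
    pass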
======================================================================
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/_structures.py
======================================================================

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function


class Infinity(object):

    def __repr__(self):
        return "Infinity"

    def __hash__(self):
        return hash(repr(self))

    def __lt__(self, other):
        return False

    def __le__(self, other):
        return False

    def __eq__(self, other):
        return isinstance(other, self.__class__)

    def __ne__(self, other):
        return not isinstance(other, self.__class__)

    def __gt__(self, other):
        return True

    def __ge__(self, other):
        return True

    def __neg__(self):
        return NegativeInfinity

Infinity = Infinity()


class NegativeInfinity(object):

    def __repr__(self):
        return "-Infinity"

    def __hash__(self):
        return hash(repr(self))

    def __lt__(self, other):
        return True

    def __le__(self, other):
        return True

    def __eq__(self, other):
        return isinstance(other, self.__class__)

    def __ne__(self, other):
        return not isinstance(other, self.__class__)

    def __gt__(self, other):
        return False

    def __ge__(self, other):
        return False

    def __neg__(self):
        return Infinity

NegativeInfinity = NegativeInfinity()
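
# Illustrative sketch (not in the original module): the two sentinels
# above compare the same way against any value, which makes them useful
# as padding when building sort keys for version comparison.
def _sketch_infinity_ordering():
    print(Infinity > 10 ** 9)            # True
    print(NegativeInfinity < "")         # True
    print(sorted([3, Infinity, -1, NegativeInfinity]))
    # -> [-Infinity, -1, 3, Infinity]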
""" class UndefinedComparison(ValueError): """ An invalid operation was attempted on a value that doesn't support it. """ class UndefinedEnvironmentName(ValueError): """ A name was attempted to be used that does not exist inside of the environment. """ class Node(object): def __init__(self, value): self.value = value def __str__(self): return str(self.value) def __repr__(self): return "<{0}({1!r})>".format(self.__class__.__name__, str(self)) def serialize(self): raise NotImplementedError class Variable(Node): def serialize(self): return str(self) class Value(Node): def serialize(self): return '"{0}"'.format(self) class Op(Node): def serialize(self): return str(self) VARIABLE = ( L("implementation_version") | L("platform_python_implementation") | L("implementation_name") | L("python_full_version") | L("platform_release") | L("platform_version") | L("platform_machine") | L("platform_system") | L("python_version") | L("sys_platform") | L("os_name") | L("os.name") | # PEP-345 L("sys.platform") | # PEP-345 L("platform.version") | # PEP-345 L("platform.machine") | # PEP-345 L("platform.python_implementation") | # PEP-345 L("python_implementation") | # undocumented setuptools legacy L("extra") ) ALIASES = { 'os.name': 'os_name', 'sys.platform': 'sys_platform', 'platform.version': 'platform_version', 'platform.machine': 'platform_machine', 'platform.python_implementation': 'platform_python_implementation', 'python_implementation': 'platform_python_implementation' } VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0]))) VERSION_CMP = ( L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<") ) MARKER_OP = VERSION_CMP | L("not in") | L("in") MARKER_OP.setParseAction(lambda s, l, t: Op(t[0])) MARKER_VALUE = QuotedString("'") | QuotedString('"') MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0])) BOOLOP = L("and") | L("or") MARKER_VAR = VARIABLE | MARKER_VALUE MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR) MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0])) LPAREN = L("(").suppress() RPAREN = L(")").suppress() MARKER_EXPR = Forward() MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN) MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR) MARKER = stringStart + MARKER_EXPR + stringEnd def _coerce_parse_result(results): if isinstance(results, ParseResults): return [_coerce_parse_result(i) for i in results] else: return results def _format_marker(marker, first=True): assert isinstance(marker, (list, tuple, string_types)) # Sometimes we have a structure like [[...]] which is a single item list # where the single item is itself it's own list. In that case we want skip # the rest of this function so that we don't get extraneous () on the # outside. 
if (isinstance(marker, list) and len(marker) == 1 and isinstance(marker[0], (list, tuple))): return _format_marker(marker[0]) if isinstance(marker, list): inner = (_format_marker(m, first=False) for m in marker) if first: return " ".join(inner) else: return "(" + " ".join(inner) + ")" elif isinstance(marker, tuple): return " ".join([m.serialize() for m in marker]) else: return marker _operators = { "in": lambda lhs, rhs: lhs in rhs, "not in": lambda lhs, rhs: lhs not in rhs, "<": operator.lt, "<=": operator.le, "==": operator.eq, "!=": operator.ne, ">=": operator.ge, ">": operator.gt, } def _eval_op(lhs, op, rhs): try: spec = Specifier("".join([op.serialize(), rhs])) except InvalidSpecifier: pass else: return spec.contains(lhs) oper = _operators.get(op.serialize()) if oper is None: raise UndefinedComparison( "Undefined {0!r} on {1!r} and {2!r}.".format(op, lhs, rhs) ) return oper(lhs, rhs) _undefined = object() def _get_env(environment, name): value = environment.get(name, _undefined) if value is _undefined: raise UndefinedEnvironmentName( "{0!r} does not exist in evaluation environment.".format(name) ) return value def _evaluate_markers(markers, environment): groups = [[]] for marker in markers: assert isinstance(marker, (list, tuple, string_types)) if isinstance(marker, list): groups[-1].append(_evaluate_markers(marker, environment)) elif isinstance(marker, tuple): lhs, op, rhs = marker if isinstance(lhs, Variable): lhs_value = _get_env(environment, lhs.value) rhs_value = rhs.value else: lhs_value = lhs.value rhs_value = _get_env(environment, rhs.value) groups[-1].append(_eval_op(lhs_value, op, rhs_value)) else: assert marker in ["and", "or"] if marker == "or": groups.append([]) return any(all(item) for item in groups) def format_full_version(info): version = '{0.major}.{0.minor}.{0.micro}'.format(info) kind = info.releaselevel if kind != 'final': version += kind[0] + str(info.serial) return version def default_environment(): if hasattr(sys, 'implementation'): iver = format_full_version(sys.implementation.version) implementation_name = sys.implementation.name else: iver = '0' implementation_name = '' return { "implementation_name": implementation_name, "implementation_version": iver, "os_name": os.name, "platform_machine": platform.machine(), "platform_release": platform.release(), "platform_system": platform.system(), "platform_version": platform.version(), "python_full_version": platform.python_version(), "platform_python_implementation": platform.python_implementation(), "python_version": platform.python_version()[:3], "sys_platform": sys.platform, } class Marker(object): def __init__(self, marker): try: self._markers = _coerce_parse_result(MARKER.parseString(marker)) except ParseException as e: err_str = "Invalid marker: {0!r}, parse error at {1!r}".format( marker, marker[e.loc:e.loc + 8]) raise InvalidMarker(err_str) def __str__(self): return _format_marker(self._markers) def __repr__(self): return "<Marker({0!r})>".format(str(self)) def evaluate(self, environment=None): """Evaluate a marker. Return the boolean from evaluating the given marker against the environment. environment is an optional argument to override all or part of the determined environment. The environment is determined from the current Python process. 
""" current_environment = default_environment() if environment is not None: current_environment.update(environment) return _evaluate_markers(self._markers, current_environment) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/requirements.py���������0000644�0001750�0001750�00000010367�00000000000�033146� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This file is dual licensed under the terms of the Apache License, Version # 2.0, and the BSD License. See the LICENSE file in the root of this repository # for complete details. from __future__ import absolute_import, division, print_function import string import re from setuptools.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException from setuptools.extern.pyparsing import ZeroOrMore, Word, Optional, Regex, Combine from setuptools.extern.pyparsing import Literal as L # noqa from setuptools.extern.six.moves.urllib import parse as urlparse from .markers import MARKER_EXPR, Marker from .specifiers import LegacySpecifier, Specifier, SpecifierSet class InvalidRequirement(ValueError): """ An invalid requirement was found, users should refer to PEP 508. 
""" ALPHANUM = Word(string.ascii_letters + string.digits) LBRACKET = L("[").suppress() RBRACKET = L("]").suppress() LPAREN = L("(").suppress() RPAREN = L(")").suppress() COMMA = L(",").suppress() SEMICOLON = L(";").suppress() AT = L("@").suppress() PUNCTUATION = Word("-_.") IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM) IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END)) NAME = IDENTIFIER("name") EXTRA = IDENTIFIER URI = Regex(r'[^ ]+')("url") URL = (AT + URI) EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA) EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras") VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE) VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE) VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY VERSION_MANY = Combine(VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False)("_raw_spec") _VERSION_SPEC = Optional(((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY)) _VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or '') VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier") VERSION_SPEC.setParseAction(lambda s, l, t: t[1]) MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") MARKER_EXPR.setParseAction( lambda s, l, t: Marker(s[t._original_start:t._original_end]) ) MARKER_SEPERATOR = SEMICOLON MARKER = MARKER_SEPERATOR + MARKER_EXPR VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER) URL_AND_MARKER = URL + Optional(MARKER) NAMED_REQUIREMENT = \ NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER) REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd class Requirement(object): """Parse a requirement. Parse a given requirement string into its parts, such as name, specifier, URL, and extras. Raises InvalidRequirement on a badly-formed requirement string. """ # TODO: Can we test whether something is contained within a requirement? # If so how do we do that? Do we need to test against the _name_ of # the thing as well as the version? What about the markers? # TODO: Can we normalize the name and extra name? 
def __init__(self, requirement_string): try: req = REQUIREMENT.parseString(requirement_string) except ParseException as e: raise InvalidRequirement( "Invalid requirement, parse error at \"{0!r}\"".format( requirement_string[e.loc:e.loc + 8])) self.name = req.name if req.url: parsed_url = urlparse.urlparse(req.url) if not (parsed_url.scheme and parsed_url.netloc) or ( not parsed_url.scheme and not parsed_url.netloc): raise InvalidRequirement("Invalid URL given") self.url = req.url else: self.url = None self.extras = set(req.extras.asList() if req.extras else []) self.specifier = SpecifierSet(req.specifier) self.marker = req.marker if req.marker else None def __str__(self): parts = [self.name] if self.extras: parts.append("[{0}]".format(",".join(sorted(self.extras)))) if self.specifier: parts.append(str(self.specifier)) if self.url: parts.append("@ {0}".format(self.url)) if self.marker: parts.append("; {0}".format(self.marker)) return "".join(parts) def __repr__(self): return "<Requirement({0!r})>".format(str(self)) �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/specifiers.py�����������0000644�0001750�0001750�00000066571�00000000000�032567� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This file is dual licensed under the terms of the Apache License, Version # 2.0, and the BSD License. See the LICENSE file in the root of this repository # for complete details. from __future__ import absolute_import, division, print_function import abc import functools import itertools import re from ._compat import string_types, with_metaclass from .version import Version, LegacyVersion, parse class InvalidSpecifier(ValueError): """ An invalid specifier was found, users should refer to PEP 440. """ class BaseSpecifier(with_metaclass(abc.ABCMeta, object)): @abc.abstractmethod def __str__(self): """ Returns the str representation of this Specifier like object. 
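
# Hedged usage sketch, not part of the upstream module: parsing a PEP 508
# requirement string and reading back its components.
if __name__ == "__main__":
    req = Requirement("name[extra1,extra2]>=1.0,<2.0; python_version > '2.7'")
    print(req.name)                # 'name'
    print(sorted(req.extras))      # ['extra1', 'extra2']
    print(str(req.specifier))      # '<2.0,>=1.0' (specifiers sort as strings)
    print(req.marker)              # python_version > "2.7"
    print(str(req))                # round-trips to a canonical form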
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/specifiers.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import abc
import functools
import itertools
import re

from ._compat import string_types, with_metaclass
from .version import Version, LegacyVersion, parse


class InvalidSpecifier(ValueError):
    """
    An invalid specifier was found, users should refer to PEP 440.
    """


class BaseSpecifier(with_metaclass(abc.ABCMeta, object)):

    @abc.abstractmethod
    def __str__(self):
        """
        Returns the str representation of this Specifier like object. This
        should be representative of the Specifier itself.
        """

    @abc.abstractmethod
    def __hash__(self):
        """
        Returns a hash value for this Specifier like object.
        """

    @abc.abstractmethod
    def __eq__(self, other):
        """
        Returns a boolean representing whether or not the two Specifier like
        objects are equal.
        """

    @abc.abstractmethod
    def __ne__(self, other):
        """
        Returns a boolean representing whether or not the two Specifier like
        objects are not equal.
        """

    @abc.abstractproperty
    def prereleases(self):
        """
        Returns whether or not pre-releases as a whole are allowed by this
        specifier.
        """

    @prereleases.setter
    def prereleases(self, value):
        """
        Sets whether or not pre-releases as a whole are allowed by this
        specifier.
        """

    @abc.abstractmethod
    def contains(self, item, prereleases=None):
        """
        Determines if the given item is contained within this specifier.
        """

    @abc.abstractmethod
    def filter(self, iterable, prereleases=None):
        """
        Takes an iterable of items and filters them so that only items which
        are contained within this specifier are allowed in it.
        """


class _IndividualSpecifier(BaseSpecifier):

    _operators = {}

    def __init__(self, spec="", prereleases=None):
        match = self._regex.search(spec)
        if not match:
            raise InvalidSpecifier("Invalid specifier: '{0}'".format(spec))

        self._spec = (
            match.group("operator").strip(),
            match.group("version").strip(),
        )

        # Store whether or not this Specifier should accept prereleases
        self._prereleases = prereleases

    def __repr__(self):
        pre = (
            ", prereleases={0!r}".format(self.prereleases)
            if self._prereleases is not None
            else ""
        )

        return "<{0}({1!r}{2})>".format(
            self.__class__.__name__,
            str(self),
            pre,
        )

    def __str__(self):
        return "{0}{1}".format(*self._spec)

    def __hash__(self):
        return hash(self._spec)

    def __eq__(self, other):
        if isinstance(other, string_types):
            try:
                other = self.__class__(other)
            except InvalidSpecifier:
                return NotImplemented
        elif not isinstance(other, self.__class__):
            return NotImplemented

        return self._spec == other._spec

    def __ne__(self, other):
        if isinstance(other, string_types):
            try:
                other = self.__class__(other)
            except InvalidSpecifier:
                return NotImplemented
        elif not isinstance(other, self.__class__):
            return NotImplemented

        return self._spec != other._spec

    def _get_operator(self, op):
        return getattr(self, "_compare_{0}".format(self._operators[op]))

    def _coerce_version(self, version):
        if not isinstance(version, (LegacyVersion, Version)):
            version = parse(version)
        return version

    @property
    def operator(self):
        return self._spec[0]

    @property
    def version(self):
        return self._spec[1]

    @property
    def prereleases(self):
        return self._prereleases

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value

    def __contains__(self, item):
        return self.contains(item)

    def contains(self, item, prereleases=None):
        # Determine if prereleases are to be allowed or not.
        if prereleases is None:
            prereleases = self.prereleases

        # Normalize item to a Version or LegacyVersion, this allows us to have
        # a shortcut for ``"2.0" in Specifier(">=2")``.
        item = self._coerce_version(item)

        # Determine if we should be supporting prereleases in this specifier
        # or not. If we do not support prereleases then we can short circuit
        # the logic if this version is a prerelease.
        if item.is_prerelease and not prereleases:
            return False

        # Actually do the comparison to determine if this item is contained
        # within this Specifier or not.
        return self._get_operator(self.operator)(item, self.version)

    def filter(self, iterable, prereleases=None):
        yielded = False
        found_prereleases = []

        kw = {"prereleases": prereleases if prereleases is not None else True}

        # Attempt to iterate over all the values in the iterable and if any of
        # them match, yield them.
        for version in iterable:
            parsed_version = self._coerce_version(version)

            if self.contains(parsed_version, **kw):
                # If our version is a prerelease, and we were not set to allow
                # prereleases, then we'll store it for later in case nothing
                # else matches this specifier.
                if (parsed_version.is_prerelease and not
                        (prereleases or self.prereleases)):
                    found_prereleases.append(version)
                # Either this is not a prerelease, or we should have been
                # accepting prereleases from the beginning.
                else:
                    yielded = True
                    yield version

        # Now that we've iterated over everything, determine if we've yielded
        # any values, and if we have not and we have any prereleases stored up
        # then we will go ahead and yield the prereleases.
        if not yielded and found_prereleases:
            for version in found_prereleases:
                yield version


class LegacySpecifier(_IndividualSpecifier):

    _regex_str = (
        r"""
        (?P<operator>(==|!=|<=|>=|<|>))
        \s*
        (?P<version>
            [^,;\s)]* # Since this is a "legacy" specifier, and the version
                      # string can be just about anything, we match everything
                      # except for whitespace, a semi-colon for marker support,
                      # a closing paren since versions can be enclosed in
                      # them, and a comma since it's a version separator.
        )
        """
    )

    _regex = re.compile(
        r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)

    _operators = {
        "==": "equal",
        "!=": "not_equal",
        "<=": "less_than_equal",
        ">=": "greater_than_equal",
        "<": "less_than",
        ">": "greater_than",
    }

    def _coerce_version(self, version):
        if not isinstance(version, LegacyVersion):
            version = LegacyVersion(str(version))
        return version

    def _compare_equal(self, prospective, spec):
        return prospective == self._coerce_version(spec)

    def _compare_not_equal(self, prospective, spec):
        return prospective != self._coerce_version(spec)

    def _compare_less_than_equal(self, prospective, spec):
        return prospective <= self._coerce_version(spec)

    def _compare_greater_than_equal(self, prospective, spec):
        return prospective >= self._coerce_version(spec)

    def _compare_less_than(self, prospective, spec):
        return prospective < self._coerce_version(spec)

    def _compare_greater_than(self, prospective, spec):
        return prospective > self._coerce_version(spec)


def _require_version_compare(fn):
    @functools.wraps(fn)
    def wrapped(self, prospective, spec):
        if not isinstance(prospective, Version):
            return False
        return fn(self, prospective, spec)
    return wrapped


class Specifier(_IndividualSpecifier):

    _regex_str = (
        r"""
        (?P<operator>(~=|==|!=|<=|>=|<|>|===))
        (?P<version>
            (?:
                # The identity operators allow for an escape hatch that will
                # do an exact string match of the version you wish to install.
                # This will not be parsed by PEP 440 and we cannot determine
                # any semantic meaning from it. This operator is discouraged
                # but included entirely as an escape hatch.
                (?<====)  # Only match for the identity operator
                \s*
                [^\s]*    # We just match everything, except for whitespace
                          # since we are only testing for strict identity.
            )
            |
            (?:
                # The (non)equality operators allow for wild card and local
                # versions to be specified so we have to define these two
                # operators separately to enable that.
                (?<===|!=)            # Only match for equals and not equals

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)*   # release
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?

                # You cannot use a wild card and a dev or local version
                # together so group them with a | and make them optional.
                (?:
                    (?:[-_\.]?dev[-_\.]?[0-9]*)?         # dev release
                    (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local
                    |
                    \.\*  # Wild card syntax of .*
                )?
            )
            |
            (?:
                # The compatible operator requires at least two digits in the
                # release segment.
                (?<=~=)               # Only match for the compatible operator

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)+   # release  (We have a + instead of a *)
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?
                (?:[-_\.]?dev[-_\.]?[0-9]*)?          # dev release
            )
            |
            (?:
                # All other operators only allow a sub set of what the
                # (non)equality operators do. Specifically they do not allow
                # local versions to be specified nor do they allow the prefix
                # matching wild cards.
                (?<!==|!=|~=)         # We have special cases for these
                                      # operators so we want to make sure they
                                      # don't match here.

                \s*
                v?
                (?:[0-9]+!)?          # epoch
                [0-9]+(?:\.[0-9]+)*   # release
                (?:                   # pre release
                    [-_\.]?
                    (a|b|c|rc|alpha|beta|pre|preview)
                    [-_\.]?
                    [0-9]*
                )?
                (?:                                   # post release
                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
                )?
                (?:[-_\.]?dev[-_\.]?[0-9]*)?          # dev release
            )
        )
        """
    )

    _regex = re.compile(
        r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)

    _operators = {
        "~=": "compatible",
        "==": "equal",
        "!=": "not_equal",
        "<=": "less_than_equal",
        ">=": "greater_than_equal",
        "<": "less_than",
        ">": "greater_than",
        "===": "arbitrary",
    }

    @_require_version_compare
    def _compare_compatible(self, prospective, spec):
        # Compatible releases have an equivalent combination of >= and ==.
        # That is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
        # implement this in terms of the other specifiers instead of
        # implementing it ourselves. The only thing we need to do is construct
        # the other specifiers.

        # We want everything but the last item in the version, but we want to
        # ignore post and dev releases and we want to treat the pre-release as
        # its own separate segment.
        prefix = ".".join(
            list(
                itertools.takewhile(
                    lambda x: (not x.startswith("post") and not
                               x.startswith("dev")),
                    _version_split(spec),
                )
            )[:-1]
        )

        # Add the prefix notation to the end of our string
        prefix += ".*"

        return (self._get_operator(">=")(prospective, spec) and
                self._get_operator("==")(prospective, prefix))

    @_require_version_compare
    def _compare_equal(self, prospective, spec):
        # We need special logic to handle prefix matching
        if spec.endswith(".*"):
            # In the case of prefix matching we want to ignore local segment.
            prospective = Version(prospective.public)
            # Split the spec out by dots, and pretend that there is an
            # implicit dot in between a release segment and a pre-release
            # segment.
            spec = _version_split(spec[:-2])  # Remove the trailing .*
            # Split the prospective version out by dots, and pretend that
            # there is an implicit dot in between a release segment and a
            # pre-release segment.
            prospective = _version_split(str(prospective))
            # Shorten the prospective version to be the same length as the
            # spec so that we can determine if the specifier is a prefix of
            # the prospective version or not.
            prospective = prospective[:len(spec)]
            # Pad out our two sides with zeros so that they both equal the
            # same length.
            spec, prospective = _pad_version(spec, prospective)
        else:
            # Convert our spec string into a Version
            spec = Version(spec)
            # If the specifier does not have a local segment, then we want to
            # act as if the prospective version also does not have a local
            # segment.
            if not spec.local:
                prospective = Version(prospective.public)

        return prospective == spec

    @_require_version_compare
    def _compare_not_equal(self, prospective, spec):
        return not self._compare_equal(prospective, spec)

    @_require_version_compare
    def _compare_less_than_equal(self, prospective, spec):
        return prospective <= Version(spec)

    @_require_version_compare
    def _compare_greater_than_equal(self, prospective, spec):
        return prospective >= Version(spec)

    @_require_version_compare
    def _compare_less_than(self, prospective, spec):
        # Convert our spec to a Version instance, since we'll want to work
        # with it as a version.
        spec = Version(spec)

        # Check to see if the prospective version is less than the spec
        # version. If it's not we can short circuit and just return False now
        # instead of doing extra unneeded work.
        if not prospective < spec:
            return False

        # This special case is here so that, unless the specifier itself
        # includes a pre-release version, we do not accept pre-release
        # versions for the version mentioned in the specifier (e.g. <3.1
        # should not match 3.1.dev0, but should match 3.0.dev0).
        if not spec.is_prerelease and prospective.is_prerelease:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # If we've gotten to here, it means that the prospective version is
        # both less than the spec version *and* it's not a pre-release of the
        # same version in the spec.
        return True

    @_require_version_compare
    def _compare_greater_than(self, prospective, spec):
        # Convert our spec to a Version instance, since we'll want to work
        # with it as a version.
        spec = Version(spec)

        # Check to see if the prospective version is greater than the spec
        # version. If it's not we can short circuit and just return False now
        # instead of doing extra unneeded work.
        if not prospective > spec:
            return False

        # This special case is here so that, unless the specifier itself
        # includes a post-release version, we do not accept post-release
        # versions for the version mentioned in the specifier (e.g. >3.1
        # should not match 3.0.post0, but should match 3.2.post0).
        if not spec.is_postrelease and prospective.is_postrelease:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # Ensure that we do not allow a local version of the version mentioned
        # in the specifier, which is technically greater than, to match.
        if prospective.local is not None:
            if Version(prospective.base_version) == Version(spec.base_version):
                return False

        # If we've gotten to here, it means that the prospective version is
        # both greater than the spec version *and* it's not a post-release of
        # the same version in the spec.
        return True

    def _compare_arbitrary(self, prospective, spec):
        return str(prospective).lower() == str(spec).lower()

    @property
    def prereleases(self):
        # If there is an explicit prereleases set for this, then we'll just
        # blindly use that.
        if self._prereleases is not None:
            return self._prereleases

        # Look at all of our specifiers and determine if they are inclusive
        # operators, and if they are, whether they include an explicit
        # prerelease.
        operator, version = self._spec
        if operator in ["==", ">=", "<=", "~=", "==="]:
            # The == specifier can include a trailing .*, if it does we
            # want to remove it before parsing.
            if operator == "==" and version.endswith(".*"):
                version = version[:-2]

            # Parse the version, and if it is a pre-release then this
            # specifier allows pre-releases.
            if parse(version).is_prerelease:
                return True

        return False

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value


_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$")


def _version_split(version):
    result = []
    for item in version.split("."):
        match = _prefix_regex.search(item)
        if match:
            result.extend(match.groups())
        else:
            result.append(item)
    return result


def _pad_version(left, right):
    left_split, right_split = [], []

    # Get the release segment of our versions
    left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left)))
    right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right)))

    # Get the rest of our versions
    left_split.append(left[len(left_split[0]):])
    right_split.append(right[len(right_split[0]):])

    # Insert our padding
    left_split.insert(
        1,
        ["0"] * max(0, len(right_split[0]) - len(left_split[0])),
    )
    right_split.insert(
        1,
        ["0"] * max(0, len(left_split[0]) - len(right_split[0])),
    )

    return (
        list(itertools.chain(*left_split)),
        list(itertools.chain(*right_split)),
    )
class SpecifierSet(BaseSpecifier):

    def __init__(self, specifiers="", prereleases=None):
        # Split on , to break each individual specifier into its own item, and
        # strip each item to remove leading/trailing whitespace.
        specifiers = [s.strip() for s in specifiers.split(",") if s.strip()]

        # Parse each individual specifier, attempting first to make it a
        # Specifier and falling back to a LegacySpecifier.
        parsed = set()
        for specifier in specifiers:
            try:
                parsed.add(Specifier(specifier))
            except InvalidSpecifier:
                parsed.add(LegacySpecifier(specifier))

        # Turn our parsed specifiers into a frozen set and save them for
        # later.
        self._specs = frozenset(parsed)

        # Store our prereleases value so we can use it later to determine if
        # we accept prereleases or not.
        self._prereleases = prereleases

    def __repr__(self):
        pre = (
            ", prereleases={0!r}".format(self.prereleases)
            if self._prereleases is not None
            else ""
        )

        return "<SpecifierSet({0!r}{1})>".format(str(self), pre)

    def __str__(self):
        return ",".join(sorted(str(s) for s in self._specs))

    def __hash__(self):
        return hash(self._specs)

    def __and__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        specifier = SpecifierSet()
        specifier._specs = frozenset(self._specs | other._specs)

        if self._prereleases is None and other._prereleases is not None:
            specifier._prereleases = other._prereleases
        elif self._prereleases is not None and other._prereleases is None:
            specifier._prereleases = self._prereleases
        elif self._prereleases == other._prereleases:
            specifier._prereleases = self._prereleases
        else:
            raise ValueError(
                "Cannot combine SpecifierSets with True and False prerelease "
                "overrides."
            )

        return specifier

    def __eq__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif isinstance(other, _IndividualSpecifier):
            other = SpecifierSet(str(other))
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        return self._specs == other._specs

    def __ne__(self, other):
        if isinstance(other, string_types):
            other = SpecifierSet(other)
        elif isinstance(other, _IndividualSpecifier):
            other = SpecifierSet(str(other))
        elif not isinstance(other, SpecifierSet):
            return NotImplemented

        return self._specs != other._specs

    def __len__(self):
        return len(self._specs)

    def __iter__(self):
        return iter(self._specs)

    @property
    def prereleases(self):
        # If we have been given an explicit prerelease modifier, then we'll
        # pass that through here.
        if self._prereleases is not None:
            return self._prereleases

        # If we don't have any specifiers, and we don't have a forced value,
        # then we'll just return None since we don't know if this should have
        # pre-releases or not.
        if not self._specs:
            return None

        # Otherwise we'll see if any of the given specifiers accept
        # prereleases, if any of them do we'll return True, otherwise False.
        return any(s.prereleases for s in self._specs)

    @prereleases.setter
    def prereleases(self, value):
        self._prereleases = value

    def __contains__(self, item):
        return self.contains(item)

    def contains(self, item, prereleases=None):
        # Ensure that our item is a Version or LegacyVersion instance.
        if not isinstance(item, (LegacyVersion, Version)):
            item = parse(item)

        # Determine if we're forcing a prerelease or not, if we're not forcing
        # one for this particular filter call, then we'll use whatever the
        # SpecifierSet thinks for whether or not we should support
        # prereleases.
        if prereleases is None:
            prereleases = self.prereleases

        # We can determine if we're going to allow pre-releases by looking to
        # see if any of the underlying items supports them. If none of them do
        # and this item is a pre-release then we do not allow it and we can
        # short circuit that here.
        # Note: This means that 1.0.dev1 would not be contained in something
        #       like >=1.0.devabc however it would be in >=1.0.devabc,>0.0.dev0
        if not prereleases and item.is_prerelease:
            return False

        # We simply dispatch to the underlying specs here to make sure that
        # the given version is contained within all of them.
        # Note: This use of all() here means that an empty set of specifiers
        #       will always return True, this is an explicit design decision.
        return all(
            s.contains(item, prereleases=prereleases)
            for s in self._specs
        )

    def filter(self, iterable, prereleases=None):
        # Determine if we're forcing a prerelease or not, if we're not forcing
        # one for this particular filter call, then we'll use whatever the
        # SpecifierSet thinks for whether or not we should support
        # prereleases.
        if prereleases is None:
            prereleases = self.prereleases

        # If we have any specifiers, then we want to wrap our iterable in the
        # filter method for each one, this will act as a logical AND amongst
        # each specifier.
        if self._specs:
            for spec in self._specs:
                iterable = spec.filter(iterable, prereleases=bool(prereleases))
            return iterable
        # If we do not have any specifiers, then we need to have a rough
        # filter which will filter out any pre-releases, unless there are no
        # final releases, and which will filter out LegacyVersion in general.
        else:
            filtered = []
            found_prereleases = []

            for item in iterable:
                # Ensure that we have some kind of Version class for this
                # item.
                if not isinstance(item, (LegacyVersion, Version)):
                    parsed_version = parse(item)
                else:
                    parsed_version = item

                # Filter out any item which is parsed as a LegacyVersion
                if isinstance(parsed_version, LegacyVersion):
                    continue

                # Store any item which is a pre-release for later unless we've
                # already found a final version or we are accepting
                # prereleases
                if parsed_version.is_prerelease and not prereleases:
                    if not filtered:
                        found_prereleases.append(item)
                else:
                    filtered.append(item)

            # If we've found no items except for pre-releases, then we'll go
            # ahead and use the pre-releases
            if not filtered and found_prereleases and prereleases is None:
                return found_prereleases

            return filtered
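
# Hedged usage sketch, not part of the upstream module: combining clauses in
# a SpecifierSet, including its pre-release handling.
if __name__ == "__main__":
    spec = SpecifierSet(">=1.0,<2.0")
    print("1.5" in spec)                                 # True
    print(spec.contains("1.3.dev1"))                     # False by default
    print(spec.contains("1.3.dev1", prereleases=True))   # True when forced
    print(list(spec.filter(["0.9", "1.0", "1.5rc1", "1.9", "2.1"])))
    # -> ['1.0', '1.9']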
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/packaging/utils.py

# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function

import re


_canonicalize_regex = re.compile(r"[-_.]+")


def canonicalize_name(name):
    # This is taken from PEP 503.
    return _canonicalize_regex.sub("-", name).lower()
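
# Hedged usage sketch, not part of the upstream module: PEP 503 normalization
# lower-cases the name and collapses runs of ``-``, ``_`` and ``.``.
if __name__ == "__main__":
    print(canonicalize_name("Django_REST.framework"))  # django-rest-framework
    print(canonicalize_name("zope.interface"))         # zope-interface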
""" class _BaseVersion(object): def __hash__(self): return hash(self._key) def __lt__(self, other): return self._compare(other, lambda s, o: s < o) def __le__(self, other): return self._compare(other, lambda s, o: s <= o) def __eq__(self, other): return self._compare(other, lambda s, o: s == o) def __ge__(self, other): return self._compare(other, lambda s, o: s >= o) def __gt__(self, other): return self._compare(other, lambda s, o: s > o) def __ne__(self, other): return self._compare(other, lambda s, o: s != o) def _compare(self, other, method): if not isinstance(other, _BaseVersion): return NotImplemented return method(self._key, other._key) class LegacyVersion(_BaseVersion): def __init__(self, version): self._version = str(version) self._key = _legacy_cmpkey(self._version) def __str__(self): return self._version def __repr__(self): return "<LegacyVersion({0})>".format(repr(str(self))) @property def public(self): return self._version @property def base_version(self): return self._version @property def local(self): return None @property def is_prerelease(self): return False @property def is_postrelease(self): return False _legacy_version_component_re = re.compile( r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE, ) _legacy_version_replacement_map = { "pre": "c", "preview": "c", "-": "final-", "rc": "c", "dev": "@", } def _parse_version_parts(s): for part in _legacy_version_component_re.split(s): part = _legacy_version_replacement_map.get(part, part) if not part or part == ".": continue if part[:1] in "0123456789": # pad for numeric comparison yield part.zfill(8) else: yield "*" + part # ensure that alpha/beta/candidate are before final yield "*final" def _legacy_cmpkey(version): # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch # greater than or equal to 0. This will effectively put the LegacyVersion, # which uses the defacto standard originally implemented by setuptools, # as before all PEP 440 versions. epoch = -1 # This scheme is taken from pkg_resources.parse_version setuptools prior to # it's adoption of the packaging library. parts = [] for part in _parse_version_parts(version.lower()): if part.startswith("*"): # remove "-" before a prerelease tag if part < "*final": while parts and parts[-1] == "*final-": parts.pop() # remove trailing zeros from each series of numeric parts while parts and parts[-1] == "00000000": parts.pop() parts.append(part) parts = tuple(parts) return epoch, parts # Deliberately not anchored to the start and end of the string, to make it # easier for 3rd party code to reuse VERSION_PATTERN = r""" v? (?: (?:(?P<epoch>[0-9]+)!)? # epoch (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment (?P<pre> # pre-release [-_\.]? (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview)) [-_\.]? (?P<pre_n>[0-9]+)? )? (?P<post> # post release (?:-(?P<post_n1>[0-9]+)) | (?: [-_\.]? (?P<post_l>post|rev|r) [-_\.]? (?P<post_n2>[0-9]+)? ) )? (?P<dev> # dev release [-_\.]? (?P<dev_l>dev) [-_\.]? (?P<dev_n>[0-9]+)? )? ) (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? 
# local version """ class Version(_BaseVersion): _regex = re.compile( r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE, ) def __init__(self, version): # Validate the version and parse it into pieces match = self._regex.search(version) if not match: raise InvalidVersion("Invalid version: '{0}'".format(version)) # Store the parsed out pieces of the version self._version = _Version( epoch=int(match.group("epoch")) if match.group("epoch") else 0, release=tuple(int(i) for i in match.group("release").split(".")), pre=_parse_letter_version( match.group("pre_l"), match.group("pre_n"), ), post=_parse_letter_version( match.group("post_l"), match.group("post_n1") or match.group("post_n2"), ), dev=_parse_letter_version( match.group("dev_l"), match.group("dev_n"), ), local=_parse_local_version(match.group("local")), ) # Generate a key which will be used for sorting self._key = _cmpkey( self._version.epoch, self._version.release, self._version.pre, self._version.post, self._version.dev, self._version.local, ) def __repr__(self): return "<Version({0})>".format(repr(str(self))) def __str__(self): parts = [] # Epoch if self._version.epoch != 0: parts.append("{0}!".format(self._version.epoch)) # Release segment parts.append(".".join(str(x) for x in self._version.release)) # Pre-release if self._version.pre is not None: parts.append("".join(str(x) for x in self._version.pre)) # Post-release if self._version.post is not None: parts.append(".post{0}".format(self._version.post[1])) # Development release if self._version.dev is not None: parts.append(".dev{0}".format(self._version.dev[1])) # Local version segment if self._version.local is not None: parts.append( "+{0}".format(".".join(str(x) for x in self._version.local)) ) return "".join(parts) @property def public(self): return str(self).split("+", 1)[0] @property def base_version(self): parts = [] # Epoch if self._version.epoch != 0: parts.append("{0}!".format(self._version.epoch)) # Release segment parts.append(".".join(str(x) for x in self._version.release)) return "".join(parts) @property def local(self): version_string = str(self) if "+" in version_string: return version_string.split("+", 1)[1] @property def is_prerelease(self): return bool(self._version.dev or self._version.pre) @property def is_postrelease(self): return bool(self._version.post) def _parse_letter_version(letter, number): if letter: # We consider there to be an implicit 0 in a pre-release if there is # not a numeral associated with it. if number is None: number = 0 # We normalize any letters to their lower case form letter = letter.lower() # We consider some words to be alternate spellings of other words and # in those cases we want to normalize the spellings to our preferred # spelling. if letter == "alpha": letter = "a" elif letter == "beta": letter = "b" elif letter in ["c", "pre", "preview"]: letter = "rc" elif letter in ["rev", "r"]: letter = "post" return letter, int(number) if not letter and number: # We assume if we are given a number, but we are not given a letter # then this is using the implicit post release syntax (e.g. 1.0-1) letter = "post" return letter, int(number) _local_version_seperators = re.compile(r"[\._-]") def _parse_local_version(local): """ Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve"). 
""" if local is not None: return tuple( part.lower() if not part.isdigit() else int(part) for part in _local_version_seperators.split(local) ) def _cmpkey(epoch, release, pre, post, dev, local): # When we compare a release version, we want to compare it with all of the # trailing zeros removed. So we'll use a reverse the list, drop all the now # leading zeros until we come to something non zero, then take the rest # re-reverse it back into the correct order and make it a tuple and use # that for our sorting key. release = tuple( reversed(list( itertools.dropwhile( lambda x: x == 0, reversed(release), ) )) ) # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0. # We'll do this by abusing the pre segment, but we _only_ want to do this # if there is not a pre or a post segment. If we have one of those then # the normal sorting rules will handle this case correctly. if pre is None and post is None and dev is not None: pre = -Infinity # Versions without a pre-release (except as noted above) should sort after # those with one. elif pre is None: pre = Infinity # Versions without a post segment should sort before those with one. if post is None: post = -Infinity # Versions without a development segment should sort after those with one. if dev is None: dev = Infinity if local is None: # Versions without a local segment should sort before those with one. local = -Infinity else: # Versions with a local segment need that segment parsed to implement # the sorting rules in PEP440. # - Alpha numeric segments sort before numeric segments # - Alpha numeric segments sort lexicographically # - Numeric segments sort numerically # - Shorter versions sort before longer versions when the prefixes # match exactly local = tuple( (i, "") if isinstance(i, int) else (-Infinity, i) for i in local ) return epoch, release, pre, post, dev, local ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/pyparsing.py����������������������0000644�0001750�0001750�00000705167�00000000000�030524� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/pyparsing.py

# module pyparsing.py
#
# Copyright (c) 2003-2018  Paul T. McGuire
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#

__doc__ = \
"""
pyparsing module - Classes and methods to define and execute parsing grammars
=============================================================================

The pyparsing module is an alternative approach to creating and executing
simple grammars, vs. the traditional lex/yacc approach, or the use of
regular expressions.  With pyparsing, you don't need to learn a new syntax
for defining grammars or matching expressions - the parsing module provides
a library of classes that you use to construct the grammar directly in
Python.

Here is a program to parse "Hello, World!" (or any greeting of the form
C{"<salutation>, <addressee>!"}), built up using L{Word}, L{Literal}, and
L{And} elements (L{'+'<ParserElement.__add__>} operator gives L{And}
expressions, strings are auto-converted to L{Literal} expressions)::

    from pyparsing import Word, alphas

    # define grammar of a greeting
    greet = Word(alphas) + "," + Word(alphas) + "!"

    hello = "Hello, World!"
    print (hello, "->", greet.parseString(hello))

The program outputs the following::

    Hello, World! -> ['Hello', ',', 'World', '!']

The Python representation of the grammar is quite readable, owing to the
self-explanatory class names, and the use of '+', '|' and '^' operators.

The L{ParseResults} object returned from
L{ParserElement.parseString<ParserElement.parseString>} can be accessed as a
nested list, a dictionary, or an object with named attributes.

The pyparsing module handles some of the problems that are typically vexing
when writing text parsers:
 - extra or missing whitespace (the above program will also handle
   "Hello,World!", "Hello  ,  World  !", etc.)
 - quoted strings
 - embedded comments

Getting Started -
-----------------
Visit the classes L{ParserElement} and L{ParseResults} to see the base
classes that most other pyparsing classes inherit from. Use the docstrings
for examples of how to:
 - construct literal match expressions from L{Literal} and
   L{CaselessLiteral} classes
 - construct character word-group expressions using the L{Word} class
 - see how to create repetitive expressions using L{ZeroOrMore} and
   L{OneOrMore} classes
 - use L{'+'<And>}, L{'|'<MatchFirst>}, L{'^'<Or>}, and L{'&'<Each>}
   operators to combine simple expressions into more complex ones
 - associate names with your parsed results using
   L{ParserElement.setResultsName}
 - find some helpful expression short-cuts like L{delimitedList} and
   L{oneOf}
 - find more useful common expressions in the L{pyparsing_common} namespace
   class
"""

__version__ = "2.2.1"
__versionTime__ = "18 Sep 2018 00:49 UTC"
__author__ = "Paul McGuire <ptmcg@users.sourceforge.net>"

import string
from weakref import ref as wkref
import copy
import sys
import warnings
import re
import sre_constants
import collections
import pprint
import traceback
import types
from datetime import datetime

try:
    from _thread import RLock
except ImportError:
    from threading import RLock

try:
    # Python 3
    from collections.abc import Iterable
    from collections.abc import MutableMapping
except ImportError:
    # Python 2.7
    from collections import Iterable
    from collections import MutableMapping

try:
    from collections import OrderedDict as _OrderedDict
except ImportError:
    try:
        from ordereddict import OrderedDict as _OrderedDict
    except ImportError:
        _OrderedDict = None

#~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) )

__all__ = [
'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty',
'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal',
'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or',
'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException',
'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException',
'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter',
'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore',
'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col',
'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString',
'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'hexnums',
'htmlComment', 'javaStyleComment', 'line', 'lineEnd', 'lineStart', 'lineno',
'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral',
'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables',
'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity',
'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd',
'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute',
'indentedBlock', 'originalTextFor', 'ungroup', 'infixNotation','locatedExpr', 'withClass',
'CloseMatch', 'tokenMap', 'pyparsing_common',
]

system_version = tuple(sys.version_info)[:3]
PY_3 = system_version[0] == 3
if PY_3:
    _MAX_INT = sys.maxsize
    basestring = str
    unichr = chr
    _ustr = str

    # build list of single arg builtins, that can be used as parse actions
    singleArgBuiltins = [sum, len, sorted, reversed, list, tuple, set, any, all, min, max]

else:
    _MAX_INT = sys.maxint
    range = xrange

    def _ustr(obj):
        """Drop-in replacement for str(obj) that tries to be Unicode friendly. It first tries
           str(obj). If that fails with a UnicodeEncodeError, then it tries unicode(obj). It
           then < returns the unicode object | encodes it with the default encoding | ... >.
        """
        if isinstance(obj,unicode):
            return obj

        try:
            # If this works, then _ustr(obj) has the same behaviour as str(obj), so
            # it won't break any existing code.
            return str(obj)

        except UnicodeEncodeError:
            # Else encode it
            ret = unicode(obj).encode(sys.getdefaultencoding(), 'xmlcharrefreplace')
            xmlcharref = Regex(r'&#\d+;')
            xmlcharref.setParseAction(lambda t: '\\u' + hex(int(t[0][2:-1]))[2:])
            return xmlcharref.transformString(ret)

    # build list of single arg builtins, tolerant of Python version, that can be used as parse actions
    singleArgBuiltins = []
    import __builtin__
    for fname in "sum len sorted reversed list tuple set any all min max".split():
        try:
            singleArgBuiltins.append(getattr(__builtin__,fname))
        except AttributeError:
            continue

_generatorType = type((y for y in range(1)))

def _xml_escape(data):
    """Escape &, <, >, ", ', etc. in a string of data."""

    # ampersand must be replaced first
    from_symbols = '&><"\''
    to_symbols = ('&'+s+';' for s in "amp gt lt quot apos".split())
    for from_,to_ in zip(from_symbols, to_symbols):
        data = data.replace(from_, to_)
    return data

class _Constants(object):
    pass

alphas = string.ascii_uppercase + string.ascii_lowercase
nums = "0123456789"
hexnums = nums + "ABCDEFabcdef"
alphanums = alphas + nums
_bslash = chr(92)
printables = "".join(c for c in string.printable if c not in string.whitespace)

class ParseBaseException(Exception):
    """base exception class for all parsing runtime exceptions"""
    # Performance tuning: we construct a *lot* of these, so keep this
    # constructor as small and fast as possible
    def __init__( self, pstr, loc=0, msg=None, elem=None ):
        self.loc = loc
        if msg is None:
            self.msg = pstr
            self.pstr = ""
        else:
            self.msg = msg
            self.pstr = pstr
        self.parserElement = elem
        self.args = (pstr, loc, msg)

    @classmethod
    def _from_exception(cls, pe):
        """
        internal factory method to simplify creating one type of ParseException
        from another - avoids having __init__ signature conflicts among subclasses
        """
        return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement)

    def __getattr__( self, aname ):
        """supported attributes by name are:
            - lineno - returns the line number of the exception text
            - col - returns the column number of the exception text
            - line - returns the line containing the exception text
        """
        if( aname == "lineno" ):
            return lineno( self.loc, self.pstr )
        elif( aname in ("col", "column") ):
            return col( self.loc, self.pstr )
        elif( aname == "line" ):
            return line( self.loc, self.pstr )
        else:
            raise AttributeError(aname)

    def __str__( self ):
        return "%s (at char %d), (line:%d, col:%d)" % \
                ( self.msg, self.loc, self.lineno, self.column )

    def __repr__( self ):
        return _ustr(self)

    def markInputline( self, markerString = ">!<" ):
        """Extracts the exception line from the input string, and marks
           the location of the exception with a special symbol.
        """
        line_str = self.line
        line_column = self.column - 1
        if markerString:
            line_str = "".join((line_str[:line_column],
                                markerString, line_str[line_column:]))
        return line_str.strip()

    def __dir__(self):
        return "lineno col line".split() + dir(type(self))

class ParseException(ParseBaseException):
    """
    Exception thrown when parse expressions don't match class;
    supported attributes by name are:
     - lineno - returns the line number of the exception text
     - col - returns the column number of the exception text
     - line - returns the line containing the exception text

    Example::
        try:
            Word(nums).setName("integer").parseString("ABC")
        except ParseException as pe:
            print(pe)
            print("column: {}".format(pe.col))

    prints::
        Expected integer (at char 0), (line:1, col:1)
        column: 1
    """
    pass

class ParseFatalException(ParseBaseException):
    """user-throwable exception thrown when inconsistent parse content
       is found; stops all parsing immediately"""
    pass

class ParseSyntaxException(ParseFatalException):
    """just like L{ParseFatalException}, but thrown internally when an
       L{ErrorStop<And._ErrorStop>} ('-' operator) indicates that parsing is to stop
       immediately because an unbacktrackable syntax error has been found"""
    pass

#~ class ReparseException(ParseBaseException):
    #~ """Experimental class - parse actions can raise this exception to cause
       #~ pyparsing to reparse the input string:
        #~ - with a modified input string, and/or
        #~ - with a modified start location
    #~ Set the values of the ReparseException in the constructor, and raise the
       #~ exception in a parse action to cause pyparsing to use the new string/location.
    #~ Setting the values as None causes no change to be made.
    #~ """
    #~ def __init_( self, newstring, restartLoc ):
        #~ self.newParseText = newstring
        #~ self.reparseLoc = restartLoc

class RecursiveGrammarException(Exception):
    """exception thrown by L{ParserElement.validate} if the grammar could be improperly recursive"""
    def __init__( self, parseElementList ):
        self.parseElementTrace = parseElementList

    def __str__( self ):
        return "RecursiveGrammarException: %s" % self.parseElementTrace

class _ParseResultsWithOffset(object):
    def __init__(self,p1,p2):
        self.tup = (p1,p2)
    def __getitem__(self,i):
        return self.tup[i]
    def __repr__(self):
        return repr(self.tup[0])
    def setOffset(self,i):
        self.tup = (self.tup[0],i)
- by attribute (C{results.<resultsName>} - see L{ParserElement.setResultsName}) Example:: integer = Word(nums) date_str = (integer.setResultsName("year") + '/' + integer.setResultsName("month") + '/' + integer.setResultsName("day")) # equivalent form: # date_str = integer("year") + '/' + integer("month") + '/' + integer("day") # parseString returns a ParseResults object result = date_str.parseString("1999/12/31") def test(s, fn=repr): print("%s -> %s" % (s, fn(eval(s)))) test("list(result)") test("result[0]") test("result['month']") test("result.day") test("'month' in result") test("'minutes' in result") test("result.dump()", str) prints:: list(result) -> ['1999', '/', '12', '/', '31'] result[0] -> '1999' result['month'] -> '12' result.day -> '31' 'month' in result -> True 'minutes' in result -> False result.dump() -> ['1999', '/', '12', '/', '31'] - day: 31 - month: 12 - year: 1999 """ def __new__(cls, toklist=None, name=None, asList=True, modal=True ): if isinstance(toklist, cls): return toklist retobj = object.__new__(cls) retobj.__doinit = True return retobj # Performance tuning: we construct a *lot* of these, so keep this # constructor as small and fast as possible def __init__( self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance ): if self.__doinit: self.__doinit = False self.__name = None self.__parent = None self.__accumNames = {} self.__asList = asList self.__modal = modal if toklist is None: toklist = [] if isinstance(toklist, list): self.__toklist = toklist[:] elif isinstance(toklist, _generatorType): self.__toklist = list(toklist) else: self.__toklist = [toklist] self.__tokdict = dict() if name is not None and name: if not modal: self.__accumNames[name] = 0 if isinstance(name,int): name = _ustr(name) # will always return a str, but use _ustr for consistency self.__name = name if not (isinstance(toklist, (type(None), basestring, list)) and toklist in (None,'',[])): if isinstance(toklist,basestring): toklist = [ toklist ] if asList: if isinstance(toklist,ParseResults): self[name] = _ParseResultsWithOffset(toklist.copy(),0) else: self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0) self[name].__name = name else: try: self[name] = toklist[0] except (KeyError,TypeError,IndexError): self[name] = toklist def __getitem__( self, i ): if isinstance( i, (int,slice) ): return self.__toklist[i] else: if i not in self.__accumNames: return self.__tokdict[i][-1][0] else: return ParseResults([ v[0] for v in self.__tokdict[i] ]) def __setitem__( self, k, v, isinstance=isinstance ): if isinstance(v,_ParseResultsWithOffset): self.__tokdict[k] = self.__tokdict.get(k,list()) + [v] sub = v[0] elif isinstance(k,(int,slice)): self.__toklist[k] = v sub = v else: self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)] sub = v if isinstance(sub,ParseResults): sub.__parent = wkref(self) def __delitem__( self, i ): if isinstance(i,(int,slice)): mylen = len( self.__toklist ) del self.__toklist[i] # convert int to slice if isinstance(i, int): if i < 0: i += mylen i = slice(i, i+1) # get removed indices removed = list(range(*i.indices(mylen))) removed.reverse() # fixup indices in token dictionary for name,occurrences in self.__tokdict.items(): for j in removed: for k, (value, position) in enumerate(occurrences): occurrences[k] = _ParseResultsWithOffset(value, position - (position > j)) else: del self.__tokdict[i] def __contains__( self, k ): return k in self.__tokdict def __len__( self ): return len( self.__toklist ) def __bool__(self): return ( 
not not self.__toklist ) __nonzero__ = __bool__ def __iter__( self ): return iter( self.__toklist ) def __reversed__( self ): return iter( self.__toklist[::-1] ) def _iterkeys( self ): if hasattr(self.__tokdict, "iterkeys"): return self.__tokdict.iterkeys() else: return iter(self.__tokdict) def _itervalues( self ): return (self[k] for k in self._iterkeys()) def _iteritems( self ): return ((k, self[k]) for k in self._iterkeys()) if PY_3: keys = _iterkeys """Returns an iterator of all named result keys (Python 3.x only).""" values = _itervalues """Returns an iterator of all named result values (Python 3.x only).""" items = _iteritems """Returns an iterator of all named result key-value tuples (Python 3.x only).""" else: iterkeys = _iterkeys """Returns an iterator of all named result keys (Python 2.x only).""" itervalues = _itervalues """Returns an iterator of all named result values (Python 2.x only).""" iteritems = _iteritems """Returns an iterator of all named result key-value tuples (Python 2.x only).""" def keys( self ): """Returns all named result keys (as a list in Python 2.x, as an iterator in Python 3.x).""" return list(self.iterkeys()) def values( self ): """Returns all named result values (as a list in Python 2.x, as an iterator in Python 3.x).""" return list(self.itervalues()) def items( self ): """Returns all named result key-values (as a list of tuples in Python 2.x, as an iterator in Python 3.x).""" return list(self.iteritems()) def haskeys( self ): """Since keys() returns an iterator, this method is helpful in bypassing code that looks for the existence of any defined results names.""" return bool(self.__tokdict) def pop( self, *args, **kwargs): """ Removes and returns item at specified index (default=C{last}). Supports both C{list} and C{dict} semantics for C{pop()}. If passed no argument or an integer argument, it will use C{list} semantics and pop tokens from the list of parsed tokens. If passed a non-integer argument (most likely a string), it will use C{dict} semantics and pop the corresponding value from any defined results names. A second default return value argument is supported, just as in C{dict.pop()}. Example:: def remove_first(tokens): tokens.pop(0) print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] print(OneOrMore(Word(nums)).addParseAction(remove_first).parseString("0 123 321")) # -> ['123', '321'] label = Word(alphas) patt = label("LABEL") + OneOrMore(Word(nums)) print(patt.parseString("AAB 123 321").dump()) # Use pop() in a parse action to remove named result (note that corresponding value is not # removed from list form of results) def remove_LABEL(tokens): tokens.pop("LABEL") return tokens patt.addParseAction(remove_LABEL) print(patt.parseString("AAB 123 321").dump()) prints:: ['AAB', '123', '321'] - LABEL: AAB ['AAB', '123', '321'] """ if not args: args = [-1] for k,v in kwargs.items(): if k == 'default': args = (args[0], v) else: raise TypeError("pop() got an unexpected keyword argument '%s'" % k) if (isinstance(args[0], int) or len(args) == 1 or args[0] in self): index = args[0] ret = self[index] del self[index] return ret else: defaultvalue = args[1] return defaultvalue def get(self, key, defaultValue=None): """ Returns named result matching the given key, or if there is no such name, then returns the given C{defaultValue} or C{None} if no C{defaultValue} is specified. Similar to C{dict.get()}. 
Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString("1999/12/31") print(result.get("year")) # -> '1999' print(result.get("hour", "not specified")) # -> 'not specified' print(result.get("hour")) # -> None """ if key in self: return self[key] else: return defaultValue def insert( self, index, insStr ): """ Inserts new element at location index in the list of parsed tokens. Similar to C{list.insert()}. Example:: print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] # use a parse action to insert the parse location in the front of the parsed results def insert_locn(locn, tokens): tokens.insert(0, locn) print(OneOrMore(Word(nums)).addParseAction(insert_locn).parseString("0 123 321")) # -> [0, '0', '123', '321'] """ self.__toklist.insert(index, insStr) # fixup indices in token dictionary for name,occurrences in self.__tokdict.items(): for k, (value, position) in enumerate(occurrences): occurrences[k] = _ParseResultsWithOffset(value, position + (position > index)) def append( self, item ): """ Add single element to end of ParseResults list of elements. Example:: print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] # use a parse action to compute the sum of the parsed integers, and add it to the end def append_sum(tokens): tokens.append(sum(map(int, tokens))) print(OneOrMore(Word(nums)).addParseAction(append_sum).parseString("0 123 321")) # -> ['0', '123', '321', 444] """ self.__toklist.append(item) def extend( self, itemseq ): """ Add sequence of elements to end of ParseResults list of elements. Example:: patt = OneOrMore(Word(alphas)) # use a parse action to append the reverse of the matched strings, to make a palindrome def make_palindrome(tokens): tokens.extend(reversed([t[::-1] for t in tokens])) return ''.join(tokens) print(patt.addParseAction(make_palindrome).parseString("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' """ if isinstance(itemseq, ParseResults): self += itemseq else: self.__toklist.extend(itemseq) def clear( self ): """ Clear all elements and results names. 
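
        Example (a minimal sketch of clearing accumulated tokens and names)::
            result = OneOrMore(Word(nums)).parseString("1 2 3")
            print(result)           # -> ['1', '2', '3']
            result.clear()
            print(result)           # -> []
            print(result.haskeys()) # -> False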
""" del self.__toklist[:] self.__tokdict.clear() def __getattr__( self, name ): try: return self[name] except KeyError: return "" if name in self.__tokdict: if name not in self.__accumNames: return self.__tokdict[name][-1][0] else: return ParseResults([ v[0] for v in self.__tokdict[name] ]) else: return "" def __add__( self, other ): ret = self.copy() ret += other return ret def __iadd__( self, other ): if other.__tokdict: offset = len(self.__toklist) addoffset = lambda a: offset if a<0 else a+offset otheritems = other.__tokdict.items() otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) ) for (k,vlist) in otheritems for v in vlist] for k,v in otherdictitems: self[k] = v if isinstance(v[0],ParseResults): v[0].__parent = wkref(self) self.__toklist += other.__toklist self.__accumNames.update( other.__accumNames ) return self def __radd__(self, other): if isinstance(other,int) and other == 0: # useful for merging many ParseResults using sum() builtin return self.copy() else: # this may raise a TypeError - so be it return other + self def __repr__( self ): return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) ) def __str__( self ): return '[' + ', '.join(_ustr(i) if isinstance(i, ParseResults) else repr(i) for i in self.__toklist) + ']' def _asStringList( self, sep='' ): out = [] for item in self.__toklist: if out and sep: out.append(sep) if isinstance( item, ParseResults ): out += item._asStringList() else: out.append( _ustr(item) ) return out def asList( self ): """ Returns the parse results as a nested list of matching tokens, all converted to strings. Example:: patt = OneOrMore(Word(alphas)) result = patt.parseString("sldkj lsdkj sldkj") # even though the result prints in string-like form, it is actually a pyparsing ParseResults print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj'] # Use asList() to create an actual list result_list = result.asList() print(type(result_list), result_list) # -> <class 'list'> ['sldkj', 'lsdkj', 'sldkj'] """ return [res.asList() if isinstance(res,ParseResults) else res for res in self.__toklist] def asDict( self ): """ Returns the named parse results as a nested dictionary. Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString('12/31/1999') print(type(result), repr(result)) # -> <class 'pyparsing.ParseResults'> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) result_dict = result.asDict() print(type(result_dict), repr(result_dict)) # -> <class 'dict'> {'day': '1999', 'year': '12', 'month': '31'} # even though a ParseResults supports dict-like access, sometime you just need to have a dict import json print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable print(json.dumps(result.asDict())) # -> {"month": "31", "day": "1999", "year": "12"} """ if PY_3: item_fn = self.items else: item_fn = self.iteritems def toItem(obj): if isinstance(obj, ParseResults): if obj.haskeys(): return obj.asDict() else: return [toItem(v) for v in obj] else: return obj return dict((k,toItem(v)) for k,v in item_fn()) def copy( self ): """ Returns a new copy of a C{ParseResults} object. 
""" ret = ParseResults( self.__toklist ) ret.__tokdict = self.__tokdict.copy() ret.__parent = self.__parent ret.__accumNames.update( self.__accumNames ) ret.__name = self.__name return ret def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ): """ (Deprecated) Returns the parse results as XML. Tags are created for tokens and lists that have defined results names. """ nl = "\n" out = [] namedItems = dict((v[1],k) for (k,vlist) in self.__tokdict.items() for v in vlist) nextLevelIndent = indent + " " # collapse out indents if formatting is not desired if not formatted: indent = "" nextLevelIndent = "" nl = "" selfTag = None if doctag is not None: selfTag = doctag else: if self.__name: selfTag = self.__name if not selfTag: if namedItemsOnly: return "" else: selfTag = "ITEM" out += [ nl, indent, "<", selfTag, ">" ] for i,res in enumerate(self.__toklist): if isinstance(res,ParseResults): if i in namedItems: out += [ res.asXML(namedItems[i], namedItemsOnly and doctag is None, nextLevelIndent, formatted)] else: out += [ res.asXML(None, namedItemsOnly and doctag is None, nextLevelIndent, formatted)] else: # individual token, see if there is a name for it resTag = None if i in namedItems: resTag = namedItems[i] if not resTag: if namedItemsOnly: continue else: resTag = "ITEM" xmlBodyText = _xml_escape(_ustr(res)) out += [ nl, nextLevelIndent, "<", resTag, ">", xmlBodyText, "</", resTag, ">" ] out += [ nl, indent, "</", selfTag, ">" ] return "".join(out) def __lookup(self,sub): for k,vlist in self.__tokdict.items(): for v,loc in vlist: if sub is v: return k return None def getName(self): r""" Returns the results name for this token expression. Useful when several different expressions might match at a particular location. Example:: integer = Word(nums) ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") house_number_expr = Suppress('#') + Word(nums, alphanums) user_data = (Group(house_number_expr)("house_number") | Group(ssn_expr)("ssn") | Group(integer)("age")) user_info = OneOrMore(user_data) result = user_info.parseString("22 111-22-3333 #221B") for item in result: print(item.getName(), ':', item[0]) prints:: age : 22 ssn : 111-22-3333 house_number : 221B """ if self.__name: return self.__name elif self.__parent: par = self.__parent() if par: return par.__lookup(self) else: return None elif (len(self) == 1 and len(self.__tokdict) == 1 and next(iter(self.__tokdict.values()))[0][1] in (0,-1)): return next(iter(self.__tokdict.keys())) else: return None def dump(self, indent='', depth=0, full=True): """ Diagnostic method for listing out the contents of a C{ParseResults}. Accepts an optional C{indent} argument so that this string can be embedded in a nested display of other data. 
Example:: integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") result = date_str.parseString('12/31/1999') print(result.dump()) prints:: ['12', '/', '31', '/', '1999'] - day: 1999 - month: 31 - year: 12 """ out = [] NL = '\n' out.append( indent+_ustr(self.asList()) ) if full: if self.haskeys(): items = sorted((str(k), v) for k,v in self.items()) for k,v in items: if out: out.append(NL) out.append( "%s%s- %s: " % (indent,(' '*depth), k) ) if isinstance(v,ParseResults): if v: out.append( v.dump(indent,depth+1) ) else: out.append(_ustr(v)) else: out.append(repr(v)) elif any(isinstance(vv,ParseResults) for vv in self): v = self for i,vv in enumerate(v): if isinstance(vv,ParseResults): out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),vv.dump(indent,depth+1) )) else: out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),_ustr(vv))) return "".join(out) def pprint(self, *args, **kwargs): """ Pretty-printer for parsed results as a list, using the C{pprint} module. Accepts additional positional or keyword args as defined for the C{pprint.pprint} method. (U{http://docs.python.org/3/library/pprint.html#pprint.pprint}) Example:: ident = Word(alphas, alphanums) num = Word(nums) func = Forward() term = ident | num | Group('(' + func + ')') func <<= ident + Group(Optional(delimitedList(term))) result = func.parseString("fna a,b,(fnb c,d,200),100") result.pprint(width=40) prints:: ['fna', ['a', 'b', ['(', 'fnb', ['c', 'd', '200'], ')'], '100']] """ pprint.pprint(self.asList(), *args, **kwargs) # add support for pickle protocol def __getstate__(self): return ( self.__toklist, ( self.__tokdict.copy(), self.__parent is not None and self.__parent() or None, self.__accumNames, self.__name ) ) def __setstate__(self,state): self.__toklist = state[0] (self.__tokdict, par, inAccumNames, self.__name) = state[1] self.__accumNames = {} self.__accumNames.update(inAccumNames) if par is not None: self.__parent = wkref(par) else: self.__parent = None def __getnewargs__(self): return self.__toklist, self.__name, self.__asList, self.__modal def __dir__(self): return (dir(type(self)) + list(self.keys())) MutableMapping.register(ParseResults) def col (loc,strg): """Returns current column within a string, counting newlines as line separators. The first column is number 1. Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information on parsing strings containing C{<TAB>}s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string. """ s = strg return 1 if 0<loc<len(s) and s[loc-1] == '\n' else loc - s.rfind("\n", 0, loc) def lineno(loc,strg): """Returns current line number within a string, counting newlines as line separators. The first line is number 1. Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information on parsing strings containing C{<TAB>}s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string. """ return strg.count("\n",0,loc) + 1 def line( loc, strg ): """Returns the line of text containing loc within a string, counting newlines as line separators. 
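
        Example (a small sketch, shown alongside the companion helpers C{lineno} and C{col})::
            s = "abc\ndef\nghi"
            print(line(5, s))   # -> 'def'
            print(lineno(5, s)) # -> 2
            print(col(5, s))    # -> 2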
""" lastCR = strg.rfind("\n", 0, loc) nextCR = strg.find("\n", loc) if nextCR >= 0: return strg[lastCR+1:nextCR] else: return strg[lastCR+1:] def _defaultStartDebugAction( instring, loc, expr ): print (("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))) def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ): print ("Matched " + _ustr(expr) + " -> " + str(toks.asList())) def _defaultExceptionDebugAction( instring, loc, expr, exc ): print ("Exception raised:" + _ustr(exc)) def nullDebugAction(*args): """'Do-nothing' debug action, to suppress debugging output during parsing.""" pass # Only works on Python 3.x - nonlocal is toxic to Python 2 installs #~ 'decorator to trim function calls to match the arity of the target' #~ def _trim_arity(func, maxargs=3): #~ if func in singleArgBuiltins: #~ return lambda s,l,t: func(t) #~ limit = 0 #~ foundArity = False #~ def wrapper(*args): #~ nonlocal limit,foundArity #~ while 1: #~ try: #~ ret = func(*args[limit:]) #~ foundArity = True #~ return ret #~ except TypeError: #~ if limit == maxargs or foundArity: #~ raise #~ limit += 1 #~ continue #~ return wrapper # this version is Python 2.x-3.x cross-compatible 'decorator to trim function calls to match the arity of the target' def _trim_arity(func, maxargs=2): if func in singleArgBuiltins: return lambda s,l,t: func(t) limit = [0] foundArity = [False] # traceback return data structure changed in Py3.5 - normalize back to plain tuples if system_version[:2] >= (3,5): def extract_stack(limit=0): # special handling for Python 3.5.0 - extra deep call stack by 1 offset = -3 if system_version == (3,5,0) else -2 frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset] return [frame_summary[:2]] def extract_tb(tb, limit=0): frames = traceback.extract_tb(tb, limit=limit) frame_summary = frames[-1] return [frame_summary[:2]] else: extract_stack = traceback.extract_stack extract_tb = traceback.extract_tb # synthesize what would be returned by traceback.extract_stack at the call to # user's parse action 'func', so that we don't incur call penalty at parse time LINE_DIFF = 6 # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!! 
this_line = extract_stack(limit=2)[-1] pa_call_line_synth = (this_line[0], this_line[1]+LINE_DIFF) def wrapper(*args): while 1: try: ret = func(*args[limit[0]:]) foundArity[0] = True return ret except TypeError: # re-raise TypeErrors if they did not come from our arity testing if foundArity[0]: raise else: try: tb = sys.exc_info()[-1] if not extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth: raise finally: del tb if limit[0] <= maxargs: limit[0] += 1 continue raise # copy func name to wrapper for sensible debug output func_name = "<parse action>" try: func_name = getattr(func, '__name__', getattr(func, '__class__').__name__) except Exception: func_name = str(func) wrapper.__name__ = func_name return wrapper class ParserElement(object): """Abstract base level parser element class.""" DEFAULT_WHITE_CHARS = " \n\t\r" verbose_stacktrace = False @staticmethod def setDefaultWhitespaceChars( chars ): r""" Overrides the default whitespace chars Example:: # default whitespace chars are space, <TAB> and newline OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl'] # change to just treat newline as significant ParserElement.setDefaultWhitespaceChars(" \t") OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def'] """ ParserElement.DEFAULT_WHITE_CHARS = chars @staticmethod def inlineLiteralsUsing(cls): """ Set class to be used for inclusion of string literals into a parser. Example:: # default literal class used is Literal integer = Word(nums) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") date_str.parseString("1999/12/31") # -> ['1999', '/', '12', '/', '31'] # change to Suppress ParserElement.inlineLiteralsUsing(Suppress) date_str = integer("year") + '/' + integer("month") + '/' + integer("day") date_str.parseString("1999/12/31") # -> ['1999', '12', '31'] """ ParserElement._literalStringClass = cls def __init__( self, savelist=False ): self.parseAction = list() self.failAction = None #~ self.name = "<unknown>" # don't define self.name, let subclasses try/except upcall self.strRepr = None self.resultsName = None self.saveAsList = savelist self.skipWhitespace = True self.whiteChars = ParserElement.DEFAULT_WHITE_CHARS self.copyDefaultWhiteChars = True self.mayReturnEmpty = False # used when checking for left-recursion self.keepTabs = False self.ignoreExprs = list() self.debug = False self.streamlined = False self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index self.errmsg = "" self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all) self.debugActions = ( None, None, None ) #custom debug actions self.re = None self.callPreparse = True # used to avoid redundant calls to preParse self.callDuringTry = False def copy( self ): """ Make a copy of this C{ParserElement}. Useful for defining different parse actions for the same parsing pattern, using copies of the original parse element. 
Example:: integer = Word(nums).setParseAction(lambda toks: int(toks[0])) integerK = integer.copy().addParseAction(lambda toks: toks[0]*1024) + Suppress("K") integerM = integer.copy().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") print(OneOrMore(integerK | integerM | integer).parseString("5K 100 640K 256M")) prints:: [5120, 100, 655360, 268435456] Equivalent form of C{expr.copy()} is just C{expr()}:: integerM = integer().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") """ cpy = copy.copy( self ) cpy.parseAction = self.parseAction[:] cpy.ignoreExprs = self.ignoreExprs[:] if self.copyDefaultWhiteChars: cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS return cpy def setName( self, name ): """ Define name for this expression, makes debugging and exception messages clearer. Example:: Word(nums).parseString("ABC") # -> Exception: Expected W:(0123...) (at char 0), (line:1, col:1) Word(nums).setName("integer").parseString("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1) """ self.name = name self.errmsg = "Expected " + self.name if hasattr(self,"exception"): self.exception.msg = self.errmsg return self def setResultsName( self, name, listAllMatches=False ): """ Define name for referencing matching tokens as a nested attribute of the returned parse results. NOTE: this returns a *copy* of the original C{ParserElement} object; this is so that the client can define a basic element, such as an integer, and reference it in multiple places with different names. You can also set results names using the abbreviated syntax, C{expr("name")} in place of C{expr.setResultsName("name")} - see L{I{__call__}<__call__>}. Example:: date_str = (integer.setResultsName("year") + '/' + integer.setResultsName("month") + '/' + integer.setResultsName("day")) # equivalent form: date_str = integer("year") + '/' + integer("month") + '/' + integer("day") """ newself = self.copy() if name.endswith("*"): name = name[:-1] listAllMatches=True newself.resultsName = name newself.modalResults = not listAllMatches return newself def setBreak(self,breakFlag = True): """Method to invoke the Python pdb debugger when this element is about to be parsed. Set C{breakFlag} to True to enable, False to disable. """ if breakFlag: _parseMethod = self._parse def breaker(instring, loc, doActions=True, callPreParse=True): import pdb pdb.set_trace() return _parseMethod( instring, loc, doActions, callPreParse ) breaker._originalParseMethod = _parseMethod self._parse = breaker else: if hasattr(self._parse,"_originalParseMethod"): self._parse = self._parse._originalParseMethod return self def setParseAction( self, *fns, **kwargs ): """ Define one or more actions to perform when successfully matching parse element definition. Parse action fn is a callable method with 0-3 arguments, called as C{fn(s,loc,toks)}, C{fn(loc,toks)}, C{fn(toks)}, or just C{fn()}, where: - s = the original string being parsed (see note below) - loc = the location of the matching substring - toks = a list of the matched tokens, packaged as a C{L{ParseResults}} object If the functions in fns modify the tokens, they can return them as the return value from fn, and the modified list of tokens will replace the original. Otherwise, fn does not need to return any value. Optional keyword arguments: - callDuringTry = (default=C{False}) indicate if parse action should be run during lookaheads and alternate testing Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. 
See L{I{parseString}<parseString>} for more information
        on parsing strings containing C{<TAB>}s, and suggested methods to maintain a
        consistent view of the parsed string, the parse location, and line and column
        positions within the parsed string.

        Example::
            integer = Word(nums)
            date_str = integer + '/' + integer + '/' + integer

            date_str.parseString("1999/12/31") # -> ['1999', '/', '12', '/', '31']

            # use parse action to convert to ints at parse time
            integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
            date_str = integer + '/' + integer + '/' + integer

            # note that integer fields are now ints, not strings
            date_str.parseString("1999/12/31") # -> [1999, '/', 12, '/', 31]
        """
        self.parseAction = list(map(_trim_arity, list(fns)))
        self.callDuringTry = kwargs.get("callDuringTry", False)
        return self

    def addParseAction( self, *fns, **kwargs ):
        """
        Add one or more parse actions to expression's list of parse actions. See L{I{setParseAction}<setParseAction>}.

        See examples in L{I{copy}<copy>}.
        """
        self.parseAction += list(map(_trim_arity, list(fns)))
        self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False)
        return self

    def addCondition(self, *fns, **kwargs):
        """Add a boolean predicate function to expression's list of parse actions. See
        L{I{setParseAction}<setParseAction>} for function call signatures. Unlike C{setParseAction},
        functions passed to C{addCondition} need to return boolean success/fail of the condition.

        Optional keyword arguments:
         - message = define a custom message to be used in the raised exception
         - fatal = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise ParseException

        Example::
            integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
            year_int = integer.copy()
            year_int.addCondition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later")
            date_str = year_int + '/' + integer + '/' + integer

            result = date_str.parseString("1999/12/31")  # -> Exception: Only support years 2000 and later (at char 0), (line:1, col:1)
        """
        msg = kwargs.get("message", "failed user-defined condition")
        exc_type = ParseFatalException if kwargs.get("fatal", False) else ParseException
        for fn in fns:
            # bind fn as a default argument so that each condition closes over its
            # own function instead of the loop variable's final value
            def pa(s, l, t, fn=fn):
                if not bool(_trim_arity(fn)(s, l, t)):
                    raise exc_type(s, l, msg)
            self.parseAction.append(pa)
        self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False)
        return self

    def setFailAction( self, fn ):
        """Define action to perform if parsing fails at this expression.
        Fail action fn is a callable function that takes the arguments
        C{fn(s,loc,expr,err)} where:
         - s = string being parsed
         - loc = location where expression match was attempted and failed
         - expr = the parse expression that failed
         - err = the exception thrown
        The function returns no value; a short illustrative sketch follows below.
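
        Example (a minimal sketch; C{report_failure} is an illustrative name, not part of the API)::
            def report_failure(s, loc, expr, err):
                print("failed to match %s at line %d" % (expr, lineno(loc, s)))
            if_keyword = Keyword("if").setFailAction(report_failure)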
It may throw C{L{ParseFatalException}} if it is desired to stop parsing immediately.""" self.failAction = fn return self def _skipIgnorables( self, instring, loc ): exprsFound = True while exprsFound: exprsFound = False for e in self.ignoreExprs: try: while 1: loc,dummy = e._parse( instring, loc ) exprsFound = True except ParseException: pass return loc def preParse( self, instring, loc ): if self.ignoreExprs: loc = self._skipIgnorables( instring, loc ) if self.skipWhitespace: wt = self.whiteChars instrlen = len(instring) while loc < instrlen and instring[loc] in wt: loc += 1 return loc def parseImpl( self, instring, loc, doActions=True ): return loc, [] def postParse( self, instring, loc, tokenlist ): return tokenlist #~ @profile def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ): debugging = ( self.debug ) #and doActions ) if debugging or self.failAction: #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) )) if (self.debugActions[0] ): self.debugActions[0]( instring, loc, self ) if callPreParse and self.callPreparse: preloc = self.preParse( instring, loc ) else: preloc = loc tokensStart = preloc try: try: loc,tokens = self.parseImpl( instring, preloc, doActions ) except IndexError: raise ParseException( instring, len(instring), self.errmsg, self ) except ParseBaseException as err: #~ print ("Exception raised:", err) if self.debugActions[2]: self.debugActions[2]( instring, tokensStart, self, err ) if self.failAction: self.failAction( instring, tokensStart, self, err ) raise else: if callPreParse and self.callPreparse: preloc = self.preParse( instring, loc ) else: preloc = loc tokensStart = preloc if self.mayIndexError or preloc >= len(instring): try: loc,tokens = self.parseImpl( instring, preloc, doActions ) except IndexError: raise ParseException( instring, len(instring), self.errmsg, self ) else: loc,tokens = self.parseImpl( instring, preloc, doActions ) tokens = self.postParse( instring, loc, tokens ) retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults ) if self.parseAction and (doActions or self.callDuringTry): if debugging: try: for fn in self.parseAction: tokens = fn( instring, tokensStart, retTokens ) if tokens is not None: retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList and isinstance(tokens,(ParseResults,list)), modal=self.modalResults ) except ParseBaseException as err: #~ print "Exception raised in user parse action:", err if (self.debugActions[2] ): self.debugActions[2]( instring, tokensStart, self, err ) raise else: for fn in self.parseAction: tokens = fn( instring, tokensStart, retTokens ) if tokens is not None: retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList and isinstance(tokens,(ParseResults,list)), modal=self.modalResults ) if debugging: #~ print ("Matched",self,"->",retTokens.asList()) if (self.debugActions[1] ): self.debugActions[1]( instring, tokensStart, loc, self, retTokens ) return loc, retTokens def tryParse( self, instring, loc ): try: return self._parse( instring, loc, doActions=False )[0] except ParseFatalException: raise ParseException( instring, loc, self.errmsg, self) def canParseNext(self, instring, loc): try: self.tryParse(instring, loc) except (ParseException, IndexError): return False else: return True class _UnboundedCache(object): def __init__(self): cache = {} self.not_in_cache = not_in_cache = object() def get(self, key): return cache.get(key, not_in_cache) def set(self, key, value): 
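            # (added note) 'cache' is a closure variable shared by get/set/clear and
            # bound below via types.MethodType; keeping it out of the instance
            # dict saves an attribute lookup on every cache access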
cache[key] = value def clear(self): cache.clear() def cache_len(self): return len(cache) self.get = types.MethodType(get, self) self.set = types.MethodType(set, self) self.clear = types.MethodType(clear, self) self.__len__ = types.MethodType(cache_len, self) if _OrderedDict is not None: class _FifoCache(object): def __init__(self, size): self.not_in_cache = not_in_cache = object() cache = _OrderedDict() def get(self, key): return cache.get(key, not_in_cache) def set(self, key, value): cache[key] = value while len(cache) > size: try: cache.popitem(False) except KeyError: pass def clear(self): cache.clear() def cache_len(self): return len(cache) self.get = types.MethodType(get, self) self.set = types.MethodType(set, self) self.clear = types.MethodType(clear, self) self.__len__ = types.MethodType(cache_len, self) else: class _FifoCache(object): def __init__(self, size): self.not_in_cache = not_in_cache = object() cache = {} key_fifo = collections.deque([], size) def get(self, key): return cache.get(key, not_in_cache) def set(self, key, value): cache[key] = value while len(key_fifo) > size: cache.pop(key_fifo.popleft(), None) key_fifo.append(key) def clear(self): cache.clear() key_fifo.clear() def cache_len(self): return len(cache) self.get = types.MethodType(get, self) self.set = types.MethodType(set, self) self.clear = types.MethodType(clear, self) self.__len__ = types.MethodType(cache_len, self) # argument cache for optimizing repeated calls when backtracking through recursive expressions packrat_cache = {} # this is set later by enabledPackrat(); this is here so that resetCache() doesn't fail packrat_cache_lock = RLock() packrat_cache_stats = [0, 0] # this method gets repeatedly called during backtracking with the same arguments - # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression def _parseCache( self, instring, loc, doActions=True, callPreParse=True ): HIT, MISS = 0, 1 lookup = (self, instring, loc, callPreParse, doActions) with ParserElement.packrat_cache_lock: cache = ParserElement.packrat_cache value = cache.get(lookup) if value is cache.not_in_cache: ParserElement.packrat_cache_stats[MISS] += 1 try: value = self._parseNoCache(instring, loc, doActions, callPreParse) except ParseBaseException as pe: # cache a copy of the exception, without the traceback cache.set(lookup, pe.__class__(*pe.args)) raise else: cache.set(lookup, (value[0], value[1].copy())) return value else: ParserElement.packrat_cache_stats[HIT] += 1 if isinstance(value, Exception): raise value return (value[0], value[1].copy()) _parse = _parseNoCache @staticmethod def resetCache(): ParserElement.packrat_cache.clear() ParserElement.packrat_cache_stats[:] = [0] * len(ParserElement.packrat_cache_stats) _packratEnabled = False @staticmethod def enablePackrat(cache_size_limit=128): """Enables "packrat" parsing, which adds memoizing to the parsing logic. Repeated parse attempts at the same string location (which happens often in many complex grammars) can immediately return a cached value, instead of re-executing parsing/validating code. Memoizing is done of both valid results and parsing exceptions. Parameters: - cache_size_limit - (default=C{128}) - if an integer value is provided will limit the size of the packrat cache; if None is passed, then the cache size will be unbounded; if 0 is passed, the cache will be effectively disabled. This speedup may break existing programs that use parse actions that have side-effects. 
For this reason, packrat parsing is disabled when you first import pyparsing.  To
        activate the packrat feature, your program must call the class method
        C{ParserElement.enablePackrat()}.  If your program uses C{psyco} to "compile as
        you go", you must call C{enablePackrat} before calling C{psyco.full()}.  If you
        do not do this, Python will crash.  For best results, call C{enablePackrat()}
        immediately after importing pyparsing.

        Example::
            import pyparsing
            pyparsing.ParserElement.enablePackrat()
        """
        if not ParserElement._packratEnabled:
            ParserElement._packratEnabled = True
            if cache_size_limit is None:
                ParserElement.packrat_cache = ParserElement._UnboundedCache()
            else:
                ParserElement.packrat_cache = ParserElement._FifoCache(cache_size_limit)
            ParserElement._parse = ParserElement._parseCache

    def parseString( self, instring, parseAll=False ):
        """
        Execute the parse expression with the given string.
        This is the main interface to the client code, once the complete
        expression has been built.

        If you want the grammar to require that the entire input string be
        successfully parsed, then set C{parseAll} to True (equivalent to ending
        the grammar with C{L{StringEnd()}}).

        Note: C{parseString} implicitly calls C{expandtabs()} on the input string,
        in order to report proper column numbers in parse actions.
        If the input string contains tabs and
        the grammar uses parse actions that use the C{loc} argument to index into the
        string being parsed, you can ensure you have a consistent view of the input
        string by:
         - calling C{parseWithTabs} on your grammar before calling C{parseString}
           (see L{I{parseWithTabs}<parseWithTabs>})
         - define your parse action using the full C{(s,loc,toks)} signature, and
           reference the input string using the parse action's C{s} argument
         - explicitly expand the tabs in your input string before calling
           C{parseString}

        Example::
            Word('a').parseString('aaaaabaaa')  # -> ['aaaaa']
            Word('a').parseString('aaaaabaaa', parseAll=True)  # -> Exception: Expected end of text
        """
        ParserElement.resetCache()
        if not self.streamlined:
            self.streamline()
            #~ self.saveAsList = True
        for e in self.ignoreExprs:
            e.streamline()
        if not self.keepTabs:
            instring = instring.expandtabs()
        try:
            loc, tokens = self._parse( instring, 0 )
            if parseAll:
                loc = self.preParse( instring, loc )
                se = Empty() + StringEnd()
                se._parse( instring, loc )
        except ParseBaseException as exc:
            if ParserElement.verbose_stacktrace:
                raise
            else:
                # catch and re-raise exception from here, clears out pyparsing internal stack trace
                raise exc
        else:
            return tokens

    def scanString( self, instring, maxMatches=_MAX_INT, overlap=False ):
        """
        Scan the input string for expression matches.  Each match will return the
        matching tokens, start location, and end location.  May be called with optional
        C{maxMatches} argument, to clip scanning after 'n' matches are found.  If
        C{overlap} is specified, then overlapping matches will be reported.

        Note that the start and end locations are reported relative to the string
        being parsed.  See L{I{parseString}<parseString>} for more information on parsing
        strings with embedded tabs.
Example:: source = "sldjf123lsdjjkf345sldkjf879lkjsfd987" print(source) for tokens,start,end in Word(alphas).scanString(source): print(' '*start + '^'*(end-start)) print(' '*start + tokens[0]) prints:: sldjf123lsdjjkf345sldkjf879lkjsfd987 ^^^^^ sldjf ^^^^^^^ lsdjjkf ^^^^^^ sldkjf ^^^^^^ lkjsfd """ if not self.streamlined: self.streamline() for e in self.ignoreExprs: e.streamline() if not self.keepTabs: instring = _ustr(instring).expandtabs() instrlen = len(instring) loc = 0 preparseFn = self.preParse parseFn = self._parse ParserElement.resetCache() matches = 0 try: while loc <= instrlen and matches < maxMatches: try: preloc = preparseFn( instring, loc ) nextLoc,tokens = parseFn( instring, preloc, callPreParse=False ) except ParseException: loc = preloc+1 else: if nextLoc > loc: matches += 1 yield tokens, preloc, nextLoc if overlap: nextloc = preparseFn( instring, loc ) if nextloc > loc: loc = nextLoc else: loc += 1 else: loc = nextLoc else: loc = preloc+1 except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def transformString( self, instring ): """ Extension to C{L{scanString}}, to modify matching text with modified tokens that may be returned from a parse action. To use C{transformString}, define a grammar and attach a parse action to it that modifies the returned token list. Invoking C{transformString()} on a target string will then scan for matches, and replace the matched text patterns according to the logic in the parse action. C{transformString()} returns the resulting transformed string. Example:: wd = Word(alphas) wd.setParseAction(lambda toks: toks[0].title()) print(wd.transformString("now is the winter of our discontent made glorious summer by this sun of york.")) Prints:: Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York. """ out = [] lastE = 0 # force preservation of <TAB>s, to minimize unwanted transformation of string, and to # keep string locs straight between transformString and scanString self.keepTabs = True try: for t,s,e in self.scanString( instring ): out.append( instring[lastE:s] ) if t: if isinstance(t,ParseResults): out += t.asList() elif isinstance(t,list): out += t else: out.append(t) lastE = e out.append(instring[lastE:]) out = [o for o in out if o] return "".join(map(_ustr,_flatten(out))) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def searchString( self, instring, maxMatches=_MAX_INT ): """ Another extension to C{L{scanString}}, simplifying the access to the tokens found to match the given parse expression. May be called with optional C{maxMatches} argument, to clip searching after 'n' matches are found. 
Example:: # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters cap_word = Word(alphas.upper(), alphas.lower()) print(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity")) # the sum() builtin can be used to merge results into a single ParseResults object print(sum(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity"))) prints:: [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']] ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity'] """ try: return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ]) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def split(self, instring, maxsplit=_MAX_INT, includeSeparators=False): """ Generator method to split a string using the given expression as a separator. May be called with optional C{maxsplit} argument, to limit the number of splits; and the optional C{includeSeparators} argument (default=C{False}), if the separating matching text should be included in the split results. Example:: punc = oneOf(list(".,;:/-!?")) print(list(punc.split("This, this?, this sentence, is badly punctuated!"))) prints:: ['This', ' this', '', ' this sentence', ' is badly punctuated', ''] """ splits = 0 last = 0 for t,s,e in self.scanString(instring, maxMatches=maxsplit): yield instring[last:s] if includeSeparators: yield t[0] last = e yield instring[last:] def __add__(self, other ): """ Implementation of + operator - returns C{L{And}}. Adding strings to a ParserElement converts them to L{Literal}s by default. Example:: greet = Word(alphas) + "," + Word(alphas) + "!" hello = "Hello, World!" print (hello, "->", greet.parseString(hello)) Prints:: Hello, World! -> ['Hello', ',', 'World', '!'] """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return And( [ self, other ] ) def __radd__(self, other ): """ Implementation of + operator when left operand is not a C{L{ParserElement}} """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other + self def __sub__(self, other): """ Implementation of - operator, returns C{L{And}} with error stop """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return self + And._ErrorStop() + other def __rsub__(self, other ): """ Implementation of - operator when left operand is not a C{L{ParserElement}} """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other - self def __mul__(self,other): """ Implementation of * operator, allows use of C{expr * 3} in place of C{expr + expr + expr}. 
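
        Example (illustrative)::
            integer = Word(nums)
            three_ints = integer * 3
            three_ints.parseString("12 45 99") # -> ['12', '45', '99']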
Expressions may also be multiplied by a 2-integer tuple, similar to
        C{{min,max}} multipliers in regular expressions.  Tuples may also include
        C{None} as in:
         - C{expr*(n,None)} or C{expr*(n,)} is equivalent
              to C{expr*n + L{ZeroOrMore}(expr)}
              (read as "at least n instances of C{expr}")
         - C{expr*(None,n)} is equivalent to C{expr*(0,n)}
              (read as "0 to n instances of C{expr}")
         - C{expr*(None,None)} is equivalent to C{L{ZeroOrMore}(expr)}
         - C{expr*(1,None)} is equivalent to C{L{OneOrMore}(expr)}

        Note that C{expr*(None,n)} does not raise an exception if
        more than n exprs exist in the input stream; that is,
        C{expr*(None,n)} does not enforce a maximum number of expr
        occurrences.  If this behavior is desired, then write
        C{expr*(None,n) + ~expr}
        """
        if isinstance(other,int):
            minElements, optElements = other,0
        elif isinstance(other,tuple):
            other = (other + (None, None))[:2]
            if other[0] is None:
                other = (0, other[1])
            if isinstance(other[0],int) and other[1] is None:
                if other[0] == 0:
                    return ZeroOrMore(self)
                if other[0] == 1:
                    return OneOrMore(self)
                else:
                    return self*other[0] + ZeroOrMore(self)
            elif isinstance(other[0],int) and isinstance(other[1],int):
                minElements, optElements = other
                optElements -= minElements
            else:
                raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects"
                                % (type(other[0]), type(other[1])))
        else:
            raise TypeError("cannot multiply 'ParserElement' and '%s' objects" % type(other))

        if minElements < 0:
            raise ValueError("cannot multiply ParserElement by negative value")
        if optElements < 0:
            raise ValueError("second tuple value must be greater or equal to first tuple value")
        if minElements == optElements == 0:
            raise ValueError("cannot multiply ParserElement by 0 or (0,0)")

        if (optElements):
            def makeOptionalList(n):
                if n>1:
                    return Optional(self + makeOptionalList(n-1))
                else:
                    return Optional(self)
            if minElements:
                if minElements == 1:
                    ret = self + makeOptionalList(optElements)
                else:
                    ret = And([self]*minElements) + makeOptionalList(optElements)
            else:
                ret = makeOptionalList(optElements)
        else:
            if minElements == 1:
                ret = self
            else:
                ret = And([self]*minElements)
        return ret

    def __rmul__(self, other):
        return self.__mul__(other)

    def __or__(self, other ):
        """
        Implementation of | operator - returns C{L{MatchFirst}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return MatchFirst( [ self, other ] )

    def __ror__(self, other ):
        """
        Implementation of | operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return other | self

    def __xor__(self, other ):
        """
        Implementation of ^ operator - returns C{L{Or}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
                    SyntaxWarning, stacklevel=2)
            return None
        return Or( [ self, other ] )

    def __rxor__(self, other ):
        """
        Implementation of ^ operator when left operand is not a C{L{ParserElement}}
        """
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        if not isinstance( other, ParserElement ):
warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other ^ self def __and__(self, other ): """ Implementation of & operator - returns C{L{Each}} """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return Each( [ self, other ] ) def __rand__(self, other ): """ Implementation of & operator when left operand is not a C{L{ParserElement}} """ if isinstance( other, basestring ): other = ParserElement._literalStringClass( other ) if not isinstance( other, ParserElement ): warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), SyntaxWarning, stacklevel=2) return None return other & self def __invert__( self ): """ Implementation of ~ operator - returns C{L{NotAny}} """ return NotAny( self ) def __call__(self, name=None): """ Shortcut for C{L{setResultsName}}, with C{listAllMatches=False}. If C{name} is given with a trailing C{'*'} character, then C{listAllMatches} will be passed as C{True}. If C{name} is omitted, same as calling C{L{copy}}. Example:: # these are equivalent userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno") userdata = Word(alphas)("name") + Word(nums+"-")("socsecno") """ if name is not None: return self.setResultsName(name) else: return self.copy() def suppress( self ): """ Suppresses the output of this C{ParserElement}; useful to keep punctuation from cluttering up returned output. """ return Suppress( self ) def leaveWhitespace( self ): """ Disables the skipping of whitespace before matching the characters in the C{ParserElement}'s defined pattern. This is normally only used internally by the pyparsing module, but may be needed in some whitespace-sensitive grammars. """ self.skipWhitespace = False return self def setWhitespaceChars( self, chars ): """ Overrides the default whitespace chars """ self.skipWhitespace = True self.whiteChars = chars self.copyDefaultWhiteChars = False return self def parseWithTabs( self ): """ Overrides default behavior to expand C{<TAB>}s to spaces before parsing the input string. Must be called before C{parseString} when the input grammar contains elements that match C{<TAB>} characters. """ self.keepTabs = True return self def ignore( self, other ): """ Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns. Example:: patt = OneOrMore(Word(alphas)) patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj'] patt.ignore(cStyleComment) patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj', 'lskjd'] """ if isinstance(other, basestring): other = Suppress(other) if isinstance( other, Suppress ): if other not in self.ignoreExprs: self.ignoreExprs.append(other) else: self.ignoreExprs.append( Suppress( other.copy() ) ) return self def setDebugActions( self, startAction, successAction, exceptionAction ): """ Enable display of debugging messages while doing pattern matching. """ self.debugActions = (startAction or _defaultStartDebugAction, successAction or _defaultSuccessDebugAction, exceptionAction or _defaultExceptionDebugAction) self.debug = True return self def setDebug( self, flag=True ): """ Enable display of debugging messages while doing pattern matching. 
Set C{flag} to True to enable, False to disable. Example:: wd = Word(alphas).setName("alphaword") integer = Word(nums).setName("numword") term = wd | integer # turn on debugging for wd wd.setDebug() OneOrMore(term).parseString("abc 123 xyz 890") prints:: Match alphaword at loc 0(1,1) Matched alphaword -> ['abc'] Match alphaword at loc 3(1,4) Exception raised:Expected alphaword (at char 4), (line:1, col:5) Match alphaword at loc 7(1,8) Matched alphaword -> ['xyz'] Match alphaword at loc 11(1,12) Exception raised:Expected alphaword (at char 12), (line:1, col:13) Match alphaword at loc 15(1,16) Exception raised:Expected alphaword (at char 15), (line:1, col:16) The output shown is that produced by the default debug actions - custom debug actions can be specified using L{setDebugActions}. Prior to attempting to match the C{wd} expression, the debugging message C{"Match <exprname> at loc <n>(<line>,<col>)"} is shown. Then if the parse succeeds, a C{"Matched"} message is shown, or an C{"Exception raised"} message is shown. Also note the use of L{setName} to assign a human-readable name to the expression, which makes debugging and exception messages easier to understand - for instance, the default name created for the C{Word} expression without calling C{setName} is C{"W:(ABCD...)"}. """ if flag: self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction ) else: self.debug = False return self def __str__( self ): return self.name def __repr__( self ): return _ustr(self) def streamline( self ): self.streamlined = True self.strRepr = None return self def checkRecursion( self, parseElementList ): pass def validate( self, validateTrace=[] ): """ Check defined expressions for valid structure, check for infinite recursive definitions. """ self.checkRecursion( [] ) def parseFile( self, file_or_filename, parseAll=False ): """ Execute the parse expression on the given file or filename. If a filename is specified (instead of a file object), the entire file is opened, read, and closed before parsing. """ try: file_contents = file_or_filename.read() except AttributeError: with open(file_or_filename, "r") as f: file_contents = f.read() try: return self.parseString(file_contents, parseAll) except ParseBaseException as exc: if ParserElement.verbose_stacktrace: raise else: # catch and re-raise exception from here, clears out pyparsing internal stack trace raise exc def __eq__(self,other): if isinstance(other, ParserElement): return self is other or vars(self) == vars(other) elif isinstance(other, basestring): return self.matches(other) else: return super(ParserElement,self)==other def __ne__(self,other): return not (self == other) def __hash__(self): return hash(id(self)) def __req__(self,other): return self == other def __rne__(self,other): return not (self == other) def matches(self, testString, parseAll=True): """ Method for quick testing of a parser against a test string. Good for simple inline microtests of sub expressions while building up larger parser. 
    def matches(self, testString, parseAll=True):
        """
        Method for quick testing of a parser against a test string. Good for simple
        inline microtests of sub expressions while building up a larger parser.

        Parameters:
         - testString - to test against this expression for a match
         - parseAll - (default=C{True}) - flag to pass to C{L{parseString}} when running tests

        Example::
            expr = Word(nums)
            assert expr.matches("100")
        """
        try:
            self.parseString(_ustr(testString), parseAll=parseAll)
            return True
        except ParseBaseException:
            return False
""" if isinstance(tests, basestring): tests = list(map(str.strip, tests.rstrip().splitlines())) if isinstance(comment, basestring): comment = Literal(comment) allResults = [] comments = [] success = True for t in tests: if comment is not None and comment.matches(t, False) or comments and not t: comments.append(t) continue if not t: continue out = ['\n'.join(comments), t] comments = [] try: t = t.replace(r'\n','\n') result = self.parseString(t, parseAll=parseAll) out.append(result.dump(full=fullDump)) success = success and not failureTests except ParseBaseException as pe: fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else "" if '\n' in t: out.append(line(pe.loc, t)) out.append(' '*(col(pe.loc,t)-1) + '^' + fatal) else: out.append(' '*pe.loc + '^' + fatal) out.append("FAIL: " + str(pe)) success = success and failureTests result = pe except Exception as exc: out.append("FAIL-EXCEPTION: " + str(exc)) success = success and failureTests result = exc if printResults: if fullDump: out.append('') print('\n'.join(out)) allResults.append((t, result)) return success, allResults class Token(ParserElement): """ Abstract C{ParserElement} subclass, for defining atomic matching patterns. """ def __init__( self ): super(Token,self).__init__( savelist=False ) class Empty(Token): """ An empty token, will always match. """ def __init__( self ): super(Empty,self).__init__() self.name = "Empty" self.mayReturnEmpty = True self.mayIndexError = False class NoMatch(Token): """ A token that will never match. """ def __init__( self ): super(NoMatch,self).__init__() self.name = "NoMatch" self.mayReturnEmpty = True self.mayIndexError = False self.errmsg = "Unmatchable token" def parseImpl( self, instring, loc, doActions=True ): raise ParseException(instring, loc, self.errmsg, self) class Literal(Token): """ Token to exactly match a specified string. Example:: Literal('blah').parseString('blah') # -> ['blah'] Literal('blah').parseString('blahfooblah') # -> ['blah'] Literal('blah').parseString('bla') # -> Exception: Expected "blah" For case-insensitive matching, use L{CaselessLiteral}. For keyword matching (force word break before and after the matched string), use L{Keyword} or L{CaselessKeyword}. """ def __init__( self, matchString ): super(Literal,self).__init__() self.match = matchString self.matchLen = len(matchString) try: self.firstMatchChar = matchString[0] except IndexError: warnings.warn("null string passed to Literal; use Empty() instead", SyntaxWarning, stacklevel=2) self.__class__ = Empty self.name = '"%s"' % _ustr(self.match) self.errmsg = "Expected " + self.name self.mayReturnEmpty = False self.mayIndexError = False # Performance tuning: this routine gets called a *lot* # if this is a single character match string and the first character matches, # short-circuit as quickly as possible, and avoid calling startswith #~ @profile def parseImpl( self, instring, loc, doActions=True ): if (instring[loc] == self.firstMatchChar and (self.matchLen==1 or instring.startswith(self.match,loc)) ): return loc+self.matchLen, self.match raise ParseException(instring, loc, self.errmsg, self) _L = Literal ParserElement._literalStringClass = Literal class Keyword(Token): """ Token to exactly match a specified string as a keyword, that is, it must be immediately followed by a non-keyword character. Compare with C{L{Literal}}: - C{Literal("if")} will match the leading C{'if'} in C{'ifAndOnlyIf'}. 
class Keyword(Token):
    """
    Token to exactly match a specified string as a keyword, that is, it must be
    immediately followed by a non-keyword character.  Compare with C{L{Literal}}:
     - C{Literal("if")} will match the leading C{'if'} in C{'ifAndOnlyIf'}.
     - C{Keyword("if")} will not; it will only match the leading C{'if'} in C{'if x=1'}, or C{'if(y==2)'}
    Accepts two optional constructor arguments in addition to the keyword string:
     - C{identChars} is a string of characters that would be valid identifier characters,
          defaulting to all alphanumerics + "_" and "$"
     - C{caseless} allows case-insensitive matching, default is C{False}.

    Example::
        Keyword("start").parseString("start")  # -> ['start']
        Keyword("start").parseString("starting")  # -> Exception

    For case-insensitive matching, use L{CaselessKeyword}.
    """
    DEFAULT_KEYWORD_CHARS = alphanums+"_$"

    def __init__( self, matchString, identChars=None, caseless=False ):
        super(Keyword,self).__init__()
        if identChars is None:
            identChars = Keyword.DEFAULT_KEYWORD_CHARS
        self.match = matchString
        self.matchLen = len(matchString)
        try:
            self.firstMatchChar = matchString[0]
        except IndexError:
            warnings.warn("null string passed to Keyword; use Empty() instead",
                            SyntaxWarning, stacklevel=2)
        self.name = '"%s"' % self.match
        self.errmsg = "Expected " + self.name
        self.mayReturnEmpty = False
        self.mayIndexError = False
        self.caseless = caseless
        if caseless:
            self.caselessmatch = matchString.upper()
            identChars = identChars.upper()
        self.identChars = set(identChars)

    def parseImpl( self, instring, loc, doActions=True ):
        if self.caseless:
            if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and
                 (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and
                 (loc == 0 or instring[loc-1].upper() not in self.identChars) ):
                return loc+self.matchLen, self.match
        else:
            if (instring[loc] == self.firstMatchChar and
                (self.matchLen==1 or instring.startswith(self.match,loc)) and
                (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and
                (loc == 0 or instring[loc-1] not in self.identChars) ):
                return loc+self.matchLen, self.match
        raise ParseException(instring, loc, self.errmsg, self)

    def copy(self):
        c = super(Keyword,self).copy()
        c.identChars = Keyword.DEFAULT_KEYWORD_CHARS
        return c

    @staticmethod
    def setDefaultKeywordChars( chars ):
        """Overrides the default Keyword chars
        """
        Keyword.DEFAULT_KEYWORD_CHARS = chars


class CaselessLiteral(Literal):
    """
    Token to match a specified string, ignoring case of letters.
    Note: the matched results will always be in the case of the given
    match string, NOT the case of the input text.

    Example::
        OneOrMore(CaselessLiteral("CMD")).parseString("cmd CMD Cmd10") # -> ['CMD', 'CMD', 'CMD']

    (Contrast with example for L{CaselessKeyword}.)
    """
    def __init__( self, matchString ):
        super(CaselessLiteral,self).__init__( matchString.upper() )
        # Preserve the defining literal.
        self.returnString = matchString
        self.name = "'%s'" % self.returnString
        self.errmsg = "Expected " + self.name

    def parseImpl( self, instring, loc, doActions=True ):
        if instring[ loc:loc+self.matchLen ].upper() == self.match:
            return loc+self.matchLen, self.returnString
        raise ParseException(instring, loc, self.errmsg, self)
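# Illustrative sketch (not part of the original module): Keyword refuses to
# match when the text continues with an identifier character, while Literal
# happily matches a prefix of a longer word.
def _example_keyword_vs_literal():
    assert Literal("if").matches("ifAndOnlyIf", parseAll=False)
    assert not Keyword("if").matches("ifAndOnlyIf", parseAll=False)
    assert Keyword("if").matches("if(y==2)", parseAll=False)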
""" def __init__( self, matchString, identChars=None ): super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True ) def parseImpl( self, instring, loc, doActions=True ): if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) ): return loc+self.matchLen, self.match raise ParseException(instring, loc, self.errmsg, self) class CloseMatch(Token): """ A variation on L{Literal} which matches "close" matches, that is, strings with at most 'n' mismatching characters. C{CloseMatch} takes parameters: - C{match_string} - string to be matched - C{maxMismatches} - (C{default=1}) maximum number of mismatches allowed to count as a match The results from a successful parse will contain the matched text from the input string and the following named results: - C{mismatches} - a list of the positions within the match_string where mismatches were found - C{original} - the original match_string used to compare against the input string If C{mismatches} is an empty list, then the match was an exact match. Example:: patt = CloseMatch("ATCATCGAATGGA") patt.parseString("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']}) patt.parseString("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1) # exact match patt.parseString("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']}) # close match allowing up to 2 mismatches patt = CloseMatch("ATCATCGAATGGA", maxMismatches=2) patt.parseString("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']}) """ def __init__(self, match_string, maxMismatches=1): super(CloseMatch,self).__init__() self.name = match_string self.match_string = match_string self.maxMismatches = maxMismatches self.errmsg = "Expected %r (with up to %d mismatches)" % (self.match_string, self.maxMismatches) self.mayIndexError = False self.mayReturnEmpty = False def parseImpl( self, instring, loc, doActions=True ): start = loc instrlen = len(instring) maxloc = start + len(self.match_string) if maxloc <= instrlen: match_string = self.match_string match_stringloc = 0 mismatches = [] maxMismatches = self.maxMismatches for match_stringloc,s_m in enumerate(zip(instring[loc:maxloc], self.match_string)): src,mat = s_m if src != mat: mismatches.append(match_stringloc) if len(mismatches) > maxMismatches: break else: loc = match_stringloc + 1 results = ParseResults([instring[start:loc]]) results['original'] = self.match_string results['mismatches'] = mismatches return loc, results raise ParseException(instring, loc, self.errmsg, self) class Word(Token): """ Token for matching words composed of allowed character sets. Defined with string containing all allowed initial characters, an optional string containing allowed body characters (if omitted, defaults to the initial character set), and an optional minimum, maximum, and/or exact length. The default value for C{min} is 1 (a minimum value < 1 is not valid); the default values for C{max} and C{exact} are 0, meaning no maximum or exact length restriction. An optional C{excludeChars} parameter can list characters that might be found in the input C{bodyChars} string; useful to define a word of all printables except for one or two characters, for instance. 
class Word(Token):
    """
    Token for matching words composed of allowed character sets.
    Defined with string containing all allowed initial characters,
    an optional string containing allowed body characters (if omitted,
    defaults to the initial character set), and an optional minimum,
    maximum, and/or exact length.  The default value for C{min} is 1 (a
    minimum value < 1 is not valid); the default values for C{max} and C{exact}
    are 0, meaning no maximum or exact length restriction. An optional
    C{excludeChars} parameter can list characters that might be found in
    the input C{bodyChars} string; useful to define a word of all printables
    except for one or two characters, for instance.

    L{srange} is useful for defining custom character set strings for defining
    C{Word} expressions, using range notation from regular expression character sets.

    A common mistake is to use C{Word} to match a specific literal string, as in
    C{Word("Address")}. Remember that C{Word} uses the string argument to define
    I{sets} of matchable characters. This expression would match "Add", "AAA",
    "dAred", or any other word made up of the characters 'A', 'd', 'r', 'e', and 's'.
    To match an exact literal string, use L{Literal} or L{Keyword}.

    pyparsing includes helper strings for building Words:
     - L{alphas}
     - L{nums}
     - L{alphanums}
     - L{hexnums}
     - L{alphas8bit} (alphabetic characters in ASCII range 128-255 - accented, tilded, umlauted, etc.)
     - L{punc8bit} (non-alphabetic characters in ASCII range 128-255 - currency, symbols, superscripts, diacriticals, etc.)
     - L{printables} (any non-whitespace character)

    Example::
        # a word composed of digits
        integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9"))

        # a word with a leading capital, and zero or more lowercase
        capital_word = Word(alphas.upper(), alphas.lower())

        # hostnames are alphanumeric, with leading alpha, and '-'
        hostname = Word(alphas, alphanums+'-')

        # roman numeral (not a strict parser, accepts invalid mix of characters)
        roman = Word("IVXLCDM")

        # any string of non-whitespace characters, except for ','
        csv_value = Word(printables, excludeChars=",")
    """
    def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False, excludeChars=None ):
        super(Word,self).__init__()
        if excludeChars:
            initChars = ''.join(c for c in initChars if c not in excludeChars)
            if bodyChars:
                bodyChars = ''.join(c for c in bodyChars if c not in excludeChars)
        self.initCharsOrig = initChars
        self.initChars = set(initChars)
        if bodyChars :
            self.bodyCharsOrig = bodyChars
            self.bodyChars = set(bodyChars)
        else:
            self.bodyCharsOrig = initChars
            self.bodyChars = set(initChars)

        self.maxSpecified = max > 0

        if min < 1:
            raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted")

        self.minLen = min

        if max > 0:
            self.maxLen = max
        else:
            self.maxLen = _MAX_INT

        if exact > 0:
            self.maxLen = exact
            self.minLen = exact

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayIndexError = False
        self.asKeyword = asKeyword

        if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0):
            if self.bodyCharsOrig == self.initCharsOrig:
                self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig)
            elif len(self.initCharsOrig) == 1:
                self.reString = "%s[%s]*" % \
                                      (re.escape(self.initCharsOrig),
                                      _escapeRegexRangeChars(self.bodyCharsOrig),)
            else:
                self.reString = "[%s][%s]*" % \
                                      (_escapeRegexRangeChars(self.initCharsOrig),
                                      _escapeRegexRangeChars(self.bodyCharsOrig),)
            if self.asKeyword:
                self.reString = r"\b"+self.reString+r"\b"
            try:
                self.re = re.compile( self.reString )
            except Exception:
                self.re = None

    def parseImpl( self, instring, loc, doActions=True ):
        if self.re:
            result = self.re.match(instring,loc)
            if not result:
                raise ParseException(instring, loc, self.errmsg, self)

            loc = result.end()
            return loc, result.group()

        if not(instring[ loc ] in self.initChars):
            raise ParseException(instring, loc, self.errmsg, self)

        start = loc
        loc += 1
        instrlen = len(instring)
        bodychars = self.bodyChars
        maxloc = start + self.maxLen
        maxloc = min( maxloc, instrlen )
        while loc < maxloc and instring[loc] in bodychars:
            loc += 1

        throwException = False
        if loc - start < self.minLen:
            throwException = True
        if self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
            throwException = True
        if self.asKeyword:
            if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars):
                throwException = True

        if throwException:
            raise ParseException(instring, loc, self.errmsg, self)

        return loc, instring[start:loc]

    def __str__( self ):
        try:
            return super(Word,self).__str__()
        except Exception:
            pass

        if self.strRepr is None:

            def charsAsStr(s):
                if len(s)>4:
                    return s[:4]+"..."
                else:
                    return s

            if ( self.initCharsOrig != self.bodyCharsOrig ):
                self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) )
            else:
                self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)

        return self.strRepr
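# Illustrative sketch (not part of the original module): init chars and body
# chars are independent sets, so identifier-like tokens are easy to define;
# exact=n fixes both the minimum and maximum length.
def _example_word_usage():
    identifier = Word(alphas + "_", alphanums + "_")
    assert identifier.parseString("_myVar1")[0] == "_myVar1"
    hex4 = Word(hexnums, exact=4)
    assert hex4.parseString("BEEFcafe")[0] == "BEEF"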
class Regex(Token):
    r"""
    Token for matching strings that match a given regular expression.
    Defined with string specifying the regular expression in a form recognized by the inbuilt Python re module.
    If the given regex contains named groups (defined using C{(?P<name>...)}), these will be preserved as
    named parse results.

    Example::
        realnum = Regex(r"[+-]?\d+\.\d*")
        date = Regex(r'(?P<year>\d{4})-(?P<month>\d\d?)-(?P<day>\d\d?)')
        # ref: http://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression
        roman = Regex(r"M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")
    """
    compiledREtype = type(re.compile("[A-Z]"))
    def __init__( self, pattern, flags=0):
        """The parameters C{pattern} and C{flags} are passed to the C{re.compile()} function as-is. See the Python C{re} module for an explanation of the acceptable patterns and flags."""
        super(Regex,self).__init__()

        if isinstance(pattern, basestring):
            if not pattern:
                warnings.warn("null string passed to Regex; use Empty() instead",
                        SyntaxWarning, stacklevel=2)

            self.pattern = pattern
            self.flags = flags

            try:
                self.re = re.compile(self.pattern, self.flags)
                self.reString = self.pattern
            except sre_constants.error:
                warnings.warn("invalid pattern (%s) passed to Regex" % pattern,
                    SyntaxWarning, stacklevel=2)
                raise

        elif isinstance(pattern, Regex.compiledREtype):
            self.re = pattern
            self.pattern = \
                self.reString = str(pattern)
            self.flags = flags

        else:
            raise ValueError("Regex may only be constructed with a string or a compiled RE object")

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayIndexError = False
        self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        result = self.re.match(instring,loc)
        if not result:
            raise ParseException(instring, loc, self.errmsg, self)

        loc = result.end()
        d = result.groupdict()
        ret = ParseResults(result.group())
        if d:
            for k in d:
                ret[k] = d[k]
        return loc,ret

    def __str__( self ):
        try:
            return super(Regex,self).__str__()
        except Exception:
            pass

        if self.strRepr is None:
            self.strRepr = "Re:(%s)" % repr(self.pattern)

        return self.strRepr
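# Illustrative sketch (not part of the original module): named regex groups
# become named parse results, so fields can be read back by name.
def _example_regex_usage():
    date = Regex(r'(?P<year>\d{4})-(?P<month>\d\d?)-(?P<day>\d\d?)')
    result = date.parseString("2004-04-25")
    assert result['year'] == "2004" and result['day'] == "25"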
class QuotedString(Token):
    r"""
    Token for matching strings that are delimited by quoting characters.

    Defined with the following parameters:
        - quoteChar - string of one or more characters defining the quote delimiting string
        - escChar - character to escape quotes, typically backslash (default=C{None})
        - escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=C{None})
        - multiline - boolean indicating whether quotes can span multiple lines (default=C{False})
        - unquoteResults - boolean indicating whether the matched text should be unquoted (default=C{True})
        - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=C{None} => same as quoteChar)
        - convertWhitespaceEscapes - convert escaped whitespace (C{'\t'}, C{'\n'}, etc.) to actual whitespace (default=C{True})

    Example::
        qs = QuotedString('"')
        print(qs.searchString('lsjdf "This is the quote" sldjf'))
        complex_qs = QuotedString('{{', endQuoteChar='}}')
        print(complex_qs.searchString('lsjdf {{This is the "quote"}} sldjf'))
        sql_qs = QuotedString('"', escQuote='""')
        print(sql_qs.searchString('lsjdf "This is the quote with ""embedded"" quotes" sldjf'))
    prints::
        [['This is the quote']]
        [['This is the "quote"']]
        [['This is the quote with "embedded" quotes']]
    """
    def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None, convertWhitespaceEscapes=True):
        super(QuotedString,self).__init__()

        # remove white space from quote chars - won't work anyway
        quoteChar = quoteChar.strip()
        if not quoteChar:
            warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
            raise SyntaxError()

        if endQuoteChar is None:
            endQuoteChar = quoteChar
        else:
            endQuoteChar = endQuoteChar.strip()
            if not endQuoteChar:
                warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
                raise SyntaxError()

        self.quoteChar = quoteChar
        self.quoteCharLen = len(quoteChar)
        self.firstQuoteChar = quoteChar[0]
        self.endQuoteChar = endQuoteChar
        self.endQuoteCharLen = len(endQuoteChar)
        self.escChar = escChar
        self.escQuote = escQuote
        self.unquoteResults = unquoteResults
        self.convertWhitespaceEscapes = convertWhitespaceEscapes

        if multiline:
            self.flags = re.MULTILINE | re.DOTALL
            self.pattern = r'%s(?:[^%s%s]' % \
                ( re.escape(self.quoteChar),
                  _escapeRegexRangeChars(self.endQuoteChar[0]),
                  (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
        else:
            self.flags = 0
            self.pattern = r'%s(?:[^%s\n\r%s]' % \
                ( re.escape(self.quoteChar),
                  _escapeRegexRangeChars(self.endQuoteChar[0]),
                  (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
        if len(self.endQuoteChar) > 1:
            self.pattern += (
                '|(?:' + ')|(?:'.join("%s[^%s]" % (re.escape(self.endQuoteChar[:i]),
                                               _escapeRegexRangeChars(self.endQuoteChar[i]))
                                    for i in range(len(self.endQuoteChar)-1,0,-1)) + ')'
                )
        if escQuote:
            self.pattern += (r'|(?:%s)' % re.escape(escQuote))
        if escChar:
            self.pattern += (r'|(?:%s.)' % re.escape(escChar))
            self.escCharReplacePattern = re.escape(self.escChar)+"(.)"
        self.pattern += (r')*%s' % re.escape(self.endQuoteChar))

        try:
            self.re = re.compile(self.pattern, self.flags)
            self.reString = self.pattern
        except sre_constants.error:
            warnings.warn("invalid pattern (%s) passed to Regex" % self.pattern,
                SyntaxWarning, stacklevel=2)
            raise

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayIndexError = False
        self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None
        if not result:
            raise ParseException(instring, loc, self.errmsg, self)

        loc = result.end()
        ret = result.group()

        if self.unquoteResults:

            # strip off quotes
            ret = ret[self.quoteCharLen:-self.endQuoteCharLen]

            if isinstance(ret,basestring):
                # replace escaped whitespace
                if '\\' in ret and self.convertWhitespaceEscapes:
                    ws_map = {
                        r'\t' : '\t',
                        r'\n' : '\n',
                        r'\f' : '\f',
                        r'\r' : '\r',
                    }
                    for wslit,wschar in ws_map.items():
                        ret = ret.replace(wslit, wschar)

                # replace escaped characters
                if self.escChar:
                    ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret)

                # replace escaped quotes
                if self.escQuote:
                    ret = ret.replace(self.escQuote, self.endQuoteChar)

        return loc, ret

    def __str__( self ):
        try:
            return super(QuotedString,self).__str__()
        except Exception:
            pass

        if self.strRepr is None:
            self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar)

        return self.strRepr
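# Illustrative sketch (not part of the original module): by default the
# result is unquoted and escape sequences are resolved; pass
# unquoteResults=False to keep the delimiters.
def _example_quoted_string_usage():
    qs = QuotedString('"', escChar='\\')
    assert qs.parseString(r'"say \"hi\""')[0] == 'say "hi"'
    raw = QuotedString('"', unquoteResults=False)
    assert raw.parseString('"abc"')[0] == '"abc"'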
class CharsNotIn(Token):
    """
    Token for matching words composed of characters I{not} in a given set (will
    include whitespace in matched characters if not listed in the provided exclusion set - see example).
    Defined with string containing all disallowed characters, and an optional
    minimum, maximum, and/or exact length.  The default value for C{min} is 1 (a
    minimum value < 1 is not valid); the default values for C{max} and C{exact}
    are 0, meaning no maximum or exact length restriction.

    Example::
        # define a comma-separated-value as anything that is not a ','
        csv_value = CharsNotIn(',')
        print(delimitedList(csv_value).parseString("dkls,lsdkjf,s12 34,@!#,213"))
    prints::
        ['dkls', 'lsdkjf', 's12 34', '@!#', '213']
    """
    def __init__( self, notChars, min=1, max=0, exact=0 ):
        super(CharsNotIn,self).__init__()
        self.skipWhitespace = False
        self.notChars = notChars

        if min < 1:
            raise ValueError("cannot specify a minimum length < 1; use Optional(CharsNotIn()) if zero-length char group is permitted")

        self.minLen = min

        if max > 0:
            self.maxLen = max
        else:
            self.maxLen = _MAX_INT

        if exact > 0:
            self.maxLen = exact
            self.minLen = exact

        self.name = _ustr(self)
        self.errmsg = "Expected " + self.name
        self.mayReturnEmpty = ( self.minLen == 0 )
        self.mayIndexError = False

    def parseImpl( self, instring, loc, doActions=True ):
        if instring[loc] in self.notChars:
            raise ParseException(instring, loc, self.errmsg, self)

        start = loc
        loc += 1
        notchars = self.notChars
        maxlen = min( start+self.maxLen, len(instring) )
        while loc < maxlen and \
                (instring[loc] not in notchars):
            loc += 1

        if loc - start < self.minLen:
            raise ParseException(instring, loc, self.errmsg, self)

        return loc, instring[start:loc]

    def __str__( self ):
        try:
            return super(CharsNotIn, self).__str__()
        except Exception:
            pass

        if self.strRepr is None:
            if len(self.notChars) > 4:
                self.strRepr = "!W:(%s...)" % self.notChars[:4]
            else:
                self.strRepr = "!W:(%s)" % self.notChars

        return self.strRepr
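# Illustrative sketch (not part of the original module): unlike Word,
# CharsNotIn keeps matching through whitespace unless the whitespace
# characters are listed in the exclusion set.
def _example_chars_not_in_usage():
    csv_value = CharsNotIn(',')
    assert csv_value.parseString("s12 34,rest")[0] == "s12 34"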
""" whiteStrs = { " " : "<SPC>", "\t": "<TAB>", "\n": "<LF>", "\r": "<CR>", "\f": "<FF>", } def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0): super(White,self).__init__() self.matchWhite = ws self.setWhitespaceChars( "".join(c for c in self.whiteChars if c not in self.matchWhite) ) #~ self.leaveWhitespace() self.name = ("".join(White.whiteStrs[c] for c in self.matchWhite)) self.mayReturnEmpty = True self.errmsg = "Expected " + self.name self.minLen = min if max > 0: self.maxLen = max else: self.maxLen = _MAX_INT if exact > 0: self.maxLen = exact self.minLen = exact def parseImpl( self, instring, loc, doActions=True ): if not(instring[ loc ] in self.matchWhite): raise ParseException(instring, loc, self.errmsg, self) start = loc loc += 1 maxloc = start + self.maxLen maxloc = min( maxloc, len(instring) ) while loc < maxloc and instring[loc] in self.matchWhite: loc += 1 if loc - start < self.minLen: raise ParseException(instring, loc, self.errmsg, self) return loc, instring[start:loc] class _PositionToken(Token): def __init__( self ): super(_PositionToken,self).__init__() self.name=self.__class__.__name__ self.mayReturnEmpty = True self.mayIndexError = False class GoToColumn(_PositionToken): """ Token to advance to a specific column of input text; useful for tabular report scraping. """ def __init__( self, colno ): super(GoToColumn,self).__init__() self.col = colno def preParse( self, instring, loc ): if col(loc,instring) != self.col: instrlen = len(instring) if self.ignoreExprs: loc = self._skipIgnorables( instring, loc ) while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col : loc += 1 return loc def parseImpl( self, instring, loc, doActions=True ): thiscol = col( loc, instring ) if thiscol > self.col: raise ParseException( instring, loc, "Text not in expected column", self ) newloc = loc + self.col - thiscol ret = instring[ loc: newloc ] return newloc, ret class LineStart(_PositionToken): """ Matches if current position is at the beginning of a line within the parse string Example:: test = '''\ AAA this line AAA and this line AAA but not this one B AAA and definitely not this one ''' for t in (LineStart() + 'AAA' + restOfLine).searchString(test): print(t) Prints:: ['AAA', ' this line'] ['AAA', ' and this line'] """ def __init__( self ): super(LineStart,self).__init__() self.errmsg = "Expected start of line" def parseImpl( self, instring, loc, doActions=True ): if col(loc, instring) == 1: return loc, [] raise ParseException(instring, loc, self.errmsg, self) class LineEnd(_PositionToken): """ Matches if current position is at the end of a line within the parse string """ def __init__( self ): super(LineEnd,self).__init__() self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") ) self.errmsg = "Expected end of line" def parseImpl( self, instring, loc, doActions=True ): if loc<len(instring): if instring[loc] == "\n": return loc+1, "\n" else: raise ParseException(instring, loc, self.errmsg, self) elif loc == len(instring): return loc+1, [] else: raise ParseException(instring, loc, self.errmsg, self) class StringStart(_PositionToken): """ Matches if current position is at the beginning of the parse string """ def __init__( self ): super(StringStart,self).__init__() self.errmsg = "Expected start of text" def parseImpl( self, instring, loc, doActions=True ): if loc != 0: # see if entire string up to here is just whitespace and ignoreables if loc != self.preParse( instring, 0 ): raise ParseException(instring, loc, self.errmsg, self) return 
class StringEnd(_PositionToken):
    """
    Matches if current position is at the end of the parse string
    """
    def __init__( self ):
        super(StringEnd,self).__init__()
        self.errmsg = "Expected end of text"

    def parseImpl( self, instring, loc, doActions=True ):
        if loc < len(instring):
            raise ParseException(instring, loc, self.errmsg, self)
        elif loc == len(instring):
            return loc+1, []
        elif loc > len(instring):
            return loc, []
        else:
            raise ParseException(instring, loc, self.errmsg, self)

class WordStart(_PositionToken):
    """
    Matches if the current position is at the beginning of a Word, and
    is not preceded by any character in a given set of C{wordChars}
    (default=C{printables}). To emulate the C{\b} behavior of regular expressions,
    use C{WordStart(alphanums)}. C{WordStart} will also match at the beginning of
    the string being parsed, or at the beginning of a line.
    """
    def __init__(self, wordChars = printables):
        super(WordStart,self).__init__()
        self.wordChars = set(wordChars)
        self.errmsg = "Not at the start of a word"

    def parseImpl(self, instring, loc, doActions=True ):
        if loc != 0:
            if (instring[loc-1] in self.wordChars or
                instring[loc] not in self.wordChars):
                raise ParseException(instring, loc, self.errmsg, self)
        return loc, []

class WordEnd(_PositionToken):
    """
    Matches if the current position is at the end of a Word, and
    is not followed by any character in a given set of C{wordChars}
    (default=C{printables}). To emulate the C{\b} behavior of regular expressions,
    use C{WordEnd(alphanums)}. C{WordEnd} will also match at the end of
    the string being parsed, or at the end of a line.
    """
    def __init__(self, wordChars = printables):
        super(WordEnd,self).__init__()
        self.wordChars = set(wordChars)
        self.skipWhitespace = False
        self.errmsg = "Not at the end of a word"

    def parseImpl(self, instring, loc, doActions=True ):
        instrlen = len(instring)
        if instrlen>0 and loc<instrlen:
            if (instring[loc] in self.wordChars or
                instring[loc-1] not in self.wordChars):
                raise ParseException(instring, loc, self.errmsg, self)
        return loc, []
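# Illustrative sketch (not part of the original module): positional tokens
# anchor matches without consuming meaningful text, e.g. to require that a
# grammar consume a whole line or the whole input.
def _example_position_tokens():
    eol_int = Word(nums) + LineEnd().suppress()
    assert eol_int.parseString("42\nrest", parseAll=False)[0] == "42"
    assert not (Word(nums) + StringEnd()).matches("42 and more")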
""" def __init__( self, exprs, savelist = False ): super(ParseExpression,self).__init__(savelist) if isinstance( exprs, _generatorType ): exprs = list(exprs) if isinstance( exprs, basestring ): self.exprs = [ ParserElement._literalStringClass( exprs ) ] elif isinstance( exprs, Iterable ): exprs = list(exprs) # if sequence of strings provided, wrap with Literal if all(isinstance(expr, basestring) for expr in exprs): exprs = map(ParserElement._literalStringClass, exprs) self.exprs = list(exprs) else: try: self.exprs = list( exprs ) except TypeError: self.exprs = [ exprs ] self.callPreparse = False def __getitem__( self, i ): return self.exprs[i] def append( self, other ): self.exprs.append( other ) self.strRepr = None return self def leaveWhitespace( self ): """Extends C{leaveWhitespace} defined in base class, and also invokes C{leaveWhitespace} on all contained expressions.""" self.skipWhitespace = False self.exprs = [ e.copy() for e in self.exprs ] for e in self.exprs: e.leaveWhitespace() return self def ignore( self, other ): if isinstance( other, Suppress ): if other not in self.ignoreExprs: super( ParseExpression, self).ignore( other ) for e in self.exprs: e.ignore( self.ignoreExprs[-1] ) else: super( ParseExpression, self).ignore( other ) for e in self.exprs: e.ignore( self.ignoreExprs[-1] ) return self def __str__( self ): try: return super(ParseExpression,self).__str__() except Exception: pass if self.strRepr is None: self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.exprs) ) return self.strRepr def streamline( self ): super(ParseExpression,self).streamline() for e in self.exprs: e.streamline() # collapse nested And's of the form And( And( And( a,b), c), d) to And( a,b,c,d ) # but only if there are no parse actions or resultsNames on the nested And's # (likewise for Or's and MatchFirst's) if ( len(self.exprs) == 2 ): other = self.exprs[0] if ( isinstance( other, self.__class__ ) and not(other.parseAction) and other.resultsName is None and not other.debug ): self.exprs = other.exprs[:] + [ self.exprs[1] ] self.strRepr = None self.mayReturnEmpty |= other.mayReturnEmpty self.mayIndexError |= other.mayIndexError other = self.exprs[-1] if ( isinstance( other, self.__class__ ) and not(other.parseAction) and other.resultsName is None and not other.debug ): self.exprs = self.exprs[:-1] + other.exprs[:] self.strRepr = None self.mayReturnEmpty |= other.mayReturnEmpty self.mayIndexError |= other.mayIndexError self.errmsg = "Expected " + _ustr(self) return self def setResultsName( self, name, listAllMatches=False ): ret = super(ParseExpression,self).setResultsName(name,listAllMatches) return ret def validate( self, validateTrace=[] ): tmp = validateTrace[:]+[self] for e in self.exprs: e.validate(tmp) self.checkRecursion( [] ) def copy(self): ret = super(ParseExpression,self).copy() ret.exprs = [e.copy() for e in self.exprs] return ret class And(ParseExpression): """ Requires all given C{ParseExpression}s to be found in the given order. Expressions may be separated by whitespace. May be constructed using the C{'+'} operator. May also be constructed using the C{'-'} operator, which will suppress backtracking. 
class And(ParseExpression):
    """
    Requires all given C{ParseExpression}s to be found in the given order.
    Expressions may be separated by whitespace.
    May be constructed using the C{'+'} operator.
    May also be constructed using the C{'-'} operator, which will suppress backtracking.

    Example::
        integer = Word(nums)
        name_expr = OneOrMore(Word(alphas))

        expr = And([integer("id"),name_expr("name"),integer("age")])
        # more easily written as:
        expr = integer("id") + name_expr("name") + integer("age")
    """

    class _ErrorStop(Empty):
        def __init__(self, *args, **kwargs):
            super(And._ErrorStop,self).__init__(*args, **kwargs)
            self.name = '-'
            self.leaveWhitespace()

    def __init__( self, exprs, savelist = True ):
        super(And,self).__init__(exprs, savelist)
        self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
        self.setWhitespaceChars( self.exprs[0].whiteChars )
        self.skipWhitespace = self.exprs[0].skipWhitespace
        self.callPreparse = True

    def parseImpl( self, instring, loc, doActions=True ):
        # pass False as last arg to _parse for first element, since we already
        # pre-parsed the string as part of our And pre-parsing
        loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False )
        errorStop = False
        for e in self.exprs[1:]:
            if isinstance(e, And._ErrorStop):
                errorStop = True
                continue
            if errorStop:
                try:
                    loc, exprtokens = e._parse( instring, loc, doActions )
                except ParseSyntaxException:
                    raise
                except ParseBaseException as pe:
                    pe.__traceback__ = None
                    raise ParseSyntaxException._from_exception(pe)
                except IndexError:
                    raise ParseSyntaxException(instring, len(instring), self.errmsg, self)
            else:
                loc, exprtokens = e._parse( instring, loc, doActions )
            if exprtokens or exprtokens.haskeys():
                resultlist += exprtokens
        return loc, resultlist

    def __iadd__(self, other ):
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        return self.append( other ) #And( [ self, other ] )

    def checkRecursion( self, parseElementList ):
        subRecCheckList = parseElementList[:] + [ self ]
        for e in self.exprs:
            e.checkRecursion( subRecCheckList )
            if not e.mayReturnEmpty:
                break

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "{" + " ".join(_ustr(e) for e in self.exprs) + "}"

        return self.strRepr
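# Illustrative sketch (not part of the original module): '+' builds an And;
# results names attached with __call__ survive into the combined result.
def _example_and_usage():
    integer = Word(nums)
    expr = integer("id") + Word(alphas)("name") + integer("age")
    result = expr.parseString("42 bob 29")
    assert result['id'] == "42" and result['name'] == "bob" and result['age'] == "29"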
class Or(ParseExpression):
    """
    Requires that at least one C{ParseExpression} is found.
    If two expressions match, the expression that matches the longest string will be used.
    May be constructed using the C{'^'} operator.

    Example::
        # construct Or using '^' operator

        number = Word(nums) ^ Combine(Word(nums) + '.' + Word(nums))
        print(number.searchString("123 3.1416 789"))
    prints::
        [['123'], ['3.1416'], ['789']]
    """
    def __init__( self, exprs, savelist = False ):
        super(Or,self).__init__(exprs, savelist)
        if self.exprs:
            self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
        else:
            self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        maxExcLoc = -1
        maxException = None
        matches = []
        for e in self.exprs:
            try:
                loc2 = e.tryParse( instring, loc )
            except ParseException as err:
                err.__traceback__ = None
                if err.loc > maxExcLoc:
                    maxException = err
                    maxExcLoc = err.loc
            except IndexError:
                if len(instring) > maxExcLoc:
                    maxException = ParseException(instring,len(instring),e.errmsg,self)
                    maxExcLoc = len(instring)
            else:
                # save match among all matches, to retry longest to shortest
                matches.append((loc2, e))

        if matches:
            matches.sort(key=lambda x: -x[0])
            for _,e in matches:
                try:
                    return e._parse( instring, loc, doActions )
                except ParseException as err:
                    err.__traceback__ = None
                    if err.loc > maxExcLoc:
                        maxException = err
                        maxExcLoc = err.loc

        if maxException is not None:
            maxException.msg = self.errmsg
            raise maxException
        else:
            raise ParseException(instring, loc, "no defined alternatives to match", self)

    def __ixor__(self, other ):
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        return self.append( other ) #Or( [ self, other ] )

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "{" + " ^ ".join(_ustr(e) for e in self.exprs) + "}"

        return self.strRepr

    def checkRecursion( self, parseElementList ):
        subRecCheckList = parseElementList[:] + [ self ]
        for e in self.exprs:
            e.checkRecursion( subRecCheckList )
class MatchFirst(ParseExpression):
    """
    Requires that at least one C{ParseExpression} is found.
    If two expressions match, the first one listed is the one that will match.
    May be constructed using the C{'|'} operator.

    Example::
        # construct MatchFirst using '|' operator

        # watch the order of expressions to match
        number = Word(nums) | Combine(Word(nums) + '.' + Word(nums))
        print(number.searchString("123 3.1416 789")) #  Fail! -> [['123'], ['3'], ['1416'], ['789']]

        # put more selective expression first
        number = Combine(Word(nums) + '.' + Word(nums)) | Word(nums)
        print(number.searchString("123 3.1416 789")) #  Better -> [['123'], ['3.1416'], ['789']]
    """
    def __init__( self, exprs, savelist = False ):
        super(MatchFirst,self).__init__(exprs, savelist)
        if self.exprs:
            self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
        else:
            self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        maxExcLoc = -1
        maxException = None
        for e in self.exprs:
            try:
                ret = e._parse( instring, loc, doActions )
                return ret
            except ParseException as err:
                if err.loc > maxExcLoc:
                    maxException = err
                    maxExcLoc = err.loc
            except IndexError:
                if len(instring) > maxExcLoc:
                    maxException = ParseException(instring,len(instring),e.errmsg,self)
                    maxExcLoc = len(instring)

        # only got here if no expression matched, raise exception for match that made it the furthest
        else:
            if maxException is not None:
                maxException.msg = self.errmsg
                raise maxException
            else:
                raise ParseException(instring, loc, "no defined alternatives to match", self)

    def __ior__(self, other ):
        if isinstance( other, basestring ):
            other = ParserElement._literalStringClass( other )
        return self.append( other ) #MatchFirst( [ self, other ] )

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "{" + " | ".join(_ustr(e) for e in self.exprs) + "}"

        return self.strRepr

    def checkRecursion( self, parseElementList ):
        subRecCheckList = parseElementList[:] + [ self ]
        for e in self.exprs:
            e.checkRecursion( subRecCheckList )
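# Illustrative sketch (not part of the original module): Or ('^') tries all
# alternatives and keeps the longest match; MatchFirst ('|') stops at the
# first alternative that succeeds, so listing order matters.
def _example_or_vs_matchfirst():
    short_first = Literal("for") | Literal("forward")
    longest = Literal("for") ^ Literal("forward")
    assert short_first.parseString("forward", parseAll=False)[0] == "for"
    assert longest.parseString("forward")[0] == "forward"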
class Each(ParseExpression):
    """
    Requires all given C{ParseExpression}s to be found, but in any order.
    Expressions may be separated by whitespace.
    May be constructed using the C{'&'} operator.

    Example::
        color = oneOf("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN")
        shape_type = oneOf("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON")
        integer = Word(nums)
        shape_attr = "shape:" + shape_type("shape")
        posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn")
        color_attr = "color:" + color("color")
        size_attr = "size:" + integer("size")

        # use Each (using operator '&') to accept attributes in any order
        # (shape and posn are required, color and size are optional)
        shape_spec = shape_attr & posn_attr & Optional(color_attr) & Optional(size_attr)

        shape_spec.runTests('''
            shape: SQUARE color: BLACK posn: 100, 120
            shape: CIRCLE size: 50 color: BLUE posn: 50,80
            color:GREEN size:20 shape:TRIANGLE posn:20,40
            '''
            )
    prints::
        shape: SQUARE color: BLACK posn: 100, 120
        ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']]
        - color: BLACK
        - posn: ['100', ',', '120']
          - x: 100
          - y: 120
        - shape: SQUARE

        shape: CIRCLE size: 50 color: BLUE posn: 50,80
        ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']]
        - color: BLUE
        - posn: ['50', ',', '80']
          - x: 50
          - y: 80
        - shape: CIRCLE
        - size: 50

        color: GREEN size: 20 shape: TRIANGLE posn: 20,40
        ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']]
        - color: GREEN
        - posn: ['20', ',', '40']
          - x: 20
          - y: 40
        - shape: TRIANGLE
        - size: 20
    """
    def __init__( self, exprs, savelist = True ):
        super(Each,self).__init__(exprs, savelist)
        self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
        self.skipWhitespace = True
        self.initExprGroups = True

    def parseImpl( self, instring, loc, doActions=True ):
        if self.initExprGroups:
            self.opt1map = dict((id(e.expr),e) for e in self.exprs if isinstance(e,Optional))
            opt1 = [ e.expr for e in self.exprs if isinstance(e,Optional) ]
            opt2 = [ e for e in self.exprs if e.mayReturnEmpty and not isinstance(e,Optional)]
            self.optionals = opt1 + opt2
            self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ]
            self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ]
            self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ]
            self.required += self.multirequired
            self.initExprGroups = False
        tmpLoc = loc
        tmpReqd = self.required[:]
        tmpOpt  = self.optionals[:]
        matchOrder = []

        keepMatching = True
        while keepMatching:
            tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired
            failed = []
            for e in tmpExprs:
                try:
                    tmpLoc = e.tryParse( instring, tmpLoc )
                except ParseException:
                    failed.append(e)
                else:
                    matchOrder.append(self.opt1map.get(id(e),e))
                    if e in tmpReqd:
                        tmpReqd.remove(e)
                    elif e in tmpOpt:
                        tmpOpt.remove(e)
            if len(failed) == len(tmpExprs):
                keepMatching = False

        if tmpReqd:
            missing = ", ".join(_ustr(e) for e in tmpReqd)
            raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing )

        # add any unmatched Optionals, in case they have default values defined
        matchOrder += [e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt]

        resultlist = []
        for e in matchOrder:
            loc,results = e._parse(instring,loc,doActions)
            resultlist.append(results)

        finalResults = sum(resultlist, ParseResults([]))
        return loc, finalResults

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "{" + " & ".join(_ustr(e) for e in self.exprs) + "}"

        return self.strRepr

    def checkRecursion( self, parseElementList ):
        subRecCheckList = parseElementList[:] + [ self ]
        for e in self.exprs:
            e.checkRecursion( subRecCheckList )
""" def __init__( self, expr, savelist=False ): super(ParseElementEnhance,self).__init__(savelist) if isinstance( expr, basestring ): if issubclass(ParserElement._literalStringClass, Token): expr = ParserElement._literalStringClass(expr) else: expr = ParserElement._literalStringClass(Literal(expr)) self.expr = expr self.strRepr = None if expr is not None: self.mayIndexError = expr.mayIndexError self.mayReturnEmpty = expr.mayReturnEmpty self.setWhitespaceChars( expr.whiteChars ) self.skipWhitespace = expr.skipWhitespace self.saveAsList = expr.saveAsList self.callPreparse = expr.callPreparse self.ignoreExprs.extend(expr.ignoreExprs) def parseImpl( self, instring, loc, doActions=True ): if self.expr is not None: return self.expr._parse( instring, loc, doActions, callPreParse=False ) else: raise ParseException("",loc,self.errmsg,self) def leaveWhitespace( self ): self.skipWhitespace = False self.expr = self.expr.copy() if self.expr is not None: self.expr.leaveWhitespace() return self def ignore( self, other ): if isinstance( other, Suppress ): if other not in self.ignoreExprs: super( ParseElementEnhance, self).ignore( other ) if self.expr is not None: self.expr.ignore( self.ignoreExprs[-1] ) else: super( ParseElementEnhance, self).ignore( other ) if self.expr is not None: self.expr.ignore( self.ignoreExprs[-1] ) return self def streamline( self ): super(ParseElementEnhance,self).streamline() if self.expr is not None: self.expr.streamline() return self def checkRecursion( self, parseElementList ): if self in parseElementList: raise RecursiveGrammarException( parseElementList+[self] ) subRecCheckList = parseElementList[:] + [ self ] if self.expr is not None: self.expr.checkRecursion( subRecCheckList ) def validate( self, validateTrace=[] ): tmp = validateTrace[:]+[self] if self.expr is not None: self.expr.validate(tmp) self.checkRecursion( [] ) def __str__( self ): try: return super(ParseElementEnhance,self).__str__() except Exception: pass if self.strRepr is None and self.expr is not None: self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) ) return self.strRepr class FollowedBy(ParseElementEnhance): """ Lookahead matching of the given parse expression. C{FollowedBy} does I{not} advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. C{FollowedBy} always returns a null token list. Example:: # use FollowedBy to match a label only if it is followed by a ':' data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString("shape: SQUARE color: BLACK posn: upper left").pprint() prints:: [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']] """ def __init__( self, expr ): super(FollowedBy,self).__init__(expr) self.mayReturnEmpty = True def parseImpl( self, instring, loc, doActions=True ): self.expr.tryParse( instring, loc ) return loc, [] class NotAny(ParseElementEnhance): """ Lookahead to disallow matching with the given parse expression. C{NotAny} does I{not} advance the parsing position within the input string, it only verifies that the specified parse expression does I{not} match at the current position. Also, C{NotAny} does I{not} skip over leading whitespace. C{NotAny} always returns a null token list. May be constructed using the '~' operator. 
class NotAny(ParseElementEnhance):
    """
    Lookahead to disallow matching with the given parse expression.  C{NotAny}
    does I{not} advance the parsing position within the input string, it only
    verifies that the specified parse expression does I{not} match at the current
    position.  Also, C{NotAny} does I{not} skip over leading whitespace. C{NotAny}
    always returns a null token list.  May be constructed using the '~' operator.

    Example::

    """
    def __init__( self, expr ):
        super(NotAny,self).__init__(expr)
        #~ self.leaveWhitespace()
        self.skipWhitespace = False  # do NOT use self.leaveWhitespace(), don't want to propagate to exprs
        self.mayReturnEmpty = True
        self.errmsg = "Found unwanted token, "+_ustr(self.expr)

    def parseImpl( self, instring, loc, doActions=True ):
        if self.expr.canParseNext(instring, loc):
            raise ParseException(instring, loc, self.errmsg, self)
        return loc, []

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "~{" + _ustr(self.expr) + "}"

        return self.strRepr

class _MultipleMatch(ParseElementEnhance):
    def __init__( self, expr, stopOn=None):
        super(_MultipleMatch, self).__init__(expr)
        self.saveAsList = True
        ender = stopOn
        if isinstance(ender, basestring):
            ender = ParserElement._literalStringClass(ender)
        self.not_ender = ~ender if ender is not None else None

    def parseImpl( self, instring, loc, doActions=True ):
        self_expr_parse = self.expr._parse
        self_skip_ignorables = self._skipIgnorables
        check_ender = self.not_ender is not None
        if check_ender:
            try_not_ender = self.not_ender.tryParse

        # must be at least one (but first see if we are the stopOn sentinel;
        # if so, fail)
        if check_ender:
            try_not_ender(instring, loc)
        loc, tokens = self_expr_parse( instring, loc, doActions, callPreParse=False )
        try:
            hasIgnoreExprs = (not not self.ignoreExprs)
            while 1:
                if check_ender:
                    try_not_ender(instring, loc)
                if hasIgnoreExprs:
                    preloc = self_skip_ignorables( instring, loc )
                else:
                    preloc = loc
                loc, tmptokens = self_expr_parse( instring, preloc, doActions )
                if tmptokens or tmptokens.haskeys():
                    tokens += tmptokens
        except (ParseException,IndexError):
            pass

        return loc, tokens

class OneOrMore(_MultipleMatch):
    """
    Repetition of one or more of the given expression.

    Parameters:
     - expr - expression that must match one or more times
     - stopOn - (default=C{None}) - expression for a terminating sentinel
          (only required if the sentinel would ordinarily match the repetition
          expression)

    Example::
        data_word = Word(alphas)
        label = data_word + FollowedBy(':')
        attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join))

        text = "shape: SQUARE posn: upper left color: BLACK"
        OneOrMore(attr_expr).parseString(text).pprint()  # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']]

        # use stopOn attribute for OneOrMore to avoid reading label string as part of the data
        attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join))
        OneOrMore(attr_expr).parseString(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']]

        # could also be written as
        (attr_expr * (1,)).parseString(text).pprint()
    """

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "{" + _ustr(self.expr) + "}..."

        return self.strRepr
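# Illustrative sketch (not part of the original module), covering the blank
# Example section in NotAny's docstring above: '~' rejects a match without
# consuming any input.
def _example_notany_usage():
    # match an integer only if it is not followed by a '%' sign
    plain_int = Word(nums) + ~Literal('%')
    assert plain_int.matches("42", parseAll=False)
    assert not plain_int.matches("42%", parseAll=False)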
class ZeroOrMore(_MultipleMatch):
    """
    Optional repetition of zero or more of the given expression.

    Parameters:
     - expr - expression that must match zero or more times
     - stopOn - (default=C{None}) - expression for a terminating sentinel
          (only required if the sentinel would ordinarily match the repetition
          expression)

    Example: similar to L{OneOrMore}
    """
    def __init__( self, expr, stopOn=None):
        super(ZeroOrMore,self).__init__(expr, stopOn=stopOn)
        self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        try:
            return super(ZeroOrMore, self).parseImpl(instring, loc, doActions)
        except (ParseException,IndexError):
            return loc, []

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "[" + _ustr(self.expr) + "]..."

        return self.strRepr

class _NullToken(object):
    def __bool__(self):
        return False
    __nonzero__ = __bool__
    def __str__(self):
        return ""

_optionalNotMatched = _NullToken()

class Optional(ParseElementEnhance):
    """
    Optional matching of the given expression.

    Parameters:
     - expr - expression that must match zero or more times
     - default (optional) - value to be returned if the optional expression is not found.

    Example::
        # US postal code can be a 5-digit zip, plus optional 4-digit qualifier
        zip = Combine(Word(nums, exact=5) + Optional('-' + Word(nums, exact=4)))
        zip.runTests('''
            # traditional ZIP code
            12345

            # ZIP+4 form
            12101-0001

            # invalid ZIP
            98765-
            ''')
    prints::
        # traditional ZIP code
        12345
        ['12345']

        # ZIP+4 form
        12101-0001
        ['12101-0001']

        # invalid ZIP
        98765-
             ^
        FAIL: Expected end of text (at char 5), (line:1, col:6)
    """
    def __init__( self, expr, default=_optionalNotMatched ):
        super(Optional,self).__init__( expr, savelist=False )
        self.saveAsList = self.expr.saveAsList
        self.defaultValue = default
        self.mayReturnEmpty = True

    def parseImpl( self, instring, loc, doActions=True ):
        try:
            loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
        except (ParseException,IndexError):
            if self.defaultValue is not _optionalNotMatched:
                if self.expr.resultsName:
                    tokens = ParseResults([ self.defaultValue ])
                    tokens[self.expr.resultsName] = self.defaultValue
                else:
                    tokens = [ self.defaultValue ]
            else:
                tokens = []
        return loc, tokens

    def __str__( self ):
        if hasattr(self,"name"):
            return self.name

        if self.strRepr is None:
            self.strRepr = "[" + _ustr(self.expr) + "]"

        return self.strRepr
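# Illustrative sketch (not part of the original module): a default value is
# substituted into the results when the optional expression is absent.
def _example_optional_default():
    port = Optional(Suppress(':') + Word(nums), default="80")
    assert port.parseString(":8080")[0] == "8080"
    assert port.parseString("")[0] == "80"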
class SkipTo(ParseElementEnhance):
    """
    Token for skipping over all undefined text until the matched expression is found.

    Parameters:
     - expr - target expression marking the end of the data to be skipped
     - include - (default=C{False}) if True, the target expression is also parsed
          (the skipped text and target expression are returned as a 2-element list).
     - ignore - (default=C{None}) used to define grammars (typically quoted strings and
          comments) that might contain false matches to the target expression
     - failOn - (default=C{None}) define expressions that are not allowed to be
          included in the skipped text; if found before the target expression is found,
          the SkipTo is not a match

    Example::
        report = '''
            Outstanding Issues Report - 1 Jan 2000

               # | Severity | Description                               |  Days Open
            -----+----------+-------------------------------------------+-----------
             101 | Critical | Intermittent system crash                 |          6
              94 | Cosmetic | Spelling error on Login ('log|n')         |         14
              79 | Minor    | System slow when running too many reports |         47
            '''
        integer = Word(nums)
        SEP = Suppress('|')
        # use SkipTo to simply match everything up until the next SEP
        # - ignore quoted strings, so that a '|' character inside a quoted string does not match
        # - parse action will call token.strip() for each matched token, i.e., the description body
        string_data = SkipTo(SEP, ignore=quotedString)
        string_data.setParseAction(tokenMap(str.strip))
        ticket_expr = (integer("issue_num") + SEP
                      + string_data("sev") + SEP
                      + string_data("desc") + SEP
                      + integer("days_open"))

        for tkt in ticket_expr.searchString(report):
            print(tkt.dump())
    prints::
        ['101', 'Critical', 'Intermittent system crash', '6']
        - days_open: 6
        - desc: Intermittent system crash
        - issue_num: 101
        - sev: Critical
        ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14']
        - days_open: 14
        - desc: Spelling error on Login ('log|n')
        - issue_num: 94
        - sev: Cosmetic
        ['79', 'Minor', 'System slow when running too many reports', '47']
        - days_open: 47
        - desc: System slow when running too many reports
        - issue_num: 79
        - sev: Minor
    """
    def __init__( self, other, include=False, ignore=None, failOn=None ):
        super( SkipTo, self ).__init__( other )
        self.ignoreExpr = ignore
        self.mayReturnEmpty = True
        self.mayIndexError = False
        self.includeMatch = include
        self.asList = False
        if isinstance(failOn, basestring):
            self.failOn = ParserElement._literalStringClass(failOn)
        else:
            self.failOn = failOn
        self.errmsg = "No match found for "+_ustr(self.expr)

    def parseImpl( self, instring, loc, doActions=True ):
        startloc = loc
        instrlen = len(instring)
        expr = self.expr
        expr_parse = self.expr._parse
        self_failOn_canParseNext = self.failOn.canParseNext if self.failOn is not None else None
        self_ignoreExpr_tryParse = self.ignoreExpr.tryParse if self.ignoreExpr is not None else None

        tmploc = loc
        while tmploc <= instrlen:
            if self_failOn_canParseNext is not None:
                # break if failOn expression matches
                if self_failOn_canParseNext(instring, tmploc):
                    break

            if self_ignoreExpr_tryParse is not None:
                # advance past ignore expressions
                while 1:
                    try:
                        tmploc = self_ignoreExpr_tryParse(instring, tmploc)
                    except ParseBaseException:
                        break

            try:
                expr_parse(instring, tmploc, doActions=False, callPreParse=False)
            except (ParseException, IndexError):
                # no match, advance loc in string
                tmploc += 1
            else:
                # matched skipto expr, done
                break

        else:
            # ran off the end of the input string without matching skipto expr, fail
            raise ParseException(instring, loc, self.errmsg, self)

        # build up return values
        loc = tmploc
        skiptext = instring[startloc:loc]
        skipresult = ParseResults(skiptext)

        if self.includeMatch:
            loc, mat = expr_parse(instring,loc,doActions,callPreParse=False)
            skipresult += mat

        return loc, skipresult
When the expression is known, it is assigned to the C{Forward} variable using the '<<' operator. Note: take care when assigning to C{Forward} not to overlook precedence of operators. Specifically, '|' has a lower precedence than '<<', so that:: fwdExpr << a | b | c will actually be evaluated as:: (fwdExpr << a) | b | c thereby leaving b and c out as parseable alternatives. It is recommended that you explicitly group the values inserted into the C{Forward}:: fwdExpr << (a | b | c) Converting to use the '<<=' operator instead will avoid this problem. See L{ParseResults.pprint} for an example of a recursive parser created using C{Forward}. """ def __init__( self, other=None ): super(Forward,self).__init__( other, savelist=False ) def __lshift__( self, other ): if isinstance( other, basestring ): other = ParserElement._literalStringClass(other) self.expr = other self.strRepr = None self.mayIndexError = self.expr.mayIndexError self.mayReturnEmpty = self.expr.mayReturnEmpty self.setWhitespaceChars( self.expr.whiteChars ) self.skipWhitespace = self.expr.skipWhitespace self.saveAsList = self.expr.saveAsList self.ignoreExprs.extend(self.expr.ignoreExprs) return self def __ilshift__(self, other): return self << other def leaveWhitespace( self ): self.skipWhitespace = False return self def streamline( self ): if not self.streamlined: self.streamlined = True if self.expr is not None: self.expr.streamline() return self def validate( self, validateTrace=[] ): if self not in validateTrace: tmp = validateTrace[:]+[self] if self.expr is not None: self.expr.validate(tmp) self.checkRecursion([]) def __str__( self ): if hasattr(self,"name"): return self.name return self.__class__.__name__ + ": ..." # stubbed out for now - creates awful memory and perf issues self._revertClass = self.__class__ self.__class__ = _ForwardNoRecurse try: if self.expr is not None: retString = _ustr(self.expr) else: retString = "None" finally: self.__class__ = self._revertClass return self.__class__.__name__ + ": " + retString def copy(self): if self.expr is not None: return super(Forward,self).copy() else: ret = Forward() ret <<= self return ret class _ForwardNoRecurse(Forward): def __str__( self ): return "..." class TokenConverter(ParseElementEnhance): """ Abstract subclass of C{ParseElementEnhance}, for converting parsed results. """ def __init__( self, expr, savelist=False ): super(TokenConverter,self).__init__( expr )#, savelist ) self.saveAsList = False class Combine(TokenConverter): """ Converter to concatenate all matching tokens to a single string. By default, the matching patterns must also be contiguous in the input string; this can be disabled by specifying C{'adjacent=False'} in the constructor. Example:: real = Word(nums) + '.' + Word(nums) print(real.parseString('3.1416')) # -> ['3', '.', '1416'] # will also erroneously match the following print(real.parseString('3. 1416')) # -> ['3', '.', '1416'] real = Combine(Word(nums) + '.' + Word(nums)) print(real.parseString('3.1416')) # -> ['3.1416'] # no match when there are internal spaces print(real.parseString('3. 1416')) # -> Exception: Expected W:(0123...) 
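An extra illustration, added here as a hedged sketch (not from the original docs; it assumes the usual C{from pyparsing import *} namespace): with C{adjacent=False} the contained expressions may be separated by whitespace, and C{joinString} supplies the separator used when the matched tokens are concatenated::
    full_name = Combine(Word(alphas) + Word(alphas), adjacent=False, joinString=' ')
    print(full_name.parseString('John Smith'))  # -> ['John Smith']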
""" def __init__( self, expr, joinString="", adjacent=True ): super(Combine,self).__init__( expr ) # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself if adjacent: self.leaveWhitespace() self.adjacent = adjacent self.skipWhitespace = True self.joinString = joinString self.callPreparse = True def ignore( self, other ): if self.adjacent: ParserElement.ignore(self, other) else: super( Combine, self).ignore( other ) return self def postParse( self, instring, loc, tokenlist ): retToks = tokenlist.copy() del retToks[:] retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults) if self.resultsName and retToks.haskeys(): return [ retToks ] else: return retToks class Group(TokenConverter): """ Converter to return the matched tokens as a list - useful for returning tokens of C{L{ZeroOrMore}} and C{L{OneOrMore}} expressions. Example:: ident = Word(alphas) num = Word(nums) term = ident | num func = ident + Optional(delimitedList(term)) print(func.parseString("fn a,b,100")) # -> ['fn', 'a', 'b', '100'] func = ident + Group(Optional(delimitedList(term))) print(func.parseString("fn a,b,100")) # -> ['fn', ['a', 'b', '100']] """ def __init__( self, expr ): super(Group,self).__init__( expr ) self.saveAsList = True def postParse( self, instring, loc, tokenlist ): return [ tokenlist ] class Dict(TokenConverter): """ Converter to return a repetitive expression as a list, but also as a dictionary. Each element can also be referenced using the first token in the expression as its key. Useful for tabular report scraping when the first column can be used as a item key. Example:: data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) text = "shape: SQUARE posn: upper left color: light blue texture: burlap" attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) # print attributes as plain groups print(OneOrMore(attr_expr).parseString(text).dump()) # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names result = Dict(OneOrMore(Group(attr_expr))).parseString(text) print(result.dump()) # access named fields as dict entries, or output as dict print(result['shape']) print(result.asDict()) prints:: ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap'] [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - color: light blue - posn: upper left - shape: SQUARE - texture: burlap SQUARE {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'} See more examples at L{ParseResults} of accessing fields by results name. 
""" def __init__( self, expr ): super(Dict,self).__init__( expr ) self.saveAsList = True def postParse( self, instring, loc, tokenlist ): for i,tok in enumerate(tokenlist): if len(tok) == 0: continue ikey = tok[0] if isinstance(ikey,int): ikey = _ustr(tok[0]).strip() if len(tok)==1: tokenlist[ikey] = _ParseResultsWithOffset("",i) elif len(tok)==2 and not isinstance(tok[1],ParseResults): tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i) else: dictvalue = tok.copy() #ParseResults(i) del dictvalue[0] if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.haskeys()): tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i) else: tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i) if self.resultsName: return [ tokenlist ] else: return tokenlist class Suppress(TokenConverter): """ Converter for ignoring the results of a parsed expression. Example:: source = "a, b, c,d" wd = Word(alphas) wd_list1 = wd + ZeroOrMore(',' + wd) print(wd_list1.parseString(source)) # often, delimiters that are useful during parsing are just in the # way afterward - use Suppress to keep them out of the parsed output wd_list2 = wd + ZeroOrMore(Suppress(',') + wd) print(wd_list2.parseString(source)) prints:: ['a', ',', 'b', ',', 'c', ',', 'd'] ['a', 'b', 'c', 'd'] (See also L{delimitedList}.) """ def postParse( self, instring, loc, tokenlist ): return [] def suppress( self ): return self class OnlyOnce(object): """ Wrapper for parse actions, to ensure they are only called once. """ def __init__(self, methodCall): self.callable = _trim_arity(methodCall) self.called = False def __call__(self,s,l,t): if not self.called: results = self.callable(s,l,t) self.called = True return results raise ParseException(s,l,"") def reset(self): self.called = False def traceParseAction(f): """ Decorator for debugging parse actions. When the parse action is called, this decorator will print C{">> entering I{method-name}(line:I{current_source_line}, I{parse_location}, I{matched_tokens})".} When the parse action completes, the decorator will print C{"<<"} followed by the returned value, or any exception that the parse action raised. Example:: wd = Word(alphas) @traceParseAction def remove_duplicate_chars(tokens): return ''.join(sorted(set(''.join(tokens)))) wds = OneOrMore(wd).setParseAction(remove_duplicate_chars) print(wds.parseString("slkdjs sld sldd sdlf sdljf")) prints:: >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {})) <<leaving remove_duplicate_chars (ret: 'dfjkls') ['dfjkls'] """ f = _trim_arity(f) def z(*paArgs): thisFunc = f.__name__ s,l,t = paArgs[-3:] if len(paArgs)>3: thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc sys.stderr.write( ">>entering %s(line: '%s', %d, %r)\n" % (thisFunc,line(l,s),l,t) ) try: ret = f(*paArgs) except Exception as exc: sys.stderr.write( "<<leaving %s (exception: %s)\n" % (thisFunc,exc) ) raise sys.stderr.write( "<<leaving %s (ret: %r)\n" % (thisFunc,ret) ) return ret try: z.__name__ = f.__name__ except AttributeError: pass return z # # global helpers # def delimitedList( expr, delim=",", combine=False ): """ Helper to define a delimited list of expressions - the delimiter defaults to ','. By default, the list elements and delimiters can have intervening whitespace, and comments, but this can be overridden by passing C{combine=True} in the constructor. 
If C{combine} is set to C{True}, the matching tokens are returned as a single token string, with the delimiters included; otherwise, the matching tokens are returned as a list of tokens, with the delimiters suppressed. Example:: delimitedList(Word(alphas)).parseString("aa,bb,cc") # -> ['aa', 'bb', 'cc'] delimitedList(Word(hexnums), delim=':', combine=True).parseString("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] """ dlName = _ustr(expr)+" ["+_ustr(delim)+" "+_ustr(expr)+"]..." if combine: return Combine( expr + ZeroOrMore( delim + expr ) ).setName(dlName) else: return ( expr + ZeroOrMore( Suppress( delim ) + expr ) ).setName(dlName) def countedArray( expr, intExpr=None ): """ Helper to define a counted list of expressions. This helper defines a pattern of the form:: integer expr expr expr... where the leading integer tells how many expr expressions follow. The matched tokens are returned as a list of expr tokens - the leading count token is suppressed. If C{intExpr} is specified, it should be a pyparsing expression that produces an integer value. Example:: countedArray(Word(alphas)).parseString('2 ab cd ef') # -> ['ab', 'cd'] # in this parser, the leading integer value is given in binary, # '10' indicating that 2 values are in the array binaryConstant = Word('01').setParseAction(lambda t: int(t[0], 2)) countedArray(Word(alphas), intExpr=binaryConstant).parseString('10 ab cd ef') # -> ['ab', 'cd'] """ arrayExpr = Forward() def countFieldParseAction(s,l,t): n = t[0] arrayExpr << (n and Group(And([expr]*n)) or Group(empty)) return [] if intExpr is None: intExpr = Word(nums).setParseAction(lambda t:int(t[0])) else: intExpr = intExpr.copy() intExpr.setName("arrayLen") intExpr.addParseAction(countFieldParseAction, callDuringTry=True) return ( intExpr + arrayExpr ).setName('(len) ' + _ustr(expr) + '...') def _flatten(L): ret = [] for i in L: if isinstance(i,list): ret.extend(_flatten(i)) else: ret.append(i) return ret def matchPreviousLiteral(expr): """ Helper to define an expression that is indirectly defined from the tokens matched in a previous expression, that is, it looks for a 'repeat' of a previous expression. For example:: first = Word(nums) second = matchPreviousLiteral(first) matchExpr = first + ":" + second will match C{"1:1"}, but not C{"1:2"}. Because this matches a previous literal, it will also match the leading C{"1:1"} in C{"1:10"}. If this is not desired, use C{matchPreviousExpr}. Do I{not} use with packrat parsing enabled. """ rep = Forward() def copyTokenToRepeater(s,l,t): if t: if len(t) == 1: rep << t[0] else: # flatten t tokens tflat = _flatten(t.asList()) rep << And(Literal(tt) for tt in tflat) else: rep << Empty() expr.addParseAction(copyTokenToRepeater, callDuringTry=True) rep.setName('(prev) ' + _ustr(expr)) return rep def matchPreviousExpr(expr): """ Helper to define an expression that is indirectly defined from the tokens matched in a previous expression, that is, it looks for a 'repeat' of a previous expression. For example:: first = Word(nums) second = matchPreviousExpr(first) matchExpr = first + ":" + second will match C{"1:1"}, but not C{"1:2"}. Because this matches by expressions, it will I{not} match the leading C{"1:1"} in C{"1:10"}; the expressions are evaluated first, and then compared, so C{"1"} is compared with C{"10"}. Do I{not} use with packrat parsing enabled. 
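Example (a hedged sketch added for illustration, assuming the usual C{from pyparsing import *} namespace)::
    first = Word(nums)
    matchExpr = first + ":" + matchPreviousExpr(first)
    print(matchExpr.parseString("12:12"))  # -> ['12', ':', '12']
    # "12:129" fails: the second field parses as "129", which is then compared with "12"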
""" rep = Forward() e2 = expr.copy() rep <<= e2 def copyTokenToRepeater(s,l,t): matchTokens = _flatten(t.asList()) def mustMatchTheseTokens(s,l,t): theseTokens = _flatten(t.asList()) if theseTokens != matchTokens: raise ParseException("",0,"") rep.setParseAction( mustMatchTheseTokens, callDuringTry=True ) expr.addParseAction(copyTokenToRepeater, callDuringTry=True) rep.setName('(prev) ' + _ustr(expr)) return rep def _escapeRegexRangeChars(s): #~ escape these chars: ^-] for c in r"\^-]": s = s.replace(c,_bslash+c) s = s.replace("\n",r"\n") s = s.replace("\t",r"\t") return _ustr(s) def oneOf( strs, caseless=False, useRegex=True ): """ Helper to quickly define a set of alternative Literals, and makes sure to do longest-first testing when there is a conflict, regardless of the input order, but returns a C{L{MatchFirst}} for best performance. Parameters: - strs - a string of space-delimited literals, or a collection of string literals - caseless - (default=C{False}) - treat all literals as caseless - useRegex - (default=C{True}) - as an optimization, will generate a Regex object; otherwise, will generate a C{MatchFirst} object (if C{caseless=True}, or if creating a C{Regex} raises an exception) Example:: comp_oper = oneOf("< = > <= >= !=") var = Word(alphas) number = Word(nums) term = var | number comparison_expr = term + comp_oper + term print(comparison_expr.searchString("B = 12 AA=23 B<=AA AA>12")) prints:: [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] """ if caseless: isequal = ( lambda a,b: a.upper() == b.upper() ) masks = ( lambda a,b: b.upper().startswith(a.upper()) ) parseElementClass = CaselessLiteral else: isequal = ( lambda a,b: a == b ) masks = ( lambda a,b: b.startswith(a) ) parseElementClass = Literal symbols = [] if isinstance(strs,basestring): symbols = strs.split() elif isinstance(strs, Iterable): symbols = list(strs) else: warnings.warn("Invalid argument to oneOf, expected string or iterable", SyntaxWarning, stacklevel=2) if not symbols: return NoMatch() i = 0 while i < len(symbols)-1: cur = symbols[i] for j,other in enumerate(symbols[i+1:]): if ( isequal(other, cur) ): del symbols[i+j+1] break elif ( masks(cur, other) ): del symbols[i+j+1] symbols.insert(i,other) cur = other break else: i += 1 if not caseless and useRegex: #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] )) try: if len(symbols)==len("".join(symbols)): return Regex( "[%s]" % "".join(_escapeRegexRangeChars(sym) for sym in symbols) ).setName(' | '.join(symbols)) else: return Regex( "|".join(re.escape(sym) for sym in symbols) ).setName(' | '.join(symbols)) except Exception: warnings.warn("Exception creating Regex for oneOf, building MatchFirst", SyntaxWarning, stacklevel=2) # last resort, just use MatchFirst return MatchFirst(parseElementClass(sym) for sym in symbols).setName(' | '.join(symbols)) def dictOf( key, value ): """ Helper to easily and clearly define a dictionary by specifying the respective patterns for the key and value. Takes care of defining the C{L{Dict}}, C{L{ZeroOrMore}}, and C{L{Group}} tokens in the proper order. The key pattern can include delimiting markers or punctuation, as long as they are suppressed, thereby leaving the significant key text. The value pattern can include named results, so that the C{Dict} results can include named token fields. 
Example:: text = "shape: SQUARE posn: upper left color: light blue texture: burlap" data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) print(OneOrMore(attr_expr).parseString(text).dump()) attr_label = label attr_value = Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join) # similar to Dict, but simpler call format result = dictOf(attr_label, attr_value).parseString(text) print(result.dump()) print(result['shape']) print(result.shape) # object attribute access works too print(result.asDict()) prints:: [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - color: light blue - posn: upper left - shape: SQUARE - texture: burlap SQUARE SQUARE {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} """ return Dict( ZeroOrMore( Group ( key + value ) ) ) def originalTextFor(expr, asString=True): """ Helper to return the original, untokenized text for a given expression. Useful to restore the parsed fields of an HTML start tag into the raw tag text itself, or to revert separate tokens with intervening whitespace back to the original matching input text. By default, returns a string containing the original parsed text. If the optional C{asString} argument is passed as C{False}, then the return value is a C{L{ParseResults}} containing any results names that were originally matched, and a single token containing the original matched text from the input string. So if the expression passed to C{L{originalTextFor}} contains expressions with defined results names, you must set C{asString} to C{False} if you want to preserve those results name values. Example:: src = "this is test <b> bold <i>text</i> </b> normal text " for tag in ("b","i"): opener,closer = makeHTMLTags(tag) patt = originalTextFor(opener + SkipTo(closer) + closer) print(patt.searchString(src)[0]) prints:: ['<b> bold <i>text</i> </b>'] ['<i>text</i>'] """ locMarker = Empty().setParseAction(lambda s,loc,t: loc) endlocMarker = locMarker.copy() endlocMarker.callPreparse = False matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") if asString: extractText = lambda s,l,t: s[t._original_start:t._original_end] else: def extractText(s,l,t): t[:] = [s[t.pop('_original_start'):t.pop('_original_end')]] matchExpr.setParseAction(extractText) matchExpr.ignoreExprs = expr.ignoreExprs return matchExpr def ungroup(expr): """ Helper to undo pyparsing's default grouping of And expressions, even if all but one are non-empty. """ return TokenConverter(expr).setParseAction(lambda t:t[0]) def locatedExpr(expr): """ Helper to decorate a returned token with its starting and ending locations in the input string. 
This helper adds the following results names: - locn_start = location where matched expression begins - locn_end = location where matched expression ends - value = the actual parsed results Be careful if the input text contains C{<TAB>} characters, you may want to call C{L{ParserElement.parseWithTabs}} Example:: wd = Word(alphas) for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): print(match) prints:: [[0, 'ljsdf', 5]] [[8, 'lksdjjf', 15]] [[18, 'lkkjj', 23]] """ locator = Empty().setParseAction(lambda s,l,t: l) return Group(locator("locn_start") + expr("value") + locator.copy().leaveWhitespace()("locn_end")) # convenience constants for positional expressions empty = Empty().setName("empty") lineStart = LineStart().setName("lineStart") lineEnd = LineEnd().setName("lineEnd") stringStart = StringStart().setName("stringStart") stringEnd = StringEnd().setName("stringEnd") _escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1]) _escapedHexChar = Regex(r"\\0?[xX][0-9a-fA-F]+").setParseAction(lambda s,l,t:unichr(int(t[0].lstrip(r'\0x'),16))) _escapedOctChar = Regex(r"\\0[0-7]+").setParseAction(lambda s,l,t:unichr(int(t[0][1:],8))) _singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | CharsNotIn(r'\]', exact=1) _charRange = Group(_singleChar + Suppress("-") + _singleChar) _reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]" def srange(s): r""" Helper to easily define string ranges for use in Word construction. Borrows syntax from regexp '[]' string range definitions:: srange("[0-9]") -> "0123456789" srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz" srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_" The input string must be enclosed in []'s, and the returned string is the expanded character set joined into a single string. The values enclosed in the []'s may be: - a single character - an escaped character with a leading backslash (such as C{\-} or C{\]}) - an escaped hex character with a leading C{'\x'} (C{\x21}, which is a C{'!'} character) (C{\0x##} is also supported for backwards compatibility) - an escaped octal character with a leading C{'\0'} (C{\041}, which is a C{'!'} character) - a range of any of the above, separated by a dash (C{'a-z'}, etc.) - any combination of the above (C{'aeiouy'}, C{'a-zA-Z0-9_$'}, etc.) """ _expanded = lambda p: p if not isinstance(p,ParseResults) else ''.join(unichr(c) for c in range(ord(p[0]),ord(p[1])+1)) try: return "".join(_expanded(part) for part in _reBracketExpr.parseString(s).body) except Exception: return "" def matchOnlyAtCol(n): """ Helper method for defining parse actions that require matching at a specific column in the input text. """ def verifyCol(strg,locn,toks): if col(locn,strg) != n: raise ParseException(strg,locn,"matched token not at column %d" % n) return verifyCol def replaceWith(replStr): """ Helper method for common parse actions that simply return a literal value. Especially useful when used with C{L{transformString<ParserElement.transformString>}()}. Example:: num = Word(nums).setParseAction(lambda toks: int(toks[0])) na = oneOf("N/A NA").setParseAction(replaceWith(math.nan)) term = na | num OneOrMore(term).parseString("324 234 N/A 234") # -> [324, 234, nan, 234] """ return lambda s,l,t: [replStr] def removeQuotes(s,l,t): """ Helper parse action for removing quotation marks from parsed quoted strings. 
Example:: # by default, quotation marks are included in parsed results quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] # use removeQuotes to strip quotation marks from parsed results quotedString.setParseAction(removeQuotes) quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] """ return t[0][1:-1] def tokenMap(func, *args): """ Helper to define a parse action by mapping a function to all elements of a ParseResults list. If any additional args are passed, they are forwarded to the given function as additional arguments after the token, as in C{hex_integer = Word(hexnums).setParseAction(tokenMap(int, 16))}, which will convert the parsed data to an integer using base 16. Example (compare the last example to the one in L{ParserElement.transformString}):: hex_ints = OneOrMore(Word(hexnums)).setParseAction(tokenMap(int, 16)) hex_ints.runTests(''' 00 11 22 aa FF 0a 0d 1a ''') upperword = Word(alphas).setParseAction(tokenMap(str.upper)) OneOrMore(upperword).runTests(''' my kingdom for a horse ''') wd = Word(alphas).setParseAction(tokenMap(str.title)) OneOrMore(wd).setParseAction(' '.join).runTests(''' now is the winter of our discontent made glorious summer by this sun of york ''') prints:: 00 11 22 aa FF 0a 0d 1a [0, 17, 34, 170, 255, 10, 13, 26] my kingdom for a horse ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE'] now is the winter of our discontent made glorious summer by this sun of york ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York'] """ def pa(s,l,t): return [func(tokn, *args) for tokn in t] try: func_name = getattr(func, '__name__', getattr(func, '__class__').__name__) except Exception: func_name = str(func) pa.__name__ = func_name return pa upcaseTokens = tokenMap(lambda t: _ustr(t).upper()) """(Deprecated) Helper parse action to convert tokens to upper case. Deprecated in favor of L{pyparsing_common.upcaseTokens}""" downcaseTokens = tokenMap(lambda t: _ustr(t).lower()) """(Deprecated) Helper parse action to convert tokens to lower case. 
Deprecated in favor of L{pyparsing_common.downcaseTokens}""" def _makeTags(tagStr, xml): """Internal helper to construct opening and closing tag expressions, given a tag name""" if isinstance(tagStr,basestring): resname = tagStr tagStr = Keyword(tagStr, caseless=not xml) else: resname = tagStr.name tagAttrName = Word(alphas,alphanums+"_-:") if (xml): tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes ) openTag = Suppress("<") + tagStr("tag") + \ Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">") else: printablesLessRAbrack = "".join(c for c in printables if c not in ">") tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack) openTag = Suppress("<") + tagStr("tag") + \ Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \ Optional( Suppress("=") + tagAttrValue ) ))) + \ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">") closeTag = Combine(_L("</") + tagStr + ">") openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % resname) closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % resname) openTag.tag = resname closeTag.tag = resname return openTag, closeTag def makeHTMLTags(tagStr): """ Helper to construct opening and closing tag expressions for HTML, given a tag name. Matches tags in either upper or lower case, attributes with namespaces and with quoted or unquoted values. Example:: text = '<td>More info at the <a href="http://pyparsing.wikispaces.com">pyparsing</a> wiki page</td>' # makeHTMLTags returns pyparsing expressions for the opening and closing tags as a 2-tuple a,a_end = makeHTMLTags("A") link_expr = a + SkipTo(a_end)("link_text") + a_end for link in link_expr.searchString(text): # attributes in the <A> tag (like "href" shown here) are also accessible as named results print(link.link_text, '->', link.href) prints:: pyparsing -> http://pyparsing.wikispaces.com """ return _makeTags( tagStr, False ) def makeXMLTags(tagStr): """ Helper to construct opening and closing tag expressions for XML, given a tag name. Matches tags only in the given upper/lower case. Example: similar to L{makeHTMLTags} """ return _makeTags( tagStr, True ) def withAttribute(*args,**attrDict): """ Helper to create a validating parse action to be used with start tags created with C{L{makeXMLTags}} or C{L{makeHTMLTags}}. Use C{withAttribute} to qualify a starting tag with a required attribute value, to avoid false matches on common tags such as C{<TD>} or C{<DIV>}. Call C{withAttribute} with a series of attribute names and values. Specify the list of filter attribute names and values as: - keyword arguments, as in C{(align="right")}, or - as an explicit dict with C{**} operator, when an attribute name is also a Python reserved word, as in C{**{"class":"Customer", "align":"right"}} - a list of name-value tuples, as in ( ("ns1:class", "Customer"), ("ns2:align","right") ) For attribute names with a namespace prefix, you must use the second form. Attribute names are matched insensitive to upper/lower case. If just testing for C{class} (with or without a namespace), use C{L{withClass}}. To verify that the attribute exists, but without specifying a value, pass C{withAttribute.ANY_VALUE} as the value. 
Example:: html = ''' <div> Some text <div type="grid">1 4 0 1 0</div> <div type="graph">1,3 2,3 1,1</div> <div>this has no type</div> </div> ''' div,div_end = makeHTMLTags("div") # only match div tag having a type attribute with value "grid" div_grid = div().setParseAction(withAttribute(type="grid")) grid_expr = div_grid + SkipTo(div | div_end)("body") for grid_header in grid_expr.searchString(html): print(grid_header.body) # construct a match with any div tag having a type attribute, regardless of the value div_any_type = div().setParseAction(withAttribute(type=withAttribute.ANY_VALUE)) div_expr = div_any_type + SkipTo(div | div_end)("body") for div_header in div_expr.searchString(html): print(div_header.body) prints:: 1 4 0 1 0 1 4 0 1 0 1,3 2,3 1,1 """ if args: attrs = args[:] else: attrs = attrDict.items() attrs = [(k,v) for k,v in attrs] def pa(s,l,tokens): for attrName,attrValue in attrs: if attrName not in tokens: raise ParseException(s,l,"no matching attribute " + attrName) if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue: raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" % (attrName, tokens[attrName], attrValue)) return pa withAttribute.ANY_VALUE = object() def withClass(classname, namespace=''): """ Simplified version of C{L{withAttribute}} when matching on a div class - made difficult because C{class} is a reserved word in Python. Example:: html = ''' <div> Some text <div class="grid">1 4 0 1 0</div> <div class="graph">1,3 2,3 1,1</div> <div>this <div> has no class</div> </div> ''' div,div_end = makeHTMLTags("div") div_grid = div().setParseAction(withClass("grid")) grid_expr = div_grid + SkipTo(div | div_end)("body") for grid_header in grid_expr.searchString(html): print(grid_header.body) div_any_type = div().setParseAction(withClass(withAttribute.ANY_VALUE)) div_expr = div_any_type + SkipTo(div | div_end)("body") for div_header in div_expr.searchString(html): print(div_header.body) prints:: 1 4 0 1 0 1 4 0 1 0 1,3 2,3 1,1 """ classattr = "%s:class" % namespace if namespace else "class" return withAttribute(**{classattr : classname}) opAssoc = _Constants() opAssoc.LEFT = object() opAssoc.RIGHT = object() def infixNotation( baseExpr, opList, lpar=Suppress('('), rpar=Suppress(')') ): """ Helper method for constructing grammars of expressions made up of operators working in a precedence hierarchy. Operators may be unary or binary, left- or right-associative. Parse actions can also be attached to operator expressions. The generated parser will also recognize the use of parentheses to override operator precedences (see example below). Note: if you define a deep operator list, you may see performance issues when using infixNotation. See L{ParserElement.enablePackrat} for a mechanism to potentially improve your parser performance. Parameters: - baseExpr - expression representing the most basic element for the nested operations - opList - list of tuples, one for each operator precedence level in the expression grammar; each tuple is of the form (opExpr, numTerms, rightLeftAssoc, parseAction), where: - opExpr is the pyparsing expression for the operator; may also be a string, which will be converted to a Literal; if numTerms is 3, opExpr is a tuple of two expressions, for the two operators separating the 3 terms - numTerms is the number of terms for this operator (must be 1, 2, or 3) - rightLeftAssoc is the indicator of whether the operator is right or left associative, using the pyparsing-defined constants C{opAssoc.RIGHT} and C{opAssoc.LEFT}. 
- parseAction is the parse action to be associated with expressions matching this operator expression (the parse action tuple member may be omitted); if the parse action is passed a tuple or list of functions, this is equivalent to calling C{setParseAction(*fn)} (L{ParserElement.setParseAction}) - lpar - expression for matching left-parentheses (default=C{Suppress('(')}) - rpar - expression for matching right-parentheses (default=C{Suppress(')')}) Example:: # simple example of four-function arithmetic with ints and variable names integer = pyparsing_common.signed_integer varname = pyparsing_common.identifier arith_expr = infixNotation(integer | varname, [ ('-', 1, opAssoc.RIGHT), (oneOf('* /'), 2, opAssoc.LEFT), (oneOf('+ -'), 2, opAssoc.LEFT), ]) arith_expr.runTests(''' 5+3*6 (5+3)*6 -2--11 ''', fullDump=False) prints:: 5+3*6 [[5, '+', [3, '*', 6]]] (5+3)*6 [[[5, '+', 3], '*', 6]] -2--11 [[['-', 2], '-', ['-', 11]]] """ ret = Forward() lastExpr = baseExpr | ( lpar + ret + rpar ) for i,operDef in enumerate(opList): opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4] termName = "%s term" % opExpr if arity < 3 else "%s%s term" % opExpr if arity == 3: if opExpr is None or len(opExpr) != 2: raise ValueError("if numterms=3, opExpr must be a tuple or list of two expressions") opExpr1, opExpr2 = opExpr thisExpr = Forward().setName(termName) if rightLeftAssoc == opAssoc.LEFT: if arity == 1: matchExpr = FollowedBy(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) ) elif arity == 2: if opExpr is not None: matchExpr = FollowedBy(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) ) else: matchExpr = FollowedBy(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) ) elif arity == 3: matchExpr = FollowedBy(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \ Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr ) else: raise ValueError("operator must be unary (1), binary (2), or ternary (3)") elif rightLeftAssoc == opAssoc.RIGHT: if arity == 1: # try to avoid LR with this extra test if not isinstance(opExpr, Optional): opExpr = Optional(opExpr) matchExpr = FollowedBy(opExpr.expr + thisExpr) + Group( opExpr + thisExpr ) elif arity == 2: if opExpr is not None: matchExpr = FollowedBy(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) ) else: matchExpr = FollowedBy(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) ) elif arity == 3: matchExpr = FollowedBy(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \ Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr ) else: raise ValueError("operator must be unary (1), binary (2), or ternary (3)") else: raise ValueError("operator must indicate right or left associativity") if pa: if isinstance(pa, (tuple, list)): matchExpr.setParseAction(*pa) else: matchExpr.setParseAction(pa) thisExpr <<= ( matchExpr.setName(termName) | lastExpr ) lastExpr = thisExpr ret <<= lastExpr return ret operatorPrecedence = infixNotation """(Deprecated) Former name of C{L{infixNotation}}, will be dropped in a future release.""" dblQuotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"').setName("string enclosed in double quotes") sglQuotedString = Combine(Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("string enclosed in single quotes") quotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"'| Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("quotedString using 
single or double quotes") unicodeString = Combine(_L('u') + quotedString.copy()).setName("unicode string literal") def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString.copy()): """ Helper method for defining nested lists enclosed in opening and closing delimiters ("(" and ")" are the default). Parameters: - opener - opening character for a nested list (default=C{"("}); can also be a pyparsing expression - closer - closing character for a nested list (default=C{")"}); can also be a pyparsing expression - content - expression for items within the nested lists (default=C{None}) - ignoreExpr - expression for ignoring opening and closing delimiters (default=C{quotedString}) If an expression is not provided for the content argument, the nested expression will capture all whitespace-delimited content between delimiters as a list of separate values. Use the C{ignoreExpr} argument to define expressions that may contain opening or closing characters that should not be treated as opening or closing characters for nesting, such as quotedString or a comment expression. Specify multiple expressions using an C{L{Or}} or C{L{MatchFirst}}. The default is L{quotedString}, but if no expressions are to be ignored, then pass C{None} for this argument. Example:: data_type = oneOf("void int short long char float double") decl_data_type = Combine(data_type + Optional(Word('*'))) ident = Word(alphas+'_', alphanums+'_') number = pyparsing_common.number arg = Group(decl_data_type + ident) LPAR,RPAR = map(Suppress, "()") code_body = nestedExpr('{', '}', ignoreExpr=(quotedString | cStyleComment)) c_function = (decl_data_type("type") + ident("name") + LPAR + Optional(delimitedList(arg), [])("args") + RPAR + code_body("body")) c_function.ignore(cStyleComment) source_code = ''' int is_odd(int x) { return (x%2); } int dec_to_hex(char hchar) { if (hchar >= '0' && hchar <= '9') { return (ord(hchar)-ord('0')); } else { return (10+ord(hchar)-ord('A')); } } ''' for func in c_function.searchString(source_code): print("%(name)s (%(type)s) args: %(args)s" % func) prints:: is_odd (int) args: [['int', 'x']] dec_to_hex (int) args: [['char', 'hchar']] """ if opener == closer: raise ValueError("opening and closing strings cannot be the same") if content is None: if isinstance(opener,basestring) and isinstance(closer,basestring): if len(opener) == 1 and len(closer)==1: if ignoreExpr is not None: content = (Combine(OneOrMore(~ignoreExpr + CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: content = (empty.copy()+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS ).setParseAction(lambda t:t[0].strip())) else: if ignoreExpr is not None: content = (Combine(OneOrMore(~ignoreExpr + ~Literal(opener) + ~Literal(closer) + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) ).setParseAction(lambda t:t[0].strip())) else: raise ValueError("opening and closing arguments must be strings if no content expression is given") ret = Forward() if ignoreExpr is not None: ret <<= Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) ) else: ret <<= Group( Suppress(opener) + ZeroOrMore( ret | content ) + Suppress(closer) ) ret.setName('nested %s%s expression' % (opener,closer)) return ret def indentedBlock(blockStatementExpr, indentStack, indent=True): """ Helper 
method for defining space-delimited indentation blocks, such as those used to define block statements in Python source code. Parameters: - blockStatementExpr - expression defining syntax of statement that is repeated within the indented block - indentStack - list created by caller to manage indentation stack (multiple statementWithIndentedBlock expressions within a single grammar should share a common indentStack) - indent - boolean indicating whether block must be indented beyond the current level; set to False for block of left-most statements (default=C{True}) A valid block must contain at least one C{blockStatement}. Example:: data = ''' def A(z): A1 B = 100 G = A2 A2 A3 B def BB(a,b,c): BB1 def BBA(): bba1 bba2 bba3 C D def spam(x,y): def eggs(z): pass ''' indentStack = [1] stmt = Forward() identifier = Word(alphas, alphanums) funcDecl = ("def" + identifier + Group( "(" + Optional( delimitedList(identifier) ) + ")" ) + ":") func_body = indentedBlock(stmt, indentStack) funcDef = Group( funcDecl + func_body ) rvalue = Forward() funcCall = Group(identifier + "(" + Optional(delimitedList(rvalue)) + ")") rvalue << (funcCall | identifier | Word(nums)) assignment = Group(identifier + "=" + rvalue) stmt << ( funcDef | assignment | identifier ) module_body = OneOrMore(stmt) parseTree = module_body.parseString(data) parseTree.pprint() prints:: [['def', 'A', ['(', 'z', ')'], ':', [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], 'B', ['def', 'BB', ['(', 'a', 'b', 'c', ')'], ':', [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], 'C', 'D', ['def', 'spam', ['(', 'x', 'y', ')'], ':', [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] """ def checkPeerIndent(s,l,t): if l >= len(s): return curCol = col(l,s) if curCol != indentStack[-1]: if curCol > indentStack[-1]: raise ParseFatalException(s,l,"illegal nesting") raise ParseException(s,l,"not a peer entry") def checkSubIndent(s,l,t): curCol = col(l,s) if curCol > indentStack[-1]: indentStack.append( curCol ) else: raise ParseException(s,l,"not a subentry") def checkUnindent(s,l,t): if l >= len(s): return curCol = col(l,s) if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]): raise ParseException(s,l,"not an unindent") indentStack.pop() NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress()) INDENT = (Empty() + Empty().setParseAction(checkSubIndent)).setName('INDENT') PEER = Empty().setParseAction(checkPeerIndent).setName('') UNDENT = Empty().setParseAction(checkUnindent).setName('UNINDENT') if indent: smExpr = Group( Optional(NL) + #~ FollowedBy(blockStatementExpr) + INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT) else: smExpr = Group( Optional(NL) + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) ) blockStatementExpr.ignore(_bslash + LineEnd()) return smExpr.setName('indented block') alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]") punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]") anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:").setName('any tag')) _htmlEntityMap = dict(zip("gt lt amp nbsp quot apos".split(),'><& "\'')) commonHTMLEntity = Regex('&(?P<entity>' + '|'.join(_htmlEntityMap.keys()) +");").setName("common HTML entity") def replaceHTMLEntity(t): """Helper parser action to replace common HTML entities with their special characters""" return _htmlEntityMap.get(t.entity) # it's easy to get these comment structures wrong - they're very common, so may as well make them available 
cStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/').setName("C style comment") "Comment of the form C{/* ... */}" htmlComment = Regex(r"<!--[\s\S]*?-->").setName("HTML comment") "Comment of the form C{<!-- ... -->}" restOfLine = Regex(r".*").leaveWhitespace().setName("rest of line") dblSlashComment = Regex(r"//(?:\\\n|[^\n])*").setName("// comment") "Comment of the form C{// ... (to end of line)}" cppStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/'| dblSlashComment).setName("C++ style comment") "Comment of either form C{L{cStyleComment}} or C{L{dblSlashComment}}" javaStyleComment = cppStyleComment "Same as C{L{cppStyleComment}}" pythonStyleComment = Regex(r"#.*").setName("Python style comment") "Comment of the form C{# ... (to end of line)}" _commasepitem = Combine(OneOrMore(Word(printables, excludeChars=',') + Optional( Word(" \t") + ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem") commaSeparatedList = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("commaSeparatedList") """(Deprecated) Predefined expression of 1 or more printable words or quoted strings, separated by commas. This expression is deprecated in favor of L{pyparsing_common.comma_separated_list}.""" # some other useful expressions - using lower-case class name since we are really using this as a namespace class pyparsing_common: """ Here are some common low-level expressions that may be useful in jump-starting parser development: - numeric forms (L{integers<integer>}, L{reals<real>}, L{scientific notation<sci_real>}) - common L{programming identifiers<identifier>} - network addresses (L{MAC<mac_address>}, L{IPv4<ipv4_address>}, L{IPv6<ipv6_address>}) - ISO8601 L{dates<iso8601_date>} and L{datetime<iso8601_datetime>} - L{UUID<uuid>} - L{comma-separated list<comma_separated_list>} Parse actions: - C{L{convertToInteger}} - C{L{convertToFloat}} - C{L{convertToDate}} - C{L{convertToDatetime}} - C{L{stripHTMLTags}} - C{L{upcaseTokens}} - C{L{downcaseTokens}} Example:: pyparsing_common.number.runTests(''' # any int or real number, returned as the appropriate type 100 -100 +100 3.14159 6.02e23 1e-12 ''') pyparsing_common.fnumber.runTests(''' # any int or real number, returned as float 100 -100 +100 3.14159 6.02e23 1e-12 ''') pyparsing_common.hex_integer.runTests(''' # hex numbers 100 FF ''') pyparsing_common.fraction.runTests(''' # fractions 1/2 -3/4 ''') pyparsing_common.mixed_integer.runTests(''' # mixed fractions 1 1/2 -3/4 1-3/4 ''') import uuid pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) pyparsing_common.uuid.runTests(''' # uuid 12345678-1234-5678-1234-567812345678 ''') prints:: # any int or real number, returned as the appropriate type 100 [100] -100 [-100] +100 [100] 3.14159 [3.14159] 6.02e23 [6.02e+23] 1e-12 [1e-12] # any int or real number, returned as float 100 [100.0] -100 [-100.0] +100 [100.0] 3.14159 [3.14159] 6.02e23 [6.02e+23] 1e-12 [1e-12] # hex numbers 100 [256] FF [255] # fractions 1/2 [0.5] -3/4 [-0.75] # mixed fractions 1 [1] 1/2 [0.5] -3/4 [-0.75] 1-3/4 [1.75] # uuid 12345678-1234-5678-1234-567812345678 [UUID('12345678-1234-5678-1234-567812345678')] """ convertToInteger = tokenMap(int) """ Parse action for converting parsed integers to Python int """ convertToFloat = tokenMap(float) """ Parse action for converting parsed numbers to Python float """ integer = Word(nums).setName("integer").setParseAction(convertToInteger) """expression that parses an unsigned integer, returns an int""" hex_integer = Word(hexnums).setName("hex 
integer").setParseAction(tokenMap(int,16)) """expression that parses a hexadecimal integer, returns an int""" signed_integer = Regex(r'[+-]?\d+').setName("signed integer").setParseAction(convertToInteger) """expression that parses an integer with optional leading sign, returns an int""" fraction = (signed_integer().setParseAction(convertToFloat) + '/' + signed_integer().setParseAction(convertToFloat)).setName("fraction") """fractional expression of an integer divided by an integer, returns a float""" fraction.addParseAction(lambda t: t[0]/t[-1]) mixed_integer = (fraction | signed_integer + Optional(Optional('-').suppress() + fraction)).setName("fraction or mixed integer-fraction") """mixed integer of the form 'integer - fraction', with optional leading integer, returns float""" mixed_integer.addParseAction(sum) real = Regex(r'[+-]?\d+\.\d*').setName("real number").setParseAction(convertToFloat) """expression that parses a floating point number and returns a float""" sci_real = Regex(r'[+-]?\d+([eE][+-]?\d+|\.\d*([eE][+-]?\d+)?)').setName("real number with scientific notation").setParseAction(convertToFloat) """expression that parses a floating point number with optional scientific notation and returns a float""" # streamlining this expression makes the docs nicer-looking number = (sci_real | real | signed_integer).streamline() """any numeric expression, returns the corresponding Python type""" fnumber = Regex(r'[+-]?\d+\.?\d*([eE][+-]?\d+)?').setName("fnumber").setParseAction(convertToFloat) """any int or real number, returned as float""" identifier = Word(alphas+'_', alphanums+'_').setName("identifier") """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')""" ipv4_address = Regex(r'(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}').setName("IPv4 address") "IPv4 address (C{0.0.0.0 - 255.255.255.255})" _ipv6_part = Regex(r'[0-9a-fA-F]{1,4}').setName("hex_integer") _full_ipv6_address = (_ipv6_part + (':' + _ipv6_part)*7).setName("full IPv6 address") _short_ipv6_address = (Optional(_ipv6_part + (':' + _ipv6_part)*(0,6)) + "::" + Optional(_ipv6_part + (':' + _ipv6_part)*(0,6))).setName("short IPv6 address") _short_ipv6_address.addCondition(lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8) _mixed_ipv6_address = ("::ffff:" + ipv4_address).setName("mixed IPv6 address") ipv6_address = Combine((_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).setName("IPv6 address")).setName("IPv6 address") "IPv6 address (long, short, or mixed form)" mac_address = Regex(r'[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}').setName("MAC address") "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' 
delimiters)" @staticmethod def convertToDate(fmt="%Y-%m-%d"): """ Helper to create a parse action for converting parsed date string to Python datetime.date Params - - fmt - format to be passed to datetime.strptime (default=C{"%Y-%m-%d"}) Example:: date_expr = pyparsing_common.iso8601_date.copy() date_expr.setParseAction(pyparsing_common.convertToDate()) print(date_expr.parseString("1999-12-31")) prints:: [datetime.date(1999, 12, 31)] """ def cvt_fn(s,l,t): try: return datetime.strptime(t[0], fmt).date() except ValueError as ve: raise ParseException(s, l, str(ve)) return cvt_fn @staticmethod def convertToDatetime(fmt="%Y-%m-%dT%H:%M:%S.%f"): """ Helper to create a parse action for converting parsed datetime string to Python datetime.datetime Params - - fmt - format to be passed to datetime.strptime (default=C{"%Y-%m-%dT%H:%M:%S.%f"}) Example:: dt_expr = pyparsing_common.iso8601_datetime.copy() dt_expr.setParseAction(pyparsing_common.convertToDatetime()) print(dt_expr.parseString("1999-12-31T23:59:59.999")) prints:: [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)] """ def cvt_fn(s,l,t): try: return datetime.strptime(t[0], fmt) except ValueError as ve: raise ParseException(s, l, str(ve)) return cvt_fn iso8601_date = Regex(r'(?P<year>\d{4})(?:-(?P<month>\d\d)(?:-(?P<day>\d\d))?)?').setName("ISO8601 date") "ISO8601 date (C{yyyy-mm-dd})" iso8601_datetime = Regex(r'(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d)[T ](?P<hour>\d\d):(?P<minute>\d\d)(:(?P<second>\d\d(\.\d*)?)?)?(?P<tz>Z|[+-]\d\d:?\d\d)?').setName("ISO8601 datetime") "ISO8601 datetime (C{yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)}) - trailing seconds, milliseconds, and timezone optional; accepts separating C{'T'} or C{' '}" uuid = Regex(r'[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}').setName("UUID") "UUID (C{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx})" _html_stripper = anyOpenTag.suppress() | anyCloseTag.suppress() @staticmethod def stripHTMLTags(s, l, tokens): """ Parse action to remove HTML tags from web page HTML source Example:: # strip HTML links from normal text text = '<td>More info at the <a href="http://pyparsing.wikispaces.com">pyparsing</a> wiki page</td>' td,td_end = makeHTMLTags("TD") table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end print(table_text.parseString(text).body) # -> 'More info at the pyparsing wiki page' """ return pyparsing_common._html_stripper.transformString(tokens[0]) _commasepitem = Combine(OneOrMore(~Literal(",") + ~LineEnd() + Word(printables, excludeChars=',') + Optional( White(" \t") ) ) ).streamline().setName("commaItem") comma_separated_list = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("comma separated list") """Predefined expression of 1 or more printable words or quoted strings, separated by commas.""" upcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).upper())) """Parse action to convert tokens to upper case.""" downcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).lower())) """Parse action to convert tokens to lower case.""" if __name__ == "__main__": selectToken = CaselessLiteral("select") fromToken = CaselessLiteral("from") ident = Word(alphas, alphanums + "_$") columnName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens) columnNameList = Group(delimitedList(columnName)).setName("columns") columnSpec = ('*' | columnNameList) tableName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens) tableNameList = Group(delimitedList(tableName)).setName("tables") simpleSQL = 
selectToken("command") + columnSpec("columns") + fromToken + tableNameList("tables") # demo runTests method, including embedded comments in test string simpleSQL.runTests(""" # '*' as column list and dotted table name select * from SYS.XYZZY # caseless match on "SELECT", and casts back to "select" SELECT * from XYZZY, ABC # list of column names, and mixed case SELECT keyword Select AA,BB,CC from Sys.dual # multiple tables Select A, B, C from Sys.dual, Table2 # invalid SELECT keyword - should fail Xelect A, B, C from Sys.dual # incomplete command - should fail Select # invalid column name - should fail Select ^^^ frox Sys.dual """) pyparsing_common.number.runTests(""" 100 -100 +100 3.14159 6.02e23 1e-12 """) # any int or real number, returned as float pyparsing_common.fnumber.runTests(""" 100 -100 +100 3.14159 6.02e23 1e-12 """) pyparsing_common.hex_integer.runTests(""" 100 FF """) import uuid pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) pyparsing_common.uuid.runTests(""" 12345678-1234-5678-1234-567812345678 """) ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/_vendor/six.py����������������������������0000644�0001750�0001750�00000072622�00000000000�027304� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������"""Utilities for writing code that runs on Python 2 and 3""" # Copyright (c) 2010-2015 Benjamin Peterson # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice 
and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from __future__ import absolute_import import functools import itertools import operator import sys import types __author__ = "Benjamin Peterson <benjamin@python.org>" __version__ = "1.10.0" # Useful for very coarse version differentiation. PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PY34 = sys.version_info[0:2] >= (3, 4) if PY3: string_types = str, integer_types = int, class_types = type, text_type = str binary_type = bytes MAXSIZE = sys.maxsize else: string_types = basestring, integer_types = (int, long) class_types = (type, types.ClassType) text_type = unicode binary_type = str if sys.platform.startswith("java"): # Jython always uses 32 bits. MAXSIZE = int((1 << 31) - 1) else: # It's possible to have sizeof(long) != sizeof(Py_ssize_t). class X(object): def __len__(self): return 1 << 31 try: len(X()) except OverflowError: # 32-bit MAXSIZE = int((1 << 31) - 1) else: # 64-bit MAXSIZE = int((1 << 63) - 1) del X def _add_doc(func, doc): """Add documentation to a function.""" func.__doc__ = doc def _import_module(name): """Import module, returning the module after the last dot.""" __import__(name) return sys.modules[name] class _LazyDescr(object): def __init__(self, name): self.name = name def __get__(self, obj, tp): result = self._resolve() setattr(obj, self.name, result) # Invokes __set__. try: # This is a bit ugly, but it avoids running this again by # removing this descriptor. delattr(obj.__class__, self.name) except AttributeError: pass return result class MovedModule(_LazyDescr): def __init__(self, name, old, new=None): super(MovedModule, self).__init__(name) if PY3: if new is None: new = name self.mod = new else: self.mod = old def _resolve(self): return _import_module(self.mod) def __getattr__(self, attr): _module = self._resolve() value = getattr(_module, attr) setattr(self, attr, value) return value class _LazyModule(types.ModuleType): def __init__(self, name): super(_LazyModule, self).__init__(name) self.__doc__ = self.__class__.__doc__ def __dir__(self): attrs = ["__doc__", "__name__"] attrs += [attr.name for attr in self._moved_attributes] return attrs # Subclasses should override this _moved_attributes = [] class MovedAttribute(_LazyDescr): def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): super(MovedAttribute, self).__init__(name) if PY3: if new_mod is None: new_mod = name self.mod = new_mod if new_attr is None: if old_attr is None: new_attr = name else: new_attr = old_attr self.attr = new_attr else: self.mod = old_mod if old_attr is None: old_attr = name self.attr = old_attr def _resolve(self): module = _import_module(self.mod) return getattr(module, self.attr) class _SixMetaPathImporter(object): """ A meta path importer to import six.moves and its submodules. This class implements a PEP302 finder and loader. 
It should be compatible with Python 2.5 and all existing versions of Python3 """ def __init__(self, six_module_name): self.name = six_module_name self.known_modules = {} def _add_module(self, mod, *fullnames): for fullname in fullnames: self.known_modules[self.name + "." + fullname] = mod def _get_module(self, fullname): return self.known_modules[self.name + "." + fullname] def find_module(self, fullname, path=None): if fullname in self.known_modules: return self return None def __get_module(self, fullname): try: return self.known_modules[fullname] except KeyError: raise ImportError("This loader does not know module " + fullname) def load_module(self, fullname): try: # in case of a reload return sys.modules[fullname] except KeyError: pass mod = self.__get_module(fullname) if isinstance(mod, MovedModule): mod = mod._resolve() else: mod.__loader__ = self sys.modules[fullname] = mod return mod def is_package(self, fullname): """ Return true, if the named module is a package. We need this method to get correct spec objects with Python 3.4 (see PEP451) """ return hasattr(self.__get_module(fullname), "__path__") def get_code(self, fullname): """Return None Required, if is_package is implemented""" self.__get_module(fullname) # eventually raises ImportError return None get_source = get_code # same as get_code _importer = _SixMetaPathImporter(__name__) class _MovedItems(_LazyModule): """Lazy loading of moved objects""" __path__ = [] # mark as package _moved_attributes = [ MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), MovedAttribute("intern", "__builtin__", "sys"), MovedAttribute("map", "itertools", "builtins", "imap", "map"), MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), MovedAttribute("reduce", "__builtin__", "functools"), MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), MovedAttribute("StringIO", "StringIO", "io"), MovedAttribute("UserDict", "UserDict", "collections"), MovedAttribute("UserList", "UserList", "collections"), MovedAttribute("UserString", "UserString", "collections"), MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), MovedModule("builtins", "__builtin__"), MovedModule("configparser", "ConfigParser"), MovedModule("copyreg", "copy_reg"), MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), MovedModule("http_cookies", "Cookie", "http.cookies"), MovedModule("html_entities", "htmlentitydefs", "html.entities"), MovedModule("html_parser", "HTMLParser", "html.parser"), MovedModule("http_client", "httplib", "http.client"), MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), MovedModule("email_mime_base", "email.MIMEBase", 
"email.mime.base"), MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), MovedModule("cPickle", "cPickle", "pickle"), MovedModule("queue", "Queue"), MovedModule("reprlib", "repr"), MovedModule("socketserver", "SocketServer"), MovedModule("_thread", "thread", "_thread"), MovedModule("tkinter", "Tkinter"), MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), MovedModule("tkinter_tix", "Tix", "tkinter.tix"), MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), MovedModule("tkinter_font", "tkFont", "tkinter.font"), MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), ] # Add windows specific modules. if sys.platform == "win32": _moved_attributes += [ MovedModule("winreg", "_winreg"), ] for attr in _moved_attributes: setattr(_MovedItems, attr.name, attr) if isinstance(attr, MovedModule): _importer._add_module(attr, "moves." 
+ attr.name) del attr _MovedItems._moved_attributes = _moved_attributes moves = _MovedItems(__name__ + ".moves") _importer._add_module(moves, "moves") class Module_six_moves_urllib_parse(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_parse""" _urllib_parse_moved_attributes = [ MovedAttribute("ParseResult", "urlparse", "urllib.parse"), MovedAttribute("SplitResult", "urlparse", "urllib.parse"), MovedAttribute("parse_qs", "urlparse", "urllib.parse"), MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), MovedAttribute("urldefrag", "urlparse", "urllib.parse"), MovedAttribute("urljoin", "urlparse", "urllib.parse"), MovedAttribute("urlparse", "urlparse", "urllib.parse"), MovedAttribute("urlsplit", "urlparse", "urllib.parse"), MovedAttribute("urlunparse", "urlparse", "urllib.parse"), MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), MovedAttribute("quote", "urllib", "urllib.parse"), MovedAttribute("quote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote", "urllib", "urllib.parse"), MovedAttribute("unquote_plus", "urllib", "urllib.parse"), MovedAttribute("urlencode", "urllib", "urllib.parse"), MovedAttribute("splitquery", "urllib", "urllib.parse"), MovedAttribute("splittag", "urllib", "urllib.parse"), MovedAttribute("splituser", "urllib", "urllib.parse"), MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), MovedAttribute("uses_params", "urlparse", "urllib.parse"), MovedAttribute("uses_query", "urlparse", "urllib.parse"), MovedAttribute("uses_relative", "urlparse", "urllib.parse"), ] for attr in _urllib_parse_moved_attributes: setattr(Module_six_moves_urllib_parse, attr.name, attr) del attr Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes _importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), "moves.urllib_parse", "moves.urllib.parse") class Module_six_moves_urllib_error(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_error""" _urllib_error_moved_attributes = [ MovedAttribute("URLError", "urllib2", "urllib.error"), MovedAttribute("HTTPError", "urllib2", "urllib.error"), MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), ] for attr in _urllib_error_moved_attributes: setattr(Module_six_moves_urllib_error, attr.name, attr) del attr Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), "moves.urllib_error", "moves.urllib.error") class Module_six_moves_urllib_request(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_request""" _urllib_request_moved_attributes = [ MovedAttribute("urlopen", "urllib2", "urllib.request"), MovedAttribute("install_opener", "urllib2", "urllib.request"), MovedAttribute("build_opener", "urllib2", "urllib.request"), MovedAttribute("pathname2url", "urllib", "urllib.request"), MovedAttribute("url2pathname", "urllib", "urllib.request"), MovedAttribute("getproxies", "urllib", "urllib.request"), MovedAttribute("Request", "urllib2", "urllib.request"), MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), MovedAttribute("BaseHandler", "urllib2", "urllib.request"), 
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), MovedAttribute("FileHandler", "urllib2", "urllib.request"), MovedAttribute("FTPHandler", "urllib2", "urllib.request"), MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), MovedAttribute("urlretrieve", "urllib", "urllib.request"), MovedAttribute("urlcleanup", "urllib", "urllib.request"), MovedAttribute("URLopener", "urllib", "urllib.request"), MovedAttribute("FancyURLopener", "urllib", "urllib.request"), MovedAttribute("proxy_bypass", "urllib", "urllib.request"), ] for attr in _urllib_request_moved_attributes: setattr(Module_six_moves_urllib_request, attr.name, attr) del attr Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), "moves.urllib_request", "moves.urllib.request") class Module_six_moves_urllib_response(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_response""" _urllib_response_moved_attributes = [ MovedAttribute("addbase", "urllib", "urllib.response"), MovedAttribute("addclosehook", "urllib", "urllib.response"), MovedAttribute("addinfo", "urllib", "urllib.response"), MovedAttribute("addinfourl", "urllib", "urllib.response"), ] for attr in _urllib_response_moved_attributes: setattr(Module_six_moves_urllib_response, attr.name, attr) del attr Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), "moves.urllib_response", "moves.urllib.response") class Module_six_moves_urllib_robotparser(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_robotparser""" _urllib_robotparser_moved_attributes = [ MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), ] for attr in _urllib_robotparser_moved_attributes: setattr(Module_six_moves_urllib_robotparser, attr.name, attr) del attr Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), "moves.urllib_robotparser", "moves.urllib.robotparser") class Module_six_moves_urllib(types.ModuleType): """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" __path__ = [] # mark as package parse = _importer._get_module("moves.urllib_parse") error = _importer._get_module("moves.urllib_error") request = _importer._get_module("moves.urllib_request") response = _importer._get_module("moves.urllib_response") robotparser = _importer._get_module("moves.urllib_robotparser") def __dir__(self): return ['parse', 'error', 'request', 'response', 'robotparser'] 
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib") def add_move(move): """Add an item to six.moves.""" setattr(_MovedItems, move.name, move) def remove_move(name): """Remove item from six.moves.""" try: delattr(_MovedItems, name) except AttributeError: try: del moves.__dict__[name] except KeyError: raise AttributeError("no such move, %r" % (name,)) if PY3: _meth_func = "__func__" _meth_self = "__self__" _func_closure = "__closure__" _func_code = "__code__" _func_defaults = "__defaults__" _func_globals = "__globals__" else: _meth_func = "im_func" _meth_self = "im_self" _func_closure = "func_closure" _func_code = "func_code" _func_defaults = "func_defaults" _func_globals = "func_globals" try: advance_iterator = next except NameError: def advance_iterator(it): return it.next() next = advance_iterator try: callable = callable except NameError: def callable(obj): return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) if PY3: def get_unbound_function(unbound): return unbound create_bound_method = types.MethodType def create_unbound_method(func, cls): return func Iterator = object else: def get_unbound_function(unbound): return unbound.im_func def create_bound_method(func, obj): return types.MethodType(func, obj, obj.__class__) def create_unbound_method(func, cls): return types.MethodType(func, None, cls) class Iterator(object): def next(self): return type(self).__next__(self) callable = callable _add_doc(get_unbound_function, """Get the function out of a possibly unbound function""") get_method_function = operator.attrgetter(_meth_func) get_method_self = operator.attrgetter(_meth_self) get_function_closure = operator.attrgetter(_func_closure) get_function_code = operator.attrgetter(_func_code) get_function_defaults = operator.attrgetter(_func_defaults) get_function_globals = operator.attrgetter(_func_globals) if PY3: def iterkeys(d, **kw): return iter(d.keys(**kw)) def itervalues(d, **kw): return iter(d.values(**kw)) def iteritems(d, **kw): return iter(d.items(**kw)) def iterlists(d, **kw): return iter(d.lists(**kw)) viewkeys = operator.methodcaller("keys") viewvalues = operator.methodcaller("values") viewitems = operator.methodcaller("items") else: def iterkeys(d, **kw): return d.iterkeys(**kw) def itervalues(d, **kw): return d.itervalues(**kw) def iteritems(d, **kw): return d.iteritems(**kw) def iterlists(d, **kw): return d.iterlists(**kw) viewkeys = operator.methodcaller("viewkeys") viewvalues = operator.methodcaller("viewvalues") viewitems = operator.methodcaller("viewitems") _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") _add_doc(itervalues, "Return an iterator over the values of a dictionary.") _add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") _add_doc(iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary.") if PY3: def b(s): return s.encode("latin-1") def u(s): return s unichr = chr import struct int2byte = struct.Struct(">B").pack del struct byte2int = operator.itemgetter(0) indexbytes = operator.getitem iterbytes = iter import io StringIO = io.StringIO BytesIO = io.BytesIO _assertCountEqual = "assertCountEqual" if sys.version_info[1] <= 1: _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" else: _assertRaisesRegex = "assertRaisesRegex" _assertRegex = "assertRegex" else: def b(s): return s # Workaround for standalone backslash def u(s): return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") unichr = unichr int2byte 
= chr def byte2int(bs): return ord(bs[0]) def indexbytes(buf, i): return ord(buf[i]) iterbytes = functools.partial(itertools.imap, ord) import StringIO StringIO = BytesIO = StringIO.StringIO _assertCountEqual = "assertItemsEqual" _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" _add_doc(b, """Byte literal""") _add_doc(u, """Text literal""") def assertCountEqual(self, *args, **kwargs): return getattr(self, _assertCountEqual)(*args, **kwargs) def assertRaisesRegex(self, *args, **kwargs): return getattr(self, _assertRaisesRegex)(*args, **kwargs) def assertRegex(self, *args, **kwargs): return getattr(self, _assertRegex)(*args, **kwargs) if PY3: exec_ = getattr(moves.builtins, "exec") def reraise(tp, value, tb=None): if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value else: def exec_(_code_, _globs_=None, _locs_=None): """Execute code in a namespace.""" if _globs_ is None: frame = sys._getframe(1) _globs_ = frame.f_globals if _locs_ is None: _locs_ = frame.f_locals del frame elif _locs_ is None: _locs_ = _globs_ exec("""exec _code_ in _globs_, _locs_""") exec_("""def reraise(tp, value, tb=None): raise tp, value, tb """) if sys.version_info[:2] == (3, 2): exec_("""def raise_from(value, from_value): if from_value is None: raise value raise value from from_value """) elif sys.version_info[:2] > (3, 2): exec_("""def raise_from(value, from_value): raise value from from_value """) else: def raise_from(value, from_value): raise value print_ = getattr(moves.builtins, "print", None) if print_ is None: def print_(*args, **kwargs): """The new-style print function for Python 2.4 and 2.5.""" fp = kwargs.pop("file", sys.stdout) if fp is None: return def write(data): if not isinstance(data, basestring): data = str(data) # If the file has an encoding, encode unicode with it. 
if (isinstance(fp, file) and isinstance(data, unicode) and fp.encoding is not None): errors = getattr(fp, "errors", None) if errors is None: errors = "strict" data = data.encode(fp.encoding, errors) fp.write(data) want_unicode = False sep = kwargs.pop("sep", None) if sep is not None: if isinstance(sep, unicode): want_unicode = True elif not isinstance(sep, str): raise TypeError("sep must be None or a string") end = kwargs.pop("end", None) if end is not None: if isinstance(end, unicode): want_unicode = True elif not isinstance(end, str): raise TypeError("end must be None or a string") if kwargs: raise TypeError("invalid keyword arguments to print()") if not want_unicode: for arg in args: if isinstance(arg, unicode): want_unicode = True break if want_unicode: newline = unicode("\n") space = unicode(" ") else: newline = "\n" space = " " if sep is None: sep = space if end is None: end = newline for i, arg in enumerate(args): if i: write(sep) write(arg) write(end) if sys.version_info[:2] < (3, 3): _print = print_ def print_(*args, **kwargs): fp = kwargs.get("file", sys.stdout) flush = kwargs.pop("flush", False) _print(*args, **kwargs) if flush and fp is not None: fp.flush() _add_doc(reraise, """Reraise an exception.""") if sys.version_info[0:2] < (3, 4): def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, updated=functools.WRAPPER_UPDATES): def wrapper(f): f = functools.wraps(wrapped, assigned, updated)(f) f.__wrapped__ = wrapped return f return wrapper else: wraps = functools.wraps def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(meta): def __new__(cls, name, this_bases, d): return meta(name, bases, d) return type.__new__(metaclass, 'temporary_class', (), {}) def add_metaclass(metaclass): """Class decorator for creating a class with a metaclass.""" def wrapper(cls): orig_vars = cls.__dict__.copy() slots = orig_vars.get('__slots__') if slots is not None: if isinstance(slots, str): slots = [slots] for slots_var in slots: orig_vars.pop(slots_var) orig_vars.pop('__dict__', None) orig_vars.pop('__weakref__', None) return metaclass(cls.__name__, cls.__bases__, orig_vars) return wrapper def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if PY2: if '__str__' not in klass.__dict__: raise ValueError("@python_2_unicode_compatible cannot be applied " "to %s because it doesn't define __str__()." % klass.__name__) klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode('utf-8') return klass # Complete the moves implementation. # This code is at the end of this module to speed up module loading. # Turn this module into a package. __path__ = [] # required for PEP 302 and PEP 451 __package__ = __name__ # see PEP 366 @ReservedAssignment if globals().get("__spec__") is not None: __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable # Remove other six meta path importers, since they cause problems. This can # happen if six is removed from sys.modules and then reloaded. (Setuptools does # this for some reason.) 
if sys.meta_path:
    for i, importer in enumerate(sys.meta_path):
        # Here's some real nastiness: Another "instance" of the six module might
        # be floating around. Therefore, we can't use isinstance() to check for
        # the six meta path importer, since the other six instance will have
        # inserted an importer with different class.
        if (type(importer).__name__ == "_SixMetaPathImporter" and
                importer.name == __name__):
            del sys.meta_path[i]
            break
    del i, importer

# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)


# ===== file: cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/archive_util.py =====

"""Utilities for extracting common archive formats"""

import zipfile
import tarfile
import os
import shutil
import posixpath
import contextlib
from distutils.errors import DistutilsError

from pkg_resources import ensure_directory

__all__ = [
    "unpack_archive", "unpack_zipfile", "unpack_tarfile", "default_filter",
    "UnrecognizedFormat", "extraction_drivers", "unpack_directory",
]


class UnrecognizedFormat(DistutilsError):
    """Couldn't recognize the archive type"""


def default_filter(src, dst):
    """The default progress/filter callback; returns True for all files"""
    return dst


def unpack_archive(filename, extract_dir, progress_filter=default_filter,
                   drivers=None):
    """Unpack `filename` to `extract_dir`, or raise ``UnrecognizedFormat``

    `progress_filter` is a function taking two arguments: a source path
    internal to the archive ('/'-separated), and a filesystem path where it
    will be extracted.  The callback must return the desired extract path
    (which may be the same as the one passed in), or else ``None`` to skip
    that file or directory.  The callback can thus be used to report on the
    progress of the extraction, as well as to filter the items extracted or
    alter their extraction paths.

    `drivers`, if supplied, must be a non-empty sequence of functions with the
    same signature as this function (minus the `drivers` argument), that raise
    ``UnrecognizedFormat`` if they do not support extracting the designated
    archive type.  The `drivers` are tried in sequence until one is found that
    does not raise an error, or until all are exhausted (in which case
    ``UnrecognizedFormat`` is raised).  If you do not supply a sequence of
    drivers, the module's ``extraction_drivers`` constant will be used, which
    means that ``unpack_zipfile`` and ``unpack_tarfile`` will be tried, in
    that order.
    """
    for driver in drivers or extraction_drivers:
        try:
            driver(filename, extract_dir, progress_filter)
        except UnrecognizedFormat:
            continue
        else:
            return
    else:
        raise UnrecognizedFormat(
            "Not a recognized archive type: %s" % filename
        )


def unpack_directory(filename, extract_dir, progress_filter=default_filter):
    """"Unpack" a directory, using the same interface as for archives

    Raises ``UnrecognizedFormat`` if `filename` is not a directory
    """
    if not os.path.isdir(filename):
        raise UnrecognizedFormat("%s is not a directory" % filename)

    paths = {
        filename: ('', extract_dir),
    }
    for base, dirs, files in os.walk(filename):
        src, dst = paths[base]
        for d in dirs:
            paths[os.path.join(base, d)] = src + d + '/', os.path.join(dst, d)
        for f in files:
            target = os.path.join(dst, f)
            target = progress_filter(src + f, target)
            if not target:
                # skip non-files
                continue
            ensure_directory(target)
            f = os.path.join(base, f)
            shutil.copyfile(f, target)
            shutil.copystat(f, target)


def unpack_zipfile(filename, extract_dir, progress_filter=default_filter):
    """Unpack zip `filename` to `extract_dir`

    Raises ``UnrecognizedFormat`` if `filename` is not a zipfile (as determined
    by ``zipfile.is_zipfile()``).  See ``unpack_archive()`` for an explanation
    of the `progress_filter` argument.
    """

    if not zipfile.is_zipfile(filename):
        raise UnrecognizedFormat("%s is not a zip file" % (filename,))

    with zipfile.ZipFile(filename) as z:
        for info in z.infolist():
            name = info.filename

            # don't extract absolute paths or ones with .. in them
            if name.startswith('/') or '..' in name.split('/'):
                continue

            target = os.path.join(extract_dir, *name.split('/'))
            target = progress_filter(name, target)
            if not target:
                continue
            if name.endswith('/'):
                # directory
                ensure_directory(target)
            else:
                # file
                ensure_directory(target)
                data = z.read(info.filename)
                with open(target, 'wb') as f:
                    f.write(data)
            unix_attributes = info.external_attr >> 16
            if unix_attributes:
                os.chmod(target, unix_attributes)
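# --- Editor's note: illustrative sketch below, not part of setuptools itself. ---
# unpack_archive() tries each driver until one accepts the file, and
# progress_filter can report on, skip (return None), or redirect each member.
# "demo.zip" and "out_dir" are placeholder names; the drivers list is passed
# explicitly here because only unpack_zipfile is defined at this point in
# the file.  Runs only when this module is executed directly:
if __name__ == "__main__":
    def _report_and_skip_pyc(src, dst):
        print("extracting", src)
        return None if src.endswith(".pyc") else dst

    unpack_archive("demo.zip", "out_dir",
                   progress_filter=_report_and_skip_pyc,
                   drivers=[unpack_zipfile])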
def unpack_tarfile(filename, extract_dir, progress_filter=default_filter):
    """Unpack tar/tar.gz/tar.bz2 `filename` to `extract_dir`

    Raises ``UnrecognizedFormat`` if `filename` is not a tarfile (as determined
    by ``tarfile.open()``).  See ``unpack_archive()`` for an explanation
    of the `progress_filter` argument.
    """
    try:
        tarobj = tarfile.open(filename)
    except tarfile.TarError:
        raise UnrecognizedFormat(
            "%s is not a compressed or uncompressed tar file" % (filename,)
        )
    with contextlib.closing(tarobj):
        # don't do any chowning!
        tarobj.chown = lambda *args: None
        for member in tarobj:
            name = member.name
            # don't extract absolute paths or ones with .. in them
            if not name.startswith('/') and '..' not in name.split('/'):
                prelim_dst = os.path.join(extract_dir, *name.split('/'))

                # resolve any links and extract the link targets as normal
                # files
                while member is not None and (member.islnk() or member.issym()):
                    linkpath = member.linkname
                    if member.issym():
                        base = posixpath.dirname(member.name)
                        linkpath = posixpath.join(base, linkpath)
                        linkpath = posixpath.normpath(linkpath)
                    member = tarobj._getmember(linkpath)

                if member is not None and (member.isfile() or member.isdir()):
                    final_dst = progress_filter(name, prelim_dst)
                    if final_dst:
                        if final_dst.endswith(os.sep):
                            final_dst = final_dst[:-1]
                        try:
                            # XXX Ugh
                            tarobj._extract_member(member, final_dst)
                        except tarfile.ExtractError:
                            # chown/chmod/mkfifo/mknode/makedev failed
                            pass
        return True


extraction_drivers = unpack_directory, unpack_zipfile, unpack_tarfile


# ===== file: cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/build_meta.py =====

"""A PEP 517 interface to setuptools

Previously, when a user or a command line tool (let's call it a "frontend")
needed to make a request of setuptools to take a certain action, for
example, generating a list of installation requirements, the frontend
would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line.

PEP 517 defines a different method of interfacing with setuptools. Rather
than calling "setup.py" directly, the frontend should:

  1. Set the current directory to the directory with a setup.py file
  2. Import this module into a safe python interpreter (one in which
     setuptools can potentially set global variables or crash hard).
  3. Call one of the functions defined in PEP 517.

What each function does is defined in PEP 517.
However, here is a "casual" definition of the functions (this definition should not be relied on for bug reports or API stability): - `build_wheel`: build a wheel in the folder and return the basename - `get_requires_for_build_wheel`: get the `setup_requires` to build - `prepare_metadata_for_build_wheel`: get the `install_requires` - `build_sdist`: build an sdist in the folder and return the basename - `get_requires_for_build_sdist`: get the `setup_requires` to build Again, this is not a formal definition! Just a "taste" of the module. """ import io import os import sys import tokenize import shutil import contextlib import setuptools import distutils from setuptools.py31compat import TemporaryDirectory from pkg_resources import parse_requirements __all__ = ['get_requires_for_build_sdist', 'get_requires_for_build_wheel', 'prepare_metadata_for_build_wheel', 'build_wheel', 'build_sdist', '__legacy__', 'SetupRequirementsError'] class SetupRequirementsError(BaseException): def __init__(self, specifiers): self.specifiers = specifiers class Distribution(setuptools.dist.Distribution): def fetch_build_eggs(self, specifiers): specifier_list = list(map(str, parse_requirements(specifiers))) raise SetupRequirementsError(specifier_list) @classmethod @contextlib.contextmanager def patch(cls): """ Replace distutils.dist.Distribution with this class for the duration of this context. """ orig = distutils.core.Distribution distutils.core.Distribution = cls try: yield finally: distutils.core.Distribution = orig def _to_str(s): """ Convert a filename to a string (on Python 2, explicitly a byte string, not Unicode) as distutils checks for the exact type str. """ if sys.version_info[0] == 2 and not isinstance(s, str): # Assume it's Unicode, as that's what the PEP says # should be provided. 
return s.encode(sys.getfilesystemencoding()) return s def _get_immediate_subdirectories(a_dir): return [name for name in os.listdir(a_dir) if os.path.isdir(os.path.join(a_dir, name))] def _file_with_extension(directory, extension): matching = ( f for f in os.listdir(directory) if f.endswith(extension) ) file, = matching return file def _open_setup_script(setup_script): if not os.path.exists(setup_script): # Supply a default setup.py return io.StringIO(u"from setuptools import setup; setup()") return getattr(tokenize, 'open', open)(setup_script) class _BuildMetaBackend(object): def _fix_config(self, config_settings): config_settings = config_settings or {} config_settings.setdefault('--global-option', []) return config_settings def _get_build_requires(self, config_settings, requirements): config_settings = self._fix_config(config_settings) sys.argv = sys.argv[:1] + ['egg_info'] + \ config_settings["--global-option"] try: with Distribution.patch(): self.run_setup() except SetupRequirementsError as e: requirements += e.specifiers return requirements def run_setup(self, setup_script='setup.py'): # Note that we can reuse our build directory between calls # Correctness comes first, then optimization later __file__ = setup_script __name__ = '__main__' with _open_setup_script(__file__) as f: code = f.read().replace(r'\r\n', r'\n') exec(compile(code, __file__, 'exec'), locals()) def get_requires_for_build_wheel(self, config_settings=None): config_settings = self._fix_config(config_settings) return self._get_build_requires(config_settings, requirements=['wheel']) def get_requires_for_build_sdist(self, config_settings=None): config_settings = self._fix_config(config_settings) return self._get_build_requires(config_settings, requirements=[]) def prepare_metadata_for_build_wheel(self, metadata_directory, config_settings=None): sys.argv = sys.argv[:1] + ['dist_info', '--egg-base', _to_str(metadata_directory)] self.run_setup() dist_info_directory = metadata_directory while True: dist_infos = [f for f in os.listdir(dist_info_directory) if f.endswith('.dist-info')] if (len(dist_infos) == 0 and len(_get_immediate_subdirectories(dist_info_directory)) == 1): dist_info_directory = os.path.join( dist_info_directory, os.listdir(dist_info_directory)[0]) continue assert len(dist_infos) == 1 break # PEP 517 requires that the .dist-info directory be placed in the # metadata_directory. 
To comply, we MUST copy the directory to the root if dist_info_directory != metadata_directory: shutil.move( os.path.join(dist_info_directory, dist_infos[0]), metadata_directory) shutil.rmtree(dist_info_directory, ignore_errors=True) return dist_infos[0] def build_wheel(self, wheel_directory, config_settings=None, metadata_directory=None): config_settings = self._fix_config(config_settings) wheel_directory = os.path.abspath(wheel_directory) # Build the wheel in a temporary directory, then copy to the target with TemporaryDirectory(dir=wheel_directory) as tmp_dist_dir: sys.argv = (sys.argv[:1] + ['bdist_wheel', '--dist-dir', tmp_dist_dir] + config_settings["--global-option"]) self.run_setup() wheel_basename = _file_with_extension(tmp_dist_dir, '.whl') wheel_path = os.path.join(wheel_directory, wheel_basename) if os.path.exists(wheel_path): # os.rename will fail overwriting on non-unix env os.remove(wheel_path) os.rename(os.path.join(tmp_dist_dir, wheel_basename), wheel_path) return wheel_basename def build_sdist(self, sdist_directory, config_settings=None): config_settings = self._fix_config(config_settings) sdist_directory = os.path.abspath(sdist_directory) sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \ config_settings["--global-option"] + \ ["--dist-dir", sdist_directory] self.run_setup() return _file_with_extension(sdist_directory, '.tar.gz') class _BuildMetaLegacyBackend(_BuildMetaBackend): """Compatibility backend for setuptools This is a version of setuptools.build_meta that endeavors to maintain backwards compatibility with pre-PEP 517 modes of invocation. It exists as a temporary bridge between the old packaging mechanism and the new packaging mechanism, and will eventually be removed. """ def run_setup(self, setup_script='setup.py'): # In order to maintain compatibility with scripts assuming that # the setup.py script is in a directory on the PYTHONPATH, inject # '' into sys.path. (pypa/setuptools#1642) sys_path = list(sys.path) # Save the original path script_dir = os.path.dirname(os.path.abspath(setup_script)) if script_dir not in sys.path: sys.path.insert(0, script_dir) try: super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script) finally: # While PEP 517 frontends should be calling each hook in a fresh # subprocess according to the standard (and thus it should not be # strictly necessary to restore the old sys.path), we'll restore # the original path so that the path manipulation does not persist # within the hook after run_setup is called. 
            sys.path[:] = sys_path


# The primary backend
_BACKEND = _BuildMetaBackend()

get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel
get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist
prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel
build_wheel = _BACKEND.build_wheel
build_sdist = _BACKEND.build_sdist

# The legacy backend
__legacy__ = _BuildMetaLegacyBackend()
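# --- Editor's note: illustrative sketch below, not part of setuptools itself. ---
# A PEP 517 frontend uses the module-level hooks bound above; the project
# path is a placeholder.  Runs only when this module is executed directly:
if __name__ == "__main__":
    os.chdir("path/to/project")                          # directory containing setup.py
    print("requires:", get_requires_for_build_wheel())   # e.g. ['wheel']
    print("built:", build_wheel("dist"))                 # returns the wheel's basename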
# ===== file: cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/__init__.py =====

__all__ = [
    'alias', 'bdist_egg', 'bdist_rpm', 'build_ext', 'build_py', 'develop',
    'easy_install', 'egg_info', 'install', 'install_lib', 'rotate', 'saveopts',
    'sdist', 'setopt', 'test', 'install_egg_info', 'install_scripts',
    'register', 'bdist_wininst', 'upload_docs', 'upload', 'build_clib',
    'dist_info',
]

from distutils.command.bdist import bdist
import sys

from setuptools.command import install_scripts

if 'egg' not in bdist.format_commands:
    bdist.format_command['egg'] = ('bdist_egg', "Python .egg file")
    bdist.format_commands.append('egg')

del bdist, sys
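# --- Editor's note: illustrative sketch below, not part of setuptools itself. ---
# The registration above teaches distutils' bdist command the 'egg' format,
# so "python setup.py bdist --formats=egg" dispatches to bdist_egg.  Runs
# only when this module is executed directly:
if __name__ == "__main__":
    from distutils.command.bdist import bdist
    print(bdist.format_command['egg'])  # -> ('bdist_egg', 'Python .egg file')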
# ===== file: cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/alias.py =====

from distutils.errors import DistutilsOptionError

from setuptools.extern.six.moves import map
from setuptools.command.setopt import edit_config, option_base, config_file


def shquote(arg):
    """Quote an argument for later parsing by shlex.split()"""
    for c in '"', "'", "\\", "#":
        if c in arg:
            return repr(arg)
    if arg.split() != [arg]:
        return repr(arg)
    return arg


class alias(option_base):
    """Define a shortcut that invokes one or more commands"""

    description = "define a shortcut to invoke one or more commands"
    command_consumes_arguments = True

    user_options = [
        ('remove', 'r', 'remove (unset) the alias'),
    ] + option_base.user_options

    boolean_options = option_base.boolean_options + ['remove']

    def initialize_options(self):
        option_base.initialize_options(self)
        self.args = None
        self.remove = None

    def finalize_options(self):
        option_base.finalize_options(self)
        if self.remove and len(self.args) != 1:
            raise DistutilsOptionError(
                "Must specify exactly one argument (the alias name) when "
                "using --remove"
            )

    def run(self):
        aliases = self.distribution.get_option_dict('aliases')

        if not self.args:
            print("Command Aliases")
            print("---------------")
            for alias in aliases:
                print("setup.py alias", format_alias(alias, aliases))
            return

        elif len(self.args) == 1:
            alias, = self.args
            if self.remove:
                command = None
            elif alias in aliases:
                print("setup.py alias", format_alias(alias, aliases))
                return
            else:
                print("No alias definition found for %r" % alias)
                return
        else:
            alias = self.args[0]
            command = ' '.join(map(shquote, self.args[1:]))

        edit_config(self.filename, {'aliases': {alias: command}}, self.dry_run)


def format_alias(name, aliases):
    source, command = aliases[name]
    if source == config_file('global'):
        source = '--global-config '
    elif source == config_file('user'):
        source = '--user-config '
    elif source == config_file('local'):
        source = ''
    else:
        source = '--filename=%r' % source
    return source + name + ' ' + command


# ===== file: cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/bdist_egg.py =====

"""setuptools.command.bdist_egg

Build .egg distributions"""

from distutils.errors import DistutilsSetupError
from distutils.dir_util import remove_tree, mkpath
from distutils import log
from types import CodeType
import sys
import os
import re
import textwrap
import marshal

from setuptools.extern import six

from pkg_resources import get_build_platform, Distribution, ensure_directory
from pkg_resources import EntryPoint
from setuptools.extension import Library
from setuptools import Command

try:
    # Python 2.7 or >=3.2
    from sysconfig import get_path, get_python_version

    def _get_purelib():
        return get_path("purelib")
except ImportError: from distutils.sysconfig import get_python_lib, get_python_version def _get_purelib(): return get_python_lib(False) def strip_module(filename): if '.' in filename: filename = os.path.splitext(filename)[0] if filename.endswith('module'): filename = filename[:-6] return filename def sorted_walk(dir): """Do os.walk in a reproducible way, independent of indeterministic filesystem readdir order """ for base, dirs, files in os.walk(dir): dirs.sort() files.sort() yield base, dirs, files def write_stub(resource, pyfile): _stub_template = textwrap.dedent(""" def __bootstrap__(): global __bootstrap__, __loader__, __file__ import sys, pkg_resources, imp __file__ = pkg_resources.resource_filename(__name__, %r) __loader__ = None; del __bootstrap__, __loader__ imp.load_dynamic(__name__,__file__) __bootstrap__() """).lstrip() with open(pyfile, 'w') as f: f.write(_stub_template % resource) class bdist_egg(Command): description = "create an \"egg\" distribution" user_options = [ ('bdist-dir=', 'b', "temporary directory for creating the distribution"), ('plat-name=', 'p', "platform name to embed in generated filenames " "(default: %s)" % get_build_platform()), ('exclude-source-files', None, "remove all .py files from the generated egg"), ('keep-temp', 'k', "keep the pseudo-installation tree around after " + "creating the distribution archive"), ('dist-dir=', 'd', "directory to put final built distributions in"), ('skip-build', None, "skip rebuilding everything (for testing/debugging)"), ] boolean_options = [ 'keep-temp', 'skip-build', 'exclude-source-files' ] def initialize_options(self): self.bdist_dir = None self.plat_name = None self.keep_temp = 0 self.dist_dir = None self.skip_build = 0 self.egg_output = None self.exclude_source_files = None def finalize_options(self): ei_cmd = self.ei_cmd = self.get_finalized_command("egg_info") self.egg_info = ei_cmd.egg_info if self.bdist_dir is None: bdist_base = self.get_finalized_command('bdist').bdist_base self.bdist_dir = os.path.join(bdist_base, 'egg') if self.plat_name is None: self.plat_name = get_build_platform() self.set_undefined_options('bdist', ('dist_dir', 'dist_dir')) if self.egg_output is None: # Compute filename of the output egg basename = Distribution( None, None, ei_cmd.egg_name, ei_cmd.egg_version, get_python_version(), self.distribution.has_ext_modules() and self.plat_name ).egg_name() self.egg_output = os.path.join(self.dist_dir, basename + '.egg') def do_install_data(self): # Hack for packages that install data to install's --install-lib self.get_finalized_command('install').install_lib = self.bdist_dir site_packages = os.path.normcase(os.path.realpath(_get_purelib())) old, self.distribution.data_files = self.distribution.data_files, [] for item in old: if isinstance(item, tuple) and len(item) == 2: if os.path.isabs(item[0]): realpath = os.path.realpath(item[0]) normalized = os.path.normcase(realpath) if normalized == site_packages or normalized.startswith( site_packages + os.sep ): item = realpath[len(site_packages) + 1:], item[1] # XXX else: raise ??? 
self.distribution.data_files.append(item) try: log.info("installing package data to %s", self.bdist_dir) self.call_command('install_data', force=0, root=None) finally: self.distribution.data_files = old def get_outputs(self): return [self.egg_output] def call_command(self, cmdname, **kw): """Invoke reinitialized command `cmdname` with keyword args""" for dirname in INSTALL_DIRECTORY_ATTRS: kw.setdefault(dirname, self.bdist_dir) kw.setdefault('skip_build', self.skip_build) kw.setdefault('dry_run', self.dry_run) cmd = self.reinitialize_command(cmdname, **kw) self.run_command(cmdname) return cmd def run(self): # Generate metadata first self.run_command("egg_info") # We run install_lib before install_data, because some data hacks # pull their data path from the install_lib command. log.info("installing library code to %s", self.bdist_dir) instcmd = self.get_finalized_command('install') old_root = instcmd.root instcmd.root = None if self.distribution.has_c_libraries() and not self.skip_build: self.run_command('build_clib') cmd = self.call_command('install_lib', warn_dir=0) instcmd.root = old_root all_outputs, ext_outputs = self.get_ext_outputs() self.stubs = [] to_compile = [] for (p, ext_name) in enumerate(ext_outputs): filename, ext = os.path.splitext(ext_name) pyfile = os.path.join(self.bdist_dir, strip_module(filename) + '.py') self.stubs.append(pyfile) log.info("creating stub loader for %s", ext_name) if not self.dry_run: write_stub(os.path.basename(ext_name), pyfile) to_compile.append(pyfile) ext_outputs[p] = ext_name.replace(os.sep, '/') if to_compile: cmd.byte_compile(to_compile) if self.distribution.data_files: self.do_install_data() # Make the EGG-INFO directory archive_root = self.bdist_dir egg_info = os.path.join(archive_root, 'EGG-INFO') self.mkpath(egg_info) if self.distribution.scripts: script_dir = os.path.join(egg_info, 'scripts') log.info("installing scripts to %s", script_dir) self.call_command('install_scripts', install_dir=script_dir, no_ep=1) self.copy_metadata_to(egg_info) native_libs = os.path.join(egg_info, "native_libs.txt") if all_outputs: log.info("writing %s", native_libs) if not self.dry_run: ensure_directory(native_libs) libs_file = open(native_libs, 'wt') libs_file.write('\n'.join(all_outputs)) libs_file.write('\n') libs_file.close() elif os.path.isfile(native_libs): log.info("removing %s", native_libs) if not self.dry_run: os.unlink(native_libs) write_safety_flag( os.path.join(archive_root, 'EGG-INFO'), self.zip_safe() ) if os.path.exists(os.path.join(self.egg_info, 'depends.txt')): log.warn( "WARNING: 'depends.txt' will not be used by setuptools 0.6!\n" "Use the install_requires/extras_require setup() args instead." 
) if self.exclude_source_files: self.zap_pyfiles() # Make the archive make_zipfile(self.egg_output, archive_root, verbose=self.verbose, dry_run=self.dry_run, mode=self.gen_header()) if not self.keep_temp: remove_tree(self.bdist_dir, dry_run=self.dry_run) # Add to 'Distribution.dist_files' so that the "upload" command works getattr(self.distribution, 'dist_files', []).append( ('bdist_egg', get_python_version(), self.egg_output)) def zap_pyfiles(self): log.info("Removing .py files from temporary directory") for base, dirs, files in walk_egg(self.bdist_dir): for name in files: path = os.path.join(base, name) if name.endswith('.py'): log.debug("Deleting %s", path) os.unlink(path) if base.endswith('__pycache__'): path_old = path pattern = r'(?P<name>.+)\.(?P<magic>[^.]+)\.pyc' m = re.match(pattern, name) path_new = os.path.join( base, os.pardir, m.group('name') + '.pyc') log.info( "Renaming file from [%s] to [%s]" % (path_old, path_new)) try: os.remove(path_new) except OSError: pass os.rename(path_old, path_new) def zip_safe(self): safe = getattr(self.distribution, 'zip_safe', None) if safe is not None: return safe log.warn("zip_safe flag not set; analyzing archive contents...") return analyze_egg(self.bdist_dir, self.stubs) def gen_header(self): epm = EntryPoint.parse_map(self.distribution.entry_points or '') ep = epm.get('setuptools.installation', {}).get('eggsecutable') if ep is None: return 'w' # not an eggsecutable, do it the usual way. if not ep.attrs or ep.extras: raise DistutilsSetupError( "eggsecutable entry point (%r) cannot have 'extras' " "or refer to a module" % (ep,) ) pyver = sys.version[:3] pkg = ep.module_name full = '.'.join(ep.attrs) base = ep.attrs[0] basename = os.path.basename(self.egg_output) header = ( "#!/bin/sh\n" 'if [ `basename $0` = "%(basename)s" ]\n' 'then exec python%(pyver)s -c "' "import sys, os; sys.path.insert(0, os.path.abspath('$0')); " "from %(pkg)s import %(base)s; sys.exit(%(full)s())" '" "$@"\n' 'else\n' ' echo $0 is not the correct name for this egg file.\n' ' echo Please rename it back to %(basename)s and try again.\n' ' exec false\n' 'fi\n' ) % locals() if not self.dry_run: mkpath(os.path.dirname(self.egg_output), dry_run=self.dry_run) f = open(self.egg_output, 'w') f.write(header) f.close() return 'a' def copy_metadata_to(self, target_dir): "Copy metadata (egg info) to the target_dir" # normalize the path (so that a forward-slash in egg_info will # match using startswith below) norm_egg_info = os.path.normpath(self.egg_info) prefix = os.path.join(norm_egg_info, '') for path in self.ei_cmd.filelist.files: if path.startswith(prefix): target = os.path.join(target_dir, path[len(prefix):]) ensure_directory(target) self.copy_file(path, target) def get_ext_outputs(self): """Get a list of relative paths to C extensions in the output distro""" all_outputs = [] ext_outputs = [] paths = {self.bdist_dir: ''} for base, dirs, files in sorted_walk(self.bdist_dir): for filename in files: if os.path.splitext(filename)[1].lower() in NATIVE_EXTENSIONS: all_outputs.append(paths[base] + filename) for filename in dirs: paths[os.path.join(base, filename)] = (paths[base] + filename + '/') if self.distribution.has_ext_modules(): build_cmd = self.get_finalized_command('build_ext') for ext in build_cmd.extensions: if isinstance(ext, Library): continue fullname = build_cmd.get_ext_fullname(ext.name) filename = build_cmd.get_ext_filename(fullname) if not os.path.basename(filename).startswith('dl-'): if os.path.exists(os.path.join(self.bdist_dir, filename)): 
                        ext_outputs.append(filename)

        return all_outputs, ext_outputs


NATIVE_EXTENSIONS = dict.fromkeys('.dll .so .dylib .pyd'.split())


def walk_egg(egg_dir):
    """Walk an unpacked egg's contents, skipping the metadata directory"""
    walker = sorted_walk(egg_dir)
    base, dirs, files = next(walker)
    if 'EGG-INFO' in dirs:
        dirs.remove('EGG-INFO')
    yield base, dirs, files
    for bdf in walker:
        yield bdf


def analyze_egg(egg_dir, stubs):
    # check for existing flag in EGG-INFO
    for flag, fn in safety_flags.items():
        if os.path.exists(os.path.join(egg_dir, 'EGG-INFO', fn)):
            return flag
    if not can_scan():
        return False
    safe = True
    for base, dirs, files in walk_egg(egg_dir):
        for name in files:
            if name.endswith('.py') or name.endswith('.pyw'):
                continue
            elif name.endswith('.pyc') or name.endswith('.pyo'):
                # always scan, even if we already know we're not safe
                safe = scan_module(egg_dir, base, name, stubs) and safe
    return safe


def write_safety_flag(egg_dir, safe):
    # Write or remove zip safety flag file(s)
    for flag, fn in safety_flags.items():
        fn = os.path.join(egg_dir, fn)
        if os.path.exists(fn):
            if safe is None or bool(safe) != flag:
                os.unlink(fn)
        elif safe is not None and bool(safe) == flag:
            f = open(fn, 'wt')
            f.write('\n')
            f.close()


safety_flags = {
    True: 'zip-safe',
    False: 'not-zip-safe',
}


def scan_module(egg_dir, base, name, stubs):
    """Check whether module possibly uses unsafe-for-zipfile stuff"""
    filename = os.path.join(base, name)
    if filename[:-1] in stubs:
        return True  # Extension module
    pkg = base[len(egg_dir) + 1:].replace(os.sep, '.')
    module = pkg + (pkg and '.' or '') + os.path.splitext(name)[0]
    if six.PY2:
        skip = 8  # skip magic & date
    elif sys.version_info < (3, 7):
        skip = 12  # skip magic & date & file size
    else:
        skip = 16  # skip magic & reserved? & date & file size
    f = open(filename, 'rb')
    f.read(skip)
    code = marshal.load(f)
    f.close()
    safe = True
    symbols = dict.fromkeys(iter_symbols(code))
    for bad in ['__file__', '__path__']:
        if bad in symbols:
            log.warn("%s: module references %s", module, bad)
            safe = False
    if 'inspect' in symbols:
        for bad in [
            'getsource', 'getabsfile', 'getsourcefile', 'getfile',
            'getsourcelines', 'findsource', 'getcomments', 'getframeinfo',
            'getinnerframes', 'getouterframes', 'stack', 'trace'
        ]:
            if bad in symbols:
                log.warn("%s: module MAY be using inspect.%s", module, bad)
                safe = False
    return safe


def iter_symbols(code):
    """Yield names and strings used by `code` and its nested code objects"""
    for name in code.co_names:
        yield name
    for const in code.co_consts:
        if isinstance(const, six.string_types):
            yield const
        elif isinstance(const, CodeType):
            for name in iter_symbols(const):
                yield name


def can_scan():
    if not sys.platform.startswith('java') and sys.platform != 'cli':
        # CPython, PyPy, etc.
        return True
    log.warn("Unable to analyze compiled code on this platform.")
    log.warn("Please ask the author to include a 'zip_safe'"
             " setting (either True or False) in the package's setup.py")


# Attribute names of options for commands that might need to be convinced to
# install to the egg build directory
INSTALL_DIRECTORY_ATTRS = [
    'install_lib', 'install_dir', 'install_data', 'install_base'
]


def make_zipfile(zip_filename, base_dir, verbose=0, dry_run=0, compress=True,
                 mode='w'):
    """Create a zip file from all the files under 'base_dir'.  The output
    zip file will be named 'base_dir' + ".zip".  Uses either the "zipfile"
    Python module (if available) or the InfoZIP "zip" utility (if installed
    and found on the default search path).  If neither tool is available,
    raises DistutilsExecError.  Returns the name of the output zip file.
    """
    import zipfile

    mkpath(os.path.dirname(zip_filename), dry_run=dry_run)
    log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir)

    def visit(z, dirname, names):
        for name in names:
            path = os.path.normpath(os.path.join(dirname, name))
            if os.path.isfile(path):
                p = path[len(base_dir) + 1:]
                if not dry_run:
                    z.write(path, p)
                log.debug("adding '%s'", p)

    compression = zipfile.ZIP_DEFLATED if compress else zipfile.ZIP_STORED
    if not dry_run:
        z = zipfile.ZipFile(zip_filename, mode, compression=compression)
        for dirname, dirs, files in sorted_walk(base_dir):
            visit(z, dirname, files)
        z.close()
    else:
        for dirname, dirs, files in sorted_walk(base_dir):
            visit(None, dirname, files)
    return zip_filename


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/bdist_rpm.py

import distutils.command.bdist_rpm as orig


class bdist_rpm(orig.bdist_rpm):
    """
    Override the default bdist_rpm behavior to do the following:

    1. Run egg_info to ensure the name and version are properly calculated.
    2. Always run 'install' using --single-version-externally-managed
       to disable eggs in RPM distributions.
    3. Replace dash with underscore in the version numbers for better RPM
       compatibility.
    """

    def run(self):
        # ensure distro name is up-to-date
        self.run_command('egg_info')

        orig.bdist_rpm.run(self)

    def _make_spec_file(self):
        version = self.distribution.get_version()
        rpmversion = version.replace('-', '_')
        spec = orig.bdist_rpm._make_spec_file(self)
        line23 = '%define version ' + version
        line24 = '%define version ' + rpmversion
        spec = [
            line.replace(
                "Source0: %{name}-%{version}.tar",
                "Source0: %{name}-%{unmangled_version}.tar"
            ).replace(
                "setup.py install ",
                "setup.py install --single-version-externally-managed "
            ).replace(
                "%setup",
                "%setup -n %{name}-%{unmangled_version}"
            ).replace(line23, line24)
            for line in spec
        ]
        insert_loc = spec.index(line24) + 1
        unmangled_version = "%define unmangled_version " + version
        spec.insert(insert_loc, unmangled_version)
        return spec
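# A minimal standalone sketch (not part of the vendored setuptools sources)
# of the symbol walk that scan_module()/iter_symbols() above perform when
# deciding zip-safety; string constants from co_consts are omitted for
# brevity, and the compiled demo module text is made up.
code = compile(
    "import inspect\ndef f():\n    return inspect.getsource(f)\n",
    "<demo>", "exec")
symbols = set()
todo = [code]
while todo:
    c = todo.pop()
    symbols.update(c.co_names)       # names referenced by the bytecode
    for const in c.co_consts:        # recurse into nested code objects
        if isinstance(const, type(c)):
            todo.append(const)
print('inspect' in symbols and 'getsource' in symbols)  # True -> flagged unsafe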
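# A standalone illustration (not vendored code) of the dash-to-underscore
# version mangling that _make_spec_file() above applies to the generated
# RPM spec; the version string is hypothetical.
version = '2.1.2-beta1'
rpmversion = version.replace('-', '_')
spec = ['%define version ' + version]
spec = [line.replace('%define version ' + version,
                     '%define version ' + rpmversion) for line in spec]
spec.insert(spec.index('%define version ' + rpmversion) + 1,
            '%define unmangled_version ' + version)
print(spec)
# ['%define version 2.1.2_beta1', '%define unmangled_version 2.1.2-beta1']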
""" def run(self): # ensure distro name is up-to-date self.run_command('egg_info') orig.bdist_rpm.run(self) def _make_spec_file(self): version = self.distribution.get_version() rpmversion = version.replace('-', '_') spec = orig.bdist_rpm._make_spec_file(self) line23 = '%define version ' + version line24 = '%define version ' + rpmversion spec = [ line.replace( "Source0: %{name}-%{version}.tar", "Source0: %{name}-%{unmangled_version}.tar" ).replace( "setup.py install ", "setup.py install --single-version-externally-managed " ).replace( "%setup", "%setup -n %{name}-%{unmangled_version}" ).replace(line23, line24) for line in spec ] insert_loc = spec.index(line24) + 1 unmangled_version = "%define unmangled_version " + version spec.insert(insert_loc, unmangled_version) return spec ����������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/bdist_wininst.py������������������0000644�0001750�0001750�00000001175�00000000000�031336� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������import distutils.command.bdist_wininst as orig class bdist_wininst(orig.bdist_wininst): def reinitialize_command(self, command, reinit_subcommands=0): """ Supplement reinitialize_command to work around http://bugs.python.org/issue20819 """ cmd = self.distribution.reinitialize_command( command, reinit_subcommands) if command in ('install', 'install_lib'): cmd.install_lib = None return cmd def run(self): self._is_running = True try: orig.bdist_wininst.run(self) finally: self._is_running = False ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/build_clib.py

import distutils.command.build_clib as orig
from distutils.errors import DistutilsSetupError
from distutils import log
from setuptools.dep_util import newer_pairwise_group


class build_clib(orig.build_clib):
    """
    Override the default build_clib behaviour to do the following:

    1. Implement a rudimentary timestamp-based dependency system
       so 'compile()' doesn't run every time.
    2. Add more keys to the 'build_info' dictionary:
        * obj_deps - specify dependencies for each object compiled.
          This should be a dictionary mapping a key with the source
          filename to a list of dependencies.  Use an empty string for
          global dependencies.
        * cflags - specify a list of additional flags to pass to
          the compiler.
    """

    def build_libraries(self, libraries):
        for (lib_name, build_info) in libraries:
            sources = build_info.get('sources')
            if sources is None or not isinstance(sources, (list, tuple)):
                raise DistutilsSetupError(
                    "in 'libraries' option (library '%s'), "
                    "'sources' must be present and must be "
                    "a list of source filenames" % lib_name)
            sources = list(sources)

            log.info("building '%s' library", lib_name)

            # Make sure everything is the correct type.
            # obj_deps should be a dictionary of keys as sources
            # and a list/tuple of files that are its dependencies.
            obj_deps = build_info.get('obj_deps', dict())
            if not isinstance(obj_deps, dict):
                raise DistutilsSetupError(
                    "in 'libraries' option (library '%s'), "
                    "'obj_deps' must be a dictionary of "
                    "type 'source: list'" % lib_name)
            dependencies = []

            # Get the global dependencies that are specified by the '' key.
            # These will go into every source's dependency list.
            global_deps = obj_deps.get('', list())
            if not isinstance(global_deps, (list, tuple)):
                raise DistutilsSetupError(
                    "in 'libraries' option (library '%s'), "
                    "'obj_deps' must be a dictionary of "
                    "type 'source: list'" % lib_name)

            # Build the list to be used by newer_pairwise_group;
            # each source will be auto-added to its dependencies.
            for source in sources:
                src_deps = [source]
                src_deps.extend(global_deps)
                extra_deps = obj_deps.get(source, list())
                if not isinstance(extra_deps, (list, tuple)):
                    raise DistutilsSetupError(
                        "in 'libraries' option (library '%s'), "
                        "'obj_deps' must be a dictionary of "
                        "type 'source: list'" % lib_name)
                src_deps.extend(extra_deps)
                dependencies.append(src_deps)

            expected_objects = self.compiler.object_filenames(
                sources,
                output_dir=self.build_temp
            )

            if newer_pairwise_group(dependencies, expected_objects) != ([], []):
                # First, compile the source code to object files in the
                # library directory.  (This should probably change to putting
                # object files in a temporary build directory.)
                macros = build_info.get('macros')
                include_dirs = build_info.get('include_dirs')
                cflags = build_info.get('cflags')
                objects = self.compiler.compile(
                    sources,
                    output_dir=self.build_temp,
                    macros=macros,
                    include_dirs=include_dirs,
                    extra_postargs=cflags,
                    debug=self.debug
                )

            # Now "link" the object files together into a static library.
            # (On Unix at least, this isn't really linking -- it just
            # builds an archive.  Whatever.)
            self.compiler.create_static_lib(
                expected_objects,
                lib_name,
                output_dir=self.build_clib,
                debug=self.debug
            )
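# A hypothetical 'libraries' entry (not vendored code) using the extra
# 'obj_deps' and 'cflags' build_info keys documented above; all file
# names are invented.
libraries = [
    ('mylib', {
        'sources': ['src/a.c', 'src/b.c'],
        'obj_deps': {
            '': ['src/common.h'],     # global dependency for every object
            'src/a.c': ['src/a.h'],   # extra dependency for a.c only
        },
        'cflags': ['-O2', '-fPIC'],   # passed through as extra_postargs
    }),
]
# setup(..., libraries=libraries) would hand this list to build_libraries().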
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/build_ext.py

import os
import sys
import itertools
import imp
from distutils.command.build_ext import build_ext as _du_build_ext
from distutils.file_util import copy_file
from distutils.ccompiler import new_compiler
from distutils.sysconfig import customize_compiler, get_config_var
from distutils.errors import DistutilsError
from distutils import log

from setuptools.extension import Library
from setuptools.extern import six

try:
    # Attempt to use Cython for building extensions, if available
    from Cython.Distutils.build_ext import build_ext as _build_ext
    # Additionally, assert that the compiler module will load
    # also. Ref #1229.
    __import__('Cython.Compiler.Main')
except ImportError:
    _build_ext = _du_build_ext

# make sure _config_vars is initialized
get_config_var("LDSHARED")
from distutils.sysconfig import _config_vars as _CONFIG_VARS


def _customize_compiler_for_shlib(compiler):
    if sys.platform == "darwin":
        # building .dylib requires additional compiler flags on OSX; here we
        # temporarily substitute the pyconfig.h variables so that distutils'
        # 'customize_compiler' uses them before we build the shared libraries.
        tmp = _CONFIG_VARS.copy()
        try:
            # XXX Help!  I don't have any idea whether these are right...
            _CONFIG_VARS['LDSHARED'] = (
                "gcc -Wl,-x -dynamiclib -undefined dynamic_lookup")
            _CONFIG_VARS['CCSHARED'] = " -dynamiclib"
            _CONFIG_VARS['SO'] = ".dylib"
            customize_compiler(compiler)
        finally:
            _CONFIG_VARS.clear()
            _CONFIG_VARS.update(tmp)
    else:
        customize_compiler(compiler)


have_rtld = False
use_stubs = False
libtype = 'shared'

if sys.platform == "darwin":
    use_stubs = True
elif os.name != 'nt':
    try:
        import dl
        use_stubs = have_rtld = hasattr(dl, 'RTLD_NOW')
    except ImportError:
        pass

if_dl = lambda s: s if have_rtld else ''


def get_abi3_suffix():
    """Return the file extension for an abi3-compliant Extension()"""
    for suffix, _, _ in (s for s in imp.get_suffixes()
                         if s[2] == imp.C_EXTENSION):
        if '.abi3' in suffix:  # Unix
            return suffix
        elif suffix == '.pyd':  # Windows
            return suffix


class build_ext(_build_ext):
    def run(self):
        """Build extensions in build directory, then copy if --inplace"""
        old_inplace, self.inplace = self.inplace, 0
        _build_ext.run(self)
        self.inplace = old_inplace
        if old_inplace:
            self.copy_extensions_to_source()

    def copy_extensions_to_source(self):
        build_py = self.get_finalized_command('build_py')
        for ext in self.extensions:
            fullname = self.get_ext_fullname(ext.name)
            filename = self.get_ext_filename(fullname)
            modpath = fullname.split('.')
            package = '.'.join(modpath[:-1])
            package_dir = build_py.get_package_dir(package)
            dest_filename = os.path.join(package_dir,
                                         os.path.basename(filename))
            src_filename = os.path.join(self.build_lib, filename)

            # Always copy, even if source is older than destination, to ensure
            # that the right extensions for the current Python/platform are
            # used.
            copy_file(
                src_filename, dest_filename, verbose=self.verbose,
                dry_run=self.dry_run
            )
            if ext._needs_stub:
                self.write_stub(package_dir or os.curdir, ext, True)

    def get_ext_filename(self, fullname):
        filename = _build_ext.get_ext_filename(self, fullname)
        if fullname in self.ext_map:
            ext = self.ext_map[fullname]
            use_abi3 = (
                six.PY3
                and getattr(ext, 'py_limited_api')
                and get_abi3_suffix()
            )
            if use_abi3:
                so_ext = get_config_var('EXT_SUFFIX')
                filename = filename[:-len(so_ext)]
                filename = filename + get_abi3_suffix()
            if isinstance(ext, Library):
                fn, ext = os.path.splitext(filename)
                return self.shlib_compiler.library_filename(fn, libtype)
            elif use_stubs and ext._links_to_dynamic:
                d, fn = os.path.split(filename)
                return os.path.join(d, 'dl-' + fn)
        return filename

    def initialize_options(self):
        _build_ext.initialize_options(self)
        self.shlib_compiler = None
        self.shlibs = []
        self.ext_map = {}

    def finalize_options(self):
        _build_ext.finalize_options(self)
        self.extensions = self.extensions or []
        self.check_extensions_list(self.extensions)
        self.shlibs = [ext for ext in self.extensions
                       if isinstance(ext, Library)]
        if self.shlibs:
            self.setup_shlib_compiler()
        for ext in self.extensions:
            ext._full_name = self.get_ext_fullname(ext.name)
        for ext in self.extensions:
            fullname = ext._full_name
            self.ext_map[fullname] = ext

            # distutils 3.1 will also ask for module names
            # XXX what to do with conflicts?
            self.ext_map[fullname.split('.')[-1]] = ext

            ltd = self.shlibs and self.links_to_dynamic(ext) or False
            ns = ltd and use_stubs and not isinstance(ext, Library)
            ext._links_to_dynamic = ltd
            ext._needs_stub = ns
            filename = ext._file_name = self.get_ext_filename(fullname)
            libdir = os.path.dirname(os.path.join(self.build_lib, filename))
            if ltd and libdir not in ext.library_dirs:
                ext.library_dirs.append(libdir)
            if ltd and use_stubs and os.curdir not in ext.runtime_library_dirs:
                ext.runtime_library_dirs.append(os.curdir)

    def setup_shlib_compiler(self):
        compiler = self.shlib_compiler = new_compiler(
            compiler=self.compiler, dry_run=self.dry_run, force=self.force
        )
        _customize_compiler_for_shlib(compiler)

        if self.include_dirs is not None:
            compiler.set_include_dirs(self.include_dirs)
        if self.define is not None:
            # 'define' option is a list of (name,value) tuples
            for (name, value) in self.define:
                compiler.define_macro(name, value)
        if self.undef is not None:
            for macro in self.undef:
                compiler.undefine_macro(macro)
        if self.libraries is not None:
            compiler.set_libraries(self.libraries)
        if self.library_dirs is not None:
            compiler.set_library_dirs(self.library_dirs)
        if self.rpath is not None:
            compiler.set_runtime_library_dirs(self.rpath)
        if self.link_objects is not None:
            compiler.set_link_objects(self.link_objects)

        # hack so distutils' build_extension() builds a library instead
        compiler.link_shared_object = link_shared_object.__get__(compiler)

    def get_export_symbols(self, ext):
        if isinstance(ext, Library):
            return ext.export_symbols
        return _build_ext.get_export_symbols(self, ext)

    def build_extension(self, ext):
        ext._convert_pyx_sources_to_lang()
        _compiler = self.compiler
        try:
            if isinstance(ext, Library):
                self.compiler = self.shlib_compiler
            _build_ext.build_extension(self, ext)
            if ext._needs_stub:
                cmd = self.get_finalized_command('build_py').build_lib
                self.write_stub(cmd, ext)
        finally:
            self.compiler = _compiler

    def links_to_dynamic(self, ext):
        """Return true if 'ext' links to a dynamic lib in the same package"""
        # XXX this should check to ensure the lib is actually being built
        # XXX as dynamic, and not just using a locally-found version or a
        # XXX static-compiled version
        libnames = dict.fromkeys([lib._full_name for lib in self.shlibs])
        pkg = '.'.join(ext._full_name.split('.')[:-1] + [''])
        return any(pkg + libname in libnames for libname in ext.libraries)

    def get_outputs(self):
        return _build_ext.get_outputs(self) + self.__get_stubs_outputs()

    def __get_stubs_outputs(self):
        # assemble the base name for each extension that needs a stub
        ns_ext_bases = (
            os.path.join(self.build_lib, *ext._full_name.split('.'))
            for ext in self.extensions
            if ext._needs_stub
        )
        # pair each base with the extension
        pairs = itertools.product(ns_ext_bases,
                                  self.__get_output_extensions())
        return list(base + fnext for base, fnext in pairs)

    def __get_output_extensions(self):
        yield '.py'
        yield '.pyc'
        if self.get_finalized_command('build_py').optimize:
            yield '.pyo'

    def write_stub(self, output_dir, ext, compile=False):
        log.info("writing stub loader for %s to %s", ext._full_name,
                 output_dir)
        stub_file = (os.path.join(output_dir, *ext._full_name.split('.')) +
                     '.py')
        if compile and os.path.exists(stub_file):
            raise DistutilsError(stub_file + " already exists! Please delete.")
        if not self.dry_run:
            f = open(stub_file, 'w')
            f.write(
                '\n'.join([
                    "def __bootstrap__():",
                    "   global __bootstrap__, __file__, __loader__",
                    "   import sys, os, pkg_resources, imp" + if_dl(", dl"),
                    "   __file__ = pkg_resources.resource_filename"
                    "(__name__,%r)" % os.path.basename(ext._file_name),
                    "   del __bootstrap__",
                    "   if '__loader__' in globals():",
                    "       del __loader__",
                    if_dl("   old_flags = sys.getdlopenflags()"),
                    "   old_dir = os.getcwd()",
                    "   try:",
                    "     os.chdir(os.path.dirname(__file__))",
                    if_dl("     sys.setdlopenflags(dl.RTLD_NOW)"),
                    "     imp.load_dynamic(__name__,__file__)",
                    "   finally:",
                    if_dl("     sys.setdlopenflags(old_flags)"),
                    "     os.chdir(old_dir)",
                    "__bootstrap__()",
                    ""  # terminal \n
                ])
            )
            f.close()
        if compile:
            from distutils.util import byte_compile

            byte_compile([stub_file], optimize=0,
                         force=True, dry_run=self.dry_run)
            optimize = self.get_finalized_command('install_lib').optimize
            if optimize > 0:
                byte_compile([stub_file], optimize=optimize,
                             force=True, dry_run=self.dry_run)
            if os.path.exists(stub_file) and not self.dry_run:
                os.unlink(stub_file)


if use_stubs or os.name == 'nt':
    # Build shared libraries
    #
    def link_shared_object(
            self, objects, output_libname, output_dir=None, libraries=None,
            library_dirs=None, runtime_library_dirs=None, export_symbols=None,
            debug=0, extra_preargs=None, extra_postargs=None, build_temp=None,
            target_lang=None):
        self.link(
            self.SHARED_LIBRARY, objects, output_libname,
            output_dir, libraries, library_dirs, runtime_library_dirs,
            export_symbols, debug, extra_preargs, extra_postargs,
            build_temp, target_lang
        )
else:
    # Build static libraries everywhere else
    libtype = 'static'

    def link_shared_object(
            self, objects, output_libname, output_dir=None, libraries=None,
            library_dirs=None, runtime_library_dirs=None, export_symbols=None,
            debug=0, extra_preargs=None, extra_postargs=None, build_temp=None,
            target_lang=None):
        # XXX we need to either disallow these attrs on Library instances,
        # or warn/abort here if set, or something...
        # libraries=None, library_dirs=None, runtime_library_dirs=None,
        # export_symbols=None, extra_preargs=None, extra_postargs=None,
        # build_temp=None

        assert output_dir is None  # distutils build_ext doesn't pass this
        output_dir, filename = os.path.split(output_libname)
        basename, ext = os.path.splitext(filename)
        if self.library_filename("x").startswith('lib'):
            # strip 'lib' prefix; this is kludgy if some platform uses
            # a different prefix
            basename = basename[3:]

        self.create_static_lib(
            objects, basename, output_dir, debug, target_lang
        )


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/build_py.py

from glob import glob
from distutils.util import convert_path
import distutils.command.build_py as orig
import os
import fnmatch
import textwrap
import io
import distutils.errors
import itertools

from setuptools.extern import six
from setuptools.extern.six.moves import map, filter, filterfalse

try:
    from setuptools.lib2to3_ex import Mixin2to3
except ImportError:

    class Mixin2to3:
        def run_2to3(self, files, doctests=True):
            "do nothing"


class build_py(orig.build_py, Mixin2to3):
    """Enhanced 'build_py' command that includes data files with packages

    The data files are specified via a 'package_data' argument to 'setup()'.
    See 'setuptools.dist.Distribution' for more details.

    Also, this version of the 'build_py' command allows you to specify both
    'py_modules' and 'packages' in the same setup operation.
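# A standalone illustration (not vendored code) of the abi3 renaming that
# get_ext_filename() above performs for an Extension built with
# py_limited_api=True; the file name and suffixes below are hypothetical
# stand-ins for get_config_var('EXT_SUFFIX') and get_abi3_suffix().
filename = 'mypkg/fast.cpython-37m-x86_64-linux-gnu.so'
so_ext = '.cpython-37m-x86_64-linux-gnu.so'   # default EXT_SUFFIX
abi3 = '.abi3.so'                             # abi3 suffix on Unix
print(filename[:-len(so_ext)] + abi3)         # mypkg/fast.abi3.so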
""" def finalize_options(self): orig.build_py.finalize_options(self) self.package_data = self.distribution.package_data self.exclude_package_data = (self.distribution.exclude_package_data or {}) if 'data_files' in self.__dict__: del self.__dict__['data_files'] self.__updated_files = [] self.__doctests_2to3 = [] def run(self): """Build modules, packages, and copy data files to build directory""" if not self.py_modules and not self.packages: return if self.py_modules: self.build_modules() if self.packages: self.build_packages() self.build_package_data() self.run_2to3(self.__updated_files, False) self.run_2to3(self.__updated_files, True) self.run_2to3(self.__doctests_2to3, True) # Only compile actual .py files, using our base class' idea of what our # output files are. self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0)) def __getattr__(self, attr): "lazily compute data files" if attr == 'data_files': self.data_files = self._get_data_files() return self.data_files return orig.build_py.__getattr__(self, attr) def build_module(self, module, module_file, package): if six.PY2 and isinstance(package, six.string_types): # avoid errors on Python 2 when unicode is passed (#190) package = package.split('.') outfile, copied = orig.build_py.build_module(self, module, module_file, package) if copied: self.__updated_files.append(outfile) return outfile, copied def _get_data_files(self): """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" self.analyze_manifest() return list(map(self._get_pkg_data_files, self.packages or ())) def _get_pkg_data_files(self, package): # Locate package source directory src_dir = self.get_package_dir(package) # Compute package build directory build_dir = os.path.join(*([self.build_lib] + package.split('.'))) # Strip directory from globbed filenames filenames = [ os.path.relpath(file, src_dir) for file in self.find_data_files(package, src_dir) ] return package, src_dir, build_dir, filenames def find_data_files(self, package, src_dir): """Return filenames for package's data files in 'src_dir'""" patterns = self._get_platform_patterns( self.package_data, package, src_dir, ) globs_expanded = map(glob, patterns) # flatten the expanded globs into an iterable of matches globs_matches = itertools.chain.from_iterable(globs_expanded) glob_files = filter(os.path.isfile, globs_matches) files = itertools.chain( self.manifest_files.get(package, []), glob_files, ) return self.exclude_data_files(package, src_dir, files) def build_package_data(self): """Copy data files into build directory""" for package, src_dir, build_dir, filenames in self.data_files: for filename in filenames: target = os.path.join(build_dir, filename) self.mkpath(os.path.dirname(target)) srcfile = os.path.join(src_dir, filename) outf, copied = self.copy_file(srcfile, target) srcfile = os.path.abspath(srcfile) if (copied and srcfile in self.distribution.convert_2to3_doctests): self.__doctests_2to3.append(outf) def analyze_manifest(self): self.manifest_files = mf = {} if not self.distribution.include_package_data: return src_dirs = {} for package in self.packages or (): # Locate package source directory src_dirs[assert_relative(self.get_package_dir(package))] = package self.run_command('egg_info') ei_cmd = self.get_finalized_command('egg_info') for path in ei_cmd.filelist.files: d, f = os.path.split(assert_relative(path)) prev = None oldf = f while d and d != prev and d not in src_dirs: prev = d d, df = os.path.split(d) f = os.path.join(df, f) if d in src_dirs: if path.endswith('.py') and f 
== oldf: continue # it's a module, not data mf.setdefault(src_dirs[d], []).append(path) def get_data_files(self): pass # Lazily compute data files in _get_data_files() function. def check_package(self, package, package_dir): """Check namespace packages' __init__ for declare_namespace""" try: return self.packages_checked[package] except KeyError: pass init_py = orig.build_py.check_package(self, package, package_dir) self.packages_checked[package] = init_py if not init_py or not self.distribution.namespace_packages: return init_py for pkg in self.distribution.namespace_packages: if pkg == package or pkg.startswith(package + '.'): break else: return init_py with io.open(init_py, 'rb') as f: contents = f.read() if b'declare_namespace' not in contents: raise distutils.errors.DistutilsError( "Namespace package problem: %s is a namespace package, but " "its\n__init__.py does not call declare_namespace()! Please " 'fix it.\n(See the setuptools manual under ' '"Namespace Packages" for details.)\n"' % (package,) ) return init_py def initialize_options(self): self.packages_checked = {} orig.build_py.initialize_options(self) def get_package_dir(self, package): res = orig.build_py.get_package_dir(self, package) if self.distribution.src_root is not None: return os.path.join(self.distribution.src_root, res) return res def exclude_data_files(self, package, src_dir, files): """Filter filenames for package's data files in 'src_dir'""" files = list(files) patterns = self._get_platform_patterns( self.exclude_package_data, package, src_dir, ) match_groups = ( fnmatch.filter(files, pattern) for pattern in patterns ) # flatten the groups of matches into an iterable of matches matches = itertools.chain.from_iterable(match_groups) bad = set(matches) keepers = ( fn for fn in files if fn not in bad ) # ditch dupes return list(_unique_everseen(keepers)) @staticmethod def _get_platform_patterns(spec, package, src_dir): """ yield platform-specific path patterns (suitable for glob or fn_match) from a glob-based spec (such as self.package_data or self.exclude_package_data) matching package in src_dir. """ raw_patterns = itertools.chain( spec.get('', []), spec.get(package, []), ) return ( # Each pattern has to be converted to a platform-specific path os.path.join(src_dir, convert_path(pattern)) for pattern in raw_patterns ) # from Python docs def _unique_everseen(iterable, key=None): "List unique elements, preserving order. Remember all elements ever seen." # unique_everseen('AAAABBBCCDAABBB') --> A B C D # unique_everseen('ABBCcAD', str.lower) --> A B C D seen = set() seen_add = seen.add if key is None: for element in filterfalse(seen.__contains__, iterable): seen_add(element) yield element else: for element in iterable: k = key(element) if k not in seen: seen_add(k) yield element def assert_relative(path): if not os.path.isabs(path): return path from distutils.errors import DistutilsSetupError msg = textwrap.dedent(""" Error: setup script specifies an absolute path: %s setup() arguments must *always* be /-separated paths relative to the setup.py directory, *never* absolute paths. 
""").lstrip() % path raise DistutilsSetupError(msg) ������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/develop.py������������������������0000644�0001750�0001750�00000017770�00000000000�030124� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from distutils.util import convert_path from distutils import log from distutils.errors import DistutilsError, DistutilsOptionError import os import glob import io from setuptools.extern import six import pkg_resources from setuptools.command.easy_install import easy_install from setuptools import namespaces import setuptools __metaclass__ = type class develop(namespaces.DevelopInstaller, easy_install): """Set up package for development""" description = "install package in 'development mode'" user_options = easy_install.user_options + [ ("uninstall", "u", "Uninstall this source package"), ("egg-path=", None, "Set the path to be used in the .egg-link file"), ] boolean_options = easy_install.boolean_options + ['uninstall'] command_consumes_arguments = False # override base def run(self): if self.uninstall: self.multi_version = True self.uninstall_link() self.uninstall_namespaces() else: self.install_for_development() self.warn_deprecated_options() def initialize_options(self): self.uninstall = None self.egg_path = None easy_install.initialize_options(self) self.setup_path = None self.always_copy_from = '.' 
    def finalize_options(self):
        ei = self.get_finalized_command("egg_info")
        if ei.broken_egg_info:
            template = "Please rename %r to %r before using 'develop'"
            args = ei.egg_info, ei.broken_egg_info
            raise DistutilsError(template % args)
        self.args = [ei.egg_name]

        easy_install.finalize_options(self)
        self.expand_basedirs()
        self.expand_dirs()
        # pick up setup-dir .egg files only: no .egg-info
        self.package_index.scan(glob.glob('*.egg'))

        egg_link_fn = ei.egg_name + '.egg-link'
        self.egg_link = os.path.join(self.install_dir, egg_link_fn)
        self.egg_base = ei.egg_base
        if self.egg_path is None:
            self.egg_path = os.path.abspath(ei.egg_base)

        target = pkg_resources.normalize_path(self.egg_base)
        egg_path = pkg_resources.normalize_path(
            os.path.join(self.install_dir, self.egg_path))
        if egg_path != target:
            raise DistutilsOptionError(
                "--egg-path must be a relative path from the install"
                " directory to " + target
            )

        # Make a distribution for the package's source
        self.dist = pkg_resources.Distribution(
            target,
            pkg_resources.PathMetadata(target, os.path.abspath(ei.egg_info)),
            project_name=ei.egg_name
        )

        self.setup_path = self._resolve_setup_path(
            self.egg_base,
            self.install_dir,
            self.egg_path,
        )

    @staticmethod
    def _resolve_setup_path(egg_base, install_dir, egg_path):
        """
        Generate a path from egg_base back to '.' where the
        setup script resides and ensure that path points to the
        setup path from $install_dir/$egg_path.
        """
        path_to_setup = egg_base.replace(os.sep, '/').rstrip('/')
        if path_to_setup != os.curdir:
            path_to_setup = '../' * (path_to_setup.count('/') + 1)
        resolved = pkg_resources.normalize_path(
            os.path.join(install_dir, egg_path, path_to_setup)
        )
        if resolved != pkg_resources.normalize_path(os.curdir):
            raise DistutilsOptionError(
                "Can't get a consistent path to setup script from"
                " installation directory", resolved,
                pkg_resources.normalize_path(os.curdir))
        return path_to_setup

    def install_for_development(self):
        if six.PY3 and getattr(self.distribution, 'use_2to3', False):
            # If we run 2to3 we can not do this inplace:

            # Ensure metadata is up-to-date
            self.reinitialize_command('build_py', inplace=0)
            self.run_command('build_py')
            bpy_cmd = self.get_finalized_command("build_py")
            build_path = pkg_resources.normalize_path(bpy_cmd.build_lib)

            # Build extensions
            self.reinitialize_command('egg_info', egg_base=build_path)
            self.run_command('egg_info')

            self.reinitialize_command('build_ext', inplace=0)
            self.run_command('build_ext')

            # Fixup egg-link and easy-install.pth
            ei_cmd = self.get_finalized_command("egg_info")
            self.egg_path = build_path
            self.dist.location = build_path
            # XXX
            self.dist._provider = pkg_resources.PathMetadata(
                build_path, ei_cmd.egg_info)
        else:
            # Without 2to3 inplace works fine:
            self.run_command('egg_info')

            # Build extensions in-place
            self.reinitialize_command('build_ext', inplace=1)
            self.run_command('build_ext')

        self.install_site_py()  # ensure that target dir is site-safe
        if setuptools.bootstrap_install_from:
            self.easy_install(setuptools.bootstrap_install_from)
            setuptools.bootstrap_install_from = None

        self.install_namespaces()

        # create an .egg-link in the installation dir, pointing to our egg
        log.info("Creating %s (link to %s)", self.egg_link, self.egg_base)
        if not self.dry_run:
            with open(self.egg_link, "w") as f:
                f.write(self.egg_path + "\n" + self.setup_path)
        # postprocess the installed distro, fixing up .pth, installing
        # scripts, and handling requirements
        self.process_distribution(None, self.dist, not self.no_deps)
    def uninstall_link(self):
        if os.path.exists(self.egg_link):
            log.info("Removing %s (link to %s)", self.egg_link, self.egg_base)
            egg_link_file = open(self.egg_link)
            contents = [line.rstrip() for line in egg_link_file]
            egg_link_file.close()
            if contents not in ([self.egg_path],
                                [self.egg_path, self.setup_path]):
                log.warn("Link points to %s: uninstall aborted", contents)
                return
            if not self.dry_run:
                os.unlink(self.egg_link)
        if not self.dry_run:
            self.update_pth(self.dist)  # remove any .pth link to us
        if self.distribution.scripts:
            # XXX should also check for entry point scripts!
            log.warn("Note: you must uninstall or replace scripts manually!")

    def install_egg_scripts(self, dist):
        if dist is not self.dist:
            # Installing a dependency, so fall back to normal behavior
            return easy_install.install_egg_scripts(self, dist)

        # create wrapper scripts in the script dir, pointing to dist.scripts

        # new-style...
        self.install_wrapper_scripts(dist)

        # ...and old-style
        for script_name in self.distribution.scripts or []:
            script_path = os.path.abspath(convert_path(script_name))
            script_name = os.path.basename(script_path)
            with io.open(script_path) as strm:
                script_text = strm.read()
            self.install_script(dist, script_name, script_text, script_path)

    def install_wrapper_scripts(self, dist):
        dist = VersionlessRequirement(dist)
        return easy_install.install_wrapper_scripts(self, dist)


class VersionlessRequirement:
    """
    Adapt a pkg_resources.Distribution to simply return the project
    name as the 'requirement' so that scripts will work across
    multiple versions.

    >>> from pkg_resources import Distribution
    >>> dist = Distribution(project_name='foo', version='1.0')
    >>> str(dist.as_requirement())
    'foo==1.0'
    >>> adapted_dist = VersionlessRequirement(dist)
    >>> str(adapted_dist.as_requirement())
    'foo'
    """

    def __init__(self, dist):
        self.__dist = dist

    def __getattr__(self, name):
        return getattr(self.__dist, name)

    def as_requirement(self):
        return self.project_name
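# A standalone illustration (not vendored code) of the two-line .egg-link
# format written by develop.install_for_development() above: the first line
# is the egg path, the second the relative path back to where setup.py
# lives.  All paths here are hypothetical.
egg_path = '/home/user/src/mypkg'            # project checkout
setup_path = '.'                             # setup.py sits in egg_path itself
with open('/tmp/mypkg.egg-link', 'w') as f:  # install dir stand-in
    f.write(egg_path + "\n" + setup_path)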
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/dist_info.py

"""
Create a dist_info directory
As defined in the wheel specification
"""

import os

from distutils.core import Command
from distutils import log


class dist_info(Command):

    description = 'create a .dist-info directory'

    user_options = [
        ('egg-base=', 'e', "directory containing .egg-info directories"
                           " (default: top of the source tree)"),
    ]

    def initialize_options(self):
        self.egg_base = None

    def finalize_options(self):
        pass

    def run(self):
        egg_info = self.get_finalized_command('egg_info')
        egg_info.egg_base = self.egg_base
        egg_info.finalize_options()
        egg_info.run()
        dist_info_dir = egg_info.egg_info[:-len('.egg-info')] + '.dist-info'
        log.info("creating '{}'".format(os.path.abspath(dist_info_dir)))

        bdist_wheel = self.get_finalized_command('bdist_wheel')
        bdist_wheel.egg2dist(egg_info.egg_info, dist_info_dir)


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/easy_install.py

#!/usr/bin/env python
"""
Easy Install
------------

A tool for doing automatic download/extract/build of distutils-based Python
packages.  For detailed documentation, see the accompanying EasyInstall.txt
file, or visit the `EasyInstall home page`__.
__ https://setuptools.readthedocs.io/en/latest/easy_install.html """ from glob import glob from distutils.util import get_platform from distutils.util import convert_path, subst_vars from distutils.errors import ( DistutilsArgError, DistutilsOptionError, DistutilsError, DistutilsPlatformError, ) from distutils.command.install import INSTALL_SCHEMES, SCHEME_KEYS from distutils import log, dir_util from distutils.command.build_scripts import first_line_re from distutils.spawn import find_executable import sys import os import zipimport import shutil import tempfile import zipfile import re import stat import random import textwrap import warnings import site import struct import contextlib import subprocess import shlex import io from sysconfig import get_config_vars, get_path from setuptools import SetuptoolsDeprecationWarning from setuptools.extern import six from setuptools.extern.six.moves import configparser, map from setuptools import Command from setuptools.sandbox import run_setup from setuptools.py27compat import rmtree_safe from setuptools.command import setopt from setuptools.archive_util import unpack_archive from setuptools.package_index import ( PackageIndex, parse_requirement_arg, URL_SCHEME, ) from setuptools.command import bdist_egg, egg_info from setuptools.wheel import Wheel from pkg_resources import ( yield_lines, normalize_path, resource_string, ensure_directory, get_distribution, find_distributions, Environment, Requirement, Distribution, PathMetadata, EggMetadata, WorkingSet, DistributionNotFound, VersionConflict, DEVELOP_DIST, ) import pkg_resources.py31compat __metaclass__ = type # Turn on PEP440Warnings warnings.filterwarnings("default", category=pkg_resources.PEP440Warning) __all__ = [ 'samefile', 'easy_install', 'PthDistributions', 'extract_wininst_cfg', 'main', 'get_exe_prefixes', ] def is_64bit(): return struct.calcsize("P") == 8 def samefile(p1, p2): """ Determine if two paths reference the same file. Augments os.path.samefile to work on Windows and suppresses errors if the path doesn't exist. 
""" both_exist = os.path.exists(p1) and os.path.exists(p2) use_samefile = hasattr(os.path, 'samefile') and both_exist if use_samefile: return os.path.samefile(p1, p2) norm_p1 = os.path.normpath(os.path.normcase(p1)) norm_p2 = os.path.normpath(os.path.normcase(p2)) return norm_p1 == norm_p2 if six.PY2: def _to_bytes(s): return s def isascii(s): try: six.text_type(s, 'ascii') return True except UnicodeError: return False else: def _to_bytes(s): return s.encode('utf8') def isascii(s): try: s.encode('ascii') return True except UnicodeError: return False _one_liner = lambda text: textwrap.dedent(text).strip().replace('\n', '; ') class easy_install(Command): """Manage a download/build/install process""" description = "Find/get/install Python packages" command_consumes_arguments = True user_options = [ ('prefix=', None, "installation prefix"), ("zip-ok", "z", "install package as a zipfile"), ("multi-version", "m", "make apps have to require() a version"), ("upgrade", "U", "force upgrade (searches PyPI for latest versions)"), ("install-dir=", "d", "install package to DIR"), ("script-dir=", "s", "install scripts to DIR"), ("exclude-scripts", "x", "Don't install scripts"), ("always-copy", "a", "Copy all needed packages to install dir"), ("index-url=", "i", "base URL of Python Package Index"), ("find-links=", "f", "additional URL(s) to search for packages"), ("build-directory=", "b", "download/extract/build in DIR; keep the results"), ('optimize=', 'O', "also compile with optimization: -O1 for \"python -O\", " "-O2 for \"python -OO\", and -O0 to disable [default: -O0]"), ('record=', None, "filename in which to record list of installed files"), ('always-unzip', 'Z', "don't install as a zipfile, no matter what"), ('site-dirs=', 'S', "list of directories where .pth files work"), ('editable', 'e', "Install specified packages in editable form"), ('no-deps', 'N', "don't install dependencies"), ('allow-hosts=', 'H', "pattern(s) that hostnames must match"), ('local-snapshots-ok', 'l', "allow building eggs from local checkouts"), ('version', None, "print version information and exit"), ('no-find-links', None, "Don't load find-links defined in packages being installed") ] boolean_options = [ 'zip-ok', 'multi-version', 'exclude-scripts', 'upgrade', 'always-copy', 'editable', 'no-deps', 'local-snapshots-ok', 'version' ] if site.ENABLE_USER_SITE: help_msg = "install in user site-package '%s'" % site.USER_SITE user_options.append(('user', None, help_msg)) boolean_options.append('user') negative_opt = {'always-unzip': 'zip-ok'} create_index = PackageIndex def initialize_options(self): # the --user option seems to be an opt-in one, # so the default should be False. 
self.user = 0 self.zip_ok = self.local_snapshots_ok = None self.install_dir = self.script_dir = self.exclude_scripts = None self.index_url = None self.find_links = None self.build_directory = None self.args = None self.optimize = self.record = None self.upgrade = self.always_copy = self.multi_version = None self.editable = self.no_deps = self.allow_hosts = None self.root = self.prefix = self.no_report = None self.version = None self.install_purelib = None # for pure module distributions self.install_platlib = None # non-pure (dists w/ extensions) self.install_headers = None # for C/C++ headers self.install_lib = None # set to either purelib or platlib self.install_scripts = None self.install_data = None self.install_base = None self.install_platbase = None if site.ENABLE_USER_SITE: self.install_userbase = site.USER_BASE self.install_usersite = site.USER_SITE else: self.install_userbase = None self.install_usersite = None self.no_find_links = None # Options not specifiable via command line self.package_index = None self.pth_file = self.always_copy_from = None self.site_dirs = None self.installed_projects = {} self.sitepy_installed = False # Always read easy_install options, even if we are subclassed, or have # an independent instance created. This ensures that defaults will # always come from the standard configuration file(s)' "easy_install" # section, even if this is a "develop" or "install" command, or some # other embedding. self._dry_run = None self.verbose = self.distribution.verbose self.distribution._set_command_options( self, self.distribution.get_option_dict('easy_install') ) def delete_blockers(self, blockers): extant_blockers = ( filename for filename in blockers if os.path.exists(filename) or os.path.islink(filename) ) list(map(self._delete_path, extant_blockers)) def _delete_path(self, path): log.info("Deleting %s", path) if self.dry_run: return is_tree = os.path.isdir(path) and not os.path.islink(path) remover = rmtree if is_tree else os.unlink remover(path) @staticmethod def _render_version(): """ Render the Setuptools version and installation details, then exit. """ ver = sys.version[:3] dist = get_distribution('setuptools') tmpl = 'setuptools {dist.version} from {dist.location} (Python {ver})' print(tmpl.format(**locals())) raise SystemExit() def finalize_options(self): self.version and self._render_version() py_version = sys.version.split()[0] prefix, exec_prefix = get_config_vars('prefix', 'exec_prefix') self.config_vars = { 'dist_name': self.distribution.get_name(), 'dist_version': self.distribution.get_version(), 'dist_fullname': self.distribution.get_fullname(), 'py_version': py_version, 'py_version_short': py_version[0:3], 'py_version_nodot': py_version[0] + py_version[2], 'sys_prefix': prefix, 'prefix': prefix, 'sys_exec_prefix': exec_prefix, 'exec_prefix': exec_prefix, # Only python 3.2+ has abiflags 'abiflags': getattr(sys, 'abiflags', ''), } if site.ENABLE_USER_SITE: self.config_vars['userbase'] = self.install_userbase self.config_vars['usersite'] = self.install_usersite self._fix_install_dir_for_user_site() self.expand_basedirs() self.expand_dirs() self._expand( 'install_dir', 'script_dir', 'build_directory', 'site_dirs', ) # If a non-default installation directory was specified, default the # script directory to match it. 
if self.script_dir is None: self.script_dir = self.install_dir if self.no_find_links is None: self.no_find_links = False # Let install_dir get set by install_lib command, which in turn # gets its info from the install command, and takes into account # --prefix and --home and all that other crud. self.set_undefined_options( 'install_lib', ('install_dir', 'install_dir') ) # Likewise, set default script_dir from 'install_scripts.install_dir' self.set_undefined_options( 'install_scripts', ('install_dir', 'script_dir') ) if self.user and self.install_purelib: self.install_dir = self.install_purelib self.script_dir = self.install_scripts # default --record from the install command self.set_undefined_options('install', ('record', 'record')) # Should this be moved to the if statement below? It's not used # elsewhere normpath = map(normalize_path, sys.path) self.all_site_dirs = get_site_dirs() if self.site_dirs is not None: site_dirs = [ os.path.expanduser(s.strip()) for s in self.site_dirs.split(',') ] for d in site_dirs: if not os.path.isdir(d): log.warn("%s (in --site-dirs) does not exist", d) elif normalize_path(d) not in normpath: raise DistutilsOptionError( d + " (in --site-dirs) is not on sys.path" ) else: self.all_site_dirs.append(normalize_path(d)) if not self.editable: self.check_site_dir() self.index_url = self.index_url or "https://pypi.org/simple/" self.shadow_path = self.all_site_dirs[:] for path_item in self.install_dir, normalize_path(self.script_dir): if path_item not in self.shadow_path: self.shadow_path.insert(0, path_item) if self.allow_hosts is not None: hosts = [s.strip() for s in self.allow_hosts.split(',')] else: hosts = ['*'] if self.package_index is None: self.package_index = self.create_index( self.index_url, search_path=self.shadow_path, hosts=hosts, ) self.local_index = Environment(self.shadow_path + sys.path) if self.find_links is not None: if isinstance(self.find_links, six.string_types): self.find_links = self.find_links.split() else: self.find_links = [] if self.local_snapshots_ok: self.package_index.scan_egg_links(self.shadow_path + sys.path) if not self.no_find_links: self.package_index.add_find_links(self.find_links) self.set_undefined_options('install_lib', ('optimize', 'optimize')) if not isinstance(self.optimize, int): try: self.optimize = int(self.optimize) if not (0 <= self.optimize <= 2): raise ValueError except ValueError: raise DistutilsOptionError("--optimize must be 0, 1, or 2") if self.editable and not self.build_directory: raise DistutilsArgError( "Must specify a build directory (-b) when using --editable" ) if not self.args: raise DistutilsArgError( "No urls, filenames, or requirements specified (see --help)") self.outputs = [] def _fix_install_dir_for_user_site(self): """ Fix the install_dir if "--user" was used. 
""" if not self.user or not site.ENABLE_USER_SITE: return self.create_home_path() if self.install_userbase is None: msg = "User base directory is not specified" raise DistutilsPlatformError(msg) self.install_base = self.install_platbase = self.install_userbase scheme_name = os.name.replace('posix', 'unix') + '_user' self.select_scheme(scheme_name) def _expand_attrs(self, attrs): for attr in attrs: val = getattr(self, attr) if val is not None: if os.name == 'posix' or os.name == 'nt': val = os.path.expanduser(val) val = subst_vars(val, self.config_vars) setattr(self, attr, val) def expand_basedirs(self): """Calls `os.path.expanduser` on install_base, install_platbase and root.""" self._expand_attrs(['install_base', 'install_platbase', 'root']) def expand_dirs(self): """Calls `os.path.expanduser` on install dirs.""" dirs = [ 'install_purelib', 'install_platlib', 'install_lib', 'install_headers', 'install_scripts', 'install_data', ] self._expand_attrs(dirs) def run(self): if self.verbose != self.distribution.verbose: log.set_verbosity(self.verbose) try: for spec in self.args: self.easy_install(spec, not self.no_deps) if self.record: outputs = self.outputs if self.root: # strip any package prefix root_len = len(self.root) for counter in range(len(outputs)): outputs[counter] = outputs[counter][root_len:] from distutils import file_util self.execute( file_util.write_file, (self.record, outputs), "writing list of installed files to '%s'" % self.record ) self.warn_deprecated_options() finally: log.set_verbosity(self.distribution.verbose) def pseudo_tempname(self): """Return a pseudo-tempname base in the install directory. This code is intentionally naive; if a malicious party can write to the target directory you're already in deep doodoo. """ try: pid = os.getpid() except Exception: pid = random.randint(0, sys.maxsize) return os.path.join(self.install_dir, "test-easy-install-%s" % pid) def warn_deprecated_options(self): pass def check_site_dir(self): """Verify that self.install_dir is .pth-capable dir, if needed""" instdir = normalize_path(self.install_dir) pth_file = os.path.join(instdir, 'easy-install.pth') # Is it a configured, PYTHONPATH, implicit, or explicit site dir? is_site_dir = instdir in self.all_site_dirs if not is_site_dir and not self.multi_version: # No? 
Then directly test whether it does .pth file processing is_site_dir = self.check_pth_processing() else: # make sure we can write to target dir testfile = self.pseudo_tempname() + '.write-test' test_exists = os.path.exists(testfile) try: if test_exists: os.unlink(testfile) open(testfile, 'w').close() os.unlink(testfile) except (OSError, IOError): self.cant_write_to_target() if not is_site_dir and not self.multi_version: # Can't install non-multi to non-site dir raise DistutilsError(self.no_default_version_msg()) if is_site_dir: if self.pth_file is None: self.pth_file = PthDistributions(pth_file, self.all_site_dirs) else: self.pth_file = None if instdir not in map(normalize_path, _pythonpath()): # only PYTHONPATH dirs need a site.py, so pretend it's there self.sitepy_installed = True elif self.multi_version and not os.path.exists(pth_file): self.sitepy_installed = True # don't need site.py in this case self.pth_file = None # and don't create a .pth file self.install_dir = instdir __cant_write_msg = textwrap.dedent(""" can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: %s The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: %s """).lstrip() __not_exists_id = textwrap.dedent(""" This directory does not currently exist. Please create it and try again, or choose a different installation directory (using the -d or --install-dir option). """).lstrip() __access_msg = textwrap.dedent(""" Perhaps your account does not have write access to this directory? If the installation directory is a system-owned directory, you may need to sign in as the administrator or "root" account. If you do not have administrative access to this machine, you may wish to choose a different installation directory, preferably one that is listed in your PYTHONPATH environment variable. For information on other options, you may wish to consult the documentation at: https://setuptools.readthedocs.io/en/latest/easy_install.html Please make the appropriate changes for your system and try again. """).lstrip() def cant_write_to_target(self): msg = self.__cant_write_msg % (sys.exc_info()[1], self.install_dir,) if not os.path.exists(self.install_dir): msg += '\n' + self.__not_exists_id else: msg += '\n' + self.__access_msg raise DistutilsError(msg) def check_pth_processing(self): """Empirically verify whether .pth files are supported in inst. 
dir""" instdir = self.install_dir log.info("Checking .pth file support in %s", instdir) pth_file = self.pseudo_tempname() + ".pth" ok_file = pth_file + '.ok' ok_exists = os.path.exists(ok_file) tmpl = _one_liner(""" import os f = open({ok_file!r}, 'w') f.write('OK') f.close() """) + '\n' try: if ok_exists: os.unlink(ok_file) dirname = os.path.dirname(ok_file) pkg_resources.py31compat.makedirs(dirname, exist_ok=True) f = open(pth_file, 'w') except (OSError, IOError): self.cant_write_to_target() else: try: f.write(tmpl.format(**locals())) f.close() f = None executable = sys.executable if os.name == 'nt': dirname, basename = os.path.split(executable) alt = os.path.join(dirname, 'pythonw.exe') use_alt = ( basename.lower() == 'python.exe' and os.path.exists(alt) ) if use_alt: # use pythonw.exe to avoid opening a console window executable = alt from distutils.spawn import spawn spawn([executable, '-E', '-c', 'pass'], 0) if os.path.exists(ok_file): log.info( "TEST PASSED: %s appears to support .pth files", instdir ) return True finally: if f: f.close() if os.path.exists(ok_file): os.unlink(ok_file) if os.path.exists(pth_file): os.unlink(pth_file) if not self.multi_version: log.warn("TEST FAILED: %s does NOT support .pth files", instdir) return False def install_egg_scripts(self, dist): """Write all the scripts for `dist`, unless scripts are excluded""" if not self.exclude_scripts and dist.metadata_isdir('scripts'): for script_name in dist.metadata_listdir('scripts'): if dist.metadata_isdir('scripts/' + script_name): # The "script" is a directory, likely a Python 3 # __pycache__ directory, so skip it. continue self.install_script( dist, script_name, dist.get_metadata('scripts/' + script_name) ) self.install_wrapper_scripts(dist) def add_output(self, path): if os.path.isdir(path): for base, dirs, files in os.walk(path): for filename in files: self.outputs.append(os.path.join(base, filename)) else: self.outputs.append(path) def not_editable(self, spec): if self.editable: raise DistutilsArgError( "Invalid argument %r: you can't use filenames or URLs " "with --editable (except via the --find-links option)." 
% (spec,) ) def check_editable(self, spec): if not self.editable: return if os.path.exists(os.path.join(self.build_directory, spec.key)): raise DistutilsArgError( "%r already exists in %s; can't do a checkout there" % (spec.key, self.build_directory) ) @contextlib.contextmanager def _tmpdir(self): tmpdir = tempfile.mkdtemp(prefix=u"easy_install-") try: # cast to str as workaround for #709 and #710 and #712 yield str(tmpdir) finally: os.path.exists(tmpdir) and rmtree(rmtree_safe(tmpdir)) def easy_install(self, spec, deps=False): if not self.editable: self.install_site_py() with self._tmpdir() as tmpdir: if not isinstance(spec, Requirement): if URL_SCHEME(spec): # It's a url, download it to tmpdir and process self.not_editable(spec) dl = self.package_index.download(spec, tmpdir) return self.install_item(None, dl, tmpdir, deps, True) elif os.path.exists(spec): # Existing file or directory, just process it directly self.not_editable(spec) return self.install_item(None, spec, tmpdir, deps, True) else: spec = parse_requirement_arg(spec) self.check_editable(spec) dist = self.package_index.fetch_distribution( spec, tmpdir, self.upgrade, self.editable, not self.always_copy, self.local_index ) if dist is None: msg = "Could not find suitable distribution for %r" % spec if self.always_copy: msg += " (--always-copy skips system and development eggs)" raise DistutilsError(msg) elif dist.precedence == DEVELOP_DIST: # .egg-info dists don't need installing, just process deps self.process_distribution(spec, dist, deps, "Using") return dist else: return self.install_item(spec, dist.location, tmpdir, deps) def install_item(self, spec, download, tmpdir, deps, install_needed=False): # Installation is also needed if file in tmpdir or is not an egg install_needed = install_needed or self.always_copy install_needed = install_needed or os.path.dirname(download) == tmpdir install_needed = install_needed or not download.endswith('.egg') install_needed = install_needed or ( self.always_copy_from is not None and os.path.dirname(normalize_path(download)) == normalize_path(self.always_copy_from) ) if spec and not install_needed: # at this point, we know it's a local .egg, we just don't know if # it's already installed. for dist in self.local_index[spec.project_name]: if dist.location == download: break else: install_needed = True # it's not in the local index log.info("Processing %s", os.path.basename(download)) if install_needed: dists = self.install_eggs(spec, download, tmpdir) for dist in dists: self.process_distribution(spec, dist, deps) else: dists = [self.egg_distribution(download)] self.process_distribution(spec, dists[0], deps, "Using") if spec is not None: for dist in dists: if dist in spec: return dist def select_scheme(self, name): """Sets the install directories by applying the install schemes.""" # it's the caller's problem if they supply a bad name! 
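# Illustrative aside (not in the original source; the 'unix_user' scheme name # and the key list are assumptions based on distutils' documented install # schemes): under --user on a POSIX system, _fix_install_dir_for_user_site() # above calls select_scheme('unix_user'), and the loop below then fills any # still-unset install_purelib / install_platlib / install_headers / # install_scripts / install_data attributes from that scheme's entries.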
scheme = INSTALL_SCHEMES[name] for key in SCHEME_KEYS: attrname = 'install_' + key if getattr(self, attrname) is None: setattr(self, attrname, scheme[key]) def process_distribution(self, requirement, dist, deps=True, *info): self.update_pth(dist) self.package_index.add(dist) if dist in self.local_index[dist.key]: self.local_index.remove(dist) self.local_index.add(dist) self.install_egg_scripts(dist) self.installed_projects[dist.key] = dist log.info(self.installation_report(requirement, dist, *info)) if (dist.has_metadata('dependency_links.txt') and not self.no_find_links): self.package_index.add_find_links( dist.get_metadata_lines('dependency_links.txt') ) if not deps and not self.always_copy: return elif requirement is not None and dist.key != requirement.key: log.warn("Skipping dependencies for %s", dist) return # XXX this is not the distribution we were looking for elif requirement is None or dist not in requirement: # if we wound up with a different version, resolve what we've got distreq = dist.as_requirement() requirement = Requirement(str(distreq)) log.info("Processing dependencies for %s", requirement) try: distros = WorkingSet([]).resolve( [requirement], self.local_index, self.easy_install ) except DistributionNotFound as e: raise DistutilsError(str(e)) except VersionConflict as e: raise DistutilsError(e.report()) if self.always_copy or self.always_copy_from: # Force all the relevant distros to be copied or activated for dist in distros: if dist.key not in self.installed_projects: self.easy_install(dist.as_requirement()) log.info("Finished processing dependencies for %s", requirement) def should_unzip(self, dist): if self.zip_ok is not None: return not self.zip_ok if dist.has_metadata('not-zip-safe'): return True if not dist.has_metadata('zip-safe'): return True return False def maybe_move(self, spec, dist_filename, setup_base): dst = os.path.join(self.build_directory, spec.key) if os.path.exists(dst): msg = ( "%r already exists in %s; build directory %s will not be kept" ) log.warn(msg, spec.key, self.build_directory, setup_base) return setup_base if os.path.isdir(dist_filename): setup_base = dist_filename else: if os.path.dirname(dist_filename) == setup_base: os.unlink(dist_filename) # get it out of the tmp dir contents = os.listdir(setup_base) if len(contents) == 1: dist_filename = os.path.join(setup_base, contents[0]) if os.path.isdir(dist_filename): # if the only thing there is a directory, move it instead setup_base = dist_filename ensure_directory(dst) shutil.move(setup_base, dst) return dst def install_wrapper_scripts(self, dist): if self.exclude_scripts: return for args in ScriptWriter.best().get_args(dist): self.write_script(*args) def install_script(self, dist, script_name, script_text, dev_path=None): """Generate a legacy script wrapper and install it""" spec = str(dist.as_requirement()) is_script = is_python_script(script_text, script_name) if is_script: body = self._load_template(dev_path) % locals() script_text = ScriptWriter.get_header(script_text) + body self.write_script(script_name, _to_bytes(script_text), 'b') @staticmethod def _load_template(dev_path): """ There are a couple of template scripts in the package. This function loads one of them and prepares it for use. 
""" # See https://github.com/pypa/setuptools/issues/134 for info # on script file naming and downstream issues with SVR4 name = 'script.tmpl' if dev_path: name = name.replace('.tmpl', ' (dev).tmpl') raw_bytes = resource_string('setuptools', name) return raw_bytes.decode('utf-8') def write_script(self, script_name, contents, mode="t", blockers=()): """Write an executable file to the scripts directory""" self.delete_blockers( # clean up old .py/.pyw w/o a script [os.path.join(self.script_dir, x) for x in blockers] ) log.info("Installing %s script to %s", script_name, self.script_dir) target = os.path.join(self.script_dir, script_name) self.add_output(target) if self.dry_run: return mask = current_umask() ensure_directory(target) if os.path.exists(target): os.unlink(target) with open(target, "w" + mode) as f: f.write(contents) chmod(target, 0o777 - mask) def install_eggs(self, spec, dist_filename, tmpdir): # .egg dirs or files are already built, so just return them if dist_filename.lower().endswith('.egg'): return [self.install_egg(dist_filename, tmpdir)] elif dist_filename.lower().endswith('.exe'): return [self.install_exe(dist_filename, tmpdir)] elif dist_filename.lower().endswith('.whl'): return [self.install_wheel(dist_filename, tmpdir)] # Anything else, try to extract and build setup_base = tmpdir if os.path.isfile(dist_filename) and not dist_filename.endswith('.py'): unpack_archive(dist_filename, tmpdir, self.unpack_progress) elif os.path.isdir(dist_filename): setup_base = os.path.abspath(dist_filename) if (setup_base.startswith(tmpdir) # something we downloaded and self.build_directory and spec is not None): setup_base = self.maybe_move(spec, dist_filename, setup_base) # Find the setup.py file setup_script = os.path.join(setup_base, 'setup.py') if not os.path.exists(setup_script): setups = glob(os.path.join(setup_base, '*', 'setup.py')) if not setups: raise DistutilsError( "Couldn't find a setup script in %s" % os.path.abspath(dist_filename) ) if len(setups) > 1: raise DistutilsError( "Multiple setup scripts in %s" % os.path.abspath(dist_filename) ) setup_script = setups[0] # Now run it, and return the result if self.editable: log.info(self.report_editable(spec, setup_script)) return [] else: return self.build_and_install(setup_script, setup_base) def egg_distribution(self, egg_path): if os.path.isdir(egg_path): metadata = PathMetadata(egg_path, os.path.join(egg_path, 'EGG-INFO')) else: metadata = EggMetadata(zipimport.zipimporter(egg_path)) return Distribution.from_filename(egg_path, metadata=metadata) def install_egg(self, egg_path, tmpdir): destination = os.path.join( self.install_dir, os.path.basename(egg_path), ) destination = os.path.abspath(destination) if not self.dry_run: ensure_directory(destination) dist = self.egg_distribution(egg_path) if not samefile(egg_path, destination): if os.path.isdir(destination) and not os.path.islink(destination): dir_util.remove_tree(destination, dry_run=self.dry_run) elif os.path.exists(destination): self.execute( os.unlink, (destination,), "Removing " + destination, ) try: new_dist_is_zipped = False if os.path.isdir(egg_path): if egg_path.startswith(tmpdir): f, m = shutil.move, "Moving" else: f, m = shutil.copytree, "Copying" elif self.should_unzip(dist): self.mkpath(destination) f, m = self.unpack_and_compile, "Extracting" else: new_dist_is_zipped = True if egg_path.startswith(tmpdir): f, m = shutil.move, "Moving" else: f, m = shutil.copy2, "Copying" self.execute( f, (egg_path, destination), (m + " %s to %s") % ( os.path.basename(egg_path), 
os.path.dirname(destination) ), ) update_dist_caches( destination, fix_zipimporter_caches=new_dist_is_zipped, ) except Exception: update_dist_caches(destination, fix_zipimporter_caches=False) raise self.add_output(destination) return self.egg_distribution(destination) def install_exe(self, dist_filename, tmpdir): # See if it's valid, get data cfg = extract_wininst_cfg(dist_filename) if cfg is None: raise DistutilsError( "%s is not a valid distutils Windows .exe" % dist_filename ) # Create a dummy distribution object until we build the real distro dist = Distribution( None, project_name=cfg.get('metadata', 'name'), version=cfg.get('metadata', 'version'), platform=get_platform(), ) # Convert the .exe to an unpacked egg egg_path = os.path.join(tmpdir, dist.egg_name() + '.egg') dist.location = egg_path egg_tmp = egg_path + '.tmp' _egg_info = os.path.join(egg_tmp, 'EGG-INFO') pkg_inf = os.path.join(_egg_info, 'PKG-INFO') ensure_directory(pkg_inf) # make sure EGG-INFO dir exists dist._provider = PathMetadata(egg_tmp, _egg_info) # XXX self.exe_to_egg(dist_filename, egg_tmp) # Write EGG-INFO/PKG-INFO if not os.path.exists(pkg_inf): f = open(pkg_inf, 'w') f.write('Metadata-Version: 1.0\n') for k, v in cfg.items('metadata'): if k != 'target_version': f.write('%s: %s\n' % (k.replace('_', '-').title(), v)) f.close() script_dir = os.path.join(_egg_info, 'scripts') # delete entry-point scripts to avoid duping self.delete_blockers([ os.path.join(script_dir, args[0]) for args in ScriptWriter.get_args(dist) ]) # Build .egg file from tmpdir bdist_egg.make_zipfile( egg_path, egg_tmp, verbose=self.verbose, dry_run=self.dry_run, ) # install the .egg return self.install_egg(egg_path, tmpdir) def exe_to_egg(self, dist_filename, egg_tmp): """Extract a bdist_wininst to the directories an egg would use""" # Check for .pth file and set up prefix translations prefixes = get_exe_prefixes(dist_filename) to_compile = [] native_libs = [] top_level = {} def process(src, dst): s = src.lower() for old, new in prefixes: if s.startswith(old): src = new + src[len(old):] parts = src.split('/') dst = os.path.join(egg_tmp, *parts) dl = dst.lower() if dl.endswith('.pyd') or dl.endswith('.dll'): parts[-1] = bdist_egg.strip_module(parts[-1]) top_level[os.path.splitext(parts[0])[0]] = 1 native_libs.append(src) elif dl.endswith('.py') and old != 'SCRIPTS/': top_level[os.path.splitext(parts[0])[0]] = 1 to_compile.append(dst) return dst if not src.endswith('.pth'): log.warn("WARNING: can't process %s", src) return None # extract, tracking .pyd/.dll->native_libs and .py -> to_compile unpack_archive(dist_filename, egg_tmp, process) stubs = [] for res in native_libs: if res.lower().endswith('.pyd'): # create stubs for .pyd's parts = res.split('/') resource = parts[-1] parts[-1] = bdist_egg.strip_module(parts[-1]) + '.py' pyfile = os.path.join(egg_tmp, *parts) to_compile.append(pyfile) stubs.append(pyfile) bdist_egg.write_stub(resource, pyfile) self.byte_compile(to_compile) # compile .py's bdist_egg.write_safety_flag( os.path.join(egg_tmp, 'EGG-INFO'), bdist_egg.analyze_egg(egg_tmp, stubs)) # write zip-safety flag for name in 'top_level', 'native_libs': if locals()[name]: txt = os.path.join(egg_tmp, 'EGG-INFO', name + '.txt') if not os.path.exists(txt): f = open(txt, 'w') f.write('\n'.join(locals()[name]) + '\n') f.close() def install_wheel(self, wheel_path, tmpdir): wheel = Wheel(wheel_path) assert wheel.is_compatible() destination = os.path.join(self.install_dir, wheel.egg_name()) destination = os.path.abspath(destination) if not 
self.dry_run: ensure_directory(destination) if os.path.isdir(destination) and not os.path.islink(destination): dir_util.remove_tree(destination, dry_run=self.dry_run) elif os.path.exists(destination): self.execute( os.unlink, (destination,), "Removing " + destination, ) try: self.execute( wheel.install_as_egg, (destination,), ("Installing %s to %s") % ( os.path.basename(wheel_path), os.path.dirname(destination) ), ) finally: update_dist_caches(destination, fix_zipimporter_caches=False) self.add_output(destination) return self.egg_distribution(destination) __mv_warning = textwrap.dedent(""" Because this distribution was installed --multi-version, before you can import modules from this package in an application, you will need to 'import pkg_resources' and then use a 'require()' call similar to one of these examples, in order to select the desired version: pkg_resources.require("%(name)s") # latest installed version pkg_resources.require("%(name)s==%(version)s") # this exact version pkg_resources.require("%(name)s>=%(version)s") # this version or higher """).lstrip() __id_warning = textwrap.dedent(""" Note also that the installation directory must be on sys.path at runtime for this to work. (e.g. by being the application's script directory, by being on PYTHONPATH, or by being added to sys.path by your code.) """) def installation_report(self, req, dist, what="Installed"): """Helpful installation message for display to package users""" msg = "\n%(what)s %(eggloc)s%(extras)s" if self.multi_version and not self.no_report: msg += '\n' + self.__mv_warning if self.install_dir not in map(normalize_path, sys.path): msg += '\n' + self.__id_warning eggloc = dist.location name = dist.project_name version = dist.version extras = '' # TODO: self.report_extras(req, dist) return msg % locals() __editable_msg = textwrap.dedent(""" Extracted editable version of %(spec)s to %(dirname)s If it uses setuptools in its setup script, you can activate it in "development" mode by going to that directory and running:: %(python)s setup.py develop See the setuptools documentation for the "develop" command for more info. 
""").lstrip() def report_editable(self, spec, setup_script): dirname = os.path.dirname(setup_script) python = sys.executable return '\n' + self.__editable_msg % locals() def run_setup(self, setup_script, setup_base, args): sys.modules.setdefault('distutils.command.bdist_egg', bdist_egg) sys.modules.setdefault('distutils.command.egg_info', egg_info) args = list(args) if self.verbose > 2: v = 'v' * (self.verbose - 1) args.insert(0, '-' + v) elif self.verbose < 2: args.insert(0, '-q') if self.dry_run: args.insert(0, '-n') log.info( "Running %s %s", setup_script[len(setup_base) + 1:], ' '.join(args) ) try: run_setup(setup_script, args) except SystemExit as v: raise DistutilsError("Setup script exited with %s" % (v.args[0],)) def build_and_install(self, setup_script, setup_base): args = ['bdist_egg', '--dist-dir'] dist_dir = tempfile.mkdtemp( prefix='egg-dist-tmp-', dir=os.path.dirname(setup_script) ) try: self._set_fetcher_options(os.path.dirname(setup_script)) args.append(dist_dir) self.run_setup(setup_script, setup_base, args) all_eggs = Environment([dist_dir]) eggs = [] for key in all_eggs: for dist in all_eggs[key]: eggs.append(self.install_egg(dist.location, setup_base)) if not eggs and not self.dry_run: log.warn("No eggs found in %s (setup script problem?)", dist_dir) return eggs finally: rmtree(dist_dir) log.set_verbosity(self.verbose) # restore our log verbosity def _set_fetcher_options(self, base): """ When easy_install is about to run bdist_egg on a source dist, that source dist might have 'setup_requires' directives, requiring additional fetching. Ensure the fetcher options given to easy_install are available to that command as well. """ # find the fetch options from easy_install and write them out # to the setup.cfg file. ei_opts = self.distribution.get_option_dict('easy_install').copy() fetch_directives = ( 'find_links', 'site_dirs', 'index_url', 'optimize', 'site_dirs', 'allow_hosts', ) fetch_options = {} for key, val in ei_opts.items(): if key not in fetch_directives: continue fetch_options[key.replace('_', '-')] = val[1] # create a settings dictionary suitable for `edit_config` settings = dict(easy_install=fetch_options) cfg_filename = os.path.join(base, 'setup.cfg') setopt.edit_config(cfg_filename, settings) def update_pth(self, dist): if self.pth_file is None: return for d in self.pth_file[dist.key]: # drop old entries if self.multi_version or d.location != dist.location: log.info("Removing %s from easy-install.pth file", d) self.pth_file.remove(d) if d.location in self.shadow_path: self.shadow_path.remove(d.location) if not self.multi_version: if dist.location in self.pth_file.paths: log.info( "%s is already the active version in easy-install.pth", dist, ) else: log.info("Adding %s to easy-install.pth file", dist) self.pth_file.add(dist) # add new entry if dist.location not in self.shadow_path: self.shadow_path.append(dist.location) if not self.dry_run: self.pth_file.save() if dist.key == 'setuptools': # Ensure that setuptools itself never becomes unavailable! # XXX should this check for latest version? 
filename = os.path.join(self.install_dir, 'setuptools.pth') if os.path.islink(filename): os.unlink(filename) f = open(filename, 'wt') f.write(self.pth_file.make_relative(dist.location) + '\n') f.close() def unpack_progress(self, src, dst): # Progress filter for unpacking log.debug("Unpacking %s to %s", src, dst) return dst # only unpack-and-compile skips files for dry run def unpack_and_compile(self, egg_path, destination): to_compile = [] to_chmod = [] def pf(src, dst): if dst.endswith('.py') and not src.startswith('EGG-INFO/'): to_compile.append(dst) elif dst.endswith('.dll') or dst.endswith('.so'): to_chmod.append(dst) self.unpack_progress(src, dst) return not self.dry_run and dst or None unpack_archive(egg_path, destination, pf) self.byte_compile(to_compile) if not self.dry_run: for f in to_chmod: mode = ((os.stat(f)[stat.ST_MODE]) | 0o555) & 0o7755 chmod(f, mode) def byte_compile(self, to_compile): if sys.dont_write_bytecode: return from distutils.util import byte_compile try: # try to make the byte compile messages quieter log.set_verbosity(self.verbose - 1) byte_compile(to_compile, optimize=0, force=1, dry_run=self.dry_run) if self.optimize: byte_compile( to_compile, optimize=self.optimize, force=1, dry_run=self.dry_run, ) finally: log.set_verbosity(self.verbose) # restore original verbosity __no_default_msg = textwrap.dedent(""" bad install directory or PYTHONPATH You are attempting to install a package to a directory that is not on PYTHONPATH and which Python does not read ".pth" files from. The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: %s and your PYTHONPATH environment variable currently contains: %r Here are some of your options for correcting the problem: * You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files * You can add the installation directory to the PYTHONPATH environment variable. (It must then also be on PYTHONPATH whenever you run Python and want to use the package(s) you are installing.) * You can set up the installation directory to support ".pth" files by using one of the approaches described here: https://setuptools.readthedocs.io/en/latest/easy_install.html#custom-installation-locations Please make the appropriate changes for your system and try again.""").lstrip() def no_default_version_msg(self): template = self.__no_default_msg return template % (self.install_dir, os.environ.get('PYTHONPATH', '')) def install_site_py(self): """Make sure there's a site.py in the target dir, if needed""" if self.sitepy_installed: return # already did it, or don't need to sitepy = os.path.join(self.install_dir, "site.py") source = resource_string("setuptools", "site-patch.py") source = source.decode('utf-8') current = "" if os.path.exists(sitepy): log.debug("Checking existing site.py in %s", self.install_dir) with io.open(sitepy) as strm: current = strm.read() if not current.startswith('def __boot():'): raise DistutilsError( "%s is not a setuptools-generated site.py; please" " remove it." 
% sitepy ) if current != source: log.info("Creating %s", sitepy) if not self.dry_run: ensure_directory(sitepy) with io.open(sitepy, 'w', encoding='utf-8') as strm: strm.write(source) self.byte_compile([sitepy]) self.sitepy_installed = True def create_home_path(self): """Create directories under ~.""" if not self.user: return home = convert_path(os.path.expanduser("~")) for name, path in six.iteritems(self.config_vars): if path.startswith(home) and not os.path.isdir(path): self.debug_print("os.makedirs('%s', 0o700)" % path) os.makedirs(path, 0o700) INSTALL_SCHEMES = dict( posix=dict( install_dir='$base/lib/python$py_version_short/site-packages', script_dir='$base/bin', ), ) DEFAULT_SCHEME = dict( install_dir='$base/Lib/site-packages', script_dir='$base/Scripts', ) def _expand(self, *attrs): config_vars = self.get_finalized_command('install').config_vars if self.prefix: # Set default install_dir/scripts from --prefix config_vars = config_vars.copy() config_vars['base'] = self.prefix scheme = self.INSTALL_SCHEMES.get(os.name, self.DEFAULT_SCHEME) for attr, val in scheme.items(): if getattr(self, attr, None) is None: setattr(self, attr, val) from distutils.util import subst_vars for attr in attrs: val = getattr(self, attr) if val is not None: val = subst_vars(val, config_vars) if os.name == 'posix': val = os.path.expanduser(val) setattr(self, attr, val) def _pythonpath(): items = os.environ.get('PYTHONPATH', '').split(os.pathsep) return filter(None, items) def get_site_dirs(): """ Return a list of 'site' dirs """ sitedirs = [] # start with PYTHONPATH sitedirs.extend(_pythonpath()) prefixes = [sys.prefix] if sys.exec_prefix != sys.prefix: prefixes.append(sys.exec_prefix) for prefix in prefixes: if prefix: if sys.platform in ('os2emx', 'riscos'): sitedirs.append(os.path.join(prefix, "Lib", "site-packages")) elif os.sep == '/': sitedirs.extend([ os.path.join( prefix, "lib", "python" + sys.version[:3], "site-packages", ), os.path.join(prefix, "lib", "site-python"), ]) else: sitedirs.extend([ prefix, os.path.join(prefix, "lib", "site-packages"), ]) if sys.platform == 'darwin': # for framework builds *only* we add the standard Apple # locations. 
Currently only per-user, but /Library and # /Network/Library could be added too if 'Python.framework' in prefix: home = os.environ.get('HOME') if home: home_sp = os.path.join( home, 'Library', 'Python', sys.version[:3], 'site-packages', ) sitedirs.append(home_sp) lib_paths = get_path('purelib'), get_path('platlib') for site_lib in lib_paths: if site_lib not in sitedirs: sitedirs.append(site_lib) if site.ENABLE_USER_SITE: sitedirs.append(site.USER_SITE) try: sitedirs.extend(site.getsitepackages()) except AttributeError: pass sitedirs = list(map(normalize_path, sitedirs)) return sitedirs def expand_paths(inputs): """Yield sys.path directories that might contain "old-style" packages""" seen = {} for dirname in inputs: dirname = normalize_path(dirname) if dirname in seen: continue seen[dirname] = 1 if not os.path.isdir(dirname): continue files = os.listdir(dirname) yield dirname, files for name in files: if not name.endswith('.pth'): # We only care about the .pth files continue if name in ('easy-install.pth', 'setuptools.pth'): # Ignore .pth files that we control continue # Read the .pth file f = open(os.path.join(dirname, name)) lines = list(yield_lines(f)) f.close() # Yield existing non-dupe, non-import directory lines from it for line in lines: if not line.startswith("import"): line = normalize_path(line.rstrip()) if line not in seen: seen[line] = 1 if not os.path.isdir(line): continue yield line, os.listdir(line) def extract_wininst_cfg(dist_filename): """Extract configuration data from a bdist_wininst .exe Returns a configparser.RawConfigParser, or None """ f = open(dist_filename, 'rb') try: endrec = zipfile._EndRecData(f) if endrec is None: return None prepended = (endrec[9] - endrec[5]) - endrec[6] if prepended < 12: # no wininst data here return None f.seek(prepended - 12) tag, cfglen, bmlen = struct.unpack("<iii", f.read(12)) if tag not in (0x1234567A, 0x1234567B): return None # not a valid tag f.seek(prepended - (12 + cfglen)) init = {'version': '', 'target_version': ''} cfg = configparser.RawConfigParser(init) try: part = f.read(cfglen) # Read up to the first null byte. config = part.split(b'\0', 1)[0] # Now the config is in bytes, but for RawConfigParser, it should # be text, so decode it. 
config = config.decode(sys.getfilesystemencoding()) cfg.readfp(six.StringIO(config)) except configparser.Error: return None if not cfg.has_section('metadata') or not cfg.has_section('Setup'): return None return cfg finally: f.close() def get_exe_prefixes(exe_filename): """Get exe->egg path translations for a given .exe file""" prefixes = [ ('PURELIB/', ''), ('PLATLIB/pywin32_system32', ''), ('PLATLIB/', ''), ('SCRIPTS/', 'EGG-INFO/scripts/'), ('DATA/lib/site-packages', ''), ] z = zipfile.ZipFile(exe_filename) try: for info in z.infolist(): name = info.filename parts = name.split('/') if len(parts) == 3 and parts[2] == 'PKG-INFO': if parts[1].endswith('.egg-info'): prefixes.insert(0, ('/'.join(parts[:2]), 'EGG-INFO/')) break if len(parts) != 2 or not name.endswith('.pth'): continue if name.endswith('-nspkg.pth'): continue if parts[0].upper() in ('PURELIB', 'PLATLIB'): contents = z.read(name) if six.PY3: contents = contents.decode() for pth in yield_lines(contents): pth = pth.strip().replace('\\', '/') if not pth.startswith('import'): prefixes.append((('%s/%s/' % (parts[0], pth)), '')) finally: z.close() prefixes = [(x.lower(), y) for x, y in prefixes] prefixes.sort() prefixes.reverse() return prefixes class PthDistributions(Environment): """A .pth file with Distribution paths in it""" dirty = False def __init__(self, filename, sitedirs=()): self.filename = filename self.sitedirs = list(map(normalize_path, sitedirs)) self.basedir = normalize_path(os.path.dirname(self.filename)) self._load() Environment.__init__(self, [], None, None) for path in yield_lines(self.paths): list(map(self.add, find_distributions(path, True))) def _load(self): self.paths = [] saw_import = False seen = dict.fromkeys(self.sitedirs) if os.path.isfile(self.filename): f = open(self.filename, 'rt') for line in f: if line.startswith('import'): saw_import = True continue path = line.rstrip() self.paths.append(path) if not path.strip() or path.strip().startswith('#'): continue # skip non-existent paths, in case somebody deleted a package # manually, and duplicate paths as well path = self.paths[-1] = normalize_path( os.path.join(self.basedir, path) ) if not os.path.exists(path) or path in seen: self.paths.pop() # skip it self.dirty = True # we cleaned up, so we're dirty now :) continue seen[path] = 1 f.close() if self.paths and not saw_import: self.dirty = True # ensure anything we touch has import wrappers while self.paths and not self.paths[-1].strip(): self.paths.pop() def save(self): """Write changed .pth file back to disk""" if not self.dirty: return rel_paths = list(map(self.make_relative, self.paths)) if rel_paths: log.debug("Saving %s", self.filename) lines = self._wrap_lines(rel_paths) data = '\n'.join(lines) + '\n' if os.path.islink(self.filename): os.unlink(self.filename) with open(self.filename, 'wt') as f: f.write(data) elif os.path.exists(self.filename): log.debug("Deleting empty %s", self.filename) os.unlink(self.filename) self.dirty = False @staticmethod def _wrap_lines(lines): return lines def add(self, dist): """Add `dist` to the distribution map""" new_path = ( dist.location not in self.paths and ( dist.location not in self.sitedirs or # account for '.' 
being in PYTHONPATH dist.location == os.getcwd() ) ) if new_path: self.paths.append(dist.location) self.dirty = True Environment.add(self, dist) def remove(self, dist): """Remove `dist` from the distribution map""" while dist.location in self.paths: self.paths.remove(dist.location) self.dirty = True Environment.remove(self, dist) def make_relative(self, path): npath, last = os.path.split(normalize_path(path)) baselen = len(self.basedir) parts = [last] sep = os.altsep == '/' and '/' or os.sep while len(npath) >= baselen: if npath == self.basedir: parts.append(os.curdir) parts.reverse() return sep.join(parts) npath, last = os.path.split(npath) parts.append(last) else: return path class RewritePthDistributions(PthDistributions): @classmethod def _wrap_lines(cls, lines): yield cls.prelude for line in lines: yield line yield cls.postlude prelude = _one_liner(""" import sys sys.__plen = len(sys.path) """) postlude = _one_liner(""" import sys new = sys.path[sys.__plen:] del sys.path[sys.__plen:] p = getattr(sys, '__egginsert', 0) sys.path[p:p] = new sys.__egginsert = p + len(new) """) if os.environ.get('SETUPTOOLS_SYS_PATH_TECHNIQUE', 'raw') == 'rewrite': PthDistributions = RewritePthDistributions def _first_line_re(): """ Return a regular expression based on first_line_re suitable for matching strings. """ if isinstance(first_line_re.pattern, str): return first_line_re # first_line_re in Python >=3.1.4 and >=3.2.1 is a bytes pattern. return re.compile(first_line_re.pattern.decode()) def auto_chmod(func, arg, exc): if func in [os.unlink, os.remove] and os.name == 'nt': chmod(arg, stat.S_IWRITE) return func(arg) et, ev, _ = sys.exc_info() six.reraise(et, (ev[0], ev[1] + (" %s %s" % (func, arg)))) def update_dist_caches(dist_path, fix_zipimporter_caches): """ Fix any globally cached `dist_path` related data `dist_path` should be a path of a newly installed egg distribution (zipped or unzipped). sys.path_importer_cache contains finder objects that have been cached when importing data from the original distribution. Any such finders need to be cleared since the replacement distribution might be packaged differently, e.g. a zipped egg distribution might get replaced with an unzipped egg folder or vice versa. Having the old finders cached may then cause Python to attempt loading modules from the replacement distribution using an incorrect loader. zipimport.zipimporter objects are Python loaders charged with importing data packaged inside zip archives. If stale loaders referencing the original distribution are left behind, they can fail to load modules from the replacement distribution. E.g. if an old zipimport.zipimporter instance is used to load data from a new zipped egg archive, it may cause the operation to attempt to locate the requested data in the wrong location - one indicated by the original distribution's zip archive directory information. Such an operation may then fail outright, e.g. report having read a 'bad local file header', or even worse, it may fail silently & return invalid data. zipimport._zip_directory_cache contains cached zip archive directory information for all existing zipimport.zipimporter instances and all such instances connected to the same archive share the same cached directory information. If asked, and the underlying Python implementation allows it, we can fix all existing zipimport.zipimporter instances instead of having to track them down and remove them one by one, by updating their shared cached zip archive directory information.
This, of course, assumes that the replacement distribution is packaged as a zipped egg. If not asked to fix existing zipimport.zipimporter instances, we still do our best to clear any remaining zipimport.zipimporter related cached data that might somehow later get used when attempting to load data from the new distribution and thus cause such load operations to fail. Note that when tracking down such remaining stale data, we can not catch every conceivable usage from here, and we clear only those that we know of and have found to cause problems if left alive. Any remaining caches should be updated by whomever is in charge of maintaining them, i.e. they should be ready to handle us replacing their zip archives with new distributions at runtime. """ # There are several other known sources of stale zipimport.zipimporter # instances that we do not clear here, but might if ever given a reason to # do so: # * Global setuptools pkg_resources.working_set (a.k.a. 'master working # set') may contain distributions which may in turn contain their # zipimport.zipimporter loaders. # * Several zipimport.zipimporter loaders held by local variables further # up the function call stack when running the setuptools installation. # * Already loaded modules may have their __loader__ attribute set to the # exact loader instance used when importing them. Python 3.4 docs state # that this information is intended mostly for introspection and so is # not expected to cause us problems. normalized_path = normalize_path(dist_path) _uncache(normalized_path, sys.path_importer_cache) if fix_zipimporter_caches: _replace_zip_directory_cache_data(normalized_path) else: # Here, even though we do not want to fix existing and now stale # zipimporter cache information, we still want to remove it. Related to # Python's zip archive directory information cache, we clear each of # its stale entries in two phases: # 1. Clear the entry so attempting to access zip archive information # via any existing stale zipimport.zipimporter instances fails. # 2. Remove the entry from the cache so any newly constructed # zipimport.zipimporter instances do not end up using old stale # zip archive directory information. # This whole stale data removal step does not seem strictly necessary, # but has been left in because it was done before we started replacing # the zip archive directory information cache content if possible, and # there are no relevant unit tests that we can depend on to tell us if # this is really needed. _remove_and_clear_zip_directory_cache_data(normalized_path) def _collect_zipimporter_cache_entries(normalized_path, cache): """ Return zipimporter cache entry keys related to a given normalized path. Alternative path spellings (e.g. those using different character case or those using alternative path separators) related to the same path are included. Any sub-path entries are included as well, i.e. those corresponding to zip archives embedded in other zip archives. """ result = [] prefix_len = len(normalized_path) for p in cache: np = normalize_path(p) if (np.startswith(normalized_path) and np[prefix_len:prefix_len + 1] in (os.sep, '')): result.append(p) return result def _update_zipimporter_cache(normalized_path, cache, updater=None): """ Update zipimporter cache data for a given normalized path. Any sub-path entries are processed as well, i.e. those corresponding to zip archives embedded in other zip archives. 
Given updater is a callable taking a cache entry key and the original entry (after already removing the entry from the cache), and expected to update the entry and possibly return a new one to be inserted in its place. Returning None indicates that the entry should not be replaced with a new one. If no updater is given, the cache entries are simply removed without any additional processing, the same as if the updater simply returned None. """ for p in _collect_zipimporter_cache_entries(normalized_path, cache): # N.B. pypy's custom zipimport._zip_directory_cache implementation does # not support the complete dict interface: # * Does not support item assignment, thus not allowing this function # to be used only for removing existing cache entries. # * Does not support the dict.pop() method, forcing us to use the # get/del patterns instead. For more detailed information see the # following links: # https://github.com/pypa/setuptools/issues/202#issuecomment-202913420 # http://bit.ly/2h9itJX old_entry = cache[p] del cache[p] new_entry = updater and updater(p, old_entry) if new_entry is not None: cache[p] = new_entry def _uncache(normalized_path, cache): _update_zipimporter_cache(normalized_path, cache) def _remove_and_clear_zip_directory_cache_data(normalized_path): def clear_and_remove_cached_zip_archive_directory_data(path, old_entry): old_entry.clear() _update_zipimporter_cache( normalized_path, zipimport._zip_directory_cache, updater=clear_and_remove_cached_zip_archive_directory_data) # PyPy Python implementation does not allow directly writing to the # zipimport._zip_directory_cache and so prevents us from attempting to correct # its content. The best we can do there is clear the problematic cache content # and have PyPy repopulate it as needed. The downside is that if there are any # stale zipimport.zipimporter instances laying around, attempting to use them # will fail due to not having its zip archive directory information available # instead of being automatically corrected to use the new correct zip archive # directory information. if '__pypy__' in sys.builtin_module_names: _replace_zip_directory_cache_data = \ _remove_and_clear_zip_directory_cache_data else: def _replace_zip_directory_cache_data(normalized_path): def replace_cached_zip_archive_directory_data(path, old_entry): # N.B. In theory, we could load the zip directory information just # once for all updated path spellings, and then copy it locally and # update its contained path strings to contain the correct # spelling, but that seems like a way too invasive move (this cache # structure is not officially documented anywhere and could in # theory change with new Python releases) for no significant # benefit. old_entry.clear() zipimport.zipimporter(path) old_entry.update(zipimport._zip_directory_cache[path]) return old_entry _update_zipimporter_cache( normalized_path, zipimport._zip_directory_cache, updater=replace_cached_zip_archive_directory_data) def is_python(text, filename='<string>'): "Is this string a valid Python script?" try: compile(text, filename, 'exec') except (SyntaxError, TypeError): return False else: return True def is_sh(executable): """Determine if the specified executable is a .sh (contains a #! line)""" try: with io.open(executable, encoding='latin-1') as fp: magic = fp.read(2) except (OSError, IOError): return executable return magic == '#!' 
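# The zipimporter-cache helpers above are easiest to follow with a concrete
# walk-through. The sketch below is illustrative only and not part of the
# original module: the function name _demo_uncache_sketch and the sample egg
# path are invented, and the sketch assumes a finder for that egg is already
# present in sys.path_importer_cache.
def _demo_uncache_sketch(egg_path='/tmp/Example-1.0-py3.7.egg'):
    """Show how _uncache() drops stale sys.path_importer_cache entries.

    Illustrative sketch; `egg_path` is a hypothetical zipped egg that has
    previously been imported from (so a finder for it has been cached).
    """
    normalized = normalize_path(egg_path)
    stale = _collect_zipimporter_cache_entries(
        normalized, sys.path_importer_cache)
    # _uncache() removes every cache entry whose normalized key is the egg
    # path itself or a sub-path of it (zip archives nested in archives).
    _uncache(normalized, sys.path_importer_cache)
    remaining = _collect_zipimporter_cache_entries(
        normalized, sys.path_importer_cache)
    return stale, remaining  # `remaining` should now be empty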
def nt_quote_arg(arg): """Quote a command line argument according to Windows parsing rules""" return subprocess.list2cmdline([arg]) def is_python_script(script_text, filename): """Is this text, as a whole, a Python script? (as opposed to shell/bat/etc.) """ if filename.endswith('.py') or filename.endswith('.pyw'): return True # extension says it's Python if is_python(script_text, filename): return True # it's syntactically valid Python if script_text.startswith('#!'): # It begins with a '#!' line, so check if 'python' is in it somewhere return 'python' in script_text.splitlines()[0].lower() return False # Not any Python I can recognize try: from os import chmod as _chmod except ImportError: # Jython compatibility def _chmod(*args): pass def chmod(path, mode): log.debug("changing mode of %s to %o", path, mode) try: _chmod(path, mode) except os.error as e: log.debug("chmod failed: %s", e) class CommandSpec(list): """ A command spec for a #! header, specified as a list of arguments akin to those passed to Popen. """ options = [] split_args = dict() @classmethod def best(cls): """ Choose the best CommandSpec class based on environmental conditions. """ return cls @classmethod def _sys_executable(cls): _default = os.path.normpath(sys.executable) return os.environ.get('__PYVENV_LAUNCHER__', _default) @classmethod def from_param(cls, param): """ Construct a CommandSpec from a parameter to build_scripts, which may be None. """ if isinstance(param, cls): return param if isinstance(param, list): return cls(param) if param is None: return cls.from_environment() # otherwise, assume it's a string. return cls.from_string(param) @classmethod def from_environment(cls): return cls([cls._sys_executable()]) @classmethod def from_string(cls, string): """ Construct a command spec from a simple string representing a command line parseable by shlex.split. """ items = shlex.split(string, **cls.split_args) return cls(items) def install_options(self, script_text): self.options = shlex.split(self._extract_options(script_text)) cmdline = subprocess.list2cmdline(self) if not isascii(cmdline): self.options[:0] = ['-x'] @staticmethod def _extract_options(orig_script): """ Extract any options from the first line of the script. """ first = (orig_script + '\n').splitlines()[0] match = _first_line_re().match(first) options = match.group(1) or '' if match else '' return options.strip() def as_header(self): return self._render(self + list(self.options)) @staticmethod def _strip_quotes(item): _QUOTES = '"\'' for q in _QUOTES: if item.startswith(q) and item.endswith(q): return item[1:-1] return item @staticmethod def _render(items): cmdline = subprocess.list2cmdline( CommandSpec._strip_quotes(item.strip()) for item in items) return '#!' + cmdline + '\n' # For pbr compat; will be removed in a future version. sys_executable = CommandSpec._sys_executable() class WindowsCommandSpec(CommandSpec): split_args = dict(posix=False) class ScriptWriter: """ Encapsulates behavior around writing entry point scripts for console and gui apps.
""" template = textwrap.dedent(r""" # EASY-INSTALL-ENTRY-SCRIPT: %(spec)r,%(group)r,%(name)r __requires__ = %(spec)r import re import sys from pkg_resources import load_entry_point if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0]) sys.exit( load_entry_point(%(spec)r, %(group)r, %(name)r)() ) """).lstrip() command_spec_class = CommandSpec @classmethod def get_script_args(cls, dist, executable=None, wininst=False): # for backward compatibility warnings.warn("Use get_args", EasyInstallDeprecationWarning) writer = (WindowsScriptWriter if wininst else ScriptWriter).best() header = cls.get_script_header("", executable, wininst) return writer.get_args(dist, header) @classmethod def get_script_header(cls, script_text, executable=None, wininst=False): # for backward compatibility warnings.warn("Use get_header", EasyInstallDeprecationWarning, stacklevel=2) if wininst: executable = "python.exe" return cls.get_header(script_text, executable) @classmethod def get_args(cls, dist, header=None): """ Yield write_script() argument tuples for a distribution's console_scripts and gui_scripts entry points. """ if header is None: header = cls.get_header() spec = str(dist.as_requirement()) for type_ in 'console', 'gui': group = type_ + '_scripts' for name, ep in dist.get_entry_map(group).items(): cls._ensure_safe_name(name) script_text = cls.template % locals() args = cls._get_script_args(type_, name, header, script_text) for res in args: yield res @staticmethod def _ensure_safe_name(name): """ Prevent paths in *_scripts entry point names. """ has_path_sep = re.search(r'[\\/]', name) if has_path_sep: raise ValueError("Path separators not allowed in script names") @classmethod def get_writer(cls, force_windows): # for backward compatibility warnings.warn("Use best", EasyInstallDeprecationWarning) return WindowsScriptWriter.best() if force_windows else cls.best() @classmethod def best(cls): """ Select the best ScriptWriter for this environment. """ if sys.platform == 'win32' or (os.name == 'java' and os._name == 'nt'): return WindowsScriptWriter.best() else: return cls @classmethod def _get_script_args(cls, type_, name, header, script_text): # Simply write the stub with no extension. yield (name, header + script_text) @classmethod def get_header(cls, script_text="", executable=None): """Create a #! line, getting options (if any) from script_text""" cmd = cls.command_spec_class.best().from_param(executable) cmd.install_options(script_text) return cmd.as_header() class WindowsScriptWriter(ScriptWriter): command_spec_class = WindowsCommandSpec @classmethod def get_writer(cls): # for backward compatibility warnings.warn("Use best", EasyInstallDeprecationWarning) return cls.best() @classmethod def best(cls): """ Select the best ScriptWriter suitable for Windows """ writer_lookup = dict( executable=WindowsExecutableLauncherWriter, natural=cls, ) # for compatibility, use the executable launcher by default launcher = os.environ.get('SETUPTOOLS_LAUNCHER', 'executable') return writer_lookup[launcher] @classmethod def _get_script_args(cls, type_, name, header, script_text): "For Windows, add a .py extension" ext = dict(console='.pya', gui='.pyw')[type_] if ext not in os.environ['PATHEXT'].lower().split(';'): msg = ( "{ext} not listed in PATHEXT; scripts will not be " "recognized as executables." 
).format(**locals()) warnings.warn(msg, UserWarning) old = ['.pya', '.py', '-script.py', '.pyc', '.pyo', '.pyw', '.exe'] old.remove(ext) header = cls._adjust_header(type_, header) blockers = [name + x for x in old] yield name + ext, header + script_text, 't', blockers @classmethod def _adjust_header(cls, type_, orig_header): """ Make sure 'pythonw' is used for gui and 'python' is used for console (regardless of what sys.executable is). """ pattern = 'pythonw.exe' repl = 'python.exe' if type_ == 'gui': pattern, repl = repl, pattern pattern_ob = re.compile(re.escape(pattern), re.IGNORECASE) new_header = pattern_ob.sub(string=orig_header, repl=repl) return new_header if cls._use_header(new_header) else orig_header @staticmethod def _use_header(new_header): """ Should _adjust_header use the replaced header? On non-windows systems, always use. On Windows systems, only use the replaced header if it resolves to an executable on the system. """ clean_header = new_header[2:-1].strip('"') return sys.platform != 'win32' or find_executable(clean_header) class WindowsExecutableLauncherWriter(WindowsScriptWriter): @classmethod def _get_script_args(cls, type_, name, header, script_text): """ For Windows, add a .py extension and an .exe launcher """ if type_ == 'gui': launcher_type = 'gui' ext = '-script.pyw' old = ['.pyw'] else: launcher_type = 'cli' ext = '-script.py' old = ['.py', '.pyc', '.pyo'] hdr = cls._adjust_header(type_, header) blockers = [name + x for x in old] yield (name + ext, hdr + script_text, 't', blockers) yield ( name + '.exe', get_win_launcher(launcher_type), 'b' # write in binary mode ) if not is_64bit(): # install a manifest for the launcher to prevent Windows # from detecting it as an installer (which it will for # launchers like easy_install.exe). Consider only # adding a manifest for launchers detected as installers. # See Distribute #143 for details. m_name = name + '.exe.manifest' yield (m_name, load_launcher_manifest(name), 't') # for backward-compatibility get_script_args = ScriptWriter.get_script_args get_script_header = ScriptWriter.get_script_header def get_win_launcher(type): """ Load the Windows launcher (executable) suitable for launching a script. `type` should be either 'cli' or 'gui' Returns the executable as a byte string.
""" launcher_fn = '%s.exe' % type if is_64bit(): launcher_fn = launcher_fn.replace(".", "-64.") else: launcher_fn = launcher_fn.replace(".", "-32.") return resource_string('setuptools', launcher_fn) def load_launcher_manifest(name): manifest = pkg_resources.resource_string(__name__, 'launcher manifest.xml') if six.PY2: return manifest % vars() else: return manifest.decode('utf-8') % vars() def rmtree(path, ignore_errors=False, onerror=auto_chmod): return shutil.rmtree(path, ignore_errors, onerror) def current_umask(): tmp = os.umask(0o022) os.umask(tmp) return tmp def bootstrap(): # This function is called when setuptools*.egg is run using /bin/sh import setuptools argv0 = os.path.dirname(setuptools.__path__[0]) sys.argv[0] = argv0 sys.argv.append(argv0) main() def main(argv=None, **kw): from setuptools import setup from setuptools.dist import Distribution class DistributionWithoutHelpCommands(Distribution): common_usage = "" def _show_help(self, *args, **kw): with _patch_usage(): Distribution._show_help(self, *args, **kw) if argv is None: argv = sys.argv[1:] with _patch_usage(): setup( script_args=['-q', 'easy_install', '-v'] + argv, script_name=sys.argv[0] or 'easy_install', distclass=DistributionWithoutHelpCommands, **kw ) @contextlib.contextmanager def _patch_usage(): import distutils.core USAGE = textwrap.dedent(""" usage: %(script)s [options] requirement_or_url ... or: %(script)s --help """).lstrip() def gen_usage(script_name): return USAGE % dict( script=os.path.basename(script_name), ) saved = distutils.core.gen_usage distutils.core.gen_usage = gen_usage try: yield finally: distutils.core.gen_usage = saved class EasyInstallDeprecationWarning(SetuptoolsDeprecationWarning): """Class for warning about deprecations in EasyInstall in SetupTools. 
Not ignored by default, unlike DeprecationWarning."""

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/egg_info.py

"""setuptools.command.egg_info Create a distribution's .egg-info directory and contents""" from distutils.filelist import FileList as _FileList from distutils.errors import DistutilsInternalError from distutils.util import convert_path from distutils import log import distutils.errors import distutils.filelist import os import re import sys import io import warnings import time import collections from setuptools.extern import six from setuptools.extern.six.moves import map from setuptools import Command from setuptools.command.sdist import sdist from setuptools.command.sdist import walk_revctrl from setuptools.command.setopt import edit_config from setuptools.command import bdist_egg from pkg_resources import ( parse_requirements, safe_name, parse_version, safe_version, yield_lines, EntryPoint, iter_entry_points, to_filename) import setuptools.unicode_utils as unicode_utils from setuptools.glob import glob from setuptools.extern import packaging from setuptools import SetuptoolsDeprecationWarning def translate_pattern(glob): """ Translate a file path glob like '*.txt' into a regular expression. This differs from fnmatch.translate, which allows wildcards to match directory separators. It also knows about '**/' which matches any number of directories. """ pat = '' # This will split on '/' within [character classes]. This is deliberate. chunks = glob.split(os.path.sep) sep = re.escape(os.sep) valid_char = '[^%s]' % (sep,) for c, chunk in enumerate(chunks): last_chunk = c == len(chunks) - 1 # Chunks that are a literal ** are globstars. They match anything.
        if chunk == '**':
            if last_chunk:
                # Match anything if this is the last component
                pat += '.*'
            else:
                # Match '(name/)*'
                pat += '(?:%s+%s)*' % (valid_char, sep)
            continue  # Break here as the whole path component has been handled

        # Find any special characters in the remainder
        i = 0
        chunk_len = len(chunk)
        while i < chunk_len:
            char = chunk[i]
            if char == '*':
                # Match any number of name characters
                pat += valid_char + '*'
            elif char == '?':
                # Match a name character
                pat += valid_char
            elif char == '[':
                # Character class
                inner_i = i + 1
                # Skip initial !/] chars
                if inner_i < chunk_len and chunk[inner_i] == '!':
                    inner_i = inner_i + 1
                if inner_i < chunk_len and chunk[inner_i] == ']':
                    inner_i = inner_i + 1

                # Loop till the closing ] is found
                while inner_i < chunk_len and chunk[inner_i] != ']':
                    inner_i = inner_i + 1

                if inner_i >= chunk_len:
                    # Got to the end of the string without finding a closing ]
                    # Do not treat this as a matching group, but as a literal [
                    pat += re.escape(char)
                else:
                    # Grab the insides of the [brackets]
                    inner = chunk[i + 1:inner_i]
                    char_class = ''

                    # Class negation
                    if inner[0] == '!':
                        char_class = '^'
                        inner = inner[1:]

                    char_class += re.escape(inner)
                    pat += '[%s]' % (char_class,)

                    # Skip to the end ]
                    i = inner_i
            else:
                pat += re.escape(char)
            i += 1

        # Join each chunk with the dir separator
        if not last_chunk:
            pat += sep

    pat += r'\Z'
    return re.compile(pat, flags=re.MULTILINE | re.DOTALL)
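
# [Editor's example -- not part of the original file.]  Unlike fnmatch, '*'
# and '?' here never cross directory separators; only '**' does.  A quick
# demonstration, assuming a POSIX os.sep:
if __name__ == '__main__':
    assert translate_pattern('*.txt').match('notes.txt')
    assert not translate_pattern('*.txt').match('docs/notes.txt')
    assert translate_pattern('docs/**/*.txt').match('docs/a/b/notes.txt')
    print('translate_pattern examples OK')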

class InfoCommon:
    tag_build = None
    tag_date = None

    @property
    def name(self):
        return safe_name(self.distribution.get_name())

    def tagged_version(self):
        version = self.distribution.get_version()
        # egg_info may be called more than once for a distribution,
        # in which case the version string already contains all tags.
        if self.vtags and version.endswith(self.vtags):
            return safe_version(version)
        return safe_version(version + self.vtags)

    def tags(self):
        version = ''
        if self.tag_build:
            version += self.tag_build
        if self.tag_date:
            version += time.strftime("-%Y%m%d")
        return version
    vtags = property(tags)


class egg_info(InfoCommon, Command):
    description = "create a distribution's .egg-info directory"

    user_options = [
        ('egg-base=', 'e', "directory containing .egg-info directories"
                           " (default: top of the source tree)"),
        ('tag-date', 'd', "Add date stamp (e.g. 20050528) to version number"),
        ('tag-build=', 'b', "Specify explicit tag to add to version number"),
        ('no-date', 'D', "Don't include date stamp [default]"),
    ]

    boolean_options = ['tag-date']
    negative_opt = {
        'no-date': 'tag-date',
    }

    def initialize_options(self):
        self.egg_base = None
        self.egg_name = None
        self.egg_info = None
        self.egg_version = None
        self.broken_egg_info = False

    ####################################
    # allow the 'tag_svn_revision' to be detected and
    # set, supporting sdists built on older Setuptools.
    @property
    def tag_svn_revision(self):
        pass

    @tag_svn_revision.setter
    def tag_svn_revision(self, value):
        pass
    ####################################

    def save_version_info(self, filename):
        """
        Materialize the value of date into the build tag. Install build keys
        in a deterministic order to avoid arbitrary reordering on subsequent
        builds.
        """
        egg_info = collections.OrderedDict()
        # follow the order these keys would have been added
        # when PYTHONHASHSEED=0
        egg_info['tag_build'] = self.tags()
        egg_info['tag_date'] = 0
        edit_config(filename, dict(egg_info=egg_info))

    def finalize_options(self):
        # Note: we need to capture the current value returned
        # by `self.tagged_version()`, so we can later update
        # `self.distribution.metadata.version` without
        # repercussions.
        self.egg_name = self.name
        self.egg_version = self.tagged_version()
        parsed_version = parse_version(self.egg_version)

        try:
            is_version = isinstance(parsed_version, packaging.version.Version)
            spec = (
                "%s==%s" if is_version else "%s===%s"
            )
            list(
                parse_requirements(spec % (self.egg_name, self.egg_version))
            )
        except ValueError:
            raise distutils.errors.DistutilsOptionError(
                "Invalid distribution name or version syntax: %s-%s" %
                (self.egg_name, self.egg_version)
            )

        if self.egg_base is None:
            dirs = self.distribution.package_dir
            self.egg_base = (dirs or {}).get('', os.curdir)

        self.ensure_dirname('egg_base')
        self.egg_info = to_filename(self.egg_name) + '.egg-info'
        if self.egg_base != os.curdir:
            self.egg_info = os.path.join(self.egg_base, self.egg_info)
        if '-' in self.egg_name:
            self.check_broken_egg_info()

        # Set package version for the benefit of dumber commands
        # (e.g. sdist, bdist_wininst, etc.)
        #
        self.distribution.metadata.version = self.egg_version

        # If we bootstrapped around the lack of a PKG-INFO, as might be the
        # case in a fresh checkout, make sure that any special tags get added
        # to the version info
        #
        pd = self.distribution._patched_dist
        if pd is not None and pd.key == self.egg_name.lower():
            pd._version = self.egg_version
            pd._parsed_version = parse_version(self.egg_version)
            self.distribution._patched_dist = None

    def write_or_delete_file(self, what, filename, data, force=False):
        """Write `data` to `filename` or delete if empty

        If `data` is non-empty, this routine is the same as ``write_file()``.
        If `data` is empty but not ``None``, this is the same as calling
        ``delete_file(filename)``.  If `data` is ``None``, then this is a
        no-op unless `filename` exists, in which case a warning is issued
        about the orphaned file (if `force` is false), or deleted (if `force`
        is true).
        """
        if data:
            self.write_file(what, filename, data)
        elif os.path.exists(filename):
            if data is None and not force:
                log.warn(
                    "%s not set in setup(), but %s exists", what, filename
                )
                return
            else:
                self.delete_file(filename)

    def write_file(self, what, filename, data):
        """Write `data` to `filename` (if not a dry run) after announcing it

        `what` is used in a log message to identify what is being written
        to the file.
""" log.info("writing %s to %s", what, filename) if six.PY3: data = data.encode("utf-8") if not self.dry_run: f = open(filename, 'wb') f.write(data) f.close() def delete_file(self, filename): """Delete `filename` (if not a dry run) after announcing it""" log.info("deleting %s", filename) if not self.dry_run: os.unlink(filename) def run(self): self.mkpath(self.egg_info) os.utime(self.egg_info, None) installer = self.distribution.fetch_build_egg for ep in iter_entry_points('egg_info.writers'): ep.require(installer=installer) writer = ep.resolve() writer(self, ep.name, os.path.join(self.egg_info, ep.name)) # Get rid of native_libs.txt if it was put there by older bdist_egg nl = os.path.join(self.egg_info, "native_libs.txt") if os.path.exists(nl): self.delete_file(nl) self.find_sources() def find_sources(self): """Generate SOURCES.txt manifest file""" manifest_filename = os.path.join(self.egg_info, "SOURCES.txt") mm = manifest_maker(self.distribution) mm.manifest = manifest_filename mm.run() self.filelist = mm.filelist def check_broken_egg_info(self): bei = self.egg_name + '.egg-info' if self.egg_base != os.curdir: bei = os.path.join(self.egg_base, bei) if os.path.exists(bei): log.warn( "-" * 78 + '\n' "Note: Your current .egg-info directory has a '-' in its name;" '\nthis will not work correctly with "setup.py develop".\n\n' 'Please rename %s to %s to correct this problem.\n' + '-' * 78, bei, self.egg_info ) self.broken_egg_info = self.egg_info self.egg_info = bei # make it work for now class FileList(_FileList): # Implementations of the various MANIFEST.in commands def process_template_line(self, line): # Parse the line: split it up, make sure the right number of words # is there, and return the relevant words. 'action' is always # defined: it's the first word of the line. Which of the other # three are defined depends on the action; it'll be either # patterns, (dir and patterns), or (dir_pattern). (action, patterns, dir, dir_pattern) = self._parse_template_line(line) # OK, now we know that the action is valid and we have the # right number of words on the line for that action -- so we # can proceed with minimal error-checking. 

class FileList(_FileList):
    # Implementations of the various MANIFEST.in commands

    def process_template_line(self, line):
        # Parse the line: split it up, make sure the right number of words
        # is there, and return the relevant words.  'action' is always
        # defined: it's the first word of the line.  Which of the other
        # three are defined depends on the action; it'll be either
        # patterns, (dir and patterns), or (dir_pattern).
        (action, patterns, dir, dir_pattern) = self._parse_template_line(line)

        # OK, now we know that the action is valid and we have the
        # right number of words on the line for that action -- so we
        # can proceed with minimal error-checking.
        if action == 'include':
            self.debug_print("include " + ' '.join(patterns))
            for pattern in patterns:
                if not self.include(pattern):
                    log.warn("warning: no files found matching '%s'", pattern)

        elif action == 'exclude':
            self.debug_print("exclude " + ' '.join(patterns))
            for pattern in patterns:
                if not self.exclude(pattern):
                    log.warn(("warning: no previously-included files "
                              "found matching '%s'"), pattern)

        elif action == 'global-include':
            self.debug_print("global-include " + ' '.join(patterns))
            for pattern in patterns:
                if not self.global_include(pattern):
                    log.warn(("warning: no files found matching '%s' "
                              "anywhere in distribution"), pattern)

        elif action == 'global-exclude':
            self.debug_print("global-exclude " + ' '.join(patterns))
            for pattern in patterns:
                if not self.global_exclude(pattern):
                    log.warn(("warning: no previously-included files matching "
                              "'%s' found anywhere in distribution"), pattern)

        elif action == 'recursive-include':
            self.debug_print("recursive-include %s %s" %
                             (dir, ' '.join(patterns)))
            for pattern in patterns:
                if not self.recursive_include(dir, pattern):
                    log.warn(("warning: no files found matching '%s' "
                              "under directory '%s'"), pattern, dir)

        elif action == 'recursive-exclude':
            self.debug_print("recursive-exclude %s %s" %
                             (dir, ' '.join(patterns)))
            for pattern in patterns:
                if not self.recursive_exclude(dir, pattern):
                    log.warn(("warning: no previously-included files matching "
                              "'%s' found under directory '%s'"), pattern, dir)

        elif action == 'graft':
            self.debug_print("graft " + dir_pattern)
            if not self.graft(dir_pattern):
                log.warn("warning: no directories found matching '%s'",
                         dir_pattern)

        elif action == 'prune':
            self.debug_print("prune " + dir_pattern)
            if not self.prune(dir_pattern):
                log.warn(("no previously-included directories found "
                          "matching '%s'"), dir_pattern)

        else:
            raise DistutilsInternalError(
                "this cannot happen: invalid action '%s'" % action)

    def _remove_files(self, predicate):
        """
        Remove all files from the file list that match the predicate.
        Return True if any matching files were removed
        """
        found = False
        for i in range(len(self.files) - 1, -1, -1):
            if predicate(self.files[i]):
                self.debug_print(" removing " + self.files[i])
                del self.files[i]
                found = True
        return found

    def include(self, pattern):
        """Include files that match 'pattern'."""
        found = [f for f in glob(pattern) if not os.path.isdir(f)]
        self.extend(found)
        return bool(found)

    def exclude(self, pattern):
        """Exclude files that match 'pattern'."""
        match = translate_pattern(pattern)
        return self._remove_files(match.match)

    def recursive_include(self, dir, pattern):
        """
        Include all files anywhere in 'dir/' that match the pattern.
        """
        full_pattern = os.path.join(dir, '**', pattern)
        found = [f for f in glob(full_pattern, recursive=True)
                 if not os.path.isdir(f)]
        self.extend(found)
        return bool(found)

    def recursive_exclude(self, dir, pattern):
        """
        Exclude any file anywhere in 'dir/' that matches the pattern.
        """
        match = translate_pattern(os.path.join(dir, '**', pattern))
        return self._remove_files(match.match)

    def graft(self, dir):
        """Include all files from 'dir/'."""
        found = [
            item
            for match_dir in glob(dir)
            for item in distutils.filelist.findall(match_dir)
        ]
        self.extend(found)
        return bool(found)

    def prune(self, dir):
        """Filter out files from 'dir/'."""
        match = translate_pattern(os.path.join(dir, '**'))
        return self._remove_files(match.match)
""" if self.allfiles is None: self.findall() match = translate_pattern(os.path.join('**', pattern)) found = [f for f in self.allfiles if match.match(f)] self.extend(found) return bool(found) def global_exclude(self, pattern): """ Exclude all files anywhere that match the pattern. """ match = translate_pattern(os.path.join('**', pattern)) return self._remove_files(match.match) def append(self, item): if item.endswith('\r'): # Fix older sdists built on Windows item = item[:-1] path = convert_path(item) if self._safe_path(path): self.files.append(path) def extend(self, paths): self.files.extend(filter(self._safe_path, paths)) def _repair(self): """ Replace self.files with only safe paths Because some owners of FileList manipulate the underlying ``files`` attribute directly, this method must be called to repair those paths. """ self.files = list(filter(self._safe_path, self.files)) def _safe_path(self, path): enc_warn = "'%s' not %s encodable -- skipping" # To avoid accidental trans-codings errors, first to unicode u_path = unicode_utils.filesys_decode(path) if u_path is None: log.warn("'%s' in unexpected encoding -- skipping" % path) return False # Must ensure utf-8 encodability utf8_path = unicode_utils.try_encode(u_path, "utf-8") if utf8_path is None: log.warn(enc_warn, path, 'utf-8') return False try: # accept is either way checks out if os.path.exists(u_path) or os.path.exists(utf8_path): return True # this will catch any encode errors decoding u_path except UnicodeEncodeError: log.warn(enc_warn, path, sys.getfilesystemencoding()) class manifest_maker(sdist): template = "MANIFEST.in" def initialize_options(self): self.use_defaults = 1 self.prune = 1 self.manifest_only = 1 self.force_manifest = 1 def finalize_options(self): pass def run(self): self.filelist = FileList() if not os.path.exists(self.manifest): self.write_manifest() # it must exist so it'll get in the list self.add_defaults() if os.path.exists(self.template): self.read_template() self.prune_file_list() self.filelist.sort() self.filelist.remove_duplicates() self.write_manifest() def _manifest_normalize(self, path): path = unicode_utils.filesys_decode(path) return path.replace(os.sep, '/') def write_manifest(self): """ Write the file list in 'self.filelist' to the manifest file named by 'self.manifest'. 
""" self.filelist._repair() # Now _repairs should encodability, but not unicode files = [self._manifest_normalize(f) for f in self.filelist.files] msg = "writing manifest file '%s'" % self.manifest self.execute(write_file, (self.manifest, files), msg) def warn(self, msg): if not self._should_suppress_warning(msg): sdist.warn(self, msg) @staticmethod def _should_suppress_warning(msg): """ suppress missing-file warnings from sdist """ return re.match(r"standard file .*not found", msg) def add_defaults(self): sdist.add_defaults(self) self.check_license() self.filelist.append(self.template) self.filelist.append(self.manifest) rcfiles = list(walk_revctrl()) if rcfiles: self.filelist.extend(rcfiles) elif os.path.exists(self.manifest): self.read_manifest() if os.path.exists("setup.py"): # setup.py should be included by default, even if it's not # the script called to create the sdist self.filelist.append("setup.py") ei_cmd = self.get_finalized_command('egg_info') self.filelist.graft(ei_cmd.egg_info) def prune_file_list(self): build = self.get_finalized_command('build') base_dir = self.distribution.get_fullname() self.filelist.prune(build.build_base) self.filelist.prune(base_dir) sep = re.escape(os.sep) self.filelist.exclude_pattern(r'(^|' + sep + r')(RCS|CVS|\.svn)' + sep, is_regex=1) def write_file(filename, contents): """Create a file with the specified name and write 'contents' (a sequence of strings without line terminators) to it. """ contents = "\n".join(contents) # assuming the contents has been vetted for utf-8 encoding contents = contents.encode("utf-8") with open(filename, "wb") as f: # always write POSIX-style manifest f.write(contents) def write_pkg_info(cmd, basename, filename): log.info("writing %s", filename) if not cmd.dry_run: metadata = cmd.distribution.metadata metadata.version, oldver = cmd.egg_version, metadata.version metadata.name, oldname = cmd.egg_name, metadata.name try: # write unescaped data to PKG-INFO, so older pkg_resources # can still parse it metadata.write_pkg_info(cmd.egg_info) finally: metadata.name, metadata.version = oldname, oldver safe = getattr(cmd.distribution, 'zip_safe', None) bdist_egg.write_safety_flag(cmd.egg_info, safe) def warn_depends_obsolete(cmd, basename, filename): if os.path.exists(filename): log.warn( "WARNING: 'depends.txt' is not used by setuptools 0.6!\n" "Use the install_requires/extras_require setup() args instead." 
        )


def _write_requirements(stream, reqs):
    lines = yield_lines(reqs or ())
    append_cr = lambda line: line + '\n'
    lines = map(append_cr, lines)
    stream.writelines(lines)


def write_requirements(cmd, basename, filename):
    dist = cmd.distribution
    data = six.StringIO()
    _write_requirements(data, dist.install_requires)
    extras_require = dist.extras_require or {}
    for extra in sorted(extras_require):
        data.write('\n[{extra}]\n'.format(**vars()))
        _write_requirements(data, extras_require[extra])
    cmd.write_or_delete_file("requirements", filename, data.getvalue())


def write_setup_requirements(cmd, basename, filename):
    data = io.StringIO()
    _write_requirements(data, cmd.distribution.setup_requires)
    cmd.write_or_delete_file("setup-requirements", filename, data.getvalue())


def write_toplevel_names(cmd, basename, filename):
    pkgs = dict.fromkeys(
        [
            k.split('.', 1)[0]
            for k in cmd.distribution.iter_distribution_names()
        ]
    )
    cmd.write_file("top-level names", filename, '\n'.join(sorted(pkgs)) + '\n')


def overwrite_arg(cmd, basename, filename):
    write_arg(cmd, basename, filename, True)


def write_arg(cmd, basename, filename, force=False):
    argname = os.path.splitext(basename)[0]
    value = getattr(cmd.distribution, argname, None)
    if value is not None:
        value = '\n'.join(value) + '\n'
    cmd.write_or_delete_file(argname, filename, value, force)


def write_entries(cmd, basename, filename):
    ep = cmd.distribution.entry_points

    if isinstance(ep, six.string_types) or ep is None:
        data = ep
    elif ep is not None:
        data = []
        for section, contents in sorted(ep.items()):
            if not isinstance(contents, six.string_types):
                contents = EntryPoint.parse_group(section, contents)
                contents = '\n'.join(sorted(map(str, contents.values())))
            data.append('[%s]\n%s\n\n' % (section, contents))
        data = ''.join(data)

    cmd.write_or_delete_file('entry points', filename, data, True)


def get_pkg_info_revision():
    """
    Get a -r### off of PKG-INFO Version in case this is an sdist of
    a subversion revision.
    """
    warnings.warn("get_pkg_info_revision is deprecated.",
                  EggInfoDeprecationWarning)
    if os.path.exists('PKG-INFO'):
        with io.open('PKG-INFO') as f:
            for line in f:
                match = re.match(r"Version:.*-r(\d+)\s*$", line)
                if match:
                    return int(match.group(1))
    return 0


class EggInfoDeprecationWarning(SetuptoolsDeprecationWarning):
    """Class for warning about deprecations in egg_info in setuptools.
    Not ignored by default, unlike DeprecationWarning."""
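
# [Editor's example -- not part of the original file.]  The writer functions
# above (write_requirements, write_entries, ...) are wired into egg_info.run()
# through the 'egg_info.writers' entry-point group; each is called with
# (cmd, basename, filename).  The _write_requirements helper on its own:
if __name__ == '__main__':
    _buf = io.StringIO()
    _write_requirements(_buf, ['requests>=2.0', 'six'])
    print(_buf.getvalue())  # one requirement per line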

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/install.py

from distutils.errors import DistutilsArgError
import inspect
import glob
import warnings
import platform
import distutils.command.install as orig

import setuptools

# Prior to numpy 1.9, NumPy relies on the '_install' name, so provide it for
# now. See https://github.com/pypa/setuptools/issues/199/
_install = orig.install


class install(orig.install):
    """Use easy_install to install the package, w/dependencies"""

    user_options = orig.install.user_options + [
        ('old-and-unmanageable', None, "Try not to use this!"),
        ('single-version-externally-managed', None,
         "used by system package builders to create 'flat' eggs"),
    ]
    boolean_options = orig.install.boolean_options + [
        'old-and-unmanageable', 'single-version-externally-managed',
    ]
    new_commands = [
        ('install_egg_info', lambda self: True),
        ('install_scripts', lambda self: True),
    ]
    _nc = dict(new_commands)

    def initialize_options(self):
        orig.install.initialize_options(self)
        self.old_and_unmanageable = None
        self.single_version_externally_managed = None

    def finalize_options(self):
        orig.install.finalize_options(self)
        if self.root:
            self.single_version_externally_managed = True
        elif self.single_version_externally_managed:
            if not self.root and not self.record:
                raise DistutilsArgError(
                    "You must specify --record or --root when building system"
                    " packages"
                )

    def handle_extra_path(self):
        if self.root or self.single_version_externally_managed:
            # explicit backward-compatibility mode, allow extra_path to work
            return orig.install.handle_extra_path(self)

        # Ignore extra_path when installing an egg (or being run by another
        # command without --root or --single-version-externally-managed)
        self.path_file = None
        self.extra_dirs = ''

    def run(self):
        # Explicit request for old-style install?  Just do it
        if self.old_and_unmanageable or self.single_version_externally_managed:
            return orig.install.run(self)

        if not self._called_from_setup(inspect.currentframe()):
            # Run in backward-compatibility mode to support bdist_* commands.
            orig.install.run(self)
        else:
            self.do_egg_install()

    @staticmethod
    def _called_from_setup(run_frame):
        """
        Attempt to detect whether run() was called from setup() or by another
        command.  If called by setup(), the parent caller will be the
        'run_command' method in 'distutils.dist', and *its* caller will be
        the 'run_commands' method.  If called any other way, the
        immediate caller *might* be 'run_command', but it won't have been
        called by 'run_commands'. Return True in that case or if a call stack
        is unavailable. Return False otherwise.
        """
        if run_frame is None:
            msg = "Call stack not available. bdist_* commands may fail."
            warnings.warn(msg)
            if platform.python_implementation() == 'IronPython':
                msg = "For best results, pass -X:Frames to enable call stack."
                warnings.warn(msg)
            return True
        res = inspect.getouterframes(run_frame)[2]
        caller, = res[:1]
        info = inspect.getframeinfo(caller)
        caller_module = caller.f_globals.get('__name__', '')
        return (
            caller_module == 'distutils.dist'
            and info.function == 'run_commands'
        )

    def do_egg_install(self):
        easy_install = self.distribution.get_command_class('easy_install')

        cmd = easy_install(
            self.distribution, args="x", root=self.root, record=self.record,
        )
        cmd.ensure_finalized()  # finalize before bdist_egg munges install cmd
        cmd.always_copy_from = '.'  # make sure local-dir eggs get installed

        # pick up setup-dir .egg files only: no .egg-info
        cmd.package_index.scan(glob.glob('*.egg'))

        self.run_command('bdist_egg')
        args = [self.distribution.get_command_obj('bdist_egg').egg_output]

        if setuptools.bootstrap_install_from:
            # Bootstrap self-installation of setuptools
            args.insert(0, setuptools.bootstrap_install_from)

        cmd.args = args
        cmd.run()
        setuptools.bootstrap_install_from = None


# XXX Python 3.1 doesn't see _nc if this is inside the class
install.sub_commands = (
    [cmd for cmd in orig.install.sub_commands if cmd[0] not in install._nc] +
    install.new_commands
)
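
# [Editor's example -- not part of the original file.]  A stripped-down
# analogue of the frame inspection used by _called_from_setup(): look one
# frame up the stack and report which function is calling.
if __name__ == '__main__':
    def _whose_caller():
        frame = inspect.currentframe()
        outer = inspect.getouterframes(frame)[1]
        return outer[3]  # index 3 is the function name

    def _caller():
        return _whose_caller()

    print(_caller())  # prints '_caller'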

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/install_egg_info.py

from distutils import log, dir_util
import os

from setuptools import Command
from setuptools import namespaces
from setuptools.archive_util import unpack_archive
import pkg_resources


class install_egg_info(namespaces.Installer, Command):
    """Install an .egg-info directory for the package"""

    description = "Install an .egg-info directory for the package"

    user_options = [
        ('install-dir=', 'd', "directory to install to"),
    ]

    def initialize_options(self):
        self.install_dir = None

    def finalize_options(self):
        self.set_undefined_options('install_lib',
                                   ('install_dir', 'install_dir'))
        ei_cmd = self.get_finalized_command("egg_info")
        basename = pkg_resources.Distribution(
            None, None, ei_cmd.egg_name, ei_cmd.egg_version
        ).egg_name() + '.egg-info'
        self.source = ei_cmd.egg_info
        self.target = os.path.join(self.install_dir, basename)
        self.outputs = []

    def run(self):
        self.run_command('egg_info')
        if os.path.isdir(self.target) and not os.path.islink(self.target):
            dir_util.remove_tree(self.target, dry_run=self.dry_run)
        elif os.path.exists(self.target):
            self.execute(os.unlink, (self.target,), "Removing " + self.target)
        if not self.dry_run:
            pkg_resources.ensure_directory(self.target)
        self.execute(
            self.copytree, (), "Copying %s to %s" % (self.source, self.target)
        )
        self.install_namespaces()

    def get_outputs(self):
        return self.outputs

    def copytree(self):
        # Copy the .egg-info tree to site-packages
        def skimmer(src, dst):
            # filter out source-control directories; note that 'src' is
            # always a '/'-separated path, regardless of platform.  'dst' is
            # a platform-specific path.
            for skip in '.svn/', 'CVS/':
                if src.startswith(skip) or '/' + skip in src:
                    return None
            self.outputs.append(dst)
            log.debug("Copying %s to %s", src, dst)
            return dst

        unpack_archive(self.source, self.target, skimmer)


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/install_lib.py

import os
import imp
from itertools import product, starmap
import distutils.command.install_lib as orig


class install_lib(orig.install_lib):
    """Don't add compiled flags to filenames of non-Python files"""

    def run(self):
        self.build()
        outfiles = self.install()
        if outfiles is not None:
            # always compile, in case we have any extension stubs to deal with
            self.byte_compile(outfiles)

    def get_exclusions(self):
        """
        Return a collections.Sized collections.Container of paths to be
        excluded for single_version_externally_managed installations.
        """
        all_packages = (
            pkg
            for ns_pkg in self._get_SVEM_NSPs()
            for pkg in self._all_packages(ns_pkg)
        )

        excl_specs = product(all_packages, self._gen_exclusion_paths())
        return set(starmap(self._exclude_pkg_path, excl_specs))

    def _exclude_pkg_path(self, pkg, exclusion_path):
        """
        Given a package name and exclusion path within that package,
        compute the full exclusion path.
        """
        parts = pkg.split('.') + [exclusion_path]
        return os.path.join(self.install_dir, *parts)

    @staticmethod
    def _all_packages(pkg_name):
        """
        >>> list(install_lib._all_packages('foo.bar.baz'))
        ['foo.bar.baz', 'foo.bar', 'foo']
        """
        while pkg_name:
            yield pkg_name
            pkg_name, sep, child = pkg_name.rpartition('.')

    def _get_SVEM_NSPs(self):
        """
        Get namespace packages (list) but only for
        single_version_externally_managed installations and empty otherwise.
        """
        # TODO: is it necessary to short-circuit here? i.e. what's the cost
        # if get_finalized_command is called even when namespace_packages is
        # False?
        if not self.distribution.namespace_packages:
            return []

        install_cmd = self.get_finalized_command('install')
        svem = install_cmd.single_version_externally_managed

        return self.distribution.namespace_packages if svem else []

    @staticmethod
    def _gen_exclusion_paths():
        """
        Generate file paths to be excluded for namespace packages (bytecode
        cache files).
        """
        # always exclude the package module itself
        yield '__init__.py'

        yield '__init__.pyc'
        yield '__init__.pyo'

        if not hasattr(imp, 'get_tag'):
            return

        base = os.path.join('__pycache__', '__init__.' + imp.get_tag())
        yield base + '.pyc'
        yield base + '.pyo'
        yield base + '.opt-1.pyc'
        yield base + '.opt-2.pyc'

    def copy_tree(
            self, infile, outfile,
            preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1
    ):
        assert preserve_mode and preserve_times and not preserve_symlinks
        exclude = self.get_exclusions()

        if not exclude:
            return orig.install_lib.copy_tree(self, infile, outfile)

        # Exclude namespace package __init__.py* files from the output
        from setuptools.archive_util import unpack_directory
        from distutils import log

        outfiles = []

        def pf(src, dst):
            if dst in exclude:
                log.warn("Skipping installation of %s (namespace package)",
                         dst)
                return False

            log.info("copying %s -> %s", src, os.path.dirname(dst))
            outfiles.append(dst)
            return dst

        unpack_directory(infile, outfile, pf)
        return outfiles

    def get_outputs(self):
        outputs = orig.install_lib.get_outputs(self)
        exclude = self.get_exclusions()
        if exclude:
            return [f for f in outputs if f not in exclude]
        return outputs


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/install_scripts.py

from distutils import log
import distutils.command.install_scripts as orig
import os
import sys

from pkg_resources import Distribution, PathMetadata, ensure_directory

class install_scripts(orig.install_scripts):
    """Do normal script install, plus any egg_info wrapper scripts"""

    def initialize_options(self):
        orig.install_scripts.initialize_options(self)
        self.no_ep = False

    def run(self):
        import setuptools.command.easy_install as ei

        self.run_command("egg_info")
        if self.distribution.scripts:
            orig.install_scripts.run(self)  # run first to set up self.outfiles
        else:
            self.outfiles = []
        if self.no_ep:
            # don't install entry point scripts into .egg file!
            return

        ei_cmd = self.get_finalized_command("egg_info")
        dist = Distribution(
            ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),
            ei_cmd.egg_name, ei_cmd.egg_version,
        )
        bs_cmd = self.get_finalized_command('build_scripts')
        exec_param = getattr(bs_cmd, 'executable', None)
        bw_cmd = self.get_finalized_command("bdist_wininst")
        is_wininst = getattr(bw_cmd, '_is_running', False)
        writer = ei.ScriptWriter
        if is_wininst:
            exec_param = "python.exe"
            writer = ei.WindowsScriptWriter
        if exec_param == sys.executable:
            # In case the path to the Python executable contains a space,
            # wrap it so it's not split up.
            exec_param = [exec_param]
        # resolve the writer to the environment
        writer = writer.best()
        cmd = writer.command_spec_class.best().from_param(exec_param)
        for args in writer.get_args(dist, cmd.as_header()):
            self.write_script(*args)

    def write_script(self, script_name, contents, mode="t", *ignored):
        """Write an executable file to the scripts directory"""
        from setuptools.command.easy_install import chmod, current_umask

        log.info("Installing %s script to %s", script_name, self.install_dir)
        target = os.path.join(self.install_dir, script_name)
        self.outfiles.append(target)

        mask = current_umask()
        if not self.dry_run:
            ensure_directory(target)
            f = open(target, "w" + mode)
            f.write(contents)
            f.close()
            chmod(target, 0o777 - mask)


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/py36compat.py

import os
from glob import glob
from distutils.util import convert_path
from distutils.command import sdist

from setuptools.extern.six.moves import filter


class sdist_add_defaults:
    """
    Mix-in providing forward-compatibility for functionality as found in
    distutils on Python 3.7.

    Do not edit the code in this class except to update functionality
    as implemented in distutils. Instead, override in the subclass.
    """

    def add_defaults(self):
        """Add all the default files to self.filelist:
          - README or README.txt
          - setup.py
          - test/test*.py
          - all pure Python modules mentioned in setup script
          - all files pointed by package_data (build_py)
          - all files defined in data_files.
          - all files defined as scripts.
          - all C sources listed as part of extensions or C libraries
            in the setup script (doesn't catch C headers!)
        Warns if (README or README.txt) or setup.py are missing; everything
        else is optional.
        """
        self._add_defaults_standards()
        self._add_defaults_optional()
        self._add_defaults_python()
        self._add_defaults_data_files()
        self._add_defaults_ext()
        self._add_defaults_c_libs()
        self._add_defaults_scripts()

    @staticmethod
    def _cs_path_exists(fspath):
        """
        Case-sensitive path existence check

        >>> sdist_add_defaults._cs_path_exists(__file__)
        True

        >>> sdist_add_defaults._cs_path_exists(__file__.upper())
        False
        """
        if not os.path.exists(fspath):
            return False
        # make absolute so we always have a directory
        abspath = os.path.abspath(fspath)
        directory, filename = os.path.split(abspath)
        return filename in os.listdir(directory)

    def _add_defaults_standards(self):
        standards = [self.READMES, self.distribution.script_name]
        for fn in standards:
            if isinstance(fn, tuple):
                alts = fn
                got_it = False
                for fn in alts:
                    if self._cs_path_exists(fn):
                        got_it = True
                        self.filelist.append(fn)
                        break

                if not got_it:
                    self.warn("standard file not found: should have one of " +
                              ', '.join(alts))
            else:
                if self._cs_path_exists(fn):
                    self.filelist.append(fn)
                else:
                    self.warn("standard file '%s' not found" % fn)

    def _add_defaults_optional(self):
        optional = ['test/test*.py', 'setup.cfg']
        for pattern in optional:
            files = filter(os.path.isfile, glob(pattern))
            self.filelist.extend(files)

    def _add_defaults_python(self):
        # build_py is used to get:
        #  - python modules
        #  - files defined in package_data
        build_py = self.get_finalized_command('build_py')

        # getting python files
        if self.distribution.has_pure_modules():
            self.filelist.extend(build_py.get_source_files())

        # getting package_data files
        # (computed in build_py.data_files by build_py.finalize_options)
        for pkg, src_dir, build_dir, filenames in build_py.data_files:
            for filename in filenames:
                self.filelist.append(os.path.join(src_dir, filename))

    def _add_defaults_data_files(self):
        # getting distribution.data_files
        if self.distribution.has_data_files():
            for item in self.distribution.data_files:
                if isinstance(item, str):
                    # plain file
                    item = convert_path(item)
                    if os.path.isfile(item):
                        self.filelist.append(item)
                else:
                    # a (dirname, filenames) tuple
                    dirname, filenames = item
                    for f in filenames:
                        f = convert_path(f)
                        if os.path.isfile(f):
                            self.filelist.append(f)

    def _add_defaults_ext(self):
        if self.distribution.has_ext_modules():
            build_ext = self.get_finalized_command('build_ext')
            self.filelist.extend(build_ext.get_source_files())

    def _add_defaults_c_libs(self):
        if self.distribution.has_c_libraries():
            build_clib = self.get_finalized_command('build_clib')
            self.filelist.extend(build_clib.get_source_files())

    def _add_defaults_scripts(self):
        if self.distribution.has_scripts():
            build_scripts = self.get_finalized_command('build_scripts')
            self.filelist.extend(build_scripts.get_source_files())
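
# [Editor's example -- not part of the original file.]  Why the
# case-sensitive check above matters: on case-insensitive filesystems
# (macOS, Windows), os.path.exists('readme.TXT') can be True even though the
# file on disk is 'README.txt'; comparing against the real directory listing
# avoids packaging the wrong name.
if __name__ == '__main__':
    print(sdist_add_defaults._cs_path_exists(__file__))          # True
    print(sdist_add_defaults._cs_path_exists(__file__.upper()))  # False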

if hasattr(sdist.sdist, '_add_defaults_standards'):
    # disable the functionality already available upstream
    class sdist_add_defaults:
        pass


cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/register.py

from distutils import log
import distutils.command.register as orig


class register(orig.register):
    __doc__ = orig.register.__doc__

    def run(self):
        try:
            # Make sure that we are using valid current name/version info
            self.run_command('egg_info')
            orig.register.run(self)
        finally:
            self.announce(
                "WARNING: Registering is deprecated, use twine to "
                "upload instead (https://pypi.org/p/twine/)",
                log.WARN
            )

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/rotate.py

from distutils.util import convert_path
from distutils import log
from distutils.errors import DistutilsOptionError
import os
import shutil

from setuptools.extern import six
from setuptools import Command


class rotate(Command):
    """Delete older distributions"""

    description = "delete older distributions, keeping N newest files"
    user_options = [
        ('match=', 'm', "patterns to match (required)"),
        ('dist-dir=', 'd', "directory where the distributions are"),
        ('keep=', 'k', "number of matching distributions to keep"),
    ]

    boolean_options = []

    def initialize_options(self):
        self.match = None
        self.dist_dir = None
        self.keep = None

    def finalize_options(self):
        if self.match is None:
            raise DistutilsOptionError(
                "Must specify one or more (comma-separated) match patterns "
                "(e.g. '.zip' or '.egg')"
            )
        if self.keep is None:
            raise DistutilsOptionError("Must specify number of files to keep")
        try:
            self.keep = int(self.keep)
        except ValueError:
            raise DistutilsOptionError("--keep must be an integer")
        if isinstance(self.match, six.string_types):
            self.match = [
                convert_path(p.strip()) for p in self.match.split(',')
            ]
        self.set_undefined_options('bdist', ('dist_dir', 'dist_dir'))

    def run(self):
        self.run_command("egg_info")
        from glob import glob

        for pattern in self.match:
            pattern = self.distribution.get_name() + '*' + pattern
            files = glob(os.path.join(self.dist_dir, pattern))
            files = [(os.path.getmtime(f), f) for f in files]
            files.sort()
            files.reverse()

            log.info("%d file(s) matching %s", len(files), pattern)
            files = files[self.keep:]
            for (t, f) in files:
                log.info("Deleting %s", f)
                if not self.dry_run:
                    if os.path.isdir(f):
                        shutil.rmtree(f)
                    else:
                        os.unlink(f)
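
# [Editor's example -- not part of the original file.]  The heart of
# rotate.run(): decorate with mtimes, sort newest-first, keep the first
# ``keep`` entries and delete the rest.  With hypothetical (mtime, name)
# pairs and keep=2:
if __name__ == '__main__':
    _files = [(100, 'dist/pkg-1.0.egg'),
              (300, 'dist/pkg-3.0.egg'),
              (200, 'dist/pkg-2.0.egg')]
    _files.sort()
    _files.reverse()
    print([f for (_t, f) in _files[2:]])  # ['dist/pkg-1.0.egg']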

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/saveopts.py

from setuptools.command.setopt import edit_config, option_base


class saveopts(option_base):
    """Save command-line options to a file"""

    description = "save supplied options to setup.cfg or other config file"

    def run(self):
        dist = self.distribution
        settings = {}

        for cmd in dist.command_options:
            if cmd == 'saveopts':
                continue  # don't save our own options!

            for opt, (src, val) in dist.get_option_dict(cmd).items():
                if src == "command line":
                    settings.setdefault(cmd, {})[opt] = val

        edit_config(self.filename, settings, self.dry_run)
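
# [Editor's example -- not part of the original file.]  saveopts collects
# options whose source is "command line" into a {command: {option: value}}
# mapping for edit_config, so e.g.
#     python setup.py egg_info --tag-build=.dev saveopts
# would persist [egg_info] tag_build = .dev into setup.cfg.  The same write,
# done directly against a temporary file:
if __name__ == '__main__':
    import os
    import tempfile
    _fd, _path = tempfile.mkstemp(suffix='.cfg')
    os.close(_fd)
    edit_config(_path, {'egg_info': {'tag_build': '.dev'}})
    with open(_path) as _f:
        print(_f.read())
    os.remove(_path)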

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/sdist.py

from distutils import log
import distutils.command.sdist as orig
import os
import sys
import io
import contextlib

from setuptools.extern import six

from .py36compat import sdist_add_defaults

import pkg_resources

_default_revctrl = list


def walk_revctrl(dirname=''):
    """Find all files under revision control"""
    for ep in pkg_resources.iter_entry_points('setuptools.file_finders'):
        for item in ep.load()(dirname):
            yield item


class sdist(sdist_add_defaults, orig.sdist):
    """Smart sdist that finds anything supported by revision control"""

    user_options = [
        ('formats=', None,
         "formats for source distribution (comma-separated list)"),
        ('keep-temp', 'k',
         "keep the distribution tree around after creating " +
         "archive file(s)"),
        ('dist-dir=', 'd',
         "directory to put the source distribution archive(s) in "
         "[default: dist]"),
    ]

    negative_opt = {}

    README_EXTENSIONS = ['', '.rst', '.txt', '.md']
    READMES = tuple('README{0}'.format(ext) for ext in README_EXTENSIONS)

    def run(self):
        self.run_command('egg_info')
        ei_cmd = self.get_finalized_command('egg_info')
        self.filelist = ei_cmd.filelist
        self.filelist.append(os.path.join(ei_cmd.egg_info, 'SOURCES.txt'))
        self.check_readme()

        # Run sub commands
        for cmd_name in self.get_sub_commands():
            self.run_command(cmd_name)

        self.make_distribution()

        dist_files = getattr(self.distribution, 'dist_files', [])
        for file in self.archive_files:
            data = ('sdist', '', file)
            if data not in dist_files:
                dist_files.append(data)

    def initialize_options(self):
        orig.sdist.initialize_options(self)

        self._default_to_gztar()

    def _default_to_gztar(self):
        # only needed on Python prior to 3.6.
        if sys.version_info >= (3, 6, 0, 'beta', 1):
            return
        self.formats = ['gztar']

    def make_distribution(self):
        """
        Workaround for #516
        """
        with self._remove_os_link():
            orig.sdist.make_distribution(self)

    @staticmethod
    @contextlib.contextmanager
    def _remove_os_link():
        """
        In a context, remove and restore os.link if it exists
        """

        class NoValue:
            pass

        orig_val = getattr(os, 'link', NoValue)
        try:
            del os.link
        except Exception:
            pass
        try:
            yield
        finally:
            if orig_val is not NoValue:
                setattr(os, 'link', orig_val)

    def __read_template_hack(self):
        # This grody hack closes the template file (MANIFEST.in) if an
        # exception occurs during read_template.
        # Doing so prevents an error when easy_install attempts to delete
        # the file.
        try:
            orig.sdist.read_template(self)
        except Exception:
            _, _, tb = sys.exc_info()
            tb.tb_next.tb_frame.f_locals['template'].close()
            raise

    # Beginning with Python 2.7.2, 3.1.4, and 3.2.1, this leaky file handle
    # has been fixed, so only override the method if we're using an earlier
    # Python.
    has_leaky_handle = (
        sys.version_info < (2, 7, 2) or
        (3, 0) <= sys.version_info < (3, 1, 4) or
        (3, 2) <= sys.version_info < (3, 2, 1)
    )
    if has_leaky_handle:
        read_template = __read_template_hack

    def _add_defaults_python(self):
        """getting python files"""
        if self.distribution.has_pure_modules():
            build_py = self.get_finalized_command('build_py')
            self.filelist.extend(build_py.get_source_files())
            # This functionality is incompatible with include_package_data,
            # and will in fact create an infinite recursion if
            # include_package_data is True.  Use of include_package_data will
            # imply that distutils-style automatic handling of package_data
            # is disabled
            if not self.distribution.include_package_data:
                for _, src_dir, _, filenames in build_py.data_files:
                    self.filelist.extend([os.path.join(src_dir, filename)
                                          for filename in filenames])

    def _add_defaults_data_files(self):
        try:
            if six.PY2:
                sdist_add_defaults._add_defaults_data_files(self)
            else:
                super()._add_defaults_data_files()
        except TypeError:
            log.warn("data_files contains unexpected objects")

    def check_readme(self):
        for f in self.READMES:
            if os.path.exists(f):
                return
        else:
            self.warn(
                "standard file not found: should have one of " +
                ', '.join(self.READMES)
            )

    def make_release_tree(self, base_dir, files):
        orig.sdist.make_release_tree(self, base_dir, files)

        # Save any egg_info command line options used to create this sdist
        dest = os.path.join(base_dir, 'setup.cfg')
        if hasattr(os, 'link') and os.path.exists(dest):
            # unlink and re-copy, since it might be hard-linked, and
            # we don't want to change the source version
            os.unlink(dest)
            self.copy_file('setup.cfg', dest)

        self.get_finalized_command('egg_info').save_version_info(dest)

    def _manifest_is_not_generated(self):
        # check for special comment used in 2.7.1 and higher
        if not os.path.isfile(self.manifest):
            return False

        with io.open(self.manifest, 'rb') as fp:
            first_line = fp.readline()
        return (first_line !=
                '# file GENERATED by distutils, do NOT edit\n'.encode())

    def read_manifest(self):
        """Read the manifest file (named by 'self.manifest') and use it to
        fill in 'self.filelist', the list of files to include in the source
        distribution.
        """
        log.info("reading manifest file '%s'", self.manifest)
        manifest = open(self.manifest, 'rb')
        for line in manifest:
            # The manifest must contain UTF-8. See #303.
            if six.PY3:
                try:
                    line = line.decode('UTF-8')
                except UnicodeDecodeError:
                    log.warn("%r not UTF-8 decodable -- skipping" % line)
                    continue
            # ignore comments and blank lines
            line = line.strip()
            if line.startswith('#') or not line:
                continue
            self.filelist.append(line)
        manifest.close()
""" opts = self.distribution.get_option_dict('metadata') # ignore the source of the value _, license_file = opts.get('license_file', (None, None)) if license_file is None: log.debug("'license_file' option was not specified") return if not os.path.exists(license_file): log.warn("warning: Failed to find the configured license file '%s'", license_file) return self.filelist.append(license_file) ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/setopt.py�������������������������0000644�0001750�0001750�00000011735�00000000000�027777� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������from distutils.util import convert_path from distutils import log from distutils.errors import DistutilsOptionError import distutils import os from setuptools.extern.six.moves import configparser from setuptools import Command __all__ = ['config_file', 'edit_config', 'option_base', 'setopt'] def config_file(kind="local"): """Get the filename of the distutils, local, global, or per-user config `kind` must be one of "local", "global", or "user" """ if kind == 'local': return 'setup.cfg' if kind == 'global': return os.path.join( os.path.dirname(distutils.__file__), 'distutils.cfg' ) if kind == 'user': dot = os.name == 'posix' and '.' or '' return os.path.expanduser(convert_path("~/%spydistutils.cfg" % dot)) raise ValueError( "config_file() type must be 'local', 'global', or 'user'", kind ) def edit_config(filename, settings, dry_run=False): """Edit a configuration file to include `settings` `settings` is a dictionary of dictionaries or ``None`` values, keyed by command/section name. A ``None`` value means to delete the entire section, while a dictionary lists settings to be changed or deleted in that section. A setting of ``None`` means to delete that setting. 
""" log.debug("Reading configuration from %s", filename) opts = configparser.RawConfigParser() opts.read([filename]) for section, options in settings.items(): if options is None: log.info("Deleting section [%s] from %s", section, filename) opts.remove_section(section) else: if not opts.has_section(section): log.debug("Adding new section [%s] to %s", section, filename) opts.add_section(section) for option, value in options.items(): if value is None: log.debug( "Deleting %s.%s from %s", section, option, filename ) opts.remove_option(section, option) if not opts.options(section): log.info("Deleting empty [%s] section from %s", section, filename) opts.remove_section(section) else: log.debug( "Setting %s.%s to %r in %s", section, option, value, filename ) opts.set(section, option, value) log.info("Writing %s", filename) if not dry_run: with open(filename, 'w') as f: opts.write(f) class option_base(Command): """Abstract base class for commands that mess with config files""" user_options = [ ('global-config', 'g', "save options to the site-wide distutils.cfg file"), ('user-config', 'u', "save options to the current user's pydistutils.cfg file"), ('filename=', 'f', "configuration file to use (default=setup.cfg)"), ] boolean_options = [ 'global-config', 'user-config', ] def initialize_options(self): self.global_config = None self.user_config = None self.filename = None def finalize_options(self): filenames = [] if self.global_config: filenames.append(config_file('global')) if self.user_config: filenames.append(config_file('user')) if self.filename is not None: filenames.append(self.filename) if not filenames: filenames.append(config_file('local')) if len(filenames) > 1: raise DistutilsOptionError( "Must specify only one configuration file option", filenames ) self.filename, = filenames class setopt(option_base): """Save command-line options to a file""" description = "set an option in setup.cfg or another config file" user_options = [ ('command=', 'c', 'command to set an option for'), ('option=', 'o', 'option to set'), ('set-value=', 's', 'value of the option'), ('remove', 'r', 'remove (unset) the value'), ] + option_base.user_options boolean_options = option_base.boolean_options + ['remove'] def initialize_options(self): option_base.initialize_options(self) self.command = None self.option = None self.set_value = None self.remove = None def finalize_options(self): option_base.finalize_options(self) if self.command is None or self.option is None: raise DistutilsOptionError("Must specify --command *and* --option") if self.set_value is None and not self.remove: raise DistutilsOptionError("Must specify --set-value or --remove") def run(self): edit_config( self.filename, { self.command: {self.option.replace('-', '_'): self.set_value} }, self.dry_run ) �����������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/test.py

import os
import operator
import sys
import contextlib
import itertools
import unittest
from distutils.errors import DistutilsError, DistutilsOptionError
from distutils import log
from unittest import TestLoader

from setuptools.extern import six
from setuptools.extern.six.moves import map, filter

from pkg_resources import (resource_listdir, resource_exists, normalize_path,
                           working_set, _namespace_packages, evaluate_marker,
                           add_activation_listener, require, EntryPoint)
from setuptools import Command
from .build_py import _unique_everseen

__metaclass__ = type


class ScanningLoader(TestLoader):

    def __init__(self):
        TestLoader.__init__(self)
        self._visited = set()

    def loadTestsFromModule(self, module, pattern=None):
        """Return a suite of all tests cases contained in the given module

        If the module is a package, load tests from all the modules in it.
        If the module has an ``additional_tests`` function, call it and add
        the return value to the tests.
        """
        if module in self._visited:
            return None
        self._visited.add(module)

        tests = []
        tests.append(TestLoader.loadTestsFromModule(self, module))

        if hasattr(module, "additional_tests"):
            tests.append(module.additional_tests())

        if hasattr(module, '__path__'):
            for file in resource_listdir(module.__name__, ''):
                if file.endswith('.py') and file != '__init__.py':
                    submodule = module.__name__ + '.' + file[:-3]
                else:
                    if resource_exists(module.__name__, file + '/__init__.py'):
                        submodule = module.__name__ + '.' + file
                    else:
                        continue
                tests.append(self.loadTestsFromName(submodule))

        if len(tests) != 1:
            return self.suiteClass(tests)
        else:
            return tests[0]  # don't create a nested suite for only one return


# adapted from jaraco.classes.properties:NonDataProperty
class NonDataProperty:
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self.fget(obj)


class test(Command):
    """Command to run unit tests after in-place build"""

    description = "run unit tests after in-place build"

    user_options = [
        ('test-module=', 'm', "Run 'test_suite' in specified module"),
        ('test-suite=', 's',
         "Run single test, case or suite (e.g. 'module.test_suite')"),
        ('test-runner=', 'r', "Test runner to use"),
    ]
    def initialize_options(self):
        self.test_suite = None
        self.test_module = None
        self.test_loader = None
        self.test_runner = None

    def finalize_options(self):

        if self.test_suite and self.test_module:
            msg = "You may specify a module or a suite, but not both"
            raise DistutilsOptionError(msg)

        if self.test_suite is None:
            if self.test_module is None:
                self.test_suite = self.distribution.test_suite
            else:
                self.test_suite = self.test_module + ".test_suite"

        if self.test_loader is None:
            self.test_loader = getattr(self.distribution, 'test_loader', None)
        if self.test_loader is None:
            self.test_loader = "setuptools.command.test:ScanningLoader"
        if self.test_runner is None:
            self.test_runner = getattr(self.distribution, 'test_runner', None)

    @NonDataProperty
    def test_args(self):
        return list(self._test_args())

    def _test_args(self):
        if not self.test_suite and sys.version_info >= (2, 7):
            yield 'discover'
        if self.verbose:
            yield '--verbose'
        if self.test_suite:
            yield self.test_suite

    def with_project_on_sys_path(self, func):
        """
        Backward compatibility for project_on_sys_path context.
        """
        with self.project_on_sys_path():
            func()

    @contextlib.contextmanager
    def project_on_sys_path(self, include_dists=[]):
        with_2to3 = six.PY3 and getattr(self.distribution, 'use_2to3', False)

        if with_2to3:
            # If we run 2to3 we can not do this inplace:

            # Ensure metadata is up-to-date
            self.reinitialize_command('build_py', inplace=0)
            self.run_command('build_py')
            bpy_cmd = self.get_finalized_command("build_py")
            build_path = normalize_path(bpy_cmd.build_lib)

            # Build extensions
            self.reinitialize_command('egg_info', egg_base=build_path)
            self.run_command('egg_info')

            self.reinitialize_command('build_ext', inplace=0)
            self.run_command('build_ext')
        else:
            # Without 2to3 inplace works fine:
            self.run_command('egg_info')

            # Build extensions in-place
            self.reinitialize_command('build_ext', inplace=1)
            self.run_command('build_ext')

        ei_cmd = self.get_finalized_command("egg_info")

        old_path = sys.path[:]
        old_modules = sys.modules.copy()

        try:
            project_path = normalize_path(ei_cmd.egg_base)
            sys.path.insert(0, project_path)
            working_set.__init__()
            add_activation_listener(lambda dist: dist.activate())
            require('%s==%s' % (ei_cmd.egg_name, ei_cmd.egg_version))
            with self.paths_on_pythonpath([project_path]):
                yield
        finally:
            sys.path[:] = old_path
            sys.modules.clear()
            sys.modules.update(old_modules)
            working_set.__init__()

    @staticmethod
    @contextlib.contextmanager
    def paths_on_pythonpath(paths):
        """
        Add the indicated paths to the head of the PYTHONPATH environment
        variable so that subprocesses will also see the packages at
        these paths.

        Do this in a context that restores the value on exit.
        """
        nothing = object()
        orig_pythonpath = os.environ.get('PYTHONPATH', nothing)
        current_pythonpath = os.environ.get('PYTHONPATH', '')
        try:
            prefix = os.pathsep.join(_unique_everseen(paths))
            to_join = filter(None, [prefix, current_pythonpath])
            new_path = os.pathsep.join(to_join)
            if new_path:
                os.environ['PYTHONPATH'] = new_path
            yield
        finally:
            if orig_pythonpath is nothing:
                os.environ.pop('PYTHONPATH', None)
            else:
                os.environ['PYTHONPATH'] = orig_pythonpath
""" ir_d = dist.fetch_build_eggs(dist.install_requires) tr_d = dist.fetch_build_eggs(dist.tests_require or []) er_d = dist.fetch_build_eggs( v for k, v in dist.extras_require.items() if k.startswith(':') and evaluate_marker(k[1:]) ) return itertools.chain(ir_d, tr_d, er_d) def run(self): installed_dists = self.install_dists(self.distribution) cmd = ' '.join(self._argv) if self.dry_run: self.announce('skipping "%s" (dry run)' % cmd) return self.announce('running "%s"' % cmd) paths = map(operator.attrgetter('location'), installed_dists) with self.paths_on_pythonpath(paths): with self.project_on_sys_path(): self.run_tests() def run_tests(self): # Purge modules under test from sys.modules. The test loader will # re-import them from the build location. Required when 2to3 is used # with namespace packages. if six.PY3 and getattr(self.distribution, 'use_2to3', False): module = self.test_suite.split('.')[0] if module in _namespace_packages: del_modules = [] if module in sys.modules: del_modules.append(module) module += '.' for name in sys.modules: if name.startswith(module): del_modules.append(name) list(map(sys.modules.__delitem__, del_modules)) test = unittest.main( None, None, self._argv, testLoader=self._resolve_as_ep(self.test_loader), testRunner=self._resolve_as_ep(self.test_runner), exit=False, ) if not test.result.wasSuccessful(): msg = 'Test failed: %s' % test.result self.announce(msg, log.ERROR) raise DistutilsError(msg) @property def _argv(self): return ['unittest'] + self.test_args @staticmethod def _resolve_as_ep(val): """ Load the indicated attribute value, called, as a as if it were specified as an entry point. """ if val is None: return parsed = EntryPoint.parse("x=" + val) return parsed.resolve()() �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/upload.py�������������������������0000644�0001750�0001750�00000015233�00000000000�027742� 

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/upload.py

import io
import os
import hashlib
import getpass

from base64 import standard_b64encode

from distutils import log
from distutils.command import upload as orig
from distutils.spawn import spawn

from distutils.errors import DistutilsError

from setuptools.extern.six.moves.urllib.request import urlopen, Request
from setuptools.extern.six.moves.urllib.error import HTTPError
from setuptools.extern.six.moves.urllib.parse import urlparse


class upload(orig.upload):
    """
    Override default upload behavior to obtain password
    in a variety of different ways.
    """
    def run(self):
        try:
            orig.upload.run(self)
        finally:
            self.announce(
                "WARNING: Uploading via this command is deprecated, use twine "
                "to upload instead (https://pypi.org/p/twine/)",
                log.WARN
            )

    def finalize_options(self):
        orig.upload.finalize_options(self)
        self.username = (
            self.username or
            getpass.getuser()
        )
        # Attempt to obtain password. Short circuit evaluation at the first
        # sign of success.
        self.password = (
            self.password or
            self._load_password_from_keyring() or
            self._prompt_for_password()
        )

    def upload_file(self, command, pyversion, filename):
        # Makes sure the repository URL is compliant
        schema, netloc, url, params, query, fragments = \
            urlparse(self.repository)
        if params or query or fragments:
            raise AssertionError("Incompatible url %s" % self.repository)

        if schema not in ('http', 'https'):
            raise AssertionError("unsupported schema " + schema)

        # Sign if requested
        if self.sign:
            gpg_args = ["gpg", "--detach-sign", "-a", filename]
            if self.identity:
                gpg_args[2:2] = ["--local-user", self.identity]
            spawn(gpg_args,
                  dry_run=self.dry_run)

        # Fill in the data - send all the meta-data in case we need to
        # register a new release
        with open(filename, 'rb') as f:
            content = f.read()

        meta = self.distribution.metadata

        data = {
            # action
            ':action': 'file_upload',
            'protocol_version': '1',

            # identify release
            'name': meta.get_name(),
            'version': meta.get_version(),

            # file content
            'content': (os.path.basename(filename), content),
            'filetype': command,
            'pyversion': pyversion,
            'md5_digest': hashlib.md5(content).hexdigest(),

            # additional meta-data
            'metadata_version': str(meta.get_metadata_version()),
            'summary': meta.get_description(),
            'home_page': meta.get_url(),
            'author': meta.get_contact(),
            'author_email': meta.get_contact_email(),
            'license': meta.get_licence(),
            'description': meta.get_long_description(),
            'keywords': meta.get_keywords(),
            'platform': meta.get_platforms(),
            'classifiers': meta.get_classifiers(),
            'download_url': meta.get_download_url(),
            # PEP 314
            'provides': meta.get_provides(),
            'requires': meta.get_requires(),
            'obsoletes': meta.get_obsoletes(),
        }

        data['comment'] = ''

        if self.sign:
            data['gpg_signature'] = (os.path.basename(filename) + ".asc",
                                     open(filename + ".asc", "rb").read())

        # set up the authentication
        user_pass = (self.username + ":" + self.password).encode('ascii')
        # The exact encoding of the authentication string is debated.
        # Anyway PyPI only accepts ascii for both username or password.
auth = "Basic " + standard_b64encode(user_pass).decode('ascii') # Build up the MIME payload for the POST data boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254' sep_boundary = b'\r\n--' + boundary.encode('ascii') end_boundary = sep_boundary + b'--\r\n' body = io.BytesIO() for key, value in data.items(): title = '\r\nContent-Disposition: form-data; name="%s"' % key # handle multiple entries for the same name if not isinstance(value, list): value = [value] for value in value: if type(value) is tuple: title += '; filename="%s"' % value[0] value = value[1] else: value = str(value).encode('utf-8') body.write(sep_boundary) body.write(title.encode('utf-8')) body.write(b"\r\n\r\n") body.write(value) body.write(end_boundary) body = body.getvalue() msg = "Submitting %s to %s" % (filename, self.repository) self.announce(msg, log.INFO) # build the Request headers = { 'Content-type': 'multipart/form-data; boundary=%s' % boundary, 'Content-length': str(len(body)), 'Authorization': auth, } request = Request(self.repository, data=body, headers=headers) # send the data try: result = urlopen(request) status = result.getcode() reason = result.msg except HTTPError as e: status = e.code reason = e.msg except OSError as e: self.announce(str(e), log.ERROR) raise if status == 200: self.announce('Server response (%s): %s' % (status, reason), log.INFO) if self.show_response: text = getattr(self, '_read_pypi_response', lambda x: None)(result) if text is not None: msg = '\n'.join(('-' * 75, text, '-' * 75)) self.announce(msg, log.INFO) else: msg = 'Upload failed (%s): %s' % (status, reason) self.announce(msg, log.ERROR) raise DistutilsError(msg) def _load_password_from_keyring(self): """ Attempt to load password from keyring. Suppress Exceptions. """ try: keyring = __import__('keyring') return keyring.get_password(self.repository, self.username) except Exception: pass def _prompt_for_password(self): """ Prompt for a password on the tty. Suppress Exceptions. 
""" try: return getpass.getpass() except (Exception, KeyboardInterrupt): pass ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/command/upload_docs.py��������������������0000644�0001750�0001750�00000016217�00000000000�030755� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- """upload_docs Implements a Distutils 'upload_docs' subcommand (upload documentation to PyPI's pythonhosted.org). """ from base64 import standard_b64encode from distutils import log from distutils.errors import DistutilsOptionError import os import socket import zipfile import tempfile import shutil import itertools import functools from setuptools.extern import six from setuptools.extern.six.moves import http_client, urllib from pkg_resources import iter_entry_points from .upload import upload def _encode(s): errors = 'surrogateescape' if six.PY3 else 'strict' return s.encode('utf-8', errors) class upload_docs(upload): # override the default repository as upload_docs isn't # supported by Warehouse (and won't be). 
    @staticmethod
    def _build_part(item, sep_boundary):
        key, values = item
        title = '\nContent-Disposition: form-data; name="%s"' % key
        # handle multiple entries for the same name
        if not isinstance(values, list):
            values = [values]
        for value in values:
            if isinstance(value, tuple):
                title += '; filename="%s"' % value[0]
                value = value[1]
            else:
                value = _encode(value)
            yield sep_boundary
            yield _encode(title)
            yield b"\n\n"
            yield value
            if value and value[-1:] == b'\r':
                yield b'\n'  # write an extra newline (lurve Macs)

    @classmethod
    def _build_multipart(cls, data):
        """
        Build up the MIME payload for the POST data
        """
        boundary = b'--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'
        sep_boundary = b'\n--' + boundary
        end_boundary = sep_boundary + b'--'
        end_items = end_boundary, b"\n",
        builder = functools.partial(
            cls._build_part,
            sep_boundary=sep_boundary,
        )
        part_groups = map(builder, data.items())
        parts = itertools.chain.from_iterable(part_groups)
        body_items = itertools.chain(parts, end_items)
        content_type = 'multipart/form-data; boundary=%s' % boundary.decode('ascii')
        return b''.join(body_items), content_type
    def upload_file(self, filename):
        with open(filename, 'rb') as f:
            content = f.read()
        meta = self.distribution.metadata
        data = {
            ':action': 'doc_upload',
            'name': meta.get_name(),
            'content': (os.path.basename(filename), content),
        }
        # set up the authentication
        credentials = _encode(self.username + ':' + self.password)
        credentials = standard_b64encode(credentials)
        if six.PY3:
            credentials = credentials.decode('ascii')
        auth = "Basic " + credentials

        body, ct = self._build_multipart(data)

        msg = "Submitting documentation to %s" % (self.repository)
        self.announce(msg, log.INFO)

        # build the Request
        # We can't use urllib2 since we need to send the Basic
        # auth right with the first request
        schema, netloc, url, params, query, fragments = \
            urllib.parse.urlparse(self.repository)
        assert not params and not query and not fragments
        if schema == 'http':
            conn = http_client.HTTPConnection(netloc)
        elif schema == 'https':
            conn = http_client.HTTPSConnection(netloc)
        else:
            raise AssertionError("unsupported schema " + schema)

        data = ''
        try:
            conn.connect()
            conn.putrequest("POST", url)
            content_type = ct
            conn.putheader('Content-type', content_type)
            conn.putheader('Content-length', str(len(body)))
            conn.putheader('Authorization', auth)
            conn.endheaders()
            conn.send(body)
        except socket.error as e:
            self.announce(str(e), log.ERROR)
            return

        r = conn.getresponse()
        if r.status == 200:
            msg = 'Server response (%s): %s' % (r.status, r.reason)
            self.announce(msg, log.INFO)
        elif r.status == 301:
            location = r.getheader('Location')
            if location is None:
                location = 'https://pythonhosted.org/%s/' % meta.get_name()
            msg = 'Upload successful. Visit %s' % location
            self.announce(msg, log.INFO)
        else:
            msg = 'Upload failed (%s): %s' % (r.status, r.reason)
            self.announce(msg, log.ERROR)
        if self.show_response:
            print('-' * 75, r.read(), '-' * 75)
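# --- Editor's sketch (not part of the distribution): _build_multipart turns
# --- a field dict into a MIME body plus its Content-Type header; the field
# --- values below are hypothetical.
from setuptools.command.upload_docs import upload_docs

body, content_type = upload_docs._build_multipart({
    ':action': 'doc_upload',
    'name': 'example-project',
})
assert isinstance(body, bytes)
assert content_type.startswith('multipart/form-data; boundary=')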

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/config.py

from __future__ import absolute_import, unicode_literals
import io
import os
import sys
import warnings
import functools
from collections import defaultdict
from functools import partial
from functools import wraps
from importlib import import_module

from distutils.errors import DistutilsOptionError, DistutilsFileError
from setuptools.extern.packaging.version import LegacyVersion, parse
from setuptools.extern.six import string_types, PY3


__metaclass__ = type


def read_configuration(
        filepath, find_others=False, ignore_option_errors=False):
    """Read given configuration file and returns options from it as a dict.

    :param str|unicode filepath: Path to configuration file
        to get options from.

    :param bool find_others: Whether to search for other configuration files
        which could be in various places.

    :param bool ignore_option_errors: Whether to silently ignore
        options, values of which could not be resolved (e.g. due to exceptions
        in directives such as file:, attr:, etc.).
        If False exceptions are propagated as expected.

    :rtype: dict
    """
    from setuptools.dist import Distribution, _Distribution

    filepath = os.path.abspath(filepath)

    if not os.path.isfile(filepath):
        raise DistutilsFileError(
            'Configuration file %s does not exist.' % filepath)

    current_directory = os.getcwd()
    os.chdir(os.path.dirname(filepath))

    try:
        dist = Distribution()

        filenames = dist.find_config_files() if find_others else []
        if filepath not in filenames:
            filenames.append(filepath)

        _Distribution.parse_config_files(dist, filenames=filenames)

        handlers = parse_configuration(
            dist, dist.command_options,
            ignore_option_errors=ignore_option_errors)

    finally:
        os.chdir(current_directory)

    return configuration_to_dict(handlers)


def _get_option(target_obj, key):
    """
    Given a target object and option key, get that option from
    the target object, either through a get_{key} method or
    from an attribute directly.
    """
    getter_name = 'get_{key}'.format(**locals())
    by_attribute = functools.partial(getattr, target_obj, key)
    getter = getattr(target_obj, getter_name, by_attribute)
    return getter()


def configuration_to_dict(handlers):
    """Returns configuration data gathered by given handlers as a dict.

    :param list[ConfigHandler] handlers: Handlers list,
        usually from parse_configuration()

    :rtype: dict
    """
    config_dict = defaultdict(dict)

    for handler in handlers:
        for option in handler.set_options:
            value = _get_option(handler.target_obj, option)
            config_dict[handler.section_prefix][option] = value

    return config_dict


def parse_configuration(
        distribution, command_options, ignore_option_errors=False):
    """Performs additional parsing of configuration options
    for a distribution.

    Returns a list of used option handlers.

    :param Distribution distribution:
    :param dict command_options:
    :param bool ignore_option_errors: Whether to silently ignore
        options, values of which could not be resolved (e.g. due to exceptions
        in directives such as file:, attr:, etc.).
        If False exceptions are propagated as expected.
    :rtype: list
    """
    options = ConfigOptionsHandler(
        distribution, command_options, ignore_option_errors)
    options.parse()

    meta = ConfigMetadataHandler(
        distribution.metadata, command_options, ignore_option_errors,
        distribution.package_dir)
    meta.parse()

    return meta, options
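# --- Editor's usage sketch (not part of the distribution), kept commented out
# --- so it does not run at import time; the path and values are hypothetical:
#
#     from setuptools.config import read_configuration
#
#     conf = read_configuration('setup.cfg')
#     conf['metadata'].get('version')          # e.g. '2.1.2'
#     conf['options'].get('install_requires')  # e.g. ['cysignals']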
""" def __init__(self, target_obj, options, ignore_option_errors=False): sections = {} section_prefix = self.section_prefix for section_name, section_options in options.items(): if not section_name.startswith(section_prefix): continue section_name = section_name.replace(section_prefix, '').strip('.') sections[section_name] = section_options self.ignore_option_errors = ignore_option_errors self.target_obj = target_obj self.sections = sections self.set_options = [] @property def parsers(self): """Metadata item name to parser function mapping.""" raise NotImplementedError( '%s must provide .parsers property' % self.__class__.__name__) def __setitem__(self, option_name, value): unknown = tuple() target_obj = self.target_obj # Translate alias into real name. option_name = self.aliases.get(option_name, option_name) current_value = getattr(target_obj, option_name, unknown) if current_value is unknown: raise KeyError(option_name) if current_value: # Already inhabited. Skipping. return skip_option = False parser = self.parsers.get(option_name) if parser: try: value = parser(value) except Exception: skip_option = True if not self.ignore_option_errors: raise if skip_option: return setter = getattr(target_obj, 'set_%s' % option_name, None) if setter is None: setattr(target_obj, option_name, value) else: setter(value) self.set_options.append(option_name) @classmethod def _parse_list(cls, value, separator=','): """Represents value as a list. Value is split either by separator (defaults to comma) or by lines. :param value: :param separator: List items separator character. :rtype: list """ if isinstance(value, list): # _get_parser_compound case return value if '\n' in value: value = value.splitlines() else: value = value.split(separator) return [chunk.strip() for chunk in value if chunk.strip()] @classmethod def _parse_dict(cls, value): """Represents value as a dict. :param value: :rtype: dict """ separator = '=' result = {} for line in cls._parse_list(value): key, sep, val = line.partition(separator) if sep != separator: raise DistutilsOptionError( 'Unable to parse option value to dict: %s' % value) result[key.strip()] = val.strip() return result @classmethod def _parse_bool(cls, value): """Represents value as boolean. :param value: :rtype: bool """ value = value.lower() return value in ('1', 'true', 'yes') @classmethod def _exclude_files_parser(cls, key): """Returns a parser function to make sure field inputs are not files. Parses a value after getting the key so error messages are more informative. :param key: :rtype: callable """ def parser(value): exclude_directive = 'file:' if value.startswith(exclude_directive): raise ValueError( 'Only strings are accepted for the {0} field, ' 'files are not accepted'.format(key)) return value return parser @classmethod def _parse_file(cls, value): """Represents value as a string, allowing including text from nearest files using `file:` directive. Directive is sandboxed and won't reach anything outside directory with setup.py. 
    @classmethod
    def _parse_file(cls, value):
        """Represents value as a string, allowing including text
        from nearest files using `file:` directive.

        Directive is sandboxed and won't reach anything outside
        directory with setup.py.

        Examples:
            file: README.rst, CHANGELOG.md, src/file.txt

        :param str value:
        :rtype: str
        """
        include_directive = 'file:'

        if not isinstance(value, string_types):
            return value

        if not value.startswith(include_directive):
            return value

        spec = value[len(include_directive):]
        filepaths = (os.path.abspath(path.strip())
                     for path in spec.split(','))
        return '\n'.join(
            cls._read_file(path)
            for path in filepaths
            if (cls._assert_local(path) or True) and os.path.isfile(path)
        )

    @staticmethod
    def _assert_local(filepath):
        if not filepath.startswith(os.getcwd()):
            raise DistutilsOptionError(
                '`file:` directive can not access %s' % filepath)

    @staticmethod
    def _read_file(filepath):
        with io.open(filepath, encoding='utf-8') as f:
            return f.read()

    @classmethod
    def _parse_attr(cls, value, package_dir=None):
        """Represents value as a module attribute.

        Examples:
            attr: package.attr
            attr: package.module.attr

        :param str value:
        :rtype: str
        """
        attr_directive = 'attr:'
        if not value.startswith(attr_directive):
            return value

        attrs_path = value.replace(attr_directive, '').strip().split('.')
        attr_name = attrs_path.pop()

        module_name = '.'.join(attrs_path)
        module_name = module_name or '__init__'

        parent_path = os.getcwd()
        if package_dir:
            if attrs_path[0] in package_dir:
                # A custom path was specified for the module we want to import
                custom_path = package_dir[attrs_path[0]]
                parts = custom_path.rsplit('/', 1)
                if len(parts) > 1:
                    parent_path = os.path.join(os.getcwd(), parts[0])
                    module_name = parts[1]
                else:
                    module_name = custom_path
            elif '' in package_dir:
                # A custom parent directory was specified for all root modules
                parent_path = os.path.join(os.getcwd(), package_dir[''])
        sys.path.insert(0, parent_path)
        try:
            module = import_module(module_name)
            value = getattr(module, attr_name)

        finally:
            sys.path = sys.path[1:]

        return value

    @classmethod
    def _get_parser_compound(cls, *parse_methods):
        """Returns a parser function that composes the given methods.

        Parses a value applying given methods one after another.

        :param parse_methods:
        :rtype: callable
        """
        def parse(value):
            parsed = value

            for method in parse_methods:
                parsed = method(parsed)

            return parsed

        return parse

    @classmethod
    def _parse_section_to_dict(cls, section_options, values_parser=None):
        """Parses section options into a dictionary.

        Optionally applies a given parser to values.

        :param dict section_options:
        :param callable values_parser:
        :rtype: dict
        """
        value = {}
        values_parser = values_parser or (lambda val: val)
        for key, (_, val) in section_options.items():
            value[key] = values_parser(val)
        return value

    def parse_section(self, section_options):
        """Parses configuration file section.

        :param dict section_options:
        """
        for (name, (_, value)) in section_options.items():
            try:
                self[name] = value

            except KeyError:
                pass  # Keep silent for a new option may appear anytime.

    def parse(self):
        """Parses configuration file items from one
        or more related sections.

        """
        for section_name, section_options in self.sections.items():

            method_postfix = ''
            if section_name:  # [section.option] variant
                method_postfix = '_%s' % section_name
            section_parser_method = getattr(
                self,
                # Dots in section names are translated into dunderscores.
                ('parse_section%s' % method_postfix).replace('.', '__'),
                None)

            if section_parser_method is None:
                raise DistutilsOptionError(
                    'Unsupported distribution option section: [%s.%s]' % (
                        self.section_prefix, section_name))

            section_parser_method(section_options)

    def _deprecated_config_handler(self, func, msg, warning_class):
        """ this function will wrap around parameters that are deprecated

        :param msg: deprecation message
        :param warning_class: class of warning exception to be raised
        :param func: function to be wrapped around
        """
        @wraps(func)
        def config_handler(*args, **kwargs):
            warnings.warn(msg, warning_class)
            return func(*args, **kwargs)

        return config_handler


class ConfigMetadataHandler(ConfigHandler):

    section_prefix = 'metadata'

    aliases = {
        'home_page': 'url',
        'summary': 'description',
        'classifier': 'classifiers',
        'platform': 'platforms',
    }

    strict_mode = False
    """We need to keep it loose, to be partially compatible with
    `pbr` and `d2to1` packages which also uses `metadata` section.

    """

    def __init__(self, target_obj, options, ignore_option_errors=False,
                 package_dir=None):
        super(ConfigMetadataHandler, self).__init__(target_obj, options,
                                                    ignore_option_errors)
        self.package_dir = package_dir

    @property
    def parsers(self):
        """Metadata item name to parser function mapping."""
        parse_list = self._parse_list
        parse_file = self._parse_file
        parse_dict = self._parse_dict
        exclude_files_parser = self._exclude_files_parser

        return {
            'platforms': parse_list,
            'keywords': parse_list,
            'provides': parse_list,
            'requires': self._deprecated_config_handler(
                parse_list,
                "The requires parameter is deprecated, please use "
                "install_requires for runtime dependencies.",
                DeprecationWarning),
            'obsoletes': parse_list,
            'classifiers': self._get_parser_compound(parse_file, parse_list),
            'license': exclude_files_parser('license'),
            'description': parse_file,
            'long_description': parse_file,
            'version': self._parse_version,
            'project_urls': parse_dict,
        }

    def _parse_version(self, value):
        """Parses `version` option value.
        :param value:
        :rtype: str
        """
        version = self._parse_file(value)

        if version != value:
            version = version.strip()
            # Be strict about versions loaded from file because it's easy to
            # accidentally include newlines and other unintended content
            if isinstance(parse(version), LegacyVersion):
                tmpl = (
                    'Version loaded from {value} does not '
                    'comply with PEP 440: {version}'
                )
                raise DistutilsOptionError(tmpl.format(**locals()))

            return version

        version = self._parse_attr(value, self.package_dir)

        if callable(version):
            version = version()

        if not isinstance(version, string_types):
            if hasattr(version, '__iter__'):
                version = '.'.join(map(str, version))
            else:
                version = '%s' % version

        return version


class ConfigOptionsHandler(ConfigHandler):

    section_prefix = 'options'

    @property
    def parsers(self):
        """Metadata item name to parser function mapping."""
        parse_list = self._parse_list
        parse_list_semicolon = partial(self._parse_list, separator=';')
        parse_bool = self._parse_bool
        parse_dict = self._parse_dict

        return {
            'zip_safe': parse_bool,
            'use_2to3': parse_bool,
            'include_package_data': parse_bool,
            'package_dir': parse_dict,
            'use_2to3_fixers': parse_list,
            'use_2to3_exclude_fixers': parse_list,
            'convert_2to3_doctests': parse_list,
            'scripts': parse_list,
            'eager_resources': parse_list,
            'dependency_links': parse_list,
            'namespace_packages': parse_list,
            'install_requires': parse_list_semicolon,
            'setup_requires': parse_list_semicolon,
            'tests_require': parse_list_semicolon,
            'packages': self._parse_packages,
            'entry_points': self._parse_file,
            'py_modules': parse_list,
        }

    def _parse_packages(self, value):
        """Parses `packages` option value.

        :param value:
        :rtype: list
        """
        find_directives = ['find:', 'find_namespace:']
        trimmed_value = value.strip()

        if trimmed_value not in find_directives:
            return self._parse_list(value)

        findns = trimmed_value == find_directives[1]
        if findns and not PY3:
            raise DistutilsOptionError(
                'find_namespace: directive is unsupported on Python < 3.3')

        # Read function arguments from a dedicated section.
        find_kwargs = self.parse_section_packages__find(
            self.sections.get('packages.find', {}))

        if findns:
            from setuptools import find_namespace_packages as find_packages
        else:
            from setuptools import find_packages

        return find_packages(**find_kwargs)

    def parse_section_packages__find(self, section_options):
        """Parses `packages.find` configuration file section.

        To be used in conjunction with _parse_packages().

        :param dict section_options:
        """
        section_data = self._parse_section_to_dict(
            section_options, self._parse_list)

        valid_keys = ['where', 'include', 'exclude']

        find_kwargs = dict(
            [(k, v) for k, v in section_data.items()
             if k in valid_keys and v])

        where = find_kwargs.get('where')
        if where is not None:
            find_kwargs['where'] = where[0]  # cast list to single val

        return find_kwargs

    def parse_section_entry_points(self, section_options):
        """Parses `entry_points` configuration file section.

        :param dict section_options:
        """
        parsed = self._parse_section_to_dict(
            section_options, self._parse_list)
        self['entry_points'] = parsed

    def _parse_package_data(self, section_options):
        parsed = self._parse_section_to_dict(
            section_options, self._parse_list)

        root = parsed.get('*')
        if root:
            parsed[''] = root
            del parsed['*']

        return parsed

    def parse_section_package_data(self, section_options):
        """Parses `package_data` configuration file section.

        :param dict section_options:
        """
        self['package_data'] = self._parse_package_data(section_options)

    def parse_section_exclude_package_data(self, section_options):
        """Parses `exclude_package_data` configuration file section.

        :param dict section_options:
        """
        self['exclude_package_data'] = self._parse_package_data(
            section_options)

    def parse_section_extras_require(self, section_options):
        """Parses `extras_require` configuration file section.

        :param dict section_options:
        """
        parse_list = partial(self._parse_list, separator=';')
        self['extras_require'] = self._parse_section_to_dict(
            section_options, parse_list)

    def parse_section_data_files(self, section_options):
        """Parses `data_files` configuration file section.

        :param dict section_options:
        """
        parsed = self._parse_section_to_dict(
            section_options, self._parse_list)
        self['data_files'] = [(k, v) for k, v in parsed.items()]
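# --- Editor's illustration (not part of the distribution) of the low-level
# --- parsers defined on ConfigHandler above; all inputs are hypothetical:
from setuptools.config import ConfigHandler

ConfigHandler._parse_list('foo, bar')      # -> ['foo', 'bar']
ConfigHandler._parse_list('foo\nbar')      # newline-separated form works too
ConfigHandler._parse_dict('a = 1\nb = 2')  # -> {'a': '1', 'b': '2'}
ConfigHandler._parse_bool('Yes')           # -> True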

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/dep_util.py

from distutils.dep_util import newer_group


# yes, this was almost entirely copy-pasted from
# 'newer_pairwise()', this is just another convenience
# function.
def newer_pairwise_group(sources_groups, targets):
    """Walk both arguments in parallel, testing if each source group is newer
    than its corresponding target. Returns a pair of lists (sources_groups,
    targets) where sources is newer than target, according to the semantics
    of 'newer_group()'.
    """
    if len(sources_groups) != len(targets):
        raise ValueError(
            "'sources_group' and 'targets' must be the same length")

    # build a pair of lists (sources_groups, targets) where source is newer
    n_sources = []
    n_targets = []
    for i in range(len(sources_groups)):
        if newer_group(sources_groups[i], targets[i]):
            n_sources.append(sources_groups[i])
            n_targets.append(targets[i])

    return n_sources, n_targets
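# --- Editor's usage sketch (not part of the distribution); the file names
# --- are hypothetical:
from setuptools.dep_util import newer_pairwise_group

sources = [['a.c', 'a.h'], ['b.c']]
targets = ['a.o', 'b.o']
# keeps only the (source_group, target) pairs whose sources are newer
stale_sources, stale_targets = newer_pairwise_group(sources, targets)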
""" if len(sources_groups) != len(targets): raise ValueError("'sources_group' and 'targets' must be the same length") # build a pair of lists (sources_groups, targets) where source is newer n_sources = [] n_targets = [] for i in range(len(sources_groups)): if newer_group(sources_groups[i], targets[i]): n_sources.append(sources_groups[i]) n_targets.append(targets[i]) return n_sources, n_targets �����������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/depends.py��������������������������������0000644�0001750�0001750�00000013315�00000000000�026461� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������import sys import imp import marshal from distutils.version import StrictVersion from imp import PKG_DIRECTORY, PY_COMPILED, PY_SOURCE, PY_FROZEN from .py33compat import Bytecode __all__ = [ 'Require', 'find_module', 'get_module_constant', 'extract_constant' ] class Require: """A prerequisite to building or installing a distribution""" def __init__(self, name, requested_version, module, homepage='', attribute=None, format=None): if format is None and requested_version is not None: format = StrictVersion if format is not None: requested_version = format(requested_version) if attribute is None: attribute = '__version__' self.__dict__.update(locals()) del self.self def full_name(self): """Return full package/distribution name, w/version""" if self.requested_version is not None: return '%s-%s' % (self.name, self.requested_version) return self.name def version_ok(self, version): """Is 'version' sufficiently up-to-date?""" return self.attribute is None or self.format is None or \ str(version) != "unknown" and version >= self.requested_version def get_version(self, paths=None, default="unknown"): """Get version number of installed module, 'None', or 'default' Search 'paths' for module. If not found, return 'None'. If found, return the extracted version attribute, or 'default' if no version attribute was specified, or the value cannot be determined without importing the module. The version is formatted according to the requirement's version format (if any), unless it is 'None' or the supplied 'default'. 
""" if self.attribute is None: try: f, p, i = find_module(self.module, paths) if f: f.close() return default except ImportError: return None v = get_module_constant(self.module, self.attribute, default, paths) if v is not None and v is not default and self.format is not None: return self.format(v) return v def is_present(self, paths=None): """Return true if dependency is present on 'paths'""" return self.get_version(paths) is not None def is_current(self, paths=None): """Return true if dependency is present and up-to-date on 'paths'""" version = self.get_version(paths) if version is None: return False return self.version_ok(version) def find_module(module, paths=None): """Just like 'imp.find_module()', but with package support""" parts = module.split('.') while parts: part = parts.pop(0) f, path, (suffix, mode, kind) = info = imp.find_module(part, paths) if kind == PKG_DIRECTORY: parts = parts or ['__init__'] paths = [path] elif parts: raise ImportError("Can't find %r in %s" % (parts, module)) return info def get_module_constant(module, symbol, default=-1, paths=None): """Find 'module' by searching 'paths', and extract 'symbol' Return 'None' if 'module' does not exist on 'paths', or it does not define 'symbol'. If the module defines 'symbol' as a constant, return the constant. Otherwise, return 'default'.""" try: f, path, (suffix, mode, kind) = find_module(module, paths) except ImportError: # Module doesn't exist return None try: if kind == PY_COMPILED: f.read(8) # skip magic & date code = marshal.load(f) elif kind == PY_FROZEN: code = imp.get_frozen_object(module) elif kind == PY_SOURCE: code = compile(f.read(), path, 'exec') else: # Not something we can parse; we'll have to import it. :( if module not in sys.modules: imp.load_module(module, f, path, (suffix, mode, kind)) return getattr(sys.modules[module], symbol, None) finally: if f: f.close() return extract_constant(code, symbol, default) def extract_constant(code, symbol, default=-1): """Extract the constant value of 'symbol' from 'code' If the name 'symbol' is bound to a constant value by the Python code object 'code', return that value. If 'symbol' is bound to an expression, return 'default'. Otherwise, return 'None'. Return value is based on the first assignment to 'symbol'. 'symbol' must be a global, or at least a non-"fast" local in the code block. That is, only 'STORE_NAME' and 'STORE_GLOBAL' opcodes are checked, and 'symbol' must be present in 'code.co_names'. """ if symbol not in code.co_names: # name's not there, can't possibly be an assignment return None name_idx = list(code.co_names).index(symbol) STORE_NAME = 90 STORE_GLOBAL = 97 LOAD_CONST = 100 const = default for byte_code in Bytecode(code): op = byte_code.opcode arg = byte_code.arg if op == LOAD_CONST: const = code.co_consts[arg] elif arg == name_idx and (op == STORE_NAME or op == STORE_GLOBAL): return const else: const = default def _update_globals(): """ Patch the globals to remove the objects not available on some platforms. XXX it'd be better to test assertions about bytecode instead. 
""" if not sys.platform.startswith('java') and sys.platform != 'cli': return incompatible = 'extract_constant', 'get_module_constant' for name in incompatible: del globals()[name] __all__.remove(name) _update_globals() �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/dist.py�����������������������������������0000644�0001750�0001750�00000141371�00000000000�026006� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# -*- coding: utf-8 -*- __all__ = ['Distribution'] import io import sys import re import os import warnings import numbers import distutils.log import distutils.core import distutils.cmd import distutils.dist from distutils.util import strtobool from distutils.debug import DEBUG from distutils.fancy_getopt import translate_longopt import itertools from collections import defaultdict from email import message_from_file from distutils.errors import ( DistutilsOptionError, DistutilsPlatformError, DistutilsSetupError, ) from distutils.util import rfc822_escape from distutils.version import StrictVersion from setuptools.extern import six from setuptools.extern import packaging from setuptools.extern.six.moves import map, filter, filterfalse from . 
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/dist.py
# -*- coding: utf-8 -*- __all__ = ['Distribution'] import io import sys import re import os import warnings import numbers import distutils.log import distutils.core import distutils.cmd import distutils.dist from distutils.util import strtobool from distutils.debug import DEBUG from distutils.fancy_getopt import translate_longopt import itertools from collections import defaultdict from email import message_from_file from distutils.errors import ( DistutilsOptionError, DistutilsPlatformError, DistutilsSetupError, ) from distutils.util import rfc822_escape from distutils.version import StrictVersion from setuptools.extern import six from setuptools.extern import packaging from setuptools.extern.six.moves import map, filter, filterfalse from . import SetuptoolsDeprecationWarning from setuptools.depends import Require from setuptools import windows_support from setuptools.monkey import get_unpatched from setuptools.config import parse_configuration import pkg_resources __import__('setuptools.extern.packaging.specifiers') __import__('setuptools.extern.packaging.version') def _get_unpatched(cls): warnings.warn("Do not call this function", DistDeprecationWarning) return get_unpatched(cls) def get_metadata_version(self): mv = getattr(self, 'metadata_version', None) if mv is None: if self.long_description_content_type or self.provides_extras: mv = StrictVersion('2.1') elif (self.maintainer is not None or self.maintainer_email is not None or getattr(self, 'python_requires', None) is not None): mv = StrictVersion('1.2') elif (self.provides or self.requires or self.obsoletes or self.classifiers or self.download_url): mv = StrictVersion('1.1') else: mv = StrictVersion('1.0') self.metadata_version = mv return mv def read_pkg_file(self, file): """Reads the metadata values from a file object.""" msg = message_from_file(file) def _read_field(name): value = msg[name] if value == 'UNKNOWN': return None return value def _read_list(name): values = msg.get_all(name, None) if values == []: return None return values self.metadata_version = StrictVersion(msg['metadata-version']) self.name = _read_field('name') self.version = _read_field('version') self.description = _read_field('summary') # we are filling author only. self.author = _read_field('author') self.maintainer = None self.author_email = _read_field('author-email') self.maintainer_email = None self.url = _read_field('home-page') self.license = _read_field('license') if 'download-url' in msg: self.download_url = _read_field('download-url') else: self.download_url = None self.long_description = _read_field('description') self.description = _read_field('summary') if 'keywords' in msg: self.keywords = _read_field('keywords').split(',') self.platforms = _read_list('platform') self.classifiers = _read_list('classifier') # PEP 314 - these fields only exist in 1.1 if self.metadata_version == StrictVersion('1.1'): self.requires = _read_list('requires') self.provides = _read_list('provides') self.obsoletes = _read_list('obsoletes') else: self.requires = None self.provides = None self.obsoletes = None
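read_pkg_file() leans entirely on the stdlib email parser, since PKG-INFO is a block of RFC 822 headers. A self-contained sketch of the same pattern (the metadata text is invented):

import io
from email import message_from_file

pkg_info = io.StringIO(
    "Metadata-Version: 1.1\n"
    "Name: example\n"
    "Version: 0.1\n"
    "Summary: A demo package\n"
    "Classifier: Programming Language :: Python\n"
    "Classifier: License :: OSI Approved\n"
)
msg = message_from_file(pkg_info)
print(msg['name'], msg['version'])  # header lookup is case-insensitive
print(msg.get_all('classifier'))    # repeated fields come back as a list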
""" version = self.get_metadata_version() if six.PY2: def write_field(key, value): file.write("%s: %s\n" % (key, self._encode_field(value))) else: def write_field(key, value): file.write("%s: %s\n" % (key, value)) write_field('Metadata-Version', str(version)) write_field('Name', self.get_name()) write_field('Version', self.get_version()) write_field('Summary', self.get_description()) write_field('Home-page', self.get_url()) if version < StrictVersion('1.2'): write_field('Author', self.get_contact()) write_field('Author-email', self.get_contact_email()) else: optional_fields = ( ('Author', 'author'), ('Author-email', 'author_email'), ('Maintainer', 'maintainer'), ('Maintainer-email', 'maintainer_email'), ) for field, attr in optional_fields: attr_val = getattr(self, attr) if attr_val is not None: write_field(field, attr_val) write_field('License', self.get_license()) if self.download_url: write_field('Download-URL', self.download_url) for project_url in self.project_urls.items(): write_field('Project-URL', '%s, %s' % project_url) long_desc = rfc822_escape(self.get_long_description()) write_field('Description', long_desc) keywords = ','.join(self.get_keywords()) if keywords: write_field('Keywords', keywords) if version >= StrictVersion('1.2'): for platform in self.get_platforms(): write_field('Platform', platform) else: self._write_list(file, 'Platform', self.get_platforms()) self._write_list(file, 'Classifier', self.get_classifiers()) # PEP 314 self._write_list(file, 'Requires', self.get_requires()) self._write_list(file, 'Provides', self.get_provides()) self._write_list(file, 'Obsoletes', self.get_obsoletes()) # Setuptools specific for PEP 345 if hasattr(self, 'python_requires'): write_field('Requires-Python', self.python_requires) # PEP 566 if self.long_description_content_type: write_field( 'Description-Content-Type', self.long_description_content_type ) if self.provides_extras: for extra in self.provides_extras: write_field('Provides-Extra', extra) sequence = tuple, list def check_importable(dist, attr, value): try: ep = pkg_resources.EntryPoint.parse('x=' + value) assert not ep.extras except (TypeError, ValueError, AttributeError, AssertionError): raise DistutilsSetupError( "%r must be importable 'module:attrs' string (got %r)" % (attr, value) ) def assert_string_list(dist, attr, value): """Verify that value is a string list or None""" try: assert ''.join(value) != value except (TypeError, ValueError, AttributeError, AssertionError): raise DistutilsSetupError( "%r must be a list of strings (got %r)" % (attr, value) ) def check_nsp(dist, attr, value): """Verify that namespace packages are valid""" ns_packages = value assert_string_list(dist, attr, ns_packages) for nsp in ns_packages: if not dist.has_contents_for(nsp): raise DistutilsSetupError( "Distribution contains no modules or packages for " + "namespace package %r" % nsp ) parent, sep, child = nsp.rpartition('.') if parent and parent not in ns_packages: distutils.log.warn( "WARNING: %r is declared as a package namespace, but %r" " is not: please correct this in setup.py", nsp, parent ) def check_extras(dist, attr, value): """Verify that extras_require mapping is valid""" try: list(itertools.starmap(_check_extra, value.items())) except (TypeError, ValueError, AttributeError): raise DistutilsSetupError( "'extras_require' must be a dictionary whose values are " "strings or lists of strings containing valid project/version " "requirement specifiers." 
) def _check_extra(extra, reqs): name, sep, marker = extra.partition(':') if marker and pkg_resources.invalid_marker(marker): raise DistutilsSetupError("Invalid environment marker: " + marker) list(pkg_resources.parse_requirements(reqs)) def assert_bool(dist, attr, value): """Verify that value is True, False, 0, or 1""" if bool(value) != value: tmpl = "{attr!r} must be a boolean value (got {value!r})" raise DistutilsSetupError(tmpl.format(attr=attr, value=value)) def check_requirements(dist, attr, value): """Verify that install_requires is a valid requirements list""" try: list(pkg_resources.parse_requirements(value)) if isinstance(value, (dict, set)): raise TypeError("Unordered types are not allowed") except (TypeError, ValueError) as error: tmpl = ( "{attr!r} must be a string or list of strings " "containing valid project/version requirement specifiers; {error}" ) raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) def check_specifier(dist, attr, value): """Verify that value is a valid version specifier""" try: packaging.specifiers.SpecifierSet(value) except packaging.specifiers.InvalidSpecifier as error: tmpl = ( "{attr!r} must be a string " "containing valid version specifiers; {error}" ) raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) def check_entry_points(dist, attr, value): """Verify that entry_points map is parseable""" try: pkg_resources.EntryPoint.parse_map(value) except ValueError as e: raise DistutilsSetupError(e) def check_test_suite(dist, attr, value): if not isinstance(value, six.string_types): raise DistutilsSetupError("test_suite must be a string") def check_package_data(dist, attr, value): """Verify that value is a dictionary of package names to glob lists""" if isinstance(value, dict): for k, v in value.items(): if not isinstance(k, str): break try: iter(v) except TypeError: break else: return raise DistutilsSetupError( attr + " must be a dictionary mapping package names to lists of " "wildcard patterns" ) def check_packages(dist, attr, value): for pkgname in value: if not re.match(r'\w+(\.\w+)*', pkgname): distutils.log.warn( "WARNING: %r not a valid package name; please use only " ".-separated package names in setup.py", pkgname )
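The validators above mostly delegate to pkg_resources and the vendored packaging library; a hedged sketch of what they accept (the project and entry point names are hypothetical):

import pkg_resources

# check_entry_points() succeeds whenever EntryPoint.parse_map() does:
pkg_resources.EntryPoint.parse_map({'console_scripts': ['demo = demo.cli:main']})

# check_importable() insists on a plain 'module:attrs' value with no extras:
ep = pkg_resources.EntryPoint.parse('x=' + 'demo.cli:main')
assert not ep.extras

# check_specifier() accepts anything SpecifierSet() can parse:
from setuptools.extern.packaging.specifiers import SpecifierSet
SpecifierSet('>=2.7, !=3.0.*')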
_Distribution = get_unpatched(distutils.core.Distribution) class Distribution(_Distribution): """Distribution with support for features, tests, and package data This is an enhanced version of 'distutils.dist.Distribution' that effectively adds the following new optional keyword arguments to 'setup()': 'install_requires' -- a string or sequence of strings specifying project versions that the distribution requires when installed, in the format used by 'pkg_resources.require()'. They will be installed automatically when the package is installed. If you wish to use packages that are not available in PyPI, or want to give your users an alternate download location, you can add a 'find_links' option to the '[easy_install]' section of your project's 'setup.cfg' file, and then setuptools will scan the listed web pages for links that satisfy the requirements. 'extras_require' -- a dictionary mapping names of optional "extras" to the additional requirement(s) that using those extras incurs. For example, this:: extras_require = dict(reST = ["docutils>=0.3", "reSTedit"]) indicates that the distribution can optionally provide an extra capability called "reST", but it can only be used if docutils and reSTedit are installed. If the user installs your package using EasyInstall and requests one of your extras, the corresponding additional requirements will be installed if needed. 'features' **deprecated** -- a dictionary mapping option names to 'setuptools.Feature' objects. Features are a portion of the distribution that can be included or excluded based on user options, inter-feature dependencies, and availability on the current system. Excluded features are omitted from all setup commands, including source and binary distributions, so you can create multiple distributions from the same source tree. Feature names should be valid Python identifiers, except that they may contain the '-' (minus) sign. Features can be included or excluded via the command line options '--with-X' and '--without-X', where 'X' is the name of the feature. Whether a feature is included by default, and whether you are allowed to control this from the command line, is determined by the Feature object. See the 'Feature' class for more information. 'test_suite' -- the name of a test suite to run for the 'test' command. If the user runs 'python setup.py test', the package will be installed, and the named test suite will be run. The format is the same as would be used on a 'unittest.py' command line. That is, it is the dotted name of an object to import and call to generate a test suite. 'package_data' -- a dictionary mapping package names to lists of filenames or globs to use to find data files contained in the named packages. If the dictionary has filenames or globs listed under '""' (the empty string), those names will be searched for in every package, in addition to any names for the specific package. Data files found using these names/globs will be installed along with the package, in the same location as the package. Note that globs are allowed to reference the contents of non-package subdirectories, as long as you use '/' as a path separator. (Globs are automatically converted to platform-specific paths at runtime.) In addition to these new keywords, this class also has several new methods for manipulating the distribution's contents. For example, the 'include()' and 'exclude()' methods can be thought of as in-place add and subtract commands that add or remove packages, modules, extensions, and so on from the distribution. They are used by the feature subsystem to configure the distribution for the included and excluded features. """ _DISTUTILS_UNSUPPORTED_METADATA = { 'long_description_content_type': None, 'project_urls': dict, 'provides_extras': set, } _patched_dist = None def patch_missing_pkg_info(self, attrs): # Fake up a replacement for the data that would normally come from # PKG-INFO, but which might not yet be built if this is a fresh # checkout. # if not attrs or 'name' not in attrs or 'version' not in attrs: return key = pkg_resources.safe_name(str(attrs['name'])).lower() dist = pkg_resources.working_set.by_key.get(key) if dist is not None and not dist.has_metadata('PKG-INFO'): dist._version = pkg_resources.safe_version(str(attrs['version'])) self._patched_dist = dist def __init__(self, attrs=None): have_package_data = hasattr(self, "package_data") if not have_package_data: self.package_data = {} attrs = attrs or {} if 'features' in attrs or 'require_features' in attrs: Feature.warn_deprecated() self.require_features = [] self.features = {} self.dist_files = [] # Filter-out setuptools' specific options.
self.src_root = attrs.pop("src_root", None) self.patch_missing_pkg_info(attrs) self.dependency_links = attrs.pop('dependency_links', []) self.setup_requires = attrs.pop('setup_requires', []) for ep in pkg_resources.iter_entry_points('distutils.setup_keywords'): vars(self).setdefault(ep.name, None) _Distribution.__init__(self, { k: v for k, v in attrs.items() if k not in self._DISTUTILS_UNSUPPORTED_METADATA }) # Fill-in missing metadata fields not supported by distutils. # Note some fields may have been set by other tools (e.g. pbr) # above; they are taken preferrentially to setup() arguments for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items(): for source in self.metadata.__dict__, attrs: if option in source: value = source[option] break else: value = default() if default else None setattr(self.metadata, option, value) if isinstance(self.metadata.version, numbers.Number): # Some people apparently take "version number" too literally :) self.metadata.version = str(self.metadata.version) if self.metadata.version is not None: try: ver = packaging.version.Version(self.metadata.version) normalized_version = str(ver) if self.metadata.version != normalized_version: warnings.warn( "Normalizing '%s' to '%s'" % ( self.metadata.version, normalized_version, ) ) self.metadata.version = normalized_version except (packaging.version.InvalidVersion, TypeError): warnings.warn( "The version specified (%r) is an invalid version, this " "may not work as expected with newer versions of " "setuptools, pip, and PyPI. Please see PEP 440 for more " "details." % self.metadata.version ) self._finalize_requires() def _finalize_requires(self): """ Set `metadata.python_requires` and fix environment markers in `install_requires` and `extras_require`. """ if getattr(self, 'python_requires', None): self.metadata.python_requires = self.python_requires if getattr(self, 'extras_require', None): for extra in self.extras_require.keys(): # Since this gets called multiple times at points where the # keys have become 'converted' extras, ensure that we are only # truly adding extras we haven't seen before here. extra = extra.split(':')[0] if extra: self.metadata.provides_extras.add(extra) self._convert_extras_requirements() self._move_install_requirements_markers() def _convert_extras_requirements(self): """ Convert requirements in `extras_require` of the form `"extra": ["barbazquux; {marker}"]` to `"extra:{marker}": ["barbazquux"]`. """ spec_ext_reqs = getattr(self, 'extras_require', None) or {} self._tmp_extras_require = defaultdict(list) for section, v in spec_ext_reqs.items(): # Do not strip empty sections. self._tmp_extras_require[section] for r in pkg_resources.parse_requirements(v): suffix = self._suffix_for(r) self._tmp_extras_require[section + suffix].append(r) @staticmethod def _suffix_for(req): """ For a requirement, return the 'extras_require' suffix for that requirement. """ return ':' + str(req.marker) if req.marker else '' def _move_install_requirements_markers(self): """ Move requirements in `install_requires` that are using environment markers `extras_require`. """ # divide the install_requires into two sets, simple ones still # handled by install_requires and more complex ones handled # by extras_require. 
def is_simple_req(req): return not req.marker spec_inst_reqs = getattr(self, 'install_requires', None) or () inst_reqs = list(pkg_resources.parse_requirements(spec_inst_reqs)) simple_reqs = filter(is_simple_req, inst_reqs) complex_reqs = filterfalse(is_simple_req, inst_reqs) self.install_requires = list(map(str, simple_reqs)) for r in complex_reqs: self._tmp_extras_require[':' + str(r.marker)].append(r) self.extras_require = dict( (k, [str(r) for r in map(self._clean_req, v)]) for k, v in self._tmp_extras_require.items() ) def _clean_req(self, req): """ Given a Requirement, remove environment markers and return it. """ req.marker = None return req def _parse_config_files(self, filenames=None): """ Adapted from distutils.dist.Distribution.parse_config_files, this method provides the same functionality in subtly-improved ways. """ from setuptools.extern.six.moves.configparser import ConfigParser # Ignore install directory options if we have a venv if six.PY3 and sys.prefix != sys.base_prefix: ignore_options = [ 'install-base', 'install-platbase', 'install-lib', 'install-platlib', 'install-purelib', 'install-headers', 'install-scripts', 'install-data', 'prefix', 'exec-prefix', 'home', 'user', 'root'] else: ignore_options = [] ignore_options = frozenset(ignore_options) if filenames is None: filenames = self.find_config_files() if DEBUG: self.announce("Distribution.parse_config_files():") parser = ConfigParser() for filename in filenames: with io.open(filename, encoding='utf-8') as reader: if DEBUG: self.announce(" reading {filename}".format(**locals())) (parser.read_file if six.PY3 else parser.readfp)(reader) for section in parser.sections(): options = parser.options(section) opt_dict = self.get_option_dict(section) for opt in options: if opt != '__name__' and opt not in ignore_options: val = self._try_str(parser.get(section, opt)) opt = opt.replace('-', '_') opt_dict[opt] = (filename, val) # Make the ConfigParser forget everything (so we retain # the original filenames that options come from) parser.__init__() # If there was a "global" section in the config file, use it # to set Distribution options. if 'global' in self.command_options: for (opt, (src, val)) in self.command_options['global'].items(): alias = self.negative_opt.get(opt) try: if alias: setattr(self, alias, not strtobool(val)) elif opt in ('verbose', 'dry_run'): # ugh! setattr(self, opt, strtobool(val)) else: setattr(self, opt, val) except ValueError as msg: raise DistutilsOptionError(msg) @staticmethod def _try_str(val): """ On Python 2, much of distutils relies on string values being of type 'str' (bytes) and not unicode text. If the value can be safely encoded to bytes using the default encoding, prefer that. Why the default encoding? Because that value can be implicitly decoded back to text if needed. Ref #1653 """ if six.PY3: return val try: return val.encode() except UnicodeEncodeError: pass return val def _set_command_options(self, command_obj, option_dict=None): """ Set the options for 'command_obj' from 'option_dict'. Basically this means copying elements of a dictionary ('option_dict') to attributes of an instance ('command'). 'command_obj' must be a Command instance. If 'option_dict' is not supplied, uses the standard option dictionary for this command (from 'self.command_options'). 
(Adopted from distutils.dist.Distribution._set_command_options) """ command_name = command_obj.get_command_name() if option_dict is None: option_dict = self.get_option_dict(command_name) if DEBUG: self.announce(" setting options for '%s' command:" % command_name) for (option, (source, value)) in option_dict.items(): if DEBUG: self.announce(" %s = %s (from %s)" % (option, value, source)) try: bool_opts = [translate_longopt(o) for o in command_obj.boolean_options] except AttributeError: bool_opts = [] try: neg_opt = command_obj.negative_opt except AttributeError: neg_opt = {} try: is_string = isinstance(value, six.string_types) if option in neg_opt and is_string: setattr(command_obj, neg_opt[option], not strtobool(value)) elif option in bool_opts and is_string: setattr(command_obj, option, strtobool(value)) elif hasattr(command_obj, option): setattr(command_obj, option, value) else: raise DistutilsOptionError( "error in %s: command '%s' has no such option '%s'" % (source, command_name, option)) except ValueError as msg: raise DistutilsOptionError(msg) def parse_config_files(self, filenames=None, ignore_option_errors=False): """Parses configuration files from various levels and loads configuration. """ self._parse_config_files(filenames=filenames) parse_configuration(self, self.command_options, ignore_option_errors=ignore_option_errors) self._finalize_requires() def parse_command_line(self): """Process features after parsing command line options""" result = _Distribution.parse_command_line(self) if self.features: self._finalize_features() return result def _feature_attrname(self, name): """Convert feature name to corresponding option attribute name""" return 'with_' + name.replace('-', '_') def fetch_build_eggs(self, requires): """Resolve pre-setup requirements""" resolved_dists = pkg_resources.working_set.resolve( pkg_resources.parse_requirements(requires), installer=self.fetch_build_egg, replace_conflicting=True, ) for dist in resolved_dists: pkg_resources.working_set.add(dist, replace=True) return resolved_dists def finalize_options(self): _Distribution.finalize_options(self) if self.features: self._set_global_opts_from_features() for ep in pkg_resources.iter_entry_points('distutils.setup_keywords'): value = getattr(self, ep.name, None) if value is not None: ep.require(installer=self.fetch_build_egg) ep.load()(self, ep.name, value) if getattr(self, 'convert_2to3_doctests', None): # XXX may convert to set here when we can rely on set being builtin self.convert_2to3_doctests = [ os.path.abspath(p) for p in self.convert_2to3_doctests ] else: self.convert_2to3_doctests = [] def get_egg_cache_dir(self): egg_cache_dir = os.path.join(os.curdir, '.eggs') if not os.path.exists(egg_cache_dir): os.mkdir(egg_cache_dir) windows_support.hide_file(egg_cache_dir) readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt') with open(readme_txt_filename, 'w') as f: f.write('This directory contains eggs that were downloaded ' 'by setuptools to build, test, and run plug-ins.\n\n') f.write('This directory caches those eggs to prevent ' 'repeated downloads.\n\n') f.write('However, it is safe to delete this directory.\n\n') return egg_cache_dir def fetch_build_egg(self, req): """Fetch an egg needed for building""" from setuptools.command.easy_install import easy_install dist = self.__class__({'script_args': ['easy_install']}) opts = dist.get_option_dict('easy_install') opts.clear() opts.update( (k, v) for k, v in self.get_option_dict('easy_install').items() if k in ( # don't use any other settings 
'find_links', 'site_dirs', 'index_url', 'optimize', 'site_dirs', 'allow_hosts', )) if self.dependency_links: links = self.dependency_links[:] if 'find_links' in opts: links = opts['find_links'][1] + links opts['find_links'] = ('setup', links) install_dir = self.get_egg_cache_dir() cmd = easy_install( dist, args=["x"], install_dir=install_dir, exclude_scripts=True, always_copy=False, build_directory=None, editable=False, upgrade=False, multi_version=True, no_report=True, user=False ) cmd.ensure_finalized() return cmd.easy_install(req) def _set_global_opts_from_features(self): """Add --with-X/--without-X options based on optional features""" go = [] no = self.negative_opt.copy() for name, feature in self.features.items(): self._set_feature(name, None) feature.validate(self) if feature.optional: descr = feature.description incdef = ' (default)' excdef = '' if not feature.include_by_default(): excdef, incdef = incdef, excdef new = ( ('with-' + name, None, 'include ' + descr + incdef), ('without-' + name, None, 'exclude ' + descr + excdef), ) go.extend(new) no['without-' + name] = 'with-' + name self.global_options = self.feature_options = go + self.global_options self.negative_opt = self.feature_negopt = no def _finalize_features(self): """Add/remove features and resolve dependencies between them""" # First, flag all the enabled items (and thus their dependencies) for name, feature in self.features.items(): enabled = self.feature_is_included(name) if enabled or (enabled is None and feature.include_by_default()): feature.include_in(self) self._set_feature(name, 1) # Then disable the rest, so that off-by-default features don't # get flagged as errors when they're required by an enabled feature for name, feature in self.features.items(): if not self.feature_is_included(name): feature.exclude_from(self) self._set_feature(name, 0) def get_command_class(self, command): """Pluggable version of get_command_class()""" if command in self.cmdclass: return self.cmdclass[command] eps = pkg_resources.iter_entry_points('distutils.commands', command) for ep in eps: ep.require(installer=self.fetch_build_egg) self.cmdclass[command] = cmdclass = ep.load() return cmdclass else: return _Distribution.get_command_class(self, command) def print_commands(self): for ep in pkg_resources.iter_entry_points('distutils.commands'): if ep.name not in self.cmdclass: # don't require extras as the commands won't be invoked cmdclass = ep.resolve() self.cmdclass[ep.name] = cmdclass return _Distribution.print_commands(self) def get_command_list(self): for ep in pkg_resources.iter_entry_points('distutils.commands'): if ep.name not in self.cmdclass: # don't require extras as the commands won't be invoked cmdclass = ep.resolve() self.cmdclass[ep.name] = cmdclass return _Distribution.get_command_list(self) def _set_feature(self, name, status): """Set feature's inclusion status""" setattr(self, self._feature_attrname(name), status) def feature_is_included(self, name): """Return 1 if feature is included, 0 if excluded, 'None' if unknown""" return getattr(self, self._feature_attrname(name)) def include_feature(self, name): """Request inclusion of feature named 'name'""" if self.feature_is_included(name) == 0: descr = self.features[name].description raise DistutilsOptionError( descr + " is required, but was excluded or is not available" ) self.features[name].include_in(self) self._set_feature(name, 1) def include(self, **attrs): """Add items to distribution that are named in keyword arguments For example, 'dist.include(py_modules=["x"])' 
would add 'x' to the distribution's 'py_modules' attribute, if it was not already there. Currently, this method only supports inclusion for attributes that are lists or tuples. If you need to add support for adding to other attributes in this or a subclass, you can add an '_include_X' method, where 'X' is the name of the attribute. The method will be called with the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})' will try to call 'dist._include_foo({"bar":"baz"})', which can then handle whatever special inclusion logic is needed. """ for k, v in attrs.items(): include = getattr(self, '_include_' + k, None) if include: include(v) else: self._include_misc(k, v) def exclude_package(self, package): """Remove packages, modules, and extensions in named package""" pfx = package + '.' if self.packages: self.packages = [ p for p in self.packages if p != package and not p.startswith(pfx) ] if self.py_modules: self.py_modules = [ p for p in self.py_modules if p != package and not p.startswith(pfx) ] if self.ext_modules: self.ext_modules = [ p for p in self.ext_modules if p.name != package and not p.name.startswith(pfx) ] def has_contents_for(self, package): """Return true if 'exclude_package(package)' would do something""" pfx = package + '.' for p in self.iter_distribution_names(): if p == package or p.startswith(pfx): return True def _exclude_misc(self, name, value): """Handle 'exclude()' for list/tuple attrs without a special handler""" if not isinstance(value, sequence): raise DistutilsSetupError( "%s: setting must be a list or tuple (%r)" % (name, value) ) try: old = getattr(self, name) except AttributeError: raise DistutilsSetupError( "%s: No such distribution setting" % name ) if old is not None and not isinstance(old, sequence): raise DistutilsSetupError( name + ": this setting cannot be changed via include/exclude" ) elif old: setattr(self, name, [item for item in old if item not in value]) def _include_misc(self, name, value): """Handle 'include()' for list/tuple attrs without a special handler""" if not isinstance(value, sequence): raise DistutilsSetupError( "%s: setting must be a list (%r)" % (name, value) ) try: old = getattr(self, name) except AttributeError: raise DistutilsSetupError( "%s: No such distribution setting" % name ) if old is None: setattr(self, name, value) elif not isinstance(old, sequence): raise DistutilsSetupError( name + ": this setting cannot be changed via include/exclude" ) else: new = [item for item in value if item not in old] setattr(self, name, old + new) def exclude(self, **attrs): """Remove items from distribution that are named in keyword arguments For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from the distribution's 'py_modules' attribute. Excluding packages uses the 'exclude_package()' method, so all of the package's contained packages, modules, and extensions are also excluded. Currently, this method only supports exclusion from attributes that are lists or tuples. If you need to add support for excluding from other attributes in this or a subclass, you can add an '_exclude_X' method, where 'X' is the name of the attribute. The method will be called with the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})' will try to call 'dist._exclude_foo({"bar":"baz"})', which can then handle whatever special exclusion logic is needed. 
""" for k, v in attrs.items(): exclude = getattr(self, '_exclude_' + k, None) if exclude: exclude(v) else: self._exclude_misc(k, v) def _exclude_packages(self, packages): if not isinstance(packages, sequence): raise DistutilsSetupError( "packages: setting must be a list or tuple (%r)" % (packages,) ) list(map(self.exclude_package, packages)) def _parse_command_opts(self, parser, args): # Remove --with-X/--without-X options when processing command args self.global_options = self.__class__.global_options self.negative_opt = self.__class__.negative_opt # First, expand any aliases command = args[0] aliases = self.get_option_dict('aliases') while command in aliases: src, alias = aliases[command] del aliases[command] # ensure each alias can expand only once! import shlex args[:1] = shlex.split(alias, True) command = args[0] nargs = _Distribution._parse_command_opts(self, parser, args) # Handle commands that want to consume all remaining arguments cmd_class = self.get_command_class(command) if getattr(cmd_class, 'command_consumes_arguments', None): self.get_option_dict(command)['args'] = ("command line", nargs) if nargs is not None: return [] return nargs def get_cmdline_options(self): """Return a '{cmd: {opt:val}}' map of all command-line options Option names are all long, but do not include the leading '--', and contain dashes rather than underscores. If the option doesn't take an argument (e.g. '--quiet'), the 'val' is 'None'. Note that options provided by config files are intentionally excluded. """ d = {} for cmd, opts in self.command_options.items(): for opt, (src, val) in opts.items(): if src != "command line": continue opt = opt.replace('_', '-') if val == 0: cmdobj = self.get_command_obj(cmd) neg_opt = self.negative_opt.copy() neg_opt.update(getattr(cmdobj, 'negative_opt', {})) for neg, pos in neg_opt.items(): if pos == opt: opt = neg val = None break else: raise AssertionError("Shouldn't be able to get here") elif val == 1: val = None d.setdefault(cmd, {})[opt] = val return d def iter_distribution_names(self): """Yield all packages, modules, and extension names in distribution""" for pkg in self.packages or (): yield pkg for module in self.py_modules or (): yield module for ext in self.ext_modules or (): if isinstance(ext, tuple): name, buildinfo = ext else: name = ext.name if name.endswith('module'): name = name[:-6] yield name def handle_display_options(self, option_order): """If there were any non-global "display-only" options (--help-commands or the metadata display options) on the command line, display the requested info and return true; else return false. """ import sys if six.PY2 or self.help_commands: return _Distribution.handle_display_options(self, option_order) # Stdout may be StringIO (e.g. in tests) if not isinstance(sys.stdout, io.TextIOWrapper): return _Distribution.handle_display_options(self, option_order) # Don't wrap stdout if utf-8 is already the encoding. Provides # workaround for #334. 
if sys.stdout.encoding.lower() in ('utf-8', 'utf8'): return _Distribution.handle_display_options(self, option_order) # Print metadata in UTF-8 no matter the platform encoding = sys.stdout.encoding errors = sys.stdout.errors newline = sys.platform != 'win32' and '\n' or None line_buffering = sys.stdout.line_buffering sys.stdout = io.TextIOWrapper( sys.stdout.detach(), 'utf-8', errors, newline, line_buffering) try: return _Distribution.handle_display_options(self, option_order) finally: sys.stdout = io.TextIOWrapper( sys.stdout.detach(), encoding, errors, newline, line_buffering) class Feature: """ **deprecated** -- The `Feature` facility was never completely implemented or supported, `has reported issues <https://github.com/pypa/setuptools/issues/58>`_ and will be removed in a future version. A subset of the distribution that can be excluded if unneeded/unwanted Features are created using these keyword arguments: 'description' -- a short, human readable description of the feature, to be used in error messages, and option help messages. 'standard' -- if true, the feature is included by default if it is available on the current system. Otherwise, the feature is only included if requested via a command line '--with-X' option, or if another included feature requires it. The default setting is 'False'. 'available' -- if true, the feature is available for installation on the current system. The default setting is 'True'. 'optional' -- if true, the feature's inclusion can be controlled from the command line, using the '--with-X' or '--without-X' options. If false, the feature's inclusion status is determined automatically, based on 'available', 'standard', and whether any other feature requires it. The default setting is 'True'. 'require_features' -- a string or sequence of strings naming features that should also be included if this feature is included. Defaults to empty list. May also contain 'Require' objects that should be added/removed from the distribution. 'remove' -- a string or list of strings naming packages to be removed from the distribution if this feature is *not* included. If the feature *is* included, this argument is ignored. This argument exists to support removing features that "crosscut" a distribution, such as defining a 'tests' feature that removes all the 'tests' subpackages provided by other features. The default for this argument is an empty list. (Note: the named package(s) or modules must exist in the base distribution when the 'setup()' function is initially called.) other keywords -- any other keyword arguments are saved, and passed to the distribution's 'include()' and 'exclude()' methods when the feature is included or excluded, respectively. So, for example, you could pass 'packages=["a","b"]' to cause packages 'a' and 'b' to be added or removed from the distribution as appropriate. A feature must include at least one 'requires', 'remove', or other keyword argument. Otherwise, it can't affect the distribution in any way. Note also that you can subclass 'Feature' to create your own specialized feature types that modify the distribution in other ways when included or excluded. See the docstrings for the various methods here for more detail. Aside from the methods, the only feature attributes that distributions look at are 'description' and 'optional'. """ @staticmethod def warn_deprecated(): msg = ( "Features are deprecated and will be removed in a future " "version. See https://github.com/pypa/setuptools/issues/65."
) warnings.warn(msg, DistDeprecationWarning, stacklevel=3) def __init__( self, description, standard=False, available=True, optional=True, require_features=(), remove=(), **extras): self.warn_deprecated() self.description = description self.standard = standard self.available = available self.optional = optional if isinstance(require_features, (str, Require)): require_features = require_features, self.require_features = [ r for r in require_features if isinstance(r, str) ] er = [r for r in require_features if not isinstance(r, str)] if er: extras['require_features'] = er if isinstance(remove, str): remove = remove, self.remove = remove self.extras = extras if not remove and not require_features and not extras: raise DistutilsSetupError( "Feature %s: must define 'require_features', 'remove', or " "at least one of 'packages', 'py_modules', etc." ) def include_by_default(self): """Should this feature be included by default?""" return self.available and self.standard def include_in(self, dist): """Ensure feature and its requirements are included in distribution You may override this in a subclass to perform additional operations on the distribution. Note that this method may be called more than once per feature, and so should be idempotent. """ if not self.available: raise DistutilsPlatformError( self.description + " is required, " "but is not available on this platform" ) dist.include(**self.extras) for f in self.require_features: dist.include_feature(f) def exclude_from(self, dist): """Ensure feature is excluded from distribution You may override this in a subclass to perform additional operations on the distribution. This method will be called at most once per feature, and only after all included features have been asked to include themselves. """ dist.exclude(**self.extras) if self.remove: for item in self.remove: dist.exclude_package(item) def validate(self, dist): """Verify that feature makes sense in context of distribution This method is called by the distribution just before it parses its command line. It checks to ensure that the 'remove' attribute, if any, contains only valid package/module names that are present in the base distribution when 'setup()' is called. You may override it in a subclass to perform any other required validation of the feature against a target distribution. """ for item in self.remove: if not dist.has_contents_for(item): raise DistutilsSetupError( "%s wants to be able to remove %s, but the distribution" " doesn't contain any packages or modules under %s" % (self.description, item, item) ) class DistDeprecationWarning(SetuptoolsDeprecationWarning): """Class for warning about deprecations in dist in setuptools. 
Not ignored by default, unlike DeprecationWarning."""
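To make _convert_extras_requirements() concrete, a hedged sketch of the normalization it performs, written as a plain data transformation (the package names are hypothetical, and the marker spelling in the output may vary slightly across packaging versions):

import pkg_resources

spec = {'fast': ['simplejson; python_version < "3.0"', 'ujson']}
converted = {}
for section, reqs in spec.items():
    converted.setdefault(section, [])          # keep empty sections, as above
    for r in pkg_resources.parse_requirements(reqs):
        suffix = ':' + str(r.marker) if r.marker else ''
        r.marker = None                        # mirrors Distribution._clean_req()
        converted.setdefault(section + suffix, []).append(str(r))

print(converted)
# -> {'fast': ['ujson'], 'fast:python_version < "3.0"': ['simplejson']}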
""" if _have_cython(): # the build has Cython, so allow it to compile the .pyx files return lang = self.language or '' target_ext = '.cpp' if lang.lower() == 'c++' else '.c' sub = functools.partial(re.sub, '.pyx$', target_ext) self.sources = list(map(sub, self.sources)) class Library(Extension): """Just like a regular Extension, but built as a library instead""" �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000033�00000000000�011451� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������27 mtime=1604735100.066661 �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/extern/�����������������������������������0000755�0001750�0001750�00000000000�00000000000�025767� 5����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/extern/__init__.py������������������������0000644�0001750�0001750�00000004703�00000000000�030104� 
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/extern/__init__.py
import sys class VendorImporter: """ A PEP 302 meta path importer for finding optionally-vendored or otherwise naturally-installed packages from root_name. """ def __init__(self, root_name, vendored_names=(), vendor_pkg=None): self.root_name = root_name self.vendored_names = set(vendored_names) self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor') @property def search_path(self): """ Search first the vendor package then as a natural package. """ yield self.vendor_pkg + '.' yield '' def find_module(self, fullname, path=None): """ Return self when fullname starts with root_name and the target module is one vendored through this importer. """ root, base, target = fullname.partition(self.root_name + '.') if root: return if not any(map(target.startswith, self.vendored_names)): return return self def load_module(self, fullname): """ Iterate over the search path to locate and load fullname. """ root, base, target = fullname.partition(self.root_name + '.') for prefix in self.search_path: try: extant = prefix + target __import__(extant) mod = sys.modules[extant] sys.modules[fullname] = mod # mysterious hack: # Remove the reference to the extant package/module # on later Python versions to cause relative imports # in the vendor package to resolve the same modules # as those going through this importer. if sys.version_info >= (3, ): del sys.modules[extant] return mod except ImportError: pass else: raise ImportError( "The '{target}' package is required; " "normally this is bundled with this package so if you get " "this warning, consult the packager of your " "distribution.".format(**locals()) ) def install(self): """ Install this importer into sys.meta_path if not already present.
""" if self not in sys.meta_path: sys.meta_path.append(self) names = 'six', 'packaging', 'pyparsing', VendorImporter(__name__, names, 'setuptools._vendor').install() �������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/glibc.py����������������������������������0000644�0001750�0001750�00000006112�00000000000�026114� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This file originally from pip: # https://github.com/pypa/pip/blob/8f4f15a5a95d7d5b511ceaee9ed261176c181970/src/pip/_internal/utils/glibc.py from __future__ import absolute_import import ctypes import re import warnings def glibc_version_string(): "Returns glibc version string, or None if not using glibc." # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen # manpage says, "If filename is NULL, then the returned handle is for the # main program". This way we can let the linker do the work to figure out # which libc our process is actually using. process_namespace = ctypes.CDLL(None) try: gnu_get_libc_version = process_namespace.gnu_get_libc_version except AttributeError: # Symbol doesn't exist -> therefore, we are not linked to # glibc. return None # Call gnu_get_libc_version, which returns a string like "2.5" gnu_get_libc_version.restype = ctypes.c_char_p version_str = gnu_get_libc_version() # py2 / py3 compatibility: if not isinstance(version_str, str): version_str = version_str.decode("ascii") return version_str # Separated out from have_compatible_glibc for easier unit testing def check_glibc_version(version_str, required_major, minimum_minor): # Parse string and check against requested version. # # We use a regexp instead of str.split because we want to discard any # random junk that might come after the minor version -- this might happen # in patched/forked versions of glibc (e.g. Linaro's version of glibc # uses version strings like "2.20-2014.11"). See gh-3588. 
cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/glibc.py
# This file originally from pip: # https://github.com/pypa/pip/blob/8f4f15a5a95d7d5b511ceaee9ed261176c181970/src/pip/_internal/utils/glibc.py from __future__ import absolute_import import ctypes import re import warnings def glibc_version_string(): "Returns glibc version string, or None if not using glibc." # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen # manpage says, "If filename is NULL, then the returned handle is for the # main program". This way we can let the linker do the work to figure out # which libc our process is actually using. process_namespace = ctypes.CDLL(None) try: gnu_get_libc_version = process_namespace.gnu_get_libc_version except AttributeError: # Symbol doesn't exist -> therefore, we are not linked to # glibc. return None # Call gnu_get_libc_version, which returns a string like "2.5" gnu_get_libc_version.restype = ctypes.c_char_p version_str = gnu_get_libc_version() # py2 / py3 compatibility: if not isinstance(version_str, str): version_str = version_str.decode("ascii") return version_str # Separated out from have_compatible_glibc for easier unit testing def check_glibc_version(version_str, required_major, minimum_minor): # Parse string and check against requested version. # # We use a regexp instead of str.split because we want to discard any # random junk that might come after the minor version -- this might happen # in patched/forked versions of glibc (e.g. Linaro's version of glibc # uses version strings like "2.20-2014.11"). See gh-3588. m = re.match(r"(?P<major>[0-9]+)\.(?P<minor>[0-9]+)", version_str) if not m: warnings.warn("Expected glibc version with 2 components major.minor," " got: %s" % version_str, RuntimeWarning) return False return (int(m.group("major")) == required_major and int(m.group("minor")) >= minimum_minor) def have_compatible_glibc(required_major, minimum_minor): version_str = glibc_version_string() if version_str is None: return False return check_glibc_version(version_str, required_major, minimum_minor) # platform.libc_ver regularly returns completely nonsensical glibc # versions. E.g. on my computer, platform says: # # ~$ python2.7 -c 'import platform; print(platform.libc_ver())' # ('glibc', '2.7') # ~$ python3.5 -c 'import platform; print(platform.libc_ver())' # ('glibc', '2.9') # # But the truth is: # # ~$ ldd --version # ldd (Debian GLIBC 2.22-11) 2.22 # # This is unfortunate, because it means that the linehaul data on libc # versions that was generated by pip 8.1.2 and earlier is useless and # misleading. Solution: instead of using platform, use our code that actually # works. def libc_ver(): """Try to determine the glibc version Returns a tuple of strings (lib, version) which default to empty strings in case the lookup fails. """ glibc_version = glibc_version_string() if glibc_version is None: return ("", "") else: return ("glibc", glibc_version)
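A hedged sketch of probing the runtime C library with the helpers above (glibc 2.5 is the floor the manylinux1 platform tag assumes):

lib, version = libc_ver()           # ('', '') on musl, macOS, Windows, ...
if lib == 'glibc':
    print('glibc', version, 'manylinux1-compatible:',
          have_compatible_glibc(2, 5))
else:
    print('not linked against glibc')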
""" import os import re import fnmatch __all__ = ["glob", "iglob", "escape"] def glob(pathname, recursive=False): """Return a list of paths matching a pathname pattern. The pattern may contain simple shell-style wildcards a la fnmatch. However, unlike fnmatch, filenames starting with a dot are special cases that are not matched by '*' and '?' patterns. If recursive is true, the pattern '**' will match any files and zero or more directories and subdirectories. """ return list(iglob(pathname, recursive=recursive)) def iglob(pathname, recursive=False): """Return an iterator which yields the paths matching a pathname pattern. The pattern may contain simple shell-style wildcards a la fnmatch. However, unlike fnmatch, filenames starting with a dot are special cases that are not matched by '*' and '?' patterns. If recursive is true, the pattern '**' will match any files and zero or more directories and subdirectories. """ it = _iglob(pathname, recursive) if recursive and _isrecursive(pathname): s = next(it) # skip empty string assert not s return it def _iglob(pathname, recursive): dirname, basename = os.path.split(pathname) if not has_magic(pathname): if basename: if os.path.lexists(pathname): yield pathname else: # Patterns ending with a slash should match only directories if os.path.isdir(dirname): yield pathname return if not dirname: if recursive and _isrecursive(basename): for x in glob2(dirname, basename): yield x else: for x in glob1(dirname, basename): yield x return # `os.path.split()` returns the argument itself as a dirname if it is a # drive or UNC path. Prevent an infinite recursion if a drive or UNC path # contains magic characters (i.e. r'\\?\C:'). if dirname != pathname and has_magic(dirname): dirs = _iglob(dirname, recursive) else: dirs = [dirname] if has_magic(basename): if recursive and _isrecursive(basename): glob_in_dir = glob2 else: glob_in_dir = glob1 else: glob_in_dir = glob0 for dirname in dirs: for name in glob_in_dir(dirname, basename): yield os.path.join(dirname, name) # These 2 helper functions non-recursively glob inside a literal directory. # They return a list of basenames. `glob1` accepts a pattern while `glob0` # takes a literal basename (so it only has to check for its existence). def glob1(dirname, pattern): if not dirname: if isinstance(pattern, bytes): dirname = os.curdir.encode('ASCII') else: dirname = os.curdir try: names = os.listdir(dirname) except OSError: return [] return fnmatch.filter(names, pattern) def glob0(dirname, basename): if not basename: # `os.path.split()` returns an empty basename for paths ending with a # directory separator. 'q*x/' should match only directories. if os.path.isdir(dirname): return [basename] else: if os.path.lexists(os.path.join(dirname, basename)): return [basename] return [] # This helper function recursively yields relative pathnames inside a literal # directory. def glob2(dirname, pattern): assert _isrecursive(pattern) yield pattern[:0] for x in _rlistdir(dirname): yield x # Recursively yields relative pathnames inside a literal directory. 
def _rlistdir(dirname):
    if not dirname:
        if isinstance(dirname, bytes):
            dirname = os.curdir.encode('ASCII')
        else:
            dirname = os.curdir
    try:
        names = os.listdir(dirname)
    except os.error:
        return
    for x in names:
        yield x
        path = os.path.join(dirname, x) if dirname else x
        for y in _rlistdir(path):
            yield os.path.join(x, y)


magic_check = re.compile('([*?[])')
magic_check_bytes = re.compile(b'([*?[])')


def has_magic(s):
    if isinstance(s, bytes):
        match = magic_check_bytes.search(s)
    else:
        match = magic_check.search(s)
    return match is not None


def _isrecursive(pattern):
    if isinstance(pattern, bytes):
        return pattern == b'**'
    else:
        return pattern == '**'


def escape(pathname):
    """Escape all special characters."""
    # Escaping is done by wrapping any of "*?[" between square brackets.
    # Metacharacters do not work in the drive part and shouldn't be escaped.
    drive, pathname = os.path.splitdrive(pathname)
    if isinstance(pathname, bytes):
        pathname = magic_check_bytes.sub(br'[\1]', pathname)
    else:
        pathname = magic_check.sub(r'[\1]', pathname)
    return drive + pathname
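# Illustrative sketch (not from the original source; hypothetical paths):
# typical use of this glob variant.
if __name__ == "__main__":
    # '**' spans directories when recursive=True; unlike the stdlib glob,
    # names starting with '.' are matched here as well.
    print(glob('src/**/*.py', recursive=True))
    # escape() wraps metacharacters so a literal lookup succeeds:
    print(escape('weird[name]?.txt'))  # -> 'weird[[]name][?].txt'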
""" __builtins__ script_name = sys.argv[1] namespace = dict( __file__=script_name, __name__='__main__', __doc__=None, ) sys.argv[:] = sys.argv[1:] open_ = getattr(tokenize, 'open', open) script = open_(script_name).read() norm_script = script.replace('\\r\\n', '\\n') code = compile(norm_script, script_name, 'exec') exec(code, namespace) if __name__ == '__main__': run() ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/lib2to3_ex.py�����������������������������0000644�0001750�0001750�00000003735�00000000000�027016� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������""" Customized Mixin2to3 support: - adds support for converting doctests This module raises an ImportError on Python 2. """ from distutils.util import Mixin2to3 as _Mixin2to3 from distutils import log from lib2to3.refactor import RefactoringTool, get_fixers_from_package import setuptools class DistutilsRefactoringTool(RefactoringTool): def log_error(self, msg, *args, **kw): log.error(msg, *args) def log_message(self, msg, *args): log.info(msg, *args) def log_debug(self, msg, *args): log.debug(msg, *args) class Mixin2to3(_Mixin2to3): def run_2to3(self, files, doctests=False): # See of the distribution option has been set, otherwise check the # setuptools default. 
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/lib2to3_ex.py ----
"""
Customized Mixin2to3 support:

 - adds support for converting doctests

This module raises an ImportError on Python 2.
"""

from distutils.util import Mixin2to3 as _Mixin2to3
from distutils import log
from lib2to3.refactor import RefactoringTool, get_fixers_from_package

import setuptools


class DistutilsRefactoringTool(RefactoringTool):
    def log_error(self, msg, *args, **kw):
        log.error(msg, *args)

    def log_message(self, msg, *args):
        log.info(msg, *args)

    def log_debug(self, msg, *args):
        log.debug(msg, *args)


class Mixin2to3(_Mixin2to3):
    def run_2to3(self, files, doctests=False):
        # See if the distribution option has been set, otherwise check the
        # setuptools default.
        if self.distribution.use_2to3 is not True:
            return
        if not files:
            return
        log.info("Fixing " + " ".join(files))
        self.__build_fixer_names()
        self.__exclude_fixers()
        if doctests:
            if setuptools.run_2to3_on_doctests:
                r = DistutilsRefactoringTool(self.fixer_names)
                r.refactor(files, write=True, doctests_only=True)
        else:
            _Mixin2to3.run_2to3(self, files)

    def __build_fixer_names(self):
        if self.fixer_names:
            return
        self.fixer_names = []
        for p in setuptools.lib2to3_fixer_packages:
            self.fixer_names.extend(get_fixers_from_package(p))
        if self.distribution.use_2to3_fixers is not None:
            for p in self.distribution.use_2to3_fixers:
                self.fixer_names.extend(get_fixers_from_package(p))

    def __exclude_fixers(self):
        excluded_fixers = getattr(self, 'exclude_fixers', [])
        if self.distribution.use_2to3_exclude_fixers is not None:
            excluded_fixers.extend(self.distribution.use_2to3_exclude_fixers)
        for fixer_name in excluded_fixers:
            if fixer_name in self.fixer_names:
                self.fixer_names.remove(fixer_name)
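# Illustrative sketch (not from the original source): the fixer names
# assembled above come from lib2to3.refactor.get_fixers_from_package, e.g.
#
#   get_fixers_from_package('lib2to3.fixes')
#   # -> dotted names such as 'lib2to3.fixes.fix_print'
#
# __exclude_fixers() then filters that list by exact name.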
""" if platform.python_implementation() == "Jython": return (cls,) + cls.__bases__ return inspect.getmro(cls) def get_unpatched(item): lookup = ( get_unpatched_class if isinstance(item, six.class_types) else get_unpatched_function if isinstance(item, types.FunctionType) else lambda item: None ) return lookup(item) def get_unpatched_class(cls): """Protect against re-patching the distutils if reloaded Also ensures that no other distutils extension monkeypatched the distutils first. """ external_bases = ( cls for cls in _get_mro(cls) if not cls.__module__.startswith('setuptools') ) base = next(external_bases) if not base.__module__.startswith('distutils'): msg = "distutils has already been patched by %r" % cls raise AssertionError(msg) return base def patch_all(): # we can't patch distutils.cmd, alas distutils.core.Command = setuptools.Command has_issue_12885 = sys.version_info <= (3, 5, 3) if has_issue_12885: # fix findall bug in distutils (http://bugs.python.org/issue12885) distutils.filelist.findall = setuptools.findall needs_warehouse = ( sys.version_info < (2, 7, 13) or (3, 4) < sys.version_info < (3, 4, 6) or (3, 5) < sys.version_info <= (3, 5, 3) ) if needs_warehouse: warehouse = 'https://upload.pypi.org/legacy/' distutils.config.PyPIRCCommand.DEFAULT_REPOSITORY = warehouse _patch_distribution_metadata() # Install Distribution throughout the distutils for module in distutils.dist, distutils.core, distutils.cmd: module.Distribution = setuptools.dist.Distribution # Install the patched Extension distutils.core.Extension = setuptools.extension.Extension distutils.extension.Extension = setuptools.extension.Extension if 'distutils.command.build_ext' in sys.modules: sys.modules['distutils.command.build_ext'].Extension = ( setuptools.extension.Extension ) patch_for_msvc_specialized_compiler() def _patch_distribution_metadata(): """Patch write_pkg_file and read_pkg_file for higher metadata standards""" for attr in ('write_pkg_file', 'read_pkg_file', 'get_metadata_version'): new_val = getattr(setuptools.dist, attr) setattr(distutils.dist.DistributionMetadata, attr, new_val) def patch_func(replacement, target_mod, func_name): """ Patch func_name in target_mod with replacement Important - original must be resolved by name to avoid patching an already patched function. """ original = getattr(target_mod, func_name) # set the 'unpatched' attribute on the replacement to # point to the original. vars(replacement).setdefault('unpatched', original) # replace the function in the original module setattr(target_mod, func_name, replacement) def get_unpatched_function(candidate): return getattr(candidate, 'unpatched') def patch_for_msvc_specialized_compiler(): """ Patch functions in distutils to use standalone Microsoft Visual C++ compilers. """ # import late to avoid circular imports on Python < 3.5 msvc = import_module('setuptools.msvc') if platform.system() != 'Windows': # Compilers only availables on Microsoft Windows return def patch_params(mod_name, func_name): """ Prepare the parameters for patch_func to patch indicated function. 
""" repl_prefix = 'msvc9_' if 'msvc9' in mod_name else 'msvc14_' repl_name = repl_prefix + func_name.lstrip('_') repl = getattr(msvc, repl_name) mod = import_module(mod_name) if not hasattr(mod, func_name): raise ImportError(func_name) return repl, mod, func_name # Python 2.7 to 3.4 msvc9 = functools.partial(patch_params, 'distutils.msvc9compiler') # Python 3.5+ msvc14 = functools.partial(patch_params, 'distutils._msvccompiler') try: # Patch distutils.msvc9compiler patch_func(*msvc9('find_vcvarsall')) patch_func(*msvc9('query_vcvarsall')) except ImportError: pass try: # Patch distutils._msvccompiler._get_vc_env patch_func(*msvc14('_get_vc_env')) except ImportError: pass try: # Patch distutils._msvccompiler.gen_lib_options for Numpy patch_func(*msvc14('gen_lib_options')) except ImportError: pass ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������././@PaxHeader��������������������������������������������������������������������������������������0000000�0000000�0000000�00000000026�00000000000�011453� x����������������������������������������������������������������������������������������������������ustar�00����������������������������������������������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������22 mtime=1564430189.0 ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/msvc.py�����������������������������������0000644�0001750�0001750�00000117606�00000000000�026017� 0����������������������������������������������������������������������������������������������������ustar�00vincent�������������������������vincent�������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������""" Improved support for Microsoft Visual C++ compilers. 
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/msvc.py ----
"""
Improved support for Microsoft Visual C++ compilers.

Known supported compilers:
--------------------------
Microsoft Visual C++ 9.0:
    Microsoft Visual C++ Compiler for Python 2.7 (x86, amd64)
    Microsoft Windows SDK 6.1 (x86, x64, ia64)
    Microsoft Windows SDK 7.0 (x86, x64, ia64)

Microsoft Visual C++ 10.0:
    Microsoft Windows SDK 7.1 (x86, x64, ia64)

Microsoft Visual C++ 14.0:
    Microsoft Visual C++ Build Tools 2015 (x86, x64, arm)
    Microsoft Visual Studio 2017 (x86, x64, arm, arm64)
    Microsoft Visual Studio Build Tools 2017 (x86, x64, arm, arm64)
"""

import os
import sys
import platform
import itertools
import distutils.errors
from setuptools.extern.packaging.version import LegacyVersion

from setuptools.extern.six.moves import filterfalse

from .monkey import get_unpatched

if platform.system() == 'Windows':
    from setuptools.extern.six.moves import winreg
    safe_env = os.environ
else:
    """
    Mock winreg and environ so the module can be imported on this platform.
    """

    class winreg:
        HKEY_USERS = None
        HKEY_CURRENT_USER = None
        HKEY_LOCAL_MACHINE = None
        HKEY_CLASSES_ROOT = None

    safe_env = dict()

_msvc9_suppress_errors = (
    # msvc9compiler isn't available on some platforms
    ImportError,

    # msvc9compiler raises DistutilsPlatformError in some
    # environments. See #1118.
    distutils.errors.DistutilsPlatformError,
)

try:
    from distutils.msvc9compiler import Reg
except _msvc9_suppress_errors:
    pass


def msvc9_find_vcvarsall(version):
    """
    Patched "distutils.msvc9compiler.find_vcvarsall" to use the standalone
    compiler build for Python (VCForPython). Fall back to the original
    behavior when the standalone compiler is not available.

    Redirect the path of "vcvarsall.bat".

    Known supported compilers
    -------------------------
    Microsoft Visual C++ 9.0:
        Microsoft Visual C++ Compiler for Python 2.7 (x86, amd64)

    Parameters
    ----------
    version: float
        Required Microsoft Visual C++ version.

    Return
    ------
    vcvarsall.bat path: str
    """
    VC_BASE = r'Software\%sMicrosoft\DevDiv\VCForPython\%0.1f'
    key = VC_BASE % ('', version)
    try:
        # Per-user installs register the compiler path here
        productdir = Reg.get_value(key, "installdir")
    except KeyError:
        try:
            # All-user installs on a 64-bit system register here
            key = VC_BASE % ('Wow6432Node\\', version)
            productdir = Reg.get_value(key, "installdir")
        except KeyError:
            productdir = None

    if productdir:
        vcvarsall = os.path.join(productdir, "vcvarsall.bat")
        if os.path.isfile(vcvarsall):
            return vcvarsall

    return get_unpatched(msvc9_find_vcvarsall)(version)
def msvc9_query_vcvarsall(ver, arch='x86', *args, **kwargs):
    """
    Patched "distutils.msvc9compiler.query_vcvarsall" to support extra
    compilers.

    Set environment without use of "vcvarsall.bat".

    Known supported compilers
    -------------------------
    Microsoft Visual C++ 9.0:
        Microsoft Visual C++ Compiler for Python 2.7 (x86, amd64)
        Microsoft Windows SDK 6.1 (x86, x64, ia64)
        Microsoft Windows SDK 7.0 (x86, x64, ia64)

    Microsoft Visual C++ 10.0:
        Microsoft Windows SDK 7.1 (x86, x64, ia64)

    Parameters
    ----------
    ver: float
        Required Microsoft Visual C++ version.
    arch: str
        Target architecture.

    Return
    ------
    environment: dict
    """
    # Try to get environment from vcvarsall.bat (classical way)
    try:
        orig = get_unpatched(msvc9_query_vcvarsall)
        return orig(ver, arch, *args, **kwargs)
    except distutils.errors.DistutilsPlatformError:
        # Pass error if vcvarsall.bat is missing
        pass
    except ValueError:
        # Pass error if environment not set after executing vcvarsall.bat
        pass

    # If error, try to set environment directly
    try:
        return EnvironmentInfo(arch, ver).return_env()
    except distutils.errors.DistutilsPlatformError as exc:
        _augment_exception(exc, ver, arch)
        raise


def msvc14_get_vc_env(plat_spec):
    """
    Patched "distutils._msvccompiler._get_vc_env" to support extra compilers.

    Set environment without use of "vcvarsall.bat".

    Known supported compilers
    -------------------------
    Microsoft Visual C++ 14.0:
        Microsoft Visual C++ Build Tools 2015 (x86, x64, arm)
        Microsoft Visual Studio 2017 (x86, x64, arm, arm64)
        Microsoft Visual Studio Build Tools 2017 (x86, x64, arm, arm64)

    Parameters
    ----------
    plat_spec: str
        Target architecture.

    Return
    ------
    environment: dict
    """
    # Try to get environment from vcvarsall.bat (classical way)
    try:
        return get_unpatched(msvc14_get_vc_env)(plat_spec)
    except distutils.errors.DistutilsPlatformError:
        # Pass error if vcvarsall.bat is missing
        pass

    # If error, try to set environment directly
    try:
        return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env()
    except distutils.errors.DistutilsPlatformError as exc:
        _augment_exception(exc, 14.0)
        raise


def msvc14_gen_lib_options(*args, **kwargs):
    """
    Patched "distutils._msvccompiler.gen_lib_options" to fix compatibility
    between "numpy.distutils" and "distutils._msvccompiler"
    (for Numpy < 1.11.2)
    """
    if "numpy.distutils" in sys.modules:
        import numpy as np
        if LegacyVersion(np.__version__) < LegacyVersion('1.11.2'):
            return np.distutils.ccompiler.gen_lib_options(*args, **kwargs)
    return get_unpatched(msvc14_gen_lib_options)(*args, **kwargs)


def _augment_exception(exc, version, arch=''):
    """
    Add details to the exception message to help guide the user
    as to what action will resolve it.
    """
    # Error if MSVC++ directory not found or environment not set
    message = exc.args[0]

    if "vcvarsall" in message.lower() or "visual c" in message.lower():
        # Special error message if MSVC++ not installed
        tmpl = 'Microsoft Visual C++ {version:0.1f} is required.'
        message = tmpl.format(**locals())
        msdownload = 'www.microsoft.com/download/details.aspx?id=%d'
        if version == 9.0:
            if arch.lower().find('ia64') > -1:
                # For VC++ 9.0, if IA64 support is needed, redirect the user
                # to Windows SDK 7.0.
                message += ' Get it with "Microsoft Windows SDK 7.0": '
                message += msdownload % 3138
            else:
                # For VC++ 9.0 redirect the user to VC++ for Python 2.7:
                # This redirection link is maintained by Microsoft.
                # Contact vspython@microsoft.com if it needs updating.
                message += ' Get it from http://aka.ms/vcpython27'
        elif version == 10.0:
            # For VC++ 10.0 redirect the user to Windows SDK 7.1.
            message += ' Get it with "Microsoft Windows SDK 7.1": '
            message += msdownload % 8279
        elif version >= 14.0:
            # For VC++ 14.0 redirect the user to Visual C++ Build Tools.
            message += (' Get it with "Microsoft Visual C++ Build Tools": '
                        r'https://visualstudio.microsoft.com/downloads/')
    exc.args = (message,)
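# Illustrative sketch (not from the original source): both patched entry
# points share the same fallback chain -- first the stock vcvarsall.bat
# lookup, then direct registry/filesystem discovery via EnvironmentInfo.
# E.g. (Windows only, hypothetical result):
#
#   env = msvc14_get_vc_env('x64')
#   env['path']   # toolchain directories even when vcvarsall.bat is absent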
""" current_cpu = safe_env.get('processor_architecture', '').lower() def __init__(self, arch): self.arch = arch.lower().replace('x64', 'amd64') @property def target_cpu(self): return self.arch[self.arch.find('_') + 1:] def target_is_x86(self): return self.target_cpu == 'x86' def current_is_x86(self): return self.current_cpu == 'x86' def current_dir(self, hidex86=False, x64=False): """ Current platform specific subfolder. Parameters ---------- hidex86: bool return '' and not '\x86' if architecture is x86. x64: bool return '\x64' and not '\amd64' if architecture is amd64. Return ------ subfolder: str '\target', or '' (see hidex86 parameter) """ return ( '' if (self.current_cpu == 'x86' and hidex86) else r'\x64' if (self.current_cpu == 'amd64' and x64) else r'\%s' % self.current_cpu ) def target_dir(self, hidex86=False, x64=False): r""" Target platform specific subfolder. Parameters ---------- hidex86: bool return '' and not '\x86' if architecture is x86. x64: bool return '\x64' and not '\amd64' if architecture is amd64. Return ------ subfolder: str '\current', or '' (see hidex86 parameter) """ return ( '' if (self.target_cpu == 'x86' and hidex86) else r'\x64' if (self.target_cpu == 'amd64' and x64) else r'\%s' % self.target_cpu ) def cross_dir(self, forcex86=False): r""" Cross platform specific subfolder. Parameters ---------- forcex86: bool Use 'x86' as current architecture even if current acritecture is not x86. Return ------ subfolder: str '' if target architecture is current architecture, '\current_target' if not. """ current = 'x86' if forcex86 else self.current_cpu return ( '' if self.target_cpu == current else self.target_dir().replace('\\', '\\%s_' % current) ) class RegistryInfo: """ Microsoft Visual Studio related registry informations. Parameters ---------- platform_info: PlatformInfo "PlatformInfo" instance. """ HKEYS = (winreg.HKEY_USERS, winreg.HKEY_CURRENT_USER, winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CLASSES_ROOT) def __init__(self, platform_info): self.pi = platform_info @property def visualstudio(self): """ Microsoft Visual Studio root registry key. """ return 'VisualStudio' @property def sxs(self): """ Microsoft Visual Studio SxS registry key. """ return os.path.join(self.visualstudio, 'SxS') @property def vc(self): """ Microsoft Visual C++ VC7 registry key. """ return os.path.join(self.sxs, 'VC7') @property def vs(self): """ Microsoft Visual Studio VS7 registry key. """ return os.path.join(self.sxs, 'VS7') @property def vc_for_python(self): """ Microsoft Visual C++ for Python registry key. """ return r'DevDiv\VCForPython' @property def microsoft_sdk(self): """ Microsoft SDK registry key. """ return 'Microsoft SDKs' @property def windows_sdk(self): """ Microsoft Windows/Platform SDK registry key. """ return os.path.join(self.microsoft_sdk, 'Windows') @property def netfx_sdk(self): """ Microsoft .NET Framework SDK registry key. """ return os.path.join(self.microsoft_sdk, 'NETFXSDK') @property def windows_kits_roots(self): """ Microsoft Windows Kits Roots registry key. """ return r'Windows Kits\Installed Roots' def microsoft(self, key, x86=False): """ Return key in Microsoft software registry. Parameters ---------- key: str Registry key path where look. x86: str Force x86 software registry. Return ------ str: value """ node64 = '' if self.pi.current_is_x86() or x86 else 'Wow6432Node' return os.path.join('Software', node64, 'Microsoft', key) def lookup(self, key, name): """ Look for values in registry in Microsoft software registry. 
class RegistryInfo:
    """
    Microsoft Visual Studio related registry information.

    Parameters
    ----------
    platform_info: PlatformInfo
        "PlatformInfo" instance.
    """
    HKEYS = (winreg.HKEY_USERS,
             winreg.HKEY_CURRENT_USER,
             winreg.HKEY_LOCAL_MACHINE,
             winreg.HKEY_CLASSES_ROOT)

    def __init__(self, platform_info):
        self.pi = platform_info

    @property
    def visualstudio(self):
        """
        Microsoft Visual Studio root registry key.
        """
        return 'VisualStudio'

    @property
    def sxs(self):
        """
        Microsoft Visual Studio SxS registry key.
        """
        return os.path.join(self.visualstudio, 'SxS')

    @property
    def vc(self):
        """
        Microsoft Visual C++ VC7 registry key.
        """
        return os.path.join(self.sxs, 'VC7')

    @property
    def vs(self):
        """
        Microsoft Visual Studio VS7 registry key.
        """
        return os.path.join(self.sxs, 'VS7')

    @property
    def vc_for_python(self):
        """
        Microsoft Visual C++ for Python registry key.
        """
        return r'DevDiv\VCForPython'

    @property
    def microsoft_sdk(self):
        """
        Microsoft SDK registry key.
        """
        return 'Microsoft SDKs'

    @property
    def windows_sdk(self):
        """
        Microsoft Windows/Platform SDK registry key.
        """
        return os.path.join(self.microsoft_sdk, 'Windows')

    @property
    def netfx_sdk(self):
        """
        Microsoft .NET Framework SDK registry key.
        """
        return os.path.join(self.microsoft_sdk, 'NETFXSDK')

    @property
    def windows_kits_roots(self):
        """
        Microsoft Windows Kits Roots registry key.
        """
        return r'Windows Kits\Installed Roots'

    def microsoft(self, key, x86=False):
        """
        Return key in Microsoft software registry.

        Parameters
        ----------
        key: str
            Registry key path to look in.
        x86: str
            Force x86 software registry.

        Return
        ------
        str: value
        """
        node64 = '' if self.pi.current_is_x86() or x86 else 'Wow6432Node'
        return os.path.join('Software', node64, 'Microsoft', key)

    def lookup(self, key, name):
        """
        Look for values in the Microsoft software registry.

        Parameters
        ----------
        key: str
            Registry key path to look in.
        name: str
            Value name to find.

        Return
        ------
        str: value
        """
        KEY_READ = winreg.KEY_READ
        openkey = winreg.OpenKey
        ms = self.microsoft
        for hkey in self.HKEYS:
            try:
                bkey = openkey(hkey, ms(key), 0, KEY_READ)
            except (OSError, IOError):
                if not self.pi.current_is_x86():
                    try:
                        bkey = openkey(hkey, ms(key, True), 0, KEY_READ)
                    except (OSError, IOError):
                        continue
                else:
                    continue
            try:
                return winreg.QueryValueEx(bkey, name)[0]
            except (OSError, IOError):
                pass
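# Illustrative sketch (not from the original source; Windows only,
# hypothetical result): resolving an install root across the usual hives,
# with the Wow6432Node fallback handled inside lookup():
#
#   ri = RegistryInfo(PlatformInfo('x86'))
#   ri.lookup(ri.vs, '14.0')
#   # -> e.g. 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\'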
""" self.VSInstallDir guess_vc = self._guess_vc() or self._guess_vc_legacy() # Try to get "VC++ for Python" path from registry as default path reg_path = os.path.join(self.ri.vc_for_python, '%0.1f' % self.vc_ver) python_vc = self.ri.lookup(reg_path, 'installdir') default_vc = os.path.join(python_vc, 'VC') if python_vc else guess_vc # Try to get path from registry, if fail use default path path = self.ri.lookup(self.ri.vc, '%0.1f' % self.vc_ver) or default_vc if not os.path.isdir(path): msg = 'Microsoft Visual C++ directory not found' raise distutils.errors.DistutilsPlatformError(msg) return path def _guess_vc(self): """ Locate Visual C for 2017 """ if self.vc_ver <= 14.0: return default = r'VC\Tools\MSVC' guess_vc = os.path.join(self.VSInstallDir, default) # Subdir with VC exact version as name try: vc_exact_ver = os.listdir(guess_vc)[-1] return os.path.join(guess_vc, vc_exact_ver) except (OSError, IOError, IndexError): pass def _guess_vc_legacy(self): """ Locate Visual C for versions prior to 2017 """ default = r'Microsoft Visual Studio %0.1f\VC' % self.vc_ver return os.path.join(self.ProgramFilesx86, default) @property def WindowsSdkVersion(self): """ Microsoft Windows SDK versions for specified MSVC++ version. """ if self.vc_ver <= 9.0: return ('7.0', '6.1', '6.0a') elif self.vc_ver == 10.0: return ('7.1', '7.0a') elif self.vc_ver == 11.0: return ('8.0', '8.0a') elif self.vc_ver == 12.0: return ('8.1', '8.1a') elif self.vc_ver >= 14.0: return ('10.0', '8.1') @property def WindowsSdkLastVersion(self): """ Microsoft Windows SDK last version """ return self._use_last_dir_name(os.path.join( self.WindowsSdkDir, 'lib')) @property def WindowsSdkDir(self): """ Microsoft Windows SDK directory. """ sdkdir = '' for ver in self.WindowsSdkVersion: # Try to get it from registry loc = os.path.join(self.ri.windows_sdk, 'v%s' % ver) sdkdir = self.ri.lookup(loc, 'installationfolder') if sdkdir: break if not sdkdir or not os.path.isdir(sdkdir): # Try to get "VC++ for Python" version from registry path = os.path.join(self.ri.vc_for_python, '%0.1f' % self.vc_ver) install_base = self.ri.lookup(path, 'installdir') if install_base: sdkdir = os.path.join(install_base, 'WinSDK') if not sdkdir or not os.path.isdir(sdkdir): # If fail, use default new path for ver in self.WindowsSdkVersion: intver = ver[:ver.rfind('.')] path = r'Microsoft SDKs\Windows Kits\%s' % (intver) d = os.path.join(self.ProgramFiles, path) if os.path.isdir(d): sdkdir = d if not sdkdir or not os.path.isdir(sdkdir): # If fail, use default old path for ver in self.WindowsSdkVersion: path = r'Microsoft SDKs\Windows\v%s' % ver d = os.path.join(self.ProgramFiles, path) if os.path.isdir(d): sdkdir = d if not sdkdir: # If fail, use Platform SDK sdkdir = os.path.join(self.VCInstallDir, 'PlatformSDK') return sdkdir @property def WindowsSDKExecutablePath(self): """ Microsoft Windows SDK executable directory. 
""" # Find WinSDK NetFx Tools registry dir name if self.vc_ver <= 11.0: netfxver = 35 arch = '' else: netfxver = 40 hidex86 = True if self.vc_ver <= 12.0 else False arch = self.pi.current_dir(x64=True, hidex86=hidex86) fx = 'WinSDK-NetFx%dTools%s' % (netfxver, arch.replace('\\', '-')) # liste all possibles registry paths regpaths = [] if self.vc_ver >= 14.0: for ver in self.NetFxSdkVersion: regpaths += [os.path.join(self.ri.netfx_sdk, ver, fx)] for ver in self.WindowsSdkVersion: regpaths += [os.path.join(self.ri.windows_sdk, 'v%sA' % ver, fx)] # Return installation folder from the more recent path for path in regpaths: execpath = self.ri.lookup(path, 'installationfolder') if execpath: break return execpath @property def FSharpInstallDir(self): """ Microsoft Visual F# directory. """ path = r'%0.1f\Setup\F#' % self.vc_ver path = os.path.join(self.ri.visualstudio, path) return self.ri.lookup(path, 'productdir') or '' @property def UniversalCRTSdkDir(self): """ Microsoft Universal CRT SDK directory. """ # Set Kit Roots versions for specified MSVC++ version if self.vc_ver >= 14.0: vers = ('10', '81') else: vers = () # Find path of the more recent Kit for ver in vers: sdkdir = self.ri.lookup(self.ri.windows_kits_roots, 'kitsroot%s' % ver) if sdkdir: break return sdkdir or '' @property def UniversalCRTSdkLastVersion(self): """ Microsoft Universal C Runtime SDK last version """ return self._use_last_dir_name(os.path.join( self.UniversalCRTSdkDir, 'lib')) @property def NetFxSdkVersion(self): """ Microsoft .NET Framework SDK versions. """ # Set FxSdk versions for specified MSVC++ version if self.vc_ver >= 14.0: return ('4.6.1', '4.6') else: return () @property def NetFxSdkDir(self): """ Microsoft .NET Framework SDK directory. """ for ver in self.NetFxSdkVersion: loc = os.path.join(self.ri.netfx_sdk, ver) sdkdir = self.ri.lookup(loc, 'kitsinstallationfolder') if sdkdir: break return sdkdir or '' @property def FrameworkDir32(self): """ Microsoft .NET Framework 32bit directory. """ # Default path guess_fw = os.path.join(self.WinDir, r'Microsoft.NET\Framework') # Try to get path from registry, if fail use default path return self.ri.lookup(self.ri.vc, 'frameworkdir32') or guess_fw @property def FrameworkDir64(self): """ Microsoft .NET Framework 64bit directory. """ # Default path guess_fw = os.path.join(self.WinDir, r'Microsoft.NET\Framework64') # Try to get path from registry, if fail use default path return self.ri.lookup(self.ri.vc, 'frameworkdir64') or guess_fw @property def FrameworkVersion32(self): """ Microsoft .NET Framework 32bit versions. """ return self._find_dot_net_versions(32) @property def FrameworkVersion64(self): """ Microsoft .NET Framework 64bit versions. """ return self._find_dot_net_versions(64) def _find_dot_net_versions(self, bits): """ Find Microsoft .NET Framework versions. Parameters ---------- bits: int Platform number of bits: 32 or 64. 
""" # Find actual .NET version in registry reg_ver = self.ri.lookup(self.ri.vc, 'frameworkver%d' % bits) dot_net_dir = getattr(self, 'FrameworkDir%d' % bits) ver = reg_ver or self._use_last_dir_name(dot_net_dir, 'v') or '' # Set .NET versions for specified MSVC++ version if self.vc_ver >= 12.0: frameworkver = (ver, 'v4.0') elif self.vc_ver >= 10.0: frameworkver = ('v4.0.30319' if ver.lower()[:2] != 'v4' else ver, 'v3.5') elif self.vc_ver == 9.0: frameworkver = ('v3.5', 'v2.0.50727') if self.vc_ver == 8.0: frameworkver = ('v3.0', 'v2.0.50727') return frameworkver def _use_last_dir_name(self, path, prefix=''): """ Return name of the last dir in path or '' if no dir found. Parameters ---------- path: str Use dirs in this path prefix: str Use only dirs startings by this prefix """ matching_dirs = ( dir_name for dir_name in reversed(os.listdir(path)) if os.path.isdir(os.path.join(path, dir_name)) and dir_name.startswith(prefix) ) return next(matching_dirs, None) or '' class EnvironmentInfo: """ Return environment variables for specified Microsoft Visual C++ version and platform : Lib, Include, Path and libpath. This function is compatible with Microsoft Visual C++ 9.0 to 14.0. Script created by analysing Microsoft environment configuration files like "vcvars[...].bat", "SetEnv.Cmd", "vcbuildtools.bat", ... Parameters ---------- arch: str Target architecture. vc_ver: float Required Microsoft Visual C++ version. If not set, autodetect the last version. vc_min_ver: float Minimum Microsoft Visual C++ version. """ # Variables and properties in this class use originals CamelCase variables # names from Microsoft source files for more easy comparaison. def __init__(self, arch, vc_ver=None, vc_min_ver=0): self.pi = PlatformInfo(arch) self.ri = RegistryInfo(self.pi) self.si = SystemInfo(self.ri, vc_ver) if self.vc_ver < vc_min_ver: err = 'No suitable Microsoft Visual C++ version found' raise distutils.errors.DistutilsPlatformError(err) @property def vc_ver(self): """ Microsoft Visual C++ version. 
""" return self.si.vc_ver @property def VSTools(self): """ Microsoft Visual Studio Tools """ paths = [r'Common7\IDE', r'Common7\Tools'] if self.vc_ver >= 14.0: arch_subdir = self.pi.current_dir(hidex86=True, x64=True) paths += [r'Common7\IDE\CommonExtensions\Microsoft\TestWindow'] paths += [r'Team Tools\Performance Tools'] paths += [r'Team Tools\Performance Tools%s' % arch_subdir] return [os.path.join(self.si.VSInstallDir, path) for path in paths] @property def VCIncludes(self): """ Microsoft Visual C++ & Microsoft Foundation Class Includes """ return [os.path.join(self.si.VCInstallDir, 'Include'), os.path.join(self.si.VCInstallDir, r'ATLMFC\Include')] @property def VCLibraries(self): """ Microsoft Visual C++ & Microsoft Foundation Class Libraries """ if self.vc_ver >= 15.0: arch_subdir = self.pi.target_dir(x64=True) else: arch_subdir = self.pi.target_dir(hidex86=True) paths = ['Lib%s' % arch_subdir, r'ATLMFC\Lib%s' % arch_subdir] if self.vc_ver >= 14.0: paths += [r'Lib\store%s' % arch_subdir] return [os.path.join(self.si.VCInstallDir, path) for path in paths] @property def VCStoreRefs(self): """ Microsoft Visual C++ store references Libraries """ if self.vc_ver < 14.0: return [] return [os.path.join(self.si.VCInstallDir, r'Lib\store\references')] @property def VCTools(self): """ Microsoft Visual C++ Tools """ si = self.si tools = [os.path.join(si.VCInstallDir, 'VCPackages')] forcex86 = True if self.vc_ver <= 10.0 else False arch_subdir = self.pi.cross_dir(forcex86) if arch_subdir: tools += [os.path.join(si.VCInstallDir, 'Bin%s' % arch_subdir)] if self.vc_ver == 14.0: path = 'Bin%s' % self.pi.current_dir(hidex86=True) tools += [os.path.join(si.VCInstallDir, path)] elif self.vc_ver >= 15.0: host_dir = (r'bin\HostX86%s' if self.pi.current_is_x86() else r'bin\HostX64%s') tools += [os.path.join( si.VCInstallDir, host_dir % self.pi.target_dir(x64=True))] if self.pi.current_cpu != self.pi.target_cpu: tools += [os.path.join( si.VCInstallDir, host_dir % self.pi.current_dir(x64=True))] else: tools += [os.path.join(si.VCInstallDir, 'Bin')] return tools @property def OSLibraries(self): """ Microsoft Windows SDK Libraries """ if self.vc_ver <= 10.0: arch_subdir = self.pi.target_dir(hidex86=True, x64=True) return [os.path.join(self.si.WindowsSdkDir, 'Lib%s' % arch_subdir)] else: arch_subdir = self.pi.target_dir(x64=True) lib = os.path.join(self.si.WindowsSdkDir, 'lib') libver = self._sdk_subdir return [os.path.join(lib, '%sum%s' % (libver , arch_subdir))] @property def OSIncludes(self): """ Microsoft Windows SDK Include """ include = os.path.join(self.si.WindowsSdkDir, 'include') if self.vc_ver <= 10.0: return [include, os.path.join(include, 'gl')] else: if self.vc_ver >= 14.0: sdkver = self._sdk_subdir else: sdkver = '' return [os.path.join(include, '%sshared' % sdkver), os.path.join(include, '%sum' % sdkver), os.path.join(include, '%swinrt' % sdkver)] @property def OSLibpath(self): """ Microsoft Windows SDK Libraries Paths """ ref = os.path.join(self.si.WindowsSdkDir, 'References') libpath = [] if self.vc_ver <= 9.0: libpath += self.OSLibraries if self.vc_ver >= 11.0: libpath += [os.path.join(ref, r'CommonConfiguration\Neutral')] if self.vc_ver >= 14.0: libpath += [ ref, os.path.join(self.si.WindowsSdkDir, 'UnionMetadata'), os.path.join( ref, 'Windows.Foundation.UniversalApiContract', '1.0.0.0', ), os.path.join( ref, 'Windows.Foundation.FoundationContract', '1.0.0.0', ), os.path.join( ref, 'Windows.Networking.Connectivity.WwanContract', '1.0.0.0', ), os.path.join( self.si.WindowsSdkDir, 
    @property
    def OSLibpath(self):
        """
        Microsoft Windows SDK Libraries Paths.
        """
        ref = os.path.join(self.si.WindowsSdkDir, 'References')
        libpath = []

        if self.vc_ver <= 9.0:
            libpath += self.OSLibraries

        if self.vc_ver >= 11.0:
            libpath += [os.path.join(ref, r'CommonConfiguration\Neutral')]

        if self.vc_ver >= 14.0:
            libpath += [
                ref,
                os.path.join(self.si.WindowsSdkDir, 'UnionMetadata'),
                os.path.join(
                    ref,
                    'Windows.Foundation.UniversalApiContract',
                    '1.0.0.0',
                ),
                os.path.join(
                    ref,
                    'Windows.Foundation.FoundationContract',
                    '1.0.0.0',
                ),
                os.path.join(
                    ref,
                    'Windows.Networking.Connectivity.WwanContract',
                    '1.0.0.0',
                ),
                os.path.join(
                    self.si.WindowsSdkDir,
                    'ExtensionSDKs',
                    'Microsoft.VCLibs',
                    '%0.1f' % self.vc_ver,
                    'References',
                    'CommonConfiguration',
                    'neutral',
                ),
            ]
        return libpath

    @property
    def SdkTools(self):
        """
        Microsoft Windows SDK Tools.
        """
        return list(self._sdk_tools())

    def _sdk_tools(self):
        """
        Microsoft Windows SDK Tools paths generator.
        """
        if self.vc_ver < 15.0:
            bin_dir = 'Bin' if self.vc_ver <= 11.0 else r'Bin\x86'
            yield os.path.join(self.si.WindowsSdkDir, bin_dir)

        if not self.pi.current_is_x86():
            arch_subdir = self.pi.current_dir(x64=True)
            path = 'Bin%s' % arch_subdir
            yield os.path.join(self.si.WindowsSdkDir, path)

        if self.vc_ver == 10.0 or self.vc_ver == 11.0:
            if self.pi.target_is_x86():
                arch_subdir = ''
            else:
                arch_subdir = self.pi.current_dir(hidex86=True, x64=True)
            path = r'Bin\NETFX 4.0 Tools%s' % arch_subdir
            yield os.path.join(self.si.WindowsSdkDir, path)

        elif self.vc_ver >= 15.0:
            path = os.path.join(self.si.WindowsSdkDir, 'Bin')
            arch_subdir = self.pi.current_dir(x64=True)
            sdkver = self.si.WindowsSdkLastVersion
            yield os.path.join(path, '%s%s' % (sdkver, arch_subdir))

        if self.si.WindowsSDKExecutablePath:
            yield self.si.WindowsSDKExecutablePath

    @property
    def _sdk_subdir(self):
        """
        Microsoft Windows SDK version subdir.
        """
        ucrtver = self.si.WindowsSdkLastVersion
        return ('%s\\' % ucrtver) if ucrtver else ''

    @property
    def SdkSetup(self):
        """
        Microsoft Windows SDK Setup.
        """
        if self.vc_ver > 9.0:
            return []

        return [os.path.join(self.si.WindowsSdkDir, 'Setup')]

    @property
    def FxTools(self):
        """
        Microsoft .NET Framework Tools.
        """
        pi = self.pi
        si = self.si

        if self.vc_ver <= 10.0:
            include32 = True
            include64 = not pi.target_is_x86() and not pi.current_is_x86()
        else:
            include32 = pi.target_is_x86() or pi.current_is_x86()
            include64 = pi.current_cpu == 'amd64' or pi.target_cpu == 'amd64'

        tools = []
        if include32:
            tools += [os.path.join(si.FrameworkDir32, ver)
                      for ver in si.FrameworkVersion32]
        if include64:
            tools += [os.path.join(si.FrameworkDir64, ver)
                      for ver in si.FrameworkVersion64]
        return tools

    @property
    def NetFxSDKLibraries(self):
        """
        Microsoft .NET Framework SDK Libraries.
        """
        if self.vc_ver < 14.0 or not self.si.NetFxSdkDir:
            return []

        arch_subdir = self.pi.target_dir(x64=True)
        return [os.path.join(self.si.NetFxSdkDir, r'lib\um%s' % arch_subdir)]

    @property
    def NetFxSDKIncludes(self):
        """
        Microsoft .NET Framework SDK Includes.
        """
        if self.vc_ver < 14.0 or not self.si.NetFxSdkDir:
            return []

        return [os.path.join(self.si.NetFxSdkDir, r'include\um')]

    @property
    def VsTDb(self):
        """
        Microsoft Visual Studio Team System Database.
        """
        return [os.path.join(self.si.VSInstallDir, r'VSTSDB\Deploy')]

    @property
    def MSBuild(self):
        """
        Microsoft Build Engine.
        """
        if self.vc_ver < 12.0:
            return []
        elif self.vc_ver < 15.0:
            base_path = self.si.ProgramFilesx86
            arch_subdir = self.pi.current_dir(hidex86=True)
        else:
            base_path = self.si.VSInstallDir
            arch_subdir = ''

        path = r'MSBuild\%0.1f\bin%s' % (self.vc_ver, arch_subdir)
        build = [os.path.join(base_path, path)]

        if self.vc_ver >= 15.0:
            # Add Roslyn C# & Visual Basic Compiler
            build += [os.path.join(base_path, path, 'Roslyn')]

        return build

    @property
    def HTMLHelpWorkshop(self):
        """
        Microsoft HTML Help Workshop.
        """
        if self.vc_ver < 11.0:
            return []

        return [os.path.join(self.si.ProgramFilesx86, 'HTML Help Workshop')]

    @property
    def UCRTLibraries(self):
        """
        Microsoft Universal C Runtime SDK Libraries.
        """
        if self.vc_ver < 14.0:
            return []

        arch_subdir = self.pi.target_dir(x64=True)
        lib = os.path.join(self.si.UniversalCRTSdkDir, 'lib')
        ucrtver = self._ucrt_subdir
        return [os.path.join(lib, '%sucrt%s' % (ucrtver, arch_subdir))]
    @property
    def UCRTIncludes(self):
        """
        Microsoft Universal C Runtime SDK Include.
        """
        if self.vc_ver < 14.0:
            return []

        include = os.path.join(self.si.UniversalCRTSdkDir, 'include')
        return [os.path.join(include, '%sucrt' % self._ucrt_subdir)]

    @property
    def _ucrt_subdir(self):
        """
        Microsoft Universal C Runtime SDK version subdir.
        """
        ucrtver = self.si.UniversalCRTSdkLastVersion
        return ('%s\\' % ucrtver) if ucrtver else ''

    @property
    def FSharp(self):
        """
        Microsoft Visual F#.
        """
        # F# tooling is only bundled with VC++ 11.0 and 12.0 (the condition
        # previously used "and", which could never be true).
        if self.vc_ver < 11.0 or self.vc_ver > 12.0:
            return []

        return self.si.FSharpInstallDir

    @property
    def VCRuntimeRedist(self):
        """
        Microsoft Visual C++ runtime redistributable DLL.
        """
        arch_subdir = self.pi.target_dir(x64=True)
        if self.vc_ver < 15:
            redist_path = self.si.VCInstallDir
            vcruntime = 'redist%s\\Microsoft.VC%d0.CRT\\vcruntime%d0.dll'
        else:
            redist_path = self.si.VCInstallDir.replace('\\Tools', '\\Redist')
            vcruntime = 'onecore%s\\Microsoft.VC%d0.CRT\\vcruntime%d0.dll'

        # Visual Studio 2017 is still Visual C++ 14.0
        dll_ver = 14.0 if self.vc_ver == 15 else self.vc_ver

        vcruntime = vcruntime % (arch_subdir, self.vc_ver, dll_ver)
        return os.path.join(redist_path, vcruntime)

    def return_env(self, exists=True):
        """
        Return environment dict.

        Parameters
        ----------
        exists: bool
            If True, only return existing paths.
        """
        env = dict(
            include=self._build_paths('include',
                                      [self.VCIncludes,
                                       self.OSIncludes,
                                       self.UCRTIncludes,
                                       self.NetFxSDKIncludes],
                                      exists),
            lib=self._build_paths('lib',
                                  [self.VCLibraries,
                                   self.OSLibraries,
                                   self.FxTools,
                                   self.UCRTLibraries,
                                   self.NetFxSDKLibraries],
                                  exists),
            libpath=self._build_paths('libpath',
                                      [self.VCLibraries,
                                       self.FxTools,
                                       self.VCStoreRefs,
                                       self.OSLibpath],
                                      exists),
            path=self._build_paths('path',
                                   [self.VCTools,
                                    self.VSTools,
                                    self.VsTDb,
                                    self.SdkTools,
                                    self.SdkSetup,
                                    self.FxTools,
                                    self.MSBuild,
                                    self.HTMLHelpWorkshop,
                                    self.FSharp],
                                   exists),
        )
        if self.vc_ver >= 14 and os.path.isfile(self.VCRuntimeRedist):
            env['py_vcruntime_redist'] = self.VCRuntimeRedist
        return env

    def _build_paths(self, name, spec_path_lists, exists):
        """
        Given an environment variable name and specified paths,
        return a pathsep-separated string of paths containing
        unique, extant, directories from those paths and from
        the environment variable. Raise an error if no paths
        are resolved.
        """
        # flatten spec_path_lists
        spec_paths = itertools.chain.from_iterable(spec_path_lists)
        env_paths = safe_env.get(name, '').split(os.pathsep)
        paths = itertools.chain(spec_paths, env_paths)
        extant_paths = list(filter(os.path.isdir, paths)) if exists else paths
        if not extant_paths:
            msg = "%s environment variable is empty" % name.upper()
            raise distutils.errors.DistutilsPlatformError(msg)
        unique_paths = self._unique_everseen(extant_paths)
        return os.pathsep.join(unique_paths)
    # from Python docs
    def _unique_everseen(self, iterable, key=None):
        """
        List unique elements, preserving order.
        Remember all elements ever seen.

        _unique_everseen('AAAABBBCCDAABBB') --> A B C D

        _unique_everseen('ABBCcAD', str.lower) --> A B C D
        """
        seen = set()
        seen_add = seen.add
        if key is None:
            for element in filterfalse(seen.__contains__, iterable):
                seen_add(element)
                yield element
        else:
            for element in iterable:
                k = key(element)
                if k not in seen:
                    seen_add(k)
                    yield element
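# Illustrative usage sketch (not from the original source; Windows only,
# hypothetical results): this mirrors what msvc14_get_vc_env does when
# vcvarsall.bat is unavailable:
#
#   env = EnvironmentInfo('x64', vc_min_ver=14.0).return_env()
#   env['include'], env['lib'], env['path']   # os.pathsep-joined strings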
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/namespaces.py ----
import os
from distutils import log
import itertools

from setuptools.extern.six.moves import map

flatten = itertools.chain.from_iterable


class Installer:

    nspkg_ext = '-nspkg.pth'

    def install_namespaces(self):
        nsp = self._get_all_ns_packages()
        if not nsp:
            return
        filename, ext = os.path.splitext(self._get_target())
        filename += self.nspkg_ext
        self.outputs.append(filename)
        log.info("Installing %s", filename)
        lines = map(self._gen_nspkg_line, nsp)

        if self.dry_run:
            # always generate the lines, even in dry run
            list(lines)
            return

        with open(filename, 'wt') as f:
            f.writelines(lines)

    def uninstall_namespaces(self):
        filename, ext = os.path.splitext(self._get_target())
        filename += self.nspkg_ext
        if not os.path.exists(filename):
            return
        log.info("Removing %s", filename)
        os.remove(filename)

    def _get_target(self):
        return self.target

    _nspkg_tmpl = (
        "import sys, types, os",
        "has_mfs = sys.version_info > (3, 5)",
        "p = os.path.join(%(root)s, *%(pth)r)",
        "importlib = has_mfs and __import__('importlib.util')",
        "has_mfs and __import__('importlib.machinery')",
        "m = has_mfs and "
        "sys.modules.setdefault(%(pkg)r, "
        "importlib.util.module_from_spec("
        "importlib.machinery.PathFinder.find_spec(%(pkg)r, "
        "[os.path.dirname(p)])))",
        "m = m or "
        "sys.modules.setdefault(%(pkg)r, types.ModuleType(%(pkg)r))",
        "mp = (m or []) and m.__dict__.setdefault('__path__',[])",
        "(p not in mp) and mp.append(p)",
    )
    "lines for the namespace installer"

    _nspkg_tmpl_multi = (
        'm and setattr(sys.modules[%(parent)r], %(child)r, m)',
    )
    "additional line(s) when a parent package is indicated"

    def _get_root(self):
        return "sys._getframe(1).f_locals['sitedir']"

    def _gen_nspkg_line(self, pkg):
        # ensure pkg is not a unicode string under Python 2.7
        pkg = str(pkg)
        pth = tuple(pkg.split('.'))
        root = self._get_root()
        tmpl_lines = self._nspkg_tmpl
        parent, sep, child = pkg.rpartition('.')
        if parent:
            tmpl_lines += self._nspkg_tmpl_multi
        return ';'.join(tmpl_lines) % locals() + '\n'

    def _get_all_ns_packages(self):
        """Return sorted list of all package namespaces"""
        pkgs = self.distribution.namespace_packages or []
        return sorted(flatten(map(self._pkg_names, pkgs)))

    @staticmethod
    def _pkg_names(pkg):
        """
        Given a namespace package, yield the components of that
        package.

        >>> names = Installer._pkg_names('a.b.c')
        >>> set(names) == set(['a', 'a.b', 'a.b.c'])
        True
        """
        parts = pkg.split('.')
        while parts:
            yield '.'.join(parts)
            parts.pop()


class DevelopInstaller(Installer):
    def _get_root(self):
        return repr(str(self.egg_path))

    def _get_target(self):
        return self.egg_link
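# Illustrative sketch (not from the original source; hypothetical package
# name): for a namespace package 'zope.interface', _get_all_ns_packages()
# yields ['zope', 'zope.interface']; each _gen_nspkg_line() then emits one
# semicolon-joined line of the -nspkg.pth file built from _nspkg_tmpl, and
# the _nspkg_tmpl_multi variant also binds sys.modules['zope'].interface.
if __name__ == '__main__':
    print(sorted(Installer._pkg_names('a.b.c')))   # ['a', 'a.b', 'a.b.c']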
# ---- cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/package_index.py ----
"""PyPI and direct package downloading"""
import sys
import os
import re
import shutil
import socket
import base64
import hashlib
import itertools
import warnings
from functools import wraps

from setuptools.extern import six
from setuptools.extern.six.moves import urllib, http_client, configparser, map

import setuptools
from pkg_resources import (
    CHECKOUT_DIST, Distribution, BINARY_DIST, normalize_path, SOURCE_DIST,
    Environment, find_distributions, safe_name, safe_version,
    to_filename, Requirement, DEVELOP_DIST, EGG_DIST,
)
from setuptools import ssl_support
from distutils import log
from distutils.errors import DistutilsError
from fnmatch import translate
from setuptools.py27compat import get_all_headers
from setuptools.py33compat import unescape
from setuptools.wheel import Wheel

__metaclass__ = type

EGG_FRAGMENT = re.compile(r'^egg=([-A-Za-z0-9_.+!]+)$')
HREF = re.compile(r"""href\s*=\s*['"]?([^'"> ]+)""", re.I)
PYPI_MD5 = re.compile(
    r'<a href="([^"#]+)">([^<]+)</a>\n\s+\(<a (?:title="MD5 hash"\n\s+)'
    r'href="[^?]+\?:action=show_md5&digest=([0-9a-f]{32})">md5</a>\)'
)
URL_SCHEME = re.compile('([-+.a-z0-9]{2,}):', re.I).match
EXTENSIONS = ".tar.gz .tar.bz2 .tar .zip .tgz".split()

__all__ = [
    'PackageIndex', 'distros_for_url', 'parse_bdist_wininst',
    'interpret_distro_name',
]

_SOCKET_TIMEOUT = 15

_tmpl = "setuptools/{setuptools.__version__} Python-urllib/{py_major}"
user_agent = _tmpl.format(py_major=sys.version[:3], setuptools=setuptools)


def parse_requirement_arg(spec):
    try:
        return Requirement.parse(spec)
    except ValueError:
        raise DistutilsError(
            "Not a URL, existing file, or requirement spec: %r" % (spec,)
        )


def parse_bdist_wininst(name):
    """Return (base, py_ver, plat) or (None, None, None) for possible
    .exe name"""

    lower = name.lower()
    base, py_ver, plat = None, None, None

    if lower.endswith('.exe'):
        if lower.endswith('.win32.exe'):
            base = name[:-10]
            plat = 'win32'
        elif lower.startswith('.win32-py', -16):
            py_ver = name[-7:-4]
            base = name[:-16]
            plat = 'win32'
        elif lower.endswith('.win-amd64.exe'):
            base = name[:-14]
            plat = 'win-amd64'
        elif lower.startswith('.win-amd64-py', -20):
            py_ver = name[-7:-4]
            base = name[:-20]
            plat = 'win-amd64'
    return base, py_ver, plat


def egg_info_for_url(url):
    parts = urllib.parse.urlparse(url)
    scheme, server, path, parameters, query, fragment = parts
    base = urllib.parse.unquote(path.split('/')[-1])
    if server == 'sourceforge.net' and base == 'download':  # XXX Yuck
        base = urllib.parse.unquote(path.split('/')[-2])
    if '#' in base:
        base, fragment = base.split('#', 1)
    return base, fragment


def distros_for_url(url, metadata=None):
    """Yield egg or source distribution objects that might be found at a URL"""
    base, fragment = egg_info_for_url(url)
    for dist in distros_for_location(url, base, metadata):
        yield dist
    if fragment:
        match = EGG_FRAGMENT.match(fragment)
        if match:
            for dist in interpret_distro_name(
                url, match.group(1), metadata, precedence=CHECKOUT_DIST
            ):
                yield dist
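# Illustrative sketch (not from the original source; hypothetical
# filenames): how parse_bdist_wininst above splits installer names:
#
#   parse_bdist_wininst('foo-1.0.win32.exe')
#   # -> ('foo-1.0', None, 'win32')
#   parse_bdist_wininst('foo-1.0.win-amd64-py2.7.exe')
#   # -> ('foo-1.0', '2.7', 'win-amd64')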
def distros_for_location(location, basename, metadata=None):
    """Yield egg or source distribution objects based on basename"""
    if basename.endswith('.egg.zip'):
        basename = basename[:-4]  # strip the .zip
    if basename.endswith('.egg') and '-' in basename:
        # only one, unambiguous interpretation
        return [Distribution.from_location(location, basename, metadata)]
    if basename.endswith('.whl') and '-' in basename:
        wheel = Wheel(basename)
        if not wheel.is_compatible():
            return []
        return [Distribution(
            location=location,
            project_name=wheel.project_name,
            version=wheel.version,
            # Increase priority over eggs.
            precedence=EGG_DIST + 1,
        )]
    if basename.endswith('.exe'):
        win_base, py_ver, platform = parse_bdist_wininst(basename)
        if win_base is not None:
            return interpret_distro_name(
                location, win_base, metadata, py_ver, BINARY_DIST, platform
            )
    # Try source distro extensions (.zip, .tgz, etc.)
    #
    for ext in EXTENSIONS:
        if basename.endswith(ext):
            basename = basename[:-len(ext)]
            return interpret_distro_name(location, basename, metadata)
    return []  # no extension matched


def distros_for_filename(filename, metadata=None):
    """Yield possible egg or source distribution objects based on a filename"""
    return distros_for_location(
        normalize_path(filename), os.path.basename(filename), metadata
    )


def interpret_distro_name(
        location, basename, metadata, py_version=None, precedence=SOURCE_DIST,
        platform=None
):
    """Generate alternative interpretations of a source distro name

    Note: if `location` is a filesystem filename, you should call
    ``pkg_resources.normalize_path()`` on it before passing it to this
    routine!
    """
    # Generate alternative interpretations of a source distro name
    # Because some packages are ambiguous as to name/versions split
    # e.g. "adns-python-1.1.0", "egenix-mx-commercial", etc.
    # So, we generate each possible interpretation (e.g. "adns, python-1.1.0"
    # "adns-python, 1.1.0", and "adns-python-1.1.0, no version"). In practice,
    # the spurious interpretations should be ignored, because in the event
    # there's also an "adns" package, the spurious "python-1.1.0" version will
    # compare lower than any numeric version number, and is therefore unlikely
    # to match a request for it. It's still a potential problem, though, and
    # in the long run PyPI and the distutils should go for "safe" names and
    # versions in distribution archive names (sdist and bdist).

    parts = basename.split('-')
    if not py_version and any(re.match(r'py\d\.\d$', p) for p in parts[2:]):
        # it is a bdist_dumb, not an sdist -- bail out
        return

    for p in range(1, len(parts) + 1):
        yield Distribution(
            location, metadata, '-'.join(parts[:p]), '-'.join(parts[p:]),
            py_version=py_version, precedence=precedence,
            platform=platform
        )


# From Python 2.7 docs
def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in six.moves.filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element


def unique_values(func):
    """
    Wrap a function returning an iterable such that the resulting iterable
    only ever yields unique items.
    """

    @wraps(func)
    def wrapper(*args, **kwargs):
        return unique_everseen(func(*args, **kwargs))

    return wrapper


REL = re.compile(r"""<([^>]*\srel\s*=\s*['"]?([^'">]+)[^>]*)>""", re.I)
# this line is here to fix emacs' cruddy broken syntax highlighting


@unique_values
def find_external_links(url, page):
    """Find rel="homepage" and rel="download" links in `page`, yielding URLs"""

    for match in REL.finditer(page):
        tag, rel = match.groups()
        rels = set(map(str.strip, rel.lower().split(',')))
        if 'homepage' in rels or 'download' in rels:
            for match in HREF.finditer(tag):
                yield urllib.parse.urljoin(url, htmldecode(match.group(1)))

    for tag in ("<th>Home Page", "<th>Download URL"):
        pos = page.find(tag)
        if pos != -1:
            match = HREF.search(page, pos)
            if match:
                yield urllib.parse.urljoin(url, htmldecode(match.group(1)))
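# Illustrative sketch (not from the original source; hypothetical HTML):
# given a page containing
#   <a href="/downloads/pkg.tar.gz" rel="download">source</a>
# find_external_links(base_url, page) yields the absolute download URL;
# duplicates are suppressed by the unique_values decorator above.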
""" return True def report(self, reporter, template): """ Call reporter with information about the checker (hash name) substituted into the template. """ return class HashChecker(ContentChecker): pattern = re.compile( r'(?P<hash_name>sha1|sha224|sha384|sha256|sha512|md5)=' r'(?P<expected>[a-f0-9]+)' ) def __init__(self, hash_name, expected): self.hash_name = hash_name self.hash = hashlib.new(hash_name) self.expected = expected @classmethod def from_url(cls, url): "Construct a (possibly null) ContentChecker from a URL" fragment = urllib.parse.urlparse(url)[-1] if not fragment: return ContentChecker() match = cls.pattern.search(fragment) if not match: return ContentChecker() return cls(**match.groupdict()) def feed(self, block): self.hash.update(block) def is_valid(self): return self.hash.hexdigest() == self.expected def report(self, reporter, template): msg = template % self.hash_name return reporter(msg) class PackageIndex(Environment): """A distribution index that scans web pages for download URLs""" def __init__( self, index_url="https://pypi.org/simple/", hosts=('*',), ca_bundle=None, verify_ssl=True, *args, **kw ): Environment.__init__(self, *args, **kw) self.index_url = index_url + "/" [:not index_url.endswith('/')] self.scanned_urls = {} self.fetched_urls = {} self.package_pages = {} self.allows = re.compile('|'.join(map(translate, hosts))).match self.to_scan = [] use_ssl = ( verify_ssl and ssl_support.is_available and (ca_bundle or ssl_support.find_ca_bundle()) ) if use_ssl: self.opener = ssl_support.opener_for(ca_bundle) else: self.opener = urllib.request.urlopen def process_url(self, url, retrieve=False): """Evaluate a URL as a possible download, and maybe retrieve it""" if url in self.scanned_urls and not retrieve: return self.scanned_urls[url] = True if not URL_SCHEME(url): self.process_filename(url) return else: dists = list(distros_for_url(url)) if dists: if not self.url_ok(url): return self.debug("Found link: %s", url) if dists or not retrieve or url in self.fetched_urls: list(map(self.add, dists)) return # don't need the actual page if not self.url_ok(url): self.fetched_urls[url] = True return self.info("Reading %s", url) self.fetched_urls[url] = True # prevent multiple fetch attempts tmpl = "Download error on %s: %%s -- Some packages may not be found!" f = self.open_url(url, tmpl % url) if f is None: return self.fetched_urls[f.url] = True if 'html' not in f.headers.get('content-type', '').lower(): f.close() # not html, we can't process it return base = f.url # handle redirects page = f.read() if not isinstance(page, str): # In Python 3 and got bytes but want str. 
if isinstance(f, urllib.error.HTTPError): # Errors have no charset, assume latin1: charset = 'latin-1' else: charset = f.headers.get_param('charset') or 'latin-1' page = page.decode(charset, "ignore") f.close() for match in HREF.finditer(page): link = urllib.parse.urljoin(base, htmldecode(match.group(1))) self.process_url(link) if url.startswith(self.index_url) and getattr(f, 'code', None) != 404: page = self.process_index(url, page) def process_filename(self, fn, nested=False): # process filenames or directories if not os.path.exists(fn): self.warn("Not found: %s", fn) return if os.path.isdir(fn) and not nested: path = os.path.realpath(fn) for item in os.listdir(path): self.process_filename(os.path.join(path, item), True) dists = distros_for_filename(fn) if dists: self.debug("Found: %s", fn) list(map(self.add, dists)) def url_ok(self, url, fatal=False): s = URL_SCHEME(url) is_file = s and s.group(1).lower() == 'file' if is_file or self.allows(urllib.parse.urlparse(url)[1]): return True msg = ( "\nNote: Bypassing %s (disallowed host; see " "http://bit.ly/2hrImnY for details).\n") if fatal: raise DistutilsError(msg % url) else: self.warn(msg, url) def scan_egg_links(self, search_path): dirs = filter(os.path.isdir, search_path) egg_links = ( (path, entry) for path in dirs for entry in os.listdir(path) if entry.endswith('.egg-link') ) list(itertools.starmap(self.scan_egg_link, egg_links)) def scan_egg_link(self, path, entry): with open(os.path.join(path, entry)) as raw_lines: # filter non-empty lines lines = list(filter(None, map(str.strip, raw_lines))) if len(lines) != 2: # format is not recognized; punt return egg_path, setup_path = lines for dist in find_distributions(os.path.join(path, egg_path)): dist.location = os.path.join(path, *lines) dist.precedence = SOURCE_DIST self.add(dist) def process_index(self, url, page): """Process the contents of a PyPI page""" def scan(link): # Process a URL to see if it's for a package page if link.startswith(self.index_url): parts = list(map( urllib.parse.unquote, link[len(self.index_url):].split('/') )) if len(parts) == 2 and '#' not in parts[1]: # it's a package page, sanitize and index it pkg = safe_name(parts[0]) ver = safe_version(parts[1]) self.package_pages.setdefault(pkg.lower(), {})[link] = True return to_filename(pkg), to_filename(ver) return None, None # process an index page into the package-page index for match in HREF.finditer(page): try: scan(urllib.parse.urljoin(url, htmldecode(match.group(1)))) except ValueError: pass pkg, ver = scan(url) # ensure this page is in the page index if pkg: # process individual package page for new_url in find_external_links(url, page): # Process the found URL base, frag = egg_info_for_url(new_url) if base.endswith('.py') and not frag: if ver: new_url += '#egg=%s-%s' % (pkg, ver) else: self.need_version_info(url) self.scan_url(new_url) return PYPI_MD5.sub( lambda m: '<a href="%s#md5=%s">%s</a>' % m.group(1, 3, 2), page ) else: return "" # no sense double-scanning non-package pages def need_version_info(self, url): self.scan_all( "Page at %s links to .py file(s) without version info; an index " "scan is required.", url ) def scan_all(self, msg=None, *args): if self.index_url not in self.fetched_urls: if msg: self.warn(msg, *args) self.info( "Scanning index of all packages (this may take a while)" ) self.scan_url(self.index_url) def find_packages(self, requirement): self.scan_url(self.index_url + requirement.unsafe_name + '/') if not self.package_pages.get(requirement.key): # Fall back to safe version of the 
name self.scan_url(self.index_url + requirement.project_name + '/') if not self.package_pages.get(requirement.key): # We couldn't find the target package, so search the index page too self.not_found_in_index(requirement) for url in list(self.package_pages.get(requirement.key, ())): # scan each page that might be related to the desired package self.scan_url(url) def obtain(self, requirement, installer=None): self.prescan() self.find_packages(requirement) for dist in self[requirement.key]: if dist in requirement: return dist self.debug("%s does not match %s", requirement, dist) return super(PackageIndex, self).obtain(requirement, installer) def check_hash(self, checker, filename, tfp): """ checker is a ContentChecker """ checker.report( self.debug, "Validating %%s checksum for %s" % filename) if not checker.is_valid(): tfp.close() os.unlink(filename) raise DistutilsError( "%s validation failed for %s; " "possible download problem?" % (checker.hash.name, os.path.basename(filename)) ) def add_find_links(self, urls): """Add `urls` to the list that will be prescanned for searches""" for url in urls: if ( self.to_scan is None # if we have already "gone online" or not URL_SCHEME(url) # or it's a local file/directory or url.startswith('file:') or list(distros_for_url(url)) # or a direct package link ): # then go ahead and process it now self.scan_url(url) else: # otherwise, defer retrieval till later self.to_scan.append(url) def prescan(self): """Scan urls scheduled for prescanning (e.g. --find-links)""" if self.to_scan: list(map(self.scan_url, self.to_scan)) self.to_scan = None # from now on, go ahead and process immediately def not_found_in_index(self, requirement): if self[requirement.key]: # we've seen at least one distro meth, msg = self.info, "Couldn't retrieve index page for %r" else: # no distros seen for this name, might be misspelled meth, msg = ( self.warn, "Couldn't find index page for %r (maybe misspelled?)") meth(msg, requirement.unsafe_name) self.scan_all() def download(self, spec, tmpdir): """Locate and/or download `spec` to `tmpdir`, returning a local path `spec` may be a ``Requirement`` object, or a string containing a URL, an existing local filename, or a project/version requirement spec (i.e. the string form of a ``Requirement`` object). If it is the URL of a .py file with an unambiguous ``#egg=name-version`` tag (i.e., one that escapes ``-`` as ``_`` throughout), a trivial ``setup.py`` is automatically created alongside the downloaded file. If `spec` is a ``Requirement`` object or a string containing a project/version requirement spec, this method returns the location of a matching distribution (possibly after downloading it to `tmpdir`). If `spec` is a locally existing file or directory name, it is simply returned unchanged. If `spec` is a URL, it is downloaded to a subpath of `tmpdir`, and the local filename is returned. Various errors may be raised if a problem occurs during downloading. 
""" if not isinstance(spec, Requirement): scheme = URL_SCHEME(spec) if scheme: # It's a url, download it to tmpdir found = self._download_url(scheme.group(1), spec, tmpdir) base, fragment = egg_info_for_url(spec) if base.endswith('.py'): found = self.gen_setup(found, fragment, tmpdir) return found elif os.path.exists(spec): # Existing file or directory, just return it return spec else: spec = parse_requirement_arg(spec) return getattr(self.fetch_distribution(spec, tmpdir), 'location', None) def fetch_distribution( self, requirement, tmpdir, force_scan=False, source=False, develop_ok=False, local_index=None): """Obtain a distribution suitable for fulfilling `requirement` `requirement` must be a ``pkg_resources.Requirement`` instance. If necessary, or if the `force_scan` flag is set, the requirement is searched for in the (online) package index as well as the locally installed packages. If a distribution matching `requirement` is found, the returned distribution's ``location`` is the value you would have gotten from calling the ``download()`` method with the matching distribution's URL or filename. If no matching distribution is found, ``None`` is returned. If the `source` flag is set, only source distributions and source checkout links will be considered. Unless the `develop_ok` flag is set, development and system eggs (i.e., those using the ``.egg-info`` format) will be ignored. """ # process a Requirement self.info("Searching for %s", requirement) skipped = {} dist = None def find(req, env=None): if env is None: env = self # Find a matching distribution; may be called more than once for dist in env[req.key]: if dist.precedence == DEVELOP_DIST and not develop_ok: if dist not in skipped: self.warn( "Skipping development or system egg: %s", dist, ) skipped[dist] = 1 continue test = ( dist in req and (dist.precedence <= SOURCE_DIST or not source) ) if test: loc = self.download(dist.location, tmpdir) dist.download_location = loc if os.path.exists(dist.download_location): return dist if force_scan: self.prescan() self.find_packages(requirement) dist = find(requirement) if not dist and local_index is not None: dist = find(requirement, local_index) if dist is None: if self.to_scan is not None: self.prescan() dist = find(requirement) if dist is None and not force_scan: self.find_packages(requirement) dist = find(requirement) if dist is None: self.warn( "No local packages or working download links found for %s%s", (source and "a source distribution of " or ""), requirement, ) else: self.info("Best match: %s", dist) return dist.clone(location=dist.download_location) def fetch(self, requirement, tmpdir, force_scan=False, source=False): """Obtain a file suitable for fulfilling `requirement` DEPRECATED; use the ``fetch_distribution()`` method now instead. For backward compatibility, this routine is identical but returns the ``location`` of the downloaded distribution instead of a distribution object. """ dist = self.fetch_distribution(requirement, tmpdir, force_scan, source) if dist is not None: return dist.location return None def gen_setup(self, filename, fragment, tmpdir): match = EGG_FRAGMENT.match(fragment) dists = match and [ d for d in interpret_distro_name(filename, match.group(1), None) if d.version ] or [] if len(dists) == 1: # unambiguous ``#egg`` fragment basename = os.path.basename(filename) # Make sure the file has been downloaded to the temp dir. 
if os.path.dirname(filename) != tmpdir: dst = os.path.join(tmpdir, basename) from setuptools.command.easy_install import samefile if not samefile(filename, dst): shutil.copy2(filename, dst) filename = dst with open(os.path.join(tmpdir, 'setup.py'), 'w') as file: file.write( "from setuptools import setup\n" "setup(name=%r, version=%r, py_modules=[%r])\n" % ( dists[0].project_name, dists[0].version, os.path.splitext(basename)[0] ) ) return filename elif match: raise DistutilsError( "Can't unambiguously interpret project/version identifier %r; " "any dashes in the name or version should be escaped using " "underscores. %r" % (fragment, dists) ) else: raise DistutilsError( "Can't process plain .py files without an '#egg=name-version'" " suffix to enable automatic setup script generation." ) dl_blocksize = 8192 def _download_to(self, url, filename): self.info("Downloading %s", url) # Download the file fp = None try: checker = HashChecker.from_url(url) fp = self.open_url(url) if isinstance(fp, urllib.error.HTTPError): raise DistutilsError( "Can't download %s: %s %s" % (url, fp.code, fp.msg) ) headers = fp.info() blocknum = 0 bs = self.dl_blocksize size = -1 if "content-length" in headers: # Some servers return multiple Content-Length headers :( sizes = get_all_headers(headers, 'Content-Length') size = max(map(int, sizes)) self.reporthook(url, filename, blocknum, bs, size) with open(filename, 'wb') as tfp: while True: block = fp.read(bs) if block: checker.feed(block) tfp.write(block) blocknum += 1 self.reporthook(url, filename, blocknum, bs, size) else: break self.check_hash(checker, filename, tfp) return headers finally: if fp: fp.close() def reporthook(self, url, filename, blocknum, blksize, size): pass # no-op def open_url(self, url, warning=None): if url.startswith('file:'): return local_open(url) try: return open_with_auth(url, self.opener) except (ValueError, http_client.InvalidURL) as v: msg = ' '.join([str(arg) for arg in v.args]) if warning: self.warn(warning, msg) else: raise DistutilsError('%s %s' % (url, msg)) except urllib.error.HTTPError as v: return v except urllib.error.URLError as v: if warning: self.warn(warning, v.reason) else: raise DistutilsError("Download error for %s: %s" % (url, v.reason)) except http_client.BadStatusLine as v: if warning: self.warn(warning, v.line) else: raise DistutilsError( '%s returned a bad status line. The server might be ' 'down, %s' % (url, v.line) ) except (http_client.HTTPException, socket.error) as v: if warning: self.warn(warning, v) else: raise DistutilsError("Download error for %s: %s" % (url, v)) def _download_url(self, scheme, url, tmpdir): # Determine download filename # name, fragment = egg_info_for_url(url) if name: while '..' 
in name: name = name.replace('..', '.').replace('\\', '_') else: name = "__downloaded__" # default if URL has no path contents if name.endswith('.egg.zip'): name = name[:-4] # strip the extra .zip before download filename = os.path.join(tmpdir, name) # Download the file # if scheme == 'svn' or scheme.startswith('svn+'): return self._download_svn(url, filename) elif scheme == 'git' or scheme.startswith('git+'): return self._download_git(url, filename) elif scheme.startswith('hg+'): return self._download_hg(url, filename) elif scheme == 'file': return urllib.request.url2pathname(urllib.parse.urlparse(url)[2]) else: self.url_ok(url, True) # raises error if not allowed return self._attempt_download(url, filename) def scan_url(self, url): self.process_url(url, True) def _attempt_download(self, url, filename): headers = self._download_to(url, filename) if 'html' in headers.get('content-type', '').lower(): return self._download_html(url, headers, filename) else: return filename def _download_html(self, url, headers, filename): file = open(filename) for line in file: if line.strip(): # Check for a subversion index page if re.search(r'<title>([^- ]+ - )?Revision \d+:', line): # it's a subversion index page: file.close() os.unlink(filename) return self._download_svn(url, filename) break # not an index page file.close() os.unlink(filename) raise DistutilsError("Unexpected HTML page found at " + url) def _download_svn(self, url, filename): warnings.warn("SVN download support is deprecated", UserWarning) url = url.split('#', 1)[0] # remove any fragment for svn's sake creds = '' if url.lower().startswith('svn:') and '@' in url: scheme, netloc, path, p, q, f = urllib.parse.urlparse(url) if not netloc and path.startswith('//') and '/' in path[2:]: netloc, path = path[2:].split('/', 1) auth, host = _splituser(netloc) if auth: if ':' in auth: user, pw = auth.split(':', 1) creds = " --username=%s --password=%s" % (user, pw) else: creds = " --username=" + auth netloc = host parts = scheme, netloc, url, p, q, f url = urllib.parse.urlunparse(parts) self.info("Doing subversion checkout from %s to %s", url, filename) os.system("svn checkout%s -q %s %s" % (creds, url, filename)) return filename @staticmethod def _vcs_split_rev_from_url(url, pop_prefix=False): scheme, netloc, path, query, frag = urllib.parse.urlsplit(url) scheme = scheme.split('+', 1)[-1] # Some fragment identification fails path = path.split('#', 1)[0] rev = None if '@' in path: path, rev = path.rsplit('@', 1) # Also, discard fragment url = urllib.parse.urlunsplit((scheme, netloc, path, query, '')) return url, rev def _download_git(self, url, filename): filename = filename.split('#', 1)[0] url, rev = self._vcs_split_rev_from_url(url, pop_prefix=True) self.info("Doing git clone from %s to %s", url, filename) os.system("git clone --quiet %s %s" % (url, filename)) if rev is not None: self.info("Checking out %s", rev) os.system("git -C %s checkout --quiet %s" % ( filename, rev, )) return filename def _download_hg(self, url, filename): filename = filename.split('#', 1)[0] url, rev = self._vcs_split_rev_from_url(url, pop_prefix=True) self.info("Doing hg clone from %s to %s", url, filename) os.system("hg clone --quiet %s %s" % (url, filename)) if rev is not None: self.info("Updating to %s", rev) os.system("hg --cwd %s up -C -r %s -q" % ( filename, rev, )) return filename def debug(self, msg, *args): log.debug(msg, *args) def info(self, msg, *args): log.info(msg, *args) def warn(self, msg, *args): log.warn(msg, *args) # This pattern matches a character 
entity reference (a decimal numeric # references, a hexadecimal numeric reference, or a named reference). entity_sub = re.compile(r'&(#(\d+|x[\da-fA-F]+)|[\w.:-]+);?').sub def decode_entity(match): what = match.group(0) return unescape(what) def htmldecode(text): """ Decode HTML entities in the given text. >>> htmldecode( ... 'https://../package_name-0.1.2.tar.gz' ... '?tokena=A&tokenb=B">package_name-0.1.2.tar.gz') 'https://../package_name-0.1.2.tar.gz?tokena=A&tokenb=B">package_name-0.1.2.tar.gz' """ return entity_sub(decode_entity, text) def socket_timeout(timeout=15): def _socket_timeout(func): def _socket_timeout(*args, **kwargs): old_timeout = socket.getdefaulttimeout() socket.setdefaulttimeout(timeout) try: return func(*args, **kwargs) finally: socket.setdefaulttimeout(old_timeout) return _socket_timeout return _socket_timeout def _encode_auth(auth): """ A function compatible with Python 2.3-3.3 that will encode auth from a URL suitable for an HTTP header. >>> str(_encode_auth('username%3Apassword')) 'dXNlcm5hbWU6cGFzc3dvcmQ=' Long auth strings should not cause a newline to be inserted. >>> long_auth = 'username:' + 'password'*10 >>> chr(10) in str(_encode_auth(long_auth)) False """ auth_s = urllib.parse.unquote(auth) # convert to bytes auth_bytes = auth_s.encode() encoded_bytes = base64.b64encode(auth_bytes) # convert back to a string encoded = encoded_bytes.decode() # strip the trailing carriage return return encoded.replace('\n', '') class Credential: """ A username/password pair. Use like a namedtuple. """ def __init__(self, username, password): self.username = username self.password = password def __iter__(self): yield self.username yield self.password def __str__(self): return '%(username)s:%(password)s' % vars(self) class PyPIConfig(configparser.RawConfigParser): def __init__(self): """ Load from ~/.pypirc """ defaults = dict.fromkeys(['username', 'password', 'repository'], '') configparser.RawConfigParser.__init__(self, defaults) rc = os.path.join(os.path.expanduser('~'), '.pypirc') if os.path.exists(rc): self.read(rc) @property def creds_by_repository(self): sections_with_repositories = [ section for section in self.sections() if self.get(section, 'repository').strip() ] return dict(map(self._get_repo_cred, sections_with_repositories)) def _get_repo_cred(self, section): repo = self.get(section, 'repository').strip() return repo, Credential( self.get(section, 'username').strip(), self.get(section, 'password').strip(), ) def find_credential(self, url): """ If the URL indicated appears to be a repository defined in this config, return the credential for that repository. """ for repository, cred in self.creds_by_repository.items(): if url.startswith(repository): return cred def open_with_auth(url, opener=urllib.request.urlopen): """Open a urllib2 request, handling HTTP authentication""" parsed = urllib.parse.urlparse(url) scheme, netloc, path, params, query, frag = parsed # Double scheme does not raise on Mac OS X as revealed by a # failing test. We would expect "nonnumeric port". Refs #20. 
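# --- Small usage sketch (values are hypothetical): Credential iterates like
# a namedtuple and renders as 'user:password', the form open_with_auth()
# below feeds to _encode_auth().
from setuptools.package_index import Credential

cred = Credential('alice', 's3cret')
username, password = cred  # tuple-style unpacking via __iter__
print(str(cred))           # 'alice:s3cret'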
    if netloc.endswith(':'):
        raise http_client.InvalidURL("nonnumeric port: ''")

    if scheme in ('http', 'https'):
        auth, address = _splituser(netloc)
    else:
        auth = None

    if not auth:
        cred = PyPIConfig().find_credential(url)
        if cred:
            auth = str(cred)
            info = cred.username, url
            log.info('Authenticating as %s for %s (from .pypirc)', *info)

    if auth:
        auth = "Basic " + _encode_auth(auth)
        parts = scheme, address, path, params, query, frag
        new_url = urllib.parse.urlunparse(parts)
        request = urllib.request.Request(new_url)
        request.add_header("Authorization", auth)
    else:
        request = urllib.request.Request(url)

    request.add_header('User-Agent', user_agent)
    fp = opener(request)

    if auth:
        # Put authentication info back into request URL if same host,
        # so that links found on the page will work
        s2, h2, path2, param2, query2, frag2 = urllib.parse.urlparse(fp.url)
        if s2 == scheme and h2 == address:
            parts = s2, netloc, path2, param2, query2, frag2
            fp.url = urllib.parse.urlunparse(parts)

    return fp


# copy of urllib.parse._splituser from Python 3.8
def _splituser(host):
    """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'."""
    user, delim, host = host.rpartition('@')
    return (user if delim else None), host


# adding a timeout to avoid freezing package_index
open_with_auth = socket_timeout(_SOCKET_TIMEOUT)(open_with_auth)


def fix_sf_url(url):
    return url  # backward compatibility


def local_open(url):
    """Read a local path, with special support for directories"""
    scheme, server, path, param, query, frag = urllib.parse.urlparse(url)
    filename = urllib.request.url2pathname(path)
    if os.path.isfile(filename):
        return urllib.request.urlopen(url)
    elif path.endswith('/') and os.path.isdir(filename):
        files = []
        for f in os.listdir(filename):
            filepath = os.path.join(filename, f)
            if f == 'index.html':
                with open(filepath, 'r') as fp:
                    body = fp.read()
                break
            elif os.path.isdir(filepath):
                f += '/'
            files.append('<a href="{name}">{name}</a>'.format(name=f))
        else:
            tmpl = (
                "<html><head><title>{url}</title>"
                "</head><body>{files}</body></html>")
            body = tmpl.format(url=url, files='\n'.join(files))
        status, message = 200, "OK"
    else:
        status, message, body = 404, "Path not found", "Not found"

    headers = {'content-type': 'text/html'}
    body_stream = six.StringIO(body)
    return urllib.error.HTTPError(url, status, message, headers, body_stream)

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/pep425tags.py

# This file originally from pip:
# https://github.com/pypa/pip/blob/8f4f15a5a95d7d5b511ceaee9ed261176c181970/src/pip/_internal/pep425tags.py
"""Generate and work with PEP 425 Compatibility Tags."""
from __future__ import absolute_import

import distutils.util
from distutils import log
import platform
import re
import sys
import sysconfig
import warnings
from collections import OrderedDict

from .extern import six

from .
import glibc _osx_arch_pat = re.compile(r'(.+)_(\d+)_(\d+)_(.+)') def get_config_var(var): try: return sysconfig.get_config_var(var) except IOError as e: # Issue #1074 warnings.warn("{}".format(e), RuntimeWarning) return None def get_abbr_impl(): """Return abbreviated implementation name.""" if hasattr(sys, 'pypy_version_info'): pyimpl = 'pp' elif sys.platform.startswith('java'): pyimpl = 'jy' elif sys.platform == 'cli': pyimpl = 'ip' else: pyimpl = 'cp' return pyimpl def get_impl_ver(): """Return implementation version.""" impl_ver = get_config_var("py_version_nodot") if not impl_ver or get_abbr_impl() == 'pp': impl_ver = ''.join(map(str, get_impl_version_info())) return impl_ver def get_impl_version_info(): """Return sys.version_info-like tuple for use in decrementing the minor version.""" if get_abbr_impl() == 'pp': # as per https://github.com/pypa/pip/issues/2882 return (sys.version_info[0], sys.pypy_version_info.major, sys.pypy_version_info.minor) else: return sys.version_info[0], sys.version_info[1] def get_impl_tag(): """ Returns the Tag for this specific implementation. """ return "{}{}".format(get_abbr_impl(), get_impl_ver()) def get_flag(var, fallback, expected=True, warn=True): """Use a fallback method for determining SOABI flags if the needed config var is unset or unavailable.""" val = get_config_var(var) if val is None: if warn: log.debug("Config variable '%s' is unset, Python ABI tag may " "be incorrect", var) return fallback() return val == expected def get_abi_tag(): """Return the ABI tag based on SOABI (if available) or emulate SOABI (CPython 2, PyPy).""" soabi = get_config_var('SOABI') impl = get_abbr_impl() if not soabi and impl in {'cp', 'pp'} and hasattr(sys, 'maxunicode'): d = '' m = '' u = '' if get_flag('Py_DEBUG', lambda: hasattr(sys, 'gettotalrefcount'), warn=(impl == 'cp')): d = 'd' if get_flag('WITH_PYMALLOC', lambda: impl == 'cp', warn=(impl == 'cp')): m = 'm' if get_flag('Py_UNICODE_SIZE', lambda: sys.maxunicode == 0x10ffff, expected=4, warn=(impl == 'cp' and six.PY2)) \ and six.PY2: u = 'u' abi = '%s%s%s%s%s' % (impl, get_impl_ver(), d, m, u) elif soabi and soabi.startswith('cpython-'): abi = 'cp' + soabi.split('-')[1] elif soabi: abi = soabi.replace('.', '_').replace('-', '_') else: abi = None return abi def _is_running_32bit(): return sys.maxsize == 2147483647 def get_platform(): """Return our platform name 'win32', 'linux_x86_64'""" if sys.platform == 'darwin': # distutils.util.get_platform() returns the release based on the value # of MACOSX_DEPLOYMENT_TARGET on which Python was built, which may # be significantly older than the user's current machine. release, _, machine = platform.mac_ver() split_ver = release.split('.') if machine == "x86_64" and _is_running_32bit(): machine = "i386" elif machine == "ppc64" and _is_running_32bit(): machine = "ppc" return 'macosx_{}_{}_{}'.format(split_ver[0], split_ver[1], machine) # XXX remove distutils dependency result = distutils.util.get_platform().replace('.', '_').replace('-', '_') if result == "linux_x86_64" and _is_running_32bit(): # 32 bit Python program (running on a 64 bit Linux): pip should only # install and run 32 bit compiled extensions in that case. 
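# --- Illustrative probe of the tag helpers above (assumes this module is
# importable as setuptools.pep425tags); values shown are for a typical
# CPython 3.7 interpreter and will differ elsewhere.
from setuptools import pep425tags

print(pep425tags.get_abbr_impl())  # 'cp' on CPython
print(pep425tags.get_impl_ver())   # e.g. '37'
print(pep425tags.get_impl_tag())   # e.g. 'cp37'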
result = "linux_i686" return result def is_manylinux1_compatible(): # Only Linux, and only x86-64 / i686 if get_platform() not in {"linux_x86_64", "linux_i686"}: return False # Check for presence of _manylinux module try: import _manylinux return bool(_manylinux.manylinux1_compatible) except (ImportError, AttributeError): # Fall through to heuristic check below pass # Check glibc version. CentOS 5 uses glibc 2.5. return glibc.have_compatible_glibc(2, 5) def get_darwin_arches(major, minor, machine): """Return a list of supported arches (including group arches) for the given major, minor and machine architecture of a macOS machine. """ arches = [] def _supports_arch(major, minor, arch): # Looking at the application support for macOS versions in the chart # provided by https://en.wikipedia.org/wiki/OS_X#Versions it appears # our timeline looks roughly like: # # 10.0 - Introduces ppc support. # 10.4 - Introduces ppc64, i386, and x86_64 support, however the ppc64 # and x86_64 support is CLI only, and cannot be used for GUI # applications. # 10.5 - Extends ppc64 and x86_64 support to cover GUI applications. # 10.6 - Drops support for ppc64 # 10.7 - Drops support for ppc # # Given that we do not know if we're installing a CLI or a GUI # application, we must be conservative and assume it might be a GUI # application and behave as if ppc64 and x86_64 support did not occur # until 10.5. # # Note: The above information is taken from the "Application support" # column in the chart not the "Processor support" since I believe # that we care about what instruction sets an application can use # not which processors the OS supports. if arch == 'ppc': return (major, minor) <= (10, 5) if arch == 'ppc64': return (major, minor) == (10, 5) if arch == 'i386': return (major, minor) >= (10, 4) if arch == 'x86_64': return (major, minor) >= (10, 5) if arch in groups: for garch in groups[arch]: if _supports_arch(major, minor, garch): return True return False groups = OrderedDict([ ("fat", ("i386", "ppc")), ("intel", ("x86_64", "i386")), ("fat64", ("x86_64", "ppc64")), ("fat32", ("x86_64", "i386", "ppc")), ]) if _supports_arch(major, minor, machine): arches.append(machine) for garch in groups: if machine in groups[garch] and _supports_arch(major, minor, garch): arches.append(garch) arches.append('universal') return arches def get_supported(versions=None, noarch=False, platform=None, impl=None, abi=None): """Return a list of supported tags for each version specified in `versions`. :param versions: a list of string versions, of the form ["33", "32"], or None. The first version will be assumed to support our ABI. :param platform: specify the exact platform you want valid tags for, or None. If None, use the local system platform. :param impl: specify the exact implementation you want valid tags for, or None. If None, use the local interpreter impl. :param abi: specify the exact abi you want valid tags for, or None. If None, use the local interpreter abi. """ supported = [] # Versions must be given with respect to the preference if versions is None: versions = [] version_info = get_impl_version_info() major = version_info[:-1] # Support all previous minor Python versions. 
        for minor in range(version_info[-1], -1, -1):
            versions.append(''.join(map(str, major + (minor,))))

    impl = impl or get_abbr_impl()

    abis = []

    abi = abi or get_abi_tag()
    if abi:
        abis[0:0] = [abi]

    abi3s = set()
    import imp
    for suffix in imp.get_suffixes():
        if suffix[0].startswith('.abi'):
            abi3s.add(suffix[0].split('.', 2)[1])

    abis.extend(sorted(list(abi3s)))

    abis.append('none')

    if not noarch:
        arch = platform or get_platform()
        if arch.startswith('macosx'):
            # support macosx-10.6-intel on macosx-10.9-x86_64
            match = _osx_arch_pat.match(arch)
            if match:
                name, major, minor, actual_arch = match.groups()
                tpl = '{}_{}_%i_%s'.format(name, major)
                arches = []
                for m in reversed(range(int(minor) + 1)):
                    for a in get_darwin_arches(int(major), m, actual_arch):
                        arches.append(tpl % (m, a))
            else:
                # arch pattern didn't match (?!)
                arches = [arch]
        elif platform is None and is_manylinux1_compatible():
            arches = [arch.replace('linux', 'manylinux1'), arch]
        else:
            arches = [arch]

        # Current version, current API (built specifically for our Python):
        for abi in abis:
            for arch in arches:
                supported.append(('%s%s' % (impl, versions[0]), abi, arch))

        # abi3 modules compatible with older version of Python
        for version in versions[1:]:
            # abi3 was introduced in Python 3.2
            if version in {'31', '30'}:
                break
            for abi in abi3s:   # empty set if not Python 3
                for arch in arches:
                    supported.append(("%s%s" % (impl, version), abi, arch))

        # Has binaries, does not use the Python API:
        for arch in arches:
            supported.append(('py%s' % (versions[0][0]), 'none', arch))

    # No abi / arch, but requires our implementation:
    supported.append(('%s%s' % (impl, versions[0]), 'none', 'any'))

    # Tagged specifically as being cross-version compatible
    # (with just the major version specified)
    supported.append(('%s%s' % (impl, versions[0][0]), 'none', 'any'))

    # No abi / arch, generic Python
    for i, version in enumerate(versions):
        supported.append(('py%s' % (version,), 'none', 'any'))
        if i == 0:
            supported.append(('py%s' % (version[0]), 'none', 'any'))

    return supported


implementation_tag = get_impl_tag()

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/py27compat.py

"""
Compatibility Support for Python 2.7 and earlier
"""

import platform

from setuptools.extern import six


def get_all_headers(message, key):
    """
    Given an HTTPMessage, return all headers matching a given key.
    """
    return message.get_all(key)


if six.PY2:
    def get_all_headers(message, key):
        return message.getheaders(key)


linux_py2_ascii = (
    platform.system() == 'Linux' and
    six.PY2
)

rmtree_safe = str if linux_py2_ascii else lambda x: x
"""Workaround for http://bugs.python.org/issue24672"""

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/py31compat.py

__all__ = []

__metaclass__ = type


try:
    # Python >=3.2
    from tempfile import TemporaryDirectory
except ImportError:
    import shutil
    import tempfile

    class TemporaryDirectory:
        """
        Very simple temporary directory context manager.
        Will try to delete afterward, but will also ignore OS and similar
        errors on deletion.
        """

        def __init__(self, **kwargs):
            self.name = None  # Handle mkdtemp raising an exception
            self.name = tempfile.mkdtemp(**kwargs)

        def __enter__(self):
            return self.name

        def __exit__(self, exctype, excvalue, exctrace):
            try:
                shutil.rmtree(self.name, True)
            except OSError:  # removal errors are not the only possible
                pass
            self.name = None

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/py33compat.py

import dis
import array
import collections

try:
    import html
except ImportError:
    html = None

from setuptools.extern import six
from setuptools.extern.six.moves import html_parser

__metaclass__ = type

OpArg = collections.namedtuple('OpArg', 'opcode arg')


class Bytecode_compat:
    def __init__(self, code):
        self.code = code

    def __iter__(self):
        """Yield '(op,arg)' pair for each operation in code object 'code'"""

        bytes = array.array('b', self.code.co_code)
        eof = len(self.code.co_code)

        ptr = 0
        extended_arg = 0

        while ptr < eof:
            op = bytes[ptr]
            if op >= dis.HAVE_ARGUMENT:
                arg = bytes[ptr + 1] + bytes[ptr + 2] * 256 + extended_arg
                ptr += 3

                if op == dis.EXTENDED_ARG:
                    long_type = six.integer_types[-1]
                    extended_arg = arg * long_type(65536)
                    continue
            else:
                arg = None
                ptr += 1

            yield OpArg(op, arg)


Bytecode = getattr(dis, 'Bytecode', Bytecode_compat)

unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)

cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/sandbox.py

import os
import sys
import tempfile
import operator
import functools
import itertools
import re
import contextlib
import pickle
import textwrap

from setuptools.extern import six
from setuptools.extern.six.moves import builtins, map

import pkg_resources.py31compat

if sys.platform.startswith('java'):
    import org.python.modules.posix.PosixModule as _os
else:
    _os = sys.modules[os.name]
try:
    _file = file
except NameError:
    _file = None
_open = open

from distutils.errors import DistutilsError
from pkg_resources import working_set


__all__ = [
    "AbstractSandbox", "DirectorySandbox", "SandboxViolation", "run_setup",
]


def _execfile(filename, globals, locals=None):
    """
    Python 3 implementation of execfile.
    """
    mode = 'rb'
    with open(filename, mode) as stream:
        script = stream.read()
    if locals is None:
        locals = globals
    code = compile(script, filename, 'exec')
    exec(code, globals, locals)


@contextlib.contextmanager
def save_argv(repl=None):
    saved = sys.argv[:]
    if repl is not None:
        sys.argv[:] = repl
    try:
        yield saved
    finally:
        sys.argv[:] = saved


@contextlib.contextmanager
def save_path():
    saved = sys.path[:]
    try:
        yield saved
    finally:
        sys.path[:] = saved


@contextlib.contextmanager
def override_temp(replacement):
    """
    Monkey-patch tempfile.tempdir with replacement, ensuring it exists
    """
    pkg_resources.py31compat.makedirs(replacement, exist_ok=True)

    saved = tempfile.tempdir

    tempfile.tempdir = replacement

    try:
        yield
    finally:
        tempfile.tempdir = saved


@contextlib.contextmanager
def pushd(target):
    saved = os.getcwd()
    os.chdir(target)
    try:
        yield saved
    finally:
        os.chdir(saved)


class UnpickleableException(Exception):
    """
    An exception representing another Exception that could not be pickled.
    """

    @staticmethod
    def dump(type, exc):
        """
        Always return a dumped (pickled) type and exc.
If exc can't be pickled, wrap it in UnpickleableException first. """ try: return pickle.dumps(type), pickle.dumps(exc) except Exception: # get UnpickleableException inside the sandbox from setuptools.sandbox import UnpickleableException as cls return cls.dump(cls, cls(repr(exc))) class ExceptionSaver: """ A Context Manager that will save an exception, serialized, and restore it later. """ def __enter__(self): return self def __exit__(self, type, exc, tb): if not exc: return # dump the exception self._saved = UnpickleableException.dump(type, exc) self._tb = tb # suppress the exception return True def resume(self): "restore and re-raise any exception" if '_saved' not in vars(self): return type, exc = map(pickle.loads, self._saved) six.reraise(type, exc, self._tb) @contextlib.contextmanager def save_modules(): """ Context in which imported modules are saved. Translates exceptions internal to the context into the equivalent exception outside the context. """ saved = sys.modules.copy() with ExceptionSaver() as saved_exc: yield saved sys.modules.update(saved) # remove any modules imported since del_modules = ( mod_name for mod_name in sys.modules if mod_name not in saved # exclude any encodings modules. See #285 and not mod_name.startswith('encodings.') ) _clear_modules(del_modules) saved_exc.resume() def _clear_modules(module_names): for mod_name in list(module_names): del sys.modules[mod_name] @contextlib.contextmanager def save_pkg_resources_state(): saved = pkg_resources.__getstate__() try: yield saved finally: pkg_resources.__setstate__(saved) @contextlib.contextmanager def setup_context(setup_dir): temp_dir = os.path.join(setup_dir, 'temp') with save_pkg_resources_state(): with save_modules(): hide_setuptools() with save_path(): with save_argv(): with override_temp(temp_dir): with pushd(setup_dir): # ensure setuptools commands are available __import__('setuptools') yield def _needs_hiding(mod_name): """ >>> _needs_hiding('setuptools') True >>> _needs_hiding('pkg_resources') True >>> _needs_hiding('setuptools_plugin') False >>> _needs_hiding('setuptools.__init__') True >>> _needs_hiding('distutils') True >>> _needs_hiding('os') False >>> _needs_hiding('Cython') True """ pattern = re.compile(r'(setuptools|pkg_resources|distutils|Cython)(\.|$)') return bool(pattern.match(mod_name)) def hide_setuptools(): """ Remove references to setuptools' modules from sys.modules to allow the invocation to import the most appropriate setuptools. This technique is necessary to avoid issues such as #315 where setuptools upgrading itself would fail to find a function declared in the metadata. 
""" modules = filter(_needs_hiding, sys.modules) _clear_modules(modules) def run_setup(setup_script, args): """Run a distutils setup script, sandboxed in its directory""" setup_dir = os.path.abspath(os.path.dirname(setup_script)) with setup_context(setup_dir): try: sys.argv[:] = [setup_script] + list(args) sys.path.insert(0, setup_dir) # reset to include setup dir, w/clean callback list working_set.__init__() working_set.callbacks.append(lambda dist: dist.activate()) # __file__ should be a byte string on Python 2 (#712) dunder_file = ( setup_script if isinstance(setup_script, str) else setup_script.encode(sys.getfilesystemencoding()) ) with DirectorySandbox(setup_dir): ns = dict(__file__=dunder_file, __name__='__main__') _execfile(setup_script, ns) except SystemExit as v: if v.args and v.args[0]: raise # Normal exit, just return class AbstractSandbox: """Wrap 'os' module and 'open()' builtin for virtualizing setup scripts""" _active = False def __init__(self): self._attrs = [ name for name in dir(_os) if not name.startswith('_') and hasattr(self, name) ] def _copy(self, source): for name in self._attrs: setattr(os, name, getattr(source, name)) def __enter__(self): self._copy(self) if _file: builtins.file = self._file builtins.open = self._open self._active = True def __exit__(self, exc_type, exc_value, traceback): self._active = False if _file: builtins.file = _file builtins.open = _open self._copy(_os) def run(self, func): """Run 'func' under os sandboxing""" with self: return func() def _mk_dual_path_wrapper(name): original = getattr(_os, name) def wrap(self, src, dst, *args, **kw): if self._active: src, dst = self._remap_pair(name, src, dst, *args, **kw) return original(src, dst, *args, **kw) return wrap for name in ["rename", "link", "symlink"]: if hasattr(_os, name): locals()[name] = _mk_dual_path_wrapper(name) def _mk_single_path_wrapper(name, original=None): original = original or getattr(_os, name) def wrap(self, path, *args, **kw): if self._active: path = self._remap_input(name, path, *args, **kw) return original(path, *args, **kw) return wrap if _file: _file = _mk_single_path_wrapper('file', _file) _open = _mk_single_path_wrapper('open', _open) for name in [ "stat", "listdir", "chdir", "open", "chmod", "chown", "mkdir", "remove", "unlink", "rmdir", "utime", "lchown", "chroot", "lstat", "startfile", "mkfifo", "mknod", "pathconf", "access" ]: if hasattr(_os, name): locals()[name] = _mk_single_path_wrapper(name) def _mk_single_with_return(name): original = getattr(_os, name) def wrap(self, path, *args, **kw): if self._active: path = self._remap_input(name, path, *args, **kw) return self._remap_output(name, original(path, *args, **kw)) return original(path, *args, **kw) return wrap for name in ['readlink', 'tempnam']: if hasattr(_os, name): locals()[name] = _mk_single_with_return(name) def _mk_query(name): original = getattr(_os, name) def wrap(self, *args, **kw): retval = original(*args, **kw) if self._active: return self._remap_output(name, retval) return retval return wrap for name in ['getcwd', 'tmpnam']: if hasattr(_os, name): locals()[name] = _mk_query(name) def _validate_path(self, path): """Called to remap or validate any path, whether input or output""" return path def _remap_input(self, operation, path, *args, **kw): """Called for path inputs""" return self._validate_path(path) def _remap_output(self, operation, path): """Called for path outputs""" return self._validate_path(path) def _remap_pair(self, operation, src, dst, *args, **kw): """Called for path pairs like rename, 
link, and symlink operations""" return ( self._remap_input(operation + '-from', src, *args, **kw), self._remap_input(operation + '-to', dst, *args, **kw) ) if hasattr(os, 'devnull'): _EXCEPTIONS = [os.devnull,] else: _EXCEPTIONS = [] class DirectorySandbox(AbstractSandbox): """Restrict operations to a single subdirectory - pseudo-chroot""" write_ops = dict.fromkeys([ "open", "chmod", "chown", "mkdir", "remove", "unlink", "rmdir", "utime", "lchown", "chroot", "mkfifo", "mknod", "tempnam", ]) _exception_patterns = [ # Allow lib2to3 to attempt to save a pickled grammar object (#121) r'.*lib2to3.*\.pickle$', ] "exempt writing to paths that match the pattern" def __init__(self, sandbox, exceptions=_EXCEPTIONS): self._sandbox = os.path.normcase(os.path.realpath(sandbox)) self._prefix = os.path.join(self._sandbox, '') self._exceptions = [ os.path.normcase(os.path.realpath(path)) for path in exceptions ] AbstractSandbox.__init__(self) def _violation(self, operation, *args, **kw): from setuptools.sandbox import SandboxViolation raise SandboxViolation(operation, args, kw) if _file: def _file(self, path, mode='r', *args, **kw): if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path): self._violation("file", path, mode, *args, **kw) return _file(path, mode, *args, **kw) def _open(self, path, mode='r', *args, **kw): if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path): self._violation("open", path, mode, *args, **kw) return _open(path, mode, *args, **kw) def tmpnam(self): self._violation("tmpnam") def _ok(self, path): active = self._active try: self._active = False realpath = os.path.normcase(os.path.realpath(path)) return ( self._exempted(realpath) or realpath == self._sandbox or realpath.startswith(self._prefix) ) finally: self._active = active def _exempted(self, filepath): start_matches = ( filepath.startswith(exception) for exception in self._exceptions ) pattern_matches = ( re.match(pattern, filepath) for pattern in self._exception_patterns ) candidates = itertools.chain(start_matches, pattern_matches) return any(candidates) def _remap_input(self, operation, path, *args, **kw): """Called for path inputs""" if operation in self.write_ops and not self._ok(path): self._violation(operation, os.path.realpath(path), *args, **kw) return path def _remap_pair(self, operation, src, dst, *args, **kw): """Called for path pairs like rename, link, and symlink operations""" if not self._ok(src) or not self._ok(dst): self._violation(operation, src, dst, *args, **kw) return (src, dst) def open(self, file, flags, mode=0o777, *args, **kw): """Called for low-level os.open()""" if flags & WRITE_FLAGS and not self._ok(file): self._violation("os.open", file, flags, mode, *args, **kw) return _os.open(file, flags, mode, *args, **kw) WRITE_FLAGS = functools.reduce( operator.or_, [getattr(_os, a, 0) for a in "O_WRONLY O_RDWR O_APPEND O_CREAT O_TRUNC O_TEMPORARY".split()] ) class SandboxViolation(DistutilsError): """A setup script attempted to modify the filesystem outside the sandbox""" tmpl = textwrap.dedent(""" SandboxViolation: {cmd}{args!r} {kwargs} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. 
""").lstrip() def __str__(self): cmd, args, kwargs = self.args return self.tmpl.format(**locals()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/site-patch.py0000644000175000017500000000437600000000000027107 0ustar00vincentvincent00000000000000def __boot(): import sys import os PYTHONPATH = os.environ.get('PYTHONPATH') if PYTHONPATH is None or (sys.platform == 'win32' and not PYTHONPATH): PYTHONPATH = [] else: PYTHONPATH = PYTHONPATH.split(os.pathsep) pic = getattr(sys, 'path_importer_cache', {}) stdpath = sys.path[len(PYTHONPATH):] mydir = os.path.dirname(__file__) for item in stdpath: if item == mydir or not item: continue # skip if current dir. on Windows, or my own directory importer = pic.get(item) if importer is not None: loader = importer.find_module('site') if loader is not None: # This should actually reload the current module loader.load_module('site') break else: try: import imp # Avoid import loop in Python 3 stream, path, descr = imp.find_module('site', [item]) except ImportError: continue if stream is None: continue try: # This should actually reload the current module imp.load_module('site', stream, path, descr) finally: stream.close() break else: raise ImportError("Couldn't find the real 'site' module") known_paths = dict([(makepath(item)[1], 1) for item in sys.path]) # 2.2 comp oldpos = getattr(sys, '__egginsert', 0) # save old insertion position sys.__egginsert = 0 # and reset the current one for item in PYTHONPATH: addsitedir(item) sys.__egginsert += oldpos # restore effective old position d, nd = makepath(stdpath[0]) insert_at = None new_path = [] for item in sys.path: p, np = makepath(item) if np == nd and insert_at is None: # We've hit the first 'system' path entry, so added entries go here insert_at = len(new_path) if np in known_paths or insert_at is None: new_path.append(item) else: # new path after the insert point, back-insert it new_path.insert(insert_at, item) insert_at += 1 sys.path[:] = new_path if __name__ == 'site': __boot() del __boot ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/ssl_support.py0000644000175000017500000002045500000000000027437 0ustar00vincentvincent00000000000000import os import socket import atexit import re import functools from setuptools.extern.six.moves import urllib, http_client, map, filter from pkg_resources import ResolutionError, ExtractionError try: import ssl except ImportError: ssl = None __all__ = [ 'VerifyingHTTPSHandler', 'find_ca_bundle', 'is_available', 'cert_paths', 'opener_for' ] cert_paths = """ /etc/pki/tls/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt /usr/share/ssl/certs/ca-bundle.crt /usr/local/share/certs/ca-root.crt /etc/ssl/cert.pem /System/Library/OpenSSL/certs/cert.pem /usr/local/share/certs/ca-root-nss.crt /etc/ssl/ca-bundle.pem """.strip().split() try: HTTPSHandler = urllib.request.HTTPSHandler HTTPSConnection = http_client.HTTPSConnection except AttributeError: HTTPSHandler = HTTPSConnection = object is_available = ssl is not None and object not in (HTTPSHandler, HTTPSConnection) try: from ssl import CertificateError, match_hostname except ImportError: try: from backports.ssl_match_hostname import CertificateError from backports.ssl_match_hostname import match_hostname except ImportError: CertificateError = None match_hostname = None if not CertificateError: class 
CertificateError(ValueError): pass if not match_hostname: def _dnsname_match(dn, hostname, max_wildcards=1): """Matching according to RFC 6125, section 6.4.3 https://tools.ietf.org/html/rfc6125#section-6.4.3 """ pats = [] if not dn: return False # Ported from python3-syntax: # leftmost, *remainder = dn.split(r'.') parts = dn.split(r'.') leftmost = parts[0] remainder = parts[1:] wildcards = leftmost.count('*') if wildcards > max_wildcards: # Issue #17980: avoid denials of service by refusing more # than one wildcard per fragment. A survey of established # policy among SSL implementations showed it to be a # reasonable choice. raise CertificateError( "too many wildcards in certificate DNS name: " + repr(dn)) # speed up common case w/o wildcards if not wildcards: return dn.lower() == hostname.lower() # RFC 6125, section 6.4.3, subitem 1. # The client SHOULD NOT attempt to match a presented identifier in which # the wildcard character comprises a label other than the left-most label. if leftmost == '*': # When '*' is a fragment by itself, it matches a non-empty dotless # fragment. pats.append('[^.]+') elif leftmost.startswith('xn--') or hostname.startswith('xn--'): # RFC 6125, section 6.4.3, subitem 3. # The client SHOULD NOT attempt to match a presented identifier # where the wildcard character is embedded within an A-label or # U-label of an internationalized domain name. pats.append(re.escape(leftmost)) else: # Otherwise, '*' matches any dotless string, e.g. www* pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) # add the remaining fragments, ignore any wildcards for frag in remainder: pats.append(re.escape(frag)) pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) return pat.match(hostname) def match_hostname(cert, hostname): """Verify that *cert* (in decoded format as returned by SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 rules are followed, but IP addresses are not accepted for *hostname*. CertificateError is raised on failure. On success, the function returns nothing. """ if not cert: raise ValueError("empty or no certificate") dnsnames = [] san = cert.get('subjectAltName', ()) for key, value in san: if key == 'DNS': if _dnsname_match(value, hostname): return dnsnames.append(value) if not dnsnames: # The subject is only checked when there is no dNSName entry # in subjectAltName for sub in cert.get('subject', ()): for key, value in sub: # XXX according to RFC 2818, the most specific Common Name # must be used. 
if key == 'commonName': if _dnsname_match(value, hostname): return dnsnames.append(value) if len(dnsnames) > 1: raise CertificateError("hostname %r " "doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames)))) elif len(dnsnames) == 1: raise CertificateError("hostname %r " "doesn't match %r" % (hostname, dnsnames[0])) else: raise CertificateError("no appropriate commonName or " "subjectAltName fields were found") class VerifyingHTTPSHandler(HTTPSHandler): """Simple verifying handler: no auth, subclasses, timeouts, etc.""" def __init__(self, ca_bundle): self.ca_bundle = ca_bundle HTTPSHandler.__init__(self) def https_open(self, req): return self.do_open( lambda host, **kw: VerifyingHTTPSConn(host, self.ca_bundle, **kw), req ) class VerifyingHTTPSConn(HTTPSConnection): """Simple verifying connection: no auth, subclasses, timeouts, etc.""" def __init__(self, host, ca_bundle, **kw): HTTPSConnection.__init__(self, host, **kw) self.ca_bundle = ca_bundle def connect(self): sock = socket.create_connection( (self.host, self.port), getattr(self, 'source_address', None) ) # Handle the socket if a (proxy) tunnel is present if hasattr(self, '_tunnel') and getattr(self, '_tunnel_host', None): self.sock = sock self._tunnel() # http://bugs.python.org/issue7776: Python>=3.4.1 and >=2.7.7 # change self.host to mean the proxy server host when tunneling is # being used. Adapt, since we are interested in the destination # host for the match_hostname() comparison. actual_host = self._tunnel_host else: actual_host = self.host if hasattr(ssl, 'create_default_context'): ctx = ssl.create_default_context(cafile=self.ca_bundle) self.sock = ctx.wrap_socket(sock, server_hostname=actual_host) else: # This is for python < 2.7.9 and < 3.4? self.sock = ssl.wrap_socket( sock, cert_reqs=ssl.CERT_REQUIRED, ca_certs=self.ca_bundle ) try: match_hostname(self.sock.getpeercert(), actual_host) except CertificateError: self.sock.shutdown(socket.SHUT_RDWR) self.sock.close() raise def opener_for(ca_bundle=None): """Get a urlopen() replacement that uses ca_bundle for verification""" return urllib.request.build_opener( VerifyingHTTPSHandler(ca_bundle or find_ca_bundle()) ).open # from jaraco.functools def once(func): @functools.wraps(func) def wrapper(*args, **kwargs): if not hasattr(func, 'always_returns'): func.always_returns = func(*args, **kwargs) return func.always_returns return wrapper @once def get_win_certfile(): try: import wincertstore except ImportError: return None class CertFile(wincertstore.CertFile): def __init__(self): super(CertFile, self).__init__() atexit.register(self.close) def close(self): try: super(CertFile, self).close() except OSError: pass _wincerts = CertFile() _wincerts.addstore('CA') _wincerts.addstore('ROOT') return _wincerts.name def find_ca_bundle(): """Return an existing CA bundle path, or None""" extant_cert_paths = filter(os.path.isfile, cert_paths) return ( get_win_certfile() or next(extant_cert_paths, None) or _certifi_where() ) def _certifi_where(): try: return __import__('certifi').where() except (ImportError, ResolutionError, ExtractionError): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/unicode_utils.py0000644000175000017500000000174400000000000027710 0ustar00vincentvincent00000000000000import unicodedata import sys from setuptools.extern import six # HFS Plus uses decomposed UTF-8 def decompose(path): if isinstance(path, six.text_type): return 
unicodedata.normalize('NFD', path) try: path = path.decode('utf-8') path = unicodedata.normalize('NFD', path) path = path.encode('utf-8') except UnicodeError: pass # Not UTF-8 return path def filesys_decode(path): """ Ensure that the given path is decoded, None when no expected encoding works """ if isinstance(path, six.text_type): return path fs_enc = sys.getfilesystemencoding() or 'utf-8' candidates = fs_enc, 'utf-8' for enc in candidates: try: return path.decode(enc) except UnicodeDecodeError: continue def try_encode(string, enc): "turn unicode encoding into a functional routine" try: return string.encode(enc) except UnicodeEncodeError: return None ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/version.py0000644000175000017500000000022000000000000026513 0ustar00vincentvincent00000000000000import pkg_resources try: __version__ = pkg_resources.get_distribution('setuptools').version except Exception: __version__ = 'unknown' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/wheel.py0000644000175000017500000001763600000000000026155 0ustar00vincentvincent00000000000000"""Wheels support.""" from distutils.util import get_platform import email import itertools import os import posixpath import re import zipfile import pkg_resources import setuptools from pkg_resources import parse_version from setuptools.extern.packaging.utils import canonicalize_name from setuptools.extern.six import PY3 from setuptools import pep425tags from setuptools.command.egg_info import write_requirements __metaclass__ = type WHEEL_NAME = re.compile( r"""^(?P<project_name>.+?)-(?P<version>\d.*?) ((-(?P<build>\d.*?))?-(?P<py_version>.+?)-(?P<abi>.+?)-(?P<platform>.+?) )\.whl$""", re.VERBOSE).match NAMESPACE_PACKAGE_INIT = '''\ try: __import__('pkg_resources').declare_namespace(__name__) except ImportError: __path__ = __import__('pkgutil').extend_path(__path__, __name__) ''' def unpack(src_dir, dst_dir): '''Move everything under `src_dir` to `dst_dir`, and delete the former.''' for dirpath, dirnames, filenames in os.walk(src_dir): subdir = os.path.relpath(dirpath, src_dir) for f in filenames: src = os.path.join(dirpath, f) dst = os.path.join(dst_dir, subdir, f) os.renames(src, dst) for n, d in reversed(list(enumerate(dirnames))): src = os.path.join(dirpath, d) dst = os.path.join(dst_dir, subdir, d) if not os.path.exists(dst): # Directory does not exist in destination, # rename it and prune it from os.walk list. os.renames(src, dst) del dirnames[n] # Cleanup.
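# (Editor's note.) os.renames() above moves each file and prunes any source
# directories that become empty along the way, so once both loops have run,
# src_dir is expected to contain only (possibly nested) empty directories --
# which is what the final cleanup walk below asserts before removing them.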
for dirpath, dirnames, filenames in os.walk(src_dir, topdown=True): assert not filenames os.rmdir(dirpath) class Wheel: def __init__(self, filename): match = WHEEL_NAME(os.path.basename(filename)) if match is None: raise ValueError('invalid wheel name: %r' % filename) self.filename = filename for k, v in match.groupdict().items(): setattr(self, k, v) def tags(self): '''List tags (py_version, abi, platform) supported by this wheel.''' return itertools.product( self.py_version.split('.'), self.abi.split('.'), self.platform.split('.'), ) def is_compatible(self): '''Is the wheel compatible with the current platform?''' supported_tags = pep425tags.get_supported() return next((True for t in self.tags() if t in supported_tags), False) def egg_name(self): return pkg_resources.Distribution( project_name=self.project_name, version=self.version, platform=(None if self.platform == 'any' else get_platform()), ).egg_name() + '.egg' def get_dist_info(self, zf): # find the correct name of the .dist-info dir in the wheel file for member in zf.namelist(): dirname = posixpath.dirname(member) if (dirname.endswith('.dist-info') and canonicalize_name(dirname).startswith( canonicalize_name(self.project_name))): return dirname raise ValueError("unsupported wheel format. .dist-info not found") def install_as_egg(self, destination_eggdir): '''Install wheel as an egg directory.''' with zipfile.ZipFile(self.filename) as zf: self._install_as_egg(destination_eggdir, zf) def _install_as_egg(self, destination_eggdir, zf): dist_basename = '%s-%s' % (self.project_name, self.version) dist_info = self.get_dist_info(zf) dist_data = '%s.data' % dist_basename egg_info = os.path.join(destination_eggdir, 'EGG-INFO') self._convert_metadata(zf, destination_eggdir, dist_info, egg_info) self._move_data_entries(destination_eggdir, dist_data) self._fix_namespace_packages(egg_info, destination_eggdir) @staticmethod def _convert_metadata(zf, destination_eggdir, dist_info, egg_info): def get_metadata(name): with zf.open(posixpath.join(dist_info, name)) as fp: value = fp.read().decode('utf-8') if PY3 else fp.read() return email.parser.Parser().parsestr(value) wheel_metadata = get_metadata('WHEEL') # Check wheel format version is supported. wheel_version = parse_version(wheel_metadata.get('Wheel-Version')) wheel_v1 = ( parse_version('1.0') <= wheel_version < parse_version('2.0dev0') ) if not wheel_v1: raise ValueError( 'unsupported wheel format version: %s' % wheel_version) # Extract to target directory. os.mkdir(destination_eggdir) zf.extractall(destination_eggdir) # Convert metadata.
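# (Editor's note.) The Wheel-Version gate above relies on PEP 440 ordering
# as implemented by pkg_resources.parse_version; a minimal sketch:
#   >>> from pkg_resources import parse_version
#   >>> parse_version('1.0') <= parse_version('1.9') < parse_version('2.0dev0')
#   True
# so any 1.x metadata version passes, while '2.0dev0' and later are rejected.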
dist_info = os.path.join(destination_eggdir, dist_info) dist = pkg_resources.Distribution.from_location( destination_eggdir, dist_info, metadata=pkg_resources.PathMetadata(destination_eggdir, dist_info), ) # Note: Evaluate and strip markers now, # as it's difficult to convert back from the syntax: # foobar; "linux" in sys_platform and extra == 'test' def raw_req(req): req.marker = None return str(req) install_requires = list(sorted(map(raw_req, dist.requires()))) extras_require = { extra: sorted( req for req in map(raw_req, dist.requires((extra,))) if req not in install_requires ) for extra in dist.extras } os.rename(dist_info, egg_info) os.rename( os.path.join(egg_info, 'METADATA'), os.path.join(egg_info, 'PKG-INFO'), ) setup_dist = setuptools.Distribution( attrs=dict( install_requires=install_requires, extras_require=extras_require, ), ) write_requirements( setup_dist.get_command_obj('egg_info'), None, os.path.join(egg_info, 'requires.txt'), ) @staticmethod def _move_data_entries(destination_eggdir, dist_data): """Move data entries to their correct location.""" dist_data = os.path.join(destination_eggdir, dist_data) dist_data_scripts = os.path.join(dist_data, 'scripts') if os.path.exists(dist_data_scripts): egg_info_scripts = os.path.join( destination_eggdir, 'EGG-INFO', 'scripts') os.mkdir(egg_info_scripts) for entry in os.listdir(dist_data_scripts): # Remove bytecode, as it's not properly handled # during easy_install scripts install phase. if entry.endswith('.pyc'): os.unlink(os.path.join(dist_data_scripts, entry)) else: os.rename( os.path.join(dist_data_scripts, entry), os.path.join(egg_info_scripts, entry), ) os.rmdir(dist_data_scripts) for subdir in filter(os.path.exists, ( os.path.join(dist_data, d) for d in ('data', 'headers', 'purelib', 'platlib') )): unpack(subdir, destination_eggdir) if os.path.exists(dist_data): os.rmdir(dist_data) @staticmethod def _fix_namespace_packages(egg_info, destination_eggdir): namespace_packages = os.path.join( egg_info, 'namespace_packages.txt') if os.path.exists(namespace_packages): with open(namespace_packages) as fp: namespace_packages = fp.read().split() for mod in namespace_packages: mod_dir = os.path.join(destination_eggdir, *mod.split('.')) mod_init = os.path.join(mod_dir, '__init__.py') if os.path.exists(mod_dir) and not os.path.exists(mod_init): with open(mod_init, 'w') as fp: fp.write(NAMESPACE_PACKAGE_INIT) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430189.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/setuptools/windows_support.py0000644000175000017500000000131200000000000030317 0ustar00vincentvincent00000000000000import platform import ctypes def windows_only(func): if platform.system() != 'Windows': return lambda *args, **kwargs: None return func @windows_only def hide_file(path): """ Set the hidden attribute on a file or directory. From http://stackoverflow.com/questions/19622133/ `path` must be text. 
""" __import__('ctypes.wintypes') SetFileAttributes = ctypes.windll.kernel32.SetFileAttributesW SetFileAttributes.argtypes = ctypes.wintypes.LPWSTR, ctypes.wintypes.DWORD SetFileAttributes.restype = ctypes.wintypes.BOOL FILE_ATTRIBUTE_HIDDEN = 0x02 ret = SetFileAttributes(path, FILE_ATTRIBUTE_HIDDEN) if not ret: raise ctypes.WinError() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735100.0699942 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/0000755000175000017500000000000000000000000023345 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/__init__.py0000644000175000017500000000014000000000000025451 0ustar00vincentvincent00000000000000# __variables__ with double-quoted values will be available in setup.py: __version__ = "0.33.4" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/__main__.py0000644000175000017500000000064100000000000025440 0ustar00vincentvincent00000000000000""" Wheel command line tool (enable python -m wheel syntax) """ import sys def main(): # needed for console script if __package__ == '': # To be able to run 'python wheel-0.9.whl/wheel': import os.path path = os.path.dirname(os.path.dirname(__file__)) sys.path[0:0] = [path] import wheel.cli sys.exit(wheel.cli.main()) if __name__ == "__main__": sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/bdist_wheel.py0000644000175000017500000003464500000000000026224 0ustar00vincentvincent00000000000000""" Create a wheel (.whl) distribution. A wheel is a built archive format. """ import os import shutil import sys import re from email.generator import Generator from distutils.core import Command from distutils.sysconfig import get_python_version from distutils import log as logger from glob import iglob from shutil import rmtree from warnings import warn import pkg_resources from .pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag, get_platform from .pkginfo import write_pkg_info from .metadata import pkginfo_to_metadata from .wheelfile import WheelFile from . import pep425tags from . 
import __version__ as wheel_version safe_name = pkg_resources.safe_name safe_version = pkg_resources.safe_version PY_LIMITED_API_PATTERN = r'cp3\d' def safer_name(name): return safe_name(name).replace('-', '_') def safer_version(version): return safe_version(version).replace('-', '_') class bdist_wheel(Command): description = 'create a wheel distribution' user_options = [('bdist-dir=', 'b', "temporary directory for creating the distribution"), ('plat-name=', 'p', "platform name to embed in generated filenames " "(default: %s)" % get_platform()), ('keep-temp', 'k', "keep the pseudo-installation tree around after " + "creating the distribution archive"), ('dist-dir=', 'd', "directory to put final built distributions in"), ('skip-build', None, "skip rebuilding everything (for testing/debugging)"), ('relative', None, "build the archive using relative paths " "(default: false)"), ('owner=', 'u', "Owner name used when creating a tar file" " [default: current user]"), ('group=', 'g', "Group name used when creating a tar file" " [default: current group]"), ('universal', None, "make a universal wheel" " (default: false)"), ('python-tag=', None, "Python implementation compatibility tag" " (default: py%s)" % get_impl_ver()[0]), ('build-number=', None, "Build number for this particular version. " "As specified in PEP-0427, this must start with a digit. " "[default: None]"), ('py-limited-api=', None, "Python tag (cp32|cp33|cpNN) for abi3 wheel tag" " (default: false)"), ] boolean_options = ['keep-temp', 'skip-build', 'relative', 'universal'] def initialize_options(self): self.bdist_dir = None self.data_dir = None self.plat_name = None self.plat_tag = None self.format = 'zip' self.keep_temp = False self.dist_dir = None self.egginfo_dir = None self.root_is_pure = None self.skip_build = None self.relative = False self.owner = None self.group = None self.universal = False self.python_tag = 'py' + get_impl_ver()[0] self.build_number = None self.py_limited_api = False self.plat_name_supplied = False def finalize_options(self): if self.bdist_dir is None: bdist_base = self.get_finalized_command('bdist').bdist_base self.bdist_dir = os.path.join(bdist_base, 'wheel') self.data_dir = self.wheel_dist_name + '.data' self.plat_name_supplied = self.plat_name is not None need_options = ('dist_dir', 'plat_name', 'skip_build') self.set_undefined_options('bdist', *zip(need_options, need_options)) self.root_is_pure = not (self.distribution.has_ext_modules() or self.distribution.has_c_libraries()) if self.py_limited_api and not re.match(PY_LIMITED_API_PATTERN, self.py_limited_api): raise ValueError("py-limited-api must match '%s'" % PY_LIMITED_API_PATTERN) # Support legacy [wheel] section for setting universal wheel = self.distribution.get_option_dict('wheel') if 'universal' in wheel: # please don't define this in your global configs logger.warn('The [wheel] section is deprecated. 
Use [bdist_wheel] instead.') val = wheel['universal'][1].strip() if val.lower() in ('1', 'true', 'yes'): self.universal = True if self.build_number is not None and not self.build_number[:1].isdigit(): raise ValueError("Build tag (build-number) must start with a digit.") @property def wheel_dist_name(self): """Return distribution full name with - replaced with _""" components = (safer_name(self.distribution.get_name()), safer_version(self.distribution.get_version())) if self.build_number: components += (self.build_number,) return '-'.join(components) def get_tag(self): # bdist sets self.plat_name if unset, we should only use it for purepy # wheels if the user supplied it. if self.plat_name_supplied: plat_name = self.plat_name elif self.root_is_pure: plat_name = 'any' else: plat_name = self.plat_name or get_platform() if plat_name in ('linux-x86_64', 'linux_x86_64') and sys.maxsize == 2147483647: plat_name = 'linux_i686' plat_name = plat_name.replace('-', '_').replace('.', '_') if self.root_is_pure: if self.universal: impl = 'py2.py3' else: impl = self.python_tag tag = (impl, 'none', plat_name) else: impl_name = get_abbr_impl() impl_ver = get_impl_ver() impl = impl_name + impl_ver # We don't work on CPython 3.1, 3.0. if self.py_limited_api and (impl_name + impl_ver).startswith('cp3'): impl = self.py_limited_api abi_tag = 'abi3' else: abi_tag = str(get_abi_tag()).lower() tag = (impl, abi_tag, plat_name) supported_tags = pep425tags.get_supported( supplied_platform=plat_name if self.plat_name_supplied else None) # XXX switch to this alternate implementation for non-pure: if not self.py_limited_api: assert tag == supported_tags[0], "%s != %s" % (tag, supported_tags[0]) assert tag in supported_tags, "would build wheel with unsupported tag {}".format(tag) return tag def run(self): build_scripts = self.reinitialize_command('build_scripts') build_scripts.executable = 'python' build_scripts.force = True build_ext = self.reinitialize_command('build_ext') build_ext.inplace = False if not self.skip_build: self.run_command('build') install = self.reinitialize_command('install', reinit_subcommands=True) install.root = self.bdist_dir install.compile = False install.skip_build = self.skip_build install.warn_dir = False # A wheel without setuptools scripts is more cross-platform. # Use the (undocumented) `no_ep` option to setuptools' # install_scripts command to avoid creating entry point scripts. install_scripts = self.reinitialize_command('install_scripts') install_scripts.no_ep = True # Use a custom scheme for the archive, because we have to decide # at installation time which scheme to use. for key in ('headers', 'scripts', 'data', 'purelib', 'platlib'): setattr(install, 'install_' + key, os.path.join(self.data_dir, key)) basedir_observed = '' if os.name == 'nt': # win32 barfs if any of these are ''; could be '.'? 
# (distutils.command.install:change_roots bug) basedir_observed = os.path.normpath(os.path.join(self.data_dir, '..')) self.install_libbase = self.install_lib = basedir_observed setattr(install, 'install_purelib' if self.root_is_pure else 'install_platlib', basedir_observed) logger.info("installing to %s", self.bdist_dir) self.run_command('install') impl_tag, abi_tag, plat_tag = self.get_tag() archive_basename = "{}-{}-{}-{}".format(self.wheel_dist_name, impl_tag, abi_tag, plat_tag) if not self.relative: archive_root = self.bdist_dir else: archive_root = os.path.join( self.bdist_dir, self._ensure_relative(install.install_base)) self.set_undefined_options('install_egg_info', ('target', 'egginfo_dir')) distinfo_dirname = '{}-{}.dist-info'.format( safer_name(self.distribution.get_name()), safer_version(self.distribution.get_version())) distinfo_dir = os.path.join(self.bdist_dir, distinfo_dirname) self.egg2dist(self.egginfo_dir, distinfo_dir) self.write_wheelfile(distinfo_dir) # Make the archive if not os.path.exists(self.dist_dir): os.makedirs(self.dist_dir) wheel_path = os.path.join(self.dist_dir, archive_basename + '.whl') with WheelFile(wheel_path, 'w') as wf: wf.write_files(archive_root) # Add to 'Distribution.dist_files' so that the "upload" command works getattr(self.distribution, 'dist_files', []).append( ('bdist_wheel', get_python_version(), wheel_path)) if not self.keep_temp: logger.info('removing %s', self.bdist_dir) if not self.dry_run: rmtree(self.bdist_dir) def write_wheelfile(self, wheelfile_base, generator='bdist_wheel (' + wheel_version + ')'): from email.message import Message msg = Message() msg['Wheel-Version'] = '1.0' # of the spec msg['Generator'] = generator msg['Root-Is-Purelib'] = str(self.root_is_pure).lower() if self.build_number is not None: msg['Build'] = self.build_number # Doesn't work for bdist_wininst impl_tag, abi_tag, plat_tag = self.get_tag() for impl in impl_tag.split('.'): for abi in abi_tag.split('.'): for plat in plat_tag.split('.'): msg['Tag'] = '-'.join((impl, abi, plat)) wheelfile_path = os.path.join(wheelfile_base, 'WHEEL') logger.info('creating %s', wheelfile_path) with open(wheelfile_path, 'w') as f: Generator(f, maxheaderlen=0).flatten(msg) def _ensure_relative(self, path): # copied from dir_util, deleted drive, path = os.path.splitdrive(path) if path[0:1] == os.sep: path = drive + path[1:] return path @property def license_paths(self): metadata = self.distribution.get_option_dict('metadata') files = set() patterns = sorted({ option for option in metadata.get('license_files', ('', ''))[1].split() }) if 'license_file' in metadata: warn('The "license_file" option is deprecated. Use "license_files" instead.', DeprecationWarning) files.add(metadata['license_file'][1]) if 'license_file' not in metadata and 'license_files' not in metadata: patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*') for pattern in patterns: for path in iglob(pattern): if path not in files and os.path.isfile(path): logger.info('adding license file "%s" (matched pattern "%s")', path, pattern) files.add(path) return files def egg2dist(self, egginfo_path, distinfo_path): """Convert an .egg-info directory into a .dist-info directory""" def adios(p): """Appropriately delete directory, file or link.""" if os.path.exists(p) and not os.path.islink(p) and os.path.isdir(p): shutil.rmtree(p) elif os.path.exists(p): os.unlink(p) adios(distinfo_path) if not os.path.exists(egginfo_path): # There is no egg-info. 
This is probably because the egg-info # file/directory is not named matching the distribution name used # to name the archive file. Check for this case and report # accordingly. import glob pat = os.path.join(os.path.dirname(egginfo_path), '*.egg-info') possible = glob.glob(pat) err = "Egg metadata expected at %s but not found" % (egginfo_path,) if possible: alt = os.path.basename(possible[0]) err += " (%s found - possible misnamed archive file?)" % (alt,) raise ValueError(err) if os.path.isfile(egginfo_path): # .egg-info is a single file pkginfo_path = egginfo_path pkg_info = pkginfo_to_metadata(egginfo_path, egginfo_path) os.mkdir(distinfo_path) else: # .egg-info is a directory pkginfo_path = os.path.join(egginfo_path, 'PKG-INFO') pkg_info = pkginfo_to_metadata(egginfo_path, pkginfo_path) # ignore common egg metadata that is useless to wheel shutil.copytree(egginfo_path, distinfo_path, ignore=lambda x, y: {'PKG-INFO', 'requires.txt', 'SOURCES.txt', 'not-zip-safe'} ) # delete dependency_links if it is only whitespace dependency_links_path = os.path.join(distinfo_path, 'dependency_links.txt') with open(dependency_links_path, 'r') as dependency_links_file: dependency_links = dependency_links_file.read().strip() if not dependency_links: adios(dependency_links_path) write_pkg_info(os.path.join(distinfo_path, 'METADATA'), pkg_info) for license_path in self.license_paths: filename = os.path.basename(license_path) shutil.copy(license_path, os.path.join(distinfo_path, filename)) adios(egginfo_path) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1604735100.0699942 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/cli/0000755000175000017500000000000000000000000024114 5ustar00vincentvincent00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/cli/__init__.py0000644000175000017500000000501400000000000026225 0ustar00vincentvincent00000000000000""" Wheel command-line utility. """ from __future__ import print_function import argparse import os import sys def require_pkgresources(name): try: import pkg_resources # noqa: F401 except ImportError: raise RuntimeError("'{0}' needs pkg_resources (part of setuptools).".format(name)) class WheelError(Exception): pass def unpack_f(args): from .unpack import unpack unpack(args.wheelfile, args.dest) def pack_f(args): from .pack import pack pack(args.directory, args.dest_dir, args.build_number) def convert_f(args): from .convert import convert convert(args.files, args.dest_dir, args.verbose) def version_f(args): from .. 
import __version__ print("wheel %s" % __version__) def parser(): p = argparse.ArgumentParser() s = p.add_subparsers(help="commands") unpack_parser = s.add_parser('unpack', help='Unpack wheel') unpack_parser.add_argument('--dest', '-d', help='Destination directory', default='.') unpack_parser.add_argument('wheelfile', help='Wheel file') unpack_parser.set_defaults(func=unpack_f) repack_parser = s.add_parser('pack', help='Repack wheel') repack_parser.add_argument('directory', help='Root directory of the unpacked wheel') repack_parser.add_argument('--dest-dir', '-d', default=os.path.curdir, help="Directory to store the wheel (default %(default)s)") repack_parser.add_argument('--build-number', help="Build tag to use in the wheel name") repack_parser.set_defaults(func=pack_f) convert_parser = s.add_parser('convert', help='Convert egg or wininst to wheel') convert_parser.add_argument('files', nargs='*', help='Files to convert') convert_parser.add_argument('--dest-dir', '-d', default=os.path.curdir, help="Directory to store wheels (default %(default)s)") convert_parser.add_argument('--verbose', '-v', action='store_true') convert_parser.set_defaults(func=convert_f) version_parser = s.add_parser('version', help='Print version and exit') version_parser.set_defaults(func=version_f) help_parser = s.add_parser('help', help='Show this help') help_parser.set_defaults(func=lambda args: p.print_help()) return p def main(): p = parser() args = p.parse_args() if not hasattr(args, 'func'): p.print_help() else: try: args.func(args) return 0 except WheelError as e: print(e, file=sys.stderr) return 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/cli/convert.py0000644000175000017500000002243100000000000026150 0ustar00vincentvincent00000000000000import os.path import re import shutil import sys import tempfile import zipfile from distutils import dist from glob import iglob from ..bdist_wheel import bdist_wheel from ..wheelfile import WheelFile from . import WheelError, require_pkgresources egg_info_re = re.compile(r''' (?P<name>.+?)-(?P<ver>.+?) (-(?P<pyver>py\d\.\d) (-(?P<arch>.+?))? )?.egg$''', re.VERBOSE) class _bdist_wheel_tag(bdist_wheel): # allow the client to override the default generated wheel tag # The default bdist_wheel implementation uses python and abi tags # of the running python process. This is not suitable for # generating/repackaging prebuilt binaries.
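# (Editor's sketch, not upstream code; the 'cp27'-style tag is a made-up
# example.) A converter pins the exact wheel tag by setting the two
# attributes below before the wheel file is written, roughly:
#   bw = _bdist_wheel_tag(dist.Distribution())
#   bw.full_tag_supplied = True
#   bw.full_tag = ('cp27', 'cp27m', 'linux_x86_64')  # (pytag, soabitag, plattag)
# which is exactly how egg2wheel() and wininst2wheel() below use this class.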
full_tag_supplied = False full_tag = None # None or a (pytag, soabitag, plattag) triple def get_tag(self): if self.full_tag_supplied and self.full_tag is not None: return self.full_tag else: return bdist_wheel.get_tag(self) def egg2wheel(egg_path, dest_dir): filename = os.path.basename(egg_path) match = egg_info_re.match(filename) if not match: raise WheelError('Invalid egg file name: {}'.format(filename)) egg_info = match.groupdict() dir = tempfile.mkdtemp(suffix="_e2w") if os.path.isfile(egg_path): # assume we have a bdist_egg otherwise with zipfile.ZipFile(egg_path) as egg: egg.extractall(dir) else: # support buildout-style installed eggs directories for pth in os.listdir(egg_path): src = os.path.join(egg_path, pth) if os.path.isfile(src): shutil.copy2(src, dir) else: shutil.copytree(src, os.path.join(dir, pth)) pyver = egg_info['pyver'] if pyver: pyver = egg_info['pyver'] = pyver.replace('.', '') arch = (egg_info['arch'] or 'any').replace('.', '_').replace('-', '_') # assume all binary eggs are for CPython abi = 'cp' + pyver[2:] if arch != 'any' else 'none' root_is_purelib = egg_info['arch'] is None if root_is_purelib: bw = bdist_wheel(dist.Distribution()) else: bw = _bdist_wheel_tag(dist.Distribution()) bw.root_is_pure = root_is_purelib bw.python_tag = pyver bw.plat_name_supplied = True bw.plat_name = egg_info['arch'] or 'any' if not root_is_purelib: bw.full_tag_supplied = True bw.full_tag = (pyver, abi, arch) dist_info_dir = os.path.join(dir, '{name}-{ver}.dist-info'.format(**egg_info)) bw.egg2dist(os.path.join(dir, 'EGG-INFO'), dist_info_dir) bw.write_wheelfile(dist_info_dir, generator='egg2wheel') wheel_name = '{name}-{ver}-{pyver}-{}-{}.whl'.format(abi, arch, **egg_info) with WheelFile(os.path.join(dest_dir, wheel_name), 'w') as wf: wf.write_files(dir) shutil.rmtree(dir) def parse_wininst_info(wininfo_name, egginfo_name): """Extract metadata from filenames. Extracts the 4 metadata items needed (name, version, pyversion, arch) from the installer filename and the name of the egg-info directory embedded in the zipfile (if any). The egginfo filename has the format:: name-ver(-pyver)(-arch).egg-info The installer filename has the format:: name-ver.arch(-pyver).exe Some things to note: 1. The installer filename is not definitive. An installer can be renamed and work perfectly well as an installer. So more reliable data should be used whenever possible. 2. The egg-info data should be preferred for the name and version, because these come straight from the distutils metadata, and are mandatory. 3. The pyver from the egg-info data should be ignored, as it is constructed from the version of Python used to build the installer, which is irrelevant - the installer filename is correct here (even to the point that when it's not there, any version is implied). 4. The architecture must be taken from the installer filename, as it is not included in the egg-info data. 5. Architecture-neutral installers still have an architecture because the installer format itself (being executable) is architecture-specific. We should therefore ignore the architecture if the content is pure-python. """ egginfo = None if egginfo_name: egginfo = egg_info_re.search(egginfo_name) if not egginfo: raise ValueError("Egg info filename %s is not valid" % (egginfo_name,)) # Parse the wininst filename # 1. Distribution name (up to the first '-') w_name, sep, rest = wininfo_name.partition('-') if not sep: raise ValueError("Installer filename %s is not valid" % (wininfo_name,)) # Strip '.exe' rest = rest[:-4] # 2.
Python version (from the last '-', must start with 'py') rest2, sep, w_pyver = rest.rpartition('-') if sep and w_pyver.startswith('py'): rest = rest2 w_pyver = w_pyver.replace('.', '') else: # Not version specific - use py2.py3. While it is possible that # pure-Python code is not compatible with both Python 2 and 3, there # is no way of knowing from the wininst format, so we assume the best # here (the user can always manually rename the wheel to be more # restrictive if needed). w_pyver = 'py2.py3' # 3. Version and architecture w_ver, sep, w_arch = rest.rpartition('.') if not sep: raise ValueError("Installer filename %s is not valid" % (wininfo_name,)) if egginfo: w_name = egginfo.group('name') w_ver = egginfo.group('ver') return {'name': w_name, 'ver': w_ver, 'arch': w_arch, 'pyver': w_pyver} def wininst2wheel(path, dest_dir): with zipfile.ZipFile(path) as bdw: # Search for egg-info in the archive egginfo_name = None for filename in bdw.namelist(): if '.egg-info' in filename: egginfo_name = filename break info = parse_wininst_info(os.path.basename(path), egginfo_name) root_is_purelib = True for zipinfo in bdw.infolist(): if zipinfo.filename.startswith('PLATLIB'): root_is_purelib = False break if root_is_purelib: paths = {'purelib': ''} else: paths = {'platlib': ''} dist_info = "%(name)s-%(ver)s" % info datadir = "%s.data/" % dist_info # rewrite paths to trick ZipFile into extracting an egg # XXX grab wininst .ini - between .exe, padding, and first zip file. members = [] egginfo_name = '' for zipinfo in bdw.infolist(): key, basename = zipinfo.filename.split('/', 1) key = key.lower() basepath = paths.get(key, None) if basepath is None: basepath = datadir + key.lower() + '/' oldname = zipinfo.filename newname = basepath + basename zipinfo.filename = newname del bdw.NameToInfo[oldname] bdw.NameToInfo[newname] = zipinfo # Collect member names, but omit '' (from an entry like "PLATLIB/" if newname: members.append(newname) # Remember egg-info name for the egg2dist call below if not egginfo_name: if newname.endswith('.egg-info'): egginfo_name = newname elif '.egg-info/' in newname: egginfo_name, sep, _ = newname.rpartition('/') dir = tempfile.mkdtemp(suffix="_b2w") bdw.extractall(dir, members) # egg2wheel abi = 'none' pyver = info['pyver'] arch = (info['arch'] or 'any').replace('.', '_').replace('-', '_') # Wininst installers always have arch even if they are not # architecture-specific (because the format itself is). # So, assume the content is architecture-neutral if root is purelib. if root_is_purelib: arch = 'any' # If the installer is architecture-specific, it's almost certainly also # CPython-specific. 
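# (Editor's note.) For architecture-specific installers the implementation
# tag is rewritten just below, e.g. 'py27' -> 'cp27', mirroring the
# "binary installers are CPython" assumption egg2wheel() makes above.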
if arch != 'any': pyver = pyver.replace('py', 'cp') wheel_name = '-'.join((dist_info, pyver, abi, arch)) if root_is_purelib: bw = bdist_wheel(dist.Distribution()) else: bw = _bdist_wheel_tag(dist.Distribution()) bw.root_is_pure = root_is_purelib bw.python_tag = pyver bw.plat_name_supplied = True bw.plat_name = info['arch'] or 'any' if not root_is_purelib: bw.full_tag_supplied = True bw.full_tag = (pyver, abi, arch) dist_info_dir = os.path.join(dir, '%s.dist-info' % dist_info) bw.egg2dist(os.path.join(dir, egginfo_name), dist_info_dir) bw.write_wheelfile(dist_info_dir, generator='wininst2wheel') wheel_path = os.path.join(dest_dir, wheel_name) with WheelFile(wheel_path, 'w') as wf: wf.write_files(dir) shutil.rmtree(dir) def convert(files, dest_dir, verbose): # Only support wheel convert if pkg_resources is present require_pkgresources('wheel convert') for pat in files: for installer in iglob(pat): if os.path.splitext(installer)[1] == '.egg': conv = egg2wheel else: conv = wininst2wheel if verbose: print("{}... ".format(installer)) sys.stdout.flush() conv(installer, dest_dir) if verbose: print("OK") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/cli/pack.py0000644000175000017500000000432700000000000025412 0ustar00vincentvincent00000000000000from __future__ import print_function import os.path import re import sys from wheel.cli import WheelError from wheel.wheelfile import WheelFile DIST_INFO_RE = re.compile(r"^(?P<namever>(?P<name>.+?)-(?P<ver>\d.*?))\.dist-info$") def pack(directory, dest_dir, build_number): """Repack a previously unpacked wheel directory into a new wheel file. The .dist-info/WHEEL file must contain one or more tags so that the target wheel file name can be determined.
:param directory: The unpacked wheel directory :param dest_dir: Destination directory (defaults to the current directory) :param build_number: Optional build number (tag); when given, it is appended to the name-version part of the wheel filename """ # Find the .dist-info directory dist_info_dirs = [fn for fn in os.listdir(directory) if os.path.isdir(os.path.join(directory, fn)) and DIST_INFO_RE.match(fn)] if len(dist_info_dirs) > 1: raise WheelError('Multiple .dist-info directories found in {}'.format(directory)) elif not dist_info_dirs: raise WheelError('No .dist-info directories found in {}'.format(directory)) # Determine the target wheel filename dist_info_dir = dist_info_dirs[0] name_version = DIST_INFO_RE.match(dist_info_dir).group('namever') # Add the build number if specified if build_number: name_version += '-' + build_number # Read the tags from .dist-info/WHEEL with open(os.path.join(directory, dist_info_dir, 'WHEEL')) as f: tags = [line.split(' ')[1].rstrip() for line in f if line.startswith('Tag: ')] if not tags: raise WheelError('No tags present in {}/WHEEL; cannot determine target wheel filename' .format(dist_info_dir)) # Reassemble the tags for the wheel file impls = sorted({tag.split('-')[0] for tag in tags}) abivers = sorted({tag.split('-')[1] for tag in tags}) platforms = sorted({tag.split('-')[2] for tag in tags}) tagline = '-'.join(['.'.join(impls), '.'.join(abivers), '.'.join(platforms)]) # Repack the wheel wheel_path = os.path.join(dest_dir, '{}-{}.whl'.format(name_version, tagline)) with WheelFile(wheel_path, 'w') as wf: print("Repacking wheel as {}...".format(wheel_path), end='') sys.stdout.flush() wf.write_files(directory) print('OK') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/cli/unpack.py0000644000175000017500000000124100000000000025745 0ustar00vincentvincent00000000000000from __future__ import print_function import os.path import sys from ..wheelfile import WheelFile def unpack(path, dest='.'): """Unpack a wheel. Wheel content will be unpacked to {dest}/{name}-{ver}, where {name} is the package name and {ver} its version. :param path: The path to the wheel. :param dest: Destination directory (defaults to the current directory). """ with WheelFile(path) as wf: namever = wf.parsed_filename.group('namever') destination = os.path.join(dest, namever) print("Unpacking to: {}...".format(destination), end='') sys.stdout.flush() wf.extractall(destination) print('OK') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/metadata.py0000644000175000017500000001113300000000000025476 0ustar00vincentvincent00000000000000""" Tools for converting old- to new-style metadata. """ import os.path import re import textwrap import pkg_resources from .pkginfo import read_pkg_info # Wheel itself is probably the only program that uses non-extras markers # in METADATA/PKG-INFO. Support its syntax with the extra at the end only.
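# (Editor's note.) A METADATA line of the shape EXTRA_RE below is meant to
# match looks like, for example:
#   Requires-Dist: pytest (>=3.0.0) ; extra == 'test'
# i.e. the environment marker, with the optional extra clause last, trails
# the requirement itself.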
EXTRA_RE = re.compile( r"""^(?P<package>.*?)(;\s*(?P<condition>.*?)(extra == '(?P<extra>.*?)')?)$""") def requires_to_requires_dist(requirement): """Return the version specifier for a requirement in PEP 345/566 fashion.""" if getattr(requirement, 'url', None): return " @ " + requirement.url requires_dist = [] for op, ver in requirement.specs: requires_dist.append(op + ver) if not requires_dist: return '' return " (%s)" % ','.join(sorted(requires_dist)) def convert_requirements(requirements): """Yield Requires-Dist: strings for parsed requirements strings.""" for req in requirements: parsed_requirement = pkg_resources.Requirement.parse(req) spec = requires_to_requires_dist(parsed_requirement) extras = ",".join(sorted(parsed_requirement.extras)) if extras: extras = "[%s]" % extras yield (parsed_requirement.project_name + extras + spec) def generate_requirements(extras_require): """ Convert requirements from a setup()-style dictionary to ('Requires-Dist', 'requirement') and ('Provides-Extra', 'extra') tuples. extras_require is a dictionary of {extra: [requirements]} as passed to setup(), using the empty extra {'': [requirements]} to hold install_requires. """ for extra, depends in extras_require.items(): condition = '' extra = extra or '' if ':' in extra: # setuptools extra:condition syntax extra, condition = extra.split(':', 1) extra = pkg_resources.safe_extra(extra) if extra: yield 'Provides-Extra', extra if condition: condition = "(" + condition + ") and " condition += "extra == '%s'" % extra if condition: condition = ' ; ' + condition for new_req in convert_requirements(depends): yield 'Requires-Dist', new_req + condition def pkginfo_to_metadata(egg_info_path, pkginfo_path): """ Convert .egg-info directory with PKG-INFO to the Metadata 2.1 format """ pkg_info = read_pkg_info(pkginfo_path) pkg_info.replace_header('Metadata-Version', '2.1') # Those will be regenerated from `requires.txt`. del pkg_info['Provides-Extra'] del pkg_info['Requires-Dist'] requires_path = os.path.join(egg_info_path, 'requires.txt') if os.path.exists(requires_path): with open(requires_path) as requires_file: requires = requires_file.read() parsed_requirements = sorted(pkg_resources.split_sections(requires), key=lambda x: x[0] or '') for extra, reqs in parsed_requirements: for key, value in generate_requirements({extra: reqs}): if (key, value) not in pkg_info.items(): pkg_info[key] = value description = pkg_info['Description'] if description: pkg_info.set_payload(dedent_description(pkg_info)) del pkg_info['Description'] return pkg_info def pkginfo_unicode(pkg_info, field): """Hack to coax Unicode out of an email Message() - Python 3.3+""" text = pkg_info[field] field = field.lower() if not isinstance(text, str): if not hasattr(pkg_info, 'raw_items'): # Python 3.2 return str(text) for item in pkg_info.raw_items(): if item[0].lower() == field: text = item[1].encode('ascii', 'surrogateescape') \ .decode('utf-8') break return text def dedent_description(pkg_info): """ Dedent and convert pkg_info['Description'] to Unicode. """ description = pkg_info['Description'] # Python 3 Unicode handling, sorta. surrogates = False if not isinstance(description, str): surrogates = True description = pkginfo_unicode(pkg_info, 'Description') description_lines = description.splitlines() description_dedent = '\n'.join( # if the first line of long_description is blank, # the first line here will be indented.
(description_lines[0].lstrip(), textwrap.dedent('\n'.join(description_lines[1:])), '\n')) if surrogates: description_dedent = description_dedent \ .encode("utf8") \ .decode("ascii", "surrogateescape") return description_dedent ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/pep425tags.py0000644000175000017500000001342400000000000025621 0ustar00vincentvincent00000000000000"""Generate and work with PEP 425 Compatibility Tags.""" import distutils.util import platform import sys import sysconfig import warnings try: from importlib.machinery import get_all_suffixes except ImportError: from imp import get_suffixes as get_all_suffixes def get_config_var(var): try: return sysconfig.get_config_var(var) except IOError as e: # pip Issue #1074 warnings.warn("{0}".format(e), RuntimeWarning) return None def get_abbr_impl(): """Return abbreviated implementation name.""" impl = platform.python_implementation() if impl == 'PyPy': return 'pp' elif impl == 'Jython': return 'jy' elif impl == 'IronPython': return 'ip' elif impl == 'CPython': return 'cp' raise LookupError('Unknown Python implementation: ' + impl) def get_impl_ver(): """Return implementation version.""" impl_ver = get_config_var("py_version_nodot") if not impl_ver or get_abbr_impl() == 'pp': impl_ver = ''.join(map(str, get_impl_version_info())) return impl_ver def get_impl_version_info(): """Return sys.version_info-like tuple for use in decrementing the minor version.""" if get_abbr_impl() == 'pp': # as per https://github.com/pypa/pip/issues/2882 return (sys.version_info[0], sys.pypy_version_info.major, sys.pypy_version_info.minor) else: return sys.version_info[0], sys.version_info[1] def get_flag(var, fallback, expected=True, warn=True): """Use a fallback method for determining SOABI flags if the needed config var is unset or unavailable.""" val = get_config_var(var) if val is None: if warn: warnings.warn("Config variable '{0}' is unset, Python ABI tag may " "be incorrect".format(var), RuntimeWarning, 2) return fallback() return val == expected def get_abi_tag(): """Return the ABI tag based on SOABI (if available) or emulate SOABI (CPython 2, PyPy).""" soabi = get_config_var('SOABI') impl = get_abbr_impl() if not soabi and impl in ('cp', 'pp') and hasattr(sys, 'maxunicode'): d = '' m = '' u = '' if get_flag('Py_DEBUG', lambda: hasattr(sys, 'gettotalrefcount'), warn=(impl == 'cp')): d = 'd' if get_flag('WITH_PYMALLOC', lambda: impl == 'cp', warn=(impl == 'cp')): m = 'm' if get_flag('Py_UNICODE_SIZE', lambda: sys.maxunicode == 0x10ffff, expected=4, warn=(impl == 'cp' and sys.version_info < (3, 3))) \ and sys.version_info < (3, 3): u = 'u' abi = '%s%s%s%s%s' % (impl, get_impl_ver(), d, m, u) elif soabi and soabi.startswith('cpython-'): abi = 'cp' + soabi.split('-')[1] elif soabi: abi = soabi.replace('.', '_').replace('-', '_') else: abi = None return abi def get_platform(): """Return our platform name 'win32', 'linux_x86_64'""" # XXX remove distutils dependency result = distutils.util.get_platform().replace('.', '_').replace('-', '_') if result == "linux_x86_64" and sys.maxsize == 2147483647: # pip pull request #3497 result = "linux_i686" return result def get_supported(versions=None, supplied_platform=None): """Return a list of supported tags for each version specified in `versions`. :param versions: a list of string versions, of the form ["33", "32"], or None. The first version will be assumed to support our ABI. 
""" supported = [] # Versions must be given with respect to the preference if versions is None: versions = [] version_info = get_impl_version_info() major = version_info[:-1] # Support all previous minor Python versions. for minor in range(version_info[-1], -1, -1): versions.append(''.join(map(str, major + (minor,)))) impl = get_abbr_impl() abis = [] abi = get_abi_tag() if abi: abis[0:0] = [abi] abi3s = set() for suffix in get_all_suffixes(): if suffix[0].startswith('.abi'): abi3s.add(suffix[0].split('.', 2)[1]) abis.extend(sorted(list(abi3s))) abis.append('none') platforms = [] if supplied_platform: platforms.append(supplied_platform) platforms.append(get_platform()) # Current version, current API (built specifically for our Python): for abi in abis: for arch in platforms: supported.append(('%s%s' % (impl, versions[0]), abi, arch)) # abi3 modules compatible with older version of Python for version in versions[1:]: # abi3 was introduced in Python 3.2 if version in ('31', '30'): break for abi in abi3s: # empty set if not Python 3 for arch in platforms: supported.append(("%s%s" % (impl, version), abi, arch)) # No abi / arch, but requires our implementation: for i, version in enumerate(versions): supported.append(('%s%s' % (impl, version), 'none', 'any')) if i == 0: # Tagged specifically as being cross-version compatible # (with just the major version specified) supported.append(('%s%s' % (impl, versions[0][0]), 'none', 'any')) # Major Python version + platform; e.g. binaries not using the Python API for arch in platforms: supported.append(('py%s' % (versions[0][0]), 'none', arch)) # No abi / arch, generic Python for i, version in enumerate(versions): supported.append(('py%s' % (version,), 'none', 'any')) if i == 0: supported.append(('py%s' % (version[0]), 'none', 'any')) return supported ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/pkginfo.py0000644000175000017500000000235100000000000025355 0ustar00vincentvincent00000000000000"""Tools for reading and writing PKG-INFO / METADATA without caring about the encoding.""" from email.parser import Parser try: unicode _PY3 = False except NameError: _PY3 = True if not _PY3: from email.generator import Generator def read_pkg_info_bytes(bytestr): return Parser().parsestr(bytestr) def read_pkg_info(path): with open(path, "r") as headers: message = Parser().parse(headers) return message def write_pkg_info(path, message): with open(path, 'w') as metadata: Generator(metadata, mangle_from_=False, maxheaderlen=0).flatten(message) else: from email.generator import BytesGenerator def read_pkg_info_bytes(bytestr): headers = bytestr.decode(encoding="ascii", errors="surrogateescape") message = Parser().parsestr(headers) return message def read_pkg_info(path): with open(path, "r", encoding="ascii", errors="surrogateescape") as headers: message = Parser().parse(headers) return message def write_pkg_info(path, message): with open(path, "wb") as out: BytesGenerator(out, mangle_from_=False, maxheaderlen=0).flatten(message) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/util.py0000644000175000017500000000163400000000000024700 0ustar00vincentvincent00000000000000import base64 import io import sys if sys.version_info[0] < 3: text_type = unicode # noqa: F821 StringIO = io.BytesIO def native(s, encoding='utf-8'): if isinstance(s, unicode): return 
s.encode(encoding) return s else: text_type = str StringIO = io.StringIO def native(s, encoding='utf-8'): if isinstance(s, bytes): return s.decode(encoding) return s def urlsafe_b64encode(data): """urlsafe_b64encode without padding""" return base64.urlsafe_b64encode(data).rstrip(b'=') def urlsafe_b64decode(data): """urlsafe_b64decode without padding""" pad = b'=' * (4 - (len(data) & 3)) return base64.urlsafe_b64decode(data + pad) def as_unicode(s): if isinstance(s, bytes): return s.decode('utf-8') return s def as_bytes(s): if isinstance(s, text_type): return s.encode('utf-8') return s ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430191.0 cypari2-2.1.2/venv/lib/python3.7/site-packages/wheel/wheelfile.py0000644000175000017500000001606600000000000025674 0ustar00vincentvincent00000000000000from __future__ import print_function import csv import hashlib import os.path import re import stat import time from collections import OrderedDict from distutils import log as logger from zipfile import ZIP_DEFLATED, ZipInfo, ZipFile from wheel.cli import WheelError from wheel.util import urlsafe_b64decode, as_unicode, native, urlsafe_b64encode, as_bytes, StringIO # Non-greedy matching of an optional build number may be too clever (more # invalid wheel filenames will match). Separate regex for .dist-info? WHEEL_INFO_RE = re.compile( r"""^(?P<namever>(?P<name>.+?)-(?P<ver>.+?))(-(?P<build>\d[^-]*))? -(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)\.whl$""", re.VERBOSE) def get_zipinfo_datetime(timestamp=None): # Some applications need reproducible .whl files, but they can't do this without forcing # the timestamp of the individual ZipInfo objects. See issue #143. timestamp = int(os.environ.get('SOURCE_DATE_EPOCH', timestamp or time.time())) return time.gmtime(timestamp)[0:6] class WheelFile(ZipFile): """A ZipFile derivative class that also reads SHA-256 hashes from .dist-info/RECORD and checks any read files against those.
""" _default_algorithm = hashlib.sha256 def __init__(self, file, mode='r'): basename = os.path.basename(file) self.parsed_filename = WHEEL_INFO_RE.match(basename) if not basename.endswith('.whl') or self.parsed_filename is None: raise WheelError("Bad wheel filename {!r}".format(basename)) ZipFile.__init__(self, file, mode, compression=ZIP_DEFLATED, allowZip64=True) self.dist_info_path = '{}.dist-info'.format(self.parsed_filename.group('namever')) self.record_path = self.dist_info_path + '/RECORD' self._file_hashes = OrderedDict() self._file_sizes = {} if mode == 'r': # Ignore RECORD and any embedded wheel signatures self._file_hashes[self.record_path] = None, None self._file_hashes[self.record_path + '.jws'] = None, None self._file_hashes[self.record_path + '.p7s'] = None, None # Fill in the expected hashes by reading them from RECORD try: record = self.open(self.record_path) except KeyError: raise WheelError('Missing {} file'.format(self.record_path)) with record: for line in record: line = line.decode('utf-8') path, hash_sum, size = line.rsplit(u',', 2) if hash_sum: algorithm, hash_sum = hash_sum.split(u'=') try: hashlib.new(algorithm) except ValueError: raise WheelError('Unsupported hash algorithm: {}'.format(algorithm)) if algorithm.lower() in {'md5', 'sha1'}: raise WheelError( 'Weak hash algorithm ({}) is not permitted by PEP 427' .format(algorithm)) self._file_hashes[path] = ( algorithm, urlsafe_b64decode(hash_sum.encode('ascii'))) def open(self, name_or_info, mode="r", pwd=None): def _update_crc(newdata, eof=None): if eof is None: eof = ef._eof update_crc_orig(newdata) else: # Python 2 update_crc_orig(newdata, eof) running_hash.update(newdata) if eof and running_hash.digest() != expected_hash: raise WheelError("Hash mismatch for file '{}'".format(native(ef_name))) ef = ZipFile.open(self, name_or_info, mode, pwd) ef_name = as_unicode(name_or_info.filename if isinstance(name_or_info, ZipInfo) else name_or_info) if mode == 'r' and not ef_name.endswith('/'): if ef_name not in self._file_hashes: raise WheelError("No hash found for file '{}'".format(native(ef_name))) algorithm, expected_hash = self._file_hashes[ef_name] if expected_hash is not None: # Monkey patch the _update_crc method to also check for the hash from RECORD running_hash = hashlib.new(algorithm) update_crc_orig, ef._update_crc = ef._update_crc, _update_crc return ef def write_files(self, base_dir): logger.info("creating '%s' and adding '%s' to it", self.filename, base_dir) deferred = [] for root, dirnames, filenames in os.walk(base_dir): # Sort the directory names so that `os.walk` will walk them in a # defined order on the next iteration. 
dirnames.sort() for name in sorted(filenames): path = os.path.normpath(os.path.join(root, name)) if os.path.isfile(path): arcname = os.path.relpath(path, base_dir) if arcname == self.record_path: pass elif root.endswith('.dist-info'): deferred.append((path, arcname)) else: self.write(path, arcname) deferred.sort() for path, arcname in deferred: self.write(path, arcname) def write(self, filename, arcname=None, compress_type=None): with open(filename, 'rb') as f: st = os.fstat(f.fileno()) data = f.read() zinfo = ZipInfo(arcname or filename, date_time=get_zipinfo_datetime(st.st_mtime)) zinfo.external_attr = (stat.S_IMODE(st.st_mode) | stat.S_IFMT(st.st_mode)) << 16 zinfo.compress_type = ZIP_DEFLATED self.writestr(zinfo, data, compress_type) def writestr(self, zinfo_or_arcname, bytes, compress_type=None): ZipFile.writestr(self, zinfo_or_arcname, bytes, compress_type) fname = (zinfo_or_arcname.filename if isinstance(zinfo_or_arcname, ZipInfo) else zinfo_or_arcname) logger.info("adding '%s'", fname) if fname != self.record_path: hash_ = self._default_algorithm(bytes) self._file_hashes[fname] = hash_.name, native(urlsafe_b64encode(hash_.digest())) self._file_sizes[fname] = len(bytes) def close(self): # Write RECORD if self.fp is not None and self.mode == 'w' and self._file_hashes: data = StringIO() writer = csv.writer(data, delimiter=',', quotechar='"', lineterminator='\n') writer.writerows(( ( fname, algorithm + "=" + hash_, self._file_sizes[fname] ) for fname, (algorithm, hash_) in self._file_hashes.items() )) writer.writerow((format(self.record_path), "", "")) zinfo = ZipInfo(native(self.record_path), date_time=get_zipinfo_datetime()) zinfo.compress_type = ZIP_DEFLATED zinfo.external_attr = 0o664 << 16 self.writestr(zinfo, as_bytes(data.getvalue())) ZipFile.close(self) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1564430186.0 cypari2-2.1.2/venv/lib/python3.7/site.py0000644000175000017500000007053500000000000021045 0ustar00vincentvincent00000000000000"""Append module search paths for third-party packages to sys.path. **************************************************************** * This module is automatically imported during initialization. * **************************************************************** In earlier versions of Python (up to 1.5a3), scripts or modules that needed to use site-specific modules would place ``import site'' somewhere near the top of their code. Because of the automatic import, this is no longer necessary (but code that does it still works). This will append site-specific paths to the module search path. On Unix, it starts with sys.prefix and sys.exec_prefix (if different) and appends lib/python<version>/site-packages as well as lib/site-python. It also supports the Debian convention of lib/python<version>/dist-packages. On other platforms (mainly Mac and Windows), it uses just sys.prefix (and sys.exec_prefix, if different, but this is unlikely). The resulting directories, if they exist, are appended to sys.path, and also inspected for path configuration files. FOR DEBIAN, this sys.path is augmented with directories in /usr/local. Local addons go into /usr/local/lib/python<version>/site-packages (resp. /usr/local/lib/site-python), Debian addons install into /usr/{lib,share}/python<version>/dist-packages. A path configuration file is a file whose name has the form <package>.pth; its contents are additional directories (one per line) to be added to sys.path.
Non-existing directories (or non-directories) are never added to sys.path; no directory is added to sys.path more than once. Blank lines and lines beginning with '#' are skipped. Lines starting with 'import' are executed. For example, suppose sys.prefix and sys.exec_prefix are set to /usr/local and there is a directory /usr/local/lib/python2.X/site-packages with three subdirectories, foo, bar and spam, and two path configuration files, foo.pth and bar.pth. Assume foo.pth contains the following: # foo package configuration foo bar bletch and bar.pth contains: # bar package configuration bar Then the following directories are added to sys.path, in this order: /usr/local/lib/python2.X/site-packages/bar /usr/local/lib/python2.X/site-packages/foo Note that bletch is omitted because it doesn't exist; bar precedes foo because bar.pth comes alphabetically before foo.pth; and spam is omitted because it is not mentioned in either path configuration file. After these path manipulations, an attempt is made to import a module named sitecustomize, which can perform arbitrary additional site-specific customizations. If this import fails with an ImportError exception, it is silently ignored. """ import os import sys try: import __builtin__ as builtins except ImportError: import builtins try: set except NameError: from sets import Set as set # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] # Enable per user site-packages directory # set it to False to disable the feature or True to force the feature ENABLE_USER_SITE = None # for distutils.commands.install USER_SITE = None USER_BASE = None _is_64bit = (getattr(sys, "maxsize", None) or getattr(sys, "maxint")) > 2 ** 32 _is_pypy = hasattr(sys, "pypy_version_info") _is_jython = sys.platform[:4] == "java" if _is_jython: ModuleType = type(os) def makepath(*paths): dir = os.path.join(*paths) if _is_jython and (dir == "__classpath__" or dir.startswith("__pyclasspath__")): return dir, dir dir = os.path.abspath(dir) return dir, os.path.normcase(dir) def abs__file__(): """Set all module' __file__ attribute to an absolute path""" for m in sys.modules.values(): if (_is_jython and not isinstance(m, ModuleType)) or hasattr(m, "__loader__"): # only modules need the abspath in Jython. and don't mess # with a PEP 302-supplied __file__ continue f = getattr(m, "__file__", None) if f is None: continue m.__file__ = os.path.abspath(f) def removeduppaths(): """ Remove duplicate entries from sys.path along with making them absolute""" # This ensures that the initial path provided by the interpreter contains # only absolute pathnames, even if we're running from the build directory. L = [] known_paths = set() for dir in sys.path: # Filter out duplicate paths (on case-insensitive file systems also # if they only differ in case); turn relative paths into absolute # paths. dir, dircase = makepath(dir) if not dircase in known_paths: L.append(dir) known_paths.add(dircase) sys.path[:] = L return known_paths # XXX This should not be part of site.py, since it is needed even when # using the -S option for Python. See http://www.python.org/sf/586680 def addbuilddir(): """Append ./build/lib. 
# XXX This should not be part of site.py, since it is needed even when
# using the -S option for Python.  See http://www.python.org/sf/586680
def addbuilddir():
    """Append ./build/lib.<platform> in case we're running in the build dir
    (especially for Guido :-)"""
    from distutils.util import get_platform

    s = "build/lib.{}-{:.3}".format(get_platform(), sys.version)
    if hasattr(sys, "gettotalrefcount"):
        s += "-pydebug"
    s = os.path.join(os.path.dirname(sys.path[-1]), s)
    sys.path.append(s)


def _init_pathinfo():
    """Return a set containing all existing directory entries from sys.path"""
    d = set()
    for dir in sys.path:
        try:
            if os.path.isdir(dir):
                dir, dircase = makepath(dir)
                d.add(dircase)
        except TypeError:
            continue
    return d


def addpackage(sitedir, name, known_paths):
    """Add a new path to known_paths by combining sitedir and 'name', or
    execute the line if it starts with 'import'"""
    if known_paths is None:
        known_paths = _init_pathinfo()
        reset = 1
    else:
        reset = 0
    fullname = os.path.join(sitedir, name)
    try:
        f = open(fullname, "r")
    except IOError:
        return
    try:
        for line in f:
            if line.startswith("#"):
                continue
            if line.startswith("import"):
                exec(line)
                continue
            line = line.rstrip()
            dir, dircase = makepath(sitedir, line)
            if dircase not in known_paths and os.path.exists(dir):
                sys.path.append(dir)
                known_paths.add(dircase)
    finally:
        f.close()
    if reset:
        known_paths = None
    return known_paths


def addsitedir(sitedir, known_paths=None):
    """Add 'sitedir' argument to sys.path if missing and handle .pth files in
    'sitedir'"""
    if known_paths is None:
        known_paths = _init_pathinfo()
        reset = 1
    else:
        reset = 0
    sitedir, sitedircase = makepath(sitedir)
    if sitedircase not in known_paths:
        sys.path.append(sitedir)  # Add path component
    try:
        names = os.listdir(sitedir)
    except os.error:
        return
    names.sort()
    for name in names:
        if name.endswith(os.extsep + "pth"):
            addpackage(sitedir, name, known_paths)
    if reset:
        known_paths = None
    return known_paths
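# [Editor's sketch -- not part of the original archive] What the
# addpackage()/addsitedir() pair above does with a .pth file: directory
# lines are appended to sys.path when they exist, and lines starting with
# 'import' are executed verbatim.  The temp-dir layout is invented for
# the demo; it assumes addsitedir from this module is in scope.
import os
import sys
import tempfile

demo = tempfile.mkdtemp()
os.mkdir(os.path.join(demo, "extras"))
with open(os.path.join(demo, "demo.pth"), "w") as fh:
    fh.write("# comments are skipped\n")
    fh.write("extras\n")                               # added if it exists
    fh.write("import sys; sys.demo_pth_ran = True\n")  # executed

addsitedir(demo)
print(os.path.join(demo, "extras") in sys.path)   # True
print(getattr(sys, "demo_pth_ran", False))        # True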
def addsitepackages(known_paths, sys_prefix=sys.prefix, exec_prefix=sys.exec_prefix):
    """Add site-packages (and possibly site-python) to sys.path"""
    prefixes = [os.path.join(sys_prefix, "local"), sys_prefix]
    if exec_prefix != sys_prefix:
        prefixes.append(os.path.join(exec_prefix, "local"))

    for prefix in prefixes:
        if prefix:
            if sys.platform in ("os2emx", "riscos") or _is_jython:
                sitedirs = [os.path.join(prefix, "Lib", "site-packages")]
            elif _is_pypy:
                sitedirs = [os.path.join(prefix, "site-packages")]
            elif sys.platform == "darwin" and prefix == sys_prefix:
                if prefix.startswith("/System/Library/Frameworks/"):  # Apple's Python
                    sitedirs = [
                        os.path.join("/Library/Python", sys.version[:3], "site-packages"),
                        os.path.join(prefix, "Extras", "lib", "python"),
                    ]
                else:
                    # any other Python distros on OSX work this way
                    sitedirs = [os.path.join(prefix, "lib", "python" + sys.version[:3], "site-packages")]
            elif os.sep == "/":
                sitedirs = [
                    os.path.join(prefix, "lib", "python" + sys.version[:3], "site-packages"),
                    os.path.join(prefix, "lib", "site-python"),
                    os.path.join(prefix, "python" + sys.version[:3], "lib-dynload"),
                ]
                lib64_dir = os.path.join(prefix, "lib64", "python" + sys.version[:3], "site-packages")
                if os.path.exists(lib64_dir) and os.path.realpath(lib64_dir) not in [
                    os.path.realpath(p) for p in sitedirs
                ]:
                    if _is_64bit:
                        sitedirs.insert(0, lib64_dir)
                    else:
                        sitedirs.append(lib64_dir)
                try:
                    # sys.getobjects only available in --with-pydebug build
                    sys.getobjects
                    sitedirs.insert(0, os.path.join(sitedirs[0], "debug"))
                except AttributeError:
                    pass
                # Debian-specific dist-packages directories:
                sitedirs.append(os.path.join(prefix, "local/lib", "python" + sys.version[:3], "dist-packages"))
                if sys.version[0] == "2":
                    sitedirs.append(os.path.join(prefix, "lib", "python" + sys.version[:3], "dist-packages"))
                else:
                    sitedirs.append(os.path.join(prefix, "lib", "python" + sys.version[0], "dist-packages"))
                sitedirs.append(os.path.join(prefix, "lib", "dist-python"))
            else:
                sitedirs = [prefix, os.path.join(prefix, "lib", "site-packages")]
            if sys.platform == "darwin":
                # for framework builds *only* we add the standard Apple
                # locations. Currently only per-user, but /Library and
                # /Network/Library could be added too
                if "Python.framework" in prefix:
                    home = os.environ.get("HOME")
                    if home:
                        sitedirs.append(
                            os.path.join(home, "Library", "Python", sys.version[:3], "site-packages")
                        )
            for sitedir in sitedirs:
                if os.path.isdir(sitedir):
                    addsitedir(sitedir, known_paths)
    return None


def check_enableusersite():
    """Check if user site directory is safe for inclusion

    The function tests for the command line flag (including environment var),
    process uid/gid equal to effective uid/gid.

    None: Disabled for security reasons
    False: Disabled by user (command line option)
    True: Safe and enabled
    """
    if hasattr(sys, "flags") and getattr(sys.flags, "no_user_site", False):
        return False

    if hasattr(os, "getuid") and hasattr(os, "geteuid"):
        # check process uid == effective uid
        if os.geteuid() != os.getuid():
            return None
    if hasattr(os, "getgid") and hasattr(os, "getegid"):
        # check process gid == effective gid
        if os.getegid() != os.getgid():
            return None

    return True


def addusersitepackages(known_paths):
    """Add a per user site-package to sys.path

    Each user has its own python directory with site-packages in the
    home directory.

    USER_BASE is the root directory for all Python versions

    USER_SITE is the user specific site-packages directory

    USER_SITE/.. can be used for data.
    """
    global USER_BASE, USER_SITE, ENABLE_USER_SITE
    env_base = os.environ.get("PYTHONUSERBASE", None)

    def joinuser(*args):
        return os.path.expanduser(os.path.join(*args))

    # if sys.platform in ('os2emx', 'riscos'):
    #     # Don't know what to put here
    #     USER_BASE = ''
    #     USER_SITE = ''
    if os.name == "nt":
        base = os.environ.get("APPDATA") or "~"
        if env_base:
            USER_BASE = env_base
        else:
            USER_BASE = joinuser(base, "Python")
        USER_SITE = os.path.join(USER_BASE, "Python" + sys.version[0] + sys.version[2], "site-packages")
    else:
        if env_base:
            USER_BASE = env_base
        else:
            USER_BASE = joinuser("~", ".local")
        USER_SITE = os.path.join(USER_BASE, "lib", "python" + sys.version[:3], "site-packages")

    if ENABLE_USER_SITE and os.path.isdir(USER_SITE):
        addsitedir(USER_SITE, known_paths)
    if ENABLE_USER_SITE:
        for dist_libdir in ("lib", "local/lib"):
            user_site = os.path.join(USER_BASE, dist_libdir, "python" + sys.version[:3], "dist-packages")
            if os.path.isdir(user_site):
                addsitedir(user_site, known_paths)
    return known_paths


def setBEGINLIBPATH():
    """The OS/2 EMX port has optional extension modules that do double duty
    as DLLs (and must use the .DLL file extension) for other extensions.
    The library search path needs to be amended so these will be found
    during module import.  Use BEGINLIBPATH so that these are at the start
    of the library search path.
    """
    dllpath = os.path.join(sys.prefix, "Lib", "lib-dynload")
    libpath = os.environ["BEGINLIBPATH"].split(";")
    if libpath[-1]:
        libpath.append(dllpath)
    else:
        libpath[-1] = dllpath
    os.environ["BEGINLIBPATH"] = ";".join(libpath)
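# [Editor's sketch -- not part of the original archive] The POSIX per-user
# layout addusersitepackages() computes, with the same PYTHONUSERBASE
# override.  sys.version[:3] matches the Python 3.7 vintage this venv
# targets (it would misbehave on 3.10+).
import os
import sys

def user_site_dir():
    base = os.environ.get("PYTHONUSERBASE") or os.path.expanduser(
        os.path.join("~", ".local"))
    return os.path.join(base, "lib", "python" + sys.version[:3],
                        "site-packages")

# e.g. ~/.local/lib/python3.7/site-packages on a stock CPython 3.7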
EOF)" class Quitter(object): def __init__(self, name): self.name = name def __repr__(self): return "Use {}() or {} to exit".format(self.name, eof) def __call__(self, code=None): # Shells like IDLE catch the SystemExit, but listen when their # stdin wrapper is closed. try: sys.stdin.close() except: pass raise SystemExit(code) builtins.quit = Quitter("quit") builtins.exit = Quitter("exit") class _Printer(object): """interactive prompt objects for printing the license text, a list of contributors and the copyright notice.""" MAXLINES = 23 def __init__(self, name, data, files=(), dirs=()): self.__name = name self.__data = data self.__files = files self.__dirs = dirs self.__lines = None def __setup(self): if self.__lines: return data = None for dir in self.__dirs: for filename in self.__files: filename = os.path.join(dir, filename) try: fp = open(filename, "r") data = fp.read() fp.close() break except IOError: pass if data: break if not data: data = self.__data self.__lines = data.split("\n") self.__linecnt = len(self.__lines) def __repr__(self): self.__setup() if len(self.__lines) <= self.MAXLINES: return "\n".join(self.__lines) else: return "Type %s() to see the full %s text" % ((self.__name,) * 2) def __call__(self): self.__setup() prompt = "Hit Return for more, or q (and Return) to quit: " lineno = 0 while 1: try: for i in range(lineno, lineno + self.MAXLINES): print(self.__lines[i]) except IndexError: break else: lineno += self.MAXLINES key = None while key is None: try: key = raw_input(prompt) except NameError: key = input(prompt) if key not in ("", "q"): key = None if key == "q": break def setcopyright(): """Set 'copyright' and 'credits' in __builtin__""" builtins.copyright = _Printer("copyright", sys.copyright) if _is_jython: builtins.credits = _Printer("credits", "Jython is maintained by the Jython developers (www.jython.org).") elif _is_pypy: builtins.credits = _Printer("credits", "PyPy is maintained by the PyPy developers: http://pypy.org/") else: builtins.credits = _Printer( "credits", """\ Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands for supporting Python development. See www.python.org for more information.""", ) here = os.path.dirname(os.__file__) builtins.license = _Printer( "license", "See http://www.python.org/%.3s/license.html" % sys.version, ["LICENSE.txt", "LICENSE"], [os.path.join(here, os.pardir), here, os.curdir], ) class _Helper(object): """Define the built-in 'help'. This is a wrapper around pydoc.help (with a twist). """ def __repr__(self): return "Type help() for interactive help, " "or help(object) for help about object." def __call__(self, *args, **kwds): import pydoc return pydoc.help(*args, **kwds) def sethelper(): builtins.help = _Helper() def aliasmbcs(): """On Windows, some default encodings are not provided by Python, while they are always available as "mbcs" in each locale. Make them usable by aliasing to "mbcs" in such a case.""" if sys.platform == "win32": import locale, codecs enc = locale.getdefaultlocale()[1] if enc.startswith("cp"): # "cp***" ? try: codecs.lookup(enc) except LookupError: import encodings encodings._cache[enc] = encodings._unknown encodings.aliases.aliases[enc] = "mbcs" def setencoding(): """Set the string encoding used by the Unicode implementation. The default is 'ascii', but if you're willing to experiment, you can change this.""" encoding = "ascii" # Default value set by _PyUnicode_Init() if 0: # Enable to support locale aware default string encodings. 
def setencoding():
    """Set the string encoding used by the Unicode implementation.  The
    default is 'ascii', but if you're willing to experiment, you can
    change this."""
    encoding = "ascii"  # Default value set by _PyUnicode_Init()
    if 0:
        # Enable to support locale aware default string encodings.
        import locale

        loc = locale.getdefaultlocale()
        if loc[1]:
            encoding = loc[1]
    if 0:
        # Enable to switch off string to Unicode coercion and implicit
        # Unicode to string conversion.
        encoding = "undefined"
    if encoding != "ascii":
        # On Non-Unicode builds this will raise an AttributeError...
        sys.setdefaultencoding(encoding)  # Needs Python Unicode build !


def execsitecustomize():
    """Run custom site specific code, if available."""
    try:
        import sitecustomize
    except ImportError:
        pass


def virtual_install_main_packages():
    f = open(os.path.join(os.path.dirname(__file__), "orig-prefix.txt"))
    sys.real_prefix = f.read().strip()
    f.close()
    pos = 2
    hardcoded_relative_dirs = []
    if sys.path[0] == "":
        pos += 1
    if _is_jython:
        paths = [os.path.join(sys.real_prefix, "Lib")]
    elif _is_pypy:
        if sys.version_info > (3, 2):
            cpyver = "%d" % sys.version_info[0]
        elif sys.pypy_version_info >= (1, 5):
            cpyver = "%d.%d" % sys.version_info[:2]
        else:
            cpyver = "%d.%d.%d" % sys.version_info[:3]
        paths = [
            os.path.join(sys.real_prefix, "lib_pypy"),
            os.path.join(sys.real_prefix, "lib-python", cpyver),
        ]
        if sys.pypy_version_info < (1, 9):
            paths.insert(1, os.path.join(sys.real_prefix, "lib-python", "modified-%s" % cpyver))
        hardcoded_relative_dirs = paths[:]  # for the special 'darwin' case below
        #
        # This is hardcoded in the Python executable, but relative to sys.prefix:
        for path in paths[:]:
            plat_path = os.path.join(path, "plat-%s" % sys.platform)
            if os.path.exists(plat_path):
                paths.append(plat_path)
    elif sys.platform == "win32":
        paths = [os.path.join(sys.real_prefix, "Lib"), os.path.join(sys.real_prefix, "DLLs")]
    else:
        paths = [os.path.join(sys.real_prefix, "lib", "python" + sys.version[:3])]
        hardcoded_relative_dirs = paths[:]  # for the special 'darwin' case below
        lib64_path = os.path.join(sys.real_prefix, "lib64", "python" + sys.version[:3])
        if os.path.exists(lib64_path):
            if _is_64bit:
                paths.insert(0, lib64_path)
            else:
                paths.append(lib64_path)
        # This is hardcoded in the Python executable, but relative to
        # sys.prefix.  Debian change: we need to add the multiarch triplet
        # here, which is where the real stuff lives.  As per PEP 421, in
        # Python 3.3+, this lives in sys.implementation, while in Python 2.7
        # it lives in sys.
        try:
            arch = getattr(sys, "implementation", sys)._multiarch
        except AttributeError:
            # This is a non-multiarch aware Python.  Fallback to the old way.
            arch = sys.platform
        plat_path = os.path.join(sys.real_prefix, "lib", "python" + sys.version[:3], "plat-%s" % arch)
        if os.path.exists(plat_path):
            paths.append(plat_path)
    # This is hardcoded in the Python executable, but
    # relative to sys.prefix, so we have to fix up:
    for path in list(paths):
        tk_dir = os.path.join(path, "lib-tk")
        if os.path.exists(tk_dir):
            paths.append(tk_dir)
    # These are hardcoded in the Apple's Python executable,
    # but relative to sys.prefix, so we have to fix them up:
    if sys.platform == "darwin":
        hardcoded_paths = [
            os.path.join(relative_dir, module)
            for relative_dir in hardcoded_relative_dirs
            for module in ("plat-darwin", "plat-mac", "plat-mac/lib-scriptpackages")
        ]
        for path in hardcoded_paths:
            if os.path.exists(path):
                paths.append(path)
    sys.path.extend(paths)
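# [Editor's sketch -- not part of the original archive] The virtualenv
# detection step above, isolated: a classic (pre-PEP 405) virtualenv drops
# an orig-prefix.txt next to its site.py whose single line is the real
# interpreter prefix.
import os

def read_real_prefix(site_dir):
    with open(os.path.join(site_dir, "orig-prefix.txt")) as fh:
        # raises IOError/OSError when not inside such a virtualenv
        return fh.read().strip()

# read_real_prefix(os.path.dirname(__file__))  -> e.g. '/usr'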
""" egginsert = getattr(sys, "__egginsert", 0) for i, path in enumerate(sys.path): if i > egginsert and path.startswith(sys.prefix): egginsert = i sys.__egginsert = egginsert + 1 def virtual_addsitepackages(known_paths): force_global_eggs_after_local_site_packages() return addsitepackages(known_paths, sys_prefix=sys.real_prefix) def fixclasspath(): """Adjust the special classpath sys.path entries for Jython. These entries should follow the base virtualenv lib directories. """ paths = [] classpaths = [] for path in sys.path: if path == "__classpath__" or path.startswith("__pyclasspath__"): classpaths.append(path) else: paths.append(path) sys.path = paths sys.path.extend(classpaths) def execusercustomize(): """Run custom user specific code, if available.""" try: import usercustomize except ImportError: pass def enablerlcompleter(): """Enable default readline configuration on interactive prompts, by registering a sys.__interactivehook__. If the readline module can be imported, the hook will set the Tab key as completion key and register ~/.python_history as history file. This can be overridden in the sitecustomize or usercustomize module, or in a PYTHONSTARTUP file. """ def register_readline(): import atexit try: import readline import rlcompleter except ImportError: return # Reading the initialization (config) file may not be enough to set a # completion key, so we set one first and then read the file. readline_doc = getattr(readline, "__doc__", "") if readline_doc is not None and "libedit" in readline_doc: readline.parse_and_bind("bind ^I rl_complete") else: readline.parse_and_bind("tab: complete") try: readline.read_init_file() except OSError: # An OSError here could have many causes, but the most likely one # is that there's no .inputrc file (or .editrc file in the case of # Mac OS X + libedit) in the expected location. In that case, we # want to ignore the exception. pass if readline.get_current_history_length() == 0: # If no history was loaded, default to .python_history. 
def enablerlcompleter():
    """Enable default readline configuration on interactive prompts, by
    registering a sys.__interactivehook__.

    If the readline module can be imported, the hook will set the Tab key
    as completion key and register ~/.python_history as history file.
    This can be overridden in the sitecustomize or usercustomize module,
    or in a PYTHONSTARTUP file.
    """

    def register_readline():
        import atexit

        try:
            import readline
            import rlcompleter
        except ImportError:
            return

        # Reading the initialization (config) file may not be enough to set a
        # completion key, so we set one first and then read the file.
        readline_doc = getattr(readline, "__doc__", "")
        if readline_doc is not None and "libedit" in readline_doc:
            readline.parse_and_bind("bind ^I rl_complete")
        else:
            readline.parse_and_bind("tab: complete")

        try:
            readline.read_init_file()
        except OSError:
            # An OSError here could have many causes, but the most likely one
            # is that there's no .inputrc file (or .editrc file in the case of
            # Mac OS X + libedit) in the expected location.  In that case, we
            # want to ignore the exception.
            pass

        if readline.get_current_history_length() == 0:
            # If no history was loaded, default to .python_history.
            # The guard is necessary to avoid doubling history size at
            # each interpreter exit when readline was already configured
            # through a PYTHONSTARTUP hook, see:
            # http://bugs.python.org/issue5845#msg198636
            history = os.path.join(os.path.expanduser("~"), ".python_history")
            try:
                readline.read_history_file(history)
            except OSError:
                pass

            def write_history():
                try:
                    readline.write_history_file(history)
                except (FileNotFoundError, PermissionError):
                    # home directory does not exist or is not writable
                    # https://bugs.python.org/issue19891
                    pass

            atexit.register(write_history)

    sys.__interactivehook__ = register_readline


def main():
    global ENABLE_USER_SITE
    virtual_install_main_packages()
    abs__file__()
    paths_in_sys = removeduppaths()
    if os.name == "posix" and sys.path and os.path.basename(sys.path[-1]) == "Modules":
        addbuilddir()
    if _is_jython:
        fixclasspath()
    GLOBAL_SITE_PACKAGES = not os.path.exists(
        os.path.join(os.path.dirname(__file__), "no-global-site-packages.txt")
    )
    if not GLOBAL_SITE_PACKAGES:
        ENABLE_USER_SITE = False
    if ENABLE_USER_SITE is None:
        ENABLE_USER_SITE = check_enableusersite()
    paths_in_sys = addsitepackages(paths_in_sys)
    paths_in_sys = addusersitepackages(paths_in_sys)
    if GLOBAL_SITE_PACKAGES:
        paths_in_sys = virtual_addsitepackages(paths_in_sys)
    if sys.platform == "os2emx":
        setBEGINLIBPATH()
    setquit()
    setcopyright()
    sethelper()
    if sys.version_info[0] == 3:
        enablerlcompleter()
    aliasmbcs()
    setencoding()
    execsitecustomize()
    if ENABLE_USER_SITE:
        execusercustomize()
    # Remove sys.setdefaultencoding() so that users cannot change the
    # encoding after initialization.  The test for presence is needed when
    # this module is run as a script, because this code is executed twice.
    if hasattr(sys, "setdefaultencoding"):
        del sys.setdefaultencoding


main()
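# [Editor's sketch -- not part of the original archive] The mechanism
# enablerlcompleter() leans on: CPython calls sys.__interactivehook__ once
# when an interactive prompt starts, so any callable assigned here runs at
# REPL startup.  This stub only demonstrates the hook itself.
import sys

def _demo_hook():
    print("interactive session starting")  # fires once, before the prompt

sys.__interactivehook__ = _demo_hook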
def _script():
    help = """\
    %s [--user-base] [--user-site]

    Without arguments print some useful information
    With arguments print the value of USER_BASE and/or USER_SITE separated
    by '%s'.

    Exit codes with --user-base or --user-site:
      0 - user site directory is enabled
      1 - user site directory is disabled by user
      2 - user site directory is disabled by super user
          or for security reasons
     >2 - unknown error
    """
    args = sys.argv[1:]
    if not args:
        print("sys.path = [")
        for dir in sys.path:
            print("    {!r},".format(dir))
        print("]")

        def exists(path):
            if os.path.isdir(path):
                return "exists"
            else:
                return "doesn't exist"

        print("USER_BASE: {!r} ({})".format(USER_BASE, exists(USER_BASE)))
        print("USER_SITE: {!r} ({})".format(USER_SITE, exists(USER_SITE)))
        print("ENABLE_USER_SITE: %r" % ENABLE_USER_SITE)
        sys.exit(0)

    buffer = []
    if "--user-base" in args:
        buffer.append(USER_BASE)
    if "--user-site" in args:
        buffer.append(USER_SITE)

    if buffer:
        print(os.pathsep.join(buffer))
        if ENABLE_USER_SITE:
            sys.exit(0)
        elif ENABLE_USER_SITE is False:
            sys.exit(1)
        elif ENABLE_USER_SITE is None:
            sys.exit(2)
        else:
            sys.exit(3)
    else:
        import textwrap

        print(textwrap.dedent(help % (sys.argv[0], os.pathsep)))
        sys.exit(10)


if __name__ == "__main__":
    _script()
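# [Editor's sketch -- not part of the original archive] One way a caller
# could consume _script()'s exit-code contract.  Assumes this site.py is
# invoked directly from inside the virtualenv (so orig-prefix.txt exists
# and main() succeeds before _script() runs).
import subprocess
import sys

proc = subprocess.run([sys.executable, "site.py", "--user-site"])
meaning = {0: "user site enabled", 1: "disabled by user",
           2: "disabled for security reasons"}
print(meaning.get(proc.returncode, "unknown error"))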
# ===== File: cypari2-2.1.2/venv/share/cysignals/cysignals-CSI-helper.py =====

# Run by ``cysignals-CSI`` in gdb's Python interpreter.

# *****************************************************************************
#       Copyright (C) 2013 Volker Braun
#
# cysignals is free software: you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# cysignals is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with cysignals.  If not, see <http://www.gnu.org/licenses/>.
# *****************************************************************************

import os
import sys
import glob

import gdb
from Cython.Debugger import libpython, libcython
from Cython.Debugger.libcython import cy, CythonCommand

if not color:  # noqa
    # disable escape-sequence coloring
    libcython.pygments = None


def cython_debug_files():
    """
    Cython extra debug information files
    """
    # Search all subdirectories of sys.path directories for a
    # "cython_debug" directory.  Note that sys_path is a variable set by
    # cysignals-CSI.  It may differ from sys.path if GDB is run with a
    # different Python interpreter.
    files = []
    for path in sys_path:  # noqa
        pattern = os.path.join(path, '*', 'cython_debug', 'cython_debug_info_*')
        files.extend(glob.glob(pattern))
    return files


print('\n\n')
print('Cython backtrace')
print('----------------')

# The Python interpreter in GDB does not do automatic backtraces for you
try:
    for f in cython_debug_files():
        cy.import_.invoke(f, None)

    class Backtrace(CythonCommand):
        name = 'cy fullbt'
        alias = 'cy full_backtrace'
        command_class = gdb.COMMAND_STACK
        completer_class = gdb.COMPLETE_NONE
        cy = cy

        def print_stackframe(self, frame, index, is_c=False):
            if not is_c and self.is_python_function(frame):
                pyframe = libpython.Frame(frame).get_pyop()
                if pyframe is None or pyframe.is_optimized_out():
                    # print this python function as a C function
                    return self.print_stackframe(frame, index, is_c=True)
                func_name = pyframe.co_name
                func_cname = 'PyEval_EvalFrameEx'
                func_args = []
            elif self.is_cython_function(frame):
                cyfunc = self.get_cython_function(frame)
                func_name = cyfunc.name
                func_cname = cyfunc.cname
                func_args = []
            else:
                func_name = frame.name()
                func_cname = func_name
                func_args = []

            try:
                gdb_value = gdb.parse_and_eval(func_cname)
            except (RuntimeError, TypeError):
                func_address = 0
            else:
                func_address = int(str(gdb_value.address).split()[0], 0)

            out = '#%-2d 0x%016x' % (index, func_address)
            try:
                a = ', '.join('%s=%s' % (name, val) for name, val in func_args)
                out += ' in %s (%s)' % (func_name or "??", a)
                source_desc, lineno = self.get_source_desc(frame)
                if source_desc.filename is not None:
                    out += ' at %s:%s' % (source_desc.filename, lineno)
            except Exception:
                return
            finally:
                print(out)

            try:
                source = source_desc.get_source(lineno - 5, lineno + 5,
                                                mark_line=lineno, lex_entire=True)
                print(source)
            except gdb.GdbError:
                pass

        def invoke(self, args, from_tty):
            self.newest_first_order(args, from_tty)

        def newest_first_order(self, args, from_tty):
            frame = gdb.newest_frame()
            index = 0
            while frame:
                frame.select()
                self.print_stackframe(frame, index)
                index += 1
                frame = frame.older()

        def newest_last_order(self, args, from_tty):
            frame = gdb.newest_frame()
            n_frames = 0
            while frame.older():
                frame = frame.older()
                n_frames += 1
            index = 0
            while frame:
                frame.select()
                self.print_stackframe(frame, index)
                index += 1
                frame = frame.newer()

    trace = Backtrace.register()
    trace.invoke(None, None)
except Exception:
    import traceback

    traceback.print_exc()
    sys.stderr.flush()
    print("")
    sys.stdout.flush()
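# [Editor's sketch -- not part of the original archive] The search that
# cython_debug_files() performs, runnable outside gdb by supplying an
# explicit path list in place of the sys_path variable that cysignals-CSI
# injects.
import glob
import os

def find_debug_info(search_paths):
    hits = []
    for path in search_paths:
        hits.extend(glob.glob(os.path.join(
            path, '*', 'cython_debug', 'cython_debug_info_*')))
    return hits

# find_debug_info(["/usr/lib/python3.7/site-packages"])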