upstream-ontologist_0.1.24.orig/.flake8

[flake8]
extend-ignore = E203, E266, E501, W293, W291
max-line-length = 88
max-complexity = 18
select = B,C,E,F,W,T4,B9

upstream-ontologist_0.1.24.orig/.github/

upstream-ontologist_0.1.24.orig/.gitignore

*~
__pycache__
.mypy_cache
MANIFEST
build
dist
upstream_ontologist.egg-info

upstream-ontologist_0.1.24.orig/AUTHORS

Jelmer Vernooij

upstream-ontologist_0.1.24.orig/CODE_OF_CONDUCT.md

# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at team@dulwich.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

upstream-ontologist_0.1.24.orig/LICENSE

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

  When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

  We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

  Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0.
This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. 
Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. 
For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. 
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.

upstream-ontologist_0.1.24.orig/MANIFEST.in

include CODE_OF_CONDUCT.md
include README.md
include SECURITY.md
include AUTHORS

upstream-ontologist_0.1.24.orig/Makefile

check::
	python3 setup.py test

upstream-ontologist_0.1.24.orig/PKG-INFO

Metadata-Version: 2.1
Name: upstream-ontologist
Version: 0.1.24
Summary: tracking of upstream project metadata
Home-page: https://github.com/jelmer/upstream-ontologist
Author: Jelmer Vernooij
Author-email: jelmer@jelmer.uk
Maintainer: Jelmer Vernooij
Maintainer-email: jelmer@jelmer.uk
License: UNKNOWN
Project-URL: Repository, https://github.com/jelmer/upstream-ontologist.git
Platform: UNKNOWN
Provides-Extra: cargo
Provides-Extra: readme
Provides-Extra: setup.cfg
License-File: LICENSE
License-File: AUTHORS

UNKNOWN

upstream-ontologist_0.1.24.orig/README.md

Upstream Ontologist
===================

The upstream ontologist provides a common interface for finding metadata about upstream software projects. It gathers information from whatever sources are available, prioritizes data in which it has higher confidence, and reports the confidence for each piece of metadata.

The ontologist originated in Debian and the currently reported metadata fields are loosely based on [DEP-12](https://dep-team.pages.debian.net/deps/dep12), but it is meant to be distribution-agnostic.

Provided Fields
---------------

Standard fields:

* Homepage: homepage URL
* Name: human name of the upstream project
* Contact: contact address of some sort of the upstream (e-mail, mailing list URL)
* Repository: VCS URL
* Repository-Browse: Web URL for viewing the VCS
* Bug-Database: Bug database URL (for web viewing, generally)
* Bug-Submit: URL to use to submit new bugs (either on the web or an e-mail address)
* Screenshots: List of URLs with screenshots
* Archive: Archive used - e.g. SourceForge
* Security-Contact: e-mail or URL with instructions for reporting security issues
* Documentation: Link to documentation on the web

Extensions for upstream-ontologist, not defined in DEP-12:

* X-SourceForge-Project: sourceforge project name
* X-Wiki: Wiki URL
* X-Summary: one-line description of the project
* X-Description: longer description of the project
* X-License: Single line license (e.g. "GPL 2.0")
* X-Copyright: List of copyright holders
* X-Version: Current upstream version
* X-Security-MD: URL to markdown file with security policy
* X-Authors: List of people who contributed to the project
* X-Maintainer: The maintainer of the project

Supported Data Sources
----------------------

At the moment, the ontologist can read metadata from the following upstream data sources:

* Python package metadata (PKG-INFO, setup.py, setup.cfg, pyproject.toml)
* [package.json](https://docs.npmjs.com/cli/v7/configuring-npm/package-json)
* [composer.json](https://getcomposer.org/doc/04-schema.md)
* [package.xml](https://pear.php.net/manual/en/guide.developers.package2.dependencies.php)
* Perl package metadata (dist.ini, META.json, META.yml, Makefile.PL)
* [Perl POD files](https://perldoc.perl.org/perlpod)
* GNU configure files
* [R DESCRIPTION files](https://r-pkgs.org/description.html)
* [Rust Cargo.toml](https://doc.rust-lang.org/cargo/reference/manifest.html)
* [Maven pom.xml](https://maven.apache.org/pom.html)
* [metainfo.xml](https://www.freedesktop.org/software/appstream/docs/chap-Metadata.html)
* [.git/config](https://git-scm.com/docs/git-config)
* SECURITY.md
* [DOAP](https://github.com/ewilderj/doap)
* [Haskell cabal files](https://cabal.readthedocs.io/en/3.4/cabal-package.html)
* [go.mod](https://golang.org/doc/modules/gomod-ref)
* [ruby gemspec files](https://guides.rubygems.org/specification-reference/)
* [nuspec files](https://docs.microsoft.com/en-us/nuget/reference/nuspec)
* [OPAM files](https://opam.ocaml.org/doc/Manual.html#Package-definitions)
* Debian packaging metadata (debian/watch, debian/control, debian/rules, debian/get-orig-source.sh, debian/copyright, debian/patches)

It will also scan README and INSTALL for possible upstream repository URLs (and will attempt to verify that those match the local repository).

In addition to local files, it can also consult external directories using their APIs:

* GitHub
* SourceForge
* repology
* Launchpad
* PECL
* AUR

Example Usage
-------------

The easiest way to use the upstream ontologist is by invoking the ``guess-upstream-metadata`` command in a software project:

```console
$ guess-upstream-metadata ~/src/dulwich
X-Security-MD: https://github.com/dulwich/dulwich/tree/HEAD/SECURITY.md
Name: dulwich
X-Version: 0.20.15
Bug-Database: https://github.com/dulwich/dulwich/issues
Repository: https://www.dulwich.io/code/
X-Summary: Python Git Library
Bug-Submit: https://github.com/dulwich/dulwich/issues/new
```

Alternatively, there is a Python API; a brief sketch follows the security policy below.
upstream-ontologist_0.1.24.orig/SECURITY.md

# Security Policy

## Supported Versions

upstream-ontologist is still under heavy development. Only the latest version is supported with security fixes.

## Reporting a Vulnerability

Please report security issues by e-mail to jelmer@jelmer.uk, ideally PGP encrypted to the key at https://jelmer.uk/D729A457.asc
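A minimal sketch of the Python API mentioned at the end of README.md above, modelled on how the bundled CLI (`upstream_ontologist/__main__.py`, later in this archive) invokes it; the argument comments and the use of the returned mapping are illustrative assumptions based on that code, not documented API guarantees:

```python
from upstream_ontologist.guess import guess_upstream_metadata

# Mirrors the __main__.py invocation: path, trust (whether to run code
# from the package), net_access (whether to probe external services).
metadata = guess_upstream_metadata(
    '/path/to/project', False, True,
    consult_external_directory=False, check=False)

# __main__.py dumps the result as YAML, so it behaves as a mapping of
# field names to values:
print(metadata.get('Repository'))
```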
upstream-ontologist_0.1.24.orig/docs/

upstream-ontologist_0.1.24.orig/man/

upstream-ontologist_0.1.24.orig/releaser.conf

name: "upstream-ontologist"
timeout_days: 5
tag_name: "v$VERSION"
verify_command: "python3 setup.py test"
update_version {
  path: "setup.py"
  match: "^ version=\"(.*)\",$"
  new_line: " version=\"$VERSION\","
}
update_version {
  path: "upstream_ontologist/__init__.py"
  match: "^version_string = \"(.*)\""
  new_line: "version_string = \"$VERSION\""
}
update_manpages: "man/*.1"

upstream-ontologist_0.1.24.orig/setup.cfg

[mypy]
ignore_missing_imports = True

[egg_info]
tag_build =
tag_date = 0

upstream-ontologist_0.1.24.orig/setup.py

#!/usr/bin/python3
from setuptools import setup

setup(
    name="upstream-ontologist",
    packages=[
        "upstream_ontologist",
        "upstream_ontologist.debian",
        "upstream_ontologist.tests",
    ],
    package_data={
        'upstream_ontologist.tests': ['readme_data/*/*'],
    },
    version="0.1.24",
    author="Jelmer Vernooij",
    author_email="jelmer@jelmer.uk",
    maintainer="Jelmer Vernooij",
    maintainer_email="jelmer@jelmer.uk",
    url="https://github.com/jelmer/upstream-ontologist",
    description="tracking of upstream project metadata",
    project_urls={
        "Repository": "https://github.com/jelmer/upstream-ontologist.git",
    },
    entry_points={
        'console_scripts': [
            ('guess-upstream-metadata='
             'upstream_ontologist.__main__:main'),
            ('autodoap='
             'upstream_ontologist.doap:main'),
        ],
    },
    install_requires=['python_debian', 'debmutate'],
    extras_require={
        'cargo': ['tomlkit'],
        'readme': ['docutils', 'lxml', 'bs4', 'markdown'],
        'setup.cfg': ['setuptools'],
    },
    tests_require=['breezy'],
    test_suite="upstream_ontologist.tests.test_suite",
    data_files=[
        ('share/man/man1', ['man/guess-upstream-metadata.1']),
    ],
)

upstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/

upstream-ontologist_0.1.24.orig/upstream_ontologist/

upstream-ontologist_0.1.24.orig/.github/workflows/

upstream-ontologist_0.1.24.orig/.github/workflows/pythonpackage.yml

---
name: Python package

on:
  push:
  pull_request:
  schedule:
    - cron: '0 6 * * *'  # Daily 6AM UTC build

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        python-version: [3.7, 3.8, 3.9]
      fail-fast: false
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies (Linux)
        run: |
          sudo apt install libxml2-dev libxslt1-dev
        if: "matrix.os == 'ubuntu-latest'"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip flake8 cython
          python -m pip install -e '.[readme,cargo]'
          python setup.py develop
      - name: Install breezy
        run: |
          python -m pip install breezy
        if: "matrix.os != 'windows-latest'"
      - name: Style checks
        run: |
          python -m flake8
      - name: Typing checks
        run: |
          pip install -U mypy types-docutils types-Markdown types-toml
          python -m mypy upstream_ontologist/
      - name: Test suite run
        run: |
          python -m unittest upstream_ontologist.tests.test_suite
        env:
          PYTHONHASHSEED: random

upstream-ontologist_0.1.24.orig/docs/vcs.md

Version control URLs are reported as two-tuples:

* vcs family
* URL

TODO(jelmer): subpath handling
TODO(jelmer): branches

upstream-ontologist_0.1.24.orig/man/autodoap.1

.TH AUTODOAP 1 'December 2021' 'autodoap 0.1.24' 'User Commands'
.SH NAME
autodoap \- automatically write DOAP files for upstream projects
.SH DESCRIPTION
autodoap [\-h] [\-\-trust] [\-\-disable\-net\-access] [\-\-check] [\-\-consult\-external\-directory] [\-\-version] [path]
This tool tries to guess upstream metadata (Homepage, Contact, VCS Repository) information for an upstream project. It does this by parsing various files in the package, and possibly calling out to external services (unless --disable-net-access is specified). Data is written to standard out in DOAP.
.SS "positional arguments:"
.IP path
.SS "optional arguments:"
.TP
\fB\-h\fR, \fB\-\-help\fR
show this help message and exit
.TP
\fB\-\-trust\fR
Whether to allow running code from the package.
.TP
\fB\-\-disable\-net\-access\fR
Do not probe external services.
.TP
\fB\-\-check\fR
Check guessed metadata against external sources.
.TP
\fB\-\-consult\-external\-directory\fR
Pull in external (not maintained by upstream) directory data
.TP
\fB\-\-version\fR
show program's version number and exit
.SH "SEE ALSO"
\&\fIapply-multiarch-hints\fR\|(1)
\&\fIguess-upstream-metadata\fR\|(1)
\&\fIlintian-brush\fR\|(1)
\&\fIlintian\fR\|(1)
.SH AUTHORS
Jelmer Vernooij

upstream-ontologist_0.1.24.orig/man/guess-upstream-metadata.1

.TH GUESS-UPSTREAM-METADATA 1 'December 2021' 'guess-upstream-metadata 0.1.24' 'User Commands'
.SH NAME
guess-upstream-metadata \- guess upstream package metadata
.SH DESCRIPTION
guess\-upstream\-metadata [\-h] [\-\-trust] [\-\-disable\-net\-access] [\-\-check] [\-\-consult\-external\-directory] [\-\-version] [path]
This tool tries to guess upstream metadata (Homepage, Contact, VCS Repository) for an upstream project. It does this by parsing various files in the package, and possibly calling out to external services (unless --disable-net-access is specified).
.SS "positional arguments:"
.IP path
.SS "optional arguments:"
.TP
\fB\-h\fR, \fB\-\-help\fR
show this help message and exit
.TP
\fB\-\-trust\fR
Whether to allow running code from the package.
.TP
\fB\-\-disable\-net\-access\fR
Do not probe external services.
.TP
\fB\-\-check\fR
Check guessed metadata against external sources.
.TP \fB\-\-consult\-external\-directory\fR Pull in external (not maintained by upstream) directory data .TP \fB\-\-version\fR show program's version number and exit .SH "SEE ALSO" \&\fIapply-multiarch-hints\fR\|(1) \&\fIguess-upstream-metadata\fR\|(1) \&\fIlintian-brush\fR\|(1) \&\fIlintian\fR\|(1) .SH AUTHORS Jelmer Vernooij upstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/PKG-INFO0000644000000000000000000000101114162102635022505 0ustar00Metadata-Version: 2.1 Name: upstream-ontologist Version: 0.1.24 Summary: tracking of upstream project metadata Home-page: https://github.com/jelmer/upstream-ontologist Author: Jelmer Vernooij Author-email: jelmer@jelmer.uk Maintainer: Jelmer Vernooij Maintainer-email: jelmer@jelmer.uk License: UNKNOWN Project-URL: Repository, https://github.com/jelmer/upstream-ontologist.git Platform: UNKNOWN Provides-Extra: cargo Provides-Extra: readme Provides-Extra: setup.cfg License-File: LICENSE License-File: AUTHORS UNKNOWN upstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/SOURCES.txt0000644000000000000000000000751214162102635023310 0ustar00.flake8 .gitignore AUTHORS CODE_OF_CONDUCT.md LICENSE MANIFEST.in Makefile README.md SECURITY.md releaser.conf setup.cfg setup.py .github/workflows/pythonpackage.yml docs/vcs.md man/autodoap.1 man/guess-upstream-metadata.1 upstream_ontologist/__init__.py upstream_ontologist/__main__.py upstream_ontologist/doap.py upstream_ontologist/guess.py upstream_ontologist/homepage.py upstream_ontologist/readme.py upstream_ontologist/vcs.py upstream_ontologist.egg-info/PKG-INFO upstream_ontologist.egg-info/SOURCES.txt upstream_ontologist.egg-info/dependency_links.txt upstream_ontologist.egg-info/entry_points.txt upstream_ontologist.egg-info/requires.txt upstream_ontologist.egg-info/top_level.txt upstream_ontologist/debian/__init__.py upstream_ontologist/tests/__init__.py upstream_ontologist/tests/test_readme.py upstream_ontologist/tests/test_upstream_ontologist.py upstream_ontologist/tests/test_vcs.py upstream_ontologist/tests/readme_data/aiozipkin/README.rst upstream_ontologist/tests/readme_data/aiozipkin/description upstream_ontologist/tests/readme_data/argparse/README.rst upstream_ontologist/tests/readme_data/argparse/description upstream_ontologist/tests/readme_data/bitlbee/README.md upstream_ontologist/tests/readme_data/bitlbee/description upstream_ontologist/tests/readme_data/bup/README.md upstream_ontologist/tests/readme_data/bup/description upstream_ontologist/tests/readme_data/cbor2/README.rst upstream_ontologist/tests/readme_data/cbor2/description upstream_ontologist/tests/readme_data/django-ical/README.rst upstream_ontologist/tests/readme_data/django-ical/description upstream_ontologist/tests/readme_data/dulwich/README.rst upstream_ontologist/tests/readme_data/dulwich/description upstream_ontologist/tests/readme_data/empty/README.md upstream_ontologist/tests/readme_data/erbium/README.md upstream_ontologist/tests/readme_data/erbium/description upstream_ontologist/tests/readme_data/isso/README.md upstream_ontologist/tests/readme_data/isso/description upstream_ontologist/tests/readme_data/jadx/README.md upstream_ontologist/tests/readme_data/jadx/description upstream_ontologist/tests/readme_data/jupyter-client/README.md upstream_ontologist/tests/readme_data/jupyter-client/description upstream_ontologist/tests/readme_data/libtrace/README upstream_ontologist/tests/readme_data/libtrace/description upstream_ontologist/tests/readme_data/perl-timedate/README 
upstream_ontologist/tests/readme_data/perl-timedate/description upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/README.md upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/description upstream_ontologist/tests/readme_data/pylint-flask/README.md upstream_ontologist/tests/readme_data/pylint-flask/description upstream_ontologist/tests/readme_data/python-icalendar/README.rst upstream_ontologist/tests/readme_data/python-icalendar/description upstream_ontologist/tests/readme_data/python-rsa/README.md upstream_ontologist/tests/readme_data/python-rsa/description upstream_ontologist/tests/readme_data/ruby-columnize/README.md upstream_ontologist/tests/readme_data/ruby-columnize/description upstream_ontologist/tests/readme_data/ruby-sha3/README.md upstream_ontologist/tests/readme_data/ruby-sha3/description upstream_ontologist/tests/readme_data/samba/README.md upstream_ontologist/tests/readme_data/samba/description upstream_ontologist/tests/readme_data/saneyaml/README.rst upstream_ontologist/tests/readme_data/saneyaml/description upstream_ontologist/tests/readme_data/sfcgal/README.md upstream_ontologist/tests/readme_data/sfcgal/description upstream_ontologist/tests/readme_data/statuscake/README.md upstream_ontologist/tests/readme_data/statuscake/description upstream_ontologist/tests/readme_data/text-worddif/README.md upstream_ontologist/tests/readme_data/text-worddif/description upstream_ontologist/tests/readme_data/wandio/README upstream_ontologist/tests/readme_data/wandio/descriptionupstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/dependency_links.txt0000644000000000000000000000000114162102635025465 0ustar00 upstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/entry_points.txt0000644000000000000000000000017014162102635024713 0ustar00[console_scripts] autodoap = upstream_ontologist.doap:main guess-upstream-metadata = upstream_ontologist.__main__:main upstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/requires.txt0000644000000000000000000000014614162102635024020 0ustar00debmutate python_debian [cargo] tomlkit [readme] bs4 docutils lxml markdown [setup.cfg] setuptools upstream-ontologist_0.1.24.orig/upstream_ontologist.egg-info/top_level.txt0000644000000000000000000000002414162102635024145 0ustar00upstream_ontologist upstream-ontologist_0.1.24.orig/upstream_ontologist/__init__.py0000644000000000000000000001356214162102635022045 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """Functions for working with upstream metadata. This gathers information about upstreams from various places. Each bit of information gathered is wrapped in a UpstreamDatum object, which contains the field name. 
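For example (with illustrative values only), a homepage URL discovered in a project's setup.py could be represented as UpstreamDatum('Homepage', 'https://example.com/proj', certainty='likely', origin='setup.py').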
The fields used here match those in https://wiki.debian.org/UpstreamMetadata Supported fields: - Homepage - Name - Contact - Repository - Repository-Browse - Bug-Database - Bug-Submit - Screenshots - Archive - Security-Contact Extensions for upstream-ontologist. - X-SourceForge-Project: Name of the SourceForge project - X-Wiki: URL to a wiki - X-Summary: A one-line description - X-Description: Multi-line description - X-License: Short description of the license - X-Copyright - X-Maintainer - X-Authors Supported, but currently not set. - FAQ - Donation - Documentation - Registration - Webservice """ from typing import Optional, Sequence from dataclasses import dataclass from email.utils import parseaddr SUPPORTED_CERTAINTIES = ["certain", "confident", "likely", "possible", None] version_string = "0.1.24" USER_AGENT = "upstream-ontologist/" + version_string # Too aggressive? DEFAULT_URLLIB_TIMEOUT = 3 @dataclass class Person: name: str email: Optional[str] = None url: Optional[str] = None @classmethod def from_string(cls, text): text = text.replace(' at ', '@') text = text.replace('[AT]', '@') if '(' in text and text.endswith(')'): (p1, p2) = text[:-1].split('(', 1) if p2.startswith('https://') or p2.startswith('http://'): url = p2 if '<' in p1: (name, email) = parseaddr(p1) return cls(name=name, email=email, url=url) return cls(name=p1, url=url) elif '@' in p2: return cls(name=p1, email=p2) return cls(text) elif '<' in text: (name, email) = parseaddr(text) return cls(name=name, email=email) else: return cls(name=text) def __str__(self): if self.email: return '%s <%s>' % (self.name, self.email) return self.name class UpstreamDatum(object): """A single piece of upstream metadata.""" __slots__ = ["field", "value", "certainty", "origin"] def __init__(self, field, value, certainty=None, origin=None): self.field = field if value is None: raise ValueError(field) self.value = value if certainty not in SUPPORTED_CERTAINTIES: raise ValueError(certainty) self.certainty = certainty self.origin = origin def __eq__(self, other): return ( isinstance(other, type(self)) and self.field == other.field and self.value == other.value and self.certainty == other.certainty and self.origin == other.origin ) def __str__(self): return "%s: %s" % (self.field, self.value) def __repr__(self): return "%s(%r, %r, %r, %r)" % ( type(self).__name__, self.field, self.value, self.certainty, self.origin, ) class UpstreamPackage(object): def __init__(self, family, name): self.family = family self.name = name # If we're setting them new, put Name and Contact first def upstream_metadata_sort_key(x): (k, v) = x return { "Name": "00-Name", "Contact": "01-Contact", }.get(k, k) def min_certainty(certainties: Sequence[str]) -> str: confidences = [certainty_to_confidence(c) for c in certainties] return confidence_to_certainty(max([c for c in confidences if c is not None] + [0])) def certainty_to_confidence(certainty: Optional[str]) -> Optional[int]: if certainty in ("unknown", None): return None return SUPPORTED_CERTAINTIES.index(certainty) def confidence_to_certainty(confidence: Optional[int]) -> str: if confidence is None: return "unknown" try: return SUPPORTED_CERTAINTIES[confidence] or "unknown" except IndexError: raise ValueError(confidence) def certainty_sufficient( actual_certainty: str, minimum_certainty: Optional[str] ) -> bool: """Check if the actual certainty is sufficient. 
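For example, given the SUPPORTED_CERTAINTIES ordering above (from most to least certain): certainty_sufficient('likely', 'possible') is True, while certainty_sufficient('possible', 'confident') is False. A minimum_certainty of None is always sufficient.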
Args: actual_certainty: Actual certainty with which changes were made minimum_certainty: Minimum certainty to keep changes Returns: boolean """ actual_confidence = certainty_to_confidence(actual_certainty) if actual_confidence is None: # Actual confidence is unknown. # TODO(jelmer): Should we really be ignoring this? return True minimum_confidence = certainty_to_confidence(minimum_certainty) if minimum_confidence is None: return True return actual_confidence <= minimum_confidence def _load_json_url(http_url: str, timeout: int = DEFAULT_URLLIB_TIMEOUT): from urllib.request import urlopen, Request import json headers = {'User-Agent': USER_AGENT, 'Accept': 'application/json'} http_contents = urlopen( Request(http_url, headers=headers), timeout=timeout).read() return json.loads(http_contents) upstream-ontologist_0.1.24.orig/upstream_ontologist/__main__.py0000644000000000000000000000627214051270175022030 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """Functions for working with upstream metadata.""" import logging import os import sys from . import ( version_string, UpstreamDatum, Person, ) from .guess import ( guess_upstream_metadata, guess_upstream_info, ) def main(argv=None): import argparse import ruamel.yaml parser = argparse.ArgumentParser(sys.argv[0]) parser.add_argument("path", default=".", nargs="?") parser.add_argument( "--trust", action="store_true", help="Whether to allow running code from the package.", ) parser.add_argument( "--disable-net-access", help="Do not probe external services.", action="store_true", default=False, ) parser.add_argument( "--check", action="store_true", help="Check guessed metadata against external sources.", ) parser.add_argument( "--scan", action="store_true", help="Scan for metadata rather than printing results.", ) parser.add_argument( "--consult-external-directory", action="store_true", help="Pull in external (not maintained by upstream) directory data", ) parser.add_argument( "--version", action="version", version="%(prog)s " + version_string ) parser.add_argument('--verbose', action='store_true') args = parser.parse_args(argv) if args.verbose: logging.basicConfig(level=logging.DEBUG) else: logging.basicConfig(level=logging.INFO) if not os.path.isdir(args.path): sys.stderr.write("%s is not a directory\n" % args.path) return 1 if args.scan: for entry in guess_upstream_info(args.path, args.trust): if isinstance(entry, UpstreamDatum): print( "%s: %r - certainty %s (from %s)" % (entry.field, entry.value, entry.certainty, entry.origin) ) else: raise TypeError(entry) else: metadata = guess_upstream_metadata( args.path, args.trust, not args.disable_net_access, consult_external_directory=args.consult_external_directory, check=args.check, ) yaml = ruamel.yaml.YAML() ruamel.yaml.scalarstring.walk_tree(metadata) yaml.register_class(Person) 
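        # walk_tree (above) rewrites multi-line strings in the metadata into
        # YAML block scalars, and register_class lets ruamel.yaml serialize
        # Person values directly; both shape how the dump below is rendered.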
yaml.dump(metadata, sys.stdout) return 0 if __name__ == "__main__": sys.exit(main()) upstream-ontologist_0.1.24.orig/upstream_ontologist/debian/0000755000000000000000000000000014005303530021140 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/doap.py0000644000000000000000000001127314034075110021221 0ustar00#!/usr/bin/python3 # Copyright (C) 2021 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA from lxml import etree Element = etree.Element SubElement = etree.SubElement tostring = etree.tostring RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns" etree.register_namespace('rdf', RDF_NS) FOAF_NS = "http://xmlns.com/foaf/0.1/" etree.register_namespace('foaf', FOAF_NS) DOAP_NS = "http://usefulinc.com/ns/doap" etree.register_namespace('doap', DOAP_NS) def doap_file_from_upstream_info(upstream_info): project = Element('{%s}Project' % DOAP_NS) if 'Name' in upstream_info: SubElement(project, '{%s}name' % DOAP_NS).text = upstream_info['Name'] if 'Homepage' in upstream_info: hp = SubElement(project, '{%s}homepage' % DOAP_NS) hp.set('{%s}resource' % RDF_NS, upstream_info['Homepage']) if 'X-Summary' in upstream_info: sd = SubElement(project, '{%s}shortdesc' % DOAP_NS) sd.text = upstream_info['X-Summary'] if 'X-Description' in upstream_info: sd = SubElement(project, '{%s}description' % DOAP_NS) sd.text = upstream_info['X-Description'] if 'X-Download' in upstream_info: dp = SubElement(project, '{%s}download-page' % DOAP_NS) dp.set('{%s}resource' % RDF_NS, upstream_info['X-Download']) if 'Repository' in upstream_info or 'Repository-Browse' in upstream_info: repository = SubElement(project, '{%s}repository' % DOAP_NS) # TODO(jelmer): how do we know the repository type? 
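        # Note: as written, every repository is emitted as a doap:GitRepository
        # regardless of the actual VCS; resolving the TODO above would mean
        # inspecting the URL or the detected VCS family before choosing the
        # element name.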
git_repo = SubElement(repository, '{%s}GitRepository' % DOAP_NS) if 'Repository' in upstream_info: location = SubElement(git_repo, '{%s}location' % DOAP_NS) location.set('{%s}resource' % RDF_NS, upstream_info['Repository']) if 'Repository-Browse' in upstream_info: location = SubElement(git_repo, '{%s}browse' % DOAP_NS) location.set('{%s}resource' % RDF_NS, upstream_info['Repository-Browse']) if 'X-Mailing-List' in upstream_info: mailinglist = SubElement(project, '{%s}mailing-list' % DOAP_NS) mailinglist.set('{%s}resource' % RDF_NS, upstream_info['X-Mailing-List']) if 'Bug-Database' in upstream_info: bugdb = SubElement(project, '{%s}bug-database' % DOAP_NS) bugdb.set('{%s}resource' % RDF_NS, upstream_info['Bug-Database']) if 'Screenshots' in upstream_info: screenshots = SubElement(project, '{%s}screenshots' % DOAP_NS) screenshots.set('{%s}resource' % RDF_NS, upstream_info['Screenshots']) if 'Security-Contact' in upstream_info: security_contact = SubElement(project, '{%s}security-contact' % DOAP_NS) security_contact.set('{%s}resource' % RDF_NS, upstream_info['Security-Contact']) if 'X-Wiki' in upstream_info: wiki = SubElement(project, '{%s}wiki' % DOAP_NS) wiki.set('{%s}resource' % RDF_NS, upstream_info['X-Wiki']) return etree.ElementTree(project) def main(argv=None): from .guess import get_upstream_info import argparse import sys if argv is None: argv = sys.argv parser = argparse.ArgumentParser(argv) parser.add_argument("path", default=".", nargs="?") parser.add_argument( "--trust", action="store_true", help="Whether to allow running code from the package.", ) parser.add_argument( "--disable-net-access", help="Do not probe external services.", action="store_true", default=False, ) parser.add_argument( "--check", action="store_true", help="Check guessed metadata against external sources.", ) args = parser.parse_args() upstream_info = get_upstream_info( args.path, trust_package=args.trust, net_access=not args.disable_net_access, check=args.check) et = doap_file_from_upstream_info(upstream_info) et.write( sys.stdout.buffer, xml_declaration=True, method="xml", encoding="utf-8", pretty_print=True) if __name__ == '__main__': import sys sys.exit(main(sys.argv)) upstream-ontologist_0.1.24.orig/upstream_ontologist/guess.py0000644000000000000000000036140214162102635021433 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA import json import logging import operator import os import re import socket import urllib.error from typing import Optional, Iterable, List from urllib.parse import quote, urlparse, urlunparse, urljoin from urllib.request import urlopen, Request from .vcs import ( unsplit_vcs_url, browse_url_from_repo_url, plausible_url as plausible_vcs_url, plausible_browse_url as plausible_vcs_browse_url, sanitize_url as sanitize_vcs_url, is_gitlab_site, guess_repo_from_url, verify_repository_url, ) from . import ( DEFAULT_URLLIB_TIMEOUT, USER_AGENT, UpstreamDatum, min_certainty, certainty_to_confidence, certainty_sufficient, _load_json_url, Person, ) # Pecl is quite slow, so up the timeout a bit. PECL_URLLIB_TIMEOUT = 15 logger = logging.getLogger(__name__) class NoSuchSourceForgeProject(Exception): def __init__(self, project): self.project = project def get_sf_metadata(project): url = 'https://sourceforge.net/rest/p/%s' % project try: return _load_json_url(url) except urllib.error.HTTPError as e: if e.code != 404: raise raise NoSuchSourceForgeProject(project) class NoSuchRepologyProject(Exception): def __init__(self, project): self.project = project def get_repology_metadata(srcname, repo='debian_unstable'): url = ('https://repology.org/tools/project-by?repo=%s&name_type=srcname' '&target_page=api_v1_project&name=%s' % (repo, srcname)) try: return _load_json_url(url) except urllib.error.HTTPError as e: if e.code != 404: raise raise NoSuchRepologyProject(srcname) DATUM_TYPES = { 'Bug-Submit': str, 'Bug-Database': str, 'Repository': str, 'Repository-Browse': str, 'Documentation': str, 'X-License': str, 'X-Summary': str, 'X-Description': str, 'X-Wiki': str, 'X-SourceForge-Project': str, 'Archive': str, 'Homepage': str, 'Name': str, 'X-Version': str, 'X-Download': str, 'X-Pecl-URL': str, 'Screenshots': list, 'Contact': str, 'X-Maintainer': Person, } def known_bad_guess(datum): # noqa: C901 try: expected_type = DATUM_TYPES[datum.field] except KeyError: if datum.field.startswith('X-'): logging.debug('Unknown field %s', datum.field) else: logging.warning('Unknown field %s', datum.field) return False if not isinstance(datum.value, expected_type): logging.warning( 'filtering out bad value %r for %s', datum.value, datum.field) return True if datum.field in ('Bug-Submit', 'Bug-Database'): parsed_url = urlparse(datum.value) if parsed_url.hostname == 'bugzilla.gnome.org': return True if parsed_url.hostname == 'bugs.freedesktop.org': return True if datum.field == 'Repository': if '${' in datum.value: return True parsed_url = urlparse(datum.value) if parsed_url.hostname == 'anongit.kde.org': return True if parsed_url.hostname == 'git.gitorious.org': return True if datum.field == 'Homepage': parsed_url = urlparse(datum.value) if parsed_url.hostname in ('pypi.org', 'rubygems.org'): return True if datum.field == 'Repository-Browse': if '${' in datum.value: return True parsed_url = urlparse(datum.value) if parsed_url.hostname == 'cgit.kde.org': return True if datum.field == 'Name': if datum.value.lower() == 'package': return True if datum.field == 'X-Version': if datum.value.lower() in ('devel', ): return True if isinstance(datum.value, str) and datum.value.strip().lower() == 'unknown': return True return False def filter_bad_guesses( guessed_items: Iterable[UpstreamDatum]) -> Iterable[UpstreamDatum]: return 
filter(lambda x: not known_bad_guess(x), guessed_items) def update_from_guesses(upstream_metadata, guessed_items): changed = False for datum in guessed_items: current_datum = upstream_metadata.get(datum.field) if not current_datum or ( certainty_to_confidence(datum.certainty) < certainty_to_confidence(current_datum.certainty)): upstream_metadata[datum.field] = datum changed = True return changed def guess_from_debian_rules(path, trust_package): from debmutate._rules import Makefile mf = Makefile.from_path(path) try: upstream_git = mf.get_variable(b'UPSTREAM_GIT') except KeyError: pass else: yield UpstreamDatum( "Repository", upstream_git.decode(), "likely") try: upstream_url = mf.get_variable(b'DEB_UPSTREAM_URL') except KeyError: pass else: yield UpstreamDatum("X-Download", upstream_url.decode(), "likely") def _metadata_from_url(url: str, origin=None): """Obtain metadata from a URL related to the project. Args: url: The URL to inspect origin: Origin to report for metadata """ m = re.match('https?://www.(sf|sourceforge).net/projects/([^/]+)', url) if m: yield UpstreamDatum( "Archive", "SourceForge", "certain", origin=origin) yield UpstreamDatum( "X-SourceForge-Project", m.group(2), "certain", origin=origin) m = re.match('https?://(sf|sourceforge).net/([^/]+)', url) if m: yield UpstreamDatum( "Archive", "SourceForge", "certain", origin=origin) if m.group(1) != "www": yield UpstreamDatum( "X-SourceForge-Project", m.group(2), "certain", origin=origin) return m = re.match('https?://(.*).(sf|sourceforge).net/', url) if m: yield UpstreamDatum( "Archive", "SourceForge", "certain", origin=origin) if m.group(1) != "www": yield UpstreamDatum( "X-SourceForge-Project", m.group(1), "certain", origin=origin) return if (url.startswith('https://pecl.php.net/package/') or url.startswith('http://pecl.php.net/package/')): yield UpstreamDatum('X-Pecl-URL', url, 'certain', origin=origin) def guess_from_debian_watch(path, trust_package): from debmutate.watch import ( parse_watch_file, MissingVersion, ) def get_package_name(): from debian.deb822 import Deb822 with open(os.path.join(os.path.dirname(path), 'control'), 'r') as f: return Deb822(f)['Source'] with open(path, 'r') as f: try: wf = parse_watch_file(f) except MissingVersion: return if not wf: return for w in wf: url = w.format_url(package=get_package_name) if 'mode=git' in w.options: yield UpstreamDatum( "Repository", url, "confident", origin=path) continue if 'mode=svn' in w.options: yield UpstreamDatum( "Repository", url, "confident", origin=path) continue if url.startswith('https://') or url.startswith('http://'): repo = guess_repo_from_url(url) if repo: yield UpstreamDatum( "Repository", repo, "likely", origin=path) continue yield from _metadata_from_url(url, origin=path) m = re.match( 'https?://hackage.haskell.org/package/(.*)/distro-monitor', url) if m: yield UpstreamDatum( "Archive", "Hackage", "certain", origin=path) yield UpstreamDatum( "X-Hackage-Package", m.group(1), "certain", origin=path) def guess_from_debian_control(path, trust_package): with open(path, 'r') as f: from debian.deb822 import Deb822 control = Deb822(f) if 'Homepage' in control: yield UpstreamDatum('Homepage', control['Homepage'], 'certain') if 'XS-Go-Import-Path' in control: yield ( UpstreamDatum( 'Repository', 'https://' + control['XS-Go-Import-Path'], 'likely')) if 'Description' in control: yield UpstreamDatum( 'X-Summary', control['Description'].splitlines(False)[0], 'certain') yield UpstreamDatum( 'X-Description', ''.join(control['Description'].splitlines(True)[1:]), 
'certain') def guess_from_debian_changelog(path, trust_package): from debian.changelog import Changelog with open(path, 'rb') as f: cl = Changelog(f) source = cl.package if source.startswith('rust-'): try: from toml.decoder import load as load_toml with open('debian/debcargo.toml', 'r') as f: debcargo = load_toml(f) except FileNotFoundError: semver_suffix = False else: semver_suffix = debcargo.get('semver_suffix') from debmutate.debcargo import parse_debcargo_source_name, cargo_translate_dashes crate, crate_semver_version = parse_debcargo_source_name( source, semver_suffix) if '-' in crate: crate = cargo_translate_dashes(crate) yield UpstreamDatum('Archive', 'crates.io', 'certain') yield UpstreamDatum('X-Cargo-Crate', crate, 'certain') def guess_from_python_metadata(pkg_info): if 'Name' in pkg_info: yield UpstreamDatum('Name', pkg_info['name'], 'certain') if 'Version' in pkg_info: yield UpstreamDatum('X-Version', pkg_info['Version'], 'certain') if 'Home-Page' in pkg_info: repo = guess_repo_from_url(pkg_info['Home-Page']) if repo: yield UpstreamDatum( 'Repository', repo, 'likely') for value in pkg_info.get_all('Project-URL', []): url_type, url = value.split(', ') if url_type in ('GitHub', 'Repository', 'Source Code'): yield UpstreamDatum( 'Repository', url, 'certain') if url_type in ('Bug Tracker', ): yield UpstreamDatum( 'Bug-Database', url, 'certain') if 'Summary' in pkg_info: yield UpstreamDatum('X-Summary', pkg_info['Summary'], 'certain') if 'Author' in pkg_info: author_email = pkg_info.get('Author-email') author = Person(pkg_info['Author'], author_email) yield UpstreamDatum('X-Authors', [author], 'certain') if 'License' in pkg_info: yield UpstreamDatum('X-License', pkg_info['License'], 'certain') if 'Download-URL' in pkg_info: yield UpstreamDatum('X-Download', pkg_info['Download-URL'], 'certain') yield from parse_python_long_description( pkg_info.get_payload(), pkg_info.get_content_type()) def guess_from_pkg_info(path, trust_package): """Get the metadata from a PKG-INFO file.""" from email.parser import Parser try: with open(path, 'r') as f: pkg_info = Parser().parse(f) except FileNotFoundError: return yield from guess_from_python_metadata(pkg_info) def parse_python_long_description(long_description, content_type): if long_description in (None, ''): return # Discard encoding, etc. 
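# e.g. 'text/markdown; charset=UTF-8' -> 'text/markdown'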
if content_type: content_type = content_type.split(';')[0] if content_type in (None, 'text/plain'): if len(long_description.splitlines()) > 30: return yield UpstreamDatum( 'X-Description', long_description, 'possible') extra_md = [] elif content_type in ('text/restructured-text', 'text/x-rst'): from .readme import description_from_readme_rst description, extra_md = description_from_readme_rst(long_description) if description: yield UpstreamDatum('X-Description', description, 'possible') elif content_type == 'text/markdown': from .readme import description_from_readme_md description, extra_md = description_from_readme_md(long_description) if description: yield UpstreamDatum('X-Description', description, 'possible') else: extra_md = [] for datum in extra_md: yield datum def guess_from_setup_cfg(path, trust_package): from setuptools.config import read_configuration # read_configuration needs a function cwd try: os.getcwd() except FileNotFoundError: os.chdir(os.path.dirname(path)) config = read_configuration(path) metadata = config.get('metadata') if metadata: if 'name' in metadata: yield UpstreamDatum('Name', metadata['name'], 'certain') if 'url' in metadata: yield from parse_python_url(metadata['url']) yield from parse_python_long_description( metadata.get('long_description'), metadata.get('long_description_content_type')) if 'description' in metadata: yield UpstreamDatum('X-Summary', metadata['description'], 'certain') def parse_python_url(url): repo = guess_repo_from_url(url) if repo: yield UpstreamDatum('Repository', repo, 'likely') yield UpstreamDatum('Homepage', url, 'likely') def guess_from_setup_py_executed(path): from distutils.core import run_setup result = run_setup(os.path.abspath(path), stop_after="init") if result.get_name() not in (None, '', 'UNKNOWN'): yield UpstreamDatum('Name', result.get_name(), 'certain') if result.get_version() not in (None, '', 'UNKNOWN'): yield UpstreamDatum('X-Version', result.get_version(), 'certain') if result.get_url() not in (None, '', 'UNKNOWN'): yield from parse_python_url(result.get_url()) if result.get_download_url() not in (None, '', 'UNKNOWN'): yield UpstreamDatum( 'X-Download', result.get_download_url(), 'likely') if result.get_license() not in (None, '', 'UNKNOWN'): yield UpstreamDatum( 'X-License', result.get_license(), 'likely') if result.get_contact() not in (None, '', 'UNKNOWN'): contact = result.get_contact() if result.get_contact_email() not in (None, '', 'UNKNOWN'): contact += " <%s>" % result.get_contact_email() yield UpstreamDatum('Contact', contact, 'likely') if result.get_description() not in (None, '', 'UNKNOWN'): yield UpstreamDatum('X-Summary', result.get_description(), 'certain') if result.metadata.long_description not in (None, '', 'UNKNOWN'): yield from parse_python_long_description( result.metadata.long_description, getattr(result.metadata, 'long_description_content_type', None)) yield from parse_python_project_urls(getattr(result.metadata, 'project_urls', {})) def parse_python_project_urls(urls): for url_type, url in urls.items(): if url_type in ('GitHub', 'Repository', 'Source Code'): yield UpstreamDatum( 'Repository', url, 'certain') if url_type in ('Bug Tracker', ): yield UpstreamDatum( 'Bug-Database', url, 'certain') def guess_from_setup_py(path, trust_package): # noqa: C901 if trust_package: try: yield from guess_from_setup_py_executed(path) except Exception as e: logging.warning('Failed to run setup.py: %r', e) else: return with open(path) as inp: setup_text = inp.read() import ast # Based on pypi.py in 
https://github.com/nexB/scancode-toolkit/blob/develop/src/packagedcode/pypi.py # # Copyright (c) nexB Inc. and others. All rights reserved. # ScanCode is a trademark of nexB Inc. # SPDX-License-Identifier: Apache-2.0 try: tree = ast.parse(setup_text) except SyntaxError as e: logging.warning('Syntax error while parsing setup.py: %s', e) return setup_args = {} for statement in tree.body: # We only care about function calls or assignments to functions named # `setup` or `main` if (isinstance(statement, (ast.Expr, ast.Call, ast.Assign)) and isinstance(statement.value, ast.Call) and isinstance(statement.value.func, ast.Name) # we also look for main as sometimes this is used instead of # setup() and statement.value.func.id in ('setup', 'main')): # Process the arguments to the setup function for kw in getattr(statement.value, 'keywords', []): arg_name = kw.arg if isinstance(kw.value, (ast.Str, ast.Constant)): setup_args[arg_name] = kw.value.s elif isinstance(kw.value, (ast.List, ast.Tuple, ast.Set,)): # We only collect list elements that are plain constants, # skipping e.g. function calls value = [ elt.s for elt in kw.value.elts if isinstance(elt, ast.Constant) ] setup_args[arg_name] = value elif isinstance(kw.value, ast.Dict): setup_args[arg_name] = {} for (key, value) in zip(kw.value.keys, kw.value.values): if isinstance(value, (ast.Str, ast.Constant)): setup_args[arg_name][key.s] = value.s # TODO: what if kw.value is an expression like a call to # version=get_version() or version=__version__ # End code from https://github.com/nexB/scancode-toolkit/blob/develop/src/packagedcode/pypi.py if 'name' in setup_args: yield UpstreamDatum('Name', setup_args['name'], 'certain') if 'version' in setup_args: yield UpstreamDatum('X-Version', setup_args['version'], 'certain') if 'description' in setup_args: yield UpstreamDatum('X-Summary', setup_args['description'], 'certain') if 'long_description' in setup_args: yield from parse_python_long_description( setup_args['long_description'], setup_args.get('long_description_content_type')) if 'license' in setup_args: yield UpstreamDatum('X-License', setup_args['license'], 'certain') if 'download_url' in setup_args and setup_args.get('download_url'): yield UpstreamDatum('X-Download', setup_args['download_url'], 'certain') if 'url' in setup_args: yield from parse_python_url(setup_args['url']) if 'project_urls' in setup_args: yield from parse_python_project_urls(setup_args['project_urls']) if 'maintainer' in setup_args: maintainer_email = setup_args.get('maintainer_email') maintainer = setup_args['maintainer'] if isinstance(maintainer, list) and len(maintainer) == 1: maintainer = maintainer[0] if isinstance(maintainer, str): maintainer = Person(maintainer, maintainer_email) yield UpstreamDatum('X-Maintainer', maintainer, 'certain') def guess_from_composer_json(path, trust_package): # https://getcomposer.org/doc/04-schema.md with open(path, 'r') as f: package = json.load(f) if 'name' in package: yield UpstreamDatum('Name', package['name'], 'certain') if 'homepage' in package: yield UpstreamDatum('Homepage', package['homepage'], 'certain') if 'description' in package: yield UpstreamDatum('X-Summary', package['description'], 'certain') if 'license' in package: yield UpstreamDatum('X-License', package['license'], 'certain') if 'version' in package: yield UpstreamDatum('X-Version', package['version'], 'certain') def guess_from_package_json(path, trust_package): # noqa: C901 # see https://docs.npmjs.com/cli/v7/configuring-npm/package-json with open(path, 'r') as f: package = json.load(f) if 'name'
in package: yield UpstreamDatum('Name', package['name'], 'certain') if 'homepage' in package: yield UpstreamDatum('Homepage', package['homepage'], 'certain') if 'description' in package: yield UpstreamDatum('X-Summary', package['description'], 'certain') if 'license' in package: yield UpstreamDatum('X-License', package['license'], 'certain') if 'version' in package: yield UpstreamDatum('X-Version', package['version'], 'certain') if 'repository' in package: if isinstance(package['repository'], dict): repo_url = package['repository'].get('url') elif isinstance(package['repository'], str): repo_url = package['repository'] else: repo_url = None if repo_url: parsed_url = urlparse(repo_url) if parsed_url.scheme and parsed_url.netloc: yield UpstreamDatum( 'Repository', repo_url, 'certain') elif repo_url.startswith('github:'): # Some people seem to default to github. :( repo_url = 'https://github.com/' + repo_url.split(':', 1)[1] yield UpstreamDatum('Repository', repo_url, 'likely') else: # Some people seem to default to github. :( repo_url = 'https://github.com/' + parsed_url.path yield UpstreamDatum( 'Repository', repo_url, 'likely') if 'bugs' in package: if isinstance(package['bugs'], dict): url = package['bugs'].get('url') if url is None and package['bugs'].get('email'): url = 'mailto:' + package['bugs']['email'] else: url = package['bugs'] if url: yield UpstreamDatum('Bug-Database', url, 'certain') if 'author' in package: if isinstance(package['author'], dict): yield UpstreamDatum( 'X-Author', [Person( name=package['author'].get('name'), url=package['author'].get('url'), email=package['author'].get('email'))], 'confident') elif isinstance(package['author'], str): yield UpstreamDatum( 'X-Author', [Person.from_string(package['author'])], 'confident') else: logging.warning( 'Unsupported type for author in package.json: %r', type(package['author'])) def xmlparse_simplify_namespaces(path, namespaces): import xml.etree.ElementTree as ET namespaces = ['{%s}' % ns for ns in namespaces] tree = ET.iterparse(path) for _, el in tree: for namespace in namespaces: el.tag = el.tag.replace(namespace, '') return tree.root def guess_from_package_xml(path, trust_package): # https://pear.php.net/manual/en/guide.developers.package2.dependencies.php import xml.etree.ElementTree as ET try: root = xmlparse_simplify_namespaces(path, [ 'http://pear.php.net/dtd/package-2.0', 'http://pear.php.net/dtd/package-2.1']) except ET.ParseError as e: logging.warning('Unable to parse package.xml: %s', e) return assert root.tag == 'package', 'root tag is %r' % root.tag name_tag = root.find('name') if name_tag is not None: yield UpstreamDatum('Name', name_tag.text, 'certain') summary_tag = root.find('summary') if summary_tag is not None: yield UpstreamDatum('X-Summary', summary_tag.text, 'certain') description_tag = root.find('description') if description_tag is not None: yield UpstreamDatum('X-Description', description_tag.text, 'certain') version_tag = root.find('version') if version_tag is not None: release_tag = version_tag.find('release') if release_tag is not None: yield UpstreamDatum('X-Version', release_tag.text, 'certain') license_tag = root.find('license') if license_tag is not None: yield UpstreamDatum('X-License', license_tag.text, 'certain') for url_tag in root.findall('url'): if url_tag.get('type') == 'repository': yield UpstreamDatum( 'Repository', url_tag.text, 'certain') if url_tag.get('type') == 'bugtracker': yield UpstreamDatum('Bug-Database', url_tag.text, 'certain') def guess_from_pod(contents): # See 
https://perldoc.perl.org/perlpod by_header = {} inheader = None for line in contents.splitlines(True): if line.startswith(b'=head1 '): inheader = line.rstrip(b'\n').split(b' ', 1)[1] by_header[inheader.decode('utf-8', 'surrogateescape').upper()] = '' elif inheader: by_header[inheader.decode('utf-8', 'surrogateescape').upper()] += line.decode('utf-8', 'surrogateescape') if 'DESCRIPTION' in by_header: description = by_header['DESCRIPTION'].lstrip('\n') description = re.sub(r'[FXZSCBI]\<([^>]+)>', r'\1', description) description = re.sub(r'L\<([^\|]+)\|([^\>]+)\>', r'\2', description) description = re.sub(r'L\<([^\>]+)\>', r'\1', description) # TODO(jelmer): Support E<> yield UpstreamDatum('X-Description', description, 'likely') if 'NAME' in by_header: lines = by_header['NAME'].strip().splitlines() if lines: name = lines[0] if ' - ' in name: (name, summary) = name.split(' - ', 1) yield UpstreamDatum('Name', name.strip(), 'confident') yield UpstreamDatum('X-Summary', summary.strip(), 'confident') elif ' ' not in name: yield UpstreamDatum('Name', name.strip(), 'confident') def guess_from_perl_module(path): import subprocess try: stdout = subprocess.check_output(['perldoc', '-u', path]) except subprocess.CalledProcessError: logging.warning('Error running perldoc, skipping.') return yield from guess_from_pod(stdout) def guess_from_perl_dist_name(path, dist_name): mod_path = os.path.join( os.path.dirname(path), 'lib', dist_name.replace('-', '/') + '.pm') if os.path.exists(mod_path): yield from guess_from_perl_module(mod_path) def guess_from_dist_ini(path, trust_package): from configparser import ( RawConfigParser, NoSectionError, NoOptionError, ParsingError, ) parser = RawConfigParser(strict=False) with open(path, 'r') as f: try: parser.read_string('[START]\n' + f.read()) except ParsingError as e: logging.warning('Unable to parse dist.ini: %r', e) try: dist_name = parser['START']['name'] except (NoSectionError, NoOptionError, KeyError): dist_name = None else: yield UpstreamDatum('Name', dist_name, 'certain') try: yield UpstreamDatum('X-Version', parser['START']['version'], 'certain') except (NoSectionError, NoOptionError, KeyError): pass try: yield UpstreamDatum('X-Summary', parser['START']['abstract'], 'certain') except (NoSectionError, NoOptionError, KeyError): pass try: yield UpstreamDatum( 'Bug-Database', parser['MetaResources']['bugtracker.web'], 'certain') except (NoSectionError, NoOptionError, KeyError): pass try: yield UpstreamDatum( 'Repository', parser['MetaResources']['repository.url'], 'certain') except (NoSectionError, NoOptionError, KeyError): pass try: yield UpstreamDatum( 'X-License', parser['START']['license'], 'certain') except (NoSectionError, NoOptionError, KeyError): pass try: copyright = '%s %s' % ( parser['START']['copyright_year'], parser['START']['copyright_holder'], ) except (NoSectionError, NoOptionError, KeyError): pass else: yield UpstreamDatum('X-Copyright', copyright, 'certain') # Wild guess: if dist_name: yield from guess_from_perl_dist_name(path, dist_name) def guess_from_debian_copyright(path, trust_package): from debian.copyright import ( Copyright, NotMachineReadableError, MachineReadableFormatError, ) from_urls = [] with open(path, 'r') as f: try: copyright = Copyright(f, strict=False) except NotMachineReadableError: header = None except MachineReadableFormatError as e: logging.warning('Error parsing copyright file: %s', e) header = None except ValueError as e: # This can happen with an error message of # ValueError: value must not have blank lines 
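# python-debian raises a plain ValueError for this case rather than a
# MachineReadableFormatError, so it needs its own handler.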
logging.warning('Error parsing copyright file: %s', e) header = None else: header = copyright.header if header: if header.upstream_name: yield UpstreamDatum("Name", header.upstream_name, 'certain') if header.upstream_contact: yield UpstreamDatum( "Contact", ','.join(header.upstream_contact), 'certain') if header.source: if ' ' in header.source: from_urls.extend([u for u in re.split('[ ,\n]', header.source) if u]) else: from_urls.append(header.source) if "X-Upstream-Bugs" in header: yield UpstreamDatum( "Bug-Database", header["X-Upstream-Bugs"], 'certain') if "X-Source-Downloaded-From" in header: url = guess_repo_from_url(header["X-Source-Downloaded-From"]) if url is not None: yield UpstreamDatum("Repository", url, 'certain') if header.source: from_urls.extend( [m.group(0) for m in re.finditer(r'((http|https):\/\/([^ ]+))', header.source)]) else: with open(path, 'r') as f: for line in f: m = re.match(r'.* was downloaded from ([^\s]+)', line) if m: from_urls.append(m.group(1)) for from_url in from_urls: yield from _metadata_from_url(from_url, origin=path) repo_url = guess_repo_from_url(from_url) if repo_url: yield UpstreamDatum( 'Repository', repo_url, 'likely') def url_from_cvs_co_command(command): from breezy.location import cvs_to_url from breezy import urlutils import shlex argv = shlex.split(command.decode('utf-8', 'surrogateescape')) args = [arg for arg in argv if arg.strip()] i = 0 cvsroot = None module = None command_seen = False del args[0] while i < len(args): if args[i] == '-d': del args[i] cvsroot = args[i] del args[i] continue if args[i].startswith('-d'): cvsroot = args[i][2:] del args[i] continue if command_seen and not args[i].startswith('-'): module = args[i] elif args[i] in ('co', 'checkout'): command_seen = True del args[i] if cvsroot is not None: url = cvs_to_url(cvsroot) if module is not None: return urlutils.join(url, module) return url return None def url_from_svn_co_command(command): import shlex argv = shlex.split(command.decode('utf-8', 'surrogateescape')) args = [arg for arg in argv if arg.strip()] URL_SCHEMES = ['svn+ssh', 'http', 'https', 'svn'] for arg in args: if any([arg.startswith('%s://' % scheme) for scheme in URL_SCHEMES]): return arg return None def url_from_git_clone_command(command): import shlex argv = shlex.split(command.decode('utf-8', 'surrogateescape')) args = [arg for arg in argv if arg.strip()] i = 0 while i < len(args): if not args[i].startswith('-'): i += 1 continue if '=' in args[i]: del args[i] continue # arguments that take a parameter if args[i] in ('-b', '--depth', '--branch'): del args[i] del args[i] continue del args[i] try: url = args[2] except IndexError: url = args[0] if plausible_vcs_url(url): return url return None def url_from_fossil_clone_command(command): import shlex argv = shlex.split(command.decode('utf-8', 'surrogateescape')) args = [arg for arg in argv if arg.strip()] i = 0 while i < len(args): if not args[i].startswith('-'): i += 1 continue if '=' in args[i]: del args[i] continue del args[i] try: url = args[2] except IndexError: url = args[0] if plausible_vcs_url(url): return url return None def guess_from_install(path, trust_package): # noqa: C901 urls = [] try: with open(path, 'rb') as f: lines = list(f.readlines()) for i, line in enumerate(lines): line = line.strip() cmdline = line.strip().lstrip(b'$').strip() if (cmdline.startswith(b'git clone ') or cmdline.startswith(b'fossil clone ')): while cmdline.endswith(b'\\'): cmdline += lines[i+1] cmdline = cmdline.strip() i += 1 if cmdline.startswith(b'git clone '): url = 
url_from_git_clone_command(cmdline) elif cmdline.startswith(b'fossil clone '): url = url_from_fossil_clone_command(cmdline) if url: urls.append(url) for m in re.findall(b"[\"'`](git clone.*)[\"`']", line): url = url_from_git_clone_command(m) if url: urls.append(url) project_re = b'([^/]+)/([^/?.()"#>\\s]*[^-/?.()"#>\\s])' for m in re.finditer( b'https://github.com/' + project_re + b'(.git)?', line): yield UpstreamDatum( 'Repository', m.group(0).rstrip(b'.').decode().rstrip(), 'possible') m = re.fullmatch( b'https://github.com/' + project_re, line) if m: yield UpstreamDatum( 'Repository', line.strip().rstrip(b'.').decode(), 'possible') m = re.fullmatch(b'git://([^ ]+)', line) if m: yield UpstreamDatum( 'Repository', line.strip().rstrip(b'.').decode(), 'possible') for m in re.finditer( b'https://([^]/]+)/([^]\\s()"#]+)', line): if is_gitlab_site(m.group(1).decode()): url = m.group(0).rstrip(b'.').decode().rstrip() try: repo_url = guess_repo_from_url(url) except ValueError: logger.warning( 'Ignoring invalid URL %s in %s', url, path) else: if repo_url: yield UpstreamDatum( 'Repository', repo_url, 'possible') except IsADirectoryError: pass def guess_from_readme(path, trust_package): # noqa: C901 urls = [] try: with open(path, 'rb') as f: lines = list(f.readlines()) for i, line in enumerate(lines): line = line.strip() cmdline = line.strip().lstrip(b'$').strip() if (cmdline.startswith(b'git clone ') or cmdline.startswith(b'fossil clone ')): while cmdline.endswith(b'\\'): cmdline += lines[i+1] cmdline = cmdline.strip() i += 1 if cmdline.startswith(b'git clone '): url = url_from_git_clone_command(cmdline) elif cmdline.startswith(b'fossil clone '): url = url_from_fossil_clone_command(cmdline) if url: urls.append(url) for m in re.findall(b"[\"'`](git clone.*)[\"`']", line): url = url_from_git_clone_command(m) if url: urls.append(url) m = re.fullmatch(rb'cvs.*-d\s*:pserver:.*', line) if m: url = url_from_cvs_co_command(m.group(0)) if url: urls.append(url) for m in re.finditer(b'($ )?(svn co .*)', line): url = url_from_svn_co_command(m.group(2)) if url: urls.append(url) project_re = b'([^/]+)/([^/?.()"#>\\s]*[^-,/?.()"#>\\s])' for m in re.finditer( b'https://travis-ci.org/' + project_re, line): yield UpstreamDatum( 'Repository', 'https://github.com/%s/%s' % ( m.group(1).decode(), m.group(2).decode().rstrip()), 'possible') for m in re.finditer( b'https://coveralls.io/r/' + project_re, line): yield UpstreamDatum( 'Repository', 'https://github.com/%s/%s' % ( m.group(1).decode(), m.group(2).decode().rstrip()), 'possible') for m in re.finditer( b'https://github.com/([^/]+)/([^/]+)/issues', line): yield UpstreamDatum( 'Bug-Database', m.group(0).decode().rstrip(), 'possible') for m in re.finditer( b'https://github.com/' + project_re + b'(.git)?', line): yield UpstreamDatum( 'Repository', m.group(0).rstrip(b'.').decode().rstrip(), 'possible') m = re.fullmatch( b'https://github.com/' + project_re, line) if m: yield UpstreamDatum( 'Repository', line.strip().rstrip(b'.').decode(), 'possible') m = re.fullmatch(b'git://([^ ]+)', line) if m: yield UpstreamDatum( 'Repository', line.strip().rstrip(b'.').decode(), 'possible') for m in re.finditer( b'https://([^]/]+)/([^]\\s()"#]+)', line): if is_gitlab_site(m.group(1).decode()): url = m.group(0).rstrip(b'.').decode().rstrip() try: repo_url = guess_repo_from_url(url) except ValueError: logger.warning( 'Ignoring invalid URL %s in %s', url, path) else: if repo_url: yield UpstreamDatum( 'Repository', repo_url, 'possible') if path.lower().endswith('readme.md'): with 
open(path, 'rb') as f: from .readme import description_from_readme_md contents = f.read().decode('utf-8', 'surrogateescape') description, extra_md = description_from_readme_md(contents) elif path.lower().endswith('readme.rst'): with open(path, 'rb') as f: from .readme import description_from_readme_rst contents = f.read().decode('utf-8', 'surrogateescape') description, extra_md = description_from_readme_rst(contents) elif path.lower().endswith('readme'): with open(path, 'rb') as f: from .readme import description_from_readme_plain contents = f.read().decode('utf-8', 'surrogateescape') description, extra_md = description_from_readme_plain(contents) else: description = None extra_md = [] if description is not None: yield UpstreamDatum( 'X-Description', description, 'possible') for datum in extra_md: yield datum if path.lower().endswith('readme.pod'): with open(path, 'rb') as f: yield from guess_from_pod(f.read()) except IsADirectoryError: pass def prefer_public(url): parsed_url = urlparse(url) if 'ssh' in parsed_url.scheme: return 1 return 0 urls.sort(key=prefer_public) if urls: yield UpstreamDatum('Repository', urls[0], 'possible') def guess_from_debian_patch(path, trust_package): with open(path, 'rb') as f: for line in f: if line.startswith(b'Forwarded: '): forwarded = line.split(b':', 1)[1].strip() bug_db = bug_database_from_issue_url(forwarded.decode('utf-8')) if bug_db: yield UpstreamDatum('Bug-Database', bug_db, 'possible') repo_url = repo_url_from_merge_request_url( forwarded.decode('utf-8')) if repo_url: yield UpstreamDatum('Repository', repo_url, 'possible') def guess_from_meta_json(path, trust_package): with open(path, 'r') as f: data = json.load(f) if 'name' in data: dist_name = data['name'] yield UpstreamDatum('Name', data['name'], 'certain') else: dist_name = None if 'version' in data: version = str(data['version']) if version.startswith('v'): version = version[1:] yield UpstreamDatum('X-Version', version, 'certain') if 'abstract' in data: yield UpstreamDatum('X-Summary', data['abstract'], 'certain') if 'resources' in data: resources = data['resources'] if 'bugtracker' in resources and 'web' in resources['bugtracker']: yield UpstreamDatum( "Bug-Database", resources["bugtracker"]["web"], 'certain') # TODO(jelmer): Support resources["bugtracker"]["mailto"] if 'homepage' in resources: yield UpstreamDatum( "Homepage", resources["homepage"], 'certain') if 'repository' in resources: repo = resources['repository'] if 'url' in repo: yield UpstreamDatum( 'Repository', repo["url"], 'certain') if 'web' in repo: yield UpstreamDatum( 'Repository-Browse', repo['web'], 'certain') # Wild guess: if dist_name: yield from guess_from_perl_dist_name(path, dist_name) def guess_from_travis_yml(path, trust_package): import ruamel.yaml import ruamel.yaml.reader with open(path, 'rb') as f: try: ruamel.yaml.load(f, ruamel.yaml.SafeLoader) except ruamel.yaml.reader.ReaderError as e: logging.warning('Unable to parse %s: %s', path, e) return def guess_from_meta_yml(path, trust_package): """Guess upstream metadata from a META.yml file. See http://module-build.sourceforge.net/META-spec-v1.4.html for the specification of the format. """ import ruamel.yaml import ruamel.yaml.reader with open(path, 'rb') as f: try: data = ruamel.yaml.load(f, ruamel.yaml.SafeLoader) except ruamel.yaml.reader.ReaderError as e: logging.warning('Unable to parse %s: %s', path, e) return except ruamel.yaml.parser.ParserError as e: logging.warning('Unable to parse %s: %s', path, e) return if data is None: # Empty file? 
return if 'name' in data: dist_name = data['name'] yield UpstreamDatum('Name', data['name'], 'certain') else: dist_name = None if data.get('license'): yield UpstreamDatum('X-License', data['license'], 'certain') if 'version' in data: yield UpstreamDatum('X-Version', str(data['version']), 'certain') if 'resources' in data: resources = data['resources'] if 'bugtracker' in resources: yield UpstreamDatum( 'Bug-Database', resources['bugtracker'], 'certain') if 'homepage' in resources: yield UpstreamDatum( 'Homepage', resources['homepage'], 'certain') if 'repository' in resources: if isinstance(resources['repository'], dict): url = resources['repository'].get('url') else: url = resources['repository'] if url: yield UpstreamDatum( 'Repository', url, 'certain') # Wild guess: if dist_name: yield from guess_from_perl_dist_name(path, dist_name) def guess_from_metainfo(path, trust_package): # See https://www.freedesktop.org/software/appstream/docs/chap-Metadata.html from xml.etree import ElementTree el = ElementTree.parse(path) root = el.getroot() for child in root: if child.tag == 'id': yield UpstreamDatum('Name', child.text, 'certain') if child.tag == 'project_license': yield UpstreamDatum('X-License', child.text, 'certain') if child.tag == 'url': urltype = child.attrib.get('type') if urltype == 'homepage': yield UpstreamDatum('Homepage', child.text, 'certain') elif urltype == 'bugtracker': yield UpstreamDatum('Bug-Database', child.text, 'certain') if child.tag == 'description': yield UpstreamDatum('X-Description', child.text, 'certain') if child.tag == 'summary': yield UpstreamDatum('X-Summary', child.text, 'certain') if child.tag == 'name': yield UpstreamDatum('Name', child.text, 'certain') def guess_from_doap(path, trust_package): # noqa: C901 """Guess upstream metadata from a DOAP file. """ # See https://github.com/ewilderj/doap from xml.etree import ElementTree el = ElementTree.parse(path) root = el.getroot() DOAP_NAMESPACE = 'http://usefulinc.com/ns/doap#' if root.tag == '{http://www.w3.org/1999/02/22-rdf-syntax-ns#}RDF': # If things are wrapped in RDF, unpack. 
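# Assumption: the rdf:RDF wrapper has exactly one child, the doap:Project;
# the unpacking below raises if that does not hold.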
[root] = list(root) if root.tag != ('{%s}Project' % DOAP_NAMESPACE): logging.warning('Doap file does not have DOAP project as root') return def extract_url(el): return el.attrib.get( '{http://www.w3.org/1999/02/22-rdf-syntax-ns#}resource') def extract_lang(el): return el.attrib.get('{http://www.w3.org/XML/1998/namespace}lang') screenshots = [] for child in root: if child.tag == ('{%s}name' % DOAP_NAMESPACE) and child.text: yield UpstreamDatum('Name', child.text, 'certain') elif child.tag == ('{%s}short-name' % DOAP_NAMESPACE) and child.text: yield UpstreamDatum('Name', child.text, 'likely') elif child.tag == ('{%s}bug-database' % DOAP_NAMESPACE): url = extract_url(child) if url: yield UpstreamDatum('Bug-Database', url, 'certain') elif child.tag == ('{%s}homepage' % DOAP_NAMESPACE): url = extract_url(child) if url: yield UpstreamDatum('Homepage', url, 'certain') elif child.tag == ('{%s}download-page' % DOAP_NAMESPACE): url = extract_url(child) if url: yield UpstreamDatum('X-Download', url, 'certain') elif child.tag == ('{%s}shortdesc' % DOAP_NAMESPACE): lang = extract_lang(child) if lang in ('en', None): yield UpstreamDatum('X-Summary', child.text, 'certain') elif child.tag == ('{%s}description' % DOAP_NAMESPACE): lang = extract_lang(child) if lang in ('en', None): yield UpstreamDatum('X-Description', child.text, 'certain') elif child.tag == ('{%s}license' % DOAP_NAMESPACE): pass # TODO elif child.tag == ('{%s}repository' % DOAP_NAMESPACE): for repo in child: if repo.tag in ( '{%s}SVNRepository' % DOAP_NAMESPACE, '{%s}GitRepository' % DOAP_NAMESPACE): repo_location = repo.find( '{http://usefulinc.com/ns/doap#}location') if repo_location is not None: repo_url = extract_url(repo_location) else: repo_url = None if repo_url: yield UpstreamDatum('Repository', repo_url, 'certain') web_location = repo.find( '{http://usefulinc.com/ns/doap#}browse') if web_location is not None: web_url = extract_url(web_location) else: web_url = None if web_url: yield UpstreamDatum( 'Repository-Browse', web_url, 'certain') elif child.tag == '{%s}category' % DOAP_NAMESPACE: pass elif child.tag == '{%s}programming-language' % DOAP_NAMESPACE: pass elif child.tag == '{%s}os' % DOAP_NAMESPACE: pass elif child.tag == '{%s}implements' % DOAP_NAMESPACE: pass elif child.tag == '{https://schema.org/}logo': pass elif child.tag == '{https://schema.org/}screenshot': url = extract_url(child) if url: screenshots.append(url) elif child.tag == '{%s}wiki' % DOAP_NAMESPACE: url = extract_url(child) if url: yield UpstreamDatum('X-Wiki', url, 'certain') elif child.tag == '{%s}maintainer' % DOAP_NAMESPACE: for person in child: if person.tag != '{http://xmlns.com/foaf/0.1/}Person': continue name = person.find('{http://xmlns.com/foaf/0.1/}name').text email_tag = person.find('{http://xmlns.com/foaf/0.1/}mbox') maintainer = Person( name, email_tag.text if email_tag is not None else None) yield UpstreamDatum('X-Maintainer', maintainer, 'certain') elif child.tag == '{%s}mailing-list' % DOAP_NAMESPACE: yield UpstreamDatum('X-MailingList', extract_url(child), 'certain') else: logging.warning('Unknown tag %s in DOAP file', child.tag) def _yield_opam_fields(f): in_field = None val = None field = None for lineno, line in enumerate(f, 1): if in_field and line.rstrip().endswith(in_field): val += line[:-3] in_field = False yield field, val continue elif in_field: val += line continue try: (field, val) = line.rstrip().split(':', 1) except ValueError: logging.debug('Error parsing line %d: %r', lineno, line) in_field = None continue val = val.lstrip() if 
val.startswith('"""'): val = val[3:] if val.endswith('"""'): yield field, val[:-3] in_field = None else: in_field = '"""' elif val.startswith('"'): yield field, val[1:-1] in_field = None elif val.startswith('['): val = val[1:] if val.endswith(']'): yield field, val[:-1] in_field = None else: in_field = ']' def guess_from_opam(path, trust_package=False): # Documentation: https://opam.ocaml.org/doc/Manual.html#Package-definitions with open(path, 'r') as f: for key, value in _yield_opam_fields(f): if key == 'maintainer': yield UpstreamDatum('X-Maintainer', Person.from_string(value), 'confident') elif key == 'license': yield UpstreamDatum('X-License', value, 'confident') elif key == 'homepage': yield UpstreamDatum('Homepage', value, 'confident') elif key == 'dev-repo': yield UpstreamDatum('Repository', value, 'confident') elif key == 'bug-reports': yield UpstreamDatum('Bug-Database', value, 'confident') elif key == 'synopsis': yield UpstreamDatum('X-Summary', value, 'confident') elif key == 'description': yield UpstreamDatum('X-Description', value, 'confident') elif key == 'doc': yield UpstreamDatum('Documentation', value, 'confident') elif key == 'version': yield UpstreamDatum('X-Version', value, 'confident') elif key == 'authors': if isinstance(value, str): yield UpstreamDatum( 'X-Author', [Person.from_string(value)], 'confident') elif isinstance(value, list): yield UpstreamDatum( 'X-Author', [Person.from_string(p) for p in value], 'confident') def guess_from_nuspec(path, trust_package=False): # Documentation: https://docs.microsoft.com/en-us/nuget/reference/nuspec import xml.etree.ElementTree as ET try: root = xmlparse_simplify_namespaces(path, [ "http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd"]) except ET.ParseError as e: logging.warning('Unable to parse nuspec: %s', e) return assert root.tag == 'package', 'root tag is %r' % root.tag metadata = root.find('metadata') if metadata is None: return version_tag = metadata.find('version') if version_tag is not None: yield UpstreamDatum('X-Version', version_tag.text, 'certain') description_tag = metadata.find('description') if description_tag is not None: yield UpstreamDatum('X-Description', description_tag.text, 'certain') authors_tag = metadata.find('authors') if authors_tag is not None: yield UpstreamDatum( 'X-Author', [Person.from_string(p) for p in authors_tag.text.split(',')], 'certain') project_url_tag = metadata.find('projectUrl') if project_url_tag is not None: repo_url = guess_repo_from_url(project_url_tag.text) if repo_url: yield UpstreamDatum('Repository', repo_url, 'confident') yield UpstreamDatum('Homepage', project_url_tag.text, 'certain') license_tag = metadata.find('license') if license_tag is not None: yield UpstreamDatum('X-License', license_tag.text, 'certain') copyright_tag = metadata.find('copyright') if copyright_tag is not None: yield UpstreamDatum('X-Copyright', copyright_tag.text, 'certain') title_tag = metadata.find('title') if title_tag is not None: yield UpstreamDatum('Name', title_tag.text, 'likely') summary_tag = metadata.find('summary') if summary_tag is not None: yield UpstreamDatum('X-Summary', summary_tag.text, 'certain') repository_tag = metadata.find('repository') if repository_tag is not None: repo_url = repository_tag.get('url') branch = repository_tag.get('branch') yield UpstreamDatum('Repository', unsplit_vcs_url(repo_url, branch), 'certain') def guess_from_cabal_lines(lines): # noqa: C901 # TODO(jelmer): Perhaps use a standard cabal parser in Python?
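# For orientation, an illustrative sketch of the kind of stanza this parser
# aims at (field names from the cabal format; the values are made up):
#
#   name:          example
#   homepage:      https://example.com/example
#   bug-reports:   https://example.com/example/issues
#
#   source-repository head
#     type:     git
#     location: https://example.com/example.git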
# The current parser is not really correct, but good enough for our needs. # https://www.haskell.org/cabal/release/cabal-1.10.1.0/doc/users-guide/ repo_url = None repo_branch = None repo_subpath = None section = None for line in lines: if line.lstrip().startswith('--'): # Comment continue if not line.strip(): section = None continue try: (field, value) = line.split(':', 1) except ValueError: if not line.startswith(' '): section = line.strip().lower() continue # The case of field names is not significant field = field.lower() value = value.strip() if not field.startswith(' '): if field == 'homepage': yield 'Homepage', value if field == 'bug-reports': yield 'Bug-Database', value if field == 'name': yield 'Name', value if field == 'maintainer': yield 'X-Maintainer', Person.from_string(value) if field == 'copyright': yield 'X-Copyright', value if field == 'license': yield 'X-License', value if field == 'author': yield 'X-Author', Person.from_string(value) else: field = field.strip() if section == 'source-repository head': if field == 'location': repo_url = value if field == 'branch': repo_branch = value if field == 'subdir': repo_subpath = value if repo_url: yield ( 'Repository', unsplit_vcs_url(repo_url, repo_branch, repo_subpath)) def guess_from_cabal(path, trust_package=False): # noqa: C901 with open(path, 'r', encoding='utf-8') as f: for name, value in guess_from_cabal_lines(f): yield UpstreamDatum(name, value, 'certain', origin=path) def is_email_address(value: str) -> bool: return '@' in value or ' (at) ' in value def guess_from_configure(path, trust_package=False): if os.path.isdir(path): return with open(path, 'rb') as f: for line in f: if b'=' not in line: continue (key, value) = line.strip().split(b'=', 1) if b' ' in key: continue if b'$' in value: continue value = value.strip() if value.startswith(b"'") and value.endswith(b"'"): value = value[1:-1] if not value: continue if key == b'PACKAGE_NAME': yield UpstreamDatum( 'Name', value.decode(), 'certain', './configure') elif key == b'PACKAGE_VERSION': yield UpstreamDatum( 'X-Version', value.decode(), 'certain', './configure') elif key == b'PACKAGE_BUGREPORT': if value in (b'BUG-REPORT-ADDRESS', ): certainty = 'invalid' elif (is_email_address(value.decode()) and not value.endswith(b'gnu.org')): # Downgrade the trustworthiness of this field for most # upstreams if it contains an e-mail address. Most # upstreams seem to just set this to some random address, # and then forget about it. certainty = 'possible' elif b'mailing list' in value: # Downgrade the trustworthiness of this field if # it contains a mailing list certainty = 'possible' else: parsed_url = urlparse(value.decode()) if parsed_url.path.strip('/'): certainty = 'certain' else: # It seems unlikely that the bug submit URL lives at # the root. certainty = 'possible' if certainty != 'invalid': yield UpstreamDatum( 'Bug-Submit', value.decode(), certainty, './configure') elif key == b'PACKAGE_URL': yield UpstreamDatum( 'Homepage', value.decode(), 'certain', './configure') def guess_from_r_description(path, trust_package: bool = False): # noqa: C901 import textwrap # See https://r-pkgs.org/description.html with open(path, 'rb') as f: # TODO(jelmer): use rfc822 instead?
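# R DESCRIPTION files use the same RFC 822-style 'Field: value' paragraph
# layout as Debian control files, so Deb822 parses them well enough.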
from debian.deb822 import Deb822 description = Deb822(f) if 'Package' in description: yield UpstreamDatum('Name', description['Package'], 'certain') if 'Repository' in description: yield UpstreamDatum( 'Archive', description['Repository'], 'certain') if 'BugReports' in description: yield UpstreamDatum( 'Bug-Database', description['BugReports'], 'certain') if description.get('Version'): yield UpstreamDatum('X-Version', description['Version'], 'certain') if 'License' in description: yield UpstreamDatum('X-License', description['License'], 'certain') if 'Title' in description: yield UpstreamDatum('X-Summary', description['Title'], 'certain') if 'Description' in description: lines = description['Description'].splitlines(True) if lines: reflowed = lines[0] + textwrap.dedent(''.join(lines[1:])) yield UpstreamDatum('X-Description', reflowed, 'certain') if 'Maintainer' in description: yield UpstreamDatum( 'X-Maintainer', Person.from_string(description['Maintainer']), 'certain') if 'URL' in description: entries = [entry.strip() for entry in re.split('[\n,]', description['URL'])] urls = [] for entry in entries: m = re.match('([^ ]+) \\((.*)\\)', entry) if m: url = m.group(1) label = m.group(2) else: url = entry label = None urls.append((label, url)) if len(urls) == 1: yield UpstreamDatum('Homepage', urls[0][1], 'possible') for label, url in urls: parsed_url = urlparse(url) if parsed_url.hostname == 'bioconductor.org': yield UpstreamDatum('Archive', 'Bioconductor', 'confident') if label and label.lower() in ('devel', 'repository'): yield UpstreamDatum('Repository', sanitize_vcs_url(url), 'certain') elif label and label.lower() in ('homepage', ): yield UpstreamDatum('Homepage', url, 'certain') else: repo_url = guess_repo_from_url(url) if repo_url: yield UpstreamDatum('Repository', sanitize_vcs_url(repo_url), 'certain') def guess_from_environment(): try: yield UpstreamDatum( 'Repository', os.environ['UPSTREAM_BRANCH_URL'], 'certain') except KeyError: pass def guess_from_path(path): basename = os.path.basename(os.path.abspath(path)) m = re.fullmatch('(.*)-([0-9.]+)', basename) if m: yield UpstreamDatum('Name', m.group(1), 'possible') yield UpstreamDatum('X-Version', m.group(2), 'possible') else: yield UpstreamDatum('Name', basename, 'possible') def guess_from_cargo(path, trust_package): # see https://doc.rust-lang.org/cargo/reference/manifest.html try: from tomlkit import loads from tomlkit.exceptions import ParseError except ImportError: return try: with open(path, 'r') as f: cargo = loads(f.read()) except FileNotFoundError: return except ParseError as e: logging.warning('Error parsing toml file %s: %s', path, e) return try: package = cargo['package'] except KeyError: pass else: if 'name' in package: yield UpstreamDatum('Name', str(package['name']), 'certain') if 'description' in package: yield UpstreamDatum('X-Summary', str(package['description']), 'certain') if 'homepage' in package: yield UpstreamDatum('Homepage', str(package['homepage']), 'certain') if 'license' in package: yield UpstreamDatum('X-License', str(package['license']), 'certain') if 'repository' in package: yield UpstreamDatum('Repository', str(package['repository']), 'certain') if 'version' in package: yield UpstreamDatum('X-Version', str(package['version']), 'confident') def guess_from_pyproject_toml(path, trust_package): try: from tomlkit import loads from tomlkit.exceptions import ParseError except ImportError: return try: with open(path, 'r') as f: pyproject = loads(f.read()) except FileNotFoundError: return except ParseError as e: 
logging.warning('Error parsing toml file %s: %s', path, e) return if 'poetry' in pyproject.get('tool', []): poetry = pyproject['tool']['poetry'] if 'version' in poetry: yield UpstreamDatum('X-Version', str(poetry['version']), 'certain') if 'description' in poetry: yield UpstreamDatum('X-Summary', str(poetry['description']), 'certain') if 'license' in poetry: yield UpstreamDatum('X-License', str(poetry['license']), 'certain') if 'repository' in poetry: yield UpstreamDatum('Repository', str(poetry['repository']), 'certain') if 'name' in poetry: yield UpstreamDatum('Name', str(poetry['name']), 'certain') def guess_from_pom_xml(path, trust_package=False): # noqa: C901 # Documentation: https://maven.apache.org/pom.html import xml.etree.ElementTree as ET try: root = xmlparse_simplify_namespaces(path, [ 'http://maven.apache.org/POM/4.0.0']) except ET.ParseError as e: logging.warning('Unable to parse pom.xml: %s', e) return assert root.tag == 'project', 'root tag is %r' % root.tag name_tag = root.find('name') if name_tag is not None and '$' not in name_tag.text: yield UpstreamDatum('Name', name_tag.text, 'certain') else: artifact_id_tag = root.find('artifactId') if artifact_id_tag is not None: yield UpstreamDatum('Name', artifact_id_tag.text, 'possible') description_tag = root.find('description') if description_tag is not None and description_tag.text: yield UpstreamDatum('X-Summary', description_tag.text, 'certain') version_tag = root.find('version') if version_tag is not None and '$' not in version_tag.text: yield UpstreamDatum('X-Version', version_tag.text, 'certain') licenses_tag = root.find('licenses') if licenses_tag is not None: licenses = [] for license_tag in licenses_tag.findall('license'): name_tag = license_tag.find('name') if name_tag is not None: licenses.append(name_tag.text) for scm_tag in root.findall('scm'): url_tag = scm_tag.find('url') if url_tag is not None: if (url_tag.text.startswith('scm:') and url_tag.text.count(':') >= 3): url = url_tag.text.split(':', 2)[2] else: url = url_tag.text if plausible_vcs_browse_url(url): yield UpstreamDatum('Repository-Browse', url, 'certain') connection_tag = scm_tag.find('connection') if connection_tag is not None: connection = connection_tag.text try: (scm, provider, provider_specific) = connection.split(':', 2) except ValueError: logging.warning( 'Invalid format for SCM connection: %s', connection) continue if scm != 'scm': logging.warning( 'SCM connection does not start with scm: prefix: %s', connection) continue yield UpstreamDatum( 'Repository', provider_specific, 'certain') for issue_mgmt_tag in root.findall('issueManagement'): url_tag = issue_mgmt_tag.find('url') if url_tag is not None: yield UpstreamDatum('Bug-Database', url_tag.text, 'certain') url_tag = root.find('url') if url_tag is not None: if url_tag.text.startswith('scm:'): # Yeah, uh, not a URL. pass else: yield UpstreamDatum('Homepage', url_tag.text, 'certain') def guess_from_git_config(path, trust_package=False): # See https://git-scm.com/docs/git-config from dulwich.config import ConfigFile cfg = ConfigFile.from_path(path) # If there's a remote named upstream, that's a plausible source. try: urlb = cfg.get((b'remote', b'upstream'), b'url') except KeyError: pass else: url = urlb.decode('utf-8') if not url.startswith('../'): yield UpstreamDatum('Repository', url, 'likely') # It's less likely that origin is correct, but let's try anyway # (with a lower certainty) # Either way, it's probably incorrect if this is a packaging # repository.
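# Assumption: 'path' points at .git/config, so a packaging checkout's
# debian/ directory, if present, sits next to the .git directory (two
# levels up from the config file).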
if not os.path.exists( os.path.join(os.path.dirname(path), '..', 'debian')): try: urlb = cfg.get((b'remote', b'origin'), b'url') except KeyError: pass else: url = urlb.decode('utf-8') if not url.startswith('../'): yield UpstreamDatum('Repository', url, 'possible') def guess_from_get_orig_source(path, trust_package=False): with open(path, 'rb') as f: for line in f: if line.startswith(b'git clone'): url = url_from_git_clone_command(line) if url: yield UpstreamDatum('Repository', url, 'likely') # https://docs.github.com/en/free-pro-team@latest/github/\ # managing-security-vulnerabilities/adding-a-security-policy-to-your-repository def guess_from_security_md(path, trust_package=False): if path.startswith('./'): path = path[2:] # TODO(jelmer): scan SECURITY.md for email addresses/URLs with instructions yield UpstreamDatum('X-Security-MD', path, 'certain') def guess_from_go_mod(path, trust_package=False): # See https://golang.org/doc/modules/gomod-ref with open(path, 'rb') as f: for line in f: if line.startswith(b'module '): modname = line.strip().split(b' ', 1)[1] yield UpstreamDatum('Name', modname.decode('utf-8'), 'certain') def guess_from_gemspec(path, trust_package=False): # TODO(jelmer): use a proper ruby wrapper instead? with open(path, 'r') as f: for line in f: if line.startswith('#'): continue if not line.strip(): continue if line in ('Gem::Specification.new do |s|\n', 'end\n'): continue if line.startswith(' s.'): try: (key, rawval) = line[4:].split('=', 1) except ValueError: continue key = key.strip() rawval = rawval.strip() if rawval.startswith('"') and rawval.endswith('".freeze'): val = rawval[1:-len('".freeze')] elif rawval.startswith('"') and rawval.endswith('"'): val = rawval[1:-1] else: continue if key == "name": yield UpstreamDatum('Name', val, 'certain') elif key == 'version': yield UpstreamDatum('X-Version', val, 'certain') elif key == 'homepage': yield UpstreamDatum('Homepage', val, 'certain') elif key == 'summary': yield UpstreamDatum('X-Summary', val, 'certain') elif key == 'description': yield UpstreamDatum('X-Description', val, 'certain') else: logging.debug( 'ignoring unparseable line in %s: %r', path, line) def guess_from_makefile_pl(path, trust_package=False): dist_name = None with open(path, 'rb') as f: for line in f: m = re.fullmatch(br"name '([^'\"]+)';$", line.rstrip()) if m: dist_name = m.group(1).decode() yield UpstreamDatum('Name', dist_name, 'confident') m = re.fullmatch(br"repository '([^'\"]+)';$", line.rstrip()) if m: yield UpstreamDatum('Repository', m.group(1).decode(), 'confident') if dist_name: yield from guess_from_perl_dist_name(path, dist_name) def guess_from_wscript(path, trust_package=False): with open(path, 'rb') as f: for line in f: m = re.fullmatch(b'APPNAME = [\'"](.*)[\'"]', line.rstrip(b'\n')) if m: yield UpstreamDatum('Name', m.group(1).decode(), 'confident') m = re.fullmatch(b'VERSION = [\'"](.*)[\'"]', line.rstrip(b'\n')) if m: yield UpstreamDatum('X-Version', m.group(1).decode(), 'confident') def guess_from_authors(path, trust_package=False): authors = [] with open(path, 'rb') as f: for line in f: m = line.strip().decode('utf-8', 'surrogateescape') if not m: continue if m.startswith('arch-tag: '): continue if m.endswith(':'): continue if m.startswith('$Id'): continue if m.startswith('*') or m.startswith('-'): m = m[1:].strip() if len(m) < 3: continue if m.endswith('.'): continue if ' for ' in m: m = m.split(' for ')[0] if not m[0].isalpha(): continue if '<' in m or m.count(' ') < 5: authors.append(Person.from_string(m)) yield 
UpstreamDatum('X-Authors', authors, 'likely') def _get_guessers(path, trust_package=False): # noqa: C901 CANDIDATES = [ ('debian/watch', guess_from_debian_watch), ('debian/control', guess_from_debian_control), ('debian/changelog', guess_from_debian_changelog), ('debian/rules', guess_from_debian_rules), ('PKG-INFO', guess_from_pkg_info), ('package.json', guess_from_package_json), ('composer.json', guess_from_composer_json), ('package.xml', guess_from_package_xml), ('dist.ini', guess_from_dist_ini), ('debian/copyright', guess_from_debian_copyright), ('META.json', guess_from_meta_json), ('MYMETA.json', guess_from_meta_json), ('META.yml', guess_from_meta_yml), ('MYMETA.yml', guess_from_meta_yml), ('configure', guess_from_configure), ('DESCRIPTION', guess_from_r_description), ('Cargo.toml', guess_from_cargo), ('pom.xml', guess_from_pom_xml), ('.git/config', guess_from_git_config), ('debian/get-orig-source.sh', guess_from_get_orig_source), ('SECURITY.md', guess_from_security_md), ('.github/SECURITY.md', guess_from_security_md), ('docs/SECURITY.md', guess_from_security_md), ('pyproject.toml', guess_from_pyproject_toml), ('setup.cfg', guess_from_setup_cfg), ('go.mod', guess_from_go_mod), ('Makefile.PL', guess_from_makefile_pl), ('wscript', guess_from_wscript), ('AUTHORS', guess_from_authors), ('INSTALL', guess_from_install), ] # Search for something Python-y found_pkg_info = os.path.exists(os.path.join(path, 'PKG-INFO')) for entry in os.scandir(path): if entry.name.endswith('.egg-info'): CANDIDATES.append( (os.path.join(entry.name, 'PKG-INFO'), guess_from_pkg_info)) found_pkg_info = True if entry.name.endswith('.dist-info'): CANDIDATES.append( (os.path.join(entry.name, 'METADATA'), guess_from_pkg_info)) found_pkg_info = True if not found_pkg_info and os.path.exists(os.path.join(path, 'setup.py')): CANDIDATES.append(('setup.py', guess_from_setup_py)) for entry in os.scandir(path): if entry.name.endswith('.gemspec'): CANDIDATES.append((entry.name, guess_from_gemspec)) # TODO(jelmer): Perhaps scan all directories if no other primary project # information file has been found? 
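# An R DESCRIPTION file may sit one level down, e.g. when the scanned tree
# is the parent directory of an unpacked R source package.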
for entry in os.scandir(path): if entry.is_dir(): subpath = os.path.join(entry.path, 'DESCRIPTION') if os.path.exists(subpath): CANDIDATES.append( (os.path.join(entry.name, 'DESCRIPTION'), guess_from_r_description)) doap_filenames = [ n for n in os.listdir(path) if n.endswith('.doap') or (n.endswith('.xml') and n.startswith('doap_XML_'))] if doap_filenames: if len(doap_filenames) == 1: CANDIDATES.append((doap_filenames[0], guess_from_doap)) else: logging.warning( 'More than one doap filename, ignoring all: %r', doap_filenames) metainfo_filenames = [ n for n in os.listdir(path) if n.endswith('.metainfo.xml')] if metainfo_filenames: if len(metainfo_filenames) == 1: CANDIDATES.append((metainfo_filenames[0], guess_from_metainfo)) else: logging.warning( 'More than one metainfo filename, ignoring all: %r', metainfo_filenames) cabal_filenames = [n for n in os.listdir(path) if n.endswith('.cabal')] if cabal_filenames: if len(cabal_filenames) == 1: CANDIDATES.append((cabal_filenames[0], guess_from_cabal)) else: logging.warning( 'More than one cabal filename, ignoring all: %r', cabal_filenames) readme_filenames = [ n for n in os.listdir(path) if any([n.startswith(p) for p in ['readme', 'ReadMe', 'Readme', 'README', 'HACKING', 'CONTRIBUTING']]) and os.path.splitext(n)[1] not in ('.html', '.pdf', '.xml') and not n.endswith('~')] CANDIDATES.extend([(n, guess_from_readme) for n in readme_filenames]) nuspec_filenames = [n for n in os.listdir(path) if n.endswith('.nuspec')] if nuspec_filenames: if len(nuspec_filenames) == 1: CANDIDATES.append((nuspec_filenames[0], guess_from_nuspec)) else: logging.warning( 'More than one nuspec filename, ignoring all: %r', nuspec_filenames) opam_filenames = [n for n in os.listdir(path) if n.endswith('.opam')] if opam_filenames: if len(opam_filenames) == 1: CANDIDATES.append((opam_filenames[0], guess_from_opam)) else: logging.warning( 'More than one opam filename, ignoring all: %r', opam_filenames) try: debian_patches = [ os.path.join('debian', 'patches', n) for n in os.listdir(os.path.join(path, 'debian', 'patches')) if os.path.isfile(os.path.join(path, 'debian', 'patches', n))] except FileNotFoundError: pass else: CANDIDATES.extend( [(p, guess_from_debian_patch) for p in debian_patches]) yield 'environment', guess_from_environment() yield 'path', guess_from_path(path) for relpath, guesser in CANDIDATES: abspath = os.path.join(path, relpath) if not os.path.exists(abspath): continue yield relpath, guesser(abspath, trust_package=trust_package) def guess_upstream_metadata_items( path: str, trust_package: bool = False, minimum_certainty: Optional[str] = None ) -> Iterable[UpstreamDatum]: """Guess upstream metadata items, in no particular order. Args: path: Path to the package trust_package: Whether to trust the package contents and e.g.
run executables in it Yields: UpstreamDatum """ for entry in guess_upstream_info(path, trust_package=trust_package): if isinstance(entry, UpstreamDatum): if certainty_sufficient(entry.certainty, minimum_certainty): yield entry def guess_upstream_info( path: str, trust_package: bool = False) -> Iterable[UpstreamDatum]: guessers = _get_guessers(path, trust_package=trust_package) for name, guesser in guessers: for entry in guesser: if entry.origin is None: entry.origin = name yield entry def get_upstream_info(path, trust_package=False, net_access=False, consult_external_directory=False, check=False): metadata_items = [] for entry in guess_upstream_info(path, trust_package=trust_package): if isinstance(entry, UpstreamDatum): metadata_items.append(entry) metadata = summarize_upstream_metadata( metadata_items, path, net_access=net_access, consult_external_directory=consult_external_directory, check=check) return metadata def summarize_upstream_metadata( metadata_items, path, net_access=False, consult_external_directory=False, check=False): """Summarize the upstream metadata into a dictionary. Args: metadata_items: Iterator over metadata items path: Path to the package net_access: Whether to allow net access consult_external_directory: Whether to pull in data from external (user-maintained) directories. """ upstream_metadata = {} update_from_guesses( upstream_metadata, filter_bad_guesses(metadata_items)) extend_upstream_metadata( upstream_metadata, path, net_access=net_access, consult_external_directory=consult_external_directory) if check: check_upstream_metadata(upstream_metadata) fix_upstream_metadata(upstream_metadata) return {k: v.value for (k, v) in upstream_metadata.items()} def guess_upstream_metadata( path, trust_package=False, net_access=False, consult_external_directory=False, check=False): """Guess the upstream metadata dictionary. Args: path: Path to the package trust_package: Whether to trust the package contents and e.g. run executables in it net_access: Whether to allow net access consult_external_directory: Whether to pull in data from external (user-maintained) directories.
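Example (illustrative only; the exact fields and values depend on the
package being scanned):

>>> guess_upstream_metadata('.', net_access=False)  # doctest: +SKIP
{'Name': 'example', 'Homepage': 'https://example.com/example'}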
""" metadata_items = guess_upstream_metadata_items( path, trust_package=trust_package) return summarize_upstream_metadata( metadata_items, path, net_access=net_access, consult_external_directory=consult_external_directory, check=check) def _possible_fields_missing(upstream_metadata, fields, field_certainty): for field in fields: if field not in upstream_metadata: return True if upstream_metadata[field].certainty != 'certain': return True else: return False def _sf_git_extract_url(page): try: from bs4 import BeautifulSoup except ModuleNotFoundError: logging.warning( 'Not scanning sourceforge page, since python3-bs4 is missing') return None bs = BeautifulSoup(page, features='lxml') el = bs.find(id='access_url') if el is not None: return None value = el.get('value') if value is None: return None access_command = value.split(' ') if access_command[:2] != ['git', 'clone']: return None return access_command[2] def guess_from_sf(sf_project: str, subproject: Optional[str] = None): # noqa: C901 try: data = get_sf_metadata(sf_project) except socket.timeout: logging.warning( 'timeout contacting sourceforge, ignoring: %s', sf_project) return except urllib.error.URLError as e: logging.warning( 'Unable to retrieve sourceforge project metadata: %s: %s', sf_project, e) return if data.get('name'): yield 'Name', data['name'] if data.get('external_homepage'): yield 'Homepage', data['external_homepage'] if data.get('preferred_support_url'): if verify_bug_database_url(data['preferred_support_url']): yield 'Bug-Database', data['preferred_support_url'] # In theory there are screenshots linked from the sourceforge project that # we can use, but if there are multiple "subprojects" then it will be # unclear which one they belong to. # TODO(jelmer): What about cvs and bzr? VCS_NAMES = ['hg', 'git', 'svn', 'cvs', 'bzr'] vcs_tools = [ (tool['name'], tool.get('mount_label'), tool['url']) for tool in data.get('tools', []) if tool['name'] in VCS_NAMES] if len(vcs_tools) > 1: # Try to filter out some irrelevant stuff vcs_tools = [tool for tool in vcs_tools if tool[2].strip('/').rsplit('/')[-1] not in ['www', 'homepage']] if len(vcs_tools) > 1 and subproject: new_vcs_tools = [ tool for tool in vcs_tools if tool[1] == subproject] if len(new_vcs_tools) > 0: vcs_tools = new_vcs_tools # if both vcs and another tool appear, then assume cvs is old. 
def guess_from_repology(repology_project):
    try:
        metadata = get_repology_metadata(repology_project)
    except socket.timeout:
        logging.warning(
            'timeout contacting repology, ignoring: %s', repology_project)
        return

    fields = {}

    def _add_field(name, value, add):
        fields.setdefault(name, {})
        fields[name].setdefault(value, 0)
        fields[name][value] += add

    for entry in metadata:
        if entry.get('status') == 'outdated':
            score = 1
        else:
            score = 10

        if 'www' in entry:
            for www in entry['www']:
                _add_field('Homepage', www, score)

        if 'licenses' in entry:
            for license in entry['licenses']:
                _add_field('X-License', license, score)

        if 'summary' in entry:
            _add_field('X-Summary', entry['summary'], score)

        if 'downloads' in entry:
            for download in entry['downloads']:
                _add_field('X-Download', download, score)

    for field, scores in fields.items():
        # Yield the value with the highest accumulated score.
        yield field, max(scores.items(), key=operator.itemgetter(1))[0]


def extend_from_external_guesser(
        upstream_metadata, guesser_certainty, guesser_fields, guesser):
    if not _possible_fields_missing(
            upstream_metadata, guesser_fields, guesser_certainty):
        return

    update_from_guesses(
        upstream_metadata,
        [UpstreamDatum(key, value, guesser_certainty)
         for (key, value) in guesser])


def extend_from_repology(upstream_metadata, minimum_certainty,
                         source_package):
    # The set of fields that repology can possibly provide:
    repology_fields = ['Homepage', 'X-License', 'X-Summary', 'X-Download']
    certainty = 'confident'

    if not certainty_sufficient(certainty, minimum_certainty):
        # Don't bother talking to repology if we're not
        # speculating.
        return

    return extend_from_external_guesser(
        upstream_metadata, certainty, repology_fields,
        guess_from_repology(source_package))
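
# Worked example (illustrative, not part of the module) of the scoring used
# by guess_from_repology above: current packaging entries contribute 10
# points per occurrence, outdated ones 1 point, and the highest-scoring
# value per field wins.
def _example_repology_scoring():
    fields = {
        'Homepage': {'https://example.org': 20, 'http://old.example.org': 1}}
    for field, scores in fields.items():
        best = max(scores.items(), key=lambda item: item[1])[0]
        assert best == 'https://example.org'
        print('%s: %s' % (field, best))
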
class NoSuchHackagePackage(Exception):

    def __init__(self, package):
        self.package = package


def guess_from_hackage(hackage_package):
    http_url = 'http://hackage.haskell.org/package/%s/%s.cabal' % (
        hackage_package, hackage_package)
    headers = {'User-Agent': USER_AGENT}
    try:
        http_contents = urlopen(
            Request(http_url, headers=headers),
            timeout=DEFAULT_URLLIB_TIMEOUT).read()
    except urllib.error.HTTPError as e:
        if e.code == 404:
            raise NoSuchHackagePackage(hackage_package)
        raise
    return guess_from_cabal_lines(
        http_contents.decode('utf-8', 'surrogateescape').splitlines(True))


def extend_from_hackage(upstream_metadata, hackage_package):
    # The set of fields that Hackage can possibly provide:
    hackage_fields = [
        'Homepage', 'Name', 'Repository', 'X-Maintainer', 'X-Copyright',
        'X-License', 'Bug-Database']
    hackage_certainty = upstream_metadata['Archive'].certainty

    return extend_from_external_guesser(
        upstream_metadata, hackage_certainty, hackage_fields,
        guess_from_hackage(hackage_package))


def guess_from_crates_io(crate: str):
    data = _load_json_url('https://crates.io/api/v1/crates/%s' % crate)
    crate_data = data['crate']
    yield 'Name', crate_data['name']
    if crate_data.get('homepage'):
        yield 'Homepage', crate_data['homepage']
    if crate_data.get('repository'):
        yield 'Repository', crate_data['repository']
    if crate_data.get('newest_version'):
        yield 'X-Version', crate_data['newest_version']
    if crate_data.get('description'):
        yield 'X-Summary', crate_data['description']


class NoSuchCrate(Exception):

    def __init__(self, crate):
        self.crate = crate


def extend_from_crates_io(upstream_metadata, crate):
    # The set of fields that crates.io can possibly provide:
    crates_io_fields = [
        'Homepage', 'Name', 'Repository', 'X-Version', 'X-Summary']
    crates_io_certainty = upstream_metadata['Archive'].certainty

    return extend_from_external_guesser(
        upstream_metadata, crates_io_certainty, crates_io_fields,
        guess_from_crates_io(crate))


def extend_from_sf(upstream_metadata, sf_project):
    # The set of fields that sf can possibly provide:
    sf_fields = ['Homepage', 'Name', 'Repository', 'Bug-Database']
    sf_certainty = upstream_metadata['Archive'].certainty

    if 'Name' in upstream_metadata:
        subproject = upstream_metadata['Name'].value
    else:
        subproject = None

    return extend_from_external_guesser(
        upstream_metadata, sf_certainty, sf_fields,
        guess_from_sf(sf_project, subproject=subproject))


def extend_from_pecl(upstream_metadata, pecl_url, certainty):
    pecl_fields = ['Homepage', 'Repository', 'Bug-Database']

    return extend_from_external_guesser(
        upstream_metadata, certainty, pecl_fields,
        guess_from_pecl_url(pecl_url))


def extend_from_lp(upstream_metadata, minimum_certainty, package,
                   distribution=None, suite=None):
    # The set of fields that Launchpad can possibly provide:
    lp_fields = ['Homepage', 'Repository', 'Name']
    lp_certainty = 'possible'

    if not certainty_sufficient(lp_certainty, minimum_certainty):
        # Don't bother talking to launchpad if we're not
        # speculating.
        return

    extend_from_external_guesser(
        upstream_metadata, lp_certainty, lp_fields, guess_from_launchpad(
            package, distribution=distribution, suite=suite))


def extend_from_aur(upstream_metadata, minimum_certainty, package):
    # The set of fields that AUR can possibly provide:
    aur_fields = ['Homepage', 'Repository']
    aur_certainty = 'possible'

    if not certainty_sufficient(aur_certainty, minimum_certainty):
        # Don't bother talking to AUR if we're not speculating.
        return
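
# Illustrative sketch, not part of the module: the extend_from_* helpers
# above only talk to external services when the certainty they can offer
# meets minimum_certainty. With a strict minimum, the AUR lookup is a no-op.
def _example_certainty_gate():
    md = {}
    extend_from_aur(md, minimum_certainty='certain', package='example')
    assert md == {}  # 'possible' < 'certain', so no network access happened
    # With minimum_certainty=None the lookup proceeds and may add
    # 'Homepage' and 'Repository' entries at 'possible' certainty.
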
    return extend_from_external_guesser(
        upstream_metadata, aur_certainty, aur_fields,
        guess_from_aur(package))


def extract_sf_project_name(url):
    if isinstance(url, list):
        return None
    m = re.fullmatch(r'https?://(.*)\.(sf|sourceforge)\.(net|io)/?', url)
    if m:
        return m.group(1)
    m = re.match('https://sourceforge.net/(projects|p)/([^/]+)', url)
    if m:
        return m.group(2)


def repo_url_from_merge_request_url(url):
    parsed_url = urlparse(url)
    if parsed_url.netloc == 'github.com':
        path_elements = parsed_url.path.strip('/').split('/')
        # GitHub merge requests live under "pull"; strip that component to
        # get back to the repository itself.
        if len(path_elements) > 2 and path_elements[2] == 'pull':
            return urlunparse(
                ('https', 'github.com', '/'.join(path_elements[:2]),
                 None, None, None))
    if is_gitlab_site(parsed_url.netloc):
        path_elements = parsed_url.path.strip('/').split('/')
        if (len(path_elements) > 2
                and path_elements[-2] == 'merge_requests'
                and path_elements[-1].isdigit()):
            return urlunparse(
                ('https', parsed_url.netloc, '/'.join(path_elements[:-2]),
                 None, None, None))


def bug_database_from_issue_url(url):
    parsed_url = urlparse(url)
    if parsed_url.netloc == 'github.com':
        path_elements = parsed_url.path.strip('/').split('/')
        if len(path_elements) > 2 and path_elements[2] == 'issues':
            return urlunparse(
                ('https', 'github.com', '/'.join(path_elements[:3]),
                 None, None, None))
    if is_gitlab_site(parsed_url.netloc):
        path_elements = parsed_url.path.strip('/').split('/')
        if (len(path_elements) > 2
                and path_elements[-2] == 'issues'
                and path_elements[-1].isdigit()):
            # Drop only the issue number; keep the trailing 'issues'
            # component, matching the GitHub branch above.
            return urlunparse(
                ('https', parsed_url.netloc, '/'.join(path_elements[:-1]),
                 None, None, None))


def guess_bug_database_url_from_repo_url(url):
    parsed_url = urlparse(url)
    if parsed_url.netloc == 'github.com':
        path = '/'.join(parsed_url.path.split('/')[:3])
        if path.endswith('.git'):
            path = path[:-4]
        path = path + '/issues'
        return urlunparse(
            ('https', 'github.com', path, None, None, None))
    if is_gitlab_site(parsed_url.hostname):
        path = '/'.join(parsed_url.path.split('/')[:3])
        if path.endswith('.git'):
            path = path[:-4]
        path = path + '/issues'
        return urlunparse(
            ('https', parsed_url.hostname, path, None, None, None))
    return None


def bug_database_url_from_bug_submit_url(url):
    parsed_url = urlparse(url)
    path_elements = parsed_url.path.strip('/').split('/')
    if parsed_url.netloc == 'github.com':
        if len(path_elements) not in (3, 4):
            return None
        if path_elements[2] != 'issues':
            return None
        return urlunparse(
            ('https', 'github.com', '/'.join(path_elements[:3]),
             None, None, None))
    if parsed_url.netloc == 'bugs.launchpad.net':
        if len(path_elements) >= 1:
            return urlunparse(
                parsed_url._replace(path='/%s' % path_elements[0]))
    if is_gitlab_site(parsed_url.netloc):
        if len(path_elements) < 2:
            return None
        if path_elements[-2] != 'issues':
            return None
        if path_elements[-1] == 'new':
            path_elements.pop(-1)
        return urlunparse(
            parsed_url._replace(path='/'.join(path_elements)))
    if parsed_url.hostname == 'sourceforge.net':
        if len(path_elements) < 3:
            return None
        if path_elements[0] != 'p' or path_elements[2] != 'bugs':
            return None
        if len(path_elements) > 3:
            path_elements.pop(-1)
        return urlunparse(
            parsed_url._replace(path='/'.join(path_elements)))
    return None


def bug_submit_url_from_bug_database_url(url):
    parsed_url = urlparse(url)
    path_elements = parsed_url.path.strip('/').split('/')
    if parsed_url.netloc == 'github.com':
        if len(path_elements) != 3:
            return None
        if path_elements[2] != 'issues':
            return None
        return urlunparse(
            ('https', 'github.com', parsed_url.path + '/new',
             None, None, None))
    if parsed_url.netloc == 'bugs.launchpad.net':
        if len(path_elements) == 1:
            return urlunparse(
parsed_url._replace(path=parsed_url.path+'/+filebug')) if is_gitlab_site(parsed_url.netloc): if len(path_elements) < 2: return None if path_elements[-1] != 'issues': return None return urlunparse( parsed_url._replace(path=parsed_url.path.rstrip('/')+'/new')) return None def verify_bug_database_url(url): parsed_url = urlparse(url) if parsed_url.netloc == 'github.com': path_elements = parsed_url.path.strip('/').split('/') if len(path_elements) < 3 or path_elements[2] != 'issues': return False api_url = 'https://api.github.com/repos/%s/%s' % ( path_elements[0], path_elements[1]) try: data = _load_json_url(api_url) except urllib.error.HTTPError as e: if e.code == 404: return False if e.code == 403: # Probably rate limited logging.warning( 'Unable to verify bug database URL %s: %s', url, e.reason) return None raise return data['has_issues'] and not data.get('archived', False) if is_gitlab_site(parsed_url.netloc): path_elements = parsed_url.path.strip('/').split('/') if len(path_elements) < 3 or path_elements[-1] != 'issues': return False api_url = 'https://%s/api/v4/projects/%s/issues' % ( parsed_url.netloc, quote('/'.join(path_elements[:-1]), safe='')) try: data = _load_json_url(api_url) except urllib.error.HTTPError as e: if e.code == 404: return False raise return len(data) > 0 return None def verify_bug_submit_url(url): parsed_url = urlparse(url) if parsed_url.netloc == 'github.com' or is_gitlab_site(parsed_url.netloc): path = '/'.join(parsed_url.path.strip('/').split('/')[:-1]) return verify_bug_database_url( urlunparse(parsed_url._replace(path=path))) return None def _extrapolate_repository_from_homepage(upstream_metadata, net_access): repo = guess_repo_from_url( upstream_metadata['Homepage'].value, net_access=net_access) if repo: yield UpstreamDatum( 'Repository', repo, min_certainty(['likely', upstream_metadata['Homepage'].certainty])) def _extrapolate_repository_from_download(upstream_metadata, net_access): repo = guess_repo_from_url( upstream_metadata['X-Download'].value, net_access=net_access) if repo: yield UpstreamDatum( 'Repository', repo, min_certainty( ['likely', upstream_metadata['X-Download'].certainty])) def _extrapolate_repository_from_bug_db(upstream_metadata, net_access): repo = guess_repo_from_url( upstream_metadata['Bug-Database'].value, net_access=net_access) if repo: yield UpstreamDatum( 'Repository', repo, min_certainty( ['likely', upstream_metadata['Bug-Database'].certainty])) def _extrapolate_name_from_repository(upstream_metadata, net_access): repo = guess_repo_from_url( upstream_metadata['Repository'].value, net_access=net_access) if repo: parsed = urlparse(repo) name = parsed.path.split('/')[-1] if name.endswith('.git'): name = name[:-4] if name: yield UpstreamDatum( 'Name', name, min_certainty( ['likely', upstream_metadata['Repository'].certainty])) def _extrapolate_repository_browse_from_repository( upstream_metadata, net_access): browse_url = browse_url_from_repo_url( upstream_metadata['Repository'].value) if browse_url: yield UpstreamDatum( 'Repository-Browse', browse_url, upstream_metadata['Repository'].certainty) def _extrapolate_repository_from_repository_browse( upstream_metadata, net_access): repo = guess_repo_from_url( upstream_metadata['Repository-Browse'].value, net_access=net_access) if repo: yield UpstreamDatum( 'Repository', repo, upstream_metadata['Repository-Browse'].certainty) def _extrapolate_bug_database_from_repository( upstream_metadata, net_access): repo_url = upstream_metadata['Repository'].value if not isinstance(repo_url, str): return 
bug_db_url = guess_bug_database_url_from_repo_url(repo_url) if bug_db_url: yield UpstreamDatum( 'Bug-Database', bug_db_url, min_certainty( ['likely', upstream_metadata['Repository'].certainty])) def _extrapolate_bug_submit_from_bug_db( upstream_metadata, net_access): bug_submit_url = bug_submit_url_from_bug_database_url( upstream_metadata['Bug-Database'].value) if bug_submit_url: yield UpstreamDatum( 'Bug-Submit', bug_submit_url, upstream_metadata['Bug-Database'].certainty) def _extrapolate_bug_db_from_bug_submit( upstream_metadata, net_access): bug_db_url = bug_database_url_from_bug_submit_url( upstream_metadata['Bug-Submit'].value) if bug_db_url: yield UpstreamDatum( 'Bug-Database', bug_db_url, upstream_metadata['Bug-Submit'].certainty) def _copy_bug_db_field(upstream_metadata, net_access): ret = UpstreamDatum( 'Bug-Database', upstream_metadata['Bugs-Database'].value, upstream_metadata['Bugs-Database'].certainty, upstream_metadata['Bugs-Database'].origin) del upstream_metadata['Bugs-Database'] return ret def _extrapolate_security_contact_from_security_md( upstream_metadata, net_access): repository_url = upstream_metadata['Repository'] security_md_path = upstream_metadata['X-Security-MD'] security_url = browse_url_from_repo_url( repository_url.value, security_md_path.value) if security_url is None: return None yield UpstreamDatum( 'Security-Contact', security_url, certainty=min_certainty( [repository_url.certainty, security_md_path.certainty]), origin=security_md_path.origin) def _extrapolate_contact_from_maintainer(upstream_metadata, net_access): maintainer = upstream_metadata['X-Maintainer'] yield UpstreamDatum( 'Contact', str(maintainer.value), certainty=min_certainty([maintainer.certainty]), origin=maintainer.origin) def _extrapolate_homepage_from_repository_browse( upstream_metadata, net_access): browse_url = upstream_metadata['Repository-Browse'].value parsed = urlparse(browse_url) # Some hosting sites are commonly used as Homepage # TODO(jelmer): Maybe check that there is a README file that # can serve as index? 
    if parsed.netloc in ('github.com', ) or is_gitlab_site(parsed.netloc):
        yield UpstreamDatum('Homepage', browse_url, 'possible')


def _consult_homepage(upstream_metadata, net_access):
    if not net_access:
        return
    from .homepage import guess_from_homepage
    for entry in guess_from_homepage(upstream_metadata['Homepage'].value):
        entry.certainty = min_certainty([
            upstream_metadata['Homepage'].certainty, entry.certainty])
        yield entry


EXTRAPOLATE_FNS = [
    (['Homepage'], ['Repository'], _extrapolate_repository_from_homepage),
    (['Repository-Browse'], ['Homepage'],
     _extrapolate_homepage_from_repository_browse),
    (['Bugs-Database'], ['Bug-Database'], _copy_bug_db_field),
    (['Bug-Database'], ['Repository'], _extrapolate_repository_from_bug_db),
    (['Repository'], ['Repository-Browse'],
     _extrapolate_repository_browse_from_repository),
    (['Repository-Browse'], ['Repository'],
     _extrapolate_repository_from_repository_browse),
    (['Repository'], ['Bug-Database'],
     _extrapolate_bug_database_from_repository),
    (['Bug-Database'], ['Bug-Submit'], _extrapolate_bug_submit_from_bug_db),
    (['Bug-Submit'], ['Bug-Database'], _extrapolate_bug_db_from_bug_submit),
    (['X-Download'], ['Repository'], _extrapolate_repository_from_download),
    (['Repository'], ['Name'], _extrapolate_name_from_repository),
    (['Repository', 'X-Security-MD'], ['Security-Contact'],
     _extrapolate_security_contact_from_security_md),
    (['X-Maintainer'], ['Contact'], _extrapolate_contact_from_maintainer),
    (['Homepage'], ['Bug-Database', 'Repository'], _consult_homepage),
]


def extend_upstream_metadata(upstream_metadata,  # noqa: C901
                             path, minimum_certainty=None,
                             net_access=False,
                             consult_external_directory=False):
    """Extend a set of upstream metadata."""
    # TODO(jelmer): Use EXTRAPOLATE_FNS mechanism for this?
    for field in ['Homepage', 'Bug-Database', 'Bug-Submit', 'Repository',
                  'Repository-Browse', 'X-Download']:
        if field not in upstream_metadata:
            continue
        project = extract_sf_project_name(upstream_metadata[field].value)
        if project:
            certainty = min_certainty(
                ['likely', upstream_metadata[field].certainty])
            upstream_metadata['Archive'] = UpstreamDatum(
                'Archive', 'SourceForge', certainty)
            upstream_metadata['X-SourceForge-Project'] = UpstreamDatum(
                'X-SourceForge-Project', project, certainty)
            break

    archive = upstream_metadata.get('Archive')
    if (archive and archive.value == 'SourceForge'
            and 'X-SourceForge-Project' in upstream_metadata
            and net_access):
        sf_project = upstream_metadata['X-SourceForge-Project'].value
        try:
            extend_from_sf(upstream_metadata, sf_project)
        except NoSuchSourceForgeProject:
            del upstream_metadata['X-SourceForge-Project']
    if (archive and archive.value == 'Hackage'
            and 'X-Hackage-Package' in upstream_metadata
            and net_access):
        hackage_package = upstream_metadata['X-Hackage-Package'].value
        try:
            extend_from_hackage(upstream_metadata, hackage_package)
        except NoSuchHackagePackage:
            del upstream_metadata['X-Hackage-Package']
    if (archive and archive.value == 'crates.io'
            and 'X-Cargo-Crate' in upstream_metadata
            and net_access):
        crate = upstream_metadata['X-Cargo-Crate'].value
        try:
            extend_from_crates_io(upstream_metadata, crate)
        except NoSuchCrate:
            del upstream_metadata['X-Cargo-Crate']
    if net_access and consult_external_directory:
        # TODO(jelmer): Don't assume debian/control exists
        from debian.deb822 import Deb822
        try:
            with open(os.path.join(path, 'debian/control'), 'r') as f:
                package = Deb822(f)['Source']
        except FileNotFoundError:
            # Huh, okay.
pass else: extend_from_lp(upstream_metadata, minimum_certainty, package) extend_from_aur(upstream_metadata, minimum_certainty, package) extend_from_repology(upstream_metadata, minimum_certainty, package) pecl_url = upstream_metadata.get('X-Pecl-URL') if net_access and pecl_url: extend_from_pecl(upstream_metadata, pecl_url.value, pecl_url.certainty) _extrapolate_fields( upstream_metadata, net_access=net_access, minimum_certainty=minimum_certainty) DEFAULT_ITERATION_LIMIT = 100 def _extrapolate_fields( upstream_metadata, net_access: bool = False, minimum_certainty: Optional[str] = None, iteration_limit: int = DEFAULT_ITERATION_LIMIT): changed = True iterations = 0 while changed: changed = False iterations += 1 if iterations > iteration_limit: raise Exception('hit iteration limit %d' % iteration_limit) for from_fields, to_fields, fn in EXTRAPOLATE_FNS: from_certainties: Optional[List[str]] = [] for from_field in from_fields: try: from_value = upstream_metadata[from_field] except KeyError: from_certainties = None break from_certainties.append(from_value.certainty) # type: ignore if not from_certainties: # Nope continue from_certainty = min_certainty(from_certainties) old_to_values = { to_field: upstream_metadata.get(to_field) for to_field in to_fields} if all([old_value is not None and certainty_to_confidence(from_certainty) > certainty_to_confidence(old_value.certainty) for old_value in old_to_values.values()]): continue changed = update_from_guesses( upstream_metadata, fn(upstream_metadata, net_access)) def verify_screenshots(urls): headers = {'User-Agent': USER_AGENT} for url in urls: try: response = urlopen( Request(url, headers=headers, method='HEAD'), timeout=DEFAULT_URLLIB_TIMEOUT) except urllib.error.HTTPError as e: if e.code == 404: yield url, False else: yield url, None else: assert response is not None # TODO(jelmer): Check content-type? yield url, True def check_upstream_metadata(upstream_metadata, version=None): """Check upstream metadata. This will make network connections, etc. """ repository = upstream_metadata.get('Repository') if repository and repository.certainty == 'likely': if verify_repository_url(repository.value, version=version): repository.certainty = 'certain' derived_browse_url = browse_url_from_repo_url(repository.value) browse_repo = upstream_metadata.get('Repository-Browse') if browse_repo and derived_browse_url == browse_repo.value: browse_repo.certainty = repository.certainty else: # TODO(jelmer): Remove altogether, or downgrade to a lesser # certainty? 
pass bug_database = upstream_metadata.get('Bug-Database') if bug_database and bug_database.certainty == 'likely': if verify_bug_database_url(bug_database.value): bug_database.certainty = 'certain' bug_submit = upstream_metadata.get('Bug-Submit') if bug_submit and bug_submit.certainty == 'likely': if verify_bug_submit_url(bug_submit.value): bug_submit.certainty = 'certain' screenshots = upstream_metadata.get('Screenshots') if screenshots and screenshots.certainty == 'likely': newvalue = [] screenshots.certainty = 'certain' for i, (url, status) in enumerate(verify_screenshots( screenshots.value)): if status is True: newvalue.append(url) elif status is False: pass else: screenshots.certainty = 'likely' screenshots.value = newvalue def parse_pkgbuild_variables(f): import shlex variables = {} keep = None existing = None for line in f: if existing: line = existing + line if line.endswith(b'\\\n'): existing = line[:-2] continue existing = None if (line.startswith(b'\t') or line.startswith(b' ') or line.startswith(b'#')): continue if keep: keep = (keep[0], keep[1] + line) if line.rstrip().endswith(b')'): variables[keep[0].decode()] = shlex.split( keep[1].rstrip(b'\n').decode()) keep = None continue try: (key, value) = line.split(b'=', 1) except ValueError: continue if value.startswith(b'('): if value.rstrip().endswith(b')'): value = value.rstrip()[1:-1] else: keep = (key, value[1:]) continue variables[key.decode()] = shlex.split(value.rstrip(b'\n').decode()) return variables def guess_from_pecl(package): if not package.startswith('php-'): return iter([]) php_package = package[4:] url = 'https://pecl.php.net/packages/%s' % php_package.replace('-', '_') data = dict(guess_from_pecl_url(url)) try: data['Repository'] = guess_repo_from_url( data['Repository-Browse'], net_access=True) except KeyError: pass return data.items() def guess_from_pecl_url(url): headers = {'User-Agent': USER_AGENT} try: f = urlopen( Request(url, headers=headers), timeout=PECL_URLLIB_TIMEOUT) except urllib.error.HTTPError as e: if e.code != 404: raise return except socket.timeout: logging.warning('timeout contacting pecl, ignoring: %s', url) return try: from bs4 import BeautifulSoup except ModuleNotFoundError: logging.warning( 'bs4 missing so unable to scan pecl page, ignoring %s', url) return bs = BeautifulSoup(f.read(), features='lxml') tag = bs.find('a', text='Browse Source') if tag is not None: yield 'Repository-Browse', tag.attrs['href'] tag = bs.find('a', text='Package Bugs') if tag is not None: yield 'Bug-Database', tag.attrs['href'] label_tag = bs.find('th', text='Homepage') if label_tag is not None: tag = label_tag.parent.find('a') if tag is not None: yield 'Homepage', tag.attrs['href'] def strip_vcs_prefixes(url): for prefix in ['git', 'hg']: if url.startswith(prefix+'+'): return url[len(prefix)+1:] return url def guess_from_aur(package: str): vcses = ['git', 'bzr', 'hg'] for vcs in vcses: url = ( 'https://aur.archlinux.org/cgit/aur.git/plain/PKGBUILD?h=%s-%s' % (package, vcs)) headers = {'User-Agent': USER_AGENT} try: f = urlopen( Request(url, headers=headers), timeout=DEFAULT_URLLIB_TIMEOUT) except urllib.error.HTTPError as e: if e.code != 404: raise continue except socket.timeout: logging.warning('timeout contacting aur, ignoring: %s', url) continue else: break else: return variables = parse_pkgbuild_variables(f) for key, value in variables.items(): if key == 'url': yield 'Homepage', value[0] if key == 'source': if not value: continue value = value[0] if "${" in value: for k, v in variables.items(): value = 
value.replace('${%s}' % k, ' '.join(v)) value = value.replace('$%s' % k, ' '.join(v)) try: unique_name, url = value.split('::', 1) except ValueError: url = value url = url.replace('#branch=', ',branch=') if any([url.startswith(vcs+'+') for vcs in vcses]): yield 'Repository', strip_vcs_prefixes(url) if key == '_gitroot': repo_url = value[0] yield 'Repository', strip_vcs_prefixes(repo_url) def guess_from_launchpad(package, distribution=None, suite=None): # noqa: C901 if distribution is None: # Default to Ubuntu; it's got more fields populated. distribution = 'ubuntu' if suite is None: if distribution == 'ubuntu': from distro_info import UbuntuDistroInfo, DistroDataOutdated ubuntu = UbuntuDistroInfo() try: suite = ubuntu.devel() except DistroDataOutdated as e: logging.warning('%s', str(e)) suite = ubuntu.all[-1] elif distribution == 'debian': suite = 'sid' sourcepackage_url = ( 'https://api.launchpad.net/devel/%(distribution)s/' '%(suite)s/+source/%(package)s' % { 'package': package, 'suite': suite, 'distribution': distribution}) try: sourcepackage_data = _load_json_url(sourcepackage_url) except urllib.error.HTTPError as e: if e.code != 404: raise return except socket.timeout: logging.warning('timeout contacting launchpad, ignoring') return productseries_url = sourcepackage_data.get('productseries_link') if not productseries_url: return productseries_data = _load_json_url(productseries_url) project_link = productseries_data['project_link'] project_data = _load_json_url(project_link) if project_data.get('homepage_url'): yield 'Homepage', project_data['homepage_url'] yield 'Name', project_data['display_name'] if project_data.get('sourceforge_project'): yield ('X-SourceForge-Project', project_data['sourceforge_project']) if project_data.get('wiki_url'): yield ('X-Wiki', project_data['wiki_url']) if project_data.get('summary'): yield ('X-Summary', project_data['summary']) if project_data['vcs'] == 'Bazaar': branch_link = productseries_data.get('branch_link') if branch_link: try: code_import_data = _load_json_url( branch_link + '/+code-import') if code_import_data['url']: # Sometimes this URL is not set, e.g. for CVS repositories. yield 'Repository', code_import_data['url'] except urllib.error.HTTPError as e: if e.code != 404: raise if project_data['official_codehosting']: try: branch_data = _load_json_url(branch_link) except urllib.error.HTTPError as e: if e.code != 404: raise branch_data = None if branch_data: yield 'Archive', 'launchpad' yield 'Repository', branch_data['bzr_identity'] yield 'Repository-Browse', branch_data['web_link'] elif project_data['vcs'] == 'Git': repo_link = ( 'https://api.launchpad.net/devel/+git?ws.op=getByPath&path=%s' % project_data['name']) repo_data = _load_json_url(repo_link) if not repo_data: return code_import_link = repo_data.get('code_import_link') if code_import_link: code_import_data = _load_json_url(repo_data['code_import_link']) if code_import_data['url']: # Sometimes this URL is not set, e.g. for CVS repositories. 
yield 'Repository', code_import_data['url'] else: if project_data['official_codehosting']: yield 'Archive', 'launchpad' yield 'Repository', repo_data['git_https_url'] yield 'Repository-Browse', repo_data['web_link'] elif project_data.get('vcs') is not None: raise AssertionError('unknown vcs: %r' % project_data['vcs']) def fix_upstream_metadata(upstream_metadata): """Fix existing upstream metadata.""" if 'Repository' in upstream_metadata: repo = upstream_metadata['Repository'] url = repo.value url = sanitize_vcs_url(url) repo.value = url if 'X-Summary' in upstream_metadata: summary = upstream_metadata['X-Summary'] summary.value = summary.value.split('. ')[0] summary.value = summary.value.rstrip().rstrip('.') upstream-ontologist_0.1.24.orig/upstream_ontologist/homepage.py0000644000000000000000000000425014127142267022073 0ustar00#!/usr/bin/python3 # Copyright (C) 2021 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA import urllib.error from urllib.request import Request, urlopen import logging from . import UpstreamDatum, USER_AGENT logger = logging.getLogger(__name__) def guess_from_homepage(url: str): req = Request(url, headers={'User-Agent': USER_AGENT}) try: f = urlopen(req) except urllib.error.HTTPError as e: logger.warning( 'unable to access homepage %r: %s', url, e) return except urllib.error.URLError as e: logger.warning( 'unable to access homepage %r: %s', url, e) return except ConnectionResetError as e: logging.warning( 'unable to access homepage %r: %s', url, e) return for entry in _guess_from_page(f.read()): entry.origin = url yield entry def _guess_from_page(text: bytes): try: from bs4 import BeautifulSoup, FeatureNotFound except ModuleNotFoundError: logger.debug('BeautifulSoup not available, not parsing homepage') return try: soup = BeautifulSoup(text, 'lxml') except FeatureNotFound: logger.debug('lxml not available, not parsing README.md') return return _guess_from_soup(soup) def _guess_from_soup(soup): for a in soup.findAll('a'): href = a.get('href') if a.get('aria-label') in ('github', 'git', 'repository'): yield UpstreamDatum('Repository', href, certainty='confident') upstream-ontologist_0.1.24.orig/upstream_ontologist/readme.py0000644000000000000000000003561214162102635021543 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""README parsing."""

import logging
import platform
import re
from typing import Optional, Tuple, Iterable, List
from urllib.parse import urlparse

from . import UpstreamDatum

logger = logging.getLogger(__name__)


def _skip_paragraph(para, metadata):  # noqa: C901
    if re.match(r'See .* for more (details|information)\.', para):
        return True
    if re.match(r'See .* for instructions', para):
        return True
    if re.match(r'Please refer .*\.', para):
        return True
    m = re.match(r'It is licensed under (.*)', para)
    if m:
        metadata.append(UpstreamDatum('X-License', m.group(1), 'possible'))
        return True
    m = re.match(r'License: (.*)', para, re.I)
    if m:
        metadata.append(UpstreamDatum('X-License', m.group(1), 'likely'))
        return True
    m = re.match(
        '(Home page|homepage_url|Main website|Website|Homepage): (.*)',
        para, re.I)
    if m:
        url = m.group(2)
        if url.startswith('<') and url.endswith('>'):
            url = url[1:-1]
        metadata.append(UpstreamDatum('Homepage', url, 'likely'))
        return True
    m = re.match('More documentation .* at http.*', para)
    if m:
        return True
    m = re.match(
        'Documentation (can be found|is hosted) (at|on) ([^ ]+)', para)
    if m:
        metadata.append(UpstreamDatum('Documentation', m.group(3), 'likely'))
        return True
    m = re.match(
        r'Documentation for (.*)\s+(can\s+be\s+found|is\s+hosted)\s+'
        r'(at|on)\s+([^ ]+)', para)
    if m:
        metadata.append(UpstreamDatum('Name', m.group(1), 'possible'))
        metadata.append(UpstreamDatum('Documentation', m.group(4), 'likely'))
        return True
    if re.match(r'Documentation[, ].*found.*(at|on).*\.', para, re.S):
        return True
    m = re.match('See (http.*|gopkg.in.*|github.com.*)', para)
    if m:
        return True
    m = re.match('Available on (.*)', para)
    if m:
        return True
    m = re.match(
        r'This software is freely distributable under the (.*) license.*',
        para)
    if m:
        metadata.append(UpstreamDatum('X-License', m.group(1), 'likely'))
        return True
    m = re.match(r'This .* is hosted at .*', para)
    if m:
        return True
    m = re.match(r'This code has been developed by .*', para)
    if m:
        return True
    if para.startswith('Download and install using:'):
        return True
    m = re.match('Bugs should be reported by .*', para)
    if m:
        return True
    m = re.match(r'The bug tracker can be found at (http[^ ]+[^.])', para)
    if m:
        metadata.append(UpstreamDatum('Bug-Database', m.group(1), 'likely'))
        return True
    m = re.match(r'Copyright (\(c\) |)(.*)', para)
    if m:
        metadata.append(UpstreamDatum('X-Copyright', m.group(2), 'possible'))
        return True
    if re.match('You install .*', para):
        return True
    if re.match('This .* is free software; .*', para):
        return True
    m = re.match('Please report any bugs(.*) to <(.*)>', para)
    if m:
        metadata.append(UpstreamDatum('Bug-Submit', m.group(2), 'possible'))
        return True
    if re.match('Share and Enjoy', para, re.I):
        return True
    lines = para.splitlines(False)
    if lines and lines[0].strip() in (
            'perl Makefile.PL', 'make', './configure'):
        return True
    if re.match('For further information, .*', para):
        return True
    if re.match('Further information .*', para):
        return True
    m = re.match(r'A detailed Changelog can be found.*:\s+(http.*)',
                 para, re.I)
    if m:
        metadata.append(UpstreamDatum('Changelog', m.group(1), 'possible'))
        return True
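
# Illustrative sketch, not part of the module: _skip_paragraph both decides
# whether a README paragraph is boilerplate and, as a side effect, harvests
# metadata from it (assuming UpstreamDatum's usual .field/.value attributes,
# as used elsewhere in this package).
def _example_skip_paragraph():
    metadata = []
    assert _skip_paragraph('License: MIT', metadata)
    assert metadata[0].field == 'X-License' and metadata[0].value == 'MIT'
    # Ordinary descriptive prose is kept:
    assert not _skip_paragraph('This library frobnicates widgets.', metadata)
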
def _skip_paragraph_block(para, metadata):  # noqa: C901
    if _skip_paragraph(para.get_text(), metadata):
        return True
    for c in para.children:
        if isinstance(c, str) and not c.strip():
            continue
        if c.name == 'a':
            if len(list(c.children)) != 1:
                name = None
            elif isinstance(list(c.children)[0], str):
                name = list(c.children)[0]
            elif list(c.children)[0].name == 'img':
                name = list(c.children)[0].get('alt')
            else:
                name = None
            if name in ('CRAN', 'CRAN_Status_Badge', 'CRAN_Logs_Badge'):
                metadata.append(UpstreamDatum('Archive', 'CRAN', 'confident'))
            elif name == 'Gitter':
                parsed_url = urlparse(c.get('href'))
                metadata.append(UpstreamDatum(
                    'Repository',
                    'https://github.com/%s' % '/'.join(
                        parsed_url.path.strip('/').split('/')[:2]),
                    'confident'))
            elif name == 'Build Status':
                parsed_url = urlparse(c.get('href'))
                if parsed_url.hostname == 'travis-ci.org':
                    metadata.append(UpstreamDatum(
                        'Repository',
                        'https://github.com/%s' % '/'.join(
                            parsed_url.path.strip('/').split('/')[:2]),
                        'confident'))
            elif name:
                m = re.match('(.*) License', name)
                if m:
                    metadata.append(
                        UpstreamDatum('X-License', m.group(1), 'likely'))
                else:
                    logging.debug('Unhandled field %r in README', name)
            continue
        break
    else:
        return True
    if para.get_text() == '':
        return True
    return False


def render(el):
    return el.get_text()


def _parse_first_header_text(text):
    m = re.fullmatch('([A-Za-z]+) ([0-9.]+)', text)
    if m:
        return m.group(1), None, m.group(2)
    m = re.fullmatch('([A-Za-z]+): (.+)', text)
    if m:
        return m.group(1), m.group(2), None
    m = re.fullmatch('([A-Za-z]+) - (.+)', text)
    if m:
        return m.group(1), m.group(2), None
    m = re.fullmatch('([A-Za-z]+) -- (.+)', text)
    if m:
        return m.group(1), m.group(2), None
    m = re.fullmatch('([A-Za-z]+) version ([^ ]+)', text)
    if m:
        name, version = text.split(' version ', 1)
        summary = None
        return name, summary, version
    return None, None, None


def _parse_first_header(el):
    name, summary, version = _parse_first_header_text(el.get_text())
    if not name and el.get_text():
        name = el.get_text()
    if name:
        if 'installation' in name.lower():
            certainty = 'possible'
        else:
            certainty = 'likely'
        if name.startswith('About '):
            name = name[len('About '):]
        yield UpstreamDatum('Name', name.strip(), certainty)
    if summary:
        yield UpstreamDatum('X-Summary', summary, 'likely')
    if version:
        yield UpstreamDatum('X-Version', version, 'likely')


def _is_semi_header(el):
    if el.name != 'p':
        return False
    if el.get_text().strip() == 'INSTALLATION':
        return True
    if el.get_text().count('\n') > 0:
        return False
    m = re.match(r'([a-z-A-Z0-9]+) - ([^\.]+)', el.get_text())
    if m:
        return True
    return False


def _ul_is_field_list(el):
    names = ['Issues', 'Home', 'Documentation', 'License']
    for li in el.findAll('li'):
        m = re.match(r'([A-Za-z]+)\s*:.*', li.get_text().strip())
        if not m or m.group(1) not in names:
            return False
    return True


def _extract_paragraphs(children, metadata):
    paragraphs = []
    for el in children:
        if isinstance(el, str):
            continue
        if el.name == 'div':
            paragraphs.extend(_extract_paragraphs(el.children, metadata))
            if paragraphs and 'section' in el.get('class'):
                break
        if el.name == 'p':
            if _is_semi_header(el):
                if len(paragraphs) == 0:
                    metadata.extend(_parse_first_header(el))
                    continue
                else:
                    break
            if _skip_paragraph_block(el, metadata):
                if len(paragraphs) > 0:
                    break
                else:
                    continue
            if el.get_text().strip():
                paragraphs.append(render(el) + '\n')
        elif el.name == 'pre':
            paragraphs.append(render(el))
        elif el.name == 'ul' and len(paragraphs) > 0:
            if _ul_is_field_list(el):
                metadata.extend(_parse_ul_field_list(el))
            else:
                paragraphs.append(
                    ''.join(
                        '* %s\n' % li.get_text()
                        for li in el.findAll('li')))
        elif re.match('h[0-9]', el.name):
            if len(paragraphs) == 0:
                if el.get_text() not in ('About', 'Introduction', 'Overview'):
                    metadata.extend(_parse_first_header(el))
                continue
            break
return paragraphs def _parse_field(name, body): if name == 'Homepage' and body.find('a'): yield UpstreamDatum('Homepage', body.find('a').get('href'), 'confident') if name == 'Home' and body.find('a'): yield UpstreamDatum('Homepage', body.find('a').get('href'), 'confident') if name == 'Issues' and body.find('a'): yield UpstreamDatum('Bug-Database', body.find('a').get('href'), 'confident') if name == 'Documentation' and body.find('a'): yield UpstreamDatum('Documentation', body.find('a').get('href'), 'confident') if name == 'License': yield UpstreamDatum('X-License', body.get_text(), 'confident') def _parse_ul_field_list(el): for li in el.findAll('li'): cs = list(li.children) if len(cs) == 2 and isinstance(cs[0], str): name = cs[0].strip().rstrip(':') body = cs[1] yield from _parse_field(name, body) def _parse_field_list(tab): for tr in tab.findAll('tr', {'class': 'field'}): name_cell = tr.find('th', {'class': 'field-name'}) if not name_cell: continue name = name_cell.get_text().rstrip(':') body = tr.find('td', {'class': 'field-body'}) if not body: continue yield from _parse_field(name, body) def _description_from_basic_soup(soup) -> Tuple[Optional[str], Iterable[UpstreamDatum]]: # Drop any headers metadata = [] if soup is None: return None, {} # First, skip past the first header. for el in soup.children: if el.name in ('h1', 'h2', 'h3'): metadata.extend(_parse_first_header(el)) el.decompose() break elif isinstance(el, str): pass else: break table = soup.find('table', {'class': 'field-list'}) if table: metadata.extend(_parse_field_list(table)) paragraphs: List[str] = [] paragraphs.extend(_extract_paragraphs(soup.children, metadata)) if len(paragraphs) == 0: logging.debug('Empty description; no paragraphs.') return None, metadata if len(paragraphs) < 6: return '\n'.join(paragraphs), metadata logging.debug( 'Not returning description, number of paragraphs too high: %d', len(paragraphs)) return None, metadata def description_from_readme_md(md_text: str) -> Tuple[Optional[str], Iterable[UpstreamDatum]]: """Description from README.md.""" try: import markdown except ModuleNotFoundError: logger.debug('markdown not available, not parsing README.md') return None, {} html_text = markdown.markdown(md_text) try: from bs4 import BeautifulSoup, FeatureNotFound except ModuleNotFoundError: logger.debug('BeautifulSoup not available, not parsing README.md') return None, {} try: soup = BeautifulSoup(html_text, 'lxml') except FeatureNotFound: logger.debug('lxml not available, not parsing README.md') return None, {} return _description_from_basic_soup(soup.body) def description_from_readme_rst(rst_text: str) -> Tuple[Optional[str], Iterable[UpstreamDatum]]: """Description from README.rst.""" if platform.python_implementation() == "PyPy": logger.debug('docutils does not appear to work on PyPy, skipping README.rst.') return None, {} try: from docutils.core import publish_parts except ModuleNotFoundError: logger.debug('docutils not available, not parsing README.rst') return None, {} from docutils.writers.html4css1 import Writer settings = {'initial_header_level': 2, 'report_level': 0} html_text = publish_parts( rst_text, writer=Writer(), settings_overrides=settings).get('html_body') try: from bs4 import BeautifulSoup, FeatureNotFound except ModuleNotFoundError: logger.debug('BeautifulSoup not available, not parsing README.rst') return None, {} try: soup = BeautifulSoup(html_text, 'lxml') except FeatureNotFound: logger.debug('lxml not available, not parsing README.rst') return None, {} return 
_description_from_basic_soup(list(soup.body.children)[0]) def description_from_readme_plain(text: str) -> Tuple[Optional[str], Iterable[UpstreamDatum]]: lines = list(text.splitlines(False)) metadata = [] if not lines: return None, {} if lines[0].strip() and len(lines) > 1 and (not lines[1] or not lines[1][0].isalnum()): name, summary, version = _parse_first_header_text(lines[0]) if name: metadata.append(UpstreamDatum('Name', name, 'likely')) if version: metadata.append(UpstreamDatum('X-Version', version, 'likely')) if summary: metadata.append(UpstreamDatum('X-Summary', summary, 'likely')) if name or version or summary: lines.pop(0) else: name = version = summary = None while lines and not lines[0].strip('-').strip(): lines.pop(0) paras: List[List[str]] = [[]] for line in lines: if not line.strip(): paras.append([]) else: paras[-1].append(line) output: List[str] = [] for para in paras: if not para: continue line = '\n'.join(para) if _skip_paragraph(line, metadata): continue output.append(line + '\n') if len(output) > 30: return None, {} while output and not output[-1].strip(): output.pop(-1) return '\n'.join(output), metadata upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/0000755000000000000000000000000013764020206021067 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/vcs.py0000644000000000000000000005647014162102635021106 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA __all__ = [ "plausible_url", "plausible_browse_url", "sanitize_url", "is_gitlab_site", "browse_url_from_repo_url", ] import http.client import re from typing import Optional, Union, List, Tuple import socket import urllib from urllib.parse import urlparse, urlunparse, ParseResult, parse_qs from . import _load_json_url KNOWN_GITLAB_SITES = [ "salsa.debian.org", "invent.kde.org", "0xacab.org", ] KNOWN_HOSTING_SITES = [ 'code.launchpad.net', 'github.com', 'launchpad.net', 'git.openstack.org'] def plausible_browse_url(url: str) -> bool: return url.startswith("https://") or url.startswith("http://") def plausible_url(url: str) -> bool: return ":" in url def unsplit_vcs_url( repo_url: str, branch: Optional[str] = None, subpath: Optional[str] = None ) -> str: """Unsplit a Debian VCS URL. 
    Args:
      repo_url: Repository URL
      branch: Branch name
      subpath: Subpath in the tree
    Returns: full URL
    """
    url = repo_url
    if branch:
        url = "%s -b %s" % (url, branch)
    if subpath:
        url = "%s [%s]" % (url, subpath)
    return url


def probe_gitlab_host(hostname: str):
    import json
    try:
        _load_json_url("https://%s/api/v4/version" % hostname)
    except urllib.error.HTTPError as e:
        if e.code == 401:
            try:
                if json.loads(e.read()) == {"message": "401 Unauthorized"}:
                    return True
            except json.JSONDecodeError:
                return False
        return False
    except UnicodeDecodeError:
        return False
    except json.JSONDecodeError:
        return False
    except (socket.timeout, urllib.error.URLError):
        # Probably not?
        return False
    except http.client.RemoteDisconnected:
        return False
    # The version endpoint responded without error, so this looks like a
    # GitLab instance that allows anonymous API access.
    return True


def is_gitlab_site(hostname: str, net_access: bool = False) -> bool:
    if hostname is None:
        return False
    if hostname in KNOWN_GITLAB_SITES:
        return True
    if hostname.startswith("gitlab."):
        return True
    if net_access:
        return probe_gitlab_host(hostname)
    return False


def browse_url_from_repo_url(url: str, subpath: Optional[str] = None) -> Optional[str]:  # noqa: C901
    if isinstance(url, list):
        return None
    parsed_url = urlparse(url)
    if parsed_url.netloc == "github.com":
        path = "/".join(parsed_url.path.split("/")[:3])
        if path.endswith(".git"):
            path = path[:-4]
        if subpath is not None:
            path += "/tree/HEAD/" + subpath
        return urlunparse(("https", "github.com", path, None, None, None))
    elif parsed_url.hostname == 'gopkg.in':
        els = parsed_url.path.split("/")[:3]
        if len(els) != 2:
            return None
        try:
            els[-1], version = els[-1].split('.v', 1)
        except ValueError:
            version = "HEAD"
        els.extend(['tree', version])
        path = "/".join(els)
        if subpath is not None:
            path += "/" + subpath
        return urlunparse(("https", "github.com", path, None, None, None))
    elif parsed_url.netloc in ("code.launchpad.net", "launchpad.net"):
        if subpath is not None:
            path = parsed_url.path + "/view/head:/" + subpath
            return urlunparse(
                ("https", "bazaar.launchpad.net", path,
                 parsed_url.params, parsed_url.query, parsed_url.fragment))
        else:
            return urlunparse(
                ("https", "code.launchpad.net", parsed_url.path,
                 parsed_url.params, parsed_url.query, parsed_url.fragment))
    elif parsed_url.netloc == "svn.apache.org":
        path_elements = parsed_url.path.strip("/").split("/")
        if path_elements[:2] != ["repos", "asf"]:
            return None
        path_elements.pop(0)
        path_elements[0] = "viewvc"
        if subpath is not None:
            path_elements.append(subpath)
        return urlunparse(
            ("https", parsed_url.netloc, "/".join(path_elements),
             None, None, None))
    elif parsed_url.hostname in ("git.savannah.gnu.org", "git.sv.gnu.org"):
        path_elements = parsed_url.path.strip("/").split("/")
        if parsed_url.scheme == "https" and path_elements[0] == "git":
            path_elements.pop(0)
        # Why cgit and not gitweb?
path_elements.insert(0, "cgit") if subpath is not None: path_elements.append("tree") path_elements.append(subpath) return urlunparse( ("https", parsed_url.netloc, "/".join(path_elements), None, None, None) ) elif is_gitlab_site(parsed_url.netloc): path = parsed_url.path if path.endswith(".git"): path = path[:-4] if subpath is not None: path += "/-/blob/HEAD/" + subpath return urlunparse(("https", parsed_url.netloc, path, None, None, None)) return None SECURE_SCHEMES = ["https", "git+ssh", "bzr+ssh", "hg+ssh", "ssh", "svn+ssh"] def try_open_branch(url: str, branch_name: Optional[str] = None): import breezy.ui from breezy.controldir import ControlDir old_ui = breezy.ui.ui_factory breezy.ui.ui_factory = breezy.ui.SilentUIFactory() try: c = ControlDir.open(url) b = c.open_branch(name=branch_name) b.last_revision() return b except Exception: # TODO(jelmer): Catch more specific exceptions? return None finally: breezy.ui.ui_factory = old_ui def find_secure_repo_url( url: str, branch: Optional[str] = None, net_access: bool = True ) -> Optional[str]: parsed_repo_url = urlparse(url) if parsed_repo_url.scheme in SECURE_SCHEMES: return url # Sites we know to be available over https if parsed_repo_url.hostname and ( is_gitlab_site(parsed_repo_url.hostname, net_access) or parsed_repo_url.hostname in [ "github.com", "git.launchpad.net", "bazaar.launchpad.net", "code.launchpad.net", ] ): parsed_repo_url = parsed_repo_url._replace(scheme="https") if parsed_repo_url.scheme == "lp": parsed_repo_url = parsed_repo_url._replace( scheme="https", netloc="code.launchpad.net" ) if parsed_repo_url.hostname in ("git.savannah.gnu.org", "git.sv.gnu.org"): if parsed_repo_url.scheme == "http": parsed_repo_url = parsed_repo_url._replace(scheme="https") else: parsed_repo_url = parsed_repo_url._replace( scheme="https", path="/git" + parsed_repo_url.path ) if net_access: secure_repo_url = parsed_repo_url._replace(scheme="https") insecure_branch = try_open_branch(url, branch) secure_branch = try_open_branch(urlunparse(secure_repo_url), branch) if secure_branch: if ( not insecure_branch or secure_branch.last_revision() == insecure_branch.last_revision() ): parsed_repo_url = secure_repo_url if parsed_repo_url.scheme in SECURE_SCHEMES: return urlunparse(parsed_repo_url) # Can't find a secure URI :( return None def canonical_git_repo_url(repo_url: str) -> str: parsed_url = urlparse(repo_url) if is_gitlab_site(parsed_url.netloc) or parsed_url.netloc in ["github.com"]: if not parsed_url.path.rstrip("/").endswith(".git"): parsed_url = parsed_url._replace(path=parsed_url.path.rstrip("/") + ".git") return urlunparse(parsed_url) return repo_url def find_public_repo_url(repo_url: str) -> Optional[str]: parsed = urlparse(repo_url) if not parsed.scheme and not parsed.hostname and ':' in parsed.path: m = re.match('^(?P[^@:/]+@)?(?P[^/:]+):(?P.*)$', repo_url) if m: host = m.group('host') path = m.group('path') if host == 'github.com' or is_gitlab_site(host): return urlunparse(("https", "github.com", path, None, None, None)) parsed = urlparse(repo_url) revised_url = None if parsed.hostname == "github.com": if parsed.scheme in ("https", "http", "git"): return repo_url revised_url = urlunparse(("https", "github.com", parsed.path, None, None, None)) if parsed.hostname and is_gitlab_site(parsed.hostname): # Not sure if gitlab even support plain http? 
if parsed.scheme in ("https", "http"): return repo_url if parsed.scheme == "ssh": revised_url = urlunparse( ("https", parsed.hostname, parsed.path, None, None, None) ) if parsed.hostname in ( "code.launchpad.net", "bazaar.launchpad.net", "git.launchpad.net", ): if parsed.scheme.startswith("http") or parsed.scheme == "lp": return repo_url if parsed.scheme in ("ssh", "bzr+ssh"): revised_url = urlunparse( ("https", parsed.hostname, parsed.path, None, None, None) ) if revised_url: return revised_url return None def fixup_rcp_style_git_repo_url(url: str) -> str: from breezy.location import rcp_location_to_url try: repo_url = rcp_location_to_url(url) except ValueError: return url return repo_url def drop_vcs_in_scheme(url: str) -> str: if url.startswith("git+http:") or url.startswith("git+https:"): url = url[4:] if url.startswith("hg+https:") or url.startswith("hg+http"): url = url[3:] if url.startswith("bzr+lp:") or url.startswith("bzr+http"): url = url.split("+", 1)[1] return url def fix_path_in_port( parsed: ParseResult, branch: Optional[str], subpath: Optional[str] ): if ":" not in parsed.netloc or parsed.netloc.endswith("]"): return None, None, None host, port = parsed.netloc.rsplit(":", 1) if host.split("@")[-1] not in (KNOWN_GITLAB_SITES + ["github.com"]): return None, None, None if not port or port.isdigit(): return None, None, None return ( parsed._replace(path="%s/%s" % (port, parsed.path.lstrip("/")), netloc=host), branch, subpath, ) def fix_gitlab_scheme(parsed, branch, subpath: Optional[str]): if is_gitlab_site(parsed.hostname): return parsed._replace(scheme="https"), branch, subpath return None, None, None def fix_salsa_cgit_url(parsed, branch, subpath): if parsed.hostname == "salsa.debian.org" and parsed.path.startswith("/cgit/"): return parsed._replace(path=parsed.path[5:]), branch, subpath return None, None, None def fix_gitlab_tree_in_url(parsed, branch, subpath): if is_gitlab_site(parsed.hostname): parts = parsed.path.split("/") if len(parts) >= 5 and parts[3] == "tree": branch = "/".join(parts[4:]) return parsed._replace(path="/".join(parts[:3])), branch, subpath return None, None, None def fix_double_slash(parsed, branch, subpath): if parsed.path.startswith("//"): return parsed._replace(path=parsed.path[1:]), branch, subpath return None, None, None def fix_extra_colon(parsed, branch, subpath): return parsed._replace(netloc=parsed.netloc.rstrip(":")), branch, subpath def drop_git_username(parsed, branch, subpath): if parsed.hostname not in ("salsa.debian.org", "github.com"): return None, None, None if parsed.scheme not in ("git", "http", "https"): return None, None, None if parsed.username == "git" and parsed.netloc.startswith("git@"): return parsed._replace(netloc=parsed.netloc[4:]), branch, subpath return None, None, None def fix_branch_argument(parsed, branch, subpath): if parsed.hostname != "github.com": return None, None, None # TODO(jelmer): Handle gitlab sites too? 
path_elements = parsed.path.strip("/").split("/") if len(path_elements) > 2 and path_elements[2] == "tree": return ( parsed._replace(path="/".join(path_elements[:2])), "/".join(path_elements[3:]), subpath, ) return None, None, None def fix_git_gnome_org_url(parsed, branch, subpath): if parsed.netloc == "git.gnome.org": if parsed.path.startswith("/browse"): path = parsed.path[7:] else: path = parsed.path parsed = parsed._replace( netloc="gitlab.gnome.org", scheme="https", path="/GNOME" + path ) return parsed, branch, subpath return None, None, None def fix_anongit_url(parsed, branch, subpath): if parsed.netloc == "anongit.kde.org" and parsed.scheme == "git": parsed = parsed._replace(scheme="https") return parsed, branch, subpath return None, None, None def fix_freedesktop_org_url( parsed: ParseResult, branch: Optional[str], subpath: Optional[str] ): if parsed.netloc == "anongit.freedesktop.org": path = parsed.path if path.startswith("/git/"): path = path[len("/git") :] parsed = parsed._replace( netloc="gitlab.freedesktop.org", scheme="https", path=path ) return parsed, branch, subpath return None, None, None FIXERS = [ fix_path_in_port, fix_gitlab_scheme, fix_salsa_cgit_url, fix_gitlab_tree_in_url, fix_double_slash, fix_extra_colon, drop_git_username, fix_branch_argument, fix_git_gnome_org_url, fix_anongit_url, fix_freedesktop_org_url, ] def fixup_broken_git_details( repo_url: str, branch: Optional[str], subpath: Optional[str] ) -> Tuple[str, Optional[str], Optional[str]]: """Attempt to fix up broken Git URLs. A common misspelling is to add an extra ":" after the hostname """ parsed = urlparse(repo_url) changed = False for fn in FIXERS: newparsed, newbranch, newsubpath = fn(parsed, branch, subpath) if newparsed: changed = True parsed = newparsed branch = newbranch subpath = newsubpath if changed: return urlunparse(parsed), branch, subpath return repo_url, branch, subpath def convert_cvs_list_to_str(urls): if not isinstance(urls, list): return urls if urls[0].startswith(":extssh:") or urls[0].startswith(":pserver:"): try: from breezy.location import cvs_to_url except ImportError: from breezy.location import pserver_to_url as cvs_to_url if urls[0].startswith(":extssh:"): raise NotImplementedError("unable to deal with extssh CVS locations.") return cvs_to_url(urls[0]) + "#" + urls[1] return urls SANITIZERS = [ convert_cvs_list_to_str, drop_vcs_in_scheme, lambda url: fixup_broken_git_details(url, None, None)[0], fixup_rcp_style_git_repo_url, lambda url: find_public_repo_url(url) or url, canonical_git_repo_url, lambda url: find_secure_repo_url(url, net_access=False) or url, ] def sanitize_url(url: Union[str, List[str]]) -> str: if isinstance(url, str): url = url.strip() for sanitizer in SANITIZERS: url = sanitizer(url) return url # type: ignore def guess_repo_from_url(url, net_access=False): # noqa: C901 if isinstance(url, list): return None parsed_url = urlparse(url) path_elements = parsed_url.path.strip('/').split('/') if parsed_url.netloc == 'github.com': if len(path_elements) < 2: return None return ('https://github.com' + '/'.join(parsed_url.path.split('/')[:3])) if parsed_url.netloc == 'travis-ci.org': return ('https://github.com/' + '/'.join(path_elements[:3])) if (parsed_url.netloc == 'coveralls.io' and parsed_url.path.startswith('/r/')): return ('https://github.com/' + '/'.join(path_elements[1:4])) if parsed_url.netloc == 'launchpad.net': return 'https://code.launchpad.net/%s' % ( parsed_url.path.strip('/').split('/')[0]) if parsed_url.netloc == 'git.savannah.gnu.org': if 
len(path_elements) != 2 or path_elements[0] != 'git': return None return url if parsed_url.netloc in ('freedesktop.org', 'www.freedesktop.org'): if len(path_elements) >= 2 and path_elements[0] == 'software': return 'https://github.com/freedesktop/%s' % path_elements[1] if len(path_elements) >= 3 and path_elements[:2] == [ 'wiki', 'Software']: return 'https://github.com/freedesktop/%s.git' % path_elements[2] if parsed_url.netloc == 'download.gnome.org': if len(path_elements) >= 2 and path_elements[0] == 'sources': return 'https://gitlab.gnome.org/GNOME/%s.git' % path_elements[1] if parsed_url.netloc == 'download.kde.org': if len(path_elements) >= 2 and path_elements[0] in ( 'stable', 'unstable'): return 'https://anongit.kde.org/%s.git' % path_elements[1] if parsed_url.netloc == 'ftp.gnome.org': if (len(path_elements) >= 4 and [ e.lower() for e in path_elements[:3]] == [ 'pub', 'gnome', 'sources']): return 'https://gitlab.gnome.org/GNOME/%s.git' % path_elements[3] if parsed_url.netloc == 'sourceforge.net': if (len(path_elements) >= 4 and path_elements[0] == 'p' and path_elements[3] == 'ci'): return 'https://sourceforge.net/p/%s/%s' % ( path_elements[1], path_elements[2]) if parsed_url.netloc == 'www.apache.org': if len(path_elements) > 2 and path_elements[0] == 'dist': return 'https://svn.apache.org/repos/asf/%s/%s' % ( path_elements[1], path_elements[2]) if parsed_url.netloc == 'bitbucket.org': if len(path_elements) >= 2: return 'https://bitbucket.org/%s/%s' % ( path_elements[0], path_elements[1]) if parsed_url.netloc == 'ftp.gnu.org': if len(path_elements) >= 2 and path_elements[0] == 'gnu': return 'https://git.savannah.gnu.org/git/%s.git' % ( path_elements[1]) return None if parsed_url.netloc == 'download.savannah.gnu.org': if len(path_elements) >= 2 and path_elements[0] == 'releases': return 'https://git.savannah.gnu.org/git/%s.git' % ( path_elements[1]) return None if is_gitlab_site(parsed_url.netloc, net_access): if parsed_url.path.strip('/').count('/') < 1: return None parts = parsed_url.path.split('/') if '-' in parts: parts = parts[:parts.index('-')] return urlunparse( parsed_url._replace(path='/'.join(parts), query='')) if parsed_url.hostname == 'git.php.net': if parsed_url.path.startswith('/repository/'): return url if not parsed_url.path.strip('/'): qs = parse_qs(parsed_url.query) if 'p' in qs: return urlunparse(parsed_url._replace( path='/repository/' + qs['p'][0], query='')) if parsed_url.netloc in KNOWN_HOSTING_SITES: return url # Maybe it's already pointing at a VCS repo? if parsed_url.netloc.startswith('svn.'): # 'svn' subdomains are often used for hosting SVN repositories. return url if net_access: if verify_repository_url(url): return url return None return None def verify_repository_url(url: str, version: Optional[str] = None) -> bool: """Verify whether a repository URL is valid.""" parsed_url = urlparse(url) if parsed_url.netloc == 'github.com': path_elements = parsed_url.path.strip('/').split('/') if len(path_elements) < 2: return False if path_elements[1].endswith('.git'): path_elements[1] = path_elements[1][:-4] api_url = 'https://api.github.com/repos/%s/%s' % ( path_elements[0], path_elements[1]) try: data = _load_json_url(api_url) except urllib.error.HTTPError as e: if e.code == 404: return False elif e.code == 403: # Probably rate-limited. Let's just hope for the best. 
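                # (Added note: an unauthenticated api.github.com request that
                # returns 403 has most likely just hit the rate limit, so
                # verification continues below and the URL is probed anyway
                # instead of being rejected outright.)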
pass else: raise else: if data.get('archived', False): return False if data['description']: if data['description'].startswith('Moved to '): return False if 'has moved' in data['description']: return False if data['description'].startswith('Mirror of '): return False homepage = data.get('homepage') if homepage and is_gitlab_site(homepage): return False # TODO(jelmer): Look at the contents of the repository; if it # contains just a single README file with < 10 lines, assume # the worst. # return data['clone_url'] return probe_upstream_branch_url(url, version=version) def probe_upstream_branch_url(url: str, version=None): parsed = urlparse(url) if parsed.scheme in ('git+ssh', 'ssh', 'bzr+ssh'): # Let's not probe anything possibly non-public. return None import breezy.ui from breezy.branch import Branch old_ui = breezy.ui.ui_factory breezy.ui.ui_factory = breezy.ui.SilentUIFactory() try: b = Branch.open(url) b.last_revision() if version is not None: version = version.split('+git')[0] tag_names = b.tags.get_tag_dict().keys() if not tag_names: # Uhm, hmm return True if _version_in_tags(version, tag_names): return True return False else: return True except Exception: # TODO(jelmer): Catch more specific exceptions? return False finally: breezy.ui.ui_factory = old_ui def _version_in_tags(version, tag_names): if version in tag_names: return True if 'v%s' % version in tag_names: return True if 'release/%s' % version in tag_names: return True if version.replace('.', '_') in tag_names: return True for tag_name in tag_names: if tag_name.endswith('_' + version): return True if tag_name.endswith('-' + version): return True if tag_name.endswith('_%s' % version.replace('.', '_')): return True return False upstream-ontologist_0.1.24.orig/upstream_ontologist/debian/__init__.py0000644000000000000000000000467514034104624023272 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA import re from typing import Optional from .. 
import UpstreamPackage


def debian_to_upstream_version(version):
    """Drop debian-specific modifiers from an upstream version string."""
    return version.upstream_version.split("+dfsg")[0]


def upstream_name_to_debian_source_name(upstream_name: str) -> str:
    if upstream_name.startswith("GNU "):
        upstream_name = upstream_name[len("GNU ") :]
    return upstream_name.lower().replace('_', '-').replace(' ', '-').replace('/', '-')


def upstream_version_to_debian_upstream_version(
    version: str, family: Optional[str] = None
) -> str:
    # TODO(jelmer)
    return version


def upstream_package_to_debian_source_name(package: UpstreamPackage) -> str:
    if package.family == "rust":
        return "rust-%s" % package.name.lower()
    if package.family == "perl":
        return "lib%s-perl" % package.name.lower().replace("::", "-")
    if package.family == "node":
        return "node-%s" % package.name.lower()
    # TODO(jelmer):
    return upstream_name_to_debian_source_name(package.name)


def upstream_package_to_debian_binary_name(package: UpstreamPackage) -> str:
    if package.family == "rust":
        return "rust-%s" % package.name.lower()
    if package.family == "perl":
        return "lib%s-perl" % package.name.lower().replace("::", "-")
    if package.family == "node":
        return "node-%s" % package.name.lower()
    # TODO(jelmer):
    return package.name.lower().replace('_', '-')


def compare_upstream_versions(family, version1, version2):
    raise NotImplementedError


package_name_re = re.compile("[a-z0-9][a-z0-9+-.]+")


def valid_debian_package_name(name):
    return bool(package_name_re.fullmatch(name))

upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/__init__.py0000644000000000000000000000206514023160202023171 0ustar00#!/usr/bin/python
# Copyright (C) 2018 Jelmer Vernooij
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

import unittest


def test_suite():
    names = [
        "upstream_ontologist",
        "vcs",
    ]
    module_names = [__name__ + ".test_" + name for name in names]
    module_names.append(__name__ + ".test_readme.test_suite")
    loader = unittest.TestLoader()
    return loader.loadTestsFromNames(module_names)

upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/0000755000000000000000000000000014023143032023305 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/test_readme.py0000644000000000000000000000700214051270175023736 0ustar00#!/usr/bin/python
# Copyright (C) 2019 Jelmer Vernooij
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """Tests for readme parsing.""" import os import platform from unittest import TestCase, TestSuite from upstream_ontologist.readme import ( description_from_readme_md, description_from_readme_rst, description_from_readme_plain, ) class ReadmeTestCase(TestCase): def __init__(self, path): super(ReadmeTestCase, self).__init__() self.path = path def setUp(self): super(ReadmeTestCase, self).setUp() self.maxDiff = None def runTest(self): readme_md = None readme_rst = None readme_plain = None description = None for entry in os.scandir(self.path): if entry.name.endswith('~'): continue base, ext = os.path.splitext(entry.name) if entry.name == 'description': with open(entry.path, 'r') as f: description = f.read() elif base == "README": if ext == '.md': with open(entry.path, 'r') as f: readme_md = f.read() elif ext == '.rst': with open(entry.path, 'r') as f: readme_rst = f.read() elif ext == '': with open(entry.path, 'r') as f: readme_plain = f.read() else: raise NotImplementedError(ext) else: raise NotImplementedError(ext) if readme_md is not None: try: import markdown # noqa: F401 except ModuleNotFoundError: self.skipTest( 'Skipping README.md tests, markdown not available') actual_description, unused_md = description_from_readme_md( readme_md) self.assertEqual(actual_description, description) if readme_rst is not None: if platform.python_implementation() == "PyPy": self.skipTest('Skipping README.rst tests on pypy') try: import docutils # noqa: F401 except ModuleNotFoundError: self.skipTest( 'Skipping README.rst tests, docutils not available') actual_description, unused_rst = description_from_readme_rst( readme_rst) self.assertEqual(actual_description, description) if readme_plain is not None: actual_description, unused_rst = description_from_readme_plain( readme_plain) self.assertEqual(actual_description, description) def test_suite(): suite = TestSuite() for entry in os.scandir(os.path.join(os.path.dirname(__file__), 'readme_data')): suite.addTest(ReadmeTestCase(entry.path)) return suite upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/test_upstream_ontologist.py0000644000000000000000000003135614162102635026631 0ustar00#!/usr/bin/python # Copyright (C) 2019 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """Tests for upstream_ontologist.""" import os import shutil import tempfile from unittest import ( TestCase, ) from upstream_ontologist import ( UpstreamDatum, Person, min_certainty, certainty_to_confidence, confidence_to_certainty, certainty_sufficient, ) from upstream_ontologist.guess import ( guess_repo_from_url, guess_from_package_json, guess_from_debian_watch, guess_from_r_description, bug_database_url_from_bug_submit_url, url_from_git_clone_command, url_from_fossil_clone_command, ) class TestCaseInTempDir(TestCase): def setUp(self): super(TestCaseInTempDir, self).setUp() self.testdir = tempfile.mkdtemp() os.chdir(self.testdir) self.addCleanup(shutil.rmtree, self.testdir) class GuessFromDebianWatchTests(TestCaseInTempDir): def test_empty(self): with open("watch", "w") as f: f.write( """\ # Blah """ ) self.assertEqual([], list(guess_from_debian_watch("watch", False))) def test_simple(self): with open("watch", "w") as f: f.write( """\ version=4 https://github.com/jelmer/dulwich/tags/dulwich-(.*).tar.gz """ ) self.assertEqual( [ UpstreamDatum( "Repository", "https://github.com/jelmer/dulwich", "likely", "watch" ) ], list(guess_from_debian_watch("watch", False)), ) class GuessFromPackageJsonTests(TestCaseInTempDir): def test_simple(self): with open("package.json", "w") as f: f.write( """\ { "name": "autosize", "version": "4.0.2", "author": { "name": "Jack Moore", "url": "http://www.jacklmoore.com", "email": "hello@jacklmoore.com" }, "main": "dist/autosize.js", "license": "MIT", "homepage": "http://www.jacklmoore.com/autosize", "demo": "http://www.jacklmoore.com/autosize", "repository": { "type": "git", "url": "http://github.com/jackmoore/autosize.git" } } """ ) self.assertEqual( [ UpstreamDatum("Name", "autosize", "certain"), UpstreamDatum( "Homepage", "http://www.jacklmoore.com/autosize", "certain" ), UpstreamDatum("X-License", "MIT", "certain", None), UpstreamDatum("X-Version", "4.0.2", "certain"), UpstreamDatum( "Repository", "http://github.com/jackmoore/autosize.git", "certain" ), UpstreamDatum( 'X-Author', [Person( name="Jack Moore", url="http://www.jacklmoore.com", email="hello@jacklmoore.com")], 'confident') ], list(guess_from_package_json("package.json", False)), ) def test_dummy(self): with open("package.json", "w") as f: f.write( """\ { "name": "mozillaeslintsetup", "description": "This package file is for setup of ESLint.", "repository": {}, "license": "MPL-2.0", "dependencies": { "eslint": "4.18.1", "eslint-plugin-html": "4.0.2", "eslint-plugin-mozilla": "file:tools/lint/eslint/eslint-plugin-mozilla", "eslint-plugin-no-unsanitized": "2.0.2", "eslint-plugin-react": "7.1.0", "eslint-plugin-spidermonkey-js": "file:tools/lint/eslint/eslint-plugin-spidermonkey-js" }, "devDependencies": {} } """ ) self.assertEqual( [ UpstreamDatum("Name", "mozillaeslintsetup", "certain"), UpstreamDatum( "X-Summary", "This package file is for setup of ESLint.", "certain", None, ), UpstreamDatum("X-License", "MPL-2.0", "certain", None), ], list(guess_from_package_json("package.json", False)), ) class GuessFromRDescriptionTests(TestCaseInTempDir): def test_read(self): with open("DESCRIPTION", "w") as f: f.write( """\ Package: crul Title: HTTP Client Description: A simple HTTP client, with tools for making HTTP requests, and mocking HTTP requests. 
The package is built on R6, and takes inspiration from Ruby's 'faraday' gem () The package name is a play on curl, the widely used command line tool for HTTP, and this package is built on top of the R package 'curl', an interface to 'libcurl' (). Version: 0.8.4 License: MIT + file LICENSE Authors@R: c( person("Scott", "Chamberlain", role = c("aut", "cre"), email = "myrmecocystus@gmail.com", comment = c(ORCID = "0000-0003-1444-9135")) ) URL: https://github.com/ropensci/crul (devel) https://ropenscilabs.github.io/http-testing-book/ (user manual) https://www.example.com/crul (homepage) BugReports: https://github.com/ropensci/crul/issues Encoding: UTF-8 Language: en-US Imports: curl (>= 3.3), R6 (>= 2.2.0), urltools (>= 1.6.0), httpcode (>= 0.2.0), jsonlite, mime Suggests: testthat, fauxpas (>= 0.1.0), webmockr (>= 0.1.0), knitr VignetteBuilder: knitr RoxygenNote: 6.1.1 X-schema.org-applicationCategory: Web X-schema.org-keywords: http, https, API, web-services, curl, download, libcurl, async, mocking, caching X-schema.org-isPartOf: https://ropensci.org NeedsCompilation: no Packaged: 2019-08-02 19:58:21 UTC; sckott Author: Scott Chamberlain [aut, cre] () Maintainer: Scott Chamberlain Repository: CRAN Date/Publication: 2019-08-02 20:30:02 UTC """ ) ret = guess_from_r_description("DESCRIPTION", True) self.assertEqual( list(ret), [ UpstreamDatum("Name", "crul", "certain"), UpstreamDatum("Archive", "CRAN", "certain"), UpstreamDatum( "Bug-Database", "https://github.com/ropensci/crul/issues", "certain" ), UpstreamDatum('X-Version', '0.8.4', 'certain'), UpstreamDatum('X-License', 'MIT + file LICENSE', 'certain'), UpstreamDatum('X-Summary', 'HTTP Client', 'certain'), UpstreamDatum('X-Description', """\ A simple HTTP client, with tools for making HTTP requests, and mocking HTTP requests. 
The package is built on R6, and takes inspiration from Ruby's 'faraday' gem () The package name is a play on curl, the widely used command line tool for HTTP, and this package is built on top of the R package 'curl', an interface to 'libcurl' ().""", 'certain'), UpstreamDatum( 'X-Maintainer', Person('Scott Chamberlain', email='myrmecocystus@gmail.com'), 'certain'), UpstreamDatum( "Repository", "https://github.com/ropensci/crul.git", "certain" ), UpstreamDatum("Homepage", "https://www.example.com/crul", "certain"), ], ) class GuessRepoFromUrlTests(TestCase): def test_github(self): self.assertEqual( "https://github.com/jelmer/blah", guess_repo_from_url("https://github.com/jelmer/blah"), ) self.assertEqual( "https://github.com/jelmer/blah", guess_repo_from_url("https://github.com/jelmer/blah/blob/README"), ) self.assertIs(None, guess_repo_from_url("https://github.com/jelmer")) def test_none(self): self.assertIs(None, guess_repo_from_url("https://www.jelmer.uk/")) def test_known(self): self.assertEqual( "http://code.launchpad.net/blah", guess_repo_from_url("http://code.launchpad.net/blah"), ) def test_launchpad(self): self.assertEqual( "https://code.launchpad.net/bzr", guess_repo_from_url("http://launchpad.net/bzr/+download"), ) def test_savannah(self): self.assertEqual( "https://git.savannah.gnu.org/git/auctex.git", guess_repo_from_url("https://git.savannah.gnu.org/git/auctex.git"), ) self.assertIs( None, guess_repo_from_url("https://git.savannah.gnu.org/blah/auctex.git") ) def test_bitbucket(self): self.assertEqual( "https://bitbucket.org/fenics-project/dolfin", guess_repo_from_url( "https://bitbucket.org/fenics-project/dolfin/downloads/" ), ) class BugDbFromBugSubmitUrlTests(TestCase): def test_github(self): self.assertEqual( "https://github.com/dulwich/dulwich/issues", bug_database_url_from_bug_submit_url( "https://github.com/dulwich/dulwich/issues/new" ), ) def test_sf(self): self.assertEqual( "https://sourceforge.net/p/dulwich/bugs", bug_database_url_from_bug_submit_url( "https://sourceforge.net/p/dulwich/bugs/new" ), ) class UrlFromGitCloneTests(TestCase): def test_guess_simple(self): self.assertEqual( "https://github.com/jelmer/blah", url_from_git_clone_command(b"git clone https://github.com/jelmer/blah"), ) self.assertEqual( "https://github.com/jelmer/blah", url_from_git_clone_command( b"git clone https://github.com/jelmer/blah target" ), ) def test_args(self): self.assertEqual( "https://github.com/jelmer/blah", url_from_git_clone_command( b"git clone -b foo https://github.com/jelmer/blah target" ), ) class UrlFromFossilCloneTests(TestCase): def test_guess_simple(self): self.assertEqual( "https://example.com/repo/blah", url_from_fossil_clone_command( b"fossil clone https://example.com/repo/blah blah.fossil" ), ) class CertaintySufficientTests(TestCase): def test_sufficient(self): self.assertTrue(certainty_sufficient("certain", "certain")) self.assertTrue(certainty_sufficient("certain", "possible")) self.assertTrue(certainty_sufficient("certain", None)) self.assertTrue(certainty_sufficient("possible", None)) # TODO(jelmer): Should we really always allow unknown certainties # through? 
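        # (Added note: as the assertion below pins down, an unknown (None)
        # actual certainty is currently always accepted, even when the
        # required minimum is "certain".)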
self.assertTrue(certainty_sufficient(None, "certain")) def test_insufficient(self): self.assertFalse(certainty_sufficient("possible", "certain")) class CertaintyVsConfidenceTests(TestCase): def test_confidence_to_certainty(self): self.assertEqual("certain", confidence_to_certainty(0)) self.assertEqual("confident", confidence_to_certainty(1)) self.assertEqual("likely", confidence_to_certainty(2)) self.assertEqual("possible", confidence_to_certainty(3)) self.assertEqual("unknown", confidence_to_certainty(None)) self.assertRaises(ValueError, confidence_to_certainty, 2000) def test_certainty_to_confidence(self): self.assertEqual(0, certainty_to_confidence("certain")) self.assertEqual(1, certainty_to_confidence("confident")) self.assertEqual(2, certainty_to_confidence("likely")) self.assertEqual(3, certainty_to_confidence("possible")) self.assertIs(None, certainty_to_confidence("unknown")) self.assertRaises(ValueError, certainty_to_confidence, "blah") class MinimumCertaintyTests(TestCase): def test_minimum(self): self.assertEqual("certain", min_certainty([])) self.assertEqual("certain", min_certainty(["certain"])) self.assertEqual("possible", min_certainty(["possible"])) self.assertEqual("possible", min_certainty(["possible", "certain"])) self.assertEqual("likely", min_certainty(["likely", "certain"])) self.assertEqual("possible", min_certainty(["likely", "certain", "possible"])) upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/test_vcs.py0000644000000000000000000001115014034104624023267 0ustar00#!/usr/bin/python3 # Copyright (C) 2018 Jelmer Vernooij # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA from unittest import TestCase from upstream_ontologist.vcs import ( plausible_url, fixup_rcp_style_git_repo_url, is_gitlab_site, canonical_git_repo_url, find_public_repo_url, guess_repo_from_url, ) class PlausibleUrlTests(TestCase): def test_url(self): self.assertFalse(plausible_url("the")) self.assertFalse(plausible_url("1")) self.assertTrue(plausible_url("git@foo:blah")) self.assertTrue(plausible_url("git+ssh://git@foo/blah")) self.assertTrue(plausible_url("https://foo/blah")) class TestIsGitLabSite(TestCase): def test_not_gitlab(self): self.assertFalse(is_gitlab_site("foo.example.com")) self.assertFalse(is_gitlab_site("github.com")) self.assertFalse(is_gitlab_site(None)) def test_gitlab(self): self.assertTrue(is_gitlab_site("gitlab.somehost.com")) self.assertTrue(is_gitlab_site("salsa.debian.org")) class CanonicalizeVcsUrlTests(TestCase): def test_github(self): self.assertEqual( "https://github.com/jelmer/example.git", canonical_git_repo_url("https://github.com/jelmer/example"), ) def test_salsa(self): self.assertEqual( "https://salsa.debian.org/jelmer/example.git", canonical_git_repo_url("https://salsa.debian.org/jelmer/example"), ) self.assertEqual( "https://salsa.debian.org/jelmer/example.git", canonical_git_repo_url("https://salsa.debian.org/jelmer/example.git"), ) class FindPublicVcsUrlTests(TestCase): def test_github(self): self.assertEqual( "https://github.com/jelmer/example", find_public_repo_url("ssh://git@github.com/jelmer/example"), ) self.assertEqual( "https://github.com/jelmer/example", find_public_repo_url("https://github.com/jelmer/example"), ) self.assertEqual( "https://github.com/jelmer/example", find_public_repo_url("git@github.com:jelmer/example"), ) def test_salsa(self): self.assertEqual( "https://salsa.debian.org/jelmer/example", find_public_repo_url("ssh://salsa.debian.org/jelmer/example"), ) self.assertEqual( "https://salsa.debian.org/jelmer/example", find_public_repo_url("https://salsa.debian.org/jelmer/example"), ) class FixupRcpStyleUrlTests(TestCase): def test_fixup(self): try: import breezy # noqa: F401 except ModuleNotFoundError: self.skipTest("breezy is not available") self.assertEqual( "ssh://github.com/jelmer/example", fixup_rcp_style_git_repo_url("github.com:jelmer/example"), ) self.assertEqual( "ssh://git@github.com/jelmer/example", fixup_rcp_style_git_repo_url("git@github.com:jelmer/example"), ) def test_leave(self): try: import breezy # noqa: F401 except ModuleNotFoundError: self.skipTest("breezy is not available") self.assertEqual( "https://salsa.debian.org/jelmer/example", fixup_rcp_style_git_repo_url("https://salsa.debian.org/jelmer/example"), ) self.assertEqual( "ssh://git@salsa.debian.org/jelmer/example", fixup_rcp_style_git_repo_url("ssh://git@salsa.debian.org/jelmer/example"), ) class GuessRepoFromUrlTests(TestCase): def test_travis_ci_org(self): self.assertEqual( 'https://github.com/jelmer/dulwich', guess_repo_from_url( 'https://travis-ci.org/jelmer/dulwich')) def test_coveralls(self): self.assertEqual( 'https://github.com/jelmer/dulwich', guess_repo_from_url( 'https://coveralls.io/r/jelmer/dulwich')) upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/aiozipkin/0000755000000000000000000000000014050514157025314 
5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/argparse/0000755000000000000000000000000014032077372025126 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/bitlbee/0000755000000000000000000000000014034111434024716 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/bup/0000755000000000000000000000000014025667057024117 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/cbor2/0000755000000000000000000000000014025766244024336 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/django-ical/0000755000000000000000000000000014024376244025473 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/dulwich/0000755000000000000000000000000014023145612024752 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/empty/0000755000000000000000000000000014024657402024457 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/erbium/0000755000000000000000000000000014023143032024570 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/isso/0000755000000000000000000000000014023433157024274 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/jadx/0000755000000000000000000000000014027625472024254 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/jupyter-client/0000755000000000000000000000000014025667057026307 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/libtrace/0000755000000000000000000000000014075351260025105 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl-timedate/0000755000000000000000000000000014035402357026054 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/0000755000000000000000000000000014025707126027635 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/pylint-flask/0000755000000000000000000000000014025160634025733 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/python-icalendar/0000755000000000000000000000000014025777046026572 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/python-rsa/0000755000000000000000000000000014032216722025420 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/ruby-columnize/0000755000000000000000000000000014037755137026315 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/ruby-sha3/0000755000000000000000000000000014027632503025134 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/samba/0000755000000000000000000000000014024131056024374 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/saneyaml/0000755000000000000000000000000014023146352025126 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/sfcgal/0000755000000000000000000000000014023143032024544 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/statuscake/0000755000000000000000000000000014024413051025456 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/text-worddif/0000755000000000000000000000000014030712662025736 5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/wandio/0000755000000000000000000000000014075346463024612 
5ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/aiozipkin/README.rst0000644000000000000000000001474314050514157027014 0ustar00aiozipkin ========= .. image:: https://github.com/aio-libs/aiozipkin/workflows/CI/badge.svg :target: https://github.com/aio-libs/aiozipkin/actions?query=workflow%3ACI .. image:: https://codecov.io/gh/aio-libs/aiozipkin/branch/master/graph/badge.svg :target: https://codecov.io/gh/aio-libs/aiozipkin .. image:: https://api.codeclimate.com/v1/badges/1ff813d5cad2d702cbf1/maintainability :target: https://codeclimate.com/github/aio-libs/aiozipkin/maintainability :alt: Maintainability .. image:: https://img.shields.io/pypi/v/aiozipkin.svg :target: https://pypi.python.org/pypi/aiozipkin .. image:: https://readthedocs.org/projects/aiozipkin/badge/?version=latest :target: http://aiozipkin.readthedocs.io/en/latest/?badge=latest :alt: Documentation Status .. image:: https://badges.gitter.im/Join%20Chat.svg :target: https://gitter.im/aio-libs/Lobby :alt: Chat on Gitter **aiozipkin** is Python 3.6+ module that adds distributed tracing capabilities from asyncio_ applications with zipkin (http://zipkin.io) server instrumentation. zipkin_ is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper. Applications are instrumented with **aiozipkin** report timing data to zipkin_. The Zipkin UI also presents a Dependency diagram showing how many traced requests went through each application. If you are troubleshooting latency problems or errors, you can filter or sort all traces based on the application, length of trace, annotation, or timestamp. .. image:: https://raw.githubusercontent.com/aio-libs/aiozipkin/master/docs/zipkin_animation2.gif :alt: zipkin ui animation Features ======== * Distributed tracing capabilities to **asyncio** applications. * Support zipkin_ ``v2`` protocol. * Easy to use API. * Explicit context handling, no thread local variables. * Can work with jaeger_ and stackdriver_ through zipkin compatible API. zipkin vocabulary ----------------- Before code lets learn important zipkin_ vocabulary, for more detailed information please visit https://zipkin.io/pages/instrumenting .. image:: https://raw.githubusercontent.com/aio-libs/aiozipkin/master/docs/zipkin_glossary.png :alt: zipkin ui glossary * **Span** represents one specific method (RPC) call * **Annotation** string data associated with a particular timestamp in span * **Tag** - key and value associated with given span * **Trace** - collection of spans, related to serving particular request Simple example -------------- .. 
code:: python

    import asyncio
    import aiozipkin as az


    async def run():
        # setup zipkin client
        zipkin_address = 'http://127.0.0.1:9411/api/v2/spans'
        endpoint = az.create_endpoint(
            "simple_service", ipv4="127.0.0.1", port=8080)
        tracer = await az.create(zipkin_address, endpoint, sample_rate=1.0)

        # create and setup new trace
        with tracer.new_trace(sampled=True) as span:
            # give a name for the span
            span.name("Slow SQL")
            # tag with relevant information
            span.tag("span_type", "root")
            # indicate that this is client span
            span.kind(az.CLIENT)
            # make timestamp and name it with START SQL query
            span.annotate("START SQL SELECT * FROM")
            # imitate long SQL query
            await asyncio.sleep(0.1)
            # make other timestamp and name it "END SQL"
            span.annotate("END SQL")

        await tracer.close()

    if __name__ == "__main__":
        loop = asyncio.get_event_loop()
        loop.run_until_complete(run())

aiohttp example
---------------

*aiozipkin* includes *aiohttp* server instrumentation; for this, create a
`web.Application()` as usual and install the aiozipkin plugin:

.. code:: python

    import aiozipkin as az
    from aiohttp import web


    async def init_app():
        host, port = "127.0.0.1", 8080
        zipkin_address = 'http://127.0.0.1:9411/api/v2/spans'
        app = web.Application()
        endpoint = az.create_endpoint("AIOHTTP_SERVER", ipv4=host, port=port)
        tracer = await az.create(zipkin_address, endpoint, sample_rate=1.0)
        az.setup(app, tracer)

That is it: the plugin adds middleware that tries to fetch the tracing
context from headers and create or join a new trace. Optionally, on the
client side, you can add propagation headers in order to force tracing and
to see the network latency between client and server.

.. code:: python

    import aiozipkin as az

    endpoint = az.create_endpoint("AIOHTTP_CLIENT")
    tracer = await az.create(zipkin_address, endpoint)

    with tracer.new_trace() as span:
        span.kind(az.CLIENT)
        headers = span.context.make_headers()
        host = "http://127.0.0.1:8080/api/v1/posts/{}".format(i)
        resp = await session.get(host, headers=headers)
        await resp.text()

Documentation
-------------

http://aiozipkin.readthedocs.io/

Installation
------------

The installation process is simple, just::

    $ pip install aiozipkin

Support of other collectors
===========================

**aiozipkin** can work with any other zipkin_ compatible service; currently
we have tested it with jaeger_ and stackdriver_.

Jaeger support
--------------

jaeger_ supports the zipkin_ span format, so it is possible to use
*aiozipkin* with a jaeger_ server. You just need to specify the *jaeger*
server address and it should work out of the box; there is no need to run a
local zipkin server. For more information, see the tests and the jaeger_
documentation.

.. image:: https://raw.githubusercontent.com/aio-libs/aiozipkin/master/docs/jaeger.png
  :alt: jaeger ui animation

Stackdriver support
-------------------

Google stackdriver_ supports the zipkin_ span format, so it is possible to
use *aiozipkin* with this google_ service. In order to make this work, you
need to set up a zipkin service locally that will send traces to the cloud.
See the google_ cloud documentation for how to set up a zipkin collector:

.. image:: https://raw.githubusercontent.com/aio-libs/aiozipkin/master/docs/stackdriver.png
  :alt: stackdriver ui animation

Requirements
------------

* Python_ 3.6+
* aiohttp_

.. _PEP492: https://www.python.org/dev/peps/pep-0492/
.. _Python: https://www.python.org
.. _aiohttp: https://github.com/KeepSafe/aiohttp
.. _asyncio: http://docs.python.org/3.5/library/asyncio.html
.. _uvloop: https://github.com/MagicStack/uvloop
.. _zipkin: http://zipkin.io
.. _jaeger: http://jaeger.readthedocs.io/en/latest/
.. _stackdriver: https://cloud.google.com/stackdriver/
..
_google: https://cloud.google.com/trace/docs/zipkin upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/aiozipkin/description0000644000000000000000000000136314050514157027565 0ustar00aiozipkin is Python 3.6+ module that adds distributed tracing capabilities from asyncio applications with zipkin (http://zipkin.io) server instrumentation. zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper. Applications are instrumented with aiozipkin report timing data to zipkin. The Zipkin UI also presents a Dependency diagram showing how many traced requests went through each application. If you are troubleshooting latency problems or errors, you can filter or sort all traces based on the application, length of trace, annotation, or timestamp. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/argparse/README.rst0000644000000000000000000004111414032077372026616 0ustar00ConfigArgParse -------------- .. image:: https://img.shields.io/pypi/v/ConfigArgParse.svg?style=flat :alt: PyPI version :target: https://pypi.python.org/pypi/ConfigArgParse .. image:: https://img.shields.io/pypi/pyversions/ConfigArgParse.svg :alt: Supported Python versions :target: https://pypi.python.org/pypi/ConfigArgParse .. image:: https://travis-ci.org/bw2/ConfigArgParse.svg?branch=master :alt: Travis CI build :target: https://travis-ci.org/bw2/ConfigArgParse Overview ~~~~~~~~ Applications with more than a handful of user-settable options are best configured through a combination of command line args, config files, hard-coded defaults, and in some cases, environment variables. Python's command line parsing modules such as argparse have very limited support for config files and environment variables, so this module extends argparse to add these features. Available on PyPI: http://pypi.python.org/pypi/ConfigArgParse .. image:: https://travis-ci.org/bw2/ConfigArgParse.svg?branch=master :target: https://travis-ci.org/bw2/ConfigArgParse Features ~~~~~~~~ - command-line, config file, env var, and default settings can now be defined, documented, and parsed in one go using a single API (if a value is specified in more than one way then: command line > environment variables > config file values > defaults) - config files can have .ini or .yaml style syntax (eg. key=value or key: value) - user can provide a config file via a normal-looking command line arg (eg. -c path/to/config.txt) rather than the argparse-style @config.txt - one or more default config file paths can be specified (eg. ['/etc/bla.conf', '~/.my_config'] ) - all argparse functionality is fully supported, so this module can serve as a drop-in replacement (verified by argparse unittests). - env vars and config file keys & syntax are automatically documented in the -h help message - new method :code:`print_values()` can report keys & values and where they were set (eg. command line, env var, config file, or default). - lite-weight (no 3rd-party library dependencies except (optionally) PyYAML) - extensible (:code:`ConfigFileParser` can be subclassed to define a new config file format) - unittested by running the unittests that came with argparse but on configargparse, and using tox to test with Python 2.7 and Python 3+ Example ~~~~~~~ *config_test.py*: Script that defines 4 options and a positional arg and then parses and prints the values. 
Also, it prints out the help message as well as the string produced by :code:`format_values()` to show what they look like. .. code:: py import configargparse p = configargparse.ArgParser(default_config_files=['/etc/app/conf.d/*.conf', '~/.my_settings']) p.add('-c', '--my-config', required=True, is_config_file=True, help='config file path') p.add('--genome', required=True, help='path to genome file') # this option can be set in a config file because it starts with '--' p.add('-v', help='verbose', action='store_true') p.add('-d', '--dbsnp', help='known variants .vcf', env_var='DBSNP_PATH') # this option can be set in a config file because it starts with '--' p.add('vcf', nargs='+', help='variant file(s)') options = p.parse_args() print(options) print("----------") print(p.format_help()) print("----------") print(p.format_values()) # useful for logging where different settings came from *config.txt:* Since the script above set the config file as required=True, lets create a config file to give it: .. code:: py # settings for config_test.py genome = HCMV # cytomegalovirus genome dbsnp = /data/dbsnp/variants.vcf *command line:* Now run the script and pass it the config file: .. code:: bash DBSNP_PATH=/data/dbsnp/variants_v2.vcf python config_test.py --my-config config.txt f1.vcf f2.vcf *output:* Here is the result: .. code:: bash Namespace(dbsnp='/data/dbsnp/variants_v2.vcf', genome='HCMV', my_config='config.txt', v=False, vcf=['f1.vcf', 'f2.vcf']) ---------- usage: config_test.py [-h] -c MY_CONFIG --genome GENOME [-v] [-d DBSNP] vcf [vcf ...] Args that start with '--' (eg. --genome) can also be set in a config file (/etc/app/conf.d/*.conf or ~/.my_settings or specified via -c). Config file syntax allows: key=value, flag=true, stuff=[a,b,c] (for details, see syntax at https://goo.gl/R74nmi). If an arg is specified in more than one place, then commandline values override environment variables which override config file values which override defaults. positional arguments: vcf variant file(s) optional arguments: -h, --help show this help message and exit -c MY_CONFIG, --my-config MY_CONFIG config file path --genome GENOME path to genome file -v verbose -d DBSNP, --dbsnp DBSNP known variants .vcf [env var: DBSNP_PATH] ---------- Command Line Args: --my-config config.txt f1.vcf f2.vcf Environment Variables: DBSNP_PATH: /data/dbsnp/variants_v2.vcf Config File (config.txt): genome: HCMV Special Values ~~~~~~~~~~~~~~ Under the hood, configargparse handles environment variables and config file values by converting them to their corresponding command line arg. For example, "key = value" will be processed as if "--key value" was specified on the command line. Also, the following special values (whether in a config file or an environment variable) are handled in a special way to support booleans and lists: - :code:`key = true` is handled as if "--key" was specified on the command line. In your python code this key must be defined as a boolean flag (eg. action="store_true" or similar). - :code:`key = [value1, value2, ...]` is handled as if "--key value1 --key value2" etc. was specified on the command line. In your python code this key must be defined as a list (eg. action="append"). Config File Syntax ~~~~~~~~~~~~~~~~~~ Only command line args that have a long version (eg. one that starts with '--') can be set in a config file. For example, "--color" can be set by putting "color=green" in a config file. 
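As a quick illustration of this rule, here is a minimal sketch (the option
name and config file path below are hypothetical, not taken from the
examples above):

.. code:: py

    import configargparse

    # "--color" has a long form, so a config file line "color = green"
    # (or "color: green") can set it; a short-only flag like "-v" cannot
    # be set from a config file.
    p = configargparse.ArgParser(default_config_files=['~/.myapp.conf'])
    p.add('--color', default='blue', help='output color')
    p.add('-v', action='store_true', help='verbose (command line only)')

    options = p.parse_args()
    print(options.color)  # "green" if ~/.myapp.conf contains "color = green"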
The config file syntax depends on the constructor arg:
:code:`config_file_parser_class` which can be set to one of the provided
classes: :code:`DefaultConfigFileParser`, :code:`YAMLConfigFileParser`,
:code:`ConfigparserConfigFileParser` or to your own subclass of the
:code:`ConfigFileParser` abstract class.

*DefaultConfigFileParser* - the full range of valid syntax is:

.. code:: yaml

    # this is a comment
    ; this is also a comment (.ini style)
    ---            # lines that start with --- are ignored (yaml style)
    -------------------
    [section]      # .ini-style section names are treated as comments

    # how to specify a key-value pair (all of these are equivalent):
    name value     # key is case sensitive: "Name" isn't "name"
    name = value   # (.ini style)  (white space is ignored, so name = value same as name=value)
    name: value    # (yaml style)
    --name value   # (argparse style)

    # how to set a flag arg (eg. arg which has action="store_true")
    --name
    name
    name = True    # "True" and "true" are the same

    # how to specify a list arg (eg. arg which has action="append")
    fruit = [apple, orange, lemon]
    indexes = [1, 12, 35 , 40]

*YAMLConfigFileParser* - allows a subset of YAML syntax (http://goo.gl/VgT2DU)

.. code:: yaml

    # a comment
    name1: value
    name2: true    # "True" and "true" are the same
    fruit: [apple, orange, lemon]
    indexes: [1, 12, 35, 40]

*ConfigparserConfigFileParser* - allows a subset of python's configparser
module syntax (https://docs.python.org/3.7/library/configparser.html). In
particular the following configparser options are set:

.. code:: py

    config = configparser.ConfigParser(
        delimiters=("=",":"),
        allow_no_value=False,
        comment_prefixes=("#",";"),
        inline_comment_prefixes=("#",";"),
        strict=True,
        empty_lines_in_values=False,
    )

Once configparser parses the config file all section names are removed, thus
all keys must have unique names regardless of which INI section they are
defined under. Also, any keys which have python list syntax are converted to
lists by evaluating them as python code using ast.literal_eval
(https://docs.python.org/3/library/ast.html#ast.literal_eval). To facilitate
this all multi-line values are converted to single-line values. Thus
multi-line string values will have all new-lines converted to spaces.

Note, since key-value pairs that have python dictionary syntax are saved as
single-line strings, even if formatted across multiple lines in the config
file, dictionaries can be read in and converted to valid python dictionaries
with PyYAML's safe_load. Example given below:

.. code:: py

    # inside your config file (e.g. config.ini)
    [section1]  # INI sections treated as comments
    system1_settings: { # start of multi-line dictionary
        'a':True,
        'b':[2, 4, 8, 16],
        'c':{'start':0, 'stop':1000},
        'd':'experiment 32 testing simulation with parameter a on'
        } # end of multi-line dictionary value

    .......

    # in your configargparse setup
    import configargparse
    import yaml

    parser = configargparse.ArgParser(
        config_file_parser_class=configargparse.ConfigparserConfigFileParser
    )
    parser.add_argument('--system1_settings', type=yaml.safe_load)

    args = parser.parse_args()
    # now args.system1_settings is a valid python dict

ArgParser Singletons
~~~~~~~~~~~~~~~~~~~~~~~~~

To make it easier to configure different modules in an application,
configargparse provides globally-available ArgumentParser instances via
configargparse.get_argument_parser('name') (similar to
logging.getLogger('name')).

Here is an example of an application with a utils module that also defines
and retrieves its own command-line args.

*main.py*

..
code:: py import configargparse import utils p = configargparse.get_argument_parser() p.add_argument("-x", help="Main module setting") p.add_argument("--m-setting", help="Main module setting") options = p.parse_known_args() # using p.parse_args() here may raise errors. *utils.py* .. code:: py import configargparse p = configargparse.get_argument_parser() p.add_argument("--utils-setting", help="Config-file-settable option for utils") if __name__ == "__main__": options = p.parse_known_args() Help Formatters ~~~~~~~~~~~~~~~ :code:`ArgumentDefaultsRawHelpFormatter` is a new HelpFormatter that both adds default values AND disables line-wrapping. It can be passed to the constructor: :code:`ArgParser(.., formatter_class=ArgumentDefaultsRawHelpFormatter)` Aliases ~~~~~~~ The configargparse.ArgumentParser API inherits its class and method names from argparse and also provides the following shorter names for convenience: - p = configargparse.get_arg_parser() # get global singleton instance - p = configargparse.get_parser() - p = configargparse.ArgParser() # create a new instance - p = configargparse.Parser() - p.add_arg(..) - p.add(..) - options = p.parse(..) HelpFormatters: - RawFormatter = RawDescriptionHelpFormatter - DefaultsFormatter = ArgumentDefaultsHelpFormatter - DefaultsRawFormatter = ArgumentDefaultsRawHelpFormatter Design Notes ~~~~~~~~~~~~ Unit tests: tests/test_configargparse.py contains custom unittests for features specific to this module (such as config file and env-var support), as well as a hook to load and run argparse unittests (see the built-in test.test_argparse module) but on configargparse in place of argparse. This ensures that configargparse will work as a drop in replacement for argparse in all usecases. Previously existing modules (PyPI search keywords: config argparse): - argparse (built-in module Python v2.7+) - Good: - fully featured command line parsing - can read args from files using an easy to understand mechanism - Bad: - syntax for specifying config file path is unusual (eg. @file.txt)and not described in the user help message. - default config file syntax doesn't support comments and is unintuitive (eg. --namevalue) - no support for environment variables - ConfArgParse v1.0.15 (https://pypi.python.org/pypi/ConfArgParse) - Good: - extends argparse with support for config files parsed by ConfigParser - clear documentation in README - Bad: - config file values are processed using ArgumentParser.set_defaults(..) which means "required" and "choices" are not handled as expected. For example, if you specify a required value in a config file, you still have to specify it again on the command line. 
- doesn't work with Python 3 yet - no unit tests, code not well documented - appsettings v0.5 (https://pypi.python.org/pypi/appsettings) - Good: - supports config file (yaml format) and env_var parsing - supports config-file-only setting for specifying lists and dicts - Bad: - passes in config file and env settings via parse_args namespace param - tests not finished and don't work with Python 3 (import StringIO) - argparse_config v0.5.1 (https://pypi.python.org/pypi/argparse_config) - Good: - similar features to ConfArgParse v1.0.15 - Bad: - doesn't work with Python 3 (error during pip install) - yconf v0.3.2 - (https://pypi.python.org/pypi/yconf) - features and interface not that great - hieropt v0.3 - (https://pypi.python.org/pypi/hieropt) - doesn't appear to be maintained, couldn't find documentation - configurati v0.2.3 - (https://pypi.python.org/pypi/configurati) - Good: - JSON, YAML, or Python configuration files - handles rich data structures such as dictionaries - can group configuration names into sections (like .ini files) - Bad: - doesn't work with Python 3 - 2+ years since last release to PyPI - apparently unmaintained Design choices: 1. all options must be settable via command line. Having options that can only be set using config files or env. vars adds complexity to the API, and is not a useful enough feature since the developer can split up options into sections and call a section "config file keys", with command line args that are just "--" plus the config key. 2. config file and env. var settings should be processed by appending them to the command line (another benefit of #1). This is an easy-to-implement solution and implicitly takes care of checking that all "required" args are provied, etc., plus the behavior should be easy for users to understand. 3. configargparse shouldn't override argparse's convert_arg_line_to_args method so that all argparse unit tests can be run on configargparse. 4. in terms of what to allow for config file keys, the "dest" value of an option can't serve as a valid config key because many options can have the same dest. Instead, since multiple options can't use the same long arg (eg. "--long-arg-x"), let the config key be either "--long-arg-x" or "long-arg-x". This means the developer can allow only a subset of the command-line args to be specified via config file (eg. short args like -x would be excluded). Also, that way config keys are automatically documented whenever the command line args are documented in the help message. 5. don't force users to put config file settings in the right .ini [sections]. This doesn't have a clear benefit since all options are command-line settable, and so have a globally unique key anyway. Enforcing sections just makes things harder for the user and adds complexity to the implementation. 6. if necessary, config-file-only args can be added later by implementing a separate add method and using the namespace arg as in appsettings_v0.5 Relevant sites: - http://stackoverflow.com/questions/6133517/parse-config-file-environment-and-command-line-arguments-to-get-a-single-coll - http://tricksntweaks.blogspot.com/2013_05_01_archive.html - http://www.youtube.com/watch?v=vvCwqHgZJc8#t=35 .. |Travis CI Status for bw2/ConfigArgParse| image:: https://travis-ci.org/bw2/ConfigArgParse.svg?branch=master Versioning ~~~~~~~~~~ This software follows `Semantic Versioning`_ .. 
_Semantic Versioning: http://semver.org/ upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/argparse/description0000644000000000000000000000060114032077372027371 0ustar00Applications with more than a handful of user-settable options are best configured through a combination of command line args, config files, hard-coded defaults, and in some cases, environment variables. Python's command line parsing modules such as argparse have very limited support for config files and environment variables, so this module extends argparse to add these features. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/bitlbee/README.md0000644000000000000000000000347614034111434026207 0ustar00# BitlBee ![](https://www.bitlbee.org/style/logo.png) [![Build Status](https://travis-ci.org/bitlbee/bitlbee.svg)](https://travis-ci.org/bitlbee/bitlbee) [![Coverity Scan Build Status](https://scan.coverity.com/projects/4028/badge.svg)](https://scan.coverity.com/projects/4028) An IRC to other chat networks gateway Main website: https://www.bitlbee.org/ Bug tracker: https://bugs.bitlbee.org/ Wiki: https://wiki.bitlbee.org/ License: GPLv2 ## Installation BitlBee is available in the package managers of most distros. For debian/ubuntu/etc you may use the nightly APT repository: https://code.bitlbee.org/debian/ You can also use a public server (such as `im.bitlbee.org`) instead of installing it: https://www.bitlbee.org/main.php/servers.html ## Compiling If you wish to compile it yourself, ensure you have the following packages and their headers: * glib 2.32 or newer (not to be confused with glibc) * gnutls * python 2 or 3 (for the user guide) Some optional features have additional dependencies, such as libpurple, libotr, libevent, etc. NSS and OpenSSL are also available but not as well supported as GnuTLS. Once you have the dependencies, building should be a matter of: ./configure make sudo make install ## Development tips * To enable debug symbols: `./configure --debug=1` * To get some additional debug output for some protocols: `BITLBEE_DEBUG=1 ./bitlbee -Dnv` * Use github pull requests against the 'develop' branch to submit patches. * The coding style based on K&R with tabs and 120 columns. See `./doc/uncrustify.cfg` for the parameters used to reformat the code. * Mappings of bzr revisions to git commits (for historical purposes) are available in `./doc/git-bzr-rev-map` * See also `./doc/README` and `./doc/HACKING` ## Help? Join **#BitlBee** on OFTC (**irc.oftc.net**) (OFTC, *not* freenode!) upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/bitlbee/description0000644000000000000000000000004614034111434027164 0ustar00An IRC to other chat networks gateway upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/bup/README.md0000644000000000000000000006176014025667057025410 0ustar00bup: It backs things up ======================= bup is a program that backs things up. It's short for "backup." Can you believe that nobody else has named an open source program "bup" after all this time? Me neither. Despite its unassuming name, bup is pretty cool. To give you an idea of just how cool it is, I wrote you this poem: Bup is teh awesome What rhymes with awesome? I guess maybe possum But that's irrelevant. Hmm. Did that help? Maybe prose is more useful after all. Reasons bup is awesome ---------------------- bup has a few advantages over other backup software: - It uses a rolling checksum algorithm (similar to rsync) to split large files into chunks. 
  The most useful result of this is you can back up huge virtual machine
  (VM) disk images, databases, and XML files incrementally, even though
  they're typically all in one huge file, and not use tons of disk space
  for multiple versions. (A minimal sketch of this chunking idea appears
  right after this section.)

- It uses the packfile format from git (the open source version control
  system), so you can access the stored data even if you don't like bup's
  user interface.

- Unlike git, it writes packfiles *directly* (instead of having a separate
  garbage collection / repacking stage) so it's fast even with gratuitously
  huge amounts of data. bup's improved index formats also allow you to
  track far more filenames than git (millions) and keep track of far more
  objects (hundreds or thousands of gigabytes).

- Data is "automagically" shared between incremental backups without having
  to know which backup is based on which other one - even if the backups
  are made from two different computers that don't even know about each
  other. You just tell bup to back stuff up, and it saves only the minimum
  amount of data needed.

- You can back up directly to a remote bup server, without needing tons of
  temporary disk space on the computer being backed up. And if your backup
  is interrupted halfway through, the next run will pick up where you left
  off. And it's easy to set up a bup server: just install bup on any
  machine where you have ssh access.

- Bup can use "par2" redundancy to recover corrupted backups even if your
  disk has undetected bad sectors.

- Even when a backup is incremental, you don't have to worry about
  restoring the full backup, then each of the incrementals in turn; an
  incremental backup *acts* as if it's a full backup, it just takes less
  disk space.

- You can mount your bup repository as a FUSE filesystem and access the
  content that way, and even export it over Samba.

- It's written in python (with some C parts to make it faster) so it's easy
  for you to extend and maintain.

Reasons you might want to avoid bup
-----------------------------------

- It's not remotely as well tested as something like tar, so it's more
  likely to eat your data. It's also missing some probably-critical
  features, though fewer than it used to be.

- It requires python 3.7 or newer (or 2.7 for a bit longer), a C compiler,
  and an installed git version >= 1.5.6. It also requires par2 if you want
  fsck to be able to generate the information needed to recover from some
  types of corruption. While python 2.7 is still supported, please make
  plans to upgrade. Python 2 upstream support ended on 2020-01-01, and we
  plan to drop support soon too.

- It currently only works on Linux, FreeBSD, NetBSD, OS X >= 10.4, Solaris,
  or Windows (with Cygwin, and WSL). Patches to support other platforms are
  welcome.

- Until resolved, a [glibc bug](https://sourceware.org/bugzilla/show_bug.cgi?id=26034)
  might cause bup to crash on startup for some (unusual) command line
  argument values, when bup is configured to use Python 3.

- Any items in "Things that are stupid" below.
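To make the rolling-checksum idea from the list above concrete, here is a
minimal illustrative sketch of content-defined chunking. This is *not* bup's
actual code (bup uses a stronger rollsum and different window/boundary
parameters); it only shows why an insertion near the start of a file leaves
most later chunks untouched:

```python
WINDOW = 64      # sliding-window size in bytes (illustrative choice)
MASK = 0x0FFF    # boundary when the low 12 bits are all ones (~4 KiB chunks)

def rolling_chunks(data: bytes):
    """Yield content-defined chunks of data."""
    start = 0
    acc = 0                       # simple additive rolling sum (a stand-in)
    window = bytearray()
    for i, byte in enumerate(data):
        window.append(byte)
        acc += byte
        if len(window) > WINDOW:
            acc -= window.pop(0)  # slide the window: drop the oldest byte
        # Boundaries depend only on the last WINDOW bytes, not on offsets,
        # so the same content tends to produce the same chunk boundaries.
        if (acc & MASK) == MASK and i + 1 - start >= WINDOW:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]        # final partial chunk

if __name__ == "__main__":
    import random
    random.seed(0)
    a = bytes(random.randrange(256) for _ in range(1 << 16))
    b = b"xyz" + a  # the same content with 3 bytes inserted at the front
    ca, cb = list(rolling_chunks(a)), list(rolling_chunks(b))
    # Most chunks should be identical, since boundaries re-align quickly
    # after the insertion point.
    print(len(ca), len(cb), len(set(ca) & set(cb)))
```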
Notable changes introduced by a release ======================================= - Changes in 0.32 as compared to 0.31 - Changes in 0.31 as compared to 0.30.1 - Changes in 0.30.1 as compared to 0.30 - Changes in 0.30 as compared to 0.29.3 - Changes in 0.29.3 as compared to 0.29.2 - Changes in 0.29.2 as compared to 0.29.1 - Changes in 0.29.1 as compared to 0.29 - Changes in 0.29 as compared to 0.28.1 - Changes in 0.28.1 as compared to 0.28 - Changes in 0.28 as compared to 0.27.1 - Changes in 0.27.1 as compared to 0.27 Test status =========== | branch | Debian | FreeBSD | macOS | |--------|------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------| | master | [![Debian test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=master&task=debian)](https://cirrus-ci.com/github/bup/bup) | [![FreeBSD test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=master&task=freebsd)](https://cirrus-ci.com/github/bup/bup) | [![macOS test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=master&task=macos)](https://cirrus-ci.com/github/bup/bup) | | 0.30.x | [![Debian test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=0.30.x&task=debian)](https://cirrus-ci.com/github/bup/bup) | [![FreeBSD test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=0.30.x&task=freebsd)](https://cirrus-ci.com/github/bup/bup) | [![macOS test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=0.30.x&task=macos)](https://cirrus-ci.com/github/bup/bup) | | 0.29.x | [![Debian test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=0.29.x&task=debian)](https://cirrus-ci.com/github/bup/bup) | [![FreeBSD test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=0.29.x&task=freebsd)](https://cirrus-ci.com/github/bup/bup) | [![macOS test status](https://api.cirrus-ci.com/github/bup/bup.svg?branch=0.29.x&task=macos)](https://cirrus-ci.com/github/bup/bup) | Getting started =============== From source ----------- - Check out the bup source code using git: ```sh git clone https://github.com/bup/bup ``` - This will leave you on the master branch, which is perfect if you would like to help with development, but if you'd just like to use bup, please check out the latest stable release like this: ```sh git checkout 0.32 ``` You can see the latest stable release here: https://github.com/bup/bup/releases. - Install the required python libraries (including the development libraries). 
On very recent Debian/Ubuntu versions, this may be sufficient (run as root): ```sh apt-get build-dep bup ``` Otherwise try this: ```sh apt-get install python3.7-dev python3-fuse apt-get install python3-pyxattr python3-pytest apt-get install python3-distutils apt-get install pkg-config linux-libc-dev libacl1-dev apt-get install gcc make acl attr rsync apt-get install python3-pytest-xdist # optional (parallel tests) apt-get install par2 # optional (error correction) apt-get install libreadline-dev # optional (bup ftp) apt-get install python3-tornado # optional (bup web) ``` Or, if you can't yet migrate to Python 3 (please try to soon): ```sh apt-get install python2.7-dev python-fuse apt-get install python-pyxattr python-pytest apt-get install pkg-config linux-libc-dev libacl1-dev apt-get install gcc make acl attr rsync apt-get install python-pytest-xdist # optional (parallel tests) apt-get install par2 # optional (error correction) apt-get install libreadline-dev # optional (bup ftp) apt-get install python-tornado # optional (bup web) ``` On CentOS (for CentOS 6, at least), this should be sufficient (run as root): ```sh yum groupinstall "Development Tools" yum install python2 python2-devel libacl-devel pylibacl yum install fuse-python pyxattr yum install perl-Time-HiRes yum install readline-devel # optional (bup ftp) yum install python-tornado # optional (bup web) ``` In addition to the default CentOS repositories, you may need to add RPMForge (for fuse-python) and EPEL (for pyxattr). On Cygwin, install python, make, rsync, and gcc4. If you would like to use the optional bup web server on systems without a tornado package, you may want to try this: ```sh pip install tornado ``` - Build the python module and symlinks: ```sh make ``` - Run the tests: ```sh make long-check ``` or if you're in a bit more of a hurry: ```sh make check ``` If you have the Python xdist module installed, then you can probably run the tests faster by adding the make -j option (see ./HACKING for additional information): ```sh make -j check ``` The tests should pass. If they don't pass for you, stop here and send an email to bup-list@googlegroups.com. Note that if there are symbolic links along the current working directory path, the tests may fail. Running something like this before "make test" should sidestep the problem: ```sh cd "$(pwd -P)" ``` - You can install bup via "make install", and override the default destination with DESTDIR and PREFIX. Files are normally installed to "$DESTDIR/$PREFIX" where DESTDIR is empty by default, and PREFIX is set to /usr/local. So if you wanted to install bup to /opt/bup, you might do something like this: ```sh make install DESTDIR=/opt/bup PREFIX='' ``` - The Python executable that bup will use is chosen by ./configure, which will search for a reasonable version unless PYTHON is set in the environment, in which case, bup will use that path. You can see which Python executable was chosen by looking at the configure output, or examining cmd/python-cmd.sh, and you can change the selection by re-running ./configure.
From binary packages -------------------- Binary packages of bup are known to be built for the following OSes: - Debian: http://packages.debian.org/search?searchon=names&keywords=bup - Ubuntu: http://packages.ubuntu.com/search?searchon=names&keywords=bup - pkgsrc (NetBSD, Dragonfly, and others) http://pkgsrc.se/sysutils/bup http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/sysutils/bup/ - Arch Linux: https://www.archlinux.org/packages/?sort=&q=bup - Fedora: https://apps.fedoraproject.org/packages/bup - macOS (Homebrew): https://formulae.brew.sh/formula/bup Using bup --------- - Get help for any bup command: ```sh bup help bup help init bup help index bup help save bup help restore ... ``` - Initialize the default BUP_DIR (~/.bup -- you can choose another by either specifying `bup -d DIR ...` or setting the `BUP_DIR` environment variable for a command): ```sh bup init ``` - Make a local backup (-v or -vv will increase the verbosity): ```sh bup index /etc bup save -n local-etc /etc ``` - Restore a local backup to ./dest: ```sh bup restore -C ./dest local-etc/latest/etc ls -l dest/etc ``` - Look at how much disk space your backup took: ```sh du -s ~/.bup ``` - Make another backup (which should be mostly identical to the last one; notice that you don't have to *specify* that this backup is incremental, it just saves space automatically): ```sh bup index /etc bup save -n local-etc /etc ``` - Look how little extra space your second backup used (on top of the first): ```sh du -s ~/.bup ``` - Get a list of your previous backups: ```sh bup ls local-etc ``` - Restore your first backup again: ```sh bup restore -C ./dest-2 local-etc/2013-11-23-11195/etc ``` - Make a backup to a remote server which must already have the 'bup' command somewhere in its PATH (see /etc/profile, etc/environment, ~/.profile, or ~/.bashrc), and be accessible via ssh. Make sure to replace SERVERNAME with the actual hostname of your server: ```sh bup init -r SERVERNAME:path/to/remote-bup-dir bup index /etc bup save -r SERVERNAME:path/to/remote-bup-dir -n local-etc /etc ``` - Make a remote backup to ~/.bup on SERVER: ```sh bup index /etc bup save -r SERVER: -n local-etc /etc ``` - See what saves are available in ~/.bup on SERVER: ```sh bup ls -r SERVER: ``` - Restore the remote backup to ./dest: ```sh bup restore -r SERVER: -C ./dest local-etc/latest/etc ls -l dest/etc ``` - Defend your backups from death rays (OK fine, more likely from the occasional bad disk block). This writes parity information (currently via par2) for all of the existing data so that bup may be able to recover from some amount of repository corruption: ```sh bup fsck -g ``` - Use split/join instead of index/save/restore. 
Try making a local backup using tar: ```sh tar -cvf - /etc | bup split -n local-etc -vv ``` - Try restoring the tarball: ```sh bup join local-etc | tar -tf - ``` - Look at how much disk space your backup took: ```sh du -s ~/.bup ``` - Make another tar backup: ```sh tar -cvf - /etc | bup split -n local-etc -vv ``` - Look at how little extra space your second backup used on top of the first: ```sh du -s ~/.bup ``` - Restore the first tar backup again (the ~1 is git notation for "one older than the most recent"): ```sh bup join local-etc~1 | tar -tf - ``` - Get a list of your previous split-based backups: ```sh GIT_DIR=~/.bup git log local-etc ``` - Save a tar archive to a remote server (without tar -z to facilitate deduplication): ```sh tar -cvf - /etc | bup split -r SERVERNAME: -n local-etc -vv ``` - Restore the archive: ```sh bup join -r SERVERNAME: local-etc | tar -tf - ``` That's all there is to it! Notes on FreeBSD ---------------- - FreeBSD's default 'make' command doesn't like bup's Makefile. In order to compile the code, run tests and install bup, you need to install GNU Make from the port named 'gmake' and use its executable instead in the commands seen above. (i.e. 'gmake test' runs bup's test suite) - Python's development headers are automatically installed with the 'python' port so there's no need to install them separately. - To use the 'bup fuse' command, you need to install the fuse kernel module from the 'fusefs-kmod' port in the 'sysutils' section and the libraries from the port named 'py-fusefs' in the 'devel' section. - The 'par2' command can be found in the port named 'par2cmdline'. - In order to compile the documentation, you need pandoc which can be found in the port named 'hs-pandoc' in the 'textproc' section. Notes on NetBSD/pkgsrc ---------------------- - See pkgsrc/sysutils/bup, which should be the most recent stable release and includes man pages. It also has a reasonable set of dependencies (git, par2, py-fuse-bindings). - The "fuse-python" package referred to is hard to locate, and is a separate tarball for the python language binding distributed by the fuse project on sourceforge. It is available as pkgsrc/filesystems/py-fuse-bindings and on NetBSD 5, "bup fuse" works with it. - "bup fuse" presents every directory/file as inode 0. The directory traversal code ("fts") in NetBSD's libc will interpret this as a cycle and error out, so "ls -R" and "find" will not work. - There is no support for ACLs. If/when some enterprising person fixes this, adjust dev/compare-trees. Notes on Cygwin --------------- - There is no support for ACLs. If/when some enterprising person fixes this, adjust dev/compare-trees. - In test/ext/test-misc, two tests have been disabled. These tests check to see that repeated saves produce identical trees and that an intervening index doesn't change the SHA1. Apparently Cygwin has some unusual behaviors with respect to access times (that probably warrant further investigation). Possibly related: http://cygwin.com/ml/cygwin/2007-06/msg00436.html Notes on OS X ------------- - There is no support for ACLs. If/when some enterprising person fixes this, adjust dev/compare-trees. How it works ============ Basic storage: -------------- bup stores its data in a git-formatted repository. Unfortunately, git itself doesn't actually behave very well for bup's use case (huge numbers of files, files with huge sizes, retaining file permissions/ownership are important), so we mostly don't use git's *code* except for a few helper programs. 
For example, bup has its own git packfile writer written in python. Basically, 'bup split' reads the data on stdin (or from files specified on the command line), breaks it into chunks using a rolling checksum (similar to rsync), and saves those chunks into a new git packfile. There is at least one git packfile per backup. When deciding whether to write a particular chunk into the new packfile, bup first checks all the other packfiles that exist to see if they already have that chunk. If they do, the chunk is skipped. git packs come in two parts: the pack itself (*.pack) and the index (*.idx). The index is pretty small, and contains a list of all the objects in the pack. Thus, when generating a remote backup, we don't have to have a copy of the packfiles from the remote server: the local end just downloads a copy of the server's *index* files, and compares objects against those when generating the new pack, which it sends directly to the server. The "-n" option to 'bup split' and 'bup save' is the name of the backup you want to create, but it's actually implemented as a git branch. So you can do cute things like checkout a particular branch using git, and receive a bunch of chunk files corresponding to the file you split. If you use '-b' or '-t' or '-c' instead of '-n', bup split will output a list of blobs, a tree containing that list of blobs, or a commit containing that tree, respectively, to stdout. You can use this to construct your own scripts that do something with those values. The bup index: -------------- 'bup index' walks through your filesystem and updates a file (whose name is, by default, ~/.bup/bupindex) to contain the name, attributes, and an optional git SHA1 (blob id) of each file and directory. 'bup save' basically just runs the equivalent of 'bup split' a whole bunch of times, once per file in the index, and assembles a git tree that contains all the resulting objects. Among other things, that makes 'git diff' much more useful (compared to splitting a tarball, which is essentially a big binary blob). However, since bup splits large files into smaller chunks, the resulting tree structure doesn't *exactly* correspond to what git itself would have stored. Also, the tree format used by 'bup save' will probably change in the future to support storing file ownership, more complex file permissions, and so on. If a file has previously been written by 'bup save', then its git blob/tree id is stored in the index. This lets 'bup save' avoid reading that file to produce future incremental backups, which means it can go *very* fast unless a lot of files have changed. Things that are stupid for now but which we'll fix later ======================================================== Help with any of these problems, or others, is very welcome. Join the mailing list (see below) if you'd like to help. - 'bup save' and 'bup restore' have immature metadata support. On the plus side, they actually do have support now, but it's new, and not remotely as well tested as tar/rsync/whatever's. However, you have to start somewhere, and as of 0.25, we think it's ready for more general use. Please let us know if you have any trouble. Also, if any strip or graft-style options are specified to 'bup save', then no metadata will be written for the root directory. That's obviously less than ideal. - bup is overly optimistic about mmap. Right now bup just assumes that it can mmap as large a block as it likes, and that mmap will never fail. Yeah, right... 
If nothing else, this has failed on 32-bit architectures (and 31-bit is even worse -- looking at you, s390). To fix this, we might just implement a FakeMmap[1] class that uses normal file IO and handles all of the mmap methods[2] that bup actually calls. Then we'd swap in one of those whenever mmap fails. This would also require implementing some of the methods needed to support "[]" array access, probably at a minimum __getitem__, __setitem__, and __setslice__ [3]. [1] http://comments.gmane.org/gmane.comp.sysutils.backup.bup/613 [2] http://docs.python.org/2/library/mmap.html [3] http://docs.python.org/2/reference/datamodel.html#emulating-container-types - 'bup index' is slower than it should be. It's still rather fast: it can iterate through all the filenames on my 600,000 file filesystem in a few seconds. But it still needs to rewrite the entire index file just to add a single filename, which is pretty nasty; it should just leave the new files in a second "extra index" file or something. - bup could use inotify for *really* efficient incremental backups. You could even have your system doing "continuous" backups: whenever a file changes, we immediately send an image of it to the server. We could give the continuous-backup process a really low CPU and I/O priority so you wouldn't even know it was running. - bup only has experimental support for pruning old backups. While you should now be able to drop old saves and branches with `bup rm`, and reclaim the space occupied by data that's no longer needed by other backups with `bup gc`, these commands are experimental, and should be handled with great care. See the man pages for more information. Unless you want to help test the new commands, one possible workaround is to just start a new BUP_DIR occasionally, i.e. bup-2013, bup-2014... - bup has never been tested on anything but Linux, FreeBSD, NetBSD, OS X, and Windows+Cygwin. There's nothing that makes it *inherently* non-portable, though, so that's mostly a matter of someone putting in some effort. (For a "native" Windows port, the most annoying thing is the absence of ssh in a default Windows installation.) - bup needs better documentation. According to an article about bup in Linux Weekly News (https://lwn.net/Articles/380983/), "it's a bit short on examples and a user guide would be nice." Documentation is the sort of thing that will never be great unless someone from outside contributes it (since the developers can never remember which parts are hard to understand). - bup is "relatively speedy" and has "pretty good" compression. ...according to the same LWN article. Clearly neither of those is good enough. We should have awe-inspiring speed and crazy-good compression. Must work on that. Writing more parts in C might help with the speed. - bup has no GUI. Actually, that's not stupid, but you might consider it a limitation. See the ["Related Projects"](https://bup.github.io/) list for some possible options. More Documentation ================== bup has an extensive set of man pages. Try using 'bup help' to get started, or use 'bup help SUBCOMMAND' for any bup subcommand (like split, join, index, save, etc.) to get details on that command. For further technical details, please see ./DESIGN. How you can help ================ bup is a work in progress and there are many ways it can still be improved. If you'd like to contribute patches, ideas, or bug reports, please join the bup mailing list. 
You can find the mailing list archives here: http://groups.google.com/group/bup-list and you can subscribe by sending a message to: bup-list+subscribe@googlegroups.com Please see ./HACKING for additional information, e.g. how to submit patches (hint - no pull requests), how we handle branches, etc. Have fun, Avery upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/bup/description0000644000000000000000000000102614025667057026364 0ustar00bup is a program that backs things up. It's short for "backup." Can you believe that nobody else has named an open source program "bup" after all this time? Me neither. Despite its unassuming name, bup is pretty cool. To give you an idea of just how cool it is, I wrote you this poem: Bup is teh awesome What rhymes with awesome? I guess maybe possum But that's irrelevant. Hmm. Did that help? Maybe prose is more useful after all. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/cbor2/README.rst0000644000000000000000000000732314025766244026032 0ustar00.. image:: https://travis-ci.com/agronholm/cbor2.svg?branch=master :target: https://travis-ci.com/agronholm/cbor2 :alt: Build Status .. image:: https://coveralls.io/repos/github/agronholm/cbor2/badge.svg?branch=master :target: https://coveralls.io/github/agronholm/cbor2?branch=master :alt: Code Coverage .. image:: https://readthedocs.org/projects/cbor2/badge/?version=latest :target: https://cbor2.readthedocs.io/en/latest/?badge=latest :alt: Documentation Status About ===== This library provides encoding and decoding for the Concise Binary Object Representation (CBOR) (`RFC 7049`_) serialization format. `Read the docs `_ to learn more. It is implemented in pure python with an optional C backend. On PyPy, cbor2 runs with almost identical performance to the C backend. .. _RFC 7049: https://tools.ietf.org/html/rfc7049 Features -------- * Simple API like the ``json`` or ``pickle`` modules. * Support for many `CBOR tags`_ with `stdlib objects`_. * Generic tag decoding. * `Shared value`_ references including cyclic references. * Optional C module backend tested on big- and little-endian architectures. * Extensible `tagged value handling`_ using ``tag_hook`` and ``object_hook`` on decode and ``default`` on encode. * Command-line diagnostic tool, converting CBOR file or stream to JSON ``python -m cbor2.tool`` (This is a lossy conversion, for diagnostics only) * Thorough test suite. .. _CBOR tags: https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml .. _stdlib objects: https://cbor2.readthedocs.io/en/latest/usage.html#tag-support .. _Shared value: http://cbor.schmorp.de/value-sharing .. _tagged value handling: https://cbor2.readthedocs.io/en/latest/customizing.html#using-the-cbor-tags-for-custom-types Installation ============ :: pip install cbor2 Requirements ------------ * Python >= 3.6 (or `PyPy3`_ 3.6+) * C-extension: Any C compiler that can build Python extensions. Any modern libc with the exception of Glibc<2.9 .. _PyPy3: https://www.pypy.org/ Building the C-Extension ------------------------ To force building of the optional C-extension, set OS env ``CBOR2_BUILD_C_EXTENSION=1``. To disable building of the optional C-extension, set OS env ``CBOR2_BUILD_C_EXTENSION=0``. If this environment variable is unset, setup.py will default to auto-detecting a compatible C library and attempting to compile the extension.
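As a quick orientation before the usage notes below, here is a minimal encode/decode roundtrip (the payload is invented for the example)::

    import cbor2

    payload = {"hello": "world", "numbers": [1, 2, 3]}
    encoded = cbor2.dumps(payload)   # CBOR-encoded bytes
    decoded = cbor2.loads(encoded)   # back to a Python object
    assert decoded == payload

For tagged values and custom types, see the ``tag_hook``, ``object_hook`` and ``default`` hooks mentioned in the feature list above.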
Usage ===== `Basic Usage `_ Command-line Usage ================== ``python -m cbor2.tool`` converts CBOR data in raw binary or base64 encoding into a representation that allows printing as JSON. This is a lossy transformation as each datatype is converted into something that can be represented as a JSON value. Usage:: # Pass hexadecimal through xxd. $ echo a16568656c6c6f65776f726c64 | xxd -r -ps | python -m cbor2.tool --pretty { "hello": "world" } # Decode Base64 directly $ echo ggEC | python -m cbor2.tool --decode [1, 2] # Read from a file encoded in Base64 $ python -m cbor2.tool -d tests/examples.cbor.b64 {...} It can be used in a pipeline with json processing tools like `jq`_ to allow syntax coloring, field extraction and more. CBOR data items concatenated into a sequence can also be decoded:: $ echo ggECggMEggUG | python -m cbor2.tool -d --sequence [1, 2] [3, 4] [5, 6] Multiple files can also be sent to a single output file:: $ python -m cbor2.tool -o all_files.json file1.cbor file2.cbor ... fileN.cbor .. _jq: https://stedolan.github.io/jq/ Security ======== This library has not been tested against malicious input. In theory it should be as safe as JSON, since unlike ``pickle`` the decoder does not execute any code. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/cbor2/description0000644000000000000000000000044514025766244026607 0ustar00This library provides encoding and decoding for the Concise Binary Object Representation (CBOR) (RFC 7049) serialization format. Read the docs to learn more. It is implemented in pure python with an optional C backend. On PyPy, cbor2 runs with almost identical performance to the C backend. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/django-ical/README.rst0000644000000000000000000000326314024376244027166 0ustar00django-ical =========== |pypi| |docs| |build| |coverage| |jazzband| django-ical is a simple library/framework for creating `iCal `_ feeds based on Django's `syndication feed framework `_. This documentation is modeled after the documentation for the syndication feed framework so you can think of it as a simple extension. If you are familiar with the Django syndication feed framework you should be able to use django-ical fairly quickly. It works the same way as the Django syndication framework but adds a few extension properties to support iCalendar feeds. django-ical uses the `icalendar `_ library under the hood to generate iCalendar feeds. Documentation ------------- Documentation is hosted on Read the Docs: https://django-ical.readthedocs.io/en/latest/ .. |pypi| image:: https://img.shields.io/pypi/v/django-ical.svg :alt: PyPI :target: https://pypi.org/project/django-ical/ .. |docs| image:: https://readthedocs.org/projects/django-ical/badge/?version=latest :alt: Documentation Status :scale: 100% :target: http://django-ical.readthedocs.io/en/latest/?badge=latest .. |build| image:: https://github.com/jazzband/django-ical/workflows/Test/badge.svg :target: https://github.com/jazzband/django-ical/actions :alt: GitHub Actions .. |coverage| image:: https://codecov.io/gh/jazzband/django-ical/branch/master/graph/badge.svg :target: https://codecov.io/gh/jazzband/django-ical :alt: Coverage ..
|jazzband| image:: https://jazzband.co/static/img/badge.svg :target: https://jazzband.co/ :alt: Jazzband upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/django-ical/description0000644000000000000000000000110714024376244027740 0ustar00django-ical is a simple library/framework for creating iCal feeds based on Django's syndication feed framework. This documentation is modeled after the documentation for the syndication feed framework so you can think of it as a simple extension. If you are familiar with the Django syndication feed framework you should be able to use django-ical fairly quickly. It works the same way as the Django syndication framework but adds a few extension properties to support iCalendar feeds. django-ical uses the icalendar library under the hood to generate iCalendar feeds. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/dulwich/README.rst0000644000000000000000000000560414023145612026446 0ustar00Dulwich ======= This is the Dulwich project. It aims to provide an interface to git repos (both local and remote) that doesn't call out to git directly but instead uses pure Python. **Main website**: **License**: Apache License, version 2 or GNU General Public License, version 2 or later. The project is named after the part of London that Mr. and Mrs. Git live in, in the particular Monty Python sketch. Installation ------------ By default, Dulwich's setup.py will attempt to build and install the optional C extensions. The reason for this is that they significantly improve the performance since some low-level operations that are executed often are much slower in CPython. If you don't want to install the C bindings, specify the --pure argument to setup.py:: $ python setup.py --pure install or if you are installing from pip:: $ pip install dulwich --global-option="--pure" Note that you can also specify --global-option in a `requirements.txt `_ file, e.g. like this:: dulwich --global-option=--pure Getting started --------------- Dulwich comes with both a lower-level API and higher-level plumbing ("porcelain"). For example, to use the lower level API to access the commit message of the last commit:: >>> from dulwich.repo import Repo >>> r = Repo('.') >>> r.head() '57fbe010446356833a6ad1600059d80b1e731e15' >>> c = r[r.head()] >>> c >>> c.message 'Add note about encoding.\n' And to print it using porcelain:: >>> from dulwich import porcelain >>> porcelain.log('.', max_entries=1) -------------------------------------------------- commit: 57fbe010446356833a6ad1600059d80b1e731e15 Author: Jelmer Vernooij Date: Sat Apr 29 2017 23:57:34 +0000 Add note about encoding. Further documentation --------------------- The dulwich documentation can be found in docs/ and built by running ``make doc``. It can also be found `on the web `_. Help ---- There is a *#dulwich* IRC channel on the `Freenode `_, and `dulwich-announce `_ and `dulwich-discuss `_ mailing lists. Contributing ------------ For a full list of contributors, see the git logs or `AUTHORS `_. If you'd like to contribute to Dulwich, see the `CONTRIBUTING `_ file and `list of open issues `_. Supported versions of Python ---------------------------- At the moment, Dulwich supports (and is tested on) CPython 3.5 and later and Pypy. The latest release series to support Python 2.x was the 0.19 series. See the 0.19 branch in the Dulwich git repository.
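As a complement to the porcelain example in "Getting started" above, here is a minimal sketch that creates a repository and commits a file. The directory, file name, and commit message are invented for the example, error handling is omitted, and the exact keyword handling should be checked against the porcelain documentation::

    from dulwich import porcelain

    repo = porcelain.init('demo-repo')            # hypothetical directory
    with open('demo-repo/hello.txt', 'w') as f:
        f.write('hello\n')
    porcelain.add(repo, paths=['demo-repo/hello.txt'])
    porcelain.commit(repo, message=b'Add hello.txt')
    porcelain.log(repo, max_entries=1)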
upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/dulwich/description0000644000000000000000000000024714034104624027223 0ustar00This is the Dulwich project. It aims to provide an interface to git repos (both local and remote) that doesn't call out to git directly but instead uses pure Python. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/empty/README.md0000644000000000000000000000000014024657402025724 0ustar00upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/erbium/README.md0000644000000000000000000000101014023143032026037 0ustar00Erbium ====== Erbium[^0] provides networking services for use on small/home networks. Erbium currently supports both DNS and DHCP, with other protocols hopefully coming soon. Erbium is in early development. * DNS is still in early development, and not ready for use. * DHCP is beta quality. Should be ready for test use. * Router Advertisements are alpha quality. Should be ready for limited testing. [^0]: Erbium is the 68th element in the periodic table, the same as the client port number for DHCP. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/erbium/description0000644000000000000000000000075614023143032027046 0ustar00Erbium[^0] provides networking services for use on small/home networks. Erbium currently supports both DNS and DHCP, with other protocols hopefully coming soon. Erbium is in early development. * DNS is still in early development, and not ready for use. * DHCP is beta quality. Should be ready for test use. * Router Advertisements are alpha quality. Should be ready for limited testing. [^0]: Erbium is the 68th element in the periodic table, the same as the client port number for DHCP. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/isso/README.md0000644000000000000000000000060714023433157025556 0ustar00Isso – a commenting server similar to Disqus ============================================ Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ![Isso in Action](http://posativ.org/~tmp/isso-sample.png) See [posativ.org/isso](http://posativ.org/isso/) for more details. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/isso/description0000644000000000000000000000022314023433157026537 0ustar00Isso – Ich schrei sonst – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for Disqus. 
upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/jadx/README.md0000644000000000000000000001233514027625472025537 0ustar00## JADX [![Build Status](https://travis-ci.org/skylot/jadx.png?branch=master)](https://travis-ci.org/skylot/jadx) [![Code Coverage](https://codecov.io/gh/skylot/jadx/branch/master/graph/badge.svg)](https://codecov.io/gh/skylot/jadx) [![SonarQube Bugs](https://sonarcloud.io/api/project_badges/measure?project=jadx&metric=bugs)](https://sonarcloud.io/dashboard?id=jadx) [![License](http://img.shields.io/:license-apache-blue.svg)](http://www.apache.org/licenses/LICENSE-2.0.html) [![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release) **jadx** - Dex to Java decompiler Command line and GUI tools for producing Java source code from Android Dex and Apk files ![jadx-gui screenshot](https://i.imgur.com/h917IBZ.png) ### Downloads - latest [unstable build: ![Download](https://api.bintray.com/packages/skylot/jadx/unstable/images/download.svg) ](https://bintray.com/skylot/jadx/unstable/_latestVersion#files) - release from [github: ![Latest release](https://img.shields.io/github/release/skylot/jadx.svg)](https://github.com/skylot/jadx/releases/latest) - release from [bintray: ![Download](https://api.bintray.com/packages/skylot/jadx/releases/images/download.svg) ](https://bintray.com/skylot/jadx/releases/_latestVersion#files) After downloading, unpack the zip file, go to the `bin` directory, and run: - `jadx` - command line version - `jadx-gui` - graphical version On Windows, run the `.bat` files with a double-click\ **Note:** ensure you have the 64-bit version of Java 8 installed ### Related projects: - [PyJadx](https://github.com/romainthomas/pyjadx) - python binding for jadx by [@romainthomas](https://github.com/romainthomas) ### Building jadx from source JDK 8 or higher must be installed: git clone https://github.com/skylot/jadx.git cd jadx ./gradlew dist (on Windows, use `gradlew.bat` instead of `./gradlew`) Scripts to run jadx will be placed in `build/jadx/bin` and also packed into `build/jadx-.zip` ### macOS You can install using brew: brew install jadx ### Run Run **jadx** on itself: cd build/jadx/ bin/jadx -d out lib/jadx-core-*.jar # or bin/jadx-gui lib/jadx-core-*.jar ### Usage ``` jadx[-gui] [options] (.apk, .dex, .jar, .class, .smali, .zip, .aar, .arsc) options: -d, --output-dir - output directory -ds, --output-dir-src - output directory for sources -dr, --output-dir-res - output directory for resources -j, --threads-count - processing threads count -r, --no-res - do not decode resources -s, --no-src - do not decompile source code --single-class - decompile a single class --output-format - can be 'java' or 'json' (default: java) -e, --export-gradle - save as android gradle project --show-bad-code - show inconsistent code (incorrectly decompiled) --no-imports - disable use of imports, always write entire package name --no-debug-info - disable debug info --no-inline-anonymous - disable anonymous classes inline --no-replace-consts - don't replace constant value with matching constant field --escape-unicode - escape non latin characters in strings (with \u) --respect-bytecode-access-modifiers - don't change original access modifiers --deobf - activate deobfuscation --deobf-min - min length of name, renamed if shorter (default: 3) --deobf-max - max length of name, renamed if longer (default: 64) --deobf-rewrite-cfg - force to save deobfuscation map --deobf-use-sourcename - use
source file name as class name alias --rename-flags - what to rename, comma-separated, 'case' for system case sensitivity, 'valid' for java identifiers, 'printable' characters, 'none' or 'all' --fs-case-sensitive - treat filesystem as case sensitive, false by default --cfg - save methods control flow graph to dot file --raw-cfg - save methods control flow graph (use raw instructions) -f, --fallback - make simple dump (using goto instead of 'if', 'for', etc) -v, --verbose - verbose output --version - print jadx version -h, --help - print this help Example: jadx -d out classes.dex jadx --rename-flags "none" classes.dex jadx --rename-flags "valid,printable" classes.dex ``` These options also work in jadx-gui when it is run from the command line, and they override the options from the preferences dialog ### Troubleshooting ##### Out of memory error: - Reduce processing threads count (`-j` option) - Increase maximum java heap size: * command line (example for linux): `JAVA_OPTS="-Xmx4G" jadx -j 1 some.apk` * edit 'jadx' script (jadx.bat on Windows) and set up a bigger heap size: `DEFAULT_JVM_OPTS="-Xmx2500M"` --------------------------------------- *Licensed under the Apache 2.0 License* *Copyright 2018 by Skylot* upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/jadx/description0000644000000000000000000000012714027625472026522 0ustar00Command line and GUI tools for producing Java source code from Android Dex and Apk files upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/jupyter-client/README.md0000644000000000000000000000443414025667057027571 0ustar00# Jupyter Client [![Build Status](https://github.com/jupyter/jupyter_client/workflows/CI/badge.svg)](https://github.com/jupyter/jupyter_client/actions) [![Code Health](https://landscape.io/github/jupyter/jupyter_client/master/landscape.svg?style=flat)](https://landscape.io/github/jupyter/jupyter_client/master) `jupyter_client` contains the reference implementation of the [Jupyter protocol][]. It also provides client and kernel management APIs for working with kernels. It also provides the `jupyter kernelspec` entrypoint for installing kernelspecs for use with Jupyter frontends. [Jupyter protocol]: https://jupyter-client.readthedocs.io/en/latest/messaging.html # Development Setup The [Jupyter Contributor Guides](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html) provide extensive information on contributing code or documentation to Jupyter projects. The limited instructions below for setting up a development environment are for your convenience. ## Coding You'll need Python and `pip` on the search path. Clone the Jupyter Client git repository to your computer, for example in `/my/project/jupyter_client`. Now create an [editable install](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs) and download the dependencies of code and test suite by executing: cd /my/projects/jupyter_client/ pip install -e .[test] py.test The last command runs the test suite to verify the setup. During development, you can pass filenames to `py.test`, and it will execute only those tests. ## Documentation The documentation of Jupyter Client is generated from the files in `docs/` using Sphinx. Instructions for setting up Sphinx with a selection of optional modules are in the [Documentation Guide](https://jupyter.readthedocs.io/en/latest/contributing/docs-contributions/index.html). You'll also need the `make` command.
For a minimal Sphinx installation to process the Jupyter Client docs, execute: pip install ipykernel sphinx sphinx_rtd_theme The following commands build the documentation in HTML format and check for broken links: cd /my/projects/jupyter_client/docs/ make html linkcheck Point your browser to the following URL to access the generated documentation: _file:///my/projects/jupyter\_client/docs/\_build/html/index.html_ upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/jupyter-client/description0000644000000000000000000000041214025667057030552 0ustar00jupyter_client contains the reference implementation of the Jupyter protocol. It also provides client and kernel management APIs for working with kernels. It also provides the jupyter kernelspec entrypoint for installing kernelspecs for use with Jupyter frontends. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/libtrace/README0000644000000000000000000000324314075351260025767 0ustar00libtrace 4.0.7 --------------------------------------------------------------------------- Copyright (c) 2007-2019 The University of Waikato, Hamilton, New Zealand. All rights reserved. This code has been developed by the University of Waikato WAND research group. For further information please see http://www.wand.net.nz/. --------------------------------------------------------------------------- See INSTALL for instructions on how to install libtrace. This directory contains source code for libtrace, a userspace library for processing network traffic captured from live interfaces or from offline traces. libtrace was primarily designed for use with the real-time interface to the Waikato DAG Capture Point software running at The University of Waikato, and has since been extended to a range of other trace and interface formats. In version 4.0, we have introduced an API for processing packets in parallel using multiple threads. See libtrace_parallel.h for a detailed description of the API. For further information about libtrace, see http://research.wand.net.nz/software/libtrace.php Bugs should be reported by either emailing contact@wand.net.nz or filing an issue at https://github.com/LibtraceTeam/libtrace It is licensed under the GNU Lesser General Public License (LGPL) version 3. Please see the included files COPYING and COPYING.LESSER for details of this license. A detailed ChangeLog can be found on the libtrace wiki: https://github.com/LibtraceTeam/libtrace/wiki/ChangeLog Documentation, usage instructions and a detailed tutorial can also be found on the libtrace wiki. For further information, please contact the WAND group. See http://www.wand.net.nz/ for details. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/libtrace/description0000644000000000000000000000104514075351260027353 0ustar00This directory contains source code for libtrace, a userspace library for processing network traffic captured from live interfaces or from offline traces. libtrace was primarily designed for use with the real-time interface to the Waikato DAG Capture Point software running at The University of Waikato, and has since been extended to a range of other trace and interface formats. In version 4.0, we have introduced an API for processing packets in parallel using multiple threads. See libtrace_parallel.h for a detailed description of the API. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl-timedate/README0000644000000000000000000000145314035402357026737 0ustar00This is the perl5 TimeDate distribution.
It requires perl version 5.003 or later. This distribution replaces my earlier GetDate distribution, which was only a date parser. The date parser contained in this distribution is far superior to the yacc based parser, and a *lot* faster. The parser contained here will only parse absolute dates; if you want a date parser that can parse relative dates then take a look at the Time modules by David Muir on CPAN. You install the library by running these commands: perl Makefile.PL make make test make install Please report any bugs/suggestions to Copyright 1995-2009 Graham Barr. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Share and Enjoy! Graham upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl-timedate/description0000644000000000000000000000071014035402357030320 0ustar00This is the perl5 TimeDate distribution. It requires perl version 5.003 or later. This distribution replaces my earlier GetDate distribution, which was only a date parser. The date parser contained in this distribution is far superior to the yacc based parser, and a *lot* faster. The parser contained here will only parse absolute dates; if you want a date parser that can parse relative dates then take a look at the Time modules by David Muir on CPAN. ././@PaxHeader0000000000000000000000000000016100000000000010213 xustar00113 path=upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/README.md upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/README0000644000000000000000000000422014025707126030513 0ustar00# distribution XML-Compile-Cache * My extended documentation: * Development via GitHub: * Download from CPAN: * Indexed from CPAN: and The XML-Compile suite is a large set of modules for various XML related standards. This optional component is very useful: it manages compiled handlers and helps you define prefixes. ## Development → Release Important to know is that I use an extension on POD to write the manuals. The "raw" unprocessed version is visible on GitHub. It will run without problems, but does not contain manual-pages. Releases to CPAN are different: "raw" documentation gets removed from the code and translated into real POD and clean HTML. This reformatting is implemented with the OODoc distribution (A name I chose before OpenOffice existed, sorry for the confusion) Clone from github for the "raw" version. For instance, when you want to contribute a new feature. On github, you can find the processed version for each release. But the better source is CPAN; to get it installed simply run: ```sh cpan -i XML::Compile::Cache ``` ## Contributing When you want to contribute to this module, you do not need to provide a perfect patch... actually: it is nearly impossible to create a patch which I will merge without modification. Usually, I need to adapt the style of code and documentation to my own strict rules. When you submit an extension, please contribute a set with 1. code 2. code documentation 3. regression tests in t/ **Please note:** When you contribute in any way, you agree to transfer the copyrights to Mark Overmeer (you will get the honors in the code and/or ChangeLog). You also automatically agree that your contribution is released under the same license as this project: licensed as perl itself.
See ././@PaxHeader0000000000000000000000000000016300000000000010215 xustar00115 path=upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/description upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/perl5-xml-compile-cache/descri0000644000000000000000000000027014025707126031030 0ustar00The XML-Compile suite is a large set of modules for various XML related standards. This optional component is very useful: it manages compiled handlers and helps you define prefixes. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/pylint-flask/README.md0000644000000000000000000000414314025160634027214 0ustar00pylint-flask =============== [![Build Status](https://travis-ci.org/jschaf/pylint-flask.svg?branch=master)](https://travis-ci.org/jschaf/pylint-flask) [![Coverage Status](https://coveralls.io/repos/jschaf/pylint-flask/badge.svg?branch=master)](https://coveralls.io/r/jschaf/pylint-flask?branch=master) [![PyPI](https://img.shields.io/pypi/v/pylint-flask.svg)](https://pypi.python.org/pypi/pylint-flask) [![License](https://img.shields.io/badge/license-GPLv2%20License-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html) ## About `pylint-flask` is a [Pylint](http://pylint.org) plugin for improving code analysis when editing code using [Flask](http://flask.pocoo.org/). Inspired by [pylint-django](https://github.com/landscapeio/pylint-django). ### Problems pylint-flask solves: 1. Recognize `flask.ext.*` style imports. Say you have the following code: ```python from flask.ext import wtf from flask.ext.wtf import validators class PostForm(wtf.Form): content = wtf.TextAreaField('Content', validators=[validators.Required()]) ``` Normally, pylint will throw errors like: ``` E: 1,0: No name 'wtf' in module 'flask.ext' E: 2,0: No name 'wtf' in module 'flask.ext' F: 2,0: Unable to import 'flask.ext.wtf' ``` As pylint builds its own abstract syntax tree, `pylint-flask` will translate the `flask.ext` imports into the actual module name, so pylint can continue checking your code. ## Usage Ensure `pylint-flask` is installed and on your path, and then run pylint using pylint-flask as a plugin. ``` pip install pylint-flask pylint --load-plugins pylint_flask [..your module..] ``` ## Contributing Pull requests are always welcome. Here's an outline of the steps you need to prepare your code. 1. git clone https://github.com/jschaf/pylint-flask.git 2. cd pylint-flask 3. mkvirtualenv pylint-flask 4. pip install -r dev-requirements.txt 5. git checkout -b MY-NEW-FIX 6. Hack away 7. Make sure everything is green by running `tox` 8. git push origin MY-NEW-FIX 9. Create a pull request ## License pylint-flask is available under the GPLv2 license.upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/pylint-flask/description0000644000000000000000000000016414025160634030202 0ustar00pylint-flask is a Pylint plugin for improving code analysis when editing code using Flask. Inspired by pylint-django. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/python-icalendar/README.rst0000644000000000000000000000224314025777046030262 0ustar00========================================================== Internet Calendaring and Scheduling (iCalendar) for Python ========================================================== The `icalendar`_ package is an `RFC 5545`_ compatible parser/generator for iCalendar files.
---- :Homepage: https://icalendar.readthedocs.io :Code: https://github.com/collective/icalendar :Mailing list: https://github.com/collective/icalendar/issues :Dependencies: `python-dateutil`_ and `pytz`_. :Compatible with: Python 2.7 and 3.4+ :License: `BSD`_ ---- .. image:: https://travis-ci.org/collective/icalendar.svg?branch=master :target: https://travis-ci.org/collective/icalendar .. _`icalendar`: https://pypi.org/project/icalendar/ .. _`RFC 5545`: https://www.ietf.org/rfc/rfc5545.txt .. _`python-dateutil`: https://github.com/dateutil/dateutil/ .. _`pytz`: https://pypi.org/project/pytz/ .. _`BSD`: https://github.com/collective/icalendar/issues/2 Related projects ================ * `icalevents `_. It is built on top of icalendar and allows you to query iCal files and get the events happening on specific dates. It manages recurrent events as well. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/python-icalendar/description0000644000000000000000000000012514025777046031036 0ustar00The icalendar package is an RFC 5545 compatible parser/generator for iCalendar files. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/python-rsa/README.md0000644000000000000000000000361014032216722026677 0ustar00# Pure Python RSA implementation [![PyPI](https://img.shields.io/pypi/v/rsa.svg)](https://pypi.org/project/rsa/) [![Build Status](https://travis-ci.org/sybrenstuvel/python-rsa.svg?branch=master)](https://travis-ci.org/sybrenstuvel/python-rsa) [![Coverage Status](https://coveralls.io/repos/github/sybrenstuvel/python-rsa/badge.svg?branch=master)](https://coveralls.io/github/sybrenstuvel/python-rsa?branch=master) [![Code Climate](https://api.codeclimate.com/v1/badges/a99a88d28ad37a79dbf6/maintainability)](https://codeclimate.com/github/codeclimate/codeclimate/maintainability) [Python-RSA](https://stuvel.eu/rsa) is a pure-Python RSA implementation. It supports encryption and decryption, signing and verifying signatures, and key generation according to PKCS#1 version 1.5. It can be used as a Python library as well as on the commandline. The code was mostly written by Sybren A. Stüvel. Documentation can be found at the [Python-RSA homepage](https://stuvel.eu/rsa). For all changes, check [the changelog](https://github.com/sybrenstuvel/python-rsa/blob/master/CHANGELOG.md). Download and install using: pip install rsa or download it from the [Python Package Index](https://pypi.org/project/rsa/). The source code is maintained at [GitHub](https://github.com/sybrenstuvel/python-rsa/) and is licensed under the [Apache License, version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Security Because of how Python internally stores numbers, it is very hard (if not impossible) to make a pure-Python program secure against timing attacks. This library is no exception, so use it with care. See https://securitypitfalls.wordpress.com/2018/08/03/constant-time-compare-in-python/ for more info. ## Setup of Development Environment ``` python3 -m venv .venv . ./.venv/bin/activate pip install poetry poetry install ``` ## Publishing a New Release ``` . ./.venv/bin/activate poetry publish --build ``` upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/python-rsa/description0000644000000000000000000000044214033200324027660 0ustar00Python-RSA is a pure-Python RSA implementation. It supports encryption and decryption, signing and verifying signatures, and key generation according to PKCS#1 version 1.5. It can be used as a Python library as well as on the commandline.
The code was mostly written by Sybren A. Stüvel. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/ruby-columnize/README.md0000644000000000000000000000515414037755137027571 0ustar00[![Build Status](https://travis-ci.org/rocky/columnize.png)](https://travis-ci.org/rocky/columnize) [![Gem Version](https://badge.fury.io/rb/columnize.svg)](http://badge.fury.io/rb/columnize) Columnize - Format an Array as a Column-aligned String ============================================================================ When showing a long list, sometimes one would prefer to see the values arranged in aligned columns. Some examples include listing methods of an object, listing debugger commands, or showing a numeric array with data aligned. Setup ----- $ irb >> require 'columnize' => true With numeric data ----------------- >> a = (1..10).to_a => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >> a.columnize => "1 2 3 4 5 6 7 8 9 10" >> puts a.columnize :arrange_array => true, :displaywidth => 10 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] => nil >> puts a.columnize :arrange_array => true, :displaywidth => 20 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] => nil With String data ---------------- >> g = %w(bibrons golden madascar leopard mourning suras tokay) => ["bibrons", "golden", "madascar", "leopard", "mourning", "suras", "tokay"] >> puts g.columnize :displaywidth => 15 bibrons suras golden tokay madascar leopard mourning => nil >> puts g.columnize :displaywidth => 19, :colsep => ' | ' bibrons | suras golden | tokay madascar leopard mourning => nil >> puts g.columnize :displaywidth => 18, :colsep => ' | ', :ljust => false bibrons | mourning golden | suras madascar | tokay leopard => nil Using Columnize.columnize ------------------------- >> Columnize.columnize(a) => "1 2 3 4 5 6 7 8 9 10" >> puts Columnize.columnize(a, :displaywidth => 10) 1 5 9 2 6 10 3 7 4 8 => nil >> Columnize.columnize(g) => "bibrons golden madascar leopard mourning suras tokay" >> puts Columnize.columnize(g, :displaywidth => 19, :colsep => ' | ') bibrons | mourning golden | suras madascar | tokay leopard => nil Credits ------- This is adapted from a method of the same name from Python's cmd module. Other stuff ----------- Authors: Rocky Bernstein [![endorse](https://api.coderwall.com/rocky/endorsecount.png)](https://coderwall.com/rocky) and [Martin Davis](https://github.com/waslogic) License: Copyright (c) 2011,2013 Rocky Bernstein Warranty -------- You can redistribute it and/or modify it under either the terms of the GPL version 2 or the conditions listed in COPYING upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/ruby-columnize/description0000644000000000000000000000033714037755137030566 0ustar00When showing a long list, sometimes one would prefer to see the values arranged in aligned columns. Some examples include listing methods of an object, listing debugger commands, or showing a numeric array with data aligned.
upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/ruby-sha3/README.md0000644000000000000000000000767514027632503026430 0ustar00# sha3 [![Gem Version](https://badge.fury.io/rb/sha3.svg)](https://badge.fury.io/rb/sha3) [![CI](https://secure.travis-ci.org/johanns/sha3.png)](https://secure.travis-ci.org/johanns/sha3) [![Dependencies](https://gemnasium.com/johanns/sha3.png)](https://gemnasium.com/johanns/sha3) [![CodeClimate](https://codeclimate.com/github/johanns/sha3.png)](https://codeclimate.com/github/johanns/sha3) **SHA3 for Ruby** is a native (C) binding to the SHA3 (Keccak FIPS 202) cryptographic hashing algorithm. - Home :: [https://github.com/johanns/sha3#readme]() - Issues :: [https://github.com/johanns/sha3/issues]() - Documentation :: [http://rubydoc.info/gems/sha3/frames]() ## Warnings - Version 1.0+ breaks compatibility with previous versions of this gem. - Do NOT use SHA3 to hash passwords; use either ```bcrypt``` or ```scrypt``` instead! ## Module details **SHA3::Digest**: A standard *Digest* _subclass_. The interface and operation of this class parallel the digest classes bundled with MRI-based Rubies (e.g.: **Digest::SHA2**, and **OpenSSL::Digest**). See [documentation for Ruby's **Digest** class for additional details](http://www.ruby-doc.org/stdlib-2.2.3/libdoc/digest/rdoc/Digest.html). ## Installation ```shell gem install sha3 ``` ## Usage ```ruby require 'sha3' ``` Valid hash bit-lengths are: *224*, *256*, *384*, *512*. ```ruby :sha224 :sha256 :sha384 :sha512 # SHA3::Digest.new(224) is SHA3::Digest.new(:sha224) ``` Alternatively, you can instantiate using one of four sub-classes: ```ruby SHA3::Digest::SHA224.new() # 224 bits SHA3::Digest::SHA256.new() # 256 bits SHA3::Digest::SHA384.new() # 384 bits SHA3::Digest::SHA512.new() # 512 bits ``` ### Basics ```ruby # Instantiate a new SHA3::Digest class with 256 bit length s = SHA3::Digest.new(:sha256) # OR # s = SHA3::Digest::SHA256.new() # Update hash state, and compute new value s.update "Compute Me" # << is an .update() alias s << "Me too" # Returns digest value in bytes s.digest # => "\xBE\xDF\r\xD9\xA1..." # Returns digest value as hex string s.hexdigest # => "bedf0dd9a15b647..." ### Digest class-methods: ### SHA3::Digest.hexdigest(:sha224, "Hash me, please") # => "200e7bc18cd613..." SHA3::Digest::SHA384.digest("Hash me, please") # => "\xF5\xCEpC\xB0eV..." ``` ### Hashing a file ```ruby # Compute the hash value for a given file, and return the result as hex s = SHA3::Digest::SHA224.file("my_fantastical_file.bin").hexdigest # Calling SHA3::Digest.file(...) defaults to SHA256 s = SHA3::Digest.file("tests.sh") # => # ``` ## Development * Native build tools (e.g., GCC, MinGW, etc.) * Gems: rubygems-tasks, rake, rspec, yard ### Testing + RSpec Call ```rake``` to run the included RSpec tests. Only a small subset of test vectors are included in the source repository; however, the complete test vector suite is available for download. Simply run the ```tests.sh``` shell script (available in the root of the source directory) to generate full byte-length RSpec test files. ```sh tests.sh``` ### Rubies Tested with Rubies: - MRI Ruby-Head - MRI 2.1.0 - MRI 2.0.0 - MRI 1.9.3 - MRI 1.9.2 - MRI 1.8.7 - Rubinius 2 On: - Ubuntu 12.04, 12.10, 13.04, 14.04, 15.04 - Windows 7, 8, 8.1, 10 - Mac OS X 10.6 - 10.11 ## Releases - *1.0.1* :: FIPS 202 compliance (breaks compatibility with earlier releases) - *0.2.6* :: Fixed bug #4 - *0.2.5* :: Bug fixes. (See ChangeLog.rdoc) - *0.2.4* :: Bug fixes.
upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/samba/README.md0000644000000000000000000001143414024131056025656 0ustar00About Samba =========== Samba is the standard Windows interoperability suite of programs for Linux and Unix. Samba is Free Software licensed under the GNU General Public License and the Samba project is a member of the Software Freedom Conservancy. Since 1992, Samba has provided secure, stable and fast file and print services for all clients using the SMB/CIFS protocol, such as all versions of DOS and Windows, OS/2, Linux and many others. Samba is an important component to seamlessly integrate Linux/Unix Servers and Desktops into Active Directory environments. It can function either as a domain controller or as a regular domain member. For the AD DC implementation a full HOWTO is provided at: https://wiki.samba.org/index.php/Samba4/HOWTO Community guidelines can be read at: https://wiki.samba.org/index.php/How_to_do_Samba:_Nicely This software is freely distributable under the GNU General Public License, a copy of which you should have received with this software (in a file called COPYING). CONTRIBUTIONS ============= Please see https://wiki.samba.org/index.php/Contribute for detailed step-by-step instructions on how to submit a patch for Samba via GitLab. Samba's GitLab mirror is at https://gitlab.com/samba-team/samba OUR CONTRIBUTORS ================ See https://www.samba.org/samba/team/ for details of the Samba Team, as well as details of all those currently active in Samba development. If you like a particular feature then look through the git change-log (on the web at https://gitweb.samba.org/?p=samba.git;a=summary) and see who added it, then send them an email. Remember that free software of this kind lives or dies by the response we get. If no one tells us they like it then we'll probably move on to something else. MORE INFO ========= DOCUMENTATION ------------- There is quite a bit of documentation included with the package, including man pages and the wiki at https://wiki.samba.org If you would like to help with our documentation, please contribute that improved content to the wiki; we are moving as much content there as possible. MAILING LIST ------------ Please do NOT send subscription/unsubscription requests to the lists! There is a mailing list for discussion of Samba. For details go to or send mail to There is also an announcement mailing list where new versions are announced. To subscribe go to or send mail to . All announcements also go to the samba list, so you only need to be on one. For details of other Samba mailing lists and for access to archives, see MAILING LIST ETIQUETTE ---------------------- A few tips when submitting to this or any mailing list.
1. Make your subject short and descriptive. Avoid the words "help" or "Samba" in the subject. The readers of this list already know that a) you need help, and b) you are writing about samba (of course, you may need to distinguish between Samba PDC and other file sharing software). Avoid phrases such as "what is" and "how do i". Some good subject lines might look like "Slow response with Excel files" or "Migrating from Samba PDC to NT PDC". 2. If you include the original message in your reply, trim it so that only the relevant lines, enough to establish context, are included. Chances are (since this is a mailing list) we've already read the original message. 3. Trim irrelevant headers from the original message in your reply. All we need to see is a) From, b) Date, and c) Subject. We don't even really need the Subject, if you haven't changed it. Better yet is to just preface the original message with "On [date] [someone] wrote:". 4. Please don't reply to or argue about spam, spam filters or viruses on any Samba lists. We do have a spam filtering system that is working quite well, thank you very much, but occasionally unwanted messages slip through. Deal with it. 5. Never say "Me too." It doesn't help anyone solve the problem. Instead, if you ARE having the same problem, give more information. Have you seen something that the other writer hasn't mentioned, which may be helpful? 6. If you ask about a problem, then come up with the solution on your own or through another source, by all means post it. Someone else may have the same problem and is waiting for an answer, but never hears of it. 7. Give as much *relevant* information as possible, such as Samba release number, OS, kernel version, etc... 8. RTFM. Google. WEBSITE ------- A Samba website has been set up with lots of useful info. Connect to: https://www.samba.org/ As well as general information and documentation, this also has searchable archives of the mailing list and links to other useful resources such as the wiki. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/samba/description0000644000000000000000000000147614024131056026652 0ustar00Samba is the standard Windows interoperability suite of programs for Linux and Unix. Samba is Free Software licensed under the GNU General Public License and the Samba project is a member of the Software Freedom Conservancy. Since 1992, Samba has provided secure, stable and fast file and print services for all clients using the SMB/CIFS protocol, such as all versions of DOS and Windows, OS/2, Linux and many others. Samba is an important component to seamlessly integrate Linux/Unix Servers and Desktops into Active Directory environments. It can function either as a domain controller or as a regular domain member. For the AD DC implementation a full HOWTO is provided at: https://wiki.samba.org/index.php/Samba4/HOWTO Community guidelines can be read at: https://wiki.samba.org/index.php/How_to_do_Samba:_Nicely upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/saneyaml/README.rst0000644000000000000000000000261014023146352026614 0ustar00======== saneyaml ======== This micro library is a PyYAML wrapper with sane behaviour to read and write readable YAML safely, typically when used with configuration files. With saneyaml you can dump readable and clean YAML and safely load any YAML, preserving ordering and avoiding type-conversion surprises by loading everything except booleans as strings. Optionally, you can check for duplicated map keys when loading YAML. Works with Python 2 and 3. Requires PyYAML. License: apache-2.0 Homepage_url: https://github.com/nexB/saneyaml Usage:: pip install saneyaml >>> from saneyaml import load as l >>> from saneyaml import dump as d >>> a=l('''version: 3.0.0.dev6 ... ... description: | ... AboutCode Toolkit is a tool to process ABOUT files. An ABOUT file ... provides a way to document a software component. ... ''') >>> a OrderedDict([ (u'version', u'3.0.0.dev6'), (u'description', u'AboutCode Toolkit is a tool to process ABOUT files. ' 'An ABOUT file\nprovides a way to document a software component.\n')]) >>> pprint(a.items()) [(u'version', u'3.0.0.dev6'), (u'description', u'AboutCode Toolkit is a tool to process ABOUT files. An ABOUT file\nprovides a way to document a software component.\n')] >>> print(d(a)) version: 3.0.0.dev6 description: | AboutCode Toolkit is a tool to process ABOUT files. An ABOUT file provides a way to document a software component. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/saneyaml/description0000644000000000000000000000070014023146352027371 0ustar00This micro library is a PyYAML wrapper with sane behaviour to read and write readable YAML safely, typically when used with configuration files. With saneyaml you can dump readable and clean YAML and safely load any YAML, preserving ordering and avoiding type-conversion surprises by loading everything except booleans as strings. Optionally, you can check for duplicated map keys when loading YAML. Works with Python 2 and 3. Requires PyYAML.
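An aside on the saneyaml README above: the "everything except booleans loads as a string" behaviour can be approximated in a few lines of plain PyYAML. This is a sketch of the technique only, not saneyaml's actual code, and the `StringLoader` name is our own:

```python
import yaml

class StringLoader(yaml.SafeLoader):
    """SafeLoader variant: scalars stay strings, booleans stay booleans."""

# Reroute the scalar tags SafeLoader would otherwise coerce back to str.
# add_constructor only affects this subclass, not SafeLoader itself.
for _tag in ("int", "float", "timestamp"):
    StringLoader.add_constructor(
        "tag:yaml.org,2002:" + _tag, yaml.SafeLoader.construct_yaml_str)

doc = yaml.load("version: 3.0\nreleased: 2021-04-01\nstable: true",
                Loader=StringLoader)
print(doc)  # {'version': '3.0', 'released': '2021-04-01', 'stable': True}
```

Ordering comes for free on Python 3.7+, where dicts preserve insertion order; a duplicate-key check can be added the same way by overriding `construct_mapping` in the subclass.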
upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/sfcgal/README.md0000644000000000000000000000044114023143032026022 0ustar00SFCGAL ====== SFCGAL is a C++ wrapper library around [CGAL](http://www.cgal.org) with the aim of supporting ISO 19107:2013 and OGC Simple Features for 3D operations. Please refer to the project page for an updated installation procedure. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/sfcgal/description0000644000000000000000000000020314024121740027010 0ustar00SFCGAL is a C++ wrapper library around CGAL with the aim of supporting ISO 19107:2013 and OGC Simple Features for 3D operations. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/statuscake/README.md0000644000000000000000000000060714024413051026740 0ustar00# statuscake [![Build Status](https://travis-ci.org/DreamItGetIT/statuscake.svg?branch=master)](https://travis-ci.org/DreamItGetIT/statuscake) `statuscake` is a Go package that implements a client for the [statuscake](https://statuscake.com) API. More documentation and examples at [http://godoc.org/github.com/DreamItGetIT/statuscake](http://godoc.org/github.com/DreamItGetIT/statuscake). upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/statuscake/description0000644000000000000000000000011014024413051027724 0ustar00statuscake is a Go package that implements a client for the statuscake API. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/text-worddif/README.md0000644000000000000000000000224514030712662027220 0ustar00Text/WordDiff version 0.09 ========================== This library's module, Text::WordDiff, is a variation on the lovely [Text::Diff](http://search.cpan.org/perldoc?Text::Diff) module. Rather than generating traditional line-oriented diffs, however, it generates word-oriented diffs. This can be useful for tracking changes in narrative documents or documents with very long lines. To diff source code, one is still best off using Text::Diff. But if you want to see how a short story changed from one version to the next, this module will do the job very nicely.
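An aside on Text::WordDiff above: the word-oriented idea is language-neutral — tokenize into words instead of lines, then run an ordinary diff over the token sequences. A rough sketch in Python with the standard difflib (the `word_diff` helper is illustrative, not a port of this Perl module):

```python
import difflib

def word_diff(old: str, new: str) -> str:
    """Mark word-level changes: deletions as [-...-], insertions as {+...+}."""
    a, b = old.split(), new.split()
    out = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "equal":
            out.extend(a[i1:i2])
        else:  # delete, insert, or replace
            if i1 < i2:
                out.append("[-" + " ".join(a[i1:i2]) + "-]")
            if j1 < j2:
                out.append("{+" + " ".join(b[j1:j2]) + "+}")
    return " ".join(out)

print(word_diff("the quick brown fox", "the slow brown fox"))
# the [-quick-] {+slow+} brown fox
```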
INSTALLATION To install this module, type the following: perl Build.PL ./Build ./Build test ./Build install Or, if you don't have Module::Build installed, type the following: perl Makefile.PL make make test make install Dependencies ------------ Text::WordDiff requires the following modules: * Algorithm::Diff '1.19', * Term::ANSIColor '0', * HTML::Entities '0', Copyright and License --------------------- Copyright (c) 2005-2011 David E. Wheeler. Some Rights Reserved. This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/text-worddif/description0000644000000000000000000000072114030712662030204 0ustar00This library's module, Text::WordDiff, is a variation on the lovely Text::Diff module. Rather than generating traditional line-oriented diffs, however, it generates word-oriented diffs. This can be useful for tracking changes in narrative documents or documents with very long lines. To diff source code, one is still best off using Text::Diff. But if you want to see how a short story changed from one version to the next, this module will do the job very nicely. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/wandio/README0000644000000000000000000000244414075346463025476 0ustar00WANDIO 4.2.1 --------------------------------------------------------------------------- Copyright (c) 2007-2019 The University of Waikato, Hamilton, New Zealand. All rights reserved. This code has been developed by the University of Waikato WAND research group. For further information please see http://www.wand.net.nz/. --------------------------------------------------------------------------- See INSTALL for instructions on how to install WANDIO. This directory contains source code for WANDIO, a library for reading from, and writing to, files. Depending on the libraries available at compile time, WANDIO provides transparent compression/decompression for the following formats: - zlib (gzip) - bzip2 - lzo (write-only) - lzma - zstd - lz4 - Intel QAT (write-only) - http (read-only) WANDIO also improves I/O performance by performing compression/decompression in a separate thread (if pthreads are available). Documentation for WANDIO and its included tools can be found at https://github.com/wanduow/wandio/wiki Bugs should be reported by either emailing contact@wand.net.nz or filing an issue at https://github.com/wanduow/wandio It is licensed under the GNU Lesser General Public License (LGPL) version 3. Please see the included files COPYING and COPYING.LESSER for details of this license. upstream-ontologist_0.1.24.orig/upstream_ontologist/tests/readme_data/wandio/description0000644000000000000000000000072614075346463027065 0ustar00This directory contains source code for WANDIO, a library for reading from, and writing to, files. Depending on the libraries available at compile time, WANDIO provides transparent compression/decompression for the following formats: - zlib (gzip) - bzip2 - lzo (write-only) - lzma - zstd - lz4 - Intel QAT (write-only) - http (read-only) WANDIO also improves I/O performance by performing compression/decompression in a separate thread (if pthreads are available).
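A closing aside on WANDIO above: the transparent-decompression trick — choosing a decoder by sniffing the file's leading magic bytes — is easy to sketch. A minimal Python analogue using only the standard library (the `wandio_open` name is our own; the real library is C, supports more codecs, and adds threaded I/O):

```python
import bz2, gzip, lzma

# Magic prefixes for the stdlib-supported subset of WANDIO's formats.
_OPENERS = [
    (b"\x1f\x8b", gzip.open),      # zlib (gzip)
    (b"BZh", bz2.open),            # bzip2
    (b"\xfd7zXZ\x00", lzma.open),  # lzma/xz
]

def wandio_open(path):
    """Open path for reading, transparently decompressing known formats."""
    with open(path, "rb") as f:
        head = f.read(6)
    for magic, opener in _OPENERS:
        if head.startswith(magic):
            return opener(path, "rb")
    return open(path, "rb")
```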