././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5243025 hypothesmith-0.3.3/0000755000175100001770000000000000000000000015050 5ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/CHANGELOG.md0000644000175100001770000001007100000000000016660 0ustar00runnerdocker00000000000000# Changelog ### 0.3.3 - 2024-02-16 - Add Python 3.12 and 3.13 to CI - Can now be imported on Python 3.13 ### 0.3.2 - 2024-01-13 - Performance improvements based on [Hypothesis 6.93.0](https://hypothesis.readthedocs.io/en/latest/changes.html#v6-93-0) ### 0.3.1 - 2023-09-06 - Hypothesis >= 6.89.0 made some internal changes which broke our `from_grammar()`. This patch restores compatibility, and requires the new Hypothesis. ### 0.3.0 - 2023-09-06 - Even more efficient generation of Python identifiers, using [Hypothesis 6.84.1](https://hypothesis.readthedocs.io/en/latest/changes.html#v6-84-1) - Dropped support for Python 3.7, which is end-of-life and not supported by Hypothesis ### 0.2.3 - 2022-12-01 - Actually package up the renamed grammar file this time. ### 0.2.2 - 2022-11-29 - Correct the minimum required version of LibCST - Use `importlib.resources` to load the Python grammar (zip-safe) ### 0.2.1 - 2022-11-25 - Use the `lark` package on PyPI, for Lark version 1.0+ (requires recent Hypothesis) - Use updated Python grammar for `from_grammar()`, thanks to Lark updates - Drop support for Python 3.6, add 3.10 and 3.11 to CI ### 0.2.0 - 2021-11-27 - More efficient generation of Python identifiers - Workaround for [CPython parser bug in 3.9.8](https://bugs.python.org/issue45738) (#16) Yes, *yet another* one - still, that's why `hypothesmith` exists... ### 0.1.9 - 2021-08-19 - Fixed rare internal error when `from_node()` generated misplaced `except:` clauses ### 0.1.8 - 2021-01-10 - Now compatible with Hypothesis 5.46 and later - Increased diversity of examples generated by `from_node()` ### 0.1.7 - 2020-11-15 - Adds a workaround for [BPO-42218](https://bugs.python.org/issue42218) - Bumped Hypothesis requirement to version 5.41 or later (better internal error messages) - Added `MANIFEST.in` so the sdist includes license, tests, etc. ### 0.1.6 - 2020-11-01 - Improved error messages from internal compiler errors *again* ([BPO-42218](https://bugs.python.org/issue42218) will be fixed in Python 3.9.1) - Bumped Hypothesis requirement to version 5.39 or later ### 0.1.5 - 2020-10-12 - Emit additional debug info when Python fails to compile a string ### 0.1.4 - 2020-08-16 - Improve handling of identifiers - Fix internal error in `from_grammar("single_input")` ### 0.1.3 - 2020-07-30 - Update to latest versions of LibCST and Hypothesis, for Python 3.9 support ### 0.1.2 - 2020-05-17 - Emit *more* debug info to diagnose a `compile()` issue in CPython nightly ### 0.1.1 - 2020-05-17 - Emit some debug info to help diagnose a possible upstream bug in CPython nightly ### 0.1.0 - 2020-04-24 - Added `auto_target=True` argument to the `from_node()` strategy. - Improved `from_node()` generation of comments and trailing whitespace. ### 0.0.8 - 2020-04-23 - Added a `from_node()` strategy which uses [`LibCST`](https://pypi.org/project/libcst/) to generate source code. This is a proof-of-concept rather than a robust tool, but IMO it's a pretty cool concept. 
### 0.0.7 - 2020-04-19 - The `from_grammar()` strategy now takes an `auto_target=True` argument, to drive generated examples towards (relatively) larger and more complex programs. ### 0.0.6 - 2020-04-08 - support for non-ASCII identifiers ### 0.0.5 - 2019-11-27 - Updated project metadata and started testing on Python 3.8 ### 0.0.4 - 2019-09-10 - Depends on more recent Hypothesis version, with upstreamed grammar generation. - Improved filtering rejects fewer valid examples, finding another bug in Black. ### 0.0.3 - 2019-08-08 Checks validity at statement level, which makes filtering much more efficient. Improved testing, input validation, and code comments. ### 0.0.2 - 2019-08-07 Improved filtering and fixing of source code generated from the grammar. This version found a novel bug: `"pass #\\r#\\n"` is accepted by the built-in `compile()` and `exec()` functions, but not by `black` or `lib2to3`. ### 0.0.1 - 2019-08-06 Initial release. This is a minimal proof of concept, generating from the grammar and rejecting it if we get errors from `black` or `tokenize`. Cool, but while promising not very useful at this stage. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/LICENSE0000644000175100001770000004052400000000000016062 0ustar00runnerdocker00000000000000Mozilla Public License Version 2.0 ================================== 1. Definitions -------------- 1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. 1.3. "Contribution" means Covered Software of a particular Contributor. 1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. "Incompatible With Secondary Licenses" means (a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or (b) that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. "Executable Form" means any form of the work other than Source Code Form. 1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. "License" means this document. 1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. "Modifications" means any of the following: (a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or (b) any new file in Source Code Form that contains any Covered Software. 1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. 
"Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. "Source Code Form" means the form of the work preferred for making modifications. 1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions -------------------------------- 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: (a) under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and (b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: (a) for any code that a Contributor has removed from Covered Software; or (b) for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or (c) under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities ------------------- 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. 
You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: (a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and (b) You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation --------------------------------------------------- If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination -------------- 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. 
However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. ************************************************************************ * * * 6. Disclaimer of Warranty * * ------------------------- * * * * Covered Software is provided under this License on an "as is" * * basis, without warranty of any kind, either expressed, implied, or * * statutory, including, without limitation, warranties that the * * Covered Software is free of defects, merchantable, fit for a * * particular purpose or non-infringing. The entire risk as to the * * quality and performance of the Covered Software is with You. * * Should any Covered Software prove defective in any respect, You * * (not any Contributor) assume the cost of any necessary servicing, * * repair, or correction. This disclaimer of warranty constitutes an * * essential part of this License. No use of any Covered Software is * * authorized under this License except under this disclaimer. * * * ************************************************************************ ************************************************************************ * * * 7. Limitation of Liability * * -------------------------- * * * * Under no circumstances and under no legal theory, whether tort * * (including negligence), contract, or otherwise, shall any * * Contributor, or anyone who distributes Covered Software as * * permitted above, be liable to You for any direct, indirect, * * special, incidental, or consequential damages of any character * * including, without limitation, damages for lost profits, loss of * * goodwill, work stoppage, computer failure or malfunction, or any * * and all other commercial damages or losses, even if such party * * shall have been informed of the possibility of such damages. This * * limitation of liability shall not apply to liability for death or * * personal injury resulting from such party's negligence to the * * extent applicable law prohibits such limitation. Some * * jurisdictions do not allow the exclusion or limitation of * * incidental or consequential damages, so this exclusion and * * limitation may not apply to You. * * * ************************************************************************ 8. 
Litigation ------------- Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. 9. Miscellaneous ---------------- This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License --------------------------- 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice ------------------------------------------- This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. Exhibit B - "Incompatible With Secondary Licenses" Notice --------------------------------------------------------- This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 
2.0.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/MANIFEST.in0000644000175100001770000000016100000000000016604 0ustar00runnerdocker00000000000000include *.md include LICENSE include tox.ini recursive-include deps *.in *.md *.txt recursive-include tests *.py ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5243025 hypothesmith-0.3.3/PKG-INFO0000644000175100001770000001040600000000000016146 0ustar00runnerdocker00000000000000Metadata-Version: 2.1 Name: hypothesmith Version: 0.3.3 Summary: Hypothesis strategies for generating Python programs, something like CSmith Home-page: https://github.com/Zac-HD/hypothesmith Author: Zac Hatfield-Dodds Author-email: zac@hypothesis.works License: MPL 2.0 Project-URL: Funding, https://github.com/sponsors/Zac-HD Keywords: python testing fuzzing property-based-testing Classifier: Development Status :: 4 - Beta Classifier: Framework :: Hypothesis Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0) Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Topic :: Software Development :: Testing Requires-Python: >=3.8 Description-Content-Type: text/markdown License-File: LICENSE Requires-Dist: hypothesis[lark]>=6.93.0 Requires-Dist: libcst>=1.0.1 # hypothesmith Hypothesis strategies for generating Python programs, something like CSmith. This is definitely pre-alpha, but if you want to play with it feel free! You can even keep the shiny pieces when - not if - it breaks. Get it today with [`pip install hypothesmith`](https://pypi.org/project/hypothesmith/), or by cloning [the GitHub repo](https://github.com/Zac-HD/hypothesmith). You can run the tests, such as they are, with `tox` on Python 3.6 or later. Use `tox -va` to see what environments are available. ## Usage This package provides two Hypothesis strategies for generating Python source code. The generated code will always be syntatically valid, and is useful for testing parsers, linters, auto-formatters, and other tools that operate on source code. > DO NOT EXECUTE CODE GENERATED BY THESE STRATEGIES. > > It could do literally anything that running Python code is able to do, > including changing, deleting, or uploading important data. Arbitrary > code can be useful, but "arbitrary code execution" can be very, very bad. #### `hypothesmith.from_grammar(start="file_input", *, auto_target=True)` Generates syntactically-valid Python source code based on the grammar. Valid values for ``start`` are ``"single_input"``, ``"file_input"``, or ``"eval_input"``; respectively a single interactive statement, a module or sequence of commands read from a file, and input for the eval() function. If ``auto_target`` is ``True``, this strategy uses ``hypothesis.target()`` internally to drive towards larger and more complex examples. We recommend leaving this enabled, as the grammar is quite complex and only simple examples tend to be generated otherwise. 
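For a quick look at what this strategy produces, you can draw a single example interactively (a sketch only; printing the string is safe, but never run it):

```python
import hypothesmith

# .example() is for interactive exploration only, not for use inside tests.
print(hypothesmith.from_grammar("file_input").example())
```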
#### `hypothesmith.from_node(node=libcst.Module, *, auto_target=True)` Generates syntactically-valid Python source code based on the node types defined by the [`LibCST`](https://libcst.readthedocs.io/en/latest/) project. You can pass any subtype of `libcst.CSTNode`. Alternatively, you can use Hypothesis' built-in `from_type(node_type).map(lambda n: libcst.Module([n]).code`, after Hypothesmith has registered the required strategies. However, this does not include automatic targeting and limitations of LibCST may lead to invalid code being generated. ## Notable bugs found with Hypothesmith - [BPO-40661, a segfault in the new parser](https://bugs.python.org/issue40661), was given maximum priority and blocked the planned release of CPython 3.9 beta1. - [BPO-38953](https://bugs.python.org/issue38953) `tokenize` -> `untokenize` roundtrip bugs. - [BPO-42218](https://bugs.python.org/issue42218) mishandled error case in new PEG parser. - [`lib2to3` errors on \r in comment](https://github.com/psf/black/issues/970) - [Black fails on files ending in a backslash](https://github.com/psf/black/issues/1012) - [At least three round-trip bugs in LibCST](https://github.com/Instagram/LibCST#acknowledgements) ([search commits for "hypothesis"](https://github.com/Instagram/LibCST/search?q=Fix+hypothesis&type=commits)) - [Invalid code generated by LibCST](https://github.com/Instagram/LibCST/issues/287) ### Changelog Patch notes [can be found in `CHANGELOG.md`](https://github.com/Zac-HD/hypothesmith/blob/master/CHANGELOG.md). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/README.md0000644000175100001770000000623000000000000016330 0ustar00runnerdocker00000000000000# hypothesmith Hypothesis strategies for generating Python programs, something like CSmith. This is definitely pre-alpha, but if you want to play with it feel free! You can even keep the shiny pieces when - not if - it breaks. Get it today with [`pip install hypothesmith`](https://pypi.org/project/hypothesmith/), or by cloning [the GitHub repo](https://github.com/Zac-HD/hypothesmith). You can run the tests, such as they are, with `tox` on Python 3.6 or later. Use `tox -va` to see what environments are available. ## Usage This package provides two Hypothesis strategies for generating Python source code. The generated code will always be syntatically valid, and is useful for testing parsers, linters, auto-formatters, and other tools that operate on source code. > DO NOT EXECUTE CODE GENERATED BY THESE STRATEGIES. > > It could do literally anything that running Python code is able to do, > including changing, deleting, or uploading important data. Arbitrary > code can be useful, but "arbitrary code execution" can be very, very bad. #### `hypothesmith.from_grammar(start="file_input", *, auto_target=True)` Generates syntactically-valid Python source code based on the grammar. Valid values for ``start`` are ``"single_input"``, ``"file_input"``, or ``"eval_input"``; respectively a single interactive statement, a module or sequence of commands read from a file, and input for the eval() function. If ``auto_target`` is ``True``, this strategy uses ``hypothesis.target()`` internally to drive towards larger and more complex examples. We recommend leaving this enabled, as the grammar is quite complex and only simple examples tend to be generated otherwise. 
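For example, a minimal property-based test (a sketch; the test name and use of `pytest` are illustrative, not part of this package) could check that every generated program is accepted by CPython's parser without ever executing it:

```python
import ast

from hypothesis import given, settings

import hypothesmith


@given(source_code=hypothesmith.from_grammar("file_input"))
@settings(deadline=None)  # generation can be slow for complex programs
def test_generated_source_is_parseable(source_code):
    # Parsing and compiling do not execute the code, so this respects the
    # warning above about never running generated source.
    ast.parse(source_code)
    compile(source_code, "<string>", "exec")
```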
#### `hypothesmith.from_node(node=libcst.Module, *, auto_target=True)` Generates syntactically-valid Python source code based on the node types defined by the [`LibCST`](https://libcst.readthedocs.io/en/latest/) project. You can pass any subtype of `libcst.CSTNode`. Alternatively, you can use Hypothesis' built-in `from_type(node_type).map(lambda n: libcst.Module([n]).code`, after Hypothesmith has registered the required strategies. However, this does not include automatic targeting and limitations of LibCST may lead to invalid code being generated. ## Notable bugs found with Hypothesmith - [BPO-40661, a segfault in the new parser](https://bugs.python.org/issue40661), was given maximum priority and blocked the planned release of CPython 3.9 beta1. - [BPO-38953](https://bugs.python.org/issue38953) `tokenize` -> `untokenize` roundtrip bugs. - [BPO-42218](https://bugs.python.org/issue42218) mishandled error case in new PEG parser. - [`lib2to3` errors on \r in comment](https://github.com/psf/black/issues/970) - [Black fails on files ending in a backslash](https://github.com/psf/black/issues/1012) - [At least three round-trip bugs in LibCST](https://github.com/Instagram/LibCST#acknowledgements) ([search commits for "hypothesis"](https://github.com/Instagram/LibCST/search?q=Fix+hypothesis&type=commits)) - [Invalid code generated by LibCST](https://github.com/Instagram/LibCST/issues/287) ### Changelog Patch notes [can be found in `CHANGELOG.md`](https://github.com/Zac-HD/hypothesmith/blob/master/CHANGELOG.md). ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5203025 hypothesmith-0.3.3/deps/0000755000175100001770000000000000000000000016003 5ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/deps/README.md0000644000175100001770000000135500000000000017266 0ustar00runnerdocker00000000000000# Pinning dependencies Hypothesmith pins *all* our dependencies for testing, and disables installation of any unlisted dependencies to make sure the set of pins is complete. How does this work? 1. `setup.py` lists all our top-level dependencies for the library, and *also* lists the development and test-time dependencies. 2. `pip-compile` calculates all the transitive dependencies we need, with exact version pins. We use `tox -e deps` to make this more convenient, and don't bother pinning `pip-tools` as it's always run manually (never in CI). 3. `tox` then installs from the files full of pinned versions here! 
That's it - a simple implementation but it stabilises the whole dependency chain and really improves visibility :-) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/deps/check.in0000644000175100001770000000024100000000000017405 0ustar00runnerdocker00000000000000# Top-level dependencies for `tox -e check` flake8 flake8-bandit flake8-bugbear flake8-comprehensions flake8-docstrings hypothesis # for typechecking mypy shed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/deps/check.txt0000644000175100001770000000363100000000000017624 0ustar00runnerdocker00000000000000# # This file is autogenerated by pip-compile with Python 3.10 # by the following command: # # pip-compile --output-file=deps/check.txt deps/check.in # attrs==23.2.0 # via # flake8-bugbear # hypothesis autoflake==2.2.1 # via shed bandit==1.7.7 # via flake8-bandit black==24.2.0 # via shed click==8.1.7 # via black com2ann==0.3.0 # via shed exceptiongroup==1.2.0 # via hypothesis flake8==7.0.0 # via # -r deps/check.in # flake8-bandit # flake8-bugbear # flake8-comprehensions # flake8-docstrings flake8-bandit==4.1.1 # via -r deps/check.in flake8-bugbear==24.2.6 # via -r deps/check.in flake8-comprehensions==3.14.0 # via -r deps/check.in flake8-docstrings==1.7.0 # via -r deps/check.in hypothesis==6.98.6 # via -r deps/check.in isort==5.13.2 # via shed libcst==1.1.0 # via shed markdown-it-py==3.0.0 # via rich mccabe==0.7.0 # via flake8 mdurl==0.1.2 # via markdown-it-py mypy==1.8.0 # via -r deps/check.in mypy-extensions==1.0.0 # via # black # mypy # typing-inspect packaging==23.2 # via black pathspec==0.12.1 # via black pbr==6.0.0 # via stevedore platformdirs==4.2.0 # via black pycodestyle==2.11.1 # via flake8 pydocstyle==6.3.0 # via flake8-docstrings pyflakes==3.2.0 # via # autoflake # flake8 pygments==2.17.2 # via rich pyupgrade==3.15.0 # via shed pyyaml==6.0.1 # via # bandit # libcst rich==13.7.0 # via bandit shed==2024.1.1 # via -r deps/check.in snowballstemmer==2.2.0 # via pydocstyle sortedcontainers==2.4.0 # via hypothesis stevedore==5.1.0 # via bandit tokenize-rt==5.2.0 # via pyupgrade tomli==2.0.1 # via # autoflake # black # mypy typing-extensions==4.9.0 # via # black # libcst # mypy # typing-inspect typing-inspect==0.9.0 # via libcst ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/deps/test.in0000644000175100001770000000012600000000000017311 0ustar00runnerdocker00000000000000# Top-level dependencies for `tox -e test` black parso pytest pytest-cov pytest-xdist ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/deps/test.txt0000644000175100001770000000242400000000000017525 0ustar00runnerdocker00000000000000# # This file is autogenerated by pip-compile with Python 3.10 # by the following command: # # pip-compile --output-file=deps/test.txt deps/test.in setup.py # attrs==23.2.0 # via hypothesis black==24.2.0 # via -r deps/test.in click==8.1.7 # via black coverage[toml]==7.4.1 # via pytest-cov exceptiongroup==1.2.0 # via # hypothesis # pytest execnet==2.0.2 # via pytest-xdist hypothesis[lark]==6.98.6 # via hypothesmith (setup.py) iniconfig==2.0.0 # via pytest lark==1.1.9 # via hypothesis libcst==1.1.0 # via hypothesmith (setup.py) mypy-extensions==1.0.0 # via # black # typing-inspect packaging==23.2 # via # black # pytest 
parso==0.8.3 # via -r deps/test.in pathspec==0.12.1 # via black platformdirs==4.2.0 # via black pluggy==1.4.0 # via pytest pytest==8.0.0 # via # -r deps/test.in # pytest-cov # pytest-xdist pytest-cov==4.1.0 # via -r deps/test.in pytest-xdist==3.5.0 # via -r deps/test.in pyyaml==6.0.1 # via libcst sortedcontainers==2.4.0 # via hypothesis tomli==2.0.1 # via # black # coverage # pytest typing-extensions==4.9.0 # via # black # libcst # typing-inspect typing-inspect==0.9.0 # via libcst ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5243025 hypothesmith-0.3.3/setup.cfg0000644000175100001770000000004600000000000016671 0ustar00runnerdocker00000000000000[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/setup.py0000644000175100001770000000350500000000000016565 0ustar00runnerdocker00000000000000"""Packaging config for Hypothesmith.""" import os import setuptools def local_file(name: str) -> str: """Interpret filename as relative to this file.""" return os.path.relpath(os.path.join(os.path.dirname(__file__), name)) SOURCE = local_file("src") README = local_file("README.md") with open(local_file("src/hypothesmith/__init__.py")) as o: for line in o: if line.startswith("__version__"): _, __version__, _ = line.split('"') setuptools.setup( name="hypothesmith", version=__version__, author="Zac Hatfield-Dodds", author_email="zac@hypothesis.works", packages=setuptools.find_packages(SOURCE), package_dir={"": SOURCE}, package_data={"": ["py.typed", "python.lark"]}, url="https://github.com/Zac-HD/hypothesmith", project_urls={"Funding": "https://github.com/sponsors/Zac-HD"}, license="MPL 2.0", description="Hypothesis strategies for generating Python programs, something like CSmith", zip_safe=False, install_requires=["hypothesis[lark]>=6.93.0", "libcst>=1.0.1"], python_requires=">=3.8", classifiers=[ "Development Status :: 4 - Beta", "Framework :: Hypothesis", "Intended Audience :: Developers", "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Testing", ], long_description=open(README).read(), long_description_content_type="text/markdown", keywords="python testing fuzzing property-based-testing", ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5203025 hypothesmith-0.3.3/src/0000755000175100001770000000000000000000000015637 5ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5203025 hypothesmith-0.3.3/src/hypothesmith/0000755000175100001770000000000000000000000020364 5ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/src/hypothesmith/__init__.py0000644000175100001770000000035400000000000022477 0ustar00runnerdocker00000000000000"""Hypothesis strategies for generating Python source code, somewhat like CSmith.""" from hypothesmith.cst import from_node from hypothesmith.syntactic import from_grammar __version__ = "0.3.3" __all__ = ["from_grammar", "from_node"] 
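As an illustration of the strategies exported above (a sketch which only compiles the output, never executes it; `libcst.FunctionDef` is just one arbitrary choice of node type):

```python
import libcst
from hypothesis import given

from hypothesmith import from_node


# from_node() accepts any libcst.CSTNode subtype; here we generate source
# code consisting of a single function definition.
@given(code=from_node(libcst.FunctionDef))
def test_generated_function_defs_compile(code):
    compile(code, "<string>", "exec")
```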
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/src/hypothesmith/cst.py0000644000175100001770000002465400000000000021542 0ustar00runnerdocker00000000000000""" Generating Python source code from a syntax tree. Thanks to Instagram for open-sourcing libCST (which is great!) and thanks to Tolkein for the name of this module. """ import ast import dis from inspect import getfullargspec, isabstract from tokenize import ( Floatnumber as FLOATNUMBER_RE, Imagnumber as IMAGNUMBER_RE, Intnumber as INTNUMBER_RE, ) from typing import Type import libcst from hypothesis import assume, infer, strategies as st, target from hypothesis.strategies._internal.types import _global_type_lookup from libcst._nodes.expression import ExpressionPosition from libcst._nodes.statement import _INDENT_WHITESPACE_RE from .syntactic import ALLOWED_CHARS def py_from_regex(pattern): return st.from_regex(pattern, fullmatch=True, alphabet=ALLOWED_CHARS) # For some nodes, we just need to ensure that they use the appropriate regex # pattern instead of allowing literally any string. for node_type, pattern in { libcst.Float: FLOATNUMBER_RE, libcst.Integer: INTNUMBER_RE, libcst.Imaginary: IMAGNUMBER_RE, libcst.SimpleWhitespace: libcst._nodes.whitespace.SIMPLE_WHITESPACE_RE, }.items(): _strategy = st.builds(node_type, py_from_regex(pattern)) st.register_type_strategy(node_type, _strategy) # type-ignore comments are special in the 3.8+ (typed) ast, so boost their chances) _comments = py_from_regex(libcst._nodes.whitespace.COMMENT_RE) st.register_type_strategy( libcst.Comment, st.builds(libcst.Comment, _comments | st.just("# type: ignore")) ) # `from_type()` has less laziness than other strategies, we we register for these # foundational node types *before* referring to them in other strategies. st.register_type_strategy( libcst.Name, st.builds(libcst.Name, st.text().filter(str.isidentifier)) ) st.register_type_strategy( libcst.SimpleString, st.builds(libcst.SimpleString, st.text().map(repr)) ) # Ensure that ImportAlias uses Attribute nodes composed only of Name nodes. names = st.from_type(libcst.Name) name_only_attributes = st.one_of( names, st.builds(libcst.Attribute, names, names), st.builds(libcst.Attribute, st.builds(libcst.Attribute, names, names), names), ) st.register_type_strategy( libcst.ImportAlias, st.builds(libcst.ImportAlias, name_only_attributes) ) def nonempty_seq(*node: Type[libcst.CSTNode]) -> st.SearchStrategy: return st.lists(st.one_of(*map(st.from_type, node)), min_size=1) # There are around 150 concrete types of CST nodes. Delightfully, libCST uses # dataclasses for all these classes, so we can allow the `builds` & `from_type` # inference to provide most of our arguments for us. # However, in some cases we want to either restrict arguments (e.g. libcst.Name), # or supply something nastier than the default argument (e.g. 
libcst.SimpleWhitespace) nonempty_whitespace = st.builds(libcst.SimpleWhitespace, py_from_regex(" +")) REGISTERED = ( [libcst.Asynchronous, nonempty_whitespace], [libcst.AsName, st.from_type(libcst.Name)], [libcst.AnnAssign, infer, infer, infer], [libcst.Assign, nonempty_seq(libcst.AssignTarget)], [libcst.Await, infer, st.just(()), st.just(()), nonempty_whitespace], [libcst.Attribute, infer, infer, infer], [libcst.Comparison, infer, nonempty_seq(libcst.ComparisonTarget)], [libcst.Decorator, st.from_type(libcst.Name) | st.from_type(libcst.Attribute)], [libcst.EmptyLine, infer, infer, infer], [libcst.ExceptHandler, infer, infer, infer, infer, nonempty_whitespace, infer], [libcst.Global, nonempty_seq(libcst.NameItem)], [libcst.Import, nonempty_seq(libcst.ImportAlias)], [ libcst.ImportFrom, st.from_type(libcst.Name) | st.from_type(libcst.Attribute), nonempty_seq(libcst.ImportAlias), ], [libcst.IndentedBlock, infer, infer, py_from_regex(_INDENT_WHITESPACE_RE)], [libcst.IsNot, infer, nonempty_whitespace, infer], [ libcst.MatchSingleton, st.builds(libcst.Name, st.sampled_from(["None", "False", "True"])), ], [libcst.NamedExpr, st.from_type(libcst.Name)], [ libcst.Nonlocal, nonempty_seq(libcst.NameItem).map( lambda names: names[:-1] + [names[-1].with_changes(comma=libcst.MaybeSentinel.DEFAULT)] ), ], [libcst.NotEqual, st.just("!=")], [libcst.NotIn, infer, nonempty_whitespace, infer], [libcst.Set, nonempty_seq(libcst.Element, libcst.StarredElement)], [libcst.Subscript, infer, nonempty_seq(libcst.SubscriptElement)], [libcst.TrailingWhitespace, infer, infer], [libcst.With, nonempty_seq(libcst.WithItem)], ) @st.composite def builds_filtering(draw, t, **kwargs): # type: ignore try: return draw(st.builds(t, **kwargs)) except libcst.CSTValidationError: assume(False) # This is where the magic happens: teach `st.from_type` to generate each node type for node_type, *strats in REGISTERED: # TODO: once everything else is working, come back here and use `infer` for # all arguments without an explicit strategy - inference is more "interesting" # than just using the default argument... in the proverbial sense. # Mostly this will consist of ensuring that parens remain balanced. args = [name for name in getfullargspec(node_type).args if name != "self"] kwargs = dict(zip(args, strats)) st.register_type_strategy(node_type, builds_filtering(node_type, **kwargs)) # We have special handling for `Try` nodes, because there are two options. # If a Try node has no `except` clause, it *must* have a `finally` clause and # *must not* have an `else` clause. With one or more except clauses, it may # have an else and/or a finally, or neither. # The .map() ensures that any bare-`except:` clauses are ordered last. 
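# Ordering matters because CPython rejects a bare `except:` clause that is
# followed by another handler (it must come last), so sorting handlers whose
# `type` is None to the end keeps the generated code compilable.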
st.register_type_strategy( libcst.Try, st.builds(libcst.Try, finalbody=st.from_type(libcst.Finally)) | st.builds( libcst.Try, body=infer, handlers=st.lists( st.deferred(lambda: st.from_type(libcst.ExceptHandler)), min_size=1, unique_by=lambda caught: caught.type, ).map(lambda xs: sorted(xs, key=lambda x: x.type is None)), orelse=infer, finalbody=infer, ), ) # Assert can either have a comma and message, or neither st.register_type_strategy( libcst.Assert, st.builds( libcst.Assert, test=infer, whitespace_after_assert=nonempty_whitespace, semicolon=infer, ) | st.builds( libcst.Assert, test=infer, whitespace_after_assert=nonempty_whitespace, comma=st.from_type(libcst.Comma), msg=st.from_type(libcst.BaseExpression), semicolon=infer, ), ) # either posargs, kwargs, or **args, but only one at a time st.register_type_strategy( libcst.Arg, st.builds( libcst.Arg, value=infer, comma=infer, star=infer, whitespace_after_star=infer, whitespace_after_arg=infer, ) | st.builds( libcst.Arg, value=infer, keyword=st.from_type(libcst.Name), equal=st.from_type(libcst.AssignEqual), comma=infer, star=st.just(""), whitespace_after_arg=infer, ), ) @st.composite def boolean_op_with_whitespace(draw): # type: ignore # for BooleanOperation, some expressions require whitespace before # and/or after e.g. a or b whereas (1)or(2) is OK. left = draw(st.from_type(libcst.BaseExpression)) right = draw(st.from_type(libcst.BaseExpression)) op = draw(st.from_type(libcst.BaseBooleanOp)) if op.whitespace_before.empty and not left._safe_to_use_with_word_operator( ExpressionPosition.LEFT ): # pragma: no cover op = op.with_changes(whitespace_before=libcst.SimpleWhitespace(" ")) if op.whitespace_after.empty and not right._safe_to_use_with_word_operator( ExpressionPosition.RIGHT ): # pragma: no cover op = op.with_changes(whitespace_after=libcst.SimpleWhitespace(" ")) return libcst.BooleanOperation(left, op, right) st.register_type_strategy(libcst.BooleanOperation, boolean_op_with_whitespace()) st.register_type_strategy( libcst.ComparisonTarget, builds_filtering(libcst.ComparisonTarget) ) # TODO: this works pretty well, but it's also a pretty poor trick for performance. # Instead of filtering, we should continue expanding the specific node # strategies as used above in order to generate valid things by construction. for t in vars(libcst).values(): if ( isinstance(t, type) and not isabstract(t) and issubclass(t, libcst.CSTNode) and t not in _global_type_lookup ): st.register_type_strategy(t, builds_filtering(t)) def record_targets(code: str) -> str: # target larger inputs - the Hypothesis engine will do a multi-objective # hill-climbing search using these scores to generate 'better' examples. nodes = list(ast.walk(ast.parse(code))) uniq_nodes = {type(n) for n in nodes} instructions = list(dis.Bytecode(compile(code, "", "exec"))) for value, label in [ (len(instructions), "(hypothesmith from_node) instructions in bytecode"), (len(nodes), "(hypothesmith from_node) total number of ast nodes"), (len(uniq_nodes), "(hypothesmith from_node) number of unique ast node types"), ]: target(float(value), label=label) return code def compilable(code: str, mode: str = "exec") -> bool: # This is used as a filter on `from_node()`, but note that LibCST aspires to # disallow construction of a CST node which is converted to invalid code. 
# (that is, if the resulting code would be invalid, raise an error instead) # See also https://github.com/Instagram/LibCST/issues/287 try: compile(code, "", mode) return True except (SyntaxError, ValueError): return False def from_node( node: Type[libcst.CSTNode] = libcst.Module, *, auto_target: bool = True ) -> st.SearchStrategy[str]: """Generate syntactically-valid Python source code for a LibCST node type. You can pass any subtype of `libcst.CSTNode`. Alternatively, you can use Hypothesis' built-in `from_type(node_type).map(lambda n: libcst.Module([n]).code`, after Hypothesmith has registered the required strategies. However, this does not include automatic targeting and limitations of LibCST may lead to invalid code being generated. """ assert issubclass(node, libcst.CSTNode) code = st.from_type(node).map(lambda n: libcst.Module([n]).code).filter(compilable) return code.map(record_targets) if auto_target else code ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/src/hypothesmith/py.typed0000644000175100001770000000000000000000000022051 0ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/src/hypothesmith/python.lark0000644000175100001770000002470300000000000022566 0ustar00runnerdocker00000000000000// Python 3 grammar for Lark // This grammar should parse all python 3.x code successfully. // Adapted from: https://docs.python.org/3/reference/grammar.html // Start symbols for the grammar: // single_input is a single interactive statement; // file_input is a module or sequence of commands read from an input file; // eval_input is the input for the eval() functions. // NB: compound_stmt in single_input is followed by extra NEWLINE! // single_input: _NEWLINE | simple_stmt | compound_stmt _NEWLINE file_input: (_NEWLINE | stmt)* eval_input: testlist _NEWLINE* decorator: "@" dotted_name [ "(" [arguments] ")" ] _NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef | async_funcdef) async_funcdef: "async" funcdef funcdef: "def" name "(" [parameters] ")" ["->" test] ":" suite parameters: paramvalue ("," paramvalue)* ["," SLASH ("," paramvalue)*] ["," [starparams | kwparams]] | starparams | kwparams SLASH: "/" // Otherwise the it will completely disappear and it will be undisguisable in the result starparams: (starparam | starguard) poststarparams starparam: "*" typedparam starguard: "*" poststarparams: ("," paramvalue)* ["," kwparams] kwparams: "**" typedparam ","? ?paramvalue: typedparam ("=" test)? ?typedparam: name (":" test)? lambdef: "lambda" [lambda_params] ":" test lambdef_nocond: "lambda" [lambda_params] ":" test_nocond lambda_params: lambda_paramvalue ("," lambda_paramvalue)* ["," [lambda_starparams | lambda_kwparams]] | lambda_starparams | lambda_kwparams ?lambda_paramvalue: name ("=" test)? lambda_starparams: "*" [name] ("," lambda_paramvalue)* ["," [lambda_kwparams]] lambda_kwparams: "**" name ","? 
?stmt: simple_stmt | compound_stmt ?simple_stmt: small_stmt (";" small_stmt)* [";"] _NEWLINE ?small_stmt: (expr_stmt | assign_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr assign_stmt: annassign | augassign | assign annassign: testlist_star_expr ":" test ["=" test] assign: testlist_star_expr ("=" (yield_expr|testlist_star_expr))+ augassign: testlist_star_expr augassign_op (yield_expr|testlist) !augassign_op: "+=" | "-=" | "*=" | "@=" | "/=" | "%=" | "&=" | "|=" | "^=" | "<<=" | ">>=" | "**=" | "//=" ?testlist_star_expr: test_or_star_expr | test_or_star_expr ("," test_or_star_expr)+ ","? -> tuple | test_or_star_expr "," -> tuple // For normal and annotated assignments, additional restrictions enforced by the interpreter del_stmt: "del" exprlist pass_stmt: "pass" ?flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: "break" continue_stmt: "continue" return_stmt: "return" [testlist] yield_stmt: yield_expr raise_stmt: "raise" [test ["from" test]] import_stmt: import_name | import_from import_name: "import" dotted_as_names // note below: the ("." | "...") is necessary because "..." is tokenized as ELLIPSIS import_from: "from" (dots? dotted_name | dots) "import" ("*" | "(" import_as_names ")" | import_as_names) !dots: "."+ import_as_name: name ["as" name] dotted_as_name: dotted_name ["as" name] import_as_names: import_as_name ("," import_as_name)* [","] dotted_as_names: dotted_as_name ("," dotted_as_name)* dotted_name: name ("." name)* global_stmt: "global" name ("," name)* nonlocal_stmt: "nonlocal" name ("," name)* assert_stmt: "assert" test ["," test] ?compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | match_stmt | with_stmt | funcdef | classdef | decorated | async_stmt async_stmt: "async" (funcdef | with_stmt | for_stmt) if_stmt: "if" test ":" suite elifs ["else" ":" suite] elifs: elif_* elif_: "elif" test ":" suite while_stmt: "while" test ":" suite ["else" ":" suite] for_stmt: "for" exprlist "in" testlist ":" suite ["else" ":" suite] try_stmt: "try" ":" suite except_clauses ["else" ":" suite] [finally] | "try" ":" suite finally -> try_finally finally: "finally" ":" suite except_clauses: except_clause+ except_clause: "except" [test ["as" name]] ":" suite // NB compile.c makes sure that the default except clause is last with_stmt: "with" with_items ":" suite with_items: with_item ("," with_item)* with_item: test ["as" name] match_stmt: "match" test ":" _NEWLINE _INDENT case+ _DEDENT case: "case" pattern ["if" test] ":" suite ?pattern: sequence_item_pattern "," _sequence_pattern -> sequence_pattern | as_pattern ?as_pattern: or_pattern ("as" NAME)? ?or_pattern: closed_pattern ("|" closed_pattern)* ?closed_pattern: literal_pattern | NAME -> capture_pattern | "_" -> any_pattern | attr_pattern | "(" as_pattern ")" | "[" _sequence_pattern "]" -> sequence_pattern | "(" (sequence_item_pattern "," _sequence_pattern)? ")" -> sequence_pattern | "{" (mapping_item_pattern ("," mapping_item_pattern)* ","?)?"}" -> mapping_pattern | "{" (mapping_item_pattern ("," mapping_item_pattern)* ",")? "**" NAME ","? "}" -> mapping_star_pattern | class_pattern literal_pattern: inner_literal_pattern ?inner_literal_pattern: "None" -> const_none | "True" -> const_true | "False" -> const_false | STRING -> string | number attr_pattern: NAME ("." NAME)+ -> value name_or_attr_pattern: NAME ("." 
NAME)* -> value mapping_item_pattern: (literal_pattern|attr_pattern) ":" as_pattern _sequence_pattern: (sequence_item_pattern ("," sequence_item_pattern)* ","?)? ?sequence_item_pattern: as_pattern | "*" NAME -> star_pattern class_pattern: name_or_attr_pattern "(" [arguments_pattern ","?] ")" arguments_pattern: pos_arg_pattern ["," keyws_arg_pattern] | keyws_arg_pattern -> no_pos_arguments pos_arg_pattern: as_pattern ("," as_pattern)* keyws_arg_pattern: keyw_arg_pattern ("," keyw_arg_pattern)* keyw_arg_pattern: NAME "=" as_pattern suite: simple_stmt | _NEWLINE _INDENT stmt+ _DEDENT ?test: or_test ("if" or_test "else" test)? | lambdef | assign_expr assign_expr: name ":=" test ?test_nocond: or_test | lambdef_nocond ?or_test: and_test ("or" and_test)* ?and_test: not_test_ ("and" not_test_)* ?not_test_: "not" not_test_ -> not_test | comparison ?comparison: expr (comp_op expr)* star_expr: "*" expr ?expr: or_expr ?or_expr: xor_expr ("|" xor_expr)* ?xor_expr: and_expr ("^" and_expr)* ?and_expr: shift_expr ("&" shift_expr)* ?shift_expr: arith_expr (_shift_op arith_expr)* ?arith_expr: term (_add_op term)* ?term: factor (_mul_op factor)* ?factor: _unary_op factor | power !_unary_op: "+"|"-"|"~" !_add_op: "+"|"-" !_shift_op: "<<"|">>" !_mul_op: "*"|"@"|"/"|"%"|"//" // <> isn't actually a valid comparison operator in Python. It's here for the // sake of a __future__ import described in PEP 401 (which really works :-) !comp_op: "<"|">"|"=="|">="|"<="|"<>"|"!="|"in"|"not" "in"|"is"|"is" "not" ?power: await_expr ("**" factor)? ?await_expr: AWAIT? atom_expr AWAIT: "await" ?atom_expr: atom_expr "(" [arguments] ")" -> funccall | atom_expr "[" subscriptlist "]" -> getitem | atom_expr "." name -> getattr | atom ?atom: "(" yield_expr ")" | "(" _tuple_inner? ")" -> tuple | "(" comprehension{test_or_star_expr} ")" -> tuple_comprehension | "[" _testlist_comp? "]" -> list | "[" comprehension{test_or_star_expr} "]" -> list_comprehension | "{" _dict_exprlist? "}" -> dict | "{" comprehension{key_value} "}" -> dict_comprehension | "{" _set_exprlist "}" -> set | "{" comprehension{test} "}" -> set_comprehension | name -> var | number | string_concat | "(" test ")" | "..." -> ellipsis | "None" -> const_none | "True" -> const_true | "False" -> const_false ?string_concat: string+ _testlist_comp: test | _tuple_inner _tuple_inner: test_or_star_expr (("," test_or_star_expr)+ [","] | ",") ?test_or_star_expr: test | star_expr ?subscriptlist: subscript | subscript (("," subscript)+ [","] | ",") -> subscript_tuple ?subscript: test | ([test] ":" [test] [sliceop]) -> slice sliceop: ":" [test] ?exprlist: (expr|star_expr) | (expr|star_expr) (("," (expr|star_expr))+ [","]|",") ?testlist: test | testlist_tuple testlist_tuple: test (("," test)+ [","] | ",") _dict_exprlist: (key_value | "**" expr) ("," (key_value | "**" expr))* [","] key_value: test ":" test _set_exprlist: test_or_star_expr ("," test_or_star_expr)* [","] classdef: "class" name ["(" [arguments] ")"] ":" suite arguments: argvalue ("," argvalue)* ("," [ starargs | kwargs])? | starargs | kwargs | comprehension{test} starargs: stararg ("," stararg)* ("," argvalue)* ["," kwargs] stararg: "*" test kwargs: "**" test ("," argvalue)* ?argvalue: test ("=" test)? 
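// comprehension{...} is a Lark rule template, instantiated above as
// comprehension{test}, comprehension{test_or_star_expr}, and comprehension{key_value}.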
comprehension{comp_result}: comp_result comp_fors [comp_if] comp_fors: comp_for+ comp_for: [ASYNC] "for" exprlist "in" or_test ASYNC: "async" ?comp_if: "if" test_nocond // not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: name yield_expr: "yield" [testlist] | "yield" "from" test -> yield_from number: DEC_NUMBER | HEX_NUMBER | BIN_NUMBER | OCT_NUMBER | FLOAT_NUMBER | IMAG_NUMBER string: STRING | LONG_STRING // Other terminals _NEWLINE: ( /\r?\n[\t ]*/ | COMMENT )+ %ignore /[\t \f]+/ // WS %ignore /\\[\t \f]*\r?\n/ // LINE_CONT %ignore COMMENT %declare _INDENT _DEDENT // Python terminals !name: NAME | "match" | "case" NAME: /[^\W\d]\w*/ COMMENT: /#[^\n]*/ STRING: /([ubf]?r?|r[ubf])("(?!"").*?(?", "exec"))) targets = data.target_observations for value, label in [ (instructions, "(hypothesmith) instructions in bytecode"), (nodes, "(hypothesmith) total number of ast nodes"), (uniq_nodes, "(hypothesmith) number of unique ast node types"), ]: targets[label] = max(float(len(value)), targets.get(label, 0.0)) return result def draw_symbol(self, data, symbol, draw_state): # type: ignore count = len(draw_state) super().draw_symbol(data, symbol, draw_state) if symbol.name in COMPILE_MODES: try: compile( source="".join(draw_state[count:]), filename="", mode=COMPILE_MODES[symbol.name], ) except SyntaxError: # Python's grammar doesn't actually fully describe the behaviour of the # CPython parser and AST-post-processor, so we just filter out errors. assume(False) except Exception as err: # pragma: no cover # Attempting to compile almost-valid strings has triggered a wide range # of bizzare errors in CPython, especially with the new PEG parser, # and so we maintain this extra clause to ensure that we get a decent # error message out of it. if isinstance(err, SystemError) and ( sys.version_info[:3] == (3, 9, 0) or sys.version_info[:3] >= (3, 9, 8) and str(err) == "Negative size passed to PyUnicode_New" ): # We've triggered https://bugs.python.org/issue42218 - it's been # fixed upstream, so we'll treat it as if it were a SyntaxError. # Or the new https://bugs.python.org/issue45738 which makes me # wish CPython would start running proptests in CI already. assume(False) source_code = ascii("".join(draw_state[count:])) raise type(err)( f"compile({source_code}, '', " f"{COMPILE_MODES[symbol.name]!r}) " f"raised {type(err).__name__}: {str(err)}" ) from err def gen_ignore(self, data, draw_state): # type: ignore # Set a consistent 1/4 chance of generating any ignored tokens (comments, # whitespace, line-continuations) as part of this draw. # See https://github.com/HypothesisWorks/hypothesis/issues/2643 for plans # to do more sophisticated swarm testing for grammars, upstream. if data.draw( st.shared( st.sampled_from([False, True, False, False]), key="hypothesmith_gen_ignored", ) ): super().gen_ignore(data, draw_state) def from_grammar( start: str = "file_input", *, auto_target: bool = True ) -> st.SearchStrategy[str]: """Generate syntactically-valid Python source code based on the grammar. Valid values for ``start`` are ``"single_input"``, ``"file_input"``, or ``"eval_input"``; respectively a single interactive statement, a module or sequence of commands read from a file, and input for the eval() function. If ``auto_target`` is True, this strategy uses ``hypothesis.target()`` internally to drive towards larger and more complex examples. We recommend leaving this enabled, as the grammar is quite complex and only simple examples tend to be generated otherwise. .. 
warning:: DO NOT EXECUTE CODE GENERATED BY THIS STRATEGY. It could do literally anything that running Python code is able to do, including changing, deleting, or uploading important data. Arbitrary code can be useful, but "arbitrary code execution" can be very, very bad. """ assert start in {"single_input", "file_input", "eval_input"} assert isinstance(auto_target, bool) grammar = Lark(LARK_GRAMMAR, parser="lalr", postlex=PythonIndenter(), start=start) return GrammarStrategy(grammar, start, auto_target) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5243025 hypothesmith-0.3.3/src/hypothesmith.egg-info/0000755000175100001770000000000000000000000022056 5ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114846.0 hypothesmith-0.3.3/src/hypothesmith.egg-info/PKG-INFO0000644000175100001770000001040600000000000023154 0ustar00runnerdocker00000000000000Metadata-Version: 2.1 Name: hypothesmith Version: 0.3.3 Summary: Hypothesis strategies for generating Python programs, something like CSmith Home-page: https://github.com/Zac-HD/hypothesmith Author: Zac Hatfield-Dodds Author-email: zac@hypothesis.works License: MPL 2.0 Project-URL: Funding, https://github.com/sponsors/Zac-HD Keywords: python testing fuzzing property-based-testing Classifier: Development Status :: 4 - Beta Classifier: Framework :: Hypothesis Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0) Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Topic :: Software Development :: Testing Requires-Python: >=3.8 Description-Content-Type: text/markdown License-File: LICENSE Requires-Dist: hypothesis[lark]>=6.93.0 Requires-Dist: libcst>=1.0.1 # hypothesmith Hypothesis strategies for generating Python programs, something like CSmith. This is definitely pre-alpha, but if you want to play with it feel free! You can even keep the shiny pieces when - not if - it breaks. Get it today with [`pip install hypothesmith`](https://pypi.org/project/hypothesmith/), or by cloning [the GitHub repo](https://github.com/Zac-HD/hypothesmith). You can run the tests, such as they are, with `tox` on Python 3.8 or later. Use `tox -va` to see what environments are available. ## Usage This package provides two Hypothesis strategies for generating Python source code. The generated code will always be syntactically valid, and is useful for testing parsers, linters, auto-formatters, and other tools that operate on source code. > DO NOT EXECUTE CODE GENERATED BY THESE STRATEGIES. > > It could do literally anything that running Python code is able to do, > including changing, deleting, or uploading important data. Arbitrary > code can be useful, but "arbitrary code execution" can be very, very bad. #### `hypothesmith.from_grammar(start="file_input", *, auto_target=True)` Generates syntactically-valid Python source code based on the grammar. Valid values for ``start`` are ``"single_input"``, ``"file_input"``, or ``"eval_input"``; respectively a single interactive statement, a module or sequence of commands read from a file, and input for the eval() function. 
If ``auto_target`` is ``True``, this strategy uses ``hypothesis.target()`` internally to drive towards larger and more complex examples. We recommend leaving this enabled, as the grammar is quite complex and only simple examples tend to be generated otherwise. #### `hypothesmith.from_node(node=libcst.Module, *, auto_target=True)` Generates syntactically-valid Python source code based on the node types defined by the [`LibCST`](https://libcst.readthedocs.io/en/latest/) project. You can pass any subtype of `libcst.CSTNode`. Alternatively, you can use Hypothesis' built-in `from_type(node_type).map(lambda n: libcst.Module([n]).code)`, after Hypothesmith has registered the required strategies. However, this approach does not include automatic targeting, and limitations of LibCST may lead to invalid code being generated. A short sketch of both strategies in use is included at the end of this README. ## Notable bugs found with Hypothesmith - [BPO-40661, a segfault in the new parser](https://bugs.python.org/issue40661), was given maximum priority and blocked the planned release of CPython 3.9 beta1. - [BPO-38953](https://bugs.python.org/issue38953) `tokenize` -> `untokenize` roundtrip bugs. - [BPO-42218](https://bugs.python.org/issue42218) mishandled error case in new PEG parser. - [`lib2to3` errors on \r in comment](https://github.com/psf/black/issues/970) - [Black fails on files ending in a backslash](https://github.com/psf/black/issues/1012) - [At least three round-trip bugs in LibCST](https://github.com/Instagram/LibCST#acknowledgements) ([search commits for "hypothesis"](https://github.com/Instagram/LibCST/search?q=Fix+hypothesis&type=commits)) - [Invalid code generated by LibCST](https://github.com/Instagram/LibCST/issues/287) ### Changelog Patch notes [can be found in `CHANGELOG.md`](https://github.com/Zac-HD/hypothesmith/blob/master/CHANGELOG.md). 
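### Example usage

A minimal sketch of how these strategies might be wired into a property-based test suite, assuming `hypothesis`, `hypothesmith`, and a test runner such as `pytest` are installed; the test names and settings below are illustrative rather than part of this package. The generated source is only parsed and compiled, never executed.

```python
import ast

from hypothesis import given, settings

import hypothesmith


@given(source_code=hypothesmith.from_grammar("file_input"))
@settings(deadline=None)
def test_grammar_output_parses(source_code):
    # Parse - but never execute - the generated module.
    ast.parse(source_code)


@given(source_code=hypothesmith.from_node())
@settings(deadline=None)
def test_node_output_compiles(source_code):
    # LibCST-based generation should also produce compilable code.
    compile(source_code, "<string>", "exec")
```

Setting `deadline=None` is a convenience for slow generation, mirroring the relaxed settings profile used by this package's own test suite.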
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114846.0 hypothesmith-0.3.3/src/hypothesmith.egg-info/SOURCES.txt0000644000175100001770000000113600000000000023743 0ustar00runnerdocker00000000000000CHANGELOG.md LICENSE MANIFEST.in README.md setup.py tox.ini deps/README.md deps/check.in deps/check.txt deps/test.in deps/test.txt src/hypothesmith/__init__.py src/hypothesmith/cst.py src/hypothesmith/py.typed src/hypothesmith/python.lark src/hypothesmith/syntactic.py src/hypothesmith.egg-info/PKG-INFO src/hypothesmith.egg-info/SOURCES.txt src/hypothesmith.egg-info/dependency_links.txt src/hypothesmith.egg-info/not-zip-safe src/hypothesmith.egg-info/requires.txt src/hypothesmith.egg-info/top_level.txt tests/__init__.py tests/conftest.py tests/test_cst.py tests/test_syntactic.py tests/test_version.py././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114846.0 hypothesmith-0.3.3/src/hypothesmith.egg-info/dependency_links.txt0000644000175100001770000000000100000000000026124 0ustar00runnerdocker00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114846.0 hypothesmith-0.3.3/src/hypothesmith.egg-info/not-zip-safe0000644000175100001770000000000100000000000024304 0ustar00runnerdocker00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114846.0 hypothesmith-0.3.3/src/hypothesmith.egg-info/requires.txt0000644000175100001770000000004700000000000024457 0ustar00runnerdocker00000000000000hypothesis[lark]>=6.93.0 libcst>=1.0.1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114846.0 hypothesmith-0.3.3/src/hypothesmith.egg-info/top_level.txt0000644000175100001770000000001500000000000024604 0ustar00runnerdocker00000000000000hypothesmith ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1708114846.5243025 hypothesmith-0.3.3/tests/0000755000175100001770000000000000000000000016212 5ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/tests/__init__.py0000644000175100001770000000000000000000000020311 0ustar00runnerdocker00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/tests/conftest.py0000644000175100001770000000032700000000000020413 0ustar00runnerdocker00000000000000from hypothesis import HealthCheck, settings settings.register_profile( "slow", deadline=None, suppress_health_check=[HealthCheck.too_slow, HealthCheck.filter_too_much], ) settings.load_profile("slow") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/tests/test_cst.py0000644000175100001770000000522000000000000020413 0ustar00runnerdocker00000000000000"""Tests for the hypothesmith.cst module.""" import ast import sys from inspect import isabstract from operator import attrgetter import black import libcst import parso import pytest from hypothesis import example, given, note, strategies as st import hypothesmith from hypothesmith.cst import compilable NODE_TYPES = frozenset( v for v in vars(libcst).values() if isinstance(v, type) and issubclass(v, libcst.CSTNode) and not isabstract(v) ) @pytest.mark.parametrize("node", sorted(NODE_TYPES, key=attrgetter("__name__"))) 
@given(data=st.data()) def test_source_code_from_libcst_node_type(node, data): try: val = data.draw(st.from_type(node)) except NameError: pytest.skip("NameError, probably a forward reference") except TypeError as e: if str(e).startswith("super"): pytest.skip("something weird here, back later") if str(e).startswith("Can't instantiate"): pytest.skip("abstract classes somehow leaking into builds()") raise note(val) if not isinstance(val, libcst.Module): val = libcst.Module([val]) try: code = val.code except libcst._nodes.base.CSTCodegenError: pytest.skip("codegen not supported yet, e.g. Annotation") note(code) @pytest.mark.skipif(not hasattr(ast, "unparse"), reason="Can't test before available") @given(source_code=hypothesmith.from_node()) def test_ast_unparse_from_nodes(source_code): first = ast.parse(source_code) unparsed = ast.unparse(first) second = ast.parse(unparsed) assert ast.dump(first) == ast.dump(second) @pytest.mark.xfail @example("A\u2592", black.Mode()) @given( source_code=hypothesmith.from_node(), mode=st.builds( black.Mode, line_length=st.just(88) | st.integers(0, 200), string_normalization=st.booleans(), is_pyi=st.booleans(), ), ) def test_black_autoformatter_from_nodes(source_code, mode): try: result = black.format_file_contents(source_code, fast=False, mode=mode) except black.NothingChanged: pass else: with pytest.raises(black.NothingChanged): black.format_file_contents(result, fast=False, mode=mode) @given(source_code=hypothesmith.from_node()) def test_from_node_always_compilable(source_code): compile(source_code, "", "exec") @example("\x00") @given(st.text()) def test_compilable_never_raises(s): compilable(s) @pytest.mark.xfail(sys.version_info >= (3, 13), reason="parso does not support 3.13") @given(source_code=hypothesmith.from_node()) def test_parso_from_node(source_code): result = parso.parse(source_code).get_code() assert source_code == result ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/tests/test_syntactic.py0000644000175100001770000000717300000000000021634 0ustar00runnerdocker00000000000000"""Tests for the hypothesmith.syntactic module.""" import ast import io import sys import tokenize import black import blib2to3 import parso import pytest from hypothesis import example, given, reject, strategies as st import hypothesmith def fixup(s): """Avoid known issues with tokenize() by editing the string.""" return "".join(x for x in s if x.isprintable()).strip().strip("\\").strip() + "\n" @pytest.mark.xfail @example("#") @example("\n\\\n") @example("#\n\x0cpass#\n") @given(source_code=hypothesmith.from_grammar().map(fixup).filter(str.strip)) def test_tokenize_round_trip_bytes(source_code): try: source = source_code.encode("utf-8-sig") except UnicodeEncodeError: reject() tokens = list(tokenize.tokenize(io.BytesIO(source).readline)) outbytes = tokenize.untokenize(tokens) # may have changed whitespace from source output = list(tokenize.tokenize(io.BytesIO(outbytes).readline)) assert [(t.type, t.string) for t in tokens] == [(t.type, t.string) for t in output] # It would be nice if the round-tripped string stabilised. It doesn't. 
# assert outbytes == tokenize.untokenize(output) @pytest.mark.xfail @example("#") @example("\n\\\n") @example("#\n\x0cpass#\n") @given(source_code=hypothesmith.from_grammar().map(fixup).filter(str.strip)) def test_tokenize_round_trip_string(source_code): tokens = list(tokenize.generate_tokens(io.StringIO(source_code).readline)) outstring = tokenize.untokenize(tokens) # may have changed whitespace from source output = tokenize.generate_tokens(io.StringIO(outstring).readline) assert [(t.type, t.string) for t in tokens] == [(t.type, t.string) for t in output] # It would be nice if the round-tripped string stabilised. It doesn't. # assert outstring == tokenize.untokenize(output) @pytest.mark.skipif(not hasattr(ast, "unparse"), reason="Can't test before available") @given(source_code=hypothesmith.from_grammar()) def test_ast_unparse_from_grammar(source_code): first = ast.parse(source_code) unparsed = ast.unparse(first) second = ast.parse(unparsed) assert ast.dump(first) == ast.dump(second) @example("\\", black.Mode()) @example("A#\r#", black.Mode()) @given( source_code=hypothesmith.from_grammar(), mode=st.builds( black.Mode, line_length=st.just(88) | st.integers(0, 200), string_normalization=st.booleans(), is_pyi=st.booleans(), ), ) def test_black_autoformatter_from_grammar(source_code, mode): try: result = black.format_file_contents(source_code, fast=False, mode=mode) except black.NothingChanged: pass except blib2to3.pgen2.tokenize.TokenError: # Fails to tokenise e.g. "\\", though compile("\\", "", "exec") works. # See https://github.com/psf/black/issues/1012 reject() except black.InvalidInput: # e.g. "A#\r#", see https://github.com/psf/black/issues/970 reject() else: with pytest.raises(black.NothingChanged): black.format_file_contents(result, fast=False, mode=mode) @given(source_code=hypothesmith.from_grammar("eval_input")) def test_eval_input_generation(source_code): compile(source_code, filename="", mode="eval") @given(source_code=hypothesmith.from_grammar(auto_target=False)) def test_generation_without_targeting(source_code): compile(source_code, filename="", mode="exec") @pytest.mark.xfail(sys.version_info >= (3, 13), reason="parso does not support 3.13") @given(source_code=hypothesmith.from_grammar()) def test_parso_from_grammar(source_code): result = parso.parse(source_code).get_code() assert source_code == result ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/tests/test_version.py0000644000175100001770000000332700000000000021315 0ustar00runnerdocker00000000000000"""Tests for the hypothesmith package metadata.""" import re from datetime import date from functools import lru_cache from pathlib import Path from typing import NamedTuple import hypothesmith class Version(NamedTuple): major: int minor: int patch: int @classmethod def from_string(cls, string): return cls(*map(int, string.split("."))) @lru_cache def get_releases(): pattern = re.compile(r"^### (\d+\.\d+\.\d+) - (\d\d\d\d-\d\d-\d\d)$") with open(Path(__file__).parent.parent / "CHANGELOG.md") as f: return tuple( (Version.from_string(match.group(1)), match.group(2)) for match in map(pattern.match, f) if match is not None ) def test_last_release_against_changelog(): last_version, last_date = get_releases()[0] assert last_version == Version.from_string(hypothesmith.__version__) assert last_date <= date.today().isoformat() def test_changelog_is_ordered(): versions, dates = zip(*get_releases()) assert versions == tuple(sorted(versions, reverse=True)) 
assert dates == tuple(sorted(dates, reverse=True)) def test_version_increments_are_correct(): # We either increment the patch version by one, increment the minor version # and reset the patch, or increment major and reset both minor and patch. versions, _ = zip(*get_releases()) for prev, current in zip(versions[1:], versions): assert prev < current # remember that `versions` is newest-first assert current in ( prev._replace(patch=prev.patch + 1), prev._replace(minor=prev.minor + 1, patch=0), prev._replace(major=prev.major + 1, minor=0, patch=0), ), f"{current} does not follow {prev}" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1708114834.0 hypothesmith-0.3.3/tox.ini0000644000175100001770000000300000000000000016354 0ustar00runnerdocker00000000000000# The test environment and commands [tox] envlist = check, test skipsdist = True [testenv:check] description = Runs all formatting tools then static analysis (quick) deps = --no-deps --requirement deps/check.txt commands = shed flake8 ; mypy --config-file=tox.ini src/hypothesmith/ [testenv:{py38-, py39-, py310-, py311-, py312-, py313-,}test] description = Runs pytest with posargs - `tox -e test -- -v` == `pytest -v` deps = --no-deps --requirement deps/test.txt commands = pip install --no-deps . pytest {posargs:-n auto} # Run `tox -e deps` to update pinned dependencies [testenv:deps] base_python = 3.10 description = Updates pinned dependencies in the `deps/*.txt` files deps = pip-tools commands = pip-compile --quiet --rebuild --upgrade --output-file=deps/check.txt deps/check.in pip-compile --quiet --rebuild --upgrade --output-file=deps/test.txt deps/test.in setup.py # Settings for other tools [pytest] addopts = -Werror --tb=short --cov=hypothesmith --cov-branch --cov-report=term-missing:skip-covered --cov-fail-under=100 [flake8] ignore = D1,E501,W503,S101,S310 exclude = .*,__pycache__ [mypy] python_version = 3.8 platform = linux disallow_untyped_calls = True disallow_untyped_defs = True disallow_untyped_decorators = True follow_imports = silent ignore_missing_imports = True warn_no_return = True warn_return_any = True warn_unused_ignores = True warn_unused_configs = True warn_redundant_casts = True [mypy-tests.*] ignore_errors = True