coverage-7.4.4/.editorconfig
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
# This file is for unifying the coding style for different editors and IDEs.
# More information at http://EditorConfig.org
root = true
[*]
charset = utf-8
end_of_line = lf
indent_size = 4
indent_style = space
insert_final_newline = true
max_line_length = 80
trim_trailing_whitespace = true
[*.py]
max_line_length = 100
[*.pyi]
max_line_length = 100
[*.c]
max_line_length = 100
[*.h]
max_line_length = 100
[*.yml]
indent_size = 2
[*.rst]
max_line_length = 79
[*.tok]
trim_trailing_whitespace = false
[*_dos.tok]
end_of_line = crlf
[Makefile]
indent_style = tab
indent_size = 8
[*,cover]
trim_trailing_whitespace = false
[*.diff]
trim_trailing_whitespace = false
[.git/*]
trim_trailing_whitespace = false
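Two of the rules above, `insert_final_newline` and `trim_trailing_whitespace`, are easy to check by hand. As a sketch (this `check_text` helper is hypothetical, not part of coverage.py or EditorConfig tooling), a few lines of Python can flag both problems in a string:

```python
# Hypothetical mini-checker for two .editorconfig rules:
# insert_final_newline and trim_trailing_whitespace.
def check_text(text: str) -> list:
    problems = []
    # insert_final_newline = true: non-empty files must end with "\n".
    if text and not text.endswith("\n"):
        problems.append("missing final newline")
    # trim_trailing_whitespace = true: no line may end with spaces/tabs.
    for i, line in enumerate(text.splitlines(), start=1):
        if line != line.rstrip():
            problems.append(f"line {i}: trailing whitespace")
    return problems

print(check_text("x = 1 \ny = 2"))
# -> ['missing final newline', 'line 1: trailing whitespace']
```

Real editors apply these rules per-section, so a `.tok` file (where `trim_trailing_whitespace = false` above) would skip the second check.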
coverage-7.4.4/.git-blame-ignore-revs
# Commits to ignore when doing git-blame.
# 2023-01-05 style: use good style for annotated defaults parameters
78444f4c06df6a634fa67dd99ee7c07b6b633d9e
# 2023-01-06 style(perf): blacken lab/benchmark.py
bf6c12f5da54db7c5c0cc47cbf22c70f686e8236
# 2023-03-22 style: use double-quotes
16abd82b6e87753184e8308c4b2606ff3979f8d3
b7be64538aa480fce641349d3053e9a84862d571
# 2023-04-01 style: use double-quotes in JavaScript
b03ab92bae24c54f1d5a98baa3af6b9a18de4d36
# 2023-11-04 style: ruff format igor.py, setup.py, __main__.py
acb80450d7c033a6ea6e06eb2e74d3590c268435
# 2023-11-20 style: fr"" is better than rf"", for real
d8daa08b347fe6b7099c437b09d926eb999d0803
# 2023-12-02 style: check_coverage close parens should be on their own line
5d0b5d4464b84adb6389c8894c207a323edb2b2b
# 2024-02-27 style: fix COM812 Trailing comma missing
e4e238a9ed8f2ad2b9060247591b4c057c2953bf
# 2024-02-27 style: modernize type hints, a few more f-strings
401a63bf08bdfd780b662f64d2dfe3603f2584dd
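Git does not read this file automatically; each clone has to opt in. A minimal sketch of the setup (run inside a clone of the repository; the blamed path is just an illustrative example):

```shell
# One-time, per-clone setup: make every `git blame` skip the
# style-only commits listed in .git-blame-ignore-revs.
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Or pass the file explicitly for a single invocation:
git blame --ignore-revs-file .git-blame-ignore-revs coverage/control.py
```

With the config set, blame attributes lines to the last meaningful change rather than to the reformatting commits above.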
coverage-7.4.4/.github/CODE_OF_CONDUCT.md
# Treat each other well

Everyone participating in the coverage.py project, and in particular in the
issue tracker, pull requests, and social media activity, is expected to treat
other people with respect and to follow the guidelines articulated in the
[Python Community Code of Conduct][psf_coc].

[psf_coc]: https://www.python.org/psf/codeofconduct/
coverage-7.4.4/.github/FUNDING.yml
github: nedbat
tidelift: pypi/coverage
coverage-7.4.4/.github/ISSUE_TEMPLATE/bug_report.md
---
name: Bug report
about: Report a problem with coverage.py
title: ''
labels: bug, needs triage
assignees: ''
---
**Describe the bug**
A clear and concise description of the bug.

**To Reproduce**
How can we reproduce the problem? Please *be specific*. Don't link to a failing CI job. Answer the questions below:

1. What version of Python are you using?
1. What version of coverage.py shows the problem? The output of `coverage debug sys` is helpful.
1. What versions of what packages do you have installed? The output of `pip freeze` is helpful.
1. What code shows the problem? Give us a specific commit of a specific repo that we can check out. If you've already worked around the problem, please provide a commit before that fix.
1. What commands should we run to reproduce the problem? *Be specific*. Include everything, even `git clone`, `pip install`, and so on. Explain like we're five!

**Expected behavior**
A clear and concise description of what you expected to happen.

**Additional context**
Add any other context about the problem here.
coverage-7.4.4/.github/ISSUE_TEMPLATE/config.yml
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
# https://help.github.com/en/github/building-a-strong-community/configuring-issue-templates-for-your-repository
blank_issues_enabled: false
contact_links:
  - name: Frequently Asked Questions
    url: https://coverage.readthedocs.io/en/latest/faq.html
    about: Some common problems are described here.
  - name: Tidelift security contact
    url: https://tidelift.com/security
    about: Please report security vulnerabilities here.
coverage-7.4.4/.github/ISSUE_TEMPLATE/feature_request.md
---
name: Feature request
about: Suggest an idea for coverage.py
title: ''
labels: enhancement, needs triage
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context about the feature request here.
coverage-7.4.4/.github/ISSUE_TEMPLATE/support.md
---
name: Support request
about: Ask for help using coverage.py
title: ''
labels: support, needs triage
assignees: ''
---
**Have you asked elsewhere?**
There are other good places to ask for help using coverage.py. These places
let other people suggest solutions, and are more likely places for people to
find your question:

- [Stack Overflow](https://stackoverflow.com/questions/tagged/coverage.py)
- [discuss.python.org](https://discuss.python.org/search?q=coverage.py)

**Describe your situation**
Wherever you ask your question, be sure to explain:

- What you did
- What happened
- How that was different than what you wanted to happen
- What kind of help you need
coverage-7.4.4/.github/SECURITY.md
# Security Disclosures

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security).
Tidelift will coordinate the fix and disclosure with maintainers.
coverage-7.4.4/.github/dependabot.yml
# From:
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/keeping-your-actions-up-to-date-with-dependabot
# Set update schedule for GitHub Actions
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      # Check for updates to GitHub Actions every weekday
      interval: "daily"
coverage-7.4.4/.github/workflows/codeql-analysis.yml
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  push:
    branches:
      - master
  pull_request:
    # The branches below must be a subset of the branches above
    branches:
      - master
  schedule:
    - cron: '30 20 * * 6'

permissions:
  contents: read

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language:
          - python
          - javascript
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://git.io/codeql-language-support

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v3

      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl
      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      #    and modify them (or add more) to build your code if your project
      #    uses a compiled language
      #- run: |
      #    make bootstrap
      #    make release

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
coverage-7.4.4/.github/workflows/coverage.yml
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
name: "Coverage"
on:
  # As currently structured, this adds too many jobs (checks?), so don't run it
  # on pull requests yet.
  push:
    branches:
      - master
      - "**/*metacov*"
  workflow_dispatch:

defaults:
  run:
    shell: bash

env:
  PIP_DISABLE_PIP_VERSION_CHECK: 1
  FORCE_COLOR: 1    # Get colored pytest output

permissions:
  contents: read

concurrency:
  group: "${{ github.workflow }}-${{ github.ref }}"
  cancel-in-progress: true

jobs:
  coverage:
    name: "${{ matrix.python-version }} on ${{ matrix.os }}"
    runs-on: "${{ matrix.os }}-latest"
    env:
      MATRIX_ID: "${{ matrix.python-version }}.${{ matrix.os }}"

    strategy:
      matrix:
        os:
          - ubuntu
          - macos
          - windows
        python-version:
          # When changing this list, be sure to check the [gh] list in
          # tox.ini so that tox will run properly. PYVERSIONS
          # Available versions:
          # https://github.com/actions/python-versions/blob/main/versions-manifest.json
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
          - "3.12"
          - "3.13"
          - "pypy-3.8"
          - "pypy-3.9"
          - "pypy-3.10"
        exclude:
          # Mac PyPy always takes the longest, and doesn't add anything.
          - os: macos
            python-version: "pypy-3.8"
          - os: macos
            python-version: "pypy-3.9"
          - os: macos
            python-version: "pypy-3.10"
          # Windows pypy 3.9 and 3.10 get stuck with PyPy 7.3.15. I hope to
          # unstick them, but I don't want that to block all other progress, so
          # skip them for now.
          - os: windows
            python-version: "pypy-3.9"
          - os: windows
            python-version: "pypy-3.10"
          # Skip 3.13.0a4 and pin to 3.13.0a3 for Windows due to build error.
          # Undo when 3.13.0a5 is released.
          - os: windows
            python-version: "3.13"
        include:
          - os: windows
            python-version: "3.13.0-alpha.3"
      # If one job fails, stop the whole thing.
      fail-fast: true

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Set up Python"
        uses: "actions/setup-python@v5"
        with:
          python-version: "${{ matrix.python-version }}"
          allow-prereleases: true
          # At a certain point, installing dependencies failed on pypy 3.9 and
          # 3.10 on Windows. Commenting out the cache here fixed it. Someday
          # try using the cache again.
          #cache: pip
          #cache-dependency-path: 'requirements/*.pip'

      - name: "Show environment"
        run: |
          set -xe
          python -VV
          python -m site
          env

      - name: "Install dependencies"
        run: |
          echo matrix id: $MATRIX_ID
          set -xe
          python -VV
          python -m site
          python -m pip install -r requirements/tox.pip

      - name: "Run tox coverage for ${{ matrix.python-version }}"
        env:
          COVERAGE_COVERAGE: "yes"
          COVERAGE_CONTEXT: "${{ matrix.python-version }}.${{ matrix.os }}"
        run: |
          set -xe
          python -m tox

      - name: "Combine data"
        env:
          COVERAGE_RCFILE: "metacov.ini"
        run: |
          python -m coverage combine
          mv .metacov .metacov.$MATRIX_ID

      - name: "Upload coverage data"
        uses: actions/upload-artifact@v4
        with:
          name: metacov-${{ env.MATRIX_ID }}
          path: .metacov.*

  combine:
    name: "Combine coverage data"
    needs: coverage
    runs-on: ubuntu-latest
    outputs:
      total: ${{ steps.total.outputs.total }}
    env:
      COVERAGE_RCFILE: "metacov.ini"

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Set up Python"
        uses: "actions/setup-python@v5"
        with:
          python-version: "3.8"   # Minimum of PYVERSIONS
          # At a certain point, installing dependencies failed on pypy 3.9 and
          # 3.10 on Windows. Commenting out the cache here fixed it. Someday
          # try using the cache again.
          #cache: pip
          #cache-dependency-path: 'requirements/*.pip'

      - name: "Show environment"
        run: |
          set -xe
          python -VV
          python -m site
          env

      - name: "Install dependencies"
        run: |
          set -xe
          python -m pip install -e .
          python igor.py zip_mods

      - name: "Download coverage data"
        uses: actions/download-artifact@v4
        with:
          pattern: metacov-*
          merge-multiple: true

      - name: "Combine and report"
        id: combine
        env:
          COVERAGE_CONTEXT: "yes"
        run: |
          set -xe
          python igor.py combine_html

      - name: "Upload HTML report"
        uses: actions/upload-artifact@v4
        with:
          name: html_report
          path: htmlcov

      - name: "Get total"
        id: total
        run: |
          echo "total=$(python -m coverage report --format=total)" >> $GITHUB_OUTPUT

  publish:
    name: "Publish coverage report"
    needs: combine
    runs-on: ubuntu-latest

    steps:
      - name: "Show environment"
        run: |
          set -xe
          env

      - name: "Compute info for later steps"
        id: info
        run: |
          set -xe
          env
          export SHA10=$(echo ${{ github.sha }} | cut -c 1-10)
          export SLUG=$(date +'%Y%m%d')_$SHA10
          export REPORT_DIR=reports/$SLUG/htmlcov
          export REF="${{ github.ref }}"
          echo "total=${{ needs.combine.outputs.total }}" >> $GITHUB_ENV
          echo "sha10=$SHA10" >> $GITHUB_ENV
          echo "slug=$SLUG" >> $GITHUB_ENV
          echo "report_dir=$REPORT_DIR" >> $GITHUB_ENV
          echo "url=https://htmlpreview.github.io/?https://github.com/nedbat/coverage-reports/blob/main/reports/$SLUG/htmlcov/index.html" >> $GITHUB_ENV
          echo "branch=${REF#refs/heads/}" >> $GITHUB_ENV

      - name: "Summarize"
        run: |
          echo '### Total coverage: ${{ env.total }}%' >> $GITHUB_STEP_SUMMARY

      - name: "Checkout reports repo"
        if: ${{ github.ref == 'refs/heads/master' }}
        run: |
          set -xe
          git clone --depth=1 --no-checkout https://${{ secrets.COVERAGE_REPORTS_TOKEN }}@github.com/nedbat/coverage-reports reports_repo
          cd reports_repo
          git sparse-checkout init --cone
          git sparse-checkout set --skip-checks '/*' '!/reports'
          git config user.name nedbat
          git config user.email ned@nedbatchelder.com
          git checkout main

      - name: "Download coverage HTML report"
        if: ${{ github.ref == 'refs/heads/master' }}
        uses: actions/download-artifact@v4
        with:
          name: html_report
          path: reports_repo/${{ env.report_dir }}

      - name: "Push to report repo"
        if: |
          github.repository_owner == 'nedbat'
          && github.ref == 'refs/heads/master'
        env:
          COMMIT_MESSAGE: ${{ github.event.head_commit.message }}
        run: |
          set -xe
          # Make the redirect to the latest report.
          echo "<html><meta http-equiv='refresh' content='0;url=${{ env.url }}' /></html>" > reports_repo/latest.html
          echo "" >> reports_repo/latest.html
          echo "Coverage report redirect..." >> reports_repo/latest.html
          # Make the commit message.
          echo "${{ env.total }}% - $COMMIT_MESSAGE" > commit.txt
          echo "" >> commit.txt
          echo "${{ env.url }}" >> commit.txt
          echo "${{ env.sha10 }}: ${{ env.branch }}" >> commit.txt
          # Commit.
          cd ./reports_repo
          git sparse-checkout set --skip-checks '/*' '${{ env.report_dir }}'
          rm ${{ env.report_dir }}/.gitignore
          git add ${{ env.report_dir }} latest.html
          git commit --file=../commit.txt
          git push
          echo '[${{ env.url }}](${{ env.url }})' >> $GITHUB_STEP_SUMMARY

      - name: "Create badge"
        if: |
          github.repository_owner == 'nedbat'
          && github.ref == 'refs/heads/master'
        # https://gist.githubusercontent.com/nedbat/8c6980f77988a327348f9b02bbaf67f5
        uses: schneegans/dynamic-badges-action@e9a478b16159b4d31420099ba146cdc50f134483
        with:
          auth: ${{ secrets.METACOV_GIST_SECRET }}
          gistID: 8c6980f77988a327348f9b02bbaf67f5
          filename: metacov.json
          label: Coverage
          message: ${{ env.total }}%
          minColorRange: 60
          maxColorRange: 95
          valColorRange: ${{ env.total }}
coverage-7.4.4/.github/workflows/dependency-review.yml
# Dependency Review Action
#
# This Action will scan dependency manifest files that change as part of a Pull Request, surfacing known-vulnerable versions of the packages declared or updated in the PR. Once installed, if the workflow run is marked as required, PRs introducing known-vulnerable packages will be blocked from merging.
#
# Source repository: https://github.com/actions/dependency-review-action
# Public documentation: https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-enforcement
name: 'Dependency Review'
on:
  push:
    branches:
      - master
      - nedbat/*
  pull_request:
  workflow_dispatch:

permissions:
  contents: read

jobs:
  dependency-review:
    if: github.repository_owner == 'nedbat'
    runs-on: ubuntu-latest
    steps:
      - name: 'Checkout Repository'
        uses: actions/checkout@v4
      - name: 'Dependency Review'
        uses: actions/dependency-review-action@v4
        with:
          base-ref: ${{ github.event.pull_request.base.sha || 'master' }}
          head-ref: ${{ github.event.pull_request.head.sha || github.ref }}
coverage-7.4.4/.github/workflows/kit.yml
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
# This file is meant to be processed with cog.
# Running "make prebuild" will bring it up to date.
# Based on:
# https://github.com/joerick/cibuildwheel/blob/master/examples/github-deploy.yml
# To test installing wheels without uploading them to PyPI:
#
# $ mkdir /tmp/pypi
# $ cp dist/* /tmp/pypi
# $ python -m pip install piprepo
# $ piprepo build /tmp/pypi
# $ python -m pip install -v coverage --index-url=file:///tmp/pypi/simple
#
# Note that cibuildwheel recommends not shipping wheels for pre-release versions
# of Python: https://cibuildwheel.readthedocs.io/en/stable/options/#prerelease-pythons
# So we don't.
name: "Kits"
on:
  push:
    branches:
      # Don't build kits all the time, but do if the branch is about kits.
      - "**/*kit*"
  workflow_dispatch:
  repository_dispatch:
    types:
      - build-kits

defaults:
  run:
    shell: bash

env:
  PIP_DISABLE_PIP_VERSION_CHECK: 1

permissions:
  contents: read

concurrency:
  group: "${{ github.workflow }}-${{ github.ref }}"
  cancel-in-progress: true

jobs:
  wheels:
    name: "${{ matrix.py }} ${{ matrix.os }} ${{ matrix.arch }} wheels"
    runs-on: ${{ matrix.os }}-latest
    env:
      MATRIX_ID: "${{ matrix.py }}-${{ matrix.os }}-${{ matrix.arch }}"
    strategy:
      matrix:
        include:
          # To change the matrix, edit the choices, then process this file with cog:
          #
          # $ make workflows
          #
          # which runs:
          #
          # $ python -m pip install cogapp
          # $ python -m cogapp -crP .github/workflows/kit.yml
          #
          # Choices come from the table on https://pypi.org/project/cibuildwheel/
          #
          # [[[cog
          #   #----- vvv Choices for the matrix vvv -----
          #
          #   # Operating systems:
          #   oss = ["ubuntu", "macos", "windows"]
          #
          #   # For each OS, what arch to use with cibuildwheel:
          #   os_archs = {
          #       "ubuntu": ["x86_64", "i686", "aarch64"],
          #       "macos": ["arm64", "x86_64"],
          #       "windows": ["x86", "AMD64"],
          #   }
          #   # PYVERSIONS. Available versions:
          #   # https://github.com/actions/python-versions/blob/main/versions-manifest.json
          #   # PyPy versions are handled further below in the "pypy" step.
          #   pys = ["cp38", "cp39", "cp310", "cp311", "cp312"]
          #
          #   # Some OS/arch combinations need overrides for the Python versions:
          #   os_arch_pys = {
          #       ("macos", "arm64"): ["cp38", "cp39", "cp310", "cp311", "cp312"],
          #   }
          #
          #   #----- ^^^ ---------------------- ^^^ -----
          #
          #   import json
          #   for the_os in oss:
          #       for the_arch in os_archs[the_os]:
          #           for the_py in os_arch_pys.get((the_os, the_arch), pys):
          #               them = {
          #                   "os": the_os,
          #                   "py": the_py,
          #                   "arch": the_arch,
          #               }
          #               print(f"- {json.dumps(them)}")
          # ]]]
          - {"os": "ubuntu", "py": "cp38", "arch": "x86_64"}
          - {"os": "ubuntu", "py": "cp39", "arch": "x86_64"}
          - {"os": "ubuntu", "py": "cp310", "arch": "x86_64"}
          - {"os": "ubuntu", "py": "cp311", "arch": "x86_64"}
          - {"os": "ubuntu", "py": "cp312", "arch": "x86_64"}
          - {"os": "ubuntu", "py": "cp38", "arch": "i686"}
          - {"os": "ubuntu", "py": "cp39", "arch": "i686"}
          - {"os": "ubuntu", "py": "cp310", "arch": "i686"}
          - {"os": "ubuntu", "py": "cp311", "arch": "i686"}
          - {"os": "ubuntu", "py": "cp312", "arch": "i686"}
          - {"os": "ubuntu", "py": "cp38", "arch": "aarch64"}
          - {"os": "ubuntu", "py": "cp39", "arch": "aarch64"}
          - {"os": "ubuntu", "py": "cp310", "arch": "aarch64"}
          - {"os": "ubuntu", "py": "cp311", "arch": "aarch64"}
          - {"os": "ubuntu", "py": "cp312", "arch": "aarch64"}
          - {"os": "macos", "py": "cp38", "arch": "arm64"}
          - {"os": "macos", "py": "cp39", "arch": "arm64"}
          - {"os": "macos", "py": "cp310", "arch": "arm64"}
          - {"os": "macos", "py": "cp311", "arch": "arm64"}
          - {"os": "macos", "py": "cp312", "arch": "arm64"}
          - {"os": "macos", "py": "cp38", "arch": "x86_64"}
          - {"os": "macos", "py": "cp39", "arch": "x86_64"}
          - {"os": "macos", "py": "cp310", "arch": "x86_64"}
          - {"os": "macos", "py": "cp311", "arch": "x86_64"}
          - {"os": "macos", "py": "cp312", "arch": "x86_64"}
          - {"os": "windows", "py": "cp38", "arch": "x86"}
          - {"os": "windows", "py": "cp39", "arch": "x86"}
          - {"os": "windows", "py": "cp310", "arch": "x86"}
          - {"os": "windows", "py": "cp311", "arch": "x86"}
          - {"os": "windows", "py": "cp312", "arch": "x86"}
          - {"os": "windows", "py": "cp38", "arch": "AMD64"}
          - {"os": "windows", "py": "cp39", "arch": "AMD64"}
          - {"os": "windows", "py": "cp310", "arch": "AMD64"}
          - {"os": "windows", "py": "cp311", "arch": "AMD64"}
          - {"os": "windows", "py": "cp312", "arch": "AMD64"}
          # [[[end]]] (checksum: a6ca53e9c620c9e5ca85e7322122056c)
      fail-fast: false

    steps:
      - name: "Setup QEMU"
        if: matrix.os == 'ubuntu'
        uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3
        with:
          platforms: arm64

      - name: "Check out the repo"
        uses: actions/checkout@v4

      - name: "Install Python 3.8"
        uses: actions/setup-python@v5
        with:
          # PYVERSIONS
          python-version: "3.8"
          cache: pip
          cache-dependency-path: 'requirements/*.pip'

      - name: "Install tools"
        run: |
          python -m pip install -r requirements/kit.pip

      - name: "Build wheels"
        env:
          CIBW_BUILD: ${{ matrix.py }}-*
          CIBW_ARCHS: ${{ matrix.arch }}
          CIBW_ENVIRONMENT: PIP_DISABLE_PIP_VERSION_CHECK=1
          CIBW_PRERELEASE_PYTHONS: True
          CIBW_TEST_COMMAND: python -c "from coverage.tracer import CTracer; print('CTracer OK!')"
        run: |
          python -m cibuildwheel --output-dir wheelhouse

      - name: "List wheels"
        run: |
          ls -al wheelhouse/

      - name: "Upload wheels"
        uses: actions/upload-artifact@v4
        with:
          name: dist-${{ env.MATRIX_ID }}
          path: wheelhouse/*.whl
          retention-days: 7

  sdist:
    name: "Source distribution"
    runs-on: ubuntu-latest
    steps:
      - name: "Check out the repo"
        uses: actions/checkout@v4

      - name: "Install Python 3.8"
        uses: actions/setup-python@v5
        with:
          # PYVERSIONS
          python-version: "3.8"
          cache: pip
          cache-dependency-path: 'requirements/*.pip'

      - name: "Install tools"
        run: |
          python -m pip install -r requirements/kit.pip

      - name: "Build sdist"
        run: |
          python -m build

      - name: "List tarballs"
        run: |
          ls -al dist/

      - name: "Upload sdist"
        uses: actions/upload-artifact@v4
        with:
          name: dist-sdist
          path: dist/*.tar.gz
          retention-days: 7

  pypy:
    name: "PyPy wheel"
    runs-on: ubuntu-latest
    steps:
      - name: "Check out the repo"
        uses: actions/checkout@v4

      - name: "Install PyPy"
        uses: actions/setup-python@v5
        with:
          python-version: "pypy-3.8"  # Minimum of PyPy PYVERSIONS
          cache: pip
          cache-dependency-path: 'requirements/*.pip'

      - name: "Install requirements"
        run: |
          pypy3 -m pip install -r requirements/kit.pip

      - name: "Build wheel"
        env:
          DIST_EXTRA_CONFIG: extra.cfg
        run: |
          # One wheel works for all PyPy versions. PYVERSIONS
          # yes, this is weird syntax: https://github.com/pypa/build/issues/202
          echo -e "[bdist_wheel]\npython_tag=pp38.pp39.pp310" > $DIST_EXTRA_CONFIG
          pypy3 -m build -w

      - name: "List wheels"
        run: |
          ls -al dist/

      - name: "Upload wheels"
        uses: actions/upload-artifact@v4
        with:
          name: dist-pypy
          path: dist/*.whl
          retention-days: 7

  sign:
    # This signs our artifacts, but we don't use the signatures for anything
    # yet. Someday maybe PyPI will have a way to upload and verify them.
    name: "Sign artifacts"
    needs:
      - wheels
      - sdist
      - pypy
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - name: "Download artifacts"
        uses: actions/download-artifact@v4
        with:
          pattern: dist-*
          merge-multiple: true

      - name: "Sign artifacts"
        uses: sigstore/gh-action-sigstore-python@v2.1.1
        with:
          inputs: coverage-*.*

      - name: "List files"
        run: |
          ls -alR

      - name: "Upload signatures"
        uses: actions/upload-artifact@v4
        with:
          name: signatures
          path: |
            *.crt
            *.sig
            *.sigstore
          retention-days: 7
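The cog block embedded in kit.yml above is ordinary Python, so it can be previewed standalone before regenerating the workflow. This sketch reproduces that loop verbatim and prints the same 35 matrix entries that appear between the cog markers:

```python
# Standalone preview of the kit.yml cog matrix generator.
import json

# Operating systems:
oss = ["ubuntu", "macos", "windows"]

# For each OS, what arch to use with cibuildwheel:
os_archs = {
    "ubuntu": ["x86_64", "i686", "aarch64"],
    "macos": ["arm64", "x86_64"],
    "windows": ["x86", "AMD64"],
}
pys = ["cp38", "cp39", "cp310", "cp311", "cp312"]

# Some OS/arch combinations need overrides for the Python versions:
os_arch_pys = {
    ("macos", "arm64"): ["cp38", "cp39", "cp310", "cp311", "cp312"],
}

entries = []
for the_os in oss:
    for the_arch in os_archs[the_os]:
        for the_py in os_arch_pys.get((the_os, the_arch), pys):
            entries.append({"os": the_os, "py": the_py, "arch": the_arch})

for them in entries:
    print(f"- {json.dumps(them)}")
print(f"# {len(entries)} matrix entries")  # -> 35
```

Running it shows each `- {"os": ..., "py": ..., "arch": ...}` line exactly as cog writes it into the workflow, which makes it easy to check a matrix change before running `make workflows`.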
coverage-7.4.4/.github/workflows/python-nightly.yml
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
name: "Python Nightly Tests"
on:
  push:
    branches:
      - "**/*nightly*"
  schedule:
    # Run at 2:22am early every morning Eastern time (6/7:22 UTC)
    # so that we get tips of CPython development tested.
    # https://crontab.guru/#22_7_%2a_%2a_%2a
    - cron: "22 7 * * *"
  workflow_dispatch:

defaults:
  run:
    shell: bash

env:
  PIP_DISABLE_PIP_VERSION_CHECK: 1
  COVERAGE_IGOR_VERBOSE: 1

permissions:
  contents: read

concurrency:
  group: "${{ github.workflow }}-${{ github.ref }}"
  cancel-in-progress: true

jobs:
  tests:
    name: "${{ matrix.python-version }}"
    # Choose a recent Ubuntu that deadsnakes still builds all the versions for.
    # For example, deadsnakes doesn't provide 3.10 nightly for 22.04 (jammy)
    # because jammy ships 3.10, and deadsnakes doesn't want to clobber it.
    # https://launchpad.net/~deadsnakes/+archive/ubuntu/nightly/+packages
    # https://github.com/deadsnakes/issues/issues/234
    # bionic: 18, focal: 20, jammy: 22
    runs-on: ubuntu-20.04
    # If it doesn't finish in an hour, it's not going to. Don't spin for six
    # hours needlessly.
    timeout-minutes: 60

    strategy:
      matrix:
        python-version:
          # When changing this list, be sure to check the [gh] list in
          # tox.ini so that tox will run properly. PYVERSIONS
          # Available versions:
          # https://launchpad.net/~deadsnakes/+archive/ubuntu/nightly/+packages
          - "3.11-dev"
          - "3.12-dev"
          - "3.13-dev"
          # https://github.com/actions/setup-python#available-versions-of-pypy
          - "pypy-3.8-nightly"
          - "pypy-3.9-nightly"
          - "pypy-3.10-nightly"
      fail-fast: false

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Install ${{ matrix.python-version }} with deadsnakes"
        uses: deadsnakes/action@6c8b9b82fe0b4344f4b98f2775fcc395df45e494
        if: "!startsWith(matrix.python-version, 'pypy-')"
        with:
          python-version: "${{ matrix.python-version }}"

      - name: "Install ${{ matrix.python-version }} with setup-python"
        uses: "actions/setup-python@v5"
        if: "startsWith(matrix.python-version, 'pypy-')"
        with:
          python-version: "${{ matrix.python-version }}"

      - name: "Show diagnostic info"
        run: |
          set -xe
          python -VV
          python -m site
          python -m coverage debug sys
          python -m coverage debug pybehave
          env

      - name: "Install dependencies"
        run: |
          python -m pip install -r requirements/tox.pip

      - name: "Run tox"
        run: |
          python -m tox -- -rfsEX
coverage-7.4.4/.github/workflows/quality.yml
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
name: "Quality"
on:
  push:
    branches:
      - master
      - nedbat/*
  pull_request:
  workflow_dispatch:

defaults:
  run:
    shell: bash

env:
  PIP_DISABLE_PIP_VERSION_CHECK: 1

permissions:
  contents: read

concurrency:
  group: "${{ github.workflow }}-${{ github.ref }}"
  cancel-in-progress: true

jobs:
  lint:
    name: "Pylint etc"
    # Because pylint can report different things on different OS's (!)
    # (https://github.com/PyCQA/pylint/issues/3489), run this on Mac where local
    # pylint gets run.
    runs-on: macos-latest

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Install Python"
        uses: "actions/setup-python@v5"
        with:
          python-version: "3.8"   # Minimum of PYVERSIONS
          cache: pip
          cache-dependency-path: 'requirements/*.pip'

      - name: "Install dependencies"
        run: |
          python -m pip install -r requirements/tox.pip

      - name: "Tox lint"
        run: |
          python -m tox -e lint

  mypy:
    name: "Check types"
    runs-on: ubuntu-latest

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Install Python"
        uses: "actions/setup-python@v5"
        with:
          python-version: "3.8"   # Minimum of PYVERSIONS, but at least 3.8
          cache: pip
          cache-dependency-path: 'requirements/*.pip'

      - name: "Install dependencies"
        run: |
          # We run on 3.8, but the pins were made on 3.7, so don't insist on
          # hashes, which won't match.
          python -m pip install -r requirements/tox.pip

      - name: "Tox mypy"
        run: |
          python -m tox -e mypy

  doc:
    name: "Build docs"
    runs-on: ubuntu-latest

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Install Python"
        uses: "actions/setup-python@v5"
        with:
          python-version: "3.11"  # Doc version from PYVERSIONS
          cache: pip
          cache-dependency-path: 'requirements/*.pip'

      - name: "Show environment"
        run: |
          set -xe
          python -VV
          python -m site
          env

      - name: "Install dependencies"
        run: |
          set -xe
          python -m pip install -r requirements/tox.pip

      - name: "Tox doc"
        run: |
          python -m tox -e doc
coverage-7.4.4/.github/workflows/testsuite.yml

# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
name: "Tests"

on:
  push:
    branches:
      - master
      - nedbat/*
  pull_request:
  workflow_dispatch:

defaults:
  run:
    shell: bash

env:
  PIP_DISABLE_PIP_VERSION_CHECK: 1
  COVERAGE_IGOR_VERBOSE: 1
  FORCE_COLOR: 1  # Get colored pytest output

permissions:
  contents: read

concurrency:
  group: "${{ github.workflow }}-${{ github.ref }}"
  cancel-in-progress: true

jobs:
  tests:
    name: "${{ matrix.python-version }} on ${{ matrix.os }}"
    runs-on: "${{ matrix.os }}-latest"
    # Don't run tests if the branch name includes "-notests"
    if: "!contains(github.ref, '-notests')"
    strategy:
      matrix:
        os:
          - ubuntu
          - macos
          - windows
        python-version:
          # When changing this list, be sure to check the [gh] list in
          # tox.ini so that tox will run properly. PYVERSIONS
          # Available versions:
          # https://github.com/actions/python-versions/blob/main/versions-manifest.json
          # https://github.com/actions/setup-python/blob/main/docs/advanced-usage.md#available-versions-of-python-and-pypy
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
          - "3.12"
          - "3.13"
          - "pypy-3.8"
          - "pypy-3.9"
          - "pypy-3.10"
        exclude:
          # Windows pypy 3.9 and 3.10 get stuck with PyPy 7.3.15. I hope to
          # unstick them, but I don't want that to block all other progress, so
          # skip them for now.
          - os: windows
            python-version: "pypy-3.9"
          - os: windows
            python-version: "pypy-3.10"
          # Skip 3.13.0a4 and pin to 3.13.0a3 for Windows due to build error.
          # Undo when 3.13.0a5 is released.
          - os: windows
            python-version: "3.13"
        include:
          - os: windows
            python-version: "3.13.0-alpha.3"
      fail-fast: false

    steps:
      - name: "Check out the repo"
        uses: "actions/checkout@v4"

      - name: "Set up Python"
        uses: "actions/setup-python@v5"
        with:
          python-version: "${{ matrix.python-version }}"
          allow-prereleases: true
          # At a certain point, installing dependencies failed on pypy 3.9 and
          # 3.10 on Windows. Commenting out the cache here fixed it. Someday
          # try using the cache again.
          #cache: pip
          #cache-dependency-path: 'requirements/*.pip'

      - name: "Show environment"
        run: |
          set -xe
          python -VV
          python -m site
          # For extreme debugging:
          # python -c "import urllib.request as r; exec(r.urlopen('https://bit.ly/pydoctor').read())"
          env

      - name: "Install dependencies"
        run: |
          set -xe
          python -m pip install -r requirements/tox.pip

      - name: "Run tox for ${{ matrix.python-version }}"
        run: |
          python -m tox -- -rfsEX

      - name: "Retry tox for ${{ matrix.python-version }}"
        if: failure()
        run: |
          # `exit 1` makes sure that the job remains red with flaky runs
          python -m tox -- -rfsEX --lf -vvvvv && exit 1

  # This job aggregates test results. It's the required check for branch protection.
  # https://github.com/marketplace/actions/alls-green#why
  # https://github.com/orgs/community/discussions/33579
  success:
    name: Tests successful
    # The tests didn't run if the branch name includes "-notests"
    if: "!contains(github.ref, '-notests')"
    needs:
      - tests
    runs-on: ubuntu-latest
    steps:
      - name: Decide whether the needed jobs succeeded or failed
        uses: re-actors/alls-green@05ac9388f0aebcb5727afa17fcccfecd6f8ec5fe
        with:
          jobs: ${{ toJSON(needs) }}
coverage-7.4.4/.readthedocs.yaml

# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
#
# ReadTheDocs configuration.
# See https://docs.readthedocs.io/en/stable/config-file/v2.html
version: 2

build:
  os: ubuntu-22.04
  tools:
    # PYVERSIONS: the version we use for building docs. Check tox.ini[doc] also.
    python: "3.11"

sphinx:
  builder: html
  configuration: doc/conf.py

# Don't build anything except HTML.
formats: []

python:
  install:
    - requirements: doc/requirements.pip
    - method: pip
      path: .
coverage-7.4.4/CHANGES.rst

.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
==============================
Change history for coverage.py
==============================
These changes are listed in decreasing version number order. Note this can be
different from a strict chronological order when there are two branches in
development at the same time, such as 4.5.x and 5.0.
See :ref:`migrating` for significant changes that might be required when
upgrading your version of coverage.py.
.. When updating the "Unreleased" header to a specific version, use this
.. format. Don't forget the jump target:
..
.. .. _changes_9-8-1:
..
.. Version 9.8.1 — 2027-07-27
.. --------------------------
.. scriv-start-here
.. _changes_7-4-4:

Version 7.4.4 — 2024-03-14
--------------------------

- Fix: in some cases, even with ``[run] relative_files=True``, a data file
could be created with absolute path names. When combined with other relative
data files, it was random whether the absolute file names would be made
relative or not. If they weren't, then a file would be listed twice in
reports, as detailed in `issue 1752`_. This is now fixed: absolute file
names are always made relative when combining. Thanks to Bruno Rodrigues dos
Santos for support.
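
  The fixed behavior applies to settings like the following (a hypothetical
  minimal sketch; the ``[paths]`` entries are illustrative, not defaults):

  ```ini
  # .coveragerc sketch (paths are illustrative). With relative_files enabled,
  # absolute file names in any data file are now made relative when combining.
  [run]
  relative_files = True

  [paths]
  source =
      src/
      */site-packages/mypackage
  ```
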
- Fix: the last case of a match/case statement had an incorrect message if the
branch was missed. It said the pattern never matched, when actually the
branch is missed if the last case always matched.
- Fix: clicking a line number in the HTML report now positions more accurately.
- Fix: the ``report:format`` setting was defined as a boolean, but should be a
string. Thanks, `Tanaydin Sirin `_. It is also now documented
on the :ref:`configuration page `.
.. _issue 1752: https://github.com/nedbat/coveragepy/issues/1752
.. _pull 1754: https://github.com/nedbat/coveragepy/pull/1754
.. _changes_7-4-3:

Version 7.4.3 — 2024-02-23
--------------------------

- Fix: in some cases, coverage could fail with a RuntimeError: "Set changed
size during iteration." This is now fixed, closing `issue 1733`_.
.. _issue 1733: https://github.com/nedbat/coveragepy/issues/1733
.. _changes_7-4-2:

Version 7.4.2 — 2024-02-20
--------------------------

- Fix: setting ``COVERAGE_CORE=sysmon`` no longer errors on 3.11 and lower,
thanks `Hugo van Kemenade `_. It now issues a warning that
sys.monitoring is not available and falls back to the default core instead.
.. _pull 1747: https://github.com/nedbat/coveragepy/pull/1747
.. _changes_7-4-1:

Version 7.4.1 — 2024-01-26
--------------------------

- Python 3.13.0a3 is supported.
- Fix: the JSON report now includes an explicit format version number, closing
`issue 1732`_.
.. _issue 1732: https://github.com/nedbat/coveragepy/issues/1732
.. _changes_7-4-0:

Version 7.4.0 — 2023-12-27
--------------------------

- In Python 3.12 and above, you can try an experimental core based on the new
:mod:`sys.monitoring ` module by defining a
``COVERAGE_CORE=sysmon`` environment variable. This should be faster for
line coverage, but not for branch coverage, and plugins and dynamic contexts
are not yet supported with it. I am very interested to hear how it works (or
doesn't!) for you.
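
  For example, opting in might look like this (``my_prog.py`` is a placeholder
  for your own entry point):

  ```shell
  # Use the experimental sys.monitoring-based core (Python 3.12+ only).
  COVERAGE_CORE=sysmon python -m coverage run my_prog.py
  python -m coverage report
  ```
  
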
.. _changes_7-3-4:

Version 7.3.4 — 2023-12-20
--------------------------

- Fix: the change for multi-line signature exclusions in 7.3.3 broke other
forms of nested clauses being excluded properly. This is now fixed, closing
`issue 1713`_.
- Fix: in the HTML report, selecting code for copying won't select the line
numbers also. Thanks, `Robert Harris `_.
.. _issue 1713: https://github.com/nedbat/coveragepy/issues/1713
.. _pull 1717: https://github.com/nedbat/coveragepy/pull/1717
.. _changes_7-3-3:

Version 7.3.3 — 2023-12-14
--------------------------

- Fix: function definitions with multi-line signatures can now be excluded by
matching any of the lines, closing `issue 684`_. Thanks, `Jan Rusak,
Maciej Kowalczyk and Joanna Ejzel `_.
- Fix: XML reports could fail with a TypeError if files had numeric components
that were duplicates except for leading zeroes, like ``file1.py`` and
``file001.py``. Fixes `issue 1709`_.
- The ``coverage annotate`` command used to announce that it would be removed
in a future version. Enough people got in touch to say that they use it, so
it will stay. Don't expect it to keep up with other new features though.
- Added new :ref:`debug options `:
- ``pytest`` writes the pytest test name into the debug output.
- ``dataop2`` writes the full data being added to CoverageData objects.
.. _issue 684: https://github.com/nedbat/coveragepy/issues/684
.. _pull 1705: https://github.com/nedbat/coveragepy/pull/1705
.. _issue 1709: https://github.com/nedbat/coveragepy/issues/1709
.. _changes_7-3-2:

Version 7.3.2 — 2023-10-02
--------------------------

- The ``coverage lcov`` command ignored the ``[report] exclude_lines`` and
``[report] exclude_also`` settings (`issue 1684`_). This is now fixed,
thanks `Jacqueline Lee `_.
- Sometimes SQLite will create journal files alongside the coverage.py database
files. These are ephemeral, but could be mistakenly included when combining
data files. Now they are always ignored, fixing `issue 1605`_. Thanks to
Brad Smith for suggesting fixes and providing detailed debugging.
- On Python 3.12+, we now disable SQLite writing journal files, which should be
a little faster.
- The new 3.12 soft keyword ``type`` is properly bolded in HTML reports.
- Removed the "fullcoverage" feature used by CPython to measure the coverage of
early-imported standard library modules. CPython `stopped using it
<88054_>`_ in 2021, and it stopped working completely in Python 3.13.
.. _issue 1605: https://github.com/nedbat/coveragepy/issues/1605
.. _issue 1684: https://github.com/nedbat/coveragepy/issues/1684
.. _pull 1685: https://github.com/nedbat/coveragepy/pull/1685
.. _88054: https://github.com/python/cpython/issues/88054
.. _changes_7-3-1:

Version 7.3.1 — 2023-09-06
--------------------------

- The semantics of stars in file patterns has been clarified in the docs. A
leading or trailing star matches any number of path components, like a double
star would. This is different than the behavior of a star in the middle of a
pattern. This discrepancy was `identified by Sviatoslav Sydorenko
`_, who `provided patient detailed diagnosis `_ and
graciously agreed to a pragmatic resolution.
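
  As an illustration of the clarified semantics (this is a simplified
  re-implementation for demonstration only, not coverage.py's actual matching
  code): a leading or trailing ``*`` matches across path separators, while a
  ``*`` in the middle of a pattern does not.

  ```python
  import re

  def star_regex(pattern: str) -> str:
      """Build a regex for the documented star semantics (demo only)."""
      out = []
      i, n = 0, len(pattern)
      while i < n:
          if pattern[i] == "*":
              j = i
              while j < n and pattern[j] == "*":
                  j += 1
              if i == 0 or j == n or j - i > 1:
                  out.append(".*")       # leading, trailing, or **: spans separators
              else:
                  out.append("[^/]*")    # middle single star: one path component
              i = j
          else:
              out.append(re.escape(pattern[i]))
              i += 1
      return "".join(out) + r"\Z"

  def matches(pattern: str, path: str) -> bool:
      return re.match(star_regex(pattern), path) is not None

  print(matches("*/foo", "a/b/foo"))             # True: leading star spans directories
  print(matches("src/*.py", "src/sub/main.py"))  # False: middle star stays in one component
  ```
  
  
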
- The API docs were missing from the last version. They are now `restored
`_.
.. _apidocs: https://coverage.readthedocs.io/en/latest/api_coverage.html
.. _starbad: https://github.com/nedbat/coveragepy/issues/1407#issuecomment-1631085209
.. _pull 1650: https://github.com/nedbat/coveragepy/pull/1650
.. _changes_7-3-0:

Version 7.3.0 — 2023-08-12
--------------------------

- Added a :meth:`.Coverage.collect` context manager to start and stop coverage
data collection.
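
  A minimal usage sketch (requires coverage.py 7.3 or later installed;
  ``work()`` is a stand-in for your own code):

  ```python
  import coverage

  def work() -> int:
      return sum(range(10))

  cov = coverage.Coverage(data_file=None)  # measure without writing a data file
  with cov.collect():   # collection starts on entry and stops on exit
      result = work()

  print(result)  # 45
  ```
  
  
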
- Dropped support for Python 3.7.
- Fix: in unusual circumstances, SQLite cannot be set to asynchronous mode.
Coverage.py would fail with the error ``Safety level may not be changed
inside a transaction.`` This is now avoided, closing `issue 1646`_. Thanks
to Michael Bell for the detailed bug report.
- Docs: examples of configuration files now include separate examples for the
different syntaxes: .coveragerc, pyproject.toml, setup.cfg, and tox.ini.
- Fix: added ``nosemgrep`` comments to our JavaScript code so that
semgrep-based SAST security checks won't raise false alarms about security
problems that aren't problems.
- Added a CITATION.cff file, thanks to `Ken Schackart `_.
.. _pull 1641: https://github.com/nedbat/coveragepy/pull/1641
.. _issue 1646: https://github.com/nedbat/coveragepy/issues/1646
.. _changes_7-2-7:

Version 7.2.7 — 2023-05-29
--------------------------

- Fix: reverted a `change from 6.4.3 `_ that helped Cython, but
also increased the size of data files when using dynamic contexts, as
described in the now-fixed `issue 1586`_. The problem is now avoided due to a
recent change (`issue 1538 `_). Thanks to `Anders Kaseorg
`_ and David Szotten for persisting with problem reports and
detailed diagnoses.
- Wheels are now provided for CPython 3.12.
.. _pull 1347b: https://github.com/nedbat/coveragepy/pull/1347
.. _issue 1538b: https://github.com/nedbat/coveragepy/issues/1538
.. _issue 1586: https://github.com/nedbat/coveragepy/issues/1586
.. _pull 1629: https://github.com/nedbat/coveragepy/pull/1629
.. _changes_7-2-6:

Version 7.2.6 — 2023-05-23
--------------------------

- Fix: the ``lcov`` command could raise an IndexError exception if a file is
translated to Python but then executed under its own name. Jinja2 does this
when rendering templates. Fixes `issue 1553`_.
- Python 3.12 beta 1 now inlines comprehensions. Previously they were compiled
as invisible functions and coverage.py would warn you if they weren't
completely executed. This no longer happens under Python 3.12.
- Fix: the ``coverage debug sys`` command includes some environment variables
in its output. This could have included sensitive data. Those values are
now hidden with asterisks, closing `issue 1628`_.
.. _issue 1553: https://github.com/nedbat/coveragepy/issues/1553
.. _issue 1628: https://github.com/nedbat/coveragepy/issues/1628
.. _changes_7-2-5:

Version 7.2.5 — 2023-04-30
--------------------------

- Fix: ``html_report()`` could fail with an AttributeError on ``isatty`` if run
in an unusual environment where sys.stdout had been replaced. This is now
fixed.
.. _changes_7-2-4:

Version 7.2.4 — 2023-04-28
--------------------------

PyCon 2023 sprint fixes!
- Fix: with ``relative_files = true``, specifying a specific file to include or
omit wouldn't work correctly (`issue 1604`_). This is now fixed, with
testing help by `Marc Gibbons `_.
- Fix: the XML report would have an incorrect ```` element when using
relative files and the source option ended with a slash (`issue 1541`_).
This is now fixed, thanks to `Kevin Brown-Silva `_.
- When the HTML report location is printed to the terminal, it's now a
terminal-compatible URL, so that you can click the location to open the HTML
file in your browser. Finishes `issue 1523`_ thanks to `Ricardo Newbery
`_.
- Docs: a new :ref:`Migrating page ` with details about how to
migrate between major versions of coverage.py. It currently covers the
wildcard changes in 7.x. Thanks, `Brian Grohe `_.
.. _issue 1523: https://github.com/nedbat/coveragepy/issues/1523
.. _issue 1541: https://github.com/nedbat/coveragepy/issues/1541
.. _issue 1604: https://github.com/nedbat/coveragepy/issues/1604
.. _pull 1608: https://github.com/nedbat/coveragepy/pull/1608
.. _pull 1609: https://github.com/nedbat/coveragepy/pull/1609
.. _pull 1610: https://github.com/nedbat/coveragepy/pull/1610
.. _pull 1613: https://github.com/nedbat/coveragepy/pull/1613
.. _changes_7-2-3:

Version 7.2.3 — 2023-04-06
--------------------------

- Fix: the :ref:`config_run_sigterm` setting was meant to capture data if a
process was terminated with a SIGTERM signal, but it didn't always. This was
fixed thanks to `Lewis Gaul `_, closing `issue 1599`_.
- Performance: HTML reports with context information are now much more compact.
File sizes are typically as small as one-third the previous size, but can be
dramatically smaller. This closes `issue 1584`_ thanks to `Oleh Krehel
`_.
- Development dependencies no longer use hashed pins, closing `issue 1592`_.
.. _issue 1584: https://github.com/nedbat/coveragepy/issues/1584
.. _pull 1587: https://github.com/nedbat/coveragepy/pull/1587
.. _issue 1592: https://github.com/nedbat/coveragepy/issues/1592
.. _issue 1599: https://github.com/nedbat/coveragepy/issues/1599
.. _pull 1600: https://github.com/nedbat/coveragepy/pull/1600
.. _changes_7-2-2:

Version 7.2.2 — 2023-03-16
--------------------------

- Fix: if a virtualenv was created inside a source directory, and a sourced
package was installed inside the virtualenv, then all of the third-party
packages inside the virtualenv would be measured. This was incorrect, but
has now been fixed: only the specified packages will be measured, thanks to
`Manuel Jacob `_.
- Fix: the ``coverage lcov`` command could create a .lcov file with incorrect
LF (lines found) and LH (lines hit) totals. This is now fixed, thanks to
`Ian Moore `_.
- Fix: the ``coverage xml`` command on Windows could create a .xml file with
duplicate ```` elements. This is now fixed, thanks to `Benjamin
Parzella `_, closing `issue 1573`_.
.. _pull 1560: https://github.com/nedbat/coveragepy/pull/1560
.. _issue 1573: https://github.com/nedbat/coveragepy/issues/1573
.. _pull 1574: https://github.com/nedbat/coveragepy/pull/1574
.. _pull 1583: https://github.com/nedbat/coveragepy/pull/1583
.. _changes_7-2-1:

Version 7.2.1 — 2023-02-26
--------------------------

- Fix: the PyPI page had broken links to documentation pages, but no longer
does, closing `issue 1566`_.
- Fix: public members of the coverage module are now properly indicated so that
mypy will find them, fixing `issue 1564`_.
.. _issue 1564: https://github.com/nedbat/coveragepy/issues/1564
.. _issue 1566: https://github.com/nedbat/coveragepy/issues/1566
.. _changes_7-2-0:

Version 7.2.0 — 2023-02-22
--------------------------

- Added a new setting ``[report] exclude_also`` to let you add more exclusions
without overwriting the defaults. Thanks, `Alpha Chen `_,
closing `issue 1391`_.
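
  For example, a configuration sketch (the patterns shown are common choices,
  not defaults):

  ```ini
  # .coveragerc sketch: exclude_also extends the default exclusion patterns
  # instead of replacing them, unlike exclude_lines.
  [report]
  exclude_also =
      def __repr__
      if TYPE_CHECKING:
      raise NotImplementedError
  ```
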
- Added a :meth:`.CoverageData.purge_files` method to remove recorded data for
a particular file. Contributed by `Stephan Deibel `_.
- Fix: when reporting commands fail, they will no longer congratulate
themselves with messages like "Wrote XML report to file.xml" before spewing a
traceback about their failure.
- Fix: arguments in the public API that name file paths now accept pathlib.Path
objects. This includes the ``data_file`` and ``config_file`` arguments to
the Coverage constructor and the ``basename`` argument to CoverageData.
Closes `issue 1552`_.
- Fix: In some embedded environments, an IndexError could occur on stop() when
the originating thread exits before completion. This is now fixed, thanks to
`Russell Keith-Magee `_, closing `issue 1542`_.
- Added a ``py.typed`` file to announce our type-hintedness. Thanks,
`KotlinIsland `_.
.. _issue 1391: https://github.com/nedbat/coveragepy/issues/1391
.. _issue 1542: https://github.com/nedbat/coveragepy/issues/1542
.. _pull 1543: https://github.com/nedbat/coveragepy/pull/1543
.. _pull 1547: https://github.com/nedbat/coveragepy/pull/1547
.. _pull 1550: https://github.com/nedbat/coveragepy/pull/1550
.. _issue 1552: https://github.com/nedbat/coveragepy/issues/1552
.. _pull 1557: https://github.com/nedbat/coveragepy/pull/1557
.. _changes_7-1-0:

Version 7.1.0 — 2023-01-24
--------------------------

- Added: the debug output file can now be specified with ``[run] debug_file``
in the configuration file. Closes `issue 1319`_.
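
  A configuration sketch (the debug option names shown are examples, and the
  file name is arbitrary):

  ```ini
  # .coveragerc sketch: send debug output to a file instead of stderr.
  [run]
  debug = trace,config
  debug_file = coverage_debug.txt
  ```
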
- Performance: fixed a slowdown with dynamic contexts that's been around since
6.4.3. The fix closes `issue 1538`_. Thankfully this doesn't break the
`Cython change`_ that fixed `issue 972`_. Thanks to Mathieu Kniewallner for
the deep investigative work and comprehensive issue report.
- Typing: all product and test code has type annotations.
.. _Cython change: https://github.com/nedbat/coveragepy/pull/1347
.. _issue 972: https://github.com/nedbat/coveragepy/issues/972
.. _issue 1319: https://github.com/nedbat/coveragepy/issues/1319
.. _issue 1538: https://github.com/nedbat/coveragepy/issues/1538
.. _changes_7-0-5:

Version 7.0.5 — 2023-01-10
--------------------------

- Fix: On Python 3.7, a file with type annotations but no ``from __future__
import annotations`` would be missing statements in the coverage report. This
is now fixed, closing `issue 1524`_.
.. _issue 1524: https://github.com/nedbat/coveragepy/issues/1524
.. _changes_7-0-4:

Version 7.0.4 — 2023-01-07
--------------------------

- Performance: an internal cache of file names was accidentally disabled,
resulting in sometimes drastic reductions in performance. This is now fixed,
closing `issue 1527`_. Thanks to Ivan Ciuvalschii for the reproducible test
case.
.. _issue 1527: https://github.com/nedbat/coveragepy/issues/1527
.. _changes_7-0-3:

Version 7.0.3 — 2023-01-03
--------------------------

- Fix: when using pytest-cov or pytest-xdist, or perhaps both, the combining
step could fail with ``assert row is not None`` using 7.0.2. This was due to
a race condition that has always been possible and is still possible. In
7.0.1 and before, the error was silently swallowed by the combining code.
Now it will produce a message "Couldn't combine data file" and ignore the
data file as it used to do before 7.0.2. Closes `issue 1522`_.
.. _issue 1522: https://github.com/nedbat/coveragepy/issues/1522
.. _changes_7-0-2:

Version 7.0.2 — 2023-01-02
--------------------------

- Fix: when using the ``[run] relative_files = True`` setting, a relative
``[paths]`` pattern was still being made absolute. This is now fixed,
closing `issue 1519`_.
- Fix: if Python doesn't provide tomllib, then TOML configuration files can
only be read if coverage.py is installed with the ``[toml]`` extra.
Coverage.py will raise an error if TOML support is not installed when it sees
your settings are in a .toml file. But it didn't understand that
``[tools.coverage]`` was a valid section header, so the error wasn't reported
if you used that header, and settings were silently ignored. This is now
fixed, closing `issue 1516`_.
- Fix: adjusted how decorators are traced on PyPy 7.3.10, fixing `issue 1515`_.
- Fix: the ``coverage lcov`` report did not properly implement the
``--fail-under=MIN`` option. This has been fixed.
- Refactor: added many type annotations, including a number of refactorings.
This should not affect outward behavior, but they were a bit invasive in some
places, so keep your eyes peeled for oddities.
- Refactor: removed the vestigial and long untested support for Jython and
IronPython.
.. _issue 1515: https://github.com/nedbat/coveragepy/issues/1515
.. _issue 1516: https://github.com/nedbat/coveragepy/issues/1516
.. _issue 1519: https://github.com/nedbat/coveragepy/issues/1519
.. _changes_7-0-1:

Version 7.0.1 — 2022-12-23
--------------------------

- When checking if a file mapping resolved to a file that exists, we weren't
considering files in .whl files. This is now fixed, closing `issue 1511`_.
- File pattern rules were too strict, forbidding plus signs and curly braces in
directory and file names. This is now fixed, closing `issue 1513`_.
- Unusual Unicode or control characters in source files could prevent
reporting. This is now fixed, closing `issue 1512`_.
- The PyPy wheel now installs on PyPy 3.7, 3.8, and 3.9, closing `issue 1510`_.
.. _issue 1510: https://github.com/nedbat/coveragepy/issues/1510
.. _issue 1511: https://github.com/nedbat/coveragepy/issues/1511
.. _issue 1512: https://github.com/nedbat/coveragepy/issues/1512
.. _issue 1513: https://github.com/nedbat/coveragepy/issues/1513
.. _changes_7-0-0:

Version 7.0.0 — 2022-12-18
--------------------------

Nothing new beyond 7.0.0b1.
.. _changes_7-0-0b1:

Version 7.0.0b1 — 2022-12-03
----------------------------

A number of changes have been made to file path handling, including pattern
matching and path remapping with the ``[paths]`` setting (see
:ref:`config_paths`). These changes might affect you, and require you to
update your settings.
(This release includes the changes from `6.6.0b1`__, since 6.6.0 was never
released.)
__ https://coverage.readthedocs.io/en/latest/changes.html#changes-6-6-0b1
- Changes to file pattern matching, which might require updating your
configuration:
- Previously, ``*`` would incorrectly match directory separators, making
precise matching difficult. This is now fixed, closing `issue 1407`_.
- Now ``**`` matches any number of nested directories, including none.
- Improvements to combining data files when using the
:ref:`config_run_relative_files` setting, which might require updating your
configuration:
- During ``coverage combine``, relative file paths are implicitly combined
without needing a ``[paths]`` configuration setting. This also fixed
`issue 991`_.
- A ``[paths]`` setting like ``*/foo`` will now match ``foo/bar.py`` so that
relative file paths can be combined more easily.
- The :ref:`config_run_relative_files` setting is properly interpreted in
more places, fixing `issue 1280`_.
- When remapping file paths with ``[paths]``, a path will be remapped only if
the resulting path exists. The documentation has long said the prefix had to
exist, but it was never enforced. This fixes `issue 608`_, improves `issue
649`_, and closes `issue 757`_.
- Reporting operations now implicitly use the ``[paths]`` setting to remap file
paths within a single data file. Combining multiple files still requires the
``coverage combine`` step, but this simplifies some single-file situations.
Closes `issue 1212`_ and `issue 713`_.
- The ``coverage report`` command now has a ``--format=`` option. The original
style is now ``--format=text``, and is the default.
- Using ``--format=markdown`` will write the table in Markdown format, thanks
to `Steve Oswald `_, closing `issue 1418`_.
- Using ``--format=total`` will write a single total number to the
output. This can be useful for making badges or writing status updates.
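
  The three formats can be selected like this:

  ```shell
  # The default, unchanged text table:
  python -m coverage report --format=text
  # A Markdown table, handy for pull request comments:
  python -m coverage report --format=markdown
  # Just the total percentage, e.g. for a badge:
  python -m coverage report --format=total
  ```
  
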
- Combining data files with ``coverage combine`` now hashes the data files to
skip files that add no new information. This can reduce the time needed.
Many details affect the speed-up, but for coverage.py's own test suite,
combining is about 40% faster. Closes `issue 1483`_.
- When searching for completely un-executed files, coverage.py uses the
presence of ``__init__.py`` files to determine which directories have source
that could have been imported. However, `implicit namespace packages`_ don't
require ``__init__.py``. A new setting ``[report]
include_namespace_packages`` tells coverage.py to consider these directories
during reporting. Thanks to `Felix Horvat `_ for the
contribution. Closes `issue 1383`_ and `issue 1024`_.
- Fixed environment variable expansion in pyproject.toml files. It was overly
broad, causing errors outside of coverage.py settings, as described in `issue
1481`_ and `issue 1345`_. This is now fixed, but in rare cases will require
changing your pyproject.toml to quote non-string values that use environment
substitution.
- An empty file has a coverage total of 100%, but used to fail with
``--fail-under``. This has been fixed, closing `issue 1470`_.
- The text report table no longer writes out two separator lines if there are
no files listed in the table. One is plenty.
- Fixed a mis-measurement of a strange use of wildcard alternatives in
match/case statements, closing `issue 1421`_.
- Fixed internal logic that prevented coverage.py from running on
implementations other than CPython or PyPy (`issue 1474`_).
- The deprecated ``[run] note`` setting has been completely removed.
.. _implicit namespace packages: https://peps.python.org/pep-0420/
.. _issue 608: https://github.com/nedbat/coveragepy/issues/608
.. _issue 649: https://github.com/nedbat/coveragepy/issues/649
.. _issue 713: https://github.com/nedbat/coveragepy/issues/713
.. _issue 757: https://github.com/nedbat/coveragepy/issues/757
.. _issue 991: https://github.com/nedbat/coveragepy/issues/991
.. _issue 1024: https://github.com/nedbat/coveragepy/issues/1024
.. _issue 1212: https://github.com/nedbat/coveragepy/issues/1212
.. _issue 1280: https://github.com/nedbat/coveragepy/issues/1280
.. _issue 1345: https://github.com/nedbat/coveragepy/issues/1345
.. _issue 1383: https://github.com/nedbat/coveragepy/issues/1383
.. _issue 1407: https://github.com/nedbat/coveragepy/issues/1407
.. _issue 1418: https://github.com/nedbat/coveragepy/issues/1418
.. _issue 1421: https://github.com/nedbat/coveragepy/issues/1421
.. _issue 1470: https://github.com/nedbat/coveragepy/issues/1470
.. _issue 1474: https://github.com/nedbat/coveragepy/issues/1474
.. _issue 1481: https://github.com/nedbat/coveragepy/issues/1481
.. _issue 1483: https://github.com/nedbat/coveragepy/issues/1483
.. _pull 1387: https://github.com/nedbat/coveragepy/pull/1387
.. _pull 1479: https://github.com/nedbat/coveragepy/pull/1479
.. _changes_6-6-0b1:

Version 6.6.0b1 — 2022-10-31
----------------------------

(Note: 6.6.0 final was never released. These changes are part of `7.0.0b1`__.)
__ https://coverage.readthedocs.io/en/latest/changes.html#changes-7-0-0b1
- Changes to file pattern matching, which might require updating your
configuration:
- Previously, ``*`` would incorrectly match directory separators, making
precise matching difficult. This is now fixed, closing `issue 1407`_.
- Now ``**`` matches any number of nested directories, including none.
- Improvements to combining data files when using the
:ref:`config_run_relative_files` setting:
- During ``coverage combine``, relative file paths are implicitly combined
without needing a ``[paths]`` configuration setting. This also fixed
`issue 991`_.
- A ``[paths]`` setting like ``*/foo`` will now match ``foo/bar.py`` so that
relative file paths can be combined more easily.
- The setting is properly interpreted in more places, fixing `issue 1280`_.
- Fixed environment variable expansion in pyproject.toml files. It was overly
broad, causing errors outside of coverage.py settings, as described in `issue
1481`_ and `issue 1345`_. This is now fixed, but in rare cases will require
changing your pyproject.toml to quote non-string values that use environment
substitution.
- Fixed internal logic that prevented coverage.py from running on
implementations other than CPython or PyPy (`issue 1474`_).
.. _issue 991: https://github.com/nedbat/coveragepy/issues/991
.. _issue 1280: https://github.com/nedbat/coveragepy/issues/1280
.. _issue 1345: https://github.com/nedbat/coveragepy/issues/1345
.. _issue 1407: https://github.com/nedbat/coveragepy/issues/1407
.. _issue 1474: https://github.com/nedbat/coveragepy/issues/1474
.. _issue 1481: https://github.com/nedbat/coveragepy/issues/1481
.. _changes_6-5-0:

Version 6.5.0 — 2022-09-29
--------------------------

- The JSON report now includes details of which branches were taken, and which
are missing for each file. Thanks, `Christoph Blessing `_. Closes
`issue 1425`_.
- Starting with coverage.py 6.2, ``class`` statements were marked as a branch.
This wasn't right, and has been reverted, fixing `issue 1449`_. Note this
will very slightly reduce your coverage total if you are measuring branch
coverage.
- Packaging is now compliant with `PEP 517`_, closing `issue 1395`_.
- A new debug option ``--debug=pathmap`` shows details of the remapping of
paths that happens during combine due to the ``[paths]`` setting.
- Fix an internal problem with caching of invalid Python parsing. Found by
OSS-Fuzz, fixing their `bug 50381`_.
.. _bug 50381: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=50381
.. _PEP 517: https://peps.python.org/pep-0517/
.. _issue 1395: https://github.com/nedbat/coveragepy/issues/1395
.. _issue 1425: https://github.com/nedbat/coveragepy/issues/1425
.. _issue 1449: https://github.com/nedbat/coveragepy/issues/1449
.. _pull 1438: https://github.com/nedbat/coveragepy/pull/1438
.. _changes_6-4-4:

Version 6.4.4 — 2022-08-16
--------------------------

- Wheels are now provided for Python 3.11.
.. _changes_6-4-3:

Version 6.4.3 — 2022-08-06
--------------------------

- Fix a failure when combining data files if the file names contained glob-like
patterns. Thanks, `Michael Krebs and Benjamin Schubert `_.
- Fix a messaging failure when combining Windows data files on a different
drive than the current directory, closing `issue 1428`_. Thanks, `Lorenzo
Micò `_.
- Fix path calculations when running in the root directory, as you might do in
a Docker container. Thanks `Arthur Rio `_.
- Filtering in the HTML report wouldn't work when reloading the index page.
This is now fixed. Thanks, `Marc Legendre `_.
- Fix a problem with Cython code measurement, closing `issue 972`_. Thanks,
`Matus Valo `_.
.. _issue 972: https://github.com/nedbat/coveragepy/issues/972
.. _issue 1428: https://github.com/nedbat/coveragepy/issues/1428
.. _pull 1347: https://github.com/nedbat/coveragepy/pull/1347
.. _pull 1403: https://github.com/nedbat/coveragepy/issues/1403
.. _pull 1405: https://github.com/nedbat/coveragepy/issues/1405
.. _pull 1413: https://github.com/nedbat/coveragepy/issues/1413
.. _pull 1430: https://github.com/nedbat/coveragepy/pull/1430
.. _changes_6-4-2:

Version 6.4.2 — 2022-07-12
--------------------------

- Updated for a small change in Python 3.11.0 beta 4: modules now start with a
line with line number 0, which is ignored. This line cannot be executed, so
coverage totals were thrown off. This line is now ignored by coverage.py,
but this also means that truly empty modules (like ``__init__.py``) have no
lines in them, rather than one phantom line. Fixes `issue 1419`_.
- Internal debugging data added to sys.modules is now an actual module, to
avoid confusing code that examines everything in sys.modules. Thanks,
`Yilei Yang <pull 1399_>`_.
.. _issue 1419: https://github.com/nedbat/coveragepy/issues/1419
.. _pull 1399: https://github.com/nedbat/coveragepy/pull/1399
.. _changes_6-4-1:

Version 6.4.1 — 2022-06-02
--------------------------

- Greatly improved performance on PyPy, and other environments that need the
pure Python trace function. Thanks, Carl Friedrich Bolz-Tereick (`pull
1381`_ and `pull 1388`_). Slightly improved performance when using the C
trace function, as most environments do. Closes `issue 1339`_.
- The conditions for using tomllib from the standard library have been made
more precise, so that 3.11 alphas will continue to work. Closes `issue
1390`_.
.. _issue 1339: https://github.com/nedbat/coveragepy/issues/1339
.. _pull 1381: https://github.com/nedbat/coveragepy/pull/1381
.. _pull 1388: https://github.com/nedbat/coveragepy/pull/1388
.. _issue 1390: https://github.com/nedbat/coveragepy/issues/1390
.. _changes_64:

Version 6.4 — 2022-05-22
------------------------

- A new setting, :ref:`config_run_sigterm`, controls whether a SIGTERM signal
handler is used. In 6.3, the signal handler was always installed, to capture
data at unusual process ends. Unfortunately, this introduced other problems
(see `issue 1310`_). Now the signal handler is only used if you opt-in by
setting ``[run] sigterm = true``.
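
For example, a minimal opt-in in a standard ``.coveragerc`` might look like
this:

```ini
# .coveragerc -- opt in to the SIGTERM handler, which is now off by default
[run]
sigterm = true
```
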
- Small changes to the HTML report:
- Added links to next and previous file, and more keyboard shortcuts: ``[``
and ``]`` for next file and previous file; ``u`` for up to the index; and
``?`` to open/close the help panel. Thanks, `J. M. F. Tsang
`_.
- The time stamp and version are displayed at the top of the report. Thanks,
`Ammar Askar `_. Closes `issue 1351`_.
- A new debug option ``debug=sqldata`` adds more detail to ``debug=sql``,
logging all the data being written to the database.
- Previously, running ``coverage report`` (or any of the reporting commands) in
an empty directory would create a .coverage data file. Now they do not,
fixing `issue 1328`_.
- On Python 3.11, the ``[toml]`` extra no longer installs tomli, instead using
tomllib from the standard library. Thanks `Shantanu `_.
- In-memory CoverageData objects now properly update(), closing `issue 1323`_.
.. _issue 1310: https://github.com/nedbat/coveragepy/issues/1310
.. _issue 1323: https://github.com/nedbat/coveragepy/issues/1323
.. _issue 1328: https://github.com/nedbat/coveragepy/issues/1328
.. _issue 1351: https://github.com/nedbat/coveragepy/issues/1351
.. _pull 1354: https://github.com/nedbat/coveragepy/pull/1354
.. _pull 1359: https://github.com/nedbat/coveragepy/pull/1359
.. _pull 1364: https://github.com/nedbat/coveragepy/pull/1364
.. _changes_633:

Version 6.3.3 — 2022-05-12
--------------------------

- Fix: Coverage.py now builds successfully on CPython 3.11 (3.11.0b1) again.
Closes `issue 1367`_. Some results for generators may have changed.
.. _issue 1367: https://github.com/nedbat/coveragepy/issues/1367
.. _changes_632:

Version 6.3.2 — 2022-02-20
--------------------------

- Fix: adapt to pypy3.9's decorator tracing behavior. It now traces function
decorators like CPython 3.8: both the @-line and the def-line are traced.
Fixes `issue 1326`_.
- Debug: added ``pybehave`` to the list of :ref:`coverage debug <cmd_debug>`
and :ref:`cmd_run_debug` options.
- Fix: show an intelligible error message if ``--concurrency=multiprocessing``
is used without a configuration file. Closes `issue 1320`_.
.. _issue 1320: https://github.com/nedbat/coveragepy/issues/1320
.. _issue 1326: https://github.com/nedbat/coveragepy/issues/1326
.. _changes_631:

Version 6.3.1 — 2022-02-01
--------------------------

- Fix: deadlocks could occur when terminating processes. Some of these
deadlocks (described in `issue 1310`_) are now fixed.
- Fix: a signal handler was being set from multiple threads, causing an error:
"ValueError: signal only works in main thread". This is now fixed, closing
`issue 1312`_.
- Fix: ``--precision`` on the command-line was being ignored while considering
``--fail-under``. This is now fixed, thanks to
`Marcelo Trylesinski <pull 1317_>`_.
- Fix: releases no longer provide 3.11.0-alpha wheels. Coverage.py uses CPython
internal fields which are moving during the alpha phase. Fixes `issue 1316`_.
.. _issue 1310: https://github.com/nedbat/coveragepy/issues/1310
.. _issue 1312: https://github.com/nedbat/coveragepy/issues/1312
.. _issue 1316: https://github.com/nedbat/coveragepy/issues/1316
.. _pull 1317: https://github.com/nedbat/coveragepy/pull/1317
.. _changes_63:

Version 6.3 — 2022-01-25
------------------------

- Feature: Added the ``lcov`` command to generate reports in LCOV format.
Thanks, `Bradley Burns `_. Closes issues `587 <issue 587_>`_
and `626 <issue 626_>`_.
- Feature: the coverage data file can now be specified on the command line with
the ``--data-file`` option in any command that reads or writes data. This is
in addition to the existing ``COVERAGE_FILE`` environment variable. Closes
`issue 624`_. Thanks, `Nikita Bloshchanevich `_.
- Feature: coverage measurement data will now be written when a SIGTERM signal
is received by the process. This includes
:meth:`Process.terminate `,
and other ways to terminate a process. Currently this is only on Linux and
Mac; Windows is not supported. Fixes `issue 1307`_.
- Dropped support for Python 3.6, which reached end-of-life on 2021-12-23.
- Updated Python 3.11 support to 3.11.0a4, fixing `issue 1294`_.
- Fix: the coverage data file is now created in a more robust way, to avoid
problems when multiple processes are trying to write data at once. Fixes
issues `1303 <issue 1303_>`_ and `883 <issue 883_>`_.
- Fix: a .gitignore file will only be written into the HTML report output
directory if the directory is empty. This should prevent certain unfortunate
accidents of writing the file where it is not wanted.
- Releases now have MacOS arm64 wheels for Apple Silicon, fixing `issue 1288`_.
.. _issue 587: https://github.com/nedbat/coveragepy/issues/587
.. _issue 624: https://github.com/nedbat/coveragepy/issues/624
.. _issue 626: https://github.com/nedbat/coveragepy/issues/626
.. _issue 883: https://github.com/nedbat/coveragepy/issues/883
.. _issue 1288: https://github.com/nedbat/coveragepy/issues/1288
.. _issue 1294: https://github.com/nedbat/coveragepy/issues/1294
.. _issue 1303: https://github.com/nedbat/coveragepy/issues/1303
.. _issue 1307: https://github.com/nedbat/coveragepy/issues/1307
.. _pull 1289: https://github.com/nedbat/coveragepy/pull/1289
.. _pull 1304: https://github.com/nedbat/coveragepy/pull/1304
.. _changes_62:

Version 6.2 — 2021-11-26
------------------------

- Feature: Now the ``--concurrency`` setting can have a list of values, so that
threads and another lightweight threading package can be measured together,
such as ``--concurrency=gevent,thread``. Closes `issue 1012`_ and `issue
1082`_. This also means that ``thread`` must be explicitly specified in some
cases that used to be implicit, such as ``--concurrency=multiprocessing``,
which must be changed to ``--concurrency=multiprocessing,thread``.
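
The configuration-file equivalent of the command-line example above would be:

```ini
# .coveragerc -- measure gevent greenlets and regular threads together;
# "thread" must now be listed explicitly
[run]
concurrency = gevent,thread
```
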
- Fix: A module specified as the ``source`` setting is imported during startup,
before the user program imports it. This could cause problems if the rest of
the program isn't ready yet. For example, `issue 1203`_ describes a Django
setting that is accessed before settings have been configured. Now the early
import is wrapped in a try/except so errors then don't stop execution.
- Fix: A colon in a decorator expression would cause an exclusion to end too
early, preventing the exclusion of the decorated function. This is now fixed.
- Fix: The HTML report now will not overwrite a .gitignore file that already
exists in the HTML output directory (follow-on for
`issue 1244 <issue 1244b_>`_).
- API: The exceptions raised by Coverage.py have been specialized, to provide
finer-grained catching of exceptions by third-party code.
- API: Using ``suffix=False`` when constructing a Coverage object with
multiprocessing wouldn't suppress the data file suffix (`issue 989`_). This
is now fixed.
- Debug: The ``coverage debug data`` command will now sniff out combinable data
files, and report on all of them.
- Debug: The ``coverage debug`` command used to accept a number of topics at a
time, and show all of them, though this was never documented. This no longer
works, to allow for command-line options in the future.
.. _issue 989: https://github.com/nedbat/coveragepy/issues/989
.. _issue 1012: https://github.com/nedbat/coveragepy/issues/1012
.. _issue 1082: https://github.com/nedbat/coveragepy/issues/1082
.. _issue 1203: https://github.com/nedbat/coveragepy/issues/1203
.. _issue 1244b: https://github.com/nedbat/coveragepy/issues/1244
.. _changes_612:

Version 6.1.2 — 2021-11-10
--------------------------

- Python 3.11 is supported (tested with 3.11.0a2). One still-open issue has to
do with `exits through with-statements <issue 1270_>`_.
- Fix: When remapping file paths through the ``[paths]`` setting while
combining, the ``[run] relative_files`` setting was ignored, resulting in
absolute paths for remapped file names (`issue 1147`_). This is now fixed.
- Fix: Complex conditionals over excluded lines could have incorrectly reported
a missing branch (`issue 1271`_). This is now fixed.
- Fix: More exceptions are now handled when trying to parse source files for
reporting. Problems that used to terminate coverage.py can now be handled
with ``[report] ignore_errors``. This helps with plugins failing to read
files (`django_coverage_plugin issue 78`_).
- Fix: Removed another vestige of jQuery from the source tarball
(`issue 840 <issue 840b_>`_).
- Fix: Added a default value for a new-to-6.x argument of an internal class.
This unsupported class is being used by coveralls (`issue 1273`_). Although
I'd rather not "fix" unsupported interfaces, it's actually nicer with a
default value.
.. _django_coverage_plugin issue 78: https://github.com/nedbat/django_coverage_plugin/issues/78
.. _issue 840b: https://github.com/nedbat/coveragepy/issues/840
.. _issue 1147: https://github.com/nedbat/coveragepy/issues/1147
.. _issue 1270: https://github.com/nedbat/coveragepy/issues/1270
.. _issue 1271: https://github.com/nedbat/coveragepy/issues/1271
.. _issue 1273: https://github.com/nedbat/coveragepy/issues/1273
.. _changes_611:

Version 6.1.1 — 2021-10-31
--------------------------

- Fix: The sticky header on the HTML report didn't work unless you had branch
coverage enabled. This is now fixed: the sticky header works for everyone.
(Do people still use coverage without branch measurement!? j/k)
- Fix: When using explicitly declared namespace packages, the "already imported
a file that will be measured" warning would be issued (`issue 888`_). This
is now fixed.
.. _issue 888: https://github.com/nedbat/coveragepy/issues/888
.. _changes_61:

Version 6.1 — 2021-10-30
------------------------

- Deprecated: The ``annotate`` command and the ``Coverage.annotate`` function
will be removed in a future version, unless people let me know that they are
using it. Instead, the ``html`` command gives better-looking (and more
accurate) output, and the ``report -m`` command will tell you line numbers of
missing lines. Please get in touch if you have a reason to use ``annotate``
over those better options: ned@nedbatchelder.com.
- Feature: Coverage now sets an environment variable, ``COVERAGE_RUN``, when
running your code with the ``coverage run`` command. The value is not
important, and may change in the future. Closes `issue 553`_.
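
A sketch of how a program might check for this; only the variable's presence
is meaningful, per the entry above:

```python
import os

# `coverage run` sets COVERAGE_RUN in the environment of the measured
# process. Only its presence matters; the value is unspecified and may
# change in the future, so don't compare it against anything specific.
under_coverage = bool(os.environ.get("COVERAGE_RUN"))
print("measured by coverage" if under_coverage else "not measured")
```
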
- Feature: The HTML report pages for Python source files now have a sticky
header so the file name and controls are always visible.
- Feature: The ``xml`` and ``json`` commands now describe what they wrote
where.
- Feature: The ``html``, ``combine``, ``xml``, and ``json`` commands all accept
a ``-q/--quiet`` option to suppress the messages they write to stdout about
what they are doing (`issue 1254`_).
- Feature: The ``html`` command writes a ``.gitignore`` file into the HTML
output directory, to prevent the report from being committed to git. If you
want to commit it, you will need to delete that file. Closes `issue 1244`_.
- Feature: Added support for PyPy 3.8.
- Fix: More generated code is now excluded from measurement. Code such as
`attrs`_ boilerplate, or doctest code, was being measured though the
synthetic line numbers meant they were never reported. Once Cython was
involved though, the generated .so files were parsed as Python, raising
syntax errors, as reported in `issue 1160`_. This is now fixed.
- Fix: When sorting human-readable names, numeric components are sorted
correctly: file10.py will appear after file9.py. This applies to file names,
module names, environment variables, and test contexts.
- Performance: Branch coverage measurement is faster, though you might only
notice on code that is executed many times, such as long-running loops.
- Build: jQuery is no longer used or vendored (`issue 840`_ and `issue 1118`_).
Huge thanks to Nils Kattenbeck (septatrix) for the conversion to vanilla
JavaScript in `pull request 1248`_.
.. _issue 553: https://github.com/nedbat/coveragepy/issues/553
.. _issue 840: https://github.com/nedbat/coveragepy/issues/840
.. _issue 1118: https://github.com/nedbat/coveragepy/issues/1118
.. _issue 1160: https://github.com/nedbat/coveragepy/issues/1160
.. _issue 1244: https://github.com/nedbat/coveragepy/issues/1244
.. _pull request 1248: https://github.com/nedbat/coveragepy/pull/1248
.. _issue 1254: https://github.com/nedbat/coveragepy/issues/1254
.. _attrs: https://www.attrs.org/
.. _changes_602:

Version 6.0.2 — 2021-10-11
--------------------------

- Namespace packages being measured weren't properly handled by the new code
that ignores third-party packages. If the namespace package was installed, it
was ignored as a third-party package. That problem (`issue 1231`_) is now
fixed.
- Packages named as "source packages" (with ``source``, or ``source_pkgs``, or
pytest-cov's ``--cov``) might have been only partially measured. Their
top-level statements could be marked as un-executed, because they were
imported by coverage.py before measurement began (`issue 1232`_). This is
now fixed, but the package will be imported twice, once by coverage.py, then
again by your test suite. This could cause problems if importing the package
has side effects.
- The :meth:`.CoverageData.contexts_by_lineno` method was documented to return
a dict, but was returning a defaultdict. Now it returns a plain dict. It
also no longer returns negative numbered keys.
.. _issue 1231: https://github.com/nedbat/coveragepy/issues/1231
.. _issue 1232: https://github.com/nedbat/coveragepy/issues/1232
.. _changes_601:

Version 6.0.1 — 2021-10-06
--------------------------

- In 6.0, the coverage.py exceptions moved from coverage.misc to
coverage.exceptions. These exceptions are not part of the supported public
API, though CoverageException is. A number of other third-party packages were
importing the exceptions from coverage.misc, so they are now available from
there again (`issue 1226`_).
- Changed an internal detail of how tomli is imported, so that tomli can use
coverage.py for their own test suite (`issue 1228`_).
- Defend against an obscure possibility under code obfuscation, where a
function can have an argument called "self", but no local named "self"
(`pull request 1210`_). Thanks, Ben Carlsson.
.. _pull request 1210: https://github.com/nedbat/coveragepy/pull/1210
.. _issue 1226: https://github.com/nedbat/coveragepy/issues/1226
.. _issue 1228: https://github.com/nedbat/coveragepy/issues/1228
.. _changes_60:

Version 6.0 — 2021-10-03
------------------------

- The ``coverage html`` command now prints a message indicating where the HTML
report was written. Fixes `issue 1195`_.
- The ``coverage combine`` command now prints messages indicating each data
file being combined. Fixes `issue 1105`_.
- The HTML report now includes a sentence about skipped files due to
``skip_covered`` or ``skip_empty`` settings. Fixes `issue 1163`_.
- Unrecognized options in the configuration file are no longer errors. They are
now warnings, to ease the use of coverage across versions. Fixes `issue
1035`_.
- Fix handling of exceptions through context managers in Python 3.10. A missing
exception is no longer considered a missing branch from the with statement.
Fixes `issue 1205`_.
- Fix another rarer instance of "Error binding parameter 0 - probably
unsupported type." (`issue 1010 <issue 1010b_>`_).
- Creating a directory for the coverage data file now is safer against
conflicts when two coverage runs happen simultaneously (`pull 1220`_).
Thanks, Clément Pit-Claudel.
.. _issue 1010b: https://github.com/nedbat/coveragepy/issues/1010
.. _issue 1035: https://github.com/nedbat/coveragepy/issues/1035
.. _issue 1105: https://github.com/nedbat/coveragepy/issues/1105
.. _issue 1163: https://github.com/nedbat/coveragepy/issues/1163
.. _issue 1195: https://github.com/nedbat/coveragepy/issues/1195
.. _issue 1205: https://github.com/nedbat/coveragepy/issues/1205
.. _pull 1220: https://github.com/nedbat/coveragepy/pull/1220
.. _changes_60b1:

Version 6.0b1 — 2021-07-18
--------------------------

- Dropped support for Python 2.7, PyPy 2, and Python 3.5.
- Added support for the Python 3.10 ``match/case`` syntax.
- Data collection is now thread-safe. There may have been rare instances of
exceptions raised in multi-threaded programs.
- Plugins (like the `Django coverage plugin`_) were generating "Already
imported a file that will be measured" warnings about Django itself. These
have been fixed, closing `issue 1150`_.
- Warnings generated by coverage.py are now real Python warnings.
- Using ``--fail-under=100`` with coverage near 100% could result in the
self-contradictory message :code:`total of 100 is less than fail-under=100`.
This bug (`issue 1168`_) is now fixed.
- The ``COVERAGE_DEBUG_FILE`` environment variable now accepts ``stdout`` and
``stderr`` to write to those destinations.
- TOML parsing now uses the `tomli`_ library.
- Some minor changes to usually invisible details of the HTML report:
- Use a modern hash algorithm when fingerprinting, for high-security
environments (`issue 1189`_). When generating the HTML report, we save the
hash of the data, to avoid regenerating an unchanged HTML page. We used to
use MD5 to generate the hash, and now use SHA-3-256. This was never a
security concern, but security scanners would notice the MD5 algorithm and
raise a false alarm.
- Change how report file names are generated, to avoid leading underscores
(`issue 1167`_), to avoid rare file name collisions (`issue 584`_), and to
avoid file names becoming too long (`issue 580`_).
.. _Django coverage plugin: https://pypi.org/project/django-coverage-plugin/
.. _issue 580: https://github.com/nedbat/coveragepy/issues/580
.. _issue 584: https://github.com/nedbat/coveragepy/issues/584
.. _issue 1150: https://github.com/nedbat/coveragepy/issues/1150
.. _issue 1167: https://github.com/nedbat/coveragepy/issues/1167
.. _issue 1168: https://github.com/nedbat/coveragepy/issues/1168
.. _issue 1189: https://github.com/nedbat/coveragepy/issues/1189
.. _tomli: https://pypi.org/project/tomli/
.. _changes_56b1:

Version 5.6b1 — 2021-04-13
--------------------------

Note: 5.6 final was never released. These changes are part of 6.0.
- Third-party packages are now ignored in coverage reporting. This solves a
few problems:
- Coverage will no longer report about other people's code (`issue 876`_).
This is true even when using ``--source=.`` with a venv in the current
directory.
- Coverage will no longer generate "Already imported a file that will be
measured" warnings about coverage itself (`issue 905`_).
- The HTML report uses j/k to move up and down among the highlighted chunks of
code. They used to highlight the current chunk, but 5.0 broke that behavior.
Now the highlighting is working again.
- The JSON report now includes ``percent_covered_display``, a string with the
total percentage, rounded to the same number of decimal places as the other
reports' totals.
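
As a sketch, reading the new field from a report's totals; the JSON fragment
here is made up to mirror the report's shape, not real output:

```python
import json

# A made-up fragment shaped like the "totals" section of JSON report
# output; a real file would come from running the json report command.
report = json.loads('{"totals": {"percent_covered": 85.7142857142857, '
                    '"percent_covered_display": "86"}}')

# The display string is pre-rounded, so it can be shown verbatim.
print(report["totals"]["percent_covered_display"] + "%")
```
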
.. _issue 876: https://github.com/nedbat/coveragepy/issues/876
.. _issue 905: https://github.com/nedbat/coveragepy/issues/905
.. _changes_55:

Version 5.5 — 2021-02-28
------------------------

- ``coverage combine`` has a new option, ``--keep``, to keep the original data
files after combining them. The default is still to delete the files after
they have been combined. This was requested in `issue 1108`_ and implemented
in `pull request 1110`_. Thanks, Éric Larivière.
- When reporting missing branches in ``coverage report``, branches aren't
reported that jump to missing lines. This adds to the long-standing behavior
of not reporting branches from missing lines. Now branches are only reported
if both the source and destination lines are executed. Closes both `issue
1065`_ and `issue 955`_.
- Minor improvements to the HTML report:
- The state of the line visibility selector buttons is saved in local storage
so you don't have to fiddle with them so often, fixing `issue 1123`_.
- It has a little more room for line numbers so that 4-digit numbers work
well, fixing `issue 1124`_.
- Improved the error message when combining line and branch data, so that users
will be more likely to understand what's happening, closing `issue 803`_.
.. _issue 803: https://github.com/nedbat/coveragepy/issues/803
.. _issue 955: https://github.com/nedbat/coveragepy/issues/955
.. _issue 1065: https://github.com/nedbat/coveragepy/issues/1065
.. _issue 1108: https://github.com/nedbat/coveragepy/issues/1108
.. _pull request 1110: https://github.com/nedbat/coveragepy/pull/1110
.. _issue 1123: https://github.com/nedbat/coveragepy/issues/1123
.. _issue 1124: https://github.com/nedbat/coveragepy/issues/1124
.. _changes_54:

Version 5.4 — 2021-01-24
------------------------

- The text report produced by ``coverage report`` now always outputs a TOTAL
line, even if only one Python file is reported. This makes regex parsing
of the output easier. Thanks, Judson Neer. This had been requested a number
of times (`issue 1086`_, `issue 922`_, `issue 732`_).
- The ``skip_covered`` and ``skip_empty`` settings in the configuration file
can now be specified in the ``[html]`` section, so that text reports and HTML
reports can use separate settings. The HTML report will still use the
``[report]`` settings if there isn't a value in the ``[html]`` section.
Closes `issue 1090`_.
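
A sketch of the split this enables, keeping the text report terse while the
HTML report stays complete:

```ini
# .coveragerc -- the text report skips fully covered files; the [html]
# value overrides the [report] fallback so the HTML report keeps them
[report]
skip_covered = true

[html]
skip_covered = false
```
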
- Combining files on Windows across drives now works properly, fixing `issue
577`_. Thanks, `Valentin Lab <pr1080_>`_.
- Fix an obscure warning from deep in the _decimal module, as reported in
`issue 1084`_.
- Update to support Python 3.10 alphas in progress, including `PEP 626: Precise
line numbers for debugging and other tools <pep626_>`_.
.. _issue 577: https://github.com/nedbat/coveragepy/issues/577
.. _issue 732: https://github.com/nedbat/coveragepy/issues/732
.. _issue 922: https://github.com/nedbat/coveragepy/issues/922
.. _issue 1084: https://github.com/nedbat/coveragepy/issues/1084
.. _issue 1086: https://github.com/nedbat/coveragepy/issues/1086
.. _issue 1090: https://github.com/nedbat/coveragepy/issues/1090
.. _pr1080: https://github.com/nedbat/coveragepy/pull/1080
.. _pep626: https://www.python.org/dev/peps/pep-0626/
.. _changes_531:

Version 5.3.1 — 2020-12-19
--------------------------

- When using ``--source`` on a large source tree, v5.x was slower than previous
versions. This performance regression is now fixed, closing `issue 1037`_.
- Mysterious SQLite errors can happen on PyPy, as reported in `issue 1010`_. An
immediate retry seems to fix the problem, although it is an unsatisfying
solution.
- The HTML report now saves the sort order in a more widely supported way,
fixing `issue 986`_. Thanks, Sebastián Ramírez (`pull request 1066`_).
- The HTML report pages now have a :ref:`Sleepy Snake ` favicon.
- Wheels are now provided for manylinux2010, and for PyPy3 (pp36 and pp37).
- Continuous integration has moved from Travis and AppVeyor to GitHub Actions.
.. _issue 986: https://github.com/nedbat/coveragepy/issues/986
.. _issue 1037: https://github.com/nedbat/coveragepy/issues/1037
.. _issue 1010: https://github.com/nedbat/coveragepy/issues/1010
.. _pull request 1066: https://github.com/nedbat/coveragepy/pull/1066
.. _changes_53:

Version 5.3 — 2020-09-13
------------------------

- The ``source`` setting has always been interpreted as either a file path or a
module, depending on which existed. If both interpretations were valid, it
was assumed to be a file path. The new ``source_pkgs`` setting can be used
to name a package to disambiguate this case. Thanks, Thomas Grainger. Fixes
`issue 268`_.
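
For instance, with a hypothetical package named ``mypkg`` that shares its
name with a subdirectory, the new setting forces the package interpretation:

```ini
# .coveragerc -- "mypkg" is a hypothetical name; source_pkgs makes
# coverage treat it as an importable package, never as a file path
[run]
source_pkgs = mypkg
```
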
- If a plugin was disabled due to an exception, we used to still try to record
its information, causing an exception, as reported in `issue 1011`_. This is
now fixed.
.. _issue 268: https://github.com/nedbat/coveragepy/issues/268
.. _issue 1011: https://github.com/nedbat/coveragepy/issues/1011
.. scriv-end-here
Older changes
-------------
The complete history is available in the `coverage.py docs`__.
__ https://coverage.readthedocs.io/en/latest/changes.html
coverage-7.4.4/CITATION.cff
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
cff-version: 1.2.0
title: "Coverage.py: The code coverage tool for Python"
message: >-
If you use this software, please cite it using the metadata from this file.
type: software
authors:
- family-names: Batchelder
given-names: Ned
orcid: https://orcid.org/0009-0006-2659-884X
- name: "Contributors to Coverage.py"
repository-code: "https://github.com/nedbat/coveragepy"
url: "https://coverage.readthedocs.io/"
abstract: >-
Coverage.py is a tool for measuring code coverage of Python programs. It monitors your program,
noting which parts of the code have been executed, then analyzes the source to identify code
that could have been executed but was not.
Coverage measurement is typically used to gauge the effectiveness of tests. It can show which
parts of your code are being exercised by tests, and which are not.
license: Apache-2.0
coverage-7.4.4/CONTRIBUTORS.txt
Coverage.py was originally written by Gareth Rees, and since 2004 has been
extended and maintained by Ned Batchelder.
Other contributions, including writing code, updating docs, and submitting
useful bug reports, have been made by:
Abdeali Kothari
Adi Roiban
Agbonze O. Jeremiah
Albertas Agejevas
Aleksi Torhamo
Alex Gaynor
Alex Groce
Alex Sandro
Alexander Todorov
Alexander Walters
Alpha Chen
Ammar Askar
Anders Kaseorg
Andrew Hoos
Anthony Sottile
Arcadiy Ivanov
Aron Griffis
Artem Dayneko
Arthur Deygin
Arthur Rio
Asher Foa
Ben Carlsson
Ben Finney
Benjamin Parzella
Benjamin Schubert
Bernát Gábor
Bill Hart
Brad Smith
Bradley Burns
Brandon Rhodes
Brett Cannon
Brian Grohe
Bruno Oliveira
Bruno P. Kinoshita
Bruno Rodrigues dos Santos
Buck Evan
Buck Golemon
Calen Pennington
Carl Friedrich Bolz-Tereick
Carl Gieringer
Catherine Proulx
Charles Chan
Chris Adams
Chris Jerdonek
Chris Rose
Chris Warrick
Christian Clauss
Christian Heimes
Christine Lytwynec
Christoph Blessing
Christoph Zwerschke
Christopher Pickering
Clément Pit-Claudel
Conrad Ho
Cosimo Lupo
Dan Hemberger
Dan Riti
Dan Wandschneider
Danek Duvall
Daniel Hahler
Danny Allen
David Christian
David MacIver
David Stanek
David Szotten
Dennis Sweeney
Detlev Offenbach
Devin Jeanpierre
Dirk Thomas
Dmitry Shishov
Dmitry Trofimov
Edgar Ramírez Mondragón
Eduardo Schettino
Edward Loper
Eli Skeggs
Emil Madsen
Éric Larivière
Federico Bond
Felix Horvat
Frazer McLean
Geoff Bache
George Paci
George Song
George-Cristian Bîrzan
Greg Rogers
Guido van Rossum
Guillaume Chazarain
Holger Krekel
Hugo van Kemenade
Ian Moore
Ilia Meerovich
Imri Goldberg
Ionel Cristian Mărieș
Ivan Ciuvalschii
J. M. F. Tsang
JT Olds
Jacqueline Lee
Jakub Wilk
Jan Rusak
Janakarajan Natarajan
Jerin Peter George
Jessamyn Smith
Joanna Ejzel
Joe Doherty
Joe Jevnik
John Vandenberg
Jon Chappell
Jon Dufresne
Joseph Tate
Josh Williams
Judson Neer
Julian Berman
Julien Voisin
Justas Sadzevičius
Karthikeyan Singaravelan
Kassandra Keeton
Ken Schackart
Kevin Brown-Silva
Kjell Braden
Krystian Kichewko
Kyle Altendorf
Lars Hupfeldt Nielsen
Latrice Wilgus
Leonardo Pistone
Lewis Gaul
Lex Berezhny
Loïc Dachary
Lorenzo Micò
Louis Heredero
Luis Nell
Łukasz Stolcman
Maciej Kowalczyk
Manuel Jacob
Marc Abramowitz
Marc Gibbons
Marc Legendre
Marcelo Trylesinski
Marcus Cobden
Mariatta
Marius Gedminas
Mark van der Wal
Martin Fuzzey
Mathieu Kniewallner
Matt Bachmann
Matthew Boehm
Matthew Desmarais
Matus Valo
Max Linke
Mayank Singhal
Michael Bell
Michael Krebs
Michał Bultrowicz
Michał Górny
Mickie Betz
Mike Fiedler
Min ho Kim
Nathan Land
Naveen Srinivasan
Naveen Yadav
Neil Pilgrim
Nicholas Nadeau
Nikita Bloshchanevich
Nikita Sobolev
Nils Kattenbeck
Noel O'Boyle
Oleg Höfling
Oleh Krehel
Olivier Grisel
Ori Avtalion
Pablo Carballo
Pankaj Pandey
Patrick Mezard
Pavel Tsialnou
Peter Baughman
Peter Ebden
Peter Portante
Phebe Polk
Reya B
Ricardo Newbery
Robert Harris
Rodrigue Cloutier
Roger Hu
Roland Illig
Ross Lawley
Roy Williams
Russell Keith-Magee
S. Y. Lee
Salvatore Zagaria
Sandra Martocchia
Scott Belden
Sebastián Ramírez
Sergey B Kirpichev
Shantanu
Sigve Tjora
Simon Willison
Stan Hu
Stanisław Pitucha
Stefan Behnel
Stephan Deibel
Stephan Richter
Stephen Finucane
Steve Dower
Steve Leonard
Steve Oswald
Steve Peak
Sviatoslav Sydorenko
Tanaydin Sirin
Teake Nutma
Ted Wexler
Thijs Triemstra
Thomas Grainger
Timo Furrer
Titus Brown
Tom Gurion
Valentin Lab
Ville Skyttä
Vince Salvino
Wonwin McBrootles
Xie Yanbo
Yilei "Dolee" Yang
Yury Selivanov
Zac Hatfield-Dodds
Zooko Wilcox-O'Hearn
coverage-7.4.4/LICENSE.txt
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
coverage-7.4.4/MANIFEST.in
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
# MANIFEST.in file for coverage.py
# This file includes everything needed to recreate the entire project, even
# though many of these files are not installed by setup.py. Unpacking the
# .tar.gz source distribution would give you everything needed to continue
# developing the project. "pip install" will not install many of these files.
include .editorconfig
include .git-blame-ignore-revs
include .readthedocs.yaml
include CHANGES.rst
include CITATION.cff
include CONTRIBUTORS.txt
include LICENSE.txt
include MANIFEST.in
include Makefile
include NOTICE.txt
include README.rst
include __main__.py
include howto.txt
include igor.py
include metacov.ini
include setup.py
include tox.ini
recursive-include ci *
recursive-include lab *
recursive-include .github *
recursive-include coverage *.pyi
recursive-include coverage/ctracer *.c *.h
recursive-include doc *.py *.in *.pip *.rst *.txt *.png
recursive-include doc/_static *
prune doc/_build
prune doc/_spell
recursive-include requirements *.in *.pip
recursive-include tests *.py *.tok
recursive-include tests/gold *
recursive-include tests js/* qunit/*
prune tests/eggsrc/build
coverage-7.4.4/Makefile
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
# Makefile for utility work on coverage.py.
.DEFAULT_GOAL := help
##@ Utilities
.PHONY: help clean_platform clean sterile
help: ## Show this help.
@# Adapted from https://www.thapaliya.com/en/writings/well-documented-makefiles/
@echo Available targets:
@awk -F ':.*##' '/^[^: ]+:.*##/{printf " \033[1m%-20s\033[m %s\n",$$1,$$2} /^##@/{printf "\n%s\n",substr($$0,5)}' $(MAKEFILE_LIST)
_clean_platform:
@rm -f *.so */*.so
@rm -f *.pyd */*.pyd
@rm -rf __pycache__ */__pycache__ */*/__pycache__ */*/*/__pycache__ */*/*/*/__pycache__ */*/*/*/*/__pycache__
@rm -f *.pyc */*.pyc */*/*.pyc */*/*/*.pyc */*/*/*/*.pyc */*/*/*/*/*.pyc
@rm -f *.pyo */*.pyo */*/*.pyo */*/*/*.pyo */*/*/*/*.pyo */*/*/*/*/*.pyo
@rm -f *$$py.class */*$$py.class */*/*$$py.class */*/*/*$$py.class */*/*/*/*$$py.class */*/*/*/*/*$$py.class
debug_clean: ## Delete various debugging artifacts.
@rm -rf /tmp/dis $$COVERAGE_DEBUG_FILE
clean: debug_clean _clean_platform ## Remove artifacts of test execution, installation, etc.
@echo "Cleaning..."
@-pip uninstall -yq coverage
@mkdir -p build # so the chmod won't fail if build doesn't exist
@chmod -R 777 build
@rm -rf build coverage.egg-info dist htmlcov
@rm -f *.bak */*.bak */*/*.bak */*/*/*.bak */*/*/*/*.bak */*/*/*/*/*.bak
@rm -f coverage/*,cover
@rm -f MANIFEST
@rm -f .coverage .coverage.* .metacov*
@rm -f coverage.xml coverage.json
@rm -f .tox/*/lib/*/site-packages/zzz_metacov.pth
@rm -f */.coverage */*/.coverage */*/*/.coverage */*/*/*/.coverage */*/*/*/*/.coverage */*/*/*/*/*/.coverage
@rm -f tests/covmain.zip tests/zipmods.zip tests/zip1.zip
@rm -rf doc/_build doc/_spell doc/sample_html_beta
@rm -rf tmp
@rm -rf .*cache */.*cache */*/.*cache */*/*/.*cache .hypothesis
@rm -rf tests/actual
@-make -C tests/gold/html clean
sterile: clean ## Remove all non-controlled content, even if expensive.
rm -rf .tox
rm -f cheats.txt
##@ Tests and quality checks
.PHONY: lint smoke
lint: ## Run linters and checkers.
tox -q -e lint
PYTEST_SMOKE_ARGS = -n auto -m "not expensive" --maxfail=3 $(ARGS)
smoke: ## Run tests quickly with the C tracer in the lowest supported Python versions.
COVERAGE_TEST_CORES=ctrace tox -q -e py38 -- $(PYTEST_SMOKE_ARGS)
##@ Metacov: coverage measurement of coverage.py itself
# See metacov.ini for details.
.PHONY: metacov metahtml metasmoke
metacov: ## Run meta-coverage, measuring ourself.
COVERAGE_COVERAGE=yes tox -q $(ARGS)
metahtml: ## Produce meta-coverage HTML reports.
python igor.py combine_html
metasmoke:
COVERAGE_TEST_CORES=ctrace ARGS="-e py39" make metacov metahtml
##@ Requirements management
# When updating requirements, a few rules to follow:
#
# 1) Don't install more than one .pip file at once. Always use pip-compile to
# combine .in files onto a single .pip file that can be installed where needed.
#
# 2) Check manual pins before `make upgrade` to see if they can be removed. Look
# in requirements/pins.pip, and search for "windows" in .in files to find pins
# and extra requirements that have been needed, but might be obsolete.
.PHONY: upgrade doc_upgrade diff_upgrade
DOCBIN = .tox/doc/bin
PIP_COMPILE = pip-compile ${COMPILE_OPTS} --allow-unsafe --resolver=backtracking
upgrade: ## Update the *.pip files with the latest packages satisfying *.in files.
$(MAKE) _upgrade COMPILE_OPTS="--upgrade"
upgrade_one: ## Update the *.pip files for one package. `make upgrade_one package=...`
@test -n "$(package)" || { echo "\nUsage: make upgrade_one package=...\n"; exit 1; }
$(MAKE) _upgrade COMPILE_OPTS="--upgrade-package $(package)"
_upgrade: export CUSTOM_COMPILE_COMMAND=make upgrade
_upgrade:
pip install -q -r requirements/pip-tools.pip
$(PIP_COMPILE) -o requirements/pip-tools.pip requirements/pip-tools.in
$(PIP_COMPILE) -o requirements/pip.pip requirements/pip.in
$(PIP_COMPILE) -o requirements/pytest.pip requirements/pytest.in
$(PIP_COMPILE) -o requirements/kit.pip requirements/kit.in
$(PIP_COMPILE) -o requirements/tox.pip requirements/tox.in
$(PIP_COMPILE) -o requirements/dev.pip requirements/dev.in
$(PIP_COMPILE) -o requirements/light-threads.pip requirements/light-threads.in
$(PIP_COMPILE) -o requirements/mypy.pip requirements/mypy.in
doc_upgrade: export CUSTOM_COMPILE_COMMAND=make doc_upgrade
doc_upgrade: $(DOCBIN) ## Update the doc/requirements.pip file
$(DOCBIN)/pip install -q -r requirements/pip-tools.pip
$(DOCBIN)/$(PIP_COMPILE) --upgrade -o doc/requirements.pip doc/requirements.in
diff_upgrade: ## Summarize the last `make upgrade`
@# The sort flags sort by the package name first, then by the -/+, and
@# sort by version numbers, so we get a summary with lines like this:
@# -bashlex==0.16
@# +bashlex==0.17
@# -build==0.9.0
@# +build==0.10.0
@git diff -U0 | grep -v '^@' | grep == | sort -k1.2,1.99 -k1.1,1.1r -u -V
##@ Pre-builds for prepping the code
.PHONY: css workflows prebuild
CSS = coverage/htmlfiles/style.css
SCSS = coverage/htmlfiles/style.scss
css: $(CSS) ## Compile .scss into .css.
$(CSS): $(SCSS)
pysassc --style=compact $(SCSS) $@
cp $@ tests/gold/html/styled
workflows: ## Run cog on the workflows to keep them up-to-date.
python -m cogapp -crP .github/workflows/*.yml
prebuild: css workflows cogdoc ## One command for all source prep.
##@ Sample HTML reports
.PHONY: _sample_cog_html sample_html sample_html_beta
_sample_cog_html: clean
python -m pip install -e .
cd ~/cog; \
rm -rf htmlcov; \
PYTEST_ADDOPTS= coverage run --branch --source=cogapp -m pytest -k CogTestsInMemory; \
coverage combine; \
coverage html
sample_html: _sample_cog_html ## Generate sample HTML report.
rm -f doc/sample_html/*.*
cp -r ~/cog/htmlcov/ doc/sample_html/
rm doc/sample_html/.gitignore
sample_html_beta: _sample_cog_html ## Generate sample HTML report for a beta release.
rm -f doc/sample_html_beta/*.*
cp -r ~/cog/htmlcov/ doc/sample_html_beta/
rm doc/sample_html_beta/.gitignore
##@ Kitting: making releases
.PHONY: edit_for_release cheats relbranch relcommit1 relcommit2
.PHONY: kit kit_upload test_upload kit_local build_kits download_kits check_kits
.PHONY: tag bump_version
REPO_OWNER = nedbat/coveragepy
edit_for_release: #: Edit sources to insert release facts (see howto.txt).
python igor.py edit_for_release
cheats: ## Create some useful snippets for releasing.
python igor.py cheats | tee cheats.txt
relbranch: #: Create the branch for releasing (see howto.txt).
git switch -c nedbat/release-$$(date +%Y%m%d)
relcommit1: #: Commit the first release changes (see howto.txt).
git commit -am "docs: prep for $$(python setup.py --version)"
relcommit2: #: Commit the latest sample HTML report (see howto.txt).
git commit -am "docs: sample HTML for $$(python setup.py --version)"
kit: ## Make the source distribution.
python -m build
kit_upload: ## Upload the built distributions to PyPI.
twine upload --verbose dist/*
test_upload: ## Upload the distributions to PyPI's testing server.
twine upload --verbose --repository testpypi --password $$TWINE_TEST_PASSWORD dist/*
kit_local:
# pip.conf looks like this:
# [global]
# find-links = file:///Users/ned/Downloads/local_pypi
cp -v dist/* `awk -F "//" '/find-links/ {print $$2}' ~/.pip/pip.conf`
# pip caches wheels of things it has installed. Clean them out so we
# don't go crazy trying to figure out why our new code isn't installing.
find ~/Library/Caches/pip/wheels -name 'coverage-*' -delete
build_kits: ## Trigger GitHub to build kits
python ci/trigger_build_kits.py $(REPO_OWNER)
download_kits: ## Download the built kits from GitHub.
python ci/download_gha_artifacts.py $(REPO_OWNER) 'dist-*' dist
check_kits: ## Check that dist/* are well-formed.
python -m twine check dist/*
@echo $$(ls -1 dist | wc -l) distribution kits
tag: #: Make a git tag with the version number (see howto.txt).
git tag -s -m "Version $$(python setup.py --version)" $$(python setup.py --version)
git push --follow-tags
bump_version: #: Edit sources to bump the version after a release (see howto.txt).
git switch -c nedbat/bump-version
python igor.py bump_version
git commit -a -m "build: bump version"
git push -u origin @
##@ Documentation
.PHONY: cogdoc dochtml docdev docspell
SPHINXOPTS = -aE
SPHINXBUILD = $(DOCBIN)/sphinx-build $(SPHINXOPTS)
SPHINXAUTOBUILD = $(DOCBIN)/sphinx-autobuild --port 9876 --ignore '.git/**' --open-browser
$(DOCBIN):
tox -q -e doc --notest
cogdoc: $(DOCBIN) ## Run docs through cog.
$(DOCBIN)/python -m cogapp -crP --verbosity=1 doc/*.rst
dochtml: cogdoc $(DOCBIN) ## Build the docs HTML output.
$(SPHINXBUILD) -b html doc doc/_build/html
docdev: dochtml ## Build docs, and auto-watch for changes.
PATH=$(DOCBIN):$(PATH) $(SPHINXAUTOBUILD) -b html doc doc/_build/html
docspell: $(DOCBIN) ## Run the spell checker on the docs.
# Very mac-specific...
PYENCHANT_LIBRARY_PATH=/opt/homebrew/lib/libenchant-2.dylib $(SPHINXBUILD) -b spelling doc doc/_spell
##@ Publishing docs
.PHONY: publish publishbeta relnotes_json github_releases comment_on_fixes
WEBHOME = ~/web/stellated
WEBSAMPLE = $(WEBHOME)/files/sample_coverage_html
WEBSAMPLEBETA = $(WEBHOME)/files/sample_coverage_html_beta
publish: ## Publish the sample HTML report.
rm -f $(WEBSAMPLE)/*.*
mkdir -p $(WEBSAMPLE)
cp doc/sample_html/*.* $(WEBSAMPLE)
publishbeta:
rm -f $(WEBSAMPLEBETA)/*.*
mkdir -p $(WEBSAMPLEBETA)
cp doc/sample_html_beta/*.* $(WEBSAMPLEBETA)
CHANGES_MD = tmp/rst_rst/changes.md
RELNOTES_JSON = tmp/relnotes.json
$(CHANGES_MD): CHANGES.rst $(DOCBIN)
$(SPHINXBUILD) -b rst doc tmp/rst_rst
pandoc -frst -tmarkdown_strict --markdown-headings=atx --wrap=none tmp/rst_rst/changes.rst > $(CHANGES_MD)
relnotes_json: $(RELNOTES_JSON) ## Convert changelog to JSON for further parsing.
$(RELNOTES_JSON): $(CHANGES_MD)
$(DOCBIN)/python ci/parse_relnotes.py tmp/rst_rst/changes.md $(RELNOTES_JSON)
github_releases: $(DOCBIN) ## Update GitHub releases.
$(DOCBIN)/python -m scriv github-release --all
comment_on_fixes: $(RELNOTES_JSON) ## Add a comment to issues that were fixed.
python ci/comment_on_fixes.py $(REPO_OWNER)
coverage-7.4.4/NOTICE.txt
Copyright 2001 Gareth Rees. All rights reserved.
Copyright 2004-2024 Ned Batchelder. All rights reserved.
Except where noted otherwise, this software is licensed under the Apache
License, Version 2.0 (the "License"); you may not use this work except in
compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
coverage-7.4.4/PKG-INFO
Metadata-Version: 2.1
Name: coverage
Version: 7.4.4
Summary: Code coverage measurement for Python
Home-page: https://github.com/nedbat/coveragepy
Author: Ned Batchelder and 224 others
Author-email: ned@nedbatchelder.com
License: Apache-2.0
Project-URL: Documentation, https://coverage.readthedocs.io/en/7.4.4
Project-URL: Funding, https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=pypi
Project-URL: Issues, https://github.com/nedbat/coveragepy/issues
Project-URL: Mastodon, https://hachyderm.io/@coveragepy
Project-URL: Mastodon (nedbat), https://hachyderm.io/@nedbat
Keywords: code coverage testing
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.8
Description-Content-Type: text/x-rst
License-File: LICENSE.txt
Provides-Extra: toml
Requires-Dist: tomli; python_full_version <= "3.11.0a6" and extra == "toml"
.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
===========
Coverage.py
===========
Code coverage testing for Python.
.. image:: https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg
:target: https://vshymanskyy.github.io/StandWithUkraine
:alt: Stand with Ukraine
-------------
| |kit| |license| |versions|
| |test-status| |quality-status| |docs| |metacov|
| |tidelift| |sponsor| |stars| |mastodon-coveragepy| |mastodon-nedbat|
Coverage.py measures code coverage, typically during test execution. It uses
the code analysis tools and tracing hooks provided in the Python standard
library to determine which lines are executable, and which have been executed.
Coverage.py runs on these versions of Python:
.. PYVERSIONS
* Python 3.8 through 3.12, and 3.13.0a3 and up.
* PyPy3 versions 3.8 through 3.10.
Documentation is on `Read the Docs`_. Code repository and issue tracker are on
`GitHub`_.
.. _Read the Docs: https://coverage.readthedocs.io/en/7.4.4/
.. _GitHub: https://github.com/nedbat/coveragepy
**New in 7.x:**
experimental support for sys.monitoring;
dropped support for Python 3.7;
added ``Coverage.collect()`` context manager;
improved data combining;
``[run] exclude_also`` setting;
``report --format=``;
type annotations.
**New in 6.x:**
dropped support for Python 2.7, 3.5, and 3.6;
write data on SIGTERM;
added support for 3.10 match/case statements.
For Enterprise
--------------
.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png
:alt: Tidelift
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
.. list-table::
:widths: 10 100
* - |tideliftlogo|
- `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Coverage and thousands of other packages are working with
Tidelift to deliver one enterprise subscription that covers all of the open
source you use. If you want the flexibility of open source and the confidence
of commercial-grade software, this is for you.
`Learn more. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Getting Started
---------------
Looking to run ``coverage`` on your test suite? See the `Quick Start section`_
of the docs.
.. _Quick Start section: https://coverage.readthedocs.io/en/7.4.4/#quick-start
Change history
--------------
The complete history of changes is on the `change history page`_.
.. _change history page: https://coverage.readthedocs.io/en/7.4.4/changes.html
Code of Conduct
---------------
Everyone participating in the coverage.py project is expected to treat other
people with respect and to follow the guidelines articulated in the `Python
Community Code of Conduct`_.
.. _Python Community Code of Conduct: https://www.python.org/psf/codeofconduct/
Contributing
------------
Found a bug? Want to help improve the code or documentation? See the
`Contributing section`_ of the docs.
.. _Contributing section: https://coverage.readthedocs.io/en/7.4.4/contributing.html
Security
--------
To report a security vulnerability, please use the `Tidelift security
contact`_. Tidelift will coordinate the fix and disclosure.
.. _Tidelift security contact: https://tidelift.com/security
License
-------
Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_.
.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0
.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
.. |test-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml/badge.svg?branch=master&event=push
:target: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml
:alt: Test suite status
.. |quality-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml/badge.svg?branch=master&event=push
:target: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml
:alt: Quality check status
.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
:target: https://coverage.readthedocs.io/en/7.4.4/
:alt: Documentation
.. |kit| image:: https://img.shields.io/pypi/v/coverage
:target: https://pypi.org/project/coverage/
:alt: PyPI status
.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072
:target: https://pypi.org/project/coverage/
:alt: Python versions supported
.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: License
.. |metacov| image:: https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/nedbat/8c6980f77988a327348f9b02bbaf67f5/raw/metacov.json
:target: https://nedbat.github.io/coverage-reports/latest.html
:alt: Coverage reports
.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
:alt: Tidelift
.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/stargazers
:alt: GitHub stars
.. |mastodon-nedbat| image:: https://img.shields.io/badge/dynamic/json?style=flat&labelColor=450657&logo=mastodon&logoColor=ffffff&link=https%3A%2F%2Fhachyderm.io%2F%40nedbat&url=https%3A%2F%2Fhachyderm.io%2Fusers%2Fnedbat%2Ffollowers.json&query=totalItems&label=@nedbat
:target: https://hachyderm.io/@nedbat
:alt: nedbat on Mastodon
.. |mastodon-coveragepy| image:: https://img.shields.io/badge/dynamic/json?style=flat&labelColor=450657&logo=mastodon&logoColor=ffffff&link=https%3A%2F%2Fhachyderm.io%2F%40coveragepy&url=https%3A%2F%2Fhachyderm.io%2Fusers%2Fcoveragepy%2Ffollowers.json&query=totalItems&label=@coveragepy
:target: https://hachyderm.io/@coveragepy
:alt: coveragepy on Mastodon
.. |sponsor| image:: https://img.shields.io/badge/%E2%9D%A4-Sponsor%20me-brightgreen?style=flat&logo=GitHub
:target: https://github.com/sponsors/nedbat
:alt: Sponsor me on GitHub
coverage-7.4.4/README.rst
.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
===========
Coverage.py
===========
Code coverage testing for Python.
.. image:: https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg
:target: https://vshymanskyy.github.io/StandWithUkraine
:alt: Stand with Ukraine
-------------
| |kit| |license| |versions|
| |test-status| |quality-status| |docs| |metacov|
| |tidelift| |sponsor| |stars| |mastodon-coveragepy| |mastodon-nedbat|
Coverage.py measures code coverage, typically during test execution. It uses
the code analysis tools and tracing hooks provided in the Python standard
library to determine which lines are executable, and which have been executed.
Coverage.py runs on these versions of Python:
.. PYVERSIONS
* Python 3.8 through 3.12, and 3.13.0a3 and up.
* PyPy3 versions 3.8 through 3.10.
Documentation is on `Read the Docs`_. Code repository and issue tracker are on
`GitHub`_.
.. _Read the Docs: https://coverage.readthedocs.io/
.. _GitHub: https://github.com/nedbat/coveragepy
**New in 7.x:**
experimental support for sys.monitoring;
dropped support for Python 3.7;
added ``Coverage.collect()`` context manager;
improved data combining;
``[run] exclude_also`` setting;
``report --format=``;
type annotations.
**New in 6.x:**
dropped support for Python 2.7, 3.5, and 3.6;
write data on SIGTERM;
added support for 3.10 match/case statements.
For Enterprise
--------------
.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png
:alt: Tidelift
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
.. list-table::
:widths: 10 100
* - |tideliftlogo|
- `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Coverage and thousands of other packages are working with
Tidelift to deliver one enterprise subscription that covers all of the open
source you use. If you want the flexibility of open source and the confidence
of commercial-grade software, this is for you.
       `Learn more. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Getting Started
---------------
Looking to run ``coverage`` on your test suite? See the `Quick Start section`_
of the docs.
.. _Quick Start section: https://coverage.readthedocs.io/#quick-start
Change history
--------------
The complete history of changes is on the `change history page`_.
.. _change history page: https://coverage.readthedocs.io/en/latest/changes.html
Code of Conduct
---------------
Everyone participating in the coverage.py project is expected to treat other
people with respect and to follow the guidelines articulated in the `Python
Community Code of Conduct`_.
.. _Python Community Code of Conduct: https://www.python.org/psf/codeofconduct/
Contributing
------------
Found a bug? Want to help improve the code or documentation? See the
`Contributing section`_ of the docs.
.. _Contributing section: https://coverage.readthedocs.io/en/latest/contributing.html
Security
--------
To report a security vulnerability, please use the `Tidelift security
contact`_. Tidelift will coordinate the fix and disclosure.
.. _Tidelift security contact: https://tidelift.com/security
License
-------
Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_.
.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0
.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
.. |test-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml/badge.svg?branch=master&event=push
:target: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml
:alt: Test suite status
.. |quality-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml/badge.svg?branch=master&event=push
:target: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml
:alt: Quality check status
.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
:target: https://coverage.readthedocs.io/
:alt: Documentation
.. |kit| image:: https://img.shields.io/pypi/v/coverage
:target: https://pypi.org/project/coverage/
:alt: PyPI status
.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072
:target: https://pypi.org/project/coverage/
:alt: Python versions supported
.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: License
.. |metacov| image:: https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/nedbat/8c6980f77988a327348f9b02bbaf67f5/raw/metacov.json
:target: https://nedbat.github.io/coverage-reports/latest.html
:alt: Coverage reports
.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
:alt: Tidelift
.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/stargazers
:alt: GitHub stars
.. |mastodon-nedbat| image:: https://img.shields.io/badge/dynamic/json?style=flat&labelColor=450657&logo=mastodon&logoColor=ffffff&link=https%3A%2F%2Fhachyderm.io%2F%40nedbat&url=https%3A%2F%2Fhachyderm.io%2Fusers%2Fnedbat%2Ffollowers.json&query=totalItems&label=@nedbat
:target: https://hachyderm.io/@nedbat
:alt: nedbat on Mastodon
.. |mastodon-coveragepy| image:: https://img.shields.io/badge/dynamic/json?style=flat&labelColor=450657&logo=mastodon&logoColor=ffffff&link=https%3A%2F%2Fhachyderm.io%2F%40coveragepy&url=https%3A%2F%2Fhachyderm.io%2Fusers%2Fcoveragepy%2Ffollowers.json&query=totalItems&label=@coveragepy
:target: https://hachyderm.io/@coveragepy
:alt: coveragepy on Mastodon
.. |sponsor| image:: https://img.shields.io/badge/%E2%9D%A4-Sponsor%20me-brightgreen?style=flat&logo=GitHub
:target: https://github.com/sponsors/nedbat
:alt: Sponsor me on GitHub
coverage-7.4.4/__main__.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Be able to execute coverage.py by pointing Python at a working tree."""
import runpy
import os
PKG = "coverage"
run_globals = runpy.run_module(PKG, run_name="__main__", alter_sys=True)
executed = os.path.splitext(os.path.basename(run_globals["__file__"]))[0]
coverage-7.4.4/ci/README.txt
Files to support continuous integration systems.
coverage-7.4.4/ci/comment_on_fixes.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Add a release comment to all the issues mentioned in the latest release."""
import json
import re
import sys
from session import get_session
with open("tmp/relnotes.json") as frn:
relnotes = json.load(frn)
latest = relnotes[0]
version = latest["version"]
comment = (
f"This is now released as part of [coverage {version}]" +
f"(https://pypi.org/project/coverage/{version})."
)
print(f"Comment will be:\n\n{comment}\n")
repo_owner = sys.argv[1]
for m in re.finditer(fr"https://github.com/{repo_owner}/(issues|pull)/(\d+)", latest["text"]):
kind, number = m.groups()
do_comment = False
if kind == "issues":
url = f"https://api.github.com/repos/{repo_owner}/issues/{number}"
issue_data = get_session().get(url).json()
if issue_data["state"] == "closed":
do_comment = True
else:
print(f"Still open, comment manually: {m[0]}")
else:
url = f"https://api.github.com/repos/{repo_owner}/pulls/{number}"
pull_data = get_session().get(url).json()
if pull_data["state"] == "closed":
if pull_data["merged"]:
do_comment = True
else:
print(f"Not merged, comment manually: {m[0]}")
else:
print(f"Still open, comment manually: {m[0]}")
if do_comment:
print(f"Commenting on {m[0]}")
url = f"https://api.github.com/repos/{repo_owner}/issues/{number}/comments"
resp = get_session().post(url, json={"body": comment})
print(resp)
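The script above relies on a single regex to pull issue and pull-request references out of the release notes text. A minimal sketch of just that matching step (the sample text and issue numbers are hypothetical):

```python
import re

repo_owner = "nedbat/coveragepy"  # hypothetical owner/repo pair
text = (
    "Fixed in https://github.com/nedbat/coveragepy/issues/1605 and "
    "merged via https://github.com/nedbat/coveragepy/pull/1610."
)

# Same pattern shape as comment_on_fixes.py: capture the kind and the number.
refs = [
    (m.group(1), m.group(2))
    for m in re.finditer(fr"https://github.com/{repo_owner}/(issues|pull)/(\d+)", text)
]
print(refs)  # [('issues', '1605'), ('pull', '1610')]
```

The `kind` group is what steers the script to the issues API versus the pulls API.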
coverage-7.4.4/ci/download_gha_artifacts.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Use the GitHub API to download built artifacts."""
import collections
import datetime
import fnmatch
import operator
import os
import os.path
import sys
import time
import zipfile
from session import get_session
def download_url(url, filename):
"""Download a file from `url` to `filename`."""
response = get_session().get(url, stream=True)
if response.status_code == 200:
with open(filename, "wb") as f:
for chunk in response.iter_content(16*1024):
f.write(chunk)
else:
raise RuntimeError(f"Fetching {url} produced: status={response.status_code}")
def unpack_zipfile(filename):
"""Unpack a zipfile, using the names in the zip."""
with open(filename, "rb") as fzip:
z = zipfile.ZipFile(fzip)
for name in z.namelist():
print(f" extracting {name}")
z.extract(name)
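`unpack_zipfile` extracts members under the names stored in the archive. A small round-trip sketch of the same `zipfile` calls, using a temporary directory and a made-up member name so nothing is left behind:

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "arts.zip")
    # Build a small archive, then extract it the same way unpack_zipfile does.
    with zipfile.ZipFile(archive, "w") as z:
        z.writestr("report/coverage.json", '{"totals": {}}')
    with open(archive, "rb") as fzip:
        z = zipfile.ZipFile(fzip)
        for name in z.namelist():
            z.extract(name, path=tmp)
    extracted = os.path.exists(os.path.join(tmp, "report", "coverage.json"))
print(extracted)  # True
```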
def utc2local(timestring):
"""
Convert a UTC time into local time in a more readable form.
For example: '20201208T122900Z' to '2020-12-08 07:29:00'.
"""
dt = datetime.datetime
utc = dt.fromisoformat(timestring.rstrip("Z"))
epoch = time.mktime(utc.timetuple())
offset = dt.fromtimestamp(epoch) - dt.utcfromtimestamp(epoch)
local = utc + offset
return local.strftime("%Y-%m-%d %H:%M:%S")
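`utc2local` derives the local offset from the epoch rather than using `zoneinfo`, so the exact result depends on the machine's timezone. A sketch that re-states the helper (with a dashed ISO timestring for broad `fromisoformat` compatibility) and checks only the shape of the output:

```python
import datetime
import re
import time

def utc2local(timestring):
    """Port of the helper above: UTC ISO string -> local 'YYYY-MM-DD HH:MM:SS'."""
    dt = datetime.datetime
    utc = dt.fromisoformat(timestring.rstrip("Z"))
    epoch = time.mktime(utc.timetuple())
    offset = dt.fromtimestamp(epoch) - dt.utcfromtimestamp(epoch)
    return (utc + offset).strftime("%Y-%m-%d %H:%M:%S")

local = utc2local("2020-12-08T12:29:00Z")
ok = re.fullmatch(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", local) is not None
print(ok)  # True
```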
def all_items(url, key):
"""
Get all items from a paginated GitHub URL.
`key` is the key in the top-level returned object that has a list of items.
"""
url += ("&" if "?" in url else "?") + "per_page=100"
while url:
response = get_session().get(url)
response.raise_for_status()
data = response.json()
if isinstance(data, dict) and (msg := data.get("message")):
raise RuntimeError(f"URL {url!r} failed: {msg}")
yield from data.get(key, ())
try:
url = response.links.get("next").get("url")
except AttributeError:
url = None
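`all_items` follows the `rel="next"` pagination links that `requests` exposes via `response.links`. The paging loop can be exercised without the network by stubbing the response objects; everything below is hypothetical test scaffolding:

```python
# Hypothetical stand-ins for requests.Response, just enough for the loop.
class FakeResponse:
    def __init__(self, items, next_url=None):
        self._items = items
        self.links = {"next": {"url": next_url}} if next_url else {}
    def json(self):
        return {"artifacts": self._items}

PAGES = {
    "p1": FakeResponse([1, 2], next_url="p2"),
    "p2": FakeResponse([3]),
}

def all_items(url, key):
    """Same shape as the real loop: yield items, then follow the 'next' link."""
    while url:
        data = PAGES[url].json()
        yield from data.get(key, ())
        try:
            url = PAGES[url].links.get("next").get("url")
        except AttributeError:
            url = None  # no "next" link: last page

items = list(all_items("p1", "artifacts"))
print(items)  # [1, 2, 3]
```

The `try`/`AttributeError` mirrors the original: `links.get("next")` returns `None` on the last page, so the chained `.get("url")` raises.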
def main(owner_repo, artifact_pattern, dest_dir):
"""
Download and unzip the latest artifacts matching a pattern.
`owner_repo` is a GitHub pair for the repo, like "nedbat/coveragepy".
`artifact_pattern` is a filename glob for the artifact name.
`dest_dir` is the directory to unpack them into.
"""
# Get all artifacts matching the pattern, grouped by name.
url = f"https://api.github.com/repos/{owner_repo}/actions/artifacts"
artifacts_by_name = collections.defaultdict(list)
for artifact in all_items(url, "artifacts"):
name = artifact["name"]
if not fnmatch.fnmatch(name, artifact_pattern):
continue
artifacts_by_name[name].append(artifact)
os.makedirs(dest_dir, exist_ok=True)
os.chdir(dest_dir)
temp_zip = "artifacts.zip"
# Download the latest of each name.
for name, artifacts in artifacts_by_name.items():
artifact = max(artifacts, key=operator.itemgetter("created_at"))
print(
f"Downloading {artifact['name']}, "
+ f"size: {artifact['size_in_bytes']}, "
+ f"created: {utc2local(artifact['created_at'])}",
)
download_url(artifact["archive_download_url"], temp_zip)
unpack_zipfile(temp_zip)
os.remove(temp_zip)
if __name__ == "__main__":
sys.exit(main(*sys.argv[1:]))
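`main` groups artifacts by name and keeps only the newest of each group. That selection is just `max` over ISO-8601 timestamps, which sort correctly as plain strings. A sketch with hypothetical artifact dicts:

```python
import collections
import fnmatch
import operator

artifacts = [  # hypothetical API results
    {"name": "metacov-linux", "created_at": "2024-03-14T10:00:00Z"},
    {"name": "metacov-linux", "created_at": "2024-03-14T12:30:00Z"},
    {"name": "kit-wheels", "created_at": "2024-03-13T09:00:00Z"},
]

# Group the artifacts whose names match the glob, as main() does.
by_name = collections.defaultdict(list)
for artifact in artifacts:
    if fnmatch.fnmatch(artifact["name"], "metacov-*"):
        by_name[artifact["name"]].append(artifact)

# Keep only the most recently created artifact of each name.
latest = {
    name: max(group, key=operator.itemgetter("created_at"))
    for name, group in by_name.items()
}
print(latest["metacov-linux"]["created_at"])  # 2024-03-14T12:30:00Z
```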
coverage-7.4.4/ci/ghrel_template.md.j2
{# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -#}
{# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt -#}
{# This file is for use with scriv to create GitHub releases. -#}
{{body}}
:arrow_right: PyPI page: [coverage {{version}}](https://pypi.org/project/coverage/{{version}}).
:arrow_right: To install: `python3 -m pip install coverage=={{version}}`
coverage-7.4.4/ci/parse_relnotes.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""
Parse CHANGES.md into a JSON structure.
Run with two arguments: the .md file to parse, and the JSON file to write:
python parse_relnotes.py CHANGES.md relnotes.json
Every section that has something that looks like a version number in it will
be recorded as the release notes for that version.
"""
import json
import re
import sys
class TextChunkBuffer:
"""Hold onto text chunks until needed."""
def __init__(self):
self.buffer = []
def append(self, text):
"""Add `text` to the buffer."""
self.buffer.append(text)
def clear(self):
"""Clear the buffer."""
self.buffer = []
def flush(self):
"""Produce a ("text", text) tuple if there's anything here."""
buffered = "".join(self.buffer).strip()
if buffered:
yield ("text", buffered)
self.clear()
def parse_md(lines):
"""Parse markdown lines, producing (type, text) chunks."""
buffer = TextChunkBuffer()
for line in lines:
if header_match := re.search(r"^(#+) (.+)$", line):
yield from buffer.flush()
hashes, text = header_match.groups()
yield (f"h{len(hashes)}", text)
else:
buffer.append(line)
yield from buffer.flush()
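`parse_md` classifies a line as a header by its leading hashes and flushes buffered text whenever a header appears. Driving the same regex and buffering logic over a few made-up markdown lines shows the chunk stream:

```python
import re

# Hypothetical input lines; same header regex as parse_md above.
lines = ["# Change history\n", "## 7.4.4\n", "Fixed a bug.\n", "More notes.\n"]

chunks = []
buffered = []
for line in lines:
    if header_match := re.search(r"^(#+) (.+)$", line):
        # Flush any buffered text before emitting the header chunk.
        if text := "".join(buffered).strip():
            chunks.append(("text", text))
        buffered = []
        hashes, text = header_match.groups()
        chunks.append((f"h{len(hashes)}", text))
    else:
        buffered.append(line)
if text := "".join(buffered).strip():
    chunks.append(("text", text))

print(chunks)
# [('h1', 'Change history'), ('h2', '7.4.4'), ('text', 'Fixed a bug.\nMore notes.')]
```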
def sections(parsed_data):
"""Convert a stream of parsed tokens into sections with text and notes.
Yields a stream of:
('h-level', 'header text', 'text')
"""
header = None
text = []
for ttype, ttext in parsed_data:
if ttype.startswith('h'):
if header:
yield (*header, "\n".join(text))
text = []
header = (ttype, ttext)
elif ttype == "text":
text.append(ttext)
else:
raise RuntimeError(f"Don't know ttype {ttype!r}")
yield (*header, "\n".join(text))
def refind(regex, text):
"""Find a regex in some text, and return the matched text, or None."""
if m := re.search(regex, text):
return m.group()
else:
return None
def fix_ref_links(text, version):
"""Find links to .rst files, and make them full RTFD links."""
def new_link(m):
return f"](https://coverage.readthedocs.io/en/{version}/{m[1]}.html{m[2]})"
return re.sub(r"\]\((\w+)\.rst(#.*?)\)", new_link, text)
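The substitution above turns relative `.rst` links in CHANGES.md into absolute Read the Docs URLs pinned to the release version. Re-stated with one hypothetical input:

```python
import re

def fix_ref_links(text, version):
    """Same regex as above: ](name.rst#frag) -> a versioned RTD URL."""
    def new_link(m):
        return f"](https://coverage.readthedocs.io/en/{version}/{m[1]}.html{m[2]})"
    return re.sub(r"\]\((\w+)\.rst(#.*?)\)", new_link, text)

md = "See the [config docs](config.rst#syntax) for details."
fixed = fix_ref_links(md, "7.4.4")
print(fixed)
# See the [config docs](https://coverage.readthedocs.io/en/7.4.4/config.html#syntax) for details.
```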
def relnotes(mdlines):
r"""Yield (version, text) pairs from markdown lines.
Each tuple is a separate version mentioned in the release notes.
A version is any section with \d\.\d in the header text.
"""
for _, htext, text in sections(parse_md(mdlines)):
version = refind(r"\d+\.\d[^ ]*", htext)
if version:
prerelease = any(c in version for c in "abc")
when = refind(r"\d+-\d+-\d+", htext)
text = fix_ref_links(text, version)
yield {
"version": version,
"text": text,
"prerelease": prerelease,
"when": when,
}
def parse(md_filename, json_filename):
"""Main function: parse markdown and write JSON."""
with open(md_filename) as mf:
markdown = mf.read()
with open(json_filename, "w") as jf:
json.dump(list(relnotes(markdown.splitlines(True))), jf, indent=4)
if __name__ == "__main__":
parse(*sys.argv[1:3])
coverage-7.4.4/ci/session.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Help make a requests Session with proper authentication."""
import os
import requests
_SESSION = None
def get_session():
"""Get a properly authenticated requests Session."""
global _SESSION
if _SESSION is None:
# If GITHUB_TOKEN is in the environment, use it.
_SESSION = requests.session()
token = os.environ.get("GITHUB_TOKEN")
if token is not None:
_SESSION.headers["Authorization"] = f"token {token}"
# requests.get() will always prefer the .netrc file even if a header
# is already set. This tells it to ignore the .netrc file.
_SESSION.trust_env = False
return _SESSION
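The token handling above only adds an `Authorization` header when `GITHUB_TOKEN` is set, leaving unauthenticated (rate-limited) access as the fallback. That conditional can be sketched without `requests` at all (the environment dicts are made up):

```python
def auth_headers(environ):
    """Sketch of the token logic: add an Authorization header only when present."""
    headers = {}
    token = environ.get("GITHUB_TOKEN")
    if token is not None:
        headers["Authorization"] = f"token {token}"
    return headers

print(auth_headers({"GITHUB_TOKEN": "abc123"}))  # {'Authorization': 'token abc123'}
print(auth_headers({}))  # {}
```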
coverage-7.4.4/ci/trigger_build_kits.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Trigger the GitHub action to build our kits."""
import sys
from session import get_session
repo_owner = sys.argv[1]
# The GitHub URL makes no mention of which workflow to use. It's found based on
# the event_type, which matches the types in the workflow:
#
# on:
# repository_dispatch:
# types:
# - build-kits
#
resp = get_session().post(
f"https://api.github.com/repos/{repo_owner}/dispatches",
json={"event_type": "build-kits"},
)
if resp.status_code // 100 == 2:
print("Success")
else:
print(f"Status: {resp.status_code}")
print(resp.text)
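The comment above explains that the dispatch call names no workflow; GitHub routes the event by its `event_type`. The receiving workflow would declare the matching type roughly like this (a sketch, not the project's actual workflow file):

```yaml
# Hypothetical .github/workflows excerpt: the repository_dispatch trigger
# whose type matches the {"event_type": "build-kits"} payload sent above.
on:
  repository_dispatch:
    types:
      - build-kits
```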
coverage-7.4.4/coverage/__init__.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""
Code coverage measurement for Python.
Ned Batchelder
https://coverage.readthedocs.io
"""
from __future__ import annotations
# mypy's convention is that "import as" names are public from the module.
# We import names as themselves to indicate that. Pylint sees it as pointless,
# so disable its warning.
# pylint: disable=useless-import-alias
from coverage.version import (
__version__ as __version__,
version_info as version_info,
)
from coverage.control import (
Coverage as Coverage,
process_startup as process_startup,
)
from coverage.data import CoverageData as CoverageData
from coverage.exceptions import CoverageException as CoverageException
from coverage.plugin import (
CoveragePlugin as CoveragePlugin,
FileReporter as FileReporter,
FileTracer as FileTracer,
)
# Backward compatibility.
coverage = Coverage
# On Windows, we encode and decode deep enough that something goes wrong and
# the encodings.utf_8 module is loaded and then unloaded, I don't know why.
# Adding a reference here prevents it from being unloaded. Yuk.
coverage-7.4.4/coverage/__main__.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Coverage.py's main entry point."""
from __future__ import annotations
import sys
from coverage.cmdline import main
sys.exit(main())
coverage-7.4.4/coverage/annotate.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Source file annotation for coverage.py."""
from __future__ import annotations
import os
import re
from typing import Iterable, TYPE_CHECKING
from coverage.files import flat_rootname
from coverage.misc import ensure_dir, isolate_module
from coverage.plugin import FileReporter
from coverage.report_core import get_analysis_to_report
from coverage.results import Analysis
from coverage.types import TMorf
if TYPE_CHECKING:
from coverage import Coverage
os = isolate_module(os)
class AnnotateReporter:
"""Generate annotated source files showing line coverage.
This reporter creates annotated copies of the measured source files. Each
.py file is copied as a .py,cover file, with a left-hand margin annotating
each line::
> def h(x):
- if 0: #pragma: no cover
- pass
> if x == 1:
! a = 1
> else:
> a = 2
> h(2)
Executed lines use ">", lines not executed use "!", lines excluded from
consideration use "-".
"""
def __init__(self, coverage: Coverage) -> None:
self.coverage = coverage
self.config = self.coverage.config
self.directory: str | None = None
blank_re = re.compile(r"\s*(#|$)")
else_re = re.compile(r"\s*else\s*:\s*(#|$)")
def report(self, morfs: Iterable[TMorf] | None, directory: str | None = None) -> None:
"""Run the report.
See `coverage.report()` for arguments.
"""
self.directory = directory
self.coverage.get_data()
for fr, analysis in get_analysis_to_report(self.coverage, morfs):
self.annotate_file(fr, analysis)
def annotate_file(self, fr: FileReporter, analysis: Analysis) -> None:
"""Annotate a single file.
`fr` is the FileReporter for the file to annotate.
"""
statements = sorted(analysis.statements)
missing = sorted(analysis.missing)
excluded = sorted(analysis.excluded)
if self.directory:
ensure_dir(self.directory)
dest_file = os.path.join(self.directory, flat_rootname(fr.relative_filename()))
if dest_file.endswith("_py"):
dest_file = dest_file[:-3] + ".py"
dest_file += ",cover"
else:
dest_file = fr.filename + ",cover"
with open(dest_file, "w", encoding="utf-8") as dest:
i = j = 0
covered = True
source = fr.source()
for lineno, line in enumerate(source.splitlines(True), start=1):
while i < len(statements) and statements[i] < lineno:
i += 1
while j < len(missing) and missing[j] < lineno:
j += 1
if i < len(statements) and statements[i] == lineno:
covered = j >= len(missing) or missing[j] > lineno
if self.blank_re.match(line):
dest.write(" ")
elif self.else_re.match(line):
# Special logic for lines containing only "else:".
if j >= len(missing):
dest.write("> ")
elif statements[i] == missing[j]:
dest.write("! ")
else:
dest.write("> ")
elif lineno in excluded:
dest.write("- ")
elif covered:
dest.write("> ")
else:
dest.write("! ")
dest.write(line)
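The two-pointer walk above decides a marker per line from the sorted `statements` and `missing` lists. A toy sketch of just that decision, skipping the blank/else special cases (the sample line data is made up):

```python
def markers(num_lines, statements, missing, excluded=()):
    """Toy version of annotate_file's choice: '>' run, '!' missed, '-' excluded."""
    out = []
    i = j = 0
    covered = True
    for lineno in range(1, num_lines + 1):
        # Advance both pointers past lines before this one.
        while i < len(statements) and statements[i] < lineno:
            i += 1
        while j < len(missing) and missing[j] < lineno:
            j += 1
        # A statement line is covered unless it is the next missing line.
        if i < len(statements) and statements[i] == lineno:
            covered = j >= len(missing) or missing[j] > lineno
        if lineno in excluded:
            out.append("-")
        elif covered:
            out.append(">")
        else:
            out.append("!")
    return out

result = markers(4, statements=[1, 2, 4], missing=[2], excluded={3})
print(result)  # ['>', '!', '-', '>']
```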
coverage-7.4.4/coverage/bytecode.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Bytecode manipulation for coverage.py"""
from __future__ import annotations
from types import CodeType
from typing import Iterator
def code_objects(code: CodeType) -> Iterator[CodeType]:
"""Iterate over all the code objects in `code`."""
stack = [code]
while stack:
# We're going to return the code object on the stack, but first
# push its children for later returning.
code = stack.pop()
for c in code.co_consts:
if isinstance(c, CodeType):
stack.append(c)
yield code
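`code_objects` does a stack-based walk over `co_consts`, so nested functions, comprehensions, and class bodies all appear. Compiling a tiny hypothetical module shows it:

```python
from types import CodeType

def code_objects(code):
    """Same stack-based walk as above."""
    stack = [code]
    while stack:
        code = stack.pop()
        for c in code.co_consts:
            if isinstance(c, CodeType):
                stack.append(c)
        yield code

source = "def outer():\n    def inner():\n        pass\n"
top = compile(source, "<sample>", "exec")
names = sorted(c.co_name for c in code_objects(top))
print(names)  # ['<module>', 'inner', 'outer']
```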
coverage-7.4.4/coverage/cmdline.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Command-line support for coverage.py."""
from __future__ import annotations
import glob
import optparse # pylint: disable=deprecated-module
import os
import os.path
import shlex
import sys
import textwrap
import traceback
from typing import cast, Any, NoReturn
import coverage
from coverage import Coverage
from coverage import env
from coverage.collector import HAS_CTRACER
from coverage.config import CoverageConfig
from coverage.control import DEFAULT_DATAFILE
from coverage.data import combinable_files, debug_data_file
from coverage.debug import info_header, short_stack, write_formatted_info
from coverage.exceptions import _BaseCoverageException, _ExceptionDuringRun, NoSource
from coverage.execfile import PyRunner
from coverage.results import Numbers, should_fail_under
from coverage.version import __url__
# When adding to this file, alphabetization is important. Look for
# "alphabetize" comments throughout.
class Opts:
"""A namespace class for individual options we'll build parsers from."""
# Keep these entries alphabetized (roughly) by the option name as it
# appears on the command line.
append = optparse.make_option(
"-a", "--append", action="store_true",
help="Append coverage data to .coverage, otherwise it starts clean each time.",
)
branch = optparse.make_option(
"", "--branch", action="store_true",
help="Measure branch coverage in addition to statement coverage.",
)
concurrency = optparse.make_option(
"", "--concurrency", action="store", metavar="LIBS",
help=(
"Properly measure code using a concurrency library. " +
"Valid values are: {}, or a comma-list of them."
).format(", ".join(sorted(CoverageConfig.CONCURRENCY_CHOICES))),
)
context = optparse.make_option(
"", "--context", action="store", metavar="LABEL",
help="The context label to record for this coverage run.",
)
contexts = optparse.make_option(
"", "--contexts", action="store", metavar="REGEX1,REGEX2,...",
help=(
"Only display data from lines covered in the given contexts. " +
"Accepts Python regexes, which must be quoted."
),
)
datafile = optparse.make_option(
"", "--data-file", action="store", metavar="DATAFILE",
help=(
"Base name of the data files to operate on. " +
"Defaults to '.coverage'. [env: COVERAGE_FILE]"
),
)
datafle_input = optparse.make_option(
"", "--data-file", action="store", metavar="INFILE",
help=(
"Read coverage data for report generation from this file. " +
"Defaults to '.coverage'. [env: COVERAGE_FILE]"
),
)
datafile_output = optparse.make_option(
"", "--data-file", action="store", metavar="OUTFILE",
help=(
"Write the recorded coverage data to this file. " +
"Defaults to '.coverage'. [env: COVERAGE_FILE]"
),
)
debug = optparse.make_option(
"", "--debug", action="store", metavar="OPTS",
help="Debug options, separated by commas. [env: COVERAGE_DEBUG]",
)
directory = optparse.make_option(
"-d", "--directory", action="store", metavar="DIR",
help="Write the output files to DIR.",
)
fail_under = optparse.make_option(
"", "--fail-under", action="store", metavar="MIN", type="float",
help="Exit with a status of 2 if the total coverage is less than MIN.",
)
format = optparse.make_option(
"", "--format", action="store", metavar="FORMAT",
help="Output format, either text (default), markdown, or total.",
)
help = optparse.make_option(
"-h", "--help", action="store_true",
help="Get help on this command.",
)
ignore_errors = optparse.make_option(
"-i", "--ignore-errors", action="store_true",
help="Ignore errors while reading source files.",
)
include = optparse.make_option(
"", "--include", action="store", metavar="PAT1,PAT2,...",
help=(
"Include only files whose paths match one of these patterns. " +
"Accepts shell-style wildcards, which must be quoted."
),
)
keep = optparse.make_option(
"", "--keep", action="store_true",
help="Keep original coverage files, otherwise they are deleted.",
)
pylib = optparse.make_option(
"-L", "--pylib", action="store_true",
help=(
"Measure coverage even inside the Python installed library, " +
"which isn't done by default."
),
)
show_missing = optparse.make_option(
"-m", "--show-missing", action="store_true",
help="Show line numbers of statements in each module that weren't executed.",
)
module = optparse.make_option(
"-m", "--module", action="store_true",
help=(
" is an importable Python module, not a script path, " +
"to be run as 'python -m' would run it."
),
)
omit = optparse.make_option(
"", "--omit", action="store", metavar="PAT1,PAT2,...",
help=(
"Omit files whose paths match one of these patterns. " +
"Accepts shell-style wildcards, which must be quoted."
),
)
output_xml = optparse.make_option(
"-o", "", action="store", dest="outfile", metavar="OUTFILE",
help="Write the XML report to this file. Defaults to 'coverage.xml'",
)
output_json = optparse.make_option(
"-o", "", action="store", dest="outfile", metavar="OUTFILE",
help="Write the JSON report to this file. Defaults to 'coverage.json'",
)
output_lcov = optparse.make_option(
"-o", "", action="store", dest="outfile", metavar="OUTFILE",
help="Write the LCOV report to this file. Defaults to 'coverage.lcov'",
)
json_pretty_print = optparse.make_option(
"", "--pretty-print", action="store_true",
help="Format the JSON for human readers.",
)
parallel_mode = optparse.make_option(
"-p", "--parallel-mode", action="store_true",
help=(
"Append the machine name, process id and random number to the " +
"data file name to simplify collecting data from " +
"many processes."
),
)
precision = optparse.make_option(
"", "--precision", action="store", metavar="N", type=int,
help=(
"Number of digits after the decimal point to display for " +
"reported coverage percentages."
),
)
quiet = optparse.make_option(
"-q", "--quiet", action="store_true",
help="Don't print messages about what is happening.",
)
rcfile = optparse.make_option(
"", "--rcfile", action="store",
help=(
"Specify configuration file. " +
"By default '.coveragerc', 'setup.cfg', 'tox.ini', and " +
"'pyproject.toml' are tried. [env: COVERAGE_RCFILE]"
),
)
show_contexts = optparse.make_option(
"--show-contexts", action="store_true",
help="Show contexts for covered lines.",
)
skip_covered = optparse.make_option(
"--skip-covered", action="store_true",
help="Skip files with 100% coverage.",
)
no_skip_covered = optparse.make_option(
"--no-skip-covered", action="store_false", dest="skip_covered",
help="Disable --skip-covered.",
)
skip_empty = optparse.make_option(
"--skip-empty", action="store_true",
help="Skip files with no code.",
)
sort = optparse.make_option(
"--sort", action="store", metavar="COLUMN",
help=(
"Sort the report by the named column: name, stmts, miss, branch, brpart, or cover. " +
"Default is name."
),
)
source = optparse.make_option(
"", "--source", action="store", metavar="SRC1,SRC2,...",
help="A list of directories or importable names of code to measure.",
)
timid = optparse.make_option(
"", "--timid", action="store_true",
help="Use the slower Python trace function core.",
)
title = optparse.make_option(
"", "--title", action="store", metavar="TITLE",
help="A text string to use as the title on the HTML.",
)
version = optparse.make_option(
"", "--version", action="store_true",
help="Display version information and exit.",
)
class CoverageOptionParser(optparse.OptionParser):
"""Base OptionParser for coverage.py.
Problems don't exit the program.
Defaults are initialized for all options.
"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
kwargs["add_help_option"] = False
super().__init__(*args, **kwargs)
self.set_defaults(
# Keep these arguments alphabetized by their names.
action=None,
append=None,
branch=None,
concurrency=None,
context=None,
contexts=None,
data_file=None,
debug=None,
directory=None,
fail_under=None,
format=None,
help=None,
ignore_errors=None,
include=None,
keep=None,
module=None,
omit=None,
parallel_mode=None,
precision=None,
pylib=None,
quiet=None,
rcfile=True,
show_contexts=None,
show_missing=None,
skip_covered=None,
skip_empty=None,
sort=None,
source=None,
timid=None,
title=None,
version=None,
)
self.disable_interspersed_args()
class OptionParserError(Exception):
"""Used to stop the optparse error handler ending the process."""
pass
def parse_args_ok(self, args: list[str]) -> tuple[bool, optparse.Values | None, list[str]]:
"""Call optparse.parse_args, but return a triple:
(ok, options, args)
"""
try:
options, args = super().parse_args(args)
except self.OptionParserError:
return False, None, []
return True, options, args
def error(self, msg: str) -> NoReturn:
"""Override optparse.error so sys.exit doesn't get called."""
show_help(msg)
raise self.OptionParserError
class GlobalOptionParser(CoverageOptionParser):
"""Command-line parser for coverage.py global option arguments."""
def __init__(self) -> None:
super().__init__()
self.add_options([
Opts.help,
Opts.version,
])
class CmdOptionParser(CoverageOptionParser):
"""Parse one of the new-style commands for coverage.py."""
def __init__(
self,
action: str,
options: list[optparse.Option],
description: str,
usage: str | None = None,
):
"""Create an OptionParser for a coverage.py command.
`action` is the slug to put into `options.action`.
`options` is a list of Option's for the command.
`description` is the description of the command, for the help text.
`usage` is the usage string to display in help.
"""
if usage:
usage = "%prog " + usage
super().__init__(
usage=usage,
description=description,
)
self.set_defaults(action=action)
self.add_options(options)
self.cmd = action
def __eq__(self, other: str) -> bool: # type: ignore[override]
# A convenience equality, so that I can put strings in unit test
# results, and they will compare equal to objects.
return (other == f"<CmdOptionParser:{self.cmd}>")
__hash__ = None # type: ignore[assignment]
def get_prog_name(self) -> str:
"""Override of an undocumented function in optparse.OptionParser."""
program_name = super().get_prog_name()
# Include the sub-command for this parser as part of the command.
return f"{program_name} {self.cmd}"
# In lists of Opts, keep them alphabetized by the option names as they appear
# on the command line, since these lists determine the order of the options in
# the help output.
#
# In COMMANDS, keep the keys (command names) alphabetized.
GLOBAL_ARGS = [
Opts.debug,
Opts.help,
Opts.rcfile,
]
COMMANDS = {
"annotate": CmdOptionParser(
"annotate",
[
Opts.directory,
Opts.datafile_input,
Opts.ignore_errors,
Opts.include,
Opts.omit,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description=(
"Make annotated copies of the given files, marking statements that are executed " +
"with > and statements that are missed with !."
),
),
"combine": CmdOptionParser(
"combine",
[
Opts.append,
Opts.datafile,
Opts.keep,
Opts.quiet,
] + GLOBAL_ARGS,
usage="[options] <path1> <path2> ... <pathN>",
description=(
"Combine data from multiple coverage files. " +
"The combined results are written to a single " +
"file representing the union of the data. The positional " +
"arguments are data files or directories containing data files. " +
"If no paths are provided, data files in the default data file's " +
"directory are combined."
),
),
"debug": CmdOptionParser(
"debug", GLOBAL_ARGS,
usage="<topic>",
description=(
"Display information about the internals of coverage.py, " +
"for diagnosing problems. " +
"Topics are: " +
"'data' to show a summary of the collected data; " +
"'sys' to show installation information; " +
"'config' to show the configuration; " +
"'premain' to show what is calling coverage; " +
"'pybehave' to show internal flags describing Python behavior."
),
),
"erase": CmdOptionParser(
"erase",
[
Opts.datafile,
] + GLOBAL_ARGS,
description="Erase previously collected coverage data.",
),
"help": CmdOptionParser(
"help", GLOBAL_ARGS,
usage="[command]",
description="Describe how to use coverage.py",
),
"html": CmdOptionParser(
"html",
[
Opts.contexts,
Opts.directory,
Opts.datafile_input,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.precision,
Opts.quiet,
Opts.show_contexts,
Opts.skip_covered,
Opts.no_skip_covered,
Opts.skip_empty,
Opts.title,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description=(
"Create an HTML report of the coverage of the files. " +
"Each file gets its own page, with the source decorated to show " +
"executed, excluded, and missed lines."
),
),
"json": CmdOptionParser(
"json",
[
Opts.contexts,
Opts.datafile_input,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.output_json,
Opts.json_pretty_print,
Opts.quiet,
Opts.show_contexts,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Generate a JSON report of coverage results.",
),
"lcov": CmdOptionParser(
"lcov",
[
Opts.datafile_input,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.output_lcov,
Opts.omit,
Opts.quiet,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Generate an LCOV report of coverage results.",
),
"report": CmdOptionParser(
"report",
[
Opts.contexts,
Opts.datafile_input,
Opts.fail_under,
Opts.format,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.precision,
Opts.sort,
Opts.show_missing,
Opts.skip_covered,
Opts.no_skip_covered,
Opts.skip_empty,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Report coverage statistics on modules.",
),
"run": CmdOptionParser(
"run",
[
Opts.append,
Opts.branch,
Opts.concurrency,
Opts.context,
Opts.datafile_output,
Opts.include,
Opts.module,
Opts.omit,
Opts.pylib,
Opts.parallel_mode,
Opts.source,
Opts.timid,
] + GLOBAL_ARGS,
usage="[options] [program options]",
description="Run a Python program, measuring code execution.",
),
"xml": CmdOptionParser(
"xml",
[
Opts.datafile_input,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.output_xml,
Opts.quiet,
Opts.skip_empty,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Generate an XML report of coverage results.",
),
}
def show_help(
error: str | None = None,
topic: str | None = None,
parser: optparse.OptionParser | None = None,
) -> None:
"""Display an error message, or the named topic."""
assert error or topic or parser
program_path = sys.argv[0]
if program_path.endswith(os.path.sep + "__main__.py"):
# The path is the main module of a package; get that path instead.
program_path = os.path.dirname(program_path)
program_name = os.path.basename(program_path)
if env.WINDOWS:
# entry_points={"console_scripts":...} on Windows makes files
# called coverage.exe, coverage3.exe, and coverage-3.5.exe. These
# invoke coverage-script.py, coverage3-script.py, and
# coverage-3.5-script.py. argv[0] is the .py file, but we want to
# get back to the original form.
auto_suffix = "-script.py"
if program_name.endswith(auto_suffix):
program_name = program_name[:-len(auto_suffix)]
help_params = dict(coverage.__dict__)
help_params["__url__"] = __url__
help_params["program_name"] = program_name
if HAS_CTRACER:
help_params["extension_modifier"] = "with C extension"
else:
help_params["extension_modifier"] = "without C extension"
if error:
print(error, file=sys.stderr)
print(f"Use '{program_name} help' for help.", file=sys.stderr)
elif parser:
print(parser.format_help().strip())
print()
else:
assert topic is not None
help_msg = textwrap.dedent(HELP_TOPICS.get(topic, "")).strip()
if help_msg:
print(help_msg.format(**help_params))
else:
print(f"Don't know topic {topic!r}")
print("Full documentation is at {__url__}".format(**help_params))
OK, ERR, FAIL_UNDER = 0, 1, 2
class CoverageScript:
"""The command-line interface to coverage.py."""
def __init__(self) -> None:
self.global_option = False
self.coverage: Coverage
def command_line(self, argv: list[str]) -> int:
"""The bulk of the command line interface to coverage.py.
`argv` is the argument list to process.
Returns 0 if all is well, 1 if something went wrong.
"""
# Collect the command-line options.
if not argv:
show_help(topic="minimum_help")
return OK
# The command syntax we parse depends on the first argument. Global
# switch syntax always starts with an option.
parser: optparse.OptionParser | None
self.global_option = argv[0].startswith("-")
if self.global_option:
parser = GlobalOptionParser()
else:
parser = COMMANDS.get(argv[0])
if not parser:
show_help(f"Unknown command: {argv[0]!r}")
return ERR
argv = argv[1:]
ok, options, args = parser.parse_args_ok(argv)
if not ok:
return ERR
assert options is not None
# Handle help and version.
if self.do_help(options, args, parser):
return OK
# Listify the list options.
source = unshell_list(options.source)
omit = unshell_list(options.omit)
include = unshell_list(options.include)
debug = unshell_list(options.debug)
contexts = unshell_list(options.contexts)
if options.concurrency is not None:
concurrency = options.concurrency.split(",")
else:
concurrency = None
# Do something.
self.coverage = Coverage(
data_file=options.data_file or DEFAULT_DATAFILE,
data_suffix=options.parallel_mode,
cover_pylib=options.pylib,
timid=options.timid,
branch=options.branch,
config_file=options.rcfile,
source=source,
omit=omit,
include=include,
debug=debug,
concurrency=concurrency,
check_preimported=True,
context=options.context,
messages=not options.quiet,
)
if options.action == "debug":
return self.do_debug(args)
elif options.action == "erase":
self.coverage.erase()
return OK
elif options.action == "run":
return self.do_run(options, args)
elif options.action == "combine":
if options.append:
self.coverage.load()
data_paths = args or None
self.coverage.combine(data_paths, strict=True, keep=bool(options.keep))
self.coverage.save()
return OK
# Remaining actions are reporting, with some common options.
report_args = dict(
morfs=unglob_args(args),
ignore_errors=options.ignore_errors,
omit=omit,
include=include,
contexts=contexts,
)
# We need to be able to import from the current directory, because
# plugins may try to, for example, to read Django settings.
sys.path.insert(0, "")
self.coverage.load()
total = None
if options.action == "report":
total = self.coverage.report(
precision=options.precision,
show_missing=options.show_missing,
skip_covered=options.skip_covered,
skip_empty=options.skip_empty,
sort=options.sort,
output_format=options.format,
**report_args,
)
elif options.action == "annotate":
self.coverage.annotate(directory=options.directory, **report_args)
elif options.action == "html":
total = self.coverage.html_report(
directory=options.directory,
precision=options.precision,
skip_covered=options.skip_covered,
skip_empty=options.skip_empty,
show_contexts=options.show_contexts,
title=options.title,
**report_args,
)
elif options.action == "xml":
total = self.coverage.xml_report(
outfile=options.outfile,
skip_empty=options.skip_empty,
**report_args,
)
elif options.action == "json":
total = self.coverage.json_report(
outfile=options.outfile,
pretty_print=options.pretty_print,
show_contexts=options.show_contexts,
**report_args,
)
elif options.action == "lcov":
total = self.coverage.lcov_report(
outfile=options.outfile,
**report_args,
)
else:
# There are no other possible actions.
raise AssertionError
if total is not None:
# Apply the command line fail-under options, and then use the config
# value, so we can get fail_under from the config file.
if options.fail_under is not None:
self.coverage.set_option("report:fail_under", options.fail_under)
if options.precision is not None:
self.coverage.set_option("report:precision", options.precision)
fail_under = cast(float, self.coverage.get_option("report:fail_under"))
precision = cast(int, self.coverage.get_option("report:precision"))
if should_fail_under(total, fail_under, precision):
msg = "total of {total} is less than fail-under={fail_under:.{p}f}".format(
total=Numbers(precision=precision).display_covered(total),
fail_under=fail_under,
p=precision,
)
print("Coverage failure:", msg)
return FAIL_UNDER
return OK
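The fail-under comparison above delegates to `should_fail_under`, which is imported from elsewhere in coverage.py. The sketch below is an assumption about its core behavior, not the actual implementation: round the total to the configured reporting precision before comparing against the threshold.

```python
# A hedged sketch of the fail-under check performed above. The real
# should_fail_under function lives elsewhere in coverage.py; this only
# mirrors the plausible core behavior: round the total to the reporting
# precision, then compare against the threshold.
def fail_under_sketch(total: float, fail_under: float, precision: int) -> bool:
    """Return True if `total` coverage falls short of `fail_under`."""
    return round(total, precision) < fail_under
```

Rounding first matters: with `precision=0`, a total of 79.6 displays as 80 and so should not fail a `fail_under=80` check.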
def do_help(
self,
options: optparse.Values,
args: list[str],
parser: optparse.OptionParser,
) -> bool:
"""Deal with help requests.
Return True if it handled the request, False if not.
"""
# Handle help.
if options.help:
if self.global_option:
show_help(topic="help")
else:
show_help(parser=parser)
return True
if options.action == "help":
if args:
for a in args:
parser_maybe = COMMANDS.get(a)
if parser_maybe is not None:
show_help(parser=parser_maybe)
else:
show_help(topic=a)
else:
show_help(topic="help")
return True
# Handle version.
if options.version:
show_help(topic="version")
return True
return False
def do_run(self, options: optparse.Values, args: list[str]) -> int:
"""Implementation of 'coverage run'."""
if not args:
if options.module:
# Specified -m with nothing else.
show_help("No module specified for -m")
return ERR
command_line = cast(str, self.coverage.get_option("run:command_line"))
if command_line is not None:
args = shlex.split(command_line)
if args and args[0] in {"-m", "--module"}:
options.module = True
args = args[1:]
if not args:
show_help("Nothing to do.")
return ERR
if options.append and self.coverage.get_option("run:parallel"):
show_help("Can't append to data files in parallel mode.")
return ERR
if options.concurrency == "multiprocessing":
# Can't set other run-affecting command line options with
# multiprocessing.
for opt_name in ["branch", "include", "omit", "pylib", "source", "timid"]:
# As it happens, all of these options have no default, meaning
# they will be None if they have not been specified.
if getattr(options, opt_name) is not None:
show_help(
"Options affecting multiprocessing must only be specified " +
"in a configuration file.\n" +
f"Remove --{opt_name} from the command line.",
)
return ERR
os.environ["COVERAGE_RUN"] = "true"
runner = PyRunner(args, as_module=bool(options.module))
runner.prepare()
if options.append:
self.coverage.load()
# Run the script.
self.coverage.start()
code_ran = True
try:
runner.run()
except NoSource:
code_ran = False
raise
finally:
self.coverage.stop()
if code_ran:
self.coverage.save()
return OK
def do_debug(self, args: list[str]) -> int:
"""Implementation of 'coverage debug'."""
if not args:
show_help("What information would you like: config, data, sys, premain, pybehave?")
return ERR
if args[1:]:
show_help("Only one topic at a time, please")
return ERR
if args[0] == "sys":
write_formatted_info(print, "sys", self.coverage.sys_info())
elif args[0] == "data":
print(info_header("data"))
data_file = self.coverage.config.data_file
debug_data_file(data_file)
for filename in combinable_files(data_file):
print("-----")
debug_data_file(filename)
elif args[0] == "config":
write_formatted_info(print, "config", self.coverage.config.debug_info())
elif args[0] == "premain":
print(info_header("premain"))
print(short_stack(full=True))
elif args[0] == "pybehave":
write_formatted_info(print, "pybehave", env.debug_info())
else:
show_help(f"Don't know what you mean by {args[0]!r}")
return ERR
return OK
def unshell_list(s: str) -> list[str] | None:
"""Turn a command-line argument into a list."""
if not s:
return None
if env.WINDOWS:
# When running coverage.py as coverage.exe, some of the behavior
# of the shell is emulated: wildcards are expanded into a list of
# file names. So you have to single-quote patterns on the command
# line, but (not) helpfully, the single quotes are included in the
# argument, so we have to strip them off here.
s = s.strip("'")
return s.split(",")
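The Windows behavior described in the comment above can be shown with a tiny standalone demo (`unshell_demo` is a hypothetical helper, assuming the Windows branch is taken): the emulated shell leaves the single quotes in the argument, so they are stripped before the comma split.

```python
# A tiny standalone demo of the Windows behavior described above: the
# emulated shell leaves the single quotes in the argument, so they are
# stripped before the comma split. unshell_demo is a hypothetical helper.
def unshell_demo(s: str) -> list[str]:
    return s.strip("'").split(",")
```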
def unglob_args(args: list[str]) -> list[str]:
"""Interpret shell wildcards for platforms that need it."""
if env.WINDOWS:
globbed = []
for arg in args:
if "?" in arg or "*" in arg:
globbed.extend(glob.glob(arg))
else:
globbed.append(arg)
args = globbed
return args
HELP_TOPICS = {
"help": """\
Coverage.py, version {__version__} {extension_modifier}
Measure, collect, and report on code coverage in Python programs.
usage: {program_name} <command> [options] [args]
Commands:
annotate Annotate source files with execution information.
combine Combine a number of data files.
debug Display information about the internals of coverage.py
erase Erase previously collected coverage data.
help Get help on using coverage.py.
html Create an HTML report.
json Create a JSON report of coverage results.
lcov Create an LCOV report of coverage results.
report Report coverage stats on modules.
run Run a Python program and measure code execution.
xml Create an XML report of coverage results.
Use "{program_name} help <command>" for detailed help on any command.
""",
"minimum_help": (
"Code coverage for Python, version {__version__} {extension_modifier}. " +
"Use '{program_name} help' for help."
),
"version": "Coverage.py, version {__version__} {extension_modifier}",
}
def main(argv: list[str] | None = None) -> int | None:
"""The main entry point to coverage.py.
This is installed as the script entry point.
"""
if argv is None:
argv = sys.argv[1:]
try:
status = CoverageScript().command_line(argv)
except _ExceptionDuringRun as err:
# An exception was caught while running the product code. The
# sys.exc_info() return tuple is packed into an _ExceptionDuringRun
# exception.
traceback.print_exception(*err.args) # pylint: disable=no-value-for-parameter
status = ERR
except _BaseCoverageException as err:
# A controlled error inside coverage.py: print the message to the user.
msg = err.args[0]
print(msg)
status = ERR
except SystemExit as err:
# The user called `sys.exit()`. Exit with their argument, if any.
if err.args:
status = err.args[0]
else:
status = None
return status
# Profiling using ox_profile. Install it from GitHub:
# pip install git+https://github.com/emin63/ox_profile.git
#
# $set_env.py: COVERAGE_PROFILE - Set to use ox_profile.
_profile = os.getenv("COVERAGE_PROFILE")
if _profile: # pragma: debugging
from ox_profile.core.launchers import SimpleLauncher # pylint: disable=import-error
original_main = main
def main( # pylint: disable=function-redefined
argv: list[str] | None = None,
) -> int | None:
"""A wrapper around main that profiles."""
profiler = SimpleLauncher.launch()
try:
return original_main(argv)
finally:
data, _ = profiler.query(re_filter="coverage", max_records=100)
print(profiler.show(query=data, limit=100, sep="", col=""))
profiler.cancel()
# coverage-7.4.4/coverage/collector.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Raw data collector for coverage.py."""
from __future__ import annotations
import functools
import os
import sys
from types import FrameType
from typing import (
cast, Any, Callable, Dict, List, Mapping, Set, TypeVar,
)
from coverage import env
from coverage.config import CoverageConfig
from coverage.data import CoverageData
from coverage.debug import short_stack
from coverage.disposition import FileDisposition
from coverage.exceptions import ConfigError
from coverage.misc import human_sorted_items, isolate_module
from coverage.plugin import CoveragePlugin
from coverage.pytracer import PyTracer
from coverage.sysmon import SysMonitor
from coverage.types import (
TArc, TFileDisposition, TTraceData, TTraceFn, TracerCore, TWarnFn,
)
os = isolate_module(os)
try:
# Use the C extension code when we can, for speed.
from coverage.tracer import CTracer, CFileDisposition
HAS_CTRACER = True
except ImportError:
# Couldn't import the C extension, maybe it isn't built.
if os.getenv("COVERAGE_CORE") == "ctrace": # pragma: part covered
# During testing, we use the COVERAGE_CORE environment variable
# to indicate that we've fiddled with the environment to test this
# fallback code. If we thought we had a C tracer, but couldn't import
# it, then exit quickly and clearly instead of dribbling confusing
# errors. I'm using sys.exit here instead of an exception because an
# exception here causes all sorts of other noise in unittest.
sys.stderr.write("*** COVERAGE_CORE is 'ctrace' but can't import CTracer!\n")
sys.exit(1)
HAS_CTRACER = False
T = TypeVar("T")
class Collector:
"""Collects trace data.
Creates a Tracer object for each thread, since they track stack
information. Each Tracer points to the same shared data, contributing
traced data points.
When the Collector is started, it creates a Tracer for the current thread,
and installs a function to create Tracers for each new thread started.
When the Collector is stopped, all active Tracers are stopped.
Threads started while the Collector is stopped will never have Tracers
associated with them.
"""
# The stack of active Collectors. Collectors are added here when started,
# and popped when stopped. Collectors on the stack are paused when not
# the top, and resumed when they become the top again.
_collectors: list[Collector] = []
# The concurrency settings we support here.
LIGHT_THREADS = {"greenlet", "eventlet", "gevent"}
def __init__(
self,
should_trace: Callable[[str, FrameType], TFileDisposition],
check_include: Callable[[str, FrameType], bool],
should_start_context: Callable[[FrameType], str | None] | None,
file_mapper: Callable[[str], str],
timid: bool,
branch: bool,
warn: TWarnFn,
concurrency: list[str],
metacov: bool,
) -> None:
"""Create a collector.
`should_trace` is a function, taking a file name and a frame, and
returning a `coverage.FileDisposition object`.
`check_include` is a function taking a file name and a frame. It returns
a boolean: True if the file should be traced, False if not.
`should_start_context` is a function taking a frame, and returning a
string. If the frame should be the start of a new context, the string
is the new context. If the frame should not be the start of a new
context, return None.
`file_mapper` is a function taking a filename, and returning a Unicode
filename. The result is the name that will be recorded in the data
file.
If `timid` is true, then a slower simpler trace function will be
used. This is important for some environments where manipulation of
tracing functions make the faster more sophisticated trace function not
operate properly.
If `branch` is true, then branches will be measured. This involves
collecting data on which statements followed each other (arcs). Use
`get_arc_data` to get the arc data.
`warn` is a warning function, taking a single string message argument
and an optional slug argument which will be a string or None, to be
used if a warning needs to be issued.
`concurrency` is a list of strings indicating the concurrency libraries
in use. Valid values are "greenlet", "eventlet", "gevent", or "thread"
(the default). "thread" can be combined with one of the other three.
Other values are ignored.
"""
self.should_trace = should_trace
self.check_include = check_include
self.should_start_context = should_start_context
self.file_mapper = file_mapper
self.branch = branch
self.warn = warn
self.concurrency = concurrency
assert isinstance(self.concurrency, list), f"Expected a list: {self.concurrency!r}"
self.pid = os.getpid()
self.covdata: CoverageData
self.threading = None
self.static_context: str | None = None
self.origin = short_stack()
self.concur_id_func = None
self._trace_class: type[TracerCore]
self.file_disposition_class: type[TFileDisposition]
core: str | None
if timid:
core = "pytrace"
else:
core = os.getenv("COVERAGE_CORE")
if core == "sysmon" and not env.PYBEHAVIOR.pep669:
self.warn("sys.monitoring isn't available, using default core", slug="no-sysmon")
core = None
if not core:
# Once we're comfortable with sysmon as a default:
# if env.PYBEHAVIOR.pep669 and self.should_start_context is None:
# core = "sysmon"
if HAS_CTRACER:
core = "ctrace"
else:
core = "pytrace"
if core == "sysmon":
self._trace_class = SysMonitor
self._core_kwargs = {"tool_id": 3 if metacov else 1}
self.file_disposition_class = FileDisposition
self.supports_plugins = False
self.packed_arcs = False
self.systrace = False
elif core == "ctrace":
self._trace_class = CTracer
self._core_kwargs = {}
self.file_disposition_class = CFileDisposition
self.supports_plugins = True
self.packed_arcs = True
self.systrace = True
elif core == "pytrace":
self._trace_class = PyTracer
self._core_kwargs = {}
self.file_disposition_class = FileDisposition
self.supports_plugins = False
self.packed_arcs = False
self.systrace = True
else:
raise ConfigError(f"Unknown core value: {core!r}")
# We can handle a few concurrency options here, but only one at a time.
concurrencies = set(self.concurrency)
unknown = concurrencies - CoverageConfig.CONCURRENCY_CHOICES
if unknown:
show = ", ".join(sorted(unknown))
raise ConfigError(f"Unknown concurrency choices: {show}")
light_threads = concurrencies & self.LIGHT_THREADS
if len(light_threads) > 1:
show = ", ".join(sorted(light_threads))
raise ConfigError(f"Conflicting concurrency settings: {show}")
do_threading = False
tried = "nothing" # to satisfy pylint
try:
if "greenlet" in concurrencies:
tried = "greenlet"
import greenlet
self.concur_id_func = greenlet.getcurrent
elif "eventlet" in concurrencies:
tried = "eventlet"
import eventlet.greenthread # pylint: disable=import-error,useless-suppression
self.concur_id_func = eventlet.greenthread.getcurrent
elif "gevent" in concurrencies:
tried = "gevent"
import gevent # pylint: disable=import-error,useless-suppression
self.concur_id_func = gevent.getcurrent
if "thread" in concurrencies:
do_threading = True
except ImportError as ex:
msg = f"Couldn't trace with concurrency={tried}, the module isn't installed."
raise ConfigError(msg) from ex
if self.concur_id_func and not hasattr(self._trace_class, "concur_id_func"):
raise ConfigError(
"Can't support concurrency={} with {}, only threads are supported.".format(
tried, self.tracer_name(),
),
)
if do_threading or not concurrencies:
# It's important to import threading only if we need it. If
# it's imported early, and the program being measured uses
# gevent, then gevent's monkey-patching won't work properly.
import threading
self.threading = threading
self.reset()
def __repr__(self) -> str:
return f"<Collector at {id(self):#x}: {self.tracer_name()}>"
def use_data(self, covdata: CoverageData, context: str | None) -> None:
"""Use `covdata` for recording data."""
self.covdata = covdata
self.static_context = context
self.covdata.set_context(self.static_context)
def tracer_name(self) -> str:
"""Return the class name of the tracer we're using."""
return self._trace_class.__name__
def _clear_data(self) -> None:
"""Clear out existing data, but stay ready for more collection."""
# We used to use self.data.clear(), but that would remove filename
# keys and data values that were still in use higher up the stack
# when we are called as part of switch_context.
for d in self.data.values():
d.clear()
for tracer in self.tracers:
tracer.reset_activity()
def reset(self) -> None:
"""Clear collected data, and prepare to collect more."""
# The trace data we are collecting.
self.data: TTraceData = {}
# A dictionary mapping file names to file tracer plugin names that will
# handle them.
self.file_tracers: dict[str, str] = {}
self.disabled_plugins: set[str] = set()
# The .should_trace_cache attribute is a cache from file names to
# coverage.FileDisposition objects, or None. When a file is first
# considered for tracing, a FileDisposition is obtained from
# Coverage.should_trace. Its .trace attribute indicates whether the
# file should be traced or not. If it should be, a plugin with dynamic
# file names can decide not to trace it based on the dynamic file name
# being excluded by the inclusion rules, in which case the
# FileDisposition will be replaced by None in the cache.
if env.PYPY:
import __pypy__ # pylint: disable=import-error
# Alex Gaynor said:
# should_trace_cache is a strictly growing key: once a key is in
# it, it never changes. Further, the keys used to access it are
# generally constant, given sufficient context. That is to say, at
# any given point _trace() is called, pypy is able to know the key.
# This is because the key is determined by the physical source code
# line, and that's invariant with the call site.
#
# This property of a dict with immutable keys, combined with
# call-site-constant keys is a match for PyPy's module dict,
# which is optimized for such workloads.
#
# This gives a 20% benefit on the workload described at
# https://bitbucket.org/pypy/pypy/issue/1871/10x-slower-than-cpython-under-coverage
self.should_trace_cache = __pypy__.newdict("module")
else:
self.should_trace_cache = {}
# Our active Tracers.
self.tracers: list[TracerCore] = []
self._clear_data()
def _start_tracer(self) -> TTraceFn | None:
"""Start a new Tracer object, and store it in self.tracers."""
tracer = self._trace_class(**self._core_kwargs)
tracer.data = self.data
tracer.trace_arcs = self.branch
tracer.should_trace = self.should_trace
tracer.should_trace_cache = self.should_trace_cache
tracer.warn = self.warn
if hasattr(tracer, "concur_id_func"):
tracer.concur_id_func = self.concur_id_func
if hasattr(tracer, "file_tracers"):
tracer.file_tracers = self.file_tracers
if hasattr(tracer, "threading"):
tracer.threading = self.threading
if hasattr(tracer, "check_include"):
tracer.check_include = self.check_include
if hasattr(tracer, "should_start_context"):
tracer.should_start_context = self.should_start_context
if hasattr(tracer, "switch_context"):
tracer.switch_context = self.switch_context
if hasattr(tracer, "disable_plugin"):
tracer.disable_plugin = self.disable_plugin
fn = tracer.start()
self.tracers.append(tracer)
return fn
# The trace function has to be set individually on each thread before
# execution begins. Ironically, the only support the threading module has
# for running code before the thread main is the tracing function. So we
# install this as a trace function, and the first time it's called, it does
# the real trace installation.
#
# New in 3.12: threading.settrace_all_threads: https://github.com/python/cpython/pull/96681
def _installation_trace(self, frame: FrameType, event: str, arg: Any) -> TTraceFn | None:
"""Called on new threads, installs the real tracer."""
# Remove ourselves as the trace function.
sys.settrace(None)
# Install the real tracer.
fn: TTraceFn | None = self._start_tracer()
# Invoke the real trace function with the current event, to be sure
# not to lose an event.
if fn:
fn = fn(frame, event, arg)
# Return the new trace function to continue tracing in this scope.
return fn
def start(self) -> None:
"""Start collecting trace information."""
# We may be a new collector in a forked process. The old process'
# collectors will be in self._collectors, but they won't be usable.
# Find them and discard them.
keep_collectors = []
for c in self._collectors:
if c.pid == self.pid:
keep_collectors.append(c)
else:
c.post_fork()
self._collectors[:] = keep_collectors
if self._collectors:
self._collectors[-1].pause()
self.tracers = []
try:
# Install the tracer on this thread.
self._start_tracer()
except:
if self._collectors:
self._collectors[-1].resume()
raise
# If _start_tracer succeeded, then we add ourselves to the global
# stack of collectors.
self._collectors.append(self)
# Install our installation tracer in threading, to jump-start other
# threads.
if self.systrace and self.threading:
self.threading.settrace(self._installation_trace)
def stop(self) -> None:
"""Stop collecting trace information."""
assert self._collectors
if self._collectors[-1] is not self:
print("self._collectors:")
for c in self._collectors:
print(f" {c!r}\n{c.origin}")
assert self._collectors[-1] is self, (
f"Expected current collector to be {self!r}, but it's {self._collectors[-1]!r}"
)
self.pause()
# Remove this Collector from the stack, and resume the one underneath (if any).
self._collectors.pop()
if self._collectors:
self._collectors[-1].resume()
def pause(self) -> None:
"""Pause tracing, but be prepared to `resume`."""
for tracer in self.tracers:
tracer.stop()
stats = tracer.get_stats()
if stats:
print("\nCoverage.py tracer stats:")
for k, v in human_sorted_items(stats.items()):
print(f"{k:>20}: {v}")
if self.threading:
self.threading.settrace(None)
def resume(self) -> None:
"""Resume tracing after a `pause`."""
for tracer in self.tracers:
tracer.start()
if self.systrace:
if self.threading:
self.threading.settrace(self._installation_trace)
else:
self._start_tracer()
def post_fork(self) -> None:
"""After a fork, tracers might need to adjust."""
for tracer in self.tracers:
if hasattr(tracer, "post_fork"):
tracer.post_fork()
def _activity(self) -> bool:
"""Has any activity been traced?
Returns a boolean, True if any trace function was invoked.
"""
return any(tracer.activity() for tracer in self.tracers)
def switch_context(self, new_context: str | None) -> None:
"""Switch to a new dynamic context."""
context: str | None
self.flush_data()
if self.static_context:
context = self.static_context
if new_context:
context += "|" + new_context
else:
context = new_context
self.covdata.set_context(context)
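The context-combining rule in `switch_context` above can be sketched on its own (`combined_context` is a hypothetical helper mirroring the branching in the method): a static context and a dynamic context are joined with "|" when both are present.

```python
from __future__ import annotations

# A standalone sketch of the context-combining rule in switch_context
# above: a static context and a dynamic context are joined with "|" when
# both are present. combined_context is a hypothetical helper.
def combined_context(static: str | None, dynamic: str | None) -> str | None:
    if static:
        if dynamic:
            return static + "|" + dynamic
        return static
    return dynamic
```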
def disable_plugin(self, disposition: TFileDisposition) -> None:
"""Disable the plugin mentioned in `disposition`."""
file_tracer = disposition.file_tracer
assert file_tracer is not None
plugin = file_tracer._coverage_plugin
plugin_name = plugin._coverage_plugin_name
self.warn(f"Disabling plug-in {plugin_name!r} due to previous exception")
plugin._coverage_enabled = False
disposition.trace = False
@functools.lru_cache(maxsize=None) # pylint: disable=method-cache-max-size-none
def cached_mapped_file(self, filename: str) -> str:
"""A locally cached version of file names mapped through file_mapper."""
return self.file_mapper(filename)
def mapped_file_dict(self, d: Mapping[str, T]) -> dict[str, T]:
"""Return a dict like d, but with keys modified by file_mapper."""
# The call to list(items()) ensures that the GIL protects the dictionary
# iterator against concurrent modifications by tracers running
# in other threads. We try three times in case of concurrent
# access, hoping to get a clean copy.
runtime_err = None
for _ in range(3): # pragma: part covered
try:
items = list(d.items())
except RuntimeError as ex: # pragma: cant happen
runtime_err = ex
else:
break
else: # pragma: cant happen
assert isinstance(runtime_err, Exception)
raise runtime_err
return {self.cached_mapped_file(k): v for k, v in items if v}
def plugin_was_disabled(self, plugin: CoveragePlugin) -> None:
"""Record that `plugin` was disabled during the run."""
self.disabled_plugins.add(plugin._coverage_plugin_name)
def flush_data(self) -> bool:
"""Save the collected data to our associated `CoverageData`.
Data may have also been saved along the way. This forces the
last of the data to be saved.
Returns True if there was data to save, False if not.
"""
if not self._activity():
return False
if self.branch:
if self.packed_arcs:
# Unpack the line number pairs packed into integers. See
# tracer.c:CTracer_record_pair for the C code that creates
# these packed ints.
arc_data: dict[str, list[TArc]] = {}
packed_data = cast(Dict[str, Set[int]], self.data)
# The list() here and in the inner loop are to get a clean copy
# even as tracers are continuing to add data.
for fname, packeds in list(packed_data.items()):
tuples = []
for packed in list(packeds):
l1 = packed & 0xFFFFF
l2 = (packed & (0xFFFFF << 20)) >> 20
if packed & (1 << 40):
l1 *= -1
if packed & (1 << 41):
l2 *= -1
tuples.append((l1, l2))
arc_data[fname] = tuples
else:
arc_data = cast(Dict[str, List[TArc]], self.data)
self.covdata.add_arcs(self.mapped_file_dict(arc_data))
else:
line_data = cast(Dict[str, Set[int]], self.data)
self.covdata.add_lines(self.mapped_file_dict(line_data))
file_tracers = {
k: v for k, v in self.file_tracers.items()
if v not in self.disabled_plugins
}
self.covdata.add_file_tracers(self.mapped_file_dict(file_tracers))
self._clear_data()
return True
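The packed-arc decoding in `flush_data` above can be exercised round-trip. This standalone sketch (an assumption inferred from the unpacking masks, not the actual C code in `tracer.c:CTracer_record_pair`) packs two signed line numbers into one int and recovers them with the same masks and sign bits:

```python
# Standalone sketch (illustrative, not coverage.py source): round-trip
# the packed-arc encoding unpacked in flush_data() above. Each line
# number occupies 20 bits; bits 40 and 41 carry the signs.
def pack_arc(l1: int, l2: int) -> int:
    packed = (abs(l1) & 0xFFFFF) | ((abs(l2) & 0xFFFFF) << 20)
    if l1 < 0:
        packed |= 1 << 40
    if l2 < 0:
        packed |= 1 << 41
    return packed

def unpack_arc(packed: int) -> tuple[int, int]:
    # Same masks and sign bits as the unpacking loop in flush_data().
    l1 = packed & 0xFFFFF
    l2 = (packed & (0xFFFFF << 20)) >> 20
    if packed & (1 << 40):
        l1 *= -1
    if packed & (1 << 41):
        l2 *= -1
    return (l1, l2)
```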
# coverage-7.4.4/coverage/config.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Config file for coverage.py"""
from __future__ import annotations
import collections
import configparser
import copy
import os
import os.path
import re
from typing import (
Any, Callable, Iterable, Union,
)
from coverage.exceptions import ConfigError
from coverage.misc import isolate_module, human_sorted_items, substitute_variables
from coverage.tomlconfig import TomlConfigParser, TomlDecodeError
from coverage.types import (
TConfigurable, TConfigSectionIn, TConfigValueIn, TConfigSectionOut,
TConfigValueOut, TPluginConfig,
)
os = isolate_module(os)
class HandyConfigParser(configparser.ConfigParser):
"""Our specialization of ConfigParser."""
def __init__(self, our_file: bool) -> None:
"""Create the HandyConfigParser.
`our_file` is True if this config file is specifically for coverage,
False if we are examining another config file (tox.ini, setup.cfg)
for possible settings.
"""
super().__init__(interpolation=None)
self.section_prefixes = ["coverage:"]
if our_file:
self.section_prefixes.append("")
def read( # type: ignore[override]
self,
filenames: Iterable[str],
encoding_unused: str | None = None,
) -> list[str]:
"""Read a file name as UTF-8 configuration data."""
return super().read(filenames, encoding="utf-8")
def real_section(self, section: str) -> str | None:
"""Get the actual name of a section."""
for section_prefix in self.section_prefixes:
real_section = section_prefix + section
has = super().has_section(real_section)
if has:
return real_section
return None
def has_option(self, section: str, option: str) -> bool:
real_section = self.real_section(section)
if real_section is not None:
return super().has_option(real_section, option)
return False
def has_section(self, section: str) -> bool:
return bool(self.real_section(section))
def options(self, section: str) -> list[str]:
real_section = self.real_section(section)
if real_section is not None:
return super().options(real_section)
raise ConfigError(f"No section: {section!r}")
def get_section(self, section: str) -> TConfigSectionOut:
"""Get the contents of a section, as a dictionary."""
d: dict[str, TConfigValueOut] = {}
for opt in self.options(section):
d[opt] = self.get(section, opt)
return d
def get(self, section: str, option: str, *args: Any, **kwargs: Any) -> str: # type: ignore
"""Get a value, replacing environment variables also.
The arguments are the same as `ConfigParser.get`, but in the found
value, ``$WORD`` or ``${WORD}`` are replaced by the value of the
environment variable ``WORD``.
Returns the finished value.
"""
for section_prefix in self.section_prefixes:
real_section = section_prefix + section
if super().has_option(real_section, option):
break
else:
raise ConfigError(f"No option {option!r} in section: {section!r}")
v: str = super().get(real_section, option, *args, **kwargs)
v = substitute_variables(v, os.environ)
return v
def getlist(self, section: str, option: str) -> list[str]:
"""Read a list of strings.
The value of `section` and `option` is treated as a comma- and newline-
separated list of strings. Each value is stripped of white space.
Returns the list of strings.
"""
value_list = self.get(section, option)
values = []
for value_line in value_list.split("\n"):
for value in value_line.split(","):
value = value.strip()
if value:
values.append(value)
return values
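The parsing done by `getlist()` above can be condensed into one comprehension. A standalone sketch (the helper name is illustrative, not part of coverage.py): split on newlines and commas, strip whitespace, and drop empty entries.

```python
# Illustrative standalone equivalent of HandyConfigParser.getlist():
# the raw option value is a comma- and newline-separated list of strings.
def split_config_list(value: str) -> list[str]:
    return [
        item.strip()
        for line in value.split("\n")
        for item in line.split(",")
        if item.strip()
    ]
```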
def getregexlist(self, section: str, option: str) -> list[str]:
"""Read a list of full-line regexes.
The value of `section` and `option` is treated as a newline-separated
list of regexes. Each value is stripped of white space.
Returns the list of strings.
"""
line_list = self.get(section, option)
value_list = []
for value in line_list.splitlines():
value = value.strip()
try:
re.compile(value)
except re.error as e:
raise ConfigError(
f"Invalid [{section}].{option} value {value!r}: {e}",
) from e
if value:
value_list.append(value)
return value_list
TConfigParser = Union[HandyConfigParser, TomlConfigParser]
# The default line exclusion regexes.
DEFAULT_EXCLUDE = [
r"#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(cover|COVER)",
]
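A quick sanity check of what the default exclusion regex above matches: the colon is optional, and each word alternates between an all-lowercase and an all-uppercase form, so mixed case is not matched.

```python
import re

# The default exclusion regex from DEFAULT_EXCLUDE above.
DEFAULT_EXCLUDE_RE = r"#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(cover|COVER)"

# All-lowercase and all-uppercase forms match, with or without a colon.
assert re.search(DEFAULT_EXCLUDE_RE, "x = 1  # pragma: no cover")
assert re.search(DEFAULT_EXCLUDE_RE, "# PRAGMA NO COVER")
# Mixed case does not: each alternation is all-lower or all-upper.
assert not re.search(DEFAULT_EXCLUDE_RE, "# pragma: No Cover")
```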
# The default partial branch regexes, to be modified by the user.
DEFAULT_PARTIAL = [
r"#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(branch|BRANCH)",
]
# The default partial branch regexes, based on Python semantics.
# These are any Python branching constructs that can't actually execute all
# their branches.
DEFAULT_PARTIAL_ALWAYS = [
"while (True|1|False|0):",
"if (True|1|False|0):",
]
class CoverageConfig(TConfigurable, TPluginConfig):
"""Coverage.py configuration.
The attributes of this class are the various settings that control the
operation of coverage.py.
"""
# pylint: disable=too-many-instance-attributes
def __init__(self) -> None:
"""Initialize the configuration attributes to their defaults."""
# Metadata about the config.
# We tried to read these config files.
self.attempted_config_files: list[str] = []
# We did read these config files, but maybe didn't find any content for us.
self.config_files_read: list[str] = []
# The file that gave us our configuration.
self.config_file: str | None = None
self._config_contents: bytes | None = None
# Defaults for [run] and [report]
self._include = None
self._omit = None
# Defaults for [run]
self.branch = False
self.command_line: str | None = None
self.concurrency: list[str] = []
self.context: str | None = None
self.cover_pylib = False
self.data_file = ".coverage"
self.debug: list[str] = []
self.debug_file: str | None = None
self.disable_warnings: list[str] = []
self.dynamic_context: str | None = None
self.parallel = False
self.plugins: list[str] = []
self.relative_files = False
self.run_include: list[str] = []
self.run_omit: list[str] = []
self.sigterm = False
self.source: list[str] | None = None
self.source_pkgs: list[str] = []
self.timid = False
self._crash: str | None = None
# Defaults for [report]
self.exclude_list = DEFAULT_EXCLUDE[:]
self.exclude_also: list[str] = []
self.fail_under = 0.0
self.format: str | None = None
self.ignore_errors = False
self.include_namespace_packages = False
self.report_include: list[str] | None = None
self.report_omit: list[str] | None = None
self.partial_always_list = DEFAULT_PARTIAL_ALWAYS[:]
self.partial_list = DEFAULT_PARTIAL[:]
self.precision = 0
self.report_contexts: list[str] | None = None
self.show_missing = False
self.skip_covered = False
self.skip_empty = False
self.sort: str | None = None
# Defaults for [html]
self.extra_css: str | None = None
self.html_dir = "htmlcov"
self.html_skip_covered: bool | None = None
self.html_skip_empty: bool | None = None
self.html_title = "Coverage report"
self.show_contexts = False
# Defaults for [xml]
self.xml_output = "coverage.xml"
self.xml_package_depth = 99
# Defaults for [json]
self.json_output = "coverage.json"
self.json_pretty_print = False
self.json_show_contexts = False
# Defaults for [lcov]
self.lcov_output = "coverage.lcov"
# Defaults for [paths]
self.paths: dict[str, list[str]] = {}
# Options for plugins
self.plugin_options: dict[str, TConfigSectionOut] = {}
MUST_BE_LIST = {
"debug", "concurrency", "plugins",
"report_omit", "report_include",
"run_omit", "run_include",
}
def from_args(self, **kwargs: TConfigValueIn) -> None:
"""Read config values from `kwargs`."""
for k, v in kwargs.items():
if v is not None:
if k in self.MUST_BE_LIST and isinstance(v, str):
v = [v]
setattr(self, k, v)
def from_file(self, filename: str, warn: Callable[[str], None], our_file: bool) -> bool:
"""Read configuration from a .rc file.
`filename` is a file name to read.
`our_file` is True if this config file is specifically for coverage,
False if we are examining another config file (tox.ini, setup.cfg)
for possible settings.
        Returns True if the file could be read and it had some coverage.py
        settings in it, False otherwise.
"""
_, ext = os.path.splitext(filename)
cp: TConfigParser
if ext == ".toml":
cp = TomlConfigParser(our_file)
else:
cp = HandyConfigParser(our_file)
self.attempted_config_files.append(filename)
try:
files_read = cp.read(filename)
except (configparser.Error, TomlDecodeError) as err:
            raise ConfigError(f"Couldn't read config file {filename}: {err}") from err
if not files_read:
return False
self.config_files_read.extend(map(os.path.abspath, files_read))
any_set = False
try:
for option_spec in self.CONFIG_FILE_OPTIONS:
was_set = self._set_attr_from_config_option(cp, *option_spec)
if was_set:
any_set = True
except ValueError as err:
            raise ConfigError(f"Couldn't read config file {filename}: {err}") from err
# Check that there are no unrecognized options.
all_options = collections.defaultdict(set)
for option_spec in self.CONFIG_FILE_OPTIONS:
section, option = option_spec[1].split(":")
all_options[section].add(option)
for section, options in all_options.items():
real_section = cp.real_section(section)
if real_section:
for unknown in set(cp.options(section)) - options:
warn(
"Unrecognized option '[{}] {}=' in config file {}".format(
real_section, unknown, filename,
),
)
# [paths] is special
if cp.has_section("paths"):
for option in cp.options("paths"):
self.paths[option] = cp.getlist("paths", option)
any_set = True
# plugins can have options
for plugin in self.plugins:
if cp.has_section(plugin):
self.plugin_options[plugin] = cp.get_section(plugin)
any_set = True
# Was this file used as a config file? If it's specifically our file,
# then it was used. If we're piggybacking on someone else's file,
# then it was only used if we found some settings in it.
if our_file:
used = True
else:
used = any_set
if used:
self.config_file = os.path.abspath(filename)
with open(filename, "rb") as f:
self._config_contents = f.read()
return used
def copy(self) -> CoverageConfig:
"""Return a copy of the configuration."""
return copy.deepcopy(self)
CONCURRENCY_CHOICES = {"thread", "gevent", "greenlet", "eventlet", "multiprocessing"}
CONFIG_FILE_OPTIONS = [
# These are *args for _set_attr_from_config_option:
# (attr, where, type_="")
#
# attr is the attribute to set on the CoverageConfig object.
# where is the section:name to read from the configuration file.
# type_ is the optional type to apply, by using .getTYPE to read the
# configuration value from the file.
# [run]
("branch", "run:branch", "boolean"),
("command_line", "run:command_line"),
("concurrency", "run:concurrency", "list"),
("context", "run:context"),
("cover_pylib", "run:cover_pylib", "boolean"),
("data_file", "run:data_file"),
("debug", "run:debug", "list"),
("debug_file", "run:debug_file"),
("disable_warnings", "run:disable_warnings", "list"),
("dynamic_context", "run:dynamic_context"),
("parallel", "run:parallel", "boolean"),
("plugins", "run:plugins", "list"),
("relative_files", "run:relative_files", "boolean"),
("run_include", "run:include", "list"),
("run_omit", "run:omit", "list"),
("sigterm", "run:sigterm", "boolean"),
("source", "run:source", "list"),
("source_pkgs", "run:source_pkgs", "list"),
("timid", "run:timid", "boolean"),
("_crash", "run:_crash"),
# [report]
("exclude_list", "report:exclude_lines", "regexlist"),
("exclude_also", "report:exclude_also", "regexlist"),
("fail_under", "report:fail_under", "float"),
("format", "report:format"),
("ignore_errors", "report:ignore_errors", "boolean"),
("include_namespace_packages", "report:include_namespace_packages", "boolean"),
("partial_always_list", "report:partial_branches_always", "regexlist"),
("partial_list", "report:partial_branches", "regexlist"),
("precision", "report:precision", "int"),
("report_contexts", "report:contexts", "list"),
("report_include", "report:include", "list"),
("report_omit", "report:omit", "list"),
("show_missing", "report:show_missing", "boolean"),
("skip_covered", "report:skip_covered", "boolean"),
("skip_empty", "report:skip_empty", "boolean"),
("sort", "report:sort"),
# [html]
("extra_css", "html:extra_css"),
("html_dir", "html:directory"),
("html_skip_covered", "html:skip_covered", "boolean"),
("html_skip_empty", "html:skip_empty", "boolean"),
("html_title", "html:title"),
("show_contexts", "html:show_contexts", "boolean"),
# [xml]
("xml_output", "xml:output"),
("xml_package_depth", "xml:package_depth", "int"),
# [json]
("json_output", "json:output"),
("json_pretty_print", "json:pretty_print", "boolean"),
("json_show_contexts", "json:show_contexts", "boolean"),
# [lcov]
("lcov_output", "lcov:output"),
]
def _set_attr_from_config_option(
self,
cp: TConfigParser,
attr: str,
where: str,
type_: str = "",
) -> bool:
"""Set an attribute on self if it exists in the ConfigParser.
Returns True if the attribute was set.
"""
section, option = where.split(":")
if cp.has_option(section, option):
method = getattr(cp, "get" + type_)
setattr(self, attr, method(section, option))
return True
return False
def get_plugin_options(self, plugin: str) -> TConfigSectionOut:
"""Get a dictionary of options for the plugin named `plugin`."""
return self.plugin_options.get(plugin, {})
def set_option(self, option_name: str, value: TConfigValueIn | TConfigSectionIn) -> None:
"""Set an option in the configuration.
`option_name` is a colon-separated string indicating the section and
option name. For example, the ``branch`` option in the ``[run]``
section of the config file would be indicated with `"run:branch"`.
`value` is the new value for the option.
"""
# Special-cased options.
if option_name == "paths":
self.paths = value # type: ignore[assignment]
return
# Check all the hard-coded options.
for option_spec in self.CONFIG_FILE_OPTIONS:
attr, where = option_spec[:2]
if where == option_name:
setattr(self, attr, value)
return
# See if it's a plugin option.
plugin_name, _, key = option_name.partition(":")
if key and plugin_name in self.plugins:
self.plugin_options.setdefault(plugin_name, {})[key] = value # type: ignore[index]
return
# If we get here, we didn't find the option.
raise ConfigError(f"No such option: {option_name!r}")
def get_option(self, option_name: str) -> TConfigValueOut | None:
"""Get an option from the configuration.
`option_name` is a colon-separated string indicating the section and
option name. For example, the ``branch`` option in the ``[run]``
section of the config file would be indicated with `"run:branch"`.
Returns the value of the option.
"""
# Special-cased options.
if option_name == "paths":
return self.paths # type: ignore[return-value]
# Check all the hard-coded options.
for option_spec in self.CONFIG_FILE_OPTIONS:
attr, where = option_spec[:2]
if where == option_name:
return getattr(self, attr) # type: ignore[no-any-return]
# See if it's a plugin option.
plugin_name, _, key = option_name.partition(":")
if key and plugin_name in self.plugins:
return self.plugin_options.get(plugin_name, {}).get(key)
# If we get here, we didn't find the option.
raise ConfigError(f"No such option: {option_name!r}")
def post_process_file(self, path: str) -> str:
"""Make final adjustments to a file path to make it usable."""
return os.path.expanduser(path)
def post_process(self) -> None:
"""Make final adjustments to settings to make them usable."""
self.data_file = self.post_process_file(self.data_file)
self.html_dir = self.post_process_file(self.html_dir)
self.xml_output = self.post_process_file(self.xml_output)
self.paths = {
k: [self.post_process_file(f) for f in v]
for k, v in self.paths.items()
}
self.exclude_list += self.exclude_also
def debug_info(self) -> list[tuple[str, Any]]:
"""Make a list of (name, value) pairs for writing debug info."""
return human_sorted_items(
(k, v) for k, v in self.__dict__.items() if not k.startswith("_")
)
def config_files_to_try(config_file: bool | str) -> list[tuple[str, bool, bool]]:
"""What config files should we try to read?
Returns a list of tuples:
(filename, is_our_file, was_file_specified)
"""
# Some API users were specifying ".coveragerc" to mean the same as
# True, so make it so.
if config_file == ".coveragerc":
config_file = True
specified_file = (config_file is not True)
if not specified_file:
# No file was specified. Check COVERAGE_RCFILE.
rcfile = os.getenv("COVERAGE_RCFILE")
if rcfile:
config_file = rcfile
specified_file = True
if not specified_file:
# Still no file specified. Default to .coveragerc
config_file = ".coveragerc"
assert isinstance(config_file, str)
files_to_try = [
(config_file, True, specified_file),
("setup.cfg", False, False),
("tox.ini", False, False),
("pyproject.toml", False, False),
]
return files_to_try
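The fallback chain in `config_files_to_try` above can be checked in isolation. This sketch mirrors it (the helper name is illustrative; the environment is passed as a dict so the logic is testable without touching `os.environ`):

```python
# Minimal mirror of config_files_to_try() above (illustrative helper).
def files_to_try(config_file, environ):
    if config_file == ".coveragerc":   # Some callers pass this to mean True.
        config_file = True
    specified = config_file is not True
    if not specified:
        # No file was specified; check COVERAGE_RCFILE.
        rcfile = environ.get("COVERAGE_RCFILE")
        if rcfile:
            config_file, specified = rcfile, True
    if not specified:
        config_file = ".coveragerc"
    return [
        (config_file, True, specified),
        ("setup.cfg", False, False),
        ("tox.ini", False, False),
        ("pyproject.toml", False, False),
    ]
```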
def read_coverage_config(
config_file: bool | str,
warn: Callable[[str], None],
**kwargs: TConfigValueIn,
) -> CoverageConfig:
"""Read the coverage.py configuration.
Arguments:
config_file: a boolean or string, see the `Coverage` class for the
tricky details.
warn: a function to issue warnings.
all others: keyword arguments from the `Coverage` class, used for
setting values in the configuration.
Returns:
config:
config is a CoverageConfig object read from the appropriate
configuration file.
"""
# Build the configuration from a number of sources:
# 1) defaults:
config = CoverageConfig()
# 2) from a file:
if config_file:
files_to_try = config_files_to_try(config_file)
for fname, our_file, specified_file in files_to_try:
config_read = config.from_file(fname, warn, our_file=our_file)
if config_read:
break
if specified_file:
raise ConfigError(f"Couldn't read {fname!r} as a config file")
# 3) from environment variables:
env_data_file = os.getenv("COVERAGE_FILE")
if env_data_file:
config.data_file = env_data_file
# $set_env.py: COVERAGE_DEBUG - Debug options: https://coverage.rtfd.io/cmd.html#debug
debugs = os.getenv("COVERAGE_DEBUG")
if debugs:
config.debug.extend(d.strip() for d in debugs.split(","))
# 4) from constructor arguments:
config.from_args(**kwargs)
# Once all the config has been collected, there's a little post-processing
# to do.
config.post_process()
return config
# coverage-7.4.4/coverage/context.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Determine contexts for coverage.py"""
from __future__ import annotations
from types import FrameType
from typing import cast, Callable, Sequence
def combine_context_switchers(
context_switchers: Sequence[Callable[[FrameType], str | None]],
) -> Callable[[FrameType], str | None] | None:
"""Create a single context switcher from multiple switchers.
`context_switchers` is a list of functions that take a frame as an
argument and return a string to use as the new context label.
Returns a function that composites `context_switchers` functions, or None
if `context_switchers` is an empty list.
When invoked, the combined switcher calls `context_switchers` one-by-one
until a string is returned. The combined switcher returns None if all
`context_switchers` return None.
"""
if not context_switchers:
return None
if len(context_switchers) == 1:
return context_switchers[0]
def should_start_context(frame: FrameType) -> str | None:
"""The combiner for multiple context switchers."""
for switcher in context_switchers:
new_context = switcher(frame)
if new_context is not None:
return new_context
return None
return should_start_context
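The first-non-None composition above can be demonstrated with a minimal mirror of `combine_context_switchers` (all names below are illustrative, not coverage.py API):

```python
import sys

# Minimal mirror of combine_context_switchers(): the combined switcher
# returns the first non-None label, or None if every switcher declines.
def combine(switchers):
    def combined(frame):
        for switcher in switchers:
            context = switcher(frame)
            if context is not None:
                return context
        return None
    return combined

def by_test_name(frame):
    # Label frames running in a test_* function, decline otherwise.
    name = frame.f_code.co_name
    return name if name.startswith("test") else None

switcher = combine([by_test_name, lambda frame: "fallback"])

def test_example():
    return switcher(sys._getframe())

def not_a_test():
    return switcher(sys._getframe())
```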
def should_start_context_test_function(frame: FrameType) -> str | None:
"""Is this frame calling a test_* function?"""
co_name = frame.f_code.co_name
if co_name.startswith("test") or co_name == "runTest":
return qualname_from_frame(frame)
return None
def qualname_from_frame(frame: FrameType) -> str | None:
"""Get a qualified name for the code running in `frame`."""
co = frame.f_code
fname = co.co_name
method = None
if co.co_argcount and co.co_varnames[0] == "self":
self = frame.f_locals.get("self", None)
method = getattr(self, fname, None)
if method is None:
func = frame.f_globals.get(fname)
if func is None:
return None
return cast(str, func.__module__ + "." + fname)
func = getattr(method, "__func__", None)
if func is None:
cls = self.__class__
return cast(str, cls.__module__ + "." + cls.__name__ + "." + fname)
return cast(str, func.__module__ + "." + func.__qualname__)
# coverage-7.4.4/coverage/control.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Central control stuff for coverage.py."""
from __future__ import annotations
import atexit
import collections
import contextlib
import os
import os.path
import platform
import signal
import sys
import threading
import time
import warnings
from types import FrameType
from typing import (
cast,
Any, Callable, IO, Iterable, Iterator, List,
)
from coverage import env
from coverage.annotate import AnnotateReporter
from coverage.collector import Collector, HAS_CTRACER
from coverage.config import CoverageConfig, read_coverage_config
from coverage.context import should_start_context_test_function, combine_context_switchers
from coverage.data import CoverageData, combine_parallel_data
from coverage.debug import (
DebugControl, NoDebugging, short_stack, write_formatted_info, relevant_environment_display,
)
from coverage.disposition import disposition_debug_msg
from coverage.exceptions import ConfigError, CoverageException, CoverageWarning, PluginError
from coverage.files import PathAliases, abs_file, relative_filename, set_relative_directory
from coverage.html import HtmlReporter
from coverage.inorout import InOrOut
from coverage.jsonreport import JsonReporter
from coverage.lcovreport import LcovReporter
from coverage.misc import bool_or_none, join_regex
from coverage.misc import DefaultValue, ensure_dir_for_file, isolate_module
from coverage.multiproc import patch_multiprocessing
from coverage.plugin import FileReporter
from coverage.plugin_support import Plugins
from coverage.python import PythonFileReporter
from coverage.report import SummaryReporter
from coverage.report_core import render_report
from coverage.results import Analysis
from coverage.types import (
FilePath, TConfigurable, TConfigSectionIn, TConfigValueIn, TConfigValueOut,
TFileDisposition, TLineNo, TMorf,
)
from coverage.xmlreport import XmlReporter
os = isolate_module(os)
@contextlib.contextmanager
def override_config(cov: Coverage, **kwargs: TConfigValueIn) -> Iterator[None]:
"""Temporarily tweak the configuration of `cov`.
The arguments are applied to `cov.config` with the `from_args` method.
At the end of the with-statement, the old configuration is restored.
"""
original_config = cov.config
cov.config = cov.config.copy()
try:
cov.config.from_args(**kwargs)
yield
finally:
cov.config = original_config
DEFAULT_DATAFILE = DefaultValue("MISSING")
_DEFAULT_DATAFILE = DEFAULT_DATAFILE # Just in case, for backwards compatibility
class Coverage(TConfigurable):
"""Programmatic access to coverage.py.
To use::
from coverage import Coverage
cov = Coverage()
cov.start()
#.. call your code ..
cov.stop()
cov.html_report(directory="covhtml")
A context manager is available to do the same thing::
cov = Coverage()
with cov.collect():
#.. call your code ..
cov.html_report(directory="covhtml")
Note: in keeping with Python custom, names starting with underscore are
not part of the public API. They might stop working at any point. Please
limit yourself to documented methods to avoid problems.
Methods can raise any of the exceptions described in :ref:`api_exceptions`.
"""
# The stack of started Coverage instances.
_instances: list[Coverage] = []
@classmethod
def current(cls) -> Coverage | None:
"""Get the latest started `Coverage` instance, if any.
Returns: a `Coverage` instance, or None.
.. versionadded:: 5.0
"""
if cls._instances:
return cls._instances[-1]
else:
return None
def __init__( # pylint: disable=too-many-arguments
self,
data_file: FilePath | DefaultValue | None = DEFAULT_DATAFILE,
data_suffix: str | bool | None = None,
cover_pylib: bool | None = None,
auto_data: bool = False,
timid: bool | None = None,
branch: bool | None = None,
config_file: FilePath | bool = True,
source: Iterable[str] | None = None,
source_pkgs: Iterable[str] | None = None,
omit: str | Iterable[str] | None = None,
include: str | Iterable[str] | None = None,
debug: Iterable[str] | None = None,
concurrency: str | Iterable[str] | None = None,
check_preimported: bool = False,
context: str | None = None,
messages: bool = False,
) -> None:
"""
Many of these arguments duplicate and override values that can be
provided in a configuration file. Parameters that are missing here
will use values from the config file.
`data_file` is the base name of the data file to use. The config value
defaults to ".coverage". None can be provided to prevent writing a data
file. `data_suffix` is appended (with a dot) to `data_file` to create
the final file name. If `data_suffix` is simply True, then a suffix is
created with the machine and process identity included.
`cover_pylib` is a boolean determining whether Python code installed
with the Python interpreter is measured. This includes the Python
standard library and any packages installed with the interpreter.
If `auto_data` is true, then any existing data file will be read when
coverage measurement starts, and data will be saved automatically when
measurement stops.
If `timid` is true, then a slower and simpler trace function will be
used. This is important for some environments where manipulation of
tracing functions breaks the faster trace function.
If `branch` is true, then branch coverage will be measured in addition
to the usual statement coverage.
`config_file` determines what configuration file to read:
* If it is ".coveragerc", it is interpreted as if it were True,
for backward compatibility.
* If it is a string, it is the name of the file to read. If the
file can't be read, it is an error.
        * If it is True, then a few standard file names are tried
          (".coveragerc", "setup.cfg", "tox.ini", "pyproject.toml"). It is
          not an error for these files to not be found.
* If it is False, then no configuration file is read.
`source` is a list of file paths or package names. Only code located
in the trees indicated by the file paths or package names will be
measured.
`source_pkgs` is a list of package names. It works the same as
`source`, but can be used to name packages where the name can also be
interpreted as a file path.
`include` and `omit` are lists of file name patterns. Files that match
`include` will be measured, files that match `omit` will not. Each
will also accept a single string argument.
`debug` is a list of strings indicating what debugging information is
desired.
`concurrency` is a string indicating the concurrency library being used
in the measured code. Without this, coverage.py will get incorrect
results if these libraries are in use. Valid strings are "greenlet",
"eventlet", "gevent", "multiprocessing", or "thread" (the default).
This can also be a list of these strings.
If `check_preimported` is true, then when coverage is started, the
already-imported files will be checked to see if they should be
measured by coverage. Importing measured files before coverage is
started can mean that code is missed.
        `context` is a string to use as the :ref:`static context
        <static_contexts>` label for collected data.
If `messages` is true, some messages will be printed to stdout
indicating what is happening.
.. versionadded:: 4.0
The `concurrency` parameter.
.. versionadded:: 4.2
The `concurrency` parameter can now be a list of strings.
.. versionadded:: 5.0
The `check_preimported` and `context` parameters.
.. versionadded:: 5.3
The `source_pkgs` parameter.
.. versionadded:: 6.0
The `messages` parameter.
"""
# Start self.config as a usable default configuration. It will soon be
# replaced with the real configuration.
self.config = CoverageConfig()
# data_file=None means no disk file at all. data_file missing means
# use the value from the config file.
self._no_disk = data_file is None
if isinstance(data_file, DefaultValue):
data_file = None
if data_file is not None:
data_file = os.fspath(data_file)
# This is injectable by tests.
self._debug_file: IO[str] | None = None
self._auto_load = self._auto_save = auto_data
self._data_suffix_specified = data_suffix
# Is it ok for no data to be collected?
self._warn_no_data = True
self._warn_unimported_source = True
self._warn_preimported_source = check_preimported
self._no_warn_slugs: list[str] = []
self._messages = messages
# A record of all the warnings that have been issued.
self._warnings: list[str] = []
# Other instance attributes, set with placebos or placeholders.
# More useful objects will be created later.
self._debug: DebugControl = NoDebugging()
self._inorout: InOrOut | None = None
self._plugins: Plugins = Plugins()
self._data: CoverageData | None = None
self._collector: Collector | None = None
self._metacov = False
self._file_mapper: Callable[[str], str] = abs_file
self._data_suffix = self._run_suffix = None
self._exclude_re: dict[str, str] = {}
self._old_sigterm: Callable[[int, FrameType | None], Any] | None = None
# State machine variables:
# Have we initialized everything?
self._inited = False
self._inited_for_start = False
# Have we started collecting and not stopped it?
self._started = False
# Should we write the debug output?
self._should_write_debug = True
# Build our configuration from a number of sources.
if not isinstance(config_file, bool):
config_file = os.fspath(config_file)
self.config = read_coverage_config(
config_file=config_file,
warn=self._warn,
data_file=data_file,
cover_pylib=cover_pylib,
timid=timid,
branch=branch,
parallel=bool_or_none(data_suffix),
source=source,
source_pkgs=source_pkgs,
run_omit=omit,
run_include=include,
debug=debug,
report_omit=omit,
report_include=include,
concurrency=concurrency,
context=context,
)
# If we have sub-process measurement happening automatically, then we
# want any explicit creation of a Coverage object to mean, this process
# is already coverage-aware, so don't auto-measure it. By now, the
# auto-creation of a Coverage object has already happened. But we can
# find it and tell it not to save its data.
if not env.METACOV:
_prevent_sub_process_measurement()
def _init(self) -> None:
"""Set all the initial state.
This is called by the public methods to initialize state. This lets us
construct a :class:`Coverage` object, then tweak its state before this
function is called.
"""
if self._inited:
return
self._inited = True
# Create and configure the debugging controller.
self._debug = DebugControl(self.config.debug, self._debug_file, self.config.debug_file)
if self._debug.should("process"):
self._debug.write("Coverage._init")
if "multiprocessing" in (self.config.concurrency or ()):
# Multi-processing uses parallel for the subprocesses, so also use
# it for the main process.
self.config.parallel = True
# _exclude_re is a dict that maps exclusion list names to compiled regexes.
self._exclude_re = {}
set_relative_directory()
if self.config.relative_files:
self._file_mapper = relative_filename
# Load plugins
self._plugins = Plugins.load_plugins(self.config.plugins, self.config, self._debug)
# Run configuring plugins.
for plugin in self._plugins.configurers:
# We need an object with set_option and get_option. Either self or
# self.config will do. Choosing randomly stops people from doing
# other things with those objects, against the public API. Yes,
# this is a bit childish. :)
plugin.configure([self, self.config][int(time.time()) % 2])
def _post_init(self) -> None:
"""Stuff to do after everything is initialized."""
if self._should_write_debug:
self._should_write_debug = False
self._write_startup_debug()
# "[run] _crash" will raise an exception if the value is close by in
# the call stack, for testing error handling.
if self.config._crash and self.config._crash in short_stack():
raise RuntimeError(f"Crashing because called by {self.config._crash}")
def _write_startup_debug(self) -> None:
"""Write out debug info at startup if needed."""
wrote_any = False
with self._debug.without_callers():
if self._debug.should("config"):
config_info = self.config.debug_info()
write_formatted_info(self._debug.write, "config", config_info)
wrote_any = True
if self._debug.should("sys"):
write_formatted_info(self._debug.write, "sys", self.sys_info())
for plugin in self._plugins:
header = "sys: " + plugin._coverage_plugin_name
info = plugin.sys_info()
write_formatted_info(self._debug.write, header, info)
wrote_any = True
if self._debug.should("pybehave"):
write_formatted_info(self._debug.write, "pybehave", env.debug_info())
wrote_any = True
if wrote_any:
write_formatted_info(self._debug.write, "end", ())
def _should_trace(self, filename: str, frame: FrameType) -> TFileDisposition:
"""Decide whether to trace execution in `filename`.
Calls `_should_trace_internal`, and returns the FileDisposition.
"""
assert self._inorout is not None
disp = self._inorout.should_trace(filename, frame)
if self._debug.should("trace"):
self._debug.write(disposition_debug_msg(disp))
return disp
def _check_include_omit_etc(self, filename: str, frame: FrameType) -> bool:
"""Check a file name against the include/omit/etc, rules, verbosely.
Returns a boolean: True if the file should be traced, False if not.
"""
assert self._inorout is not None
reason = self._inorout.check_include_omit_etc(filename, frame)
if self._debug.should("trace"):
if not reason:
msg = f"Including {filename!r}"
else:
msg = f"Not including {filename!r}: {reason}"
self._debug.write(msg)
return not reason
def _warn(self, msg: str, slug: str | None = None, once: bool = False) -> None:
"""Use `msg` as a warning.
For warning suppression, use `slug` as the shorthand.
If `once` is true, only show this warning once (determined by the
slug).
"""
if not self._no_warn_slugs:
self._no_warn_slugs = list(self.config.disable_warnings)
if slug in self._no_warn_slugs:
# Don't issue the warning
return
self._warnings.append(msg)
if slug:
msg = f"{msg} ({slug})"
if self._debug.should("pid"):
msg = f"[{os.getpid()}] {msg}"
warnings.warn(msg, category=CoverageWarning, stacklevel=2)
if once:
assert slug is not None
self._no_warn_slugs.append(slug)
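The slug-based suppression in `_warn` can be sketched in isolation. This is a simplified stand-in, not the actual method: a slug on the disable list silences the warning, and `once=True` adds the slug after the first issue so repeats are dropped.

```python
import warnings

no_warn_slugs = ["no-data-collected"]
issued = []

def warn(msg, slug=None, once=False):
    if slug in no_warn_slugs:
        return  # Suppressed: don't issue the warning.
    issued.append(msg)
    warnings.warn(f"{msg} ({slug})" if slug else msg, category=UserWarning, stacklevel=2)
    if once:
        assert slug is not None
        no_warn_slugs.append(slug)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    warn("Conflicting dynamic contexts", slug="dynamic-conflict", once=True)
    warn("Conflicting dynamic contexts", slug="dynamic-conflict", once=True)
    warn("No data was collected.", slug="no-data-collected")

# Only the first "dynamic-conflict" warning was recorded.
assert issued == ["Conflicting dynamic contexts"]
```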
def _message(self, msg: str) -> None:
"""Write a message to the user, if configured to do so."""
if self._messages:
print(msg)
def get_option(self, option_name: str) -> TConfigValueOut | None:
"""Get an option from the configuration.
`option_name` is a colon-separated string indicating the section and
option name. For example, the ``branch`` option in the ``[run]``
section of the config file would be indicated with `"run:branch"`.
Returns the value of the option. The type depends on the option
selected.
As a special case, an `option_name` of ``"paths"`` will return a
dictionary with the entire ``[paths]`` section value.
.. versionadded:: 4.0
"""
return self.config.get_option(option_name)
def set_option(self, option_name: str, value: TConfigValueIn | TConfigSectionIn) -> None:
"""Set an option in the configuration.
`option_name` is a colon-separated string indicating the section and
option name. For example, the ``branch`` option in the ``[run]``
section of the config file would be indicated with ``"run:branch"``.
`value` is the new value for the option. This should be an
appropriate Python value. For example, use True for booleans, not the
string ``"True"``.
As an example, calling:
.. code-block:: python
cov.set_option("run:branch", True)
has the same effect as this configuration file:
.. code-block:: ini
[run]
branch = True
As a special case, an `option_name` of ``"paths"`` will replace the
entire ``[paths]`` section. The value should be a dictionary.
.. versionadded:: 4.0
"""
self.config.set_option(option_name, value)
def load(self) -> None:
"""Load previously-collected coverage data from the data file."""
self._init()
if self._collector is not None:
self._collector.reset()
should_skip = self.config.parallel and not os.path.exists(self.config.data_file)
if not should_skip:
self._init_data(suffix=None)
self._post_init()
if not should_skip:
assert self._data is not None
self._data.read()
def _init_for_start(self) -> None:
"""Initialization for start()"""
# Construct the collector.
concurrency: list[str] = self.config.concurrency or []
if "multiprocessing" in concurrency:
if self.config.config_file is None:
raise ConfigError("multiprocessing requires a configuration file")
patch_multiprocessing(rcfile=self.config.config_file)
dycon = self.config.dynamic_context
if not dycon or dycon == "none":
context_switchers = []
elif dycon == "test_function":
context_switchers = [should_start_context_test_function]
else:
raise ConfigError(f"Don't understand dynamic_context setting: {dycon!r}")
context_switchers.extend(
plugin.dynamic_context for plugin in self._plugins.context_switchers
)
should_start_context = combine_context_switchers(context_switchers)
self._collector = Collector(
should_trace=self._should_trace,
check_include=self._check_include_omit_etc,
should_start_context=should_start_context,
file_mapper=self._file_mapper,
timid=self.config.timid,
branch=self.config.branch,
warn=self._warn,
concurrency=concurrency,
metacov=self._metacov,
)
suffix = self._data_suffix_specified
if suffix:
if not isinstance(suffix, str):
# if data_suffix=True, use .machinename.pid.random
suffix = True
elif self.config.parallel:
if suffix is None:
suffix = True
elif not isinstance(suffix, str):
suffix = bool(suffix)
else:
suffix = None
self._init_data(suffix)
assert self._data is not None
self._collector.use_data(self._data, self.config.context)
# Early warning if we aren't going to be able to support plugins.
if self._plugins.file_tracers and not self._collector.supports_plugins:
self._warn(
"Plugin file tracers ({}) aren't supported with {}".format(
", ".join(
plugin._coverage_plugin_name
for plugin in self._plugins.file_tracers
),
self._collector.tracer_name(),
),
)
for plugin in self._plugins.file_tracers:
plugin._coverage_enabled = False
# Create the file classifying substructure.
self._inorout = InOrOut(
config=self.config,
warn=self._warn,
debug=(self._debug if self._debug.should("trace") else None),
include_namespace_packages=self.config.include_namespace_packages,
)
self._inorout.plugins = self._plugins
self._inorout.disp_class = self._collector.file_disposition_class
# It's useful to write debug info after initing for start.
self._should_write_debug = True
# Register our clean-up handlers.
atexit.register(self._atexit)
if self.config.sigterm:
is_main = (threading.current_thread() == threading.main_thread())
if is_main and not env.WINDOWS:
# The Python docs seem to imply that SIGTERM works uniformly even
# on Windows, but that's not my experience, and this agrees:
# https://stackoverflow.com/questions/35772001/x/35792192#35792192
self._old_sigterm = signal.signal( # type: ignore[assignment]
signal.SIGTERM, self._on_sigterm,
)
def _init_data(self, suffix: str | bool | None) -> None:
"""Create a data file if we don't have one yet."""
if self._data is None:
# Create the data file. We do this at construction time so that the
# data file will be written into the directory where the process
# started rather than wherever the process eventually chdir'd to.
ensure_dir_for_file(self.config.data_file)
self._data = CoverageData(
basename=self.config.data_file,
suffix=suffix,
warn=self._warn,
debug=self._debug,
no_disk=self._no_disk,
)
def start(self) -> None:
"""Start measuring code coverage.
Coverage measurement is only collected in functions called after
:meth:`start` is invoked. Statements in the same scope as
:meth:`start` won't be measured.
Once you invoke :meth:`start`, you must also call :meth:`stop`
eventually, or your process might not shut down cleanly.
The :meth:`collect` method is a context manager to handle both
starting and stopping collection.
"""
self._init()
if not self._inited_for_start:
self._inited_for_start = True
self._init_for_start()
self._post_init()
assert self._collector is not None
assert self._inorout is not None
# Issue warnings for possible problems.
self._inorout.warn_conflicting_settings()
# See if we think some code that would eventually be measured has
# already been imported.
if self._warn_preimported_source:
self._inorout.warn_already_imported_files()
if self._auto_load:
self.load()
self._collector.start()
self._started = True
self._instances.append(self)
def stop(self) -> None:
"""Stop measuring code coverage."""
if self._instances:
if self._instances[-1] is self:
self._instances.pop()
if self._started:
assert self._collector is not None
self._collector.stop()
self._started = False
@contextlib.contextmanager
def collect(self) -> Iterator[None]:
"""A context manager to start/stop coverage measurement collection.
.. versionadded:: 7.3
"""
self.start()
try:
yield
finally:
self.stop()
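The `collect()` wrapper above follows the standard start/stop-with-guaranteed-cleanup shape; here it is sketched on a toy class (hypothetical `Recorder`, not part of coverage) to show that `stop()` runs even when the body raises.

```python
import contextlib

class Recorder:
    def __init__(self):
        self.events = []

    def start(self):
        self.events.append("start")

    def stop(self):
        self.events.append("stop")

    @contextlib.contextmanager
    def collect(self):
        # Start collection, and guarantee stop() via finally.
        self.start()
        try:
            yield
        finally:
            self.stop()

r = Recorder()
try:
    with r.collect():
        raise ValueError("boom")
except ValueError:
    pass
# stop() ran despite the exception.
assert r.events == ["start", "stop"]
```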
def _atexit(self, event: str = "atexit") -> None:
"""Clean up on process shutdown."""
if self._debug.should("process"):
self._debug.write(f"{event}: pid: {os.getpid()}, instance: {self!r}")
if self._started:
self.stop()
if self._auto_save or event == "sigterm":
self.save()
def _on_sigterm(self, signum_unused: int, frame_unused: FrameType | None) -> None:
"""A handler for signal.SIGTERM."""
self._atexit("sigterm")
# Statements after here won't be seen by metacov because we just wrote
# the data, and are about to kill the process.
signal.signal(signal.SIGTERM, self._old_sigterm) # pragma: not covered
os.kill(os.getpid(), signal.SIGTERM) # pragma: not covered
def erase(self) -> None:
"""Erase previously collected coverage data.
This removes the in-memory data collected in this session as well as
discarding the data file.
"""
self._init()
self._post_init()
if self._collector is not None:
self._collector.reset()
self._init_data(suffix=None)
assert self._data is not None
self._data.erase(parallel=self.config.parallel)
self._data = None
self._inited_for_start = False
def switch_context(self, new_context: str) -> None:
"""Switch to a new dynamic context.
`new_context` is a string to use as the :ref:`dynamic context
<dynamic_contexts>` label for collected data. If a :ref:`static
context <static_contexts>` is in use, the static and dynamic context
labels will be joined together with a pipe character.
Coverage collection must be started already.
.. versionadded:: 5.0
"""
if not self._started: # pragma: part started
raise CoverageException("Cannot switch context, coverage is not started")
assert self._collector is not None
if self._collector.should_start_context:
self._warn("Conflicting dynamic contexts", slug="dynamic-conflict", once=True)
self._collector.switch_context(new_context)
def clear_exclude(self, which: str = "exclude") -> None:
"""Clear the exclude list."""
self._init()
setattr(self.config, which + "_list", [])
self._exclude_regex_stale()
def exclude(self, regex: str, which: str = "exclude") -> None:
"""Exclude source lines from execution consideration.
A number of lists of regular expressions are maintained. Each list
selects lines that are treated differently during reporting.
`which` determines which list is modified. The "exclude" list selects
lines that are not considered executable at all. The "partial" list
indicates lines with branches that are not taken.
`regex` is a regular expression. The regex is added to the specified
list. If any of the regexes in the list is found in a line, the line
is marked for special treatment during reporting.
"""
self._init()
excl_list = getattr(self.config, which + "_list")
excl_list.append(regex)
self._exclude_regex_stale()
def _exclude_regex_stale(self) -> None:
"""Drop all the compiled exclusion regexes, a list was modified."""
self._exclude_re.clear()
def _exclude_regex(self, which: str) -> str:
"""Return a regex string for the given exclusion list."""
if which not in self._exclude_re:
excl_list = getattr(self.config, which + "_list")
self._exclude_re[which] = join_regex(excl_list)
return self._exclude_re[which]
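`_exclude_regex` delegates the joining to a `join_regex` helper. A minimal sketch of how such a helper can fold an exclusion list into one pattern (a hypothetical stand-in, not coverage's actual implementation): OR the patterns together, each wrapped in a non-capturing group so the alternation binds correctly.

```python
import re

def join_regex(regexes):
    # Each pattern gets its own (?:...) group so "|" can't leak into
    # the middle of a pattern.
    return "|".join(f"(?:{r})" for r in regexes)

exclude_list = [r"#\s*pragma: no cover", r"raise NotImplementedError"]
exclude_re = re.compile(join_regex(exclude_list))

assert exclude_re.search("x = 1  # pragma: no cover")
assert exclude_re.search("    raise NotImplementedError")
assert not exclude_re.search("x = 1")
```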
def get_exclude_list(self, which: str = "exclude") -> list[str]:
"""Return a list of excluded regex strings.
`which` indicates which list is desired. See :meth:`exclude` for the
lists that are available, and their meaning.
"""
self._init()
return cast(List[str], getattr(self.config, which + "_list"))
def save(self) -> None:
"""Save the collected coverage data to the data file."""
data = self.get_data()
data.write()
def _make_aliases(self) -> PathAliases:
"""Create a PathAliases from our configuration."""
aliases = PathAliases(
debugfn=(self._debug.write if self._debug.should("pathmap") else None),
relative=self.config.relative_files,
)
for paths in self.config.paths.values():
result = paths[0]
for pattern in paths[1:]:
aliases.add(pattern, result)
return aliases
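The loop above encodes the ``[paths]`` convention: the first entry of each list is the canonical result path, and the remaining entries are patterns remapped onto it. A minimal illustration of that unpacking (the sample paths are hypothetical, and `PathAliases` does real glob-style matching that this skips):

```python
paths = {
    "source": ["src/", "/jenkins/build/*/src", r"c:\myproj\src"],
}

alias_pairs = []
for alias_list in paths.values():
    result = alias_list[0]          # canonical path
    for pattern in alias_list[1:]:  # patterns that map onto it
        alias_pairs.append((pattern, result))

assert alias_pairs == [
    ("/jenkins/build/*/src", "src/"),
    (r"c:\myproj\src", "src/"),
]
```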
def combine(
self,
data_paths: Iterable[str] | None = None,
strict: bool = False,
keep: bool = False,
) -> None:
"""Combine together a number of similarly-named coverage data files.
All coverage data files whose name starts with `data_file` (from the
coverage() constructor) will be read, and combined together into the
current measurements.
`data_paths` is a list of files or directories from which data should
be combined. If no list is passed, then the data files from the
directory indicated by the current data file (probably the current
directory) will be combined.
If `strict` is true, then it is an error to attempt to combine when
there are no data files to combine.
If `keep` is true, then original input data files won't be deleted.
.. versionadded:: 4.0
The `data_paths` parameter.
.. versionadded:: 4.3
The `strict` parameter.
.. versionadded:: 5.5
The `keep` parameter.
"""
self._init()
self._init_data(suffix=None)
self._post_init()
self.get_data()
assert self._data is not None
combine_parallel_data(
self._data,
aliases=self._make_aliases(),
data_paths=data_paths,
strict=strict,
keep=keep,
message=self._message,
)
def get_data(self) -> CoverageData:
"""Get the collected data.
Also warn about various problems collecting data.
Returns a :class:`coverage.CoverageData`, the collected coverage data.
.. versionadded:: 4.0
"""
self._init()
self._init_data(suffix=None)
self._post_init()
if self._collector is not None:
for plugin in self._plugins:
if not plugin._coverage_enabled:
self._collector.plugin_was_disabled(plugin)
if self._collector.flush_data():
self._post_save_work()
assert self._data is not None
return self._data
def _post_save_work(self) -> None:
"""After saving data, look for warnings, post-work, etc.
Warn about things that should have happened but didn't.
Look for un-executed files.
"""
assert self._data is not None
assert self._inorout is not None
# If there are still entries in the source_pkgs_unmatched list,
# then we never encountered those packages.
if self._warn_unimported_source:
self._inorout.warn_unimported_source()
# Find out if we got any data.
if not self._data and self._warn_no_data:
self._warn("No data was collected.", slug="no-data-collected")
# Touch all the files that could have executed, so that we can
# mark completely un-executed files as 0% covered.
file_paths = collections.defaultdict(list)
for file_path, plugin_name in self._inorout.find_possibly_unexecuted_files():
file_path = self._file_mapper(file_path)
file_paths[plugin_name].append(file_path)
for plugin_name, paths in file_paths.items():
self._data.touch_files(paths, plugin_name)
# Backward compatibility with version 1.
def analysis(self, morf: TMorf) -> tuple[str, list[TLineNo], list[TLineNo], str]:
"""Like `analysis2` but doesn't return excluded line numbers."""
f, s, _, m, mf = self.analysis2(morf)
return f, s, m, mf
def analysis2(
self,
morf: TMorf,
) -> tuple[str, list[TLineNo], list[TLineNo], list[TLineNo], str]:
"""Analyze a module.
`morf` is a module or a file name. It will be analyzed to determine
its coverage statistics. The return value is a 5-tuple:
* The file name for the module.
* A list of line numbers of executable statements.
* A list of line numbers of excluded statements.
* A list of line numbers of statements not run (missing from
execution).
* A readable formatted string of the missing line numbers.
The analysis uses the source file itself and the current measured
coverage data.
"""
analysis = self._analyze(morf)
return (
analysis.filename,
sorted(analysis.statements),
sorted(analysis.excluded),
sorted(analysis.missing),
analysis.missing_formatted(),
)
def _analyze(self, it: FileReporter | TMorf) -> Analysis:
"""Analyze a single morf or code unit.
Returns an `Analysis` object.
"""
# All reporting comes through here, so do reporting initialization.
self._init()
self._post_init()
data = self.get_data()
if isinstance(it, FileReporter):
fr = it
else:
fr = self._get_file_reporter(it)
return Analysis(data, self.config.precision, fr, self._file_mapper)
def _get_file_reporter(self, morf: TMorf) -> FileReporter:
"""Get a FileReporter for a module or file name."""
assert self._data is not None
plugin = None
file_reporter: str | FileReporter = "python"
if isinstance(morf, str):
mapped_morf = self._file_mapper(morf)
plugin_name = self._data.file_tracer(mapped_morf)
if plugin_name:
plugin = self._plugins.get(plugin_name)
if plugin:
file_reporter = plugin.file_reporter(mapped_morf)
if file_reporter is None:
raise PluginError(
"Plugin {!r} did not provide a file reporter for {!r}.".format(
plugin._coverage_plugin_name, morf,
),
)
if file_reporter == "python":
file_reporter = PythonFileReporter(morf, self)
assert isinstance(file_reporter, FileReporter)
return file_reporter
def _get_file_reporters(self, morfs: Iterable[TMorf] | None = None) -> list[FileReporter]:
"""Get a list of FileReporters for a list of modules or file names.
For each module or file name in `morfs`, find a FileReporter. Return
the list of FileReporters.
If `morfs` is a single module or file name, this returns a list of one
FileReporter. If `morfs` is empty or None, then the list of all files
measured is used to find the FileReporters.
"""
assert self._data is not None
if not morfs:
morfs = self._data.measured_files()
# Be sure we have a collection.
if not isinstance(morfs, (list, tuple, set)):
morfs = [morfs] # type: ignore[list-item]
file_reporters = [self._get_file_reporter(morf) for morf in morfs]
return file_reporters
def _prepare_data_for_reporting(self) -> None:
"""Re-map data before reporting, to get implicit "combine" behavior."""
if self.config.paths:
mapped_data = CoverageData(warn=self._warn, debug=self._debug, no_disk=True)
if self._data is not None:
mapped_data.update(self._data, aliases=self._make_aliases())
self._data = mapped_data
def report(
self,
morfs: Iterable[TMorf] | None = None,
show_missing: bool | None = None,
ignore_errors: bool | None = None,
file: IO[str] | None = None,
omit: str | list[str] | None = None,
include: str | list[str] | None = None,
skip_covered: bool | None = None,
contexts: list[str] | None = None,
skip_empty: bool | None = None,
precision: int | None = None,
sort: str | None = None,
output_format: str | None = None,
) -> float:
"""Write a textual summary report to `file`.
Each module in `morfs` is listed, with counts of statements, executed
statements, missing statements, and a list of lines missed.
If `show_missing` is true, then details of which lines or branches are
missing will be included in the report. If `ignore_errors` is true,
then a failure while reporting a single file will not stop the entire
report.
`file` is a file-like object, suitable for writing.
`output_format` determines the format, either "text" (the default),
"markdown", or "total".
`include` is a list of file name patterns. Files that match will be
included in the report. Files matching `omit` will not be included in
the report.
If `skip_covered` is true, don't report on files with 100% coverage.
If `skip_empty` is true, don't report on empty files (those that have
no statements).
`contexts` is a list of regular expression strings. Only data from
:ref:`dynamic contexts <dynamic_contexts>` that match one of those
expressions (using :func:`re.search <python:re.search>`) will be
included in the report.
`precision` is the number of digits to display after the decimal
point for percentages.
All of the arguments default to the settings read from the
:ref:`configuration file <config>`.
Returns a float, the total percentage covered.
.. versionadded:: 4.0
The `skip_covered` parameter.
.. versionadded:: 5.0
The `contexts` and `skip_empty` parameters.
.. versionadded:: 5.2
The `precision` parameter.
.. versionadded:: 7.0
The `output_format` parameter.
"""
self._prepare_data_for_reporting()
with override_config(
self,
ignore_errors=ignore_errors,
report_omit=omit,
report_include=include,
show_missing=show_missing,
skip_covered=skip_covered,
report_contexts=contexts,
skip_empty=skip_empty,
precision=precision,
sort=sort,
format=output_format,
):
reporter = SummaryReporter(self)
return reporter.report(morfs, outfile=file)
def annotate(
self,
morfs: Iterable[TMorf] | None = None,
directory: str | None = None,
ignore_errors: bool | None = None,
omit: str | list[str] | None = None,
include: str | list[str] | None = None,
contexts: list[str] | None = None,
) -> None:
"""Annotate a list of modules.
Each module in `morfs` is annotated. The source is written to a new
file, named with a ",cover" suffix, with each line prefixed with a
marker to indicate the coverage of the line. Covered lines have ">",
excluded lines have "-", and missing lines have "!".
See :meth:`report` for other arguments.
"""
self._prepare_data_for_reporting()
with override_config(
self,
ignore_errors=ignore_errors,
report_omit=omit,
report_include=include,
report_contexts=contexts,
):
reporter = AnnotateReporter(self)
reporter.report(morfs, directory=directory)
def html_report(
self,
morfs: Iterable[TMorf] | None = None,
directory: str | None = None,
ignore_errors: bool | None = None,
omit: str | list[str] | None = None,
include: str | list[str] | None = None,
extra_css: str | None = None,
title: str | None = None,
skip_covered: bool | None = None,
show_contexts: bool | None = None,
contexts: list[str] | None = None,
skip_empty: bool | None = None,
precision: int | None = None,
) -> float:
"""Generate an HTML report.
The HTML is written to `directory`. The file "index.html" is the
overview starting point, with links to more detailed pages for
individual modules.
`extra_css` is a path to a file of other CSS to apply on the page.
It will be copied into the HTML directory.
`title` is a text string (not HTML) to use as the title of the HTML
report.
See :meth:`report` for other arguments.
Returns a float, the total percentage covered.
.. note::
The HTML report files are generated incrementally based on the
source files and coverage results. If you modify the report files,
the changes will not be considered. You should be careful about
changing the files in the report folder.
"""
self._prepare_data_for_reporting()
with override_config(
self,
ignore_errors=ignore_errors,
report_omit=omit,
report_include=include,
html_dir=directory,
extra_css=extra_css,
html_title=title,
html_skip_covered=skip_covered,
show_contexts=show_contexts,
report_contexts=contexts,
html_skip_empty=skip_empty,
precision=precision,
):
reporter = HtmlReporter(self)
ret = reporter.report(morfs)
return ret
def xml_report(
self,
morfs: Iterable[TMorf] | None = None,
outfile: str | None = None,
ignore_errors: bool | None = None,
omit: str | list[str] | None = None,
include: str | list[str] | None = None,
contexts: list[str] | None = None,
skip_empty: bool | None = None,
) -> float:
"""Generate an XML report of coverage results.
The report is compatible with Cobertura reports.
Each module in `morfs` is included in the report. `outfile` is the
path to write the file to, "-" will write to stdout.
See :meth:`report` for other arguments.
Returns a float, the total percentage covered.
"""
self._prepare_data_for_reporting()
with override_config(
self,
ignore_errors=ignore_errors,
report_omit=omit,
report_include=include,
xml_output=outfile,
report_contexts=contexts,
skip_empty=skip_empty,
):
return render_report(self.config.xml_output, XmlReporter(self), morfs, self._message)
def json_report(
self,
morfs: Iterable[TMorf] | None = None,
outfile: str | None = None,
ignore_errors: bool | None = None,
omit: str | list[str] | None = None,
include: str | list[str] | None = None,
contexts: list[str] | None = None,
pretty_print: bool | None = None,
show_contexts: bool | None = None,
) -> float:
"""Generate a JSON report of coverage results.
Each module in `morfs` is included in the report. `outfile` is the
path to write the file to, "-" will write to stdout.
`pretty_print` is a boolean, whether to pretty-print the JSON output or not.
See :meth:`report` for other arguments.
Returns a float, the total percentage covered.
.. versionadded:: 5.0
"""
self._prepare_data_for_reporting()
with override_config(
self,
ignore_errors=ignore_errors,
report_omit=omit,
report_include=include,
json_output=outfile,
report_contexts=contexts,
json_pretty_print=pretty_print,
json_show_contexts=show_contexts,
):
return render_report(self.config.json_output, JsonReporter(self), morfs, self._message)
def lcov_report(
self,
morfs: Iterable[TMorf] | None = None,
outfile: str | None = None,
ignore_errors: bool | None = None,
omit: str | list[str] | None = None,
include: str | list[str] | None = None,
contexts: list[str] | None = None,
) -> float:
"""Generate an LCOV report of coverage results.
Each module in `morfs` is included in the report. `outfile` is the
path to write the file to, "-" will write to stdout.
See :meth:`report` for other arguments.
.. versionadded:: 6.3
"""
self._prepare_data_for_reporting()
with override_config(
self,
ignore_errors=ignore_errors,
report_omit=omit,
report_include=include,
lcov_output=outfile,
report_contexts=contexts,
):
return render_report(self.config.lcov_output, LcovReporter(self), morfs, self._message)
def sys_info(self) -> Iterable[tuple[str, Any]]:
"""Return a list of (key, value) pairs showing internal information."""
import coverage as covmod
self._init()
self._post_init()
def plugin_info(plugins: list[Any]) -> list[str]:
"""Make an entry for the sys_info from a list of plug-ins."""
entries = []
for plugin in plugins:
entry = plugin._coverage_plugin_name
if not plugin._coverage_enabled:
entry += " (disabled)"
entries.append(entry)
return entries
info = [
("coverage_version", covmod.__version__),
("coverage_module", covmod.__file__),
("core", self._collector.tracer_name() if self._collector is not None else "-none-"),
("CTracer", "available" if HAS_CTRACER else "unavailable"),
("plugins.file_tracers", plugin_info(self._plugins.file_tracers)),
("plugins.configurers", plugin_info(self._plugins.configurers)),
("plugins.context_switchers", plugin_info(self._plugins.context_switchers)),
("configs_attempted", self.config.attempted_config_files),
("configs_read", self.config.config_files_read),
("config_file", self.config.config_file),
("config_contents",
repr(self.config._config_contents) if self.config._config_contents else "-none-",
),
("data_file", self._data.data_filename() if self._data is not None else "-none-"),
("python", sys.version.replace("\n", "")),
("platform", platform.platform()),
("implementation", platform.python_implementation()),
("executable", sys.executable),
("def_encoding", sys.getdefaultencoding()),
("fs_encoding", sys.getfilesystemencoding()),
("pid", os.getpid()),
("cwd", os.getcwd()),
("path", sys.path),
("environment", [f"{k} = {v}" for k, v in relevant_environment_display(os.environ)]),
("command_line", " ".join(getattr(sys, "argv", ["-none-"]))),
]
if self._inorout is not None:
info.extend(self._inorout.sys_info())
info.extend(CoverageData.sys_info())
return info
# Mega debugging...
# $set_env.py: COVERAGE_DEBUG_CALLS - Lots and lots of output about calls to Coverage.
if int(os.getenv("COVERAGE_DEBUG_CALLS", 0)): # pragma: debugging
from coverage.debug import decorate_methods, show_calls
Coverage = decorate_methods( # type: ignore[misc]
show_calls(show_args=True),
butnot=["get_data"],
)(Coverage)
def process_startup() -> Coverage | None:
"""Call this at Python start-up to perhaps measure coverage.
If the environment variable COVERAGE_PROCESS_START is defined, coverage
measurement is started. The value of the variable is the config file
to use.
There are two ways to configure your Python installation to invoke this
function when Python starts:
#. Create or append to sitecustomize.py to add these lines::
import coverage
coverage.process_startup()
#. Create a .pth file in your Python installation containing::
import coverage; coverage.process_startup()
Returns the :class:`Coverage` instance that was started, or None if it was
not started by this call.
"""
cps = os.getenv("COVERAGE_PROCESS_START")
if not cps:
# No request for coverage, nothing to do.
return None
# This function can be called more than once in a process. This happens
# because some virtualenv configurations make the same directory visible
# twice in sys.path. This means that the .pth file will be found twice,
# and executed twice, executing this function twice. We set a global
# flag (an attribute on this function) to indicate that coverage.py has
# already been started, so we can avoid doing it twice.
#
# https://github.com/nedbat/coveragepy/issues/340 has more details.
if hasattr(process_startup, "coverage"):
# We've annotated this function before, so we must have already
# started coverage.py in this process. Nothing to do.
return None
cov = Coverage(config_file=cps)
process_startup.coverage = cov # type: ignore[attr-defined]
cov._warn_no_data = False
cov._warn_unimported_source = False
cov._warn_preimported_source = False
cov._auto_save = True
cov.start()
return cov
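The `hasattr(process_startup, "coverage")` check above is the function-attribute run-once idiom; sketched in isolation (toy function, not coverage's code), a second call becomes a no-op:

```python
def start_once():
    # An attribute on the function itself records that the work has
    # already happened in this process, so repeated calls (e.g. a .pth
    # file found twice on sys.path) do nothing.
    if hasattr(start_once, "started"):
        return None
    start_once.started = True
    return "started"

assert start_once() == "started"
assert start_once() is None
assert start_once() is None
```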
def _prevent_sub_process_measurement() -> None:
"""Stop any subprocess auto-measurement from writing data."""
auto_created_coverage = getattr(process_startup, "coverage", None)
if auto_created_coverage is not None:
auto_created_coverage._auto_save = False
/* coverage-7.4.4/coverage/ctracer/datastack.c */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#include "util.h"
#include "datastack.h"
#define STACK_DELTA 20
int
DataStack_init(Stats *pstats, DataStack *pdata_stack)
{
pdata_stack->depth = -1;
pdata_stack->stack = NULL;
pdata_stack->alloc = 0;
return RET_OK;
}
void
DataStack_dealloc(Stats *pstats, DataStack *pdata_stack)
{
int i;
for (i = 0; i < pdata_stack->alloc; i++) {
Py_XDECREF(pdata_stack->stack[i].file_data);
}
PyMem_Free(pdata_stack->stack);
}
int
DataStack_grow(Stats *pstats, DataStack *pdata_stack)
{
pdata_stack->depth++;
if (pdata_stack->depth >= pdata_stack->alloc) {
/* We've outgrown our data_stack array: make it bigger. */
int bigger = pdata_stack->alloc + STACK_DELTA;
DataStackEntry * bigger_data_stack = PyMem_Realloc(pdata_stack->stack, bigger * sizeof(DataStackEntry));
STATS( pstats->stack_reallocs++; )
if (bigger_data_stack == NULL) {
PyErr_NoMemory();
pdata_stack->depth--;
return RET_ERROR;
}
/* Zero the new entries. */
memset(bigger_data_stack + pdata_stack->alloc, 0, STACK_DELTA * sizeof(DataStackEntry));
pdata_stack->stack = bigger_data_stack;
pdata_stack->alloc = bigger;
}
return RET_OK;
}
/* coverage-7.4.4/coverage/ctracer/datastack.h */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_DATASTACK_H
#define _COVERAGE_DATASTACK_H
#include "util.h"
#include "stats.h"
/* An entry on the data stack. For each call frame, we need to record all
* the information needed for CTracer_handle_line to operate as quickly as
* possible.
*/
typedef struct DataStackEntry {
/* The current file_data set. Owned. */
PyObject * file_data;
/* The disposition object for this frame. A borrowed instance of CFileDisposition. */
PyObject * disposition;
/* The FileTracer handling this frame, or None if it's Python. Borrowed. */
PyObject * file_tracer;
/* The line number of the last line recorded, for tracing arcs.
-1 means there was no previous line, as when entering a code object.
*/
int last_line;
BOOL started_context;
} DataStackEntry;
/* A data stack is a dynamically allocated vector of DataStackEntry's. */
typedef struct DataStack {
int depth; /* The index of the last-used entry in stack. */
int alloc; /* number of entries allocated at stack. */
/* The file data at each level, or NULL if not recording. */
DataStackEntry * stack;
} DataStack;
int DataStack_init(Stats * pstats, DataStack *pdata_stack);
void DataStack_dealloc(Stats * pstats, DataStack *pdata_stack);
int DataStack_grow(Stats * pstats, DataStack *pdata_stack);
#endif /* _COVERAGE_DATASTACK_H */
/* coverage-7.4.4/coverage/ctracer/filedisp.c */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#include "util.h"
#include "filedisp.h"
void
CFileDisposition_dealloc(CFileDisposition *self)
{
Py_XDECREF(self->original_filename);
Py_XDECREF(self->canonical_filename);
Py_XDECREF(self->source_filename);
Py_XDECREF(self->trace);
Py_XDECREF(self->reason);
Py_XDECREF(self->file_tracer);
Py_XDECREF(self->has_dynamic_filename);
}
static PyMemberDef
CFileDisposition_members[] = {
{ "original_filename", T_OBJECT, offsetof(CFileDisposition, original_filename), 0,
PyDoc_STR("") },
{ "canonical_filename", T_OBJECT, offsetof(CFileDisposition, canonical_filename), 0,
PyDoc_STR("") },
{ "source_filename", T_OBJECT, offsetof(CFileDisposition, source_filename), 0,
PyDoc_STR("") },
{ "trace", T_OBJECT, offsetof(CFileDisposition, trace), 0,
PyDoc_STR("") },
{ "reason", T_OBJECT, offsetof(CFileDisposition, reason), 0,
PyDoc_STR("") },
{ "file_tracer", T_OBJECT, offsetof(CFileDisposition, file_tracer), 0,
PyDoc_STR("") },
{ "has_dynamic_filename", T_OBJECT, offsetof(CFileDisposition, has_dynamic_filename), 0,
PyDoc_STR("") },
{ NULL }
};
PyTypeObject
CFileDispositionType = {
PyVarObject_HEAD_INIT(NULL, 0)
"coverage.CFileDispositionType", /*tp_name*/
sizeof(CFileDisposition), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)CFileDisposition_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
"CFileDisposition objects", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
0, /* tp_methods */
CFileDisposition_members, /* tp_members */
0, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
0, /* tp_init */
0, /* tp_alloc */
0, /* tp_new */
};
/* coverage-7.4.4/coverage/ctracer/filedisp.h */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_FILEDISP_H
#define _COVERAGE_FILEDISP_H
#include "util.h"
#include "structmember.h"
typedef struct CFileDisposition {
PyObject_HEAD
PyObject * original_filename;
PyObject * canonical_filename;
PyObject * source_filename;
PyObject * trace;
PyObject * reason;
PyObject * file_tracer;
PyObject * has_dynamic_filename;
} CFileDisposition;
void CFileDisposition_dealloc(CFileDisposition *self);
extern PyTypeObject CFileDispositionType;
#endif /* _COVERAGE_FILEDISP_H */
/* coverage-7.4.4/coverage/ctracer/module.c */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#include "util.h"
#include "tracer.h"
#include "filedisp.h"
/* Module definition */
#define MODULE_DOC PyDoc_STR("Fast coverage tracer.")
static PyModuleDef
moduledef = {
PyModuleDef_HEAD_INIT,
"coverage.tracer",
MODULE_DOC,
-1,
NULL, /* methods */
NULL,
NULL, /* traverse */
NULL, /* clear */
NULL
};
PyObject *
PyInit_tracer(void)
{
PyObject * mod = PyModule_Create(&moduledef);
if (mod == NULL) {
return NULL;
}
if (CTracer_intern_strings() < 0) {
return NULL;
}
/* Initialize CTracer */
CTracerType.tp_new = PyType_GenericNew;
if (PyType_Ready(&CTracerType) < 0) {
Py_DECREF(mod);
return NULL;
}
Py_INCREF(&CTracerType);
if (PyModule_AddObject(mod, "CTracer", (PyObject *)&CTracerType) < 0) {
Py_DECREF(mod);
Py_DECREF(&CTracerType);
return NULL;
}
/* Initialize CFileDisposition */
CFileDispositionType.tp_new = PyType_GenericNew;
if (PyType_Ready(&CFileDispositionType) < 0) {
Py_DECREF(mod);
Py_DECREF(&CTracerType);
return NULL;
}
Py_INCREF(&CFileDispositionType);
if (PyModule_AddObject(mod, "CFileDisposition", (PyObject *)&CFileDispositionType) < 0) {
Py_DECREF(mod);
Py_DECREF(&CTracerType);
Py_DECREF(&CFileDispositionType);
return NULL;
}
return mod;
}
/* coverage-7.4.4/coverage/ctracer/stats.h */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_STATS_H
#define _COVERAGE_STATS_H
#include "util.h"
#if COLLECT_STATS
#define STATS(x) x
#else
#define STATS(x)
#endif
typedef struct Stats {
unsigned int calls; /* Need at least one member, but the rest only if needed. */
#if COLLECT_STATS
unsigned int lines;
unsigned int returns;
unsigned int others;
unsigned int files;
unsigned int stack_reallocs;
unsigned int errors;
unsigned int pycalls;
unsigned int start_context_calls;
#endif
} Stats;
#endif /* _COVERAGE_STATS_H */
/* coverage-7.4.4/coverage/ctracer/tracer.c */
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
/* C-based Tracer for coverage.py. */
#include "util.h"
#include "datastack.h"
#include "filedisp.h"
#include "tracer.h"
/* Python C API helpers. */
static int
pyint_as_int(PyObject * pyint, int *pint)
{
int the_int = (int)PyLong_AsLong(pyint);
if (the_int == -1 && PyErr_Occurred()) {
return RET_ERROR;
}
*pint = the_int;
return RET_OK;
}
/* Interned strings to speed GetAttr etc. */
static PyObject *str__coverage_plugin;
static PyObject *str__coverage_plugin_name;
static PyObject *str_dynamic_source_filename;
static PyObject *str_line_number_range;
int
CTracer_intern_strings(void)
{
int ret = RET_ERROR;
#define INTERN_STRING(v, s) \
v = PyUnicode_InternFromString(s); \
if (v == NULL) { \
goto error; \
}
INTERN_STRING(str__coverage_plugin, "_coverage_plugin")
INTERN_STRING(str__coverage_plugin_name, "_coverage_plugin_name")
INTERN_STRING(str_dynamic_source_filename, "dynamic_source_filename")
INTERN_STRING(str_line_number_range, "line_number_range")
ret = RET_OK;
error:
return ret;
}
static void CTracer_disable_plugin(CTracer *self, PyObject * disposition);
static int
CTracer_init(CTracer *self, PyObject *args_unused, PyObject *kwds_unused)
{
int ret = RET_ERROR;
if (DataStack_init(&self->stats, &self->data_stack) < 0) {
goto error;
}
self->pdata_stack = &self->data_stack;
self->context = Py_None;
Py_INCREF(self->context);
ret = RET_OK;
goto ok;
error:
STATS( self->stats.errors++; )
ok:
return ret;
}
static void
CTracer_dealloc(CTracer *self)
{
int i;
if (self->started) {
PyEval_SetTrace(NULL, NULL);
}
Py_XDECREF(self->should_trace);
Py_XDECREF(self->check_include);
Py_XDECREF(self->warn);
Py_XDECREF(self->concur_id_func);
Py_XDECREF(self->data);
Py_XDECREF(self->file_tracers);
Py_XDECREF(self->should_trace_cache);
Py_XDECREF(self->should_start_context);
Py_XDECREF(self->switch_context);
Py_XDECREF(self->context);
Py_XDECREF(self->disable_plugin);
DataStack_dealloc(&self->stats, &self->data_stack);
if (self->data_stacks) {
for (i = 0; i < self->data_stacks_used; i++) {
DataStack_dealloc(&self->stats, self->data_stacks + i);
}
PyMem_Free(self->data_stacks);
}
Py_XDECREF(self->data_stack_index);
Py_TYPE(self)->tp_free((PyObject*)self);
}
#if TRACE_LOG
/* Set debugging constants: a file substring and line number to start logging. */
static const char * start_file = "badasync.py";
static int start_line = 1;
static const char *
indent(int n)
{
static const char * spaces =
" "
" "
" "
" "
;
return spaces + strlen(spaces) - n*2;
}
static BOOL logging = FALSE;
static void
CTracer_showlog(CTracer * self, int lineno, PyObject * filename, const char * msg)
{
if (logging) {
int depth = self->pdata_stack->depth;
printf("%x: %s%3d ", (int)self, indent(depth), depth);
if (lineno) {
printf("%4d", lineno);
}
else {
printf(" ");
}
if (filename) {
PyObject *ascii = PyUnicode_AsASCIIString(filename);
printf(" %s", PyBytes_AS_STRING(ascii));
Py_DECREF(ascii);
}
if (msg) {
printf(" %s", msg);
}
printf("\n");
}
}
#define SHOWLOG(l,f,m) CTracer_showlog(self,l,f,m)
#else
#define SHOWLOG(l,f,m)
#endif /* TRACE_LOG */
#if WHAT_LOG
static const char * what_sym[] = {"CALL", "EXC ", "LINE", "RET "};
#endif
/* Record a pair of integers in self->pcur_entry->file_data. */
static int
CTracer_record_pair(CTracer *self, int l1, int l2)
{
int ret = RET_ERROR;
PyObject * packed_obj = NULL;
uint64 packed = 0;
// Conceptually, data is a set of tuples (l1, l2), but literally
// making a set of tuples would require us to construct a tuple just to
// see if we'd already recorded an arc. On many-times-executed code,
// that would mean we construct a tuple, find the tuple is already in the
// set, then discard the tuple. We can avoid that overhead by packing
// the two line numbers into one integer instead.
// See collector.py:flush_data for the Python code that unpacks this.
if (l1 < 0) {
packed |= (1LL << 40);
l1 = -l1;
}
if (l2 < 0) {
packed |= (1LL << 41);
l2 = -l2;
}
packed |= (((uint64)l2) << 20) + (uint64)l1;
packed_obj = PyLong_FromUnsignedLongLong(packed);
if (packed_obj == NULL) {
goto error;
}
if (PySet_Add(self->pcur_entry->file_data, packed_obj) < 0) {
goto error;
}
ret = RET_OK;
error:
Py_XDECREF(packed_obj);
return ret;
}
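The bit layout above packs an arc into a single integer: l1's magnitude in bits 0-19, l2's in bits 20-39, and sign flags at bits 40 and 41. A Python sketch mirroring that encoding (the real decoder lives in collector.py's flush_data; these helper names are hypothetical):

```python
def pack_arc(l1, l2):
    # Mirror CTracer_record_pair: sign flags at bits 40/41,
    # magnitudes in two 20-bit fields.
    packed = 0
    if l1 < 0:
        packed |= 1 << 40
        l1 = -l1
    if l2 < 0:
        packed |= 1 << 41
        l2 = -l2
    return packed | (l2 << 20) | l1

def unpack_arc(packed):
    l1 = packed & 0xFFFFF           # low 20 bits
    l2 = (packed >> 20) & 0xFFFFF   # next 20 bits
    if packed & (1 << 40):
        l1 = -l1
    if packed & (1 << 41):
        l2 = -l2
    return (l1, l2)
```

Packing this way lets the hot path add a plain int to a set instead of allocating a tuple per arc, which is the whole point of the C comment above.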
/* Set self->pdata_stack to the proper data_stack to use. */
static int
CTracer_set_pdata_stack(CTracer *self)
{
int ret = RET_ERROR;
PyObject * co_obj = NULL;
PyObject * stack_index = NULL;
if (self->concur_id_func != Py_None) {
int the_index = 0;
if (self->data_stack_index == NULL) {
PyObject * weakref = NULL;
weakref = PyImport_ImportModule("weakref");
if (weakref == NULL) {
goto error;
}
STATS( self->stats.pycalls++; )
self->data_stack_index = PyObject_CallMethod(weakref, "WeakKeyDictionary", NULL);
Py_XDECREF(weakref);
if (self->data_stack_index == NULL) {
goto error;
}
}
STATS( self->stats.pycalls++; )
co_obj = PyObject_CallObject(self->concur_id_func, NULL);
if (co_obj == NULL) {
goto error;
}
stack_index = PyObject_GetItem(self->data_stack_index, co_obj);
if (stack_index == NULL) {
/* PyObject_GetItem sets an exception if it didn't find the thing. */
PyErr_Clear();
/* A new concurrency object. Make a new data stack. */
the_index = self->data_stacks_used;
stack_index = PyLong_FromLong((long)the_index);
if (stack_index == NULL) {
goto error;
}
if (PyObject_SetItem(self->data_stack_index, co_obj, stack_index) < 0) {
goto error;
}
self->data_stacks_used++;
if (self->data_stacks_used >= self->data_stacks_alloc) {
int bigger = self->data_stacks_alloc + 10;
DataStack * bigger_stacks = PyMem_Realloc(self->data_stacks, bigger * sizeof(DataStack));
if (bigger_stacks == NULL) {
PyErr_NoMemory();
goto error;
}
self->data_stacks = bigger_stacks;
self->data_stacks_alloc = bigger;
}
DataStack_init(&self->stats, &self->data_stacks[the_index]);
}
else {
if (pyint_as_int(stack_index, &the_index) < 0) {
goto error;
}
}
self->pdata_stack = &self->data_stacks[the_index];
}
else {
self->pdata_stack = &self->data_stack;
}
ret = RET_OK;
error:
Py_XDECREF(co_obj);
Py_XDECREF(stack_index);
return ret;
}
/*
* Parts of the trace function.
*/
static int
CTracer_handle_call(CTracer *self, PyFrameObject *frame)
{
int ret = RET_ERROR;
int ret2;
/* Owned references that we clean up at the very end of the function. */
PyObject * disposition = NULL;
PyObject * plugin = NULL;
PyObject * plugin_name = NULL;
PyObject * next_tracename = NULL;
#ifdef RESUME
PyObject * pCode = NULL;
#endif
/* Borrowed references. */
PyObject * filename = NULL;
PyObject * disp_trace = NULL;
PyObject * tracename = NULL;
PyObject * file_tracer = NULL;
PyObject * has_dynamic_filename = NULL;
CFileDisposition * pdisp = NULL;
STATS( self->stats.calls++; )
/* Grow the stack. */
if (CTracer_set_pdata_stack(self) < 0) {
goto error;
}
if (DataStack_grow(&self->stats, self->pdata_stack) < 0) {
goto error;
}
self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth];
/* See if this frame begins a new context. */
if (self->should_start_context != Py_None && self->context == Py_None) {
PyObject * context;
/* We're looking for our context, ask should_start_context if this is the start. */
STATS( self->stats.start_context_calls++; )
STATS( self->stats.pycalls++; )
context = PyObject_CallFunctionObjArgs(self->should_start_context, frame, NULL);
if (context == NULL) {
goto error;
}
if (context != Py_None) {
PyObject * val;
Py_DECREF(self->context);
self->context = context;
self->pcur_entry->started_context = TRUE;
STATS( self->stats.pycalls++; )
val = PyObject_CallFunctionObjArgs(self->switch_context, context, NULL);
if (val == NULL) {
goto error;
}
Py_DECREF(val);
}
else {
Py_DECREF(context);
self->pcur_entry->started_context = FALSE;
}
}
else {
self->pcur_entry->started_context = FALSE;
}
/* Check if we should trace this line. */
filename = MyFrame_GetCode(frame)->co_filename;
disposition = PyDict_GetItem(self->should_trace_cache, filename);
if (disposition == NULL) {
if (PyErr_Occurred()) {
goto error;
}
STATS( self->stats.files++; )
/* We've never considered this file before. */
/* Ask should_trace about it. */
STATS( self->stats.pycalls++; )
disposition = PyObject_CallFunctionObjArgs(self->should_trace, filename, frame, NULL);
if (disposition == NULL) {
/* An error occurred inside should_trace. */
goto error;
}
if (PyDict_SetItem(self->should_trace_cache, filename, disposition) < 0) {
goto error;
}
}
else {
Py_INCREF(disposition);
}
if (disposition == Py_None) {
/* A later check_include returned false, so don't trace it. */
disp_trace = Py_False;
}
else {
/* The object we got is a CFileDisposition, use it efficiently. */
pdisp = (CFileDisposition *) disposition;
disp_trace = pdisp->trace;
if (disp_trace == NULL) {
goto error;
}
}
if (disp_trace == Py_True) {
/* If tracename is a string, then we're supposed to trace. */
tracename = pdisp->source_filename;
if (tracename == NULL) {
goto error;
}
file_tracer = pdisp->file_tracer;
if (file_tracer == NULL) {
goto error;
}
if (file_tracer != Py_None) {
plugin = PyObject_GetAttr(file_tracer, str__coverage_plugin);
if (plugin == NULL) {
goto error;
}
plugin_name = PyObject_GetAttr(plugin, str__coverage_plugin_name);
if (plugin_name == NULL) {
goto error;
}
}
has_dynamic_filename = pdisp->has_dynamic_filename;
if (has_dynamic_filename == NULL) {
goto error;
}
if (has_dynamic_filename == Py_True) {
STATS( self->stats.pycalls++; )
next_tracename = PyObject_CallMethodObjArgs(
file_tracer, str_dynamic_source_filename,
tracename, frame, NULL
);
if (next_tracename == NULL) {
/* An exception from the function. Alert the user with a
* warning and a traceback.
*/
CTracer_disable_plugin(self, disposition);
/* Because we handled the error, goto ok. */
goto ok;
}
tracename = next_tracename;
if (tracename != Py_None) {
/* Check the dynamic source filename against the include rules. */
PyObject * included = NULL;
int should_include;
included = PyDict_GetItem(self->should_trace_cache, tracename);
if (included == NULL) {
PyObject * should_include_bool;
if (PyErr_Occurred()) {
goto error;
}
STATS( self->stats.files++; )
STATS( self->stats.pycalls++; )
should_include_bool = PyObject_CallFunctionObjArgs(self->check_include, tracename, frame, NULL);
if (should_include_bool == NULL) {
goto error;
}
should_include = (should_include_bool == Py_True);
Py_DECREF(should_include_bool);
if (PyDict_SetItem(self->should_trace_cache, tracename, should_include ? disposition : Py_None) < 0) {
goto error;
}
}
else {
should_include = (included != Py_None);
}
if (!should_include) {
tracename = Py_None;
}
}
}
}
else {
tracename = Py_None;
}
if (tracename != Py_None) {
PyObject * file_data = PyDict_GetItem(self->data, tracename);
if (file_data == NULL) {
if (PyErr_Occurred()) {
goto error;
}
file_data = PySet_New(NULL);
if (file_data == NULL) {
goto error;
}
ret2 = PyDict_SetItem(self->data, tracename, file_data);
if (ret2 < 0) {
goto error;
}
/* If the disposition mentions a plugin, record that. */
if (file_tracer != Py_None) {
ret2 = PyDict_SetItem(self->file_tracers, tracename, plugin_name);
if (ret2 < 0) {
goto error;
}
}
}
else {
/* PyDict_GetItem gives a borrowed reference. Own it. */
Py_INCREF(file_data);
}
Py_XDECREF(self->pcur_entry->file_data);
self->pcur_entry->file_data = file_data;
self->pcur_entry->file_tracer = file_tracer;
SHOWLOG(PyFrame_GetLineNumber(frame), filename, "traced");
}
else {
Py_XDECREF(self->pcur_entry->file_data);
self->pcur_entry->file_data = NULL;
self->pcur_entry->file_tracer = Py_None;
MyFrame_NoTraceLines(frame);
SHOWLOG(PyFrame_GetLineNumber(frame), filename, "skipped");
}
self->pcur_entry->disposition = disposition;
/* Make the frame right in case settrace(gettrace()) happens. */
MyFrame_SetTrace(frame, self);
/* A call event is really a "start frame" event, and can happen for
* re-entering a generator also. How we tell the difference depends on
* the version of Python.
*/
BOOL real_call = FALSE;
#ifdef RESUME
/*
* The current opcode is guaranteed to be RESUME. The argument
* determines what kind of resume it is.
*/
pCode = MyCode_GetCode(MyFrame_GetCode(frame));
real_call = (PyBytes_AS_STRING(pCode)[MyFrame_GetLasti(frame) + 1] == 0);
#else
// f_lasti is -1 for a true call, and a real byte offset for a generator re-entry.
real_call = (MyFrame_GetLasti(frame) < 0);
#endif
if (real_call) {
self->pcur_entry->last_line = -MyFrame_GetCode(frame)->co_firstlineno;
}
else {
self->pcur_entry->last_line = PyFrame_GetLineNumber(frame);
}
ok:
ret = RET_OK;
error:
#ifdef RESUME
MyCode_FreeCode(pCode);
#endif
Py_XDECREF(next_tracename);
Py_XDECREF(disposition);
Py_XDECREF(plugin);
Py_XDECREF(plugin_name);
return ret;
}
static void
CTracer_disable_plugin(CTracer *self, PyObject * disposition)
{
PyObject * ret;
PyErr_Print();
STATS( self->stats.pycalls++; )
ret = PyObject_CallFunctionObjArgs(self->disable_plugin, disposition, NULL);
if (ret == NULL) {
goto error;
}
Py_DECREF(ret);
return;
error:
/* This function doesn't return a status, so if an error happens, print it,
* but don't interrupt the flow. */
/* PySys_WriteStderr is nicer, but is not in the public API. */
fprintf(stderr, "Error occurred while disabling plug-in:\n");
PyErr_Print();
}
static int
CTracer_unpack_pair(CTracer *self, PyObject *pair, int *p_one, int *p_two)
{
int ret = RET_ERROR;
int the_int;
PyObject * pyint = NULL;
int index;
if (!PyTuple_Check(pair) || PyTuple_Size(pair) != 2) {
PyErr_SetString(
PyExc_TypeError,
"line_number_range must return 2-tuple"
);
goto error;
}
for (index = 0; index < 2; index++) {
pyint = PyTuple_GetItem(pair, index);
if (pyint == NULL) {
goto error;
}
if (pyint_as_int(pyint, &the_int) < 0) {
goto error;
}
*(index == 0 ? p_one : p_two) = the_int;
}
ret = RET_OK;
error:
return ret;
}
static int
CTracer_handle_line(CTracer *self, PyFrameObject *frame)
{
int ret = RET_ERROR;
int ret2;
STATS( self->stats.lines++; )
if (self->pdata_stack->depth >= 0) {
SHOWLOG(PyFrame_GetLineNumber(frame), MyFrame_GetCode(frame)->co_filename, "line");
if (self->pcur_entry->file_data) {
int lineno_from = -1;
int lineno_to = -1;
/* We're tracing in this frame: record something. */
if (self->pcur_entry->file_tracer != Py_None) {
PyObject * from_to = NULL;
STATS( self->stats.pycalls++; )
from_to = PyObject_CallMethodObjArgs(self->pcur_entry->file_tracer, str_line_number_range, frame, NULL);
if (from_to == NULL) {
CTracer_disable_plugin(self, self->pcur_entry->disposition);
goto ok;
}
ret2 = CTracer_unpack_pair(self, from_to, &lineno_from, &lineno_to);
Py_DECREF(from_to);
if (ret2 < 0) {
CTracer_disable_plugin(self, self->pcur_entry->disposition);
goto ok;
}
}
else {
lineno_from = lineno_to = PyFrame_GetLineNumber(frame);
}
if (lineno_from != -1) {
for (; lineno_from <= lineno_to; lineno_from++) {
if (self->tracing_arcs) {
/* Tracing arcs: key is (last_line,this_line). */
if (CTracer_record_pair(self, self->pcur_entry->last_line, lineno_from) < 0) {
goto error;
}
}
else {
/* Tracing lines: key is simply this_line. */
PyObject * this_line = PyLong_FromLong((long)lineno_from);
if (this_line == NULL) {
goto error;
}
ret2 = PySet_Add(self->pcur_entry->file_data, this_line);
Py_DECREF(this_line);
if (ret2 < 0) {
goto error;
}
}
self->pcur_entry->last_line = lineno_from;
}
}
}
}
ok:
ret = RET_OK;
error:
return ret;
}
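CTracer_handle_line asks the frame's file tracer for a (from, to) pair via line_number_range and records every line in that inclusive range, skipping the record entirely when from is -1. A hedged Python sketch of an object obeying that contract (the class and its mapping logic are invented for illustration; only the "return a 2-tuple, (-1, -1) means record nothing" contract comes from the C code above):

```python
class MappingFileTracer:
    """Hypothetical file tracer that maps a traced region to an
    inclusive range of source lines, as CTracer_handle_line expects."""

    def __init__(self, line_map):
        # line_map: region id -> (first_line, last_line)
        self.line_map = line_map

    def line_number_range(self, region_id):
        # (-1, -1) tells the tracer to record nothing for this event.
        return self.line_map.get(region_id, (-1, -1))
```

Note that the real plugin protocol passes a frame object here; a plain region id keeps the sketch self-contained.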
static int
CTracer_handle_return(CTracer *self, PyFrameObject *frame)
{
int ret = RET_ERROR;
PyObject * pCode = NULL;
STATS( self->stats.returns++; )
/* A near-copy of this code is above in the missing-return handler. */
if (CTracer_set_pdata_stack(self) < 0) {
goto error;
}
self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth];
if (self->pdata_stack->depth >= 0) {
if (self->tracing_arcs && self->pcur_entry->file_data) {
BOOL real_return = FALSE;
pCode = MyCode_GetCode(MyFrame_GetCode(frame));
int lasti = MyFrame_GetLasti(frame);
Py_ssize_t code_size = PyBytes_GET_SIZE(pCode);
unsigned char * code_bytes = (unsigned char *)PyBytes_AS_STRING(pCode);
#ifdef RESUME
if (lasti == code_size - 2) {
real_return = TRUE;
}
else {
#if ENV_LASTI_IS_YIELD
lasti += 2;
#endif
real_return = (code_bytes[lasti] != RESUME);
}
#else
/* Need to distinguish between RETURN_VALUE and YIELD_VALUE. Read
* the current bytecode to see what it is. In unusual circumstances
* (Cython code), co_code can be the empty string, so range-check
* f_lasti before reading the byte.
*/
BOOL is_yield = FALSE;
BOOL is_yield_from = FALSE;
if (lasti < code_size) {
is_yield = (code_bytes[lasti] == YIELD_VALUE);
if (lasti + 2 < code_size) {
is_yield_from = (code_bytes[lasti + 2] == YIELD_FROM);
}
}
real_return = !(is_yield || is_yield_from);
#endif
if (real_return) {
int first = MyFrame_GetCode(frame)->co_firstlineno;
if (CTracer_record_pair(self, self->pcur_entry->last_line, -first) < 0) {
goto error;
}
}
}
/* If this frame started a context, then returning from it ends the context. */
if (self->pcur_entry->started_context) {
PyObject * val;
Py_DECREF(self->context);
self->context = Py_None;
Py_INCREF(self->context);
STATS( self->stats.pycalls++; )
val = PyObject_CallFunctionObjArgs(self->switch_context, self->context, NULL);
if (val == NULL) {
goto error;
}
Py_DECREF(val);
}
/* Pop the stack. */
SHOWLOG(PyFrame_GetLineNumber(frame), MyFrame_GetCode(frame)->co_filename, "return");
self->pdata_stack->depth--;
self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth];
}
ret = RET_OK;
error:
MyCode_FreeCode(pCode);
return ret;
}
/*
* The Trace Function
*/
static int
CTracer_trace(CTracer *self, PyFrameObject *frame, int what, PyObject *arg_unused)
{
int ret = RET_ERROR;
#if DO_NOTHING
return RET_OK;
#endif
if (!self->started) {
/* If CTracer.stop() has been called from another thread, the tracer
is still active in the current thread. Let's deactivate ourselves
now. */
PyEval_SetTrace(NULL, NULL);
return RET_OK;
}
#if WHAT_LOG || TRACE_LOG
PyObject * ascii = NULL;
#endif
#if WHAT_LOG
    const char * w = "XXX ";
    if (what < (int)(sizeof(what_sym)/sizeof(const char *))) {
        w = what_sym[what];
    }
    ascii = PyUnicode_AsASCIIString(MyFrame_GetCode(frame)->co_filename);
    printf("%x trace: f:%x %s @ %s %d\n", (int)self, (int)frame, w, PyBytes_AS_STRING(ascii), PyFrame_GetLineNumber(frame));
    Py_DECREF(ascii);
#endif
#if TRACE_LOG
ascii = PyUnicode_AsASCIIString(MyFrame_GetCode(frame)->co_filename);
if (strstr(PyBytes_AS_STRING(ascii), start_file) && PyFrame_GetLineNumber(frame) == start_line) {
logging = TRUE;
}
Py_DECREF(ascii);
#endif
self->activity = TRUE;
switch (what) {
case PyTrace_CALL:
if (CTracer_handle_call(self, frame) < 0) {
goto error;
}
break;
case PyTrace_RETURN:
if (CTracer_handle_return(self, frame) < 0) {
goto error;
}
break;
case PyTrace_LINE:
if (CTracer_handle_line(self, frame) < 0) {
goto error;
}
break;
default:
STATS( self->stats.others++; )
break;
}
ret = RET_OK;
goto cleanup;
error:
STATS( self->stats.errors++; )
cleanup:
return ret;
}
/*
* Python has two ways to set the trace function: sys.settrace(fn), which
* takes a Python callable, and PyEval_SetTrace(func, obj), which takes
* a C function and a Python object. The way these work together is that
* sys.settrace(pyfn) calls PyEval_SetTrace(builtin_func, pyfn), using the
 * Python callable as the object in PyEval_SetTrace. As a result,
 * sys.gettrace() simply returns the Python object used as the second
 * argument to PyEval_SetTrace, which means it will return our self
 * parameter, so self must be callable to be used in sys.settrace().
 *
 * So we make ourselves callable, equivalent to invoking our trace function.
*/
static PyObject *
CTracer_call(CTracer *self, PyObject *args, PyObject *kwds)
{
PyFrameObject *frame;
PyObject *what_str;
PyObject *arg;
int what;
PyObject *ret = NULL;
PyObject * ascii = NULL;
#if DO_NOTHING
CRASH
#endif
static char *what_names[] = {
"call", "exception", "line", "return",
"c_call", "c_exception", "c_return",
NULL
};
static char *kwlist[] = {"frame", "event", "arg", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!O!O|i:Tracer_call", kwlist,
&PyFrame_Type, &frame, &PyUnicode_Type, &what_str, &arg)) {
goto done;
}
/* In Python, the what argument is a string, we need to find an int
for the C function. */
for (what = 0; what_names[what]; what++) {
int should_break;
ascii = PyUnicode_AsASCIIString(what_str);
should_break = !strcmp(PyBytes_AS_STRING(ascii), what_names[what]);
Py_DECREF(ascii);
if (should_break) {
break;
}
}
#if WHAT_LOG
ascii = PyUnicode_AsASCIIString(MyFrame_GetCode(frame)->co_filename);
printf("pytrace: %s @ %s %d\n", what_sym[what], PyBytes_AS_STRING(ascii), PyFrame_GetLineNumber(frame));
Py_DECREF(ascii);
#endif
/* Invoke the C function, and return ourselves. */
if (CTracer_trace(self, frame, what, arg) == RET_OK) {
Py_INCREF(self);
ret = (PyObject *)self;
}
/* For better speed, install ourselves the C way so that future calls go
directly to CTracer_trace, without this intermediate function.
Only do this if this is a CALL event, since new trace functions only
take effect then. If we don't condition it on CALL, then we'll clobber
the new trace function before it has a chance to get called. To
understand why, there are three internal values to track: frame.f_trace,
c_tracefunc, and c_traceobj. They are explained here:
https://nedbatchelder.com/text/trace-function.html
Without the conditional on PyTrace_CALL, this is what happens:
def func(): # f_trace c_tracefunc c_traceobj
# -------------- -------------- --------------
# CTracer CTracer.trace CTracer
sys.settrace(my_func)
# CTracer trampoline my_func
# Now Python calls trampoline(CTracer), which calls this function
# which calls PyEval_SetTrace below, setting us as the tracer again:
# CTracer CTracer.trace CTracer
# and it's as if the settrace never happened.
*/
if (what == PyTrace_CALL) {
PyEval_SetTrace((Py_tracefunc)CTracer_trace, (PyObject*)self);
}
done:
return ret;
}
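The comment above hinges on sys.gettrace() handing back the exact object that was installed, which is why the CTracer instance must itself be callable. A small Python demonstration of that round-trip:

```python
import sys

def tracer(frame, event, arg):
    # A trace function returns itself to keep tracing nested frames.
    return tracer

prev = sys.gettrace()            # save whatever tracer is active
sys.settrace(tracer)
round_tripped = sys.gettrace()   # the exact object we installed
sys.settrace(prev)               # restore the previous tracer
```

settrace(my_func) followed by gettrace() returns my_func itself, so when CTracer installs itself the C way with PyEval_SetTrace, gettrace() yields the CTracer object, and settrace(gettrace()) only works because CTracer_call exists.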
static PyObject *
CTracer_start(CTracer *self, PyObject *args_unused)
{
PyEval_SetTrace((Py_tracefunc)CTracer_trace, (PyObject*)self);
self->started = TRUE;
self->tracing_arcs = self->trace_arcs && PyObject_IsTrue(self->trace_arcs);
/* start() returns a trace function usable with sys.settrace() */
Py_INCREF(self);
return (PyObject *)self;
}
static PyObject *
CTracer_stop(CTracer *self, PyObject *args_unused)
{
if (self->started) {
/* Set the started flag only. The actual call to
PyEval_SetTrace(NULL, NULL) is delegated to the callback
itself to ensure that it is called from the right thread.
*/
self->started = FALSE;
}
Py_RETURN_NONE;
}
static PyObject *
CTracer_activity(CTracer *self, PyObject *args_unused)
{
if (self->activity) {
Py_RETURN_TRUE;
}
else {
Py_RETURN_FALSE;
}
}
static PyObject *
CTracer_reset_activity(CTracer *self, PyObject *args_unused)
{
self->activity = FALSE;
Py_RETURN_NONE;
}
static PyObject *
CTracer_get_stats(CTracer *self, PyObject *args_unused)
{
#if COLLECT_STATS
return Py_BuildValue(
"{sI,sI,sI,sI,sI,sI,si,sI,sI,sI}",
"calls", self->stats.calls,
"lines", self->stats.lines,
"returns", self->stats.returns,
"others", self->stats.others,
"files", self->stats.files,
"stack_reallocs", self->stats.stack_reallocs,
"stack_alloc", self->pdata_stack->alloc,
"errors", self->stats.errors,
"pycalls", self->stats.pycalls,
"start_context_calls", self->stats.start_context_calls
);
#else
Py_RETURN_NONE;
#endif /* COLLECT_STATS */
}
static PyMemberDef
CTracer_members[] = {
{ "should_trace", T_OBJECT, offsetof(CTracer, should_trace), 0,
PyDoc_STR("Function indicating whether to trace a file.") },
{ "check_include", T_OBJECT, offsetof(CTracer, check_include), 0,
PyDoc_STR("Function indicating whether to include a file.") },
{ "warn", T_OBJECT, offsetof(CTracer, warn), 0,
PyDoc_STR("Function for issuing warnings.") },
{ "concur_id_func", T_OBJECT, offsetof(CTracer, concur_id_func), 0,
PyDoc_STR("Function for determining concurrency context") },
{ "data", T_OBJECT, offsetof(CTracer, data), 0,
PyDoc_STR("The raw dictionary of trace data.") },
{ "file_tracers", T_OBJECT, offsetof(CTracer, file_tracers), 0,
PyDoc_STR("Mapping from file name to plugin name.") },
{ "should_trace_cache", T_OBJECT, offsetof(CTracer, should_trace_cache), 0,
PyDoc_STR("Dictionary caching should_trace results.") },
{ "trace_arcs", T_OBJECT, offsetof(CTracer, trace_arcs), 0,
PyDoc_STR("Should we trace arcs, or just lines?") },
{ "should_start_context", T_OBJECT, offsetof(CTracer, should_start_context), 0,
PyDoc_STR("Function for starting contexts.") },
{ "switch_context", T_OBJECT, offsetof(CTracer, switch_context), 0,
PyDoc_STR("Function for switching to a new context.") },
{ "disable_plugin", T_OBJECT, offsetof(CTracer, disable_plugin), 0,
PyDoc_STR("Function for disabling a plugin.") },
{ NULL }
};
static PyMethodDef
CTracer_methods[] = {
{ "start", (PyCFunction) CTracer_start, METH_VARARGS,
PyDoc_STR("Start the tracer") },
{ "stop", (PyCFunction) CTracer_stop, METH_VARARGS,
PyDoc_STR("Stop the tracer") },
{ "get_stats", (PyCFunction) CTracer_get_stats, METH_VARARGS,
PyDoc_STR("Get statistics about the tracing") },
{ "activity", (PyCFunction) CTracer_activity, METH_VARARGS,
PyDoc_STR("Has there been any activity?") },
{ "reset_activity", (PyCFunction) CTracer_reset_activity, METH_VARARGS,
PyDoc_STR("Reset the activity flag") },
{ NULL }
};
PyTypeObject
CTracerType = {
PyVarObject_HEAD_INIT(NULL, 0)
"coverage.CTracer", /*tp_name*/
sizeof(CTracer), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)CTracer_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
(ternaryfunc)CTracer_call, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
"CTracer objects", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
CTracer_methods, /* tp_methods */
CTracer_members, /* tp_members */
0, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)CTracer_init, /* tp_init */
0, /* tp_alloc */
0, /* tp_new */
};
coverage-7.4.4/coverage/ctracer/tracer.h
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_TRACER_H
#define _COVERAGE_TRACER_H
#include "util.h"
#include "structmember.h"
#include "frameobject.h"
#include "opcode.h"
#include "datastack.h"
/* The CTracer type. */
typedef struct CTracer {
PyObject_HEAD
/* Python objects manipulated directly by the Collector class. */
PyObject * should_trace;
PyObject * check_include;
PyObject * warn;
PyObject * concur_id_func;
PyObject * data;
PyObject * file_tracers;
PyObject * should_trace_cache;
PyObject * trace_arcs;
PyObject * should_start_context;
PyObject * switch_context;
PyObject * disable_plugin;
/* Has the tracer been started? */
BOOL started;
/* Are we tracing arcs, or just lines? */
BOOL tracing_arcs;
/* Have we had any activity? */
BOOL activity;
/* The current dynamic context. */
PyObject * context;
/*
The data stack is a stack of sets. Each set collects
data for a single source file. The data stack parallels the call stack:
each call pushes the new frame's file data onto the data stack, and each
return pops file data off.
The file data is a set whose form depends on the tracing options.
If tracing arcs, the values are line number pairs. If not tracing arcs,
the values are line numbers.
*/
DataStack data_stack; /* Used if we aren't doing concurrency. */
PyObject * data_stack_index; /* Used if we are doing concurrency. */
DataStack * data_stacks;
int data_stacks_alloc;
int data_stacks_used;
DataStack * pdata_stack;
/* The current file's data stack entry. */
DataStackEntry * pcur_entry;
Stats stats;
} CTracer;
int CTracer_intern_strings(void);
extern PyTypeObject CTracerType;
#endif /* _COVERAGE_TRACER_H */
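The data-stack design described in the comment above can be sketched in pure Python (a hypothetical miniature, not the real CTracer): each 'call' pushes the new frame's per-file set onto a stack that parallels the call stack, and each 'return' pops it.

```python
import sys

# Hypothetical miniature of the CTracer scheme: the data stack mirrors
# the call stack, with one per-file set of line numbers per frame.
data_stack = []
file_data = {}  # filename -> set of executed line numbers

def mini_tracer(frame, event, arg):
    if event == "call":
        filename = frame.f_code.co_filename
        data_stack.append(file_data.setdefault(filename, set()))
    elif event == "line" and data_stack:
        data_stack[-1].add(frame.f_lineno)  # line tracing: bare numbers
    elif event == "return" and data_stack:
        data_stack.pop()  # each return pops its frame's file data
    return mini_tracer

def inner():
    return 42

sys.settrace(mini_tracer)
inner()
sys.settrace(None)

print(data_stack == [] and sum(len(s) for s in file_data.values()) >= 1)  # True
```

When tracing arcs instead of lines, the sets would hold (previous_line, current_line) pairs rather than single line numbers.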
coverage-7.4.4/coverage/ctracer/util.h
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_UTIL_H
#define _COVERAGE_UTIL_H
#include <Python.h>
/* Compile-time debugging helpers */
#undef WHAT_LOG /* Define to log the WHAT params in the trace function. */
#undef TRACE_LOG /* Define to log our bookkeeping. */
#undef COLLECT_STATS /* Collect counters: stats are printed when tracer is stopped. */
#undef DO_NOTHING /* Define this to make the tracer do nothing. */
#if PY_VERSION_HEX >= 0x030B00A0
// 3.11 moved f_lasti into an internal structure. This is totally the wrong way
// to make this work, but it's all I've got until https://bugs.python.org/issue40421
// is resolved.
#if PY_VERSION_HEX < 0x030D0000
#include <internal/pycore_frame.h>
#endif
#if PY_VERSION_HEX >= 0x030B00A7
#define MyFrame_GetLasti(f) (PyFrame_GetLasti(f))
#else
#define MyFrame_GetLasti(f) ((f)->f_frame->f_lasti * 2)
#endif
#elif PY_VERSION_HEX >= 0x030A00A7
// The f_lasti field changed meaning in 3.10.0a7. It had been bytes, but
// now is instructions, so we need to adjust it to use it as a byte index.
#define MyFrame_GetLasti(f) ((f)->f_lasti * 2)
#else
#define MyFrame_GetLasti(f) ((f)->f_lasti)
#endif
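The version-specific macros above all exist to present f_lasti as a byte offset into the bytecode. From Python the attribute can be read directly (a small demo; the exact value depends on the interpreter and the code being run):

```python
import sys

def bytecode_offset():
    # On current CPython, f_lasti is a byte offset into the frame's
    # bytecode; the C macros above normalize older releases to match.
    return sys._getframe().f_lasti

offset = bytecode_offset()
print(isinstance(offset, int) and offset >= 0)  # True
```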
#if PY_VERSION_HEX >= 0x030D0000
#define MyFrame_NoTraceLines(f) (PyObject_SetAttrString((PyObject*)(f), "f_trace_lines", Py_False))
#define MyFrame_SetTrace(f, obj) (PyObject_SetAttrString((PyObject*)(f), "f_trace", (PyObject*)(obj)))
#else
#define MyFrame_NoTraceLines(f) ((f)->f_trace_lines = 0)
#define MyFrame_SetTrace(f, obj) {Py_INCREF(obj); Py_XSETREF((f)->f_trace, (PyObject*)(obj));}
#endif
// Access to f_code should be done through a helper starting in 3.9.
#if PY_VERSION_HEX >= 0x03090000
#define MyFrame_GetCode(f) (PyFrame_GetCode(f))
#else
#define MyFrame_GetCode(f) ((f)->f_code)
#endif
#if PY_VERSION_HEX >= 0x030B00B1
#define MyCode_GetCode(co) (PyCode_GetCode(co))
#define MyCode_FreeCode(code) Py_XDECREF(code)
#elif PY_VERSION_HEX >= 0x030B00A7
#define MyCode_GetCode(co) (PyObject_GetAttrString((PyObject *)(co), "co_code"))
#define MyCode_FreeCode(code) Py_XDECREF(code)
#else
#define MyCode_GetCode(co) ((co)->co_code)
#define MyCode_FreeCode(code)
#endif
// Where does frame.f_lasti point when yielding from a generator?
// It used to point at the YIELD, now it points at the RESUME.
// https://github.com/python/cpython/issues/113728
#define ENV_LASTI_IS_YIELD (PY_VERSION_HEX < 0x030D0000)
/* The values returned to indicate ok or error. */
#define RET_OK 0
#define RET_ERROR -1
/* Nicer booleans */
typedef int BOOL;
#define FALSE 0
#define TRUE 1
#if SIZEOF_LONG_LONG < 8
#error long long too small!
#endif
typedef unsigned long long uint64;
/* Only for extreme machete-mode debugging! */
#define CRASH { printf("*** CRASH! ***\n"); *((int*)1) = 1; }
#endif /* _COVERAGE_UTIL_H */
coverage-7.4.4/coverage/data.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Coverage data for coverage.py.
This file had the 4.x JSON data support, which is now gone. This file still
has storage-agnostic helpers, and is kept to avoid changing too many imports.
CoverageData is now defined in sqldata.py, and imported here to keep the
imports working.
"""
from __future__ import annotations
import glob
import hashlib
import os.path
from typing import Callable, Iterable
from coverage.exceptions import CoverageException, NoDataError
from coverage.files import PathAliases
from coverage.misc import Hasher, file_be_gone, human_sorted, plural
from coverage.sqldata import CoverageData
def line_counts(data: CoverageData, fullpath: bool = False) -> dict[str, int]:
"""Return a dict summarizing the line coverage data.
Keys are based on the file names, and values are the number of executed
lines. If `fullpath` is true, then the keys are the full pathnames of
the files, otherwise they are the basenames of the files.
Returns a dict mapping file names to counts of lines.
"""
summ = {}
filename_fn: Callable[[str], str]
if fullpath:
# pylint: disable=unnecessary-lambda-assignment
filename_fn = lambda f: f
else:
filename_fn = os.path.basename
for filename in data.measured_files():
lines = data.lines(filename)
assert lines is not None
summ[filename_fn(filename)] = len(lines)
return summ
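A stand-in for CoverageData (the `FakeData` class below is invented for this demo; `demo_line_counts` mirrors the logic of the function above) shows how `fullpath` changes the dictionary keys:

```python
import os.path

class FakeData:
    """Invented stand-in for CoverageData, just for illustration."""
    def __init__(self, lines_by_file):
        self._lines = lines_by_file
    def measured_files(self):
        return list(self._lines)
    def lines(self, filename):
        return self._lines[filename]

def demo_line_counts(data, fullpath=False):
    # Same shape as line_counts above: file name (or basename) -> line count.
    filename_fn = (lambda f: f) if fullpath else os.path.basename
    return {filename_fn(f): len(data.lines(f)) for f in data.measured_files()}

data = FakeData({"/src/pkg/mod.py": [1, 2, 5], "/src/pkg/util.py": [3]})
print(demo_line_counts(data))                 # {'mod.py': 3, 'util.py': 1}
print(demo_line_counts(data, fullpath=True))  # {'/src/pkg/mod.py': 3, '/src/pkg/util.py': 1}
```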
def add_data_to_hash(data: CoverageData, filename: str, hasher: Hasher) -> None:
"""Contribute `filename`'s data to the `hasher`.
`hasher` is a `coverage.misc.Hasher` instance to be updated with
the file's data. It should only get the results data, not the run
data.
"""
if data.has_arcs():
hasher.update(sorted(data.arcs(filename) or []))
else:
hasher.update(sorted_lines(data, filename))
hasher.update(data.file_tracer(filename))
def combinable_files(data_file: str, data_paths: Iterable[str] | None = None) -> list[str]:
"""Make a list of data files to be combined.
`data_file` is a path to a data file. `data_paths` is a list of files or
directories of files.
Returns a list of absolute file paths.
"""
data_dir, local = os.path.split(os.path.abspath(data_file))
data_paths = data_paths or [data_dir]
files_to_combine = []
for p in data_paths:
if os.path.isfile(p):
files_to_combine.append(os.path.abspath(p))
elif os.path.isdir(p):
pattern = glob.escape(os.path.join(os.path.abspath(p), local)) + ".*"
files_to_combine.extend(glob.glob(pattern))
else:
raise NoDataError(f"Couldn't combine from non-existent path '{p}'")
# SQLite might have made journal files alongside our database files.
# We never want to combine those.
files_to_combine = [fnm for fnm in files_to_combine if not fnm.endswith("-journal")]
# Sorting isn't usually needed, since it shouldn't matter what order files
# are combined, but sorting makes tests more predictable, and makes
# debugging more understandable when things go wrong.
return sorted(files_to_combine)
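The glob-and-filter logic above can be exercised in isolation (a sketch using a temporary directory and invented file names):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Parallel data files share the base name plus a dot-suffix;
    # SQLite journal files alongside them must be excluded.
    for name in [".coverage.host.1234", ".coverage.host.5678",
                 ".coverage.host.1234-journal", "unrelated.txt"]:
        open(os.path.join(tmp, name), "w").close()
    pattern = glob.escape(os.path.join(tmp, ".coverage")) + ".*"
    found = [f for f in glob.glob(pattern) if not f.endswith("-journal")]
    names = sorted(os.path.basename(f) for f in found)

print(names)  # ['.coverage.host.1234', '.coverage.host.5678']
```

Note that `glob.escape` protects the directory and base name from being interpreted as wildcards; only the appended `.*` globs.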
def combine_parallel_data(
data: CoverageData,
aliases: PathAliases | None = None,
data_paths: Iterable[str] | None = None,
strict: bool = False,
keep: bool = False,
message: Callable[[str], None] | None = None,
) -> None:
"""Combine a number of data files together.
`data` is a CoverageData.
Treat `data.filename` as a file prefix, and combine the data from all
of the data files starting with that prefix plus a dot.
If `aliases` is provided, it's a `PathAliases` object that is used to
re-map paths to match the local machine's.
If `data_paths` is provided, it is a list of directories or files to
combine. Directories are searched for files that start with
`data.filename` plus dot as a prefix, and those files are combined.
If `data_paths` is not provided, then the directory portion of
`data.filename` is used as the directory to search for data files.
Unless `keep` is True, every data file found and combined is then deleted
from disk. If a file cannot be read, a warning will be issued, and the
file will not be deleted.
If `strict` is true, and no files are found to combine, an error is
raised.
`message` is a function to use for printing messages to the user.
"""
files_to_combine = combinable_files(data.base_filename(), data_paths)
if strict and not files_to_combine:
raise NoDataError("No data to combine")
file_hashes = set()
combined_any = False
for f in files_to_combine:
if f == data.data_filename():
# Sometimes we are combining into a file which is one of the
# parallel files. Skip that file.
if data._debug.should("dataio"):
data._debug.write(f"Skipping combining ourself: {f!r}")
continue
try:
rel_file_name = os.path.relpath(f)
except ValueError:
# ValueError can be raised under Windows when os.getcwd() returns a
# folder from a different drive than the drive of f, in which case
# we print the original value of f instead of its relative path
rel_file_name = f
with open(f, "rb") as fobj:
hasher = hashlib.new("sha3_256")
hasher.update(fobj.read())
sha = hasher.digest()
combine_this_one = sha not in file_hashes
delete_this_one = not keep
if combine_this_one:
if data._debug.should("dataio"):
data._debug.write(f"Combining data file {f!r}")
file_hashes.add(sha)
try:
new_data = CoverageData(f, debug=data._debug)
new_data.read()
except CoverageException as exc:
if data._warn:
# The CoverageException has the file name in it, so just
# use the message as the warning.
data._warn(str(exc))
if message:
message(f"Couldn't combine data file {rel_file_name}: {exc}")
delete_this_one = False
else:
data.update(new_data, aliases=aliases)
combined_any = True
if message:
message(f"Combined data file {rel_file_name}")
else:
if message:
message(f"Skipping duplicate data {rel_file_name}")
if delete_this_one:
if data._debug.should("dataio"):
data._debug.write(f"Deleting data file {f!r}")
file_be_gone(f)
if strict and not combined_any:
raise NoDataError("No usable data files")
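The duplicate-skipping above relies on hashing whole files with SHA3-256 and tracking the digests in a set. That step can be sketched on its own (invented file contents):

```python
import hashlib
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    paths = []
    for i, content in enumerate([b"alpha", b"alpha", b"beta"]):
        path = os.path.join(tmp, f"data.{i}")
        with open(path, "wb") as f:
            f.write(content)
        paths.append(path)

    seen_hashes = set()
    combined = []
    for path in paths:
        with open(path, "rb") as fobj:
            sha = hashlib.new("sha3_256", fobj.read()).digest()
        if sha in seen_hashes:
            continue  # identical content already combined: skip it
        seen_hashes.add(sha)
        combined.append(path)

print(len(combined))  # 2: the duplicate "alpha" file is skipped
```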
def debug_data_file(filename: str) -> None:
"""Implementation of 'coverage debug data'."""
data = CoverageData(filename)
filename = data.data_filename()
print(f"path: {filename}")
if not os.path.exists(filename):
print("No data collected: file doesn't exist")
return
data.read()
print(f"has_arcs: {data.has_arcs()!r}")
summary = line_counts(data, fullpath=True)
filenames = human_sorted(summary.keys())
nfiles = len(filenames)
print(f"{nfiles} file{plural(nfiles)}:")
for f in filenames:
line = f"{f}: {summary[f]} line{plural(summary[f])}"
plugin = data.file_tracer(f)
if plugin:
line += f" [{plugin}]"
print(line)
def sorted_lines(data: CoverageData, filename: str) -> list[int]:
"""Get the sorted lines for a file, for tests."""
lines = data.lines(filename)
return sorted(lines or [])
coverage-7.4.4/coverage/debug.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Control of and utilities for debugging."""
from __future__ import annotations
import atexit
import contextlib
import functools
import inspect
import itertools
import os
import pprint
import re
import reprlib
import sys
import traceback
import types
import _thread
from typing import (
overload,
Any, Callable, IO, Iterable, Iterator, Mapping,
)
from coverage.misc import human_sorted_items, isolate_module
from coverage.types import AnyCallable, TWritable
os = isolate_module(os)
# When debugging, it can be helpful to force some options, especially when
# debugging the configuration mechanisms you usually use to control debugging!
# This is a list of forced debugging options.
FORCED_DEBUG: list[str] = []
FORCED_DEBUG_FILE = None
class DebugControl:
"""Control and output for debugging."""
show_repr_attr = False # For auto_repr
def __init__(
self,
options: Iterable[str],
output: IO[str] | None,
file_name: str | None = None,
) -> None:
"""Configure the options and output file for debugging."""
self.options = list(options) + FORCED_DEBUG
self.suppress_callers = False
filters = []
if self.should("process"):
filters.append(CwdTracker().filter)
filters.append(ProcessTracker().filter)
if self.should("pytest"):
filters.append(PytestTracker().filter)
if self.should("pid"):
filters.append(add_pid_and_tid)
self.output = DebugOutputFile.get_one(
output,
file_name=file_name,
filters=filters,
)
self.raw_output = self.output.outfile
def __repr__(self) -> str:
return f"<DebugControl options={self.options!r} raw_output={self.raw_output!r}>"
def should(self, option: str) -> bool:
"""Decide whether to output debug information in category `option`."""
if option == "callers" and self.suppress_callers:
return False
return (option in self.options)
@contextlib.contextmanager
def without_callers(self) -> Iterator[None]:
"""A context manager to prevent call stacks from being logged."""
old = self.suppress_callers
self.suppress_callers = True
try:
yield
finally:
self.suppress_callers = old
def write(self, msg: str, *, exc: BaseException | None = None) -> None:
"""Write a line of debug output.
`msg` is the line to write. A newline will be appended.
If `exc` is provided, a stack trace of the exception will be written
after the message.
"""
self.output.write(msg + "\n")
if exc is not None:
self.output.write("".join(traceback.format_exception(None, exc, exc.__traceback__)))
if self.should("self"):
caller_self = inspect.stack()[1][0].f_locals.get("self")
if caller_self is not None:
self.output.write(f"self: {caller_self!r}\n")
if self.should("callers"):
dump_stack_frames(out=self.output, skip=1)
self.output.flush()
class NoDebugging(DebugControl):
"""A replacement for DebugControl that will never try to do anything."""
def __init__(self) -> None:
# pylint: disable=super-init-not-called
...
def should(self, option: str) -> bool:
"""Should we write debug messages? Never."""
return False
def write(self, msg: str, *, exc: BaseException | None = None) -> None:
"""This will never be called."""
raise AssertionError("NoDebugging.write should never be called.")
def info_header(label: str) -> str:
"""Make a nice header string."""
return "--{:-<60s}".format(" "+label+" ")
def info_formatter(info: Iterable[tuple[str, Any]]) -> Iterator[str]:
"""Produce a sequence of formatted lines from info.
`info` is a sequence of pairs (label, data). The produced lines are
nicely formatted, ready to print.
"""
info = list(info)
if not info:
return
label_len = 30
assert all(len(l) < label_len for l, _ in info)
for label, data in info:
if data == []:
data = "-none-"
if isinstance(data, tuple) and len(repr(tuple(data))) < 30:
# Convert to tuple to scrub namedtuples.
yield "%*s: %r" % (label_len, label, tuple(data))
elif isinstance(data, (list, set, tuple)):
prefix = "%*s:" % (label_len, label)
for e in data:
yield "%*s %s" % (label_len+1, prefix, e)
prefix = ""
else:
yield "%*s: %s" % (label_len, label, data)
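A trimmed-down version of the formatter (omitting the empty-list and short-tuple special cases) shows the alignment it produces: labels right-justified in a fixed gutter, list values one per line with the label on the first line only.

```python
def format_info(info, label_len=30):
    # Right-align labels in a fixed-width gutter; list values get one
    # line per element, with the label shown only on the first line.
    lines = []
    for label, data in info:
        if isinstance(data, (list, set, tuple)):
            prefix = "%*s:" % (label_len, label)
            for e in data:
                lines.append("%*s %s" % (label_len + 1, prefix, e))
                prefix = ""
        else:
            lines.append("%*s: %s" % (label_len, label, data))
    return lines

out = format_info([("version", "7.4.4"), ("plugins", ["a", "b"])])
for line in out:
    print(line)
```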
def write_formatted_info(
write: Callable[[str], None],
header: str,
info: Iterable[tuple[str, Any]],
) -> None:
"""Write a sequence of (label,data) pairs nicely.
`write` is a function write(str) that accepts each line of output.
`header` is a string to start the section. `info` is a sequence of
(label, data) pairs, where label is a str, and data can be a single
value, or a list/set/tuple.
"""
write(info_header(header))
for line in info_formatter(info):
write(f" {line}")
def exc_one_line(exc: Exception) -> str:
"""Get a one-line summary of an exception, including class name and message."""
lines = traceback.format_exception_only(type(exc), exc)
return "|".join(l.rstrip() for l in lines)
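Using the same implementation, a multi-line rendering such as a SyntaxError (which normally spans several lines, with a caret marker) collapses to one pipe-joined line:

```python
import traceback

def exc_one_line(exc):
    # Join the formatted exception lines with "|" to get a single line.
    lines = traceback.format_exception_only(type(exc), exc)
    return "|".join(l.rstrip() for l in lines)

print(exc_one_line(ValueError("bad value")))  # ValueError: bad value

try:
    compile("def f(:", "<demo>", "exec")
except SyntaxError as exc:
    one_line = exc_one_line(exc)
print("\n" not in one_line and "SyntaxError" in one_line)  # True
```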
_FILENAME_REGEXES: list[tuple[str, str]] = [
(r".*[/\\]pytest-of-.*[/\\]pytest-\d+([/\\]popen-gw\d+)?", "tmp:"),
]
_FILENAME_SUBS: list[tuple[str, str]] = []
@overload
def short_filename(filename: str) -> str:
pass
@overload
def short_filename(filename: None) -> None:
pass
def short_filename(filename: str | None) -> str | None:
"""Shorten a file name. Directories are replaced by prefixes like 'syspath:'"""
if not _FILENAME_SUBS:
for pathdir in sys.path:
_FILENAME_SUBS.append((pathdir, "syspath:"))
import coverage
_FILENAME_SUBS.append((os.path.dirname(coverage.__file__), "cov:"))
_FILENAME_SUBS.sort(key=(lambda pair: len(pair[0])), reverse=True)
if filename is not None:
for pat, sub in _FILENAME_REGEXES:
filename = re.sub(pat, sub, filename)
for before, after in _FILENAME_SUBS:
filename = filename.replace(before, after)
return filename
def short_stack(
skip: int = 0,
full: bool = False,
frame_ids: bool = False,
short_filenames: bool = False,
) -> str:
"""Return a string summarizing the call stack.
The string is multi-line, with one line per stack frame. Each line shows
the function name, the file name, and the line number:
...
start_import_stop : /Users/ned/coverage/trunk/tests/coveragetest.py:95
import_local_file : /Users/ned/coverage/trunk/tests/coveragetest.py:81
import_local_file : /Users/ned/coverage/trunk/coverage/backward.py:159
...
`skip` is the number of closest immediate frames to skip, so that debugging
functions can call this and not be included in the result.
If `full` is true, then include all frames. Otherwise, initial "boring"
frames (ones in site-packages and earlier) are omitted.
`short_filenames` will shorten filenames using `short_filename`, to reduce
the amount of repetitive noise in stack traces.
"""
# Regexes in initial frames that we don't care about.
BORING_PRELUDE = [
    "<string>",             # pytest-xdist has string execution.
    r"\bigor.py$",          # Our test runner.
    r"\bsite-packages\b",   # pytest etc getting to our tests.
]
stack: Iterable[inspect.FrameInfo] = inspect.stack()[:skip:-1]
if not full:
for pat in BORING_PRELUDE:
stack = itertools.dropwhile(
(lambda fi, pat=pat: re.search(pat, fi.filename)), # type: ignore[misc]
stack,
)
lines = []
for frame_info in stack:
line = f"{frame_info.function:>30s} : "
if frame_ids:
line += f"{id(frame_info.frame):#x} "
filename = frame_info.filename
if short_filenames:
filename = short_filename(filename)
line += f"{filename}:{frame_info.lineno}"
lines.append(line)
return "\n".join(lines)
def dump_stack_frames(out: TWritable, skip: int = 0) -> None:
"""Print a summary of the stack to `out`."""
out.write(short_stack(skip=skip+1) + "\n")
def clipped_repr(text: str, numchars: int = 50) -> str:
"""`repr(text)`, but limited to `numchars`."""
r = reprlib.Repr()
r.maxstring = numchars
return r.repr(text)
def short_id(id64: int) -> int:
"""Given a 64-bit id, make a shorter 16-bit one."""
id16 = 0
for offset in range(0, 64, 16):
id16 ^= id64 >> offset
return id16 & 0xFFFF
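The XOR-folding in `short_id` can be checked by hand (worked example with an arbitrary 64-bit value):

```python
def short_id(id64):
    # XOR the four 16-bit chunks of a 64-bit id together.
    id16 = 0
    for offset in range(0, 64, 16):
        id16 ^= id64 >> offset
    return id16 & 0xFFFF

# Chunks of 0xDEADBEEF12345678 are 0xDEAD, 0xBEEF, 0x1234, 0x5678;
# 0xDEAD ^ 0xBEEF ^ 0x1234 ^ 0x5678 == 0x240E.
print(hex(short_id(0xDEADBEEF12345678)))  # 0x240e
```

Different 64-bit ids can of course fold to the same 16-bit value; the result is only meant to be short and usually distinct in debug logs, not unique.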
def add_pid_and_tid(text: str) -> str:
"""A filter to add pid and tid to debug messages."""
# Thread ids are useful, but too long. Make a shorter one.
tid = f"{short_id(_thread.get_ident()):04x}"
text = f"{os.getpid():5d}.{tid}: {text}"
return text
AUTO_REPR_IGNORE = {"$coverage.object_id"}
def auto_repr(self: Any) -> str:
"""A function implementing an automatic __repr__ for debugging."""
show_attrs = (
(k, v) for k, v in self.__dict__.items()
if getattr(v, "show_repr_attr", True)
and not inspect.ismethod(v)
and k not in AUTO_REPR_IGNORE
)
return "<{klass} @{id:#x}{attrs}>".format(
klass=self.__class__.__name__,
id=id(self),
attrs="".join(f" {k}={v!r}" for k, v in show_attrs),
)
def simplify(v: Any) -> Any: # pragma: debugging
"""Turn things which are nearly dict/list/etc into dict/list/etc."""
if isinstance(v, dict):
return {k:simplify(vv) for k, vv in v.items()}
elif isinstance(v, (list, tuple)):
return type(v)(simplify(vv) for vv in v)
elif hasattr(v, "__dict__"):
return simplify({"."+k: v for k, v in v.__dict__.items()})
else:
return v
def pp(v: Any) -> None: # pragma: debugging
"""Debug helper to pretty-print data, including SimpleNamespace objects."""
# Might not be needed in 3.9+
pprint.pprint(simplify(v))
def filter_text(text: str, filters: Iterable[Callable[[str], str]]) -> str:
"""Run `text` through a series of filters.
`filters` is a list of functions. Each takes a string and returns a
string. Each is run in turn. After each filter, the text is split into
lines, and each line is passed through the next filter.
Returns: the final string that results after all of the filters have
run.
"""
clean_text = text.rstrip()
ending = text[len(clean_text):]
text = clean_text
for filter_fn in filters:
lines = []
for line in text.splitlines():
lines.extend(filter_fn(line).splitlines())
text = "\n".join(lines)
return text + ending
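A standalone copy of `filter_text` with two toy filters (both invented for this demo) shows the line-by-line composition and the preservation of the trailing newline:

```python
def filter_text(text, filters):
    # Same algorithm as above: peel off the trailing newline(s), run each
    # filter over every line, re-splitting in case a filter adds lines.
    clean_text = text.rstrip()
    ending = text[len(clean_text):]
    text = clean_text
    for filter_fn in filters:
        lines = []
        for line in text.splitlines():
            lines.extend(filter_fn(line).splitlines())
        text = "\n".join(lines)
    return text + ending

add_pid = lambda line: f"1234: {line}"   # toy stand-in for add_pid_and_tid
shout = lambda line: line.upper()

print(filter_text("hello\nworld\n", [add_pid, shout]))
# 1234: HELLO
# 1234: WORLD
```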
class CwdTracker:
"""A class to add cwd info to debug messages."""
def __init__(self) -> None:
self.cwd: str | None = None
def filter(self, text: str) -> str:
"""Add a cwd message for each new cwd."""
cwd = os.getcwd()
if cwd != self.cwd:
text = f"cwd is now {cwd!r}\n" + text
self.cwd = cwd
return text
class ProcessTracker:
"""Track process creation for debug logging."""
def __init__(self) -> None:
self.pid: int = os.getpid()
self.did_welcome = False
def filter(self, text: str) -> str:
"""Add a message about how new processes came to be."""
welcome = ""
pid = os.getpid()
if self.pid != pid:
welcome = f"New process: forked {self.pid} -> {pid}\n"
self.pid = pid
elif not self.did_welcome:
argv = getattr(sys, "argv", None)
welcome = (
f"New process: {pid=}, executable: {sys.executable!r}\n"
+ f"New process: cmd: {argv!r}\n"
)
if hasattr(os, "getppid"):
welcome += f"New process parent pid: {os.getppid()!r}\n"
if welcome:
self.did_welcome = True
return welcome + text
else:
return text
class PytestTracker:
"""Track the current pytest test name to add to debug messages."""
def __init__(self) -> None:
self.test_name: str | None = None
def filter(self, text: str) -> str:
"""Add a message when the pytest test changes."""
test_name = os.getenv("PYTEST_CURRENT_TEST")
if test_name != self.test_name:
text = f"Pytest context: {test_name}\n" + text
self.test_name = test_name
return text
class DebugOutputFile:
"""A file-like object that includes pid and cwd information."""
def __init__(
self,
outfile: IO[str] | None,
filters: Iterable[Callable[[str], str]],
):
self.outfile = outfile
self.filters = list(filters)
self.pid = os.getpid()
@classmethod
def get_one(
cls,
fileobj: IO[str] | None = None,
file_name: str | None = None,
filters: Iterable[Callable[[str], str]] = (),
interim: bool = False,
) -> DebugOutputFile:
"""Get a DebugOutputFile.
If `fileobj` is provided, then a new DebugOutputFile is made with it.
If `fileobj` isn't provided, then a file is chosen (`file_name` if
provided, or COVERAGE_DEBUG_FILE, or stderr), and a process-wide
singleton DebugOutputFile is made.
`filters` are the text filters to apply to the stream to annotate with
pids, etc.
If `interim` is true, then a future `get_one` can replace this one.
"""
if fileobj is not None:
# Make DebugOutputFile around the fileobj passed.
return cls(fileobj, filters)
the_one, is_interim = cls._get_singleton_data()
if the_one is None or is_interim:
if file_name is not None:
fileobj = open(file_name, "a", encoding="utf-8")
else:
# $set_env.py: COVERAGE_DEBUG_FILE - Where to write debug output
file_name = os.getenv("COVERAGE_DEBUG_FILE", FORCED_DEBUG_FILE)
if file_name in ("stdout", "stderr"):
fileobj = getattr(sys, file_name)
elif file_name:
fileobj = open(file_name, "a", encoding="utf-8")
atexit.register(fileobj.close)
else:
fileobj = sys.stderr
the_one = cls(fileobj, filters)
cls._set_singleton_data(the_one, interim)
if not the_one.filters:
the_one.filters = list(filters)
return the_one
# Because of the way igor.py deletes and re-imports modules,
# this class can be defined more than once. But we really want
# a process-wide singleton. So stash it in sys.modules instead of
# on a class attribute. Yes, this is aggressively gross.
SYS_MOD_NAME = "$coverage.debug.DebugOutputFile.the_one"
SINGLETON_ATTR = "the_one_and_is_interim"
@classmethod
def _set_singleton_data(cls, the_one: DebugOutputFile, interim: bool) -> None:
"""Set the one DebugOutputFile to rule them all."""
singleton_module = types.ModuleType(cls.SYS_MOD_NAME)
setattr(singleton_module, cls.SINGLETON_ATTR, (the_one, interim))
sys.modules[cls.SYS_MOD_NAME] = singleton_module
@classmethod
def _get_singleton_data(cls) -> tuple[DebugOutputFile | None, bool]:
"""Get the one DebugOutputFile."""
singleton_module = sys.modules.get(cls.SYS_MOD_NAME)
return getattr(singleton_module, cls.SINGLETON_ATTR, (None, True))
@classmethod
def _del_singleton_data(cls) -> None:
"""Delete the one DebugOutputFile, just for tests to use."""
if cls.SYS_MOD_NAME in sys.modules:
del sys.modules[cls.SYS_MOD_NAME]
def write(self, text: str) -> None:
"""Just like file.write, but filter through all our filters."""
assert self.outfile is not None
self.outfile.write(filter_text(text, self.filters))
self.outfile.flush()
def flush(self) -> None:
"""Flush our file."""
assert self.outfile is not None
self.outfile.flush()
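The sys.modules stash described in the comment above can be reduced to a few lines (a hypothetical minimal version; the module name here is invented). A dollar sign in the name guarantees it can never collide with a real importable module, and because sys.modules is process-wide, the value survives the module that created it being deleted and re-imported.

```python
import sys
import types

MOD_NAME = "$example.singleton"  # "$" keeps it out of normal import space

def set_singleton(value):
    module = types.ModuleType(MOD_NAME)
    module.the_one = value
    sys.modules[MOD_NAME] = module

def get_singleton(default=None):
    module = sys.modules.get(MOD_NAME)
    return getattr(module, "the_one", default)

set_singleton({"answer": 42})
print(get_singleton())  # {'answer': 42}
```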
def log(msg: str, stack: bool = False) -> None: # pragma: debugging
"""Write a log message as forcefully as possible."""
out = DebugOutputFile.get_one(interim=True)
out.write(msg+"\n")
if stack:
dump_stack_frames(out=out, skip=1)
def decorate_methods(
decorator: Callable[..., Any],
butnot: Iterable[str] = (),
private: bool = False,
) -> Callable[..., Any]: # pragma: debugging
"""A class decorator to apply a decorator to methods."""
def _decorator(cls): # type: ignore[no-untyped-def]
for name, meth in inspect.getmembers(cls, inspect.isroutine):
if name not in cls.__dict__:
continue
if name != "__init__":
if not private and name.startswith("_"):
continue
if name in butnot:
continue
setattr(cls, name, decorator(meth))
return cls
return _decorator
def break_in_pudb(func: AnyCallable) -> AnyCallable: # pragma: debugging
"""A function decorator to stop in the debugger for each call."""
@functools.wraps(func)
def _wrapper(*args: Any, **kwargs: Any) -> Any:
import pudb
sys.stdout = sys.__stdout__
pudb.set_trace()
return func(*args, **kwargs)
return _wrapper
OBJ_IDS = itertools.count()
CALLS = itertools.count()
OBJ_ID_ATTR = "$coverage.object_id"
def show_calls(
show_args: bool = True,
show_stack: bool = False,
show_return: bool = False,
) -> Callable[..., Any]: # pragma: debugging
"""A method decorator to debug-log each call to the function."""
def _decorator(func: AnyCallable) -> AnyCallable:
@functools.wraps(func)
def _wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
oid = getattr(self, OBJ_ID_ATTR, None)
if oid is None:
oid = f"{os.getpid():08d} {next(OBJ_IDS):04d}"
setattr(self, OBJ_ID_ATTR, oid)
extra = ""
if show_args:
eargs = ", ".join(map(repr, args))
ekwargs = ", ".join("{}={!r}".format(*item) for item in kwargs.items())
extra += "("
extra += eargs
if eargs and ekwargs:
extra += ", "
extra += ekwargs
extra += ")"
if show_stack:
extra += " @ "
extra += "; ".join(short_stack(short_filenames=True).splitlines())
callid = next(CALLS)
msg = f"{oid} {callid:04d} {func.__name__}{extra}\n"
DebugOutputFile.get_one(interim=True).write(msg)
ret = func(self, *args, **kwargs)
if show_return:
msg = f"{oid} {callid:04d} {func.__name__} return {ret!r}\n"
DebugOutputFile.get_one(interim=True).write(msg)
return ret
return _wrapper
return _decorator
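The `show_calls` decorator above depends on coverage.py internals (`DebugOutputFile`, `short_stack`). The core idea can be sketched standalone: a per-call counter plus a wrapper that records the method name and arguments. All names below (`LOG`, `Greeter`) are hypothetical, not part of coverage.py.

```python
# Standalone sketch of the show_calls idea: log each method call with a
# global call counter (simplified; coverage.py writes to a debug file and
# also tags each object with a per-object id).
import functools
import itertools

CALLS = itertools.count()
LOG = []  # stand-in for the debug output file

def show_calls(func):
    @functools.wraps(func)
    def _wrapper(self, *args, **kwargs):
        callid = next(CALLS)
        eargs = ", ".join(map(repr, args))
        LOG.append(f"{callid:04d} {func.__name__}({eargs})")
        return func(self, *args, **kwargs)
    return _wrapper

class Greeter:
    @show_calls
    def greet(self, name):
        return f"hi {name}"
```

Calling `Greeter().greet("bob")` returns `"hi bob"` and appends `"0000 greet('bob')"` to `LOG`.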
def relevant_environment_display(env: Mapping[str, str]) -> list[tuple[str, str]]:
"""Filter environment variables for a debug display.
Select variables to display (with COV or PY in the name, or HOME, TEMP, or
TMP), and also cloak sensitive values with asterisks.
Arguments:
env: a dict of environment variable names and values.
Returns:
A list of pairs (name, value) to show.
"""
slugs = {"COV", "PY"}
include = {"HOME", "TEMP", "TMP"}
cloak = {"API", "TOKEN", "KEY", "SECRET", "PASS", "SIGNATURE"}
to_show = []
for name, val in env.items():
keep = False
if name in include:
keep = True
elif any(slug in name for slug in slugs):
keep = True
if keep:
if any(slug in name for slug in cloak):
val = re.sub(r"\w", "*", val)
to_show.append((name, val))
return human_sorted_items(to_show)
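The cloaking step in `relevant_environment_display` can be illustrated in isolation: any variable whose name contains a sensitive word has every word character of its value replaced with an asterisk, while punctuation is left visible. This is a minimal sketch using the same `CLOAK` words; the helper name `cloaked` is hypothetical.

```python
# Standalone sketch of the value-cloaking step: \w characters become "*",
# so the shape of the value survives but its content does not.
import re

CLOAK = {"API", "TOKEN", "KEY", "SECRET", "PASS", "SIGNATURE"}

def cloaked(name, val):
    if any(word in name for word in CLOAK):
        return re.sub(r"\w", "*", val)
    return val
```

For example, `cloaked("MY_API_TOKEN", "abc-123")` gives `"***-***"`, while `cloaked("HOME", "/home/me")` is returned unchanged.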
# coverage-7.4.4/coverage/disposition.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0

# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Simple value objects for tracking what to do with files."""
from __future__ import annotations
from typing import TYPE_CHECKING
from coverage.types import TFileDisposition
if TYPE_CHECKING:
from coverage.plugin import FileTracer
class FileDisposition:
"""A simple value type for recording what to do with a file."""
original_filename: str
canonical_filename: str
source_filename: str | None
trace: bool
reason: str
file_tracer: FileTracer | None
has_dynamic_filename: bool
def __repr__(self) -> str:
return f"<FileDisposition {self.original_filename!r}>"
# FileDisposition "methods": FileDisposition is a pure value object, so it can
# be implemented in either C or Python. Acting on them is done with these
# functions.
def disposition_init(cls: type[TFileDisposition], original_filename: str) -> TFileDisposition:
"""Construct and initialize a new FileDisposition object."""
disp = cls()
disp.original_filename = original_filename
disp.canonical_filename = original_filename
disp.source_filename = None
disp.trace = False
disp.reason = ""
disp.file_tracer = None
disp.has_dynamic_filename = False
return disp
def disposition_debug_msg(disp: TFileDisposition) -> str:
"""Make a nice debug message of what the FileDisposition is doing."""
if disp.trace:
msg = f"Tracing {disp.original_filename!r}"
if disp.original_filename != disp.source_filename:
msg += f" as {disp.source_filename!r}"
if disp.file_tracer:
msg += f": will be traced by {disp.file_tracer!r}"
else:
msg = f"Not tracing {disp.original_filename!r}: {disp.reason}"
return msg
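Because `FileDisposition` is a pure value object manipulated only by module-level functions, the pattern is easy to exercise with a minimal stand-in class. The names `Disp` and `debug_msg` below are hypothetical simplifications of `disposition_init` and `disposition_debug_msg`, keeping only a few of the attributes.

```python
# Minimal stand-in for the FileDisposition pattern: a bare value class,
# an init helper that works for either a C or Python implementation of the
# class, and a debug formatter that only reads attributes.
class Disp:
    pass

def disposition_init(cls, original_filename):
    disp = cls()
    disp.original_filename = original_filename
    disp.trace = False
    disp.reason = ""
    return disp

def debug_msg(disp):
    if disp.trace:
        return f"Tracing {disp.original_filename!r}"
    return f"Not tracing {disp.original_filename!r}: {disp.reason}"
```

Setting `disp.reason = "not a Python file"` on a fresh disposition yields `"Not tracing 'a.py': not a Python file"`.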
# coverage-7.4.4/coverage/env.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Determine facts about the environment."""
from __future__ import annotations
import os
import platform
import sys
from typing import Any, Iterable
# debug_info() at the bottom wants to show all the globals, but not imports.
# Grab the global names here to know which names to not show. Nothing defined
# above this line will be in the output.
_UNINTERESTING_GLOBALS = list(globals())
# These names also shouldn't be shown.
_UNINTERESTING_GLOBALS += ["PYBEHAVIOR", "debug_info"]
# Operating systems.
WINDOWS = sys.platform == "win32"
LINUX = sys.platform.startswith("linux")
OSX = sys.platform == "darwin"
# Python implementations.
CPYTHON = (platform.python_implementation() == "CPython")
PYPY = (platform.python_implementation() == "PyPy")
# Python versions. We amend version_info with one more value, a zero if an
# official version, or 1 if built from source beyond an official version.
# Only use sys.version_info directly where tools like mypy need it to understand
# version-specific code, otherwise use PYVERSION.
PYVERSION = sys.version_info + (int(platform.python_version()[-1] == "+"),)
if PYPY:
PYPYVERSION = sys.pypy_version_info # type: ignore[attr-defined]
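The extra element appended to `PYVERSION` works because version tuples compare element-wise. The values below are hypothetical, fixed tuples rather than this interpreter's real version, just to show the ordering behavior the comparisons in `PYBEHAVIOR` rely on.

```python
# Sketch of version-tuple ordering (hypothetical versions, not sys.version_info).
# The appended 0/1 flag means a source build "3.12.2+" sorts after the
# official 3.12.2 release.
official = (3, 12, 2, "final", 0, 0)      # official release -> extra 0
from_source = (3, 12, 2, "final", 0, 1)   # built beyond a release -> extra 1

assert from_source > official

# Pre-releases sort before finals because the release-level strings compare
# lexicographically: "alpha" < "beta" < "candidate" < "final".
assert (3, 10, 0, "alpha", 4) < (3, 10, 0, "beta", 1) < (3, 10, 0, "final", 0)
```

This is why checks like `PYVERSION > (3, 10, 0, "alpha", 4)` are true for every 3.10 beta, candidate, and final release.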
# Python behavior.
class PYBEHAVIOR:
"""Flags indicating this Python's behavior."""
# Does Python conform to PEP626, Precise line numbers for debugging and other tools.
# https://www.python.org/dev/peps/pep-0626
pep626 = (PYVERSION > (3, 10, 0, "alpha", 4))
# Is "if __debug__" optimized away?
optimize_if_debug = not pep626
# Is "if not __debug__" optimized away? The exact details have changed
# across versions.
if pep626:
optimize_if_not_debug = 1
elif PYPY:
if PYVERSION >= (3, 9):
optimize_if_not_debug = 2
else:
optimize_if_not_debug = 3
else:
optimize_if_not_debug = 2
# 3.7 changed how functions with only docstrings are numbered.
docstring_only_function = (not PYPY) and (PYVERSION <= (3, 10))
# When a break/continue/return statement in a try block jumps to a finally
# block, does the finally jump back to the break/continue/return (pre-3.10)
# to do the work?
finally_jumps_back = (PYVERSION < (3, 10))
# CPython 3.11 now jumps to the decorator line again while executing
# the decorator.
trace_decorator_line_again = (CPYTHON and PYVERSION > (3, 11, 0, "alpha", 3, 0))
# CPython 3.9a1 made sys.argv[0] and other reported files absolute paths.
report_absolute_files = (
(CPYTHON or (PYPY and PYPYVERSION >= (7, 3, 10)))
and PYVERSION >= (3, 9)
)
# Lines after break/continue/return/raise are no longer compiled into the
# bytecode. They used to be marked as missing, now they aren't executable.
omit_after_jump = (
pep626
or (PYPY and PYVERSION >= (3, 9) and PYPYVERSION >= (7, 3, 12))
)
# PyPy has always omitted statements after return.
omit_after_return = omit_after_jump or PYPY
# Optimize away unreachable try-else clauses.
optimize_unreachable_try_else = pep626
# Modules used to have firstlineno equal to the line number of the first
# real line of code. Now they always start at 1.
module_firstline_1 = pep626
# Are "if 0:" lines (and similar) kept in the compiled code?
keep_constant_test = pep626
# When leaving a with-block, do we visit the with-line again for the exit?
exit_through_with = (PYVERSION >= (3, 10, 0, "beta"))
# Match-case construct.
match_case = (PYVERSION >= (3, 10))
# Some words are keywords in some places, identifiers in other places.
soft_keywords = (PYVERSION >= (3, 10))
# Modules start with a line numbered zero. This means empty modules have
# only a 0-number line, which is ignored, giving a truly empty module.
empty_is_empty = (PYVERSION >= (3, 11, 0, "beta", 4))
# Are comprehensions inlined (new) or compiled as called functions (old)?
# Changed in https://github.com/python/cpython/pull/101441
comprehensions_are_functions = (PYVERSION <= (3, 12, 0, "alpha", 7, 0))
# PEP669 Low Impact Monitoring: https://peps.python.org/pep-0669/
pep669 = bool(getattr(sys, "monitoring", None))
# Where does frame.f_lasti point when yielding from a generator?
# It used to point at the YIELD, now it points at the RESUME.
# https://github.com/python/cpython/issues/113728
lasti_is_yield = (PYVERSION < (3, 13))
# Coverage.py specifics, about testing scenarios. See tests/testenv.py also.
# Are we coverage-measuring ourselves?
METACOV = os.getenv("COVERAGE_COVERAGE") is not None
# Are we running our test suite?
# Even when running tests, you can use COVERAGE_TESTING=0 to disable the
# test-specific behavior like AST checking.
TESTING = os.getenv("COVERAGE_TESTING") == "True"
def debug_info() -> Iterable[tuple[str, Any]]:
"""Return a list of (name, value) pairs for printing debug information."""
info = [
(name, value) for name, value in globals().items()
if not name.startswith("_") and name not in _UNINTERESTING_GLOBALS
]
info += [
(name, value) for name, value in PYBEHAVIOR.__dict__.items()
if not name.startswith("_")
]
return sorted(info)
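The `_UNINTERESTING_GLOBALS` trick, snapshotting `globals()` early so that later definitions can be distinguished from imports, works in any module. A hedged standalone sketch (the names `_BASELINE`, `ANSWER`, `GREETING`, `interesting` are all invented for illustration):

```python
# Sketch of the globals-snapshot trick from env.py: capture the names that
# exist before our own definitions, then report only names added afterwards.
import sys

_BASELINE = list(globals())  # names defined so far, including imports

ANSWER = 42
GREETING = "hi"

def interesting():
    """Return public names defined after the baseline snapshot."""
    return sorted(
        name for name in globals()
        if not name.startswith("_") and name not in _BASELINE
    )
```

Here `interesting()` returns `["ANSWER", "GREETING", "interesting"]`; `sys` is filtered out because it was already present at snapshot time.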
# coverage-7.4.4/coverage/exceptions.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Exceptions coverage.py can raise."""
from __future__ import annotations
class _BaseCoverageException(Exception):
"""The base-base of all Coverage exceptions."""
pass
class CoverageException(_BaseCoverageException):
"""The base class of all exceptions raised by Coverage.py."""
pass
class ConfigError(_BaseCoverageException):
"""A problem with a config file, or a value in one."""
pass
class DataError(CoverageException):
"""An error in using a data file."""
pass
class NoDataError(CoverageException):
"""We didn't have data to work with."""
pass
class NoSource(CoverageException):
"""We couldn't find the source for a module."""
pass
class NoCode(NoSource):
"""We couldn't find any code at all."""
pass
class NotPython(CoverageException):
"""A source file turned out not to be parsable Python."""
pass
class PluginError(CoverageException):
"""A plugin misbehaved."""
pass
class _ExceptionDuringRun(CoverageException):
"""An exception happened while running customer code.
Construct it with three arguments, the values from `sys.exc_info`.
"""
pass
class CoverageWarning(Warning):
"""A warning from Coverage.py."""
pass
# coverage-7.4.4/coverage/execfile.py
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Execute files of Python code."""
from __future__ import annotations
import importlib.machinery
import importlib.util
import inspect
import marshal
import os
import struct
import sys
from importlib.machinery import ModuleSpec
from types import CodeType, ModuleType
from typing import Any
from coverage import env
from coverage.exceptions import CoverageException, _ExceptionDuringRun, NoCode, NoSource
from coverage.files import canonical_filename, python_reported_file
from coverage.misc import isolate_module
from coverage.python import get_python_source
os = isolate_module(os)
PYC_MAGIC_NUMBER = importlib.util.MAGIC_NUMBER
class DummyLoader:
"""A shim for the pep302 __loader__, emulating pkgutil.ImpLoader.
Currently only implements the .fullname attribute
"""
def __init__(self, fullname: str, *_args: Any) -> None:
self.fullname = fullname
def find_module(
modulename: str,
) -> tuple[str | None, str, ModuleSpec]:
"""Find the module named `modulename`.
Returns the file path of the module, the name of the enclosing
package, and the spec.
"""
try:
spec = importlib.util.find_spec(modulename)
except ImportError as err:
raise NoSource(str(err)) from err
if not spec:
raise NoSource(f"No module named {modulename!r}")
pathname = spec.origin
packagename = spec.name
if spec.submodule_search_locations:
mod_main = modulename + ".__main__"
spec = importlib.util.find_spec(mod_main)
if not spec:
raise NoSource(
f"No module named {mod_main}; " +
f"{modulename!r} is a package and cannot be directly executed",
)
pathname = spec.origin
packagename = spec.name
packagename = packagename.rpartition(".")[0]
return pathname, packagename, spec
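The two-step `find_spec` pattern above, resolve the module, and if it turns out to be a package, look for its `__main__` submodule, can be exercised against the stdlib. The example below uses `json` as a stand-in target (an arbitrary choice, not part of execfile.py): it is a package, but it has no `__main__`, so running it with `-m` would fail just as `find_module` reports.

```python
# Sketch of the spec lookups in find_module, using the stdlib "json" package.
import importlib.util

spec = importlib.util.find_spec("json")
# "json" is a package, so it has submodule search locations...
is_package = bool(spec.submodule_search_locations)
# ...but it has no __main__ submodule, so find_spec returns None for it,
# which is the case where find_module raises NoSource.
main_spec = importlib.util.find_spec("json.__main__")
```

With CPython's stdlib, `is_package` is `True` and `main_spec` is `None`, matching the error path `"{modulename!r} is a package and cannot be directly executed"`.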
class PyRunner:
"""Multi-stage execution of Python code.
This is meant to emulate real Python execution as closely as possible.
"""
def __init__(self, args: list[str], as_module: bool = False) -> None:
self.args = args
self.as_module = as_module
self.arg0 = args[0]
self.package: str | None = None
self.modulename: str | None = None
self.pathname: str | None = None
self.loader: DummyLoader | None = None
self.spec: ModuleSpec | None = None
def prepare(self) -> None:
"""Set sys.path properly.
This needs to happen before any importing, and without importing anything.
"""
path0: str | None
if self.as_module:
path0 = os.getcwd()
elif os.path.isdir(self.arg0):
# Running a directory means running the __main__.py file in that
# directory.
path0 = self.arg0
else:
path0 = os.path.abspath(os.path.dirname(self.arg0))
if os.path.isdir(sys.path[0]):
# sys.path fakery. If we are being run as a command, then sys.path[0]
# is the directory of the "coverage" script. If this is so, replace
# sys.path[0] with the directory of the file we're running, or the
# current directory when running modules. If it isn't so, then we
# don't know what's going on, and just leave it alone.
top_file = inspect.stack()[-1][0].f_code.co_filename
sys_path_0_abs = os.path.abspath(sys.path[0])
top_file_dir_abs = os.path.abspath(os.path.dirname(top_file))
sys_path_0_abs = canonical_filename(sys_path_0_abs)
top_file_dir_abs = canonical_filename(top_file_dir_abs)
if sys_path_0_abs != top_file_dir_abs:
path0 = None
else:
# sys.path[0] is a file. Is the next entry the directory containing
# that file?
if sys.path[1] == os.path.dirname(sys.path[0]):
# Can it be right to always remove that?
del sys.path[1]
if path0 is not None:
sys.path[0] = python_reported_file(path0)
def _prepare2(self) -> None:
"""Do more preparation to run Python code.
Includes finding the module to run and adjusting sys.argv[0].
This method is allowed to import code.
"""
if self.as_module:
self.modulename = self.arg0
pathname, self.package, self.spec = find_module(self.modulename)
if self.spec is not None:
self.modulename = self.spec.name
self.loader = DummyLoader(self.modulename)
assert pathname is not None
self.pathname = os.path.abspath(pathname)
self.args[0] = self.arg0 = self.pathname
elif os.path.isdir(self.arg0):
# Running a directory means running the __main__.py file in that
# directory.
for ext in [".py", ".pyc", ".pyo"]:
try_filename = os.path.join(self.arg0, "__main__" + ext)
# 3.8.10 changed how files are reported when running a
# directory. But I'm not sure how far this change is going to
# spread, so I'll just hard-code it here for now.
if env.PYVERSION >= (3, 8, 10):
try_filename = os.path.abspath(try_filename)
if os.path.exists(try_filename):
self.arg0 = try_filename
break
else:
raise NoSource(f"Can't find '__main__' module in '{self.arg0}'")
# Make a spec. I don't know if this is the right way to do it.
try_filename = python_reported_file(try_filename)
self.spec = importlib.machinery.ModuleSpec("__main__", None, origin=try_filename)
self.spec.has_location = True
self.package = ""
self.loader = DummyLoader("__main__")
else:
self.loader = DummyLoader("__main__")
self.arg0 = python_reported_file(self.arg0)
def run(self) -> None:
"""Run the Python code!"""
self._prepare2()
# Create a module to serve as __main__
main_mod = ModuleType("__main__")
from_pyc = self.arg0.endswith((".pyc", ".pyo"))
main_mod.__file__ = self.arg0
if from_pyc:
main_mod.__file__ = main_mod.__file__[:-1]
if self.package is not None:
main_mod.__package__ = self.package
main_mod.__loader__ = self.loader # type: ignore[assignment]
if self.spec is not None:
main_mod.__spec__ = self.spec
main_mod.__builtins__ = sys.modules["builtins"] # type: ignore[attr-defined]
sys.modules["__main__"] = main_mod
# Set sys.argv properly.
sys.argv = self.args
try:
# Make a code object somehow.
if from_pyc:
code = make_code_from_pyc(self.arg0)
else:
code = make_code_from_py(self.arg0)
except CoverageException:
raise
except Exception as exc:
msg = f"Couldn't run '{self.arg0}' as Python code: {exc.__class__.__name__}: {exc}"
raise CoverageException(msg) from exc
# Execute the code object.
# Return to the original directory in case the test code exits in
# a non-existent directory.
cwd = os.getcwd()
try:
exec(code, main_mod.__dict__)
except SystemExit: # pylint: disable=try-except-raise
# The user called sys.exit(). Just pass it along to the upper
# layers, where it will be handled.
raise
except Exception:
# Something went wrong while executing the user code.
# Get the exc_info, and pack them into an exception that we can
# throw up to the outer loop. We peel one layer off the traceback
# so that the coverage.py code doesn't appear in the final printed
# traceback.
typ, err, tb = sys.exc_info()
assert typ is not None
assert err is not None
assert tb is not None
# PyPy3 weirdness. If I don't access __context__, then somehow it
# is non-None when the exception is reported at the upper layer,
# and a nested exception is shown to the user. This getattr fixes
# it somehow? https://bitbucket.org/pypy/pypy/issue/1903
getattr(err, "__context__", None)
# Call the excepthook.
try:
assert err.__traceback__ is not None
err.__traceback__ = err.__traceback__.tb_next
sys.excepthook(typ, err, tb.tb_next)
except SystemExit: # pylint: disable=try-except-raise
raise
except Exception as exc:
# Getting the output right in the case of excepthook
# shenanigans is kind of involved.
sys.stderr.write("Error in sys.excepthook:\n")
typ2, err2, tb2 = sys.exc_info()
assert typ2 is not None
assert err2 is not None
assert tb2 is not None
err2.__suppress_context__ = True
assert err2.__traceback__ is not None
err2.__traceback__ = err2.__traceback__.tb_next
sys.__excepthook__(typ2, err2, tb2.tb_next)
sys.stderr.write("\nOriginal exception was:\n")
raise _ExceptionDuringRun(typ, err, tb.tb_next) from exc
else:
sys.exit(1)
finally:
os.chdir(cwd)
def run_python_module(args: list[str]) -> None:
"""Run a Python module, as though with ``python -m name args...``.
`args` is the argument array to present as sys.argv, including the first
element naming the module being executed.
This is a helper for tests, to encapsulate how to use PyRunner.
"""
runner = PyRunner(args, as_module=True)
runner.prepare()
runner.run()
def run_python_file(args: list[str]) -> None:
"""Run a Python file as if it were the main program on the command line.
`args` is the argument array to present as sys.argv, including the first
element naming the file being executed. `package` is the name of the
enclosing package, if any.
This is a helper for tests, to encapsulate how to use PyRunner.
"""
runner = PyRunner(args, as_module=False)
runner.prepare()
runner.run()
def make_code_from_py(filename: str) -> CodeType:
"""Get source from `filename` and make a code object of it."""
# Open the source file.
try:
source = get_python_source(filename)
except (OSError, NoSource) as exc:
raise NoSource(f"No file to run: '{filename}'") from exc
return compile(source, filename, "exec", dont_inherit=True)
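The compile-then-exec pattern used by `make_code_from_py` and `PyRunner.run` can be shown in miniature. This is a hedged sketch, not coverage.py's own code path: `dont_inherit=True` keeps any `__future__` flags active in the caller from leaking into the compiled code.

```python
# Minimal sketch of compile()/exec() as used here: compile source to a code
# object with an explicit filename, then execute it in a fresh namespace
# (PyRunner uses main_mod.__dict__ instead of a bare dict).
source = "x = 2 + 3\ny = x * 2\n"
code = compile(source, "<demo>", "exec", dont_inherit=True)
namespace = {}
exec(code, namespace)
```

After execution, `namespace["x"]` is `5` and `namespace["y"]` is `10`; the `"<demo>"` filename is what tracebacks from the executed code would report.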
def make_code_from_pyc(filename: str) -> CodeType:
"""Get a code object from a .pyc file."""
try:
fpyc = open(filename, "rb")
except OSError as exc:
raise NoCode(f"No file to run: '{filename}'") from exc
with fpyc:
# First four bytes are a version-specific magic number. It has to
# match or we won't run the file.
magic = fpyc.read(4)
if magic != PYC_MAGIC_NUMBER:
raise NoCode(f"Bad magic number in .pyc file: {magic!r} != {PYC_MAGIC_NUMBER!r}")
flags = struct.unpack("<L", fpyc.read(4))[0]