tuf-6.0.0/tox.ini

# Tox (https://tox.readthedocs.io/en/latest/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.

[tox]
isolated_build=true
envlist = lint,docs,py
skipsdist = true

[testenv]
commands =
    python3 --version
    python3 -m coverage run -m unittest
    python3 -m coverage report -m --fail-under 97
deps =
    -r{toxinidir}/requirements/test.txt
install_command = python3 -m pip install {opts} {packages}
# Workaround https://github.com/tox-dev/tox/issues/2801 (python3 not allowed in Windows)
allowlist_externals = python3

# Develop test env to run tests against securesystemslib's main branch
# Must be invoked explicitly with, e.g. `tox -e with-sslib-main`
[testenv:with-sslib-main]
commands_pre =
    python3 -m pip install --force-reinstall git+https://github.com/secure-systems-lab/securesystemslib.git@main#egg=securesystemslib[crypto]
commands =
    python3 -m coverage run -m unittest
    python3 -m coverage report -m

[testenv:lint]
deps =
    -r{toxinidir}/requirements/lint.txt
lint_dirs = tuf examples tests verify_release .github/scripts
passenv = RUFF_OUTPUT_FORMAT
commands =
    ruff check {[testenv:lint]lint_dirs}
    ruff format --diff {[testenv:lint]lint_dirs}
    mypy {[testenv:lint]lint_dirs}
    zizmor --persona=pedantic -q .

[testenv:fix]
deps = {[testenv:lint]deps}
commands =
    ruff check --fix {[testenv:lint]lint_dirs}
    ruff format {[testenv:lint]lint_dirs}

[testenv:docs]
deps =
    -r{toxinidir}/requirements/docs.txt
commands =
    sphinx-build --fail-on-warning --quiet --builder html docs docs/build/html

tuf-6.0.0/docs/1.0.0-ANNOUNCEMENT.md

# Announcing TUF 1.0.0

Python-TUF v1.0.0 is a rewritten stable reference implementation of the TUF
specification, which *currently* includes:

- a modern low-level [*metadata API*](https://theupdateframework.readthedocs.io/en/latest/api/tuf.api.html)
- a fully specification-compliant [*updater client*](https://theupdateframework.readthedocs.io/en/latest/api/tuf.ngclient.html),
  serving as a more robust and yet more flexible stand-in replacement for the
  legacy client updater

For the reasons outlined in
[ADR 10](https://github.com/theupdateframework/python-tuf/blob/develop/docs/adr/0010-repository-library-design.md),
this release *does not yet* include *repository tool*-like functionality. However, the
new *metadata API* makes it easy to replicate the desired functionality tailored to the
specific needs of any given repository (see *Migration* for details).

As discussed in
[ADR 2](https://github.com/theupdateframework/python-tuf/blob/develop/docs/adr/0002-pre-1-0-deprecation-strategy.md),
this release *does not* include any legacy code, as its maintenance has become
infeasible for the python-tuf team. The pre-1.0.0 deprecation strategy from ADR 2
applies as follows:

> *Bugs reported with tuf versions prior to 1.0.0 will likely not be addressed directly
> by tuf’s maintainers. Pull Requests to fix bugs in the last release prior to 1.0.0
> will be considered, and merged (subject to normal review processes).
> Note that there may be delays due to the lack of developer resources for reviewing
> such pull requests.*

## Migration

Given the clean cut with the legacy reference implementation, we provide the following
migration support:

- detailed code documentation on
  [https://theupdateframework.readthedocs.io](https://theupdateframework.readthedocs.io/)
- verbose [code examples](https://github.com/theupdateframework/python-tuf/tree/develop/examples)
  for *client updater* usage, and repository-side operations based on the low-level
  *metadata API*
- individual migration support upon
  [request](https://github.com/theupdateframework/python-tuf#contact)
- targeted migration support initiative for known users

tuf-6.0.0/docs/CHANGELOG.md

# Changelog

## Unreleased

## v6.0.0

This release is not strictly speaking an API break from 5.1 but it does contain some
major internal changes that users should be aware of when upgrading.

### Changed

* ngclient: urllib3 is used as the HTTP library by default instead of requests
  (#2762, #2773, #2789)
  * This removes dependencies on `requests`, `idna`, `charset-normalizer` and `certifi`
  * The deprecated RequestsFetcher implementation is available but requires selecting
    the fetcher at Updater initialization and explicitly depending on requests
* ngclient: TLS certificate source was changed. Certificates now come from the
  operating system certificate store instead of `certifi` (#2762)
* ngclient: The updater can now initialize from embedded initial root metadata every
  time. Users are recommended to provide the `bootstrap` argument to Updater (#2767);
  an illustrative sketch appears further below
* Test infrastructure has improved and should now be more usable externally, e.g. in
  distro test suites (#2749)

## v5.1.0

### Changed

* ngclient: default user-agent was updated from "tuf/x.y.z" to "python-tuf/x.y.z" (#2632)
* ngclient: max_root_rotations default value was bumped to 256 to prevent a too small
  value from creating issues in actual deployments where the embedded root is not
  easily updateable (#2675)
* repository: do_snapshot() and do_timestamp() now always create new versions if the
  current version is not correctly signed (#2650)
* Various infrastructure and documentation improvements

## v5.0.0

This release, most notably, marks stable securesystemslib v1.0.0 as minimum requirement.
The update causes a minor break in the new DSSE API (see below) and affects users who
also directly depend on securesystemslib. See the
[securesystemslib release notes](https://github.com/secure-systems-lab/securesystemslib/blob/main/CHANGELOG.md#securesystemslib-v100)
and the updated python-tuf `examples` (#2617) for details. ngclient API remains
backwards-compatible.

### Changed

* DSSE API: change `SimpleEnvelope.signatures` type to `dict`, remove
  `SimpleEnvelope.signatures_dict` (#2617)
* ngclient: support app-specific user-agents (#2612)
* Various build, test and lint improvements

## v4.0.0

This release is a small API change for Metadata API users (see below). ngclient API is
compatible but optional DSSE support has been added.
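The v6.0.0 notes above recommend passing the `bootstrap` argument to `Updater`. The
following is a minimal, illustrative sketch only; the file path, URLs and directory
names are assumptions, not taken from the release notes:

```python
# Hypothetical ngclient initialization that always starts from an embedded,
# trusted root.json instead of whatever happens to be cached locally.
from tuf.ngclient import Updater

with open("trusted_root.json", "rb") as f:  # assumed location of the embedded root
    trusted_root = f.read()

updater = Updater(
    metadata_dir="/tmp/tuf/metadata",                   # local cache for verified metadata
    metadata_base_url="https://example.com/metadata/",  # example repository URL
    target_dir="/tmp/tuf/targets",                      # where downloaded targets go
    target_base_url="https://example.com/targets/",
    bootstrap=trusted_root,                             # initialize from this root every time
)
updater.refresh()  # update to the latest trusted metadata
```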
### Added * Added optional DSSE support to Metadata API and ngclient (#2436) ### Changed * Metadata API: Improved verification functionality for repository users (#2551): * This is an API change for Metadata API users ( `Root.get_verification_result()` and `Targets.get_verification_result()` specifically) * `Root.get_root_verification_result()` has been added to handle the special case of root verification * Started using UTC datetimes instead of naive datetimes internally (#2573) * Constrain securesystemslib dependency to <0.32.0 in preparation for future securesystemslib API changes * Various build, test and lint improvements ## v3.1.1 This is a security fix release to address advisory GHSA-77hh-43cm-v8j6. The issue does **not** affect tuf.ngclient users, but could affect tuf.api.metadata users. ### Changed * Added additional input validation to `tuf.api.metadata.Targets.get_delegated_role()` ## v3.1.0 ### Added * Metadata API: move verify_delegate() to Root/Targets (#2378) - *verify_delegate() on Metadata is now deprecated* * Metadata API: add get_verification_result() as verbose alternative for verify_delegate() (#2481) * Metadata API: add MetaFile.from_data() convenience factory (#2273) ### Changed * Metadata API: change Root.roles type hint to Dict (#2411) * Various minor improvements in tests (#2447, #2491), docs (#2390, #2392, #2474) and build (#2389, #2453, #2479, #2488) ### Removed * build: Python 3.7 support (#2460) ## v3.0.0 The notable change in this release is #2165: The tuf.api.metadata.Key class implementation was moved to Securesystemslib with minor API changes. These changes require no action in tuf.ngclient users but may require small changes in tuf.api.metadata using repository implementations that create keys. As a result of these changes, both signing and verification are now fully extensible, see Securesystemslib signer API for details. tuf.repository remains an unstable module in 3.0.0. 
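As a concrete illustration of the extensible signing mentioned in the v3.0.0 notes
above, here is a minimal sketch. It assumes securesystemslib ≥ 1.0 and its
`CryptoSigner` helper; the file name is a placeholder:

```python
# Sign an existing metadata file with a securesystemslib Signer.
from securesystemslib.signer import CryptoSigner
from tuf.api.metadata import Metadata, Snapshot

signer = CryptoSigner.generate_ed25519()                 # throwaway in-memory key for the example
snapshot = Metadata[Snapshot].from_file("snapshot.json") # load and validate local metadata
snapshot.sign(signer)                                     # sign the metadata with the new signer
snapshot.to_file("snapshot.json")                         # write the signed metadata back out
```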
### Added * Build: Use pydocstyle to lint docstrings (#2283, #2281) * Examples: Add Repository uploader/signer tool example (#2241) * Metadata API: Add TargetFile.get_prefixed_paths() (#2166) * ngclient: Export TargetFile (#2279) * repository: Add strictly typed accessors and context managers (#2311) * Release: Use PyPI Trusted Publishing https://docs.pypi.org/trusted-publishers/ (#2371) ### Changed * Build: Various minor build and release infrastructure improvements, dependency updates * Metadata API: Key class is still part of the API but now comes from Securesystemslib (#2165): * `Key.verify_signature()` method signature has changed * `Key.from_securesystemslib_key()` was removed: Use Securesystemslibs `SSlibKey.from_securesystemslib_key()` instead ## v2.1.0 ### Added * repo: experimental repository module and example (#2193) * ngclient: expose default requests fetcher (#2277) * workflow: OpenSSF scorecard (#2190) * build: Python 3.11 support (#2157) * docs: security policy (#2098, #2178) * blog: signer API (#2276) * blog: security audit (#2155, #2156) ### Changed * Metadata API: bump specification version 1.0.31 (#2119) * Metadata API: allow zero length metadata files (#2137) * Metadata API: add default value for MetaFile version (#2211) * Metadata API, ngclient: decrease logger verbosity (#2243) * ngclient: define API explicitly (#2233) * ngclient: improve example client output (#2194) * ngclient: support URLs without host part (#2075) * ngclient: update metaclass syntax (#2215) * ngclient: fail gracefully on missing role (#2197) * ngclient: improve type annotations in TrustedMetadataSet (#2250) * doc: misc improvements (2097, #2130, #2183, #2185, #2201, #2208, #2230, #2278) * build: misc improvements (#2090, #2091, #2122, #2187, #2188, #2217, #2252) * workflow: misc improvements (#2001, #2092, #2147, #2159, #2173) ## v2.0.0 This release, most notably, adds support for [TAP 15] - succinct hash bin delegation, which results in a few backwards-incompatible changes in the Metadata API. **NOTE**: While TAP 15 has been accepted it is not yet part of the TUF specification. Therefore, adopters should be prepared for potential changes to the implementation in future and for a lack of support for TAP 15 in other TUF implementations. [TAP 15]: https://github.com/theupdateframework/taps/blob/master/tap15.md ### Added * Metadata API: TAP 15 - succinct hash bin delegation (#2010, #2031, #2038, #2039) * build: CodeQL analysis action (#1932) * build: Dependency review action (#1974) * blog: ngclient design (#1914) * blog: tricky test cases (#1941, #2027) ### Changed * Metadata API: **BREAKING CHANGES** in Root and Targets class (#2010) - Argument order changed in add_key() and remove_key() - remove_key() renamed to revoke_key() * Metadata API: Update supported spec version to 1.0.30 (#2035) * ngclient: Use trusted timestamp role if new timestamp has equal version (#2024) * docs: Misc improvements (#1983, #2002, #2004, #2041, #2051, #2064) * tests: Misc improvements (#2017) * tests: Stop using requests type annotations (#1991) * build: Pin hatchling version (#1989) * build: Tweak pip download in verify_release script (#1982) * build: Update pinned dependency versions ### Fixes * Metadata API: Check None instead of falsyness for some optional arguments (#1975) * ngclient: Prevent use of potentially undefined variable (#2003) * tests: Change git attributes for test data (#2063) ## v1.1.0 This release contains major build improvements as well as fixes and backwards-compatible API improvements. 
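The v2.0.0 notes above describe the reordered `add_key()`/`revoke_key()` arguments. A
hedged sketch of the new call order follows; it is shown with the current class layout
(in 2.0.0 the `Key` class still lived in python-tuf), and the key, role name and file
name are placeholders:

```python
# Post-2.0.0 argument order: the key object comes first, then the role name.
from securesystemslib.signer import CryptoSigner
from tuf.api.metadata import Metadata, Root

signer = CryptoSigner.generate_ed25519()   # example key only
root = Metadata[Root].from_file("root.json")

root.signed.add_key(signer.public_key, "timestamp")           # authorize key for the timestamp role
root.signed.revoke_key(signer.public_key.keyid, "timestamp")  # formerly remove_key()
```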
### Added * build: Release process was moved to CD platform (#1946, #1971, #1976) * build: Build is now reproducible thanks to Hatchling (#1896, #1900) * build: Build results are now verifiable (#1913, #1926, #1947, #1979) * build: test dependencies are now pinned for reproducibility (#1867, #1918) * Metadata API: Validation is now possible during serialization (#1775) * Infrastructure: Setup development blog (#1886, #1887) ### Changed * Metadata API: Supported specification version updated (#1908, #1960) * Metadata API: unrecognized_fields annotation fix (#1950) * Metadata API: Constructors are now easier to use (#1922) * Metadata API: Logging and error message improvements (#1876) * build: Include examples in source distribution (#1970) * build: Updated pinned dependency versions * tests: Various improvements (#1707, #1758, #1808, #1860, #1915, #1936, #1953, #1954, #1955) ## v1.0.0 This release makes ngclient and the Metadata API the supported python-tuf APIs. It also removes the legacy implementation as documented in the [1.0.0 announcement](1.0.0-ANNOUNCEMENT.md): all library code is now contained in `tuf.api` or `tuf.ngclient`. ### Added * tests: Extend testing (#1689, #1703, #1711, #1728, #1735, #1738, #1742, #1766, #1777, #1809, #1831) ### Changed * Metadata API: Disallow microseconds in expiry (#1712) * Metadata API: Preserve role keyid order (#1754) * Metadata API: Make exceptions more consistent (#1725, #1734, #1787, #1840, #1836) * Metadata API: Update supported spec version to "1.0.28" (#1825) * Metadata API: Accept legacy spec version "1.0" (#1796) * Metadata API: Accept custom fields in Metadata (#1861) * ngclient: Remove temporary file in failure cases (#1757) * ngclient: Explicitly encode rolename in URL (#1759) * ngclient: Allow HTTP payload compression (#1774) * ngclient: Make exceptions more consistent (#1799, #1810) * docs: Improve documentation (#1744, #1749, #1750, #1755, #1771, #1776, #1772, #1780, #1781, #1800, #1815, #1820, #1829, #1838, #1850, #1853, #1855, #1856 #1868, #1871) * build: Various build infrastructure improvements (#1718, #1724, #1760, #1762, #1767, #1803, #1830, #1832, #1837, #1839) * build: Stop supporting EOL Python 3.6 (#1783) * build: Update dependencies (#1809, #1827, #1834, #1863, #1865, #1870) ### Removed * Remove all legacy code including old client, repository_tool, repository_lib and the scripts (#1790) * Metadata API: Remove modification helper methods that are no longer necessary (#1736, #1740, #1743) * tests: Remove client tests that were replaced with better ones (#1741) * tests: Stop using unittest_toolbox (#1792) * docs: Remove deprecated documentation (#1768, #1769, #1773, #1848) ## v0.20.0 *__NOTE:__ This will be the final release of python-tuf that includes the legacy implementation code. 
Please see the [*1.0.0 announcement*](1.0.0-ANNOUNCEMENT.md) page for more details about the next release and the deprecation of the legacy implementation, including migration instructions.* ### Added * metadata API: misc input validation (#1630, #1688, #1668, #1672, #1690) * doc: repository library design document and ADR (#1693) * doc: 1.0.0 announcement (#1706) * doc: misc docstrings in metadata API (#1620) * doc: repository and client examples (#1675, #1685, #1700) * test: ngclient key rotation (#1635, #1649, #1691) * test: ngclient top-level role update (#1636) * test: ngclient non-consistent snapshot (#1666, #1705) * test: more lint/type checks and auto-formatting (#1658, #1664, #1659, #1674, #1677, #1687, #1699, #1701, #1708, #1710, #1720, #1726) * build: Python 3.10 support (#1628) ### Changed * ngclient: misc API changes (#1604, #1731) * ngclient: avoid re-loading verified targets metadata (#1593) * ngclient: implicitly call refresh() (#1654) * ngclient: return loaded metadata (#1680) * ngclient: skip visited nodes on delegation tree traversal (#1683) * ngclient: remove URL normalisation (#1686) * build: modernise packaging configuration (#1626) * build: bump dependencies (#1609, #1611, #1616, #1621) * build: limit GitHub Action token visibility and permissions (#1652, #1663) * test: misc test changes (#1715, #1670, #1671, #1631, #1695, #1702) ### Removed * doc: obsolete roadmap (#1698) ## v0.19.0 For users of legacy client (tuf.client module) this is purely a security fix release with no API or functionality changes. For ngclient (tuf.ngclient) and Metadata API (tuf.api.metadata), some API changes are included. **All users are advised to upgrade**. Note that python-tuf has required python>=3.5 since release 0.18.0. ### Fixed * GHSA-wjw6-2cqr-j4qr: Fix client side issue in both legacy client (tuf.client) and ngclient (tuf.ngclient) where a malicious repository could trick client to overwrite files outside the client metadata store during a metadata update. The fix includes percent-encoding the metadata rolename before using it as part of a filename https://github.com/theupdateframework/python-tuf/security/advisories/GHSA-wjw6-2cqr-j4qr * ngclient: Do not use urljoin to form metadata URL (included in GHSA-wjw6-2cqr-j4qr) * ngclient: Persist metadata safely (#1574) * ngclient: Handle timeout on session.get() (#1588) ### Added * build: Dependabot now monitors GitHub Actions (#1572) * tests: ngclient test improvements (#1564, #1569, #1587) * Metadata API: Add TargetFile.from_file() (#1521) ### Changed * build: Bump dependency charset-normalizer (#1581, #1586) * build: Bump dependency urllib3 (#1589) * build: Bump dependency cryptography (#1596) * Metadata API: Documentation improvements (#1533, #1590) * Metadata API: change Timestamp meta API (#1446) * Metadata API: change Delegations roles API (#1537) * ngclient: Remove unnecessary sleep() (#1608) * ngclient: Fix consistent targets URL resolution (#1591) * ngclient: Don't use target path as local path (#1592) ## v0.18.1 ### Changed * Update setup.cfg to not build universal wheels (#1566) ## v0.18.0 0.18 is a big release with 3 main themes: * Support only Python 3 and modernize the infrastructure accordingly * Metadata API (a low-level API for metadata de/serialization and modification) is now feature-complete for the client use cases * ngclient (a new high-level client API) was added. ngclient should be considered an unstable API and is not yet recommended for production use. 
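To make the second theme above more concrete, here is a small sketch of the kind of
per-file round-trip the Metadata API is designed for. It is shown with the present-day
API (0.18-era details differed slightly) and the file names are placeholders:

```python
# Load, inspect and re-serialize one metadata file without touching the rest
# of the repository.
from tuf.api.metadata import Metadata, Root

root = Metadata[Root].from_file("root.json")   # deserialize and validate
print(root.signed.version, root.signed.expires)
print(list(root.signed.roles))                 # top-level role names defined by root
root.to_file("root-copy.json")                 # serialize back to disk
```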
Additionally the Github project name changed: project is now "python-tuf" instead of "tuf". Redirects are in place for the old name but updating links is advised. ### Added * Add ADR6: Where to implement serialization (#1270) * Add ADR8: Unrecognized fields (#1343) * Add ADR9: Refine reference implementation purpose (#1554) * Add client Network IO abstraction (#1250, #1302) * Add many features to Metadata API to support de/serializing specification-compliant metadata, and safer access through API: * Metadata.from_bytes()/to_bytes() (#1354, #1490) * Key, Role (#1360, #1386, #1423, #1480, #1481, #1520) * DelegationRole, Delegations (#1370, #1512) * MetaFile, TargetFile (#1329, #1437, #1454, #1514) * verification of threshold of signatures (#1435, #1436) * expiration check method (#1347) * support unrecognized fields in metadata (#1345) * use Generics to improve static typing (#1457) * Extensive Metadata API testing and validation (#1359, #1416, #1416, #1430, #1449, #1450, #1451, #1460, #1466, #1511) * Add ngclient: a new client library implementation (#1408, #1448, #1463 #1467, #1470, #1474, #1501, #1509, #1519, #1524) * Infrastructure improvements: * mypy, black and isort integration (#1314, #1363, #1395, #1455, #1489) * API reference documentation build (#1517) ### Removed * Remove Python 2 support (#1293) * Remove direct dependency on six * Remove obsolete reference to Thandy in a LICENSE file (#1472) ### Changed * Bump dependencies: * Certifi * Cryptography * Idna * Requests * Securesystemslib * Six * Urllib3 * Replace indirect dependency chardet with charset-normalizer * Move Metadata API serialization to sub-package (#1279) * Use SecureSystemslib Signer interface in Metadata API (#1272) * Make imports compatible with vendoring (#1261) ### Fixed * 'ecdsa' is a supported key type (#1453) * Fix various build infrastructure issues (#1289, #1295, #1321, #1327, #1364, #1369, #1542) * Test fixes (#1337, #1346) ## v0.17.0 **NOTE**: this will be the final release of tuf that supports Python 2.7. This is because Python 2.7 was marked [end-of-life]( https://www.python.org/dev/peps/pep-0373/) in January of 2020, and since then several of tuf's direct and transient dependencies have stopped supporting Python 2.7. ### Added * Added Architectural Decisions Records (ADRs) for: * where to develop python-tuf 1.0 (#1220) * to justify the extent of OOP in the metadata model (#1229) * to decide on a Python code style guide (#1232) ### Changed * Switch to GitHub Actions for CI (#1242, #1283, #1252) * Switch to only running bandit on Python versions greater than 3.5 (#1234) * Bump dependencies: requests (#1245), chardet (#1239), urllib3 (#1268), cffi (#1280), securesystemslib (#1285), cryptography (#1282, #1286). **NOTE**: the latest version of cryptography is no longer used on Python 2, as that is not supported. 
* Moved from dependabot-preview to GitHub native Dependabot (#1258) * Configure dependabot to ignore idna, as it breaks Python 2.7 builds (#1259) * Install securesystemslib in tox in non-editable mode (#1228) * Change the editable venv installation order (#1271) ### Fixed * Updated expiration check in Updater to better match the specification (#1235) * Ensure tempfile's are closed in Updater (#1226) ### Removed * Dropped support for Python 3.5 (#1238) ## v0.16.0 ### Added * Begin to document architectural and project-wide decisions as Architectural Decision Records (ADRs) in docs/adr (#1182, #1203) * Add Python 3.9 to the CI test matrix (#1200) * Implement a class for Root metadata in the simple TUF role metadata model in `tuf.api` (#1193) ### Changed * Bump dependencies: cryptography (#1189, #1190), requests (#1210), urllib (#1212), cffi (#1222), certifi (#1201), securesystemslib (#1191) * Simplify the test runner (`aggregate_tests`) and stop executing unit test modules in a random order (#1187) * Speed up indefinite freeze tests by removing `sleep()` calls (#1194) * Adapt to securesystemslib changes in key generation interfaces (#1191) * Migrate from travis-ci.org to travis-ci.com (#1208) * Make metadata signatures ordered by keyid, to ensure deterministic signature ordering in metadata files (#1217) * Improve test reliability by using thread-safe `Queue`s, rather than files, for process communication (#1198) * Avoid reading an entire target file into memory when generating target file hashes in `tuf.client.updater` (#1219) * Remove use of an empty list (`[]`) as the default argument in a test function (#1216) * Simplified updater logic for downloading and verifying target files (#1202) ### Fixed * Fix threshold computation in `_verify_root_self_signed()` such that signatures by the same root key count only once towards the threshold (#1218) ## v0.15.0 ### Added * Simple TUF role metadata model in the `tuf.api` package for interacting with metadata files directly, per-file without the overheads of reading and writing the entire repository at once (#1112, #1177, #1183) * Raise `MissingLocalRepositoryError` in updater when local repository can not be found (#1173) * Tests for targets metadata generation with existing `fileinfo` (#1078) * Test-verbosity documentation (#1151) ### Changed * Raise an error in `tuf.client.updater` when metadata is loaded without a signature (#1100) * Print a warning in `tuf.repository_tool` when metadata is written without a signature (#1100) * Remove iso8661 dependency (#1176) * Bump dependencies: cffi (#1146), cryptography (#1149), urllib (#1179), securesystemslib (#1183) * Overhauled logging to be less verbose and less alarming, by removing logging in the library when an exception is raised (including the same information that was logged) and using more appropriate log levels (#1145) * Make test output more useful by reducing and improving logging (#1145, #1104, #1170) * Make the `targets_path`, `metadata_path` and `confined_target_dirs` fields in `tuf.client.updater`s mirror configuration optional (#1153, #1166) * Include LICENSE files with source distributions (#1162) * Update Python version to be used in release instructions (#1163) * Remove direct use of `colorama` and dependency (#1180) ### Fixed * Ensure file objects and `requests.Responses` are closed during tests (#1147) * Auto-test against `securesystemslib` head of development (#1185) * Fix parameter name in `tuf.repository_lib` error message (#1078) ## v0.14.0 ### Added * Added a mechanism to the 
Updater to disable the hash prefix for target files even when `consistent_snapshot` is enabled for a repository (#1102) ### Changed * Updater now uses keyids provided in the metadata, rather than re-calculating keyids using `keyid_hash_algorithms` (#1014, #1121) * When loading an existing repository the keyids provided in the metadata will be used, rather than re-calculating keyids using `keyid_hash_algorithms` (#1014, #1121) * Improve reliability and performance of tests by removing sleep calls, instead use polling to check whether the simple_server is ready to accept connections (#1096) * Only calculate lengths and hashes of files listed by timestamp and snapshot metadata when those lengths and hashes will be included in the metadata (#1097) * Re-raise chained exceptions explicitly per PEP 3134 (#1116) * Remove use of `securesystemslib.settings.HASH_ALGORITHMS`, instead pass desired algorithms explicitly to securesystemslib's `keys.format_metadata_to_key` (#1016) ### Fixed * Better adhere to the detailed client workflow in the specification by ensuring that a newly downloaded root metadata file is verified with a threshold of its own signatures (#1101) * Update a delegating role's metadata when adding a new verification key to a delegated role (#1037) ## v0.13.0 ### Added * Add support for BLAKE hash functions (#993) * Don't list root metadata in snapshot metadata, per latest spec (#988) * Enable targets metadata to be generated without access to the target files (#1007, #1020) * Implement support for abstract files and directories (#1024, #1034) * Make lengths and hashes optional for timestamp and snapshot roles (#1031) ### Changed * Revise requirements files to have layered requirements (#978, #982) * Update tutorial instructions (#981, #992) and documentation (#1054, #1001) * Replace hard-coded logger names (#989) * Fix target file path hashing to ensure paths are hashed as they appear in targets metadata (#1007) * Refactor code handling hashed bins (#1007, #1013, #1040, #1058) * Improve performance when delegating to a large number of hashed bins (#1012) * Improve path handling consistency when adding targets and paths (#1008) * Clarify error message and docstring for custom parameter of add_target() (#1027) * Ensure each key applies to signature threshold only once (#1091) ### Fixed * Fix broken CI (#985) * Fix tests (#1029, #1064, #1067) * Fix loading of delegated targets during repository load (#1049, #1052, #1071) * Fix key loading in repo.py (#1066) * Remove redundant code in downloader (#1073) * Fix alarming logging in updater (#1092) ## v0.12.2 * Fix incorrect threshold signature computation (#974) * Drop support for python 3.4 (#966) * Improve documentation (#970, #960, #962, #961, 972) * Improve test suite and tutorial scripts (#775) ## v0.12.1 * Relax spec version format check for backwards compatibility (#950) * Update project metadata (#937, #939, #944, #947, #948, #953, #954) * Update misc dependencies (#936, #941, #942, #945, #956) ## v0.12.0 * Add backwards incompatible TUF spec version checks (#842, #844, #854, #914) * Adopt securesystemslib v0.12.0 update (#909, #910, #855, #912, #934) * Fix multi-root rotation (#885, #930) * Fix duplicate schema definitions (#929) * Refactor metadata generation (#836) * Refactor securesystemslib interface (#919) * Update implementation roadmap (#833) * Improve tests and testing infrastructure (#825, #839, #890, #915, #892, #923) * Improve documentation (#824, #849, #852, #853, #893, #924, #928, et al.) 
* Update misc dependencies (#850, #851, #916, #922, #926, #931) ## v0.11.1 * Prevent persistent freeze attack (pr [#737](https://github.com/theupdateframework/python-tuf/pull/737)). * Add --no-release option to CLI. * Issue deprecation warning for all_targets() and targets_of_role(). * Disable file logging, by default. * Tweak network settings (in settings.py) for production environments. * Add tuf.log.enable_file_logging() and tuf.log.disable_file_logging(). * Replace %xx escapes in URLs. * Support Appveyor (for Windows) with Continuous Integration. * Run unit tests in Python 3.4 & 3.5 under Appveyor. * Edit contact text to encourage users to report issues with specification. * Generate (w/ CLI) Ed25519 keys, by default. * Upgrade dependencies to latest versions. * Add requirements.in, which is used to generate the other requirement files. * Update list of adopters. * Convert README to Markdown. * Update installation instructions to note SSLib's optional dependencies that should be installed to support RSA, ECDSA, etc. keys. * Add unit test for persistent freeze attack. * Update list of tasks in ROADMAP.md. ## v0.11.0 Note: This is a backwards-incompatible pre-release. * Make significant improvements to execution speed of updater. * Resolve all of the unit test failures in Windows. * Add or revise many CLI options. - Add --revoke - Support ECDSA, RSA, and Ed25519 keys - Fully support delegated roles - Revise help descriptions - Allow 2+ roles to delegate to the same role - Add --remove - Add --trust - Remove obsolete code - Add --distrust - Allow any top-level role to be signed - Allow multiple signing keys with --sign - Rename default directories - etc. * Revise CLI documentation, such as QUICKSTART.md. * Ensure consistent behavior between add_targets and add_target(). * Add a CLI doc that demonstrates more complex examples. * Move LICENSE files to the root directory. * Update dependencies. * Update TUTORIAL.md to fix links. * Fix bug where the latest consistent metadata is not loaded. * Modify the pyup update schedule from daily to weekly. * Add hashes to requirements.txt. * Update AUTHORS.txt and add organizations. * Replace deprecated 'cryptography' functions. * Remove dependency in dev-requirements.txt that causes error. * Ensure that the latest consistent metadata is added to Snapshot. * Tweak a few logger and exception messages. * Revise introductory text in README. * Update ADOPTERS.md and link to pages that cover each adoption. * Remove target paths in metadata that contain leading path separators. * Address Pylint/Bandit warnings for the CLI modules. * Replace calls to deprecated 'imp' module. * Fix bug where the hashing algorithms used to generate local KEYIDs does not match the ones chosen by the repo. * Fix bug in tuf.sig.get_signature_status() where a given threshold is not used. * Refactor code that stores the previous keyids of a role. ## v0.10.2 Note: This is a backwards-incompatible pre-release. * Support TAP 4 (multiple repository consensus on entrusted targets). https://github.com/theupdateframework/taps/blob/master/tap4.md * Add quick start guide. * Add CLI (repo.py) to create and modify repositories. * Refactor client CLI (client.py). * Add pyup.io to manage dependencies. * Update all dependencies to their latest versions. * Add Pylint and Bandit (security) linters to Travis CI. Fix issues reported by both linters. * Tidy up documentation and directory structure. 
* Add option to exclude custom field when returning valid targetinfo with MultiRepoUpdater.get_valid_targetinfo(). * Fix PGP key fingerprint provided for security vulnerability reports. * Modify API for creating delegations. * Add wrapper functions for securesystemslib functions. * Fix bug: non-default repository names raises an exception. * Refactor modules for inconsistent use of whitespace and indentation. * Add cryptographic functions to read and write keys from memory. * Add full support for ECDSA keys. List `ecdsa-sha2-nistp256` in specification. * Remove example metadata. Documentation now points to up-to-date metadata in the tests directory. * Remove all references to PyCrypto. * Add copyright and license to all modules. * Add README for the unit tests. * Remove remnants of the compressed metadata feature (now discontinued). * Fix minor issues such as broken links, typos, etc. * Update configuration files to fix issues, such as duplicate upgrade commands, badges, etc. * Revise policy on static code analysis, CI, etc. * Earn CII Best Practices Badge. * Reach 98% score for CII Silver Badge. * Remove obsolete code, such as tufcli.py, interposition, check_crypto_libraries(), etc. ## v0.10.1 Note: This is a backwards-incompatible pre-release. * Add CHANGELOG.md, MAINTAINERS.txt, CODE-OF-CONDUCT.md, GOVERNANCE.md, ADOPTERS.md, DCO requirements, and instructions for submitting a vulnerability report. * Move specification to github.com/theupdateframework/specification. * Dual license the project: MIT license and Apache license, version 2. * Update to latest version of securesystemslib v0.10.8, which dropped PyCrypto and multi-lib support. * Add ecdsa-sha2-nistp256 to specification. * Remove directory of example metadata. Documentation now references unit test metadata. * Implement TAP 9 (mandatory metadata signing schemes). https://github.com/theupdateframework/taps/blob/master/tap9.md * Drop support for Python 2.6 and 3.3. * Support Python 3.6. * Improve code coverage to 99%. * Convert specification from text to Markdown format. * Add MERCURY paper, which covers protection against roleback attacks. * Implement TAP 6 (include specification version in metadata). * Implement TAP 10 (remove native support for compressed metadata). * Support ability to append an externally-generated signature to metadata. * Remove capitalization from rolenames listed in metadata. * Add a more detailed client workflow to specification. * Modify client workflow: A client must now fetch root first. Intermediate versions of Root must also be downloaded and verified by the client. See specification for modified workflow. * Fix bug with key IDs, where incorrect number of key IDs are detected. * Minor bug fixes, such as catching correct type and number of exceptions, detection of slow retrieval attack, etc. * Do not list Root's hash and length in Snapshot (only its version number). * Allow user to configure hashing algorithm used to generate hashed bin delegations. * Fix Markdown errors in SECURITY.md. * Add fast-forward attack to specification * Remove simple-settings dependency * Move crypto-related code to external library (securesystemslib). * Allow replacement of already listed targets in metadata. Fix issue #319. * Add instructions for contributors in README. * Copy (rather than link) target file to consistent target. Fix issue #390. * Rename target() -> get_one_valid_targetinfo(). * Ensure consistent Root is written if consistent snapshot = False. Fix issue #391. 
* repository_tool.status(): Print status of only the top-level roles. * Document and demonstrate protection against repository attacks. * Add installation instructions for Fedora-based environments. * Exclude "private" dict key from metadata. * "backtrack" attribute renamed to "terminating". * Fix data loss that might occur during sudden power failure. Pull requests #365, 367. * Add repository tool function that can mark roles as dirty. * Store all delegated roles in one flat directory. * Support Unix shell-style wildcards for paths listed in metadata. * Add draft of specification (version 1.0). * Sleep a short while during download.py while loop to release CPU. * Support multiple key ID hashing algorithms. * Prepend version number to filename of consistent metadata. * Remove updater method: refresh_targets_metadata_chain(). * Add Diplomat paper. It covers integrating TUF with community repositories. * Add project logo. * Delegations now resemble a graph, rather than a tree. ## v0.10.0 @vladimir-v-diaz vladimir-v-diaz released this on Jan 22, 2016 · 879 commits to develop since this release * Fix Python 3 str<->bytes issues * Drop support for Python 3.2 * Support Python 3.5 * Fix for Issue #244 (hash, rather than hash algorithm, should be prepended to consistent targets) ## TUF v0.9.9 @vladimir-v-diaz vladimir-v-diaz released this on Jul 23, 2014 · 1058 commits to develop since this release * Support externally created PEM files. Previous release generated an unexpected keyid for the external public key because of trailing whitespace, which did not match the format of internally generated keys saved to metadata. * Fix installation instructions. Non-wheel installation instruction listed an invalid command-line option to pip (-no-use-wheel, which is missing a leading hyphen.) * Add paragraph to Using TUF section of the README. ## TUF v0.9.8 @vladimir-v-diaz vladimir-v-diaz released this on Jul 16, 2014 · 1069 commits to develop since this release * TUF 0.9.8 (pre-release) ## TUF v0.7.5 @trishankkkarthik trishankkarthik released this on Sep 21, 2013 · 1877 commits to develop since this release * TUF 0.7.5 (pre-release) tuf-6.0.0/docs/CODE-OF-CONDUCT.md0000644000000000000000000000025613615410400012621 0ustar00## The Update Framework Community Code of Conduct The Update Framework follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) tuf-6.0.0/docs/CODEOWNERS0000644000000000000000000000005413615410400011555 0ustar00* @theupdateframework/python-tuf-maintainerstuf-6.0.0/docs/CONTRIBUTING.rst0000644000000000000000000000412713615410400012630 0ustar00Instructions for contributors ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Contribute to python-tuf by submitting pull requests against the "develop" branch of this repository. Detailed instructions are available in our `development guidelines `_. All submitted code should follow our `style guidelines `_ and must be `unit tested <#unit-tests>`_. .. note:: Also see `development installation instructions `_. DCO === Contributors must indicate acceptance of the `Developer Certificate of Origin `_ by appending a ``Signed-off-by: Your Name `` to each git commit message (see `git commit --signoff `_). Testing ======= With `tox `_ the whole test suite can be executed in a separate *virtual environment* for each supported Python version available on the system. ``tuf`` and its dependencies are installed automatically for each tox run. 
:: tox Below, you will see more details about each step managed by ``tox``, in case you need debug/run outside ``tox``. Unit tests ---------- test suite can be executed directly as well (in this case the environment managed by tox is not used): :: python3 -m unittest Individual tests can also be executed. Optional ``-v`` flags can be added to increase log level up to DEBUG (``-vvvv``). :: python3 tests/test_updater_ng.py -v Coverage -------- To run the tests and measure their code coverage, the aggregation script can be invoked with the ``coverage`` tool (requires installation of ``coverage``, e.g. via PyPI). :: coverage run -m unittest Auto-formatting --------------- The linter in CI/CD will check that new TUF code is formatted with `ruff `_. Auto-formatting can be done on the command line: :: tox -e fix tuf-6.0.0/docs/GOVERNANCE.md0000644000000000000000000000733513615410400012144 0ustar00# TUF governance This document covers the project's governance and committer process. The project consists of the TUF [specification](https://github.com/theupdateframework/specification) and [reference implementation](https://github.com/theupdateframework/python-tuf). ## Maintainership and Consensus Builder The project is maintained by the people indicated in [MAINTAINERS](MAINTAINERS.txt). A maintainer is expected to (1) submit and review GitHub pull requests and (2) open issues or [submit vulnerability reports](https://github.com/theupdateframework/python-tuf#security-issues-and-bugs). A maintainer has the authority to approve or reject pull requests submitted by contributors. More significant changes in the project, such as those that require a TAP or changes in governance, are guided by a maintainer called the Consensus Builder (CB). The project's Consensus Builder (CB) is Justin Cappos , who has a lifetime appointment. ## Contributions [A contributor can submit GitHub pull requests](CONTRIBUTING.rst) to the project's repositories. They must follow the project's [code of conduct](CODE-OF-CONDUCT.md), the [developer certificate of origin](https://developercertificate.org/), the [code style guidelines](https://github.com/secure-systems-lab/code-style-guidelines), and must unit test any new software feature or change. Submitted pull requests undergo review and automated testing, including, but not limited to: * Unit and build testing via [GitHub Actions](https://github.com/theupdateframework/python-tuf/actions) and [Tox](https://tox.readthedocs.io/en/latest/). * Static code analysis via [Pylint](https://www.pylint.org/) and [Bandit](https://wiki.openstack.org/wiki/Security/Projects/Bandit). - Auto-formatting with [black](https://black.readthedocs.io/) and [isort](https://pycqa.github.io/isort/). * Checks for Signed-off-by commits via [Probot: DCO](https://github.com/probot/dco). * Review by one or more [maintainers](MAINTAINERS.txt). A contributor can propose changes to the specification with a [TUF Augmentation Proposal](https://github.com/theupdateframework/taps) (TAP). It is a design document providing information to the TUF community, or describing a new feature for TUF or its processes or environment. A [TAP](TAP.rst) can be approved or rejected by the CB after it has been reviewed and discussed. Discussions take place on the project's [mailing list](https://groups.google.com/forum/?fromgroups#!forum/theupdateframework) or the TAPs GitHub issue tracker. ## Changes in maintainership A contributor to the project must express interest in becoming a maintainer. 
The CB has the authority to add or remove maintainers.

## Changes in governance

The CB supervises changes in governance, but a majority of maintainers must vote +1 on
the PR.

## Changes in the consensus builder

The consensus builder may be appointed for a fixed term or it may be a lifetime
appointment. To initiate a change of consensus builder, or a change in the length of
the appointment, a GitHub PR must be opened. If a fixed term is specified, the PR
should be opened no earlier than 6 weeks before the end of the CB's term. If there is
not a fixed term appointment, the PR may be opened at any time. In either case, the PR
must be kept open for no less than 4 weeks. Additionally, the PR can only be merged
with more +1 than -1 in the binding votes. Anyone from the community can vote on the PR
with either +1 or -1. Only votes from maintainers that have been listed in the
top-level [MAINTAINERS](MAINTAINERS.txt) file before the PR is opened are binding. When
there are conflicting PRs about changes in the consensus builder, the PR with the most
binding +1 votes is merged.

The consensus builder can volunteer to step down.

tuf-6.0.0/docs/INSTALLATION.rst

Installation
============

All versions of ``python-tuf`` can be installed from
`PyPI <https://pypi.org/project/tuf/>`_ with `pip <https://pypi.org/project/pip/>`_.

::

    python3 -m pip install tuf

By default tuf is installed as a pure Python package with limited cryptographic
abilities. See `Install with full cryptographic abilities`_ for more options.

Install with full cryptographic abilities
-----------------------------------------

Default installation supports signature verification only, using a pure Python
*ed25519* implementation. While this allows operating a *basic client* on almost any
computing device, you will need additional cryptographic abilities for *repository*
code, i.e. key and signature generation, additional algorithms, and more performant
backends. Opt-in is available via ``securesystemslib``.

.. note::

   Please consult the underlying crypto backend's installation docs, e.g.
   `cryptography <https://cryptography.io/en/latest/installation>`_, for possible
   system dependencies.

::

    python3 -m pip install securesystemslib[crypto] tuf

Install for development
-----------------------

To install tuf in editable mode together with development dependencies,
`clone <https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/cloning-a-repository-from-github/cloning-a-repository>`_
the `python-tuf repository <https://github.com/theupdateframework/python-tuf>`_ from
GitHub, change into the project root directory, and install with pip (using
`venv <https://docs.python.org/3/library/venv.html>`_ is recommended).

.. note::

   Development installation will `Install with full cryptographic abilities`_.
   Please check above for possible system dependencies.

::

    python3 -m pip install -r requirements/dev.txt

Verify release signatures
-------------------------

Releases on PyPI are signed with a maintainer key using
`gpg <https://gnupg.org/>`_ (see
`MAINTAINERS.txt <https://github.com/theupdateframework/python-tuf/blob/develop/docs/MAINTAINERS.txt>`_
for key fingerprints). Signatures can be downloaded from the
`GitHub release <https://github.com/theupdateframework/python-tuf/releases>`_ page
(look for *\*.asc* files in the *Assets* section).

The code below shows how to verify the signature of a
`built <https://packaging.python.org/glossary/#term-Built-Distribution>`_ distribution,
signed by the maintainer *Lukas Pühringer*. It works the same way for
`source <https://packaging.python.org/glossary/#term-Source-Distribution-or-sdist>`_
distributions.
:: # Get wheel from PyPI and signature from GitHub python3 -m pip download --no-deps tuf==0.20.0 wget https://github.com/theupdateframework/python-tuf/releases/download/v0.20.0/tuf-0.20.0-py3-none-any.whl.asc # Get public key, compare fingerprint in MAINTAINERS.txt, and verify with gpg gpg --recv-keys 89A2AD3C07D962E8 gpg --verify tuf-0.20.0-py3-none-any.whl.asc # Output: # gpg: assuming signed data in 'tuf-0.20.0-py3-none-any.whl' # gpg: Signature made Thu Dec 16 09:21:38 2021 CET # gpg: using RSA key 8BA69B87D43BE294F23E812089A2AD3C07D962E8 # gpg: Good signature from "Lukas Pühringer " [ultimate] tuf-6.0.0/docs/MAINTAINERS.txt0000644000000000000000000000177713615410400012512 0ustar00The project is currently managed by Justin Cappos at New York University. Please see GOVERNANCE.md for the project's governance and maintainership. Consensus Builder: Justin Cappos Email: jcappos@nyu.edu GitHub username: @JustinCappos PGP fingerprint: E9C0 59EC 0D32 64FA B35F 94AD 465B F9F6 F8EB 475A Maintainers: Marina Moore Email: mm9693@nyu.edu GitHub username: @mnm678 Lukas Puehringer Email: lukas.puehringer@nyu.edu GitHub username: @lukpueh PGP fingerprint: 8BA6 9B87 D43B E294 F23E 8120 89A2 AD3C 07D9 62E8 Jussi Kukkonen Email: jkukkonen@google.com GitHub username: @jku PGP fingerprint: 1343 C98F AB84 859F E5EC 9E37 0527 D8A3 7F52 1A2F Kairo de Araujo Email: kairo@dearaujo.nl GitHub username: @kairoaraujo PGP fingerprint: FFD5 219E 49E0 06C2 1D9C 7C89 F26E 23EE 723E C8CA Emeritus Maintainers: Joshua Lock Santiago Torres-Arias Sebastien Awwad Teodora Sechkova Trishank Karthik Kuppusamy (NYU, Datadog) Vladimir Diaz tuf-6.0.0/docs/RELEASE.md0000644000000000000000000000502413615410400011566 0ustar00# Release process **Prerequisites (one-time setup)** 1. Enable "Trusted Publishing" in PyPI project settings * Publisher: GitHub * Owner: theupdateframework * Project: python-tuf * Workflow: cd.yml * Environment: release 1. Go to [GitHub settings](https://github.com/theupdateframework/python-tuf/settings/environments), create an [environment](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#creating-an-environment) called `release` and configure [review protection](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#required-reviewers). ## Release 1. Ensure `docs/CHANGELOG.md` contains a one-line summary of each [notable change](https://keepachangelog.com/) since the prior release 2. Update `tuf/__init__.py` to the new version number `A.B.C` 3. Create a PR with updated `CHANGELOG.md` and version bumps ➔ Review PR on GitHub 4. Once the PR is merged, pull the updated `develop` branch locally 5. Create a signed tag for the version number on the merge commit `git tag --sign vA.B.C -m "vA.B.C"` 6. Push the tag to GitHub `git push origin vA.B.C` *A tag push triggers the [CD workflow](https://github.com/theupdateframework/python-tuf/blob/develop/.github/workflows/cd.yml), which runs the tests, builds source dist and wheel, creates a preliminary GitHub release under `vA.B.C-rc`, and pauses for review.* 7. Run `verify_release --skip-pypi` locally to make sure a build on your machine matches the preliminary release artifacts published on GitHub. 
➔ [Review *deployment*](https://docs.github.com/en/actions/managing-workflow-runs/reviewing-deployments) on GitHub *An approval resumes the CD workflow to publish the release on PyPI, and to finalize the GitHub release (removes `-rc` suffix and updates release notes).* 8. Run `verify_release` to make sure the PyPI release artifacts match the local build as well. When called as `verify_release --sign []` the script additionally creates gpg release signatures. When signed by maintainers with a corresponding GPG fingerprint in the MAINTAINERS.md file, these signature files should be made available on the GitHub release page under Assets. 9. Announce the release on [#tuf on CNCF Slack](https://cloud-native.slack.com/archives/C8NMD3QJ3) 10. Ensure [POUF 1](https://github.com/theupdateframework/taps/blob/master/POUFs/reference-POUF/pouf1.md), for the reference implementation, is up-to-date tuf-6.0.0/docs/SECURITY.md0000644000000000000000000000133013615410400011751 0ustar00# Security Issues and Bugs Security issues can be reported to maintainers [privately via GitHub](https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability): - [**Report new vulnerability**](https://github.com/theupdateframework/python-tuf/security/advisories/new) Please do not use the GitHub issue tracker to submit vulnerability reports. The issue tracker is intended for bug reports and to make feature requests. Major feature requests, such as design changes to the specification, should be proposed via a [TUF Augmentation Proposal](https://theupdateframework.github.io/specification/latest/#tuf-augmentation-proposal-tap-support) (TAP). tuf-6.0.0/docs/_config.yml0000644000000000000000000000070413615410400012313 0ustar00title: Python-TUF author: Python-TUF community #email: your-email@domain.com description: > # for footer and for search engines Development blog for Python-TUF, a supply chain security framework for secure content delivery and updates. github_username: theupdateframework show_excerpts: true # set to false to remove excerpts on the homepage header_pages: # make sure ordinary docs are not linked from blog header - index.md theme: minima tuf-6.0.0/docs/conf.py0000644000000000000000000000475613615410400011476 0ustar00# Configuration file for the Sphinx documentation builder. # # This file only contains a selection of the most common options. For a full # list see the documentation: # https://www.sphinx-doc.org/en/master/usage/configuration.html # -- Path setup -------------------------------------------------------------- # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # import os import sys sys.path.insert(0, os.path.abspath(os.path.join(".."))) import tuf # -- Project information ----------------------------------------------------- project = "TUF" copyright = "2021, New York University and the TUF contributors" author = "New York University and the TUF contributors" # -- General configuration --------------------------------------------------- master_doc = "index" # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. 
extensions = [
    "sphinx.ext.napoleon",
    "sphinx.ext.autosummary",
    "sphinx.ext.autosectionlabel",
]

autosectionlabel_prefix_document = True

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"
html_theme_options = {"logo_only": True}
html_logo = "tuf-horizontal-white.png"
html_favicon = "tuf-icon-32.png"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

# -- Autodoc configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html

# Tone down the "tuf.api.metadata." repetition
add_module_names = False
python_use_unqualified_type_names = True

# Show typehints in argument doc lines, but not in signatures
autodoc_typehints = "description"

autodoc_default_options = {
    "members": True,
    "inherited-members": "Exception",  # excl. members inherited from 'Exception'
    "exclude-members": "to_dict, from_dict",
}

# Version
version = tuf.__version__

tuf-6.0.0/docs/index.md

---
title: "Python-TUF development blog"
---

This is the development blog for the
[Python-TUF](https://github.com/theupdateframework/python-tuf) project, welcome!

If you want to learn how to use Python-TUF, check out our
[developer documentation](https://theupdateframework.readthedocs.io/en/latest/).

tuf-6.0.0/docs/index.rst

TUF Developer Documentation
===========================

This documentation provides essential information for those developing software with
the `Python reference implementation of The Update Framework (TUF) `_.

The reference implementation provides easy-to-use components for Python developers but
also aims to be a readable guide and demonstration for those working on implementing
TUF in their own languages, environments, or update systems.

.. toctree::
   :maxdepth: 1
   :caption: Contents:

   api/api-reference
   INSTALLATION
   Usage examples
   Contribute

tuf-6.0.0/docs/repository-library-design-ownership.jpg

(binary JPEG image; data omitted)
,W9'-zP\G:!I&HoZ;infvE;OLphG NJ.4(,%nTC eܪImđQ^gc`"]_j񨶖Sbr"YXA }柢xjǗq+d,KRnn4TWE-ޏ=nDL>mt2͎3|՘xLУ䯯ϣ\I&2"L9 q@ExYŅ \7ng?(27nnc){]/^ԍ8=>E(Mtd:xbm3j"I#%2?F?UxRP٦ծ%b~t1!Q,΀=*Eskw EmqPm@#r.[xDlEM%0 ^QK]:MKgK $,nlsMex+ݪ:jsjVIs6IZ3ޜP\h@MFI}Qv{JSc o`K}&HNb׭MKf"xocB07{Q^/aia4 )I*˘0pS n_ 6M!ne39ID3> ?^^e-m -En,IMZi]Tk~R&KwtHppp 20 oiŶ Զ"֏2F#hW\Qx?^!DqZOor$`^p#Niu{Ke&-)m`򔤐շ%(!-dY>kW d.9sǥhWkkkG4%d%78SGijfHԥ TI4RS* D X{±%{ )nՌcyo4Ca1#(Ѭ|a;jIv98 J3} +dE{UYK*X1$^5].EqIw(`!pIk6zOG[cmg+"E<{# w+J4PGlT~7o^O&i0)W叛 PxR{n-%%"#_8* B=kV_‘Yx-4:&+x/^آKú\^۬q, ȹоfAP{EM", Z\k[5A`'8G|kFŷͰ_':,82hU~\0w OZۿ[j7 iysuemob#2C!(;_T`P5iv/={Slr0}k%7v=f0sTŸNDzefҼ=VQ$I]I$(J(((((((/8Ϯ5R,"w'tǥ[%hOT/ >{k;Ƀ^û(Fze*Jλ!,)20C@3 EL3688Ь*K򢵍$t+R ( ( ( (.RTp_ԔQEQ\Gğ4InݖD Ķ1۽hx v;klFhYlp 20AZ袊i6uܖNw#a{O';{y|{䁓/kgº&BZ5c ~5|7mj j\.A8p=P"3Hf PHTn H$g qUFmkw)M޻qFjk}Dkǵkɦf`'I;= &X6uvng/ H$1] ^xm-o8?w>_N8↳?}-M+$wHFP9?.;amB1icm0O*]qp3llk[+x%W%RԐ9knZw]<+4oxb>5j;.4Cg\ e w^7(8Lnb1\E I0$+p(̡@qQ~)_ A&6ԭH̠$q5dJ𶏫i-ƠoV! ɖ:/'[W\D}WҜ,V[hDQ0hF6b+/>.Cnq ;ȏT:@3[S> M+NѝFp9%*Sr9u4Oo 1 4r(eaZ->go% mTו{[#vVr~3)tBFKͺ3Jddq sG'ІQ1#;ȴ> #g vCcɋ6sg3ڸG]H&ʷM"FUR+x2ig9Ze66WRHQaªbyym+N[9-eSVAR1V"8"Xc`(&ֱp,,,1ͨϦG,$Q %J᱌z/ot;Vpkif3* @y0I#xck19$i:eOucmg- |La7S\sx[STIO%f+mrd yxti66ae DpFi-1_# L Y$Q*H5<0o IJ0'(^iw6/s%ޭpcP9PvbsF[V/5kM%4K B}>FW2Dp2Tq@٠+1)2 PAivu(Ulp67 gI?.%%Uf((((((XV [cY0h7m821@oFxay~EƏ3ɘlnj7QY;êpyنvIcA^ ]2oH/̟ضRcnoNzEÍFö>aY]ۊ 7p3@Ũx]֥av1\mVGHScs=>TOBdl~U<$ҽOWO&cO'f~Rsc/ÝE<3sxH#$TxKmi aO?($q@I=#Jb mVf:uI KM^}f8.5iVk1ƗcoO]?Z$?Ũ/lo 7a^@˾FPZ$YAkD{SQEQEQEQEQEQEQEQEQEG BJcf\Iyqާ}kB" ``>81T'm 8 [cQ]@P=ۡvh((((((((((9إqT'o)eP:TGj&.Hc)*=+ `g'gBUcHH$G7xY6@[f\Fh((((Wu%GI@Q@|NYi'׌8>?* ~ѢҴ&2[31ORkZ(tץ󕵲[fDsQjt}JM2 |JgbhZkJH7m&`N2~fm. 2@O:>m+R~ l "Cmeʜg-&A:jE$7Wp7.1;v5Fm/;hbiF3Qr;S|A^캍I"ᄌ G\* ޗvoWAf#vP=/|ڕjCwsũ[̱ycc #޵/b"iR+KB  H[o,SX7V$GcTb[s@Mcun˅D$$I G3Tjؓ nIǦ MC,o,[ImA !coO^ jJiIl. V6HH#SSW:vcgr[ ^@.b7.c{A֮E\ϰFm8동ymJВ&/I9vҀ04zZXsbTFa(TV^t4l@R>.4r mGpv0aַdgӭ51gu"o4mx:ҏ'J2y^yW'4+.uk{qq+YhpY-6-o#JY!) hX3G<֞m>/ j:΍qm5Ex̣ :Eh.oo.|O;0Q[S $^^s@t+:xz<@0ϓ~oj+.4J=^۶٥2`NFY ڞgy/ I#+xi!I'.cHe2*F9c5[~Z:֫-zl %[9L[3+g.zݧnnYol?ⳎJ%!XY zSQ+_ZK j)ϢA/ Jɴ!xp* 23~]?R:ǥ铤ky捎9$ֵthڥZ"͟<4&Ϫ 2g_sV'?OzǴEŵ՝GIf0)R?( Ү=&Fѵ]6[TVo:bB yOƛwCxJTОfն_J(V6|y99o.[Y.qF ˻<;q[^Y}B M>DS'hտ{9n' *o4BU{[bǔOMè{ǀI\Md-mFF$񴑓Z±/$nJ ' Xx=֛X-;VwʱH U*gwoqo)ʔsLPivCLl r1j~no:mݦ| ;ۋk_4灖ؼk}QcyZiqZ)` $PiT(ԳUP:OAQ$HIoUvG^\~%=vĘ Q3UH\sj_Zԯ}cK=řme*fpr#5P+wa,k#äܨ%C,X+s%<^%F `ȟl# ێq:lEw<$pKIw]# Og{mYwgrpkLP0%JӾеk_k^*RSN0n"6ѣ*#&fGI?nul4K [}A Ծ.5 g[X$Rim+.ݞ ǂ@p;קUMOLtDֳ3F@ AXk,gK٢\)xsקqX4Iughl_+,Oa_bd յiF-c"%v2sw^pcRM:۴XR@ \+ WhY"YQ0%s#h} 1Hby!m`=OxBJFֹ\"egdp`Y0$w7uwO0m*!m8A'z}e}*Uǟ&yx3nk6 +ݖc֬eqڋ2cI]=e}*Uǟ&yx3nk6 +֨ ( ( ( ( (!V>RZ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@oc}#*86yg7sI@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@* 1֤6}_3>^û椠((((( _ԕWu%QEQEQEs+nBY7@K$r= q_5@ɧ[QLq; YcpIڽ7U.M!.P*d9QT1*id}qcq;y>Ϭh,_)bFa W,I0ߩ#\i?nI4y*A'W 1sO ϧLRp+muosYm8mFwxo<=h+6-Hbcf+:z@վ Ě䶎#hG[vw3sV?vh.#7 "Pvuwo TN3'Y;#^[kY,帹Y%M`Zk-G]P8!3Td9's] gIWRId@Aߪ.Ю/ 8I1Q]r9Iik #(tUQ?!@M 5|<5Z%Dq+w^XRvts^LM"m&W/.% {[qq%ŽǗq/-]ʐ<Ϙc wqמڟtMZ+k v]w "qn:P_D.i z]&Yо\͟c95<ߋ.=NCz*A%șdڮQf28LpkԼjw\8EKv1eG`zvG?49"Kxn,M& ,nd"rPX3yZ5o X^_#\ >c)*CH =A GռF<;oxMF%#†89׶+Ҵ2Gӡ 6N9$I$ITlE2Olc|Ns@Dο7oŮKYpT~G [w` 0X`V tv?gGnXtPrk][-4kQs \ OчAƷ ʗ1,Sf v`g]lj?t5:NR8˫|Qpo^)O}Kma^ H]}0H#R8+i, .L]N22^ջi*a{+VHT(xmc_#%ht*?#ʆFO-0Bw,08Zu}=%g'SݴVt\|0|Y 2ϲy֫s \ NG$d洮2Z.mVK<86on:=zd8[wWXt{ϭ(^{UqĂpXvV~};Z}wWqz[qO2y#V*X#סSFMEn- hK󰑕UTu +Z jmF$U7'Ǡ _j>(ٸ-i$բJ৕jߋ9,4mKRIIYnd YT@IG=kWI𞗣_>=וחRNǜR\sOռ1k_Nn6g1\}LPwZWϩ<\_\ZQd+F.`dW)s@?ί4ROG墛;pwxs^:wK-VoI4fi,\!,$i\x9jᢺ \\ikObXe@1`H  Ѯ,4E2! 
2E,).rAdsSYUz-Ɲ=έsg%, HTnU~18u\:յ޺Nin̒ȱ+4;t],'.- ̬溲'.3dzm^{K k![H7s8@|K5}^FXm&eF0 /r9m _V<=ɪI`ir\ƂI%FIc+t-A)ʹv)!DQ(z n=f,m"X[\9`:zP%kzƟᱪ^_4pO= yvJP5x*UHIIc*ΨJ3NXCȹHB.|wǓry5h:1ù9vwcff$ԓ@袊((((XJu}jQEQEQEQEQEQEQEQE5cRTu,p)h|vΜ̊c.:kQh7bCeCʊ[gZPY%DdA!襆*4!"U 3V!`08з3є_j@晈-k^kQh7bZPSeCʊ[gJfeHz)aʵ( 4M)HAC TWG ~t5(4aaZ(4zxKNUk0oƠ`EP?ԯJ+: ( ( (1E.akF pa_yfOr9zeV5scmz-\`3NuǤrol|l{ —pm| }M]_:}qڕZE$ƆCvd n$]TXi-CpX)m-N/ ky?h~<<rvְ+yG=ܒǩ\JD!quVukHtHT}gx\YTRW o _}(:|)+>o xIO?SLQ k<~c+w\ qڀ5mfֱiọHY{uuENdaU שe烓KHn<=>2eԤrn Å zg'=9Hv~;}*|O0s mxKI(tل(g' 6 ("6X_SʆEW8A#8?tS/{1rq@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ZVՏT((((((((.Q:֕f"[z3fv,|-ȭneyb k>${E$s!I"NOF~1m W~mc)_f ]$-=U1/ǢKssSsqieH?vY/hhRtG i> MZ)A JPP08Tm>ڏo4;\i> pIt=2Qo|$}r'>ܳ16ٌcZڶh5XLgV%,Oqˎrsh/}4{6[12 G%\ .-<}y"֭|c'<[kT䉘1x;M"S^0wv~{Ws[h'ӨeEpxXC*6owf~w esm1|1%%̿-mF rQAbOίzu}.m:~s3s#ϕqyO?ZUK혃vs|5n_2e2aOY( ( ( ( }x w }`XckZ{]$-4$ 5CVnYXM$wU@H$ rI ZI7kѢnk$Yވ6ND JK"R8▼P>;-ΐZy+x՞[#DzbImj}{iV%ͻG.E172Hd9.h}B1mD9w~Cq yP.uaէC RwOݼ A(EE#Q$lwaasqo i[mD7(N}O~W:r?6pGX^wO,B3gbϧivnk\SP4=ܵ9<3Mm}m&j_]Fт?i^7[ 7[ԍ  NJ`p'>v_ھy6hnݫ#p3sFJR>} }J2~f*@.ܱ>4MD-`o4iϸ_yk+M욞np[,u^wRxBC}Vw(x^C1P2!!_|ṴM9{ckhX۠FIj/J6T5춖$v9@{}2m %0%ˮnfnU u>Aieimkh0'3sk']/A(] i |: P[9N15;=gMl'Yn|r/C== [(1Q `OO֭U[Bd0i(((()O0aLۡ7l6?@YSY“[L$U5 1ۻHgtHBǶU|Oy>=fO.y|6 yaM-կ6F]y ]O4y(I%}EQW2ih.ы4e`(yی1~"ƚ6iRV;ռp2xNl+-cY.j Pp1㝣'v]>{Kmi 5"T$U{J7 nw.|,7zdj^>t5$Uv3KG cwJĚyYhScs!gAEw:uw4pT>7c>yëM>mCOٮ㻸mnaP` aq]^GU̖QӮ-RZ@Q@Q@Q@Q@Q@Q@Q@Q@ eF1mCpVf71ˇ-eV:'ַ5Ps 6rK4d$/s]CšVz~yoe?sv΄zqVcڡZ7 f{'Pd$2NW9mCN.[xLVFxʃߝ<a!Q{j(DU斊((pIٍ@lJe:HgFL)QEv+:J?Τ((((((((((((((((((((((((((((nՏTw_֪PEPEPEPEPEPEPEPEP22fb:9QYL4d4$1FXHU?Z} FȱFyB֏"+F3(^5F6ET> R/1_YAҟE0J%1tr#Eep'@ HbHK}~)"} ?Zȇfь(h!x(P@TP*^$bt(̾iLO ( ( ( ( Mpl á)h$1FXHU?ZEbQIEGCy^T~^sh~4jlT }*J( R/1_YA҃m(ƆAʌƟE0Eep'!2"E-R6E5F~y^W1IEFB4Q/Rxbx>((Q) (S(?>bEeq@>((Wu%GI@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ZVՏT(((((((((((((((((((((((((((((( _ԕWu%QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE!IQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@ QQ@`REQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEtuf-6.0.0/docs/repository-library-design-usage.jpg0000644000000000000000000012376513615410400017135 0ustar00JFIFC    $.' ",#(7),01444'9=82<.342C  2!!22222222222222222222222222222222222222222222222222" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ("89|Ou}jIQTtP||GEIQTtP||GEIQTtP||GEIQTtP||GEIQTtP||GEIQT%zc ? 
7gMD{%ήS&>\L%XlUmbr9sY:O àzMY#lԜW>-|Kj> {BF6qvuwz^jiQE-Ah#ldrAk[,kZ,Vv:ic$B Dv ~ =fIX{9-`_0r1 ˩ld-$HImIF:Psib$I'kdİ\ [Zu&;w[KkZ2##ܪwFI'ps4i∤-bKwTĄAfeAq ahr\sF q8it-SiVd37RpH\M_ŎYxGRsf`,#*$22++Qf{O-Ցgޫ,ۆ3\^&ti5}2K~ndWmmN/Zքۮ )]v:tmZA&7 #`  4QEQEQEsZk%ԫol~RX)bO*/q@Z0Ŵzs$ A?ʠ ?3M'T]] 4?C'T]] 4?C'T]] 4?C'T]] 4?C'T]] 4?C'T]] 4?C'T]] 4?C'T]] 4?C'T]] 4?C'T]]ϟxG/d?ގ4a +28οK.|֞sh$\qr$mab:nv]_b?PQM5S((((((((Wu%GI@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ZVՏTck=ݽܶŶ&V\{EX ,崼9]E"XzSB ( ( ( ( ( ( ( u_+_%t;xWKϯ ƿ5]D6fM*Hm|NkktwC}׉|;e qj{!vhHݾ\$qtk F}~M[8MGVbQڬIm]7s^6RHދI$$h#PA8ⳠеxGf𵟇4뇹X丹fB|I&"{>gyZk&iM%ҬE)Ksހ;kLu]'BџVMMJFXp:tÎHO.4u5]] WAwfnMͥK<=.vX.:g>sThIe^i@Kj 7z(e֓Ht_y ] &7CA:wZsxw:.)#uI%=h 7( I)fw̞$KZh6/<-ĒۄU@ID[uՎ(Wg=h{Pnu#@N>׎KnʊH;8 u1q2FsqU x\ӴG,vu*B9m*=IYg_7r` @8ֹxR;+AM MIn\C"G p8Һ] 5 {=2/$9L"T!?.[$8+!oi>ռa-rGu37q2h~]ŤhNڮlWS d܆9$ +o~!OZﵒvpA5oL֟j=zCgsh'D cY F{fshuϣX updKKV./!uu>|/asW#\P_OE~((((((((IQRPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP7_֪UcU((((((((((gU¿/\WM\"xwrŨad2Ŀ?((((*m~οn0}>^~@/=R(Y)xgUV袀 ( ( ( |s>K뫏&E9_ItW!?Ԩ?܋Z)կҟ@Q@Q@Q@Q@Q@Q@Q@Q@`R*8?ԯJ((((((((((((((((((((((((((((XJu}jQEQEQEQEQEQEQEQEQEi1jʁԂ k@Mrx[# ?U=u?cyʏ<@'$VyOlj]Ocp1P#/Ld ;Aڤ'K ?U=tWn,6QBnv(KgHO646zg=ğ*OR ( ( ( ( ( ( ( (1|Ii|Fš[/2FXe'GJ?`ѿO/$w*ho 7 ?[Cy&(o 7 ?[Cy&(o 7 ?[Cy&(o 7 ?[Cy&(o 7 ?[Cy&(o 7 ?[Cy&(o 7 䚥w/tMKH[KLkgX4 u̲ fq01'_\9JuأeH!rGҟEWP9FޥN>Uo3gi?onf ( ( ( ( ( ( ( ( ( ( (.RTp_ԔQEQM.ʬ3}O&@Q@BJ5-ZTIj#r@I<ME}uhϦ?hxe@@;\FAh]]Kko}m5?!UgO#񦦯It5ĀgRr@OCP*K}mԟyT;94˽Zњ&X! yEfǭZEjWil(T3(;C z֊e # ]ioZMswθPr(Vf[X GpYbY9*ESm[MK}Bn$Fe:3}&O0[lCɧk3,a9AtTUYu;/c;?w.rijmYϨZEu&6@*gEs>0KkNۤ$BBP$7 ׹,,oi4ʥϠ UKSO7O28܃UIcY#ut`YNA4*  5|zhò7dc-QX~%Me[[v~z*:иl,y{mlYT.}O4r+#R&kz^s*΢"˨ sNy<S@VYPHfi&DyDrmFq@9⊩ui-[[ T/拽SOklO2O[_Z}mΒ@OS_R_=ِ\4#Ǯ-QYb+ {nQUWk9=VѼWzuIokhGK`&S[U,>cO^ .1Ks_z9.dl'U30 pN#AmtEj2i਒+Ig`}kOHS^jWzn{kkG, 1`He Te[}oyre BфR@@ۓMG7OkWCo-Nҧ[kIWt9Ͻv4=Y-S(RMkjn\m8RzՀr=j5HD;v,EQEQEQEQEQEQEQEQEQEQEv+:J?Τ(>* :èiVyBh P'?&Z9ⱷQ-D,p'5=QEw#_S&L2\H-IpZMWYu|3-GWO/$0NTy '$=kԼh]3cqO𵽼au)VC' yK6ѭ kGaJg7?n۝^Lyc^EVs84MCyLA?h$8]q‘uvz<Ɨq{<~^I=p}=+(!Ф4t+PKs xŸ9:HrAme{-vӤfmLH9c@!C6ym$u9aq31%$M.Mfl?(}_͋lW[L۱`_j 4{mJNiGFzݍy=m;Ǻt]zQKDi6?x?/$5kV[-gGB `n#zvhC۷`p8`|ȬJGj&i Ǽ֮ui%bo]F\ N0pd0 /4@Y]ca$1 ww<{b.|Wڰ@Iwls\$/[j5C35ϷG9 ק@U I|GG+(0?]OH>A(F6(+y1|лgLҀn5 {Gh?*:rU!SP֐|0Ou7w}*lŖZ_^_tv0I.H@*s5"iHBy`:wE>90aE#<ub3y-O$\FYt iķƱ[ sv.1 =֞X($Pm@\So%槩iWҚ\[C)\A :MEN1潚 6Y.1cfԋ^8.26v) GcmomݠӼKw&8EfP1S@i$\G"FNvCym5h|M>oJ1V)+ r6q?/GI- shMi^YԵF8ypK 6Š:T'rZ±=\rrOִh ('h(((((nՏTw_֪PEPEPEPEPEPEPEPEP/[OҸ?sKqWs@ZVF4Zܽ#.S^@3qWkcm4]Y4粸ڙUdTx={ͶO]SÇzӡLxf3q@qo ݼ042)WE PADx<o>sjuܷlUsF)$űfyzwy-i^) dhd#JFzQEQEU_KVo1F_7g]{U((O $?߅.6S:p})QEUio A_˷u뚳P-ZJ-A #Ҁ&`L҅FlrE> ( ( ( ( ( ( ( ( ( (.RTp_ԔQEQEQE/ևky~[xA [N3RM/b|UΞOk:ؒm:DR`K-}kּ!kjr}>_-  VCpgmI7!QhT\|͓Z j: EjS/Jq~^A1Y5M$&YΚ Ƥd'@?mjw Ț!~V=1e½]VXdx䌛Ww*!~kZCO|KiKth2D!9A+X֬mJK:+o4̒r`L{-bkJg՚e2yQ9 6 _0-^}iBʲ@$T >FI9s k6WWvţɩ (F e1V izv{iyi+jAxуv7q;{ڴt k w\#&9+5huIq΢)?. h[o|j]M{hgN{yˠfJx'oA3JPԵ]/Ğ9֭o[](~Yas+Լi\]Zco{wkk"P:2.WM{5-7pgEJ!Rw(c4j˨N{y E,ȻK0I˞sTkzºƵ.sټ[cH#BxHT݅:{cū&crue AF/T:|ֱ"I/ \m ]X9[x!ѭ5k6ۯ[,1advV tzXNOr{)QkR2#B>Mr{]Zbhh((((XJu}jQEQEQEQEQEQEQEQEr1_+]iwscܪJB$d&3֪x+cgztGm\^ qZ1h] o!-Bx:/ ?6GЫcgz/~{wZ d+Np]> ;ZY&-$[?:/ ?6GЫcgzm4جon[{WuRʄ. 
?tqwx[?# ׉U=z ;?ڍ_#9??:/ ?6GAЫcgz?:/ ?6GAЫcgz?:/ ?6GAO $NKB uwږ rc ģ=O>DE-QEQEV).M̏!hTFO֬i }B 쥺+ -$'bEPEPEPEPEPEPEPEPEPEP?ԯJ+: ( ( (8{X9C,sJYĻ6ミ“A]ZZyktڕyH۲  JThM"KM ­sH7H;s?¹Fmxm 1uϸ/&Z;8(񞑨I$jnXHL/W@n#덧e!0 \;|t+ ۋy)k ;Y w8^@6g3+5hfKYֱ1i:n;8>pq}kDQYdJX6oPgC6tf{+-42UT7!㰝yoz&X_6r9|VUŦ|.oOkMwóG 1K"7wbU*H!Tq@jwY iz0 r>!imiJڌb$6Lʒ*ɍ=pkގ,4 $zHyvXk:?#I4 fkBk4B'.VMݼʕ =:{i+j.#c $jFrx8k[񞠞*lNtI,\+ŲAc{m;iCbtcPo-&9+2y N+OY$0̺^h6Vx~DYPI]2{9{h[_-wompfEK~V6;\~jB9>k^(lwۗ"ea׌ֹ[o j_Ο*h6W , u;@2"/bu+XYen{ t85R ( ( ( ( ( ( ( (Ȯ05!>d $?} pQH*z(CUt!X{=hUk }JѭU& Tr9Jw!pSM+Gښ#(VhCG!>= U@(((( ژ)#ץYJH,^*;8(y'd\dhz( ( ( ( ( ( ( ( ( (.RTp_ԔQEQEQEX_u6,"r<۰Qc\q >p;xv$Db9ǵ"|D0-b{6.](z]Uy#e]JxZɂ{6Ian址Bp Gխ^m_Xwaz`Io}\-G[hf9PyLPc 8z xCu=R8Ǩ2\Cpɨy3̈C8#v2h*R1pSy1m[xˤVF$]{xaQ{yhEőB*{-ZvM;fHFwv~c yO*A$=.Q&dx'MX~U]&=BTỘ^Ib*2WyɫΡ>iƠ_9i%i $~c68bxlt4Wk|8Renb+'?l#P\ 2] -T}Į\Ol/^{>gfh_|l2@R^Ьs1'f rIں׆l/5mF(IU\hyQ (݌@\ UK38'^/k6utAk 8 o+ I#<h:M*MyLߴ:|X1CHMۆ 9'ۚ5WSMҝ e<#N0h"is"Tn@g^Ko/-%Pa]Q ۖv]bazIGS4 Sj7qXw:X٦kh_'w*Ez77iw}iwr97!.,@v' WM/]'W;hݼ8P$C!Iq((((76_YYA=͓D b'+x^ ( ( ( ( ( ( ( ( (Ȯ0"GԔP~DMDMIEGGGԔP~DMDMIEGGGԔP~DMDMIEGGGԔP~DMDMIEGGGԔP~DMDMIEGGGԔP~DMDMIEGGGԔP~DMDMIEEhs=3Ut%ձŭ$2m lp#sWꮡl.czGmq{DMDMIEGGGԔP~DMDMIEGGGԔP~DMDMIEGG[4Y*%iP¯U[[6NHX辂&#h#jJ(?"?"?##h#jJ(?"?"?##h#jJ(?"?"?##h#jJ(?"?"?##h#jJ(?"?"?T* Z((|g|OwGVN=}xCh x\)I}FI8K.dR2+cpv~oSC]KUe!OȬpM36^ëkj JI ȗ Yd=~pHPU MHte= ѴaZG !6]7ֹ+I=վ_jVEi{|B$ 7) ) ެx^#VELF?Ճwd@ Pƥwx 9_OҴ`d1 &^6L,<Ҥ'^URTy9lg=mZ[Km_MkI8ci<˶h7q'@Ɵgwqmo:zcxGګ=A3$@Usyêb[q@9uStԵ(Z8`Xd9 09@5ڿ[[5;wevmaZѵ-> 2[H1bbُW+?A?/l4K`72ߙ@;nlR/*Hu FJ^93)Rq)jƅ6&ft4v '&cmmokZxU#d\#=ynO%.3ꏩ*[/=G7(@S89F*կ略բ@jlDVU 0=hQ)"{&A5ʘ%sd=sR[xoEŅkg1 !V^+RK7QaS*pGLw$:O>mqG#u*T-Jk޻i^|G/#3I:npdC[Eum-ʅ$F `Esrx;(u+3m>\I˼LaX2tW_Eλufe$r\ݎ3oZNg)Z$(L_hGq͞#}:W:asb05˻ AFcMsZg[NM.wK!)+V x+{H֗X#XQ1l v o]KÚ.u ֣ZO G0ns2ޙ>jp1xaN2Ś֗ UܚTjJJĄFN22sZSҬV$I;z`y94]gC n4GǓŎ>\}¯YYZqY[EmmQ UQQ-uo .h5ŧk+#\+-m7G'.' C]߅!T~8䓜P{EqO OMKm+Yȷ{2,сn8|@NY@h4Cvmd7@ȇ/o)4^wiѣ{۹C=3\U]=7ĈCC-̮'pɀSqZk- ),[j+[8!gy*nq|DcEYA<|z յH s";j+OF=tQr'XPc$3 Õn:cz+~$Ƨ$|3\ |Y?yQ H8X|_yqoM {/ rW4X~ kxzY&]6F,5$y$@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ƦHp-UrsVa;RhdY"u !A46E Օ嶓yN3DB\ d (Q@Q@Q@Q@Uk{yDG#ebUlKkLelWP( ( ( ( ( ( ( ( ( ( ( (<<9n]ymDP:hA qz0mpB"B"(($ޓ wEnӵMWPReijp;Ǖ"`{+BQcY/-QiTMRz^!Q;(T]ܭHJ [,R< kZ{)s] p%eCA5Zj}Gg}ks$\ḪS∵]:{⿵h3F)hg# >*dbby%ø%G?\~5_[}c[I,֡H> 4MrЎ+bp#PU;UH >^UϘ;cހ)“ڝԗ1ȶUTZ2Iof'T`nCX-.'Fi|ͼRC]w-ǷwC(cHM/P vZP3LdYzXxÚfy<_E:-ʠ@rq3Zks"Y22y 暶j{_ޝ;b8WqN9Ch^`1-#~.jڝ[E O#ʪ$Tu/c&"UTj7}#@ Cj)kwR+4,f$\'8$Jebկn4Ic5e,tAhpT.T&HM>kײK 2_"X{i$`ԓҀ([i&i얾G"X-@pcxoSwmėQ-484 nhKc] {k02nc}ͨ[r2'dg$ {1c໋OxSMkKʜHJ\<=k/.bdY];yy0J1޽&QUcm`\)8</jlG~P9/ML!XbEm9$c$yD\ݝ{b9Bl+~}o8Qyg RȻ~$x!NĚ|^*í*-e o\9' rW_.T#B0W/rno45ū˄9+zҹg{6e ,s - b= _ OO inbY r:}ThRibh%.\(19K1_ ZT,eg:(IiU{{;O]5'Ύ@^GZ0vUpA 4QEQEQEQEQEQEQE6IkJz5_֪PQjE[Jz5iOFPQjE[Jz5iOFPQjE[Jz5iOFPQjE[Jz5iOFPQjE[Jz5iOFPQjE[Jz5iOFPQjE[Jz5iOFPS^R6aaVc}-y&)@ܞƬ}=[[Zؼg̭ϟlQjE[Jz5iOFPQjE[Jz5Tl-|;ټˆ ܜ>I7pˑ[C{X$@dl dSѨJz5T-=UJ(SѨJz5T-=UJ(SѨJz5T-=UJ(SѨJz5T-=UJ(SѨJz5T-=UJ(0u :uGI@Q@Q@Q@]{_{_X>ZOwoc"B9VS23+Wآțf;x=6_qj VOL5K˙i#tB$+r{`ײ@>uKv1`H$!@A(G#g^X v8c|SmiO v;TGP"+(K '&wmonywufnaKJ3es V'c?Iغ qi/hY},iCs,H»(v 5b?q6cpǘ|Ht> H.|ku}cA{)L5mfnrG(((((((X'%cf@&Z/tf~$𭖹1[y4 NuoxCƖ$4.6[jW'نHo-nT##8pk{K Wqy4-[&,q6տ 2Ú.;:V&QXvuD "D~a]g֫? /]$zqid/`wwQCWsPj|cF Q]5CVԭY&]"ѡG x>c@]tvٌ>[`sڛCա_싾7\g%\Wۯm;]7w]B%%$  =Zڝ F3@]/{}ogm\،F9xm$.pr u" V]- ы!'봇J k_o,>B|98 ]-/%Kin͢#|o.5IUzg=:ViRZ\-FA@ZkF/;/%SΛa u:vc|\:diA?"-όѰ=hj'dկ3V+C.#o=]r? 
_z -uQEQEQEQEQEQEQEQEQEǝi~c>_8XC8A9{fttDpy`#4f((( ׶K}hLIb}qVjXڒ_XAVh((((((((((Wu%GI@Q@Q@Q@g-n>YZ}u#n#Z *q׹K@umCTYjA3(#S2G' Gq^rj\} I*byR=jYfG/z~ƀ0n6^OicUi#Qg%ϑ± ;H,Ǧ@5V= [:x]<520#sd~.kh1IBqqaM/N?=5&0r4eq#$hΗ ZO[oKd}P9ǧsQK>$%=F%֭*K$G" 6jVI4q@ht`FGQ¸Җ-#h&qp BۊK`n(W<7S,wQEp 0(Î^YY\ir[ɢ.NUY`1]bxN6sw}usqwsHl (Pn04Y]\^=RS5却ʰʗ]k.hM@a#8#)U*F-QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQECu}j[V>B_K1/\ oOH!:s/mILyu i<'Z{7+Q+A0(€+n!ӥ:M63JyW ew#9Ep Y"oH.p+njƀ;+]U↥ks"LIɗn9'nv4=: róvv]pD #B$PqHJ_j2BҼ=%X<",H" <e>œ:oimĸcs|0sҀN-Շ+Lz*HF-QEQEQEQEQEQEQEQEQEQE]J?ΤR((qUL5[[pV)Bp:׉|@3kv:6{ eT/p^glV6kc'ߊ(+^5Nau-.q.ͣ.|Ws֥^Zv6jR*줃snx<Ҥ<';SHH0\Ē*ư|[h$nF]i Ir26< W y. nx|Vu59#,6]yHh>#gU{k4bKI2巂Np1j:4RM庵O2uQ#t:͞']{ Ϩo h7ZN=[(fb#cg#Z<~!VvY,|>dҴ w[T04O[jzduyBI Amnjt{xo2yd[DZXay$eUTp2sR=o]'a t(k저cs?j2~W]ʳ[p nkciO Nh_hLy-[Hcq-\L5ޛ^.rv$zWg2`7rዙf3y0=I+bPo9";:Vώ{W = 22%oZZ:Kiqo4ϱ'q֭fu5iH% I?t-jpirG1Em:IQ$k ( I񖑫^ 8)q1lQwMRK. Y嵒8H>[ x5t}6Hyv j@hPHKd@jލ 6mJHT@6oA@Jɢ1\Ea+DIch`Ķ:5fn[sys%d8]9k? jJɫ-ŽxA}q9ƟyVO,ɺMx)p2A뎔2m.$y-ܵ#(!A<5o[z d?tJڸ[ eR4Y%uR`pP]o-C涍 <G;4'Uda(%FA8=2qLNk-g>|v6Р o:OxS5MkC[bF.ldT4[bK1LQy$a |q-D E٤18ĸۜ9ψ^me;)^4!]aG~5Xx+R Αn[WehĞXpl`94 I|/Oq=_އUsH83[}O֚TT"GГ@cfƚ/Ḛ=@bxpۃjwNKM^]14Xqp3J啶n j`gx#IM{ 줽iq1]b s@ZI?R[+M"@%{yeژ\(䂸<& 0^BGx$z0W+Wx{\{t[!նh*k1g9Vgoj8O\S閖seEbHF`v2rI@}Q@Q@Q@Q@Q@Q@Q@Zͼ]eQH+cT8݂I=yׂ N$0J cwcf^ddz4 m5 Wiec',Gj˦X/abi/OckE{c:\[J7G,g*ڀ镛xJåYGl&Iv$O *( ( ( ( ( ( ( H=(h֚]1J6nzE^Ipb}W$zzzEWie è5b ( (+[Yo\Km双aGaVjvb=B{>v2Ϙv'Y(p/a'ЉW%>NMw/u@pNXM1@b#RK$4( ( ( ( ( ( ( ( ( ( (.RTp_ԔQEQEQEQ\%mOō[Ǫ=-l`{6$ySX^o|Ok7T/͕IrE0ʅX c3#IbxP)ze6vDA ,qƃ ^si~Pґo][YDHvA9Q[|7Wr5Y#uI|9N(h%WxHubKIaXy>mĞy UWoX+umI Γj)-LwQ2$䁂xY_^*տ{+ǂh5@V$P`O<#sIx\jQi7?c z;} N0 P]M7co1J㑎UMLGKm=Dc·͒ NyZmռGkGRvmcS8ZZ=QY2%[ P:V_ÝCP2-8 qP=KT-VSM,p [vV`+tGwz ˾}d|T n9{N2C#l3@w:ofu鵈lu8孒BÜhP1q9[O:Κeo ]O¿ܨs@EycxźvjzLpR%]  RlYxHլn msUF%Ԓ8@Q@Q4m7EHt(-#C,>Q@Q@4 bo5@_2X@}@9pPEPEPEPEPEPEPEPEPEPEPEPEP7_ֲ?^MjZϹAoqj3,.4yL eVz~5}]gy;0#\ރY~)ᛟ A "r[kH[ 2yA#}MgƑVhtoCj0vhݎ2OjZn[kY:춟 o86sᮋN|@^y4[~rz1%-A{df{}&#Lll ]6FG='f\,j%(NL(((((((s*"IEc[~ 2j1}rHg M{ϛ#dMng3s; [pV Umd4q4g-~\PHW0{ۜ~>L V|N,Ygqppe;g> 9%GS}꽽 s1v䪏O@*kd!ħ-}SPwI{g $p.=V[HmKw-d|@Q@lLy?tϔBjXIL`0aH#ϵT*XA,BYfyIc*k4]VD\ 4r*ֶRojRŒoc{?*Pfgm܍JHG-QEQEQEQEQEQEQEQEQEQEQE]J?ΤR((((+S·uivSƻ9}F{)n鴛FC$p@,nI%tOOSZPEDž* .m"ͬmac>4\x[AҠ,<3yw$}y`3l2zKoY:}oOd3v^+j`<;&v]Ԓz|#IX?/8 xoELJmPI`:zvjQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ZVՏT((((((((((\6fy<ێ6֬o<ON|7?~9 4QEQEQEGpfқp}Kcf/戉ۻ<5-Vwg>ϻn׾:,D(J*;~`fMW3VHb%TrŏL'VB`UTeJ n\fBs ӑWPuVlGlg~1nzbuޤ *Z,uSƒ]D`ɱxNzt7Bc,yc`9'>U+HHsۥ]((((((((((( _ԕWu%QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQECu}j[V>R ( ( ( ( ( ( ( ( ( ( 2so'ϛgۏ5f]ImI!T8I%1\w@h((;qU>(|s+u{(Z1cs.>9? K{XeY`ezEifͩ!\P:?h^*K7 p4-_q<C0},5f1-Q(n}ݵ&x#a۷8ӯj4?6 I&QJ(h1G #IMUXc$'9\~,)2qr " cɠ ((((((((((((IQRPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP7_֪UcU(((((((((v cMD>WG ~tdSu<,? iS\BF?Jx*TRt?Jբ2,#" E,3P&1 :aʵ( D$t2}iحaGJD>WG ~t5(4aaZ()2E-A`3Kl !8[4P[]uz]Nty ۄh1:VC‘k*ֵI$b8€cxGDJ ZG; zfʩ|DIkqg7~]w! 1Pֽ ((((I4P͕=70ӭ_7~^x*TRt?JS4K(ȂCK UEeiLBD2gI␰IQ}kV[]h/`G!:?/8ݸcZ()!DWiu  V2fkV1̞[GV*rD< ֝aMtm ;k?=M+C-cWLZh"Lzo隷U/@20n,t Q@Q@Q@R3*)f!T Nw"SnAL*$tѕC)"2}?vQ|ZMkVB/uk೅bU8ƨj-+ P*Vy𤁀I$U#6Mqz4W wė_6YѦ#_H3ր=iIdV*TRג~%SO0o dhoUU9,I;O/tM3>*u$hu2F1\ %zTϨ]}1mȀG.Os1*eά<h|Aw}y N" }}j?m,.n-M"+m1hIϩހ==GN_xynK FcbX9Y. 
mu+jg[gL8 cf(((϶3yk[֛Nek(_F`~EP?ԯJ+: ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (!V>RZ@Q@Q@Q@Q@Q@Q@Q@Q@D#8mOpҬ҅mCǥiPk->NH%pY5NO [_[IڼyQ`|p9sWMAu#C'Ӡ1Ò\(O]ǶMgwj!\ Ҁ$Լ/5BRx ,@Ͼ* A.lX/|sp1|wy}k{&[%3Wsā*Ԟ?РUb+qu c4p7ÇW:5|;-?hԼcP7چkqrw_t:6=\5-k:cNNQ+tPpv#K$}-Hڰw-azGeI,`f[Ög}Wn=>_xnm2N}6--y皯xKүn퍾u {-I#wG<2@^L[kH/IL r[f=FHOiŠiZYZC[Goc(4#pt xJBh}_4N1Tsb4NYm {[GbCV(n[ȭ+ND'>UoKQd 38PVF#M1.dm.y?ZEV{Sc r˽AYh O֋缎60,i׊i(@Kc;Y;%A,$zQ@/8CPVF#M1.dm.y?Zcƃ!!"bq:GX[hhK. 9r#M1.dm.y?Zh(r0v qWx.2)졔Tz xԋwE⽟k*meM;wj"F#UDQ0Kn+뒼OygoYMgw Mm27WSK4n!#A" U=wY<{X؃o[-7DVc[_wrI5u?xwXx_Pmxc;$9G^t_ŧF/$XѕDmn2H`k3YhN[]J.Xn#{V!HbNV:{UԮ䵍d%H-C Cvv t9m.%,WRoTU4*$7Yܹ|ՑxE8PV[4,y=~+Ƴ/jzeN"KGU̅1^PZ ݌~U4 ?Öf6ݴm Bǹ-9uzgluW2[=GN0nbcR2FAU hvZ_KjopUϮ qꨮQ!h;.?SQ&=7|=뫠((  M&jwzMc?v#R[?ҙ@Q@`R*8?ԯJ((((((((((((((((((((((((((((XJu}jQEQEQEQEQEQEQEQE51}Zu^c3LcpӠ.RZ@Q@Q@Q@Q@Q@Q@Q@Q@ʮ0zi6JcC FGO"e2HzQАb"T iPk)"} ?Z<|+ʏvmҤ#h!x(P@R1HTH~eJ}m(ƆAʌƁk)FCŒƟE1!2"E-,lj(jJ(?"+F3Thd_ARQ@ xbx>252?>(((()67b"T iR6E5F~%yQyͣPB4Q/R(<1HTH~eJ QSG*2?}k)FCŒƄ(*gO#X!H((PCy^T~^sh~%A ƨFȿuJҕBF+K(8SiD42Td~4YL4d4(q1ƈ[4(( _ԕWu%QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQECu}j[V>R ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (.RTp_ԔQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE '_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE7O/G_ʝE KEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEtuf-6.0.0/docs/repository-library-design.md0000644000000000000000000002665013615410400015646 0ustar00# Python-tuf repository API proposal: _minimal repository abstraction_ This is an attachment to ADR 10: _Repository library design built on top of Metadata API_, and documents the design proposal in Dec 2021. ## Design principles Primary goals of this repository library design are 1. Support full range of repository implementations: from command line “repository editing” tools to production repositories like PyPI 2. Provide canonical solutions for the difficult repository problems but avoid making implementation decisions 3. Keep python-tuf maintenance burden in mind: less is more Why does this design look so different from both legacy python-tuf code and other implementations? * Most existing implementations are focused on a specific use case (typically a command line application): this is a valid design choice but severely limits goal #1 * The problem space contains many application decisions. Many implementations solve this by creating functions with 15 arguments: this design tries to find another way (#2) * The Metadata API makes modifying individual pieces of metadata simpler. 
This, combined with good repository API design, should enable more variance in where things are implemented: the repository library does not have to implement every little detail as we can safely let specific implementations handle things, see goal #3
* This variance means we can start by implementing a minimal design: as experience from implementations is collected, we can then move implementation details into the library (goals #2, #3)

## Design

### Application and library components

![Design: Application and library components](repository-library-design-ownership.jpg)

The design expects a fully functional repository application to contain code at three levels:
* Repository library (abstract classes that are part of python-tuf)
  * The Repository abstract class provides an ergonomic abstract metadata editing API for all code levels to use. It also provides implementations for some core edit actions like _snapshot update_.
  * A small amount of related functionality is also provided (private key management API, maybe repository validation).
  * The library itself is very small: possibly a few hundred lines of code.
* Concrete Repository implementation (typically part of application code, implements interfaces provided by the repository API in python-tuf)
  * Contains the “application level” decisions that the Repository abstraction requires to operate: examples of application decisions include
    * _When should “targets” metadata next expire when it is edited?_
    * _What is the current “targets” metadata version? Where do we load it from?_
    * _Where to store current “targets” after editing? Should the previous version be deleted from storage?_
* Actual application
  * Uses the Repository API to do the repository actions it needs to do

For context here’s a trivial example showing what “ergonomic editing” means -- this key-adding code could be in the application (or later, if common patterns are found, in the python-tuf library):

```python
with repository.edit("targets") as targets:
    # adds a key for role1 (as an example, arbitrary edits are allowed)
    targets.add_key(key, "role1")
```

This code loads current targets metadata for editing, adds the key to a role, and handles version and expiry bumps before persisting the new targets version.

The reason for the context manager style is that it manages two things simultaneously:
* It hides the complexity of loading and persisting metadata, and of updating expiry and versions, from the editing code (by putting it in the repository implementation that is defined in python-tuf but implemented by the application)
* It still allows completely arbitrary edits on the metadata in question: the library does not need to anticipate what the application wants to do, and on the other hand the library can still provide e.g. snapshot functionality without knowing about the application decisions mentioned in the previous point.

Other designs do not seem to manage both of these.

### How the components are used

![Design: How components are used](repository-library-design-usage.jpg)

The core idea here is that because editing is ergonomic enough, when new functionality (like “developer uploads new targets”) is added, _it can be added at any level_: the application might add a `handle_new_target_files()` method that adds a bunch of targets into the metadata, but one of the previous layers could offer that as a helper function as well. Code in both cases would look similar, as it would use the common editing interface.
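To make that concrete, here is a rough sketch of what such a `handle_new_target_files()` helper could look like if written at the application level. Only `TargetFile` comes from the python-tuf Metadata API; the helper itself and its file handling are hypothetical, not part of the proposal:

```python
from pathlib import Path
from typing import List

from tuf.api.metadata import TargetFile


def handle_new_target_files(repository, target_paths: List[Path]) -> None:
    """Hypothetical application-level helper built only on the editing API."""
    with repository.edit("targets") as targets:
        for path in target_paths:
            # Hash the local file and record it under its file name
            target = TargetFile.from_file(path.name, str(path))
            targets.targets[target.path] = target

    # Assumed helper on the concrete repository that updates snapshot/timestamp
    repository.snapshot()
```

Because it only uses the generic editing interface, a helper like this could later move into a lower layer without changing its body.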
The proposed design is purposefully spartan in that the library provides very few high-level actions (the prototype only provided _sign_ and _snapshot_): everything else is left to the implementer at this point. As we gain experience with common usage patterns we can start providing other features as well.

There are a few additional items worth mentioning:
* Private key management: the Repository API should come with a “keyring abstraction” -- a way for the application to provide roles’ private keys for the Repository to use. Some implementations could be provided as well.
* Validating repository state: the design is very much focused on enabling efficient editing of individual metadata. Implementations are also likely to be interested in validating (after some edits) that the repository is correct according to the client workflow and that it contains the expected changes. The Repository API should provide some validation, but we should recognise that validation may be implementation specific.
* Improved metadata editing: There are a small number of improvements that could be made to metadata editing. These do not necessarily need to be part of the repository API: they could be part of the Metadata API as well.

It would make sense for python-tuf to ship with at least one concrete Repository implementation: possibly a repo.py look-alike. This implementation should not be part of the library but an example.

## Details

This section includes links to a Proof of Concept implementation in [repository-editor-for-tuf](https://github.com/vmware-labs/repository-editor-for-tuf/): it should not be seen as the exact proposed API but a prototype of the ideas.

The ideas in this document map to POC components like this:

| Concept | repository-editor-for-tuf implementation |
|-|-|
| Repository API | [librepo/repo.py](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/librepo/repo.py), [librepo/keys.py](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/librepo/keys.py) |
| Example of repository implementation | [git_repo.py](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/git_repo.py) |
| Application code | [cli.py (command line app)](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/cli.py), [keys_impl.py (keyring implementation)](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/keys_impl.py) |
| Repository validation | [verifier.py (very rough, not intended for python-tuf)](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/verifier.py) |
| Improved Metadata editing | [helpers.py](https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/helpers.py) |

### Repository API

Repository itself is a minimal abstract class: the value of this class is in defining the abstract method signatures (most importantly `_load`, `_save()`, `edit()`) that enable ergonomic metadata editing. The Repository class in this proposal includes concrete implementations only for the following:
* `sign()` -- signing without editing the metadata payload
* `snapshot()` -- updates snapshot and timestamp metadata based on given input. Note that a concrete Repository implementation could provide an easier-to-use snapshot that does not require input (see the example in git_repo.py)

More concrete method implementations (see cli.py for examples) could be added to Repository itself but none seem essential at this point.
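To give a feel for its size and shape, here is a rough sketch of such a minimal abstract class. The signatures below are illustrative assumptions only (`init_role()` and `snapshot()` are omitted for brevity); they are not the proposed API:

```python
from abc import ABC, abstractmethod
from typing import ContextManager, Dict, List

from securesystemslib.signer import Signer
from tuf.api.metadata import Metadata, Signed


class Repository(ABC):
    """Sketch of the minimal abstract class: signatures are illustrative only."""

    @property
    @abstractmethod
    def keyring(self) -> Dict[str, List[Signer]]:
        """Map role names to the signers the repository should sign with."""
        raise NotImplementedError

    @abstractmethod
    def _load(self, role: str) -> Metadata:
        """Load current metadata for 'role' from storage or cache."""
        raise NotImplementedError

    @abstractmethod
    def _save(self, role: str, md: Metadata) -> None:
        """Sign 'md' with the keyring and persist it to cache/storage."""
        raise NotImplementedError

    @abstractmethod
    def edit(self, role: str) -> ContextManager[Signed]:
        """Return a context manager that yields the role's payload for
        arbitrary edits, then handles version/expiry bumps and persists it."""
        raise NotImplementedError

    # Concrete helpers such as sign() can be built purely on the abstract API
    def sign(self, role: str) -> None:
        """Re-sign existing metadata without changing its payload."""
        self._save(role, self._load(role))
```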
The current prototype API defines five abstract methods that take care of access to metadata storage, expiry updates, version updates and signing. These must be implemented in the concrete implementation:
* **keyring()**: A property that returns the private key mapping that should be used for signing.
* **_load()**: Loads metadata from storage or cache. Is used by edit() and sign().
* **_save()**: Signs and persists metadata in cache/storage. Is used by edit() and sign().
* **edit()**: The ContextManager that enables ergonomic metadata editing by handling expiry and version number management.
* **init_role()**: Initializes new metadata, handling expiry and version number. (_init_role is in a way a special case of edit and should potentially be integrated there_).

The API requires a “Keyring” abstraction that the repository code can use to look up a set of signers for a specific role. Specific implementations of Keyring could include a file-based keyring for testing, an env-var keyring for CI use, etc. Some implementations should be provided in the python-tuf code base and more could be implemented in applications.

_Prototype status: Prototype Repository and Keyring abstractions exist in librepo/repo.py._

### Example concrete Repository implementation

The design decisions that the included example `GitRepository` makes are not important but provide an example of what is possible:
* Metadata versions are stored in files in git, with filenames that allow serving the metadata directory as-is over HTTP
* Version bumps are made based on git status (so edits in the staging area only bump the version once)
* “Current version” when loading metadata is decided based on filenames on disk
* Files are removed once they are no longer part of the snapshot (to keep the directory uncluttered)
* Expiry times are decided based on an application-specific metadata field
* Private keys can be stored in a file or in environment variables (for CI use)

Note that the GitRepository implementation is significantly larger than the Repository interface -- but all of the complexity in GitRepository is really related to the design decisions made there.

_Prototype status: The GitRepository example exists in git_repo.py._

### Validating repository state

This is mostly undesigned, but something built on top of TrustedMetadataSet (currently an ngclient component) might work as a way to easily check specific aspects like:
* Is top-level metadata valid according to the client workflow
* Is a role included in the snapshot and the delegation tree

It’s likely that different implementations will have different needs though: a command line app for small repos might want to validate by loading all metadata into memory, but a server application hosting tens of thousands of pieces of metadata is unlikely to do so.

_Prototype status: A very rough implementation exists in verifier.py: this is unlikely to be very useful_

### Improved metadata editing

Currently the identified improvement areas are:
* Metadata initialization: this could potentially be improved by adding default argument values to Metadata API constructors
* Modifying and looking up data about roles in delegating metadata (root/targets): they do similar things but root and targets do not have identical APIs.
This may be a very specific use case and not interesting for some applications.

_Prototype status: Some potential improvements have been collected in helpers.py_

tuf-6.0.0/docs/tuf-horizontal-white.png
tuf-6.0.0/docs/tuf-icon-200.png
tuf-6.0.0/docs/tuf-icon-32.png
Pop in to [#tuf](https://cloud-native.slack.com/archives/C8NMD3QJ3) on CNCF Slack or [Github issues](https://github.com/theupdateframework/python-tuf/issues/new) and let’s talk. tuf-6.0.0/docs/_posts/2022-05-04-ngclient-design.md0000644000000000000000000001113513615410400016115 0ustar00--- title: "What's new in Python-TUF ngclient?" author: Jussi Kukkonen --- We recently released a new TUF client implementation, `ngclient`, in Python-TUF. This post explains why we ended up doing that when a client already existed. # Simpler implementation, "correct" abstractions The legacy code had a few problems that could be summarized as non-optimal abstractions: Significant effort had been put to code reuse, but not enough attention had been paid to ensure the expectations and promises of that shared code were the same in all cases of reuse. This combined with Pythons type ambiguity, use of dictionaries as "blob"-like data structures and extensive use of global state meant touching the shared functions was a gamble: there was no way to be sure something wouldn't break. During the redesign, we really concentrated on finding abstractions that fit the processes we wanted to implement. It may be worth mentioning that in some cases this meant abstractions that have no equivalent in the TUF specification: some of the issues in the legacy implementation look like the result of mapping the TUF specifications [_Detailed client workflow_](https://theupdateframework.github.io/specification/latest/#detailed-client-workflow) directly into code. Here are the core abstractions we ended up with (number of lines of code in parenthesis to provide a bit of context, alongside links to sources and docs): * `Metadata` (900 SLOC, [docs](https://theupdateframework.readthedocs.io/en/latest/api/tuf.api.html)) handles everything related to individual pieces of TUF metadata: deserialization, signing, and verifying * `TrustedMetadataSet` (170 SLOC) is a collection of local, trusted metadata. It defines rules for how new metadata can be added into the set and ensures that metadata in it is always consistent and valid: As an example, if `TrustedMetadataSet` contains a targets metadata, the set guarantees that the targets metadata is signed by trusted keys and is part of a currently valid TUF snapshot * `Updater` (250 SLOC, [docs](https://theupdateframework.readthedocs.io/en/latest/api/tuf.ngclient.updater.html)) makes decisions on what metadata should be loaded into `TrustedMetadataSet`, both from the local cache and from a remote repository. While `TrustedMetadataSet` always raises an exception if a metadata is not valid, `Updater` considers the context and handles some failures as a part of the process and some as actual errors. `Updater` also handles persisting validated metadata and targets onto local storage and provides the user-facing API * `FetcherInterface` (100 SLOC, [docs](https://theupdateframework.readthedocs.io/en/latest/api/tuf.ngclient.fetcher.html)) is the abstract file downloader. By default, a Requests-based implementation is used but clients can use custom fetchers to tweak how downloads are done No design is perfect but so far we're quite happy with the above split. It has dramatically simplified the implementation: The code is subjectively easier to understand but also has significantly lower code branching counts for the same operations. 
# PyPI client requirements A year ago we added TUF support into pip as a prototype: this revealed some design issues that made the integration more difficult than it needed to be. As the potential pip integration is a goal for Python-TUF we wanted to smooth those rough edges. The main addition here was the `FetcherInterface`: it allows pip to keep doing all of the HTTP tweaks they have collected over the years. There were a bunch of smaller API tweaks as well: as an example, legacy Python-TUF had not anticipated downloading target files from a different host than it downloads metadata from. This is the design that PyPI uses with pypi.org and files.pythonhosted.org. # better API Since we knew we had to break API with the legacy implementation anyway, we also fixed multiple paper cuts in the API: * Actual data structures are now exposed instead of dictionary "blobs" * Configuration was removed or made non-global * Exceptions are defined in a way that is useful to client applications # Plain old software engineering In addition to the big-ticket items, the rewrite allowed loads of improvements in project engineering practices. Some highlights: * Type annotations are now used extensively * Coding style is now consistent (and is now a common Python style) * There is a healthy culture of review in the project: bar for accepting changes is where it should be for a security project * Testing has so many improvements they probably need a blog post of their own These are not `ngclient` features as such but we expect they will show in the quality of products built with it. tuf-6.0.0/docs/_posts/2022-06-15-testing-ngclient.md0000644000000000000000000002024513615410400016326 0ustar00--- title: "Testing Tricky Edge Cases in a TUF Client" author: Ivana Atanasova --- Usually the TUF Specification creates an impression of simple and straightforward approach to address software update systems security gaps. In the next few paragraphs we'll try to convince you that the devil is in the details. With the [v1.0.0 release](https://blogs.vmware.com/opensource/2022/02/22/python-tuf-reaches-version-1-0-0/) we can say that the current reference implementation is finally in a good place, although it wouldn’t be so trustworthy without all the awesome test functionality it provides. Therein lies some interesting surprises, for the conformance tests reflect use cases and tricky details that wouldn’t easily come to mind. TUF, in fact, is capable of managing some tricky business! Before looking into them, let’s first introduce the test functionality itself. ## Some repository simulator magic The test suite is heavily based on [RepositorySimulator](https://github.com/theupdateframework/python-tuf/blob/develop/tests/repository_simulator.py), which allows you to play with repository metadata by modifying it, signing and storing new roles versions, while serving older ones in the client test code. You can also simulate downloading new metadata from a remote without the need of file access or network connections, and modify expiry dates and time. Even though `RepositorySimulator` hosts repos purely in memory, you can supply the `--dump` flag to write its contents to a temporary directory on the local filesystem with "/metadata/..." and "/targets/..." URL paths that host metadata and targets respectively in order to audit the metadata. The test suite provides you with the ability to see the "live" test repository state for debugging purposes. 
Let’s cite a specific example with testing expired metadata to demonstrate the cool thing the `RepositorySimulator` provides, i.e. the capability to simulate real repository chains of updates as suggested by the spec, and not just modify individual metadata. More specifically, we would like to simulate a workflow in which a [targets](https://theupdateframework.github.io/specification/latest/#targets) version is being increased and a [timestamp](https://theupdateframework.github.io/specification/latest/#timestamp) expiry date is being changed. We are going to elaborate below on how this can be used to test the `Updater` above all programmatically. Now, let's just focus on how to verify that the `RepositorySimulator` did what we expected. Let's assume we did the following: * Upgraded `targets` to v2 * Changed `timestamp` v2 expiry date We can verify that the metadata looks as expected, without the need to implement file access. First, we need to find the corresponding temporary directory: ``` $ python3 test_updater_top_level_update.py TestRefresh.test_expired_metadata --dump Repository Simulator dumps in /var/folders/pr/b0xyysh907s7mvs3wxv7vvb80000gp/T/tmpzvr5xah_ ``` Once we know it, we can verify that the metadata has 2 cached versions: ``` $ tree /var/folders/pr/b0xyysh907s7mvs3wxv7vvb80000gp/T/tmpzvr5xah_/test_expired_metadata /var/folders/pr/b0xyysh907s7mvs3wxv7vvb80000gp/T/tmpzvr5xah_/test_expired_metadata ├── 1 │   ├── 1.root.json │   ├── snapshot.json │   ├── targets.json │   └── timestamp.json └── 2 ├── 2.root.json ├── snapshot.json ├── targets.json └── timestamp.json ``` And now we can also see that after bumping the version and moving timestamp v2 expiry date two weeks forward from v1, the v2 corresponding timestamp metadata has recorded that expiry date correctly: Timestamp v1:
$ cat /var/folders/pr/b0xyysh907s7mvs3wxv7vvb80000gp/T/tmpzvr5xah_/test_expired_metadata/1/timestamp.json 
{
 "signatures": [{...}],
 "signed": {
  "_type": "timestamp",
  "expires": "2022-03-30T00:18:31Z",
  "meta": { "snapshot.json": {"version": 1}},
  "spec_version": "1.0.28",
  "version": 1
 }}
Timestamp v2:
$ cat /var/folders/pr/b0xyysh907s7mvs3wxv7vvb80000gp/T/tmpzvr5xah_/test_expired_metadata/2/timestamp.json 
{
 "signatures": [{...}],
 "signed": {
  "_type": "timestamp",
  "expires": "2022-04-13T00:18:31Z",
  "meta": { "snapshot.json": {"version": 2}},
  "spec_version": "1.0.28",
  "version": 2
 }}
As you can see, the first date is 30 Mar and the second - 13 Apr, which is exactly 14 days later. This is a great way to observe what the tests really do and check if they do it successfully. ## When we talk about security, edge cases are the norm Now, let’s take a closer look at two edge cases, using in this test the cool things the RepositorySimulator provides: ### Example with expired metadata: Imagine that we have performed an update and stored metadata in a cache. And the locally stored timestamp/snapshot has expired. But we still need it to perform an update from remote by verifying the signatures and we need to use the expired timestamp. We can play with versions and expiry to verify that this scenario not explicitly mentioned in the spec works correctly and safely. By using the simulator, we can do the following: 1. Set the timestamp expiry one week ahead (to day 7) 2. On the very first day (day 0) download, verify, and load metadata for the [top-level roles](https://theupdateframework.github.io/specification/latest/#roles-and-pki) following the TUF specification order. This is done by simply calling `updater.refresh()`. 3. Then we bump [snapshot](https://theupdateframework.github.io/specification/latest/#update-snapshot) and [targets](https://theupdateframework.github.io/specification/latest/#targets) versions to v2 in the repository on the same day (day 0) 4. Set v2 expiry dates three weeks ahead (to day 21) 5. Travel in time somewhere between day 7 and day 21 6. Perform a successful `refresh` (with `updater.refresh()` call) with the expired locally cached timestamp 7. Check that the final repository version of the snapshot and targets roles is v2. This is a not so obvious use-case to keep in mind when thinking about updates. You can see how it looks in practice in the [reference implementation](https://github.com/theupdateframework/python-tuf/blob/develop/tests/test_updater_top_level_update.py#:~:text=test_expired_metadata). ### Example rollback protection check with expired metadata: Now let’s see if a rollback attack protection can be performed when the local timestamp has expired. In this case we need at least two timestamp and snapshot versions, an expired older version of timestamp, and a verification that a rollback check is performed with the old version. For a timestamp rollback, the case is pretty similar to the use of expired metadata. We can do the following: 1. Set timestamp v1 expiry one week ahead (to day 7) 2. Perform `updater.refresh()` on the very first day 3. Publish timestamp v2 in the repository with expiry three weeks ahead (to day 21) 4. Perform `updater.refresh()` somewhere between day 7 and day 21 5. Verify that rollback check uses the expired timestamp v1. (For reference, see the implementation [example](https://github.com/theupdateframework/python-tuf/blob/develop/tests/test_updater_top_level_update.py#:~:text=test_expired_timestamp_version_rollback)). A similar approach can be used when testing both timestamp and snapshot rollback protection. We just need to guarantee that after the last snapshot update, the snapshot version is not the latest in order to verify a rollback check is performed both with expired timestamp and an older snapshot. Sounds complicated, but it’s pretty easy with the simulator and [this example](https://github.com/theupdateframework/python-tuf/blob/develop/tests/test_updater_top_level_update.py#:~:text=test_expired_timestamp_snapshot_rollback) illustrates it pretty well. 
## The devil is in the details One of the great things about a reference implementation is that one can learn a lot about the TUF specification by looking at the tests, which are full of examples that would hardly come to mind when you read the abstract straightforward workflow explained in the spec. And those tests most likely do not cover everything… Do you have a comment about the TUF spec or the cited examples? An idea? Please share it with us! tuf-6.0.0/docs/_posts/2022-10-21-python-tuf-security-assessment.md0000644000000000000000000001165713615410400021214 0ustar00--- title: "Python-tuf source code audit" author: Joshua Lock --- We are pleased to announce completion of a source code audit of the recently refactored python-tuf codebase. # Background In February 2022 the python-tuf team [released version 1.0]( https://theupdateframework.github.io/python-tuf/2022/02/21/release-1-0-0.html ). This release was the product of a significant refactoring effort with the code being rewritten from scratch to provide two new stable API’s: * A low-level interface for creating and consuming TUF metadata * A robust and pluggable client implementation Unifying both of these APIs is a focus on developer ergonomics and flexibility of the API. While the new python-tuf codebase is much leaner, a mere 1,400 lines of code at release, compared to the legacy code’s 4,700 lines, and builds on the lessons learned from development (and developers) on the prior versions of python-tuf, we were very conscious of the fact that our first major release of a security project was made up of newly authored code. To improve our confidence in this newly authored code we engaged with the Open Source Technology Improvement Fund (OSTIF) to have an independent security assessment of the new python-tuf code. OSTIF connected us with the team at X41 D-Sec who performed a thorough source code audit, the results of which we are releasing today. # Results and resolutions The report prepared by X41 included one medium severity and three low severity issues, we describe below how we are addressing each of those reported items. **Private Key World-Readable (TUF-CR-22-01) – Medium** This vulnerability is not in any code called by python-tuf, but was included in demonstrative code the python-tuf team provided to the X41 team. The underlying issue is in [securesystemslib](https://github.com/secure-systems-lab/securesystemslib), a utility library used by python-tuf which provides a consistent interface around various cryptography APIs and related functionality, where any files were created with the default permissions of the running process. We resolved this issue by [adding an optional restrict parameter]( https://github.com/secure-systems-lab/securesystemslib/pull/231/files) to the `storage.put()` interface and in the corresponding filesystem implementation of the interface ensuring that when `restrict=True` files are created with octal permissions `0o600` (read and write for the user only). This enhancement has been included in the recent release of [securesystemslib 0.25.0]( https://github.com/secure-systems-lab/securesystemslib/releases/tag/v0.25.0 ). **Shallow Build Artifact Verification (TUF-CR-22-02) – Low** The `verify_release` script, run by python-tuf developers as part of the release process and available to users to verify that a release on GitHub or PyPI matches a build of source code from the repository, was only performing a shallow comparison of files. That is, only the type, size, and modification times were compared. 
We have [modified the script]( https://github.com/theupdateframework/python-tuf/pull/2122/files ) to perform a deep comparison of the contents and attributes of files being verified. **Quadratic Complexity in JSON Number Parsing (TUF-CR-22-03) – Low** This issue was not in python-tuf itself, rather the problem was in Python’s built-in json module. Fortunately, we did not need to take any action for this issue as it was independently reported upstream and has been fixed in Python. Find more details in [CVE-2020-10735: Prevent DoS by large int<->str conversions]( https://github.com/python/cpython/issues/95778) on Python’s issue tracker. **Release Signatures Add No Protection (TUF-CR-22-04) – Low** python-tuf releases are built by GitHub Actions in response to a developer pushing a tag. However, before those releases are published to the project’s GitHub releases page and PyPI a developer must verify (using the `verify_release` script discussed earlier) and approve the release. Part of the approval includes creating a detached signature and including that in the release artifacts. While these do not add any additional protection, we do believe that the additional authenticity signal is worthwhile to users. Furthermore, along with the above notice and the recommendations in the informational notes we will continue to iterate on our build and release process to provide additional security for users of python-tuf. # Thank you We are extremely grateful to X41 for their thorough audit of the python-tuf code, to [Open Source Technology Improvement Fund](https://ostif.org) (OSTIF) for connecting us with the [X41 D-Sec, GMBH](https://x41-dsec.de) team, and to the [Cloud Native Computing Foundation](https://www.cncf.io) (CNCF) for funding the source code audit – thank you all. Read the full report here: [Source Code Audit on The Update Framework for Open Source Technology Improvement Fund (OSTIF)]( https://theupdateframework.io/audits/x41-python-tuf-audit-2022-09-09.pdf). tuf-6.0.0/docs/_posts/2023-01-24-securesystemslib-signer-api.md0000644000000000000000000001255413615410400020511 0ustar00--- title: New signing API author: Jussi Kukkonen --- > Things should be made as simple as possible – but no simpler. > > _- sometimes attributed to Einstein_ I believe the rule of thumb above stands on its own merit when it comes to software systems so the credibility of the attribution is not important (it's also possible that we should not take software design advice from a physicist). This post is about the PKI signing API provided by [Securesystemslib](https://github.com/secure-systems-lab/securesystemslib/) and used by applications built with python-tuf. It's an example of how keeping a thing too simple can actually make it more complex. ## The problem with private keys The original `securesystemslib.keys` module is based on the assumption that there are three distinct steps in the lifetime of a private-public keypair in a system like a TUF repository: 1. Generate private and public key 1. Sign with private key 1. Verify signature with public key This all seems logical on paper but in practice implementing signing for different underlying technologies (like online key vaults and Yubikeys) forces the API surface to grow linearly, and still requires the applications to also be aware of all the different signing technologies and their configuration. It was clear that something was wrong. ## New signer module In reality there are four distinct events during the lifetime of a signing key. 
All of these steps can happen on different systems, with different operators and different access to the underlying signing system: 1. Generate private and public keys – _This may happen in securesystemslib but also in an online key vault configuration UI or the Yubikey command line tool_ 1. Store the public key _and the information needed to access the private key_ 1. Sign using the information stored in step 2 1. Verify signature with public key Securesystemslib 0.26 introduces an improved signer API that recognizes this process complexity – and in turn makes managing and signing with keys simpler in practical application development. There are three main changes, all in the `securesystemslib.signer` module that defines Signer and Key classes: * The concept of **Private key URIs** is introduced – this is a relatively simple string that identifies a signing technology and encodes how to access and sign with a specific private key. Examples: - `gcpkms:projects/python-tuf-kms/locations/global/keyRings/git-repo-demo/cryptoKeys/online/cryptoKeyVersions/1` (A Google Cloud KMS key) - `file:/home/jku/keys/mykey?encrypted=true` (A key in an encrypted file) - `hsm:` (A hardware security module like Yubikey) * **Importing** public keys and constructing private key URIs is handled by Signers (there's no generic API though: this detail is specific to signing technology) * **Dynamic dispatch** is added for both Signers and Keys (former based on the private key URI, latter on the key content): As a result application code does not need to care about the specific technology used to sign/verify but securesystemslib can still support a wide array of signing methods -- and this support can even be extended with out-of-tree implementations. ## Code examples These examples are slightly simplified copies from my latest repository implementation and should represent any new application code using the python-tuf Metadata API in the future[^1]. Some things to note in these examples: * Application code that signs does not care what signing technology is used * Public key import (and related private key URI construction) is specific to the underlying signing technology * Private key URIs can be stored wherever makes sense for the specific application ### Example 1: Online key in a KMS Here’s an example where the private key URI is stored in a custom field in the metadata (this makes sense for online keys). First, the setup code that imports a key from Google Cloud KMS – this code runs in a repository maintainer tool: ```python def import_google_cloud_key() -> Key gcp_key_id = input("Please enter the Google Cloud KMS key id") uri, key = GCPSigner.import_(gcp_key_id) # embed the uri in the public key metadata key.unrecognized_fields["x-online-uri"] = uri return key ``` Then signing with the same key – this code runs in the online repository component and only needs the public key as an argument since we embedded the private key URI in the public key metadata. 
It does require the `cloudkms.signer` role permissions on Google Cloud though: ```python def sign_online(self, md: Metadata, key: Key) -> None: uri = key.unrecognized_fields["x-online-uri"] signer = Signer.from_priv_key_uri(uri, key) md.sign(signer) ``` ### Example 2: Maintainer key on a Yubikey This time we're importing the maintainers Yubikey: ```python def import_yubikey(config: ConfigParser) -> Key input("Insert your HW key and press enter") uri, key = HSMSigner.import_() # store the uri in application configuration config["keyring"][key.keyid] = uri return key ``` Later we sign with the Yubikey: ```python def sign_local(md: Metadata, key: Key, config: ConfigParser) -> None: uri = config["keyring"][key.keyid] signer = Signer.from_priv_key_uri(uri, key) md.sign(signer) ``` [^1]: The new signer API is not used in python-tuf quite yet: follow Pull Request [#2165](https://github.com/theupdateframework/python-tuf/pull/2165) to see when the support is merged.tuf-6.0.0/docs/adr/0000-use-markdown-architectural-decision-records.md0000644000000000000000000000140413615410400022325 0ustar00# Use Markdown Architectural Decision Records * Status: accepted * Date: 2020-10-20 Technical Story: https://github.com/theupdateframework/python-tuf/issues/1141 ## Context and Problem Statement We want to record architectural decisions made in this project. Which format and structure should these records follow? ## Considered Options * [MADR](https://adr.github.io/madr/) 2.1.2 – The Markdown Architectural Decision Records * Formless – No conventions for file format and structure ## Decision Outcome Chosen option: "MADR 2.1.2", because * Implicit assumptions should be made explicit. Design documentation is important to enable people understanding the decisions later on. * The MADR structure is comprehensible and facilitates usage & maintenance. tuf-6.0.0/docs/adr/0001-python-version-3-6-plus.md0000644000000000000000000000321013615410400016115 0ustar00# Default to Python 3.6 or newer for new development * Status: accepted * Date: 2020-10-20 Technical Story: https://github.com/theupdateframework/python-tuf/issues/1125 ## Context and Problem Statement We are planning a refactor of tuf where: * We do not want to try and support end-of-life versions of the language. * We want to use modern language features, such as typing. * We want to ease maintainer burden, by reducing the major language versions supported. ## Decision Drivers * Python 2.7 is end-of-life * Python 3.5 is end-of-life * Modern Python allows use of desirable features such as type hints * Supporting end-of-life Python versions adds maintenance overhead ## Considered Options * Support Python 2.7 and 3.5+ * Support Python 2.7 and 3.6+ * Support Python 2.7 and 3.6+ (with polyfill modules) * Support only Python 3.6+ ## Decision Outcome Chosen option: "Support only Python 3.6+", because we want modern features and lower maintainer effort as we work to improve our codebase through the refactor effort. New modules should target Python 3.6+. Using modules to polyfill standard library features from Python 3.6+ feels untenable as more libraries are dropping support for EOL Python releases. ### Negative Consequences * Leaves major adopter and contributor without an actively developed client for some of their customers stuck on older Python versions. 
## Links * [Discussion of how/where to develop the refactored codebase](https://github.com/theupdateframework/python-tuf/issues/1126) * [Discussion of deprecation policy for the pre-1.0, Python 2.7 supporting, code](https://github.com/theupdateframework/python-tuf/issues/1127) tuf-6.0.0/docs/adr/0002-pre-1-0-deprecation-strategy.md0000644000000000000000000000355213615410400017053 0ustar00# Deprecation strategy for current release series (0.x) * Date: 2020-11-05 Technical Story: https://github.com/theupdateframework/python-tuf/issues/1127 ## Context and Problem Statement We plan to refactor the reference implementation significantly and, as part of that effort, drop support for no-longer maintained versions of Python (see ADR 0001). However, a major user of (and contributor to) the project has users of the client stuck on older Python versions. We would like to define a reasonable support policy for the current, Python 2.7 supporting, codebase. ## Decision Drivers * We have finite resources. * A major adopter/user of the project has a need to maintain support for Python 2.7 clients. ## Considered Options * Maintain the code in parallel for a fixed period of time after releasing the refactored code. * Abandon the old code once the refactored code is released. * Support the old code on a best-effort basis once the refactored code is released. ## Decision Outcome Chosen option: "Support the old code on a best-effort basis once the refactored code is released", because we only have finite resources and want to focus them on moving the project forward, including supporting PyPI/pip integration and providing a solid implementation for developing specification enhancements in. We should document this outcome clearly in a governance document describing the release process with words along the lines of: "Support for older releases: Bugs reported with tuf versions prior to 1.0.0 will likely not be addressed directly by tuf’s maintainers. Pull Requests to fix bugs in the last release prior to 1.0.0 will be considered, and merged (subject to normal review processes). Note that there may be delays due to the lack of developer resources for reviewing such pull requests." ## Links * [ADR 0001](0001-python-version-3-6-plus.md) Python version tuf-6.0.0/docs/adr/0003-where-to-develop-TUF-1-0-0.md0000644000000000000000000000403113615410400016103 0ustar00# Develop TUF 1.0.0 in a subdirectory of the current TUF implementation * Status: accepted * Date: 2020-11-23 Technical Story: https://github.com/theupdateframework/python-tuf/issues/1126 ## Context and Problem Statement The plan is to implement a refactored TUF (1.0.0) alongside the current code base, in order to not disrupt existing usage and keep providing a Python 2.7 client. We need to decide on the best place to do this development. ## Decision Drivers * Developing the new code piecemeal * Continuing to make releases in the interim * Avoiding maintenance overhead ## Considered Options Develop TUF 1.0.0: * In its own repository * In a separate development branch of the current TUF implementation * In the default branch, archiving the current implementation * In a subdirectory of the current TUF implementation ## Decision Outcome Chosen option: "Develop TUF 1.0.0 in a subdirectory of the current TUF implementation", because we want to add the new TUF code gradually while keep maintaining the current implementation given limited maintenance resources. 
Once development of the new version is complete, we will transition from TUF 1.0.0 in a subdirectory to stand-alone TUF 1.0.0 by the following procedure: * flesh out tuf/api/* * implement tuf/client/new-updater.py * implement tuf/repository/* * \ * git mv tuf/client/new-updater.py tuf/client/updater.py * git rm tuf/\*.py * tag 1.0.0 ## Pros and Cons of the Options Developing TUF 1.0.0 in a subdirectory of the current TUF implementation seems to have the least maintenance overhead compared to option 1 and 2, while allowing us to continue making releases with the old code unlike option 3. ### Negative Consequences * In progress development in the default branch causes messiness in plain sight. ## Links * [Discussion of Python version support in TUF 1.0.0](https://github.com/theupdateframework/python-tuf/issues/1125) * [Discussion of deprecation policy for the pre-1.0, Python 2.7 supporting, code](https://github.com/theupdateframework/python-tuf/issues/1127) tuf-6.0.0/docs/adr/0004-extent-of-OOP-in-metadata-model.md0000644000000000000000000000354413615410400017466 0ustar00# Add classes for complex metadata attributes * Status: accepted * Date: 2020-11-30 Technical Story: https://github.com/theupdateframework/python-tuf/issues/1133 ## Context and Problem Statement Custom classes for the TUF signed metadata wrapper (Metadata) and metadata payload containers (Root, Timestamp, Snapshot, Targets) were added recently. Complex attributes on these classes are still represented as dictionaries. Should we add classes for these attributes too? ## Decision Drivers * Transition to class-based role metadata containers in progress (see *"class model"* links below) * Harden in-memory representation of metadata model * Replace `securesystemslib` schema validation (see *"schema checker"* link below) ## Considered Options * Use custom classes for complex attributes * Use dictionaries for complex attributes ## Decision Outcome Chosen option: "Use custom classes for complex attributes", to provide a consistently object-oriented, well-defined, single source of truth about the TUF metadata model (not only its containers). In addition to convenience update methods, the model may be extended with self-validation behavior (see *"validation guidelines"* link below) to replace `securesystemslib` schema checks. ### Negative Consequences * Implementation overhead * Less flexibility in usage and development (this is actually desired) * Maybe less idiomatic than dictionaries ## Links * [class model](https://github.com/theupdateframework/python-tuf/pull/1112) * [class model (root)](https://github.com/theupdateframework/python-tuf/pull/1193) * [WIP: class model (complex attributes)](https://github.com/theupdateframework/python-tuf/pull/1223) * [new TUF validation guidelines](https://github.com/theupdateframework/python-tuf/issues/1130) * [securesystemslib schema checker issues](https://github.com/secure-systems-lab/securesystemslib/issues/183) tuf-6.0.0/docs/adr/0005-use-google-python-style-guide.md0000644000000000000000000000370513615410400017460 0ustar00# Use Google Python style guide with minimal refinements Technical Story: https://github.com/theupdateframework/python-tuf/issues/1128 ## Context and Problem Statement The Secure Systems Lab code style guide, which has been used for most of the code base, has become outdated. Through the upcoming rewrite, we have the chance to ignore consistency considerations with existing code style and can choose a more standard and up-to-date style guide. 
## Decision Drivers * Flaws in original Secure Systems Lab style guide * Curating a complete custom style guide is time consuming * Well-established style rules lower contribution barrier * Custom style is not supported by default in common tooling (i.e. editors and linters) ## Considered Options * Use custom style guide * Use Google style guide with refinements ## Decision Outcome Chosen option: "Use Google style guide with refinements", because the Google style guide is a comprehensive, well-established style guide that is mostly based on PEP-8 and was accepted by everyone on the TUF team. There is no need to replicate these recommendations. However, we do provide a very slim document with additional refinements, in order to emphasize items the we consider especially important, want to be handled differently, or in one specific way, where the Google guide would allow multiple. **Course of Action:** * Follow existing style when working on existing code (files) * Follow new style in any new code (files) * Consider providing linter and formatter configuration (e.g. pylint, flake8, black, yapf) to enforce and facilitate new style ## Links * [New Slim Secure Systems Lab style guide](https://github.com/secure-systems-lab/code-style-guidelines/pull/21) * [Google Python style guide](https://google.github.io/styleguide/pyguide.html) * [PEP 8](https://www.python.org/dev/peps/pep-0008/) * [Issues in original Secure Systems Lab style guide](https://github.com/secure-systems-lab/code-style-guidelines/issues/20) tuf-6.0.0/docs/adr/0006-where-to-implemenent-model-serialization.md0000644000000000000000000001440413615410400021657 0ustar00# Separate metadata serialization from metadata class model but keep helpers Technical Story: https://github.com/theupdateframework/python-tuf/pull/1279 ## Context and Problem Statement In the course of implementing a class-based role metadata model we have also reviewed options on how to design serialization infrastructure between wire formats and the class model. In an initial attempt we have implemented serialization on the metadata class (see option 1), but issues with inheritance and calls for more flexibility have caused us to rethink this approach. ## Decision Drivers * A class-based role metadata model (see ADR4) requires serialization routines from and to wire formats * TUF integrators may require custom serialization implementations for custom wire formats * Readability and simplicity of implementation for users and maintainers * Recognizability of specification ## Considered Options 1. Serialization in metadata classes 2. Serialization in metadata subclasses 3. Serialization separated from metadata classes 4. Compromise 1: Default serialization methods in metadata classes / non-default serialization separated 5. Compromise 2: Serialization separated / dict conversion helper methods for default serialization in metadata classes ## Decision Outcome Chosen option: "Compromise 2", because implementing dict conversion as methods on a corresponding class is idiomatic and allows for well-structured code. Together with a separated serialization interface, it provides both ease of use and maintenance, and full flexibility with regards to custom serialization implementations and wire formats. ## Pros and Cons of the Options ### Option 1: Serialization in metadata classes Serialization is implemented on metadata classes, e.g. `Metadata.serialize_as_json()`, etc. 
* Good, because serialization for any object is encapsulated within the corresponding class and thus structured in small code chunks, using the already existing hierarchical class model structure. * Good, because the TUF specification is heavily based on json, even if only for illustrative purposes, thus this option facilitates recognizability. * Bad, because it might suggest that TUF is limited to json alone. * Bad, because it does not facilitate custom serialization implementations. * Bad, because it can get complicated with inheritance in the class model. *NOTE: a workaround exists in #1279.* ### Option 2: Serialization in metadata subclasses Serialization is implemented on metadata subclasses, e.g. `JsonMetadata.serialize()`, etc. * Good, because the wire format is decoupled from the base classes, not giving the impression that TUF is limited to json, and facilitating custom implementations. * Bad, because a user needs to decide on serialization ahead of time, when instantiating the metadata objects. * Bad, because the metadata model has many classes, which would all need to be subclassed accordingly. ### Option 3: Serialization separated from metadata classes Serialization is implemented independently of the metadata class, e.g. by defining an abstract `Serializer` interface, which must be implemented in subclasses, e.g. `JsonSerializer`, etc. * Good, because the wire format is completely decoupled from the class model, not giving the impression that TUF is limited to json, and facilitating custom implementations. * Good, because it can serve as exact blueprint for custom implementations. * Bad, because a decoupled serialization implementation needs to "re-implement" the entire class hierarchy, likely in a procedural manner. ### Option 4: Compromise 1 Default json serialization is implemented on the metadata class as described in (1), but can be overridden using an independent `Serializer` interface as described in (3). * Good, for the reasons outlined in options (1) and (3), i.e. encapsulation within classes but decoupled class model and wire format. * Bad, because it creates two different code paths for default and non-default wire formats making the code more complex and prone to deteriorate, especially on the non-default path. * Bad, because the on-the-class default implementation can not be used as blueprint for custom implementations. ### Option 5: Compromise 2 Serialization is implemented independently of the metadata class as described in (3). However, the *meat* of the default `JsonSerializer`, i.e. conversion between metadata objects and dicts, is implemented on the metadata class, e.g. as `Metadata.to_dict()`, etc. * Good, for the reasons outlined in options (1) and (3), i.e. encapsulation within classes but decoupled class model and wire format, without the disadvantage in (4) of having two completely different code paths. * Good, because it makes the separate default serializer a minimal wrapper around the dict conversion methods. * Good, because other serialization implementations might also make use of dict conversion methods. * Good, because conversion between class objects and dicts is akin to type casting, which is idiomatic to implement on the class. * Bad, because the on-the-class default implementation can not be used as blueprint for custom implementations. 
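To make the compromise concrete, here is a minimal sketch of option 5 — names and signatures are simplified and are not the exact python-tuf API. The class owns the object-to-dict casting, while the default serializer is a thin, replaceable wrapper around it:

```python
import abc
import json


class Metadata:
    """Class model: owns the object <-> dict casting (option 5)."""

    def __init__(self, signed: dict, signatures: list):
        self.signed = signed
        self.signatures = signatures

    def to_dict(self) -> dict:
        return {"signed": self.signed, "signatures": self.signatures}

    @classmethod
    def from_dict(cls, data: dict) -> "Metadata":
        return cls(data["signed"], data["signatures"])


class MetadataSerializer(abc.ABC):
    """Separate serializer interface: owns the wire format."""

    @abc.abstractmethod
    def serialize(self, metadata: Metadata) -> bytes: ...


class JSONSerializer(MetadataSerializer):
    """Default serializer is a minimal wrapper around to_dict()."""

    def serialize(self, metadata: Metadata) -> bytes:
        return json.dumps(metadata.to_dict(), sort_keys=True).encode()
```

A custom wire format then only needs another `MetadataSerializer` subclass; the dict conversion on the class can be reused or bypassed as needed.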
## Links * [ADR4: Add classes for complex metadata attributes (decision driver)](/Users/lukp/tuf/tuf/docs/adr/0004-extent-of-OOP-in-metadata-model.md) * [PR: Add simple TUF role metadata model (implements option 1)](https://github.com/theupdateframework/python-tuf/pull/1112) - [details about separation of serialization and instantiation](https://github.com/theupdateframework/python-tuf/commit/f63dce6dddb9cfbf8986141340c6fac00a36d46e) - [code comment about issues with inheritance](https://github.com/theupdateframework/python-tuf/blob/9401059101b08a18abc5e3be4d60e18670693f62/tuf/api/metadata.py#L297-L306) * [PR: New metadata API: add MetadataInfo and TargetFile classes (recent ADR discussion impetus)](https://github.com/theupdateframework/python-tuf/pull/1223) - [more discussion about issues with inheritance](https://github.com/theupdateframework/python-tuf/pull/1223#issuecomment-737188686) * [SSLIB/Issue: Add metadata container classes (comparison of options 1 and 2)](https://github.com/secure-systems-lab/securesystemslib/issues/272) * [tuf-on-a-plane parser (implements option 3)](https://github.com/trishankatdatadog/tuf-on-a-plane/blob/master/src/tuf_on_a_plane/parsers/) tuf-6.0.0/docs/adr/0008-accept-unrecognised-fields.md0000644000000000000000000000516313615410400017033 0ustar00# Accept metadata that includes unrecognized fields - Status: accepted - Date: 2021-04-08 Technical Story: https://github.com/theupdateframework/python-tuf/issues/1266 ## Context and Problem Statement The current reference implementation will ignore unrecognized fields in a metadata file when loading it. This leads to the side effect that if you read a metadata file with unrecognized fields and immediately write it back to the disk, this file will be modified. Furthermore, some TAPs like: - [TAP 6](https://github.com/theupdateframework/taps/blob/master/tap6.md) - [TAP 10](https://github.com/theupdateframework/taps/blob/master/tap10.md) - [TAP 14](https://github.com/theupdateframework/taps/blob/master/tap14.md) - [TAP 15](https://github.com/theupdateframework/taps/blob/master/tap15.md) - [TAP 16](https://github.com/theupdateframework/taps/blob/master/tap16.md) are relying on that unrecognized fields will be accepted to introduce new fields to the specification without making the metadata invalid for older clients who don't recognize the field. ## Decision Drivers - The TUF specification implies support for unrecognized attribute-value fields, see [Document formats](https://theupdateframework.github.io/specification/latest/#document-formats) - If we perform the following operations on a metadata file with no intermediate operations: 1. read the metadata file 2. write the metadata file back to the disk then, the checksum (the content) of the file must not be changed. - Flexibility to add new fields in the spec without adding breaking changes. - Don't store unrecognized fields when it is not allowed by the specification. ## Considered Options - Ignore and drop unrecognized fields. - Ignore, but store unrecognized fields as an additional attribute. - Ignore, but store unrecognized fields as an additional attribute except for a couple of places where it's not allowed by the specification. ## Decision Outcome Chosen option: "Ignore, but store unrecognized fields as an additional attribute except for a couple of places where it's not allowed by the specification." 
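In practice the chosen behaviour looks roughly like the following sketch with the resulting Metadata API (the `x-custom` field name is made up for the example, and `from_dict` may consume the dict it is given, hence the copy):

```python
import copy
from datetime import datetime, timedelta, timezone

from tuf.api.metadata import Root

expires = datetime.now(timezone.utc).replace(microsecond=0) + timedelta(days=1)
wire = Root(expires=expires).to_dict()
wire["x-custom"] = {"note": "a field this client version does not recognize"}

# Parsing keeps the unknown field around instead of dropping it ...
root = Root.from_dict(copy.deepcopy(wire))
assert root.unrecognized_fields["x-custom"]["note"].startswith("a field")

# ... and writing the metadata back reproduces the original content.
assert root.to_dict() == wire
```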
The motivation for this decision is that the TUF specification already implies that we should accept unrecognized fields for backward compatibility and easier future extensibility. Additionally, it seems unacceptable to change a metadata file content just by reading and writing it back. There are exceptions however for places in the metadata format when it is not allowed by specification: keys, roles, meta, hashes, and targets are actual dictionaries (vs JSON objects that most structures in the format are) where `unrecognized field` is not a meaningful concept. tuf-6.0.0/docs/adr/0009-what-is-a-reference-implementation.md0000644000000000000000000000535013615410400020415 0ustar00# Primary purpose of the reference implementation * Status: accepted * Date: 2021-08-25 ## Context and Problem Statement The original goal for the reference implementation refactor was to provide an implementation which is both an aid to understanding the specification and a good architecture for other implementations to mimic. During refactoring efforts on the metadata API and ngclient, several friction points have arisen where a safe object-oriented API would result in a less direct mapping to the [Document formats] in the specification. The archetypal example friction point is that [Timestamp] lists snapshot _only_ in a `meta` dictionary of `METAPATH` -> attribute fields. The dictionary will only ever contain one value and creates an extra level of indirection for implementations which try to map to the file format. When presented with such cases, we have considered multiple options: * Strict mapping to the [Document formats] * Simple and safe API in preference to mapping to the [Document formats] * Strict mapping to the [Document formats] with additional convenience API which is documented as the preferred interface for users So far implementation has tended towards the final option, but this is unsatisfying because: * the API contains traps for the unsuspecting users * two code paths to achieve the same goal is likely to result in inconsistent behaviour and bugs Therefore, we would like to define our primary purpose so that we can make consistent decisions. [Document formats]: https://theupdateframework.github.io/specification/latest/#document-formats [Timestamp]: https://theupdateframework.github.io/specification/latest/#file-formats-timestamp ## Decision Drivers * The reference implementation is often the starting point for new implementations, porting architecture of the reference implementation to new languages/frameworks * Reading reference implementation code is a common way to learn about TUF * The TUF formats include non-intuitive JSON object formats when mapping to OOP objects * Multiple code paths/API for the same feature is a common source of bugs ## Considered Options Primary purpose of the reference implementation is: * a learning resource to aid understanding of the specification (pedagogical reference) * a good architecture for other implementations to mimic (exemplary reference) ## Decision Outcome Primary purpose of the reference implementation is as an exemplary reference: providing a safe, consistent API for users and a good architecture for other implementations to mimic. 
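The timestamp friction point above illustrates the outcome well: the API exposes the single snapshot entry directly instead of forcing users through the one-entry `meta` dictionary, while serialization still produces the specification's document format. A small sketch, assuming the current Metadata API:

```python
from datetime import datetime, timedelta, timezone

from tuf.api.metadata import MetaFile, Timestamp

timestamp = Timestamp(
    expires=datetime.now(timezone.utc).replace(microsecond=0) + timedelta(days=1)
)

# Exemplary-reference API: no indirection through a one-entry dictionary
timestamp.snapshot_meta = MetaFile(version=3)

# The wire format still follows the spec's "meta" structure
assert timestamp.to_dict()["meta"]["snapshot.json"]["version"] == 3
```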
## Links * Discussed [on Slack](https://cloud-native.slack.com/archives/C01GT17AC5D/p1629357567021600) * Discussed in the [August 2021 TUF community meeting](https://hackmd.io/jdAk9rmPSpOYUdstbIvbjw#August-25-2021-Meeting) tuf-6.0.0/docs/adr/0010-repository-library-design.md0000644000000000000000000001413013615410400016760 0ustar00# Repository library design built on top of Metadata API ## Context and Problem Statement The Metadata API provides a modern Python API for accessing individual pieces of metadata. It does not provide any wider context help to someone looking to implement a TUF repository. The legacy python-tuf implementation offers tools for this but suffers from some issues (as do many other implementations): * There is a _very_ large amount of code to maintain: repo.py, repository_tool.py and repository_lib.py alone are almost 7000 lines of code. * The "library like" parts of the implementation do not form a good coherent API: methods routinely have a large number of arguments, code still depends on globals in a major way and application (repo.py) still implements a lot of "repository code" itself * The "library like" parts of the implementation make decisions that look like application decisions. As an example, repository_tool loads _every_ metadata file in the repository: this is fine for CLI that operates on a small repository but is unlikely to be a good choice for a large scale server. ## Decision Drivers * There is a consensus on removing the legacy code from python-tuf due to maintainability issues * Metadata API makes modifying metadata far easier than legacy code base: this makes significantly different designs possible * Not providing a "repository library" (and leaving implementers on their own) may be a short term solution because of the previous point, but to make adoption easier and to help adopters create safe implementations the project would benefit from some shared repository code and a shared repository design * Maintainability of new library code must be a top concern * Allowing a wide range of repository implementations (from CLI tools to minimal in-memory implementations to large scale application servers) would be good: unfortunately these can have wildly differing requirements ## Considered Options 1. No repository packages 2. repository_tool -like API 3. Minimal repository abstraction ## Decision Outcome Option 3: Minimal repository abstraction While option 1 might be used temporarily, the goal should be to implement a minimal repository abstraction as soon as possible: this should give the project a path forward where the maintenance burden is reasonable and results should be usable very soon. The python-tuf repository functionality can be later extended as ideas are experimented with in upstream projects and in python-tuf example code. The concept is still unproven but validating the design should be straight forward: decision could be re-evaluated in a few months if not in weeks. ## Pros and Cons of the Options ### No repository packages Metadata API makes editing the repository content vastly simpler. There are already repository implementations built with it[^1] so clearly a repository library is not an absolute requirement. Not providing repository packages in python-tuf does mean that external projects could experiment and create implementations without adding to the maintenance burden of python-tuf. This would be the easiest way to iterate many different designs and hopefully find good ones in the end. 
That said, there are some tricky parts of repository maintenance (e.g. initialization, snapshot update, hashed bin management) that would benefit from having a canonical implementation, both for easier adoption of python-tuf and as a reference for other implementations. Likewise, a well designed library could make some repeated actions (e.g. version bumps, expiry updates, signing) much easier to manage. ### repository_tool -like API It won't be possible to support the repository_tool API as it is but a similar one would certainly be an option. This would likely be the easiest upgrade path for any repository_tool users out there. The implementation would not be a huge amount of work as Metadata API makes many things easier. However, repository_tool (and parts of repo.py) are not a great API. It is likely that a similar API suffers from some of the same issues: it might end up being a substantial amount of code that is only a good fit for one application. ### Minimal repository abstraction python-tuf could define a tiny repository API that * provides carefully selected core functionality (like core snapshot update) * does not implement all repository actions itself, instead it makes it easy for the application code to do them * leaves application details to specific implementations (examples of decisions a library should not always decide: "are targets stored with the repo?", "which versions of metadata are stored?", "when to load metadata?", "when to unload metadata?", "when to bump metadata version?", "what is the new expiry date?", "which targets versions should be part of new snapshot?") python-tuf could also provide one or more implementations of this abstraction as examples -- this could include a _repo.py_- or _repository_tool_-like implementation. This could be a compromise that allows: * low maintenance burden on python-tuf: initial library could be tiny * sharing the important, canonical parts of a TUF repository implementation * ergonomic repository modification, meaning most actions do not have to be in the core code * very different repository implementations using the same core code and the same abstract API The approach does have some downsides: * it's not a drop in replacement for repository_tool or repo.py * A prototype has been implemented (see Links below) but the concept is still unproven More details in [Design document](../repository-library-design.md). ## Links * [Design document for minimal repository abstraction](../repository-library-design.md) * [Prototype implementation of minimal repository abstraction](https://github.com/vmware-labs/repository-editor-for-tuf/) [^1]: [RepositorySimulator](https://github.com/theupdateframework/python-tuf/blob/develop/tests/repository_simulator.py) in python-tuf tests is an in-memory implementation, while [repository-editor-for-tuf](https://github.com/vmware-labs/repository-editor-for-tuf) is an external Command line repository maintenance tool. tuf-6.0.0/docs/adr/index.md0000644000000000000000000000276113615410400012370 0ustar00# Architectural Decision Log This log lists the architectural decisions for tuf. 
- [ADR-0000](0000-use-markdown-architectural-decision-records.md) - Use Markdown Architectural Decision Records - [ADR-0001](0001-python-version-3-6-plus.md) - Default to Python 3.6 or newer for new development - [ADR-0002](0002-pre-1-0-deprecation-strategy.md) - Deprecation strategy - [ADR-0003](0003-where-to-develop-TUF-1-0-0.md) - Develop TUF 1.0.0 in a subdirectory of the current TUF implementation - [ADR-0004](0004-extent-of-OOP-in-metadata-model.md) - Add classes for complex metadata attributes - [ADR-0005](0005-use-google-python-style-guide.md) - Use Google Python style guide with minimal refinements - [ADR-0006](0006-where-to-implemenent-model-serialization.md) - Separate metadata serialization from metadata class model but keep helpers - [ADR-0008](0008-accept-unrecognised-fields.md) - Accept metadata that includes unrecognized fields - [ADR-0009](0009-what-is-a-reference-implementation.md) - Primary purpose of the reference implementation - [ADR-0010](0010-repository-library-design.md) - Repository library design built on top of Metadata API For new ADRs, please use [template.md](template.md) as basis. More information on MADR is available at . General information about architectural decision records is available at . tuf-6.0.0/docs/adr/template.md0000644000000000000000000000427613615410400013077 0ustar00# [short title of solved problem and solution] * Status: [proposed | rejected | accepted | deprecated | … | superseded by [ADR-0005](0005-example.md)] * Deciders: [list everyone involved in the decision] * Date: [YYYY-MM-DD when the decision was last updated] Technical Story: [description | ticket/issue URL] ## Context and Problem Statement [Describe the context and problem statement, e.g., in free form using two to three sentences. You may want to articulate the problem in form of a question.] ## Decision Drivers * [driver 1, e.g., a force, facing concern, …] * [driver 2, e.g., a force, facing concern, …] * … ## Considered Options * [option 1] * [option 2] * [option 3] * … ## Decision Outcome Chosen option: "[option 1]", because [justification. e.g., only option, which meets k.o. criterion decision driver | which resolves force force | … | comes out best (see below)]. ### Positive Consequences * [e.g., improvement of quality attribute satisfaction, follow-up decisions required, …] * … ### Negative Consequences * [e.g., compromising quality attribute, follow-up decisions required, …] * … ## Pros and Cons of the Options ### [option 1] [example | description | pointer to more information | …] * Good, because [argument a] * Good, because [argument b] * Bad, because [argument c] * … ### [option 2] [example | description | pointer to more information | …] * Good, because [argument a] * Good, because [argument b] * Bad, because [argument c] * … ### [option 3] [example | description | pointer to more information | …] * Good, because [argument a] * Good, because [argument b] * Bad, because [argument c] * … ## Links * [Link type] [Link to ADR] * … tuf-6.0.0/docs/api/api-reference.rst0000644000000000000000000000243613615410400014200 0ustar00API Reference ===================== TUF provides multiple APIs: * The low-level :doc:`tuf.api` provides access to a Metadata file abstraction that closely follows the TUF specification's `document formats`_. This API handles de/serialization to and from files and makes it easier to access and modify metadata content safely. It is purely focused on individual pieces of Metadata and provides no concepts like "repository" or "update workflow". 
* The `client update workflow`_ is implemented in the :doc:`tuf.ngclient` module: It is a higher-level API that provides ways to query and download target files securely, while handling the TUF update workflow behind the scenes. ngclient is implemented on top of the Metadata API and can be used to implement various TUF clients with relatively little effort. Code `examples `_ are available for client implementation using ngclient and a basic repository using Metadata API. .. toctree:: :maxdepth: 2 :caption: Contents: tuf.api tuf.ngclient .. _client update workflow: https://theupdateframework.github.io/specification/latest/#detailed-client-workflow .. _document formats: https://theupdateframework.github.io/specification/latest/#document-formats tuf-6.0.0/docs/api/tuf.api.metadata.metadata.rst0000644000000000000000000000013313615410400016367 0ustar00Metadata class --------------------------------- .. autoclass:: tuf.api.metadata.Metadata tuf-6.0.0/docs/api/tuf.api.metadata.root.rst0000644000000000000000000000012313615410400015571 0ustar00Root class --------------------------------- .. autoclass:: tuf.api.metadata.Root tuf-6.0.0/docs/api/tuf.api.metadata.snapshot.rst0000644000000000000000000000013313615410400016446 0ustar00Snapshot class --------------------------------- .. autoclass:: tuf.api.metadata.Snapshot tuf-6.0.0/docs/api/tuf.api.metadata.supporting.rst0000644000000000000000000000131413615410400017023 0ustar00Supporting classes --------------------------------- The Metadata API includes multiple classes that are used by the top-level ones (Root, Timestamp, Snapshot, Targets): .. autosummary:: :nosignatures: tuf.api.metadata.DelegatedRole tuf.api.metadata.Delegations tuf.api.metadata.Key tuf.api.metadata.MetaFile tuf.api.metadata.Role tuf.api.metadata.TargetFile tuf.api.metadata.SuccinctRoles .. autoclass:: tuf.api.metadata.DelegatedRole .. autoclass:: tuf.api.metadata.Delegations .. autoclass:: tuf.api.metadata.Key .. autoclass:: tuf.api.metadata.MetaFile .. autoclass:: tuf.api.metadata.Role .. autoclass:: tuf.api.metadata.TargetFile .. autoclass:: tuf.api.metadata.SuccinctRolestuf-6.0.0/docs/api/tuf.api.metadata.targets.rst0000644000000000000000000000013113615410400016256 0ustar00Targets class --------------------------------- .. autoclass:: tuf.api.metadata.Targets tuf-6.0.0/docs/api/tuf.api.metadata.timestamp.rst0000644000000000000000000000013513615410400016614 0ustar00Timestamp class --------------------------------- .. autoclass:: tuf.api.metadata.Timestamp tuf-6.0.0/docs/api/tuf.api.rst0000644000000000000000000000053013615410400013032 0ustar00Metadata API =============== .. toctree:: tuf.api.metadata.metadata tuf.api.metadata.root tuf.api.metadata.timestamp tuf.api.metadata.snapshot tuf.api.metadata.targets .. toctree:: :hidden: tuf.api.metadata.supporting tuf.api.serialization .. automodule:: tuf.api.metadata :no-members: :no-inherited-members: tuf-6.0.0/docs/api/tuf.api.serialization.rst0000644000000000000000000000030713615410400015710 0ustar00Serialization ============================= .. automodule:: tuf.api.serialization JSON serialization ----------------------------- .. automodule:: tuf.api.serialization.json :show-inheritance: tuf-6.0.0/docs/api/tuf.ngclient.config.rst0000644000000000000000000000012413615410400015327 0ustar00Configuration ============= .. automodule:: tuf.ngclient.config :undoc-members: tuf-6.0.0/docs/api/tuf.ngclient.fetcher.rst0000644000000000000000000000027013615410400015504 0ustar00Fetcher ============ .. 
autoclass:: tuf.ngclient.FetcherInterface :undoc-members: :private-members: _fetch .. autoclass:: tuf.ngclient.Urllib3Fetcher :no-inherited-members: tuf-6.0.0/docs/api/tuf.ngclient.rst0000644000000000000000000000104113615410400014062 0ustar00ngclient ======== The ngclient module contains a complete TUF client library implementation. * :doc:`tuf.ngclient.updater` implements the `detailed client workflow`_ * :doc:`tuf.ngclient.config` provides optional configuration for the updater * :doc:`tuf.ngclient.fetcher` can be used for optional low-level network I/O control .. toctree:: :hidden: tuf.ngclient.updater tuf.ngclient.config tuf.ngclient.fetcher .. _detailed client workflow: https://theupdateframework.github.io/specification/latest/#detailed-client-workflow tuf-6.0.0/docs/api/tuf.ngclient.updater.rst0000644000000000000000000000007013615410400015526 0ustar00Updater ========= .. automodule:: tuf.ngclient.updater tuf-6.0.0/examples/README.md0000644000000000000000000000021513615410400012326 0ustar00# Usage examples * [repository](repository) * [client](client) * [uploader tool](uploader) * [Low-level Metadata API examples](manual_repo) tuf-6.0.0/examples/client/README.md0000644000000000000000000000254613615410400013615 0ustar00# TUF Client Example TUF Client Example, using ``python-tuf``. This TUF Client Example implements the following actions: - Client Initialization - Target file download The client can be used against any TUF repository that serves metadata and targets under the same URL (in _/metadata/_ and _/targets/_ directories, respectively). The used TUF repository can be set with `--url` (default repository is "http://127.0.0.1:8001" which is also the default for the repository example). ### Usage with the repository example In one terminal, run the repository example and leave it running: ```console examples/repository/repo ``` In another terminal, run the client: ```console # initialize the client with Trust-On-First-Use ./client tofu # Then download example files from the repository: ./client download file1.txt ``` Note that unlike normal repositories, the example repository only exists in memory and is re-generated from scratch at every startup: This means your client needs to run `tofu` every time you restart the repository application. ### Usage with a repository on the internet ```console # On first use only, initialize the client with Trust-On-First-Use ./client --url https://jku.github.io/tuf-demo tofu # Then download example files from the repository: ./client --url https://jku.github.io/tuf-demo download demo/succinctly-delegated-1.txt ``` tuf-6.0.0/examples/client/client0000755000000000000000000001216213615410400013535 0ustar00#!/usr/bin/env python3 """TUF Client Example""" # Copyright 2012 - 2017, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 import argparse import logging import os import sys import traceback from hashlib import sha256 from pathlib import Path import urllib3 from tuf.api.exceptions import DownloadError, RepositoryError from tuf.ngclient import Updater # constants DOWNLOAD_DIR = "./downloads" CLIENT_EXAMPLE_DIR = os.path.dirname(os.path.abspath(__file__)) def build_metadata_dir(base_url: str) -> str: """build a unique and reproducible directory name for the repository url""" name = sha256(base_url.encode()).hexdigest()[:8] # TODO: Make this not windows hostile? 
return f"{Path.home()}/.local/share/tuf-example/{name}" def init_tofu(base_url: str) -> bool: """Initialize local trusted metadata (Trust-On-First-Use) and create a directory for downloads NOTE: This is unsafe and for demonstration only: the bootstrap root should be deployed alongside your updater application """ metadata_dir = build_metadata_dir(base_url) response = urllib3.request("GET", f"{base_url}/metadata/1.root.json") if response.status != 200: print(f"Failed to download initial root {base_url}/metadata/1.root.json") return False Updater( metadata_dir=metadata_dir, metadata_base_url=f"{base_url}/metadata/", target_base_url=f"{base_url}/targets/", target_dir=DOWNLOAD_DIR, bootstrap=response.data, ) print(f"Trust-on-First-Use: Initialized new root in {metadata_dir}") return True def download(base_url: str, target: str) -> bool: """ Download the target file using ``ngclient`` Updater. The Updater refreshes the top-level metadata, get the target information, verifies if the target is already cached, and in case it is not cached, downloads the target file. Returns: A boolean indicating if process was successful """ metadata_dir = build_metadata_dir(base_url) if not os.path.isfile(f"{metadata_dir}/root.json"): print( "Trusted local root not found. Use 'tofu' command to " "Trust-On-First-Use or copy trusted root metadata to " f"{metadata_dir}/root.json" ) return False print(f"Using trusted root in {metadata_dir}") try: # NOTE: initial root should be provided with ``bootstrap`` argument: # This examples uses unsafe Trust-On-First-Use initialization so it is # not possible here. updater = Updater( metadata_dir=metadata_dir, metadata_base_url=f"{base_url}/metadata/", target_base_url=f"{base_url}/targets/", target_dir=DOWNLOAD_DIR, ) updater.refresh() info = updater.get_targetinfo(target) if info is None: print(f"Target {target} not found") return True path = updater.find_cached_target(info) if path: print(f"Target is available in {path}") return True path = updater.download_target(info) print(f"Target downloaded and available in {path}") except (OSError, RepositoryError, DownloadError) as e: print(f"Failed to download target {target}: {e}") if logging.root.level < logging.ERROR: traceback.print_exc() return False return True def main() -> str | None: """Main TUF Client Example function""" client_args = argparse.ArgumentParser(description="TUF Client Example") # Global arguments client_args.add_argument( "-v", "--verbose", help="Output verbosity level (-v, -vv, ...)", action="count", default=0, ) client_args.add_argument( "-u", "--url", help="Base repository URL", default="http://127.0.0.1:8001", ) # Sub commands sub_command = client_args.add_subparsers(dest="sub_command") # Trust-On-First-Use sub_command.add_parser( "tofu", help="Initialize client with Trust-On-First-Use", ) # Download download_parser = sub_command.add_parser( "download", help="Download a target file", ) download_parser.add_argument( "target", metavar="TARGET", help="Target file", ) command_args = client_args.parse_args() if command_args.verbose == 0: loglevel = logging.ERROR elif command_args.verbose == 1: loglevel = logging.WARNING elif command_args.verbose == 2: loglevel = logging.INFO else: loglevel = logging.DEBUG logging.basicConfig(level=loglevel) # initialize the TUF Client Example infrastructure if command_args.sub_command == "tofu": if not init_tofu(command_args.url): return "Failed to initialize local repository" elif command_args.sub_command == "download": if not download(command_args.url, command_args.target): 
return f"Failed to download {command_args.target}" else: client_args.print_help() return None if __name__ == "__main__": sys.exit(main()) tuf-6.0.0/examples/manual_repo/basic_repo.py0000644000000000000000000003426313615410400016043 0ustar00""" A TUF repository example using the low-level TUF Metadata API. The example code in this file demonstrates how to *manually* create and maintain repository metadata using the low-level Metadata API. It implements similar functionality to that of the deprecated legacy 'repository_tool' and 'repository_lib'. (see ADR-0010 for details about repository library design) Contents: * creation of top-level metadata * target file handling * consistent snapshots * key management * top-level delegation and signing thresholds * target delegation * in-band and out-of-band metadata signing * writing and reading metadata files * root key rotation NOTE: Metadata files will be written to a 'tmp*'-directory in CWD. """ from __future__ import annotations import os import tempfile from datetime import datetime, timedelta, timezone from pathlib import Path from securesystemslib.signer import CryptoSigner, Signer from tuf.api.metadata import ( SPECIFICATION_VERSION, DelegatedRole, Delegations, Metadata, MetaFile, Root, Snapshot, TargetFile, Targets, Timestamp, ) from tuf.api.serialization.json import JSONSerializer def _in(days: float) -> datetime: """Adds 'days' to now and returns datetime object w/o microseconds.""" return datetime.now(timezone.utc).replace(microsecond=0) + timedelta( days=days ) # Create top-level metadata # ========================= # Every TUF repository has at least four roles, i.e. the top-level roles # 'targets', 'snapshot', 'timestamp' and 'root'. Below we will discuss their # purpose, show how to create the corresponding metadata, and how to use them # to provide integrity, consistency and freshness for the files TUF aims to # protect, i.e. target files. # Common fields # ------------- # All roles have the same metadata container format, for which the metadata API # provides a generic 'Metadata' class. This class has two fields, one for # cryptographic signatures, i.e. 'signatures', and one for the payload over # which signatures are generated, i.e. 'signed'. The payload must be an # instance of either 'Targets', 'Snapshot', 'Timestamp' or 'Root' class. Common # fields in all of these 'Signed' classes are: # # spec_version -- The supported TUF specification version number. # version -- The metadata version number. # expires -- The metadata expiry date. # # The 'version', which is incremented on each metadata change, is used to # reference metadata from within other metadata, and thus allows for repository # consistency in addition to protecting against rollback attacks. # # The date the metadata 'expires' protects against freeze attacks and allows # for implicit key revocation. Choosing an appropriate expiration interval # depends on the volatility of a role and how easy it is to re-sign them. # Highly volatile roles (timestamp, snapshot, targets), usually have shorter # expiration intervals, whereas roles that change less and might use offline # keys (root, delegating targets) may have longer expiration intervals. SPEC_VERSION = ".".join(SPECIFICATION_VERSION) # Define containers for role objects and cryptographic keys created below. This # allows us to sign and write metadata in a batch more easily. 
roles: dict[str, Metadata] = {} signers: dict[str, Signer] = {} # Targets (integrity) # ------------------- # The targets role guarantees integrity for the files that TUF aims to protect, # i.e. target files. It does so by listing the relevant target files, along # with their hash and length. roles["targets"] = Metadata(Targets(expires=_in(7))) # For the purpose of this example we use the top-level targets role to protect # the integrity of this very example script. The metadata entry contains the # hash and length of this file at the local path. In addition, it specifies the # 'target path', which a client uses to locate the target file relative to a # configured mirror base URL. # # |----base artifact URL---||-------target path-------| # e.g. tuf-examples.org/artifacts/manual_repo/basic_repo.py local_path = Path(__file__).resolve() target_path = f"{local_path.parts[-2]}/{local_path.parts[-1]}" target_file_info = TargetFile.from_file(target_path, str(local_path)) roles["targets"].signed.targets[target_path] = target_file_info # Snapshot (consistency) # ---------------------- # The snapshot role guarantees consistency of the entire repository. It does so # by listing all available targets metadata files at their latest version. This # becomes relevant, when there are multiple targets metadata files in a # repository and we want to protect the client against mix-and-match attacks. roles["snapshot"] = Metadata(Snapshot(expires=_in(7))) # Timestamp (freshness) # --------------------- # The timestamp role guarantees freshness of the repository metadata. It does # so by listing the latest snapshot (which in turn lists all the latest # targets) metadata. A short expiration interval requires the repository to # regularly issue new timestamp metadata and thus protects the client against # freeze attacks. # # Note that snapshot and timestamp use the same generic wireline metadata # format. But given that timestamp metadata always has only one entry in its # 'meta' field, i.e. for the latest snapshot file, the timestamp object # provides the shortcut 'snapshot_meta'. roles["timestamp"] = Metadata(Timestamp(expires=_in(1))) # Root (root of trust) # -------------------- # The root role serves as root of trust for all top-level roles, including # itself. It does so by mapping cryptographic keys to roles, i.e. the keys that # are authorized to sign any top-level role metadata, and signing thresholds, # i.e. how many authorized keys are required for a given role (see 'roles' # field). This is called top-level delegation. # # In addition, root provides all public keys to verify these signatures (see # 'keys' field), and a configuration parameter that describes whether a # repository uses consistent snapshots (see section 'Persist metadata' below # for more details). # Create root metadata object roles["root"] = Metadata(Root(expires=_in(365))) # For this example, we generate one 'ed25519' key pair for each top-level role # using python-tuf's in-house crypto library. # See https://github.com/secure-systems-lab/securesystemslib for more details # about key handling, and don't forget to password-encrypt your private keys! for name in ["targets", "snapshot", "timestamp", "root"]: signers[name] = CryptoSigner.generate_ecdsa() roles["root"].signed.add_key(signers[name].public_key, name) # NOTE: We only need the public part to populate root, so it is possible to use # out-of-band mechanisms to generate key pairs and only expose the public part # to whoever maintains the root role. 
As a matter of fact, the very purpose of # signature thresholds is to avoid having private keys all in one place. # Signature thresholds # -------------------- # Given the importance of the root role, it is highly recommended to require a # threshold of multiple keys to sign root metadata. For this example we # generate another root key (you can pretend it's out-of-band) and increase the # required signature threshold. another_root_signer = CryptoSigner.generate_ecdsa() roles["root"].signed.add_key(another_root_signer.public_key, "root") roles["root"].signed.roles["root"].threshold = 2 # Sign top-level metadata (in-band) # ================================= # In this example we have access to all top-level signing keys, so we can use # them to create and add a signature for each role metadata. for name in ["targets", "snapshot", "timestamp", "root"]: roles[name].sign(signers[name]) # Persist metadata (consistent snapshot) # ====================================== # It is time to publish the first set of metadata for a client to safely # download the target file that we have registered for this example repository. # # For the purpose of this example we will follow the consistent snapshot naming # convention for all metadata. This means that each metadata file, must be # prefixed with its version number, except for timestamp. The naming convention # also affects the target files, but we don't cover this in the example. See # the TUF specification for more details: # https://theupdateframework.github.io/specification/latest/#writing-consistent-snapshots # # Also note that the TUF specification does not mandate a wireline format. In # this demo we use a non-compact JSON format and store all metadata in # temporary directory at CWD for review. PRETTY = JSONSerializer(compact=False) TMP_DIR = tempfile.mkdtemp(dir=os.getcwd()) for name in ["root", "targets", "snapshot"]: filename = f"{roles[name].signed.version}.{roles[name].signed.type}.json" path = os.path.join(TMP_DIR, filename) roles[name].to_file(path, serializer=PRETTY) roles["timestamp"].to_file( os.path.join(TMP_DIR, "timestamp.json"), serializer=PRETTY ) # Threshold signing (out-of-band) # =============================== # As mentioned above, using signature thresholds usually entails that not all # signing keys for a given role are in the same place. Let's briefly pretend # this is the case for the second root key we registered above, and we are now # on that key owner's computer. All the owner has to do is read the metadata # file, sign it, and write it back to the same file, and this can be repeated # until the threshold is satisfied. root_path = os.path.join(TMP_DIR, "1.root.json") root = Metadata.from_file(root_path) root.sign(another_root_signer, append=True) root.to_file(root_path, serializer=PRETTY) # Targets delegation # ================== # Similar to how the root role delegates responsibilities about integrity, # consistency and freshness to the corresponding top-level roles, a targets # role may further delegate its responsibility for target files (or a subset # thereof) to other targets roles. This allows creation of a granular trust # hierarchy, and further reduces the impact of a single role compromise. # # In this example the top-level targets role trusts a new "python-scripts" # targets role to provide integrity for any target file that ends with ".py". 
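# From the client's perspective: when resolving a target path such as
# "manual_repo/basic_repo.py", the client starts at the top-level targets role,
# follows the delegation whose path patterns match, and, because the delegation
# created below is marked as terminating, stops the search there whether or not
# the delegatee lists the file.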
delegatee_name = "python-scripts" signers[delegatee_name] = CryptoSigner.generate_ecdsa() # Delegatee # --------- # Create a new targets role, akin to how we created top-level targets above, and # add target file info from above according to the delegatee's responsibility. roles[delegatee_name] = Metadata[Targets]( signed=Targets( version=1, spec_version=SPEC_VERSION, expires=_in(7), targets={target_path: target_file_info}, ), signatures={}, ) # Delegator # --------- # Akin to top-level delegation, the delegator expresses its trust in the # delegatee by authorizing a threshold of cryptographic keys to provide # signatures for the delegatee metadata. It also provides the corresponding # public key store. # The delegation info defined by the delegator further requires the provision # of a unique delegatee name and constraints about the target files the # delegatee is responsible for, e.g. a list of path patterns. For details about # all configuration parameters see # https://theupdateframework.github.io/specification/latest/#delegations delegatee_key = signers[delegatee_name].public_key roles["targets"].signed.delegations = Delegations( keys={delegatee_key.keyid: delegatee_key}, roles={ delegatee_name: DelegatedRole( name=delegatee_name, keyids=[delegatee_key.keyid], threshold=1, terminating=True, paths=["manual_repo/*.py"], ), }, ) # Remove target file info from top-level targets (delegatee is now responsible) del roles["targets"].signed.targets[target_path] # Increase expiry (delegators should be less volatile) roles["targets"].signed.expires = _in(365) # Snapshot + Timestamp + Sign + Persist # ------------------------------------- # In order to publish a new consistent set of metadata, we need to update # dependent roles (snapshot, timestamp) accordingly, bumping versions of all # changed metadata. # Bump targets version roles["targets"].signed.version += 1 # Update snapshot to account for changed and new targets metadata roles["snapshot"].signed.meta["targets.json"].version = roles[ "targets" ].signed.version roles["snapshot"].signed.meta[f"{delegatee_name}.json"] = MetaFile(version=1) roles["snapshot"].signed.version += 1 # Update timestamp to account for changed snapshot metadata roles["timestamp"].signed.snapshot_meta.version = roles[ "snapshot" ].signed.version roles["timestamp"].signed.version += 1 # Sign and write metadata for all changed roles, i.e. all but root for role_name in ["targets", "python-scripts", "snapshot", "timestamp"]: roles[role_name].sign(signers[role_name]) # Prefix all but timestamp with version number (see consistent snapshot) filename = f"{role_name}.json" if role_name != "timestamp": filename = f"{roles[role_name].signed.version}.{filename}" roles[role_name].to_file(os.path.join(TMP_DIR, filename), serializer=PRETTY) # Root key rotation (recover from a compromise / key loss) # ======================================================== # TUF makes it easy to recover from a key compromise in-band. Given the trust # hierarchy through top-level and targets delegation you can easily # replace compromised or lost keys for any role using the delegating role, even # for the root role. # However, since root authorizes its own keys, it always has to be signed with # both the threshold of keys from the previous version and the threshold of # keys from the new version. This establishes a trusted line of continuity. # # In this example we will replace a root key, and sign a new version of root # with the threshold of old and new keys. 
Since one of the previous root keys # remains in place, it can be used to count towards the old and new threshold. new_root_signer = CryptoSigner.generate_ecdsa() roles["root"].signed.revoke_key(signers["root"].public_key.keyid, "root") roles["root"].signed.add_key(new_root_signer.public_key, "root") roles["root"].signed.version += 1 roles["root"].signatures.clear() for signer in [signers["root"], another_root_signer, new_root_signer]: roles["root"].sign(signer, append=True) roles["root"].to_file( os.path.join(TMP_DIR, f"{roles['root'].signed.version}.root.json"), serializer=PRETTY, ) tuf-6.0.0/examples/manual_repo/hashed_bin_delegation.py0000644000000000000000000002112213615410400020202 0ustar00""" A TUF hash bin delegation example using the low-level TUF Metadata API. The example code in this file demonstrates how to *manually* perform hash bin delegation using the low-level Metadata API. It implements similar functionality to that of the deprecated legacy 'repository_tool' and 'repository_lib'. (see ADR-0010 for details about repository library design) Contents: - Reusable hash bin delegation helpers - Basic hash bin delegation example See 'basic_repo.py' for a more comprehensive TUF metadata API example. NOTE: Metadata files will be written to a 'tmp*'-directory in CWD. """ from __future__ import annotations import hashlib import os import tempfile from datetime import datetime, timedelta, timezone from pathlib import Path from typing import TYPE_CHECKING from securesystemslib.signer import CryptoSigner, Signer from tuf.api.metadata import ( DelegatedRole, Delegations, Metadata, TargetFile, Targets, ) from tuf.api.serialization.json import JSONSerializer if TYPE_CHECKING: from collections.abc import Iterator def _in(days: float) -> datetime: """Adds 'days' to now and returns datetime object w/o microseconds.""" return datetime.now(timezone.utc).replace(microsecond=0) + timedelta( days=days ) roles: dict[str, Metadata[Targets]] = {} signers: dict[str, Signer] = {} # Hash bin delegation # =================== # Hash bin delegation allows to distribute a large number of target files over # multiple delegated targets metadata. The consequence is smaller metadata # files and thus a lower network overhead for repository-client communication. # # The assignment of target files to targets metadata is done automatically, # based on the hash of the target file name. More precisely, only a prefix of # the target file name hash is needed to assign it to the correct hash bin. # # The number of bins is the only number that needs to be configured. Everything # else is derived using the mathematical operations shown below. # # The right number of bins depends on the expected number of target files in a # repository. For the purpose of this example we choose ... NUMBER_OF_BINS = 32 # ..., which determines the length of any hash prefix # considered for bin assignment (PREFIX_LEN), how many hash prefixes are # covered by all bins (NUMBER_OF_PREFIXES), and how many prefixes are covered # by each individual bin (BIN_SIZE): # # The prefix length is the number of digits in the hexadecimal representation # (see 'x' in Python Format Specification) of the number of bins minus one # (counting starts at zero), i.e. ... PREFIX_LEN = len(f"{(NUMBER_OF_BINS - 1):x}") # ... 2. # # Compared to decimal, hexadecimal numbers can express higher numbers with # fewer digits and thus further decrease metadata sizes. With the above prefix # length of 2 we can represent at most ... NUMBER_OF_PREFIXES = 16**PREFIX_LEN # ... 
256 prefixes, i.e. 00, 01, ..., ff. # # If the number of bins is a power of two, hash prefixes are evenly distributed # over all bins, which allows us to calculate the uniform size of ... BIN_SIZE = NUMBER_OF_PREFIXES // NUMBER_OF_BINS # ... 8, where each bin is # responsible for a range of 8 prefixes, i.e. 00-07, 08-0f, ..., f8-ff. # Helpers # ------- def _bin_name(low: int, high: int) -> str: """Generates a bin name according to the hash prefixes the bin serves. The name is either a single hash prefix for bin size 1, or a range of hash prefixes otherwise. The prefix length is needed to zero-left-pad the hex representation of the hash prefix for uniform bin name lengths. """ if low == high: return f"{low:0{PREFIX_LEN}x}" return f"{low:0{PREFIX_LEN}x}-{high:0{PREFIX_LEN}x}" def generate_hash_bins() -> Iterator[tuple[str, list[str]]]: """Returns generator for bin names and hash prefixes per bin.""" # Iterate over the total number of hash prefixes in 'bin size'-steps to # generate bin names and a list of hash prefixes served by each bin. for low in range(0, NUMBER_OF_PREFIXES, BIN_SIZE): high = low + BIN_SIZE - 1 bin_name = _bin_name(low, high) hash_prefixes = [] for prefix in range(low, low + BIN_SIZE): hash_prefixes.append(f"{prefix:0{PREFIX_LEN}x}") yield bin_name, hash_prefixes def find_hash_bin(path: str) -> str: """Returns name of bin for target file based on the target path hash.""" # Generate hash digest of passed target path and take its prefix, given the # global prefix length for the given number of bins. hasher = hashlib.sha256() hasher.update(path.encode("utf-8")) target_name_hash = hasher.hexdigest() prefix = int(target_name_hash[:PREFIX_LEN], 16) # Find lower and upper bounds for hash prefix given its numerical value and # the general bin size for the given number of bins. low = prefix - (prefix % BIN_SIZE) high = low + BIN_SIZE - 1 return _bin_name(low, high) # Keys # ---- # Given that the primary concern of hash bin delegation is to reduce network # overhead, it is acceptable to reuse one signing key for all delegated # targets roles (bin-n). However, we do use a different key for the delegating # targets role (bins). Considering the high responsibility but also low # volatility of the bins role, it is recommended to require signature # thresholds and keep the keys offline in a real-world scenario. # NOTE: See "Targets delegation" and "Signature thresholds" paragraphs in # 'basic_repo.py' for more details for name in ["bin-n", "bins"]: signers[name] = CryptoSigner.generate_ecdsa() # Targets roles # ------------- # NOTE: See "Targets" and "Targets delegation" paragraphs in 'basic_repo.py' # example for more details about the Targets object. # Create preliminary delegating targets role (bins) and add public key for # delegated targets (bin_n) to key store. Delegation details are updated below. roles["bins"] = Metadata(Targets(expires=_in(365))) bin_n_key = signers["bin-n"].public_key roles["bins"].signed.delegations = Delegations( keys={bin_n_key.keyid: bin_n_key}, roles={}, ) # The hash bin generator yields an ordered list of incremental hash bin names # (ranges), plus the hash prefixes each bin is responsible for, e.g.: # # bin_n_name: 00-07 bin_n_hash_prefixes: 00 01 02 03 04 05 06 07 # 08-0f 08 09 0a 0b 0c 0d 0e 0f # 10-17 10 11 12 13 14 15 16 17 # ... ...
# f8-ff f8 f9 fa fb fc fd fe ff assert roles["bins"].signed.delegations.roles is not None for bin_n_name, bin_n_hash_prefixes in generate_hash_bins(): # Update delegating targets role (bins) with delegation details for each # delegated targets role (bin_n). roles["bins"].signed.delegations.roles[bin_n_name] = DelegatedRole( name=bin_n_name, keyids=[signers["bin-n"].public_key.keyid], threshold=1, terminating=False, path_hash_prefixes=bin_n_hash_prefixes, ) # Create delegated targets roles (bin_n) roles[bin_n_name] = Metadata(Targets(expires=_in(7))) # Add target file # --------------- # For the purpose of this example we will protect the integrity of this very # example script by adding its file info to the corresponding bin metadata. # NOTE: See "Targets" paragraph in 'basic_repo.py' example for more details # about adding target file infos to targets metadata. local_path = Path(__file__).resolve() target_path = f"{local_path.parts[-2]}/{local_path.parts[-1]}" target_file_info = TargetFile.from_file(target_path, str(local_path)) # The right bin for a target file is determined by the 'target_path' hash, e.g.: # # target_path: 'repo_example/hashed_bin_delegation.py' # target_path (hash digest): '85e1a6c06305bd9c1e15c7ae565fd16ea304bfc...' # # --> considered hash prefix '85', falls into bin '80-87' bin_for_target = find_hash_bin(target_path) roles[bin_for_target].signed.targets[target_path] = target_file_info # Sign and persist # ---------------- # Sign all metadata and write to temporary directory at CWD for review using # versioned file names. Most notably see '1.bins.json' and '1.80-87.json'. # NOTE: See "Persist metadata" paragraph in 'basic_repo.py' example for more # details about serialization formats and metadata file name conventions. PRETTY = JSONSerializer(compact=False) TMP_DIR = tempfile.mkdtemp(dir=os.getcwd()) for role_name, role in roles.items(): signer = signers["bins"] if role_name == "bins" else signers["bin-n"] role.sign(signer) filename = f"1.{role_name}.json" filepath = os.path.join(TMP_DIR, filename) role.to_file(filepath, serializer=PRETTY) tuf-6.0.0/examples/manual_repo/succinct_hash_bin_delegations.py0000644000000000000000000001457013615410400021760 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """ A TUF succinct hash bin delegation example using the low-level TUF Metadata API. The example code in this file demonstrates how to perform succinct hash bin delegation using the low-level Metadata API. Succinct hash bin delegation achieves a similar result as using a standard hash bin delegation, but the delegating metadata is smaller, resulting in fewer bytes to transfer and parse. See 'basic_repo.py' for a more comprehensive TUF metadata API example. For a comprehensive explanation of succinct hash bin delegation and the difference between succinct and standard hash bin delegation read: https://github.com/theupdateframework/taps/blob/master/tap15.md NOTE: Metadata files will be written to a 'tmp*'-directory in CWD. 
""" from __future__ import annotations import math import os import tempfile from datetime import datetime, timedelta, timezone from pathlib import Path from securesystemslib.signer import CryptoSigner from tuf.api.metadata import ( Delegations, Key, Metadata, SuccinctRoles, TargetFile, Targets, ) from tuf.api.serialization.json import JSONSerializer # Succinct hash bin delegation # ============================ # Succinct hash bin delegation aims to distribute a large number of target files # over multiple delegated targets metadata roles (bins). The consequence is # smaller metadata files and thus a lower network overhead for repository-client # communication. # # The assignment of target files to a target's metadata is done automatically, # based on the byte digest of the target file name. # # The number of bins, name prefix for all bins and key threshold are all # attributes that need to be configured. # Number of bins, bit length and bin number computation # ----------------------------------------------------- # Determining the correct number of bins is dependent on the expected number of # target files in a repository. For the purpose of this example we choose: NUMBER_OF_BINS = 32 # # The number of bins will determine the number of bits in a target path # considered in assigning the target to a bin. BIT_LENGTH = int(math.log2(NUMBER_OF_BINS)) # Delegated role (bin) name format # -------------------------------- # Each bin has a name in the format of f"{NAME_PREFIX}-{bin_number}". # # Name prefix is the common prefix of all delegated target roles (bins). # For our example it will be: NAME_PREFIX = "delegated_bin" # # The suffix "bin_number" is a zero-padded hexadecimal number of that # particular bin. # Keys and threshold # ------------------ # Succinct hash bin delegation uses the same key(s) to sign all bins. This is # acceptable because the primary concern of this type of delegation is to reduce # network overhead. For the purpose of this example only one key is required. THRESHOLD = 1 # Create one signing key for all bins, and one for the delegating targets role. bins_signer = CryptoSigner.generate_ecdsa() bins_key = bins_signer.public_key targets_signer = CryptoSigner.generate_ecdsa() # Delegating targets role # ----------------------- # Akin to regular targets delegation, the delegating role ships the public keys # of the delegated roles. However, instead of providing individual delegation # information about each role, one single `SuccinctRoles` object is used to # provide the information for all delegated roles (bins). # NOTE: See "Targets" and "Targets delegation" paragraphs in 'basic_repo.py' # example for more details about the Targets object. expiration_date = datetime.now(timezone.utc).replace(microsecond=0) + timedelta( days=7 ) targets = Metadata(Targets(expires=expiration_date)) succinct_roles = SuccinctRoles( keyids=[bins_key.keyid], threshold=THRESHOLD, bit_length=BIT_LENGTH, name_prefix=NAME_PREFIX, ) delegations_keys_info: dict[str, Key] = {} delegations_keys_info[bins_key.keyid] = bins_key targets.signed.delegations = Delegations( delegations_keys_info, roles=None, succinct_roles=succinct_roles ) # Delegated targets roles (bins) # ------------------------------ # We can use the SuccinctRoles object from the delegating role above to iterate # over all bin names in the delegation and create the corresponding metadata. 
assert targets.signed.delegations.succinct_roles is not None # make mypy happy delegated_bins: dict[str, Metadata[Targets]] = {} for delegated_bin_name in targets.signed.delegations.succinct_roles.get_roles(): delegated_bins[delegated_bin_name] = Metadata( Targets(expires=expiration_date) ) # Add target file inside a delegated role (bin) # --------------------------------------------- # For the purpose of this example we will protect the integrity of this # example script by adding its file info to the corresponding bin metadata. # NOTE: See "Targets" paragraph in 'basic_repo.py' example for more details # about adding target file infos to targets metadata. local_path = Path(__file__).resolve() target_path = f"{local_path.parts[-2]}/{local_path.parts[-1]}" target_file_info = TargetFile.from_file(target_path, str(local_path)) # We don't know yet in which delegated role (bin) our target belongs. # With SuccinctRoles.get_role_for_target() we can get the name of the delegated # role (bin) responsible for that target_path. target_bin = targets.signed.delegations.succinct_roles.get_role_for_target( target_path ) # In our example with NUMBER_OF_BINS = 32 and the current file as target_path # the target_bin is "delegated_bin-0d" # Now we can add the current target to the bin responsible for it. delegated_bins[target_bin].signed.targets[target_path] = target_file_info # Sign and persist # ---------------- # Sign all metadata and write to a temporary directory at CWD for review using # versioned file names. Most notably see '1.targets.json' and # '1.delegated_bin-0d.json'. # NOTE: See "Persist metadata" paragraph in 'basic_repo.py' example for more # details about serialization formats and metadata file name convention. PRETTY = JSONSerializer(compact=False) TMP_DIR = tempfile.mkdtemp(dir=os.getcwd()) targets.sign(targets_signer) targets.to_file(os.path.join(TMP_DIR, "1.targets.json"), serializer=PRETTY) for bin_name, bin_target_role in delegated_bins.items(): file_name = f"1.{bin_name}.json" file_path = os.path.join(TMP_DIR, file_name) bin_target_role.sign(bins_signer, append=True) bin_target_role.to_file(file_path, serializer=PRETTY) tuf-6.0.0/examples/repository/README.md0000644000000000000000000000203513615410400014547 0ustar00# TUF Repository Application Example :warning: This example uses the repository module which is not considered part of the python-tuf stable API quite yet. This TUF Repository Application Example has the following features: - Initializes a completely new repository on startup - Stores everything (metadata, targets, signing keys) in-memory - Serves metadata and targets on localhost (default port 8001) - Simulates a live repository by automatically adding a new target file every 10 seconds. - Exposes a small API for the [uploader tool example](../uploader/). API POST endpoints are: - `/api/role/`: For uploading new delegated targets metadata. Payload is new version of ROLEs metadata - `/api/delegation/`: For modifying or creating a delegation for ROLE. Payload is a dict with one keyid:Key pair ### Usage ```console ./repo ``` Your repository is now running and is accessible on localhost, See e.g. http://127.0.0.1:8001/metadata/1.root.json. The [client example](../client/README.md) uses this address by default. 
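For reference, the [uploader tool example](../uploader/) wraps these API calls.
A minimal sketch of calling the delegation endpoint directly (assuming the
default address above, a role name of `myrole`, and a freshly generated key)
could look like this:

```python
import json

from securesystemslib.signer import CryptoSigner
from urllib3 import request

# Generate a signing key for the new delegated role and submit its public part
# as a {keyid: key-dict} payload, the format expected by /api/delegation/<ROLE>.
signer = CryptoSigner.generate_ecdsa()
payload = {signer.public_key.keyid: signer.public_key.to_dict()}

response = request(
    "POST",
    "http://127.0.0.1:8001/api/delegation/myrole",
    body=json.dumps(payload),
    timeout=5,
)
print(response.status)  # expected to be 200 on success
```

Keep the `signer` (or persist its private key) if you intend to sign and upload
metadata for the new role afterwards.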
tuf-6.0.0/examples/repository/_simplerepo.py0000644000000000000000000002071313615410400016163 0ustar00# Copyright 2021-2022 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Simple example of using the repository library to build a repository""" from __future__ import annotations import copy import json import logging from collections import defaultdict from datetime import datetime, timedelta, timezone from securesystemslib.signer import CryptoSigner, Key, Signer from tuf.api.exceptions import RepositoryError from tuf.api.metadata import ( DelegatedRole, Delegations, Metadata, MetaFile, Root, RootVerificationResult, Signed, Snapshot, TargetFile, Targets, Timestamp, VerificationResult, ) from tuf.repository import Repository logger = logging.getLogger(__name__) _signed_init = { Root.type: Root, Snapshot.type: Snapshot, Targets.type: Targets, Timestamp.type: Timestamp, } class SimpleRepository(Repository): """Very simple in-memory repository implementation This repository keeps the metadata for all versions of all roles in memory. It also keeps all target content in memory. Attributes: role_cache: Every historical metadata version of every role in this repository. Keys are role names and values are lists of Metadata signer_cache: All signers available to the repository. Keys are role names, values are lists of signers target_cache: All target files served by the repository. Keys are target paths and values are file contents as bytes. """ expiry_period = timedelta(days=1) def __init__(self) -> None: # all versions of all metadata self.role_cache: dict[str, list[Metadata]] = defaultdict(list) # all current keys self.signer_cache: dict[str, list[Signer]] = defaultdict(list) # all target content self.target_cache: dict[str, bytes] = {} # version cache for snapshot and all targets, updated in close(). 
# The 'defaultdict(lambda: ...)' trick allows close() to easily modify # the version without always creating a new MetaFile self._snapshot_info = MetaFile(1) self._targets_infos: dict[str, MetaFile] = defaultdict( lambda: MetaFile(1) ) # setup a basic repository, generate signing key per top-level role with self.edit_root() as root: for role in ["root", "timestamp", "snapshot", "targets"]: signer = CryptoSigner.generate_ecdsa() self.signer_cache[role].append(signer) root.add_key(signer.public_key, role) for role in ["timestamp", "snapshot", "targets"]: with self.edit(role): pass @property def targets_infos(self) -> dict[str, MetaFile]: return self._targets_infos @property def snapshot_info(self) -> MetaFile: return self._snapshot_info def _get_verification_result( self, role: str, md: Metadata ) -> VerificationResult | RootVerificationResult: """Verify roles metadata using the existing repository metadata""" if role == Root.type: assert isinstance(md.signed, Root) root = self.root() previous = root if root.version > 0 else None return md.signed.get_root_verification_result( previous, md.signed_bytes, md.signatures ) if role in [Timestamp.type, Snapshot.type, Targets.type]: delegator: Signed = self.root() else: delegator = self.targets() return delegator.get_verification_result( role, md.signed_bytes, md.signatures ) def open(self, role: str) -> Metadata: """Return current Metadata for role from 'storage' (or create a new one) """ if role not in self.role_cache: signed_init = _signed_init.get(role, Targets) md = Metadata(signed_init()) # this makes version bumping in close() simpler md.signed.version = 0 return md # return latest metadata from storage (but don't return a reference) return copy.deepcopy(self.role_cache[role][-1]) def close(self, role: str, md: Metadata) -> None: """Store a version of metadata. 
Handle version bumps, expiry, signing""" md.signed.version += 1 md.signed.expires = datetime.now(timezone.utc) + self.expiry_period md.signatures.clear() for signer in self.signer_cache[role]: md.sign(signer, append=True) # Double check that we only write verified metadata vr = self._get_verification_result(role, md) if not vr: raise ValueError(f"Role {role} failed to verify") keyids = [keyid[:7] for keyid in vr.signed] verify_str = f"verified with keys [{', '.join(keyids)}]" logger.debug("Role %s v%d: %s", role, md.signed.version, verify_str) # store new metadata version, update version caches self.role_cache[role].append(md) if role == "snapshot": self._snapshot_info.version = md.signed.version elif role not in ["root", "timestamp"]: self._targets_infos[f"{role}.json"].version = md.signed.version def add_target(self, path: str, content: str) -> None: """Add a target to top-level targets metadata""" data = bytes(content, "utf-8") # add content to cache for serving to clients self.target_cache[path] = data # add a target in the targets metadata with self.edit_targets() as targets: targets.targets[path] = TargetFile.from_data(path, data) # update snapshot, timestamp self.do_snapshot() self.do_timestamp() def submit_delegation(self, rolename: str, data: bytes) -> bool: """Add a delegation to a (offline signed) delegated targets metadata""" try: logger.debug("Processing new delegation to role %s", rolename) keyid, keydict = next(iter(json.loads(data).items())) key = Key.from_dict(keyid, keydict) # add delegation and key role = DelegatedRole(rolename, [], 1, True, [f"{rolename}/*"]) with self.edit_targets() as targets: if targets.delegations is None: targets.delegations = Delegations({}, {}) if targets.delegations.roles is None: targets.delegations.roles = {} targets.delegations.roles[rolename] = role targets.add_key(key, rolename) except (RepositoryError, json.JSONDecodeError) as e: logger.info("Failed to add delegation for %s: %s", rolename, e) return False # update snapshot, timestamp self.do_snapshot() self.do_timestamp() return True def submit_role(self, role: str, data: bytes) -> bool: """Add a new version of a delegated roles metadata""" try: logger.debug("Processing new version for role %s", role) if role in ["root", "snapshot", "timestamp", "targets"]: raise ValueError("Only delegated targets are accepted") md = Metadata.from_bytes(data) for targetpath in md.signed.targets: if not targetpath.startswith(f"{role}/"): raise ValueError(f"targets allowed under {role}/ only") if md.signed.version != self.targets(role).version + 1: raise ValueError("Invalid version {md.signed.version}") except (RepositoryError, ValueError) as e: logger.info("Failed to add new version for %s: %s", role, e) return False # Check that we only write verified metadata vr = self._get_verification_result(role, md) if not vr: logger.info("Role %s failed to verify", role) return False keyids = [keyid[:7] for keyid in vr.signed] verify_str = f"verified with keys [{', '.join(keyids)}]" logger.debug("Role %s v%d: %s", role, md.signed.version, verify_str) # Checks passed: Add new delegated role version self.role_cache[role].append(md) self._targets_infos[f"{role}.json"].version = md.signed.version # To keep it simple, target content is generated from targetpath for targetpath in md.signed.targets: self.target_cache[targetpath] = bytes(f"{targetpath}", "utf-8") # update snapshot, timestamp self.do_snapshot() self.do_timestamp() return True tuf-6.0.0/examples/repository/repo0000755000000000000000000001060513615410400014165 
0ustar00#!/usr/bin/env python3 # Copyright 2021-2022 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Simple repository example application The application stores metadata and targets in memory, and serves them via http. Nothing is persisted on disk or loaded from disk. The application simulates a live repository by adding new target files periodically. """ import argparse import logging import sys from datetime import datetime from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer from time import time from typing import Dict, List from _simplerepo import SimpleRepository from tuf.api.serialization.json import JSONSerializer logger = logging.getLogger(__name__) class ReqHandler(BaseHTTPRequestHandler): """HTTP handler for the repository example application Serves metadata, targets and a small upload API using a SimpleRepository """ def do_POST(self): """Handle POST requests, aka the 'uploader API'""" content_len = int(self.headers.get("content-length", 0)) data = self.rfile.read(content_len) if self.path.startswith("/api/delegation/"): role = self.path[len("/api/delegation/") :] if not self.server.repo.submit_delegation(role, data): return self.send_error(400, f"Failed to delegate to {role}") elif self.path.startswith("/api/role/"): role = self.path[len("/api/role/") :] if not self.server.repo.submit_role(role, data): return self.send_error(400, f"Failed to submit role {role}") else: return self.send_error(404) self.send_response(200) self.end_headers() def do_GET(self): """Handle GET: metadata and target files""" data = None if self.path.startswith("/metadata/") and self.path.endswith(".json"): data = self.get_metadata( self.path[len("/metadata/") : -len(".json")] ) elif self.path.startswith("/targets/"): data = self.get_target(self.path[len("/targets/") :]) if data is None: self.send_error(404) else: self.send_response(200) self.send_header("Content-length", len(data)) self.end_headers() self.wfile.write(data) def get_metadata(self, ver_and_role: str): repo = self.server.repo ver_str, sep, role = ver_and_role.rpartition(".") if sep == "": # 0 will lead to list lookup with -1, meaning latest version ver = 0 else: ver = int(ver_str) if role not in repo.role_cache or ver > len(repo.role_cache[role]): return None # return metadata return repo.role_cache[role][ver - 1].to_bytes(JSONSerializer()) def get_target(self, targetpath: str): repo = self.server.repo # unimplement the dumb hashing scheme # TODO: maybe use hashed paths as the target_cache key dir, sep, hashname = targetpath.rpartition("/") _, _, name = hashname.partition(".") target = f"{dir}{sep}{name}" if target not in repo.target_cache: return None # send the target content return repo.target_cache[target] class RepositoryServer(ThreadingHTTPServer): def __init__(self, port: int): super().__init__(("127.0.0.1", port), ReqHandler) self.timeout = 1 self.repo = SimpleRepository() def main(argv: List[str]) -> None: """Example repository server""" parser = argparse.ArgumentParser() parser.add_argument("-v", "--verbose", action="count") parser.add_argument("-p", "--port", type=int, default=8001) args, _ = parser.parse_known_args(argv) level = logging.DEBUG if args.verbose else logging.INFO logging.basicConfig(level=level) server = RepositoryServer(args.port) last_change = 0 counter = 0 logger.info( f"Now serving. 
Root v1 at http://127.0.0.1:{server.server_port}/metadata/1.root.json" ) while True: # Simulate a live repository: Add a new target file every few seconds if time() - last_change > 10: last_change = int(time()) counter += 1 content = str(datetime.fromtimestamp(last_change)) server.repo.add_target(f"file{str(counter)}.txt", content) server.handle_request() if __name__ == "__main__": main(sys.argv) tuf-6.0.0/examples/uploader/README.md0000644000000000000000000000327213615410400014147 0ustar00# TUF Uploader Tool Example :warning: This example uses the repository module which is not considered part of the python-tuf stable API quite yet. This is an example maintainer tool: It makes it possible to add delegations to a remote repository, and then to upload delegated metadata to the repository. Features: - Initialization (much like the [client example](../client/)) - Claim delegation: this uses "unsafe repository API" in the sense that the uploader sends repository unsigned data. This operation can be compared to claiming a project name on PyPI.org - Add targetfile: Here uploader uses signing keys that were added to the delegation in the previous step to create a new version of the delegated metadata. The repository will verify signatures on this metadata. The used TUF repository can be set with `--url` (default repository is "http://127.0.0.1:8001" which is also the default for the repository example). In practice the uploader tool is only useful with the repository example. ### Usage with the repository example In one terminal, run the [repository example](../repository/) and leave it running: ```console examples/repository/repo ``` In another terminal, run uploader: ```console # Initialize with Trust-On-First-Use ./uploader tofu # Then claim a delegation for yourself (this also creates a new signing key): ./uploader add-delegation myrole # Then add a new downloadable target file to your delegated role (to keep the # example simple, the target file content is always the targetpath): ./uploader add-target myrole myrole/mytargetfile ``` At this point "myrole/mytargetfile" is downloadable from the repository with the [client example](../client/). tuf-6.0.0/examples/uploader/_localrepo.py0000644000000000000000000001203013615410400015351 0ustar00# Copyright 2021-2022 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """A Repository implementation for maintainer and developer tools""" from __future__ import annotations import contextlib import copy import json import logging import os from datetime import datetime, timedelta, timezone from securesystemslib.signer import CryptoSigner, Signer from urllib3 import request from tuf.api.exceptions import RepositoryError from tuf.api.metadata import Metadata, MetaFile, TargetFile, Targets from tuf.api.serialization.json import JSONSerializer from tuf.ngclient import Updater from tuf.repository import Repository logger = logging.getLogger(__name__) class LocalRepository(Repository): """A repository implementation that fetches data from a remote repository This implementation fetches metadata from a remote repository, potentially creates new versions of metadata, and submits to the remote repository. ngclient Updater is used to fetch metadata from remote server: this is good because we want to make sure the metadata we modify is verified, but also bad because we need some hacks to access the Updaters metadata. 
""" expiry_period = timedelta(days=1) def __init__(self, metadata_dir: str, key_dir: str, base_url: str): self.key_dir = key_dir if not os.path.isdir(self.key_dir): os.makedirs(self.key_dir) self.base_url = base_url self.updater = Updater( metadata_dir=metadata_dir, metadata_base_url=f"{base_url}/metadata/", ) self.updater.refresh() @property def targets_infos(self) -> dict[str, MetaFile]: raise NotImplementedError # we never call snapshot @property def snapshot_info(self) -> MetaFile: raise NotImplementedError # we never call timestamp def open(self, role: str) -> Metadata: """Return cached (or fetched) metadata""" # if there is a metadata version fetched from remote, use that # HACK: access Updater internals trusted_set = self.updater._trusted_set # noqa: SLF001 if role in trusted_set: # NOTE: The original signature wrapper (Metadata) was verified and # discarded upon inclusion in the trusted set. It is safe to use # a fresh wrapper. `close` will override existing signatures anyway. return Metadata(copy.deepcopy(trusted_set[role])) # otherwise we're creating metadata from scratch md = Metadata(Targets()) # this makes version bumping in close() simpler md.signed.version = 0 return md def close(self, role_name: str, md: Metadata) -> None: """Store a version of metadata. Handle version bumps, expiry, signing""" targets = self.targets() role = targets.get_delegated_role(role_name) public_key = targets.get_key(role.keyids[0]) uri = f"file2:{self.key_dir}/{role_name}" signer = Signer.from_priv_key_uri(uri, public_key) md.signed.version += 1 md.signed.expires = datetime.now(timezone.utc) + self.expiry_period md.sign(signer, append=False) # Upload using "api/role" uri = f"{self.base_url}/api/role/{role_name}" r = request("POST", uri, body=md.to_bytes(JSONSerializer()), timeout=5) if r.status != 200: raise RuntimeError(f"HTTP error {r.status}") def add_target(self, role: str, targetpath: str) -> bool: """Add target to roles metadata and submit new metadata version""" # HACK: make sure we have the roles metadata in updater._trusted_set # (or that we're publishing the first version) # HACK: Assume RepositoryError is because we're just publishing version # 1 (so the roles metadata does not exist on server yet) with contextlib.suppress(RepositoryError): self.updater.get_targetinfo(targetpath) data = bytes(targetpath, "utf-8") targetfile = TargetFile.from_data(targetpath, data) try: with self.edit_targets(role) as delegated: delegated.targets[targetpath] = targetfile except Exception as e: # noqa: BLE001 print(f"Failed to submit new {role} with added target: {e}") return False print(f"Uploaded role {role} v{delegated.version}") return True def add_delegation(self, role: str) -> bool: """Use the (unauthenticated) delegation adding API endpoint""" signer = CryptoSigner.generate_ecdsa() data = {signer.public_key.keyid: signer.public_key.to_dict()} url = f"{self.base_url}/api/delegation/{role}" r = request("POST", url, body=json.dumps(data), timeout=5) if r.status != 200: print(f"delegation failed with {r}") return False # Store the private key using rolename as filename with open(f"{self.key_dir}/{role}", "wb") as f: f.write(signer.private_bytes) print(f"Uploaded new delegation, stored key in {self.key_dir}/{role}") return True tuf-6.0.0/examples/uploader/uploader0000755000000000000000000001024713615410400014431 0ustar00#!/usr/bin/env python3 # Copyright 2021-2022 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Simple uploader tool example Uploader is a maintainer application that 
communicates with the repository example. Uploader controls offline signing keys and produces signed metadata that it sends to the repository application so that the metadata can be added to the repository. """ import argparse import logging import os import sys from hashlib import sha256 from pathlib import Path from typing import List, Optional from urllib import request from _localrepo import LocalRepository logger = logging.getLogger(__name__) def build_metadata_dir(base_url: str) -> str: """build a unique and reproducible metadata dirname for the repo url""" name = sha256(base_url.encode()).hexdigest()[:8] # TODO: Make this not windows hostile? return f"{Path.home()}/.local/share/tuf-upload-example/{name}" def build_key_dir(base_url: str) -> str: """build a unique and reproducible private key dir for the repository url""" name = sha256(base_url.encode()).hexdigest()[:8] # TODO: Make this not windows hostile? return f"{Path.home()}/.config/tuf-upload-example/{name}" def init_tofu(base_url: str) -> bool: """Initialize local trusted metadata (Trust-On-First-Use)""" metadata_dir = build_metadata_dir(base_url) if not os.path.isdir(metadata_dir): os.makedirs(metadata_dir) root_url = f"{base_url}/metadata/1.root.json" try: request.urlretrieve(root_url, f"{metadata_dir}/root.json") except OSError: print(f"Failed to download initial root from {root_url}") return False print(f"Trust-on-First-Use: Initialized new root in {metadata_dir}") return True def init(base_url: str) -> Optional[LocalRepository]: """Initialize a LocalRepository: local root.json must already exist""" metadata_dir = build_metadata_dir(base_url) keydir = build_key_dir(base_url) if not os.path.isfile(f"{metadata_dir}/root.json"): print( "Trusted local root not found. Use 'tofu' command to " "Trust-On-First-Use or copy trusted root metadata to " f"{metadata_dir}/root.json" ) return None print(f"Using trusted root in {metadata_dir}") return LocalRepository(metadata_dir, keydir, base_url) def main(argv: List[str]) -> None: """Example uploader tool""" parser = argparse.ArgumentParser() parser.add_argument("-v", "--verbose", action="count", default=0) parser.add_argument( "-u", "--url", help="Base repository URL", default="http://127.0.0.1:8001", ) subparsers = parser.add_subparsers(dest="sub_command") tofu_cmd = subparsers.add_parser( "tofu", help="Initialize client with Trust-On-First-Use", ) add_delegation_cmd = subparsers.add_parser( "add-delegation", help="Create a delegation and signing key", ) add_delegation_cmd.add_argument("rolename") add_target_cmd = subparsers.add_parser( "add-target", help="Add a target to a delegated role", ) add_target_cmd.add_argument("rolename") add_target_cmd.add_argument("targetpath") args = parser.parse_args() if args.verbose == 0: loglevel = logging.ERROR elif args.verbose == 1: loglevel = logging.WARNING elif args.verbose == 2: loglevel = logging.INFO else: loglevel = logging.DEBUG logging.basicConfig(level=loglevel) if args.sub_command == "tofu": if not init_tofu(args.url): return "Failed to initialize local repository" elif args.sub_command == "add-delegation": repo = init(args.url) if not repo: return "Failed to initialize" if not repo.add_delegation(args.rolename): return "Failed to add delegation" elif args.sub_command == "add-target": repo = init(args.url) if not repo: return "Failed to initialize" if not repo.add_target(args.rolename, args.targetpath): return "Failed to add target" else: parser.print_help() if __name__ == "__main__": sys.exit(main(sys.argv)) 
tuf-6.0.0/requirements/build.txt0000644000000000000000000000021313615410400013612 0ustar00# The build and tox versions specified here are also used as constraints # during CI and CD Github workflows build==1.2.2.post1 tox==4.1.2 tuf-6.0.0/requirements/dev.txt0000644000000000000000000000005113615410400013271 0ustar00-r build.txt -r test.txt -r lint.txt -e .tuf-6.0.0/requirements/docs.txt0000644000000000000000000000030113615410400013441 0ustar00# Install documentation build requirements # pinned tuf runtime dependencies (should auto-update and -trigger ci/cd) -r pinned.txt # install sphinx and its extensions sphinx sphinx-rtd-theme tuf-6.0.0/requirements/lint.txt0000644000000000000000000000053013615410400013463 0ustar00# Requirements needed in tox lint environment # pinned tuf runtime dependencies -r pinned.txt # Lint tools # (We are not so interested in the specific versions of the tools: the versions # are pinned to prevent unexpected linting failures when tools update) ruff==0.9.10 mypy==1.15.0 zizmor==1.4.1 # Required for type stubs freezegun==1.5.1 tuf-6.0.0/requirements/main.txt0000644000000000000000000000060013615410400013437 0ustar00# TUF runtime requirements plus securesystemslib with extra dependencies # # This file together with 'pip-compile' is used to generate a pinned # requirements file with all immediate and transitive dependencies. # # 'pinned.txt' is updated on GitHub with Dependabot, which # triggers CI/CD builds to automatically test against updated dependencies. # securesystemslib[crypto] urllib3 tuf-6.0.0/requirements/pinned.txt0000644000000000000000000000063313615410400013776 0ustar00# # This file is autogenerated by pip-compile with Python 3.11 # by the following command: # # pip-compile --output-file=requirements/pinned.txt --strip-extras requirements/main.txt # cffi==1.17.1 # via cryptography cryptography==44.0.2 # via securesystemslib pycparser==2.22 # via cffi securesystemslib==1.2.0 # via -r requirements/main.txt urllib3==2.3.0 # via -r requirements/main.txt tuf-6.0.0/requirements/test.txt0000644000000000000000000000031013615410400013470 0ustar00# Requirements needed in tox test environments # pinned tuf runtime dependencies (should auto-update and -trigger ci/cd) -r pinned.txt # coverage measurement coverage[toml]==7.6.12 freezegun==1.5.1 tuf-6.0.0/tests/__init__.py0000644000000000000000000000000013615410400012474 0ustar00tuf-6.0.0/tests/repository_simulator.py0000644000000000000000000003565713615410400015305 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test utility to simulate a repository RepositorySimulator provides methods to modify repository metadata so that it's easy to "publish" new repository versions with modified metadata, while serving the versions to client test code. RepositorySimulator implements FetcherInterface so Updaters in tests can use it as a way to "download" new metadata from remote: in practice no downloading, network connections or even file access happens as RepositorySimulator serves everything from memory. Metadata and targets "hosted" by the simulator are made available in URL paths "/metadata/..." and "/targets/..." respectively. 
Example:: # constructor creates repository with top-level metadata sim = RepositorySimulator() # metadata can be modified directly: it is immediately available to clients sim.snapshot.version += 1 # As an exception, new root versions require explicit publishing sim.root.version += 1 sim.publish_root() # there are helper functions sim.add_target("targets", b"content", "targetpath") sim.targets.version += 1 sim.update_snapshot() # Use the simulated repository from an Updater: updater = Updater( dir, "https://example.com/metadata/", "https://example.com/targets/", sim ) updater.refresh() """ from __future__ import annotations import datetime import logging import os import tempfile from dataclasses import dataclass, field from typing import TYPE_CHECKING from urllib import parse import securesystemslib.hash as sslib_hash from securesystemslib.signer import CryptoSigner, Signer from tuf.api.exceptions import DownloadHTTPError from tuf.api.metadata import ( SPECIFICATION_VERSION, TOP_LEVEL_ROLE_NAMES, DelegatedRole, Delegations, Metadata, MetaFile, Root, Snapshot, SuccinctRoles, TargetFile, Targets, Timestamp, ) from tuf.api.serialization.json import JSONSerializer from tuf.ngclient.fetcher import FetcherInterface if TYPE_CHECKING: from collections.abc import Iterator logger = logging.getLogger(__name__) SPEC_VER = ".".join(SPECIFICATION_VERSION) @dataclass class FetchTracker: """Fetcher counter for metadata and targets.""" metadata: list[tuple[str, int | None]] = field(default_factory=list) targets: list[tuple[str, str | None]] = field(default_factory=list) @dataclass class RepositoryTarget: """Contains actual target data and the related target metadata.""" data: bytes target_file: TargetFile class RepositorySimulator(FetcherInterface): """Simulates a repository that can be used for testing.""" def __init__(self) -> None: self.md_delegates: dict[str, Metadata[Targets]] = {} # other metadata is signed on-demand (when fetched) but roots must be # explicitly published with publish_root() which maintains this list self.signed_roots: list[bytes] = [] # signers are used on-demand at fetch time to sign metadata # keys are roles, values are dicts of {keyid: signer} self.signers: dict[str, dict[str, Signer]] = {} # target downloads are served from this dict self.target_files: dict[str, RepositoryTarget] = {} # Whether to compute hashes and length for meta in snapshot/timestamp self.compute_metafile_hashes_length = False # Enable hash-prefixed target file names self.prefix_targets_with_hash = True self.dump_dir: str | None = None self.dump_version = 0 self.fetch_tracker = FetchTracker() now = datetime.datetime.now(datetime.timezone.utc) self.safe_expiry = now.replace(microsecond=0) + datetime.timedelta( days=30 ) self._initialize() @property def root(self) -> Root: return self.md_root.signed @property def timestamp(self) -> Timestamp: return self.md_timestamp.signed @property def snapshot(self) -> Snapshot: return self.md_snapshot.signed @property def targets(self) -> Targets: return self.md_targets.signed def all_targets(self) -> Iterator[tuple[str, Targets]]: """Yield role name and signed portion of targets one by one.""" yield Targets.type, self.md_targets.signed for role, md in self.md_delegates.items(): yield role, md.signed def add_signer(self, role: str, signer: Signer) -> None: if role not in self.signers: self.signers[role] = {} self.signers[role][signer.public_key.keyid] = signer def rotate_keys(self, role: str) -> None: """remove all keys for role, then add threshold of new keys""" 
self.root.roles[role].keyids.clear() self.signers[role].clear() for _ in range(self.root.roles[role].threshold): signer = CryptoSigner.generate_ed25519() self.root.add_key(signer.public_key, role) self.add_signer(role, signer) def _initialize(self) -> None: """Setup a minimal valid repository.""" self.md_targets = Metadata(Targets(expires=self.safe_expiry)) self.md_snapshot = Metadata(Snapshot(expires=self.safe_expiry)) self.md_timestamp = Metadata(Timestamp(expires=self.safe_expiry)) self.md_root = Metadata(Root(expires=self.safe_expiry)) for role in TOP_LEVEL_ROLE_NAMES: signer = CryptoSigner.generate_ed25519() self.md_root.signed.add_key(signer.public_key, role) self.add_signer(role, signer) self.publish_root() def publish_root(self) -> None: """Sign and store a new serialized version of root.""" self.md_root.signatures.clear() for signer in self.signers[Root.type].values(): self.md_root.sign(signer, append=True) self.signed_roots.append(self.md_root.to_bytes(JSONSerializer())) logger.debug("Published root v%d", self.root.version) def _fetch(self, url: str) -> Iterator[bytes]: """Fetches data from the given url and returns an Iterator (or yields bytes). """ path = parse.urlparse(url).path if path.startswith("/metadata/") and path.endswith(".json"): # figure out rolename and version ver_and_name = path[len("/metadata/") :][: -len(".json")] version_str, _, role = ver_and_name.partition(".") # root is always version-prefixed while timestamp is always NOT if role == Root.type or ( self.root.consistent_snapshot and ver_and_name != Timestamp.type ): version: int | None = int(version_str) else: # the file is not version-prefixed role = ver_and_name version = None yield self.fetch_metadata(role, version) elif path.startswith("/targets/"): # figure out target path and hash prefix target_path = path[len("/targets/") :] dir_parts, sep, prefixed_filename = target_path.rpartition("/") # extract the hash prefix, if any prefix: str | None = None filename = prefixed_filename if self.root.consistent_snapshot and self.prefix_targets_with_hash: prefix, _, filename = prefixed_filename.partition(".") target_path = f"{dir_parts}{sep}{filename}" yield self.fetch_target(target_path, prefix) else: raise DownloadHTTPError(f"Unknown path '{path}'", 404) def fetch_target(self, target_path: str, target_hash: str | None) -> bytes: """Return data for 'target_path', checking 'target_hash' if it is given. If hash is None, then consistent_snapshot is not used. """ self.fetch_tracker.targets.append((target_path, target_hash)) repo_target = self.target_files.get(target_path) if repo_target is None: raise DownloadHTTPError(f"No target {target_path}", 404) if ( target_hash and target_hash not in repo_target.target_file.hashes.values() ): raise DownloadHTTPError(f"hash mismatch for {target_path}", 404) logger.debug("fetched target %s", target_path) return repo_target.data def fetch_metadata(self, role: str, version: int | None = None) -> bytes: """Return signed metadata for 'role', using 'version' if it is given. If version is None, non-versioned metadata is being requested. 
""" self.fetch_tracker.metadata.append((role, version)) # decode role for the metadata role = parse.unquote(role, encoding="utf-8") if role == Root.type: # return a version previously serialized in publish_root() if version is None or version > len(self.signed_roots): raise DownloadHTTPError(f"Unknown root version {version}", 404) logger.debug("fetched root version %d", version) return self.signed_roots[version - 1] # sign and serialize the requested metadata md: Metadata | None if role == Timestamp.type: md = self.md_timestamp elif role == Snapshot.type: md = self.md_snapshot elif role == Targets.type: md = self.md_targets else: md = self.md_delegates.get(role) if md is None: raise DownloadHTTPError(f"Unknown role {role}", 404) md.signatures.clear() for signer in self.signers[role].values(): md.sign(signer, append=True) logger.debug( "fetched %s v%d with %d sigs", role, md.signed.version, len(self.signers[role]), ) return md.to_bytes(JSONSerializer()) def _compute_hashes_and_length( self, role: str ) -> tuple[dict[str, str], int]: data = self.fetch_metadata(role) digest_object = sslib_hash.digest(sslib_hash.DEFAULT_HASH_ALGORITHM) digest_object.update(data) hashes = {sslib_hash.DEFAULT_HASH_ALGORITHM: digest_object.hexdigest()} return hashes, len(data) def update_timestamp(self) -> None: """Update timestamp and assign snapshot version to snapshot_meta version. """ hashes = None length = None if self.compute_metafile_hashes_length: hashes, length = self._compute_hashes_and_length(Snapshot.type) self.timestamp.snapshot_meta = MetaFile( self.snapshot.version, length, hashes ) self.timestamp.version += 1 def update_snapshot(self) -> None: """Update snapshot, assign targets versions and update timestamp.""" for role, delegate in self.all_targets(): hashes = None length = None if self.compute_metafile_hashes_length: hashes, length = self._compute_hashes_and_length(role) self.snapshot.meta[f"{role}.json"] = MetaFile( delegate.version, length, hashes ) self.snapshot.version += 1 self.update_timestamp() def _get_delegator(self, delegator_name: str) -> Targets: """Given a delegator name return, its corresponding Targets object.""" if delegator_name == Targets.type: return self.targets return self.md_delegates[delegator_name].signed def add_target(self, role: str, data: bytes, path: str) -> None: """Create a target from data and add it to the target_files.""" targets = self._get_delegator(role) target = TargetFile.from_data(path, data, ["sha256"]) targets.targets[path] = target self.target_files[path] = RepositoryTarget(data, target) def add_delegation( self, delegator_name: str, role: DelegatedRole, targets: Targets ) -> None: """Add delegated target role to the repository.""" delegator = self._get_delegator(delegator_name) if ( delegator.delegations is not None and delegator.delegations.succinct_roles is not None ): raise ValueError("Can't add a role when succinct_roles is used") # Create delegation if delegator.delegations is None: delegator.delegations = Delegations({}, roles={}) assert delegator.delegations.roles is not None # put delegation last by default delegator.delegations.roles[role.name] = role # By default add one new key for the role signer = CryptoSigner.generate_ed25519() delegator.add_key(signer.public_key, role.name) self.add_signer(role.name, signer) # Add metadata for the role if role.name not in self.md_delegates: self.md_delegates[role.name] = Metadata(targets, {}) def add_succinct_roles( self, delegator_name: str, bit_length: int, name_prefix: str ) -> None: """Add succinct 
roles info to a delegator with name "delegator_name". Note that for each delegated role represented by succinct roles an empty Targets instance is created. """ delegator = self._get_delegator(delegator_name) if ( delegator.delegations is not None and delegator.delegations.roles is not None ): raise ValueError( "Can't add a succinct_roles when delegated roles are used" ) signer = CryptoSigner.generate_ed25519() succinct_roles = SuccinctRoles([], 1, bit_length, name_prefix) delegator.delegations = Delegations({}, None, succinct_roles) # Add targets metadata for all bins. for delegated_name in succinct_roles.get_roles(): self.md_delegates[delegated_name] = Metadata( Targets(expires=self.safe_expiry) ) self.add_signer(delegated_name, signer) delegator.add_key(signer.public_key) def write(self) -> None: """Dump current repository metadata to self.dump_dir This is a debugging tool: dumping repository state before running Updater refresh may be useful while debugging a test. """ if self.dump_dir is None: self.dump_dir = tempfile.mkdtemp() print(f"Repository Simulator dumps in {self.dump_dir}") self.dump_version += 1 dest_dir = os.path.join(self.dump_dir, str(self.dump_version)) os.makedirs(dest_dir) for ver in range(1, len(self.signed_roots) + 1): with open(os.path.join(dest_dir, f"{ver}.root.json"), "wb") as f: f.write(self.fetch_metadata(Root.type, ver)) for role in [Timestamp.type, Snapshot.type, Targets.type]: with open(os.path.join(dest_dir, f"{role}.json"), "wb") as f: f.write(self.fetch_metadata(role)) for role in self.md_delegates: quoted_role = parse.quote(role, "") with open(os.path.join(dest_dir, f"{quoted_role}.json"), "wb") as f: f.write(self.fetch_metadata(role)) tuf-6.0.0/tests/simple_server.py0000755000000000000000000000121313615410400013626 0ustar00#!/usr/bin/env python3 # Copyright 2012 - 2017, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Simple HTTP server for python-tuf tests""" import socketserver from http.server import SimpleHTTPRequestHandler # Allow reuse so you can re-run tests as often as you want even if the # tests reuse ports. 
Otherwise TCP TIME-WAIT prevents reuse for ~1 minute socketserver.TCPServer.allow_reuse_address = True httpd = socketserver.TCPServer(("localhost", 0), SimpleHTTPRequestHandler) port_message = "bind succeeded, server port is: " + str(httpd.server_address[1]) print(port_message) httpd.serve_forever() tuf-6.0.0/tests/test_api.py0000644000000000000000000014140313615410400012562 0ustar00# Copyright 2020, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Unit tests for api/metadata.py""" from __future__ import annotations import json import logging import os import shutil import sys import tempfile import unittest from copy import copy, deepcopy from datetime import datetime, timedelta, timezone from pathlib import Path from typing import ClassVar from securesystemslib import exceptions as sslib_exceptions from securesystemslib import hash as sslib_hash from securesystemslib.signer import ( CryptoSigner, Key, SecretsHandler, Signer, ) from tests import utils from tuf.api import exceptions from tuf.api.dsse import SimpleEnvelope from tuf.api.metadata import ( TOP_LEVEL_ROLE_NAMES, DelegatedRole, Delegations, Metadata, MetaFile, Root, RootVerificationResult, Signature, Snapshot, SuccinctRoles, TargetFile, Targets, Timestamp, VerificationResult, ) from tuf.api.serialization import DeserializationError, SerializationError from tuf.api.serialization.json import JSONSerializer logger = logging.getLogger(__name__) class TestMetadata(unittest.TestCase): """Tests for public API of all classes in 'tuf/api/metadata.py'.""" temporary_directory: ClassVar[str] repo_dir: ClassVar[str] keystore_dir: ClassVar[str] signers: ClassVar[dict[str, Signer]] @classmethod def setUpClass(cls) -> None: # Create a temporary directory to store the repository, metadata, and # target files. 'temporary_directory' must be deleted in # TearDownClass() so that temporary files are always removed, even when # exceptions occur. cls.temporary_directory = tempfile.mkdtemp(dir=os.getcwd()) test_repo_data = os.path.join( os.path.dirname(os.path.realpath(__file__)), "repository_data" ) cls.repo_dir = os.path.join(cls.temporary_directory, "repository") shutil.copytree( os.path.join(test_repo_data, "repository"), cls.repo_dir ) cls.keystore_dir = os.path.join(cls.temporary_directory, "keystore") shutil.copytree( os.path.join(test_repo_data, "keystore"), cls.keystore_dir ) path = os.path.join(cls.repo_dir, "metadata", "root.json") root = Metadata[Root].from_file(path).signed # Load signers cls.signers = {} for role in [Snapshot.type, Targets.type, Timestamp.type]: uri = f"file2:{os.path.join(cls.keystore_dir, role + '_key')}" role_obj = root.get_delegated_role(role) key = root.get_key(role_obj.keyids[0]) cls.signers[role] = CryptoSigner.from_priv_key_uri(uri, key) @classmethod def tearDownClass(cls) -> None: # Remove the temporary repository directory, which should contain all # the metadata, targets, and key files generated for the test cases. 
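# A minimal, self-contained sketch of the signing flow that setUpClass wires
# up above: instead of loading the keystore fixtures via "file2:" URIs, an
# ephemeral ed25519 key is generated here, so nothing below refers to the
# actual test data (key, expiry and role assignment are illustrative only).
from datetime import datetime, timedelta, timezone

from securesystemslib.signer import CryptoSigner

from tuf.api.metadata import Metadata, Root, Timestamp

signer = CryptoSigner.generate_ed25519()

# Register the public key for the timestamp role in a fresh root ...
root = Root(expires=datetime.now(timezone.utc) + timedelta(days=365))
root.add_key(signer.public_key, Timestamp.type)

# ... and sign a timestamp payload with the matching private key.
timestamp_md = Metadata(
    Timestamp(expires=datetime.now(timezone.utc) + timedelta(days=1))
)
timestamp_md.sign(signer)  # append=False is the default: old signatures are replaced
assert signer.public_key.keyid in timestamp_md.signatures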
shutil.rmtree(cls.temporary_directory) def test_generic_read(self) -> None: for metadata, inner_metadata_cls in [ (Root.type, Root), (Snapshot.type, Snapshot), (Timestamp.type, Timestamp), (Targets.type, Targets), ]: # Load JSON-formatted metadata of each supported type from file # and from out-of-band read JSON string path = os.path.join(self.repo_dir, "metadata", metadata + ".json") md_obj = Metadata.from_file(path) with open(path, "rb") as f: md_obj2 = Metadata.from_bytes(f.read()) # Assert that both methods instantiate the right inner class for # each metadata type and ... self.assertTrue(isinstance(md_obj.signed, inner_metadata_cls)) self.assertTrue(isinstance(md_obj2.signed, inner_metadata_cls)) # ... and return the same object (compared by dict representation) self.assertDictEqual(md_obj.to_dict(), md_obj2.to_dict()) # Assert that it chokes correctly on an unknown metadata type bad_metadata_path = "bad-metadata.json" bad_metadata = {"signed": {"_type": "bad-metadata"}} bad_string = json.dumps(bad_metadata).encode("utf-8") with open(bad_metadata_path, "wb") as f: f.write(bad_string) with self.assertRaises(DeserializationError): Metadata.from_file(bad_metadata_path) with self.assertRaises(DeserializationError): Metadata.from_bytes(bad_string) os.remove(bad_metadata_path) def test_md_read_write_file_exceptions(self) -> None: # Test writing to a file with bad filename with self.assertRaises(exceptions.StorageError): Metadata.from_file("bad-metadata.json") # Test serializing to a file with bad filename with self.assertRaises(exceptions.StorageError): md = Metadata.from_file( os.path.join(self.repo_dir, "metadata", "root.json") ) md.to_file("") def test_compact_json(self) -> None: path = os.path.join(self.repo_dir, "metadata", "targets.json") md_obj = Metadata.from_file(path) self.assertTrue( len(JSONSerializer(compact=True).serialize(md_obj)) < len(JSONSerializer().serialize(md_obj)) ) def test_read_write_read_compare(self) -> None: for metadata in TOP_LEVEL_ROLE_NAMES: path = os.path.join(self.repo_dir, "metadata", metadata + ".json") md_obj = Metadata.from_file(path) path_2 = path + ".tmp" md_obj.to_file(path_2) md_obj_2 = Metadata.from_file(path_2) self.assertDictEqual(md_obj.to_dict(), md_obj_2.to_dict()) os.remove(path_2) def test_serialize_with_validate(self) -> None: # Assert that by changing one required attribute validation will fail. root = Metadata.from_file( os.path.join(self.repo_dir, "metadata", "root.json") ) root.signed.version = 0 with self.assertRaises(SerializationError): root.to_bytes(JSONSerializer(validate=True)) def test_to_from_bytes(self) -> None: for metadata in TOP_LEVEL_ROLE_NAMES: path = os.path.join(self.repo_dir, "metadata", metadata + ".json") with open(path, "rb") as f: metadata_bytes = f.read() md_obj = Metadata.from_bytes(metadata_bytes) # Compare that from_bytes/to_bytes doesn't change the content # for two cases for the serializer: noncompact and compact. # Case 1: test noncompact by overriding the default serializer. self.assertEqual(md_obj.to_bytes(JSONSerializer()), metadata_bytes) # Case 2: test compact by using the default serializer. 
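# A minimal sketch of the two serializer modes compared in the assertions
# above, using an ad-hoc Timestamp rather than one of the repository fixture
# files (the object below is illustrative only):
from datetime import datetime, timedelta, timezone

from tuf.api.metadata import Metadata, Timestamp
from tuf.api.serialization.json import JSONSerializer

md = Metadata(Timestamp(expires=datetime.now(timezone.utc) + timedelta(days=1)))
readable = JSONSerializer().serialize(md)             # indented, larger
compact = JSONSerializer(compact=True).serialize(md)  # no extra whitespace, smaller
assert len(compact) < len(readable)

# Both encodings deserialize to an equal document:
assert Metadata.from_bytes(readable).to_dict() == Metadata.from_bytes(compact).to_dict()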
obj_bytes = md_obj.to_bytes() metadata_obj_2 = Metadata.from_bytes(obj_bytes) self.assertEqual(metadata_obj_2.to_bytes(), obj_bytes) def test_sign_verify(self) -> None: path = os.path.join(self.repo_dir, "metadata") root = Metadata[Root].from_file(os.path.join(path, "root.json")).signed # Locate the public keys we need from root targets_keyid = next(iter(root.roles[Targets.type].keyids)) targets_key = root.keys[targets_keyid] snapshot_keyid = next(iter(root.roles[Snapshot.type].keyids)) snapshot_key = root.keys[snapshot_keyid] timestamp_keyid = next(iter(root.roles[Timestamp.type].keyids)) timestamp_key = root.keys[timestamp_keyid] # Load sample metadata (targets) and assert ... md_obj = Metadata.from_file(os.path.join(path, "targets.json")) sig = md_obj.signatures[targets_keyid] data = md_obj.signed_bytes # ... it has a single existing signature, self.assertEqual(len(md_obj.signatures), 1) # ... which is valid for the correct key. targets_key.verify_signature(sig, data) with self.assertRaises(sslib_exceptions.VerificationError): snapshot_key.verify_signature(sig, data) # Append a new signature with the unrelated key and assert that ... snapshot_sig = md_obj.sign(self.signers[Snapshot.type], append=True) # ... there are now two signatures, and self.assertEqual(len(md_obj.signatures), 2) # ... both are valid for the corresponding keys. targets_key.verify_signature(sig, data) snapshot_key.verify_signature(snapshot_sig, data) # ... the returned (appended) signature is for snapshot key self.assertEqual(snapshot_sig.keyid, snapshot_keyid) # Create and assign (don't append) a new signature and assert that ... ts_sig = md_obj.sign(self.signers[Timestamp.type], append=False) # ... there now is only one signature, self.assertEqual(len(md_obj.signatures), 1) # ... valid for that key. 
timestamp_key.verify_signature(ts_sig, data) with self.assertRaises(sslib_exceptions.VerificationError): targets_key.verify_signature(ts_sig, data) def test_sign_failures(self) -> None: # Test throwing UnsignedMetadataError because of signing problems md = Metadata.from_file( os.path.join(self.repo_dir, "metadata", "snapshot.json") ) class FailingSigner(Signer): @classmethod def from_priv_key_uri( cls, priv_key_uri: str, public_key: Key, secrets_handler: SecretsHandler | None = None, ) -> Signer: pass @property def public_key(self) -> Key: raise RuntimeError("Not a real signer") def sign(self, _payload: bytes) -> Signature: raise RuntimeError("signing failed") failing_signer = FailingSigner() with self.assertRaises(exceptions.UnsignedMetadataError): md.sign(failing_signer) def test_key_verify_failures(self) -> None: root_path = os.path.join(self.repo_dir, "metadata", "root.json") root = Metadata[Root].from_file(root_path).signed # Locate the timestamp public key we need from root timestamp_keyid = next(iter(root.roles[Timestamp.type].keyids)) timestamp_key = root.keys[timestamp_keyid] # Load sample metadata (timestamp) path = os.path.join(self.repo_dir, "metadata", "timestamp.json") md_obj = Metadata.from_file(path) sig = md_obj.signatures[timestamp_keyid] data = md_obj.signed_bytes # Test failure on unknown scheme (securesystemslib # UnsupportedAlgorithmError) scheme = timestamp_key.scheme timestamp_key.scheme = "foo" with self.assertRaises(sslib_exceptions.VerificationError): timestamp_key.verify_signature(sig, data) timestamp_key.scheme = scheme # Test failure on broken public key data (securesystemslib # CryptoError) public = timestamp_key.keyval["public"] timestamp_key.keyval["public"] = "ffff" with self.assertRaises(sslib_exceptions.VerificationError): timestamp_key.verify_signature(sig, data) timestamp_key.keyval["public"] = public # Test failure with invalid signature (securesystemslib # FormatError) incorrect_sig = copy(sig) incorrect_sig.signature = "foo" with self.assertRaises(sslib_exceptions.VerificationError): timestamp_key.verify_signature(incorrect_sig, data) # Test failure with valid but incorrect signature incorrect_sig.signature = "ff" * 64 with self.assertRaises(sslib_exceptions.UnverifiedSignatureError): timestamp_key.verify_signature(incorrect_sig, data) def test_metadata_signed_is_expired(self) -> None: # Use of Snapshot is arbitrary, we're just testing the base class # features with real data snapshot_path = os.path.join(self.repo_dir, "metadata", "snapshot.json") md = Metadata.from_file(snapshot_path) expected_expiry = datetime(2030, 1, 1, 0, 0, tzinfo=timezone.utc) self.assertEqual(md.signed.expires, expected_expiry) # Test is_expired with reference_time provided is_expired = md.signed.is_expired(md.signed.expires) self.assertTrue(is_expired) is_expired = md.signed.is_expired(md.signed.expires + timedelta(days=1)) self.assertTrue(is_expired) is_expired = md.signed.is_expired(md.signed.expires - timedelta(days=1)) self.assertFalse(is_expired) # Test is_expired without reference_time, # manipulating md.signed.expires expires = md.signed.expires md.signed.expires = datetime.now(timezone.utc) is_expired = md.signed.is_expired() self.assertTrue(is_expired) md.signed.expires = datetime.now(timezone.utc) + timedelta(days=1) is_expired = md.signed.is_expired() self.assertFalse(is_expired) md.signed.expires = expires def test_metadata_verify_delegate(self) -> None: root_path = os.path.join(self.repo_dir, "metadata", "root.json") root = Metadata[Root].from_file(root_path) 
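# The delegation checks below are the heart of this test. As a standalone
# sketch of the same Metadata.verify_delegate() call outside the fixtures
# (the "metadata/..." paths are hypothetical placeholders, not test files):
from tuf.api import exceptions
from tuf.api.metadata import Metadata, Root, Snapshot

root_md = Metadata[Root].from_file("metadata/root.json")
snapshot_md = Metadata[Snapshot].from_file("metadata/snapshot.json")
try:
    # Passes only if a threshold of the snapshot keys listed in root has
    # produced valid signatures over the snapshot payload.
    root_md.verify_delegate(Snapshot.type, snapshot_md)
except exceptions.UnsignedMetadataError:
    ...  # not enough valid signatures from authorized keys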
snapshot_path = os.path.join(self.repo_dir, "metadata", "snapshot.json") snapshot = Metadata[Snapshot].from_file(snapshot_path) targets_path = os.path.join(self.repo_dir, "metadata", "targets.json") targets = Metadata[Targets].from_file(targets_path) role1_path = os.path.join(self.repo_dir, "metadata", "role1.json") role1 = Metadata[Targets].from_file(role1_path) role2_path = os.path.join(self.repo_dir, "metadata", "role2.json") role2 = Metadata[Targets].from_file(role2_path) # test the expected delegation tree root.verify_delegate(Root.type, root) root.verify_delegate(Snapshot.type, snapshot) root.verify_delegate(Targets.type, targets) targets.verify_delegate("role1", role1) role1.verify_delegate("role2", role2) # only root and targets can verify delegates with self.assertRaises(TypeError): snapshot.verify_delegate(Snapshot.type, snapshot) # verify fails for roles that are not delegated by delegator with self.assertRaises(ValueError): root.verify_delegate("role1", role1) with self.assertRaises(ValueError): targets.verify_delegate(Targets.type, targets) # verify fails when delegator has no delegations with self.assertRaises(ValueError): role2.verify_delegate("role1", role1) def test_signed_verify_delegate(self) -> None: root_path = os.path.join(self.repo_dir, "metadata", "root.json") root_md = Metadata[Root].from_file(root_path) root = root_md.signed snapshot_path = os.path.join(self.repo_dir, "metadata", "snapshot.json") snapshot_md = Metadata[Snapshot].from_file(snapshot_path) snapshot = snapshot_md.signed targets_path = os.path.join(self.repo_dir, "metadata", "targets.json") targets_md = Metadata[Targets].from_file(targets_path) targets = targets_md.signed role1_path = os.path.join(self.repo_dir, "metadata", "role1.json") role1_md = Metadata[Targets].from_file(role1_path) role1 = role1_md.signed role2_path = os.path.join(self.repo_dir, "metadata", "role2.json") role2_md = Metadata[Targets].from_file(role2_path) role2 = role2_md.signed # test the expected delegation tree root.verify_delegate( Root.type, root_md.signed_bytes, root_md.signatures ) root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) root.verify_delegate( Targets.type, targets_md.signed_bytes, targets_md.signatures ) targets.verify_delegate( "role1", role1_md.signed_bytes, role1_md.signatures ) role1.verify_delegate( "role2", role2_md.signed_bytes, role2_md.signatures ) # only root and targets can verify delegates with self.assertRaises(AttributeError): snapshot.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) # verify fails for roles that are not delegated by delegator with self.assertRaises(ValueError): root.verify_delegate( "role1", role1_md.signed_bytes, role1_md.signatures ) with self.assertRaises(ValueError): targets.verify_delegate( Targets.type, targets_md.signed_bytes, targets_md.signatures ) # verify fails when delegator has no delegations with self.assertRaises(ValueError): role2.verify_delegate( "role1", role1_md.signed_bytes, role1_md.signatures ) # verify fails when delegate content is modified expires = snapshot.expires snapshot.expires = expires + timedelta(days=1) with self.assertRaises(exceptions.UnsignedMetadataError): root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) snapshot.expires = expires # verify fails if sslib verify fails with VerificationError # (in this case signature is malformed) keyid = next(iter(root.roles[Snapshot.type].keyids)) good_sig = snapshot_md.signatures[keyid].signature 
snapshot_md.signatures[keyid].signature = "foo" with self.assertRaises(exceptions.UnsignedMetadataError): root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) snapshot_md.signatures[keyid].signature = good_sig # verify fails if roles keys do not sign the metadata with self.assertRaises(exceptions.UnsignedMetadataError): root.verify_delegate( Timestamp.type, snapshot_md.signed_bytes, snapshot_md.signatures ) # Add a key to snapshot role, make sure the new sig fails to verify ts_keyid = next(iter(root.roles[Timestamp.type].keyids)) root.add_key(root.keys[ts_keyid], Snapshot.type) snapshot_md.signatures[ts_keyid] = Signature(ts_keyid, "ff" * 64) # verify succeeds if threshold is reached even if some signatures # fail to verify root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) # verify fails if threshold of signatures is not reached root.roles[Snapshot.type].threshold = 2 with self.assertRaises(exceptions.UnsignedMetadataError): root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) # verify succeeds when we correct the new signature and reach the # threshold of 2 keys snapshot_md.sign(self.signers[Timestamp.type], append=True) root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures ) def test_verification_result(self) -> None: vr = VerificationResult(3, {"a": None}, {"b": None}) self.assertEqual(vr.missing, 2) self.assertFalse(vr.verified) self.assertFalse(vr) # Add a signature vr.signed["c"] = None self.assertEqual(vr.missing, 1) self.assertFalse(vr.verified) self.assertFalse(vr) # Add last missing signature vr.signed["d"] = None self.assertEqual(vr.missing, 0) self.assertTrue(vr.verified) self.assertTrue(vr) # Add one more signature vr.signed["e"] = None self.assertEqual(vr.missing, 0) self.assertTrue(vr.verified) self.assertTrue(vr) def test_root_verification_result(self) -> None: vr1 = VerificationResult(3, {"a": None}, {"b": None}) vr2 = VerificationResult(1, {"c": None}, {"b": None}) vr = RootVerificationResult(vr1, vr2) self.assertEqual(vr.signed, {"a": None, "c": None}) self.assertEqual(vr.unsigned, {"b": None}) self.assertFalse(vr.verified) self.assertFalse(vr) vr1.signed["c"] = None vr1.signed["f"] = None self.assertEqual(vr.signed, {"a": None, "c": None, "f": None}) self.assertEqual(vr.unsigned, {"b": None}) self.assertTrue(vr.verified) self.assertTrue(vr) def test_signed_get_verification_result(self) -> None: # Setup: Load test metadata and keys root_path = os.path.join(self.repo_dir, "metadata", "root.json") root = Metadata[Root].from_file(root_path) key1_id = root.signed.roles[Root.type].keyids[0] key1 = root.signed.get_key(key1_id) key2_id = root.signed.roles[Timestamp.type].keyids[0] key2 = root.signed.get_key(key2_id) key3_id = "123456789abcdefg" key4_id = self.signers[Snapshot.type].public_key.keyid # Test: 1 authorized key, 1 valid signature result = root.signed.get_verification_result( Root.type, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key1_id: key1}) self.assertEqual(result.unsigned, {}) # Test: 2 authorized keys, 1 invalid signature # Adding a key, i.e. 
metadata change, invalidates existing signature root.signed.add_key(key2, Root.type) result = root.signed.get_verification_result( Root.type, root.signed_bytes, root.signatures ) self.assertFalse(result) self.assertEqual(result.signed, {}) self.assertEqual(result.unsigned, {key1_id: key1, key2_id: key2}) # Test: 3 authorized keys, 1 invalid signature, 1 key missing key data # Adding a keyid w/o key, fails verification but this key is not listed # in unsigned root.signed.roles[Root.type].keyids.append(key3_id) result = root.signed.get_verification_result( Root.type, root.signed_bytes, root.signatures ) self.assertFalse(result) self.assertEqual(result.signed, {}) self.assertEqual(result.unsigned, {key1_id: key1, key2_id: key2}) # Test: 3 authorized keys, 1 valid signature, 1 invalid signature, 1 # key missing key data root.sign(self.signers[Timestamp.type], append=True) result = root.signed.get_verification_result( Root.type, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key2_id: key2}) self.assertEqual(result.unsigned, {key1_id: key1}) # Test: 3 authorized keys, 1 valid signature, 1 invalid signature, 1 # key missing key data, 1 ignored unrelated signature root.sign(self.signers[Snapshot.type], append=True) self.assertEqual( set(root.signatures.keys()), {key1_id, key2_id, key4_id} ) self.assertTrue(result) self.assertEqual(result.signed, {key2_id: key2}) self.assertEqual(result.unsigned, {key1_id: key1}) # See test_signed_verify_delegate for more related tests ... def test_root_get_root_verification_result(self) -> None: # Setup: Load test metadata and keys root_path = os.path.join(self.repo_dir, "metadata", "root.json") root = Metadata[Root].from_file(root_path) key1_id = root.signed.roles[Root.type].keyids[0] key1 = root.signed.get_key(key1_id) key2_id = root.signed.roles[Timestamp.type].keyids[0] key2 = root.signed.get_key(key2_id) # Test: Verify with no previous root version result = root.signed.get_root_verification_result( None, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key1_id: key1}) self.assertEqual(result.unsigned, {}) # Test: Verify with other root that is not version N-1 prev_root: Metadata[Root] = deepcopy(root) with self.assertRaises(ValueError): result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) # Test: Verify with previous root prev_root.signed.version -= 1 result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key1_id: key1}) self.assertEqual(result.unsigned, {}) # Test: Add a signer to previous root (threshold still 1) prev_root.signed.add_key(key2, Root.type) result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key1_id: key1}) self.assertEqual(result.unsigned, {key2_id: key2}) # Test: Increase threshold in previous root prev_root.signed.roles[Root.type].threshold += 1 result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertFalse(result) self.assertEqual(result.signed, {key1_id: key1}) self.assertEqual(result.unsigned, {key2_id: key2}) # Test: Sign root with both keys root.sign(self.signers[Timestamp.type], append=True) result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertTrue(result) 
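# A condensed sketch of the rotation rule these assertions step through: a
# new root version must be verified against both its own role definition and
# the previous version's (the version-prefixed file names are hypothetical):
from tuf.api.metadata import Metadata, Root

prev_md = Metadata[Root].from_file("metadata/1.root.json")
new_md = Metadata[Root].from_file("metadata/2.root.json")

result = new_md.signed.get_root_verification_result(
    prev_md.signed, new_md.signed_bytes, new_md.signatures
)
if not result:  # falsy until both thresholds are satisfied
    unsatisfied = result.unsigned  # authorized keys that produced no valid signature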
self.assertEqual(result.signed, {key1_id: key1, key2_id: key2}) self.assertEqual(result.unsigned, {}) # Test: Sign root with an unrelated key root.sign(self.signers[Snapshot.type], append=True) result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key1_id: key1, key2_id: key2}) self.assertEqual(result.unsigned, {}) # Test: Remove key1 from previous root prev_root.signed.revoke_key(key1_id, Root.type) result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertFalse(result) self.assertEqual(result.signed, {key1_id: key1, key2_id: key2}) self.assertEqual(result.unsigned, {}) # Test: Lower threshold in previous root prev_root.signed.roles[Root.type].threshold -= 1 result = root.signed.get_root_verification_result( prev_root.signed, root.signed_bytes, root.signatures ) self.assertTrue(result) self.assertEqual(result.signed, {key1_id: key1, key2_id: key2}) self.assertEqual(result.unsigned, {}) def test_root_add_key_and_revoke_key(self) -> None: root_path = os.path.join(self.repo_dir, "metadata", "root.json") root = Metadata[Root].from_file(root_path) # Create a new key signer = CryptoSigner.generate_ecdsa() key = signer.public_key # Assert that root does not contain the new key self.assertNotIn(key.keyid, root.signed.roles[Root.type].keyids) self.assertNotIn(key.keyid, root.signed.keys) # Assert that add_key with old argument order will raise an error with self.assertRaises(ValueError): root.signed.add_key(Root.type, key) # Add new root key root.signed.add_key(key, Root.type) # Assert that key is added self.assertIn(key.keyid, root.signed.roles[Root.type].keyids) self.assertIn(key.keyid, root.signed.keys) # Confirm that the newly added key does not break # the object serialization root.to_dict() # Try adding the same key again and assert its ignored. pre_add_keyid = root.signed.roles[Root.type].keyids.copy() root.signed.add_key(key, Root.type) self.assertEqual(pre_add_keyid, root.signed.roles[Root.type].keyids) # Add the same key to targets role as well root.signed.add_key(key, Targets.type) # Add the same key to a nonexistent role. 
with self.assertRaises(ValueError): root.signed.add_key(key, "nosuchrole") # Remove the key from root role (targets role still uses it) root.signed.revoke_key(key.keyid, Root.type) self.assertNotIn(key.keyid, root.signed.roles[Root.type].keyids) self.assertIn(key.keyid, root.signed.keys) # Remove the key from targets as well root.signed.revoke_key(key.keyid, Targets.type) self.assertNotIn(key.keyid, root.signed.roles[Targets.type].keyids) self.assertNotIn(key.keyid, root.signed.keys) with self.assertRaises(ValueError): root.signed.revoke_key("nosuchkey", Root.type) with self.assertRaises(ValueError): root.signed.revoke_key(key.keyid, "nosuchrole") def test_is_target_in_pathpattern(self) -> None: supported_use_cases = [ ("foo.tgz", "foo.tgz"), ("foo.tgz", "*"), ("foo.tgz", "*.tgz"), ("foo-version-a.tgz", "foo-version-?.tgz"), ("targets/foo.tgz", "targets/*.tgz"), ("foo/bar/zoo/k.tgz", "foo/bar/zoo/*"), ("foo/bar/zoo/k.tgz", "foo/*/zoo/*"), ("foo/bar/zoo/k.tgz", "*/*/*/*"), ("foo/bar", "f?o/bar"), ("foo/bar", "*o/bar"), ] for targetpath, pathpattern in supported_use_cases: self.assertTrue( DelegatedRole._is_target_in_pathpattern(targetpath, pathpattern) ) invalid_use_cases = [ ("targets/foo.tgz", "*.tgz"), ("/foo.tgz", "*.tgz"), ("targets/foo.tgz", "*"), ("foo-version-alpha.tgz", "foo-version-?.tgz"), ("foo//bar", "*/bar"), ("foo/bar", "f?/bar"), ] for targetpath, pathpattern in invalid_use_cases: self.assertFalse( DelegatedRole._is_target_in_pathpattern(targetpath, pathpattern) ) def test_targets_key_api(self) -> None: targets_path = os.path.join(self.repo_dir, "metadata", "targets.json") targets: Targets = Metadata[Targets].from_file(targets_path).signed # Add a new delegated role "role2" in targets delegated_role = DelegatedRole.from_dict( { "keyids": [], "name": "role2", "paths": ["fn3", "fn4"], "terminating": False, "threshold": 1, } ) assert isinstance(targets.delegations, Delegations) assert isinstance(targets.delegations.roles, dict) targets.delegations.roles["role2"] = delegated_role key_dict = { "keytype": "ed25519", "keyval": { "public": "edcd0a32a07dce33f7c7873aaffbff36d20ea30787574ead335eefd337e4dacd" }, "scheme": "ed25519", } key = Key.from_dict("id2", key_dict) # Assert that add_key with old argument order will raise an error with self.assertRaises(ValueError): targets.add_key("role1", key) # Assert that delegated role "role1" does not contain the new key self.assertNotIn(key.keyid, targets.delegations.roles["role1"].keyids) targets.add_key(key, "role1") # Assert that the new key is added to the delegated role "role1" self.assertIn(key.keyid, targets.delegations.roles["role1"].keyids) # Confirm that the newly added key does not break the obj serialization targets.to_dict() # Try adding the same key again and assert its ignored. past_keyid = targets.delegations.roles["role1"].keyids.copy() targets.add_key(key, "role1") self.assertEqual(past_keyid, targets.delegations.roles["role1"].keyids) # Try adding a key to a delegated role that doesn't exists with self.assertRaises(ValueError): targets.add_key(key, "nosuchrole") # Add the same key to "role2" as well targets.add_key(key, "role2") # Remove the key from "role1" role ("role2" still uses it) targets.revoke_key(key.keyid, "role1") # Assert that delegated role "role1" doesn't contain the key. 
self.assertNotIn(key.keyid, targets.delegations.roles["role1"].keyids) self.assertIn(key.keyid, targets.delegations.roles["role2"].keyids) # Remove the key from "role2" as well targets.revoke_key(key.keyid, "role2") self.assertNotIn(key.keyid, targets.delegations.roles["role2"].keyids) # Try removing a key not used by "role1" with self.assertRaises(ValueError): targets.revoke_key(key.keyid, "role1") # Try removing a key from a delegated role that doesn't exist with self.assertRaises(ValueError): targets.revoke_key(key.keyid, "nosuchrole") # Remove delegations as a whole targets.delegations = None # Test that calling add_key and revoke_key throws an error # and that delegations is still None after each of the API calls with self.assertRaises(ValueError): targets.add_key(key, "role1") self.assertTrue(targets.delegations is None) with self.assertRaises(ValueError): targets.revoke_key(key.keyid, "role1") self.assertTrue(targets.delegations is None) def test_targets_key_api_with_succinct_roles(self) -> None: targets_path = os.path.join(self.repo_dir, "metadata", "targets.json") targets: Targets = Metadata[Targets].from_file(targets_path).signed key_dict = { "keytype": "ed25519", "keyval": { "public": "edcd0a32a07dce33f7c7873aaffbff36d20ea30787574ead335eefd337e4dacd" }, "scheme": "ed25519", } key = Key.from_dict("id2", key_dict) # Remove delegated roles. assert targets.delegations is not None assert targets.delegations.roles is not None targets.delegations.roles = None targets.delegations.keys = {} # Add succinct_roles information. targets.delegations.succinct_roles = SuccinctRoles([], 1, 8, "foo") self.assertEqual(len(targets.delegations.keys), 0) self.assertEqual(len(targets.delegations.succinct_roles.keyids), 0) # Add a key to succinct_roles and verify it's saved. targets.add_key(key) self.assertIn(key.keyid, targets.delegations.keys) self.assertIn(key.keyid, targets.delegations.succinct_roles.keyids) self.assertEqual(len(targets.delegations.keys), 1) # Try adding the same key again and verify that nothing is added. targets.add_key(key) self.assertEqual(len(targets.delegations.keys), 1) # Remove the key and verify it's not stored anymore. targets.revoke_key(key.keyid) self.assertNotIn(key.keyid, targets.delegations.keys) self.assertNotIn(key.keyid, targets.delegations.succinct_roles.keyids) self.assertEqual(len(targets.delegations.keys), 0) # Try removing it again. with self.assertRaises(ValueError): targets.revoke_key(key.keyid) def test_length_and_hash_validation(self) -> None: # Test metadata files' hash and length verification. # Use timestamp to get a MetaFile object and snapshot # for untrusted metadata file to verify.
timestamp_path = os.path.join( self.repo_dir, "metadata", "timestamp.json" ) timestamp = Metadata[Timestamp].from_file(timestamp_path) snapshot_metafile = timestamp.signed.snapshot_meta snapshot_path = os.path.join(self.repo_dir, "metadata", "snapshot.json") with open(snapshot_path, "rb") as file: # test with data as a file object snapshot_metafile.verify_length_and_hashes(file) file.seek(0) data = file.read() # test with data as bytes snapshot_metafile.verify_length_and_hashes(data) # test exceptions expected_length = snapshot_metafile.length snapshot_metafile.length = 2345 with self.assertRaises(exceptions.LengthOrHashMismatchError): snapshot_metafile.verify_length_and_hashes(data) snapshot_metafile.length = expected_length snapshot_metafile.hashes = {"sha256": "incorrecthash"} with self.assertRaises(exceptions.LengthOrHashMismatchError): snapshot_metafile.verify_length_and_hashes(data) snapshot_metafile.hashes = { "unsupported-alg": "8f88e2ba48b412c3843e9bb26e1b6f8fc9e98aceb0fbaa97ba37b4c98717d7ab" } with self.assertRaises(exceptions.LengthOrHashMismatchError): snapshot_metafile.verify_length_and_hashes(data) # Test wrong algorithm format (sslib.FormatError) snapshot_metafile.hashes = { 256: "8f88e2ba48b412c3843e9bb26e1b6f8fc9e98aceb0fbaa97ba37b4c98717d7ab" # type: ignore[dict-item] } with self.assertRaises(exceptions.LengthOrHashMismatchError): snapshot_metafile.verify_length_and_hashes(data) # test optional length and hashes snapshot_metafile.length = None snapshot_metafile.hashes = None snapshot_metafile.verify_length_and_hashes(data) # Test target files' hash and length verification targets_path = os.path.join(self.repo_dir, "metadata", "targets.json") targets = Metadata[Targets].from_file(targets_path) file1_targetfile = targets.signed.targets["file1.txt"] filepath = os.path.join(self.repo_dir, Targets.type, "file1.txt") with open(filepath, "rb") as file1: file1_targetfile.verify_length_and_hashes(file1) # test exceptions expected_length = file1_targetfile.length file1_targetfile.length = 2345 with self.assertRaises(exceptions.LengthOrHashMismatchError): file1_targetfile.verify_length_and_hashes(file1) file1_targetfile.length = expected_length file1_targetfile.hashes = {"sha256": "incorrecthash"} with self.assertRaises(exceptions.LengthOrHashMismatchError): file1_targetfile.verify_length_and_hashes(file1) def test_targetfile_from_file(self) -> None: # Test with an existing file and valid hash algorithm file_path = os.path.join(self.repo_dir, Targets.type, "file1.txt") targetfile_from_file = TargetFile.from_file( file_path, file_path, ["sha256"] ) with open(file_path, "rb") as file: targetfile_from_file.verify_length_and_hashes(file) # Test with a non-existing file file_path = os.path.join(self.repo_dir, Targets.type, "file123.txt") with self.assertRaises(FileNotFoundError): TargetFile.from_file( file_path, file_path, [sslib_hash.DEFAULT_HASH_ALGORITHM] ) # Test with an unsupported algorithm file_path = os.path.join(self.repo_dir, Targets.type, "file1.txt") with self.assertRaises(ValueError): TargetFile.from_file(file_path, file_path, ["123"]) def test_targetfile_custom(self) -> None: # Test creating TargetFile and accessing custom. 
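# A minimal sketch of creating and checking a target description, mirroring
# the from_data/verify_length_and_hashes usage in the surrounding tests (the
# path "demo.txt" and the payload bytes are illustrative only):
from tuf.api import exceptions
from tuf.api.metadata import TargetFile

payload = b"hello world"
target = TargetFile.from_data("demo.txt", payload, ["sha256"])
target.verify_length_and_hashes(payload)  # returns silently on success

try:
    target.verify_length_and_hashes(b"tampered payload")
except exceptions.LengthOrHashMismatchError:
    ...  # any length or hash mismatch is reported through this exception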
targetfile = TargetFile( 100, {"sha256": "abc"}, "file.txt", {"custom": "foo"} ) self.assertEqual(targetfile.custom, "foo") def test_targetfile_from_data(self) -> None: data = b"Inline test content" target_file_path = os.path.join( self.repo_dir, Targets.type, "file1.txt" ) # Test with a valid hash algorithm targetfile_from_data = TargetFile.from_data( target_file_path, data, ["sha256"] ) targetfile_from_data.verify_length_and_hashes(data) # Test with no algorithms specified targetfile_from_data = TargetFile.from_data(target_file_path, data) targetfile_from_data.verify_length_and_hashes(data) def test_metafile_from_data(self) -> None: data = b"Inline test content" # Test with a valid hash algorithm metafile = MetaFile.from_data(1, data, ["sha256"]) metafile.verify_length_and_hashes(data) # Test with an invalid hash algorithm with self.assertRaises(ValueError): metafile = MetaFile.from_data(1, data, ["invalid_algorithm"]) metafile.verify_length_and_hashes(data) self.assertEqual( metafile, MetaFile( 1, 19, { "sha256": "fcee2e6d56ab08eab279016f7db7e4e1d172ccea78e15f4cf8bd939991a418fa" }, ), ) def test_targetfile_get_prefixed_paths(self) -> None: target = TargetFile(100, {"sha256": "abc", "md5": "def"}, "a/b/f.ext") self.assertEqual( target.get_prefixed_paths(), ["a/b/abc.f.ext", "a/b/def.f.ext"] ) target = TargetFile(100, {"sha256": "abc", "md5": "def"}, "") self.assertEqual(target.get_prefixed_paths(), ["abc.", "def."]) target = TargetFile(100, {"sha256": "abc", "md5": "def"}, "a/b/") self.assertEqual(target.get_prefixed_paths(), ["a/b/abc.", "a/b/def."]) target = TargetFile(100, {"sha256": "abc", "md5": "def"}, "f.ext") self.assertEqual( target.get_prefixed_paths(), ["abc.f.ext", "def.f.ext"] ) target = TargetFile(100, {"sha256": "abc", "md5": "def"}, "a/b/.ext") self.assertEqual( target.get_prefixed_paths(), ["a/b/abc..ext", "a/b/def..ext"] ) target = TargetFile(100, {"sha256": "abc"}, "/root/file.ext") self.assertEqual(target.get_prefixed_paths(), ["/root/abc.file.ext"]) target = TargetFile(100, {"sha256": "abc"}, "/") self.assertEqual(target.get_prefixed_paths(), ["/abc."]) def test_is_delegated_role(self) -> None: # test path matches # see more extensive tests in test_is_target_in_pathpattern() for paths in [ ["a/path"], ["otherpath", "a/path"], ["*/?ath"], ]: role = DelegatedRole("", [], 1, False, paths, None) self.assertFalse(role.is_delegated_path("a/non-matching path")) self.assertTrue(role.is_delegated_path("a/path")) # test path hash prefix matches: sha256 sum of "a/path" is 927b0ecf9... for hash_prefixes in [ ["927b0ecf9"], ["other prefix", "927b0ecf9"], ["927b0"], ["92"], ]: role = DelegatedRole("", [], 1, False, None, hash_prefixes) self.assertFalse(role.is_delegated_path("a/non-matching path")) self.assertTrue(role.is_delegated_path("a/path")) def test_is_delegated_role_in_succinct_roles(self) -> None: succinct_roles = SuccinctRoles([], 1, 5, "bin") false_role_name_examples = [ "foo", "bin-", "bin-s", "bin-0t", "bin-20", "bin-100", ] for role_name in false_role_name_examples: msg = f"Error for {role_name}" self.assertFalse(succinct_roles.is_delegated_role(role_name), msg) # delegated role name suffixes are in hex format. 
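# A short sketch of the succinct-delegation naming scheme exercised around
# here: with bit_length=5 there are 2**5 == 32 bins, so suffixes use two hex
# digits (the "bin" prefix mirrors the nearby test data; the rest is ad hoc):
from tuf.api.metadata import SuccinctRoles

succinct = SuccinctRoles([], 1, 5, "bin")  # keyids, threshold, bit_length, prefix
names = list(succinct.get_roles())
assert len(names) == 32
assert names[0] == "bin-00" and names[-1] == "bin-1f"
assert succinct.suffix_len == 2
assert succinct.is_delegated_role("bin-0a")
assert not succinct.is_delegated_role("bin-20")  # out of range for 5 bits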
true_name_examples = ["bin-00", "bin-0f", "bin-1f"] for role_name in true_name_examples: msg = f"Error for {role_name}" self.assertTrue(succinct_roles.is_delegated_role(role_name), msg) def test_get_roles_in_succinct_roles(self) -> None: succinct_roles = SuccinctRoles([], 1, 16, "bin") # bin names are in hex format and 4 hex digits are enough to represent # all bins between 0 and 2^16 - 1 meaning suffix_len must be 4 expected_suffix_length = 4 self.assertEqual(succinct_roles.suffix_len, expected_suffix_length) for bin_numer, role_name in enumerate(succinct_roles.get_roles()): # This adds zero-padding if the bin_numer is represented by a hex # number with a length less than expected_suffix_length. expected_bin_suffix = f"{bin_numer:0{expected_suffix_length}x}" self.assertEqual(role_name, f"bin-{expected_bin_suffix}") def test_delegations_get_delegated_role(self) -> None: delegations = Delegations({}, {}) targets = Targets(delegations=delegations) with self.assertRaises(ValueError): targets.get_delegated_role("abc") # test "normal" delegated role (path or path_hash_prefix) role = DelegatedRole("delegated", [], 1, False, []) delegations.roles = {"delegated": role} with self.assertRaises(ValueError): targets.get_delegated_role("not-delegated") self.assertEqual(targets.get_delegated_role("delegated"), role) delegations.roles = None # test succinct delegation bit_len = 3 role2 = SuccinctRoles([], 1, bit_len, "prefix") delegations.succinct_roles = role2 for name in ["prefix-", "prefix--1", f"prefix-{2**bit_len:0x}"]: with self.assertRaises(ValueError, msg=f"role name '{name}'"): targets.get_delegated_role(name) for i in range(2**bit_len): self.assertEqual( targets.get_delegated_role(f"prefix-{i:0x}"), role2 ) class TestSimpleEnvelope(unittest.TestCase): """Tests for public API in 'tuf/api/dsse.py'.""" @classmethod def setUpClass(cls) -> None: repo_data_dir = Path(utils.TESTS_DIR) / "repository_data" cls.metadata_dir = repo_data_dir / "repository" / "metadata" cls.keystore_dir = repo_data_dir / "keystore" cls.signers = {} root_path = os.path.join(cls.metadata_dir, "root.json") root: Root = Metadata.from_file(root_path).signed for role in [Snapshot, Targets, Timestamp]: uri = f"file2:{os.path.join(cls.keystore_dir, role.type + '_key')}" role_obj = root.get_delegated_role(role.type) key = root.get_key(role_obj.keyids[0]) cls.signers[role.type] = CryptoSigner.from_priv_key_uri(uri, key) def test_serialization(self) -> None: """Basic de/serialization test. 1. Load test metadata for each role 2. Wrap metadata payloads in envelope serializing the payload 3. Serialize envelope 4. De-serialize envelope 5. 
De-serialize payload """ for role in [Root, Timestamp, Snapshot, Targets]: metadata_path = self.metadata_dir / f"{role.type}.json" metadata = Metadata.from_file(str(metadata_path)) self.assertIsInstance(metadata.signed, role) envelope = SimpleEnvelope.from_signed(metadata.signed) envelope_bytes = envelope.to_bytes() envelope2 = SimpleEnvelope.from_bytes(envelope_bytes) payload = envelope2.get_signed() self.assertEqual(metadata.signed, payload) def test_fail_envelope_serialization(self) -> None: envelope = SimpleEnvelope(b"foo", "bar", ["baz"]) with self.assertRaises(SerializationError): envelope.to_bytes() def test_fail_envelope_deserialization(self) -> None: with self.assertRaises(DeserializationError): SimpleEnvelope.from_bytes(b"[") def test_fail_payload_serialization(self) -> None: with self.assertRaises(SerializationError): SimpleEnvelope.from_signed("foo") # type: ignore[type-var] def test_fail_payload_deserialization(self) -> None: payloads = [b"[", b'{"_type": "foo"}'] for payload in payloads: envelope = SimpleEnvelope(payload, "bar", []) with self.assertRaises(DeserializationError): envelope.get_signed() def test_verify_delegate(self) -> None: """Basic verification test. 1. Load test metadata for each role 2. Wrap non-root payloads in envelope serializing the payload 3. Sign with correct delegated key 4. Verify delegate with root """ root_path = self.metadata_dir / "root.json" root = Metadata[Root].from_file(str(root_path)).signed for role in [Timestamp, Snapshot, Targets]: metadata_path = self.metadata_dir / f"{role.type}.json" metadata = Metadata.from_file(str(metadata_path)) self.assertIsInstance(metadata.signed, role) signer = self.signers[role.type] self.assertIn(signer.public_key.keyid, root.roles[role.type].keyids) envelope = SimpleEnvelope.from_signed(metadata.signed) envelope.sign(signer) self.assertTrue(len(envelope.signatures) == 1) root.verify_delegate(role.type, envelope.pae(), envelope.signatures) # Run unit test. if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_examples.py0000644000000000000000000001320213615410400013622 0ustar00# Copyright 2020, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Unit tests for 'examples' scripts.""" from __future__ import annotations import glob import os import shutil import sys import tempfile import unittest from pathlib import Path from typing import ClassVar from tests import utils class TestRepoExamples(unittest.TestCase): """Unit test class for 'manual_repo' scripts. Provides a '_run_example_script' method to run (exec) a script located in the 'manual_repo' directory. """ repo_examples_dir: ClassVar[Path] @classmethod def setUpClass(cls) -> None: """Locate the example dir.""" base = Path(__file__).resolve().parents[1] cls.repo_examples_dir = base / "examples" / "manual_repo" def setUp(self) -> None: """Create and change into test dir. 
NOTE: Test scripts are expected to create dirs/files in new CWD.""" self.original_cwd = os.getcwd() self.base_test_dir = os.path.realpath(tempfile.mkdtemp()) os.chdir(self.base_test_dir) def tearDown(self) -> None: """Change back to original dir and remove test dir, which may contain dirs/files the test created at test-time CWD.""" os.chdir(self.original_cwd) shutil.rmtree(self.base_test_dir) def _run_script_and_assert_files( self, script_name: str, filenames_created: list[str] ) -> None: """Run script in example dir and assert that it created the files corresponding to the passed filenames inside a 'tmp*' test dir at CWD.""" script_path = str(self.repo_examples_dir / script_name) with open(script_path, "rb") as f: exec( compile(f.read(), script_path, "exec"), {"__file__": script_path}, ) test_dirs = glob.glob("tmp*") self.assertTrue( len(test_dirs) == 1, f"expected 1 'tmp*' test dir, got {test_dirs}" ) test_dir = test_dirs.pop() for name in filenames_created: metadata_path = Path(test_dir) / f"{name}" self.assertTrue( metadata_path.exists(), f"missing '{metadata_path}' file" ) def test_basic_repo(self) -> None: """Run 'basic_repo.py' and assert creation of metadata files.""" self._run_script_and_assert_files( "basic_repo.py", [ "1.python-scripts.json", "1.root.json", "1.snapshot.json", "1.targets.json", "2.root.json", "2.snapshot.json", "2.targets.json", "timestamp.json", ], ) def test_hashed_bin_delegation(self) -> None: """Run 'hashed_bin_delegation.py' and assert creation of metadata files.""" self._run_script_and_assert_files( "hashed_bin_delegation.py", [ "1.bins.json", "1.00-07.json", "1.08-0f.json", "1.10-17.json", "1.18-1f.json", "1.20-27.json", "1.28-2f.json", "1.30-37.json", "1.38-3f.json", "1.40-47.json", "1.48-4f.json", "1.50-57.json", "1.58-5f.json", "1.60-67.json", "1.68-6f.json", "1.70-77.json", "1.78-7f.json", "1.80-87.json", "1.88-8f.json", "1.90-97.json", "1.98-9f.json", "1.a0-a7.json", "1.a8-af.json", "1.b0-b7.json", "1.b8-bf.json", "1.c0-c7.json", "1.c8-cf.json", "1.d0-d7.json", "1.d8-df.json", "1.e0-e7.json", "1.e8-ef.json", "1.f0-f7.json", "1.f8-ff.json", ], ) def test_succinct_hash_bin_delegation(self) -> None: self._run_script_and_assert_files( "succinct_hash_bin_delegations.py", [ "1.targets.json", "1.delegated_bin-00.json", "1.delegated_bin-01.json", "1.delegated_bin-02.json", "1.delegated_bin-03.json", "1.delegated_bin-04.json", "1.delegated_bin-05.json", "1.delegated_bin-06.json", "1.delegated_bin-07.json", "1.delegated_bin-08.json", "1.delegated_bin-09.json", "1.delegated_bin-0a.json", "1.delegated_bin-0b.json", "1.delegated_bin-0c.json", "1.delegated_bin-0d.json", "1.delegated_bin-0e.json", "1.delegated_bin-0f.json", "1.delegated_bin-10.json", "1.delegated_bin-11.json", "1.delegated_bin-12.json", "1.delegated_bin-13.json", "1.delegated_bin-14.json", "1.delegated_bin-15.json", "1.delegated_bin-16.json", "1.delegated_bin-17.json", "1.delegated_bin-18.json", "1.delegated_bin-19.json", "1.delegated_bin-1a.json", "1.delegated_bin-1b.json", "1.delegated_bin-1c.json", "1.delegated_bin-1d.json", "1.delegated_bin-1e.json", "1.delegated_bin-1f.json", ], ) if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_fetcher_ng.py0000644000000000000000000001426713615410400014124 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Unit test for Urllib3Fetcher.""" import io import logging import math import os import sys import tempfile import unittest from typing 
import ClassVar from unittest.mock import Mock, patch import urllib3 from tests import utils from tuf.api import exceptions from tuf.ngclient import Urllib3Fetcher logger = logging.getLogger(__name__) class TestFetcher(unittest.TestCase): """Test Urllib3Fetcher class.""" server_process_handler: ClassVar[utils.TestServerProcess] @classmethod def setUpClass(cls) -> None: """ Create a temporary file and launch a simple server in the current working directory. """ cls.server_process_handler = utils.TestServerProcess(log=logger) cls.file_contents = b"junk data" cls.file_length = len(cls.file_contents) with tempfile.NamedTemporaryFile( dir=os.getcwd(), delete=False ) as cls.target_file: cls.target_file.write(cls.file_contents) cls.url_prefix = ( f"http://{utils.TEST_HOST_ADDRESS}:" f"{cls.server_process_handler.port!s}" ) target_filename = os.path.basename(cls.target_file.name) cls.url = f"{cls.url_prefix}/{target_filename}" @classmethod def tearDownClass(cls) -> None: # Stop server process and perform clean up. cls.server_process_handler.clean() os.remove(cls.target_file.name) def setUp(self) -> None: # Instantiate a concrete instance of FetcherInterface self.fetcher = Urllib3Fetcher() # Simple fetch. def test_fetch(self) -> None: with tempfile.TemporaryFile() as temp_file: for chunk in self.fetcher.fetch(self.url): temp_file.write(chunk) temp_file.seek(0) self.assertEqual(self.file_contents, temp_file.read()) # URL data downloaded in more than one chunk def test_fetch_in_chunks(self) -> None: # Set a smaller chunk size to ensure that the file will be downloaded # in more than one chunk self.fetcher.chunk_size = 4 # expected_chunks_count: 3 (depends on length of self.file_length) expected_chunks_count = math.ceil( self.file_length / self.fetcher.chunk_size ) self.assertEqual(expected_chunks_count, 3) chunks_count = 0 with tempfile.TemporaryFile() as temp_file: for chunk in self.fetcher.fetch(self.url): temp_file.write(chunk) chunks_count += 1 temp_file.seek(0) self.assertEqual(self.file_contents, temp_file.read()) # Check that we calculate chunks as expected self.assertEqual(chunks_count, expected_chunks_count) # Incorrect URL parsing def test_url_parsing(self) -> None: with self.assertRaises(exceptions.DownloadError): self.fetcher.fetch("http://invalid/") # File not found error def test_http_error(self) -> None: with self.assertRaises(exceptions.DownloadHTTPError) as cm: self.url = f"{self.url_prefix}/non-existing-path" self.fetcher.fetch(self.url) self.assertEqual(cm.exception.status_code, 404) # Response read timeout error @patch.object(urllib3.PoolManager, "request") def test_response_read_timeout(self, mock_session_get: Mock) -> None: mock_response = Mock() mock_response.status = 200 attr = { "stream.side_effect": urllib3.exceptions.MaxRetryError( urllib3.connectionpool.ConnectionPool("localhost"), "", urllib3.exceptions.TimeoutError(), ) } mock_response.configure_mock(**attr) mock_session_get.return_value = mock_response with self.assertRaises(exceptions.SlowRetrievalError): next(self.fetcher.fetch(self.url)) mock_response.stream.assert_called_once() # Read/connect session timeout error @patch.object( urllib3.PoolManager, "request", side_effect=urllib3.exceptions.MaxRetryError( urllib3.connectionpool.ConnectionPool("localhost"), "", urllib3.exceptions.TimeoutError(), ), ) def test_session_get_timeout(self, mock_session_get: Mock) -> None: with self.assertRaises(exceptions.SlowRetrievalError): self.fetcher.fetch(self.url) mock_session_get.assert_called_once() # Simple bytes download def 
test_download_bytes(self) -> None: data = self.fetcher.download_bytes(self.url, self.file_length) self.assertEqual(self.file_contents, data) # Download file smaller than required max_length def test_download_bytes_upper_length(self) -> None: data = self.fetcher.download_bytes(self.url, self.file_length + 4) self.assertEqual(self.file_contents, data) # Download a file bigger than expected def test_download_bytes_length_mismatch(self) -> None: with self.assertRaises(exceptions.DownloadLengthMismatchError): self.fetcher.download_bytes(self.url, self.file_length - 4) # Simple file download def test_download_file(self) -> None: with self.fetcher.download_file( self.url, self.file_length ) as temp_file: temp_file.seek(0, io.SEEK_END) self.assertEqual(self.file_length, temp_file.tell()) # Download file smaller than required max_length def test_download_file_upper_length(self) -> None: with self.fetcher.download_file( self.url, self.file_length + 4 ) as temp_file: temp_file.seek(0, io.SEEK_END) self.assertEqual(self.file_length, temp_file.tell()) # Download a file bigger than expected def test_download_file_length_mismatch(self) -> None: with self.assertRaises( exceptions.DownloadLengthMismatchError ), self.fetcher.download_file(self.url, self.file_length - 4): pass # we never get here as download_file() raises # Run unit test. if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_metadata_eq_.py0000644000000000000000000001525213615410400014417 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test __eq__ implementations of classes inside tuf/api/metadata.py.""" from __future__ import annotations import copy import os import sys import unittest from typing import Any, ClassVar from securesystemslib.signer import SSlibKey from tests import utils from tuf.api.metadata import ( TOP_LEVEL_ROLE_NAMES, DelegatedRole, Delegations, Metadata, MetaFile, Role, Signature, SuccinctRoles, TargetFile, ) class TestMetadataComparisons(unittest.TestCase): """Test __eq__ for all classes inside tuf/api/metadata.py.""" metadata: ClassVar[dict[str, bytes]] @classmethod def setUpClass(cls) -> None: cls.repo_dir = os.path.join( utils.TESTS_DIR, "repository_data", "repository", "metadata" ) # Store class instances in this dict instead of creating them inside the # test function in order to escape the need for reinitialization of the # instances on each run of the test function. cls.objects = {} for md in TOP_LEVEL_ROLE_NAMES: with open(os.path.join(cls.repo_dir, f"{md}.json"), "rb") as f: data = f.read() cls.objects[md.capitalize()] = Metadata.from_bytes(data).signed cls.objects["Metadata"] = Metadata(cls.objects["Timestamp"], {}) cls.objects["Signed"] = cls.objects["Timestamp"] cls.objects["Key"] = SSlibKey( "id", "rsa", "rsassa-pss-sha256", {"public": "foo"} ) cls.objects["Role"] = Role(["keyid1", "keyid2"], 3) cls.objects["MetaFile"] = MetaFile(1, 12, {"sha256": "abc"}) cls.objects["DelegatedRole"] = DelegatedRole("a", [], 1, False, ["d"]) cls.objects["SuccinctRoles"] = SuccinctRoles(["keyid"], 1, 8, "foo") cls.objects["Delegations"] = Delegations( {"keyid": cls.objects["Key"]}, {"a": cls.objects["DelegatedRole"]} ) cls.objects["TargetFile"] = TargetFile( 1, {"sha256": "abc"}, "file1.txt" ) # Keys are class names. # Values are dictionaries containing attribute names and their new values. 
classes_attributes_modifications = { "Metadata": {"signed": None, "signatures": None}, "Signed": {"version": -1, "spec_version": "0.0.0"}, "Key": {"keyid": "a", "keytype": "foo", "scheme": "b", "keyval": "b"}, "Role": {"keyids": [], "threshold": 10}, "Root": {"consistent_snapshot": None, "keys": {}}, "MetaFile": {"version": None, "length": None, "hashes": {}}, "Timestamp": {"snapshot_meta": None}, "Snapshot": {"meta": None}, "DelegatedRole": { "name": "", "terminating": None, "paths": [""], "path_hash_prefixes": [""], }, "SuccinctRoles": {"bit_length": 0, "name_prefix": ""}, "Delegations": {"keys": {}, "roles": {}}, "TargetFile": {"length": 0, "hashes": {}, "path": ""}, "Targets": {"targets": {}, "delegations": []}, } @utils.run_sub_tests_with_dataset(classes_attributes_modifications) def test_classes_eq_(self, test_case_data: dict[str, Any]) -> None: obj = self.objects[self.case_name] # Assert that obj is not equal to an object from another type self.assertNotEqual(obj, "") obj_2 = copy.deepcopy(obj) # Assert that __eq__ works for equal objects. self.assertEqual(obj, obj_2) for attr, value in test_case_data.items(): original_value = getattr(obj_2, attr) setattr(obj_2, attr, value) # Assert that the original object != modified one. self.assertNotEqual(obj, obj_2, f"Failed case: {attr}") # Restore the old value of the attribute. setattr(obj_2, attr, original_value) def test_md_eq_signatures_reversed_order(self) -> None: # Test comparing objects with same signatures but different order. # Remove all signatures and create new ones. md: Metadata = self.objects["Metadata"] md.signatures = {"a": Signature("a", "a"), "b": Signature("b", "b")} md_2 = copy.deepcopy(md) # Reverse signatures order in md_2. md_2.signatures = dict(reversed(md_2.signatures.items())) # Assert that both objects are not the same because of signatures order. self.assertNotEqual(md, md_2) # but if we fix the signatures order they will be equal md_2.signatures = {"a": Signature("a", "a"), "b": Signature("b", "b")} self.assertEqual(md, md_2) def test_md_eq_special_signatures_tests(self) -> None: # Test that metadata objects with different signatures are not equal. md: Metadata = self.objects["Metadata"] md_2 = copy.deepcopy(md) md_2.signatures = {} self.assertNotEqual(md, md_2) # Test that metadata objects with empty signatures are equal md.signatures = {} self.assertEqual(md, md_2) # Metadata objects with different signatures types are not equal. md_2.signatures = "" # type: ignore[assignment] self.assertNotEqual(md, md_2) def test_delegations_eq_roles_reversed_order(self) -> None: # Test comparing objects with same delegated roles but different order. role_one_dict = { "keyids": ["keyid1"], "name": "a", "terminating": False, "paths": ["fn1"], "threshold": 1, } role_two_dict = { "keyids": ["keyid2"], "name": "b", "terminating": True, "paths": ["fn2"], "threshold": 4, } delegations_dict = { "keys": { "keyid2": { "keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}, } }, "roles": [role_one_dict, role_two_dict], } delegations = Delegations.from_dict(copy.deepcopy(delegations_dict)) # Create a second delegations obj with reversed roles order delegations_2 = copy.deepcopy(delegations) assert isinstance(delegations.roles, dict) delegations_2.roles = dict(reversed(delegations.roles.items())) # Both objects are not the equal because of delegated roles order. 
self.assertNotEqual(delegations, delegations_2) # but if we fix the delegated roles order they will be equal delegations_2.roles = delegations.roles self.assertEqual(delegations, delegations_2) # Run unit test. if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_metadata_generation.py0000644000000000000000000000124113615410400015777 0ustar00"""Unit tests for 'tests/generated_data/generate_md.py'.""" # Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 import sys import unittest from tests import utils from tests.generated_data.generate_md import generate_all_files class TestMetadataGeneration(unittest.TestCase): """Test metadata files generation.""" @staticmethod def test_compare_static_md_to_generated() -> None: # md_generator = MetadataGenerator("generated_data/ed25519_metadata") generate_all_files(dump=False) # Run unit test. if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_metadata_serialization.py0000644000000000000000000010073413615410400016530 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Unit tests testing tuf/api/metadata.py classes serialization and deserialization. """ import copy import json import logging import sys import unittest from securesystemslib.signer import Signature from tests import utils from tuf.api.metadata import ( DelegatedRole, Delegations, Key, Metadata, MetaFile, Role, Root, Snapshot, SuccinctRoles, TargetFile, Targets, Timestamp, ) from tuf.api.serialization import DeserializationError logger = logging.getLogger(__name__) class TestSerialization(unittest.TestCase): """Test serialization for all classes in 'tuf/api/metadata.py'.""" invalid_metadata = { "no signatures field": b'{"signed": \ { "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}} \ }', "non unique duplicating signatures": b'{"signed": \ { "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}}, \ "signatures": [{"keyid": "id", "sig": "b"}, {"keyid": "id", "sig": "b"}] \ }', } @utils.run_sub_tests_with_dataset(invalid_metadata) def test_invalid_metadata_serialization(self, test_data: bytes) -> None: # We expect a DeserializationError reraised from ValueError or KeyError. 
with self.assertRaises(DeserializationError): Metadata.from_bytes(test_data) valid_metadata = { "multiple signatures": b'{ \ "signed": \ { "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}}, \ "signatures": [{ "keyid": "id", "sig": "b"}, {"keyid": "id2", "sig": "d" }] \ }', "no signatures": b'{ \ "signed": \ { "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}}, \ "signatures": [] \ }', "unrecognized fields": b'{ \ "signed": \ { "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}}, \ "signatures": [{"keyid": "id", "sig": "b"}], \ "foo": "bar" \ }', } @utils.run_sub_tests_with_dataset(valid_metadata) def test_valid_metadata_serialization(self, test_case_data: bytes) -> None: md = Metadata.from_bytes(test_case_data) # Convert to a JSON and sort the keys the way we do in JSONSerializer. separators = (",", ":") test_json = json.loads(test_case_data) test_bytes = json.dumps( test_json, separators=separators, sort_keys=True ).encode("utf-8") self.assertEqual(test_bytes, md.to_bytes()) invalid_signatures = { "missing keyid attribute in a signature": '{ "sig": "abc" }', "missing sig attribute in a signature": '{ "keyid": "id" }', } @utils.run_sub_tests_with_dataset(invalid_signatures) def test_invalid_signature_serialization(self, test_data: str) -> None: case_dict = json.loads(test_data) with self.assertRaises(KeyError): Signature.from_dict(case_dict) valid_signatures = { "all": '{ "keyid": "id", "sig": "b"}', "unrecognized fields": '{ "keyid": "id", "sig": "b", "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_signatures) def test_signature_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) signature = Signature.from_dict(copy.copy(case_dict)) self.assertEqual(case_dict, signature.to_dict()) # Snapshot instances with meta = {} are valid, but for a full valid # repository it's required that meta has at least one element inside it. 
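    # Snapshot is used here as a convenient concrete Signed subclass: the
    # cases below exercise the fields shared by every Signed type
    # (_type, spec_version, version, expires) rather than anything
    # snapshot-specific.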
invalid_signed = { "no _type": '{"spec_version": "1.0.0", "expires": "2030-01-01T00:00:00Z", "meta": {}}', "no spec_version": '{"_type": "snapshot", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "no version": '{"_type": "snapshot", "spec_version": "1.0.0", "expires": "2030-01-01T00:00:00Z", "meta": {}}', "no expires": '{"_type": "snapshot", "spec_version": "1.0.0", "version": 1, "meta": {}}', "empty str _type": '{"_type": "", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "empty str spec_version": '{"_type": "snapshot", "spec_version": "", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "_type wrong type": '{"_type": "foo", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "version wrong type": '{"_type": "snapshot", "spec_version": "1.0.0", "version": "a", "expires": "2030-01-01T00:00:00Z", "meta": {}}', "invalid spec_version str": '{"_type": "snapshot", "spec_version": "abc", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "non-number spec_version": '{"_type": "snapshot", "spec_version": "1.2.a", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "one part spec_version": '{"_type": "snapshot", "spec_version": "1", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "different major spec_version": '{"_type": "snapshot", "spec_version": "0.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "version 0": '{"_type": "snapshot", "spec_version": "1.0.0", "version": 0, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "version below 0": '{"_type": "snapshot", "spec_version": "1.0.0", "version": -1, "expires": "2030-01-01T00:00:00Z", "meta": {}}', "wrong datetime string": '{"_type": "snapshot", "spec_version": "1.0.0", "version": 1, "expires": "abc", "meta": {}}', } @utils.run_sub_tests_with_dataset(invalid_signed) def test_invalid_signed_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) with self.assertRaises((KeyError, ValueError, TypeError)): Snapshot.from_dict(case_dict) valid_keys = { "all": '{"keytype": "rsa", "scheme": "rsassa-pss-sha256", \ "keyval": {"public": "foo"}}', "unrecognized field": '{"keytype": "rsa", "scheme": "rsassa-pss-sha256", \ "keyval": {"public": "foo"}, "foo": "bar"}', "unrecognized field in keyval": '{"keytype": "rsa", "scheme": "rsassa-pss-sha256", \ "keyval": {"public": "foo", "foo": "bar"}}', } @utils.run_sub_tests_with_dataset(valid_keys) def test_valid_key_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) key = Key.from_dict("id", copy.copy(case_dict)) self.assertDictEqual(case_dict, key.to_dict()) invalid_keys = { "no keyid": '{"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "abc"}}', "no keytype": '{"keyid": "id", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}', "no scheme": '{"keyid": "id", "keytype": "rsa", "keyval": {"public": "foo"}}', "no keyval": '{"keyid": "id", "keytype": "rsa", "scheme": "rsassa-pss-sha256"}', "keyid wrong type": '{"keyid": 1, "keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "abc"}}', "keytype wrong type": '{"keyid": "id", "keytype": 1, "scheme": "rsassa-pss-sha256", "keyval": {"public": "abc"}}', "scheme wrong type": '{"keyid": "id", "keytype": "rsa", "scheme": 1, "keyval": {"public": "abc"}}', "keyval wrong type": '{"keyid": "id", "keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": 1}', } 
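    # Note: Key.from_dict() takes the keyid as a separate first argument, so
    # the test below pops "keyid" from the dict before calling it; in the
    # "no keyid" case the pop itself raises KeyError, which assertRaises
    # also accepts.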
@utils.run_sub_tests_with_dataset(invalid_keys) def test_invalid_key_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) with self.assertRaises((TypeError, KeyError, ValueError)): keyid = case_dict.pop("keyid") Key.from_dict(keyid, case_dict) invalid_roles = { "no threshold": '{"keyids": ["keyid"]}', "no keyids": '{"threshold": 3}', "wrong threshold type": '{"keyids": ["keyid"], "threshold": "a"}', "wrong keyids type": '{"keyids": 1, "threshold": 3}', "threshold below 1": '{"keyids": ["keyid"], "threshold": 0}', "duplicate keyids": '{"keyids": ["keyid", "keyid"], "threshold": 3}', } @utils.run_sub_tests_with_dataset(invalid_roles) def test_invalid_role_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) with self.assertRaises((KeyError, TypeError, ValueError)): Role.from_dict(case_dict) valid_roles = { "all": '{"keyids": ["keyid"], "threshold": 3}', "many keyids": '{"keyids": ["a", "b", "c", "d", "e"], "threshold": 1}', "ordered keyids": '{"keyids": ["c", "b", "a"], "threshold": 1}', "empty keyids": '{"keyids": [], "threshold": 1}', "unrecognized field": '{"keyids": ["keyid"], "threshold": 3, "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_roles) def test_role_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) role = Role.from_dict(copy.deepcopy(case_dict)) self.assertDictEqual(case_dict, role.to_dict()) valid_roots = { "all": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", "consistent_snapshot": false, \ "keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": { \ "root": {"keyids": ["keyid1"], "threshold": 1}, \ "timestamp": {"keyids": ["keyid2"], "threshold": 1}, \ "targets": {"keyids": ["keyid1"], "threshold": 1}, \ "snapshot": {"keyids": ["keyid2"], "threshold": 1}} \ }', "no consistent_snapshot": '{ "_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", \ "keys": {"keyid" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"} }}, \ "roles": { \ "root": {"keyids": ["keyid"], "threshold": 1}, \ "timestamp": {"keyids": ["keyid"], "threshold": 1}, \ "targets": {"keyids": ["keyid"], "threshold": 1}, \ "snapshot": {"keyids": ["keyid"], "threshold": 1}} \ }', "empty keys": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", "consistent_snapshot": false, \ "keys": {}, \ "roles": { \ "root": {"keyids": [], "threshold": 1}, \ "timestamp": {"keyids": [], "threshold": 1}, \ "targets": {"keyids": [], "threshold": 1}, \ "snapshot": {"keyids": [], "threshold": 1}} \ }', "unrecognized field": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", "consistent_snapshot": false, \ "keys": {"keyid" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": { \ "root": {"keyids": ["keyid"], "threshold": 1}, \ "timestamp": {"keyids": ["keyid"], "threshold": 1}, \ "targets": {"keyids": ["keyid"], "threshold": 1}, \ "snapshot": {"keyids": ["keyid"], "threshold": 1} \ }, \ "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_roots) def test_root_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) root = Root.from_dict(copy.deepcopy(case_dict)) self.assertDictEqual(case_dict, 
root.to_dict()) invalid_roots = { "invalid role name": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", "consistent_snapshot": false, \ "keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": { \ "bar": {"keyids": ["keyid1"], "threshold": 1}, \ "timestamp": {"keyids": ["keyid2"], "threshold": 1}, \ "targets": {"keyids": ["keyid1"], "threshold": 1}, \ "snapshot": {"keyids": ["keyid2"], "threshold": 1}} \ }', "missing root role": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", "consistent_snapshot": false, \ "keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": { \ "timestamp": {"keyids": ["keyid2"], "threshold": 1}, \ "targets": {"keyids": ["keyid1"], "threshold": 1}, \ "snapshot": {"keyids": ["keyid2"], "threshold": 1}} \ }', "one additional role": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T00:00:00Z", "consistent_snapshot": false, \ "keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": { \ "root": {"keyids": ["keyid1"], "threshold": 1}, \ "timestamp": {"keyids": ["keyid2"], "threshold": 1}, \ "targets": {"keyids": ["keyid1"], "threshold": 1}, \ "snapshot": {"keyids": ["keyid2"], "threshold": 1}, \ "foo": {"keyids": ["keyid2"], "threshold": 1}} \ }', "invalid expiry with microseconds": '{"_type": "root", "spec_version": "1.0.0", "version": 1, \ "expires": "2030-01-01T12:00:00.123456Z", "consistent_snapshot": false, \ "keys": {}, "roles": {"root": {}, "timestamp": {}, "targets": {}, "snapshot": {}}}', } @utils.run_sub_tests_with_dataset(invalid_roots) def test_invalid_root_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) with self.assertRaises(ValueError): Root.from_dict(case_dict) invalid_metafiles = { "wrong length type": '{"version": 1, "length": "a", "hashes": {"sha256" : "abc"}}', "version 0": '{"version": 0, "length": 1, "hashes": {"sha256" : "abc"}}', "length below 0": '{"version": 1, "length": -1, "hashes": {"sha256" : "abc"}}', "empty hashes dict": '{"version": 1, "length": 1, "hashes": {}}', "hashes wrong type": '{"version": 1, "length": 1, "hashes": 1}', "hashes values wrong type": '{"version": 1, "length": 1, "hashes": {"sha256": 1}}', } @utils.run_sub_tests_with_dataset(invalid_metafiles) def test_invalid_metafile_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) with self.assertRaises((TypeError, ValueError, AttributeError)): MetaFile.from_dict(case_dict) valid_metafiles = { "all": '{"hashes": {"sha256" : "abc"}, "length": 12, "version": 1}', "no length": '{"hashes": {"sha256" : "abc"}, "version": 1 }', "length 0": '{"version": 1, "length": 0, "hashes": {"sha256" : "abc"}}', "no hashes": '{"length": 12, "version": 1}', "unrecognized field": '{"hashes": {"sha256" : "abc"}, "length": 12, "version": 1, "foo": "bar"}', "many hashes": '{"hashes": {"sha256" : "abc", "sha512": "cde"}, "length": 12, "version": 1}', } @utils.run_sub_tests_with_dataset(valid_metafiles) def test_metafile_serialization(self, test_case_data: str) -> None: case_dict 
= json.loads(test_case_data) metafile = MetaFile.from_dict(copy.copy(case_dict)) self.assertDictEqual(case_dict, metafile.to_dict()) invalid_timestamps = { "no metafile": '{ "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z"}', } @utils.run_sub_tests_with_dataset(invalid_timestamps) def test_invalid_timestamp_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) with self.assertRaises((ValueError, KeyError)): Timestamp.from_dict(case_dict) valid_timestamps = { "all": '{ "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}}', "legacy spec_version": '{ "_type": "timestamp", "spec_version": "1.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}}', "unrecognized field": '{ "_type": "timestamp", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {"snapshot.json": {"hashes": {"sha256" : "abc"}, "version": 1}}, "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_timestamps) def test_timestamp_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) timestamp = Timestamp.from_dict(copy.deepcopy(case_dict)) self.assertDictEqual(case_dict, timestamp.to_dict()) valid_snapshots = { "all": '{ "_type": "snapshot", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": { \ "file1.txt": {"hashes": {"sha256" : "abc"}, "version": 1}, \ "file2.txt": {"hashes": {"sha256" : "cde"}, "version": 1} \ }}', "empty meta": '{ "_type": "snapshot", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": {} \ }', "unrecognized field": '{ "_type": "snapshot", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "meta": { "file.txt": { "hashes": {"sha256" : "abc"}, "version": 1 }}, "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_snapshots) def test_snapshot_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) snapshot = Snapshot.from_dict(copy.deepcopy(case_dict)) self.assertDictEqual(case_dict, snapshot.to_dict()) valid_delegated_roles = { # DelegatedRole inherits Role and some use cases can be found in the valid_roles. 
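        # Each valid case below sets exactly one of "paths" or
        # "path_hash_prefixes"; supplying both, or neither, is covered as an
        # error case in invalid_delegated_roles further below.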
"no hash prefix attribute": '{"keyids": ["keyid"], "name": "a", "paths": ["fn1", "fn2"], \ "terminating": false, "threshold": 1}', "no path attribute": '{"keyids": ["keyid"], "name": "a", "terminating": false, \ "path_hash_prefixes": ["h1", "h2"], "threshold": 99}', "empty paths": '{"keyids": ["keyid"], "name": "a", "paths": [], \ "terminating": false, "threshold": 1}', "empty path_hash_prefixes": '{"keyids": ["keyid"], "name": "a", "terminating": false, \ "path_hash_prefixes": [], "threshold": 99}', "unrecognized field": '{"keyids": ["keyid"], "name": "a", "terminating": true, "paths": ["fn1"], "threshold": 3, "foo": "bar"}', "many keyids": '{"keyids": ["keyid1", "keyid2"], "name": "a", "paths": ["fn1", "fn2"], \ "terminating": false, "threshold": 1}', "ordered keyids": '{"keyids": ["keyid2", "keyid1"], "name": "a", "paths": ["fn1", "fn2"], \ "terminating": false, "threshold": 1}', } @utils.run_sub_tests_with_dataset(valid_delegated_roles) def test_delegated_role_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) deserialized_role = DelegatedRole.from_dict(copy.copy(case_dict)) self.assertDictEqual(case_dict, deserialized_role.to_dict()) invalid_delegated_roles = { # DelegatedRole inherits Role and some use cases can be found in the invalid_roles. "missing hash prefixes and paths": '{"name": "a", "keyids": ["keyid"], "threshold": 1, "terminating": false}', "both hash prefixes and paths": '{"name": "a", "keyids": ["keyid"], "threshold": 1, "terminating": false, \ "paths": ["fn1", "fn2"], "path_hash_prefixes": ["h1", "h2"]}', "invalid path type": '{"keyids": ["keyid"], "name": "a", "paths": [1,2,3], \ "terminating": false, "threshold": 1}', "invalid path_hash_prefixes type": '{"keyids": ["keyid"], "name": "a", "path_hash_prefixes": [1,2,3], \ "terminating": false, "threshold": 1}', } @utils.run_sub_tests_with_dataset(invalid_delegated_roles) def test_invalid_delegated_role_serialization( self, test_case_data: str ) -> None: case_dict = json.loads(test_case_data) with self.assertRaises(ValueError): DelegatedRole.from_dict(case_dict) valid_succinct_roles = { # SuccinctRoles inherits Role and some use cases can be found in the valid_roles. "standard succinct_roles information": '{"keyids": ["keyid"], "threshold": 1, \ "bit_length": 8, "name_prefix": "foo"}', "succinct_roles with unrecognized fields": '{"keyids": ["keyid"], "threshold": 1, \ "bit_length": 8, "name_prefix": "foo", "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_succinct_roles) def test_succinct_roles_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) succinct_roles = SuccinctRoles.from_dict(copy.copy(case_dict)) self.assertDictEqual(case_dict, succinct_roles.to_dict()) invalid_succinct_roles = { # SuccinctRoles inherits Role and some use cases can be found in the invalid_roles. 
"missing bit_length from succinct_roles": '{"keyids": ["keyid"], "threshold": 1, "name_prefix": "foo"}', "missing name_prefix from succinct_roles": '{"keyids": ["keyid"], "threshold": 1, "bit_length": 8}', "succinct_roles with invalid bit_length type": '{"keyids": ["keyid"], "threshold": 1, "bit_length": "a", "name_prefix": "foo"}', "succinct_roles with invalid name_prefix type": '{"keyids": ["keyid"], "threshold": 1, "bit_length": 8, "name_prefix": 1}', "succinct_roles with high bit_length value": '{"keyids": ["keyid"], "threshold": 1, "bit_length": 50, "name_prefix": "foo"}', "succinct_roles with low bit_length value": '{"keyids": ["keyid"], "threshold": 1, "bit_length": 0, "name_prefix": "foo"}', } @utils.run_sub_tests_with_dataset(invalid_succinct_roles) def test_invalid_succinct_roles_serialization(self, test_data: str) -> None: case_dict = json.loads(test_data) with self.assertRaises((ValueError, KeyError, TypeError)): SuccinctRoles.from_dict(case_dict) invalid_delegations = { "empty delegations": "{}", "missing keys": '{ "roles": [ \ {"keyids": ["keyid"], "name": "a", "terminating": true, "paths": ["fn1"], "threshold": 3}, \ {"keyids": ["keyid2"], "name": "b", "terminating": true, "paths": ["fn2"], "threshold": 4} ] \ }', "missing roles": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}}', "bad keys": '{"keys": "foo", \ "roles": [{"keyids": ["keyid"], "name": "a", "paths": ["fn1", "fn2"], "terminating": false, "threshold": 3}]}', "bad roles": '{"keys": {"keyid" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": ["foo"]}', "duplicate role names": '{"keys": {"keyid" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ \ {"keyids": ["keyid"], "name": "a", "paths": ["fn1", "fn2"], "terminating": false, "threshold": 3}, \ {"keyids": ["keyid2"], "name": "a", "paths": ["fn3"], "terminating": false, "threshold": 2} \ ] \ }', "using empty string role name": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ \ {"keyids": ["keyid1"], "name": "", "terminating": true, "paths": ["fn1"], "threshold": 3}] \ }', "using root as delegate role name": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ \ {"keyids": ["keyid1"], "name": "root", "terminating": true, "paths": ["fn1"], "threshold": 3}] \ }', "using snapshot as delegate role name": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ \ {"keyids": ["keyid1"], "name": "snapshot", "terminating": true, "paths": ["fn1"], "threshold": 3}] \ }', "using targets as delegate role name": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ \ {"keyids": ["keyid1"], "name": "targets", "terminating": true, "paths": ["fn1"], "threshold": 3}] \ }', "using timestamp as delegate role name": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ \ {"keyids": ["keyid1"], "name": "timestamp", "terminating": true, "paths": ["fn1"], "threshold": 3}] \ }', "using valid and top-level role name": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : 
{"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": [ \ {"keyids": ["keyid1"], "name": "b", "terminating": true, "paths": ["fn1"], "threshold": 3}, \ {"keyids": ["keyid2"], "name": "root", "terminating": true, "paths": ["fn2"], "threshold": 4} ] \ }', "roles and succinct_roles set": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": [ \ {"keyids": ["keyid"], "name": "a", "terminating": true, "paths": ["fn1"], "threshold": 3}, \ {"keyids": ["keyid2"], "name": "b", "terminating": true, "paths": ["fn2"], "threshold": 4} ], \ "succinct_roles": {"keyids": ["keyid"], "threshold": 1, "bit_length": 8, "name_prefix": "foo"}}', } @utils.run_sub_tests_with_dataset(invalid_delegations) def test_invalid_delegation_serialization( self, test_case_data: str ) -> None: case_dict = json.loads(test_case_data) with self.assertRaises((ValueError, KeyError, AttributeError)): Delegations.from_dict(case_dict) valid_delegations = { "with roles": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": [ \ {"keyids": ["keyid"], "name": "a", "terminating": true, "paths": ["fn1"], "threshold": 3}, \ {"keyids": ["keyid2"], "name": "b", "terminating": true, "paths": ["fn2"], "threshold": 4} ] \ }', "with succinct_roles": '{"keys": { \ "keyid1" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2" : {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "succinct_roles": {"keyids": ["keyid"], "threshold": 1, "bit_length": 8, "name_prefix": "foo"}}', "unrecognized field": '{"keys": {"keyid" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}}, \ "roles": [ {"keyids": ["keyid"], "name": "a", "paths": ["fn1", "fn2"], "terminating": true, "threshold": 3} ], \ "foo": "bar"}', "empty keys and roles": '{"keys": {}, \ "roles": [] \ }', } @utils.run_sub_tests_with_dataset(valid_delegations) def test_delegation_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) delegation = Delegations.from_dict(copy.deepcopy(case_dict)) self.assertDictEqual(case_dict, delegation.to_dict()) invalid_targetfiles = { "no hashes": '{"length": 1}', "no length": '{"hashes": {"sha256": "abc"}}', # The remaining cases are the same as for invalid_hashes and # invalid_length datasets. 
} @utils.run_sub_tests_with_dataset(invalid_targetfiles) def test_invalid_targetfile_serialization( self, test_case_data: str ) -> None: case_dict = json.loads(test_case_data) with self.assertRaises(KeyError): TargetFile.from_dict(case_dict, "file1.txt") valid_targetfiles = { "all": '{"length": 12, "hashes": {"sha256" : "abc"}, \ "custom" : {"foo": "bar"} }', "no custom": '{"length": 12, "hashes": {"sha256" : "abc"}}', "unrecognized field": '{"length": 12, "hashes": {"sha256" : "abc"}, \ "custom" : {"foo": "bar"}, "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_targetfiles) def test_targetfile_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) target_file = TargetFile.from_dict(copy.copy(case_dict), "file1.txt") self.assertDictEqual(case_dict, target_file.to_dict()) valid_targets = { "all attributes": '{"_type": "targets", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "targets": { \ "file.txt": {"length": 12, "hashes": {"sha256" : "abc"} }, \ "file2.txt": {"length": 50, "hashes": {"sha256" : "cde"} } }, \ "delegations": { \ "keys": { \ "keyid" : {"keytype": "rsa", "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"}}, \ "keyid2": {"keytype": "ed25519", "scheme": "ed25519", "keyval": {"public": "bar"}}}, \ "roles": [ \ {"keyids": ["keyid"], "name": "a", "terminating": true, "paths": ["fn1"], "threshold": 3}, \ {"keyids": ["keyid2"], "name": "b", "terminating": true, "paths": ["fn2"], "threshold": 4} ] \ }}', "empty targets": '{"_type": "targets", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "targets": {}, \ "delegations": {"keys": {"keyid" : {"keytype": "rsa", \ "scheme": "rsassa-pss-sha256", "keyval": {"public": "foo"} }}, \ "roles": [ {"keyids": ["keyid"], "name": "a", "paths": ["fn1", "fn2"], "terminating": true, "threshold": 3} ]} \ }', "no delegations": '{"_type": "targets", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "targets": { "file.txt": {"length": 12, "hashes": {"sha256" : "abc"} } } \ }', "unrecognized_field": '{"_type": "targets", "spec_version": "1.0.0", "version": 1, "expires": "2030-01-01T00:00:00Z", \ "targets": {}, "foo": "bar"}', } @utils.run_sub_tests_with_dataset(valid_targets) def test_targets_serialization(self, test_case_data: str) -> None: case_dict = json.loads(test_case_data) targets = Targets.from_dict(copy.deepcopy(case_dict)) self.assertDictEqual(case_dict, targets.to_dict()) # Run unit test. if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_proxy_environment.py0000644000000000000000000002014513615410400015615 0ustar00# Copyright 2025, the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test ngclient ProxyEnvironment""" from __future__ import annotations import sys import unittest from unittest.mock import Mock, patch from urllib3 import PoolManager, ProxyManager from tests import utils from tuf.ngclient._internal.proxy import ProxyEnvironment class TestProxyEnvironment(unittest.TestCase): """Test ngclient ProxyEnvironment implementation These tests use the ProxyEnvironment.get_pool_manager() endpoint and then look at the ProxyEnvironment._poolmanagers dict keys to decide if the result is correct. The test environment is changed via mocking getproxies(): this is a urllib method that returns a dict with the proxy environment variable contents. 
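    As an illustration: with getproxies() mocked to return
    {"https": "http://localhost:9999", "no": "example.com"}, a call to
    get_pool_manager("https", "example.com") should end up using a plain
    PoolManager, while other https hosts should get a ProxyManager for
    http://localhost:9999. The individual test cases below follow this
    pattern with slightly larger environments.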
Testing ProxyEnvironment.request() would possibly be better but far more difficult: the current test implementation does not require actually setting up all of the different proxies. """ def assert_pool_managers( self, env: ProxyEnvironment, expected: list[str | None] ) -> None: # Pool managers have the expected proxy urls self.assertEqual(list(env._pool_managers.keys()), expected) # Pool manager types are as expected for proxy_url, pool_manager in env._pool_managers.items(): self.assertIsInstance(pool_manager, PoolManager) if proxy_url is not None: self.assertIsInstance(pool_manager, ProxyManager) @patch("tuf.ngclient._internal.proxy.getproxies") def test_no_variables(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = {} env = ProxyEnvironment() env.get_pool_manager("http", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "subdomain.example.com") env.get_pool_manager("https", "differentsite.com") # There is a single pool manager (no proxies) self.assert_pool_managers(env, [None]) @patch("tuf.ngclient._internal.proxy.getproxies") def test_proxy_set(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "https": "http://localhost:8888", } env = ProxyEnvironment() env.get_pool_manager("http", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "differentsite.com") # There are two pool managers: A plain poolmanager and https proxymanager self.assert_pool_managers(env, [None, "http://localhost:8888"]) @patch("tuf.ngclient._internal.proxy.getproxies") def test_proxies_set(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "http": "http://localhost:8888", "https": "http://localhost:9999", } env = ProxyEnvironment() env.get_pool_manager("http", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "subdomain.example.com") env.get_pool_manager("https", "differentsite.com") # There are two pool managers: A http proxymanager and https proxymanager self.assert_pool_managers( env, ["http://localhost:8888", "http://localhost:9999"] ) @patch("tuf.ngclient._internal.proxy.getproxies") def test_no_proxy_set(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "http": "http://localhost:8888", "https": "http://localhost:9999", "no": "somesite.com, example.com, another.site.com", } env = ProxyEnvironment() env.get_pool_manager("http", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "example.com") # There is a single pool manager (no proxies) self.assert_pool_managers(env, [None]) env.get_pool_manager("http", "differentsite.com") env.get_pool_manager("https", "differentsite.com") # There are three pool managers: plain poolmanager for no_proxy domains, # http proxymanager and https proxymanager self.assert_pool_managers( env, [None, "http://localhost:8888", "http://localhost:9999"] ) @patch("tuf.ngclient._internal.proxy.getproxies") def test_no_proxy_subdomain_match(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "https": "http://localhost:9999", "no": "somesite.com, example.com, another.site.com", } env = ProxyEnvironment() # this should match example.com in no_proxy env.get_pool_manager("https", "subdomain.example.com") # There is a single pool manager (no proxies) self.assert_pool_managers(env, [None]) # this should not match 
example.com in no_proxy env.get_pool_manager("https", "xexample.com") # There are two pool managers: plain poolmanager for no_proxy domains, # and a https proxymanager self.assert_pool_managers(env, [None, "http://localhost:9999"]) @patch("tuf.ngclient._internal.proxy.getproxies") def test_no_proxy_wildcard(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "https": "http://localhost:8888", "no": "*", } env = ProxyEnvironment() env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "differentsite.com") env.get_pool_manager("https", "subdomain.example.com") # There is a single pool manager, no proxies self.assert_pool_managers(env, [None]) @patch("tuf.ngclient._internal.proxy.getproxies") def test_no_proxy_leading_dot(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "https": "http://localhost:8888", "no": ".example.com", } env = ProxyEnvironment() env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "subdomain.example.com") # There is a single pool manager, no proxies self.assert_pool_managers(env, [None]) @patch("tuf.ngclient._internal.proxy.getproxies") def test_all_proxy_set(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "all": "http://localhost:8888", } env = ProxyEnvironment() env.get_pool_manager("http", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "subdomain.example.com") env.get_pool_manager("https", "differentsite.com") # There is a single proxy manager self.assert_pool_managers(env, ["http://localhost:8888"]) # This urllib3 currently only handles http and https but let's test anyway env.get_pool_manager("file", None) # proxy manager and a plain pool manager self.assert_pool_managers(env, ["http://localhost:8888", None]) @patch("tuf.ngclient._internal.proxy.getproxies") def test_all_proxy_and_no_proxy_set(self, mock_getproxies: Mock) -> None: mock_getproxies.return_value = { "all": "http://localhost:8888", "no": "somesite.com, example.com, another.site.com", } env = ProxyEnvironment() env.get_pool_manager("http", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "example.com") env.get_pool_manager("https", "subdomain.example.com") # There is a single pool manager (no proxies) self.assert_pool_managers(env, [None]) env.get_pool_manager("http", "differentsite.com") env.get_pool_manager("https", "differentsite.com") # There are two pool managers: plain poolmanager for no_proxy domains and # one proxymanager self.assert_pool_managers(env, [None, "http://localhost:8888"]) if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_repository.py0000644000000000000000000002204413615410400014227 0ustar00# Copyright 2024 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Tests for tuf.repository module""" from __future__ import annotations import copy import logging import sys import unittest from collections import defaultdict from datetime import datetime, timedelta, timezone from securesystemslib.signer import CryptoSigner, Signer from tests import utils from tuf.api.metadata import ( TOP_LEVEL_ROLE_NAMES, DelegatedRole, Delegations, Metadata, MetaFile, Root, Snapshot, TargetFile, Targets, Timestamp, ) from tuf.repository import Repository logger = logging.getLogger(__name__) _signed_init = { Root.type: Root, Snapshot.type: Snapshot, Targets.type: Targets, Timestamp.type: Timestamp, } class 
TestingRepository(Repository): """Very simple in-memory repository implementation This repository keeps the metadata for all versions of all roles in memory. It also keeps all target content in memory. Mostly copied from examples/repository. Attributes: role_cache: Every historical metadata version of every role in this repository. Keys are role names and values are lists of Metadata signer_cache: All signers available to the repository. Keys are role names, values are lists of signers """ expiry_period = timedelta(days=1) def __init__(self) -> None: # all versions of all metadata self.role_cache: dict[str, list[Metadata]] = defaultdict(list) # all current keys self.signer_cache: dict[str, list[Signer]] = defaultdict(list) # version cache for snapshot and all targets, updated in close(). # The 'defaultdict(lambda: ...)' trick allows close() to easily modify # the version without always creating a new MetaFile self._snapshot_info = MetaFile(1) self._targets_infos: dict[str, MetaFile] = defaultdict( lambda: MetaFile(1) ) # setup a basic repository, generate signing key per top-level role with self.edit_root() as root: for role in ["root", "timestamp", "snapshot", "targets"]: signer = CryptoSigner.generate_ecdsa() self.signer_cache[role].append(signer) root.add_key(signer.public_key, role) for role in ["timestamp", "snapshot", "targets"]: with self.edit(role): pass @property def targets_infos(self) -> dict[str, MetaFile]: return self._targets_infos @property def snapshot_info(self) -> MetaFile: return self._snapshot_info def open(self, role: str) -> Metadata: """Return current Metadata for role from 'storage' (or create a new one) """ if role not in self.role_cache: signed_init = _signed_init.get(role, Targets) md = Metadata(signed_init()) # this makes version bumping in close() simpler md.signed.version = 0 return md # return a _copy_ of latest metadata from storage return copy.deepcopy(self.role_cache[role][-1]) def close(self, role: str, md: Metadata) -> None: """Store a version of metadata. 
Handle version bumps, expiry, signing""" md.signed.version += 1 md.signed.expires = datetime.now(timezone.utc) + self.expiry_period md.signatures.clear() for signer in self.signer_cache[role]: md.sign(signer, append=True) # store new metadata version, update version caches self.role_cache[role].append(md) if role == "snapshot": self._snapshot_info.version = md.signed.version elif role not in ["root", "timestamp"]: self._targets_infos[f"{role}.json"].version = md.signed.version class TestRepository(unittest.TestCase): """Tests for tuf.repository module.""" def setUp(self) -> None: self.repo = TestingRepository() def test_initial_repo_setup(self) -> None: # check that we have metadata for top level roles self.assertEqual(4, len(self.repo.role_cache)) for role in TOP_LEVEL_ROLE_NAMES: # There should be a single version for each role role_versions = self.repo.role_cache[role] self.assertEqual(1, len(role_versions)) self.assertEqual(1, role_versions[-1].signed.version) # test the Repository helpers: self.assertIsInstance(self.repo.root(), Root) self.assertIsInstance(self.repo.timestamp(), Timestamp) self.assertIsInstance(self.repo.snapshot(), Snapshot) self.assertIsInstance(self.repo.targets(), Targets) def test_do_snapshot(self) -> None: # Expect no-op because targets have not changed and snapshot is still valid created, _ = self.repo.do_snapshot() self.assertFalse(created) snapshot_versions = self.repo.role_cache["snapshot"] self.assertEqual(1, len(snapshot_versions)) self.assertEqual(1, snapshot_versions[-1].signed.version) def test_do_snapshot_after_targets_change(self) -> None: # do a targets change, expect do_snapshot to create a new snapshot with self.repo.edit_targets() as targets: targets.targets["path"] = TargetFile.from_data("path", b"data") created, _ = self.repo.do_snapshot() self.assertTrue(created) snapshot_versions = self.repo.role_cache["snapshot"] self.assertEqual(2, len(snapshot_versions)) self.assertEqual(2, snapshot_versions[-1].signed.version) def test_do_snapshot_after_new_targets_delegation(self) -> None: # Add new delegated target, expect do_snapshot to create a new snapshot signer = CryptoSigner.generate_ecdsa() self.repo.signer_cache["delegated"].append(signer) # Add a new delegation to targets with self.repo.edit_targets() as targets: role = DelegatedRole("delegated", [], 1, True, []) targets.delegations = Delegations({}, {"delegated": role}) targets.add_key(signer.public_key, "delegated") # create a version of the delegated metadata with self.repo.edit("delegated") as _: pass created, _ = self.repo.do_snapshot() self.assertTrue(created) snapshot_versions = self.repo.role_cache["snapshot"] self.assertEqual(2, len(snapshot_versions)) self.assertEqual(2, snapshot_versions[-1].signed.version) def test_do_snapshot_after_snapshot_key_change(self) -> None: # change snapshot signing keys with self.repo.edit_root() as root: # remove key keyid = root.roles["snapshot"].keyids[0] root.revoke_key(keyid, "snapshot") self.repo.signer_cache["snapshot"].clear() # add new key signer = CryptoSigner.generate_ecdsa() self.repo.signer_cache["snapshot"].append(signer) root.add_key(signer.public_key, "snapshot") # snapshot is no longer signed correctly, expect do_snapshot to create a new snapshot created, _ = self.repo.do_snapshot() self.assertTrue(created) snapshot_versions = self.repo.role_cache["snapshot"] self.assertEqual(2, len(snapshot_versions)) self.assertEqual(2, snapshot_versions[-1].signed.version) def test_do_timestamp(self) -> None: # Expect no-op because snapshot has not 
changed and timestamp is still valid created, _ = self.repo.do_timestamp() self.assertFalse(created) timestamp_versions = self.repo.role_cache["timestamp"] self.assertEqual(1, len(timestamp_versions)) self.assertEqual(1, timestamp_versions[-1].signed.version) def test_do_timestamp_after_snapshot_change(self) -> None: # do a snapshot change, expect do_timestamp to create a new timestamp self.repo.do_snapshot(force=True) created, _ = self.repo.do_timestamp() self.assertTrue(created) timestamp_versions = self.repo.role_cache["timestamp"] self.assertEqual(2, len(timestamp_versions)) self.assertEqual(2, timestamp_versions[-1].signed.version) def test_do_timestamp_after_timestamp_key_change(self) -> None: # change timestamp signing keys with self.repo.edit_root() as root: # remove key keyid = root.roles["timestamp"].keyids[0] root.revoke_key(keyid, "timestamp") self.repo.signer_cache["timestamp"].clear() # add new key signer = CryptoSigner.generate_ecdsa() self.repo.signer_cache["timestamp"].append(signer) root.add_key(signer.public_key, "timestamp") # timestamp is no longer signed correctly, expect do_timestamp to create a new timestamp created, _ = self.repo.do_timestamp() self.assertTrue(created) timestamp_versions = self.repo.role_cache["timestamp"] self.assertEqual(2, len(timestamp_versions)) self.assertEqual(2, timestamp_versions[-1].signed.version) if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_trusted_metadata_set.py0000644000000000000000000005433613615410400016226 0ustar00"""Unit tests for 'tuf/ngclient/_internal/trusted_metadata_set.py'.""" from __future__ import annotations import logging import os import sys import unittest from datetime import datetime, timezone from typing import Callable, ClassVar from securesystemslib.signer import Signer from tests import utils from tuf.api import exceptions from tuf.api.dsse import SimpleEnvelope from tuf.api.metadata import ( Metadata, MetaFile, Root, Signed, Snapshot, Targets, Timestamp, ) from tuf.api.serialization.json import JSONSerializer from tuf.ngclient._internal.trusted_metadata_set import ( TrustedMetadataSet, _load_from_simple_envelope, ) from tuf.ngclient.config import EnvelopeType logger = logging.getLogger(__name__) class TestTrustedMetadataSet(unittest.TestCase): """Tests for all public API of the TrustedMetadataSet class.""" keystore: ClassVar[dict[str, Signer]] metadata: ClassVar[dict[str, bytes]] repo_dir: ClassVar[str] @classmethod def modify_metadata( cls, rolename: str, modification_func: Callable ) -> bytes: """Instantiate metadata from rolename type, call modification_func and sign it again with self.keystore[rolename] signer. Attributes: rolename: Denoting the name of the metadata which will be modified. modification_func: Function that will be called to modify the signed portion of metadata bytes. 
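        Example (mirroring how the tests below use this helper):

            def bump_version(signed: Signed) -> None:
                signed.version += 1

            new_root_bytes = self.modify_metadata(Root.type, bump_version)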
""" metadata = Metadata.from_bytes(cls.metadata[rolename]) modification_func(metadata.signed) metadata.sign(cls.keystore[rolename]) return metadata.to_bytes(JSONSerializer(validate=True)) @classmethod def setUpClass(cls) -> None: cls.repo_dir = os.path.join( utils.TESTS_DIR, "repository_data", "repository", "metadata" ) cls.metadata = {} for md in [ Root.type, Timestamp.type, Snapshot.type, Targets.type, "role1", "role2", ]: with open(os.path.join(cls.repo_dir, f"{md}.json"), "rb") as f: cls.metadata[md] = f.read() keystore_dir = os.path.join( utils.TESTS_DIR, "repository_data", "keystore" ) root = Metadata[Root].from_bytes(cls.metadata[Root.type]).signed cls.keystore = {} for role in [ Root.type, Snapshot.type, Targets.type, Timestamp.type, ]: uri = f"file2:{os.path.join(keystore_dir, role + '_key')}" role_obj = root.get_delegated_role(role) key = root.get_key(role_obj.keyids[0]) cls.keystore[role] = Signer.from_priv_key_uri(uri, key) def hashes_length_modifier(timestamp: Timestamp) -> None: timestamp.snapshot_meta.hashes = None timestamp.snapshot_meta.length = None cls.metadata[Timestamp.type] = cls.modify_metadata( Timestamp.type, hashes_length_modifier ) def setUp(self) -> None: self.trusted_set = TrustedMetadataSet( self.metadata[Root.type], EnvelopeType.METADATA ) def _update_all_besides_targets( self, timestamp_bytes: bytes | None = None, snapshot_bytes: bytes | None = None, ) -> None: """Update all metadata roles besides targets. Args: timestamp_bytes: Bytes used when calling trusted_set.update_timestamp(). Default self.metadata[Timestamp.type]. snapshot_bytes: Bytes used when calling trusted_set.update_snapshot(). Default self.metadata[Snapshot.type]. """ timestamp_bytes = timestamp_bytes or self.metadata[Timestamp.type] self.trusted_set.update_timestamp(timestamp_bytes) snapshot_bytes = snapshot_bytes or self.metadata[Snapshot.type] self.trusted_set.update_snapshot(snapshot_bytes) def test_update(self) -> None: self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) self.trusted_set.update_targets(self.metadata[Targets.type]) self.trusted_set.update_delegated_targets( self.metadata["role1"], "role1", Targets.type ) self.trusted_set.update_delegated_targets( self.metadata["role2"], "role2", "role1" ) # the 4 top level metadata objects + 2 additional delegated targets self.assertTrue(len(self.trusted_set), 6) count = 0 for md in self.trusted_set: self.assertIsInstance(md, Signed) count += 1 self.assertTrue(count, 6) def test_update_metadata_output(self) -> None: timestamp = self.trusted_set.update_timestamp( self.metadata["timestamp"] ) snapshot = self.trusted_set.update_snapshot(self.metadata["snapshot"]) targets = self.trusted_set.update_targets(self.metadata["targets"]) delegated_targets_1 = self.trusted_set.update_delegated_targets( self.metadata["role1"], "role1", "targets" ) delegated_targets_2 = self.trusted_set.update_delegated_targets( self.metadata["role2"], "role2", "role1" ) self.assertIsInstance(timestamp, Timestamp) self.assertIsInstance(snapshot, Snapshot) self.assertIsInstance(targets, Targets) self.assertIsInstance(delegated_targets_1, Targets) self.assertIsInstance(delegated_targets_2, Targets) def test_out_of_order_ops(self) -> None: # Update snapshot before timestamp with self.assertRaises(RuntimeError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) # Update root after timestamp with 
self.assertRaises(RuntimeError): self.trusted_set.update_root(self.metadata[Root.type]) # Update targets before snapshot with self.assertRaises(RuntimeError): self.trusted_set.update_targets(self.metadata[Targets.type]) self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) # update timestamp after snapshot with self.assertRaises(RuntimeError): self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) # Update delegated targets before targets with self.assertRaises(RuntimeError): self.trusted_set.update_delegated_targets( self.metadata["role1"], "role1", Targets.type ) self.trusted_set.update_targets(self.metadata[Targets.type]) # Update snapshot after successful targets update with self.assertRaises(RuntimeError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) self.trusted_set.update_delegated_targets( self.metadata["role1"], "role1", Targets.type ) def test_bad_initial_root(self) -> None: # root is not json with self.assertRaises(exceptions.RepositoryError): TrustedMetadataSet(b"", EnvelopeType.METADATA) # root is invalid root = Metadata.from_bytes(self.metadata[Root.type]) root.signed.version += 1 with self.assertRaises(exceptions.UnsignedMetadataError): TrustedMetadataSet(root.to_bytes(), EnvelopeType.METADATA) # metadata is of wrong type with self.assertRaises(exceptions.RepositoryError): TrustedMetadataSet( self.metadata[Snapshot.type], EnvelopeType.METADATA ) def test_bad_root_update(self) -> None: # root is not json with self.assertRaises(exceptions.RepositoryError): self.trusted_set.update_root(b"") # root is invalid root = Metadata.from_bytes(self.metadata[Root.type]) root.signed.version += 1 with self.assertRaises(exceptions.UnsignedMetadataError): self.trusted_set.update_root(root.to_bytes()) # metadata is of wrong type with self.assertRaises(exceptions.RepositoryError): self.trusted_set.update_root(self.metadata[Snapshot.type]) def test_top_level_md_with_invalid_json(self) -> None: top_level_md: list[tuple[bytes, Callable[[bytes], Signed]]] = [ (self.metadata[Timestamp.type], self.trusted_set.update_timestamp), (self.metadata[Snapshot.type], self.trusted_set.update_snapshot), (self.metadata[Targets.type], self.trusted_set.update_targets), ] for metadata, update_func in top_level_md: md = Metadata.from_bytes(metadata) # metadata is not json with self.assertRaises(exceptions.RepositoryError): update_func(b"") # metadata is invalid md.signed.version += 1 with self.assertRaises(exceptions.UnsignedMetadataError): update_func(md.to_bytes()) # metadata is of wrong type with self.assertRaises(exceptions.RepositoryError): update_func(self.metadata[Root.type]) update_func(metadata) def test_update_root_new_root(self) -> None: # test that root can be updated with a new valid version def root_new_version_modifier(root: Root) -> None: root.version += 1 root = self.modify_metadata(Root.type, root_new_version_modifier) self.trusted_set.update_root(root) def test_update_root_new_root_fail_threshold_verification(self) -> None: # Increase threshold in new root, do not add enough keys def root_threshold_bump(root: Root) -> None: root.version += 1 root.roles[Root.type].threshold += 1 root = self.modify_metadata(Root.type, root_threshold_bump) with self.assertRaises(exceptions.UnsignedMetadataError): self.trusted_set.update_root(root) def test_update_root_new_root_ver_same_as_trusted_root_ver(self) -> None: with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_root(self.metadata[Root.type]) def test_root_expired_final_root(self) -> None: 
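        # Only the final root in an update chain must be unexpired; that
        # expiry check is triggered when update_timestamp() is first called,
        # as exercised below.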
def root_expired_modifier(root: Root) -> None: root.expires = datetime(1970, 1, 1, tzinfo=timezone.utc) # intermediate root can be expired root = self.modify_metadata(Root.type, root_expired_modifier) tmp_trusted_set = TrustedMetadataSet(root, EnvelopeType.METADATA) # update timestamp to trigger final root expiry check with self.assertRaises(exceptions.ExpiredMetadataError): tmp_trusted_set.update_timestamp(self.metadata[Timestamp.type]) def test_update_timestamp_new_timestamp_ver_below_trusted_ver(self) -> None: # new_timestamp.version < trusted_timestamp.version def version_modifier(timestamp: Timestamp) -> None: timestamp.version = 3 timestamp = self.modify_metadata(Timestamp.type, version_modifier) self.trusted_set.update_timestamp(timestamp) with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) def test_update_timestamp_with_same_timestamp(self) -> None: # Test that timestamp is NOT updated if: # new_timestamp.version == trusted_timestamp.version self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) initial_timestamp = self.trusted_set.timestamp # Update timestamp with the same version. with self.assertRaises(exceptions.EqualVersionNumberError): self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) # Every object has a unique id() if they are equal, this means timestamp # was not updated. self.assertEqual(id(initial_timestamp), id(self.trusted_set.timestamp)) def test_update_timestamp_snapshot_ver_below_current(self) -> None: def bump_snapshot_version(timestamp: Timestamp) -> None: timestamp.snapshot_meta.version = 2 # The timestamp version must be increased to initiate a update. timestamp.version += 1 # set current known snapshot.json version to 2 timestamp = self.modify_metadata(Timestamp.type, bump_snapshot_version) self.trusted_set.update_timestamp(timestamp) # newtimestamp.meta.version < trusted_timestamp.meta.version with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) def test_update_timestamp_expired(self) -> None: # new_timestamp has expired def timestamp_expired_modifier(timestamp: Timestamp) -> None: timestamp.expires = datetime(1970, 1, 1, tzinfo=timezone.utc) # expired intermediate timestamp is loaded but raises timestamp = self.modify_metadata( Timestamp.type, timestamp_expired_modifier ) with self.assertRaises(exceptions.ExpiredMetadataError): self.trusted_set.update_timestamp(timestamp) # snapshot update does start but fails because timestamp is expired with self.assertRaises(exceptions.ExpiredMetadataError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) def test_update_snapshot_length_or_hash_mismatch(self) -> None: def modify_snapshot_length(timestamp: Timestamp) -> None: timestamp.snapshot_meta.length = 1 # set known snapshot.json length to 1 timestamp = self.modify_metadata(Timestamp.type, modify_snapshot_length) self.trusted_set.update_timestamp(timestamp) with self.assertRaises(exceptions.RepositoryError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) def test_update_snapshot_fail_threshold_verification(self) -> None: self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) snapshot = Metadata.from_bytes(self.metadata[Snapshot.type]) snapshot.signatures.clear() with self.assertRaises(exceptions.UnsignedMetadataError): self.trusted_set.update_snapshot(snapshot.to_bytes()) def test_update_snapshot_version_diverge_timestamp_snapshot_version( self, ) -> None: def 
timestamp_version_modifier(timestamp: Timestamp) -> None: timestamp.snapshot_meta.version = 2 timestamp = self.modify_metadata( Timestamp.type, timestamp_version_modifier ) self.trusted_set.update_timestamp(timestamp) # if intermediate snapshot version is incorrect, load it but also raise with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) # targets update starts but fails if snapshot version does not match with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_targets(self.metadata[Targets.type]) def test_update_snapshot_file_removed_from_meta(self) -> None: self._update_all_besides_targets(self.metadata[Timestamp.type]) def remove_file_from_meta(snapshot: Snapshot) -> None: del snapshot.meta["targets.json"] # Test removing a meta_file in new_snapshot compared to the old snapshot snapshot = self.modify_metadata(Snapshot.type, remove_file_from_meta) with self.assertRaises(exceptions.RepositoryError): self.trusted_set.update_snapshot(snapshot) def test_update_snapshot_meta_version_decreases(self) -> None: self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) def version_meta_modifier(snapshot: Snapshot) -> None: snapshot.meta["targets.json"].version += 1 snapshot = self.modify_metadata(Snapshot.type, version_meta_modifier) self.trusted_set.update_snapshot(snapshot) with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) def test_update_snapshot_expired_new_snapshot(self) -> None: self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) def snapshot_expired_modifier(snapshot: Snapshot) -> None: snapshot.expires = datetime(1970, 1, 1, tzinfo=timezone.utc) # expired intermediate snapshot is loaded but will raise snapshot = self.modify_metadata( Snapshot.type, snapshot_expired_modifier ) with self.assertRaises(exceptions.ExpiredMetadataError): self.trusted_set.update_snapshot(snapshot) # targets update does start but fails because snapshot is expired with self.assertRaises(exceptions.ExpiredMetadataError): self.trusted_set.update_targets(self.metadata[Targets.type]) def test_update_snapshot_successful_rollback_checks(self) -> None: def meta_version_bump(timestamp: Timestamp) -> None: timestamp.snapshot_meta.version += 1 # The timestamp version must be increased to initiate a update. 
timestamp.version += 1 def version_bump(snapshot: Snapshot) -> None: snapshot.version += 1 # load a "local" timestamp, then update to newer one: self.trusted_set.update_timestamp(self.metadata[Timestamp.type]) new_timestamp = self.modify_metadata(Timestamp.type, meta_version_bump) self.trusted_set.update_timestamp(new_timestamp) # load a "local" snapshot with mismatching version (loading happens but # BadVersionNumberError is raised), then update to newer one: with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_snapshot(self.metadata[Snapshot.type]) new_snapshot = self.modify_metadata(Snapshot.type, version_bump) self.trusted_set.update_snapshot(new_snapshot) # update targets to trigger final snapshot meta version check self.trusted_set.update_targets(self.metadata[Targets.type]) def test_update_targets_no_meta_in_snapshot(self) -> None: def no_meta_modifier(snapshot: Snapshot) -> None: snapshot.meta = {} snapshot = self.modify_metadata(Snapshot.type, no_meta_modifier) self._update_all_besides_targets( self.metadata[Timestamp.type], snapshot ) # remove meta information with information about targets from snapshot with self.assertRaises(exceptions.RepositoryError): self.trusted_set.update_targets(self.metadata[Targets.type]) def test_update_targets_hash_diverge_from_snapshot_meta_hash(self) -> None: def meta_length_modifier(snapshot: Snapshot) -> None: for metafile_path in snapshot.meta: snapshot.meta[metafile_path] = MetaFile(version=1, length=1) snapshot = self.modify_metadata(Snapshot.type, meta_length_modifier) self._update_all_besides_targets( self.metadata[Timestamp.type], snapshot ) # observed_hash != stored hash in snapshot meta for targets with self.assertRaises(exceptions.RepositoryError): self.trusted_set.update_targets(self.metadata[Targets.type]) def test_update_targets_version_diverge_snapshot_meta_version(self) -> None: def meta_modifier(snapshot: Snapshot) -> None: for metafile_path in snapshot.meta: snapshot.meta[metafile_path] = MetaFile(version=2) snapshot = self.modify_metadata(Snapshot.type, meta_modifier) self._update_all_besides_targets( self.metadata[Timestamp.type], snapshot ) # new_delegate.signed.version != meta.version stored in snapshot with self.assertRaises(exceptions.BadVersionNumberError): self.trusted_set.update_targets(self.metadata[Targets.type]) def test_update_targets_expired_new_target(self) -> None: self._update_all_besides_targets() # new_delegated_target has expired def target_expired_modifier(target: Targets) -> None: target.expires = datetime(1970, 1, 1, tzinfo=timezone.utc) targets = self.modify_metadata(Targets.type, target_expired_modifier) with self.assertRaises(exceptions.ExpiredMetadataError): self.trusted_set.update_targets(targets) # TODO test updating over initial metadata (new keys, newer timestamp, etc) def test_load_from_simple_envelope(self) -> None: """Basic unit test for ``_load_from_simple_envelope`` helper. 
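        A rough sketch of the flow exercised here (hedged: the helper is a
        private function and only the calls already used in this test are
        assumed):

            # Wrap a Signed payload in a DSSE envelope and sign it
            envelope = SimpleEnvelope.from_signed(root)
            sig = envelope.sign(signer)  # any Signer for a root key

            # Parse the serialized envelope back into payload + signatures
            payload, signed_bytes, sigs = _load_from_simple_envelope(
                Root, envelope.to_bytes(), root
            )

        ``signed_bytes`` is the PAE-encoded payload (``envelope.pae()``),
        which is what the signatures are verified against.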
TODO: Test via trusted metadata set tests like for traditional metadata """ metadata = Metadata.from_bytes(self.metadata[Root.type]) root = metadata.signed envelope = SimpleEnvelope.from_signed(root) # Unwrap unsigned envelope without verification envelope_bytes = envelope.to_bytes() payload_obj, signed_bytes, signatures = _load_from_simple_envelope( Root, envelope_bytes ) self.assertEqual(payload_obj, root) self.assertEqual(signed_bytes, envelope.pae()) self.assertDictEqual(signatures, {}) # Unwrap correctly signed envelope (use default role name) sig = envelope.sign(self.keystore[Root.type]) envelope_bytes = envelope.to_bytes() _, _, signatures = _load_from_simple_envelope( Root, envelope_bytes, root ) self.assertDictEqual(signatures, {sig.keyid: sig}) # Load correctly signed envelope (with explicit role name) _, _, signatures = _load_from_simple_envelope( Root, envelope.to_bytes(), root, Root.type ) self.assertDictEqual(signatures, {sig.keyid: sig}) # Fail load envelope with unexpected 'payload_type' envelope_bad_type = SimpleEnvelope.from_signed(root) envelope_bad_type.payload_type = "foo" envelope_bad_type_bytes = envelope_bad_type.to_bytes() with self.assertRaises(exceptions.RepositoryError): _load_from_simple_envelope(Root, envelope_bad_type_bytes) # Fail load envelope with unexpected payload type envelope_bad_signed = SimpleEnvelope.from_signed(root) envelope_bad_signed_bytes = envelope_bad_signed.to_bytes() with self.assertRaises(exceptions.RepositoryError): _load_from_simple_envelope(Targets, envelope_bad_signed_bytes) if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_consistent_snapshot.py0000644000000000000000000002332713615410400017651 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test ngclient Updater toggling consistent snapshot""" from __future__ import annotations import os import sys import tempfile import unittest from typing import TYPE_CHECKING, Any from tests import utils from tests.repository_simulator import RepositorySimulator from tuf.api.metadata import ( SPECIFICATION_VERSION, TOP_LEVEL_ROLE_NAMES, DelegatedRole, TargetFile, Targets, ) from tuf.ngclient import Updater if TYPE_CHECKING: from collections.abc import Iterable class TestConsistentSnapshot(unittest.TestCase): """Test different combinations of 'consistent_snapshot' and 'prefix_targets_with_hash' and verify that the correct URLs are formed for each combination""" # set dump_dir to trigger repository state dumps dump_dir: str | None = None def setUp(self) -> None: self.subtest_count = 0 self.temp_dir = tempfile.TemporaryDirectory() self.metadata_dir = os.path.join(self.temp_dir.name, "metadata") self.targets_dir = os.path.join(self.temp_dir.name, "targets") os.mkdir(self.metadata_dir) os.mkdir(self.targets_dir) self.sim: RepositorySimulator def tearDown(self) -> None: self.temp_dir.cleanup() def setup_subtest( self, consistent_snapshot: bool, prefix_targets: bool = True ) -> None: self.sim = self._init_repo(consistent_snapshot, prefix_targets) self.subtest_count += 1 if self.dump_dir is not None: # create subtest dumpdir name = f"{self.id().split('.')[-1]}-{self.subtest_count}" self.sim.dump_dir = os.path.join(self.dump_dir, name) os.mkdir(self.sim.dump_dir) def teardown_subtest(self) -> None: if self.dump_dir is not None: self.sim.write() utils.cleanup_metadata_dir(self.metadata_dir) def _init_repo( self, consistent_snapshot: bool, prefix_targets: bool = True ) -> 
RepositorySimulator: """Create a new RepositorySimulator instance""" sim = RepositorySimulator() sim.root.consistent_snapshot = consistent_snapshot sim.root.version += 1 sim.publish_root() sim.prefix_targets_with_hash = prefix_targets # Init trusted root with the latest consistent_snapshot with open(os.path.join(self.metadata_dir, "root.json"), "bw") as f: f.write(sim.signed_roots[-1]) return sim def _init_updater(self) -> Updater: """Create a new Updater instance""" return Updater( self.metadata_dir, "https://example.com/metadata/", self.targets_dir, "https://example.com/targets/", self.sim, ) def _assert_metadata_files_exist(self, roles: Iterable[str]) -> None: """Assert that local metadata files exist for 'roles'""" local_metadata_files = os.listdir(self.metadata_dir) for role in roles: self.assertIn(f"{role}.json", local_metadata_files) def _assert_targets_files_exist(self, filenames: Iterable[str]) -> None: """Assert that local files with 'filenames' exist""" local_target_files = os.listdir(self.targets_dir) for filename in filenames: self.assertIn(filename, local_target_files) top_level_roles_data = { "consistent_snaphot disabled": { "consistent_snapshot": False, "calls": [ ("root", 3), ("timestamp", None), ("snapshot", None), ("targets", None), ], }, "consistent_snaphot enabled": { "consistent_snapshot": True, "calls": [ ("root", 3), ("timestamp", None), ("snapshot", 1), ("targets", 1), ], }, } @utils.run_sub_tests_with_dataset(top_level_roles_data) def test_top_level_roles_update( self, test_case_data: dict[str, Any] ) -> None: # Test if the client fetches and stores metadata files with the # correct version prefix, depending on 'consistent_snapshot' config try: consistent_snapshot: bool = test_case_data["consistent_snapshot"] exp_calls: list[Any] = test_case_data["calls"] self.setup_subtest(consistent_snapshot) updater = self._init_updater() # cleanup fetch tracker metadata self.sim.fetch_tracker.metadata.clear() updater.refresh() # metadata files are fetched with the expected version (or None) self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) # metadata files are always persisted without a version prefix self._assert_metadata_files_exist(TOP_LEVEL_ROLE_NAMES) finally: self.teardown_subtest() delegated_roles_data = { "consistent_snaphot disabled": { "consistent_snapshot": False, "expected_version": None, }, "consistent_snaphot enabled": { "consistent_snapshot": True, "expected_version": 1, }, } @utils.run_sub_tests_with_dataset(delegated_roles_data) def test_delegated_roles_update( self, test_case_data: dict[str, Any] ) -> None: # Test if the client fetches and stores delegated metadata files with # the correct version prefix, depending on 'consistent_snapshot' config try: consistent_snapshot: bool = test_case_data["consistent_snapshot"] exp_version: int | None = test_case_data["expected_version"] rolenames = ["role1", "..", "."] exp_calls = [(role, exp_version) for role in rolenames] self.setup_subtest(consistent_snapshot) # Add new delegated targets spec_version = ".".join(SPECIFICATION_VERSION) for role in rolenames: delegated_role = DelegatedRole(role, [], 1, False, ["*"], None) targets = Targets( 1, spec_version, self.sim.safe_expiry, {}, None ) self.sim.add_delegation("targets", delegated_role, targets) self.sim.update_snapshot() updater = self._init_updater() updater.refresh() # cleanup fetch tracker metadata self.sim.fetch_tracker.metadata.clear() # trigger updater to fetch the delegated metadata updater.get_targetinfo("anything") # metadata files are 
fetched with the expected version (or None) self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) # metadata files are always persisted without a version prefix self._assert_metadata_files_exist(rolenames) finally: self.teardown_subtest() targets_download_data = { "consistent_snaphot disabled": { "consistent_snapshot": False, "prefix_targets": True, "hash_algo": None, "targetpaths": ["file", "file.txt", "..file.ext", "f.le"], }, "consistent_snaphot enabled without prefixed targets": { "consistent_snapshot": True, "prefix_targets": False, "hash_algo": None, "targetpaths": ["file", "file.txt", "..file.ext", "f.le"], }, "consistent_snaphot enabled with prefixed targets": { "consistent_snapshot": True, "prefix_targets": True, "hash_algo": "sha256", "targetpaths": ["file", "file.txt", "..file.ext", "f.le"], }, } @utils.run_sub_tests_with_dataset(targets_download_data) def test_download_targets(self, test_case_data: dict[str, Any]) -> None: # Test if the client fetches and stores target files with # the correct hash prefix, depending on 'consistent_snapshot' # and 'prefix_targets_with_hash' config try: consistent_snapshot: bool = test_case_data["consistent_snapshot"] prefix_targets_with_hash: bool = test_case_data["prefix_targets"] hash_algo: str | None = test_case_data["hash_algo"] targetpaths: list[str] = test_case_data["targetpaths"] self.setup_subtest(consistent_snapshot, prefix_targets_with_hash) # Add targets to repository for targetpath in targetpaths: self.sim.targets.version += 1 self.sim.add_target("targets", b"content", targetpath) self.sim.update_snapshot() updater = self._init_updater() updater.config.prefix_targets_with_hash = prefix_targets_with_hash updater.refresh() for path in targetpaths: info = updater.get_targetinfo(path) assert isinstance(info, TargetFile) updater.download_target(info) # target files are always persisted without hash prefix self._assert_targets_files_exist([info.path]) # files are fetched with the expected hash prefix (or None) exp_calls = [ (path, None if not hash_algo else info.hashes[hash_algo]) ] self.assertListEqual(self.sim.fetch_tracker.targets, exp_calls) self.sim.fetch_tracker.targets.clear() finally: self.teardown_subtest() if __name__ == "__main__": if "--dump" in sys.argv: TestConsistentSnapshot.dump_dir = tempfile.mkdtemp() print( f"Repository Simulator dumps in {TestConsistentSnapshot.dump_dir}" ) sys.argv.remove("--dump") utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_delegation_graphs.py0000644000000000000000000005420213615410400017214 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test updating delegated targets roles and searching for target files with various delegation graphs""" from __future__ import annotations import os import sys import tempfile import unittest from dataclasses import astuple, dataclass, field from typing import TYPE_CHECKING from tests import utils from tests.repository_simulator import RepositorySimulator from tuf.api.exceptions import UnsignedMetadataError from tuf.api.metadata import ( SPECIFICATION_VERSION, TOP_LEVEL_ROLE_NAMES, DelegatedRole, Targets, ) from tuf.ngclient import Updater if TYPE_CHECKING: from collections.abc import Iterable @dataclass class TestDelegation: delegator: str rolename: str keyids: list[str] = field(default_factory=list) threshold: int = 1 terminating: bool = False paths: list[str] | None = field(default_factory=lambda: ["*"]) path_hash_prefixes: list[str] | None = 
None @dataclass class TestTarget: rolename: str content: bytes targetpath: str @dataclass class DelegationsTestCase: """A delegations graph as lists of delegations and target files and the expected order of traversal as a list of role names.""" delegations: list[TestDelegation] target_files: list[TestTarget] = field(default_factory=list) visited_order: list[str] = field(default_factory=list) @dataclass class TargetTestCase: targetpath: str found: bool visited_order: list[str] = field(default_factory=list) class TestDelegations(unittest.TestCase): """Base class for delegation tests""" # set dump_dir to trigger repository state dumps dump_dir: str | None = None def setUp(self) -> None: self.subtest_count = 0 self.temp_dir = tempfile.TemporaryDirectory() self.metadata_dir = os.path.join(self.temp_dir.name, "metadata") self.targets_dir = os.path.join(self.temp_dir.name, "targets") os.mkdir(self.metadata_dir) os.mkdir(self.targets_dir) self.sim: RepositorySimulator def tearDown(self) -> None: self.temp_dir.cleanup() def setup_subtest(self) -> None: self.subtest_count += 1 if self.dump_dir is not None: # create subtest dumpdir name = f"{self.id().split('.')[-1]}-{self.subtest_count}" self.sim.dump_dir = os.path.join(self.dump_dir, name) os.mkdir(self.sim.dump_dir) # dump the repo simulator metadata self.sim.write() def teardown_subtest(self) -> None: utils.cleanup_metadata_dir(self.metadata_dir) def _init_repo(self, test_case: DelegationsTestCase) -> None: """Create a new RepositorySimulator instance and populate it with delegations and target files""" self.sim = RepositorySimulator() spec_version = ".".join(SPECIFICATION_VERSION) for d in test_case.delegations: if d.rolename in self.sim.md_delegates: targets = self.sim.md_delegates[d.rolename].signed else: targets = Targets( 1, spec_version, self.sim.safe_expiry, {}, None ) # unpack 'd' but skip "delegator" role = DelegatedRole(*astuple(d)[1:]) self.sim.add_delegation(d.delegator, role, targets) for target in test_case.target_files: self.sim.add_target(*astuple(target)) if test_case.target_files: self.sim.targets.version += 1 self.sim.update_snapshot() def _init_updater(self) -> Updater: """Create a new Updater instance""" # Init trusted root for Updater with open(os.path.join(self.metadata_dir, "root.json"), "bw") as f: f.write(self.sim.signed_roots[0]) return Updater( self.metadata_dir, "https://example.com/metadata/", self.targets_dir, "https://example.com/targets/", self.sim, ) def _assert_files_exist(self, roles: Iterable[str]) -> None: """Assert that local metadata files match 'roles'""" expected_files = [f"{role}.json" for role in roles] found_files = [ e.name for e in os.scandir(self.metadata_dir) if e.is_file() ] self.assertListEqual(sorted(found_files), sorted(expected_files)) class TestDelegationsGraphs(TestDelegations): """Test creating delegations graphs with different complexity and successfully updating the delegated roles metadata""" graphs = { "basic delegation": DelegationsTestCase( delegations=[TestDelegation("targets", "A")], visited_order=["A"], ), "single level delegations": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), ], visited_order=["A", "B"], ), "two-level delegations": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("B", "C"), ], visited_order=["A", "B", "C"], ), "two-level test DFS order of traversal": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", 
"B"), TestDelegation("A", "C"), TestDelegation("A", "D"), ], visited_order=["A", "C", "D", "B"], ), "three-level delegation test DFS order of traversal": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("A", "C"), TestDelegation("C", "D"), ], visited_order=["A", "C", "D", "B"], ), "two-level terminating ignores all but role's descendants": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("A", "C", terminating=True), TestDelegation("A", "D"), ], visited_order=["A", "C"], ), "three-level terminating ignores all but role's descendants": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("A", "C", terminating=True), TestDelegation("C", "D"), ], visited_order=["A", "C", "D"], ), "two-level ignores all branches not matching 'paths'": DelegationsTestCase( delegations=[ TestDelegation("targets", "A", paths=["*.py"]), TestDelegation("targets", "B"), TestDelegation("A", "C"), ], visited_order=["B"], ), "three-level ignores all branches not matching 'paths'": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("A", "C", paths=["*.py"]), TestDelegation("C", "D"), ], visited_order=["A", "B"], ), "cyclic graph": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("B", "C"), TestDelegation("C", "D"), TestDelegation("D", "B"), ], visited_order=["A", "B", "C", "D"], ), "two roles delegating to a third": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("B", "C"), TestDelegation("A", "C"), ], # Under all same conditions, 'C' is reached through 'A' first" visited_order=["A", "C", "B"], ), "two roles delegating to a third different 'paths'": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("B", "C"), TestDelegation("A", "C", paths=["*.py"]), ], # 'C' is reached through 'B' since 'A' does not delegate a matching pattern" visited_order=["A", "B", "C"], ), "max number of delegations": DelegationsTestCase( delegations=[ TestDelegation("targets", "A"), TestDelegation("targets", "B"), TestDelegation("targets", "C"), TestDelegation("C", "D"), TestDelegation("C", "E"), ], # "E" is skipped, max_delegations is 4 visited_order=["A", "B", "C", "D"], ), } @utils.run_sub_tests_with_dataset(graphs) def test_graph_traversal(self, test_data: DelegationsTestCase) -> None: """Test that delegated roles are traversed in the order of appearance in the delegator's metadata, using pre-order depth-first search""" try: exp_files = [*TOP_LEVEL_ROLE_NAMES, *test_data.visited_order] exp_calls = [(role, 1) for role in test_data.visited_order] self._init_repo(test_data) self.setup_subtest() updater = self._init_updater() # restrict the max number of delegations to simplify the test updater.config.max_delegations = 4 # Call explicitly refresh to simplify the expected_calls list updater.refresh() self.sim.fetch_tracker.metadata.clear() # Check that metadata dir contains only top-level roles self._assert_files_exist(TOP_LEVEL_ROLE_NAMES) # Looking for a non-existing targetpath forces updater # to visit all possible delegated roles targetfile = updater.get_targetinfo("missingpath") self.assertIsNone(targetfile) # Check that the delegated roles were visited in the expected # order and the corresponding 
metadata files were persisted self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) self._assert_files_exist(exp_files) finally: self.teardown_subtest() invalid_metadata = { "unsigned delegated role": DelegationsTestCase( delegations=[ TestDelegation("targets", "invalid"), TestDelegation("targets", "B"), TestDelegation("invalid", "C"), ], # The traversal stops after visiting an invalid role visited_order=["invalid"], ) } @utils.run_sub_tests_with_dataset(invalid_metadata) def test_invalid_metadata(self, test_data: DelegationsTestCase) -> None: try: self._init_repo(test_data) # The invalid role is the last visited invalid_role = test_data.visited_order[-1] self.sim.signers[invalid_role].clear() self.setup_subtest() # The invalid role metadata must not be persisted exp_files = [*TOP_LEVEL_ROLE_NAMES, *test_data.visited_order[:-1]] exp_calls = [(role, 1) for role in test_data.visited_order] updater = self._init_updater() # Call explicitly refresh to simplify the expected_calls list updater.refresh() self.sim.fetch_tracker.metadata.clear() with self.assertRaises(UnsignedMetadataError): updater.get_targetinfo("missingpath") # Check that there were no visited roles after the invalid one # and only the valid metadata files were persisted self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) self._assert_files_exist(exp_files) finally: self.teardown_subtest() def test_safely_encoded_rolenames(self) -> None: """Test that delegated roles names are safely encoded in the filenames and URLs. """ roles_to_filenames = { "../a": "..%2Fa.json", ".": "..json", "/": "%2F.json", "ö": "%C3%B6.json", } delegations = [] for rolename in roles_to_filenames: delegations.append(TestDelegation("targets", rolename)) delegated_rolenames = DelegationsTestCase(delegations) self._init_repo(delegated_rolenames) updater = self._init_updater() updater.refresh() # trigger updater to fetch the delegated metadata self.sim.fetch_tracker.metadata.clear() updater.get_targetinfo("anything") # assert that local delegated metadata filenames are expected local_metadata = os.listdir(self.metadata_dir) for fname in roles_to_filenames.values(): self.assertTrue(fname in local_metadata) # assert that requested URLs are quoted without extension exp_calls = [(quoted[:-5], 1) for quoted in roles_to_filenames.values()] self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) hash_bins_graph = { "delegations": DelegationsTestCase( delegations=[ TestDelegation( "targets", "role1", paths=None, path_hash_prefixes=["8", "9", "a", "b"], ), TestDelegation( "targets", "role2", paths=None, path_hash_prefixes=["0", "1", "2", "3"], ), TestDelegation( "targets", "role3", paths=None, path_hash_prefixes=["c", "d", "e", "f"], ), ], visited_order=["role1", "role2", "role3"], ), } @utils.run_sub_tests_with_dataset(hash_bins_graph) def test_hash_bins_graph_traversal( self, test_data: DelegationsTestCase ) -> None: """Test that delegated roles are traversed in the order of appearance in the delegator's metadata, using pre-order depth-first search and that they correctly refer to the corresponding hash bin prefixes""" try: exp_files = [*TOP_LEVEL_ROLE_NAMES, *test_data.visited_order] exp_calls = [(role, 1) for role in test_data.visited_order] self._init_repo(test_data) self.setup_subtest() updater = self._init_updater() # Call explicitly refresh to simplify the expected_calls list updater.refresh() self.sim.fetch_tracker.metadata.clear() # Check that metadata dir contains only top-level roles 
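            # (Aside) How 'path_hash_prefixes' routes lookups, roughly: the
            # client hashes the requested target path and only descends into
            # roles whose prefixes match the digest. A hedged, illustrative
            # sketch of that matching (the real logic lives in ngclient):
            #
            #   import hashlib
            #   digest = hashlib.sha256("missingpath".encode()).hexdigest()
            #   matching = [
            #       role for role, prefixes in {
            #           "role1": ["8", "9", "a", "b"],
            #           "role2": ["0", "1", "2", "3"],
            #           "role3": ["c", "d", "e", "f"],
            #       }.items()
            #       if any(digest.startswith(p) for p in prefixes)
            #   ]
            #
            # so each of the three lookups below is expected to land in
            # exactly one bin role.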
self._assert_files_exist(TOP_LEVEL_ROLE_NAMES) # Looking for a non-existing targetpath forces updater # to visit a correspondning delegated role targetfile = updater.get_targetinfo("missingpath") self.assertIsNone(targetfile) targetfile = updater.get_targetinfo("othermissingpath") self.assertIsNone(targetfile) targetfile = updater.get_targetinfo("thirdmissingpath") self.assertIsNone(targetfile) # Check that the delegated roles were visited in the expected # order and the corresponding metadata files were persisted self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) self._assert_files_exist(exp_files) finally: self.teardown_subtest() @dataclass class SuccinctRolesTestCase: bit_length: int target_path: str expected_target_bin: str # By setting the bit_length the total number of bins is 2^bit_length. # In each test case target_path is a path to a random target we want to # fetch and expected_target_bin is the bin we are expecting to visit. succinct_bins_graph = { "bin amount = 2, target bin index 0": SuccinctRolesTestCase( bit_length=1, target_path="boo", expected_target_bin="bin-0", ), "bin amount = 2, target bin index 1": SuccinctRolesTestCase( bit_length=1, target_path="too", expected_target_bin="bin-1", ), "bin amount = 4, target bin index 0": SuccinctRolesTestCase( bit_length=2, target_path="foo", expected_target_bin="bin-0", ), "bin amount = 4, target bin index 1": SuccinctRolesTestCase( bit_length=2, target_path="doo", expected_target_bin="bin-1", ), "bin amount = 4, target bin index 2": SuccinctRolesTestCase( bit_length=2, target_path="too", expected_target_bin="bin-2", ), "bin amount = 4, target bin index 3": SuccinctRolesTestCase( bit_length=2, target_path="bar", expected_target_bin="bin-3", ), "bin amount = 256, target bin index fc": SuccinctRolesTestCase( bit_length=8, target_path="bar", expected_target_bin="bin-fc", ), } @utils.run_sub_tests_with_dataset(succinct_bins_graph) def test_succinct_roles_graph_traversal( self, test_data: SuccinctRolesTestCase ) -> None: # Test traversing the delegation tree when succinct roles is used. For a # successful traversal all top level metadata files plus the expected # bin should exist locally and only one bin must be downloaded. try: exp_files = [*TOP_LEVEL_ROLE_NAMES, test_data.expected_target_bin] exp_calls = [(test_data.expected_target_bin, 1)] self.sim = RepositorySimulator() self.sim.add_succinct_roles("targets", test_data.bit_length, "bin") self.sim.update_snapshot() self.setup_subtest() updater = self._init_updater() # Call explicitly refresh to simplify the expected_calls list. updater.refresh() self.sim.fetch_tracker.metadata.clear() # Check that metadata dir contains only top-level roles self._assert_files_exist(TOP_LEVEL_ROLE_NAMES) # Looking for a non-existing targetpath forces updater # to visit a corresponding delegated role. targetfile = updater.get_targetinfo(test_data.target_path) self.assertIsNone(targetfile) # Check that the delegated roles were visited in the expected # order and the corresponding metadata files were persisted. self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) self._assert_files_exist(exp_files) finally: self.teardown_subtest() class TestTargetFileSearch(TestDelegations): r""" Create a single repository with the following delegations: targets *.doc, *md / \ release/*/* A B release/x/* / \ release/y/*.zip C D Test that Updater successfully finds the target files metadata, traversing the delegations as expected. 
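    Lookups walk this tree with the same pre-order depth-first search as in
    the graph tests above, pruning branches whose 'paths' patterns do not
    match the requested target path. A sketch of one expected traversal,
    mirroring the cases defined below:

        updater.refresh()
        info = updater.get_targetinfo("releases/x/x_v1")
        # consults targets -> B -> C; role A is skipped because
        # "releases/x/x_v1" matches neither "*.doc" nor "*.md"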
""" delegations_tree = DelegationsTestCase( delegations=[ TestDelegation("targets", "A", paths=["*.doc", "*.md"]), TestDelegation("targets", "B", paths=["releases/*/*"]), TestDelegation("B", "C", paths=["releases/x/*"]), TestDelegation("B", "D", paths=["releases/y/*.zip"]), ], target_files=[ TestTarget("targets", b"targetfile content", "targetfile"), TestTarget("A", b"README by A", "README.md"), TestTarget("C", b"x release by C", "releases/x/x_v1"), TestTarget("D", b"y release by D", "releases/y/y_v1.zip"), TestTarget("D", b"z release by D", "releases/z/z_v1.zip"), ], ) def setUp(self) -> None: super().setUp() self._init_repo(self.delegations_tree) # fmt: off targets = { "no delegations": TargetTestCase("targetfile", True, []), "targetpath matches wildcard": TargetTestCase("README.md", True, ["A"]), "targetpath with separators x": TargetTestCase("releases/x/x_v1", True, ["B", "C"]), "targetpath with separators y": TargetTestCase("releases/y/y_v1.zip", True, ["B", "D"]), "targetpath is not delegated by all roles in the chain": TargetTestCase("releases/z/z_v1.zip", False, ["B"]), } # fmt: on @utils.run_sub_tests_with_dataset(targets) def test_targetfile_search(self, test_data: TargetTestCase) -> None: try: self.setup_subtest() exp_files = [*TOP_LEVEL_ROLE_NAMES, *test_data.visited_order] exp_calls = [(role, 1) for role in test_data.visited_order] exp_target = self.sim.target_files[test_data.targetpath].target_file updater = self._init_updater() # Call explicitly refresh to simplify the expected_calls list updater.refresh() self.sim.fetch_tracker.metadata.clear() target = updater.get_targetinfo(test_data.targetpath) if target is not None: # Confirm that the expected TargetFile is found self.assertTrue(test_data.found) self.assertDictEqual(target.to_dict(), exp_target.to_dict()) else: self.assertFalse(test_data.found) # Check that the delegated roles were visited in the expected # order and the corresponding metadata files were persisted self.assertListEqual(self.sim.fetch_tracker.metadata, exp_calls) self._assert_files_exist(exp_files) finally: self.teardown_subtest() if __name__ == "__main__": if "--dump" in sys.argv: TestDelegations.dump_dir = tempfile.mkdtemp() print(f"Repository Simulator dumps in {TestDelegations.dump_dir}") sys.argv.remove("--dump") utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_fetch_target.py0000644000000000000000000002040513615410400016172 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test 'Fetch target' from 'Detailed client workflow' as well as target files storing/loading from cache. 
""" from __future__ import annotations import os import sys import tempfile import unittest from dataclasses import dataclass from tests import utils from tests.repository_simulator import RepositorySimulator from tuf.api.exceptions import RepositoryError from tuf.api.metadata import DelegatedRole, Delegations from tuf.ngclient import Updater @dataclass class TestTarget: path: str content: bytes encoded_path: str class TestFetchTarget(unittest.TestCase): """Test ngclient downloading and caching target files.""" # set dump_dir to trigger repository state dumps dump_dir: str | None = None def setUp(self) -> None: self.temp_dir = tempfile.TemporaryDirectory() self.metadata_dir = os.path.join(self.temp_dir.name, "metadata") self.targets_dir = os.path.join(self.temp_dir.name, "targets") os.mkdir(self.metadata_dir) os.mkdir(self.targets_dir) # Setup the repository, bootstrap client root.json self.sim = RepositorySimulator() with open(os.path.join(self.metadata_dir, "root.json"), "bw") as f: f.write(self.sim.signed_roots[0]) if self.dump_dir is not None: # create test specific dump directory name = self.id().split(".")[-1] self.sim.dump_dir = os.path.join(self.dump_dir, name) os.mkdir(self.sim.dump_dir) def tearDown(self) -> None: self.temp_dir.cleanup() def _init_updater(self) -> Updater: """Creates a new updater instance.""" if self.sim.dump_dir is not None: self.sim.write() return Updater( self.metadata_dir, "https://example.com/metadata/", self.targets_dir, "https://example.com/targets/", self.sim, ) targets = { "standard case": TestTarget( path="targetpath", content=b"target content", encoded_path="targetpath", ), "non-asci case": TestTarget( path="åäö", content=b"more content", encoded_path="%C3%A5%C3%A4%C3%B6", ), "subdirectory case": TestTarget( path="a/b/c/targetpath", content=b"dir target content", encoded_path="a%2Fb%2Fc%2Ftargetpath", ), } @utils.run_sub_tests_with_dataset(targets) def test_fetch_target(self, target: TestTarget) -> None: path = os.path.join(self.targets_dir, target.encoded_path) updater = self._init_updater() # target does not exist yet self.assertIsNone(updater.get_targetinfo(target.path)) # Add targets to repository self.sim.targets.version += 1 self.sim.add_target("targets", target.content, target.path) self.sim.update_snapshot() updater = self._init_updater() # target now exists, is not in cache yet info = updater.get_targetinfo(target.path) assert info is not None # Test without and with explicit local filepath self.assertIsNone(updater.find_cached_target(info)) self.assertIsNone(updater.find_cached_target(info, path)) # download target, assert it is in cache and content is correct self.assertEqual(path, updater.download_target(info)) self.assertEqual(path, updater.find_cached_target(info)) self.assertEqual(path, updater.find_cached_target(info, path)) with open(path, "rb") as f: self.assertEqual(f.read(), target.content) # download using explicit filepath as well os.remove(path) self.assertEqual(path, updater.download_target(info, path)) self.assertEqual(path, updater.find_cached_target(info)) self.assertEqual(path, updater.find_cached_target(info, path)) def test_download_targets_with_succinct_roles(self) -> None: self.sim.add_succinct_roles("targets", 8, "bin") self.sim.update_snapshot() assert self.sim.targets.delegations is not None assert self.sim.targets.delegations.succinct_roles is not None succinct_roles = self.sim.targets.delegations.succinct_roles # Add lots of targets with unique data to imitate a real repository. 
for i in range(20): target_name = f"target-{i}" target_bin = succinct_roles.get_role_for_target(target_name) self.sim.add_target( target_bin, bytes(target_name, "utf-8"), target_name ) # download each target updater = self._init_updater() for i in range(20): target_name = f"target-{i}" # Verify that the target info was successfully found. target_info = updater.get_targetinfo(target_name) assert target_info is not None target_full_path = updater.download_target(target_info) # Verify that the target content is the same as the target name. with open(target_full_path, encoding="utf-8") as target: self.assertEqual(target.read(), target_name) def test_invalid_target_download(self) -> None: target = TestTarget("targetpath", b"content", "targetpath") # Add target to repository self.sim.targets.version += 1 self.sim.add_target("targets", target.content, target.path) self.sim.update_snapshot() updater = self._init_updater() info = updater.get_targetinfo(target.path) assert info is not None # Corrupt the file content to not match the hash self.sim.target_files[target.path].data = b"conten@" with self.assertRaises(RepositoryError): updater.download_target(info) # Corrupt the file content to not match the length self.sim.target_files[target.path].data = b"cont" with self.assertRaises(RepositoryError): updater.download_target(info) # Verify the file is not persisted in cache self.assertIsNone(updater.find_cached_target(info)) def test_invalid_target_cache(self) -> None: target = TestTarget("targetpath", b"content", "targetpath") # Add target to repository self.sim.targets.version += 1 self.sim.add_target("targets", target.content, target.path) self.sim.update_snapshot() # Download the target updater = self._init_updater() info = updater.get_targetinfo(target.path) assert info is not None path = updater.download_target(info) self.assertEqual(path, updater.find_cached_target(info)) # Add newer content to the same targetpath target.content = b"contentv2" self.sim.targets.version += 1 self.sim.add_target("targets", target.content, target.path) self.sim.update_snapshot() # Newer content is detected, old cached version is not used updater = self._init_updater() info = updater.get_targetinfo(target.path) assert info is not None self.assertIsNone(updater.find_cached_target(info)) # Download target, assert it is in cache and content is the newer path = updater.download_target(info) self.assertEqual(path, updater.find_cached_target(info)) with open(path, "rb") as f: self.assertEqual(f.read(), target.content) def test_meta_missing_delegated_role(self) -> None: """Test a delegation where the role is not part of the snapshot""" # Add new delegation, update snapshot. 
Do not add the actual role role = DelegatedRole("role1", [], 1, True, ["*"]) self.sim.targets.delegations = Delegations({}, roles={role.name: role}) self.sim.update_snapshot() # assert that RepositoryError is raised when role1 is needed updater = self._init_updater() with self.assertRaises(RepositoryError): updater.get_targetinfo("") if __name__ == "__main__": if "--dump" in sys.argv: TestFetchTarget.dump_dir = tempfile.mkdtemp() print(f"Repository Simulator dumps in {TestFetchTarget.dump_dir}") sys.argv.remove("--dump") utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_key_rotations.py0000644000000000000000000002720513615410400016432 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test ngclient Updater key rotation handling""" from __future__ import annotations import os import sys import tempfile import unittest from dataclasses import dataclass from typing import ClassVar from securesystemslib.signer import CryptoSigner, Signer from tests import utils from tests.repository_simulator import RepositorySimulator from tests.utils import run_sub_tests_with_dataset from tuf.api.exceptions import UnsignedMetadataError from tuf.api.metadata import Key, Root from tuf.ngclient import Updater @dataclass class MdVersion: keys: list[int] threshold: int sigs: list[int] res: type[Exception] | None = None class TestUpdaterKeyRotations(unittest.TestCase): """Test ngclient root rotation handling""" # set dump_dir to trigger repository state dumps dump_dir: str | None = None temp_dir: ClassVar[tempfile.TemporaryDirectory] keys: ClassVar[list[Key]] signers: ClassVar[list[Signer]] @classmethod def setUpClass(cls) -> None: cls.temp_dir = tempfile.TemporaryDirectory() # Pre-create a bunch of keys and signers cls.signers = [] for _ in range(10): signer = CryptoSigner.generate_ed25519() cls.signers.append(signer) @classmethod def tearDownClass(cls) -> None: cls.temp_dir.cleanup() def setup_subtest(self) -> None: # Setup repository for subtest: make sure no roots have been published self.sim = RepositorySimulator() self.sim.signed_roots.clear() self.sim.root.version = 0 if self.dump_dir is not None: # create subtest dumpdir name = f"{self.id().split('.')[-1]}-{self.case_name}" self.sim.dump_dir = os.path.join(self.dump_dir, name) os.mkdir(self.sim.dump_dir) def _run_refresh(self) -> None: """Create new updater, run refresh""" if self.sim.dump_dir is not None: self.sim.write() # bootstrap with initial root self.metadata_dir = tempfile.mkdtemp(dir=self.temp_dir.name) with open(os.path.join(self.metadata_dir, "root.json"), "bw") as f: f.write(self.sim.signed_roots[0]) updater = Updater( self.metadata_dir, "https://example.com/metadata/", fetcher=self.sim, ) updater.refresh() # fmt: off root_rotation_cases = { "1-of-1 key rotation": [ MdVersion(keys=[1], threshold=1, sigs=[1]), MdVersion(keys=[2], threshold=1, sigs=[2, 1]), MdVersion(keys=[2], threshold=1, sigs=[2]), ], "1-of-1 key rotation, unused signatures": [ MdVersion(keys=[1], threshold=1, sigs=[3, 1, 4]), MdVersion(keys=[2], threshold=1, sigs=[3, 2, 1, 4]), MdVersion(keys=[2], threshold=1, sigs=[3, 2, 4]), ], "1-of-1 key rotation fail: not signed with old key": [ MdVersion(keys=[1], threshold=1, sigs=[1]), MdVersion(keys=[2], threshold=1, sigs=[2, 3, 4], res=UnsignedMetadataError), ], "1-of-1 key rotation fail: not signed with new key": [ MdVersion(keys=[1], threshold=1, sigs=[1]), MdVersion(keys=[2], threshold=1, sigs=[1, 3, 4], 
res=UnsignedMetadataError), ], "3-of-5, sign with different keycombos": [ MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 4, 1]), MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 1, 3]), MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 1, 3]), ], "3-of-5, one key rotated": [ MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 4, 1]), ], "3-of-5, one key rotate fails: not signed with 3 new keys": [ MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 2, 4], res=UnsignedMetadataError), ], "3-of-5, one key rotate fails: not signed with 3 old keys": [ MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 4, 5], res=UnsignedMetadataError), ], "3-of-5, one key rotated, with intermediate step": [ MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 2, 4, 5]), MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 4, 5]), ], "3-of-5, all keys rotated, with intermediate step": [ MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), MdVersion(keys=[5, 6, 7, 8, 9], threshold=3, sigs=[0, 2, 4, 5, 6, 7]), MdVersion(keys=[5, 6, 7, 8, 9], threshold=3, sigs=[5, 6, 7]), ], "1-of-3 threshold increase to 2-of-3": [ MdVersion(keys=[1, 2, 3], threshold=1, sigs=[1]), MdVersion(keys=[1, 2, 3], threshold=2, sigs=[1, 2]), ], "1-of-3 threshold bump to 2-of-3 fails: new threshold not reached": [ MdVersion(keys=[1, 2, 3], threshold=1, sigs=[1]), MdVersion(keys=[1, 2, 3], threshold=2, sigs=[2], res=UnsignedMetadataError), ], "2-of-3 threshold decrease to 1-of-3": [ MdVersion(keys=[1, 2, 3], threshold=2, sigs=[1, 2]), MdVersion(keys=[1, 2, 3], threshold=1, sigs=[1, 2]), MdVersion(keys=[1, 2, 3], threshold=1, sigs=[1]), ], "2-of-3 threshold decr. to 1-of-3 fails: old threshold not reached": [ MdVersion(keys=[1, 2, 3], threshold=2, sigs=[1, 2]), MdVersion(keys=[1, 2, 3], threshold=1, sigs=[1], res=UnsignedMetadataError), ], "1-of-2 threshold increase to 2-of-2": [ MdVersion(keys=[1], threshold=1, sigs=[1]), MdVersion(keys=[1, 2], threshold=2, sigs=[1, 2]), ], } # fmt: on @run_sub_tests_with_dataset(root_rotation_cases) def test_root_rotation(self, root_versions: list[MdVersion]) -> None: """Test Updater.refresh() with various sequences of root updates Each MdVersion in the list describes root keys and signatures of a remote root metadata version. As an example: MdVersion([1,2,3], 2, [1,2]) defines a root that contains keys 1, 2 and 3 with threshold 2. The metadata is signed with keys 1 and 2. Assert that refresh() result is expected and that local root on disk is the expected one after all roots have been loaded from remote using the standard client update workflow. 
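        Concretely, the "1-of-1 key rotation" case above publishes:

            v1: keys=[1], signed with key 1
            v2: keys=[2], signed with keys 2 and 1   # crosses the rotation
            v3: keys=[2], signed with key 2

        The middle version has to satisfy both the previous root's keys and
        its own, which is why the rotation-failure cases (signed only with
        the old key, or only with the new key) expect UnsignedMetadataError.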
""" self.setup_subtest() # Publish all remote root versions defined in root_versions for rootver in root_versions: # clear root keys, signers self.sim.root.roles[Root.type].keyids.clear() self.sim.signers[Root.type].clear() self.sim.root.roles[Root.type].threshold = rootver.threshold for i in rootver.keys: self.sim.root.add_key(self.signers[i].public_key, Root.type) for i in rootver.sigs: self.sim.add_signer(Root.type, self.signers[i]) self.sim.root.version += 1 self.sim.publish_root() # run client workflow, assert success/failure expected_error = root_versions[-1].res if expected_error is None: self._run_refresh() expected_local_root = self.sim.signed_roots[-1] else: # failure expected: local root should be the root before last with self.assertRaises(expected_error): self._run_refresh() expected_local_root = self.sim.signed_roots[-2] # assert local root on disk is expected with open(os.path.join(self.metadata_dir, "root.json"), "rb") as f: self.assertEqual(f.read(), expected_local_root) # fmt: off non_root_rotation_cases: dict[str, MdVersion] = { "1-of-1 key rotation": MdVersion(keys=[2], threshold=1, sigs=[2]), "1-of-1 key rotation, unused signatures": MdVersion(keys=[1], threshold=1, sigs=[3, 1, 4]), "1-of-1 key rotation fail: not signed with new key": MdVersion(keys=[2], threshold=1, sigs=[1, 3, 4], res=UnsignedMetadataError), "3-of-5, one key signature wrong: not signed with 3 expected keys": MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 2, 4], res=UnsignedMetadataError), "2-of-5, one key signature missing: threshold not reached": MdVersion(keys=[0, 1, 3, 4, 5], threshold=3, sigs=[0, 4], res=UnsignedMetadataError), "3-of-5, sign first combo": MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 2, 4]), "3-of-5, sign second combo": MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 4, 1]), "3-of-5, sign third combo": MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[0, 1, 3]), "3-of-5, sign fourth combo": MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[1, 2, 3]), "3-of-5, sign fifth combo": MdVersion(keys=[0, 1, 2, 3, 4], threshold=3, sigs=[2, 3, 4]), } # fmt: on @run_sub_tests_with_dataset(non_root_rotation_cases) def test_non_root_rotations(self, md_version: MdVersion) -> None: """Test Updater.refresh() with various sequences of metadata updates Each MdVersion in the list describes metadata keys and signatures of a remote metadata version. As an example: MdVersion([1,2,3], 2, [1,2]) defines a metadata that contains keys 1, 2 and 3 with threshold 2. The metadata is signed with keys 1 and 2. Assert that refresh() result is expected and that local metadata on disk is the expected one after all roots have been loaded from remote using the standard client update workflow. 
""" self.setup_subtest() roles = ["timestamp", "snapshot", "targets"] for role in roles: # clear role keys, signers self.sim.root.roles[role].keyids.clear() self.sim.signers[role].clear() self.sim.root.roles[role].threshold = md_version.threshold for i in md_version.keys: self.sim.root.add_key(self.signers[i].public_key, role) for i in md_version.sigs: self.sim.add_signer(role, self.signers[i]) self.sim.root.version += 1 self.sim.publish_root() # run client workflow, assert success/failure expected_error = md_version.res if expected_error is None: self._run_refresh() # Call fetch_metadata to sign metadata with new keys expected_local_md: bytes = self.sim.fetch_metadata(role) # assert local metadata role is on disk as expected md_path = os.path.join(self.metadata_dir, f"{role}.json") with open(md_path, "rb") as f: data = f.read() self.assertEqual(data, expected_local_md) else: # failure expected with self.assertRaises(expected_error): self._run_refresh() if __name__ == "__main__": if "--dump" in sys.argv: TestUpdaterKeyRotations.dump_dir = tempfile.mkdtemp() print(f"Repository dumps in {TestUpdaterKeyRotations.dump_dir}") sys.argv.remove("--dump") utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_ng.py0000644000000000000000000003305213615410400014141 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test Updater class""" from __future__ import annotations import logging import os import shutil import sys import tempfile import unittest from collections.abc import Iterable from typing import TYPE_CHECKING, Callable, ClassVar from unittest.mock import MagicMock, patch from securesystemslib.signer import Signer from tests import utils from tuf.api import exceptions from tuf.api.metadata import ( Metadata, Root, Snapshot, TargetFile, Targets, Timestamp, ) from tuf.ngclient import Updater, UpdaterConfig if TYPE_CHECKING: from collections.abc import Iterable logger = logging.getLogger(__name__) class TestUpdater(unittest.TestCase): """Test the Updater class from 'tuf/ngclient/updater.py'.""" server_process_handler: ClassVar[utils.TestServerProcess] @classmethod def setUpClass(cls) -> None: cls.tmp_test_root_dir = tempfile.mkdtemp(dir=os.getcwd()) # Launch a SimpleHTTPServer # Test cases will request metadata and target files that have been # pre-generated in 'tuf/tests/repository_data', and are copied to # CWD/tmp_test_root_dir/* cls.server_process_handler = utils.TestServerProcess(log=logger) @classmethod def tearDownClass(cls) -> None: # Cleans resources, flush the logged lines (if any) and remove test dir cls.server_process_handler.clean() shutil.rmtree(cls.tmp_test_root_dir) def setUp(self) -> None: # Create tmp test dir inside of tmp test root dir to independently serve # new repository files for each test. We delete all tmp dirs at once in # tearDownClass after the server has released all resources. self.tmp_test_dir = tempfile.mkdtemp(dir=self.tmp_test_root_dir) # Copy the original repository files provided in the test folder so that # any modifications are restricted to the copies. # The 'repository_data' directory is expected to exist in 'tuf.tests/'. 
original_repository_files = os.path.join( utils.TESTS_DIR, "repository_data" ) original_repository = os.path.join( original_repository_files, "repository" ) original_keystore = os.path.join(original_repository_files, "keystore") original_client = os.path.join( original_repository_files, "client", "test_repository1", "metadata", "current", ) # Save references to the often-needed client repository directories. # Test cases need these references to access metadata and target files. self.repository_directory = os.path.join( self.tmp_test_dir, "repository" ) self.keystore_directory = os.path.join(self.tmp_test_dir, "keystore") self.client_directory = os.path.join(self.tmp_test_dir, "client") # Copy the original 'repository', 'client', and 'keystore' directories # to the temporary repository the test cases can use. shutil.copytree(original_repository, self.repository_directory) shutil.copytree(original_client, self.client_directory) shutil.copytree(original_keystore, self.keystore_directory) # 'path/to/tmp/repository' -> 'localhost:8001/tmp/repository'. repository_basepath = self.repository_directory[len(os.getcwd()) :] url_prefix = ( "http://" + utils.TEST_HOST_ADDRESS + ":" + str(self.server_process_handler.port) + repository_basepath.replace("\\", "/") ) self.metadata_url = f"{url_prefix}/metadata/" self.targets_url = f"{url_prefix}/targets/" self.dl_dir = tempfile.mkdtemp(dir=self.tmp_test_dir) # Creating a repository instance. The test cases will use this client # updater to refresh metadata, fetch target files, etc. self.updater = Updater( metadata_dir=self.client_directory, metadata_base_url=self.metadata_url, target_dir=self.dl_dir, target_base_url=self.targets_url, ) def tearDown(self) -> None: # Logs stdout and stderr from the sever subprocess. self.server_process_handler.flush_log() def _modify_repository_root( self, modification_func: Callable[[Metadata], None], bump_version: bool = False, ) -> None: """Apply 'modification_func' to root and persist it.""" role_path = os.path.join( self.repository_directory, "metadata", "root.json" ) root = Metadata[Root].from_file(role_path) modification_func(root) if bump_version: root.signed.version += 1 root_key_path = os.path.join(self.keystore_directory, "root_key") uri = f"file2:{root_key_path}" role = root.signed.get_delegated_role(Root.type) key = root.signed.get_key(role.keyids[0]) signer = Signer.from_priv_key_uri(uri, key) root.sign(signer) root.to_file( os.path.join(self.repository_directory, "metadata", "root.json") ) root.to_file( os.path.join( self.repository_directory, "metadata", f"{root.signed.version}.root.json", ) ) def _assert_files_exist(self, roles: Iterable[str]) -> None: """Assert that local metadata files match 'roles'""" expected_files = [f"{role}.json" for role in roles] found_files = [ e.name for e in os.scandir(self.client_directory) if e.is_file() ] self.assertListEqual(sorted(found_files), sorted(expected_files)) def test_refresh_and_download(self) -> None: # Test refresh without consistent targets - targets without hash prefix. 
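        # (Aside) "without hash prefix" refers to the request URL shape:
        # with consistent snapshots disabled the client asks for e.g.
        # ".../targets/file1.txt", while with consistent snapshots enabled
        # (and prefix_targets_with_hash set) it would ask for a form like
        # ".../targets/<sha256-digest>.file1.txt". These are illustrative
        # URL shapes, not literal fixtures from this repository; this test
        # repository uses the plain form.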
# top-level targets are already in local cache (but remove others) os.remove(os.path.join(self.client_directory, "role1.json")) os.remove(os.path.join(self.client_directory, "role2.json")) # top-level metadata is in local directory already self.updater.refresh() self._assert_files_exist( [Root.type, Snapshot.type, Targets.type, Timestamp.type] ) # Get targetinfos, assert that cache does not contain files info1 = self.updater.get_targetinfo("file1.txt") assert isinstance(info1, TargetFile) self._assert_files_exist( [Root.type, Snapshot.type, Targets.type, Timestamp.type] ) # Get targetinfo for 'file3.txt' listed in the delegated role1 info3 = self.updater.get_targetinfo("file3.txt") assert isinstance(info3, TargetFile) expected_files = [ "role1", Root.type, Snapshot.type, Targets.type, Timestamp.type, ] self._assert_files_exist(expected_files) self.assertIsNone(self.updater.find_cached_target(info1)) self.assertIsNone(self.updater.find_cached_target(info3)) # Download files, assert that cache has correct files self.updater.download_target(info1) path = self.updater.find_cached_target(info1) self.assertEqual(path, os.path.join(self.dl_dir, info1.path)) self.assertIsNone(self.updater.find_cached_target(info3)) self.updater.download_target(info3) path = self.updater.find_cached_target(info1) self.assertEqual(path, os.path.join(self.dl_dir, info1.path)) path = self.updater.find_cached_target(info3) self.assertEqual(path, os.path.join(self.dl_dir, info3.path)) def test_refresh_with_only_local_root(self) -> None: os.remove(os.path.join(self.client_directory, "timestamp.json")) os.remove(os.path.join(self.client_directory, "snapshot.json")) os.remove(os.path.join(self.client_directory, "targets.json")) os.remove(os.path.join(self.client_directory, "role1.json")) os.remove(os.path.join(self.client_directory, "role2.json")) self._assert_files_exist([Root.type]) self.updater.refresh() self._assert_files_exist( [Root.type, Snapshot.type, Targets.type, Timestamp.type] ) # Get targetinfo for 'file3.txt' listed in the delegated role1 self.updater.get_targetinfo("file3.txt") expected_files = [ "role1", Root.type, Snapshot.type, Targets.type, Timestamp.type, ] self._assert_files_exist(expected_files) def test_implicit_refresh_with_only_local_root(self) -> None: os.remove(os.path.join(self.client_directory, "timestamp.json")) os.remove(os.path.join(self.client_directory, "snapshot.json")) os.remove(os.path.join(self.client_directory, "targets.json")) os.remove(os.path.join(self.client_directory, "role1.json")) os.remove(os.path.join(self.client_directory, "role2.json")) self._assert_files_exist(["root"]) # Get targetinfo for 'file3.txt' listed in the delegated role1 self.updater.get_targetinfo("file3.txt") expected_files = ["role1", "root", "snapshot", "targets", "timestamp"] self._assert_files_exist(expected_files) def test_both_target_urls_not_set(self) -> None: # target_base_url = None and Updater._target_base_url = None updater = Updater(self.client_directory, self.metadata_url, self.dl_dir) info = TargetFile(1, {"sha256": ""}, "targetpath") with self.assertRaises(ValueError): updater.download_target(info) def test_no_target_dir_no_filepath(self) -> None: # filepath = None and Updater.target_dir = None updater = Updater(self.client_directory, self.metadata_url) info = TargetFile(1, {"sha256": ""}, "targetpath") with self.assertRaises(ValueError): updater.find_cached_target(info) with self.assertRaises(ValueError): updater.download_target(info) def test_external_targets_url(self) -> None: 
self.updater.refresh() info = self.updater.get_targetinfo("file1.txt") assert isinstance(info, TargetFile) self.updater.download_target(info, target_base_url=self.targets_url) def test_length_hash_mismatch(self) -> None: self.updater.refresh() targetinfo = self.updater.get_targetinfo("file1.txt") assert isinstance(targetinfo, TargetFile) length = targetinfo.length with self.assertRaises(exceptions.RepositoryError): targetinfo.length = 44 self.updater.download_target(targetinfo) with self.assertRaises(exceptions.RepositoryError): targetinfo.length = length targetinfo.hashes = {"sha256": "abcd"} self.updater.download_target(targetinfo) def test_updating_root(self) -> None: # Bump root version, resign and refresh self._modify_repository_root(lambda _: None, bump_version=True) self.updater.refresh() self.assertEqual(self.updater._trusted_set.root.version, 2) def test_missing_targetinfo(self) -> None: self.updater.refresh() # Get targetinfo for non-existing file self.assertIsNone(self.updater.get_targetinfo("file33.txt")) @patch.object(os, "replace", wraps=os.replace) @patch.object(os, "remove", wraps=os.remove) def test_persist_metadata_fails( self, wrapped_remove: MagicMock, wrapped_replace: MagicMock ) -> None: # Testing that when write succeeds (the file is created) and replace # fails by throwing OSError, then the file will be deleted. wrapped_replace.side_effect = OSError() with self.assertRaises(OSError): self.updater._persist_metadata("target", b"data") wrapped_replace.assert_called_once() wrapped_remove.assert_called_once() # Assert that the created tempfile during writing is eventually deleted # or in other words, there is no temporary file left in the folder. for filename in os.listdir(self.updater._dir): self.assertFalse(filename.startswith("tmp")) def test_invalid_target_base_url(self) -> None: info = TargetFile(1, {"sha256": ""}, "targetpath") with self.assertRaises(exceptions.DownloadError): self.updater.download_target( info, target_base_url="http://invalid/" ) def test_non_existing_target_file(self) -> None: info = TargetFile(1, {"sha256": ""}, "/non_existing_file.txt") # When non-existing target file is given, download fails with # "404 Client Error: File not found for url" with self.assertRaises(exceptions.DownloadHTTPError): self.updater.download_target(info) def test_user_agent(self) -> None: # test default self.updater.refresh() poolmgr = self.updater._fetcher._proxy_env.get_pool_manager( "http", "localhost" ) ua = poolmgr.headers["User-Agent"] self.assertEqual(ua[:11], "python-tuf/") # test custom UA updater = Updater( self.client_directory, self.metadata_url, self.dl_dir, self.targets_url, config=UpdaterConfig(app_user_agent="MyApp/1.2.3"), ) updater.refresh() poolmgr = updater._fetcher._proxy_env.get_pool_manager( "http", "localhost" ) ua = poolmgr.headers["User-Agent"] self.assertEqual(ua[:23], "MyApp/1.2.3 python-tuf/") if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_top_level_update.py0000644000000000000000000010335313615410400017072 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test ngclient Updater top-level metadata update workflow""" from __future__ import annotations import builtins import datetime import os import sys import tempfile import unittest from datetime import timezone from pathlib import Path from typing import TYPE_CHECKING from unittest.mock import MagicMock, call, patch import freezegun from tests import utils 
from tests.repository_simulator import RepositorySimulator from tuf.api.exceptions import ( BadVersionNumberError, DownloadLengthMismatchError, ExpiredMetadataError, LengthOrHashMismatchError, UnsignedMetadataError, ) from tuf.api.metadata import ( SPECIFICATION_VERSION, TOP_LEVEL_ROLE_NAMES, DelegatedRole, Metadata, Root, Snapshot, Targets, Timestamp, ) from tuf.ngclient import Updater if TYPE_CHECKING: from collections.abc import Iterable class TestRefresh(unittest.TestCase): """Test update of top-level metadata following 'Detailed client workflow' in the specification.""" # set dump_dir to trigger repository state dumps dump_dir: str | None = None past_datetime = datetime.datetime.now(timezone.utc).replace( microsecond=0 ) - datetime.timedelta(days=5) def setUp(self) -> None: self.temp_dir = tempfile.TemporaryDirectory() self.metadata_dir = os.path.join(self.temp_dir.name, "metadata") self.targets_dir = os.path.join(self.temp_dir.name, "targets") self.sim = RepositorySimulator() if self.dump_dir is not None: # create test specific dump directory name = self.id().split(".")[-1] self.sim.dump_dir = os.path.join(self.dump_dir, name) os.mkdir(self.sim.dump_dir) def tearDown(self) -> None: self.temp_dir.cleanup() def _run_refresh(self, skip_bootstrap: bool = False) -> Updater: """Create a new Updater instance and refresh""" updater = self._init_updater(skip_bootstrap) updater.refresh() return updater def _init_updater(self, skip_bootstrap: bool = False) -> Updater: """Create a new Updater instance""" if self.dump_dir is not None: self.sim.write() return Updater( self.metadata_dir, "https://example.com/metadata/", self.targets_dir, "https://example.com/targets/", self.sim, bootstrap=None if skip_bootstrap else self.sim.signed_roots[0], ) def _assert_files_exist(self, roles: Iterable[str]) -> None: """Assert that local metadata files match 'roles'""" expected_files = [f"{role}.json" for role in roles] found_files = [ e.name for e in os.scandir(self.metadata_dir) if e.is_file() ] self.assertListEqual(sorted(found_files), sorted(expected_files)) def _assert_content_equals( self, role: str, version: int | None = None ) -> None: """Assert that local file content is the expected""" expected_content = self.sim.fetch_metadata(role, version) with open(os.path.join(self.metadata_dir, f"{role}.json"), "rb") as f: self.assertEqual(f.read(), expected_content) def _assert_version_equals(self, role: str, expected_version: int) -> None: """Assert that local metadata version is the expected""" md = Metadata.from_file(os.path.join(self.metadata_dir, f"{role}.json")) self.assertEqual(md.signed.version, expected_version) def test_first_time_refresh(self) -> None: # Add one more root version to repository so that # refresh() updates from local trusted root (v1) to # remote root (v2) self.sim.root.version += 1 self.sim.publish_root() self._run_refresh() self._assert_files_exist(TOP_LEVEL_ROLE_NAMES) for role in TOP_LEVEL_ROLE_NAMES: version = 2 if role == Root.type else None self._assert_content_equals(role, version) def test_cached_root_missing_without_bootstrap(self) -> None: # Run update without a bootstrap, with empty cache: this fails since there is no # trusted root with self.assertRaises(OSError): self._run_refresh(skip_bootstrap=True) # Metadata dir is empty with self.assertRaises(FileNotFoundError): os.listdir(self.metadata_dir) def test_trusted_root_expired(self) -> None: # Create an expired root version self.sim.root.expires = self.past_datetime self.sim.root.version += 1 self.sim.publish_root() # 
Update to latest root which is expired but still # saved as a local root. updater = self._init_updater() with self.assertRaises(ExpiredMetadataError): updater.refresh() self._assert_files_exist([Root.type]) self._assert_content_equals(Root.type, 2) # Local root metadata can be loaded even if expired updater = self._init_updater() # Create a non-expired root version and refresh self.sim.root.expires = self.sim.safe_expiry self.sim.root.version += 1 self.sim.publish_root() updater.refresh() # Root is successfully updated to latest version self._assert_files_exist(TOP_LEVEL_ROLE_NAMES) self._assert_content_equals(Root.type, 3) def test_trusted_root_unsigned_without_bootstrap(self) -> None: # Cached root is not signed, bootstrap root is not used Path(self.metadata_dir).mkdir(parents=True) root_path = os.path.join(self.metadata_dir, "root.json") md_root = Metadata.from_bytes(self.sim.signed_roots[0]) md_root.signatures.clear() md_root.to_file(root_path) with self.assertRaises(UnsignedMetadataError): self._run_refresh(skip_bootstrap=True) # The update failed, no changes in metadata self._assert_files_exist([Root.type]) md_root_after = Metadata.from_file(root_path) self.assertEqual(md_root.to_bytes(), md_root_after.to_bytes()) def test_max_root_rotations(self) -> None: # Root must stop looking for new versions after Y number of # intermediate files were downloaded. updater = self._init_updater() updater.config.max_root_rotations = 3 # Create some number of roots greater than 'max_root_rotations' while self.sim.root.version < updater.config.max_root_rotations + 3: self.sim.root.version += 1 self.sim.publish_root() initial_root_version = 1 updater.refresh() # Assert that root version was increased with no more # than 'max_root_rotations' self._assert_version_equals( Root.type, initial_root_version + updater.config.max_root_rotations ) def test_intermediate_root_incorrectly_signed(self) -> None: # Check for an arbitrary software attack # Intermediate root v2 is unsigned self.sim.root.version += 1 root_signers = self.sim.signers[Root.type].copy() self.sim.signers[Root.type].clear() self.sim.publish_root() # Final root v3 is correctly signed self.sim.root.version += 1 self.sim.signers[Root.type] = root_signers self.sim.publish_root() # Incorrectly signed intermediate root is detected with self.assertRaises(UnsignedMetadataError): self._run_refresh() # The update failed, latest root version is v1 self._assert_files_exist([Root.type]) self._assert_content_equals(Root.type, 1) def test_intermediate_root_expired(self) -> None: # The expiration of the new (intermediate) root metadata file # does not matter yet # Intermediate root v2 is expired self.sim.root.expires = self.past_datetime self.sim.root.version += 1 self.sim.publish_root() # Final root v3 is up to date self.sim.root.expires = self.sim.safe_expiry self.sim.root.version += 1 self.sim.publish_root() self._run_refresh() # Successfully updated to root v3 self._assert_files_exist(TOP_LEVEL_ROLE_NAMES) self._assert_content_equals(Root.type, 3) def test_final_root_incorrectly_signed(self) -> None: # Check for an arbitrary software attack self.sim.root.version += 1 # root v2 self.sim.signers[Root.type].clear() self.sim.publish_root() with self.assertRaises(UnsignedMetadataError): self._run_refresh() # The update failed, latest root version is v1 self._assert_files_exist([Root.type]) self._assert_content_equals(Root.type, 1) def test_new_root_same_version(self) -> None: # Check for a rollback_attack # Repository serves a root file with the same version 
as previous self.sim.publish_root() with self.assertRaises(BadVersionNumberError): self._run_refresh() # The update failed, latest root version is v1 self._assert_files_exist([Root.type]) self._assert_content_equals(Root.type, 1) def test_new_root_nonconsecutive_version(self) -> None: # Repository serves non-consecutive root version self.sim.root.version += 2 self.sim.publish_root() with self.assertRaises(BadVersionNumberError): self._run_refresh() # The update failed, latest root version is v1 self._assert_files_exist([Root.type]) self._assert_content_equals(Root.type, 1) def test_final_root_expired(self) -> None: # Check for a freeze attack # Final root is expired self.sim.root.expires = self.past_datetime self.sim.root.version += 1 self.sim.publish_root() with self.assertRaises(ExpiredMetadataError): self._run_refresh() # The update failed but final root is persisted on the file system self._assert_files_exist([Root.type]) self._assert_content_equals(Root.type, 2) def test_new_timestamp_unsigned(self) -> None: # Check for an arbitrary software attack self.sim.signers[Timestamp.type].clear() with self.assertRaises(UnsignedMetadataError): self._run_refresh() self._assert_files_exist([Root.type]) def test_expired_timestamp_version_rollback(self) -> None: """Verifies that local timestamp is used in rollback checks even if it is expired. The timestamp updates and rollback checks are performed with the following timing: - Timestamp v1 expiry set to day 7 - First updater refresh performed on day 0 - Repository publishes timestamp v2 on day 0 - Timestamp v2 expiry set to day 21 - Second updater refresh performed on day 18: assert that rollback check uses expired timestamp v1""" now = datetime.datetime.now(timezone.utc) self.sim.timestamp.expires = now + datetime.timedelta(days=7) self.sim.timestamp.version = 2 # Make a successful update of valid metadata which stores it in cache self._run_refresh() self.sim.timestamp.expires = now + datetime.timedelta(days=21) self.sim.timestamp.version = 1 patcher = freezegun.freeze_time( datetime.datetime.now(timezone.utc) + datetime.timedelta(days=18) ) # Check that a rollback protection is performed even if # local timestamp has expired with patcher, self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_version_equals(Timestamp.type, 2) def test_expired_timestamp_snapshot_rollback(self) -> None: """Verifies that rollback protection is done even if local timestamp has expired. 
The snapshot updates and rollback protection checks are performed with the following timing: - Timestamp v1 expiry set to day 7 - Repository bumps snapshot to v3 on day 0 - First updater refresh performed on day 0 - Timestamp v2 expiry set to day 21 - Second updater refresh performed on day 18: assert that rollback protection is done with expired timestamp v1""" now = datetime.datetime.now(timezone.utc) self.sim.timestamp.expires = now + datetime.timedelta(days=7) # Bump the snapshot version number to 3 self.sim.update_snapshot() self.sim.update_snapshot() # Make a successful update of valid metadata which stores it in cache self._run_refresh() self.sim.snapshot.version = 1 # Snapshot version number is set to 2, which is still less than 3 self.sim.update_snapshot() self.sim.timestamp.expires = now + datetime.timedelta(days=21) patcher = freezegun.freeze_time( datetime.datetime.now(timezone.utc) + datetime.timedelta(days=18) ) # Assert that rollback protection is done even if # local timestamp has expired with patcher, self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_version_equals(Timestamp.type, 3) def test_new_timestamp_version_rollback(self) -> None: # Check for a rollback attack self.sim.timestamp.version = 2 self._run_refresh() self.sim.timestamp.version = 1 with self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_version_equals(Timestamp.type, 2) def test_new_timestamp_snapshot_rollback(self) -> None: # Check for a rollback attack. self.sim.snapshot.version = 2 self.sim.update_timestamp() # timestamp v2 self._run_refresh() # Snapshot meta version is smaller than previous self.sim.timestamp.snapshot_meta.version = 1 self.sim.timestamp.version += 1 # timestamp v3 with self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_version_equals(Timestamp.type, 2) def test_new_timestamp_expired(self) -> None: # Check for a freeze attack self.sim.timestamp.expires = self.past_datetime self.sim.update_timestamp() with self.assertRaises(ExpiredMetadataError): self._run_refresh() self._assert_files_exist([Root.type]) def test_new_timestamp_fast_forward_recovery(self) -> None: """Test timestamp fast-forward recovery using key rotation. 
The timestamp recovery is made by the following steps - Remove the timestamp key - Create and add a new key for timestamp - Bump and publish root - Rollback the timestamp version """ # attacker updates to a higher version self.sim.timestamp.version = 99999 # client refreshes the metadata and see the new timestamp version self._run_refresh() self._assert_version_equals(Timestamp.type, 99999) # repository rotates timestamp keys, rolls back timestamp version self.sim.rotate_keys(Timestamp.type) self.sim.root.version += 1 self.sim.publish_root() self.sim.timestamp.version = 1 # client refresh the metadata and see the initial timestamp version self._run_refresh() self._assert_version_equals(Timestamp.type, 1) def test_new_snapshot_hash_mismatch(self) -> None: # Check against timestamp role's snapshot hash # Update timestamp with snapshot's hashes self.sim.compute_metafile_hashes_length = True self.sim.update_timestamp() # timestamp v2 self._run_refresh() # Modify snapshot contents without updating # timestamp's snapshot hash self.sim.snapshot.expires += datetime.timedelta(days=1) self.sim.snapshot.version += 1 # snapshot v2 self.sim.timestamp.snapshot_meta.version = self.sim.snapshot.version self.sim.timestamp.version += 1 # timestamp v3 # Hash mismatch error with self.assertRaises(LengthOrHashMismatchError): self._run_refresh() self._assert_version_equals(Timestamp.type, 3) self._assert_version_equals(Snapshot.type, 1) def test_new_snapshot_unsigned(self) -> None: # Check for an arbitrary software attack self.sim.signers[Snapshot.type].clear() with self.assertRaises(UnsignedMetadataError): self._run_refresh() self._assert_files_exist([Root.type, Timestamp.type]) def test_new_snapshot_version_mismatch(self) -> None: # Check against timestamp role's snapshot version # Increase snapshot version without updating timestamp self.sim.snapshot.version += 1 with self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_files_exist([Root.type, Timestamp.type]) def test_new_snapshot_version_rollback(self) -> None: # Check for a rollback attack self.sim.snapshot.version = 2 self.sim.update_timestamp() self._run_refresh() self.sim.snapshot.version = 1 self.sim.update_timestamp() with self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_version_equals(Snapshot.type, 2) def test_new_snapshot_fast_forward_recovery(self) -> None: """Test snapshot fast-forward recovery using key rotation. The snapshot recovery requires the snapshot and timestamp key rotation. 
It is made by the following steps: - Remove the snapshot and timestamp keys - Create and add a new key for snapshot and timestamp - Rollback snapshot version - Bump and publish root - Bump the timestamp """ # attacker updates to a higher version (bumping timestamp is required) self.sim.snapshot.version = 99999 self.sim.update_timestamp() # client refreshes the metadata and see the new snapshot version self._run_refresh() self._assert_version_equals(Snapshot.type, 99999) # repository rotates snapshot & timestamp keys, rolls back snapshot self.sim.rotate_keys(Snapshot.type) self.sim.rotate_keys(Timestamp.type) self.sim.root.version += 1 self.sim.publish_root() self.sim.snapshot.version = 1 self.sim.update_timestamp() # client refresh the metadata and see the initial snapshot version self._run_refresh() self._assert_version_equals(Snapshot.type, 1) def test_new_snapshot_expired(self) -> None: # Check for a freeze attack self.sim.snapshot.expires = self.past_datetime self.sim.update_snapshot() with self.assertRaises(ExpiredMetadataError): self._run_refresh() self._assert_files_exist([Root.type, Timestamp.type]) def test_new_targets_hash_mismatch(self) -> None: # Check against snapshot role's targets hashes # Update snapshot with target's hashes self.sim.compute_metafile_hashes_length = True self.sim.update_snapshot() self._run_refresh() # Modify targets contents without updating # snapshot's targets hashes self.sim.targets.version += 1 self.sim.snapshot.meta[ "targets.json" ].version = self.sim.targets.version self.sim.snapshot.version += 1 self.sim.update_timestamp() with self.assertRaises(LengthOrHashMismatchError): self._run_refresh() self._assert_version_equals(Snapshot.type, 3) self._assert_version_equals(Targets.type, 1) def test_new_targets_unsigned(self) -> None: # Check for an arbitrary software attack self.sim.signers[Targets.type].clear() with self.assertRaises(UnsignedMetadataError): self._run_refresh() self._assert_files_exist([Root.type, Timestamp.type, Snapshot.type]) def test_new_targets_version_mismatch(self) -> None: # Check against snapshot role's targets version # Increase targets version without updating snapshot self.sim.targets.version += 1 with self.assertRaises(BadVersionNumberError): self._run_refresh() self._assert_files_exist([Root.type, Timestamp.type, Snapshot.type]) def test_new_targets_expired(self) -> None: # Check for a freeze attack. self.sim.targets.expires = self.past_datetime self.sim.update_snapshot() with self.assertRaises(ExpiredMetadataError): self._run_refresh() self._assert_files_exist([Root.type, Timestamp.type, Snapshot.type]) def test_compute_metafile_hashes_length(self) -> None: self.sim.compute_metafile_hashes_length = True self.sim.update_snapshot() self._run_refresh() self._assert_version_equals(Timestamp.type, 2) self._assert_version_equals(Snapshot.type, 2) self.sim.compute_metafile_hashes_length = False self.sim.update_snapshot() self._run_refresh() self._assert_version_equals(Timestamp.type, 3) self._assert_version_equals(Snapshot.type, 3) def test_new_targets_fast_forward_recovery(self) -> None: """Test targets fast-forward recovery using key rotation. 
The targets recovery is made by issuing new Snapshot keys, by following steps: - Remove the snapshot key - Create and add a new key for snapshot - Bump and publish root - Rollback the target version """ # attacker updates to a higher version self.sim.targets.version = 99999 self.sim.update_snapshot() # client refreshes the metadata and see the new targets version self._run_refresh() self._assert_version_equals(Targets.type, 99999) # repository rotates snapshot keys, rolls back targets version self.sim.rotate_keys(Snapshot.type) self.sim.root.version += 1 self.sim.publish_root() self.sim.targets.version = 1 self.sim.update_snapshot() # client refreshes the metadata version and see initial targets version self._run_refresh() self._assert_version_equals(Targets.type, 1) @patch.object(builtins, "open", wraps=builtins.open) def test_not_loading_targets_twice(self, wrapped_open: MagicMock) -> None: # Do not load targets roles more than once when traversing # the delegations tree # Add new delegated targets, update the snapshot spec_version = ".".join(SPECIFICATION_VERSION) targets = Targets(1, spec_version, self.sim.safe_expiry, {}, None) role = DelegatedRole("role1", [], 1, False, ["*"], None) self.sim.add_delegation("targets", role, targets) self.sim.update_snapshot() # Run refresh, top-level roles are loaded updater = self._run_refresh() # Clean up calls to open during refresh() wrapped_open.reset_mock() # First time looking for "somepath", only 'role1' must be loaded updater.get_targetinfo("somepath") wrapped_open.assert_called_once_with( os.path.join(self.metadata_dir, "role1.json"), "rb" ) wrapped_open.reset_mock() # Second call to get_targetinfo, all metadata is already loaded updater.get_targetinfo("somepath") wrapped_open.assert_not_called() def test_snapshot_rollback_with_local_snapshot_hash_mismatch(self) -> None: # Test triggering snapshot rollback check on a newly downloaded snapshot # when the local snapshot is loaded even when there is a hash mismatch # with timestamp.snapshot_meta. # By raising this flag on timestamp update the simulator would: # 1) compute the hash of the new modified version of snapshot # 2) assign the hash to timestamp.snapshot_meta # The purpose is to create a hash mismatch between timestamp.meta and # the local snapshot, but to have hash match between timestamp.meta and # the next snapshot version. self.sim.compute_metafile_hashes_length = True # Initialize all metadata and assign targets version higher than 1. self.sim.targets.version = 2 self.sim.update_snapshot() self._run_refresh() # The new targets must have a lower version than the local trusted one. self.sim.targets.version = 1 self.sim.update_snapshot() # During the snapshot update, the local snapshot will be loaded even if # there is a hash mismatch with timestamp.snapshot_meta, because it will # be considered as trusted. # Should fail as a new version of snapshot will be fetched which lowers # the snapshot.meta["targets.json"] version by 1 and throws an error. 
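        # In other words, the hash mismatch with timestamp.snapshot_meta must not stop the cached snapshot from serving as the trusted reference for this rollback check.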
with self.assertRaises(BadVersionNumberError): self._run_refresh() @patch.object(builtins, "open", wraps=builtins.open) def test_load_metadata_from_cache(self, wrapped_open: MagicMock) -> None: # Add new delegated targets spec_version = ".".join(SPECIFICATION_VERSION) targets = Targets(1, spec_version, self.sim.safe_expiry, {}, None) role = DelegatedRole("role1", [], 1, False, ["*"], None) self.sim.add_delegation("targets", role, targets) self.sim.update_snapshot() # Make a successful update of valid metadata which stores it in cache updater = self._run_refresh() updater.get_targetinfo("non_existent_target") # Clear statistics for open() calls and metadata requests wrapped_open.reset_mock() self.sim.fetch_tracker.metadata.clear() # Create a new updater and perform a second update while # the metadata is already stored in cache (metadata dir) updater = self._init_updater() updater.get_targetinfo("non_existent_target") # Test that metadata is loaded from cache and not downloaded root_dir = os.path.join(self.metadata_dir, "root_history") wrapped_open.assert_has_calls( [ call(os.path.join(root_dir, "2.root.json"), "rb"), call(os.path.join(self.metadata_dir, "timestamp.json"), "rb"), call(os.path.join(self.metadata_dir, "snapshot.json"), "rb"), call(os.path.join(self.metadata_dir, "targets.json"), "rb"), call(os.path.join(self.metadata_dir, "role1.json"), "rb"), ] ) expected_calls = [("root", 2), ("timestamp", None)] self.assertListEqual(self.sim.fetch_tracker.metadata, expected_calls) @patch.object(builtins, "open", wraps=builtins.open) def test_intermediate_root_cache(self, wrapped_open: MagicMock) -> None: """Test that refresh uses the intermediate roots from cache""" # Add root versions 2, 3 self.sim.root.version += 1 self.sim.publish_root() self.sim.root.version += 1 self.sim.publish_root() # Make a successful update of valid metadata which stores it in cache self._run_refresh() # assert that cache lookups happened but data was downloaded from remote root_dir = os.path.join(self.metadata_dir, "root_history") wrapped_open.assert_has_calls( [ call(os.path.join(root_dir, "2.root.json"), "rb"), call(os.path.join(root_dir, "3.root.json"), "rb"), call(os.path.join(root_dir, "4.root.json"), "rb"), call(os.path.join(self.metadata_dir, "timestamp.json"), "rb"), call(os.path.join(self.metadata_dir, "snapshot.json"), "rb"), call(os.path.join(self.metadata_dir, "targets.json"), "rb"), ] ) expected_calls = [ ("root", 2), ("root", 3), ("root", 4), ("timestamp", None), ("snapshot", 1), ("targets", 1), ] self.assertListEqual(self.sim.fetch_tracker.metadata, expected_calls) # Clear statistics for open() calls and metadata requests wrapped_open.reset_mock() self.sim.fetch_tracker.metadata.clear() # Run update again, assert that metadata from cache was used (including intermediate roots) self._run_refresh() wrapped_open.assert_has_calls( [ call(os.path.join(root_dir, "2.root.json"), "rb"), call(os.path.join(root_dir, "3.root.json"), "rb"), call(os.path.join(root_dir, "4.root.json"), "rb"), call(os.path.join(self.metadata_dir, "timestamp.json"), "rb"), call(os.path.join(self.metadata_dir, "snapshot.json"), "rb"), call(os.path.join(self.metadata_dir, "targets.json"), "rb"), ] ) expected_calls = [("root", 4), ("timestamp", None)] self.assertListEqual(self.sim.fetch_tracker.metadata, expected_calls) def test_intermediate_root_cache_poisoning(self) -> None: """Test that refresh works as expected when intermediate roots in cache are poisoned""" # Add root versions 2, 3 self.sim.root.version += 1 
self.sim.publish_root() self.sim.root.version += 1 self.sim.publish_root() # Make a successful update of valid metadata which stores it in cache self._run_refresh() # Modify cached intermediate root v2 so that it's no longer signed correctly root_path = os.path.join( self.metadata_dir, "root_history", "2.root.json" ) md = Metadata.from_file(root_path) md.signatures.clear() md.to_file(root_path) # Clear statistics for metadata requests self.sim.fetch_tracker.metadata.clear() # Update again, assert that intermediate root v2 was downloaded again self._run_refresh() expected_calls = [("root", 2), ("root", 4), ("timestamp", None)] self.assertListEqual(self.sim.fetch_tracker.metadata, expected_calls) # Clear statistics for metadata requests self.sim.fetch_tracker.metadata.clear() # Update again, this time assert that intermediate root v2 was used from cache self._run_refresh() expected_calls = [("root", 4), ("timestamp", None)] self.assertListEqual(self.sim.fetch_tracker.metadata, expected_calls) def test_expired_metadata(self) -> None: """Verifies that expired local timestamp/snapshot can be used for updating from remote. The updates and verifications are performed with the following timing: - Timestamp v1 expiry set to day 7 - First updater refresh performed on day 0 - Repository bumps snapshot and targets to v2 on day 0 - Timestamp v2 expiry set to day 21 - Second updater refresh performed on day 18, it is successful and timestamp/snapshot final versions are v2""" now = datetime.datetime.now(timezone.utc) self.sim.timestamp.expires = now + datetime.timedelta(days=7) # Make a successful update of valid metadata which stores it in cache self._run_refresh() self.sim.targets.version += 1 self.sim.update_snapshot() self.sim.timestamp.expires = now + datetime.timedelta(days=21) # Mocking time so that local timestam has expired # but the new timestamp has not with freezegun.freeze_time( datetime.datetime.now(timezone.utc) + datetime.timedelta(days=18) ): self._run_refresh() # Assert that the final version of timestamp/snapshot is version 2 # which means a successful refresh is performed # with expired local metadata for role in ["timestamp", "snapshot", "targets"]: md = Metadata.from_file( os.path.join(self.metadata_dir, f"{role}.json") ) self.assertEqual(md.signed.version, 2) def test_max_metadata_lengths(self) -> None: """Test that clients configured max metadata lengths are respected""" # client has root v1 already: create a new one available for download self.sim.root.version += 1 self.sim.publish_root() config_vars = [ "root_max_length", "timestamp_max_length", "snapshot_max_length", "targets_max_length", ] # make sure going over any length limit raises DownloadLengthMismatchError for var_name in config_vars: updater = self._init_updater() setattr(updater.config, var_name, 100) with self.assertRaises(DownloadLengthMismatchError): updater.refresh() # All good with normal length limits updater = self._init_updater() updater.refresh() def test_timestamp_eq_versions_check(self) -> None: # Test that a modified timestamp with different content, but the same # version doesn't replace the valid locally stored one. # Make a successful update of valid metadata which stores it in cache self._run_refresh() initial_timestamp_meta_ver = self.sim.timestamp.snapshot_meta.version # Change timestamp without bumping its version in order to test if a new # timestamp with the same version will be persisted. 
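        # Only snapshot_meta.version is modified below; timestamp.version itself is unchanged, so the client is expected to keep its previously stored timestamp.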
self.sim.timestamp.snapshot_meta.version = 100 self._run_refresh() # If the local timestamp md file has the same snapshot_meta.version as # the initial one, then the new modified timestamp has not been stored. timestamp_path = os.path.join(self.metadata_dir, "timestamp.json") timestamp: Metadata[Timestamp] = Metadata.from_file(timestamp_path) self.assertEqual( initial_timestamp_meta_ver, timestamp.signed.snapshot_meta.version ) if __name__ == "__main__": if "--dump" in sys.argv: TestRefresh.dump_dir = tempfile.mkdtemp() print(f"Repository Simulator dumps in {TestRefresh.dump_dir}") sys.argv.remove("--dump") utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_updater_validation.py0000644000000000000000000000411013615410400015660 0ustar00# Copyright 2022, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Test ngclient Updater validations.""" import os import sys import tempfile import unittest from tests import utils from tests.repository_simulator import RepositorySimulator from tuf.ngclient import Updater class TestUpdater(unittest.TestCase): """Test ngclient Updater input validation.""" def setUp(self) -> None: self.temp_dir = tempfile.TemporaryDirectory() self.metadata_dir = os.path.join(self.temp_dir.name, "metadata") self.targets_dir = os.path.join(self.temp_dir.name, "targets") os.mkdir(self.metadata_dir) os.mkdir(self.targets_dir) # Setup the repository, bootstrap client root.json self.sim = RepositorySimulator() with open(os.path.join(self.metadata_dir, "root.json"), "bw") as f: f.write(self.sim.signed_roots[0]) def tearDown(self) -> None: self.temp_dir.cleanup() def _new_updater(self) -> Updater: return Updater( self.metadata_dir, "https://example.com/metadata/", self.targets_dir, "https://example.com/targets/", fetcher=self.sim, ) def test_local_target_storage_fail(self) -> None: self.sim.add_target("targets", b"content", "targetpath") self.sim.targets.version += 1 self.sim.update_snapshot() updater = self._new_updater() target_info = updater.get_targetinfo("targetpath") assert target_info is not None with self.assertRaises(FileNotFoundError): updater.download_target(target_info, filepath="") def test_non_existing_metadata_dir(self) -> None: with self.assertRaises(FileNotFoundError): # Initialize Updater with non-existing metadata_dir Updater( "non_existing_metadata_dir", "https://example.com/metadata/", fetcher=self.sim, ) if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/test_utils.py0000644000000000000000000000416313615410400013152 0ustar00# Copyright 2020, TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """ test_utils.py Martin Vrachev. October 21, 2020. See LICENSE-MIT OR LICENSE for licensing information. Provide tests for some of the functions in utils.py module. """ import logging import os import socket import sys import unittest from tests import utils logger = logging.getLogger(__name__) def can_connect(port: int) -> bool: """Check if a socket can connect on the given port""" try: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect(("localhost", port)) return True except Exception: # noqa: BLE001 return False finally: # The process will always enter in finally even after return. 
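        # Close the socket object if it was created; a failed connect() still leaves it open.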
if sock: sock.close() class TestServerProcess(unittest.TestCase): """Test functionality provided in TestServerProcess from tests/utils.py.""" def test_simple_server_startup(self) -> None: # Test normal case server_process_handler = utils.TestServerProcess(log=logger) # Make sure we can connect to the server self.assertTrue(can_connect(server_process_handler.port)) server_process_handler.clean() def test_cleanup(self) -> None: # Test normal case server_process_handler = utils.TestServerProcess( log=logger, server=os.path.join(utils.TESTS_DIR, "simple_server.py") ) server_process_handler.clean() # Check if the process has successfully been killed. self.assertFalse(server_process_handler.is_process_running()) def test_server_exit_before_timeout(self) -> None: with self.assertRaises(utils.TestServerProcessError): utils.TestServerProcess(logger, server="non_existing_server.py") # Test starting a server which immediately exits." with self.assertRaises(utils.TestServerProcessError): utils.TestServerProcess(logger, server="fast_server_exit.py") if __name__ == "__main__": utils.configure_test_logging(sys.argv) unittest.main() tuf-6.0.0/tests/utils.py0000644000000000000000000003045713615410400012120 0ustar00# Copyright 2020, TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """ utils.py August 3, 2020. Jussi Kukkonen See LICENSE-MIT OR LICENSE for licensing information. Provide common utilities for TUF tests """ from __future__ import annotations import argparse import errno import logging import os import queue import socket import subprocess import sys import threading import time import warnings from contextlib import contextmanager from typing import IO, TYPE_CHECKING, Any, Callable if TYPE_CHECKING: import unittest from collections.abc import Iterator logger = logging.getLogger(__name__) # May be used to reliably read other files in tests dir regardless of cwd TESTS_DIR = os.path.dirname(os.path.realpath(__file__)) # Used when forming URLs on the client side TEST_HOST_ADDRESS = "127.0.0.1" # Test runner decorator: Runs the test as a set of N SubTests, # (where N is number of items in dataset), feeding the actual test # function one test case at a time def run_sub_tests_with_dataset( dataset: dict[str, Any], ) -> Callable[[Callable], Callable]: """Decorator starting a unittest.TestCase.subtest() for each of the cases in dataset""" def real_decorator( function: Callable[[unittest.TestCase, Any], None], ) -> Callable[[unittest.TestCase], None]: def wrapper(test_cls: unittest.TestCase) -> None: for case, data in dataset.items(): with test_cls.subTest(case=case): # Save case name for future reference test_cls.case_name = case.replace(" ", "_") function(test_cls, data) return wrapper return real_decorator class TestServerProcessError(Exception): def __init__(self, value: str = "TestServerProcess") -> None: super().__init__() self.value = value def __str__(self) -> str: return repr(self.value) @contextmanager def ignore_deprecation_warnings(module: str) -> Iterator[None]: with warnings.catch_warnings(): warnings.filterwarnings( "ignore", category=DeprecationWarning, module=module ) yield # Wait until host:port accepts connections. 
# Raises TimeoutError if this does not happen within timeout seconds # There are major differences between operating systems on how this works # but the current blocking connect() seems to work fast on Linux and seems # to at least work on Windows (ECONNREFUSED unfortunately has a 2 second # timeout on Windows) def wait_for_server( host: str, server: str, port: int, timeout: int = 10 ) -> None: """Wait for server start until timeout is reached or server has started""" start = time.time() remaining_timeout = timeout succeeded = False while not succeeded and remaining_timeout > 0: try: sock: socket.socket | None = socket.socket( socket.AF_INET, socket.SOCK_STREAM ) assert sock is not None sock.settimeout(remaining_timeout) sock.connect((host, port)) succeeded = True except socket.timeout: pass except OSError as e: # ECONNREFUSED is expected while the server is not started if e.errno not in [errno.ECONNREFUSED]: logger.warning( "Unexpected error while waiting for server: %s", str(e) ) # Avoid pegging a core just for this time.sleep(0.01) finally: if sock: sock.close() sock = None remaining_timeout = int(timeout - (time.time() - start)) if not succeeded: raise TimeoutError( "Could not connect to the " + server + " on port " + str(port) + "!" ) def configure_test_logging(argv: list[str]) -> None: """Configure logger level for a certain test file""" # parse arguments but only handle '-v': argv may contain # other things meant for unittest argument parser parser = argparse.ArgumentParser(add_help=False) parser.add_argument("-v", "--verbose", action="count", default=0) args, _ = parser.parse_known_args(argv) if args.verbose <= 1: # 0 and 1 both mean ERROR: this way '-v' makes unittest print test # names without increasing log level loglevel = logging.ERROR elif args.verbose == 2: loglevel = logging.WARNING elif args.verbose == 3: loglevel = logging.INFO else: loglevel = logging.DEBUG logging.basicConfig(level=loglevel) def cleanup_metadata_dir(path: str) -> None: """Delete the local metadata dir""" with os.scandir(path) as it: for entry in it: if entry.name == "root_history": cleanup_metadata_dir(entry.path) elif entry.name.endswith(".json"): os.remove(entry.path) else: raise ValueError(f"Unexpected local metadata file {entry.path}") class TestServerProcess: """Helper class used to create a child process with the subprocess.Popen object and use a thread-safe Queue structure for logging. Args: log: Logger which will be used for logging. server: Path to the server to run in the subprocess. timeout: Time in seconds in which the server should start or otherwise TimeoutError error will be raised. popen_cwd: Current working directory used when instancing a subprocess.Popen object. extra_cmd_args: Additional arguments for the command which will start the subprocess. More precisely: "python -u ". If no list is provided, an empty list ("[]") will be assigned to it. """ def __init__( self, log: logging.Logger, server: str = os.path.join(TESTS_DIR, "simple_server.py"), timeout: int = 10, popen_cwd: str = ".", extra_cmd_args: list[str] | None = None, ): self.server = server self.__logger = log # Stores popped messages from the queue. 
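        # flush_log() drains self._log_queue into this list before emitting it through the logger.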
self.__logged_messages: list[str] = [] self.__server_process: subprocess.Popen | None = None self._log_queue: queue.Queue | None = None self.port = -1 if extra_cmd_args is None: extra_cmd_args = [] try: self._start_server(timeout, extra_cmd_args, popen_cwd) wait_for_server("localhost", self.server, self.port, timeout) except Exception as e: # Clean the resources and log the server errors if any exists. self.clean() raise e def _start_server( self, timeout: int, extra_cmd_args: list[str], popen_cwd: str ) -> None: """ Start the server subprocess and a thread responsible to redirect stdout/stderr to the Queue. Waits for the port message maximum timeout seconds. """ self._start_process(extra_cmd_args, popen_cwd) self._start_redirect_thread() self._wait_for_port(timeout) self.__logger.info("%s serving on %d", self.server, self.port) def _start_process(self, extra_cmd_args: list[str], popen_cwd: str) -> None: """Starts the process running the server.""" # The "-u" option forces stdin, stdout and stderr to be unbuffered. command = [sys.executable, "-u", self.server, *extra_cmd_args] # Reusing one subprocess in multiple tests, but split up the logs # for each. self.__server_process = subprocess.Popen( command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=popen_cwd, ) def _start_redirect_thread(self) -> None: """Starts a thread redirecting the stdout/stderr to the Queue.""" assert isinstance(self.__server_process, subprocess.Popen) # Run log_queue_worker() in a thread. # The thread will exit when the child process dies. self._log_queue = queue.Queue() log_thread = threading.Thread( target=self._log_queue_worker, args=(self.__server_process.stdout, self._log_queue), ) # "daemon = True" means the thread won't interfere with the # process exit. log_thread.daemon = True log_thread.start() @staticmethod def _log_queue_worker(stream: IO, line_queue: queue.Queue) -> None: """ Worker function to run in a separate thread. Reads from 'stream', puts lines in a Queue (Queue is thread-safe). """ while True: # readline() is a blocking operation. # decode to push a string in the queue instead of 8-bit bytes. log_line = stream.readline().decode("utf-8") line_queue.put(log_line) if len(log_line) == 0: # This is the end of the stream meaning the server process # has exited. stream.close() break def _wait_for_port(self, timeout: int) -> None: """ Validates the first item from the Queue against the port message. If validation is successful, self.port is set. Raises TestServerProcessError if the process has exited or TimeoutError if no message was found within timeout seconds. """ assert isinstance(self.__server_process, subprocess.Popen) assert isinstance(self._log_queue, queue.Queue) # We have hardcoded the message we expect on a successful server # startup. This message should be the first message sent by the server! expected_msg = "bind succeeded, server port is: " try: line = self._log_queue.get(timeout=timeout) if len(line) == 0: # The process has exited. raise TestServerProcessError( self.server + " exited unexpectedly " + "with code " + str(self.__server_process.poll()) + "!" ) if line.startswith(expected_msg): self.port = int(line[len(expected_msg) :]) else: # An exception or some other message is printed from the server. self.__logged_messages.append(line) # Check if more lines are logged. self.flush_log() raise TestServerProcessError( self.server + " did not print port " + "message as first stdout line as expected!" 
) except queue.Empty as e: raise TimeoutError( "Failure during " + self.server + " startup!" ) from e def _kill_server_process(self) -> None: """Kills the server subprocess if it's running.""" assert isinstance(self.__server_process, subprocess.Popen) if self.is_process_running(): self.__logger.info( "Server process %d terminated", self.__server_process.pid ) self.__server_process.kill() self.__server_process.wait() def flush_log(self) -> None: """Flushes the log lines from the logging queue.""" assert isinstance(self._log_queue, queue.Queue) while True: # Get lines from log_queue try: line = self._log_queue.get(block=False) if len(line) > 0: self.__logged_messages.append(line) except queue.Empty: # No more lines are logged in the queue. break if len(self.__logged_messages) > 0: title = "Test server (" + self.server + ") output:\n" message = [title, *self.__logged_messages] self.__logger.info("| ".join(message)) self.__logged_messages = [] def clean(self) -> None: """ Kills the subprocess and closes the TempFile. Calls flush_log to check for logged information, but not yet flushed. """ # If there is anything logged, flush it before closing the resources. self.flush_log() self._kill_server_process() def is_process_running(self) -> bool: assert isinstance(self.__server_process, subprocess.Popen) return self.__server_process.poll() is None tuf-6.0.0/tests/generated_data/__init__.py0000644000000000000000000000000013615410400015423 0ustar00tuf-6.0.0/tests/generated_data/generate_md.py0000644000000000000000000001000613615410400016145 0ustar00"""Script for generating new metadata files.""" # Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 from __future__ import annotations import os import sys from datetime import datetime, timezone from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey from securesystemslib.signer import CryptoSigner, Signer, SSlibKey from tests import utils from tuf.api.metadata import Metadata, Root, Snapshot, Targets, Timestamp from tuf.api.serialization.json import JSONSerializer # Hardcode keys and expiry time to achieve reproducibility. 
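# The four ed25519 keypairs defined below are used, in order, for the root, timestamp, snapshot and targets roles (see generate_all_files()).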
public_values: list[str] = [ "b11d2ff132c033a657318c74c39526476c56de7556c776f11070842dbc4ac14c", "250f9ae3d1d3d5c419a73cfb4a470c01de1d5d3d61a3825416b5f5d6b88f4a30", "82380623abb9666d4bf274b1a02577469445a972e5650d270101faa5107b19c8", "0e6738fc1ac6fb4de680b4be99ecbcd99b030f3963f291277eef67bb9bd123e9", ] private_values: list[bytes] = [ bytes.fromhex( "510e5e04d7a364af850533856eacdf65d30cc0f8803ecd5fdc0acc56ca2aa91c" ), bytes.fromhex( "e6645b00312c8a257782e3e61e85bafda4317ad072c52251ef933d480c387abd" ), bytes.fromhex( "cd13dd2180334b24c19b32aaf27f7e375a614d7ba0777220d5c2290bb2f9b868" ), bytes.fromhex( "7e2e751145d1b22f6e40d4ba2aa47158207acfd3c003f1cbd5a08141dfc22a15" ), ] keyids: list[str] = [ "5822582e7072996c1eef1cec24b61115d364987faa486659fe3d3dce8dae2aba", "09d440e3725cec247dcb8703b646a87dd2a4d75343e8095c036c32795eefe3b9", "3458204ed467519c19a5316eb278b5608472a1bbf15850ebfb462d5315e4f86d", "2be5c21e3614f9f178fb49c4a34d0c18ffac30abd14ced917c60a52c8d8094b7", ] signers: list[Signer] = [] for index in range(len(keyids)): key = SSlibKey( keyids[index], "ed25519", "ed25519", {"public": public_values[index]}, ) private_key = Ed25519PrivateKey.from_private_bytes(private_values[index]) signers.append(CryptoSigner(private_key, key)) EXPIRY = datetime(2050, 1, 1, tzinfo=timezone.utc) SERIALIZER = JSONSerializer() def verify_generation(md: Metadata, path: str) -> None: """Verify that newly generated file equals the locally stored one. Args: md: Newly generated metadata object. path: Path to the locally stored metadata file. """ with open(path, "rb") as f: static_md_bytes = f.read() md_bytes = md.to_bytes(SERIALIZER) if static_md_bytes != md_bytes: raise ValueError( f"Generated data != local data at {path}. Generate a new " "metadata with 'python generated_data/generate_md.py'" ) def generate_all_files(dump: bool = False) -> None: """Generate a new repository or verify that output has not changed. Args: dump: If True, new files are generated. If False, existing files are compared to generated files and an exception is raised if there are differences. """ md_root = Metadata(Root(expires=EXPIRY)) md_timestamp = Metadata(Timestamp(expires=EXPIRY)) md_snapshot = Metadata(Snapshot(expires=EXPIRY)) md_targets = Metadata(Targets(expires=EXPIRY)) md_root.signed.add_key(signers[0].public_key, "root") md_root.signed.add_key(signers[1].public_key, "timestamp") md_root.signed.add_key(signers[2].public_key, "snapshot") md_root.signed.add_key(signers[3].public_key, "targets") for i, md in enumerate([md_root, md_timestamp, md_snapshot, md_targets]): assert isinstance(md, Metadata) md.sign(signers[i]) path = os.path.join( utils.TESTS_DIR, "generated_data", "ed25519_metadata", f"{md.signed.type}_with_ed25519.json", ) if dump: md.to_file(path, SERIALIZER) else: verify_generation(md, path) if __name__ == "__main__": utils.configure_test_logging(sys.argv) # To generate a new set of metadata files this script is supposed to be run # from the "tests" folder. 
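# For example, running `python generated_data/generate_md.py` from the tests directory rewrites the files under generated_data/ed25519_metadata/.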
generate_all_files(dump=True) tuf-6.0.0/tests/generated_data/ed25519_metadata/root_with_ed25519.json0000644000000000000000000000357013615410400022156 0ustar00{ "signatures": [ { "keyid": "5822582e7072996c1eef1cec24b61115d364987faa486659fe3d3dce8dae2aba", "sig": "06fc2b26d10afae02689c96dee96ff40e702734accec6f3e69671dec0a59e0763bd7cb7d5ffa4b9e87441c4c98e798ce97cb462e7075e38ad9bc1d0d0c657309" } ], "signed": { "_type": "root", "consistent_snapshot": true, "expires": "2050-01-01T00:00:00Z", "keys": { "09d440e3725cec247dcb8703b646a87dd2a4d75343e8095c036c32795eefe3b9": { "keytype": "ed25519", "keyval": { "public": "250f9ae3d1d3d5c419a73cfb4a470c01de1d5d3d61a3825416b5f5d6b88f4a30" }, "scheme": "ed25519" }, "2be5c21e3614f9f178fb49c4a34d0c18ffac30abd14ced917c60a52c8d8094b7": { "keytype": "ed25519", "keyval": { "public": "0e6738fc1ac6fb4de680b4be99ecbcd99b030f3963f291277eef67bb9bd123e9" }, "scheme": "ed25519" }, "3458204ed467519c19a5316eb278b5608472a1bbf15850ebfb462d5315e4f86d": { "keytype": "ed25519", "keyval": { "public": "82380623abb9666d4bf274b1a02577469445a972e5650d270101faa5107b19c8" }, "scheme": "ed25519" }, "5822582e7072996c1eef1cec24b61115d364987faa486659fe3d3dce8dae2aba": { "keytype": "ed25519", "keyval": { "public": "b11d2ff132c033a657318c74c39526476c56de7556c776f11070842dbc4ac14c" }, "scheme": "ed25519" } }, "roles": { "root": { "keyids": [ "5822582e7072996c1eef1cec24b61115d364987faa486659fe3d3dce8dae2aba" ], "threshold": 1 }, "snapshot": { "keyids": [ "3458204ed467519c19a5316eb278b5608472a1bbf15850ebfb462d5315e4f86d" ], "threshold": 1 }, "targets": { "keyids": [ "2be5c21e3614f9f178fb49c4a34d0c18ffac30abd14ced917c60a52c8d8094b7" ], "threshold": 1 }, "timestamp": { "keyids": [ "09d440e3725cec247dcb8703b646a87dd2a4d75343e8095c036c32795eefe3b9" ], "threshold": 1 } }, "spec_version": "1.0.31", "version": 1 } }tuf-6.0.0/tests/generated_data/ed25519_metadata/snapshot_with_ed25519.json0000644000000000000000000000066013615410400023027 0ustar00{ "signatures": [ { "keyid": "3458204ed467519c19a5316eb278b5608472a1bbf15850ebfb462d5315e4f86d", "sig": "bab356be0a82b85b9529aa4625cbd7b8e03b71d1a0fb5d3242f6e8377f102bcf60cc1b8c2a566fd5618c5f5ee3fc07745e84920d26e5514ad455868d7899ae03" } ], "signed": { "_type": "snapshot", "expires": "2050-01-01T00:00:00Z", "meta": { "targets.json": { "version": 1 } }, "spec_version": "1.0.31", "version": 1 } }tuf-6.0.0/tests/generated_data/ed25519_metadata/targets_with_ed25519.json0000644000000000000000000000060413615410400022637 0ustar00{ "signatures": [ { "keyid": "2be5c21e3614f9f178fb49c4a34d0c18ffac30abd14ced917c60a52c8d8094b7", "sig": "9e47f85b3edc79b7215bfee1291da46655deca0b6de99cb3968293218f3329855e57c1523120a50e3a2a8cc50aa9e886f4f74d902d28f43559f294681152f30b" } ], "signed": { "_type": "targets", "expires": "2050-01-01T00:00:00Z", "spec_version": "1.0.31", "targets": {}, "version": 1 } }tuf-6.0.0/tests/generated_data/ed25519_metadata/timestamp_with_ed25519.json0000644000000000000000000000066213615410400023175 0ustar00{ "signatures": [ { "keyid": "09d440e3725cec247dcb8703b646a87dd2a4d75343e8095c036c32795eefe3b9", "sig": "f1b1921a5963485eb5f1cf83f1b44548420bdcced420a367f0c42b63c91950410287f6d062824941085361c3906bb44a365352e2971787a653443ff8df484007" } ], "signed": { "_type": "timestamp", "expires": "2050-01-01T00:00:00Z", "meta": { "snapshot.json": { "version": 1 } }, "spec_version": "1.0.31", "version": 1 } }tuf-6.0.0/tests/repository_data/client/map.json0000644000000000000000000000070113615410400016531 0ustar00{ "mapping": [ { "paths": [ "*1.txt" ], "repositories": [ 
"test_repository1", "test_repository2" ], "terminating": false, "threshold": 1 }, { "paths": [ "*3.txt" ], "repositories": [ "test_repository2" ], "terminating": true, "threshold": 1 } ], "repositories": { "test_repository1": [ "http://localhost:30001" ], "test_repository2": [ "http://localhost:30002" ] } } tuf-6.0.0/tests/repository_data/client/test_repository1/metadata/current/role1.json0000644000000000000000000000234413615410400025604 0ustar00{ "signatures": [ { "keyid": "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a", "sig": "9408b46569e622a46f1d35d9fa3c10e17a9285631ced4f2c9c2bba2c2842413fcb796db4e81d6f988fc056c21c407fdc3c10441592cf1e837e088f2e2dfd5403" } ], "signed": { "_type": "targets", "delegations": { "keys": { "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "fcf224e55fa226056adf113ef1eb3d55e308b75b321c8c8316999d8c4fd9e0d9" }, "scheme": "ed25519" } }, "roles": [ { "keyids": [ "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a" ], "name": "role2", "paths": [], "terminating": false, "threshold": 1 } ] }, "expires": "2030-01-01T00:00:00Z", "spec_version": "1.0.0", "targets": { "file3.txt": { "hashes": { "sha256": "141f740f53781d1ca54b8a50af22cbf74e44c21a998fa2a8a05aaac2c002886b", "sha512": "ef5beafa16041bcdd2937140afebd485296cd54f7348ecd5a4d035c09759608de467a7ac0eb58753d0242df873c305e8bffad2454aa48f44480f15efae1cacd0" }, "length": 28 } }, "version": 1 } }tuf-6.0.0/tests/repository_data/client/test_repository1/metadata/current/role2.json0000644000000000000000000000067113615410400025606 0ustar00{ "signatures": [ { "keyid": "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a", "sig": "6c32f8cc2c642803a7b3b022ede0cf727e82964c1aa934571ef366bd5050ed02cfe3fdfe5477c08d0cbcc2dd17bb786d37ab1ce2b27e01ad79faf087594e0300" } ], "signed": { "_type": "targets", "delegations": { "keys": {}, "roles": [] }, "expires": "2030-01-01T00:00:00Z", "spec_version": "1.0.0", "targets": {}, "version": 1 } }tuf-6.0.0/tests/repository_data/client/test_repository1/metadata/current/root.json0000644000000000000000000000650413615410400025547 0ustar00{ "signatures": [ { "keyid": "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb", "sig": "a337d6375fedd2eabfcd6c2ef6c8a9c3bb85dc5a857715f6a6bd41123e7670c4972d8548bcd7248154f3d864bf25f1823af59d74c459f41ea09a02db057ca1245612ebbdb97e782c501dc3e094f7fa8aa1402b03c6ed0635f565e2a26f9f543a89237e15a2faf0c267e2b34c3c38f2a43a28ddcdaf8308a12ead8c6dc47d1b762de313e9ddda8cc5bc25aea1b69d0e5b9199ca02f5dda48c3bff615fd12a7136d00634b9abc6e75c3256106c4d6f12e6c43f6195071355b2857bbe377ce028619b58837696b805040ce144b393d50a472531f430fadfb68d3081b6a8b5e49337e328c9a0a3f11e80b0bc8eb2dc6e78d1451dd857e6e6e6363c3fd14c590aa95e083c9bfc77724d78af86eb7a7ef635eeddaa353030c79f66b3ba9ea11fab456cfe896a826fdfb50a43cd444f762821aada9bcd7b022c0ee85b8768f960343d5a1d3d76374cc0ac9e12a500de0bf5d48569e5398cadadadab045931c398e3bcb6cec88af2437ba91959f956079cbed159fed3938016e6c3b5e446131f81cc5981" } ], "signed": { "_type": "root", "consistent_snapshot": false, "expires": "2030-01-01T00:00:00Z", "keys": { "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "rsa", "keyval": { "public": "-----BEGIN PUBLIC 
KEY-----\nMIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEA0GjPoVrjS9eCqzoQ8VRe\nPkC0cI6ktiEgqPfHESFzyxyjC490Cuy19nuxPcJuZfN64MC48oOkR+W2mq4pM51i\nxmdG5xjvNOBRkJ5wUCc8fDCltMUTBlqt9y5eLsf/4/EoBU+zC4SW1iPU++mCsity\nfQQ7U6LOn3EYCyrkH51hZ/dvKC4o9TPYMVxNecJ3CL1q02Q145JlyjBTuM3Xdqsa\nndTHoXSRPmmzgB/1dL/c4QjMnCowrKW06mFLq9RAYGIaJWfM/0CbrOJpVDkATmEc\nMdpGJYDfW/sRQvRdlHNPo24ZW7vkQUCqdRxvnTWkK5U81y7RtjLt1yskbWXBIbOV\nz94GXsgyzANyCT9qRjHXDDz2mkLq+9I2iKtEqaEePcWRu3H6RLahpM/TxFzw684Y\nR47weXdDecPNxWyiWiyMGStRFP4Cg9trcwAGnEm1w8R2ggmWphznCd5dXGhPNjfA\na82yNFY8ubnOUVJOf0nXGg3Edw9iY3xyjJb2+nrsk5f3AgMBAAE=\n-----END PUBLIC KEY-----" }, "scheme": "rsassa-pss-sha256" }, "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "edcd0a32a07dce33f7c7873aaffbff36d20ea30787574ead335eefd337e4dacd" }, "scheme": "ed25519" }, "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "89f28bd4ede5ec3786ab923fd154f39588d20881903e69c7b08fb504c6750815" }, "scheme": "ed25519" }, "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "82ccf6ac47298ff43bfa0cd639868894e305a99c723ff0515ae2e9856eb5bbf4" }, "scheme": "ed25519" } }, "roles": { "root": { "keyids": [ "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb" ], "threshold": 1 }, "snapshot": { "keyids": [ "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d" ], "threshold": 1 }, "targets": { "keyids": [ "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093" ], "threshold": 1 }, "timestamp": { "keyids": [ "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758" ], "threshold": 1 } }, "spec_version": "1.0.0", "version": 1 } }tuf-6.0.0/tests/repository_data/client/test_repository1/metadata/current/snapshot.json0000644000000000000000000000100313615410400026410 0ustar00{ "signatures": [ { "keyid": "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d", "sig": "085672c70dffe26610e58542ee552843633cfed973abdad94c56138dbf0cd991644f2d3f27e4dda3098e08ab676e7f52627b587947ae69db1012d59a6da18e0c" } ], "signed": { "_type": "snapshot", "expires": "2030-01-01T00:00:00Z", "meta": { "role1.json": { "version": 1 }, "role2.json": { "version": 1 }, "targets.json": { "version": 1 } }, "spec_version": "1.0.0", "version": 1 } }tuf-6.0.0/tests/repository_data/client/test_repository1/metadata/current/targets.json0000644000000000000000000000313013615410400026225 0ustar00{ "signatures": [ { "keyid": "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093", "sig": "d65f8db0c1a8f0976552b9742bbb393f24a5fa5eaf145c37aee047236c79dd0b83cfbb8b49fa7803689dfe0031dcf22c4d006b593acac07d69093b9b81722c08" } ], "signed": { "_type": "targets", "delegations": { "keys": { "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "fcf224e55fa226056adf113ef1eb3d55e308b75b321c8c8316999d8c4fd9e0d9" }, "scheme": "ed25519" } }, "roles": [ { "keyids": [ "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a" ], "name": "role1", "paths": [ "file3.txt" ], "terminating": false, "threshold": 1 } ] }, "expires": "2030-01-01T00:00:00Z", "spec_version": "1.0.0", "targets": { "file1.txt": { "custom": { "file_permissions": "0644" }, "hashes": 
{ "sha256": "65b8c67f51c993d898250f40aa57a317d854900b3a04895464313e48785440da", "sha512": "467430a68afae8e9f9c0771ea5d78bf0b3a0d79a2d3d3b40c69fde4dd42c461448aef76fcef4f5284931a1ffd0ac096d138ba3a0d6ca83fa8d7285a47a296f77" }, "length": 31 }, "file2.txt": { "hashes": { "sha256": "452ce8308500d83ef44248d8e6062359211992fd837ea9e370e561efb1a4ca99", "sha512": "052b49a21e03606b28942db69aa597530fe52d47ee3d748ba65afcd14b857738e36bc1714c4f4adde46c3e683548552fe5c96722e0e0da3acd9050c2524902d8" }, "length": 39 } }, "version": 1 } }tuf-6.0.0/tests/repository_data/client/test_repository1/metadata/current/timestamp.json0000644000000000000000000000105513615410400026563 0ustar00{ "signatures": [ { "keyid": "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758", "sig": "de0e16920f87bf5500cc65736488ac17e09788cce808f6a4e85eb9e4e478a312b4c1a2d7723af56f7bfb1df533c67d8c93b6f49d39eabe7fae391a08e1f72f01" } ], "signed": { "_type": "timestamp", "expires": "2030-01-01T00:00:00Z", "meta": { "snapshot.json": { "hashes": { "sha256": "8f88e2ba48b412c3843e9bb26e1b6f8fc9e98aceb0fbaa97ba37b4c98717d7ab" }, "length": 515, "version": 1 } }, "spec_version": "1.0.0", "version": 1 } }tuf-6.0.0/tests/repository_data/fishy_rolenames/1.a.json0000644000000000000000000000060413615410400020246 0ustar00{ "signatures": [ { "keyid": "f9b50dd62b5540788b5c5cde0842124b64fa467261bc349dd77de49568eed0ef", "sig": "a36aa69e0c35d8b5b9578bc656ce5d8a76ea05a2c814f59cc710a11f5e3fe6c7bcbef2bfba4812e3b2936f99e89f10862f6320c901e213f1343e79525474920a" } ], "signed": { "_type": "targets", "expires": "2050-10-22T11:21:56Z", "spec_version": "1.0.19", "targets": {}, "version": 1 } }tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/1...json0000644000000000000000000000060413615410400021743 0ustar00{ "signatures": [ { "keyid": "80a5bda93ec130c2fda8ce0c619d7b122b24cc2e0743afedf98a8e368d32019c", "sig": "8fff438c2347dd7c4fb94c43ec347bcd6b0e79521bd11d95121cb8cc25723efa38565a959a6123da0a2375a2093e53f13a5412df9e51397e06b313837d0d590c" } ], "signed": { "_type": "targets", "expires": "2050-10-22T11:21:56Z", "spec_version": "1.0.19", "targets": {}, "version": 1 } }tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/1.root.json0000644000000000000000000000357013615410400022576 0ustar00{ "signatures": [ { "keyid": "72b70899257dc30b596af3a9fe141a924af821aff28ed58d1aea0db9f70a70f7", "sig": "53ae844137dd04abf9d3ed10380ba46fa2726f328963ffe006aa955804afa3b0d100bc59610c1584234a9598ab4b9af762b533174b8b8d8aaf2be8e413c1b304" } ], "signed": { "_type": "root", "consistent_snapshot": true, "expires": "2050-10-22T11:21:56Z", "keys": { "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d": { "keytype": "ed25519", "keyval": { "public": "edcd0a32a07dce33f7c7873aaffbff36d20ea30787574ead335eefd337e4dacd" }, "scheme": "ed25519" }, "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093": { "keytype": "ed25519", "keyval": { "public": "89f28bd4ede5ec3786ab923fd154f39588d20881903e69c7b08fb504c6750815" }, "scheme": "ed25519" }, "72b70899257dc30b596af3a9fe141a924af821aff28ed58d1aea0db9f70a70f7": { "keytype": "ed25519", "keyval": { "public": "3ba219e69666298bce5d1d653a166346aef807c02e32a846aaefcb5190fddeb4" }, "scheme": "ed25519" }, "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758": { "keytype": "ed25519", "keyval": { "public": "82ccf6ac47298ff43bfa0cd639868894e305a99c723ff0515ae2e9856eb5bbf4" }, "scheme": "ed25519" } }, "roles": { "root": { "keyids": [ "72b70899257dc30b596af3a9fe141a924af821aff28ed58d1aea0db9f70a70f7" ], "threshold": 1 
}, "snapshot": { "keyids": [ "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d" ], "threshold": 1 }, "targets": { "keyids": [ "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093" ], "threshold": 1 }, "timestamp": { "keyids": [ "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758" ], "threshold": 1 } }, "spec_version": "1.0.19", "version": 1 } }tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/1.targets.json0000644000000000000000000000342113615410400023257 0ustar00{ "signatures": [ { "keyid": "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093", "sig": "b390c5d9d5355b963e94dfa30ce04520c462fd869fad968d01f0a3b185db5895807b14435e725ff376adc793fd21ef8f01890ac722c94e9c05ab3797c4887101" } ], "signed": { "_type": "targets", "delegations": { "keys": { "426edf0d9fa383688c5b40b7b7d15a7cd11a991f12cc20da87f1b48dd6c036a1": { "keytype": "ed25519", "keyval": { "public": "d38eef769f6dee77b6d898dce548c0ea0f90add0072dc28a20769b6421552ec3" }, "scheme": "ed25519" }, "80a5bda93ec130c2fda8ce0c619d7b122b24cc2e0743afedf98a8e368d32019c": { "keytype": "ed25519", "keyval": { "public": "bb256c0b6d5226a5a9ae8377c0bf68e958fb668d063971f48638b9bae5251f3b" }, "scheme": "ed25519" }, "f9b50dd62b5540788b5c5cde0842124b64fa467261bc349dd77de49568eed0ef": { "keytype": "ed25519", "keyval": { "public": "da1b8586dc0cdd5fe0d8d428bde62dc63e06138f58cfc39770c424a4636f59f4" }, "scheme": "ed25519" } }, "roles": [ { "keyids": [ "f9b50dd62b5540788b5c5cde0842124b64fa467261bc349dd77de49568eed0ef" ], "name": "../a", "paths": [ "*" ], "terminating": false, "threshold": 1 }, { "keyids": [ "80a5bda93ec130c2fda8ce0c619d7b122b24cc2e0743afedf98a8e368d32019c" ], "name": ".", "paths": [ "*" ], "terminating": false, "threshold": 1 }, { "keyids": [ "426edf0d9fa383688c5b40b7b7d15a7cd11a991f12cc20da87f1b48dd6c036a1" ], "name": "\u00f6", "paths": [ "*" ], "terminating": false, "threshold": 1 } ] }, "expires": "2050-10-22T11:21:56Z", "spec_version": "1.0.19", "targets": {}, "version": 1 } }././@PaxHeader0000000000000000000000000000011300000000000010210 xustar0075 path=tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/1.ö.json tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/1.?.json0000644000000000000000000000060413615410400021764 0ustar00{ "signatures": [ { "keyid": "426edf0d9fa383688c5b40b7b7d15a7cd11a991f12cc20da87f1b48dd6c036a1", "sig": "faada7f8c9a238955d5b27dbd88032a6c9068742cb114a66f97c730235a8033dd1ff0647f4bbc2b49210c33655a3d7755e754e245799683b3f4e00a59f3da006" } ], "signed": { "_type": "targets", "expires": "2050-10-22T11:21:56Z", "spec_version": "1.0.19", "targets": {}, "version": 1 } }tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/2.snapshot.json0000644000000000000000000000105213615410400023444 0ustar00{ "signatures": [ { "keyid": "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d", "sig": "5b00100e9cf1c083f8347371ab840cf60124780305124ed7a53fe31bf43473c90b1d2c802ed2f11f5057ba21e6b7a05118b1907f737d2e29c9692aa3345f9801" } ], "signed": { "_type": "snapshot", "expires": "2050-10-22T11:21:56Z", "meta": { "../a.json": { "version": 1 }, "..json": { "version": 1 }, "targets.json": { "version": 1 }, "\u00f6.json": { "version": 1 } }, "spec_version": "1.0.19", "version": 2 } }tuf-6.0.0/tests/repository_data/fishy_rolenames/metadata/timestamp.json0000644000000000000000000000066213615410400023456 0ustar00{ "signatures": [ { "keyid": "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758", "sig": 
"f7003e848366c7e55f474df2c0c68471c44c68a87c0d3c1aa56f64778c91e9c8f22c3adc4dd9ec0535b6b4dc04783f7fa4ca992bed2445c7395a58acff152f0d" } ], "signed": { "_type": "timestamp", "expires": "2050-10-22T11:21:56Z", "meta": { "snapshot.json": { "version": 2 } }, "spec_version": "1.0.19", "version": 2 } }tuf-6.0.0/tests/repository_data/keystore/delegation_key0000644000000000000000000000016713615410400020364 0ustar00-----BEGIN PRIVATE KEY----- MC4CAQAwBQYDK2VwBCIEIJ12nHk+mGJcC5/tw3PzDZq9gDr6NW/b4ezXfx5dSgsM -----END PRIVATE KEY----- tuf-6.0.0/tests/repository_data/keystore/root_key0000644000000000000000000000467013615410400017237 0ustar00-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQDQaM+hWuNL14Kr OhDxVF4+QLRwjqS2ISCo98cRIXPLHKMLj3QK7LX2e7E9wm5l83rgwLjyg6RH5baa rikznWLGZ0bnGO804FGQnnBQJzx8MKW0xRMGWq33Ll4ux//j8SgFT7MLhJbWI9T7 6YKyK3J9BDtTos6fcRgLKuQfnWFn928oLij1M9gxXE15wncIvWrTZDXjkmXKMFO4 zdd2qxqd1MehdJE+abOAH/V0v9zhCMycKjCspbTqYUur1EBgYholZ8z/QJus4mlU OQBOYRwx2kYlgN9b+xFC9F2Uc0+jbhlbu+RBQKp1HG+dNaQrlTzXLtG2Mu3XKyRt ZcEhs5XP3gZeyDLMA3IJP2pGMdcMPPaaQur70jaIq0SpoR49xZG7cfpEtqGkz9PE XPDrzhhHjvB5d0N5w83FbKJaLIwZK1EU/gKD22tzAAacSbXDxHaCCZamHOcJ3l1c aE82N8BrzbI0Vjy5uc5RUk5/SdcaDcR3D2JjfHKMlvb6euyTl/cCAwEAAQKCAYEA kQzxvb7RRd7n3h6a3iw3L6K/MzvEXdKutYtGbKDYw7vZqtkcDeJ0PuoWEQL67VBJ 7JWV44xF0ZiKwBuJJ5hZv/bvfTZ4flfFzR7I0rCMQ29kVW14cUq5m7kU6gBfFBmr Hg87cT/F76KewPnj8feVRnekhvBgWM5Qyqz+exaBTegD4HZIIWkFBk3UynLTgCy9 ZgVwEES7Pb7m9k+lr70k2EbY7oF/+W199iXII4rJw4HpTqN6nx7xzNMM5LnkWHDN uj+g9cCRCPS8BNXcbUmBNthVpaDU79NhHwoFFaYswAOeW1jKpssF9hf1cLpQyaLp jQqSEF5VMdygEOzuKijq5oJef5zyuSgqkBpvtuUFLkcz9RkJQk3lTpIO5QUy9sek iikGjucVay5f3N1iJOQi+D+qDAI7cIJTi9hIL/0Xrt0PmSbcAPTvTGP/05I/wyi6 VD4ClpQFgyZ7OiCiDuwOjv+/mWusN4+mxNyJqtr2b4YZNupRBmsmTvjXSWuqHiih AoHBAOnnLy9MbeN+WDkbteHs4XE09NR4D6yEbMpEXTzJygZj8DPmoEAn6ojfanKC NipHvJ0JX+uphzDJ3ZlAdYjZr1ny2VziQNBcfcmf3o1VVxW0KZ8WI4eRmsljFJka Av+YaLtI+nKvNQxPgD3mS5t/Y6p/kxnGOMIpjbUhKT4HP1u/DdyzIuC5Ur+KJxlJ pvauHXz0xx6bszNvMIiuddDG0AG8jwZuiZzYGBEsFmscWDgrG3Hk90ir1416m1+7 jpgIMQKBwQDkGRO7qXNSYtfsWL9UcFTnjb+OYwoKMTppMNb2u+aBZXkWjTJFdT0H aJp1lsfsFARsNWCq/4uRQute+CMbxDDlXP72jZB407FAWlQie7UWsnRy6/+WeHRM 5gSeRl9n8NSOmb/EH5bsV0sjkLt4VXD0FTeDnu2SwhqVNZ+qdWnbhKmwxxTd2dAA VoEEftohucYDdRfKrp+YbZn8Sa8Dfge9QiLgE28MrHhy/ZlRUlhiSg9Bl4CDFlpL sn0wFV56QKcCgcEAnBhETPRcgW1XwwTTJKrI6JvGp+RX0XGuiG2HK4Ie6JTZQEmw uB/rTNyMVU7AhwbIwKP493RzXAPbduKljWZ4tzZyCKKVTnfrGhsuknNZYoqRHDHS FC7/dVZB8MqDJb+4ZQQW32I9rLGBi82ct3EUOjxZFuJKDolcoHw44cREbB3cSmTh 6cbDij/QR/f3DLi1xSY1nB+cP778TLrgtSt4tS/44vnxrFIp/YvGikSoOxPJhQCg ZkcH2srv1bt9NciBAoHAU0JcE5oMwDvYOStD25yNQWBaVa0NEx9ZBOCQ9ssrnnvd sT+k4/mhZzzldJqvKxs7agwp1wEkfseAhs/ocNAyUOabIoAWBiSvhJ/0Kgoh1cEa BIDkcJZTTWaAtQ1W8efUjqDMgNhPDMHoaXkBFTGK422DMAYpDfLQJTrHpz7ofvpz vlVM5pYE+LqaqXtsP/dBsi1hm9gV5VvMY2y593pfdNPZSxWM6YFjDgZHmomGPYpu +zBD9pWILC1gyNZkABftAoHBANXJibUQ35zhQJEnvHXZ5WJqzJN6R5Zv4sQVd6+o NM5EotwNMvmjHcW+Q13M2UtaH2mFsUJuOnYCwqqNarJw5+LLEq+CgO0GaJ0wE4TU 1n2/11vAdKMUqvsj92YYq5Y+L/sue9PAYHUMvPsTG75u6fv3ZhJEfneNRqen3jco 3uxlzo/Yjv1fPO6RD9821dRwZDawaoLFidj/Gnqm9PKHux2papWnfP/dkWKLQwl2 Vu3D0GBOEF8YB2ae3BSVpM+T1Q== -----END PRIVATE KEY----- tuf-6.0.0/tests/repository_data/keystore/root_key20000644000000000000000000000146613615410400017321 
0ustar0077c02ab5647ee765d5f6c5fc202a5b32@@@@100000@@@@7c73c1100fab52dc8695c1b955d31770ed6e53f1820d9020aeb6541c948573d9@@@@98280307ffa9c5f6ff1fea1a4b79d0ea@@@@f3342882b1cf842e3377ab4205c0ca8fab564cc55fa742f55b364a1ac597e93d8c56a9a6e6bbb6a812556077be44a1066ac6781a6ed34b86beaf3985f846f007dab31c46af562e921f03c1ea8d299f15324ab137aa426ee61d396a7e20191aa71a70b670775b2ad48f25de367fb48881c55e93f468c6e59402907e82985c27c94c715161c85c5c1904353ba33c3d129988029f03a2d7d00720118697baaf73a3c4e72f8e538b4323866fe525ddccfcfc6dd45598545f65cd7ab581f5172bc253416283a66621eb03dbabaf33923bb1963f9f8cbae6fd6a1c86736a8f80c8d1ba3cbc3f53b0123ba9b0bdd44f25b65033b19a50ee978d2687d6a2ee724515a20026d0213ced59cda9bfdf37c82c59e1356795fd603d85996f448a3c9357b32de2042997a1d27353ee3866c0ed5218d633e0b28991119d77d147354c7fa2de8a168d17efdfd5fa9a8e528bd47ede4ff697tuf-6.0.0/tests/repository_data/keystore/root_key30000644000000000000000000000260613615410400017317 0ustar00a3d266f446cb23c0248feed240a8a85a@@@@100000@@@@61ea41c73d4b1d8bd7566a9884a2fdb88c1d4e48550341e532768f98c8f4bd3c@@@@46b15764c50c934fcfc041a5fa207337@@@@d84b8c473d5f42d2bbceca28b0087c2c5908673b2a92eb8f1ca91dacc27c1cfac24c98d06191f6f54633dd428e9ca0987f183e8f34322a104dc38a0f4fefcc168f21e203e3abc5842f132df2dcb61d6b31dc19d0ecb50e898655f81e9b8a9730f2bff4c5ca4b6fc0b572a7e3672b6dc814ed127c964d960a57155c29eccf44824442d3c6761662ed2d8a1c48a3222d0f0cb1a58f543ccd852c247522595d987d95d1bf49dfdffaf33f18085460dac791d81347cc576a83c6ebca2625d26ddd294e74fa67f676a02d533b52fc9702237b2c898469a30753d98b091cd6aa713aa7b0c4c741684674084b27862e64adf4b1e88fa22cfcf6eeae8608dd818a4cba020058fa7271028ea9d9a7302c9e50e82972a82ac2080201c0fb9f2fb1cadfe97d62470414428227add1c40594f5135a8169d0d7d0889cb4a1949b015e65f5dc656204c58c463acc5b7872f4a078d0bc5a09a7795187e360e7b225892601aa9065086b24397f653d20e59a656ec86ef94e64d5baf16080f12a7f2461b92f99dfb5bf2e4dadec91cc72d8eede952449fd586c863734d84f31e036ecc96c55ab7baa9b049c20b8281a7c28f5ca42d9cfad6498f51ee907bfd9dc17e2a1bc9b69145ee82a86a90817394c01770581727889d3ba1791592c7ac2e74753485f1811cc4477078732873185240fc1572927d2fef210066bdf015471bd9d1683e8074b3fb6957246589dc62dea4843a17a7c734ae45ae20d31f0083a32d3310fae459fe3fbf7c763e5e4ead4acd9b0233e45237f4465576e85ff707fe316488f329d5bc73596b104cc28b926d6b1f5a3d26a0a6ec534a3cbc54cab97f5cea51f17b8d7f1cc6c9977275c34ee4942dd3e22a19ae1e4252199226cc4fd60tuf-6.0.0/tests/repository_data/keystore/snapshot_key0000644000000000000000000000016713615410400020110 0ustar00-----BEGIN PRIVATE KEY----- MC4CAQAwBQYDK2VwBCIEIIOSksDAfmq3o/kDq7QpZ3/Kg1bium+Svw5pvR2ZBhs6 -----END PRIVATE KEY----- tuf-6.0.0/tests/repository_data/keystore/targets_key0000644000000000000000000000016713615410400017722 0ustar00-----BEGIN PRIVATE KEY----- MC4CAQAwBQYDK2VwBCIEIMKnhTXOqdJvhJ2bJd5dn80MvCykZTplwJ0SUpKiHfI5 -----END PRIVATE KEY----- tuf-6.0.0/tests/repository_data/keystore/timestamp_key0000644000000000000000000000016713615410400020254 0ustar00-----BEGIN PRIVATE KEY----- MC4CAQAwBQYDK2VwBCIEIB5Zzk1MbB0e30cDCjV7H3c712RsaRJgLn5GgUvbSRzH -----END PRIVATE KEY----- tuf-6.0.0/tests/repository_data/repository/metadata/1.root.json0000644000000000000000000000650413615410400021626 0ustar00{ "signatures": [ { "keyid": "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb", "sig": 
"a337d6375fedd2eabfcd6c2ef6c8a9c3bb85dc5a857715f6a6bd41123e7670c4972d8548bcd7248154f3d864bf25f1823af59d74c459f41ea09a02db057ca1245612ebbdb97e782c501dc3e094f7fa8aa1402b03c6ed0635f565e2a26f9f543a89237e15a2faf0c267e2b34c3c38f2a43a28ddcdaf8308a12ead8c6dc47d1b762de313e9ddda8cc5bc25aea1b69d0e5b9199ca02f5dda48c3bff615fd12a7136d00634b9abc6e75c3256106c4d6f12e6c43f6195071355b2857bbe377ce028619b58837696b805040ce144b393d50a472531f430fadfb68d3081b6a8b5e49337e328c9a0a3f11e80b0bc8eb2dc6e78d1451dd857e6e6e6363c3fd14c590aa95e083c9bfc77724d78af86eb7a7ef635eeddaa353030c79f66b3ba9ea11fab456cfe896a826fdfb50a43cd444f762821aada9bcd7b022c0ee85b8768f960343d5a1d3d76374cc0ac9e12a500de0bf5d48569e5398cadadadab045931c398e3bcb6cec88af2437ba91959f956079cbed159fed3938016e6c3b5e446131f81cc5981" } ], "signed": { "_type": "root", "consistent_snapshot": false, "expires": "2030-01-01T00:00:00Z", "keys": { "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "rsa", "keyval": { "public": "-----BEGIN PUBLIC KEY-----\nMIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEA0GjPoVrjS9eCqzoQ8VRe\nPkC0cI6ktiEgqPfHESFzyxyjC490Cuy19nuxPcJuZfN64MC48oOkR+W2mq4pM51i\nxmdG5xjvNOBRkJ5wUCc8fDCltMUTBlqt9y5eLsf/4/EoBU+zC4SW1iPU++mCsity\nfQQ7U6LOn3EYCyrkH51hZ/dvKC4o9TPYMVxNecJ3CL1q02Q145JlyjBTuM3Xdqsa\nndTHoXSRPmmzgB/1dL/c4QjMnCowrKW06mFLq9RAYGIaJWfM/0CbrOJpVDkATmEc\nMdpGJYDfW/sRQvRdlHNPo24ZW7vkQUCqdRxvnTWkK5U81y7RtjLt1yskbWXBIbOV\nz94GXsgyzANyCT9qRjHXDDz2mkLq+9I2iKtEqaEePcWRu3H6RLahpM/TxFzw684Y\nR47weXdDecPNxWyiWiyMGStRFP4Cg9trcwAGnEm1w8R2ggmWphznCd5dXGhPNjfA\na82yNFY8ubnOUVJOf0nXGg3Edw9iY3xyjJb2+nrsk5f3AgMBAAE=\n-----END PUBLIC KEY-----" }, "scheme": "rsassa-pss-sha256" }, "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "edcd0a32a07dce33f7c7873aaffbff36d20ea30787574ead335eefd337e4dacd" }, "scheme": "ed25519" }, "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "89f28bd4ede5ec3786ab923fd154f39588d20881903e69c7b08fb504c6750815" }, "scheme": "ed25519" }, "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "82ccf6ac47298ff43bfa0cd639868894e305a99c723ff0515ae2e9856eb5bbf4" }, "scheme": "ed25519" } }, "roles": { "root": { "keyids": [ "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb" ], "threshold": 1 }, "snapshot": { "keyids": [ "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d" ], "threshold": 1 }, "targets": { "keyids": [ "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093" ], "threshold": 1 }, "timestamp": { "keyids": [ "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758" ], "threshold": 1 } }, "spec_version": "1.0.0", "version": 1 } }tuf-6.0.0/tests/repository_data/repository/metadata/role1.json0000644000000000000000000000234413615410400021524 0ustar00{ "signatures": [ { "keyid": "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a", "sig": "9408b46569e622a46f1d35d9fa3c10e17a9285631ced4f2c9c2bba2c2842413fcb796db4e81d6f988fc056c21c407fdc3c10441592cf1e837e088f2e2dfd5403" } ], "signed": { "_type": "targets", "delegations": { "keys": { "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", 
"keyval": { "public": "fcf224e55fa226056adf113ef1eb3d55e308b75b321c8c8316999d8c4fd9e0d9" }, "scheme": "ed25519" } }, "roles": [ { "keyids": [ "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a" ], "name": "role2", "paths": [], "terminating": false, "threshold": 1 } ] }, "expires": "2030-01-01T00:00:00Z", "spec_version": "1.0.0", "targets": { "file3.txt": { "hashes": { "sha256": "141f740f53781d1ca54b8a50af22cbf74e44c21a998fa2a8a05aaac2c002886b", "sha512": "ef5beafa16041bcdd2937140afebd485296cd54f7348ecd5a4d035c09759608de467a7ac0eb58753d0242df873c305e8bffad2454aa48f44480f15efae1cacd0" }, "length": 28 } }, "version": 1 } }tuf-6.0.0/tests/repository_data/repository/metadata/role2.json0000644000000000000000000000060313615410400021521 0ustar00{ "signatures": [ { "keyid": "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a", "sig": "75b196a224fd200e46e738b1216b3316c5384f61083872f8d14b8b0a378b2344e64b1a6f1a89a711206a66a0b199d65ac0e30fe15ddbc4de89fa8ff645f99403" } ], "signed": { "_type": "targets", "expires": "2030-01-01T00:00:00Z", "spec_version": "1.0.0", "targets": {}, "version": 1 } }tuf-6.0.0/tests/repository_data/repository/metadata/root.json0000644000000000000000000000650413615410400021467 0ustar00{ "signatures": [ { "keyid": "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb", "sig": "a337d6375fedd2eabfcd6c2ef6c8a9c3bb85dc5a857715f6a6bd41123e7670c4972d8548bcd7248154f3d864bf25f1823af59d74c459f41ea09a02db057ca1245612ebbdb97e782c501dc3e094f7fa8aa1402b03c6ed0635f565e2a26f9f543a89237e15a2faf0c267e2b34c3c38f2a43a28ddcdaf8308a12ead8c6dc47d1b762de313e9ddda8cc5bc25aea1b69d0e5b9199ca02f5dda48c3bff615fd12a7136d00634b9abc6e75c3256106c4d6f12e6c43f6195071355b2857bbe377ce028619b58837696b805040ce144b393d50a472531f430fadfb68d3081b6a8b5e49337e328c9a0a3f11e80b0bc8eb2dc6e78d1451dd857e6e6e6363c3fd14c590aa95e083c9bfc77724d78af86eb7a7ef635eeddaa353030c79f66b3ba9ea11fab456cfe896a826fdfb50a43cd444f762821aada9bcd7b022c0ee85b8768f960343d5a1d3d76374cc0ac9e12a500de0bf5d48569e5398cadadadab045931c398e3bcb6cec88af2437ba91959f956079cbed159fed3938016e6c3b5e446131f81cc5981" } ], "signed": { "_type": "root", "consistent_snapshot": false, "expires": "2030-01-01T00:00:00Z", "keys": { "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "rsa", "keyval": { "public": "-----BEGIN PUBLIC KEY-----\nMIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEA0GjPoVrjS9eCqzoQ8VRe\nPkC0cI6ktiEgqPfHESFzyxyjC490Cuy19nuxPcJuZfN64MC48oOkR+W2mq4pM51i\nxmdG5xjvNOBRkJ5wUCc8fDCltMUTBlqt9y5eLsf/4/EoBU+zC4SW1iPU++mCsity\nfQQ7U6LOn3EYCyrkH51hZ/dvKC4o9TPYMVxNecJ3CL1q02Q145JlyjBTuM3Xdqsa\nndTHoXSRPmmzgB/1dL/c4QjMnCowrKW06mFLq9RAYGIaJWfM/0CbrOJpVDkATmEc\nMdpGJYDfW/sRQvRdlHNPo24ZW7vkQUCqdRxvnTWkK5U81y7RtjLt1yskbWXBIbOV\nz94GXsgyzANyCT9qRjHXDDz2mkLq+9I2iKtEqaEePcWRu3H6RLahpM/TxFzw684Y\nR47weXdDecPNxWyiWiyMGStRFP4Cg9trcwAGnEm1w8R2ggmWphznCd5dXGhPNjfA\na82yNFY8ubnOUVJOf0nXGg3Edw9iY3xyjJb2+nrsk5f3AgMBAAE=\n-----END PUBLIC KEY-----" }, "scheme": "rsassa-pss-sha256" }, "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "edcd0a32a07dce33f7c7873aaffbff36d20ea30787574ead335eefd337e4dacd" }, "scheme": "ed25519" }, "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "89f28bd4ede5ec3786ab923fd154f39588d20881903e69c7b08fb504c6750815" }, 
"scheme": "ed25519" }, "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "82ccf6ac47298ff43bfa0cd639868894e305a99c723ff0515ae2e9856eb5bbf4" }, "scheme": "ed25519" } }, "roles": { "root": { "keyids": [ "4e777de0d275f9d28588dd9a1606cc748e548f9e22b6795b7cb3f63f98035fcb" ], "threshold": 1 }, "snapshot": { "keyids": [ "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d" ], "threshold": 1 }, "targets": { "keyids": [ "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093" ], "threshold": 1 }, "timestamp": { "keyids": [ "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758" ], "threshold": 1 } }, "spec_version": "1.0.0", "version": 1 } }tuf-6.0.0/tests/repository_data/repository/metadata/snapshot.json0000644000000000000000000000100313615410400022330 0ustar00{ "signatures": [ { "keyid": "59a4df8af818e9ed7abe0764c0b47b4240952aa0d179b5b78346c470ac30278d", "sig": "085672c70dffe26610e58542ee552843633cfed973abdad94c56138dbf0cd991644f2d3f27e4dda3098e08ab676e7f52627b587947ae69db1012d59a6da18e0c" } ], "signed": { "_type": "snapshot", "expires": "2030-01-01T00:00:00Z", "meta": { "role1.json": { "version": 1 }, "role2.json": { "version": 1 }, "targets.json": { "version": 1 } }, "spec_version": "1.0.0", "version": 1 } }tuf-6.0.0/tests/repository_data/repository/metadata/targets.json0000644000000000000000000000313013615410400022145 0ustar00{ "signatures": [ { "keyid": "65171251a9aff5a8b3143a813481cb07f6e0de4eb197c767837fe4491b739093", "sig": "d65f8db0c1a8f0976552b9742bbb393f24a5fa5eaf145c37aee047236c79dd0b83cfbb8b49fa7803689dfe0031dcf22c4d006b593acac07d69093b9b81722c08" } ], "signed": { "_type": "targets", "delegations": { "keys": { "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a": { "keyid_hash_algorithms": [ "sha256", "sha512" ], "keytype": "ed25519", "keyval": { "public": "fcf224e55fa226056adf113ef1eb3d55e308b75b321c8c8316999d8c4fd9e0d9" }, "scheme": "ed25519" } }, "roles": [ { "keyids": [ "c8022fa1e9b9cb239a6b362bbdffa9649e61ad2cb699d2e4bc4fdf7930a0e64a" ], "name": "role1", "paths": [ "file3.txt" ], "terminating": false, "threshold": 1 } ] }, "expires": "2030-01-01T00:00:00Z", "spec_version": "1.0.0", "targets": { "file1.txt": { "custom": { "file_permissions": "0644" }, "hashes": { "sha256": "65b8c67f51c993d898250f40aa57a317d854900b3a04895464313e48785440da", "sha512": "467430a68afae8e9f9c0771ea5d78bf0b3a0d79a2d3d3b40c69fde4dd42c461448aef76fcef4f5284931a1ffd0ac096d138ba3a0d6ca83fa8d7285a47a296f77" }, "length": 31 }, "file2.txt": { "hashes": { "sha256": "452ce8308500d83ef44248d8e6062359211992fd837ea9e370e561efb1a4ca99", "sha512": "052b49a21e03606b28942db69aa597530fe52d47ee3d748ba65afcd14b857738e36bc1714c4f4adde46c3e683548552fe5c96722e0e0da3acd9050c2524902d8" }, "length": 39 } }, "version": 1 } }tuf-6.0.0/tests/repository_data/repository/metadata/timestamp.json0000644000000000000000000000105513615410400022503 0ustar00{ "signatures": [ { "keyid": "8a1c4a3ac2d515dec982ba9910c5fd79b91ae57f625b9cff25d06bf0a61c1758", "sig": "de0e16920f87bf5500cc65736488ac17e09788cce808f6a4e85eb9e4e478a312b4c1a2d7723af56f7bfb1df533c67d8c93b6f49d39eabe7fae391a08e1f72f01" } ], "signed": { "_type": "timestamp", "expires": "2030-01-01T00:00:00Z", "meta": { "snapshot.json": { "hashes": { "sha256": "8f88e2ba48b412c3843e9bb26e1b6f8fc9e98aceb0fbaa97ba37b4c98717d7ab" }, "length": 515, "version": 1 } }, "spec_version": "1.0.0", "version": 1 } 
}tuf-6.0.0/tests/repository_data/repository/targets/file1.txt0000644000000000000000000000003713615410400021236 0ustar00This is an example target file.tuf-6.0.0/tests/repository_data/repository/targets/file2.txt0000644000000000000000000000004713615410400021240 0ustar00This is an another example target file.tuf-6.0.0/tests/repository_data/repository/targets/file3.txt0000644000000000000000000000003413615410400021235 0ustar00This is role1's target file.tuf-6.0.0/tuf/__init__.py0000644000000000000000000000027213615410400012143 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """TUF.""" # This value is used in the ngclient user agent. __version__ = "6.0.0" tuf-6.0.0/tuf/py.typed0000644000000000000000000000000013615410400011516 0ustar00tuf-6.0.0/tuf/api/__init__.py0000644000000000000000000000000013615410400012701 0ustar00tuf-6.0.0/tuf/api/_payload.py0000644000000000000000000017544113615410400012760 0ustar00# Copyright the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Helper classes for low-level Metadata API.""" from __future__ import annotations import abc import fnmatch import io import logging from dataclasses import dataclass from datetime import datetime, timezone from typing import ( IO, TYPE_CHECKING, Any, ClassVar, TypeVar, ) from securesystemslib import exceptions as sslib_exceptions from securesystemslib import hash as sslib_hash from securesystemslib.signer import Key, Signature from tuf.api.exceptions import LengthOrHashMismatchError, UnsignedMetadataError if TYPE_CHECKING: from collections.abc import Iterator _ROOT = "root" _SNAPSHOT = "snapshot" _TARGETS = "targets" _TIMESTAMP = "timestamp" # We aim to support SPECIFICATION_VERSION and require the input metadata # files to have the same major version (the first number) as ours. SPECIFICATION_VERSION = ["1", "0", "31"] TOP_LEVEL_ROLE_NAMES = {_ROOT, _TIMESTAMP, _SNAPSHOT, _TARGETS} logger = logging.getLogger(__name__) # T is a Generic type constraint for container payloads T = TypeVar("T", "Root", "Timestamp", "Snapshot", "Targets") class Signed(metaclass=abc.ABCMeta): """A base class for the signed part of TUF metadata. Objects with base class Signed are usually included in a ``Metadata`` object on the signed attribute. This class provides attributes and methods that are common for all TUF metadata types (roles). *All parameters named below are not just constructor arguments but also instance attributes.* Args: version: Metadata version number. If None, then 1 is assigned. spec_version: Supported TUF specification version. If None, then the version currently supported by the library is assigned. expires: Metadata expiry date in UTC timezone. If None, then current date and time is assigned. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Raises: ValueError: Invalid arguments. """ # type is required for static reference without changing the API type: ClassVar[str] = "signed" # _type and type are identical: 1st replicates file format, 2nd passes lint @property def _type(self) -> str: return self.type @property def expires(self) -> datetime: """Get the metadata expiry date.""" return self._expires @expires.setter def expires(self, value: datetime) -> None: """Set the metadata expiry date. # Use 'datetime' module to e.g. 
expire in seven days from now obj.expires = now(timezone.utc) + timedelta(days=7) """ self._expires = value.replace(microsecond=0) if self._expires.tzinfo is None: # Naive datetime: just make it UTC self._expires = self._expires.replace(tzinfo=timezone.utc) elif self._expires.tzinfo != timezone.utc: raise ValueError(f"Expected tz UTC, not {self._expires.tzinfo}") # NOTE: Signed is a stupid name, because this might not be signed yet, but # we keep it to match spec terminology (I often refer to this as "payload", # or "inner metadata") def __init__( self, version: int | None, spec_version: str | None, expires: datetime | None, unrecognized_fields: dict[str, Any] | None, ): if spec_version is None: spec_version = ".".join(SPECIFICATION_VERSION) # Accept semver (X.Y.Z) but also X.Y for legacy compatibility spec_list = spec_version.split(".") if len(spec_list) not in [2, 3] or not all( el.isdigit() for el in spec_list ): raise ValueError(f"Failed to parse spec_version {spec_version}") # major version must match if spec_list[0] != SPECIFICATION_VERSION[0]: raise ValueError(f"Unsupported spec_version {spec_version}") self.spec_version = spec_version self.expires = expires or datetime.now(timezone.utc) if version is None: version = 1 elif version <= 0: raise ValueError(f"version must be > 0, got {version}") self.version = version if unrecognized_fields is None: unrecognized_fields = {} self.unrecognized_fields = unrecognized_fields def __eq__(self, other: object) -> bool: if not isinstance(other, Signed): return False return ( self.type == other.type and self.version == other.version and self.spec_version == other.spec_version and self.expires == other.expires and self.unrecognized_fields == other.unrecognized_fields ) @abc.abstractmethod def to_dict(self) -> dict[str, Any]: """Serialize and return a dict representation of self.""" raise NotImplementedError @classmethod @abc.abstractmethod def from_dict(cls, signed_dict: dict[str, Any]) -> Signed: """Deserialization helper, creates object from json/dict representation. """ raise NotImplementedError @classmethod def _common_fields_from_dict( cls, signed_dict: dict[str, Any] ) -> tuple[int, str, datetime]: """Return common fields of ``Signed`` instances from the passed dict representation, and returns an ordered list to be passed as leading positional arguments to a subclass constructor. See ``{Root, Timestamp, Snapshot, Targets}.from_dict`` methods for usage. """ _type = signed_dict.pop("_type") if _type != cls.type: raise ValueError(f"Expected type {cls.type}, got {_type}") version = signed_dict.pop("version") spec_version = signed_dict.pop("spec_version") expires_str = signed_dict.pop("expires") # Convert 'expires' TUF metadata string to a datetime object, which is # what the constructor expects and what we store. The inverse operation # is implemented in '_common_fields_to_dict'. expires = datetime.strptime(expires_str, "%Y-%m-%dT%H:%M:%SZ").replace( tzinfo=timezone.utc ) return version, spec_version, expires def _common_fields_to_dict(self) -> dict[str, Any]: """Return a dict representation of common fields of ``Signed`` instances. See ``{Root, Timestamp, Snapshot, Targets}.to_dict`` methods for usage. """ return { "_type": self._type, "version": self.version, "spec_version": self.spec_version, "expires": self.expires.strftime("%Y-%m-%dT%H:%M:%SZ"), **self.unrecognized_fields, } def is_expired(self, reference_time: datetime | None = None) -> bool: """Check metadata expiration against a reference time. 
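        A rough usage sketch (``md`` here is a hypothetical, already-loaded
        ``Metadata`` object; it is not defined in this module):

            # check the trusted copy before relying on it (illustrative only)
            if md.signed.is_expired():
                raise RuntimeError("trusted metadata has expired")
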
Args: reference_time: Time to check expiration date against. A naive datetime in UTC expected. Default is current UTC date and time. Returns: ``True`` if expiration time is less than the reference time. """ if reference_time is None: reference_time = datetime.now(timezone.utc) return reference_time >= self.expires class Role: """Container that defines which keys are required to sign roles metadata. Role defines how many keys are required to successfully sign the roles metadata, and which keys are accepted. *All parameters named below are not just constructor arguments but also instance attributes.* Args: keyids: Roles signing key identifiers. threshold: Number of keys required to sign this role's metadata. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Raises: ValueError: Invalid arguments. """ def __init__( self, keyids: list[str], threshold: int, unrecognized_fields: dict[str, Any] | None = None, ): if len(set(keyids)) != len(keyids): raise ValueError(f"Nonunique keyids: {keyids}") if threshold < 1: raise ValueError("threshold should be at least 1!") self.keyids = keyids self.threshold = threshold if unrecognized_fields is None: unrecognized_fields = {} self.unrecognized_fields = unrecognized_fields def __eq__(self, other: object) -> bool: if not isinstance(other, Role): return False return ( self.keyids == other.keyids and self.threshold == other.threshold and self.unrecognized_fields == other.unrecognized_fields ) @classmethod def from_dict(cls, role_dict: dict[str, Any]) -> Role: """Create ``Role`` object from its json/dict representation. Raises: ValueError, KeyError: Invalid arguments. """ keyids = role_dict.pop("keyids") threshold = role_dict.pop("threshold") # All fields left in the role_dict are unrecognized. return cls(keyids, threshold, role_dict) def to_dict(self) -> dict[str, Any]: """Return the dictionary representation of self.""" return { "keyids": self.keyids, "threshold": self.threshold, **self.unrecognized_fields, } @dataclass class VerificationResult: """Signature verification result for delegated role metadata. Attributes: threshold: Number of required signatures. signed: dict of keyid to Key, containing keys that have signed. unsigned: dict of keyid to Key, containing keys that have not signed. """ threshold: int signed: dict[str, Key] unsigned: dict[str, Key] def __bool__(self) -> bool: return self.verified @property def verified(self) -> bool: """True if threshold of signatures is met.""" return len(self.signed) >= self.threshold @property def missing(self) -> int: """Number of additional signatures required to reach threshold.""" return max(0, self.threshold - len(self.signed)) @dataclass class RootVerificationResult: """Signature verification result for root metadata. Root must be verified by itself and the previous root version. This dataclass represents both results. For the edge case of first version of root, these underlying results are identical. Note that `signed` and `unsigned` correctness requires the underlying VerificationResult keys to not conflict (no reusing the same keyid for different keys). Attributes: first: First underlying VerificationResult second: Second underlying VerificationResult """ first: VerificationResult second: VerificationResult def __bool__(self) -> bool: return self.verified @property def verified(self) -> bool: """True if threshold of signatures is met in both underlying VerificationResults. 
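        A minimal sketch of consuming this result (``new_root`` and
        ``old_root`` are hypothetical ``Root`` objects, ``payload`` and
        ``sigs`` the matching signed bytes and signature dict):

            result = new_root.get_root_verification_result(old_root, payload, sigs)
            if not result.verified:
                # result.unsigned maps keyids to keys that did not sign
                raise RuntimeError(f"unsigned keyids: {list(result.unsigned)}")
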
""" return self.first.verified and self.second.verified @property def signed(self) -> dict[str, Key]: """Dictionary of all signing keys that have signed, from both VerificationResults. return a union of all signed (in python<3.9 this requires dict unpacking) """ return {**self.first.signed, **self.second.signed} @property def unsigned(self) -> dict[str, Key]: """Dictionary of all signing keys that have not signed, from both VerificationResults. return a union of all unsigned (in python<3.9 this requires dict unpacking) """ return {**self.first.unsigned, **self.second.unsigned} class _DelegatorMixin(metaclass=abc.ABCMeta): """Class that implements verify_delegate() for Root and Targets""" @abc.abstractmethod def get_delegated_role(self, delegated_role: str) -> Role: """Return the role object for the given delegated role. Raises ValueError if delegated_role is not actually delegated. """ raise NotImplementedError @abc.abstractmethod def get_key(self, keyid: str) -> Key: """Return the key object for the given keyid. Raises ValueError if key is not found. """ raise NotImplementedError def get_verification_result( self, delegated_role: str, payload: bytes, signatures: dict[str, Signature], ) -> VerificationResult: """Return signature threshold verification result for delegated role. NOTE: Unlike `verify_delegate()` this method does not raise, if the role metadata is not fully verified. Args: delegated_role: Name of the delegated role to verify payload: Signed payload bytes for the delegated role signatures: Signatures over payload bytes Raises: ValueError: no delegation was found for ``delegated_role``. """ role = self.get_delegated_role(delegated_role) signed = {} unsigned = {} for keyid in role.keyids: try: key = self.get_key(keyid) except ValueError: logger.info("No key for keyid %s", keyid) continue if keyid not in signatures: unsigned[keyid] = key logger.info("No signature for keyid %s", keyid) continue sig = signatures[keyid] try: key.verify_signature(sig, payload) signed[keyid] = key except sslib_exceptions.UnverifiedSignatureError: unsigned[keyid] = key logger.info("Key %s failed to verify %s", keyid, delegated_role) return VerificationResult(role.threshold, signed, unsigned) def verify_delegate( self, delegated_role: str, payload: bytes, signatures: dict[str, Signature], ) -> None: """Verify signature threshold for delegated role. Verify that there are enough valid ``signatures`` over ``payload``, to meet the threshold of keys for ``delegated_role``, as defined by the delegator (``self``). Args: delegated_role: Name of the delegated role to verify payload: Signed payload bytes for the delegated role signatures: Signatures over payload bytes Raises: UnsignedMetadataError: ``delegated_role`` was not signed with required threshold of keys for ``role_name``. ValueError: no delegation was found for ``delegated_role``. """ result = self.get_verification_result( delegated_role, payload, signatures ) if not result: raise UnsignedMetadataError( f"{delegated_role} was signed by {len(result.signed)}/" f"{result.threshold} keys" ) class Root(Signed, _DelegatorMixin): """A container for the signed part of root metadata. Parameters listed below are also instance attributes. Args: version: Metadata version number. Default is 1. spec_version: Supported TUF specification version. Default is the version currently supported by the library. expires: Metadata expiry date. Default is current date and time. keys: Dictionary of keyids to Keys. Defines the keys used in ``roles``. Default is empty dictionary. 
roles: Dictionary of role names to Roles. Defines which keys are required to sign the metadata for a specific role. Default is a dictionary of top level roles without keys and threshold of 1. consistent_snapshot: ``True`` if repository supports consistent snapshots. Default is True. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Raises: ValueError: Invalid arguments. """ type = _ROOT def __init__( self, version: int | None = None, spec_version: str | None = None, expires: datetime | None = None, keys: dict[str, Key] | None = None, roles: dict[str, Role] | None = None, consistent_snapshot: bool | None = True, unrecognized_fields: dict[str, Any] | None = None, ): super().__init__(version, spec_version, expires, unrecognized_fields) self.consistent_snapshot = consistent_snapshot self.keys = keys if keys is not None else {} if roles is None: roles = {r: Role([], 1) for r in TOP_LEVEL_ROLE_NAMES} elif set(roles) != TOP_LEVEL_ROLE_NAMES: raise ValueError("Role names must be the top-level metadata roles") self.roles = roles def __eq__(self, other: object) -> bool: if not isinstance(other, Root): return False return ( super().__eq__(other) and self.keys == other.keys and self.roles == other.roles and self.consistent_snapshot == other.consistent_snapshot ) @classmethod def from_dict(cls, signed_dict: dict[str, Any]) -> Root: """Create ``Root`` object from its json/dict representation. Raises: ValueError, KeyError, TypeError: Invalid arguments. """ common_args = cls._common_fields_from_dict(signed_dict) consistent_snapshot = signed_dict.pop("consistent_snapshot", None) keys = signed_dict.pop("keys") roles = signed_dict.pop("roles") for keyid, key_dict in keys.items(): keys[keyid] = Key.from_dict(keyid, key_dict) for role_name, role_dict in roles.items(): roles[role_name] = Role.from_dict(role_dict) # All fields left in the signed_dict are unrecognized. return cls(*common_args, keys, roles, consistent_snapshot, signed_dict) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" root_dict = self._common_fields_to_dict() keys = {keyid: key.to_dict() for (keyid, key) in self.keys.items()} roles = {} for role_name, role in self.roles.items(): roles[role_name] = role.to_dict() if self.consistent_snapshot is not None: root_dict["consistent_snapshot"] = self.consistent_snapshot root_dict.update( { "keys": keys, "roles": roles, } ) return root_dict def add_key(self, key: Key, role: str) -> None: """Add new signing key for delegated role ``role``. Args: key: Signing key to be added for ``role``. role: Name of the role, for which ``key`` is added. Raises: ValueError: If the argument order is wrong or if ``role`` doesn't exist. """ # Verify that our users are not using the old argument order. if isinstance(role, Key): raise ValueError("Role must be a string, not a Key instance") if role not in self.roles: raise ValueError(f"Role {role} doesn't exist") if key.keyid not in self.roles[role].keyids: self.roles[role].keyids.append(key.keyid) self.keys[key.keyid] = key def revoke_key(self, keyid: str, role: str) -> None: """Revoke key from ``role`` and updates the key store. Args: keyid: Identifier of the key to be removed for ``role``. role: Name of the role, for which a signing key is removed. Raises: ValueError: If ``role`` doesn't exist or if ``role`` doesn't include the key. 
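        A rough key-rotation sketch (``root`` is a hypothetical ``Root``
        instance, ``new_key`` a ``Key`` and ``old_keyid`` the identifier
        being rotated out; a real rotation would also bump the version and
        collect fresh signatures):

            root.add_key(new_key, "timestamp")
            root.revoke_key(old_keyid, "timestamp")
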
""" if role not in self.roles: raise ValueError(f"Role {role} doesn't exist") if keyid not in self.roles[role].keyids: raise ValueError(f"Key with id {keyid} is not used by {role}") self.roles[role].keyids.remove(keyid) for keyinfo in self.roles.values(): if keyid in keyinfo.keyids: return del self.keys[keyid] def get_delegated_role(self, delegated_role: str) -> Role: """Return the role object for the given delegated role. Raises ValueError if delegated_role is not actually delegated. """ if delegated_role not in self.roles: raise ValueError(f"Delegated role {delegated_role} not found") return self.roles[delegated_role] def get_key(self, keyid: str) -> Key: if keyid not in self.keys: raise ValueError(f"Key {keyid} not found") return self.keys[keyid] def get_root_verification_result( self, previous: Root | None, payload: bytes, signatures: dict[str, Signature], ) -> RootVerificationResult: """Return signature threshold verification result for two root roles. Verify root metadata with two roles (`self` and optionally `previous`). If the repository has no root role versions yet, `previous` can be left None. In all other cases, `previous` must be the previous version of the Root. NOTE: Unlike `verify_delegate()` this method does not raise, if the root metadata is not fully verified. Args: previous: The previous `Root` to verify payload with, or None payload: Signed payload bytes for root signatures: Signatures over payload bytes Raises: ValueError: no delegation was found for ``root`` or given Root versions are not sequential. """ if previous is None: previous = self elif self.version != previous.version + 1: versions = f"v{previous.version} and v{self.version}" raise ValueError( f"Expected sequential root versions, got {versions}." ) return RootVerificationResult( previous.get_verification_result(Root.type, payload, signatures), self.get_verification_result(Root.type, payload, signatures), ) class BaseFile: """A base class of ``MetaFile`` and ``TargetFile``. Encapsulates common static methods for length and hash verification. 
""" @staticmethod def _verify_hashes( data: bytes | IO[bytes], expected_hashes: dict[str, str] ) -> None: """Verify that the hash of ``data`` matches ``expected_hashes``.""" is_bytes = isinstance(data, bytes) for algo, exp_hash in expected_hashes.items(): try: if is_bytes: digest_object = sslib_hash.digest(algo) digest_object.update(data) else: # if data is not bytes, assume it is a file object digest_object = sslib_hash.digest_fileobject(data, algo) except ( sslib_exceptions.UnsupportedAlgorithmError, sslib_exceptions.FormatError, ) as e: raise LengthOrHashMismatchError( f"Unsupported algorithm '{algo}'" ) from e observed_hash = digest_object.hexdigest() if observed_hash != exp_hash: raise LengthOrHashMismatchError( f"Observed hash {observed_hash} does not match " f"expected hash {exp_hash}" ) @staticmethod def _verify_length(data: bytes | IO[bytes], expected_length: int) -> None: """Verify that the length of ``data`` matches ``expected_length``.""" if isinstance(data, bytes): observed_length = len(data) else: # if data is not bytes, assume it is a file object data.seek(0, io.SEEK_END) observed_length = data.tell() if observed_length != expected_length: raise LengthOrHashMismatchError( f"Observed length {observed_length} does not match " f"expected length {expected_length}" ) @staticmethod def _validate_hashes(hashes: dict[str, str]) -> None: if not hashes: raise ValueError("Hashes must be a non empty dictionary") for key, value in hashes.items(): if not (isinstance(key, str) and isinstance(value, str)): raise TypeError("Hashes items must be strings") @staticmethod def _validate_length(length: int) -> None: if length < 0: raise ValueError(f"Length must be >= 0, got {length}") @staticmethod def _get_length_and_hashes( data: bytes | IO[bytes], hash_algorithms: list[str] | None ) -> tuple[int, dict[str, str]]: """Calculate length and hashes of ``data``.""" if isinstance(data, bytes): length = len(data) else: data.seek(0, io.SEEK_END) length = data.tell() hashes = {} if hash_algorithms is None: hash_algorithms = [sslib_hash.DEFAULT_HASH_ALGORITHM] for algorithm in hash_algorithms: try: if isinstance(data, bytes): digest_object = sslib_hash.digest(algorithm) digest_object.update(data) else: digest_object = sslib_hash.digest_fileobject( data, algorithm ) except ( sslib_exceptions.UnsupportedAlgorithmError, sslib_exceptions.FormatError, ) as e: raise ValueError(f"Unsupported algorithm '{algorithm}'") from e hashes[algorithm] = digest_object.hexdigest() return (length, hashes) class MetaFile(BaseFile): """A container with information about a particular metadata file. *All parameters named below are not just constructor arguments but also instance attributes.* Args: version: Version of the metadata file. length: Length of the metadata file in bytes. hashes: Dictionary of hash algorithm names to hashes of the metadata file content. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Raises: ValueError, TypeError: Invalid arguments. 
""" def __init__( self, version: int = 1, length: int | None = None, hashes: dict[str, str] | None = None, unrecognized_fields: dict[str, Any] | None = None, ): if version <= 0: raise ValueError(f"Metafile version must be > 0, got {version}") if length is not None: self._validate_length(length) if hashes is not None: self._validate_hashes(hashes) self.version = version self.length = length self.hashes = hashes if unrecognized_fields is None: unrecognized_fields = {} self.unrecognized_fields = unrecognized_fields def __eq__(self, other: object) -> bool: if not isinstance(other, MetaFile): return False return ( self.version == other.version and self.length == other.length and self.hashes == other.hashes and self.unrecognized_fields == other.unrecognized_fields ) @classmethod def from_dict(cls, meta_dict: dict[str, Any]) -> MetaFile: """Create ``MetaFile`` object from its json/dict representation. Raises: ValueError, KeyError: Invalid arguments. """ version = meta_dict.pop("version") length = meta_dict.pop("length", None) hashes = meta_dict.pop("hashes", None) # All fields left in the meta_dict are unrecognized. return cls(version, length, hashes, meta_dict) @classmethod def from_data( cls, version: int, data: bytes | IO[bytes], hash_algorithms: list[str], ) -> MetaFile: """Creates MetaFile object from bytes. This constructor should only be used if hashes are wanted. By default, MetaFile(ver) should be used. Args: version: Version of the metadata file. data: Metadata bytes that the metafile represents. hash_algorithms: Hash algorithms to create the hashes with. If not specified, the securesystemslib default hash algorithm is used. Raises: ValueError: The hash algorithms list contains an unsupported algorithm. """ length, hashes = cls._get_length_and_hashes(data, hash_algorithms) return cls(version, length, hashes) def to_dict(self) -> dict[str, Any]: """Return the dictionary representation of self.""" res_dict: dict[str, Any] = { "version": self.version, **self.unrecognized_fields, } if self.length is not None: res_dict["length"] = self.length if self.hashes is not None: res_dict["hashes"] = self.hashes return res_dict def verify_length_and_hashes(self, data: bytes | IO[bytes]) -> None: """Verify that the length and hashes of ``data`` match expected values. Args: data: File object or its content in bytes. Raises: LengthOrHashMismatchError: Calculated length or hashes do not match expected values or hash algorithm is not supported. """ if self.length is not None: self._verify_length(data, self.length) if self.hashes is not None: self._verify_hashes(data, self.hashes) class Timestamp(Signed): """A container for the signed part of timestamp metadata. TUF file format uses a dictionary to contain the snapshot information: this is not the case with ``Timestamp.snapshot_meta`` which is a ``MetaFile``. *All parameters named below are not just constructor arguments but also instance attributes.* Args: version: Metadata version number. Default is 1. spec_version: Supported TUF specification version. Default is the version currently supported by the library. expires: Metadata expiry date. Default is current date and time. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API snapshot_meta: Meta information for snapshot metadata. Default is a MetaFile with version 1. Raises: ValueError: Invalid arguments. 
""" type = _TIMESTAMP def __init__( self, version: int | None = None, spec_version: str | None = None, expires: datetime | None = None, snapshot_meta: MetaFile | None = None, unrecognized_fields: dict[str, Any] | None = None, ): super().__init__(version, spec_version, expires, unrecognized_fields) self.snapshot_meta = snapshot_meta or MetaFile(1) def __eq__(self, other: object) -> bool: if not isinstance(other, Timestamp): return False return ( super().__eq__(other) and self.snapshot_meta == other.snapshot_meta ) @classmethod def from_dict(cls, signed_dict: dict[str, Any]) -> Timestamp: """Create ``Timestamp`` object from its json/dict representation. Raises: ValueError, KeyError: Invalid arguments. """ common_args = cls._common_fields_from_dict(signed_dict) meta_dict = signed_dict.pop("meta") snapshot_meta = MetaFile.from_dict(meta_dict["snapshot.json"]) # All fields left in the timestamp_dict are unrecognized. return cls(*common_args, snapshot_meta, signed_dict) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" res_dict = self._common_fields_to_dict() res_dict["meta"] = {"snapshot.json": self.snapshot_meta.to_dict()} return res_dict class Snapshot(Signed): """A container for the signed part of snapshot metadata. Snapshot contains information about all target Metadata files. *All parameters named below are not just constructor arguments but also instance attributes.* Args: version: Metadata version number. Default is 1. spec_version: Supported TUF specification version. Default is the version currently supported by the library. expires: Metadata expiry date. Default is current date and time. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API meta: Dictionary of targets filenames to ``MetaFile`` objects. Default is a dictionary with a Metafile for "snapshot.json" version 1. Raises: ValueError: Invalid arguments. """ type = _SNAPSHOT def __init__( self, version: int | None = None, spec_version: str | None = None, expires: datetime | None = None, meta: dict[str, MetaFile] | None = None, unrecognized_fields: dict[str, Any] | None = None, ): super().__init__(version, spec_version, expires, unrecognized_fields) self.meta = meta if meta is not None else {"targets.json": MetaFile(1)} def __eq__(self, other: object) -> bool: if not isinstance(other, Snapshot): return False return super().__eq__(other) and self.meta == other.meta @classmethod def from_dict(cls, signed_dict: dict[str, Any]) -> Snapshot: """Create ``Snapshot`` object from its json/dict representation. Raises: ValueError, KeyError: Invalid arguments. """ common_args = cls._common_fields_from_dict(signed_dict) meta_dicts = signed_dict.pop("meta") meta = {} for meta_path, meta_dict in meta_dicts.items(): meta[meta_path] = MetaFile.from_dict(meta_dict) # All fields left in the snapshot_dict are unrecognized. return cls(*common_args, meta, signed_dict) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" snapshot_dict = self._common_fields_to_dict() meta_dict = {} for meta_path, meta_info in self.meta.items(): meta_dict[meta_path] = meta_info.to_dict() snapshot_dict["meta"] = meta_dict return snapshot_dict class DelegatedRole(Role): """A container with information about a delegated role. 
A delegation can happen in two ways: - ``paths`` is set: delegates targets matching any path pattern in ``paths`` - ``path_hash_prefixes`` is set: delegates targets whose target path hash starts with any of the prefixes in ``path_hash_prefixes`` ``paths`` and ``path_hash_prefixes`` are mutually exclusive: both cannot be set, at least one of them must be set. *All parameters named below are not just constructor arguments but also instance attributes.* Args: name: Delegated role name. keyids: Delegated role signing key identifiers. threshold: Number of keys required to sign this role's metadata. terminating: ``True`` if this delegation terminates a target lookup. paths: Path patterns. See note above. path_hash_prefixes: Hash prefixes. See note above. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API. Raises: ValueError: Invalid arguments. """ def __init__( self, name: str, keyids: list[str], threshold: int, terminating: bool, paths: list[str] | None = None, path_hash_prefixes: list[str] | None = None, unrecognized_fields: dict[str, Any] | None = None, ): super().__init__(keyids, threshold, unrecognized_fields) self.name = name self.terminating = terminating exclusive_vars = [paths, path_hash_prefixes] if sum(1 for var in exclusive_vars if var is not None) != 1: raise ValueError( "Only one of (paths, path_hash_prefixes) must be set" ) if paths is not None and any(not isinstance(p, str) for p in paths): raise ValueError("Paths must be strings") if path_hash_prefixes is not None and any( not isinstance(p, str) for p in path_hash_prefixes ): raise ValueError("Path_hash_prefixes must be strings") self.paths = paths self.path_hash_prefixes = path_hash_prefixes def __eq__(self, other: object) -> bool: if not isinstance(other, DelegatedRole): return False return ( super().__eq__(other) and self.name == other.name and self.terminating == other.terminating and self.paths == other.paths and self.path_hash_prefixes == other.path_hash_prefixes ) @classmethod def from_dict(cls, role_dict: dict[str, Any]) -> DelegatedRole: """Create ``DelegatedRole`` object from its json/dict representation. Raises: ValueError, KeyError, TypeError: Invalid arguments. """ name = role_dict.pop("name") keyids = role_dict.pop("keyids") threshold = role_dict.pop("threshold") terminating = role_dict.pop("terminating") paths = role_dict.pop("paths", None) path_hash_prefixes = role_dict.pop("path_hash_prefixes", None) # All fields left in the role_dict are unrecognized. return cls( name, keyids, threshold, terminating, paths, path_hash_prefixes, role_dict, ) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" base_role_dict = super().to_dict() res_dict = { "name": self.name, "terminating": self.terminating, **base_role_dict, } if self.paths is not None: res_dict["paths"] = self.paths elif self.path_hash_prefixes is not None: res_dict["path_hash_prefixes"] = self.path_hash_prefixes return res_dict @staticmethod def _is_target_in_pathpattern(targetpath: str, pathpattern: str) -> bool: """Determine whether ``targetpath`` matches the ``pathpattern``.""" # We need to make sure that targetpath and pathpattern are pointing to # the same directory as fnmatch doesn't threat "/" as a special symbol. target_parts = targetpath.split("/") pattern_parts = pathpattern.split("/") if len(target_parts) != len(pattern_parts): return False # Every part in the pathpattern could include a glob pattern, that's why # each of the target and pathpattern parts should match. 
for target_dir, pattern_dir in zip(target_parts, pattern_parts): if not fnmatch.fnmatch(target_dir, pattern_dir): return False return True def is_delegated_path(self, target_filepath: str) -> bool: """Determine whether the given ``target_filepath`` is in one of the paths that ``DelegatedRole`` is trusted to provide. The ``target_filepath`` and the ``DelegatedRole`` paths are expected to be in their canonical forms, so e.g. "a/b" instead of "a//b" . Only "/" is supported as target path separator. Leading separators are not handled as special cases (see `TUF specification on targetpath `_). Args: target_filepath: URL path to a target file, relative to a base targets URL. """ if self.path_hash_prefixes is not None: # Calculate the hash of the filepath # to determine in which bin to find the target. digest_object = sslib_hash.digest(algorithm="sha256") digest_object.update(target_filepath.encode("utf-8")) target_filepath_hash = digest_object.hexdigest() for path_hash_prefix in self.path_hash_prefixes: if target_filepath_hash.startswith(path_hash_prefix): return True elif self.paths is not None: for pathpattern in self.paths: # A delegated role path may be an explicit path or glob # pattern (Unix shell-style wildcards). if self._is_target_in_pathpattern(target_filepath, pathpattern): return True return False class SuccinctRoles(Role): """Succinctly defines a hash bin delegation graph. A ``SuccinctRoles`` object describes a delegation graph that covers all targets, distributing them uniformly over the delegated roles (i.e. bins) in the graph. The total number of bins is 2 to the power of the passed ``bit_length``. Bin names are the concatenation of the passed ``name_prefix`` and a zero-padded hex representation of the bin index separated by a hyphen. The passed ``keyids`` and ``threshold`` is used for each bin, and each bin is 'terminating'. For details: https://github.com/theupdateframework/taps/blob/master/tap15.md Args: keyids: Signing key identifiers for any bin metadata. threshold: Number of keys required to sign any bin metadata. bit_length: Number of bits between 1 and 32. name_prefix: Prefix of all bin names. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API. Raises: ValueError, TypeError, AttributeError: Invalid arguments. """ def __init__( self, keyids: list[str], threshold: int, bit_length: int, name_prefix: str, unrecognized_fields: dict[str, Any] | None = None, ) -> None: super().__init__(keyids, threshold, unrecognized_fields) if bit_length <= 0 or bit_length > 32: raise ValueError("bit_length must be between 1 and 32") if not isinstance(name_prefix, str): raise ValueError("name_prefix must be a string") self.bit_length = bit_length self.name_prefix = name_prefix # Calculate the suffix_len value based on the total number of bins in # hex. If bit_length = 10 then number_of_bins = 1024 or bin names will # have a suffix between "000" and "3ff" in hex and suffix_len will be 3 # meaning the third bin will have a suffix of "003". self.number_of_bins = 2**bit_length # suffix_len is calculated based on "number_of_bins - 1" as the name # of the last bin contains the number "number_of_bins -1" as a suffix. 
self.suffix_len = len(f"{self.number_of_bins - 1:x}") def __eq__(self, other: object) -> bool: if not isinstance(other, SuccinctRoles): return False return ( super().__eq__(other) and self.bit_length == other.bit_length and self.name_prefix == other.name_prefix ) @classmethod def from_dict(cls, role_dict: dict[str, Any]) -> SuccinctRoles: """Create ``SuccinctRoles`` object from its json/dict representation. Raises: ValueError, KeyError, AttributeError, TypeError: Invalid arguments. """ keyids = role_dict.pop("keyids") threshold = role_dict.pop("threshold") bit_length = role_dict.pop("bit_length") name_prefix = role_dict.pop("name_prefix") # All fields left in the role_dict are unrecognized. return cls(keyids, threshold, bit_length, name_prefix, role_dict) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" base_role_dict = super().to_dict() return { "bit_length": self.bit_length, "name_prefix": self.name_prefix, **base_role_dict, } def get_role_for_target(self, target_filepath: str) -> str: """Calculate the name of the delegated role responsible for ``target_filepath``. The target at path ``target_filepath`` is assigned to a bin by casting the left-most ``bit_length`` of bits of the file path hash digest to int, using it as bin index between 0 and ``2**bit_length - 1``. Args: target_filepath: URL path to a target file, relative to a base targets URL. """ hasher = sslib_hash.digest(algorithm="sha256") hasher.update(target_filepath.encode("utf-8")) # We can't ever need more than 4 bytes (32 bits). hash_bytes = hasher.digest()[:4] # Right shift hash bytes, so that we only have the leftmost # bit_length bits that we care about. shift_value = 32 - self.bit_length bin_number = int.from_bytes(hash_bytes, byteorder="big") >> shift_value # Add zero padding if necessary and cast to hex the suffix. suffix = f"{bin_number:0{self.suffix_len}x}" return f"{self.name_prefix}-{suffix}" def get_roles(self) -> Iterator[str]: """Yield the names of all different delegated roles one by one.""" for i in range(self.number_of_bins): suffix = f"{i:0{self.suffix_len}x}" yield f"{self.name_prefix}-{suffix}" def is_delegated_role(self, role_name: str) -> bool: """Determine whether the given ``role_name`` is in one of the delegated roles that ``SuccinctRoles`` represents. Args: role_name: The name of the role to check against. """ desired_prefix = self.name_prefix + "-" if not role_name.startswith(desired_prefix): return False suffix = role_name[len(desired_prefix) :] if len(suffix) != self.suffix_len: return False try: # make sure suffix is hex value num = int(suffix, 16) except ValueError: return False return 0 <= num < self.number_of_bins class Delegations: """A container object storing information about all delegations. *All parameters named below are not just constructor arguments but also instance attributes.* Args: keys: Dictionary of keyids to Keys. Defines the keys used in ``roles``. roles: Ordered dictionary of role names to DelegatedRoles instances. It defines which keys are required to sign the metadata for a specific role. The roles order also defines the order that role delegations are considered during target searches. succinct_roles: Contains succinct information about hash bin delegations. Note that succinct roles is not a TUF specification feature yet and setting `succinct_roles` to a value makes the resulting metadata non-compliant. The metadata will not be accepted as valid by specification compliant clients such as those built with python-tuf <= 1.1.0. 
For more information see: https://github.com/theupdateframework/taps/blob/master/tap15.md unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Exactly one of ``roles`` and ``succinct_roles`` must be set. Raises: ValueError: Invalid arguments. """ def __init__( self, keys: dict[str, Key], roles: dict[str, DelegatedRole] | None = None, succinct_roles: SuccinctRoles | None = None, unrecognized_fields: dict[str, Any] | None = None, ): self.keys = keys if sum(1 for v in [roles, succinct_roles] if v is not None) != 1: raise ValueError("One of roles and succinct_roles must be set") if roles is not None: for role in roles: if not role or role in TOP_LEVEL_ROLE_NAMES: raise ValueError( "Delegated roles cannot be empty or use top-level " "role names" ) self.roles = roles self.succinct_roles = succinct_roles if unrecognized_fields is None: unrecognized_fields = {} self.unrecognized_fields = unrecognized_fields def __eq__(self, other: object) -> bool: if not isinstance(other, Delegations): return False all_attributes_check = ( self.keys == other.keys and self.roles == other.roles and self.succinct_roles == other.succinct_roles and self.unrecognized_fields == other.unrecognized_fields ) if self.roles is not None and other.roles is not None: all_attributes_check = ( all_attributes_check # Order of the delegated roles matters (see issue #1788). and list(self.roles.items()) == list(other.roles.items()) ) return all_attributes_check @classmethod def from_dict(cls, delegations_dict: dict[str, Any]) -> Delegations: """Create ``Delegations`` object from its json/dict representation. Raises: ValueError, KeyError, TypeError: Invalid arguments. """ keys = delegations_dict.pop("keys") keys_res = {} for keyid, key_dict in keys.items(): keys_res[keyid] = Key.from_dict(keyid, key_dict) roles = delegations_dict.pop("roles", None) roles_res: dict[str, DelegatedRole] | None = None if roles is not None: roles_res = {} for role_dict in roles: new_role = DelegatedRole.from_dict(role_dict) if new_role.name in roles_res: raise ValueError(f"Duplicate role {new_role.name}") roles_res[new_role.name] = new_role succinct_roles_dict = delegations_dict.pop("succinct_roles", None) succinct_roles_info = None if succinct_roles_dict is not None: succinct_roles_info = SuccinctRoles.from_dict(succinct_roles_dict) # All fields left in the delegations_dict are unrecognized. return cls(keys_res, roles_res, succinct_roles_info, delegations_dict) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" keys = {keyid: key.to_dict() for keyid, key in self.keys.items()} res_dict: dict[str, Any] = { "keys": keys, **self.unrecognized_fields, } if self.roles is not None: roles = [role_obj.to_dict() for role_obj in self.roles.values()] res_dict["roles"] = roles elif self.succinct_roles is not None: res_dict["succinct_roles"] = self.succinct_roles.to_dict() return res_dict def get_roles_for_target( self, target_filepath: str ) -> Iterator[tuple[str, bool]]: """Given ``target_filepath`` get names and terminating status of all delegated roles who are responsible for it. Args: target_filepath: URL path to a target file, relative to a base targets URL. """ if self.roles is not None: for role in self.roles.values(): if role.is_delegated_path(target_filepath): yield role.name, role.terminating elif self.succinct_roles is not None: # We consider all succinct_roles as terminating. # For more information read TAP 15. 
yield self.succinct_roles.get_role_for_target(target_filepath), True class TargetFile(BaseFile): """A container with information about a particular target file. *All parameters named below are not just constructor arguments but also instance attributes.* Args: length: Length of the target file in bytes. hashes: Dictionary of hash algorithm names to hashes of the target file content. path: URL path to a target file, relative to a base targets URL. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Raises: ValueError, TypeError: Invalid arguments. """ def __init__( self, length: int, hashes: dict[str, str], path: str, unrecognized_fields: dict[str, Any] | None = None, ): self._validate_length(length) self._validate_hashes(hashes) self.length = length self.hashes = hashes self.path = path if unrecognized_fields is None: unrecognized_fields = {} self.unrecognized_fields = unrecognized_fields @property def custom(self) -> Any: # noqa: ANN401 """Get implementation specific data related to the target. python-tuf does not use or validate this data. """ return self.unrecognized_fields.get("custom") def __eq__(self, other: object) -> bool: if not isinstance(other, TargetFile): return False return ( self.length == other.length and self.hashes == other.hashes and self.path == other.path and self.unrecognized_fields == other.unrecognized_fields ) @classmethod def from_dict(cls, target_dict: dict[str, Any], path: str) -> TargetFile: """Create ``TargetFile`` object from its json/dict representation. Raises: ValueError, KeyError, TypeError: Invalid arguments. """ length = target_dict.pop("length") hashes = target_dict.pop("hashes") # All fields left in the target_dict are unrecognized. return cls(length, hashes, path, target_dict) def to_dict(self) -> dict[str, Any]: """Return the JSON-serializable dictionary representation of self.""" return { "length": self.length, "hashes": self.hashes, **self.unrecognized_fields, } @classmethod def from_file( cls, target_file_path: str, local_path: str, hash_algorithms: list[str] | None = None, ) -> TargetFile: """Create ``TargetFile`` object from a file. Args: target_file_path: URL path to a target file, relative to a base targets URL. local_path: Local path to target file content. hash_algorithms: Hash algorithms to calculate hashes with. If not specified the securesystemslib default hash algorithm is used. Raises: FileNotFoundError: The file doesn't exist. ValueError: The hash algorithms list contains an unsupported algorithm. """ with open(local_path, "rb") as file: return cls.from_data(target_file_path, file, hash_algorithms) @classmethod def from_data( cls, target_file_path: str, data: bytes | IO[bytes], hash_algorithms: list[str] | None = None, ) -> TargetFile: """Create ``TargetFile`` object from bytes. Args: target_file_path: URL path to a target file, relative to a base targets URL. data: Target file content. hash_algorithms: Hash algorithms to create the hashes with. If not specified the securesystemslib default hash algorithm is used. Raises: ValueError: The hash algorithms list contains an unsupported algorithm. """ length, hashes = cls._get_length_and_hashes(data, hash_algorithms) return cls(length, hashes, target_file_path) def verify_length_and_hashes(self, data: bytes | IO[bytes]) -> None: """Verify that length and hashes of ``data`` match expected values. Args: data: Target file object or its content in bytes. 
Raises: LengthOrHashMismatchError: Calculated length or hashes do not match expected values or hash algorithm is not supported. """ self._verify_length(data, self.length) self._verify_hashes(data, self.hashes) def get_prefixed_paths(self) -> list[str]: """ Return hash-prefixed URL path fragments for the target file path. """ paths = [] parent, sep, name = self.path.rpartition("/") for hash_value in self.hashes.values(): paths.append(f"{parent}{sep}{hash_value}.{name}") return paths class Targets(Signed, _DelegatorMixin): """A container for the signed part of targets metadata. Targets contains verifying information about target files and also delegates responsibility to other Targets roles. *All parameters named below are not just constructor arguments but also instance attributes.* Args: version: Metadata version number. Default is 1. spec_version: Supported TUF specification version. Default is the version currently supported by the library. expires: Metadata expiry date. Default is current date and time. targets: Dictionary of target filenames to TargetFiles. Default is an empty dictionary. delegations: Defines how this Targets delegates responsibility to other Targets Metadata files. Default is None. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API Raises: ValueError: Invalid arguments. """ type = _TARGETS def __init__( self, version: int | None = None, spec_version: str | None = None, expires: datetime | None = None, targets: dict[str, TargetFile] | None = None, delegations: Delegations | None = None, unrecognized_fields: dict[str, Any] | None = None, ) -> None: super().__init__(version, spec_version, expires, unrecognized_fields) self.targets = targets if targets is not None else {} self.delegations = delegations def __eq__(self, other: object) -> bool: if not isinstance(other, Targets): return False return ( super().__eq__(other) and self.targets == other.targets and self.delegations == other.delegations ) @classmethod def from_dict(cls, signed_dict: dict[str, Any]) -> Targets: """Create ``Targets`` object from its json/dict representation. Raises: ValueError, KeyError, TypeError: Invalid arguments. """ common_args = cls._common_fields_from_dict(signed_dict) targets = signed_dict.pop(_TARGETS) try: delegations_dict = signed_dict.pop("delegations") except KeyError: delegations = None else: delegations = Delegations.from_dict(delegations_dict) res_targets = {} for target_path, target_info in targets.items(): res_targets[target_path] = TargetFile.from_dict( target_info, target_path ) # All fields left in the targets_dict are unrecognized. return cls(*common_args, res_targets, delegations, signed_dict) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" targets_dict = self._common_fields_to_dict() targets = {} for target_path, target_file_obj in self.targets.items(): targets[target_path] = target_file_obj.to_dict() targets_dict[_TARGETS] = targets if self.delegations is not None: targets_dict["delegations"] = self.delegations.to_dict() return targets_dict def add_key(self, key: Key, role: str | None = None) -> None: """Add new signing key for delegated role ``role``. If succinct_roles is used then the ``role`` argument is not required. Args: key: Signing key to be added for ``role``. role: Name of the role, for which ``key`` is added. Raises: ValueError: If the argument order is wrong or if there are no delegated roles or if ``role`` is not delegated by this Target. 
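
        Illustrative example (assumes ``targets_md`` is a ``Metadata[Targets]``
        whose delegations already include a role named "unclaimed", and
        ``key`` is a public ``Key`` object obtained elsewhere)::

            targets_md.signed.add_key(key, "unclaimed")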
""" # Verify that our users are not using the old argument order. if isinstance(role, Key): raise ValueError("Role must be a string, not a Key instance") if self.delegations is None: raise ValueError(f"Delegated role {role} doesn't exist") if self.delegations.roles is not None: if role not in self.delegations.roles: raise ValueError(f"Delegated role {role} doesn't exist") if key.keyid not in self.delegations.roles[role].keyids: self.delegations.roles[role].keyids.append(key.keyid) elif self.delegations.succinct_roles is not None: if key.keyid not in self.delegations.succinct_roles.keyids: self.delegations.succinct_roles.keyids.append(key.keyid) self.delegations.keys[key.keyid] = key def revoke_key(self, keyid: str, role: str | None = None) -> None: """Revokes key from delegated role ``role`` and updates the delegations key store. If succinct_roles is used then the ``role`` argument is not required. Args: keyid: Identifier of the key to be removed for ``role``. role: Name of the role, for which a signing key is removed. Raises: ValueError: If there are no delegated roles or if ``role`` is not delegated by this ``Target`` or if key is not used by ``role`` or if key with id ``keyid`` is not used by succinct roles. """ if self.delegations is None: raise ValueError(f"Delegated role {role} doesn't exist") if self.delegations.roles is not None: if role not in self.delegations.roles: raise ValueError(f"Delegated role {role} doesn't exist") if keyid not in self.delegations.roles[role].keyids: raise ValueError(f"Key with id {keyid} is not used by {role}") self.delegations.roles[role].keyids.remove(keyid) for keyinfo in self.delegations.roles.values(): if keyid in keyinfo.keyids: return elif self.delegations.succinct_roles is not None: if keyid not in self.delegations.succinct_roles.keyids: raise ValueError( f"Key with id {keyid} is not used by succinct_roles" ) self.delegations.succinct_roles.keyids.remove(keyid) del self.delegations.keys[keyid] def get_delegated_role(self, delegated_role: str) -> Role: """Return the role object for the given delegated role. Raises ValueError if delegated_role is not actually delegated. """ if self.delegations is None: raise ValueError("No delegations found") role: Role | None = None if self.delegations.roles is not None: role = self.delegations.roles.get(delegated_role) elif self.delegations.succinct_roles is not None: succinct = self.delegations.succinct_roles if succinct.is_delegated_role(delegated_role): role = succinct if not role: raise ValueError(f"Delegated role {delegated_role} not found") return role def get_key(self, keyid: str) -> Key: if self.delegations is None: raise ValueError("No delegations found") if keyid not in self.delegations.keys: raise ValueError(f"Key {keyid} not found") return self.delegations.keys[keyid] tuf-6.0.0/tuf/api/dsse.py0000644000000000000000000001051013615410400012107 0ustar00"""Low-level TUF DSSE API. (experimental!)""" from __future__ import annotations import json from typing import Generic, cast from securesystemslib.dsse import Envelope as BaseSimpleEnvelope # Expose all payload classes to use API independently of ``tuf.api.metadata``. 
from tuf.api._payload import ( # noqa: F401 _ROOT, _SNAPSHOT, _TARGETS, _TIMESTAMP, SPECIFICATION_VERSION, TOP_LEVEL_ROLE_NAMES, BaseFile, DelegatedRole, Delegations, MetaFile, Role, Root, RootVerificationResult, Signed, Snapshot, SuccinctRoles, T, TargetFile, Targets, Timestamp, VerificationResult, ) from tuf.api.serialization import DeserializationError, SerializationError class SimpleEnvelope(Generic[T], BaseSimpleEnvelope): """Dead Simple Signing Envelope (DSSE) for TUF payloads. * Sign with ``self.sign()`` (inherited). * Verify with ``verify_delegate`` on a ``Root`` or ``Targets`` object:: delegator.verify_delegate( role_name, envelope.pae(), # Note, how we don't pass ``envelope.payload``! envelope.signatures, ) Attributes: payload: Serialized payload bytes. payload_type: Payload string identifier. signatures: Ordered dictionary of keyids to ``Signature`` objects. """ DEFAULT_PAYLOAD_TYPE = "application/vnd.tuf+json" @classmethod def from_bytes(cls, data: bytes) -> SimpleEnvelope[T]: """Load envelope from JSON bytes. NOTE: Unlike ``tuf.api.metadata.Metadata.from_bytes``, this method does not deserialize the contained payload. Use ``self.get_signed`` to deserialize the payload into a ``Signed`` object. Args: data: envelope JSON bytes. Raises: tuf.api.serialization.DeserializationError: data cannot be deserialized. Returns: TUF ``SimpleEnvelope`` object. """ try: envelope_dict = json.loads(data.decode()) envelope = SimpleEnvelope.from_dict(envelope_dict) except Exception as e: raise DeserializationError from e return cast(SimpleEnvelope[T], envelope) def to_bytes(self) -> bytes: """Return envelope as JSON bytes. NOTE: Unlike ``tuf.api.metadata.Metadata.to_bytes``, this method does not serialize the payload. Use ``SimpleEnvelope.from_signed`` to serialize a ``Signed`` object and wrap it in an SimpleEnvelope. Raises: tuf.api.serialization.SerializationError: self cannot be serialized. """ try: envelope_dict = self.to_dict() json_bytes = json.dumps(envelope_dict).encode() except Exception as e: raise SerializationError from e return json_bytes @classmethod def from_signed(cls, signed: T) -> SimpleEnvelope[T]: """Serialize payload as JSON bytes and wrap in envelope. Args: signed: ``Signed`` object. Raises: tuf.api.serialization.SerializationError: The signed object cannot be serialized. """ try: signed_dict = signed.to_dict() json_bytes = json.dumps(signed_dict).encode() except Exception as e: raise SerializationError from e return cls(json_bytes, cls.DEFAULT_PAYLOAD_TYPE, {}) def get_signed(self) -> T: """Extract and deserialize payload JSON bytes from envelope. Raises: tuf.api.serialization.DeserializationError: The signed object cannot be deserialized. """ try: payload_dict = json.loads(self.payload.decode()) # TODO: can we move this to tuf.api._payload? _type = payload_dict["_type"] if _type == _TARGETS: inner_cls: type[Signed] = Targets elif _type == _SNAPSHOT: inner_cls = Snapshot elif _type == _TIMESTAMP: inner_cls = Timestamp elif _type == _ROOT: inner_cls = Root else: raise ValueError(f'unrecognized role type "{_type}"') except Exception as e: raise DeserializationError from e return cast(T, inner_cls.from_dict(payload_dict)) tuf-6.0.0/tuf/api/exceptions.py0000644000000000000000000000375613615410400013350 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """ Define TUF exceptions used inside the new modern implementation. 
The names chosen for TUF Exception classes should end in 'Error' except where there is a good reason not to, and provide that reason in those cases. """ #### Repository errors #### from securesystemslib.exceptions import StorageError # noqa: F401 class RepositoryError(Exception): """An error with a repository's state, such as a missing file. It covers all exceptions that come from the repository side when looking from the perspective of users of metadata API or ngclient. """ class UnsignedMetadataError(RepositoryError): """An error about metadata object with insufficient threshold of signatures. """ class BadVersionNumberError(RepositoryError): """An error for metadata that contains an invalid version number.""" class EqualVersionNumberError(BadVersionNumberError): """An error for metadata containing a previously verified version number.""" class ExpiredMetadataError(RepositoryError): """Indicate that a TUF Metadata file has expired.""" class LengthOrHashMismatchError(RepositoryError): """An error while checking the length and hash values of an object.""" #### Download Errors #### class DownloadError(Exception): """An error occurred while attempting to download a file.""" class DownloadLengthMismatchError(DownloadError): """Indicate that a mismatch of lengths was seen while downloading a file.""" class SlowRetrievalError(DownloadError): """Indicate that downloading a file took an unreasonably long time.""" class DownloadHTTPError(DownloadError): """ Returned by FetcherInterface implementations for HTTP errors. Args: message: The HTTP error message status_code: The HTTP status code """ def __init__(self, message: str, status_code: int): super().__init__(message) self.status_code = status_code tuf-6.0.0/tuf/api/metadata.py0000644000000000000000000003454513615410400012747 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """The low-level Metadata API. The low-level Metadata API in ``tuf.api.metadata`` module contains: * Safe de/serialization of metadata to and from files. * Access to and modification of signed metadata content. * Signing metadata and verifying signatures. Metadata API implements functionality at the metadata file level, it does not provide TUF repository or client functionality on its own (but can be used to implement them). The API design is based on the file format defined in the `TUF specification `_ and the object attributes generally follow the JSON format used in the specification. The above principle means that a ``Metadata`` object represents a single metadata file, and has a ``signed`` attribute that is an instance of one of the four top level signed classes (``Root``, ``Timestamp``, ``Snapshot`` and ``Targets``). To make Python type annotations useful ``Metadata`` can be type constrained: e.g. the signed attribute of ``Metadata[Root]`` is known to be ``Root``. Currently Metadata API supports JSON as the file format. A basic example of repository implementation using the Metadata is available in `examples/repository `_. """ from __future__ import annotations import logging import tempfile from typing import TYPE_CHECKING, Any, Generic, cast from securesystemslib.signer import Signature, Signer from securesystemslib.storage import FilesystemBackend, StorageBackendInterface # Expose payload classes via ``tuf.api.metadata`` to maintain the API, # even if they are unused in the local scope. 
from tuf.api._payload import ( # noqa: F401 _ROOT, _SNAPSHOT, _TARGETS, _TIMESTAMP, SPECIFICATION_VERSION, TOP_LEVEL_ROLE_NAMES, BaseFile, DelegatedRole, Delegations, Key, LengthOrHashMismatchError, MetaFile, Role, Root, RootVerificationResult, Signed, Snapshot, SuccinctRoles, T, TargetFile, Targets, Timestamp, VerificationResult, ) from tuf.api.exceptions import UnsignedMetadataError if TYPE_CHECKING: from tuf.api.serialization import ( MetadataDeserializer, MetadataSerializer, SignedSerializer, ) logger = logging.getLogger(__name__) class Metadata(Generic[T]): """A container for signed TUF metadata. Provides methods to convert to and from dictionary, read and write to and from file and to create and verify metadata signatures. ``Metadata[T]`` is a generic container type where T can be any one type of [``Root``, ``Timestamp``, ``Snapshot``, ``Targets``]. The purpose of this is to allow static type checking of the signed attribute in code using Metadata:: root_md = Metadata[Root].from_file("root.json") # root_md type is now Metadata[Root]. This means signed and its # attributes like consistent_snapshot are now statically typed and the # types can be verified by static type checkers and shown by IDEs print(root_md.signed.consistent_snapshot) Using a type constraint is not required but not doing so means T is not a specific type so static typing cannot happen. Note that the type constraint ``[Root]`` is not validated at runtime (as pure annotations are not available then). New Metadata instances can be created from scratch with:: one_day = datetime.now(timezone.utc) + timedelta(days=1) timestamp = Metadata(Timestamp(expires=one_day)) Apart from ``expires`` all of the arguments to the inner constructors have reasonable default values for new metadata. *All parameters named below are not just constructor arguments but also instance attributes.* Args: signed: Actual metadata payload, i.e. one of ``Targets``, ``Snapshot``, ``Timestamp`` or ``Root``. signatures: Ordered dictionary of keyids to ``Signature`` objects, each signing the canonical serialized representation of ``signed``. Default is an empty dictionary. unrecognized_fields: Dictionary of all attributes that are not managed by TUF Metadata API. These fields are NOT signed and it's preferable if unrecognized fields are added to the Signed derivative classes. """ def __init__( self, signed: T, signatures: dict[str, Signature] | None = None, unrecognized_fields: dict[str, Any] | None = None, ): self.signed: T = signed self.signatures = signatures if signatures is not None else {} if unrecognized_fields is None: unrecognized_fields = {} self.unrecognized_fields = unrecognized_fields def __eq__(self, other: object) -> bool: if not isinstance(other, Metadata): return False return ( self.signatures == other.signatures # Order of the signatures matters (see issue #1788). and list(self.signatures.items()) == list(other.signatures.items()) and self.signed == other.signed and self.unrecognized_fields == other.unrecognized_fields ) @property def signed_bytes(self) -> bytes: """Default canonical json byte representation of ``self.signed``.""" # Use local scope import to avoid circular import errors from tuf.api.serialization.json import CanonicalJSONSerializer return CanonicalJSONSerializer().serialize(self.signed) @classmethod def from_dict(cls, metadata: dict[str, Any]) -> Metadata[T]: """Create ``Metadata`` object from its json/dict representation. Args: metadata: TUF metadata in dict representation. 
Raises: ValueError, KeyError, TypeError: Invalid arguments. Side Effect: Destroys the metadata dict passed by reference. Returns: TUF ``Metadata`` object. """ # Dispatch to contained metadata class on metadata _type field. _type = metadata["signed"]["_type"] if _type == _TARGETS: inner_cls: type[Signed] = Targets elif _type == _SNAPSHOT: inner_cls = Snapshot elif _type == _TIMESTAMP: inner_cls = Timestamp elif _type == _ROOT: inner_cls = Root else: raise ValueError(f'unrecognized metadata type "{_type}"') # Make sure signatures are unique signatures: dict[str, Signature] = {} for sig_dict in metadata.pop("signatures"): sig = Signature.from_dict(sig_dict) if sig.keyid in signatures: raise ValueError( f"Multiple signatures found for keyid {sig.keyid}" ) signatures[sig.keyid] = sig return cls( # Specific type T is not known at static type check time: use cast signed=cast(T, inner_cls.from_dict(metadata.pop("signed"))), signatures=signatures, # All fields left in the metadata dict are unrecognized. unrecognized_fields=metadata, ) @classmethod def from_file( cls, filename: str, deserializer: MetadataDeserializer | None = None, storage_backend: StorageBackendInterface | None = None, ) -> Metadata[T]: """Load TUF metadata from file storage. Args: filename: Path to read the file from. deserializer: ``MetadataDeserializer`` subclass instance that implements the desired wireline format deserialization. Per default a ``JSONDeserializer`` is used. storage_backend: Object that implements ``securesystemslib.storage.StorageBackendInterface``. Default is ``FilesystemBackend`` (i.e. a local file). Raises: StorageError: The file cannot be read. tuf.api.serialization.DeserializationError: The file cannot be deserialized. Returns: TUF ``Metadata`` object. """ if storage_backend is None: storage_backend = FilesystemBackend() with storage_backend.get(filename) as file_obj: return cls.from_bytes(file_obj.read(), deserializer) @classmethod def from_bytes( cls, data: bytes, deserializer: MetadataDeserializer | None = None, ) -> Metadata[T]: """Load TUF metadata from raw data. Args: data: Metadata content. deserializer: ``MetadataDeserializer`` implementation to use. Default is ``JSONDeserializer``. Raises: tuf.api.serialization.DeserializationError: The file cannot be deserialized. Returns: TUF ``Metadata`` object. """ if deserializer is None: # Use local scope import to avoid circular import errors from tuf.api.serialization.json import JSONDeserializer deserializer = JSONDeserializer() return deserializer.deserialize(data) def to_bytes(self, serializer: MetadataSerializer | None = None) -> bytes: """Return the serialized TUF file format as bytes. Note that if bytes are first deserialized into ``Metadata`` and then serialized with ``to_bytes()``, the two are not required to be identical even though the signatures are guaranteed to stay valid. If byte-for-byte equivalence is required (which is the case when content hashes are used in other metadata), the original content should be used instead of re-serializing. Args: serializer: ``MetadataSerializer`` instance that implements the desired serialization format. Default is ``JSONSerializer``. Raises: tuf.api.serialization.SerializationError: The metadata object cannot be serialized. 
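
        Illustrative example (assumes ``md`` is an existing ``Metadata``
        object)::

            from tuf.api.serialization.json import JSONSerializer

            data = md.to_bytes(JSONSerializer(compact=False))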
""" if serializer is None: # Use local scope import to avoid circular import errors from tuf.api.serialization.json import JSONSerializer serializer = JSONSerializer(compact=True) return serializer.serialize(self) def to_dict(self) -> dict[str, Any]: """Return the dict representation of self.""" signatures = [sig.to_dict() for sig in self.signatures.values()] return { "signatures": signatures, "signed": self.signed.to_dict(), **self.unrecognized_fields, } def to_file( self, filename: str, serializer: MetadataSerializer | None = None, storage_backend: StorageBackendInterface | None = None, ) -> None: """Write TUF metadata to file storage. Note that if a file is first deserialized into ``Metadata`` and then serialized with ``to_file()``, the two files are not required to be identical even though the signatures are guaranteed to stay valid. If byte-for-byte equivalence is required (which is the case when file hashes are used in other metadata), the original file should be used instead of re-serializing. Args: filename: Path to write the file to. serializer: ``MetadataSerializer`` instance that implements the desired serialization format. Default is ``JSONSerializer``. storage_backend: ``StorageBackendInterface`` implementation. Default is ``FilesystemBackend`` (i.e. a local file). Raises: tuf.api.serialization.SerializationError: The metadata object cannot be serialized. StorageError: The file cannot be written. """ if storage_backend is None: storage_backend = FilesystemBackend() bytes_data = self.to_bytes(serializer) with tempfile.TemporaryFile() as temp_file: temp_file.write(bytes_data) storage_backend.put(temp_file, filename) # Signatures. def sign( self, signer: Signer, append: bool = False, signed_serializer: SignedSerializer | None = None, ) -> Signature: """Create signature over ``signed`` and assigns it to ``signatures``. Args: signer: A ``securesystemslib.signer.Signer`` object that provides a signing implementation to generate the signature. append: ``True`` if the signature should be appended to the list of signatures or replace any existing signatures. The default behavior is to replace signatures. signed_serializer: ``SignedSerializer`` that implements the desired serialization format. Default is ``CanonicalJSONSerializer``. Raises: tuf.api.serialization.SerializationError: ``signed`` cannot be serialized. UnsignedMetadataError: Signing errors. Returns: ``securesystemslib.signer.Signature`` object that was added into signatures. """ if signed_serializer is None: bytes_data = self.signed_bytes else: bytes_data = signed_serializer.serialize(self.signed) try: signature = signer.sign(bytes_data) except Exception as e: raise UnsignedMetadataError(f"Failed to sign: {e}") from e if not append: self.signatures.clear() self.signatures[signature.keyid] = signature return signature def verify_delegate( self, delegated_role: str, delegated_metadata: Metadata, signed_serializer: SignedSerializer | None = None, ) -> None: """Verify that ``delegated_metadata`` is signed with the required threshold of keys for ``delegated_role``. .. deprecated:: 3.1.0 Please use ``Root.verify_delegate()`` or ``Targets.verify_delegate()``. 
""" if self.signed.type not in ["root", "targets"]: raise TypeError("Call is valid only on delegator metadata") if signed_serializer is None: payload = delegated_metadata.signed_bytes else: payload = signed_serializer.serialize(delegated_metadata.signed) self.signed.verify_delegate( delegated_role, payload, delegated_metadata.signatures ) tuf-6.0.0/tuf/api/serialization/__init__.py0000644000000000000000000000350313615410400015571 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """``tuf.api.serialization`` module provides abstract base classes and concrete implementations to serialize and deserialize TUF metadata. Any custom de/serialization implementations should inherit from the abstract base classes defined in this module. The implementations can use the ``to_dict()``/``from_dict()`` implementations available in the Metadata API objects. - Metadata de/serializers are used to convert to and from wireline formats. - Signed serializers are used to canonicalize data for cryptographic signatures generation and verification. """ import abc from typing import TYPE_CHECKING from tuf.api.exceptions import RepositoryError if TYPE_CHECKING: from tuf.api.metadata import Metadata, Signed class SerializationError(RepositoryError): """Error during serialization.""" class DeserializationError(RepositoryError): """Error during deserialization.""" class MetadataDeserializer(metaclass=abc.ABCMeta): """Abstract base class for deserialization of Metadata objects.""" @abc.abstractmethod def deserialize(self, raw_data: bytes) -> "Metadata": """Deserialize bytes to Metadata object.""" raise NotImplementedError class MetadataSerializer(metaclass=abc.ABCMeta): """Abstract base class for serialization of Metadata objects.""" @abc.abstractmethod def serialize(self, metadata_obj: "Metadata") -> bytes: """Serialize Metadata object to bytes.""" raise NotImplementedError class SignedSerializer(metaclass=abc.ABCMeta): """Abstract base class for serialization of Signed objects.""" @abc.abstractmethod def serialize(self, signed_obj: "Signed") -> bytes: """Serialize Signed object to bytes.""" raise NotImplementedError tuf-6.0.0/tuf/api/serialization/json.py0000644000000000000000000000747213615410400015014 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """``tuf.api.serialization.json`` module provides concrete implementations to serialize and deserialize TUF role metadata to and from the JSON wireline format for transportation, and to serialize the 'signed' part of TUF role metadata to the OLPC Canonical JSON format for signature generation and verification. """ # We should not have shadowed stdlib json but that milk spilled already # ruff: noqa: A005 from __future__ import annotations import json from securesystemslib.formats import encode_canonical # ... to allow de/serializing Metadata and Signed objects here, while also # creating default de/serializers there (see metadata local scope imports). # NOTE: A less desirable alternative would be to add more abstraction layers. 
from tuf.api.metadata import Metadata, Signed from tuf.api.serialization import ( DeserializationError, MetadataDeserializer, MetadataSerializer, SerializationError, SignedSerializer, ) class JSONDeserializer(MetadataDeserializer): """Provides JSON to Metadata deserialize method.""" def deserialize(self, raw_data: bytes) -> Metadata: """Deserialize utf-8 encoded JSON bytes into Metadata object.""" try: json_dict = json.loads(raw_data.decode("utf-8")) metadata_obj = Metadata.from_dict(json_dict) except Exception as e: raise DeserializationError("Failed to deserialize JSON") from e return metadata_obj class JSONSerializer(MetadataSerializer): """Provides Metadata to JSON serialize method. Args: compact: A boolean indicating if the JSON bytes generated in 'serialize' should be compact by excluding whitespace. validate: Check that the metadata object can be deserialized again without change of contents and thus find common mistakes. This validation might slow down serialization significantly. """ def __init__(self, compact: bool = False, validate: bool | None = False): self.compact = compact self.validate = validate def serialize(self, metadata_obj: Metadata) -> bytes: """Serialize Metadata object into utf-8 encoded JSON bytes.""" try: indent = None if self.compact else 1 separators = (",", ":") if self.compact else (",", ": ") json_bytes = json.dumps( metadata_obj.to_dict(), indent=indent, separators=separators, sort_keys=True, ).encode("utf-8") if self.validate: try: new_md_obj = JSONDeserializer().deserialize(json_bytes) if metadata_obj != new_md_obj: raise ValueError( "Metadata changes if you serialize and deserialize." ) except Exception as e: raise ValueError("Metadata cannot be validated!") from e except Exception as e: raise SerializationError("Failed to serialize JSON") from e return json_bytes class CanonicalJSONSerializer(SignedSerializer): """Provides Signed to OLPC Canonical JSON serialize method.""" def serialize(self, signed_obj: Signed) -> bytes: """Serialize Signed object into utf-8 encoded OLPC Canonical JSON bytes. """ try: signed_dict = signed_obj.to_dict() canon_str = encode_canonical(signed_dict) # encode_canonical cannot return None if output_function is not set assert canon_str is not None # noqa: S101 canonical_bytes = canon_str.encode("utf-8") except Exception as e: raise SerializationError from e return canonical_bytes tuf-6.0.0/tuf/ngclient/README.md0000644000000000000000000000171413615410400013116 0ustar00## Next-gen TUF client for Python This package provides modules for TUF client implementers. **tuf.ngclient.Updater** is a class that implements the client workflow described in the TUF specification (see https://theupdateframework.github.io/specification/latest/#detailed-client-workflow) **tuf.ngclient.FetcherInterface** is an abstract class that client implementers can implement a concrete class of in order to reuse their own networking/download libraries -- a Requests-based implementation is used by default. 
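
For example, a client that wants to reuse its own download stack can pass a
custom fetcher when constructing the updater. A minimal sketch (the class
name, directories and URLs below are placeholders):

```python
from collections.abc import Iterator

from tuf.ngclient import FetcherInterface, Updater


class MyFetcher(FetcherInterface):
    """Fetcher backed by the application's own HTTP stack (sketch only)."""

    def _fetch(self, url: str) -> Iterator[bytes]:
        # Yield the response body in chunks; raise
        # tuf.api.exceptions.DownloadHTTPError for HTTP error codes.
        ...


updater = Updater(
    metadata_dir="/path/to/metadata",
    metadata_base_url="https://example.com/metadata/",
    fetcher=MyFetcher(),
)
```
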
This package: * Aims to be a clean, easy-to-validate reference client implementation written in modern Python * At the same time aims to be the library of choice for anyone implementing a TUF client in Python: light-weight, easy to integrate and with minimal required dependencies * Is still under development but is planned to become the default client in this implementation (i.e., the older `tuf.client` will be deprecated in the future) tuf-6.0.0/tuf/ngclient/__init__.py0000644000000000000000000000102513615410400013743 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """TUF client public API.""" from tuf.api.metadata import TargetFile from tuf.ngclient.config import UpdaterConfig from tuf.ngclient.fetcher import FetcherInterface from tuf.ngclient.updater import Updater from tuf.ngclient.urllib3_fetcher import Urllib3Fetcher __all__ = [ # noqa: PLE0604 FetcherInterface.__name__, Urllib3Fetcher.__name__, TargetFile.__name__, Updater.__name__, UpdaterConfig.__name__, ] tuf-6.0.0/tuf/ngclient/config.py0000644000000000000000000000417513615410400013462 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Configuration options for ``Updater`` class.""" from __future__ import annotations from dataclasses import dataclass from enum import Flag, unique @unique class EnvelopeType(Flag): """Configures deserialization and verification mode of TUF metadata. Args: METADATA: Traditional canonical JSON -based TUF Metadata. SIMPLE: Dead Simple Signing Envelope. (experimental) """ METADATA = 1 SIMPLE = 2 @dataclass class UpdaterConfig: """Used to store ``Updater`` configuration. Args: max_root_rotations: Maximum number of root rotations. max_delegations: Maximum number of delegations. root_max_length: Maximum length of a root metadata file. timestamp_max_length: Maximum length of a timestamp metadata file. snapshot_max_length: Maximum length of a snapshot metadata file. targets_max_length: Maximum length of a targets metadata file. prefix_targets_with_hash: When `consistent snapshots `_ are used, target download URLs are formed by prefixing the filename with a hash digest of file content by default. This can be overridden by setting ``prefix_targets_with_hash`` to ``False``. envelope_type: Configures deserialization and verification mode of TUF metadata. Per default, it is treated as traditional canonical JSON -based TUF Metadata. app_user_agent: Application user agent, e.g. "MyApp/1.0.0". This will be prefixed to ngclient user agent when the default fetcher is used. """ max_root_rotations: int = 256 max_delegations: int = 32 root_max_length: int = 512000 # bytes timestamp_max_length: int = 16384 # bytes snapshot_max_length: int = 2000000 # bytes targets_max_length: int = 5000000 # bytes prefix_targets_with_hash: bool = True envelope_type: EnvelopeType = EnvelopeType.METADATA app_user_agent: str | None = None tuf-6.0.0/tuf/ngclient/fetcher.py0000644000000000000000000001111413615410400013624 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Provides an interface for network IO abstraction.""" # Imports import abc import logging import tempfile from collections.abc import Iterator from contextlib import contextmanager from typing import IO from tuf.api import exceptions logger = logging.getLogger(__name__) # Classes class FetcherInterface(metaclass=abc.ABCMeta): """Defines an interface for abstract network download. 
By providing a concrete implementation of the abstract interface, users of the framework can plug-in their preferred/customized network stack. Implementations of FetcherInterface only need to implement ``_fetch()``. The public API of the class is already implemented. """ @abc.abstractmethod def _fetch(self, url: str) -> Iterator[bytes]: """Fetch the contents of HTTP/HTTPS ``url`` from a remote server. Implementations must raise ``DownloadHTTPError`` if they receive an HTTP error code. Implementations may raise any errors but the ones that are not ``DownloadErrors`` will be wrapped in a ``DownloadError`` by ``fetch()``. Args: url: URL string that represents a file location. Raises: exceptions.DownloadHTTPError: HTTP error code was received. Returns: Bytes iterator """ raise NotImplementedError # pragma: no cover def fetch(self, url: str) -> Iterator[bytes]: """Fetch the contents of HTTP/HTTPS ``url`` from a remote server. Args: url: URL string that represents a file location. Raises: exceptions.DownloadError: An error occurred during download. exceptions.DownloadHTTPError: An HTTP error code was received. Returns: Bytes iterator """ # Ensure that fetch() only raises DownloadErrors, regardless of the # fetcher implementation try: return self._fetch(url) except exceptions.DownloadError as e: raise e except Exception as e: raise exceptions.DownloadError(f"Failed to download {url}") from e @contextmanager def download_file(self, url: str, max_length: int) -> Iterator[IO]: """Download file from given ``url``. It is recommended to use ``download_file()`` within a ``with`` block to guarantee that allocated file resources will always be released even if download fails. Args: url: URL string that represents the location of the file. max_length: Upper bound of file size in bytes. Raises: exceptions.DownloadError: An error occurred during download. exceptions.DownloadLengthMismatchError: Downloaded bytes exceed ``max_length``. exceptions.DownloadHTTPError: An HTTP error code was received. Yields: ``TemporaryFile`` object that points to the contents of ``url``. """ logger.debug("Downloading: %s", url) number_of_bytes_received = 0 with tempfile.TemporaryFile() as temp_file: chunks = self.fetch(url) for chunk in chunks: number_of_bytes_received += len(chunk) if number_of_bytes_received > max_length: raise exceptions.DownloadLengthMismatchError( f"Downloaded {number_of_bytes_received} bytes exceeding" f" the maximum allowed length of {max_length}" ) temp_file.write(chunk) logger.debug( "Downloaded %d out of %d bytes", number_of_bytes_received, max_length, ) temp_file.seek(0) yield temp_file def download_bytes(self, url: str, max_length: int) -> bytes: """Download bytes from given ``url``. Returns the downloaded bytes, otherwise like ``download_file()``. Args: url: URL string that represents the location of the file. max_length: Upper bound of data size in bytes. Raises: exceptions.DownloadError: An error occurred during download. exceptions.DownloadLengthMismatchError: Downloaded bytes exceed ``max_length``. exceptions.DownloadHTTPError: An HTTP error code was received. Returns: Content of the file in bytes. """ with self.download_file(url, max_length) as dl_file: return dl_file.read() tuf-6.0.0/tuf/ngclient/requests_fetcher.py0000644000000000000000000001316313615410400015565 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Provides an implementation of ``FetcherInterface`` using the Requests HTTP library. 
Note that this module is deprecated, and the default fetcher is Urllib3Fetcher: * RequestsFetcher is still available to make it easy to fall back to previous implementation if issues are found with the Urllib3Fetcher * If RequestsFetcher is used, note that `requests` must be explicitly depended on: python-tuf does not do that. """ from __future__ import annotations import logging from typing import TYPE_CHECKING from urllib import parse # Imports import requests import tuf from tuf.api import exceptions from tuf.ngclient.fetcher import FetcherInterface if TYPE_CHECKING: from collections.abc import Iterator # Globals logger = logging.getLogger(__name__) # Classes class RequestsFetcher(FetcherInterface): """An implementation of ``FetcherInterface`` based on the requests library. Attributes: socket_timeout: Timeout in seconds, used for both initial connection delay and the maximum delay between bytes received. chunk_size: Chunk size in bytes used when downloading. """ def __init__( self, socket_timeout: int = 30, chunk_size: int = 400000, app_user_agent: str | None = None, ) -> None: # http://docs.python-requests.org/en/master/user/advanced/#session-objects: # # "The Session object allows you to persist certain parameters across # requests. It also persists cookies across all requests made from the # Session instance, and will use urllib3's connection pooling. So if # you're making several requests to the same host, the underlying TCP # connection will be reused, which can result in a significant # performance increase (see HTTP persistent connection)." # # NOTE: We use a separate requests.Session per scheme+hostname # combination, in order to reuse connections to the same hostname to # improve efficiency, but avoiding sharing state between different # hosts-scheme combinations to minimize subtle security issues. # Some cookies may not be HTTP-safe. self._sessions: dict[tuple[str, str], requests.Session] = {} # Default settings self.socket_timeout: int = socket_timeout # seconds self.chunk_size: int = chunk_size # bytes self.app_user_agent = app_user_agent def _fetch(self, url: str) -> Iterator[bytes]: """Fetch the contents of HTTP/HTTPS url from a remote server. Args: url: URL string that represents a file location. Raises: exceptions.SlowRetrievalError: Timeout occurs while receiving data. exceptions.DownloadHTTPError: HTTP error code is received. Returns: Bytes iterator """ # Get a customized session for each new schema+hostname combination. session = self._get_session(url) # Get the requests.Response object for this URL. # # Defer downloading the response body with stream=True. # Always set the timeout. This timeout value is interpreted by # requests as: # - connect timeout (max delay before first byte is received) # - read (gap) timeout (max delay between bytes received) try: response = session.get( url, stream=True, timeout=self.socket_timeout ) except requests.exceptions.Timeout as e: raise exceptions.SlowRetrievalError from e # Check response status. try: response.raise_for_status() except requests.HTTPError as e: response.close() status = e.response.status_code raise exceptions.DownloadHTTPError(str(e), status) from e return self._chunks(response) def _chunks(self, response: requests.Response) -> Iterator[bytes]: """A generator function to be returned by fetch. This way the caller of fetch can differentiate between connection and actual data download. 
""" try: yield from response.iter_content(self.chunk_size) except ( requests.exceptions.ConnectionError, requests.exceptions.Timeout, ) as e: raise exceptions.SlowRetrievalError from e finally: response.close() def _get_session(self, url: str) -> requests.Session: """Return a different customized requests.Session per schema+hostname combination. Raises: exceptions.DownloadError: When there is a problem parsing the url. """ # Use a different requests.Session per schema+hostname combination, to # reuse connections while minimizing subtle security issues. parsed_url = parse.urlparse(url) if not parsed_url.scheme: raise exceptions.DownloadError(f"Failed to parse URL {url}") session_index = (parsed_url.scheme, parsed_url.hostname or "") session = self._sessions.get(session_index) if not session: session = requests.Session() self._sessions[session_index] = session ua = f"python-tuf/{tuf.__version__} {session.headers['User-Agent']}" if self.app_user_agent is not None: ua = f"{self.app_user_agent} {ua}" session.headers["User-Agent"] = ua logger.debug("Made new session %s", session_index) else: logger.debug("Reusing session %s", session_index) return session tuf-6.0.0/tuf/ngclient/updater.py0000644000000000000000000005634213615410400013664 0ustar00# Copyright 2020, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Client update workflow implementation. The ``Updater`` class provides an implementation of the `TUF client workflow `_. ``Updater`` provides an API to query available targets and to download them in a secure manner: All downloaded files are verified by signed metadata. High-level description of ``Updater`` functionality: * Initializing an ``Updater`` loads and validates the trusted local root metadata: This root metadata is used as the source of trust for all other metadata. Updater should always be initialized with the ``bootstrap`` argument: if this is not possible, it can be initialized from cache only. * ``refresh()`` can optionally be called to update and load all top-level metadata as described in the specification, using both locally cached metadata and metadata downloaded from the remote repository. If refresh is not done explicitly, it will happen automatically during the first target info lookup. * ``Updater`` can be used to download targets. For each target: * ``Updater.get_targetinfo()`` is first used to find information about a specific target. This will load new targets metadata as needed (from local cache or remote repository). * ``Updater.find_cached_target()`` can optionally be used to check if a target file is already locally cached. * ``Updater.download_target()`` downloads a target file and ensures it is verified correct by the metadata. Note that applications using ``Updater`` should be 'single instance' applications: running multiple instances that use the same cache directories at the same time is not supported. A simple example of using the Updater to implement a Python TUF client that downloads target files is available in `examples/client `_. 
Notes on how Updater uses HTTP by default: * urllib3 is the HTTP library * Typically all requests are retried by urllib3 three times (in cases where this seems useful) * Operating system certificate store is used for TLS, in other words ``certifi`` is not used as the certificate source * Proxy use can be configured with ``https_proxy`` and other similar environment variables All of the HTTP decisions can be changed with ``fetcher`` argument: Custom ``FetcherInterface`` implementations are possible. The alternative ``RequestsFetcher`` implementation is also provided (although deprecated). """ from __future__ import annotations import contextlib import logging import os import shutil import tempfile from pathlib import Path from typing import TYPE_CHECKING, cast from urllib import parse from tuf.api import exceptions from tuf.api.metadata import Root, Snapshot, TargetFile, Targets, Timestamp from tuf.ngclient._internal.trusted_metadata_set import TrustedMetadataSet from tuf.ngclient.config import EnvelopeType, UpdaterConfig from tuf.ngclient.urllib3_fetcher import Urllib3Fetcher if TYPE_CHECKING: from tuf.ngclient.fetcher import FetcherInterface logger = logging.getLogger(__name__) class Updater: """Creates a new ``Updater`` instance and loads trusted root metadata. Args: metadata_dir: Local metadata directory. Directory must be writable and it must contain a trusted root.json file metadata_base_url: Base URL for all remote metadata downloads target_dir: Local targets directory. Directory must be writable. It will be used as the default target download directory by ``find_cached_target()`` and ``download_target()`` target_base_url: ``Optional``; Default base URL for all remote target downloads. Can be individually set in ``download_target()`` fetcher: ``Optional``; ``FetcherInterface`` implementation used to download both metadata and targets. Default is ``Urllib3Fetcher`` config: ``Optional``; ``UpdaterConfig`` could be used to setup common configuration options. bootstrap: ``Optional``; initial root metadata. A bootstrap root should always be provided. If it is not, the current root.json in the metadata cache is used as the initial root. 
Raises: OSError: Local root.json cannot be read RepositoryError: Local root.json is invalid """ def __init__( self, metadata_dir: str, metadata_base_url: str, target_dir: str | None = None, target_base_url: str | None = None, fetcher: FetcherInterface | None = None, config: UpdaterConfig | None = None, bootstrap: bytes | None = None, ): self._dir = metadata_dir self._metadata_base_url = _ensure_trailing_slash(metadata_base_url) self.target_dir = target_dir if target_base_url is None: self._target_base_url = None else: self._target_base_url = _ensure_trailing_slash(target_base_url) self.config = config or UpdaterConfig() if fetcher is not None: self._fetcher = fetcher else: self._fetcher = Urllib3Fetcher( app_user_agent=self.config.app_user_agent ) supported_envelopes = [EnvelopeType.METADATA, EnvelopeType.SIMPLE] if self.config.envelope_type not in supported_envelopes: raise ValueError( f"config: envelope_type must be one of {supported_envelopes}, " f"got '{self.config.envelope_type}'" ) if not bootstrap: # if no root was provided, use the cached non-versioned root.json bootstrap = self._load_local_metadata(Root.type) # Load the initial root, make sure it's cached self._trusted_set = TrustedMetadataSet( bootstrap, self.config.envelope_type ) self._persist_root(self._trusted_set.root.version, bootstrap) self._update_root_symlink() def refresh(self) -> None: """Refresh top-level metadata. Downloads, verifies, and loads metadata for the top-level roles in the specified order (root -> timestamp -> snapshot -> targets) implementing all the checks required in the TUF client workflow. A ``refresh()`` can be done only once during the lifetime of an Updater. If ``refresh()`` has not been explicitly called before the first ``get_targetinfo()`` call, it will be done implicitly at that time. The metadata for delegated roles is not updated by ``refresh()``: that happens on demand during ``get_targetinfo()``. However, if the repository uses `consistent_snapshot `_, then all metadata downloaded by the Updater will use the same consistent repository state. Raises: OSError: New metadata could not be written to disk RepositoryError: Metadata failed to verify in some way DownloadError: Download of a metadata file failed in some way """ self._load_root() self._load_timestamp() self._load_snapshot() self._load_targets(Targets.type, Root.type) def _generate_target_file_path(self, targetinfo: TargetFile) -> str: if self.target_dir is None: raise ValueError("target_dir must be set if filepath is not given") # Use URL encoded target path as filename filename = parse.quote(targetinfo.path, "") return os.path.join(self.target_dir, filename) def get_targetinfo(self, target_path: str) -> TargetFile | None: """Return ``TargetFile`` instance with information for ``target_path``. The return value can be used as an argument to ``download_target()`` and ``find_cached_target()``. If ``refresh()`` has not been called before calling ``get_targetinfo()``, the refresh will be done implicitly. As a side-effect this method downloads all the additional (delegated targets) metadata it needs to return the target information. Args: target_path: `path-relative-URL string `_ that uniquely identifies the target within the repository. Raises: OSError: New metadata could not be written to disk RepositoryError: Metadata failed to verify in some way DownloadError: Download of a metadata file failed in some way Returns: ``TargetFile`` instance or ``None``. 
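
        Illustrative example (assumes ``updater`` is an initialized
        ``Updater``)::

            info = updater.get_targetinfo("file.txt")
            if info is None:
                print("file.txt is not a target in this repository")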
""" if Targets.type not in self._trusted_set: self.refresh() return self._preorder_depth_first_walk(target_path) def find_cached_target( self, targetinfo: TargetFile, filepath: str | None = None, ) -> str | None: """Check whether a local file is an up to date target. Args: targetinfo: ``TargetFile`` from ``get_targetinfo()``. filepath: Local path to file. If ``None``, a file path is generated based on ``target_dir`` constructor argument. Raises: ValueError: Incorrect arguments Returns: Local file path if the file is an up to date target file. ``None`` if file is not found or it is not up to date. """ if filepath is None: filepath = self._generate_target_file_path(targetinfo) try: with open(filepath, "rb") as target_file: targetinfo.verify_length_and_hashes(target_file) return filepath except (OSError, exceptions.LengthOrHashMismatchError): return None def download_target( self, targetinfo: TargetFile, filepath: str | None = None, target_base_url: str | None = None, ) -> str: """Download the target file specified by ``targetinfo``. Args: targetinfo: ``TargetFile`` from ``get_targetinfo()``. filepath: Local path to download into. If ``None``, the file is downloaded into directory defined by ``target_dir`` constructor argument using a generated filename. If file already exists, it is overwritten. target_base_url: Base URL used to form the final target download URL. Default is the value provided in ``Updater()`` Raises: ValueError: Invalid arguments DownloadError: Download of the target file failed in some way RepositoryError: Downloaded target failed to be verified in some way OSError: Failed to write target to file Returns: Local path to downloaded file """ if filepath is None: filepath = self._generate_target_file_path(targetinfo) Path(filepath).parent.mkdir(exist_ok=True, parents=True) if target_base_url is None: if self._target_base_url is None: raise ValueError( "target_base_url must be set in either " "download_target() or constructor" ) target_base_url = self._target_base_url else: target_base_url = _ensure_trailing_slash(target_base_url) target_filepath = targetinfo.path consistent_snapshot = self._trusted_set.root.consistent_snapshot if consistent_snapshot and self.config.prefix_targets_with_hash: hashes = list(targetinfo.hashes.values()) dirname, sep, basename = target_filepath.rpartition("/") target_filepath = f"{dirname}{sep}{hashes[0]}.{basename}" full_url = f"{target_base_url}{target_filepath}" with self._fetcher.download_file( full_url, targetinfo.length ) as target_file: targetinfo.verify_length_and_hashes(target_file) target_file.seek(0) with open(filepath, "wb") as destination_file: shutil.copyfileobj(target_file, destination_file) logger.debug("Downloaded target %s", targetinfo.path) return filepath def _download_metadata( self, rolename: str, length: int, version: int | None = None ) -> bytes: """Download a metadata file and return it as bytes.""" encoded_name = parse.quote(rolename, "") if version is None: url = f"{self._metadata_base_url}{encoded_name}.json" else: url = f"{self._metadata_base_url}{version}.{encoded_name}.json" return self._fetcher.download_bytes(url, length) def _load_local_metadata(self, rolename: str) -> bytes: encoded_name = parse.quote(rolename, "") with open(os.path.join(self._dir, f"{encoded_name}.json"), "rb") as f: return f.read() def _persist_metadata(self, rolename: str, data: bytes) -> None: """Write metadata to disk atomically to avoid data loss. Use a filename _not_ prefixed with version (e.g. "timestamp.json") . 
Encode the rolename to avoid issues with e.g. path separators """ encoded_name = parse.quote(rolename, "") filename = os.path.join(self._dir, f"{encoded_name}.json") self._persist_file(filename, data) def _persist_root(self, version: int, data: bytes) -> None: """Write root metadata to disk atomically to avoid data loss. The metadata is stored with version prefix (e.g. "root_history/1.root.json"). """ rootdir = Path(self._dir, "root_history") rootdir.mkdir(exist_ok=True, parents=True) self._persist_file(str(rootdir / f"{version}.root.json"), data) def _persist_file(self, filename: str, data: bytes) -> None: """Write a file to disk atomically to avoid data loss.""" temp_file_name = None try: with tempfile.NamedTemporaryFile( dir=self._dir, delete=False ) as temp_file: temp_file_name = temp_file.name temp_file.write(data) os.replace(temp_file.name, filename) except OSError as e: # remove tempfile if we managed to create one, # then let the exception happen if temp_file_name is not None: with contextlib.suppress(FileNotFoundError): os.remove(temp_file_name) raise e def _update_root_symlink(self) -> None: """Symlink root.json to current trusted root version in root_history/""" linkname = os.path.join(self._dir, "root.json") version = self._trusted_set.root.version current = os.path.join("root_history", f"{version}.root.json") with contextlib.suppress(FileNotFoundError): os.remove(linkname) os.symlink(current, linkname) def _load_root(self) -> None: """Load root metadata. Sequentially load newer root metadata versions. First try to load from local cache and if that does not work, from the remote repository. If metadata is loaded from remote repository, store it in local cache. """ # Update the root role lower_bound = self._trusted_set.root.version + 1 upper_bound = lower_bound + self.config.max_root_rotations try: for next_version in range(lower_bound, upper_bound): # look for next_version in local cache try: root_path = os.path.join( self._dir, "root_history", f"{next_version}.root.json" ) with open(root_path, "rb") as f: self._trusted_set.update_root(f.read()) continue except (OSError, exceptions.RepositoryError) as e: # this root did not exist locally or is invalid logger.debug("Local root is not valid: %s", e) # next_version was not found locally, try remote try: data = self._download_metadata( Root.type, self.config.root_max_length, next_version, ) self._trusted_set.update_root(data) self._persist_root(next_version, data) except exceptions.DownloadHTTPError as exception: if exception.status_code not in {403, 404}: raise # 404/403 means current root is newest available break finally: # Make sure the non-versioned root.json links to current version self._update_root_symlink() def _load_timestamp(self) -> None: """Load local and remote timestamp metadata.""" try: data = self._load_local_metadata(Timestamp.type) self._trusted_set.update_timestamp(data) except (OSError, exceptions.RepositoryError) as e: # Local timestamp does not exist or is invalid logger.debug("Local timestamp not valid as final: %s", e) # Load from remote (whether local load succeeded or not) data = self._download_metadata( Timestamp.type, self.config.timestamp_max_length ) try: self._trusted_set.update_timestamp(data) except exceptions.EqualVersionNumberError: # If the new timestamp version is the same as current, discard the # new timestamp. This is normal and it shouldn't raise any error. 
return self._persist_metadata(Timestamp.type, data) def _load_snapshot(self) -> None: """Load local (and if needed remote) snapshot metadata.""" try: data = self._load_local_metadata(Snapshot.type) self._trusted_set.update_snapshot(data, trusted=True) logger.debug("Local snapshot is valid: not downloading new one") except (OSError, exceptions.RepositoryError) as e: # Local snapshot does not exist or is invalid: update from remote logger.debug("Local snapshot not valid as final: %s", e) snapshot_meta = self._trusted_set.timestamp.snapshot_meta length = snapshot_meta.length or self.config.snapshot_max_length version = None if self._trusted_set.root.consistent_snapshot: version = snapshot_meta.version data = self._download_metadata(Snapshot.type, length, version) self._trusted_set.update_snapshot(data) self._persist_metadata(Snapshot.type, data) def _load_targets(self, role: str, parent_role: str) -> Targets: """Load local (and if needed remote) metadata for ``role``.""" # Avoid loading 'role' more than once during "get_targetinfo" if role in self._trusted_set: return cast(Targets, self._trusted_set[role]) try: data = self._load_local_metadata(role) delegated_targets = self._trusted_set.update_delegated_targets( data, role, parent_role ) logger.debug("Local %s is valid: not downloading new one", role) return delegated_targets except (OSError, exceptions.RepositoryError) as e: # Local 'role' does not exist or is invalid: update from remote logger.debug("Failed to load local %s: %s", role, e) snapshot = self._trusted_set.snapshot metainfo = snapshot.meta.get(f"{role}.json") if metainfo is None: raise exceptions.RepositoryError( f"Role {role} was delegated but is not part of snapshot" ) from None length = metainfo.length or self.config.targets_max_length version = None if self._trusted_set.root.consistent_snapshot: version = metainfo.version data = self._download_metadata(role, length, version) delegated_targets = self._trusted_set.update_delegated_targets( data, role, parent_role ) self._persist_metadata(role, data) return delegated_targets def _preorder_depth_first_walk( self, target_filepath: str ) -> TargetFile | None: """ Interrogates the tree of target delegations in order of appearance (which implicitly order trustworthiness), and returns the matching target found in the most trusted role. """ # List of delegations to be interrogated. A (role, parent role) pair # is needed to load and verify the delegated targets metadata. delegations_to_visit = [(Targets.type, Root.type)] visited_role_names: set[str] = set() # Preorder depth-first traversal of the graph of target delegations. while ( len(visited_role_names) <= self.config.max_delegations and len(delegations_to_visit) > 0 ): # Pop the role name from the top of the stack. role_name, parent_role = delegations_to_visit.pop(-1) # Skip any visited current role to prevent cycles. if role_name in visited_role_names: logger.debug("Skipping visited current role %s", role_name) continue # The metadata for 'role_name' must be downloaded/updated before # its targets, delegations, and child roles can be inspected. targets = self._load_targets(role_name, parent_role) target = targets.targets.get(target_filepath) if target is not None: logger.debug("Found target in current role %s", role_name) return target # After preorder check, add current role to set of visited roles. visited_role_names.add(role_name) if targets.delegations is not None: child_roles_to_visit = [] # NOTE: This may be a slow operation if there are many # delegated roles. 
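                # Illustration (hypothetical delegation tree, not part of the
                # implementation): suppose "targets" delegates paths matching
                # target_filepath to "a" (terminating) and to "b", and "a"
                # further delegates to "a1". The resulting visit order is
                # targets -> a -> a1: "b" is never visited, because the
                # terminating "a" empties delegations_to_visit below.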
for ( child_name, terminating, ) in targets.delegations.get_roles_for_target(target_filepath): logger.debug("Adding child role %s", child_name) child_roles_to_visit.append((child_name, role_name)) if terminating: logger.debug("Not backtracking to other roles") delegations_to_visit = [] break # Push 'child_roles_to_visit' in reverse order of appearance # onto 'delegations_to_visit'. Roles are popped from the end of # the list. child_roles_to_visit.reverse() delegations_to_visit.extend(child_roles_to_visit) if len(delegations_to_visit) > 0: logger.debug( "%d roles left to visit, but allowed at most %d delegations", len(delegations_to_visit), self.config.max_delegations, ) # If this point is reached then target is not found, return None return None def _ensure_trailing_slash(url: str) -> str: """Return url guaranteed to end in a slash.""" return url if url.endswith("/") else f"{url}/" tuf-6.0.0/tuf/ngclient/urllib3_fetcher.py0000644000000000000000000000660313615410400015267 0ustar00# Copyright 2021, New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Provides an implementation of ``FetcherInterface`` using the urllib3 HTTP library. """ from __future__ import annotations import logging from typing import TYPE_CHECKING # Imports import urllib3 import tuf from tuf.api import exceptions from tuf.ngclient._internal.proxy import ProxyEnvironment from tuf.ngclient.fetcher import FetcherInterface if TYPE_CHECKING: from collections.abc import Iterator # Globals logger = logging.getLogger(__name__) # Classes class Urllib3Fetcher(FetcherInterface): """An implementation of ``FetcherInterface`` based on the urllib3 library. Attributes: socket_timeout: Timeout in seconds, used for both initial connection delay and the maximum delay between bytes received. chunk_size: Chunk size in bytes used when downloading. """ def __init__( self, socket_timeout: int = 30, chunk_size: int = 400000, app_user_agent: str | None = None, ) -> None: # Default settings self.socket_timeout: int = socket_timeout # seconds self.chunk_size: int = chunk_size # bytes # Create User-Agent. ua = f"python-tuf/{tuf.__version__}" if app_user_agent is not None: ua = f"{app_user_agent} {ua}" self._proxy_env = ProxyEnvironment(headers={"User-Agent": ua}) def _fetch(self, url: str) -> Iterator[bytes]: """Fetch the contents of HTTP/HTTPS url from a remote server. Args: url: URL string that represents a file location. Raises: exceptions.SlowRetrievalError: Timeout occurs while receiving data. exceptions.DownloadHTTPError: HTTP error code is received. Returns: Bytes iterator """ # Defer downloading the response body with preload_content=False. # Always set the timeout. This timeout value is interpreted by # urllib3 as: # - connect timeout (max delay before first byte is received) # - read (gap) timeout (max delay between bytes received) try: response = self._proxy_env.request( "GET", url, preload_content=False, timeout=urllib3.Timeout(self.socket_timeout), ) except urllib3.exceptions.MaxRetryError as e: if isinstance(e.reason, urllib3.exceptions.TimeoutError): raise exceptions.SlowRetrievalError from e if response.status >= 400: response.close() raise exceptions.DownloadHTTPError( f"HTTP error occurred with status {response.status}", response.status, ) return self._chunks(response) def _chunks( self, response: urllib3.response.BaseHTTPResponse ) -> Iterator[bytes]: """A generator function to be returned by fetch. This way the caller of fetch can differentiate between connection and actual data download. 
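        A rough sketch from the caller's point of view (the URL is a
        placeholder, not a real endpoint):

        >>> fetcher = Urllib3Fetcher()
        >>> # connection errors are raised here, by fetch()/_fetch()
        >>> chunks = fetcher.fetch("https://example.com/metadata/1.root.json")
        >>> # download errors are raised while the chunks are consumed
        >>> data = b"".join(chunks)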
""" try: yield from response.stream(self.chunk_size) except urllib3.exceptions.MaxRetryError as e: if isinstance(e.reason, urllib3.exceptions.TimeoutError): raise exceptions.SlowRetrievalError from e finally: response.release_conn() tuf-6.0.0/tuf/ngclient/_internal/__init__.py0000644000000000000000000000000013615410400015706 0ustar00tuf-6.0.0/tuf/ngclient/_internal/proxy.py0000644000000000000000000000657113615410400015353 0ustar00# Copyright New York University and the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Proxy environment variable handling with Urllib3""" from __future__ import annotations from typing import Any from urllib.request import getproxies from urllib3 import BaseHTTPResponse, PoolManager, ProxyManager from urllib3.util.url import parse_url # TODO: ProxyEnvironment could implement the whole PoolManager.RequestMethods # Mixin: We only need request() so nothing else is currently implemented class ProxyEnvironment: """A PoolManager manager for automatic proxy handling based on env variables Keeps track of PoolManagers for different proxy urls based on proxy environment variables. Use `get_pool_manager()` or `request()` to access the right manager for a scheme/host. Supports '*_proxy' variables, with special handling for 'no_proxy' and 'all_proxy'. """ def __init__( self, **kw_args: Any, # noqa: ANN401 ) -> None: self._pool_managers: dict[str | None, PoolManager] = {} self._kw_args = kw_args self._proxies = getproxies() self._all_proxy = self._proxies.pop("all", None) no_proxy = self._proxies.pop("no", None) if no_proxy is None: self._no_proxy_hosts = [] else: # split by comma, remove leading periods self._no_proxy_hosts = [ h.lstrip(".") for h in no_proxy.replace(" ", "").split(",") if h ] def _get_proxy(self, scheme: str | None, host: str | None) -> str | None: """Get a proxy url for scheme and host based on proxy env variables""" if host is None: # urllib3 only handles http/https but we can do something reasonable # even for schemes that don't require host (like file) return None # does host match any of the "no_proxy" hosts? for no_proxy_host in self._no_proxy_hosts: # wildcard match, exact hostname match, or parent domain match if no_proxy_host in ("*", host) or host.endswith( f".{no_proxy_host}" ): return None if scheme in self._proxies: return self._proxies[scheme] if self._all_proxy is not None: return self._all_proxy return None def get_pool_manager( self, scheme: str | None, host: str | None ) -> PoolManager: """Get a poolmanager for scheme and host. Returns a ProxyManager if that is correct based on current proxy env variables, otherwise returns a PoolManager """ proxy = self._get_proxy(scheme, host) if proxy not in self._pool_managers: if proxy is None: self._pool_managers[proxy] = PoolManager(**self._kw_args) else: self._pool_managers[proxy] = ProxyManager( proxy, **self._kw_args, ) return self._pool_managers[proxy] def request( self, method: str, url: str, **request_kw: Any, # noqa: ANN401 ) -> BaseHTTPResponse: """Make a request using a PoolManager chosen based on url and proxy environment variables. """ u = parse_url(url) manager = self.get_pool_manager(u.scheme, u.host) return manager.request(method, url, **request_kw) tuf-6.0.0/tuf/ngclient/_internal/trusted_metadata_set.py0000644000000000000000000004724313615410400020400 0ustar00# Copyright the TUF contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Trusted collection of client-side TUF Metadata. 
``TrustedMetadataSet`` keeps track of the current valid set of metadata for the client, and handles almost every step of the "Detailed client workflow" ( https://theupdateframework.github.io/specification/latest#detailed-client-workflow) in the TUF specification: the remaining steps are related to filesystem and network IO, which are not handled here. Loaded metadata can be accessed via index access with rolename as key (``trusted_set[Root.type]``) or, in the case of top-level metadata, using the helper properties (``trusted_set.root``). Signatures are verified and discarded upon inclusion into the trusted set. The rules that ``TrustedMetadataSet`` follows for top-level metadata are * Metadata must be loaded in order: root -> timestamp -> snapshot -> targets -> (delegated targets). * Metadata can be loaded even if it is expired (or in the snapshot case if the meta info does not match): this is called "intermediate metadata". * Intermediate metadata can _only_ be used to load newer versions of the same metadata: As an example an expired root can be used to load a new root. * Metadata is loadable only if metadata before it in loading order is loaded (and is not intermediate): As an example timestamp can be loaded if a final (non-expired) root has been loaded. * Metadata is not loadable if any metadata after it in loading order has been loaded: As an example new roots cannot be loaded if timestamp is loaded. Exceptions are raised if metadata fails to load in any way. Example of loading root, timestamp and snapshot: >>> # Load local root (RepositoryErrors here stop the update) >>> with open(root_path, "rb") as f: >>> trusted_set = TrustedMetadataSet(f.read(), EnvelopeType.METADATA) >>> >>> # update root from remote until no more are available >>> with download(Root.type, trusted_set.root.version + 1) as f: >>> trusted_set.update_root(f.read()) >>> >>> # load local timestamp, then update from remote >>> try: >>> with open(timestamp_path, "rb") as f: >>> trusted_set.update_timestamp(f.read()) >>> except (RepositoryError, OSError): >>> pass # failure to load a local file is ok >>> >>> with download(Timestamp.type) as f: >>> trusted_set.update_timestamp(f.read()) >>> >>> # load local snapshot, then update from remote if needed >>> try: >>> with open(snapshot_path, "rb") as f: >>> trusted_set.update_snapshot(f.read()) >>> except (RepositoryError, OSError): >>> # local snapshot is not valid, load from remote >>> # (RepositoryErrors here stop the update) >>> with download(Snapshot.type, version) as f: >>> trusted_set.update_snapshot(f.read()) """ from __future__ import annotations import datetime import logging from collections import abc from typing import TYPE_CHECKING, Union, cast from tuf.api import exceptions from tuf.api.dsse import SimpleEnvelope from tuf.api.metadata import ( Metadata, Root, Signed, Snapshot, T, Targets, Timestamp, ) from tuf.ngclient.config import EnvelopeType if TYPE_CHECKING: from collections.abc import Iterator from securesystemslib.signer import Signature logger = logging.getLogger(__name__) Delegator = Union[Root, Targets] class TrustedMetadataSet(abc.Mapping): """Internal class to keep track of trusted metadata in ``Updater``. ``TrustedMetadataSet`` ensures that the collection of metadata in it is valid and trusted through the whole client update workflow. It provides easy ways to update the metadata with the caller making decisions on what is updated. 
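    A minimal sketch of typical use (``root_bytes``, ``timestamp_bytes`` and
    ``snapshot_bytes`` are assumed to contain valid metadata for the same
    repository):

    >>> trusted = TrustedMetadataSet(root_bytes, EnvelopeType.METADATA)
    >>> trusted.update_timestamp(timestamp_bytes)
    >>> trusted.update_snapshot(snapshot_bytes)
    >>> trusted.root.version        # helper property access
    >>> trusted[Snapshot.type]      # index access with rolename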
""" def __init__(self, root_data: bytes, envelope_type: EnvelopeType): """Initialize ``TrustedMetadataSet`` by loading trusted root metadata. Args: root_data: Trusted root metadata as bytes. Note that this metadata will only be verified by itself: it is the source of trust for all metadata in the ``TrustedMetadataSet`` envelope_type: Configures deserialization and verification mode of TUF metadata. Raises: RepositoryError: Metadata failed to load or verify. The actual error type and content will contain more details. """ self._trusted_set: dict[str, Signed] = {} self.reference_time = datetime.datetime.now(datetime.timezone.utc) if envelope_type is EnvelopeType.SIMPLE: self._load_data = _load_from_simple_envelope else: self._load_data = _load_from_metadata # Load and validate the local root metadata. Valid initial trusted root # metadata is required logger.debug("Updating initial trusted root") self._load_trusted_root(root_data) def __getitem__(self, role: str) -> Signed: """Return current ``Signed`` for ``role``.""" return self._trusted_set[role] def __len__(self) -> int: """Return number of ``Signed`` objects in ``TrustedMetadataSet``.""" return len(self._trusted_set) def __iter__(self) -> Iterator[Signed]: """Return iterator over ``Signed`` objects in ``TrustedMetadataSet``. """ return iter(self._trusted_set.values()) # Helper properties for top level metadata @property def root(self) -> Root: """Get current root.""" return cast(Root, self._trusted_set[Root.type]) @property def timestamp(self) -> Timestamp: """Get current timestamp.""" return cast(Timestamp, self._trusted_set[Timestamp.type]) @property def snapshot(self) -> Snapshot: """Get current snapshot.""" return cast(Snapshot, self._trusted_set[Snapshot.type]) @property def targets(self) -> Targets: """Get current top-level targets.""" return cast(Targets, self._trusted_set[Targets.type]) # Methods for updating metadata def update_root(self, data: bytes) -> Root: """Verify and load ``data`` as new root metadata. Note that an expired intermediate root is considered valid: expiry is only checked for the final root in ``update_timestamp()``. Args: data: Unverified new root metadata as bytes Raises: RuntimeError: This function is called after updating timestamp. RepositoryError: Metadata failed to load or verify. The actual error type and content will contain more details. Returns: Deserialized and verified ``Root`` object """ if Timestamp.type in self._trusted_set: raise RuntimeError("Cannot update root after timestamp") logger.debug("Updating root") new_root, new_root_bytes, new_root_signatures = self._load_data( Root, data, self.root ) if new_root.version != self.root.version + 1: raise exceptions.BadVersionNumberError( f"Expected root version {self.root.version + 1}" f" instead got version {new_root.version}" ) # Verify that new root is signed by itself new_root.verify_delegate(Root.type, new_root_bytes, new_root_signatures) self._trusted_set[Root.type] = new_root logger.debug("Updated root v%d", new_root.version) return new_root def update_timestamp(self, data: bytes) -> Timestamp: """Verify and load ``data`` as new timestamp metadata. Note that an intermediate timestamp is allowed to be expired: ``TrustedMetadataSet`` will throw an ``ExpiredMetadataError`` in this case but the intermediate timestamp will be loaded. This way a newer timestamp can still be loaded (and the intermediate timestamp will be used for rollback protection). Expired timestamp will prevent loading snapshot metadata. 
Args: data: Unverified new timestamp metadata as bytes Raises: RuntimeError: This function is called after updating snapshot. RepositoryError: Metadata failed to load or verify as final timestamp. The actual error type and content will contain more details. Returns: Deserialized and verified ``Timestamp`` object """ if Snapshot.type in self._trusted_set: raise RuntimeError("Cannot update timestamp after snapshot") # client workflow 5.3.10: Make sure final root is not expired. if self.root.is_expired(self.reference_time): raise exceptions.ExpiredMetadataError("Final root.json is expired") # No need to check for 5.3.11 (fast forward attack recovery): # timestamp/snapshot can not yet be loaded at this point new_timestamp, _, _ = self._load_data(Timestamp, data, self.root) # If an existing trusted timestamp is updated, # check for a rollback attack if Timestamp.type in self._trusted_set: # Prevent rolling back timestamp version if new_timestamp.version < self.timestamp.version: raise exceptions.BadVersionNumberError( f"New timestamp version {new_timestamp.version} must" f" be >= {self.timestamp.version}" ) # Keep using old timestamp if versions are equal. if new_timestamp.version == self.timestamp.version: raise exceptions.EqualVersionNumberError # Prevent rolling back snapshot version snapshot_meta = self.timestamp.snapshot_meta new_snapshot_meta = new_timestamp.snapshot_meta if new_snapshot_meta.version < snapshot_meta.version: raise exceptions.BadVersionNumberError( f"New snapshot version must be >= {snapshot_meta.version}" f", got version {new_snapshot_meta.version}" ) # expiry not checked to allow old timestamp to be used for rollback # protection of new timestamp: expiry is checked in update_snapshot() self._trusted_set[Timestamp.type] = new_timestamp logger.debug("Updated timestamp v%d", new_timestamp.version) # timestamp is loaded: raise if it is not valid _final_ timestamp self._check_final_timestamp() return new_timestamp def _check_final_timestamp(self) -> None: """Raise if timestamp is expired.""" if self.timestamp.is_expired(self.reference_time): raise exceptions.ExpiredMetadataError("timestamp.json is expired") def update_snapshot( self, data: bytes, trusted: bool | None = False ) -> Snapshot: """Verify and load ``data`` as new snapshot metadata. Note that an intermediate snapshot is allowed to be expired and version is allowed to not match timestamp meta version: ``TrustedMetadataSet`` will throw an ``ExpiredMetadataError``/``BadVersionNumberError`` in these cases but the intermediate snapshot will be loaded. This way a newer snapshot can still be loaded (and the intermediate snapshot will be used for rollback protection). Expired snapshot or snapshot that does not match timestamp meta version will prevent loading targets. Args: data: Unverified new snapshot metadata as bytes trusted: ``True`` if data has at some point been verified by ``TrustedMetadataSet`` as a valid snapshot. Purpose of trusted is to allow loading of locally stored snapshot as intermediate snapshot even if hashes in current timestamp meta no longer match data. Default is False. Raises: RuntimeError: This function is called before updating timestamp or after updating targets. RepositoryError: Data failed to load or verify as final snapshot. The actual error type and content will contain more details. 
Returns: Deserialized and verified ``Snapshot`` object """ if Timestamp.type not in self._trusted_set: raise RuntimeError("Cannot update snapshot before timestamp") if Targets.type in self._trusted_set: raise RuntimeError("Cannot update snapshot after targets") logger.debug("Updating snapshot") # Snapshot cannot be loaded if final timestamp is expired self._check_final_timestamp() snapshot_meta = self.timestamp.snapshot_meta # Verify non-trusted data against the hashes in timestamp, if any. # Trusted snapshot data has already been verified once. if not trusted: snapshot_meta.verify_length_and_hashes(data) new_snapshot, _, _ = self._load_data(Snapshot, data, self.root) # version not checked against meta version to allow old snapshot to be # used in rollback protection: it is checked when targets is updated # If an existing trusted snapshot is updated, check for rollback attack if Snapshot.type in self._trusted_set: for filename, fileinfo in self.snapshot.meta.items(): new_fileinfo = new_snapshot.meta.get(filename) # Prevent removal of any metadata in meta if new_fileinfo is None: raise exceptions.RepositoryError( f"New snapshot is missing info for '{filename}'" ) # Prevent rollback of any metadata versions if new_fileinfo.version < fileinfo.version: raise exceptions.BadVersionNumberError( f"Expected {filename} version " f"{fileinfo.version} or higher, got {new_fileinfo.version}." ) # expiry not checked to allow old snapshot to be used for rollback # protection of new snapshot: it is checked when targets is updated self._trusted_set[Snapshot.type] = new_snapshot logger.debug("Updated snapshot v%d", new_snapshot.version) # snapshot is loaded, but we raise if it's not valid _final_ snapshot self._check_final_snapshot() return new_snapshot def _check_final_snapshot(self) -> None: """Raise if snapshot is expired or meta version does not match.""" if self.snapshot.is_expired(self.reference_time): raise exceptions.ExpiredMetadataError("snapshot.json is expired") snapshot_meta = self.timestamp.snapshot_meta if self.snapshot.version != snapshot_meta.version: raise exceptions.BadVersionNumberError( f"Expected snapshot version {snapshot_meta.version}, " f"got {self.snapshot.version}" ) def update_targets(self, data: bytes) -> Targets: """Verify and load ``data`` as new top-level targets metadata. Args: data: Unverified new targets metadata as bytes Raises: RepositoryError: Metadata failed to load or verify. The actual error type and content will contain more details. Returns: Deserialized and verified ``Targets`` object """ return self.update_delegated_targets(data, Targets.type, Root.type) def update_delegated_targets( self, data: bytes, role_name: str, delegator_name: str ) -> Targets: """Verify and load ``data`` as new metadata for target ``role_name``. Args: data: Unverified new metadata as bytes role_name: Role name of the new metadata delegator_name: Name of the role delegating to the new metadata Raises: RuntimeError: This function is called before updating snapshot. RepositoryError: Metadata failed to load or verify. The actual error type and content will contain more details. 
Returns: Deserialized and verified ``Targets`` object """ if Snapshot.type not in self._trusted_set: raise RuntimeError("Cannot load targets before snapshot") # Targets cannot be loaded if final snapshot is expired or its version # does not match meta version in timestamp self._check_final_snapshot() delegator: Delegator | None = self.get(delegator_name) if delegator is None: raise RuntimeError("Cannot load targets before delegator") logger.debug("Updating %s delegated by %s", role_name, delegator_name) # Verify against the hashes in snapshot, if any meta = self.snapshot.meta.get(f"{role_name}.json") if meta is None: raise exceptions.RepositoryError( f"Snapshot does not contain information for '{role_name}'" ) meta.verify_length_and_hashes(data) new_delegate, _, _ = self._load_data( Targets, data, delegator, role_name ) version = new_delegate.version if version != meta.version: raise exceptions.BadVersionNumberError( f"Expected {role_name} v{meta.version}, got v{version}." ) if new_delegate.is_expired(self.reference_time): raise exceptions.ExpiredMetadataError(f"New {role_name} is expired") self._trusted_set[role_name] = new_delegate logger.debug("Updated %s v%d", role_name, version) return new_delegate def _load_trusted_root(self, data: bytes) -> None: """Verify and load ``data`` as trusted root metadata. Note that an expired initial root is considered valid: expiry is only checked for the final root in ``update_timestamp()``. """ new_root, new_root_bytes, new_root_signatures = self._load_data( Root, data ) new_root.verify_delegate(Root.type, new_root_bytes, new_root_signatures) self._trusted_set[Root.type] = new_root logger.debug("Loaded trusted root v%d", new_root.version) def _load_from_metadata( role: type[T], data: bytes, delegator: Delegator | None = None, role_name: str | None = None, ) -> tuple[T, bytes, dict[str, Signature]]: """Load traditional metadata bytes, and extract and verify payload. If no delegator is passed, verification is skipped. Returns a tuple of deserialized payload, signed payload bytes, and signatures. """ md = Metadata[T].from_bytes(data) if md.signed.type != role.type: raise exceptions.RepositoryError( f"Expected '{role.type}', got '{md.signed.type}'" ) if delegator: if role_name is None: role_name = role.type delegator.verify_delegate(role_name, md.signed_bytes, md.signatures) return md.signed, md.signed_bytes, md.signatures def _load_from_simple_envelope( role: type[T], data: bytes, delegator: Delegator | None = None, role_name: str | None = None, ) -> tuple[T, bytes, dict[str, Signature]]: """Load simple envelope bytes, and extract and verify payload. If no delegator is passed, verification is skipped. Returns a tuple of deserialized payload, signed payload bytes, and signatures. 
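    A sketch of how a caller might use the result (``data`` is assumed to
    contain DSSE envelope bytes for top-level targets, signed with keys that
    ``root`` trusts for that role):

    >>> signed, payload, sigs = _load_from_simple_envelope(
    >>>     Targets, data, delegator=root
    >>> )
    >>> signed.version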
""" envelope = SimpleEnvelope[T].from_bytes(data) if envelope.payload_type != SimpleEnvelope.DEFAULT_PAYLOAD_TYPE: raise exceptions.RepositoryError( f"Expected '{SimpleEnvelope.DEFAULT_PAYLOAD_TYPE}', " f"got '{envelope.payload_type}'" ) if delegator: if role_name is None: role_name = role.type delegator.verify_delegate( role_name, envelope.pae(), envelope.signatures ) signed = envelope.get_signed() if signed.type != role.type: raise exceptions.RepositoryError( f"Expected '{role.type}', got '{signed.type}'" ) return signed, envelope.pae(), envelope.signatures tuf-6.0.0/tuf/repository/__init__.py0000644000000000000000000000100213615410400014352 0ustar00# Copyright 2021-2022 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Repository API: A helper library for repository implementations This module is intended to make any "metadata editing" applications easier to implement: this includes repository applications, CI integration components as well as developer and signing tools. The repository module is not considered part of the stable python-tuf API yet. """ from tuf.repository._repository import AbortEdit, Repository # noqa: F401 tuf-6.0.0/tuf/repository/_repository.py0000644000000000000000000002352613615410400015210 0ustar00# Copyright 2021-2022 python-tuf contributors # SPDX-License-Identifier: MIT OR Apache-2.0 """Repository Abstraction for metadata management""" from __future__ import annotations import logging from abc import ABC, abstractmethod from contextlib import contextmanager, suppress from copy import deepcopy from typing import TYPE_CHECKING from tuf.api.exceptions import UnsignedMetadataError from tuf.api.metadata import ( Metadata, MetaFile, Root, Signed, Snapshot, Targets, Timestamp, ) if TYPE_CHECKING: from collections.abc import Generator logger = logging.getLogger(__name__) class AbortEdit(Exception): # noqa: N818 """Raise to exit the edit() contextmanager without saving changes""" class Repository(ABC): """Abstract class for metadata modifying implementations NOTE: The repository module is not considered part of the python-tuf stable API yet. This class is intended to be a base class used in any metadata editing application, whether it is a real repository server or a developer tool. Implementations must implement open() and close(), and can then use the edit() contextmanager to implement actual operations. Note that signing an already existing version of metadata (as could be done for threshold signing) does not fit into this model of open()+close() or edit(). A few operations (snapshot and timestamp) are already implemented in this base class. """ @abstractmethod def open(self, role: str) -> Metadata: """Load a roles metadata from storage or cache, return it If role has no metadata, create first version from scratch. """ raise NotImplementedError @abstractmethod def close(self, role: str, md: Metadata) -> None: """Write roles metadata into storage Update expiry and version and replace signatures with ones from all available keys. Keep snapshot_info and targets_infos updated. """ raise NotImplementedError @property def targets_infos(self) -> dict[str, MetaFile]: """Returns the MetaFiles for current targets metadatas This property is used by do_snapshot() to update Snapshot.meta: Repository implementations should override this property to enable do_snapshot(). 
Note that there is a difference between this return value and Snapshot.meta: This dictionary reflects the targets metadata that currently exists in the repository but Snapshot.meta also includes metadata that used to exist, but no longer exists, in the repository. """ raise NotImplementedError @property def snapshot_info(self) -> MetaFile: """Returns the MetaFile for current snapshot metadata This property is used by do_timestamp() to update Timestamp.meta: Repository implementations should override this property to enable do_timestamp(). """ raise NotImplementedError @contextmanager def edit(self, role: str) -> Generator[Signed, None, None]: """Context manager for editing a role's metadata Context manager takes care of loading the roles metadata (or creating new metadata), updating expiry and version. The caller can do other changes to the Signed object and when the context manager exits, a new version of the roles metadata is stored. Context manager user can raise AbortEdit from inside the with-block to cancel the edit: in this case none of the changes are stored. """ md = self.open(role) with suppress(AbortEdit): yield md.signed self.close(role, md) @contextmanager def edit_root(self) -> Generator[Root, None, None]: """Context manager for editing root metadata. See edit()""" with self.edit(Root.type) as root: if not isinstance(root, Root): raise AssertionError("Unexpected root type") yield root @contextmanager def edit_timestamp(self) -> Generator[Timestamp, None, None]: """Context manager for editing timestamp metadata. See edit()""" with self.edit(Timestamp.type) as timestamp: if not isinstance(timestamp, Timestamp): raise AssertionError("Unexpected timestamp type") yield timestamp @contextmanager def edit_snapshot(self) -> Generator[Snapshot, None, None]: """Context manager for editing snapshot metadata. See edit()""" with self.edit(Snapshot.type) as snapshot: if not isinstance(snapshot, Snapshot): raise AssertionError("Unexpected snapshot type") yield snapshot @contextmanager def edit_targets( self, rolename: str = Targets.type ) -> Generator[Targets, None, None]: """Context manager for editing targets metadata. See edit()""" with self.edit(rolename) as targets: if not isinstance(targets, Targets): raise AssertionError(f"Unexpected targets ({rolename}) type") yield targets def root(self) -> Root: """Read current root metadata""" root = self.open(Root.type).signed if not isinstance(root, Root): raise AssertionError("Unexpected root type") return root def timestamp(self) -> Timestamp: """Read current timestamp metadata""" timestamp = self.open(Timestamp.type).signed if not isinstance(timestamp, Timestamp): raise AssertionError("Unexpected timestamp type") return timestamp def snapshot(self) -> Snapshot: """Read current snapshot metadata""" snapshot = self.open(Snapshot.type).signed if not isinstance(snapshot, Snapshot): raise AssertionError("Unexpected snapshot type") return snapshot def targets(self, rolename: str = Targets.type) -> Targets: """Read current targets metadata""" targets = self.open(rolename).signed if not isinstance(targets, Targets): raise AssertionError("Unexpected targets type") return targets def do_snapshot( self, force: bool = False ) -> tuple[bool, dict[str, MetaFile]]: """Update snapshot meta information Updates the snapshot meta information according to current targets metadata state and the current snapshot meta information. Arguments: force: should new snapshot version be created even if meta information would not change? 
Returns: Tuple of - True if snapshot was created, False if not - MetaFiles for targets versions removed from snapshot meta """ # Snapshot update is needed if # * any targets files are not yet in snapshot or # * any targets version is incorrect update_version = force removed: dict[str, MetaFile] = {} root = self.root() snapshot_md = self.open(Snapshot.type) try: root.verify_delegate( Snapshot.type, snapshot_md.signed_bytes, snapshot_md.signatures, ) except UnsignedMetadataError: update_version = True with self.edit_snapshot() as snapshot: for keyname, new_meta in self.targets_infos.items(): if keyname not in snapshot.meta: update_version = True snapshot.meta[keyname] = deepcopy(new_meta) continue old_meta = snapshot.meta[keyname] if new_meta.version < old_meta.version: raise ValueError(f"{keyname} version rollback") if new_meta.version > old_meta.version: update_version = True snapshot.meta[keyname] = deepcopy(new_meta) removed[keyname] = old_meta if not update_version: # prevent edit_snapshot() from storing a new version raise AbortEdit("Skip snapshot: No targets version changes") if not update_version: # this is reachable as edit_snapshot() handles AbortEdit logger.debug("Snapshot update not needed") # type: ignore[unreachable] else: logger.debug("Snapshot v%d", snapshot.version) return update_version, removed def do_timestamp(self, force: bool = False) -> tuple[bool, MetaFile | None]: """Update timestamp meta information Updates timestamp according to current snapshot state Returns: Tuple of - True if timestamp was created, False if not - MetaFile for snapshot version removed from timestamp (if any) """ update_version = force removed = None root = self.root() timestamp_md = self.open(Timestamp.type) try: root.verify_delegate( Timestamp.type, timestamp_md.signed_bytes, timestamp_md.signatures, ) except UnsignedMetadataError: update_version = True with self.edit_timestamp() as timestamp: if self.snapshot_info.version < timestamp.snapshot_meta.version: raise ValueError("snapshot version rollback") if self.snapshot_info.version > timestamp.snapshot_meta.version: update_version = True removed = timestamp.snapshot_meta timestamp.snapshot_meta = deepcopy(self.snapshot_info) if not update_version: raise AbortEdit("Skip timestamp: No snapshot version changes") if not update_version: # this is reachable as edit_timestamp() handles AbortEdit logger.debug("Timestamp update not needed") # type: ignore[unreachable] else: logger.debug("Timestamp v%d", timestamp.version) return update_version, removed tuf-6.0.0/.gitignore0000644000000000000000000000060313615410400011222 0ustar00# root level directories dist/ build/ env/ # docs build directory docs/build/ # global file patterns *.log *.pyc *.session *.swo *.swp *.egg-info .coverage .tox/ tests/htmlcov/ .DS_Store .pybuild/ .python-version *~ *.tmp .pre-commit-config.yaml .vscode # Debian generated files debian/.debhelper/ debian/*-stamp debian/files debian/*.debhelper debian/*.substvars debian/python*-tuf/ tuf-6.0.0/LICENSE0000644000000000000000000002273613615410400010252 0ustar00 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. tuf-6.0.0/LICENSE-MIT0000644000000000000000000000207613615410400010674 0ustar00The MIT License (MIT) Copyright (c) 2010 New York University Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
tuf-6.0.0/README.md0000644000000000000000000001467013615410400010522 0ustar00# TUF A Framework for Securing Software Update Systems [![CI badge](https://github.com/theupdateframework/python-tuf/actions/workflows/ci.yml/badge.svg)](https://github.com/theupdateframework/python-tuf/actions/workflows/ci.yml) [![Conformance badge](https://github.com/theupdateframework/python-tuf/actions/workflows/conformance.yml/badge.svg)](https://github.com/theupdateframework/python-tuf/actions/workflows/conformance.yml) [![Coveralls badge](https://coveralls.io/repos/theupdateframework/python-tuf/badge.svg?branch=develop)](https://coveralls.io/r/theupdateframework/python-tuf?branch=develop) [![Docs badge](https://readthedocs.org/projects/theupdateframework/badge/)](https://theupdateframework.readthedocs.io/) [![CII badge](https://bestpractices.coreinfrastructure.org/projects/1351/badge)](https://bestpractices.coreinfrastructure.org/projects/1351) [![PyPI badge](https://img.shields.io/pypi/v/tuf)](https://pypi.org/project/tuf/) [![Scorecard badge](https://api.scorecard.dev/projects/github.com/theupdateframework/python-tuf/badge)](https://scorecard.dev/viewer/?uri=github.com/theupdateframework/python-tuf) ---------------------------- [The Update Framework (TUF)](https://theupdateframework.io/) is a framework for secure content delivery and updates. It protects against various types of supply chain attacks and provides resilience to compromise. This repository is a **reference implementation** written in Python. It is intended to conform to version 1.0 of the [TUF specification](https://theupdateframework.github.io/specification/latest/). Python-TUF provides the following APIs: * [`tuf.api.metadata`](https://theupdateframework.readthedocs.io/en/latest/api/tuf.api.html), a "low-level" API, designed to provide easy and safe access to TUF metadata and to handle (de)serialization from/to files. * [`tuf.ngclient`](https://theupdateframework.readthedocs.io/en/latest/api/tuf.ngclient.html), a client implementation built on top of the metadata API. * `tuf.repository`, a repository library also built on top of the metadata API. This module is currently not considered part of python-tuf stable API. The reference implementation strives to be a readable guide and demonstration for those working on implementing TUF in their own languages, environments, or update systems. About The Update Framework -------------------------- The Update Framework (TUF) design helps developers maintain the security of a software update system, even against attackers that compromise the repository or signing keys. TUF provides a flexible [specification](https://github.com/theupdateframework/specification/blob/master/tuf-spec.md) defining functionality that developers can use in any software update system or re-implement to fit their needs. TUF is hosted by the [Linux Foundation](https://www.linuxfoundation.org/) as part of the [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) and its design is [used in production](https://theupdateframework.io/adoptions/) by various tech companies and open source organizations. A variant of TUF called [Uptane](https://uptane.github.io/) is used to secure over-the-air updates in automobiles. Please see [TUF's website](https://theupdateframework.com/) for more information about TUF! 
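As a rough illustration of the client API listed above, a minimal
`tuf.ngclient`-based downloader could look like the sketch below. The URLs,
directories and target name are placeholders, and a trusted `root.json` is
assumed to already exist in the metadata directory:

```python
from tuf.ngclient import Updater

# Both directories must exist; the metadata directory must already
# contain the initial trusted root.json for this repository.
updater = Updater(
    metadata_dir="/path/to/metadata",
    metadata_base_url="https://example.com/metadata/",
    target_dir="/path/to/downloads",
    target_base_url="https://example.com/targets/",
)
updater.refresh()

info = updater.get_targetinfo("file.txt")
if info is not None:
    path = updater.find_cached_target(info) or updater.download_target(info)
    print(f"Target available at {path}")
```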
Documentation ------------- * [Introduction to TUF's Design](https://theupdateframework.io/overview/) * [The TUF Specification](https://theupdateframework.github.io/specification/latest/) * [Developer documentation](https://theupdateframework.readthedocs.io/), including [API reference]( https://theupdateframework.readthedocs.io/en/latest/api/api-reference.html) and [instructions for contributors](https://theupdateframework.readthedocs.io/en/latest/CONTRIBUTING.html) * [Usage examples](https://github.com/theupdateframework/python-tuf/tree/develop/examples/) * [Governance](https://github.com/theupdateframework/python-tuf/blob/develop/docs/GOVERNANCE.md) and [Maintainers](https://github.com/theupdateframework/python-tuf/blob/develop/docs/MAINTAINERS.txt) for the reference implementation * [Miscellaneous Docs](https://github.com/theupdateframework/python-tuf/tree/develop/docs) * [Python-TUF development blog](https://theupdateframework.github.io/python-tuf/) Contact ------- Questions, feedback, and suggestions are welcomed on our low volume [mailing list](https://groups.google.com/forum/?fromgroups#!forum/theupdateframework) or the [#tuf](https://cloud-native.slack.com/archives/C8NMD3QJ3) channel on [CNCF Slack](https://slack.cncf.io/). We strive to make the specification easy to implement, so if you come across any inconsistencies or experience any difficulty, do let us know by sending an email, or by reporting an issue in the GitHub [specification repo](https://github.com/theupdateframework/specification/issues). Security Issues and Bugs ------------------------ See [SECURITY.md](docs/SECURITY.md) License ------- This work is [dual-licensed](https://en.wikipedia.org/wiki/Multi-licensing) and distributed under the (1) MIT License and (2) Apache License, Version 2.0. Please see [LICENSE-MIT](https://github.com/theupdateframework/python-tuf/blob/develop/LICENSE-MIT) and [LICENSE](https://github.com/theupdateframework/python-tuf/blob/develop/LICENSE). Acknowledgements ---------------- This project is hosted by the Linux Foundation under the Cloud Native Computing Foundation. TUF's early development was managed by members of the [Secure Systems Lab](https://ssl.engineering.nyu.edu/) at [New York University](https://engineering.nyu.edu/). We appreciate the efforts of all [maintainers and emeritus maintainers](https://github.com/theupdateframework/python-tuf/blob/develop/docs/MAINTAINERS.txt), as well as the contributors Konstantin Andrianov, Kairo de Araujo, Ivana Atanasova, Geremy Condra, Zane Fisher, Pankhuri Goyal, Justin Samuel, Tian Tian, Martin Vrachev and Yuyu Zheng who are among those who helped significantly with TUF's reference implementation. Maintainers and Contributors are governed by the [CNCF Community Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). This material is based upon work supported by the National Science Foundation under Grant Nos. CNS-1345049 and CNS-0959138. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 
tuf-6.0.0/pyproject.toml0000644000000000000000000001000113615410400012137 0ustar00[build-system] requires = ["hatchling==1.27.0"] build-backend = "hatchling.build" [project] name = "tuf" description = "A secure updater framework for Python" readme = "README.md" license = "Apache-2.0 OR MIT" license-files = ["LICENSE", "LICENSE-MIT"] requires-python = ">=3.8" authors = [ { email = "theupdateframework@googlegroups.com" }, ] keywords = [ "authentication", "compromise", "key", "revocation", "secure", "update", "updater", ] classifiers = [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Natural Language :: English", "Operating System :: MacOS :: MacOS X", "Operating System :: Microsoft :: Windows", "Operating System :: POSIX", "Operating System :: POSIX :: Linux", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: Implementation :: CPython", "Topic :: Security", "Topic :: Software Development", ] dependencies = [ "securesystemslib~=1.0", "urllib3<3,>=1.21.1", ] dynamic = ["version"] [project.urls] Documentation = "https://theupdateframework.readthedocs.io/en/stable/" Homepage = "https://www.updateframework.com" Issues = "https://github.com/theupdateframework/python-tuf/issues" Source = "https://github.com/theupdateframework/python-tuf" [tool.hatch.version] path = "tuf/__init__.py" [tool.hatch.build.targets.sdist] include = [ "/docs", "/examples", "/tests", "/tuf", "/requirements", "/tox.ini", "/setup.py", ] # Ruff section # Read more here: https://docs.astral.sh/ruff/linter/#rule-selection [tool.ruff] line-length=80 [tool.ruff.lint] select = ["ALL"] ignore = [ # Rulesets we do not use at this moment "COM", "EM", "FIX", "FBT", "PERF", "PT", "PTH", "TD", "TRY", # Individual rules that have been disabled "D400", "D415", "D213", "D205", "D202", "D107", "D407", "D413", "D212", "D104", "D406", "D105", "D411", "D401", "D200", "D203", "PLR0913", "PLR2004", ] [tool.ruff.lint.per-file-ignores] "tests/*" = [ "D", # pydocstyle: no docstrings required for tests "E501", # line-too-long: embedded test data in "fmt: off" blocks is ok "ERA001", # commented code is fine in tests "RUF012", # ruff: mutable-class-default "S", # bandit: Not running bandit on tests "SLF001", # private member access is ok in tests "T201", # print is ok in tests ] "examples/*/*" = [ "D", # pydocstyle: no docstrings required for examples "ERA001", # commented code is fine in examples "INP001", # implicit package is fine in examples "S", # bandit: Not running bandit on examples "T201", # print is ok in examples ] "verify_release" = [ "ERA001", # commented code is fine here "S603", # bandit: this flags all uses of subprocess.run as vulnerable "T201", # print is ok in verify_release ] ".github/scripts/*" = [ "T201", # print is ok in conformance client ] [tool.ruff.lint.flake8-annotations] mypy-init-return = true # mypy section # Read more here: https://mypy.readthedocs.io/en/stable/config_file.html#using-a-pyproject-toml-file [tool.mypy] warn_unused_configs = "True" warn_redundant_casts = "True" warn_unused_ignores = "True" warn_unreachable = "True" strict_equality = "True" disallow_untyped_defs = "True" disallow_untyped_calls = "True" show_error_codes = "True" disable_error_code = ["attr-defined"] [[tool.mypy.overrides]] module = [ 
"requests.*", "securesystemslib.*", ] ignore_missing_imports = "True" [tool.coverage.report] exclude_also = [ # abstract class method definition "raise NotImplementedError", # defensive programming: these cannot happen "raise AssertionError", # imports for mypy only "if TYPE_CHECKING", ] [tool.coverage.run] branch = true omit = [ "tests/*", "tuf/ngclient/requests_fetcher.py" ] tuf-6.0.0/PKG-INFO0000644000000000000000000001767113615410400010344 0ustar00Metadata-Version: 2.4 Name: tuf Version: 6.0.0 Summary: A secure updater framework for Python Project-URL: Documentation, https://theupdateframework.readthedocs.io/en/stable/ Project-URL: Homepage, https://www.updateframework.com Project-URL: Issues, https://github.com/theupdateframework/python-tuf/issues Project-URL: Source, https://github.com/theupdateframework/python-tuf Author-email: theupdateframework@googlegroups.com License-Expression: Apache-2.0 OR MIT License-File: LICENSE License-File: LICENSE-MIT Keywords: authentication,compromise,key,revocation,secure,update,updater Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Natural Language :: English Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Microsoft :: Windows Classifier: Operating System :: POSIX Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Topic :: Security Classifier: Topic :: Software Development Requires-Python: >=3.8 Requires-Dist: securesystemslib~=1.0 Requires-Dist: urllib3<3,>=1.21.1 Description-Content-Type: text/markdown # TUF A Framework for Securing Software Update Systems [![CI badge](https://github.com/theupdateframework/python-tuf/actions/workflows/ci.yml/badge.svg)](https://github.com/theupdateframework/python-tuf/actions/workflows/ci.yml) [![Conformance badge](https://github.com/theupdateframework/python-tuf/actions/workflows/conformance.yml/badge.svg)](https://github.com/theupdateframework/python-tuf/actions/workflows/conformance.yml) [![Coveralls badge](https://coveralls.io/repos/theupdateframework/python-tuf/badge.svg?branch=develop)](https://coveralls.io/r/theupdateframework/python-tuf?branch=develop) [![Docs badge](https://readthedocs.org/projects/theupdateframework/badge/)](https://theupdateframework.readthedocs.io/) [![CII badge](https://bestpractices.coreinfrastructure.org/projects/1351/badge)](https://bestpractices.coreinfrastructure.org/projects/1351) [![PyPI badge](https://img.shields.io/pypi/v/tuf)](https://pypi.org/project/tuf/) [![Scorecard badge](https://api.scorecard.dev/projects/github.com/theupdateframework/python-tuf/badge)](https://scorecard.dev/viewer/?uri=github.com/theupdateframework/python-tuf) ---------------------------- [The Update Framework (TUF)](https://theupdateframework.io/) is a framework for secure content delivery and updates. It protects against various types of supply chain attacks and provides resilience to compromise. This repository is a **reference implementation** written in Python. 
tuf-6.0.0/PKG-INFO

Metadata-Version: 2.4
Name: tuf
Version: 6.0.0
Summary: A secure updater framework for Python
Project-URL: Documentation, https://theupdateframework.readthedocs.io/en/stable/
Project-URL: Homepage, https://www.updateframework.com
Project-URL: Issues, https://github.com/theupdateframework/python-tuf/issues
Project-URL: Source, https://github.com/theupdateframework/python-tuf
Author-email: theupdateframework@googlegroups.com
License-Expression: Apache-2.0 OR MIT
License-File: LICENSE
License-File: LICENSE-MIT
Keywords: authentication,compromise,key,revocation,secure,update,updater
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Topic :: Security
Classifier: Topic :: Software Development
Requires-Python: >=3.8
Requires-Dist: securesystemslib~=1.0
Requires-Dist: urllib3<3,>=1.21.1
Description-Content-Type: text/markdown

# TUF

A Framework for Securing Software Update Systems

[![CI badge](https://github.com/theupdateframework/python-tuf/actions/workflows/ci.yml/badge.svg)](https://github.com/theupdateframework/python-tuf/actions/workflows/ci.yml)
[![Conformance badge](https://github.com/theupdateframework/python-tuf/actions/workflows/conformance.yml/badge.svg)](https://github.com/theupdateframework/python-tuf/actions/workflows/conformance.yml)
[![Coveralls badge](https://coveralls.io/repos/theupdateframework/python-tuf/badge.svg?branch=develop)](https://coveralls.io/r/theupdateframework/python-tuf?branch=develop)
[![Docs badge](https://readthedocs.org/projects/theupdateframework/badge/)](https://theupdateframework.readthedocs.io/)
[![CII badge](https://bestpractices.coreinfrastructure.org/projects/1351/badge)](https://bestpractices.coreinfrastructure.org/projects/1351)
[![PyPI badge](https://img.shields.io/pypi/v/tuf)](https://pypi.org/project/tuf/)
[![Scorecard badge](https://api.scorecard.dev/projects/github.com/theupdateframework/python-tuf/badge)](https://scorecard.dev/viewer/?uri=github.com/theupdateframework/python-tuf)

----------------------------

[The Update Framework (TUF)](https://theupdateframework.io/) is a framework for
secure content delivery and updates. It protects against various types of
supply chain attacks and provides resilience to compromise. This repository is
a **reference implementation** written in Python. It is intended to conform to
version 1.0 of the
[TUF specification](https://theupdateframework.github.io/specification/latest/).

Python-TUF provides the following APIs:

* [`tuf.api.metadata`](https://theupdateframework.readthedocs.io/en/latest/api/tuf.api.html),
  a "low-level" API, designed to provide easy and safe access to TUF metadata
  and to handle (de)serialization from/to files.
* [`tuf.ngclient`](https://theupdateframework.readthedocs.io/en/latest/api/tuf.ngclient.html),
  a client implementation built on top of the metadata API (see the usage
  sketch below).
* `tuf.repository`, a repository library also built on top of the metadata API.
  This module is currently not considered part of python-tuf stable API.

The reference implementation strives to be a readable guide and demonstration
for those working on implementing TUF in their own languages, environments, or
update systems.
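To make the client workflow concrete, here is a minimal, hedged sketch of
`tuf.ngclient.Updater` usage. The repository URLs, local directories, and the
target file name are placeholders, and error handling is omitted; the
`examples/` directory in this distribution contains the authoritative,
maintained examples.

```python
from tuf.ngclient import Updater

# Minimal client sketch: all URLs, paths and the target name below are
# hypothetical placeholders. The metadata directory must already contain a
# trusted root.json (alternatively, initial root metadata can be supplied via
# the optional `bootstrap` argument).
updater = Updater(
    metadata_dir="/tmp/tuf-metadata",        # local cache of TUF metadata
    metadata_base_url="https://example.com/metadata/",
    target_base_url="https://example.com/targets/",
    target_dir="/tmp/tuf-targets",           # where verified downloads land
)

# Refresh the local copy of root, timestamp, snapshot and targets metadata.
updater.refresh()

# Look up the target in the signed metadata.
info = updater.get_targetinfo("demo-package-1.0.tar.gz")
if info is None:
    print("Target not found in repository metadata")
else:
    # Reuse a previously verified local copy if one exists, otherwise
    # download the target and verify it against the signed metadata.
    path = updater.find_cached_target(info) or updater.download_target(info)
    print(f"Verified target available at {path}")
```

Note that `Updater` performs all metadata and target verification internally;
the application only decides what to do with the verified local path it gets
back.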
About The Update Framework
--------------------------

The Update Framework (TUF) design helps developers maintain the security of a
software update system, even against attackers that compromise the repository
or signing keys. TUF provides a flexible
[specification](https://github.com/theupdateframework/specification/blob/master/tuf-spec.md)
defining functionality that developers can use in any software update system
or re-implement to fit their needs.

TUF is hosted by the [Linux Foundation](https://www.linuxfoundation.org/) as
part of the [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF)
and its design is [used in production](https://theupdateframework.io/adoptions/)
by various tech companies and open source organizations. A variant of TUF
called [Uptane](https://uptane.github.io/) is used to secure over-the-air
updates in automobiles.

Please see [TUF's website](https://theupdateframework.com/) for more
information about TUF!

Documentation
-------------
* [Introduction to TUF's Design](https://theupdateframework.io/overview/)
* [The TUF Specification](https://theupdateframework.github.io/specification/latest/)
* [Developer documentation](https://theupdateframework.readthedocs.io/), including
  [API reference](https://theupdateframework.readthedocs.io/en/latest/api/api-reference.html)
  and [instructions for contributors](https://theupdateframework.readthedocs.io/en/latest/CONTRIBUTING.html)
* [Usage examples](https://github.com/theupdateframework/python-tuf/tree/develop/examples/)
* [Governance](https://github.com/theupdateframework/python-tuf/blob/develop/docs/GOVERNANCE.md)
  and [Maintainers](https://github.com/theupdateframework/python-tuf/blob/develop/docs/MAINTAINERS.txt)
  for the reference implementation
* [Miscellaneous Docs](https://github.com/theupdateframework/python-tuf/tree/develop/docs)
* [Python-TUF development blog](https://theupdateframework.github.io/python-tuf/)

Contact
-------
Questions, feedback, and suggestions are welcomed on our low volume
[mailing list](https://groups.google.com/forum/?fromgroups#!forum/theupdateframework)
or the [#tuf](https://cloud-native.slack.com/archives/C8NMD3QJ3) channel on
[CNCF Slack](https://slack.cncf.io/).

We strive to make the specification easy to implement, so if you come across
any inconsistencies or experience any difficulty, do let us know by sending an
email, or by reporting an issue in the GitHub
[specification repo](https://github.com/theupdateframework/specification/issues).

Security Issues and Bugs
------------------------

See [SECURITY.md](docs/SECURITY.md)

License
-------

This work is [dual-licensed](https://en.wikipedia.org/wiki/Multi-licensing) and
distributed under the (1) MIT License and (2) Apache License, Version 2.0.
Please see [LICENSE-MIT](https://github.com/theupdateframework/python-tuf/blob/develop/LICENSE-MIT)
and [LICENSE](https://github.com/theupdateframework/python-tuf/blob/develop/LICENSE).

Acknowledgements
----------------

This project is hosted by the Linux Foundation under the Cloud Native Computing
Foundation. TUF's early development was managed by members of the
[Secure Systems Lab](https://ssl.engineering.nyu.edu/) at
[New York University](https://engineering.nyu.edu/). We appreciate the efforts
of all [maintainers and emeritus maintainers](https://github.com/theupdateframework/python-tuf/blob/develop/docs/MAINTAINERS.txt),
as well as the contributors Konstantin Andrianov, Kairo de Araujo, Ivana
Atanasova, Geremy Condra, Zane Fisher, Pankhuri Goyal, Justin Samuel, Tian Tian,
Martin Vrachev and Yuyu Zheng who are among those who helped significantly with
TUF's reference implementation.

Maintainers and Contributors are governed by the
[CNCF Community Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

This material is based upon work supported by the National Science Foundation
under Grant Nos. CNS-1345049 and CNS-0959138. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the National Science
Foundation.